For comprehensive and current results, perform a real-time search at Science.gov.

1

Recent advances in informatics technology have made it possible to integrate, manipulate, and analyze variables from a wide range of scientific disciplines, allowing for the examination of complex social problems such as health disparities. This study used 589 county-level variables to identify and compare geographical variation in high and low preterm birth rates. Data were collected from a number of publicly available sources, bringing together natality outcomes with attributes of the natural, built, social, and policy environments. The singleton early premature birth rate in counties with populations over 100,000 persons provided the dependent variable. Graph theoretical techniques were used to identify a wide range of predictor variables from various domains, including black population proportion, obesity and diabetes, sexually transmitted infection rates, mother's age, income, marriage rates, pollution, and temperature, among others. Dense subgraphs (paracliques) representing groups of highly correlated variables were resolved into latent factors, which were then used to build a regression model explaining prematurity (R-squared = 76.7%). Two lists of counties with large positive and large negative residuals, indicating unusual prematurity rates given their circumstances, may serve as a starting point for interventions to reduce health disparities in preterm births. PMID:25464130

Kershenbaum, Anne D; Langston, Michael A; Levine, Robert S; Saxton, Arnold M; Oyana, Tonny J; Kilbourne, Barbara J; Rogers, Gary L; Gittner, Lisaann S; Baktash, Suzanne H; Matthews-Juarez, Patricia; Juarez, Paul D

2014-12-01
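The pipeline this abstract sketches (highly correlated variables grouped into dense subgraphs, resolved into latent factors, then regressed) can be illustrated in miniature. This is not the authors' paraclique code: connected components of a thresholded correlation graph stand in for paracliques, the data are synthetic, and the 0.8 correlation threshold is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two hidden drivers, each observed through three noisy proxies --
# a toy stand-in for a "paraclique" of highly correlated variables.
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
cols = [f1 + 0.1 * rng.normal(size=n) for _ in range(3)]
cols += [f2 + 0.1 * rng.normal(size=n) for _ in range(3)]
X = np.column_stack(cols)
y = 2.0 * f1 - 1.0 * f2 + 0.3 * rng.normal(size=n)

# Threshold the correlation matrix into an adjacency matrix and take
# connected components as (crude) groups of correlated variables.
adj = np.abs(np.corrcoef(X, rowvar=False)) > 0.8

def components(adj):
    seen, comps = set(), []
    for start in range(adj.shape[0]):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(int(u) for u in np.flatnonzero(adj[v]))
        comps.append(sorted(comp))
    return comps

groups = components(adj)

# Each latent factor = mean of the group's standardized columns.
factors = np.column_stack([
    ((X[:, g] - X[:, g].mean(0)) / X[:, g].std(0)).mean(axis=1)
    for g in groups])

# Ordinary least squares on the factors; report R-squared.
A = np.column_stack([np.ones(n), factors])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1.0 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

With these synthetic data, the two proxy groups are recovered and the factor regression explains most of the outcome variance, mirroring the structure (not the numbers) of the county-level analysis.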


3

Computer Science and Technology Publications. NBS Publications List 84.

ERIC Educational Resources Information Center

This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections list publications of: (1) current Federal Information Processing Standards; (2) computer…

National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

4

Fourth International Symposium on Computational Methods in Toxicology and Pharmacology

Fourth International Symposium on Computational Methods in Toxicology and Pharmacology, EPA, NC, USA. New public data and Internet resources impacting predictive toxicology.

Ferreira, Márcia M. C.

5

Computational Methods for Crashworthiness

NASA Technical Reports Server (NTRS)

Presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Crashworthiness, held at Langley Research Center on 2-3 Sep. 1992, are included. The presentations addressed activities in the area of impact dynamics. Workshop attendees represented NASA, the Army and Air Force, the Lawrence Livermore and Sandia National Laboratories, the aircraft and automotive industries, and academia. The workshop objectives were to assess the state of the technology in the numerical simulation of crashes and to provide guidelines for future research.

Noor, Ahmed K. (compiler); Carden, Huey D. (compiler)

1993-01-01

6

Closing the "Digital Divide": Building a Public Computing Center

ERIC Educational Resources Information Center

The public computing center offers an economical and environmentally friendly model for providing additional public computer access when and where it is needed. Though not intended to be a replacement for a full-service branch, the public computing center does offer a budget-friendly option for quickly expanding high-demand services into the…

Krebeck, Aaron

2010-01-01

7

Systems Science Methods in Public Health

Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example in preparing for global pandemics. However, systems science study designs and analytic methods remain underutilized and are not widely featured in public health curricula or training. In this review we present an argument for the utility of systems science methods in public health, introduce three important systems science methods (system dynamics, network analysis, and agent-based modeling), and provide three case studies where these methods have been used to answer important public health science questions in the areas of infectious disease, tobacco control, and obesity. PMID:22224885

Luke, Douglas A.; Stamatakis, Katherine A.

2012-01-01
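Of the three methods this review names, system dynamics is the easiest to sketch. Below is a minimal SIR epidemic model stepped with Euler integration; the transmission and recovery rates are arbitrary assumptions, not values from the review.

```python
# Minimal SIR (susceptible-infected-recovered) system-dynamics model.
# s, i, r are population fractions; beta and gamma are assumed rates.
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # flow S -> I this step
        new_rec = gamma * i * dt      # flow I -> R this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return s, i, r, peak

s, i, r, peak = sir()
```

With these assumed parameters (basic reproduction number beta/gamma = 3), the model produces the familiar epidemic curve: infections peak and the epidemic burns out with only a small susceptible fraction remaining.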

8

Public participation: more than a method?

While it is important to support the development of methods for public participation, we argue that this should not be at the expense of a broader consideration of the role of public participation. We suggest that a rights based approach provides a framework for developing more meaningful approaches that move beyond public participation as synonymous with consultation to value the contribution of lay knowledge to the governance of health systems and health research. PMID:25337604

Boaz, Annette; Chambers, Mary; Stuttaford, Maria

2014-01-01

9

Computational methods working group

During the Cold Moderator Workshop several working groups were established including one to discuss calculational methods. The charge for this working group was to identify problems in theory, data, program execution, etc., and to suggest solutions considering both deterministic and stochastic methods including acceleration procedures.

Gabriel, T. A.

1997-09-01

10

Special Publication 500-293 US Government Cloud Computing

Special Publication 500-293 (Draft): US Government Cloud Computing Technology Roadmap, Volume II, Release 1.0 (Draft). Dawn Leaf, NIST Cloud Computing Program, Information Technology Laboratory.

11

Computational Methods in Drug Discovery

Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity, depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign, are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236

Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

2014-01-01

12

Computational methods for local regression

Local regression is a nonparametric method in which the regression surface is estimated by fitting parametric functions locally in the space of the predictors using weighted least squares in a moving fashion, similar to the way that a time series is smoothed by moving averages. Three computational methods for local regression are presented. First, fast surface fitting and evaluation is…

William S. Cleveland; E. Grosse

1991-01-01
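The local-regression idea summarized above (weighted least-squares fits in a moving window) can be sketched directly. This is a simplified LOESS with a tricube kernel and a fixed bandwidth, not the fast surface-fitting computations the paper develops; the bandwidth and test function are arbitrary assumptions.

```python
import numpy as np

def loess_point(x0, x, y, h):
    # Tricube weights within bandwidth h around the evaluation point x0.
    u = np.clip(np.abs(x - x0) / h, 0.0, 1.0)
    w = (1 - u ** 3) ** 3
    # Weighted least-squares fit of a local line centered at x0;
    # the intercept is the fitted value at x0.
    A = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0]

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + 0.1 * rng.normal(size=100)
fit = np.array([loess_point(x0, x, y, h=1.0) for x0 in x])
```

The moving weighted fit recovers the underlying sine curve from the noisy sample, which is the smoothing behavior the abstract describes.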

13

Computational Methods Development at Ames

NASA Technical Reports Server (NTRS)

This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis and design capabilities. Current thrusts of the Ames research include: 1) methods to enhance and accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents for this research and speculates on its future course.

Kwak, Dochan; Smith, Charles A. (Technical Monitor)

1998-01-01

14

Public Review Draft: A Method for Assessing Carbon Stocks, Carbon Sequestration, and Greenhouse-Gas Fluxes

Zhu, Zhiliang, 2010. Public review draft: A method for assessing carbon stocks, carbon sequestration, and greenhouse-gas fluxes.

15

Access Control in Publicly Verifiable Outsourced Computation (James Alderman)

Publicly Verifiable Outsourced Computation (PVC) allows devices with restricted resources to delegate computation. Thus there is a need to apply access control mechanisms in PVC environments. In this work, we define a new framework for Publicly Verifiable Outsourced Computation with Access Control (PVC-AC) that applies…

16

47 CFR 61.20 - Method of filing publications.

Code of Federal Regulations, 2010 CFR

... 2010-10-01 false Method of filing publications. 61.20 Section 61.20 Telecommunication...Nondominant Carriers § 61.20 Method of filing publications. (a) Publications sent for filing must be addressed to...

2010-10-01

17

47 CFR 61.14 - Method of filing publications.

Code of Federal Regulations, 2010 CFR

... 2010-10-01 false Method of filing publications. 61.14 Section 61.14 Telecommunication...Electronic Filing § 61.14 Method of filing publications. (a) Publications filed electronically must be addressed...

2010-10-01

18

47 CFR 61.32 - Method of filing publications.

Code of Federal Regulations, 2010 CFR

... 2010-10-01 false Method of filing publications. 61.32 Section 61.32 Telecommunication...Dominant Carriers § 61.32 Method of filing publications. (a) Publications sent for filing must be addressed to...

2010-10-01

19

Method for tracking core-contributed publications.

Accurately tracking core-contributed publications is an important and often difficult task. Many core laboratories are supported by programmatic grants (such as Cancer Center Support Grant and Clinical Translational Science Awards) or generate data with instruments funded through S10, Major Research Instrumentation, or other granting mechanisms. Core laboratories provide their research communities with state-of-the-art instrumentation and expertise, elevating research. It is crucial to demonstrate the specific projects that have benefited from core services and expertise. We discuss here the method we developed for tracking core contributed publications. PMID:23204927

Loomis, Cynthia A; Curchoe, Carol Lynn

2012-12-01

20

Special Publication 500-293 US Government Cloud Computing

Special Publication 500-293 (Draft): US Government Cloud Computing Technology Roadmap, Volume I, Release 1.0 (Draft): High-Priority Requirements to Further USG Agency Cloud Computing Adoption. Lee Badger, Sokol, Jin Tong, Fred Whiteside and Dawn Leaf, NIST Cloud Computing Program, Information Technology Laboratory.

21

Seminar on Computational Linguistics. Public Health Service Publication Number 1716.

ERIC Educational Resources Information Center

In October 1966 a seminar was held in Bethesda, Maryland on the use of computers in language research. The organizers of the conference, the Center for Applied Linguistics and the National Institutes of Health, attempted to bring together eminent representatives of the major schools of current linguistic research. The papers presented at the…

PRATT, ARNOLD W.; AND OTHERS, Eds.

22

PA 540: #35741 Research Methods for Public Administration

PA 540: #35741 Research Methods for Public Administration, Fall Semester 2013. Texts include Fowler, Floyd J., 2008, Survey Research Methods, 4th ed. (Sage Publications); Gerring, John…

Illinois at Chicago, University of

23

A Computer-Assisted Instruction in Teaching Abstract Statistics to Public Affairs Undergraduates

ERIC Educational Resources Information Center

This article attempts to demonstrate the applicability of a computer-assisted instruction supported with simulated data in teaching abstract statistical concepts to political science and public affairs students in an introductory research methods course. The software is called the Elaboration Model Computer Exercise (EMCE) in that it takes a great…

Ozturk, Ali Osman

2012-01-01

24

[Statistical methods for detecting and adjusting for publication bias].

Publication bias in meta-analysis occurs if the probability of publication depends on the estimated treatment effect and the precision/size of a study. In this article we describe the funnel plot as a graphical method to evaluate the presence of publication bias. Furthermore, tests of publication bias based on funnel plots are explained, and methods are presented that estimate treatment effects adjusted for publication bias. The article closes with a critical discussion of the various statistical methods. PMID:20701110

Schwarzer, Guido; Rücker, Gerta

2010-01-01
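One of the funnel-plot-based tests this article covers is Egger's regression: regress the standardized effect (effect divided by its standard error) on precision (the reciprocal of the standard error) and inspect the intercept, which should be near zero for a symmetric funnel. A sketch on synthetic, bias-free data (study count, standard errors, and the true effect of 0.2 are all assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 30
se = rng.uniform(0.05, 0.5, size=k)   # per-study standard errors
effect = 0.2 + rng.normal(0, se)      # symmetric funnel, true effect 0.2

# Egger's regression: standardized effect vs. precision.
y = effect / se
x = 1.0 / se
A = np.column_stack([np.ones(k), x])
(intercept, slope), *_ = np.linalg.lstsq(A, y, rcond=None)
```

Because these simulated studies are unbiased, the slope recovers the true effect and the intercept stays close to zero; with real publication bias, small imprecise studies drag the intercept away from zero.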

25

Evolution as Computation. Evolutionary Theory (accepted for publication)

Evolution as Computation. Evolutionary Theory (accepted for publication). By John E. Mayfield (jemayf@iastate.edu). Key words: evolution, computation, complexity, depth. …evolution must include life and also non-living processes that change over time in a manner similar…

Mayfield, John

26

Geometric methods in quantum computation

Recent advances in the physical sciences and engineering have created great hopes for new computational paradigms and substrates. One such new approach is the quantum computer, which holds the promise of enhanced computational power. Analogous to the way a classical computer is built from electrical circuits containing wires and logic gates, a quantum computer is built from quantum circuits containing

Jun Zhang

2003-01-01

27

Wildlife software: procedures for publication of computer software

Computers and computer software have become an integral part of the practice of wildlife science. Computers now play an important role in teaching, research, and management applications. Because of the specialized nature of wildlife problems, specific computer software is usually required to address a given problem (e.g., home range analysis). This type of software is not usually available from commercial vendors and therefore must be developed by those wildlife professionals with particular skill in computer programming. Current journal publication practices generally prevent a detailed description of computer software associated with new techniques. In addition, peer review of journal articles does not usually include a review of associated computer software. Thus, many wildlife professionals are usually unaware of computer software that would meet their needs or of major improvements in software they commonly use. Indeed most users of wildlife software learn of new programs or important changes only by word of mouth.

Samuel, M.D.

1990-01-01

28

Component Analysis Methods for Computer Vision and Pattern Recognition

Component Analysis Methods for Computer Vision and Pattern Recognition. Fernando De la Torre. Computer Vision and Pattern Recognition Easter School, March…

Botea, Adi

29

32 CFR 310.52 - Computer matching publication and review requirements.

Code of Federal Regulations, 2014 CFR

...2014-07-01 2014-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

2014-07-01

30

32 CFR 310.52 - Computer matching publication and review requirements.

Code of Federal Regulations, 2013 CFR

...2013-07-01 2013-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

2013-07-01

31

32 CFR 310.52 - Computer matching publication and review requirements.

Code of Federal Regulations, 2010 CFR

...2010-07-01 2010-07-01 false Computer matching publication and review requirements... PRIVACY PROGRAM DOD PRIVACY PROGRAM Computer Matching Program Procedures § 310.52 Computer matching publication and review...

2010-07-01

32

Teaching Practical Public Health Evaluation Methods

ERIC Educational Resources Information Center

Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the…

Davis, Mary V.

2006-01-01

33

Computational methods for stealth design

A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

Cable, V.P. (Lockheed Advanced Development Co., Sunland, CA (United States))

1992-08-01

34

Methods and applications in computational protein design

In this thesis, we summarize our work on applications and methods for computational protein design. First, we apply computational protein design to address the problem of degradation in stored proteins. Specifically, we ...

Biddle, Jason Charles

2010-01-01

35

Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

ERIC Educational Resources Information Center

The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

Jayakar, Krishna; Park, Eun-A

2012-01-01

36

Optimization Methods for Computer Animation.

ERIC Educational Resources Information Center

Emphasizing the importance of economy and efficiency in the production of computer animation, this master's thesis outlines methodologies that can be used to develop animated sequences with the highest quality images for the least expenditure. It is assumed that if computer animators are to be able to fully exploit the available resources, they…

Donkin, John Caldwell

37

Methods Towards Invasive Human Brain Computer Interfaces

Methods Towards Invasive Human Brain Computer Interfaces. Thomas Navin Lal, Thilo Hinterberger, et al. There has been growing interest in the development of Brain Computer Interfaces (BCIs). Birbaumer et al. [1, 9] developed a Brain Computer Interface (BCI) called the Thought Translation Device…

38

Computational methods of neutron transport

This book presents a balanced overview of the major methods currently available for obtaining numerical solutions in neutron and gamma ray transport. It focuses on methods particularly suited to the complex problems encountered in the analysis of reactors, fusion devices, radiation shielding, and other nuclear systems. Derivations are given for each of the methods, showing how the transport equation is…

E. E. Lewis; W. F. Miller

1984-01-01

39

Multiprocessor computer overset grid method and apparatus

A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

Barnette, Daniel W. (Veguita, NM); Ober, Curtis C. (Los Lunas, NM)

2003-01-01

40

Theoretical and computational methods in statistical mechanics

Theoretical and computational methods in statistical mechanics. Shmuel Friedland, Univ. Illinois at Chicago. Slides from a talk given at Berkeley, October 26, 2009, opening with motivation from the Ising model.

Friedland, Shmuel

41

Survey of Public IaaS Cloud Computing API

NASA Astrophysics Data System (ADS)

Recently, Cloud computing has spread rapidly and many Cloud providers have started Cloud services. One problem in Cloud computing is provider lock-in for users: Cloud management APIs such as ordering or provisioning differ between providers, so users must learn and implement new APIs when they change Cloud providers. OGF and DMTF have started discussions on standardizing Cloud computing APIs, but no standard exists yet. In this technical note, to clarify which APIs Cloud providers should offer, we study common APIs for Cloud computing. We survey and compare Cloud computing APIs such as Rackspace Cloud Server, Sun Cloud, GoGrid, ElasticHosts, Amazon EC2 and FlexiScale, which are currently provided in the market as public IaaS Cloud APIs. From the survey, the common APIs should support a REST access style and provide account management, virtual server management, storage management, network management and resource usage management capabilities. We also show an example of OSS providing these common APIs, compared to OSS for normal hosting services.

Yamato, Yoji; Moriya, Takaaki; Ogawa, Takeshi; Akahani, Junichi

42

Simulation methods for advanced scientific computing

This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.

Booth, T.E.; Carlson, J.A.; Forster, R.A. [and others

1998-11-01
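The cluster and quantum Monte Carlo algorithms in this report are involved; for orientation, here is the textbook single-spin Metropolis Monte Carlo for a 2D Ising model, the baseline that Swendsen-Wang-style cluster methods improve upon. Lattice size, temperature, and sweep count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 16
T = 1.5                              # below the 2D Ising critical T ~ 2.27
spins = np.ones((L, L), dtype=int)   # start fully ordered

def sweep(spins, T):
    # One Metropolis sweep: L*L single-spin update attempts.
    for _ in range(spins.size):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbors (periodic boundaries).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb    # energy change if this spin flips
        # Accept the flip if it lowers energy, else with Boltzmann weight.
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(200):
    sweep(spins, T)
m = abs(spins.mean())
```

Below the critical temperature the magnetization stays close to 1, as expected for the ordered phase; single-spin updates like these slow down badly near criticality, which is precisely the problem cluster algorithms address.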

43

Computational Methods for Biomolecular Electrostatics

An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951

Dong, Feng; Olsen, Brett; Baker, Nathan A.

2008-01-01

44

ERIC Educational Resources Information Center

Since the introduction of computers into the public school arena over forty years ago, educators have been convinced that the integration of computer technology into the public school classroom will transform education. Joining educators are state and federal governments. Public schools and others involved in the process of computer technology…

Zuniga, Ramiro

2009-01-01

45

Computational Methods for Failure Analysis and Life Prediction

NASA Technical Reports Server (NTRS)

This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center 14-15 Oct. 1992. The presentations focused on damage failure and life predictions of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered.

Noor, Ahmed K. (compiler); Harris, Charles E. (compiler); Housner, Jerrold M. (compiler); Hopkins, Dale A. (compiler)

1993-01-01

46

Computational methods in radionuclide dosimetry

NASA Astrophysics Data System (ADS)

The various approaches in radionuclide dosimetry depend on the size and spatial relation of the sources and targets considered, in conjunction with the emission range of the radionuclide used. We present some of the frequently reported computational techniques on the basis of source/target size. For whole organs, or for sources or targets larger than a few centimetres, the acknowledged standard was introduced 30 years ago by the MIRD committee and is still being updated. That approach, based on the absorbed fraction concept, is mainly used for radioprotection purposes but has been updated to take into account the dosimetric challenge raised by therapeutic use of vectored radiopharmaceuticals. At this level, the most important computational effort is in the field of photon dosimetry. On the millimetre scale, photons can often be disregarded, and β or electron dosimetry is generally reported. Heterogeneities at this level are mainly above the cell level, involving groups of cells or a part of an organ. The dose distribution pattern is often calculated by generalizing a point-source dose distribution, but direct calculation by Monte Carlo techniques is also frequently reported because it allows media of inhomogeneous density to be considered. At the cell level, β and electron (low-range or Auger) emissions are the predominant emissions examined. Heterogeneities in the dose distribution are taken into account, mainly to determine the mean dose at the nucleus. At the DNA level, Auger electrons or α-particles are considered from a microdosimetric point of view. These studies are often connected with radiobiological experiments on radionuclide toxicity.

Bardiès, M.; Myers, M. J.

1996-10-01

47

An Introduction To Computer Simulation Methods Examples

NSDL National Science Digital Library

A ready-to-run Launcher package containing examples for An Introduction to Computer Simulation Methods by Harvey Gould, Jan Tobochnik, and Wolfgang Christian. Source code for examples in this textbook is distributed in the Open Source Physics Eclipse Workspace.

Christian, Wolfgang; Gould, Harvey; Tobochnik, Jan

2008-05-17

48

Computational Chemistry Using Modern Electronic Structure Methods

ERIC Educational Resources Information Center

Various modern electronic structure methods are nowadays used to teach computational chemistry to undergraduate students. Such quantum calculations can now be easily used even for large molecules.

Bell, Stephen; Dines, Trevor J.; Chowdhry, Babur Z.; Withnall, Robert

2007-01-01

49

MATH 991-Spring 2002 Computational and Mathematical Methods in Neuroscience: Vision

MATH 991, Spring 2002: Computational and Mathematical Methods in Neuroscience: Vision. For computation and mathematics: Computer Vision, by Reinhard Klette et al. (Springer); Understanding Vision, edited by Glyn Humphreys (Blackwell Sci. Pub.). This text is out of print, so copies…

Sprott, Julien Clinton

50

Computational evaluation of the Traceback Method.

Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation, using technology that was developed for Information Retrieval and Computational Linguistics. In this article we advocate the use of such technology for the evaluation of formal models of language acquisition. We focus on the Traceback Method, proposed in several recent studies as a model of early language acquisition, explaining some of the phenomena associated with children's ability to generalize previously heard utterances and generate novel ones. We present a rigorous computational evaluation that reveals some flaws in the method, and suggest directions for improving it. PMID:23343571

Kol, Sheli; Nir, Bracha; Wintner, Shuly

2014-01-01

51

Public Health 439 (section 20) Qualitative Research Methods

PUB HLTH 439 (section 20): Qualitative Research Methods, Spring 2012. Day/Time: Tuesdays, 6:00-9:00 PM. Classroom Location: McGaw 2. Qualitative research deals with words, spoken and written. This course will focus on qualitative research methods…

Chisholm, Rex L.

52

77 FR 26509 - Notice of Public Meeting-Cloud Computing Forum & Workshop V

Federal Register 2010, 2011, 2012, 2013, 2014

...Notice of Public Meeting--Cloud Computing Forum & Workshop V AGENCY: National...SUMMARY: NIST announces the Cloud Computing Forum & Workshop V to be held...the U.S. Government (USG) Cloud Computing Technology Roadmap...

2012-05-04

53

76 FR 62373 - Notice of Public Meeting-Cloud Computing Forum & Workshop IV

Federal Register 2010, 2011, 2012, 2013, 2014

...Notice of Public Meeting--Cloud Computing Forum & Workshop IV AGENCY...SUMMARY: NIST announces the Cloud Computing Forum & Workshop IV to be held...the U.S. Government (USG) Cloud Computing Technology Roadmap...

2011-10-07

54

This report was aimed at structuring the design of architectures and studying performance measurement of a parallel computing environment, using a Monte Carlo simulation for particle therapy on a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed speeds approximately 28 times faster than a single-threaded architecture, combined with improved stability. A study of methods for optimizing system operations also indicated lower cost. PMID:23877155

Yokohama, Noriya

2013-07-01

55

Leg stiffness measures depend on computational method.

Leg stiffness is often computed from ground reaction force (GRF) registrations of vertical hops to estimate the force-resisting capacity of the lower-extremity during ground contact, with leg stiffness values incorporated in a spring-mass model to describe human motion. Individual biomechanical characteristics, including leg stiffness, were investigated in 40 healthy males. Our aim is to report and discuss the use of 13 different computational methods for evaluating leg stiffness from a double-legged repetitive hopping task, using only GRF registrations. Four approximations for the velocity integration constant were combined with three mathematical expressions, giving 12 methods for computing stiffness using double integrations. One frequency-based method that considered ground contact times was also trialled. The 13 methods thus defined were used to compute stiffness in four extreme cases, which were the stiffest, and most compliant, consistent and variable subjects. All methods provided different stiffness measures for a given individual, but the between-method variations in stiffness were consistent across the four atypical subjects. The frequency-based method apparently overestimated the actual stiffness values, whereas double integrations' measures were more consistent. In double integrations, the choice of the integration constant and mathematical expression considerably affected stiffness values, as variations during hopping were more or less emphasized. Stating a zero centre of mass position at take-off gave more consistent results, and taking a weighted-average of the force or displacement curve was more forgiving to variations in performance. In any case, stiffness values should always be accompanied by a detailed description of their evaluation methods, as our results demonstrated that computational methods affect calculated stiffness. PMID:24188972

Hébert-Losier, Kim; Eriksson, Anders

2014-01-01
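
The double-integration family of methods described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions (trapezoidal integration; velocity integration constant fixed so velocity is zero at peak force; displacement zero at touchdown; stiffness taken as peak force over peak downward displacement), not the authors' code:

```python
# Sketch of one double-integration leg-stiffness computation from vertical
# ground reaction force (GRF) samples. All modelling choices below are
# assumptions of this sketch, not the paper's exact methods.

G = 9.81  # gravitational acceleration, m/s^2

def leg_stiffness(forces, dt, mass):
    acc = [f / mass - G for f in forces]   # net vertical COM acceleration
    vel = [0.0]                            # first integration: velocity
    for i in range(1, len(acc)):
        vel.append(vel[-1] + 0.5 * (acc[i - 1] + acc[i]) * dt)
    i_peak = max(range(len(forces)), key=forces.__getitem__)
    offset = vel[i_peak]                   # enforce v = 0 at peak force
    vel = [v - offset for v in vel]
    disp = [0.0]                           # second integration: displacement
    for i in range(1, len(vel)):
        disp.append(disp[-1] + 0.5 * (vel[i - 1] + vel[i]) * dt)
    return forces[i_peak] / abs(min(disp))
```

On synthetic GRF data generated from a sinusoidal compression of known amplitude, this recovers the implied stiffness; as the abstract stresses, a different choice of integration constant or mathematical expression would report a different value for the same hop.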

56

Method and system for benchmarking computers

A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.

Gustafson, John L. (Ames, IA)

1993-09-14
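
The patented scheme, a scalable task set worked through for a fixed benchmarking interval with progress as the rating, can be sketched as below. Using the Leibniz series for pi as the task set is an invented stand-in; the patent's task store and timing/control module are abstracted into a loop and a deadline:

```python
import time

def benchmark(interval_seconds):
    """Work through a scalable task set (terms of the Leibniz series for pi)
    for a fixed benchmarking interval; the number of tasks completed is the
    rating, and the partial sum shows the solution's current resolution."""
    deadline = time.perf_counter() + interval_seconds
    total, k = 0.0, 0
    while time.perf_counter() < deadline:
        total += (-1.0) ** k / (2 * k + 1)  # one "task": the k-th series term
        k += 1
    return k, 4.0 * total  # (rating, current approximation of pi)
```

The design point the patent makes is that every machine runs for the same wall-clock time, so slow and fast computers are compared by resolution achieved rather than by time-to-completion.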

57

Computational Methods for Structural Mechanics and Dynamics

NASA Technical Reports Server (NTRS)

Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)

1989-01-01

58

Computational Methods for Failure Analysis and Life Prediction

This conference publication contains the presentations and discussions from the joint UVA/NASA Workshop on Computational Methods for Failure Analysis and Life Prediction held at NASA Langley Research Center, 14-15 Oct. 1992. The presentations focused on damage, failure, and life prediction of polymer-matrix composite structures. They covered some of the research activities at NASA Langley, NASA Lewis, Southwest Research Institute, industry, and universities. Both airframes and propulsion systems were considered. Separate abstracts have been prepared for papers from this report.

Noor, A.K.; Harris, C.E.; Housner, J.M.; Hopkins, D.A.

1993-10-01

59

An Efficient Method to Factorize the RSA Public Key Encryption

The security of public key encryption such as RSA scheme relied on the integer factoring problem. The security of RSA algorithm based on positive integer N, which is the product of two prime numbers, the factorization of N is very intricate. In this paper a factorization method is proposed, which is used to obtain the factor of positive integer N.

B. R. Ambedkar; Ashwani Gupta; Pratiksha Gautam; S. S. Bedi

2011-01-01
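
The abstract does not give the proposed method's details, so as a baseline, here is the naive trial division that the hardness of the integer factoring problem refers to (a sketch only; it is hopeless at real RSA key sizes):

```python
def factor_semiprime(n):
    """Naive trial-division factorization of an RSA-style modulus N = p * q.
    Shown only to make the factoring problem concrete; this is the baseline
    any proposed factorization method aims to beat."""
    if n % 2 == 0:
        return 2, n // 2
    p = 3
    while p * p <= n:           # a factor, if any, lies at or below sqrt(n)
        if n % p == 0:
            return p, n // p
        p += 2                  # skip even candidates
    return None                 # n itself is prime
```

For the textbook toy modulus N = 3233 = 53 × 61 this returns the factors immediately, but the work grows like sqrt(N), so doubling the bit length of N roughly squares the cost; RSA's security rests on that asymmetry.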

60

Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

NASA Technical Reports Server (NTRS)

Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

2013-01-01

61

A method to compute periodic sums

NASA Astrophysics Data System (ADS)

In a number of problems in computational physics, a finite sum of kernel functions centered at N particle locations located in a box in three dimensions must be extended by imposing periodic boundary conditions on box boundaries. Even though the finite sum can be efficiently computed via fast summation algorithms, such as the fast multipole method (FMM), the periodized extension is usually treated via a different algorithm, Ewald summation, accelerated via the fast Fourier transform (FFT). A different approach to compute this periodized sum just using a blackbox finite fast summation algorithm is presented in this paper. The method splits the periodized sum into two parts. The first, comprising the contribution of all points outside a large sphere enclosing the box, and some of its neighbors, is approximated inside the box by a collection of kernel functions (“sources”) placed on the surface of the sphere or using an expansion in terms of spectrally convergent local basis functions. The second part, comprising the part inside the sphere, and including the box and its immediate neighborhood, is treated via available summation algorithms. The coefficients of the sources are determined by least squares collocation of the periodicity condition of the total potential, imposed on a circumspherical surface for the box. While the method is presented in general, details are worked out for the case of evaluating electrostatic potentials and forces. Results show that when used with the FMM, the periodized sum can be computed to any specified accuracy, at an additional cost of the order of the free-space FMM. Several technical details and efficient algorithms for auxiliary computations are provided, as are numerical comparisons.

Gumerov, Nail A.; Duraiswami, Ramani

2014-09-01
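
The quantity being accelerated is a kernel sum extended over periodic images of a box. A brute-force version is sketched below, with an invented fast-decaying Gaussian kernel so the image sum converges absolutely; the electrostatic case treated in the paper needs the more careful near/far splitting the abstract describes:

```python
import math
from itertools import product

def periodized_sum(sources, weights, target, layers):
    """Brute-force periodized kernel sum: replicate the unit box over
    (2*layers+1)^3 image translates and sum the kernel directly. The kernel
    exp(-4 r^2) is an assumption of this sketch, chosen so truncating the
    image sum is accurate after a few layers."""
    total = 0.0
    for n in product(range(-layers, layers + 1), repeat=3):
        for (sx, sy, sz), w in zip(sources, weights):
            dx = target[0] - (sx + n[0])
            dy = target[1] - (sy + n[1])
            dz = target[2] - (sz + n[2])
            total += w * math.exp(-4.0 * (dx * dx + dy * dy + dz * dz))
    return total
```

The cost is O(layers^3 · N) per target point, which is exactly what combining an FMM for the near field with the paper's collocated surface sources for the periodic far field avoids.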

62

Computational Thermochemistry and Benchmarking of Reliable Methods

During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats of formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

2006-06-20

63

Parallel computer methods for eigenvalue extraction

NASA Technical Reports Server (NTRS)

A new numerical algorithm for the solution of large-order eigenproblems typically encountered in linear elastic finite element systems is presented. The architecture of parallel processing is used in the algorithm to achieve increased speed and efficiency of calculations. The algorithm is based on the frontal technique for the solution of linear simultaneous equations and the modified subspace eigenanalysis method for the solution of the eigenproblem. The advantages of this new algorithm in parallel computer architecture are discussed.

Akl, Fred

1988-01-01

64

Numerical methods for problems in computational aeroacoustics

NASA Astrophysics Data System (ADS)

A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long times. More specifically, the stability, efficiency, accuracy, dispersion and dissipation in spatial discretizations, time stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high-order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, for an order of accuracy of 10^-3 for a benchmark problem in computational aeroacoustics, is performed for the grid transformed Chebyshev method and the fourth order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and the finite difference method is more efficient than, the Chebyshev pseudospectral method with the grid transformation.
The efficiency of the Chebyshev pseudospectral method is further improved by developing Runge-Kutta methods for the temporal discretization which maximize imaginary stability intervals. Two new Runge-Kutta methods, which allow time steps almost twice as large as the maximal order schemes, while holding dissipation and dispersion fixed, are developed. In the process of studying dispersion and dissipation, it is determined that maximizing dispersion minimizes dissipation, and vice versa. In order to determine accurate and efficient absorbing boundary conditions, absorbing layers are studied and compared with one way wave equations. The matched layer technique for Maxwell equations is equivalent to the absorbing layer technique for the acoustic wave equation introduced by Kosloff and Kosloff. The numerical implementation of the perfectly matched layer for the acoustic wave equation with a large damping parameter results in a small portion of the wave transmitting into the absorbing layer. A large damping parameter also results in a large portion of the wave reflecting back into the domain. The perfectly matched layer is implemented on a single domain for the solution of the second order wave equation, and when implemented in this manner shows no advantage over the matched layer. Solutions of the second order wave equation, with the absorbing boundary condition imposed either by the matched layer or by the one way wave equations, are compared. The comparison shows no advantage of the matched layer over the one way wave equation for the absorbing boundary condition. Hence there is no benefit to be gained by using the matched layer, which necessarily increases the size of the computational domain.

Mead, Jodi Lorraine

1998-12-01

65

Groupware for Urban Planning and Computer-based Public Participation. Pr. Robert Laurini, Laboratory of Information Systems Engineering, INSA de Lyon - University of Lyon. Groupware for Urban Planning · I - What

Laurini, Robert

66

Computational Statistical Methods for Social Network Models

We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720

Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

2013-01-01
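
An ERGM puts P(Y = y) proportional to exp(theta · g(y)) for a vector of network statistics g(y). A minimal sketch of computing two common statistics, edge and triangle counts, for a small undirected graph follows; the example graph is invented and this is not the statnet/ergm package's implementation:

```python
from itertools import combinations

def ergm_stats(nodes, edges):
    """Edge and triangle counts: two sufficient statistics g(y) of the kind
    that enter an exponential-family random graph model (ERGM)."""
    eset = {frozenset(e) for e in edges}      # undirected: (a, b) == (b, a)
    triangles = sum(
        1
        for a, b, c in combinations(sorted(nodes), 3)
        if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= eset
    )
    return len(eset), triangles
```

In an ERGM fit, the coefficient on the triangle statistic measures the tendency toward transitive closure beyond what the edge density alone would predict.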

67

Review of Computational Stirling Analysis Methods

NASA Technical Reports Server (NTRS)

Nuclear thermal-to-electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in its current designs could be better understood. However, they are difficult to instrument, and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted, and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra Hi-Fi technique, is presented in detail.

Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.

2004-01-01

68

Saving lives: a computer simulation game for public education about emergencies

One facet of the Information Revolution in which the nation finds itself involves the utilization of computers, video systems, and a variety of telecommunications capabilities by those who must cope with emergency situations. Such technologies possess a significant potential for performing emergency public education and transmitting key information that is essential for survival. An "Emergency Public Information Competitive Challenge Grant," under the aegis of the Federal Emergency Management Agency (FEMA), has sponsored an effort to use computer technology - both large, time-sharing systems and small personal computers - to develop computer games which will help teach techniques of emergency management to the public at large. 24 references.

Morentz, J.W.

1985-01-01

69

Evolutionary Computing Methods for Spectral Retrieval

NASA Technical Reports Server (NTRS)

A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.

Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

2009-01-01

70

Invariant subspace method for eigenvalue computation

The dynamic system being studied is first divided into subsystems, with each subsystem representing some physical part of the total system. The eigenvalues and eigenvectors of the subsystems are computed using standard library routines. The change in the eigenvalues between the subsystems and the total system, caused by the interconnection between the subsystems, is found using a method based on invariant subspaces. The greatest change occurs in the global eigenvalues, those which influence the response of more than one of the subsystems. These eigenvalues are of particular interest as they are the type that could cause interarea oscillations.

Stadnicki, D.J. (ESCA Corp., Bellevue, WA (United States)); Ness, J.E. Van (Northwestern Univ., Evanston, IL (United States))

1993-05-01
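
The workflow in the abstract, eigenvalues of decoupled subsystems followed by the shift introduced by interconnection, can be illustrated on a toy system. This sketch uses plain power iteration as a stand-in for library eigensolvers and is not the invariant-subspace method itself; the matrices and the coupling strength 0.5 are invented for illustration:

```python
def dominant_eig(A, iters=500):
    """Power iteration for the dominant eigenvalue of a small matrix:
    a pure-Python stand-in for the 'standard library routines' the
    abstract mentions."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w, key=abs)          # eigenvalue estimate
        v = [x / lam for x in w]       # renormalize the iterate
    return lam

# Two decoupled subsystems with dominant eigenvalues 2 and 3; a small
# interconnection (coupling 0.5) shifts the global eigenvalue above 3,
# analogous to the global modes the abstract associates with interarea
# oscillations.
coupled = [
    [2.0, 0.0, 0.5, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.0, 3.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
```

For this coupled matrix the dominant eigenvalue is (5 + sqrt(2)) / 2 ≈ 3.207, so the interconnection moves the largest subsystem eigenvalue (3) upward, which is the kind of change the paper's method tracks without re-solving the full system from scratch.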

71

Computational methods for optical molecular imaging

Summary A new computational technique, the matched interface and boundary (MIB) method, is presented to model the photon propagation in biological tissue for the optical molecular imaging. Optical properties have significant differences in different organs of small animals, resulting in discontinuous coefficients in the diffusion equation model. Complex organ shape of small animal induces singularities of the geometric model as well. The MIB method is designed as a dimension splitting approach to decompose a multidimensional interface problem into one-dimensional ones. The methodology simplifies the topological relation near an interface and is able to handle discontinuous coefficients and complex interfaces with geometric singularities. In the present MIB method, both the interface jump condition and the photon flux jump conditions are rigorously enforced at the interface location by using only the lowest-order jump conditions. This solution near the interface is smoothly extended across the interface so that central finite difference schemes can be employed without the loss of accuracy. A wide range of numerical experiments are carried out to validate the proposed MIB method. The second-order convergence is maintained in all benchmark problems. The fourth-order convergence is also demonstrated for some three-dimensional problems. The robustness of the proposed method over the variable strength of the linear term of the diffusion equation is also examined. The performance of the present approach is compared with that of the standard finite element method. The numerical study indicates that the proposed method is a potentially efficient and robust approach for the optical molecular imaging. PMID:20485461

Chen, Duan; Wei, Guo-Wei; Cong, Wen-Xiang; Wang, Ge

2010-01-01

72

77 FR 74829 - Notice of Public Meeting-Cloud Computing and Big Data Forum and Workshop

Federal Register 2010, 2011, 2012, 2013, 2014

...Public Meeting--Cloud Computing and Big Data Forum and Workshop AGENCY: National...announces a Cloud Computing and Big Data Forum and Workshop to be held on Tuesday...workshop. The NIST Cloud Computing and Big Data Forum and Workshop will bring...

2012-12-18

73

Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing

Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing. Qian Wang. Abstract: Cloud Computing has been envisioned as the next-generation architecture ... the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third

74

A case for DoD application of public cloud computing services

Cloud computing offers tremendous opportunities for private industry, governments, and even individuals to access massive amounts of compute resources on-demand at very low cost. Recent advancements in bandwidth availability, virtualization, security services and general public awareness have contributed to this information technology (IT) business model. Cloud computing provides on-demand scalability, reduces costs, decreases barriers to entry, and enables organizations to

Kris E Barcomb; Jeffrey W Humphries; Robert F Mills

2011-01-01

75

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing. Qian Wang, IEEE, and Jin Li. Abstract: Cloud Computing has been envisioned as the next-generation architecture ... the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party

Hou, Y. Thomas

76

Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing

Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing. Cong Wang, Qian ... The large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging ... efficient. I. INTRODUCTION: Cloud Computing has been envisioned as the next-generation architecture

Hou, Y. Thomas

78

A Classification of Recent Australasian Computing Education Publications

ERIC Educational Resources Information Center

A new classification system for computing education papers is presented and applied to every computing education paper published between January 2004 and January 2007 at the two premier computing education conferences in Australia and New Zealand. We find that while simple reports outnumber other types of paper, a healthy proportion of papers…

Computer Science Education, 2007

2007-01-01

79

Computational Methods in Biomechanics and Physics A Dissertation

Computational Methods in Biomechanics and Physics. A Dissertation Presented to the Faculty ... Doctor of Philosophy, by Serguei Lapin, May 2005.

Lapin, Sergey

80

Domain decomposition methods in computational fluid dynamics

NASA Technical Reports Server (NTRS)

The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

Gropp, William D.; Keyes, David E.

1991-01-01

81

Modules and methods for all photonic computing

A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

Schultz, David R. (Knoxville, TN); Ma, Chao Hung (Oak Ridge, TN)

2001-01-01

82

Diffusing the Cloud: Cloud Computing and Implications for Public Policy

Cloud Computing is rapidly emerging as the new information technology platform. It is, however, much more than simply a new set of technologies and business models. Cloud Computing is transforming how consumers, companies, and governments store information, how they process that information, and how they utilize computing power. It can be an engine of innovation, a platform for entrepreneurship, and

Kenji E. Kushida; Jonathan Murray; John Zysman

2011-01-01

83

Computer-Aided Dispatch System as a Decision Making Tool in Public and Private Sectors

We describe in detail seven distinct areas in both public and private sectors in which a real-time computer-aided dispatch system is applicable to the allocation of scarce resources. Characteristics of a real-time ...

Lee, I-Jen

84

Lecture Notes in Computer Science 4504 Commenced Publication in 1973

of interests between service-oriented computing, semantic technology, and intelligent multiagent systems, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology David Martin Ingo Müller Suzette Stoutenburg Katia P. Sycara (Eds.) Service-Oriented Computing: Agents

Huang, Jingshan

85

Lecture Notes in Computer Science 7535 Commenced Publication in 1973

(Eds.) Parameterized and Exact Computation, 7th International Symposium, IPEC 2012, Ljubljana, Slovenia ... at the 7th International Symposium on Parameterized and Exact Computation, IPEC 2012 (ipec2012.isoftcloud.gr), held on September 12-14, 2012 as part of the ALGO 2012 (algo12.fri.uni-lj.si) conference in Ljubljana

Dimitrios, Thilikos

86

The Use of Public Computing Facilities by Library Patrons: Demography, Motivations, and Barriers

ERIC Educational Resources Information Center

Public libraries play an important part in the development of a community. Today, they are seen as more than store houses of books; they are also responsible for the dissemination of online, and offline information. Public access computers are becoming increasingly popular as more and more people understand the need for internet access. Using a…

DeMaagd, Kurt; Chew, Han Ei; Huang, Guanxiong; Khan, M. Laeeq; Sreenivasan, Akshaya; LaRose, Robert

2013-01-01

87

Fiscal federalism and local public finance: A computable general equilibrium (CGE) framework

This paper attempts to make an argument for the feasibility and usefulness of a computable general equilibrium approach to studying fiscal federalism and local public finance. It begins by presenting a general model of fiscal federalism that has at its base a local public goods model with (1) multiple types of mobile agents who are endowed with preferences, private good

Thomas Nechyba

1996-01-01

88

Computational Evaluation of the Traceback Method

ERIC Educational Resources Information Center

Several models of language acquisition have emerged in recent years that rely on computational algorithms for simulation and evaluation. Computational models are formal and precise, and can thus provide mathematically well-motivated insights into the process of language acquisition. Such models are amenable to robust computational evaluation,…

Kol, Sheli; Nir, Bracha; Wintner, Shuly

2014-01-01

89

The Computer as an Aid to Public Relations Writing.

ERIC Educational Resources Information Center

Teachers of public relations and other communication areas, with endorsement from the Association for Education in Journalism and Mass Communication (AEJMC), should request the data processing industry to develop assisted instruction programs in journalistic writing. Such action would provide a clearly defined need for a significant market and…

Rayfield, Robert E.

90

Lecture Notes in Computer Science 7902 Commenced Publication in 1973

From the 2013 table of contents: It's as Easy as ABC: Introducing Anthropology-Based Computing (John N.A. Brown); Extreme Fuzzy Rule-Based Ensembles Using Diversity Induction and Evolutionary Algorithms-Based Classifier…

Eirin Lopez, Jose Maria

91

Studying Organizational Computing Infrastructures: Multi-method Approaches

This paper provides guidelines for developing multi-method research approaches, provides several examples of their use, and discusses experiences with conducting a multi-method study of one organization's computing infrastructure changes. The focus on organizational computing infrastructures is due to the contemporary belief that these are increasingly critical to organizational success. However, understanding the value of an organization's computing infrastructure is

Steve Sawyer

2000-01-01

92

Scientific Methods in Computer Science Gordana Dodig-Crnkovic

This paper analyzes scientific aspects of Computer Science. First it defines science and scientific method in general. It gives a discussion of relations between science, research, development and technology. The existing

Cunningham, Conrad

93

Studies on the zeros of Bessel functions and methods for their computation

NASA Astrophysics Data System (ADS)

The zeros of Bessel functions play an important role in computational mathematics, mathematical physics, and other areas of natural sciences. Studies addressing these zeros (their properties, computational methods) can be found in various sources. This paper offers a detailed overview of the results concerning the real zeros of the Bessel functions of the first and second kinds and general cylinder functions. The author intends to publish several overviews on this subject. In this first publication, works dealing with real zeros are analyzed. Primary emphasis is placed on classical results, which are still important. Some of the most recent publications are also discussed.
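The survey above concerns the real zeros themselves; as a hedged, self-contained illustration of how such a zero can be computed (not a method from the surveyed literature), the sketch below evaluates J0 from its power series and bisects for the first positive zero. The names `bessel_j0` and `first_j0_zero` are illustrative.

```python
def bessel_j0(x):
    """Evaluate J0(x) from its power series: sum_k (-1)^k (x/2)^(2k) / (k!)^2.

    The alternating series converges rapidly for moderate |x|."""
    term, total, k = 1.0, 1.0, 0
    while abs(term) > 1e-16:
        k += 1
        term *= -(x * x / 4.0) / (k * k)
        total += term
    return total

def first_j0_zero(lo=2.0, hi=3.0, tol=1e-12):
    """Bisect for the first positive zero of J0 (J0(2) > 0 > J0(3))."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bessel_j0(lo) * bessel_j0(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(first_j0_zero(), 6))  # 2.404826, the classical first zero of J0
```

The same bracket-and-bisect approach extends to later zeros once consecutive sign changes are bracketed, e.g. using the known interlacing of Bessel function zeros.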

Kerimov, M. K.

2014-09-01

94

A Computational Model (to appear in Behavior Research Methods, Instruments and Computers)

Author manuscript, published in "Behavior Research Methods 38, 4 (2006) 628-637". This paper describes

Paris-Sud XI, Université de

95

Public involvement is a process that involves the public in the decision making of an organization, for example a municipality or a corporation. It has developed into a widely accepted and recommended policy in environment-altering projects. The EU Water Framework Directive (WFD) came into force in 2000 and stresses the importance of public involvement in composing river basin management plans. Therefore, the need to develop public involvement methods for different situations and circumstances is evident. This paper describes how various public involvement methods have been applied in a development project involving the most heavily regulated lake in Finland. The objective of the project was to assess the positive and negative impacts of regulation and to find possibilities for alleviating the adverse impacts on recreational use and the aquatic ecosystem. An exceptional effort was made towards public involvement, which was closely connected to planning and decision making. The applied methods were (1) steering group work, (2) survey, (3) dialogue, (4) theme interviews, (5) public meeting and (6) workshops. The information gathered using these methods was utilized in different stages of the project, e.g., in identifying the regulation impacts, comparing alternatives and compiling the recommendations for regulation development. After describing our case and the results from the applied public involvement methods, we will discuss our experiences and the feedback from the public. We will also critically evaluate our own success in coping with public involvement challenges. In addition, we present general recommendations for dealing with these problematic issues based on our experiences, which provide new insights for applying various public involvement methods in multi-objective decision making projects.

Vaentaenen, Ari [Department of Sociology, FIN 20014 University of Turku (Finland)]. E-mail: armiva@utu.fi; Marttunen, Mika [Department for Expert Services, Finnish Environment Institute, P.O. Box 140 FIN 00251 Helsinki (Finland)]. E-mail: Mika.Marttunen@ymparisto.fi

2005-04-15

96

Computer methods in vibrational ecology problems

NASA Astrophysics Data System (ADS)

In this paper, formulations of direct vibrational ecology problems are described. A linearly extended source of acoustic waves in a solid medium is considered (e.g., a tunnel in soil, modeled as a thin elastic shell). In this case the shell is excited by an internal force varying in time and in the spatial variables. The proposed computer prognosis method is realized in the VibLab software of the Russian Tunneling Association in two versions of the algorithms: for homogeneous and for stratified two-dimensional media. Three general stratification types are considered: two waveguides (a subsurface waveguide and a waveguide with its axis at depth H) and stratification with linearly increasing wave speed. On the basis of these approximations, corrections for the liquid-soil case were calculated. The vibrational field in the ground is described by cylindrical waves; modeling of this type of field is performed by a simple finite-element scheme. Application of the proposed algorithms to inverse problems, in which the media parameters are determined, is also discussed. [Work supported by the Russian Foundation for Basic Research Grants No. 01-02-16127 and No. 02-02-17143.]

Rybak, Samuil A.; Makhortykh, Sergey A.; Kostarev, Stanislav A.; Gatina, Aleksandra R.

2003-10-01

97

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

Code of Federal Regulations, 2014 CFR

...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

2014-07-01

98

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

Code of Federal Regulations, 2013 CFR

...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

2013-07-01

99

Federal Register 2010, 2011, 2012, 2013, 2014

...Public Meeting--Intersection of Cloud Computing and Mobility Forum and Workshop...INFORMATION: NIST hosted six prior Cloud Computing Forum & Workshop events in May...Government's experience with cloud computing, report on the status of...

2013-09-04

100

36 CFR 1254.32 - What rules apply to public access use of the Internet on NARA-supplied computers?

Code of Federal Regulations, 2010 CFR

...access use of the Internet on NARA-supplied computers? 1254.32 Section 1254.32 Parks...access use of the Internet on NARA-supplied computers? (a) Public access computers (workstations) are available for...

2010-07-01

101

Small Towns and Small Computers: Can a Match Be Made? A Public Policy Seminar.

ERIC Educational Resources Information Center

A public policy seminar discussed how to match small towns and small computers. James K. Coyne, Special Assistant to the President and Director of the White House Office of Private Sector Initiatives, offered opening remarks and described a database system developed by his office to link organizations and communities with small computers to…

National Association of Towns and Townships, Washington, DC.

102

ERIC Educational Resources Information Center

The article integrates information on three topics--the quantitative backgrounds and computing needs of social science students; cooperation among social science instructors, students, and computer center user consultants; and attitudes of instructors in public administration and political science doctoral programs. (Author/DB)

Hy, Ronald John; And Others

1981-01-01

103

ERIC Educational Resources Information Center

Despite support for technology in schools, there is little evidence indicating whether using computers in public elementary mathematics classrooms is associated with improved outcomes for students. This exploratory study examined data from the Early Childhood Longitudinal Study, investigating whether students' frequency of computer use was related…

Kao, Linda Lee

2009-01-01

104

Novel Methods for Communicating Plasma Science to the General Public

NASA Astrophysics Data System (ADS)

The broader implications of plasma science remain an elusive topic that the general public rarely discusses, despite their relevance to energy, the environment, and technology. Recently, we have looked beyond print media for methods to reach large numbers of people in creative and informative ways. These have included video, art, images, and music. For example, our submission to the "What is a Flame?" contest was ranked in the top 15 out of 800 submissions. Images of plasmas have won 3 out of 5 of the Princeton University "Art of Science" competitions. We use a plasma speaker to teach students of all ages about sound generation and plasma physics. We report on the details of each of these, as well as future videos and animations under development.

Zwicker, Andrew; Merali, Aliya; Wissel, S. A.; Delooper, John

2012-10-01

105

SAR/QSAR methods in public health practice

Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.

Demchuk, Eugene, E-mail: edemchuk@cdc.gov; Ruiz, Patricia; Chou, Selene; Fowler, Bruce A.

2011-07-15

106

Computational methods and opportunities for phosphorylation network medicine

Protein phosphorylation, one of the most ubiquitous post-translational modifications (PTM) of proteins, is known to play an essential role in cell signaling and regulation. With the increasing understanding of the complexity and redundancy of cell signaling, there is a growing recognition that targeting the entire network or system could be a necessary and advantageous strategy for treating cancer. Protein kinases, the proteins that add a phosphate group to the substrate proteins during phosphorylation events, have become one of the largest groups of ‘druggable’ targets in cancer therapeutics in recent years. Kinase inhibitors are being regularly used in clinics for cancer treatment. This therapeutic paradigm shift in cancer research is partly due to the generation and availability of high-dimensional proteomics data. Generation of this data, in turn, is enabled by increased use of mass-spectrometry (MS)-based or other high-throughput proteomics platforms as well as companion public databases and computational tools. This review briefly summarizes the current state and progress on phosphoproteomics identification, quantification, and platform related characteristics. We review existing database resources, computational tools, methods for phosphorylation network inference, and ultimately demonstrate the connection to therapeutics. Finally, many research opportunities exist for bioinformaticians or biostatisticians based on developments and limitations of the current and emerging technologies.

Chen, Yian Ann; Eschrich, Steven A.

2014-01-01

107

Lecture Notes in Computer Science 4974 Commenced Publication in 1973


Di Caro, Gianni

108

Computers in Public Schools: Changing the Image with Image Processing.

ERIC Educational Resources Information Center

The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

Raphael, Jacqueline; Greenberg, Richard

1995-01-01

109

The ACLS Survey of Scholars: Views on Publications, Computers, Libraries.

ERIC Educational Resources Information Center

Reviews results of a survey by the American Council of Learned Societies (ACLS) of 3,835 scholars in the humanities and social sciences who are working both in colleges and universities and outside the academic community. Areas highlighted include professional reading, authorship patterns, computer use, and library use. (LRW)

Morton, Herbert C.; Price, Anne Jamieson

1986-01-01

110

Improved Computational Methods for Ray Tracing

This paper describes algorithmic procedures that have been implemented to reduce the computational expense of producing ray-traced images. The selection of bounding volumes is examined to reduce the computational cost of the ray-intersection test. The use of object coherence, which relies on a hierarchical description of the environment, is then presented. Finally, since the building of the ray-intersection trees
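The bounding-volume idea mentioned in this abstract is easy to sketch: test a cheap enclosing sphere before any expensive exact intersection, and skip the object when the ray misses it. The snippet below is a minimal illustration (assuming rays start outside the volume and directions are unit length), not code from the paper.

```python
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float
    cy: float
    cz: float
    r: float

def ray_hits_sphere(ox, oy, oz, dx, dy, dz, s):
    """Cheap reject test: does a ray (origin o, unit direction d, starting
    outside the sphere) come within radius r of the sphere's centre?"""
    lx, ly, lz = s.cx - ox, s.cy - oy, s.cz - oz
    t = lx * dx + ly * dy + lz * dz             # projection of centre onto the ray
    d2 = (lx * lx + ly * ly + lz * lz) - t * t  # squared ray-to-centre distance
    return t >= 0.0 and d2 <= s.r * s.r

# A sphere straight ahead is accepted; one well off to the side is rejected,
# so the expensive exact tests for its contents can be skipped entirely.
print(ray_hits_sphere(0, 0, 0, 0, 0, 1, Sphere(0, 0, 5, 1)),
      ray_hits_sphere(0, 0, 0, 0, 0, 1, Sphere(5, 0, 5, 1)))  # True False
```

In a hierarchy (as the abstract's "object coherence" suggests), the same test at an interior node can prune every object beneath it at once.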

Hank Weghorst; Gary Hooper; Donald P. Greenberg

1984-01-01

111

ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING

The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...

112

Computational Methods for Simulating Quantum Computers H. De Raedt

From the contents: …-Formula Algorithms; Comments; Quantum Algorithms; Elementary Gates (Hadamard gate, swap gate). A quantum computer is a complicated many-body system that interacts with its environment. In quantum statistical mechanics and quantum chemistry, it is well known that simulating an interacting quantum many-body system

113

Computational methods in sequence and structure prediction

NASA Astrophysics Data System (ADS)

This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" upstream of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes, and this element might also be the binding site for the MYB class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning for the occurrence of several groups of known abscisic acid (ABA)-related cis-regulatory elements in the upstream regions of 876 Arabidopsis genes; and (b) exhaustively scanning for all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimate of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, called "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to their occurrence within the genome. (c) We will introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework.
With this framework we have developed a software package which is capable of designing novel protein structures at the atomic resolution. This software package allows us to perform protein structure design with a flexible backbone. The backbone flexibility includes loop region relaxation as well as a secondary structure collective mode relaxation scheme. (Abstract shortened by UMI.)

Lang, Caiyi

114

Objectives Large-scale incidents such as the 2009 H1N1 outbreak, the 2011 European Escherichia coli outbreak, and Hurricane Sandy demonstrate the need for continuous improvement in emergency preparation, alert, and response systems globally. As questions relating to emergency preparedness and response continue to rise to the forefront, the field of industrial and systems engineering (ISE) emerges, as it provides sophisticated techniques that have the ability to model the system, simulate, and optimize complex systems, even under uncertainty. Methods We applied three ISE techniques—Markov modeling, operations research (OR) or optimization, and computer simulation—to public health emergency preparedness. Results We present three models developed through a four-year partnership with stakeholders from state and local public health for effectively, efficiently, and appropriately responding to potential public health threats: (1) an OR model for optimal alerting in response to a public health event, (2) simulation models developed to respond to communicable disease events from the perspective of public health, and (3) simulation models for implementing pandemic influenza vaccination clinics representative of clinics in operation for the 2009–2010 H1N1 vaccinations in North Carolina. Conclusions The methods employed by the ISE discipline offer powerful new insights to understand and improve public health emergency preparedness and response systems. The models can be used by public health practitioners not only to inform their planning decisions but also to provide a quantitative argument to support public health decision making and investment. PMID:25355986

Yaylali, Emine; Taheri, Javad

2014-01-01

115

International Symposium on Computational Methods in

CMTPI-2007, Moscow, Russia. … and toxicology. · Commercial and non-commercial computational tools and databases on the Internet

Ferreira, Márcia M. C.

116

Computational complexity for the two-point block method

NASA Astrophysics Data System (ADS)

In this paper, we discuss and compare the computational complexity of the two-point block method and a one-point method of Adams type. The computational complexity of both methods is determined from the number of arithmetic operations performed and is expressed in O(n). These two methods are used to solve two-point second-order boundary value problems directly, implemented with a variable step size strategy adapted to the multiple shooting technique via a three-step iterative method. Two numerical examples are tested. The results show that the computational complexity of these methods reliably estimates their cost in terms of execution time. We conclude that the two-point block method has better computational performance compared to the one-point method as the total number of steps becomes larger.

See, Phang Pei; Majid, Zanariah Abdul

2014-12-01

117

Computational protein design methods for synthetic biology.

Computational protein design, a process that searches for mutants with desired improved properties, plays a central role in the conception of many synthetic biology devices including biosensors, bioproduction, or regulation circuits. To that end, a rational workflow for computational protein design is described here consisting of (a) searching in the sequence, structure or chemical spaces for the desired function and associated protein templates; (b) finding the list of potential hot regions to mutate in the parent proteins; and (c) performing in silico screening of mutants with predicted improved properties. PMID:25487090

Carbonell, Pablo; Trosset, Jean-Yves

2015-01-01

118

Methods Towards Invasive Human Brain Computer Interfaces

During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is

Thomas Navin Lal; Thilo Hinterberger; Guido Widman; Michael Schröder; N. Jeremy Hill; Wolfgang Rosenstiel; Christian Erich Elger; Bernhard Schölkopf; Niels Birbaumer

2004-01-01

119

Methods towards invasive human brain computer interfaces

During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is the low

T. N. Lal; T. Hinterberger; G. Widman; N. J. Hill; W. Rosenstiel; C. E. Elger; N. Birbaum

2005-01-01

120

Parallel computation with the spectral element method

Spectral element models for the shallow water equations and the Navier-Stokes equations have been successfully implemented on a data parallel supercomputer, the Connection Machine model CM-5. The nonstaggered grid formulations for both models are described and are shown to be especially efficient in a data parallel computing environment.

Ma, Hong

1995-12-01

121

Classical versus Computer Algebra Methods in Elementary Geometry

ERIC Educational Resources Information Center

Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

Pech, Pavel

2005-01-01

122

Reduced Switching Frequency Computed PWM Method for Multilevel Converter Control

This paper presents computed PWM methods for 11-level multilevel converters to eliminate the specified harmonics in the output. An experimental 11-level H-bridge multilevel converter with a first-on first-off switching strategy (used

Tolbert, Leon M.

123

Domain identification in impedance computed tomography by spline collocation method

NASA Technical Reports Server (NTRS)

A method for estimating an unknown domain in elliptic boundary value problems is considered. The problem is formulated as an inverse problem of integral equations of the second kind. A computational method is developed using a spline collocation scheme. The results can be applied to the inverse problem of impedance computed tomography (ICT) for image reconstruction.
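The second-kind integral equation formulation mentioned in this abstract can be illustrated independently of the domain-identification application. The sketch below solves a manufactured Fredholm equation of the second kind by collocation at trapezoid nodes (a generic Nyström-type discretization, not the paper's spline scheme); `fredholm2`, the kernel, and the grid size are illustrative assumptions.

```python
def solve_linear(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= m * a[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

def fredholm2(kernel, f, lam, n=41):
    """Collocate u(x) - lam * int_0^1 K(x,y) u(y) dy = f(x) at trapezoid nodes."""
    h = 1.0 / (n - 1)
    xs = [i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2.0
    a = [[(1.0 if i == j else 0.0) - lam * kernel(xs[i], xs[j]) * w[j]
          for j in range(n)] for i in range(n)]
    return xs, solve_linear(a, [f(x) for x in xs])

# Manufactured problem: K(x,y) = x*y, exact solution u(x) = x, so f(x) = 2x/3.
xs, u = fredholm2(lambda x, y: x * y, lambda x: 2.0 * x / 3.0, lam=1.0)
err = max(abs(ui - xi) for ui, xi in zip(u, xs))
print(err)  # small; trapezoid quadrature error is O(h^2)
```

Replacing the trapezoid rule with a spline basis, as in the paper, changes only how the integral operator is discretized; the resulting linear system has the same structure.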

Kojima, Fumio

1990-01-01

124

Overview of computational structural methods for modern military aircraft

NASA Technical Reports Server (NTRS)

Computational structural methods are essential for designing modern military aircraft. This briefing deals with computational structural methods (CSM) currently used. First a brief summary of modern day aircraft structural design procedures is presented. Following this, several ongoing CSM related projects at Northrop are discussed. Finally, shortcomings in this area, future requirements, and summary remarks are given.

Kudva, J. N.

1992-01-01

125

12 CFR 227.25 - Unfair balance computation method.

Code of Federal Regulations, 2010 CFR

...2010-01-01 2010-01-01 false Unfair balance computation method. 227.25 Section...Account Practices Rule § 227.25 Unfair balance computation method. (a) General rule...bank must not impose finance charges on balances on a consumer credit card account...

2010-01-01

126

Systems Science Methods in Public Health: Dynamics, Networks, and Agents

Complex systems abound in public health. Complex systems are made up of heterogeneous elements that interact with one another, have emergent properties that are not explained by understanding the individual elements of the system, persist over time, and adapt to changing circumstances. Public health is starting to use results from systems science studies to shape practice and policy, for example

Douglas A. Luke; Katherine A. Stamatakis

127

A Meshless Method for Computational Stochastic Mechanics

NASA Astrophysics Data System (ADS)

This paper presents a stochastic meshless method for probabilistic analysis of linear-elastic structures with spatially varying random material properties. Using Karhunen-Loève (K-L) expansion, the homogeneous random field representing material properties was discretized by a set of orthonormal eigenfunctions and uncorrelated random variables. Two numerical methods were developed for solving the integral eigenvalue problem associated with K-L expansion. In the first method, the eigenfunctions were approximated as linear sums of wavelets and the integral eigenvalue problem was converted to a finite-dimensional matrix eigenvalue problem that can be easily solved. In the second method, a Galerkin-based approach in conjunction with meshless discretization was developed in which the integral eigenvalue problem was also converted to a matrix eigenvalue problem. The second method is more general than the first, and can solve problems involving a multi-dimensional random field with arbitrary covariance functions. In conjunction with meshless discretization, the classical Neumann expansion method was applied to predict second-moment characteristics of the structural response. Several numerical examples are presented to examine the accuracy and convergence of the stochastic meshless method. A good agreement is obtained between the results of the proposed method and the Monte Carlo simulation. Since mesh generation of complex structures can be far more time-consuming and costly than the solution of a discrete set of equations, the meshless method provides an attractive alternative to the finite element method for solving stochastic-mechanics problems.
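The conversion of the integral eigenvalue problem into a matrix eigenvalue problem, as described above, can be sketched generically. The code below is a plain Nyström/power-iteration illustration, not the paper's wavelet or Galerkin meshless scheme: for the exponential covariance with unit variance and correlation length on [0,1], the largest K-L eigenvalue is classically about 0.739.

```python
import math

def kl_top_eigenvalue(n=101, corr_len=1.0, sigma=1.0, iters=100):
    """Largest eigenvalue of the covariance operator C(x,y) = sigma^2 exp(-|x-y|/l)
    on [0,1]: discretize the integral eigenvalue problem with trapezoid weights
    (turning it into a matrix eigenvalue problem) and power-iterate."""
    h = 1.0 / (n - 1)
    xs = [i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2.0
    # a[i][j] = C(x_i, x_j) * w_j is the Nystrom matrix of the integral operator.
    a = [[sigma * sigma * math.exp(-abs(xi - xj) / corr_len) * wj
          for xj, wj in zip(xs, w)] for xi in xs]
    v, lam = [1.0] * n, 0.0
    for _ in range(iters):
        av = [sum(row[j] * v[j] for j in range(n)) for row in a]
        lam = max(abs(c) for c in av)   # inf-norm ratio -> dominant eigenvalue
        v = [c / lam for c in av]
    return lam

print(kl_top_eigenvalue())  # ~ 0.739 for l = sigma = 1
```

A full K-L truncation would keep the leading several eigenpairs; power iteration with deflation, or any dense symmetric eigensolver, recovers them from the same matrix.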

Rahman, S.; Xu, H.

2005-03-01

128

NASA Astrophysics Data System (ADS)

Lung cancer is one of the main public health issues in developed countries. It typically manifests as non-calcified pulmonary nodules that can be detected by reading lung Computed Tomography (CT) images. To assist radiologists in reading these images, researchers began, a decade ago, to develop Computer Aided Detection (CAD) methods capable of detecting lung nodules. In this work, a CAD composed of two subprocedures is presented: one devoted to the identification of parenchymal nodules, and one devoted to the identification of nodules attached to the pleura surface. Both are upgrades of two methods previously presented as the Voxel Based Neural Approach (VBNA) CAD. The novelty of this paper lies in the massive training using the public research Lung Image Database Consortium (LIDC) database and in the implementation of new features for classification with respect to the original VBNA method. Finally, the proposed CAD is blindly validated on the ANODE09 dataset. The result of the validation is a score of 0.393, which corresponds to the average sensitivity of the CAD computed at seven predefined false positive rates: 1/8, 1/4, 1/2, 1, 2, 4, and 8 FP/CT.

Camarlinghi, Niccolò

2013-09-01

129

Reduced Switching Frequency Computed PWM Method for Multilevel Converter Control

This paper presents two computed PWM methods for 11-level multilevel converters to eliminate the specified harmonics in the output voltage to decrease total harmonic distortion (THD). The first method uses the fundamental switching scheme to eliminate low order harmonics, and uses the active harmonic elimination method to eliminate higher order harmonics. The second method uses these schemes in the reverse

Zhong Du; Leon M. Tolbert; John N. Chiasson

2005-01-01

130

COMSAC: Computational Methods for Stability and Control. Part 1

NASA Technical Reports Server (NTRS)

Work on stability and control included the following reports: Introductory Remarks; Introduction to Computational Methods for Stability and Control (COMSAC); Stability & Control Challenges for COMSAC: A NASA Langley Perspective; Emerging CFD Capabilities and Outlook: A NASA Langley Perspective; The Role for Computational Fluid Dynamics for Stability and Control: Is It Time?; Northrop Grumman Perspective on COMSAC; Boeing Integrated Defense Systems Perspective on COMSAC; Computational Methods in Stability and Control: WPAFB Perspective; Perspective: Raytheon Aircraft Company; A Greybeard's View of the State of Aerodynamic Prediction; Computational Methods for Stability and Control: A Perspective; Boeing TacAir Stability and Control Issues for Computational Fluid Dynamics; NAVAIR S&C Issues for CFD; An S&C Perspective on CFD; Issues, Challenges & Payoffs: A Boeing User's Perspective on CFD for S&C; and Stability and Control in Computational Simulations for Conceptual and Preliminary Design: The Past, Today, and Future?

Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

2004-01-01

131

Computational simulation methods for fiber reinforced composites

The Trefftz finite element method (Trefftz-FEM), the adaptive cross approximation BEM (ACA BEM) and the continuous source function method (CSFM) are used for the simulation of composites reinforced by short fibers (CRSF), with the aim of showing the possibilities of reducing the problem of complicated and important interactions in such composite materials.

Vladimír Kompiš; Zuzana Murčinková; Sergey Rjasanow; Richards Grzibovskis; Qinghua Qin

2010-01-01

132

Computing with DNA. From: Methods in Molecular Biology, vol. 132: Bioinformatics Methods. … methods of molecular biology to solve a difficult computational problem. Adleman's experiment solved an instance … computations. The main idea was the encoding of data in DNA strands and the use of tools from molecular biology

Kari, Lila

133

Computer-assisted centrifugal separations. Computer Methods and Programs in Biomedicine 24 (1987) 179-188. … by centrifugal elutriation, we constructed an on-line computer-controlled multiparametric light-… controlled elutriation. Keywords: Cell separation; Centrifugal elutriation; Multiparametric light-scatter analysis; Stand…

134

A Fractional-Step Method Of Computing Incompressible Flow

NASA Technical Reports Server (NTRS)

Method of computing time-dependent flow of incompressible, viscous fluid involves numerical solution of Navier-Stokes equations on two- or three-dimensional computational grid based on generalized curvilinear coordinates. Equations of method derived in primitive-variable formulation. Dependent variables are pressure at center of each cell of computational grid and volume fluxes across faces of each cell. Volume fluxes replace Cartesian components of velocity; these fluxes correspond to contravariant components of velocity multiplied by volume of computational cell, in staggered grid. Choice of dependent variables enables simple extension of previously developed staggered-grid approach to generalized curvilinear coordinates and facilitates enforcement of conservation of mass.

Kwak, Dochan; Rosenfeld, Moshe; Vinokur, Marcel

1993-01-01
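The fractional-step structure described above, where an intermediate velocity is advanced first and then projected onto a divergence-free field through a pressure solve, can be illustrated in a far simpler setting than the paper's staggered curvilinear grid. The sketch below is our own minimal illustration, not the authors' formulation: it performs only the projection step, on a doubly periodic Cartesian grid with an FFT-based Poisson solve.

```python
import numpy as np

def project(u, v, dx=1.0):
    """Pressure projection on a doubly periodic grid: remove the divergent
    part of (u, v) by solving a Poisson equation with FFTs. This sketches
    only the projection half of a fractional step; the paper instead works
    on a staggered grid in generalized curvilinear coordinates."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * kx * uh + 1j * ky * vh       # divergence, spectral space
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                            # guard the zero (mean) mode
    ph = -div_h / k2                          # solve  -k^2 p_h = div_h
    ph[0, 0] = 0.0
    return (np.fft.ifft2(uh - 1j * kx * ph).real,
            np.fft.ifft2(vh - 1j * ky * ph).real)

# One fractional step: advance ignoring pressure, then project.
# An odd grid size avoids the asymmetric Nyquist mode of even-length FFTs.
rng = np.random.default_rng(0)
u_star = rng.standard_normal((33, 33))        # intermediate velocity field
v_star = rng.standard_normal((33, 33))
u_new, v_new = project(u_star, v_star)
```

After projection the spectral divergence of (u_new, v_new) vanishes to rounding error, which is exactly the conservation-of-mass enforcement the abstract refers to.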

135

A Novel College Network Resource Management Method using Cloud Computing

NASA Astrophysics Data System (ADS)

At present, college information construction mainly comprises college networks and management information systems, and many problems arise during the informatization process. Cloud computing is a development of distributed processing, parallel processing and grid computing: data is stored in the cloud, software and services are placed in the cloud and built on top of various standards and protocols, and they can be accessed through all kinds of equipment. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; cloud computing technology and methods are then applied to the construction of a college information sharing platform.

Lin, Chen

136

Phase Field Method: Spinodal Decomposition Computer Laboratory

NSDL National Science Digital Library

In this lab, spinodal decomposition is numerically implemented in FiPy. A simple example python script (spinodal.py) summarizes the concepts. This lab is intended to complement the "Phase Field Method: An Introduction" lecture

García, R. E.

2008-08-25

137

ERIC Educational Resources Information Center

Explores the Malaysian computer science and information technology publication productivity as indicated by data collected from three Web-based databases. Relates possible reasons for the amount and pattern of contributions to the size of researcher population, the availability of refereed scholarly journals, and the total expenditure allocated to…

Gu, Yinian

2002-01-01

138

Computations of fully nonlinear hydroelastic waves (under consideration for publication in J. Fluid Mech.)

et al. (2009) examined numerically the same nonlinear problem of a moving load on ice. The present work is concerned with the two-dimensional problem of nonlinear gravity waves traveling at the interface between a thin ice sheet

Parau, Emilian I.

139

Accepted for publication in International Journal of Computer Vision. Color Subspaces as Photometric Invariants: reflectance phenomena such as specular reflections confound many vision problems. The approach extends image-based vision techniques to a broad class of specular, non-Lambertian scenes, using implementations of recent

Jaffe, Jules

140

Publications: Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme.

Publications: Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme. Linux Magazine, 1(1):56-59, 1999. Forrest M. Hoffman. Concepts in Beowulfery. Linux Magazine, 4(1):40-41, January 2002a. Forrest M. Hoffman. Configuring a Beowulf Cluster. Linux Magazine

Hoffman, Forrest M.

141

Publications: Forrest M. Hoffman and William W. Hargrove. Cluster computing: Linux taken to the extreme. Linux Magazine, 1(1):56-59, 1999. Forrest M. Hoffman. Concepts in Beowulfery. Linux Magazine, 4(1):40-41, January 2002a. Forrest M. Hoffman. Configuring a Beowulf Cluster. Linux Magazine, 4(2):42-45, February

Hoffman, Forrest M.

142

DataSteward: Using Dedicated Compute Nodes for Scalable Data Management on Public Clouds

Applications are increasingly deployed on clouds to build on their inherent elasticity and scalability. One of the critical needs is dealing with data management: the storage services offered by cloud providers suffer from high latencies, trading performance for availability. One alternative

Paris-Sud XI, UniversitÃ© de

143

Computational Methods for Jet Noise Simulation

NASA Technical Reports Server (NTRS)

The purpose of our project is to develop, analyze, and test novel numerical technologies central to the long term goal of direct simulations of subsonic jet noise. Our current focus is on two issues: accurate, near-field domain truncations and high-order, single-step discretizations of the governing equations. The Direct Numerical Simulation (DNS) of jet noise poses a number of extreme challenges to computational technique. In particular, the problem involves multiple temporal and spatial scales as well as flow instabilities and is posed on an unbounded spatial domain. Moreover, the basic phenomenon of interest, the radiation of acoustic waves to the far field, involves only a minuscule fraction of the total energy. The best current simulations of jet noise are at low Reynolds number. It is likely that an increase of one to two orders of magnitude will be necessary to reach a regime where the separation between the energy-containing and dissipation scales is sufficient to make the radiated noise essentially independent of the Reynolds number. Such an increase in resolution cannot be obtained in the near future solely through increases in computing power. Therefore, new numerical methodologies of maximal efficiency and accuracy are required.

Goodrich, John W. (Technical Monitor); Hagstrom, Thomas

2003-01-01

144

ERIC Educational Resources Information Center

Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

2013-01-01

145

Low-Rank Incremental Methods for Computing Dominant Singular Subspaces

Computing the singular values and vectors of a matrix is a crucial kernel in numerous scientific and industrial applications. As such, numerous methods have been proposed to handle this problem in a computationally efficient way. This paper considers a family of methods for incrementally computing the dominant SVD of a large matrix A. Specifically, we describe a unification of a number of previously disparate methods for approximating the dominant SVD via a single pass through A. We tie the behavior of these methods to that of a class of optimization-based iterative eigensolvers on A'*A. An iterative procedure is proposed which allows the computation of an accurate dominant SVD via multiple passes through A. We present an analysis of the convergence of this iteration, and provide empirical demonstration of the proposed method on both synthetic and benchmark data.

Baker, Christopher G [ORNL; Gallivan, Dr. Kyle A [Florida State University; Van Dooren, Dr. Paul [Universite Catholique de Louvain

2012-01-01
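The single-pass family surveyed above can be illustrated with a generic block-incremental update: maintain a rank-k factorization, fold in each new block of columns, recompute a thin SVD, and truncate. This sketch uses our own naming and is not the paper's unified algorithm.

```python
import numpy as np

def incremental_dominant_svd(A, k, block=8):
    """One pass over the columns of A, maintaining a rank-k approximation
    of the dominant singular subspace: fold each new block of columns into
    the current factor, re-SVD, truncate. Generic sketch, not the paper's
    unified algorithm."""
    m, n = A.shape
    U = np.zeros((m, 0))
    s = np.zeros(0)
    for j in range(0, n, block):
        W = np.hstack([U * s, A[:, j:j + block]])   # old factor + new block
        U, s, _ = np.linalg.svd(W, full_matrices=False)
        U, s = U[:, :k], s[:k]                      # truncate back to rank k
    return U, s
```

When rank(A) <= k, truncation discards nothing and the result matches a full SVD; in general, a second pass through A (as the paper proposes) refines the estimate.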

146

Developing a multimodal biometric authentication system using soft computing methods.

Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize the applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision. PMID:25502384

Malcangi, Mario

2015-01-01

147

Lattice gas methods for computational aeroacoustics

NASA Technical Reports Server (NTRS)

This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

Sparrow, Victor W.

1995-01-01

148

A computational geometry method for localization using differences of distances

We present a computational geometry method for the problem of estimating the location of a source in the plane using measurements of distance-differences to it. Compared to existing solutions to this well-studied problem, this method is: (a) computationally more efficient and adaptive in that its precision can be controlled as a function of the number of computational operations, and (b) robust with respect to measurement and computational errors, and is not susceptible to numerical instabilities typical of existing linear algebraic or quadratic methods. This method employs a binary search on a distance-difference curve in the plane using a second distance-difference as the objective function. We show the correctness of this method by establishing the unimodality of directional derivative of the objective function within each of a small number of regions of the plane, wherein a suitable binary search is supported. The computational complexity of this method is O (log (1/{gamma})), where the computed solution is guaranteed to be within a distance {gamma} of the actual location of the source. We present simulation results to compare this method with existing DTOA localization methods.

Xu, X. S. [University of Tennessee, Knoxville (UTK); Rao, Nageswara S [ORNL; Sahni, Sartaj K [ORNL

2010-02-01
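The core idea, walking one distance-difference hyperbola while bisecting on a second distance-difference as the objective, can be sketched for a fixed planar geometry. The code below is a simplified illustration with assumed sensor positions s1 = (-1, 0) and s2 = (1, 0) and a naive bracketing scan; it is not the paper's algorithm or its O(log(1/gamma)) analysis.

```python
import math

def locate(d12, d13, s3, tol=1e-12):
    """Find the source p on the right branch of |p-s1| - |p-s2| = d12
    (sensors s1=(-1,0), s2=(1,0)) by bisecting the second
    distance-difference objective g(t) = (|p-s1| - |p-s3|) - d13.
    Simplified sketch; assumes a single sign change on the branch."""
    a, c = d12 / 2.0, 1.0
    b = math.sqrt(c * c - a * a)

    def point(t):
        # parametrize the right branch of the hyperbola
        return (a * math.cosh(t), b * math.sinh(t))

    def g(t):
        x, y = point(t)
        return math.hypot(x + 1.0, y) - math.hypot(x - s3[0], y - s3[1]) - d13

    # bracket a sign change along the branch parameter t
    ts = [-6.0 + 0.5 * i for i in range(25)]
    lo = hi = None
    for t0, t1 in zip(ts, ts[1:]):
        if g(t0) * g(t1) <= 0.0:
            lo, hi = t0, t1
            break
    if lo is None:
        raise ValueError("no sign change bracketed")
    while hi - lo > tol:                      # binary search on the curve
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return point(0.5 * (lo + hi))
```

With a true source at (1.5, 1.0) and a third sensor at (0, 2), feeding in the two measured differences recovers the source to the bisection tolerance.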

149

Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

NASA Technical Reports Server (NTRS)

A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.

Terrile, Richard J.; Guillaume, Alexandre

2011-01-01

150

Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS.

As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

Jalali, Arash; Olabode, Olusegun A; Bell, Christopher M

2012-01-01

151

Leveraging Cloud Computing to Address Public Health Disparities: An Analysis of the SPHPS

As the use of certified electronic health record technology (CEHRT) has continued to gain prominence in hospitals and physician practices, public health agencies and health professionals have the ability to access health data through health information exchanges (HIE). With such knowledge health providers are well positioned to positively affect population health, and enhance health status or quality-of-life outcomes in at-risk populations. Through big data analytics, predictive analytics and cloud computing, public health agencies have the opportunity to observe emerging public health threats in real-time and provide more effective interventions addressing health disparities in our communities. The Smarter Public Health Prevention System (SPHPS) provides real-time reporting of potential public health threats to public health leaders through the use of a simple and efficient dashboard and links people with needed personal health services through mobile platforms for smartphones and tablets to promote and encourage healthy behaviors in our communities. The purpose of this working paper is to evaluate how a secure virtual private cloud (VPC) solution could facilitate the implementation of the SPHPS in order to address public health disparities. PMID:23569644

Jalali, Arash; Olabode, Olusegun A.; Bell, Christopher M.

2012-01-01

152

Determinant Computation on the GPU using the Condensation Method

We report on a GPU implementation of the condensation method designed by Abdelmalek Salem. Our results suggest that a GPU implementation of the condensation method has a large potential

Moreno Maza, Marc

153

Experience Papers Introducing Research Methods to Computer Science Honours

We introduced a research methods Honours course to increase our students' exposure to research and to help them cope better, in light of the problems we encountered with the Honours research reports prior to the introduction of the research methods course.

Galpin, Vashti

154

Transonic Flow Computations Using Nonlinear Potential Methods

NASA Technical Reports Server (NTRS)

This presentation describes the state of transonic flow simulation using nonlinear potential methods for external aerodynamic applications. The presentation begins with a review of the various potential equation forms (with emphasis on the full potential equation) and includes a discussion of pertinent mathematical characteristics and all derivation assumptions. Impact of the derivation assumptions on simulation accuracy, especially with respect to shock wave capture, is discussed. Key characteristics of all numerical algorithm types used for solving nonlinear potential equations, including steady, unsteady, space marching, and design methods, are described. Both spatial discretization and iteration scheme characteristics are examined. Numerical results for various aerodynamic applications are included throughout the presentation to highlight key discussion points. The presentation ends with concluding remarks and recommendations for future work. Overall, nonlinear potential solvers are efficient, highly developed and routinely used in the aerodynamic design environment for cruise conditions. Published by Elsevier Science Ltd. All rights reserved.

Holst, Terry L.; Kwak, Dochan (Technical Monitor)

2000-01-01

155

Monte Carlo methods: a computational pattern for our pattern language

The Monte Carlo methods are an important set of algorithms in computer science. They involve estimating results by statistically sampling a parameter space with thousands to millions of experiments. The algorithm requires a small set of parameters as input, with which it generates a large amount of computation, and outputs a concise set of aggregated results. The large amount

Jike Chong; Ekaterina Gonina; Kurt Keutzer

2010-01-01
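The pattern described above, a small set of input parameters driving thousands to millions of random experiments that are aggregated into one concise result, is captured by the textbook example of estimating pi by sampling the unit square:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo pattern: a small input (a sample count and a seed)
    drives many random experiments, aggregated into one concise output."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0  # inside quarter circle
    )
    return 4.0 * inside / n_samples
```

The statistical error shrinks like 1/sqrt(n_samples), which is why these methods generate so much computation from so little input.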

156

Computational Methods for Atmospheric Science, ATS607 Colorado State University

Computational Methods for Atmospheric Science, ATS607, Colorado State University, Department of Atmospheric Science, Spring 2014. Wednesdays and Fridays @ 2:15-3:30, Room: ENGR Research Center (ERC). Text: Giordano and Nakanishi, Computational Physics, 2nd Edition. Grading: Homework 50

157

A Method of Computational Correction for Optical Distortion

A computational correction for optical distortion permits lighter optical systems. This is of great importance for head-mounted displays, where weight matters. Introduction: a head-mounted display (HMD), head tracker, computer graphics system

North Carolina at Chapel Hill, University of

158

GAP Noise Computation By The CE/SE Method

NASA Technical Reports Server (NTRS)

A typical gap noise problem is considered in this paper using the new space-time conservation element and solution element (CE/SE) method. Implementation of the computation is straightforward. No turbulence model, LES (large eddy simulation) or a preset boundary layer profile is used, yet the computed frequency agrees well with the experimental one.

Loh, Ching Y.; Chang, Sin-Chung; Wang, Xiao Y.; Jorgenson, Philip C. E.

2001-01-01

159

Structural Finite Element Method Based on Cloud Computing

The structural finite element method is widely used in modern architectural high-performance computing, and many finite element tools based on parallel computing models have emerged. This paper analyzes and compares the differences between the MPI and MapReduce programming models. A combination of MPI and MapReduce is presented for parallel finite element analysis based on

Xin Zou; Xiao-qun Liu; Hong Fan; Zhen-li Cao

2012-01-01

160

Computational methods for Traditional Chinese Medicine: A survey

Traditional Chinese Medicine (TCM) has been actively researched through various approaches, including computational techniques. A review on basic elements of TCM is provided to illuminate various challenges and progresses in its study using computational methods. Information on various TCM formulations, in particular resources on databases of TCM formulations and their integration to Western medicine, are analyzed in several facets, such

Suryani Lukman; Yulan He; Siu-Cheung Hui

2007-01-01

161

Python for Education: Computational Methods for Nonlinear Systems

We describe a novel, interdisciplinary, computational methods course that uses Python and associated numerical and visualization libraries to enable students to implement simulations for a number of different course modules. Problems in complex networks, biomechanics, pattern formation, and gene regulation are highlighted to illustrate the breadth and flexibility of Python-powered computational environments.

Christopher R. Myers; James P. Sethna

2007-04-24

162

Enhancing Particle Methods for Fluid Simulation in Computer Graphics

Thesis by Hagit Schechter, The University of British Columbia (Vancouver), April 2013. Chapter 2 was published as: H. Schechter and R. Bridson. Evolving sub-grid turbulence for smoke

Bridson, Robert

163

Computer based safety training: an investigation of methods

Background: Computer based methods are increasingly being used for training workers, although our understanding of how to structure this training has not kept pace with the changing abilities of computers. Information on a computer can be presented in many different ways and the style of presentation can greatly affect learning outcomes and the effectiveness of the learning intervention. Many questions about how adults learn from different types of presentations and which methods best support learning remain unanswered. Aims: To determine if computer based methods, which have been shown to be effective on younger students, can also be an effective method for older workers in occupational health and safety training. Methods: Three versions of a computer based respirator training module were developed and presented to manufacturing workers: one consisting of text only; one with text, pictures, and animation; and one with narration, pictures, and animation. After instruction, participants were given two tests: a multiple choice test measuring low level, rote learning; and a transfer test measuring higher level learning. Results: Participants receiving the concurrent narration with pictures and animation scored significantly higher on the transfer test than did workers receiving the other two types of instruction. There were no significant differences between groups on the multiple choice test. Conclusions: Narration with pictures and text may be a more effective method for training workers about respirator safety than other popular methods of computer based training. Further study is needed to determine the conditions for the effective use of this technology. PMID:15778259

Wallen, E; Mulloy, K

2005-01-01

164

2.093 Computer Methods in Dynamics, Fall 2002

Formulation of finite element methods for analysis of dynamic problems in solids, structures, fluid mechanics, and heat transfer. Computer calculation of matrices and numerical solution of equilibrium equations by direct ...

Bathe, Klaus-Jürgen

165

Computer method for identification of boiler transfer functions

NASA Technical Reports Server (NTRS)

Iterative computer aided procedure was developed which provides for identification of boiler transfer functions using frequency response data. Method uses frequency response data to obtain satisfactory transfer function for both high and low vapor exit quality data.

Miles, J. H.

1972-01-01

166

Method for transferring data from an unsecured computer to a secured computer

A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.

Nilsen, Curt A. (Castro Valley, CA)

1997-01-01
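The transmit/retransmit comparison at the heart of the method can be sketched as a small function: send the data through the channel twice and accept it only when both received copies agree. The `channel` callable and the digest comparison are our own illustrative choices, not the patented implementation.

```python
import hashlib

def transfer(data: bytes, channel) -> bytes:
    """Send `data` through `channel` twice; accept it only if the two
    received copies agree, otherwise raise (standing in for the warning
    device). Simplified sketch of the transmit/retransmit check."""
    received = channel(data)        # first transmission
    rereceived = channel(data)      # retransmission
    if hashlib.sha256(received).digest() != hashlib.sha256(rereceived).digest():
        raise RuntimeError("warning: error introduced in transmission")
    return received
```

A clean channel passes the data through; a channel that corrupts either copy trips the warning.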

167

Methods of Reducing the Cost of Public Housing. Revised Edition.

ERIC Educational Resources Information Center

An in-depth study of public housing in New York focuses almost exclusively upon the cost analysis aspect of decision. The costs of various construction techniques, design arrangements, and materials have been collected and analyzed. The stated aim of the report is to reduce cost as much as possible, with user comfort being a secondary…

Callender, John H.; Aureli, Giles

168

Network Analysis in Public Health: History, Methods, and Applications

Network analysis is an approach to research that is uniquely suited to describing, exploring, and understanding structural and relational aspects of health. It is both a methodological tool and a theoretical paradigm that allows us to pose and answer important ecological questions in public health. In this review we trace the history of network analysis, provide a methodological overview of

Douglas A. Luke; Jenine K. Harris

2007-01-01

169

Computational methods to identify new antibacterial targets.

The development of resistance to all current antibiotics in the clinic means there is an urgent unmet need for novel antibacterial agents with new modes of action. One of the best ways of finding these is to identify new essential bacterial enzymes to target. The advent of a number of in silico tools has aided classical methods of discovering new antibacterial targets, and these programs are the subject of this review. Many of these tools apply a cheminformatic approach, utilizing the structural information of either ligand or protein, chemogenomic databases, and docking algorithms to identify putative antibacterial targets. Considering the wealth of potential drug targets identified from genomic research, these approaches are perfectly placed to mine this rich resource and complement drug discovery programs. PMID:24974974

McPhillie, Martin J; Cain, Ricky M; Narramore, Sarah; Fishwick, Colin W G; Simmons, Katie J

2015-01-01

170

Computational Simulations and the Scientific Method

NASA Technical Reports Server (NTRS)

As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

Kleb, Bil; Wood, Bill

2005-01-01

171

Methods for operating parallel computing systems employing sequenced communications

A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes with the computing system. 15 figs.

Benner, R.E.; Gustafson, J.L.; Montry, G.R.

1999-08-10

172

Convergence acceleration of the Proteus computer code with multigrid methods

NASA Technical Reports Server (NTRS)

Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.

Demuren, A. O.; Ibraheem, S. O.

1992-01-01

173

Relationship between phase and energy methods for disparity computation.

The phase and energy methods for computing binocular disparity maps from stereograms are motivated differently, have different physiological relevances, and involve different computational steps. Nevertheless, we demonstrate that at the final stages where disparity values are made explicit, the simplest versions of the two methods are exactly equivalent. The equivalence also holds when the quadrature-pair construction in the energy method is replaced with a more physiologically plausible phase-averaging step. The equivalence fails, however, when the phase-difference receptive field model is replaced by the position-shift model. Additionally, intermediate results from the two methods are always quite distinct. In particular, the energy method generates a distributed disparity representation similar to that found in the visual cortex, while the phase method does not. Finally, more elaborate versions of the two methods are in general not equivalent. We also briefly compare these two methods with some other stereo models in the literature. PMID:10636942

Qian, N; Mikaelian, S

2000-02-01
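In its simplest form, the phase method reads disparity off the phase difference of complex Gabor responses to the left and right images: d is approximately (phi_left - phi_right) / omega at the filter's center frequency omega. Below is a minimal 1-D sketch on toy sinusoidal signals, not the paper's receptive-field models:

```python
import numpy as np

def phase_disparity(left, right, omega, sigma, x0):
    """Phase-method estimate: disparity = (phi_left - phi_right) / omega,
    with phases taken from complex Gabor responses centered at x0.
    Valid while the true shift stays below pi/omega (no phase wrap)."""
    x = np.arange(len(left), dtype=float)
    envelope = np.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2))
    gabor = envelope * np.exp(1j * omega * (x - x0))
    r_left = np.sum(left * np.conj(gabor))
    r_right = np.sum(right * np.conj(gabor))
    return np.angle(r_left * np.conj(r_right)) / omega

x = np.arange(128, dtype=float)
omega, true_d = 0.5, 2.0
left = np.cos(omega * x + 0.3)
right = np.cos(omega * (x - true_d) + 0.3)    # right view shifted by true_d
d_hat = phase_disparity(left, right, omega, sigma=8.0, x0=64.0)
```

For a narrowband signal the estimate is essentially exact; for natural images the same quantity is pooled over many filters, which is where the two methods compared in the abstract begin to diverge.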

174

Optimal Joint Multiple Resource Allocation Method for Cloud Computing Environments

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources. To provide cloud computing services economically, it is important to optimize resource allocation under the assumption that the required resource can be taken from a shared resource pool. In addition, to be able to provide processing ability and storage capacity, it is necessary to allocate bandwidth to access them at the same time. This paper proposes an optimal resource allocation method for cloud computing environments. First, this paper develops a resource allocation model of cloud computing environments, assuming both processing ability and bandwidth are allocated simultaneously to each service request and rented out on an hourly basis. The allocated resources are dedicated to each service request. Next, this paper proposes an optimal joint multiple resource allocation method, based on the above resource allocation model. It is demonstrated by simulation evaluation that the p...

Kuribayashi, Shin-ichi

2011-01-01

175

Computational methods for some non-linear wave equations

Computational methods based on a linearized implicit scheme and a predictor-corrector method are proposed for the solution of the Kadomtsev–Petviashvili (KP) equation and its generalized form (GKP). The methods developed for the KP equation are applied with minor modifications to the generalized case. An important advantage to be gained from the use of the linearized implicit method over the predictor-corrector

Q. Cao; K. Djidjeli; W. G. Price; E. H. Twizell

1999-01-01

176

Computational methods for some non-linear wave equations

Computational methods based on a linearized implicit scheme and a predictor-corrector method are proposed for the solution of the Kadomtsev-Petviashvili (KP) equation and its generalized form (GKP). The methods developed for the KP equation are applied with minor modifications to the generalized case. An important advantage to be gained from the use of the linearized implicit method over the predictor-corrector

Q. CAO; K. DJIDJELI; W. G. PRICE; E. H. TWIZELL

1999-01-01

177

GRACE: Public Health Recovery Methods following an Environmental Disaster

Different approaches are necessary when Community Based Participatory Research (CBPR) of environmental illness is initiated after an environmental disaster within a community. Often such events are viewed as golden scientific opportunities to do epidemiological studies. However, we believe that in such circumstances, community engagement and empowerment needs to be integrated into the public health service efforts in order for both those and any science to be successful, with special care being taken to address the immediate health needs of the community first rather than the pressing needs to answer important scientific questions. We will demonstrate how we have simultaneously provided valuable public health service, embedded generalizable scientific knowledge, and built a successful foundation for supplemental CBPR through our on-going recovery work after the chlorine gas disaster in Graniteville, South Carolina. PMID:20439226

Svendsen, ER; Whittle, N; Wright, L; McKeown, RE; Sprayberry, K; Heim, M; Caldwell, R; Gibson, JJ; Vena, J.

2014-01-01

178

AN ALGEBRAIC METHOD FOR PUBLIC-KEY CRYPTOGRAPHY

Algebraic key establishment protocols based on the difficulty of solving equations over algebraic structures are described as a theoretical basis for constructing public-key cryptosystems. A protocol is a multi-party algorithm, defined by a sequence of steps, specifying the actions required of two or more parties in order to achieve a specified objective. Furthermore, a key establishment protocol is

Iris Anshel; Michael Anshel; Dorian Goldfeld

1999-01-01

179

Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

2013-01-01

181

Monte Carlo Methods: A Computational Pattern for Our Pattern Language

Monte Carlo Methods: A Computational Pattern for Our Pattern Language. Jike Chong and Kurt Keutzer, University of California, Berkeley. This paper presents the Monte Carlo Methods software programming pattern
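The Monte Carlo pattern named in this record can be illustrated with the textbook example of estimating pi by random sampling; this is a generic instance of the pattern, not code from the paper.

```python
# Generic Monte Carlo pattern: estimate a quantity by averaging over
# random samples. Here, the fraction of random points in the unit
# square that fall inside the quarter circle estimates pi/4.
import random

def monte_carlo_pi(n, seed=0):
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

estimate = monte_carlo_pi(100_000)  # close to 3.14159 for large n
```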

California at Berkeley, University of

182

Efficient Computation of Truncated Power Direct Approach versus Newton's Method

(explicit) computation. Keywords: implicit functions; computation of Taylor polynomials; Newton-Raphson method. (Figure 1 shows a graphical representation of the Newton-Raphson method starting at x0.) It turns out that this scheme converges locally (i.e. if x0 is near enough
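A minimal sketch of the Newton-Raphson iteration this record discusses, in generic root-finding form (the paper's application to truncated power series is not reproduced). The local-convergence caveat means x0 must start near enough to the root.

```python
# Newton-Raphson iteration: x <- x - f(x)/f'(x), converging locally
# (quadratically) when the starting point x0 is near the root.

def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Square root of 2 as the root of f(x) = x^2 - 2, starting nearby.
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
```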

Koepf, Wolfram

183

Linear and nonlinear methods for brain-computer interfaces

At the recent Second International Meeting on Brain-Computer Interfaces (BCIs) held in June 2002 in Rensselaerville, NY, a formal debate was held on the pros and cons of linear and nonlinear methods in BCI research. Specific examples applying EEG data sets to linear and nonlinear methods are given and an overview of the various pros and cons of each approach

Klaus-Robert Müller; Charles W. Anderson; Gary E. Birch

2003-01-01

184

Solution-adaptive finite element method in computational fracture mechanics

NASA Technical Reports Server (NTRS)

Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

Min, J. B.; Bass, J. M.; Spradley, L. W.

1993-01-01

185

Method for implementation of recursive hierarchical segmentation on parallel computers

NASA Technical Reports Server (NTRS)

A method, computer readable storage, and apparatus for implementing a recursive hierarchical segmentation algorithm on a parallel computing platform. The method includes setting a bottom level of recursion that defines where a recursive division of an image into sections stops dividing, and setting an intermediate level of recursion where the recursive division changes from a parallel implementation into a serial implementation. The segmentation algorithm is implemented according to the set levels. The method can also include setting a convergence check level of recursion with which the first level of recursion communicates with when performing a convergence check.

Tilton, James C. (Inventor)

2005-01-01

186

Methods and systems for providing reconfigurable and recoverable computing resources

NASA Technical Reports Server (NTRS)

A method for optimizing the use of digital computing resources to achieve reliability and availability of the computing resources is disclosed. The method comprises providing one or more processors with a recovery mechanism, the one or more processors executing one or more applications. A determination is made whether the one or more processors needs to be reconfigured. A rapid recovery is employed to reconfigure the one or more processors when needed. A computing system that provides reconfigurable and recoverable computing resources is also disclosed. The system comprises one or more processors with a recovery mechanism, with the one or more processors configured to execute a first application, and an additional processor configured to execute a second application different than the first application. The additional processor is reconfigurable with rapid recovery such that the additional processor can execute the first application when one of the one more processors fails.

Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

2010-01-01

187

Proposed congestion control method for cloud computing environments

As cloud computing services rapidly expand their customer base, it has become important to share cloud resources, so as to provide them economically. In cloud computing services, multiple types of resources, such as processing ability, bandwidth and storage, need to be allocated simultaneously. If there is a surge of requests, a competition will arise between these requests for the use of cloud resources. This leads to the disruption of the service and it is necessary to consider a measure to avoid or relieve congestion of cloud computing environments. This paper proposes a new congestion control method for cloud computing environments which reduces the size of required resource for congested resource type instead of restricting all service requests as in the existing networks. Next, this paper proposes the user service specifications for the proposed congestion control method, and clarifies the algorithm to decide the optimal size of required resource to be reduced, based on the load offered to the system. I...

Kuribayashi, Shin-ichi

2012-01-01

188

Robust regression methods for computer vision: A review

Regression analysis (fitting a model to noisy data) is a basic technique in computer vision. Robust regression methods that remain reliable in the presence of various types of noise are therefore of considerable importance. We review several robust estimation techniques and describe in detail the least-median-of-squares (LMedS) method. The method yields the correct result even when half of the data
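The least-median-of-squares idea summarized above can be sketched as follows: fit lines to random two-point subsets and keep the candidate whose median squared residual is smallest, which tolerates up to half the data being outliers. The trial count and seed are illustrative assumptions.

```python
# LMedS sketch for 2-D line fitting: minimize the MEDIAN of squared
# residuals over candidate lines drawn from random minimal subsets,
# instead of the SUM as in least squares.
import random
import statistics

def lmeds_line(points, trials=200, seed=1):
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                       # skip vertical candidates
        a = (y2 - y1) / (x2 - x1)          # slope from 2-point sample
        b = y1 - a * x1                    # intercept
        med = statistics.median((y - (a * x + b)) ** 2 for x, y in points)
        if med < best_med:
            best_med, best = med, (a, b)
    return best

# Ten inliers on y = 2x + 1 plus two gross outliers; the median
# criterion ignores the outliers and recovers the line.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -30)]
a, b = lmeds_line(pts)
```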

Peter Meer; Doron Mintz; Azriel Rosenfeld; Dong Yoon Kim

1991-01-01

189

Extrapolation methods for accelerating PageRank computations

We present a novel algorithm for the fast computation of PageRank, a hyperlink-based estimate of the ''importance'' of Web pages. The original PageRank algorithm uses the Power Method to compute successive iterates that converge to the principal eigenvector of the Markov matrix representing the Web link graph. The algorithm presented here, called Quadratic Extrapolation, accelerates the convergence of the Power
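For context, the baseline Power Method iteration that Quadratic Extrapolation accelerates can be sketched as below. The damping factor and the tiny link graph are illustrative assumptions, and the extrapolation step itself is not shown.

```python
# Power Method for PageRank: repeatedly push rank along links (with
# damping) until the vector converges to the principal eigenvector of
# the Markov matrix representing the link graph.

def pagerank_power(links, n, damping=0.85, iters=100):
    """links: dict node -> list of out-neighbors; returns rank vector."""
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for node, outs in links.items():
            if outs:
                share = damping * rank[node] / len(outs)
                for dest in outs:
                    new[dest] += share
            else:  # dangling node: spread its rank uniformly
                for dest in range(n):
                    new[dest] += damping * rank[node] / n
        rank = new
    return rank

# Made-up 3-page graph: 0 -> 1,2; 1 -> 2; 2 -> 0.
rank = pagerank_power({0: [1, 2], 1: [2], 2: [0]}, n=3)
```

Quadratic Extrapolation periodically replaces the current iterate with a combination of recent iterates to cancel slow-decaying components; the plain loop above is the iteration it speeds up.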

Sepandar D. Kamvar; Taher H. Haveliwala; Christopher D. Manning; Gene H. Golub

2003-01-01

190

Computational Methods for Protein Structure Prediction and Fold Recognition

Amino acid sequence analysis provides important insight into the structure of proteins, which in turn greatly facilitates the understanding of their biochemical and cellular function. Efforts to use computational methods in predicting protein structure based only on sequence information started 30 years ago (Nagano 1973; Chou and Fasman 1974). However, only during the last decade has the introduction of new computational techniques

Iwona Cymerman; Marcin Feder; Marcin Pawłowski; Michal Kurowski; Janusz Bujnicki

191

Computation of Trailing Edge Noise with a Discontinuous Galerkin Method

Trailing edge noise of a semi-infinite, thin, flat plate situated in low Mach number flow is computed in two spatial dimensions. The Acoustic Perturbation Equations (APE), which are employed as governing equations, are discretized via a Discontinuous Galerkin Method (DGM). Results are compared with theory and Finite Difference (FD) computations. In addition to the radiated sound field, special attention is paid

M. Bauer

192

Fully consistent CFD methods for incompressible flow computations

NASA Astrophysics Data System (ADS)

Nowadays collocated grid based CFD methods are among the most efficient tools for computing flows past wind turbines. To ensure robustness, these methods require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes ensure the pressure-velocity coupling on collocated grids with the so-called momentum interpolation method of Rhie and Chow [1]. As is known, the method and some of its widely spread modifications result in solutions that depend on the time step at convergence. In this paper the magnitude of this dependence is shown to contribute about 0.5% to the total error in a typical turbulent flow computation. Nevertheless, if coarse grids are used, the standard interpolation methods result in much more pronounced non-consistent behavior. To overcome the problem, a recently developed interpolation method, which is independent of time step, is used. It is shown that, in comparison to another time-step-independent method, the method may enhance the convergence rate of the SIMPLEC algorithm by up to 25%. The method is verified using turbulent flow computations around a NACA 64618 airfoil and the roll-up of a shear layer, which may appear in a wind turbine wake.

Kolmogorov, D. K.; Shen, W. Z.; Sørensen, N. N.; Sørensen, J. N.

2014-06-01

193

Data analysis through interactive computer animation method (DATICAM)

DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process.

Curtis, J.N.; Schwieder, D.H.

1983-01-01

194

Reconnection methods for an arbitrary polyhedral computational grid

The paper suggests a method for local reconstructions of a 3D irregular computational grid and the algorithm of its program implementation. Two grid reconstruction operations are used as basic: paste of two cells having a common face and cut of a certain cell into two by a given plane. This paper presents criteria to use one or another operation, the criteria are analyzed. A program for local reconstruction of a 3D irregular grid is used to conduct two test computations and the computed results are given.

Rasskazova, V.V.; Sofronov, I.D.; Shaporenko, A.N. [Russian Federal Nuclear Center (Russian Federation); Burton, D.E.; Miller, D.S. [Lawrence Livermore National Lab., CA (United States)

1996-08-01

195

[Lot quality assurance sampling: methods and applications in public health].

Lot Quality Assurance Sampling (LQAS), developed to meet industrial quality control needs, has been applied to health surveys. The WHO used this method to assess immunization coverage. The sampling strategy was developed to classify lots as acceptable or unacceptable. Lot sampling is a simple, rapid and efficient procedure for quality assurance. Under certain conditions, efficiency can be improved with double sampling. We describe the method and its theoretical basis and illustrate applications of LQAS in epidemiological surveillance and quality control of medical records. The advantages and disadvantages of this method are presented. PMID:11011306
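A hedged sketch of an LQAS-style decision rule: sample n units and accept the lot only if the number of "defects" (e.g. unvaccinated children) stays at or below a binomial-derived threshold. The sample size, acceptable-quality level, and confidence below are made-up illustrations, not values from the paper.

```python
# LQAS classification sketch: choose a decision threshold d from the
# binomial distribution so that a lot at the acceptable quality level
# is accepted with at least the stated confidence.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def decision_threshold(n, p_acceptable, confidence=0.95):
    """Smallest d with P(defects <= d) >= confidence for a good lot."""
    for d in range(n + 1):
        if binom_cdf(d, n, p_acceptable) >= confidence:
            return d
    return n

def accept_lot(defects_found, n, p_acceptable, confidence=0.95):
    return defects_found <= decision_threshold(n, p_acceptable, confidence)

# Illustrative numbers: sample 19 units; an acceptable lot has at most
# 20% defective; 3 observed defects is below the threshold, so accept.
ok = accept_lot(defects_found=3, n=19, p_acceptable=0.20)
```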

Jutand, M; Salamon, R

2000-08-01

196

Research Support in Hungary: machine scheduling, LED public lighting, microsimulation in public, Industrial Innovation Problems (2011).

Balázs, Bánhelyi

197

Computational methods for multi-domain geophysical flows

NASA Astrophysics Data System (ADS)

Approximate methods for multi-domain computation of incompressible flows with a free surface are developed and tested. A semi-implicit model is constructed and applied to the computation of surface waves and gravity currents. The model includes an adaptation of the constrained interpolation method to staggered grids. A new averaging scheme is introduced for the conversion of corner point velocities to cell-centered values, and a weighting scheme is proposed to account for large variations of the advection velocity. The concept of adjustable hydrostatics is introduced and is shown to produce satisfactory results in nearly-hydrostatic flows providing significant computational savings. Although the solution of the pressure-Poisson equation is not exact, a conservative velocity field is obtained. This is a powerful tool in cases where computational speed is a factor and the flow conditions allow the associated approximation. Two new methods are developed for multi-domain computations. First, an explicit interface is developed based on sequential regularization. An initial-value problem is solved at the interface between computational domains, so no information needs to be exchanged in the form of boundary conditions between the subdomains. The second interface is based on a Receding Boundary Method. By continuously moving the overlapping boundaries between subdomains, the method is able to reduce the errors introduced by the use of an open boundary condition at the common boundary. This eliminates the need for communication during each time step, and allows for an independent solution in the two subdomains. The results are accurate and the communications can be reduced by a factor of 5-10 times. The technique is extended to three space dimensions and an analysis of its behavior is conducted by numerical experimentation.

Kao, Kuo-Cheng

198

NASA Astrophysics Data System (ADS)

Computational methods are increasingly important to 21st century research and education; bioinformatics and climate change are just two examples of this trend. In this context computer scientists play an important role, facilitating the development and use of the methods and tools used to support computationally-based approaches. The undergraduate curriculum in computer science is one place where computational tools and methods can be introduced to facilitate the development of appropriately prepared computer scientists. To facilitate the evolution of the pedagogy, this dissertation identifies, develops, and organizes curriculum materials, software laboratories, and the reference design for an inexpensive portable cluster computer, all of which are specifically designed to support the teaching of computational methods to undergraduate computer science students. Keywords. computational science, computational thinking, computer science, undergraduate curriculum.

Peck, Charles Franklin

199

Computational Methods for Dynamic Stability and Control Derivatives

NASA Technical Reports Server (NTRS)

Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

2004-01-01

200

Computational Methods for Dynamic Stability and Control Derivatives

NASA Technical Reports Server (NTRS)

Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

2003-01-01

201

A fast semidirect method for computing transonic aerodynamic flows

NASA Technical Reports Server (NTRS)

A fast, semidirect, iterative computational method, previously introduced for finite-difference solution of subsonic and slightly supercritical flow over airfoils, is extended both to apply to strongly supercritical conditions and to include full second-order accuracy in computing inviscid flows over airfoils. The nonlinear small-disturbance equations are solved iteratively by a direct, linear, elliptic solver. General, fully conservative, type-dependent difference equations are formulated, including parabolic- and shock-point transition operators that provide consistency with the integral conservation laws. These equations specialize to either first-order or to fully second-order-accurate equations. Various free parameters are evaluated for rapid convergence of the first-order scheme. Resulting pressure distributions and computing times are compared with the improved Murman-Cole line-relaxation method.

Martin, E. D.

1975-01-01

202

Computer controlled fluorometer device and method of operating same

A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.

Kolber, Z.; Falkowski, P.

1990-07-17

203

Computational Methods for CLIP-seq Data Processing.

RNA-binding proteins (RBPs) are at the core of post-transcriptional regulation and thus of gene expression control at the RNA level. One of the principal challenges in the field of gene expression regulation is to understand the mechanism of action of RBPs. As a result of the recent evolution of experimental techniques, it is now possible to obtain the RNA regions recognized by RBPs on a transcriptome-wide scale. In fact, CLIP-seq protocols use the joint action of CLIP (crosslinking immunoprecipitation) and high-throughput sequencing to recover the transcriptome-wide set of interaction regions for a particular protein. Nevertheless, computational methods are necessary to process CLIP-seq experimental data and are a key to advancement in the understanding of gene regulatory mechanisms. Considering the importance of computational methods in this area, we present a review of the current status of computational approaches used and proposed for CLIP-seq data. PMID:25336930

Reyes-Herrera, Paula H; Ficarra, Elisa

2014-01-01

204

Computer controlled fluorometer device and method of operating same

A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.

Kolber, Zbigniew (Shoreham, NY); Falkowski, Paul (Stony Brook, NY)

1990-01-01

205

Method and system for environmentally adaptive fault tolerant computing

NASA Technical Reports Server (NTRS)

A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

2010-01-01

206

Computational methods for planning and evaluating geothermal energy projects

In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources.Two characteristic problems are considered: (1)the economic evaluation

M. G. Goumas; V. A. Lygerou; L. E. Papayannakis

1999-01-01

207

Computer-Assisted Deformity Correction Using the Ilizarov Method

The Taylor spatial frame is a fixation device used to implement the Ilizarov method of bone deformity correction to gradually distract an osteotomized bone at regular intervals, according to a prescribed schedule. We improve the accuracy of Ilizarov's method of osteogenesis by preoperatively planning the correction, intraoperatively measuring the location of the frame relative to the patient, and computing the

Amber L. Simpson; Burton Ma; Dan P. Borschneck; Randy E. Ellis

2005-01-01

208

A higher order iterative method for computing the Drazin inverse.

A method with high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method could be used for finding the Drazin inverse. The application of the scheme on large sparse test matrices alongside the use in preconditioning of linear system of equations will be presented to clarify the contribution of the paper. PMID:24222747
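For orientation, the classical quadratically convergent Newton-Schulz iteration for an approximate inverse, X_{k+1} = X_k(2I - A X_k), is sketched below; the paper's higher-order scheme and its Drazin-inverse extension are not reproduced here, only the baseline this family of methods builds on.

```python
# Newton-Schulz iteration for an approximate matrix inverse, using
# plain nested lists. The starting guess X0 = A^T / (||A||_1 ||A||_inf)
# is a standard choice that guarantees convergence.

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def newton_schulz_inverse(A, iters=30):
    n = len(A)
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(v) for v in row) for row in A)
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AX = matmul(A, X)
        R = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]       # R = 2I - A X
        X = matmul(X, R)              # X <- X (2I - A X)
    return X

# inverse of [[4,1],[2,3]] is [[0.3,-0.1],[-0.2,0.4]] (det = 10)
Xinv = newton_schulz_inverse([[4.0, 1.0], [2.0, 3.0]])
```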

Soleymani, F; Stanimirović, Predrag S

2013-01-01

209

Computational Methods for Structural Mechanics and Dynamics, part 1

NASA Technical Reports Server (NTRS)

The structural analysis methods research has several goals. One goal is to develop analysis methods that are general. This goal of generality leads naturally to finite-element methods, but the research will also include other structural analysis methods. Another goal is that the methods be amenable to error analysis; that is, given a physical problem and a mathematical model of that problem, an analyst would like to know the probable error in predicting a given response quantity. The ultimate objective is to specify the error tolerances and to use automated logic to adjust the mathematical model or solution strategy to obtain that accuracy. A third goal is to develop structural analysis methods that can exploit parallel processing computers. The structural analysis methods research will focus initially on three types of problems: local/global nonlinear stress analysis, nonlinear transient dynamics, and tire modeling.

Stroud, W. Jefferson (editor); Housner, Jerrold M. (editor); Tanner, John A. (editor); Hayduk, Robert J. (editor)

1989-01-01

210

European Community on Computational Methods in Applied Sciences

Leading specialists in the analysis and design of complex engineering structures and systems, coming from aerospace, civil and mechanical engineering, and material science, and in the design and analysis of numerical methods. Germany. Information Booklet & Book of Abstracts. Multi-scale Computational Methods for Solids

211

Computational methods for calculating geometric parameters of tectonic plates

Present day and ancient plate tectonic configurations can be modelled in terms of non-overlapping polygonal regions, separated by plate boundaries, on the unit sphere. The computational methods described in this article allow an evaluation of the area and the inertial tensor components of a polygonal region on the unit sphere, as well as an estimation of the associated errors. These
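One standard way to evaluate the area of a polygonal region on the unit sphere, as described above, is to fan-triangulate the polygon and sum each triangle's spherical excess via L'Huilier's theorem. This sketch assumes a convex polygon and omits the paper's error estimation and inertia-tensor computations.

```python
# Area of a convex spherical polygon (vertices as unit vectors) by
# summing triangle areas; a spherical triangle's area equals its
# spherical excess, computed here with L'Huilier's theorem.
from math import acos, atan, sqrt, tan, pi

def arc(u, v):
    """Great-circle distance between unit vectors u and v."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return acos(dot)

def triangle_excess(A, B, C):
    a, b, c = arc(B, C), arc(A, C), arc(A, B)
    s = (a + b + c) / 2.0
    t = tan(s / 2) * tan((s - a) / 2) * tan((s - b) / 2) * tan((s - c) / 2)
    return 4.0 * atan(sqrt(max(0.0, t)))   # L'Huilier's theorem

def spherical_polygon_area(vertices):
    return sum(triangle_excess(vertices[0], vertices[i], vertices[i + 1])
               for i in range(1, len(vertices) - 1))

# One octant of the unit sphere has area 4*pi/8 = pi/2.
area = spherical_polygon_area([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```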

Antonio Schettino

1999-01-01

212

COMPUTATIONAL METHODS FOR LEAST SQUARES PROBLEMS AND CLINICAL TRIALS

A dissertation on computational methods for least squares problems and clinical trials (rather than estimating the sensitivity of the data). Clinical trials generate much incomplete data. In the second part of the thesis we study clinical trials with time-to-event endpoints, in which the most

Stanford University

213

EQUILIBRIUM AND NONEQUILIBRIUM FOUNDATIONS OF FREE ENERGY COMPUTATIONAL METHODS

Statistical mechanics provides a rigorous framework for the numerical estimation of free energy differences in complex systems such as biomolecules. This paper presents a brief review of the statistical mechanical identities underlying a number of techniques for computing free energy differences. Both equilibrium and nonequilibrium methods are covered.

C. JARZYNSKI

2001-03-01

214

Convergence acceleration of the Proteus computer code with multigrid methods

NASA Technical Reports Server (NTRS)

This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
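The multigrid concept discussed in the report can be illustrated with a textbook 1-D V-cycle for the Poisson equation. This is a generic sketch, not the Proteus implementation; weighted Jacobi smoothing, full-weighting restriction, and linear interpolation are standard choices assumed here.

```python
# Geometric multigrid V-cycle for -u'' = f on (0,1) with zero boundary
# values, discretized on n = 2^k - 1 interior points with spacing h.

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2 * u[i] - left - right) / h**2
    return r

def jacobi(u, f, h, sweeps=3, w=2 / 3):
    for _ in range(sweeps):               # weighted Jacobi smoothing
        r = residual(u, f, h)
        u = [u[i] + w * r[i] * h**2 / 2 for i in range(len(u))]
    return u

def v_cycle(u, f, h):
    n = len(u)
    if n == 1:                            # coarsest grid: solve exactly
        return [f[0] * h**2 / 2]
    u = jacobi(u, f, h)                   # pre-smooth
    r = residual(u, f, h)
    rc = [(r[2 * i] + 2 * r[2 * i + 1] + r[2 * i + 2]) / 4  # restrict
          for i in range((n - 1) // 2)]
    ec = v_cycle([0.0] * len(rc), rc, 2 * h)
    e = [0.0] * n                         # interpolate the correction
    for i, v in enumerate(ec):
        e[2 * i + 1] += v
        e[2 * i] += v / 2
        e[2 * i + 2] += v / 2
    u = [u[i] + e[i] for i in range(n)]
    return jacobi(u, f, h)                # post-smooth

# Solve -u'' = 1: the exact solution x(1-x)/2 is quadratic, so the
# 3-point stencil reproduces it exactly at the grid points.
n, h = 15, 1.0 / 16
f = [1.0] * n
u = [0.0] * n
for _ in range(20):
    u = v_cycle(u, f, h)
exact = [((i + 1) * h) * (1 - (i + 1) * h) / 2 for i in range(n)]
```

Each V-cycle reduces the error by a roughly grid-independent factor, which is the source of the iteration-count savings the report measures.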

Demuren, A. O.; Ibraheem, S. O.

1995-01-01

215

A discrete ordinate response matrix method for massively parallel computers

A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of operations. The algorithm utilizes massive parallelism by

U. R. Hanebutte; E. E. Lewis

1991-01-01

216

Computational methods for the prediction of protein interactions

Establishing protein interaction networks is crucial for understanding cellular operations. Detailed knowledge of the ‘interactome’, the full network of protein–protein interactions, in model cellular systems should provide new insights into the structure and properties of these systems. Parallel to the first massive application of experimental techniques to the determination of protein interaction networks and protein complexes, the first computational methods,

Alfonso Valencia; Florencio Pazos

2002-01-01

217

Computer analysis of dielectric waveguides - A finite-difference method

The present numerical computation program for the modes of dielectric guiding structures has its basis in finite differences, and is applicable to a wide range of problems. Attention is given to solutions for circular and rectangular dielectric waveguides, which are compared to those obtained by other methods, and the limitations of the commonly used approximate formulas developed by Marcatili (1969)
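The finite-difference mode computation described in the abstract can be illustrated on the simplest guiding structure. The sketch below, with hypothetical indices and geometry, discretizes the 1-D Helmholtz operator for TE modes of a symmetric slab and reads the fundamental effective index off the eigenvalues.

```python
import numpy as np

# Finite-difference mode solver for a symmetric slab waveguide (TE modes).
# A minimal sketch of the approach; the geometry and refractive indices are
# hypothetical, not the structures analyzed in the paper.
wavelength = 1.0                        # free-space wavelength (um)
k0 = 2.0 * np.pi / wavelength
N, L = 800, 12.0                        # grid points, computational window (um)
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
n = np.where(np.abs(x) < 0.5, 1.5, 1.0)   # core of half-width 0.5 um in air

# Discretize d2E/dx2 + k0^2 n(x)^2 E = beta^2 E with zero boundary conditions.
main = -2.0 / h**2 + (k0 * n) ** 2
off = np.ones(N - 1) / h**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
beta2 = np.linalg.eigvalsh(H)           # ascending eigenvalues
neff = np.sqrt(beta2[-1]) / k0          # effective index of the fundamental mode
```

Guided modes satisfy n_cladding < neff < n_core, so the computed fundamental index should land between 1.0 and 1.5; rectangular guides lead to the same eigenproblem in two transverse dimensions.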

E. Schweig; W. B. Bridges

1984-01-01

218

Publicity and public relations

NASA Technical Reports Server (NTRS)

This paper addresses approaches to using publicity and public relations to meet the goals of the NASA Space Grant College. Methods universities and colleges can use to publicize space activities are presented.

Fosha, Charles E.

1990-01-01

219

Research Ethics Resources at Stanford University (department, type, resource): CS, class, CS181 Computers, Ethics, and Public Policy (Margaret Johnson); Law, class, LAW288 Governance and Ethics: Anti-corruption Law, Compliance, and Enforcement; MS&E, class, MS&E197 Ethics and Public Policy (also PubPol103B, STS110); OSP, BER45.

220

This bibliography provides a list of references on finite element and related methods analysis in reactor physics computations. These references have been published in scientific journals, conference proceedings, technical reports, thesis/dissertations and as chapters in reference books from 1971 to the present. Both English and non-English references are included. All references contained in the bibliography are sorted alphabetically by the first author's name, with a subsort by date of publication. The majority of the references relate to reactor physics analysis using the finite element method. Related topics include the boundary element method, the boundary integral method, and the global element method. All aspects of reactor physics computations relating to these methods are included: diffusion theory, deterministic radiation and neutron transport theory, kinetics, fusion research, particle tracking in finite element grids, and applications. For user convenience, many of the listed references have been categorized. The list of references is not all-inclusive. In general, nodal methods were purposely excluded, although a few references do demonstrate characteristics of finite element methodology using nodal methods (usually as a non-conforming element basis). This area could be expanded. The author is aware of several other references (conferences, thesis/dissertations, etc.) that could not be independently tracked using available resources and thus were not included in this listing.

Carpenter, D.C.

1998-01-01

221

Computation of Pressurized Gas Bearings Using CE/SE Method

NASA Technical Reports Server (NTRS)

The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for solving such problems.

Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

2003-01-01

222

The spectral-element method, Beowulf computing, and global seismology.

The propagation of seismic waves through Earth can now be modeled accurately with the recently developed spectral-element method. This method takes into account heterogeneity in Earth models, such as three-dimensional variations of seismic wave velocity, density, and crustal thickness. The method is implemented on relatively inexpensive clusters of personal computers, so-called Beowulf machines. This combination of hardware and software enables us to simulate broadband seismograms without intrinsic restrictions on the level of heterogeneity or the frequency content. PMID:12459579

Komatitsch, Dimitri; Ritsema, Jeroen; Tromp, Jeroen

2002-11-29

223

An effective method for computing the noise in biochemical networks

NASA Astrophysics Data System (ADS)

We present a simple yet effective method, which is based on power series expansion, for computing exact binomial moments that can be in turn used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as the ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, the exact formulae for computing the intensities of noise in the species of interest or steady-state distributions are analytically given. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect and that the multi-OFF (or ON) mechanism always attenuates the noise in contrast to the common ON-OFF mechanism and can modulate the noise to the lowest level independently of the mRNA mean. Except for its power in deriving analytical expressions for distributions and noise, our method is programmable and has apparent advantages in reducing computational cost.
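An independent sanity check on such noise results is to simulate the ON-OFF (telegraph) model directly with Gillespie's stochastic simulation algorithm. Note this is a brute-force alternative, not the power-series/binomial-moment method of the paper, and the rate constants below are hypothetical.

```python
import numpy as np

# SSA for the telegraph model: gene switches ON/OFF (k_on, k_off), transcribes
# mRNA at rate mu while ON, mRNA degrades at rate delta. Rates are hypothetical.
k_on, k_off, mu, delta = 0.1, 0.9, 10.0, 1.0
rng = np.random.default_rng(1)

t, T = 0.0, 20000.0
on, m = 0, 0
t_sum = m_sum = m2_sum = 0.0
while t < T:
    a = [k_on * (1 - on), k_off * on, mu * on, delta * m]   # propensities
    a0 = sum(a)
    dt = rng.exponential(1.0 / a0)
    # accumulate time-weighted moments of the mRNA copy number
    t_sum += dt; m_sum += m * dt; m2_sum += m * m * dt
    r = rng.uniform(0, a0)
    if r < a[0]:
        on = 1
    elif r < a[0] + a[1]:
        on = 0
    elif r < a[0] + a[1] + a[2]:
        m += 1
    else:
        m -= 1
    t += dt

mean = m_sum / t_sum
fano = (m2_sum / t_sum - mean ** 2) / mean
# Textbook telegraph-model values for these rates: mean = mu*k_on/((k_on+k_off)*delta) = 1.0
# and Fano = 1 + mu*k_off/((k_on+k_off)*(k_on+k_off+delta)) = 5.5 (super-Poissonian).
```

The simulated Fano factor should come out well above 1, reflecting the transcriptional bursting that the multi-OFF mechanisms discussed in the abstract can attenuate.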

Zhang, Jiajun; Nie, Qing; He, Miao; Zhou, Tianshou

2013-02-01

224

An effective method for computing the noise in biochemical networks.

We present a simple yet effective method, which is based on power series expansion, for computing exact binomial moments that can be in turn used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as the ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, the exact formulae for computing the intensities of noise in the species of interest or steady-state distributions are analytically given. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect and that the multi-OFF (or ON) mechanism always attenuates the noise in contrast to the common ON-OFF mechanism and can modulate the noise to the lowest level independently of the mRNA mean. Except for its power in deriving analytical expressions for distributions and noise, our method is programmable and has apparent advantages in reducing computational cost. PMID:23464139

Zhang, Jiajun; Nie, Qing; He, Miao; Zhou, Tianshou

2013-02-28

225

An effective method for computing the noise in biochemical networks

We present a simple yet effective method, which is based on power series expansion, for computing exact binomial moments that can be in turn used to compute steady-state probability distributions as well as the noise in linear or nonlinear biochemical reaction networks. When the method is applied to representative reaction networks such as the ON-OFF models of gene expression, gene models of promoter progression, gene auto-regulatory models, and common signaling motifs, the exact formulae for computing the intensities of noise in the species of interest or steady-state distributions are analytically given. Interestingly, we find that positive (negative) feedback does not enlarge (reduce) noise as claimed in previous works but has a counter-intuitive effect and that the multi-OFF (or ON) mechanism always attenuates the noise in contrast to the common ON-OFF mechanism and can modulate the noise to the lowest level independently of the mRNA mean. Except for its power in deriving analytical expressions for distributions and noise, our method is programmable and has apparent advantages in reducing computational cost. PMID:23464139

Zhang, Jiajun; Nie, Qing; He, Miao; Zhou, Tianshou

2013-01-01

226

Design and Analysis of Computational Methods for Structural Acoustics

NASA Astrophysics Data System (ADS)

The application of finite element methods to problems in structural acoustics (the vibration of an elastic structure coupled to an acoustic medium) is considered. New methods are developed which yield dramatic improvement in accuracy over the standard Galerkin finite element approach. The goal of the new methods is to decrease the computational burden required to achieve a desired accuracy level at a particular frequency thereby enabling larger scale, higher frequency computations for a given platform. A new class of finite element methods, Galerkin Generalized Least-Squares (GGLS) methods, are developed and applied to model the in vacuo and fluid-loaded vibration response of Reissner-Mindlin plates. Through judicious selection of the design parameters inherent to GGLS methods, this formulation provides a consistent framework for enhancing the accuracy of finite elements. An optimal GGLS method is designed such that the complex wave-number finite element dispersion relations are identical to the analytic relations. Complex wave-number dispersion analysis and numerical experiments demonstrate the dramatic superiority of the new optimal method over the standard finite element approach for coupled and uncoupled plate vibrations. The new method provides for a dramatic decrease in discretization requirements over previous methods. The canonical problem of a baffled, fluid-loaded, finite cylindrical shell is also studied. The finite element formulation for this problem is developed and the results are compared to an analytic solution based on an expansion of the displacement using in vacuo mode shapes. A novel high resolution parameter estimation technique, based on Prony's method, is used to obtain the complex wave-number dispersion relations for the finite structure. The finite element dispersion relations enable the analyst to pinpoint the source of errors and form discretization rules. 
The stationary phase approximation is used to obtain the dependence of the far field pressure on the surface displacement. This analysis allows for the study of the propagation of errors into the far field as well as the determination of important mechanisms of sound radiation.

Grosh, Karl

227

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2011 CFR

... Methods of computing annual volume of sales. 779.342 Section 779... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...

2011-07-01

228

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2013 CFR

... Methods of computing annual volume of sales. 779.342 Section 779... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...

2013-07-01

229

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2010 CFR

... Methods of computing annual volume of sales. 779.342 Section 779... Establishments Computing Annual Dollar Volume and Combination of Exemptions § 779.342 Methods of computing annual volume of sales. The tests as to...

2010-07-01

230

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2012 CFR

2012-07-01

231

29 CFR 779.342 - Methods of computing annual volume of sales.

Code of Federal Regulations, 2014 CFR

2014-07-01

232

Domain decomposition methods for the parallel computation of reacting flows

NASA Technical Reports Server (NTRS)

Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
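A minimal version of the linear-solver comparison can be set up with SciPy rather than the authors' codes: ILU-preconditioned restarted GMRES on a sparse 2-D Laplacian, standing in for one Newton-step Jacobian system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch: ILU-preconditioned GMRES on a 5-point 2-D Laplacian. The matrix is a
# hypothetical stand-in for a reacting-flow Jacobian, not the paper's problem.
n = 40                                         # grid is n x n
I = sp.identity(n, format='csr')
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format='csr')
S = sp.diags([-1, -1], [-1, 1], shape=(n, n), format='csr')
A = (sp.kron(I, T) + sp.kron(S, I)).tocsc()    # block-tridiagonal system matrix
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)       # incomplete LU factors
M = spla.LinearOperator(A.shape, matvec=ilu.solve)       # preconditioner M ~ A^{-1}
x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
```

As in the abstract, the preconditioner is the point: unpreconditioned GMRES on the same matrix needs far more iterations, and in a domain-decomposed setting each ILU solve can be applied subdomain-by-subdomain in parallel.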

Keyes, David E.

1988-01-01

233

Applications of meshless methods for damage computations with finite strains

NASA Astrophysics Data System (ADS)

Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied in the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified with experimental data. Computational results reveal that damage which takes place in the interior of specimens extends to the exterior and causes fracture of the specimens; the damage is a fast process relative to the overall tension loading. The EFG method provides a more stable and robust numerical solution compared with the FEM analysis.

Pan, Xiaofei; Yuan, Huang

2009-06-01

234

CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) is a simple test that is easy for humans but extremely difficult for computers to solve. CAPTCHA has been widely deployed by websites to protect their resources from attacks initiated by automatic scripts. By design, CAPTCHA is unable...

Zou, Cliff C.

235

Reducing Total Power Consumption Method in Cloud Computing Environments

The widespread use of cloud computing services is expected to rapidly increase the power consumed by ICT equipment in cloud computing environments. This paper first identifies the need for collaboration among servers, the communication network and the power network in order to reduce the total power consumption of the ICT equipment in cloud computing environments. Five fundamental policies for the collaboration are proposed and the algorithm to realize each collaboration policy is outlined. Next, this paper proposes possible signaling sequences to exchange information on power consumption between the network and servers, in order to realize the proposed collaboration policies. Then, in order to reduce the power consumption of the network, this paper proposes a simple method of estimating the power consumed by all network devices and apportioning it to individual users.

Kuribayashi, Shin-ichi

2012-01-01

236

Experiences using DAKOTA stochastic expansion methods in computational simulations.

Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report presents results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed for the methodologies to achieve convergence.
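The stochastic expansion idea can be sketched in one dimension with a polynomial chaos expansion in probabilists' Hermite polynomials. The response function below is an arbitrary stand-in, not a DAKOTA application; mean and variance then follow directly from the expansion coefficients.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

# 1-D polynomial chaos expansion of f(X), X ~ N(0,1), in probabilists' Hermite
# polynomials He_k. The "model" f is hypothetical, standing in for a simulation.
f = lambda x: np.exp(0.5 * x)

order, nq = 8, 20
xq, wq = hermegauss(nq)                 # Gauss quadrature for weight exp(-x^2/2)
wq = wq / np.sqrt(2.0 * np.pi)          # normalize to the standard normal pdf

coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1); ek[k] = 1.0   # one-hot selects He_k in hermeval
    ck = np.sum(wq * f(xq) * hermeval(xq, ek)) / factorial(k)  # <f He_k>/<He_k^2>
    coeffs.append(ck)

mean = coeffs[0]                                             # E[f(X)] = c_0
var = sum(c * c * factorial(k) for k, c in enumerate(coeffs[1:], start=1))
```

This is the property the report exploits: once the coefficients are computed (here by quadrature, in DAKOTA by quadrature, sparse grids or regression), response statistics and tail probabilities come from the expansion at negligible extra cost. For f(x) = exp(x/2) the exact mean is exp(1/8) and the exact variance is exp(1/2) - exp(1/4).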

Templeton, Jeremy Alan; Ruthruff, Joseph R.

2012-01-01

237

Statistical methods for dealing with publication bias in meta-analysis.

Publication bias is an inevitable problem in systematic reviews and meta-analyses, and one of the main threats to the validity of meta-analysis. Although several statistical methods have been developed to detect and adjust for publication bias since the early 1980s, some of them are not well known and are not being used properly in the statistical and clinical literature. In this paper, we provided a critical and extensive discussion of methods for dealing with publication bias, including statistical principles, implementation, and software, as well as the advantages and limitations of these methods. We illustrated a practical application of these methods in a meta-analysis of continuous support for women during childbirth. Copyright © 2014 John Wiley & Sons, Ltd. PMID:25363575

Jin, Zhi-Chao; Zhou, Xiao-Hua; He, Jia

2015-01-30

238

A hierarchical method for molecular docking using cloud computing.

Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small molecular databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on the different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs. PMID:23017886
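A toy version of the genetic-algorithm optimization at the core of such docking methods (not FlexGAsDock itself): tournament selection, uniform crossover, Gaussian mutation and elitism, minimizing a hypothetical quadratic "docking energy".

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([1.0, -2.0, 0.5])   # hypothetical best-pose parameters

def energy(x):
    """Toy stand-in for a protein-ligand scoring function (minimum 0 at TARGET)."""
    return float(np.sum((x - TARGET) ** 2))

def ga(pop_size=60, dims=3, gens=200, sigma=0.3):
    pop = rng.uniform(-5.0, 5.0, (pop_size, dims))
    for _ in range(gens):
        fit = np.array([energy(p) for p in pop])
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])   # tournaments
        mask = rng.random((pop_size, dims)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))  # crossover
        children += rng.normal(0.0, sigma, children.shape)               # mutation
        children[0] = pop[fit.argmin()]                                  # elitism
        pop = children
    fit = np.array([energy(p) for p in pop])
    return pop[fit.argmin()], float(fit.min())

best_x, best_e = ga()
```

In a real docking setting the chromosome would encode ligand position, orientation and torsions, and the energy call would be the expensive interaction-energy evaluation, which is exactly why a hierarchical split of the search and a cloud platform for the many independent evaluations pay off.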

Kang, Ling; Guo, Quan; Wang, Xicheng

2012-11-01

239

Improved diffraction computation with a hybrid C-RCWA-method

NASA Astrophysics Data System (ADS)

The Rigorous Coupled Wave Approach (RCWA) is acknowledged as a well established diffraction simulation method in electro-magnetic computing. Its two most essential applications in the semiconductor industry are in optical scatterometry and optical lithography simulation. In scatterometry, it is the standard technique to simulate spectra or diffraction responses for gratings to be characterized. In optical lithography simulation, it is an effective alternative to supplement or even to replace the FDTD for the calculation of light diffraction from thick masks as well as from wafer topographies. Unfortunately, the RCWA shows some serious disadvantages, particularly for the modelling of grating profiles with shallow slopes and of multilayer stacks with many layers, such as extreme UV masks with a large number of quarter wave layers. Here, the slicing may become a nightmare and the computation costs may increase dramatically. Moreover, accuracy suffers due to the inadequate staircase approximation of the slicing in conjunction with the boundary conditions in TM polarization. On the other hand, the Chandezon Method (C-Method) solves all these problems in a very elegant way; however, it fails for binary patterns or gratings with very steep profiles, where the RCWA works excellently. Therefore, we suggest a combination of both methods as plug-ins in the same scattering matrix coupling frame. The improved performance and the advantages of this hybrid C-RCWA-Method over the individual methods are shown with some relevant examples.

Bischoff, Joerg

2009-03-01

240

Research on the Statistical Method of Energy Consumption for Public Buildings in China

ICEBO2006, Shenzhen, China: Building Commissioning for Energy Efficiency and Comfort, Vol. VII-2-1. Research on the Statistical Method of Energy Consumption for Public Buildings in China, Shuqin Chen, Nianping Li... and analysis on energy consumption of public buildings in Tianjin and Kobe. Gas and Heat, 2004, 24(1):13-16. (In Chinese) [6] Shuqin Chen, Nianping Li, Jun Guan et al. A study on thermal environment and energy consumption of urban residential buildings...

Chen, S.; Li, N.

2006-01-01

241

Computational biology in the cloud: methods and new insights from computing at scale.

The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available. PMID:23424149

Kasson, Peter M

2013-01-01

242

Investigation of Ultrasonic Wave Scattering Effects using Computational Methods

NASA Astrophysics Data System (ADS)

Advances in computational power and expanded access to computing clusters have made mathematical modeling of complex wave effects possible. We have used multi-core and cluster computing to implement analytical and numerical models of ultrasonic wave scattering in fluid and solid media (acoustic and elastic waves). We begin by implementing complicated analytical equations that describe the force upon spheres immersed in inviscid and viscous fluids due to an incident plane wave. Two real-world applications of acoustic force upon spheres are investigated using the mathematical formulations: emboli removal from cardiopulmonary bypass circuits using traveling waves and the micromanipulation of algal cells with standing waves to aid in biomass processing for algae biofuels. We then move on to consider wave scattering situations where analytical models do not exist: scattering of acoustic waves from multiple scatterers in fluids and Lamb wave scattering in solids. We use a numerical method called the finite integration technique (FIT) to simulate wave behavior in three dimensions. The 3D simulations provide insight into experimental results for situations where 2D simulations would not be sufficient. The diverse set of scattering situations explored in this work shows the broad applicability of the underlying principles and the computational tools that we have developed. Overall, our work shows that the movement towards better availability of large computational resources is opening up new ways to investigate complicated physics phenomena.

Campbell Leckey, Cara Ann

2011-12-01

243

Characterization of Meta-Materials Using Computational Electromagnetic Methods

NASA Technical Reports Server (NTRS)

An efficient and powerful computational method is presented to synthesize a meta-material to specified electromagnetic properties. Using the periodicity of meta-materials, the Finite Element Methodology (FEM) is developed to estimate the reflection and transmission through the meta-material structure for a normal plane wave incidence. For efficient computations of the reflection and transmission over a wide band frequency range through a meta-material a Finite Difference Time Domain (FDTD) approach is also developed. Using the Nicholson-Ross method and the Genetic Algorithms, a robust procedure to extract electromagnetic properties of meta-material from the knowledge of its reflection and transmission coefficients is described. Few numerical examples are also presented to validate the present approach.

Deshpande, Manohar; Shin, Joon

2005-01-01

244

Theoretical and computational methods for three-body processes

This thesis discusses the development and application of theoretical and computational methods to study three-body processes. The main focus is on the calculation of three-body resonances and bound states. This broadly includes the study of Efimov states and resonances, three-body shape resonances, three- body Feshbach resonances, three-body pre-dissociated states in systems with a conical intersection, and the calculation of three-body

Juan David Blandon Zapata

2009-01-01

245

Precise computations of chemotactic collapse using moving mesh methods

We consider the problem of computing blow-up solutions of chemotaxis systems, or the so-called chemotactic collapse. In two spatial dimensions, such solutions can have approximate self-similar behaviour, which can be very challenging to verify in numerical simulations [cf. Betterton and Brenner, Collapsing bacterial cylinders, Phys. Rev. E 64 (2001) 061904]. We analyse a dynamic (scale-invariant) remeshing method which performs spatial

C. J. Budd; R. Carretero-González; R. D. Russell

2005-01-01

246

Exploring the antiviral activity of juglone by computational method.

Nature has been the best source of medicines for a long time. Many plant extracts have been used as drugs. Juglone occurs in all parts of the Juglandaceae family and is found extensively in black walnut plants. It possesses antifungal, antimalarial, antibacterial and antiviral properties, besides exhibiting cytotoxic effects. Juglone has gained interest among researchers for its anticancer properties. This article elucidates the antiviral activity of juglone by computational methods. PMID:24846583

Vardhini, Shailima R D

2014-12-01

247

Imaging and Computational Methods for Exploring Sub-cellular Anatomy

Imaging and Computational Methods for Exploring Sub-Cellular Anatomy. (May 2009) David Matthew Mayerich, B.S., Southwestern Oklahoma State University; M.S., Texas A&M University. Chair of Advisory Committee: Dr. John Keyser. The ability to create large-scale high... [table-of-contents fragments omitted]

Mayerich, David

2010-01-16

248

Computer method for identification of boiler transfer functions

NASA Technical Reports Server (NTRS)

An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function to the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
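The identification idea, adjusting an empirical transfer function until its frequency response matches measurements, can be sketched with a nonlinear least-squares fit. The first-order model G(s) = K/(1 + tau*s) and the synthetic "measurements" below are illustrative, not the boiler data of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit an empirical transfer function to frequency-response data by minimizing
# the complex mismatch. Model and data here are synthetic placeholders.
w = np.logspace(-2, 2, 50)                    # measurement frequencies (rad/s)
K_true, tau_true = 2.0, 0.5
G_meas = K_true / (1 + 1j * tau_true * w)     # synthetic "measured" response

def resid(p):
    K, tau = p
    G = K / (1 + 1j * tau * w)
    err = G - G_meas
    return np.concatenate([err.real, err.imag])   # least_squares needs real residuals

fit = least_squares(resid, x0=[1.0, 1.0])
K_hat, tau_hat = fit.x
```

As in the paper, one can swap in richer candidate transfer functions (extra poles, zeros, delays) and repeat the fit until the locus of the model matches the measured locus satisfactorily; a penalized objective would additionally discourage nonphysical parameter values.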

Miles, J. H.

1971-01-01

249

Cost Savings From the Provision of Specific Methods of Contraception in a Publicly Funded Program

Objectives. We examined the cost-effectiveness of contraceptive methods dispensed in 2003 to 955 000 women in Family PACT (Planning, Access, Care and Treatment), California's publicly funded family planning program. Methods. We estimated the number of pregnancies averted by each contraceptive method and compared the cost of providing each method with the savings from averted pregnancies. Results. More than half of the 178 000 averted pregnancies were attributable to oral contraceptives, one fifth to injectable methods, and one tenth each to the patch and barrier methods. The implant and intrauterine contraceptives were the most cost-effective, with cost savings of more than $7.00 for every $1.00 spent in services and supplies. Per $1.00 spent, injectable contraceptives yielded savings of $5.60; oral contraceptives, $4.07; the patch, $2.99; the vaginal ring, $2.55; barrier methods, $1.34; and emergency contraceptives, $1.43. Conclusions. All contraceptive methods were cost-effective—they saved more in public expenditures for unintended pregnancies than they cost to provide. Because no single method is clinically recommended to every woman, it is medically and fiscally advisable for public health programs to offer all contraceptive methods. PMID:18703437
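The cost-effectiveness arithmetic reduces to a savings-per-dollar ratio per method. The figures below are hypothetical placeholders, not the Family PACT numbers, but they reproduce the shape of the computation.

```python
# Savings-per-dollar for each contraceptive method:
#   ratio = (pregnancies averted x public cost per unintended pregnancy) / program cost.
# All figures are hypothetical illustrations, not the study's data.
COST_PER_PREGNANCY = 8000.0            # assumed public expenditure per pregnancy, $

methods = {
    # method: (pregnancies averted, cost of providing the method, $)
    "oral":       (95_000, 190_000_000),
    "injectable": (36_000,  52_000_000),
    "iud":        (12_000,  13_000_000),
}

ratios = {m: averted * COST_PER_PREGNANCY / cost
          for m, (averted, cost) in methods.items()}
cost_effective = [m for m, r in ratios.items() if r > 1.0]
```

Any method with a ratio above 1.0 saves more than it costs, which is the abstract's criterion for concluding that all methods offered were cost-effective even though their ratios differ.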

Rostovtseva, Daria P.; Brindis, Claire D.; Biggs, M. Antonia; Hulett, Denis; Darney, Philip D.

2009-01-01

250

A short survey of computational analysis methods in analysing ChIP-seq data

Chromatin immunoprecipitation followed by massively parallel next-generation sequencing (ChIP-seq) is a valuable experimental strategy for assaying protein-DNA interaction over the whole genome. Many computational tools have been designed to find the peaks of the signals corresponding to protein binding sites. In this paper, three computational methods used in ChIP-seq data analysis are reviewed: the ChIP-seq processing pipeline (spp), PeakSeq and CisGenome. There is also a comparison of how they agree and disagree on finding peaks using the publicly available Signal Transducers and Activators of Transcription protein 1 (STAT1) and RNA polymerase II (PolII) datasets with corresponding negative controls. PMID:21296745
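A naive sliding-window peak caller in the spirit of the reviewed tools can be written in a few lines: estimate a Poisson background rate from a control track and flag windows whose ChIP counts are improbably high under it. The data here are synthetic, not the STAT1/PolII sets, and real callers add duplicate filtering, strand cross-correlation and FDR control.

```python
import numpy as np
from scipy.stats import poisson

# Toy Poisson-background peak calling on synthetic per-window read counts.
rng = np.random.default_rng(7)
n_windows = 2000
control = rng.poisson(5.0, n_windows)           # input/background counts
chip = rng.poisson(5.0, n_windows)
chip[800:810] += rng.poisson(40.0, 10)          # one spiked-in binding site

lam = control.mean()                            # background rate per window
pvals = poisson.sf(chip - 1, lam)               # P(X >= observed count)
threshold = 1e-6 / n_windows                    # crude Bonferroni-style cutoff
peaks = np.flatnonzero(pvals < threshold)
```

The disagreements among spp, PeakSeq and CisGenome noted in the abstract largely come from how each refines this basic idea: local versus global background rates, how the control is scaled, and how significance is thresholded.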

2011-01-01

251

Computation of multi-material interactions using point method

Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are used for problems of solid deformation, the state variables, such as stress and damage, need to be advected, causing significant numerical diffusion error. When Lagrangian methods are used for problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error, and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, Eulerian meshes stay fixed and Lagrangian particles move through the Eulerian meshes during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun. v. 87, pp. 236) provides a mathematical foundation for an improved version of the PIC method, the material point method (MPM). The unique advantages of the MPM method have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These are multiphase flow or multimaterial deformation problems, in which pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the same scheme used in Eulerian methods for multiphase flows is used to calculate the pressure. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions for the continuity equations.
Numerical examples are given to demonstrate the new scheme.
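
The particle-to-grid and grid-to-particle transfers that let PIC-type methods keep a fixed Eulerian mesh while Lagrangian particles move through it can be sketched in one dimension. The following is an illustrative sketch only, with linear hat-function weights and a user-supplied acceleration field; it is not the MPM algorithm or the continuity-constrained scheme of this paper:

```python
import numpy as np

def pic_step(x_p, v_p, m_p, nodes, dt, grid_force):
    """One particle-in-cell transfer cycle on a 1-D Eulerian grid.

    Particles carry mass and velocity; the grid supplies forces.
    Linear (hat-function) shape weights map between the two.
    """
    h = nodes[1] - nodes[0]
    m_g = np.zeros_like(nodes)          # grid mass
    p_g = np.zeros_like(nodes)          # grid momentum
    # particle -> grid (scatter with linear weights)
    for xp, vp, mp in zip(x_p, v_p, m_p):
        i = int((xp - nodes[0]) // h)
        w = (xp - nodes[i]) / h
        m_g[i] += (1 - w) * mp
        m_g[i + 1] += w * mp
        p_g[i] += (1 - w) * mp * vp
        p_g[i + 1] += w * mp * vp
    # grid momentum update (acceleration field evaluated on the fixed mesh)
    p_g += dt * grid_force(nodes) * m_g
    v_g = np.divide(p_g, m_g, out=np.zeros_like(p_g), where=m_g > 0)
    # grid -> particle (gather): the particles move, the mesh does not
    v_new, x_new = [], []
    for xp, vp in zip(x_p, v_p):
        i = int((xp - nodes[0]) // h)
        w = (xp - nodes[i]) / h
        vg = (1 - w) * v_g[i] + w * v_g[i + 1]
        v_new.append(vg)
        x_new.append(xp + dt * vg)
    return np.array(x_new), np.array(v_new)
```

With a zero force field the transfer leaves a single particle's velocity unchanged, which is a quick sanity check on the interpolation weights.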

Zhang, Duan Z [Los Alamos National Laboratory; Ma, Xia [Los Alamos National Laboratory; Giguere, Paul T [Los Alamos National Laboratory

2009-01-01

252

A finite volume, Cartesian grid method for computational aeroacoustics

NASA Astrophysics Data System (ADS)

Computational Aeroacoustics (CAA) combines the disciplines of aeroacoustics and computational fluid dynamics and deals with sound generation and propagation in association with the dynamics of the fluid flow and its interaction with the geometry of the surrounding structures. To conduct such computations, it is essential that the numerical techniques for acoustic problems have low dissipation and dispersion error over a wide range of length and time scales, can satisfy the nonlinear conservation laws, and are capable of dealing with geometric variations. In this dissertation, we first investigate two promising numerical methods for treating convective transport: the dispersion-relation-preserving (DRP) scheme, proposed by Tam and Webb, and the space-time a-epsilon method, developed by Chang. Between them, it seems that for long waves, errors grow more slowly with the space-time a-epsilon scheme, while for short waves, often critical for acoustics computations, errors accumulate more slowly with the DRP scheme. Based on these findings, two optimized numerical schemes, the DRP scheme and the optimized prefactored compact (OPC) scheme, originally developed using the finite difference approach, are recast into finite volume form so that nonlinear physics can be better handled. Finally, the Cartesian grid, cut-cell method is combined with the high-order finite-volume schemes to offer additional capability for handling complex geometry. The resulting approach is assessed against several well-identified test problems, demonstrating that it can offer accurate and effective treatment of some important and challenging aspects of acoustic problems.
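
The dispersion error that DRP-type optimization targets can be made concrete through the modified wavenumber of a finite difference stencil: a spectrally exact scheme would reproduce k*h itself, and the gap is the dispersion error. The sketch below uses the standard second- and fourth-order central differences, not the optimized DRP or OPC coefficients of the dissertation:

```python
import numpy as np

# Modified wavenumber k*h of central-difference stencils applied to
# exp(i*k*x).  The deviation from the exact value k*h is the dispersion
# error that DRP-type optimization minimizes over a band of wavenumbers.
def kstar_h_2nd(kh):                       # (f[i+1] - f[i-1]) / (2h)
    return np.sin(kh)

def kstar_h_4th(kh):                       # standard 5-point central stencil
    return (8.0 * np.sin(kh) - np.sin(2.0 * kh)) / 6.0

kh = np.linspace(0.0, np.pi, 200)
err2 = np.abs(kstar_h_2nd(kh) - kh)        # grows quickly for short waves
err4 = np.abs(kstar_h_4th(kh) - kh)
```

Plotting err2 and err4 against kh shows both schemes failing near the grid cutoff kh = pi; DRP schemes trade formal order for a wider band of low dispersion error in exactly this picture.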

Popescu, Mihaela

253

Tensor product decomposition methods for plasma physics computations

NASA Astrophysics Data System (ADS)

Tensor product decomposition (TPD) methods are a powerful linear algebra technique for the efficient representation of high dimensional data sets. In the simplest 2-dimensional case, TPD reduces to the singular value decomposition (SVD) of matrices. These methods, which are closely related to proper orthogonal decomposition techniques, have been extensively applied in signal and image processing, and to some fluid mechanics problems. However, their use in plasma physics computation is relatively new. Some recent applications include: data compression of 6-dimensional gyrokinetic plasma turbulence data sets (D. R. Hatch, D. del-Castillo-Negrete, and P. W. Terry, submitted to J. Comput. Phys., 2011); noise reduction in particle methods (R. Nguyen, D. del-Castillo-Negrete, K. Schneider, M. Farge, and G. Chen, J. Comput. Phys. 229, 2821-2839, 2010); and multiscale analysis of plasma turbulence (S. Futatani, S. Benkadda, and D. del-Castillo-Negrete, Phys. Plasmas 16, 042506, 2009). The goal of this presentation is to discuss a novel application of TPD methods to projective integration of particle-based collisional plasma transport computations.
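
The 2-dimensional case mentioned above, where TPD reduces to the SVD, is easy to demonstrate: keeping the r largest singular values gives the best rank-r approximation (Eckart-Young), which is the basis of the data-compression applications cited. A minimal sketch on synthetic low-rank data (the matrix sizes and rank are arbitrary choices, not from the presentation):

```python
import numpy as np

# Synthetic "data set" with exact rank-3 structure.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))

# SVD = 2-D tensor product decomposition; truncate to rank r.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 3
A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]     # rank-r reconstruction
rel_err = np.linalg.norm(A - A_r) / np.linalg.norm(A)

# Storage drops from 100*80 numbers to (100 + 80 + 1)*r.
```

Because A is exactly rank 3 here, the truncated reconstruction recovers it to machine precision; for noisy turbulence data the same truncation acts as compression and denoising.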

Del-Castillo-Negrete, D.

2012-03-01

254

Automated uncertainty analysis methods in the FRAP computer codes. [PWR

A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.

Peck, S.O.

1980-01-01

255

Evolutionary computational methods to predict oral bioavailability QSPRs.

This review discusses evolutionary and adaptive methods for predicting oral bioavailability (OB) from chemical structure. Genetic Programming (GP), a specific form of evolutionary computing, is compared with some other advanced computational methods for OB prediction. The results show that classifying drugs into 'high' and 'low' OB classes on the basis of their structure alone is solvable, and initial models are already producing output that would be useful for pharmaceutical research. The results also suggest that quantitative prediction of OB will be tractable. Critical aspects of the solution will involve the use of techniques that can: (i) handle problems with a very large number of variables (high dimensionality); (ii) cope with 'noisy' data; and (iii) implement binary choices to sub-classify molecules whose behavior is qualitatively different. Detailed quantitative predictions will emerge from more refined models that are hybrids derived from mechanistic models of the biology of oral absorption and the power of advanced computing techniques to predict the behavior of the components of those models in silico. PMID:11865672

Bains, William; Gilbert, Richard; Sviridenko, Lilya; Gascon, Jose-Miguel; Scoffin, Robert; Birchall, Kris; Harvey, Inman; Caldwell, John

2002-01-01

256

Review methods for image segmentation from computed tomography images

NASA Astrophysics Data System (ADS)

Image segmentation is a challenging process when accuracy, automation and robustness are required, especially for medical images. There exist many segmentation methods that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of methods for segmentation using Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of each method, its strengths and the problems encountered with it are defined and explained. It is necessary to know the suitable segmentation method in order to obtain accurate segmentation. This paper can serve as a guide for researchers in choosing a suitable segmentation method, especially for segmenting images from CT scans.
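
One of the simplest segmentation baselines that reviews of this kind typically cover is global thresholding with Otsu's method, which picks the threshold maximizing between-class variance of the intensity histogram. The sketch below is a generic illustration of that technique, not a method taken from this particular review:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Global threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()        # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                          # class-0 probability
    mu = np.cumsum(p * centers)                # cumulative mean
    mu_T = mu[-1]                              # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros(nbins)
    # between-class variance: (mu_T*w0 - mu)^2 / (w0*w1)
    sigma_b[valid] = (mu_T * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    # argmax returns the low edge of the optimal plateau for well-separated modes
    return centers[np.argmax(sigma_b)]
```

On a strongly bimodal intensity distribution, such as bone against soft tissue in a CT slice, the returned threshold falls between the two modes.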

Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

2014-12-01

257

Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

Understanding electromagnetic phenomena is the key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long-range electrostatic interactions in the latter are the major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, a divergence-free finite element basis for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long-range electrostatic interactions in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up the molecular dynamics simulation of transport in biological ion channels.

Cai, Wei

2014-05-15

258

While it is important to support the development of methods for public participation, we argue that this should not be at the expense of a broader consideration of the role of public participation. We suggest that a rights based approach provides a framework for developing more meaningful approaches that move beyond public participation as synonymous with consultation to value the contribution of lay knowledge to the governance of health systems and health research. PMID:25337604

Boaz, Annette; Chambers, Mary; Stuttaford, Maria

2014-10-01

259

On implicit Runge-Kutta methods for parallel computations

NASA Technical Reports Server (NTRS)

Implicit Runge-Kutta methods which are well-suited for parallel computations are characterized. It is claimed that such methods are, first of all, those for which the associated rational approximation to the exponential has distinct poles; these are called multiply implicit (MIRK) methods. Also, because of the so-called order reduction phenomenon, there is reason to require that these poles be real. Then, it is proved that a necessary condition for a q-stage, real MIRK to be A sub 0-stable with maximal order q + 1 is that q = 1, 2, 3, or 5. Nevertheless, it is shown that for every positive integer q, there exists a q-stage, real MIRK which is I-stable with order q. Finally, some useful examples of algebraically stable MIRKs are given.
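
The role of the stability function's poles can be seen in the simplest one-stage implicit Runge-Kutta method, backward Euler, whose rational approximation to the exponential, R(z) = 1/(1 - z), has a single real pole. The sketch below is a generic illustration of implicit stability on a stiff linear test problem, not a MIRK construction from the paper:

```python
# Backward Euler is the simplest implicit Runge-Kutta method.  Its
# stability function R(z) = 1/(1 - z) has one real pole (z = 1): the
# kind of distinct, real pole structure the paper identifies as making
# each stage independently solvable, hence attractive for parallelism.
def backward_euler(lam, y0, dt, steps):
    y = y0
    for _ in range(steps):
        # implicit update y_new = y + dt*lam*y_new; linear, so closed form
        y = y / (1.0 - dt * lam)
    return y

# Stiff test problem y' = -1000 y with a step far beyond the explicit
# stability limit: the implicit solution still decays, as it should.
y = backward_euler(lam=-1000.0, y0=1.0, dt=0.1, steps=50)
```

An explicit method with dt = 0.1 on this problem would blow up (|1 + dt*lam| = 99); backward Euler contracts by 1/101 per step.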

Keeling, Stephen L.

1987-01-01

260

A discrete ordinate response matrix method for massively parallel computers

A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of operations. The algorithm utilizes massive parallelism by assigning each spatial node to a processor. The algorithm is accelerated effectively by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red/black iterations. The method has been implemented on a 16k Connection Machine-2, and S_8 and S_16 solutions have been obtained for fixed-source benchmark problems in X-Y geometry.
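
The red/black iteration mentioned for the low-order diffusion solve is parallel because, on a five-point stencil, points of one color couple only to points of the other color, so a whole color can be updated at once. A minimal serial sketch of that ordering on a model Poisson problem (illustrative only; the paper applies it to the synthetic diffusion acceleration equations, not this toy problem):

```python
import numpy as np

def red_black_gs(f, h, sweeps):
    """Red/black Gauss-Seidel for -laplacian(u) = f on a square grid,
    u = 0 on the boundary.  The 5-point stencil couples each point only
    to the opposite color, so each half-sweep is fully parallel.
    """
    u = np.zeros_like(f)
    n = f.shape[0]
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for _ in range(sweeps):
        for color in (0, 1):
            mask = ((ii + jj) % 2 == color)
            mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
            nb = np.zeros_like(u)           # sum of 4 orthogonal neighbors
            nb[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
            u[mask] = 0.25 * (nb[mask] + h * h * f[mask])
    return u

f = np.ones((9, 9))
u = red_black_gs(f, h=0.125, sweeps=500)
```

On a data-parallel machine like the CM-2, each half-sweep maps to one simultaneous update of all same-colored nodes.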

Hanebutte, U.R.; Lewis, E.E.

1991-12-31

262

COMSAC: Computational Methods for Stability and Control. Part 2

NASA Technical Reports Server (NTRS)

The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

2004-01-01

263

Fast computational method of beam scattering from sea surface

NASA Astrophysics Data System (ADS)

Microwave backscattering from the sea surface at low grazing angle (LGA) is important in predicting radar detection of targets on or near the surface, since the probability of a false alarm depends upon the observed signal-to-clutter ratio. However, the large area illuminated by the incident beam at LGA leads to a large number of unknowns when a numerical method is used to calculate backscattering from the sea surface. In addition, the calculation of backscattering from hundreds of random sea surface samples is needed to obtain the statistical properties of the random rough surface. The sparse matrix canonical grid (SMCG) method and compute unified device architecture (CUDA) libraries are used to accelerate the computation. By using these techniques, the Doppler spectral characteristics of electromagnetic scattering from the sea surface are efficiently calculated.

Su, Xiang; Wu, Zhensen; Zhang, Xiaoxiao

2014-10-01

264

Counting hard-to-count populations: the network scale-up method for public health

Estimating sizes of hidden or hard-to-reach populations is an important problem in public health. For example, estimates of the sizes of populations at highest risk for HIV and AIDS are needed for designing, evaluating and allocating funding for treatment and prevention programmes. A promising approach to size estimation, relatively new to public health, is the network scale-up method (NSUM), involving two steps: estimating the personal network size of the members of a random sample of a total population and, with this information, estimating the number of members of a hidden subpopulation of the total population. We describe the method, including two approaches to estimating personal network sizes (summation and known population). We discuss the strengths and weaknesses of each approach and provide examples of international applications of the NSUM in public health. We conclude with recommendations for future research and evaluation. PMID:21106509
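
The two-step estimator described above has a very compact form: if respondent i knows c_i people in total and m_i people in the hidden subpopulation, the scale-up estimate of the hidden population's size is N * (sum of m_i) / (sum of c_i), where N is the total population. A minimal sketch with invented toy survey numbers (the respondents and sizes below are purely illustrative):

```python
# Network scale-up estimator (NSUM): aggregate over a random sample of
# respondents.  c_i = personal network size (e.g. from the known-population
# method), m_i = number of hidden-subpopulation members the respondent knows.
def nsum_estimate(m, c, N):
    """Estimated size of the hidden subpopulation in a population of N."""
    return N * sum(m) / sum(c)

# Toy survey of 4 respondents in a city of one million (illustrative data).
m = [2, 0, 1, 1]          # hidden-population members known
c = [300, 250, 400, 350]  # estimated personal network sizes
est = nsum_estimate(m, c, N=1_000_000)
```

The ratio sum(m)/sum(c) is the estimated proportion of an average network made up of hidden-population members; scaling by N converts it to a population count.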

Bernard, H Russell; Hallett, Tim; Iovita, Alexandrina; Johnsen, Eugene C; Lyerla, Rob; McCarty, Christopher; Mahy, Mary; Salganik, Matthew J; Saliuk, Tetiana; Scutelniciuc, Otilia; Shelley, Gene A; Sirinirund, Petchsri; Weir, Sharon

2010-01-01

265

Discrete Logarithms in Finite Fields Some Algorithms for Computing New Public Key Cryptosystem

NASA Astrophysics Data System (ADS)

Let p be a prime, Fp be a finite field, g be a primitive element of Fp and let h be a nonzero element of Fp. The discrete logarithm problem (DLP) is the problem of finding an exponent k for which g^k ≡ h (mod p). The well-known problem of computing discrete logarithms has gained additional importance in recent years due to its applicability in cryptography. Several cryptographic systems would become insecure if an efficient discrete logarithm algorithm were discovered. This paper discusses some known algorithms in this area. Most public key cryptosystems have been constructed based on abelian groups. Here we show how the discrete logarithm problem over a group can be seen as a special instance of an action by an abelian semigroup on a finite set. The proposed new public key cryptosystem generalizes the semigroup action problem due to Rosenlicht (see [8]) and shows how every semigroup action by an abelian semigroup gives rise to a Diffie-Hellman key exchange.
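
Among the generic DLP algorithms such surveys cover, baby-step giant-step is the simplest with better-than-brute-force cost: writing k = i*n + j with n about sqrt(p), it trades O(sqrt(p)) memory for O(sqrt(p)) time. A self-contained sketch (whether this specific algorithm appears in the paper is an assumption; it is standard material):

```python
import math

def bsgs(g, h, p):
    """Baby-step giant-step: return k with g**k = h (mod p), or None.

    Assumes g generates the multiplicative group mod prime p (order p-1).
    Runs in O(sqrt(p)) time and space.
    """
    n = math.isqrt(p - 1) + 1
    # baby steps: table of g^j for j = 0 .. n-1
    baby = {pow(g, j, p): j for j in range(n)}
    # giant-step factor g^(-n) = g^(p-1-n) mod p, since g^(p-1) = 1
    factor = pow(g, (p - 1 - n) % (p - 1), p)
    gamma = h % p
    for i in range(n):
        if gamma in baby:
            return i * n + baby[gamma]   # h = g^(i*n + j)
        gamma = (gamma * factor) % p
    return None
```

For cryptographic-size p the sqrt(p) cost is still infeasible, which is exactly why DLP-based systems remain secure against generic attacks.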

Trendafilov, Ivan D.; Durcheva, Mariana I.

2010-10-01

266

In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which records the responses induced when the compound interacts with different cellular systems or biological targets. Profiling compounds of environmental or pharmaceutical interest using useful toxicity bioassay data is a promising method to study complex animal toxicity. In this study, we developed an automatic virtual profiling tool to evaluate potential animal toxicants. First, we automatically acquired all PubChem bioassay data for a set of 4,841 compounds with publicly available rat acute toxicity results. Next, we developed a scoring system to evaluate the relevance between these extracted bioassays and animal acute toxicity. Finally, the top ranked bioassays were selected to profile the compounds of interest. The resulting response profiles proved to be useful to prioritize untested compounds for their animal toxicity potentials and form a potential in vitro toxicity testing panel. The protocol developed in this study could be combined with structure-activity approaches and used to explore additional publicly available bioassay datasets for modeling a broader range of animal toxicities. PMID:24950175

Zhang, Jun; Hsieh, Jui-Hua; Zhu, Hao

2014-01-01

267

A FAST NEW PUBLIC CODE FOR COMPUTING PHOTON ORBITS IN A KERR SPACETIME

Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. We present a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semianalytically for the first time. We discuss the method, its implementation in a freely available FORTRAN code, and its application to toy problems from the literature.

Dexter, Jason [Department of Physics, University of Washington, Seattle, WA 98195-1560 (United States); Agol, Eric [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States)], E-mail: jdexter@u.washington.edu

2009-05-10

268

The Role of Public Extension in Introducing Environment-Friendly Farming Methods in Turkey.

ERIC Educational Resources Information Center

Currently, the Turkish extension service plays a minimal role in reducing adverse environmental effects of farming methods. Public investment in research and extension on sustainable agriculture is needed to ensure long-term production practices that maintain the food supply without damaging the environment. (SK)

Kumuk, T.; Akgungor, S.

1995-01-01

269

Fan Flutter Computations Using the Harmonic Balance Method

NASA Technical Reports Server (NTRS)

An experimental forward-swept fan encountered flutter at part-speed conditions during wind tunnel testing. A new propulsion aeroelasticity code, based on a computational fluid dynamics (CFD) approach, was used to model the aeroelastic behavior of this fan. This three-dimensional code models the unsteady flowfield due to blade vibrations using a harmonic balance method to solve the Navier-Stokes equations. This paper describes the flutter calculations and compares the results to experimental measurements and previous results from a time-accurate propulsion aeroelasticity code.
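
The idea behind harmonic balance, replacing time marching to a periodic state with an algebraic system for the Fourier coefficients of that state, can be shown in miniature on a linear ODE. This toy stand-in (one harmonic, a scalar equation) is only meant to illustrate the principle, not the Navier-Stokes formulation of the paper:

```python
import numpy as np

# Harmonic balance in miniature: for y' + y = cos(t), assume a periodic
# single-harmonic solution y = a*cos(t) + b*sin(t) and substitute:
#   y' + y = (a + b)*cos(t) + (b - a)*sin(t)
# Balancing coefficients against the forcing cos(t) gives the 2x2 system
#   a + b = 1,   -a + b = 0
# -- an algebraic solve replaces integrating to a periodic steady state.
M = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
rhs = np.array([1.0, 0.0])
a, b = np.linalg.solve(M, rhs)   # Fourier coefficients of the periodic state
```

In the full method the same substitution is applied to the discretized flow equations with several harmonics of the blade-vibration frequency, yielding a coupled steady problem per harmonic instead of a long time-accurate computation.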

Bakhle, Milind A.; Thomas, Jeffrey P.; Reddy, T.S.R.

2009-01-01

270

Assessment of nonequilibrium radiation computation methods for hypersonic flows

NASA Technical Reports Server (NTRS)

The present understanding of shock-layer radiation in the low density regime, as appropriate to hypersonic vehicles, is surveyed. Based on the relative importance of electron excitation and radiation transport, the hypersonic flows are divided into three groups: weakly ionized, moderately ionized, and highly ionized flows. In the light of this division, the existing laboratory and flight data are scrutinized. Finally, an assessment of the nonequilibrium radiation computation methods for the three regimes in hypersonic flows is presented. The assessment is conducted by comparing experimental data against the values predicted by the physical model.

Sharma, Surendra

1993-01-01

271

Numerical methods and computers used in elastohydrodynamic lubrication

NASA Technical Reports Server (NTRS)

Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.

Hamrock, B. J.; Tripp, J. H.

1982-01-01

272

Computation of Sound Propagation by Boundary Element Method

NASA Technical Reports Server (NTRS)

This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable if the gradients are treated as additional unknowns, greatly increasing the size of the matrix equation, or if numerical differentiation is used to approximate the gradients, introducing numerical error in the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied. They are, respectively, the sound scattering by a sphere, the sound reflection by a plate in uniform mean flow and the sound propagation over a hump of irregular shape in uniform flow.
The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA); all are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

Guo, Yueping

2005-01-01

273

ERIC Educational Resources Information Center

Graphical methods for displaying data, as well as available computer software and hardware, are reviewed. The authors have emphasized the types of graphs which are most relevant to the needs of the National Center for Education Statistics (NCES) and its readers. The following types of graphs are described: tabulations, stem-and-leaf displays,…

Bessey, Barbara L.; And Others

274

An experiment in hurricane track prediction using parallel computing methods

NASA Technical Reports Server (NTRS)

The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.

Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

1994-01-01

275

Applications of Computational Methods for Dynamic Stability and Control Derivatives

NASA Technical Reports Server (NTRS)

Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

Green, Lawrence L.; Spence, Angela M.

2004-01-01

276

Radiation Transport Computation in Stochastic Media: Method and Application

NASA Astrophysics Data System (ADS)

Stochastic media, characterized by the stochastic distribution of inclusions in a background medium, are typical radiation transport media encountered in natural or engineering systems. In the community of radiation transport computation, there is always a demand of accurate and efficient methods that can account for the nature of the stochastic distribution. In this dissertation, we focus on methodology development for the radiation transport computation that is applied to neutronic analyses of nuclear reactor designs characterized by the stochastic distribution of particle fuel. Reactor concepts with the employment of a fuel design consisting of a random heterogeneous mixture of fissile material and non-fissile moderator are constantly proposed. Key physical quantities such as core criticality and power distribution, reactivity control design parameters, depletion and fuel burn-up need to be carefully evaluated. In order to meet these practical requirements, we first need to develop accurate and fast computational methods that can effectively account for the stochastic nature of double heterogeneity configuration. A Monte Carlo based method called Chord Length Sampling (CLS) method is considered to be a promising method for analyzing those TRISO-type fueled reactors. Although the CLS method has been proposed for more than two decades and much research has been conducted to enhance its applicability, further efforts are still needed to address some key research gaps that exist for the CLS method. (1) There is a general lack of thorough investigation of the factors that give rise to the inaccuracy of the CLS method found by many researchers. The accuracy of the CLS method depends on the optical and geometric properties of the system. In some specific scenarios, considerable inaccuracies have been reported. However, no research has been providing a clear interpretation of the reasons responsible for the inaccuracy in the reported scenarios. 
Furthermore, no any correction methods have been proposed or developed to improve the accuracy of the CLS in all the applied scenarios. (2) Previous CLS method only deals with the on-the-fly sample of fuel particles in analyzing TRISO-type fueled reactors. Within the fuel particle, which consists of a fuel kernel and a coating, conventional Monte Carlo simulations apply. This strategy may not fully achieve the highest computational efficiency since extra simulation time is taken for tracking neutrons in the coating region. The coating region has negligible neutronic effect on the overall reactor core performance. This indicates a possible strategy to further increase the computational efficiency by directly sampling fuel kernels on-the-fly in the CLS simulations. In order to test the new strategy, a new model of the chord length distribution function is needed, which requires new research effort to develop and test the new model. (3) The previous evaluations and applications of the CLS method have been limited to single-type single-size fuel particle systems, i.e. only one type of fuel particles with constant size is assumed in the fuel zone, which is the case for typical VHTR designs. In practice, however, for different application purposes, two or more types of TRISO fuel particles may be loaded in the same fuel zone, e.g. fissile fuel particles and fertile fuel particles are used for transmutation purpose in some reactors. Moreover, the fuel particle size may not be kept constant and can vary with a range. Typical design containing such fuel particles can be found in the FSV reactor. Therefore, it is desired to develop new computational model to treat multi-type poly-sized particle systems in the neutornic analysis. This requires extending the current CLS method to on-the-fly sample not only the location of the fuel particle, but also the type and size of the fuel particles in order to be applied to a broad range of reactor designs in neutronic analyses. 
New sampling functions need to be developed for the extended on-the-fly sampling strategy. This Ph.D. dissertation addressed these
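The central CLS step replaces explicit particle tracking with sampling the distance to the next fuel particle from an exponential chord-length distribution whose mean is the mean matrix chord length. A minimal sketch of that step (the sphere radius and packing fraction are illustrative, not taken from the dissertation):

```python
import math
import random

def mean_matrix_chord(radius, packing_fraction):
    """Mean chord length in the matrix between spheres of a given radius
    randomly packed at volume fraction f: (4R/3) * (1 - f) / f."""
    return (4.0 * radius / 3.0) * (1.0 - packing_fraction) / packing_fraction

def sample_distance_to_next_particle(lam, rng):
    """CLS step: draw the flight distance to the next fuel-particle surface
    from the exponential distribution p(d) = exp(-d/lam) / lam."""
    return -lam * math.log(1.0 - rng.random())

rng = random.Random(0)
lam = mean_matrix_chord(radius=0.05, packing_fraction=0.3)  # cm, illustrative
samples = [sample_distance_to_next_particle(lam, rng) for _ in range(100_000)]
mean_sampled = sum(samples) / len(samples)
```

The sampled mean reproduces the analytic mean chord length, which is the consistency property the on-the-fly sampling relies on.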

Liang, Chao

277

Parallel computation of multigroup reactivity coefficient using iterative method

One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. An FPM target is a stainless-steel tube containing layers of high-enriched uranium, and the tube is irradiated to obtain fission products; the fission product is widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core, however, can interfere with core performance, in particular through changes in flux or reactivity. A method is therefore needed for calculating safety margins as the configuration changes over the life of the reactor, and making the code faster becomes essential. An advantage of the perturbation method is that the neutron safety margin for the research reactor can be reused without repeating the full reactivity calculation. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally demanding. Several parallel iterative algorithms have been developed for solving the resulting large sparse matrix systems. The red-black Gauss-Seidel iteration and a parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficients. In this research, a code for reactivity calculation, one component of safety analysis, was developed with parallel processing; the calculation can be performed more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.
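The red-black ordering mentioned above is what makes Gauss-Seidel parallelizable: points of one color depend only on points of the other color, so each color can be updated concurrently. A sketch on a one-group, one-dimensional diffusion-like model problem -u'' = f (not the authors' code; the grid and source are illustrative):

```python
import math

n = 31                      # interior grid points
h = 1.0 / (n + 1)
f = [math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]  # source
u = [0.0] * n               # initial guess; exact solution is sin(pi x)

def sweep(color):
    # Update only points of one color; the reads touch only the other
    # color, so every iteration of this loop could run in parallel.
    for i in range(color, n, 2):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u[i] = 0.5 * (left + right + h * h * f[i])

for _ in range(3000):
    sweep(0)   # "red" points
    sweep(1)   # "black" points

err = max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n))
```

After convergence the remaining error is the discretization error of the second-difference stencil, O(h^2).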

Susmikanti, Mike [Center for Development of Nuclear Informatics, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)]; Dewayatna, Winter [Center for Nuclear Fuel Technology, National Nuclear Energy Agency of Indonesia PUSPIPTEK Area, Tangerang (Indonesia)]

2013-09-09

278

Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

NASA Astrophysics Data System (ADS)

This thesis presents new algorithms for low- and intermediate-level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple-scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion compared with the homogeneous scheme. In some cases, introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although under different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed-memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor.
Finally, learning algorithms are shown to be a viable technique for engineering computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well-known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm in numerical stability, automatic selection of parameters, and convergence properties.
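The perimeter-versus-area argument behind the near-100% efficiency claim can be sketched with a toy cost model: per-processor computation scales with the area of its region, halo communication with the perimeter. The cost constants are hypothetical, not measurements from the thesis:

```python
def parallel_efficiency(n, t_calc=1.0, t_comm=10.0):
    """Toy model for one processor owning an n x n region of the pyramid:
    computation scales with the area (n^2), halo exchange with the
    perimeter (4n). t_calc and t_comm are hypothetical cost constants."""
    work = t_calc * n * n
    comm = t_comm * 4 * n
    return work / (work + comm)
```

As the region per processor grows, the communication fraction shrinks like 1/n and efficiency approaches 1, matching the abstract's observation.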

Battiti, Roberto

1990-01-01

279

ICRP Publication 116 on 'Conversion coefficients for radiological protection quantities for external radiation exposures', provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter quantities continue to provide a good approximation of effective dose for photons, neutrons and electrons for the 'conventional' energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116. PMID:25144220

Petoussi-Henss, Nina; Bolch, Wesley E; Eckerman, Keith F; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria

2014-09-21

280

NASA Astrophysics Data System (ADS)

ICRP Publication 116 on ‘Conversion coefficients for radiological protection quantities for external radiation exposures’, provides fluence-to-dose conversion coefficients for organ-absorbed doses and effective dose for various types of external exposures (ICRP 2010 ICRP Publication 116). The publication supersedes the ICRP Publication 74 (ICRP 1996 ICRP Publication 74, ICRU 1998 ICRU Report 57), including new particle types and expanding the energy ranges considered. The coefficients were calculated using the ICRP/ICRU computational phantoms (ICRP 2009 ICRP Publication 110) representing the reference adult male and reference adult female (ICRP 2002 ICRP Publication 89), together with a variety of Monte Carlo codes simulating the radiation transport in the body. Idealized whole-body irradiation from unidirectional and rotational parallel beams as well as isotropic irradiation was considered for a large variety of incident radiations and energy ranges. Comparison of the effective doses with operational quantities revealed that the latter quantities continue to provide a good approximation of effective dose for photons, neutrons and electrons for the ‘conventional’ energy ranges considered previously (ICRP 1996, ICRU 1998), but not at the higher energies of ICRP Publication 116.

Petoussi-Henss, Nina; Bolch, Wesley E.; Eckerman, Keith F.; Endo, Akira; Hertel, Nolan; Hunt, John; Menzel, Hans G.; Pelliccioni, Maurizio; Schlattl, Helmut; Zankl, Maria

2014-09-01

281

On computing a class of integrals basic to the F sub N method in radiative transfer

NASA Astrophysics Data System (ADS)

Methods for computing a class of integrals basic to the F sub N method in radiative transfer are discussed. Recursion relations are derived and used to develop an improved computational scheme for calculating these integrals accurately in high degree.

Garcia, R. D. M.; Siewert, C. E.

1992-08-01

282

Computer technology to assist states, tribes, and clients in the design of wellhead and source water protection areas for public water supply wells is being developed through two distinct SubTasks: (Sub task 1) developing a web-based wellhead decision support system, WellHEDSS, t...

283

ERIC Educational Resources Information Center

To a large extent the Southwest can be described as a rural area. Under these circumstances, programs for public understanding of technology become, first of all, exercises in logistics. In 1982, New Mexico State University introduced a program to inform teachers about computer technology. This program takes microcomputers into rural classrooms…

Amodeo, Luiza B.; Martin, Jeanette

284

Research on Assessment Methods for Urban Public Transport Development in China

In recent years, with the rapid increase in urban population, urban travel demands in Chinese cities have been increasing dramatically. As a result, developing comprehensive urban transport systems becomes an inevitable choice to meet the growing urban travel demands. In urban transport systems, public transport plays the leading role in promoting sustainable urban development. This paper aims to establish an assessment index system for the development level of urban public transport consisting of a target layer, a criterion layer, and an index layer. A review of existing literature shows that methods used in evaluating urban public transport structure are predominantly qualitative. To overcome this shortcoming, the fuzzy mathematics method is used to describe qualitative issues quantitatively, and AHP (analytic hierarchy process) is used to quantify experts' subjective judgment. The assessment model is established based on the fuzzy AHP. The weight of each index is determined through the AHP and the degree of membership of each index through the fuzzy assessment method, yielding the fuzzy synthetic assessment matrix. Finally, a case study is conducted to verify the rationality and practicability of the assessment system and the proposed assessment method.
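The AHP weighting step described above can be sketched with the common geometric-mean approximation to the principal eigenvector, plus Saaty's consistency ratio using the standard random index for a 3x3 matrix. The pairwise judgments below are invented for illustration:

```python
import math

# Hypothetical pairwise comparison matrix for three criteria
# (e.g., coverage vs. punctuality vs. comfort) on Saaty's 1-9 scale.
A = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   3.0],
     [1/5.0, 1/3.0, 1.0]]
n = len(A)

# Priority vector: normalized geometric means of the rows.
gm = [math.prod(row) ** (1.0 / n) for row in A]
w = [g / sum(gm) for g in gm]

# Consistency check: lambda_max from A.w, then CI and CR (RI = 0.58 for n = 3).
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58
```

A CR below 0.1 is the usual threshold for accepting the expert judgments as consistent.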

Zou, Linghong; Guo, Hongwei

2014-01-01

285

Matching wind turbine rotors and loads: Computational methods for designers

NASA Astrophysics Data System (ADS)

A comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications is reported. A method was developed to convert the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) it is decided how turbine power is to be governed to ensure the safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics are used to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach. The method leads to energy predictions and to insight into the modeled processes. A computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
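Step (4), combining a power curve with wind statistics, can be sketched as numerical integration of an idealized power curve against a Weibull wind-speed density. All turbine and wind parameters below are hypothetical:

```python
import math

def weibull_pdf(v, k=2.0, c=7.0):
    """Weibull wind-speed density (k: shape, c: scale in m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def power_kw(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_kw=50.0):
    """Idealized power curve: cubic rise between cut-in and rated speed,
    constant at rated power up to cut-out."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * (v ** 3 - cut_in ** 3) / (rated_v ** 3 - cut_in ** 3)

# Expected power (kW) by trapezoidal integration, then annual energy (kWh).
dv = 0.05
vs = [i * dv for i in range(int(30.0 / dv) + 1)]
ys = [power_kw(v) * weibull_pdf(v) for v in vs]
mean_power = sum(0.5 * (ys[i] + ys[i + 1]) * dv for i in range(len(ys) - 1))
annual_energy_kwh = mean_power * 8760.0
```

The ratio of mean power to rated power is the capacity factor, the usual figure of merit for a rotor-load match.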

Seale, J. B.

1983-04-01

286

Computational Methods for MicroRNA Target Prediction

MicroRNAs (miRNAs) have been identified as among the most important molecules that regulate gene expression in various organisms. miRNAs are short, 21–23 nucleotide-long, single-stranded RNA molecules that bind to 3' untranslated regions (3' UTRs) of their target mRNAs. In general, they silence the expression of their target genes via degradation of the mRNA or by translational repression. The expression of miRNAs, in turn, also varies in different tissues based on their functions. Predicting the targets of miRNAs by computational approaches is therefore important for understanding their effects on the regulation of gene expression. Various computational methods have been developed for miRNA target prediction, but the resulting lists of candidate target genes from different algorithms often do not overlap. It is crucial to refine the bioinformatics tools for more accurate predictions, just as it is important to validate the predicted target genes experimentally. PMID:25153283
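Most of the prediction algorithms mentioned start from seed matching: locating, in the 3' UTR, the reverse complement of the miRNA seed (nucleotides 2-8). A minimal sketch using the canonical let-7a sequence and an invented UTR fragment:

```python
def seed_sites(mirna, utr):
    """Return start positions in the UTR (RNA alphabet) that pair with the
    miRNA 7mer seed, i.e. that match its reverse complement."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]                                # positions 2-8 (1-based)
    site = "".join(comp[b] for b in reversed(seed))  # reverse complement
    k = len(site)
    return [i for i in range(len(utr) - k + 1) if utr[i:i + k] == site]

let7a = "UGAGGUAGUAGGUUGUAUAGUU"   # canonical let-7a miRNA
utr = "AAAGCUACCUCAGGU"            # invented UTR fragment containing one site
hits = seed_sites(let7a, utr)
```

Real tools layer site-context features (conservation, accessibility, pairing energy) on top of this matching step, which is where their candidate lists diverge.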

Ekimler, Semih; Sahin, Kaniye

2014-01-01

287

Method for computer-aided alignment of complex optical system

NASA Astrophysics Data System (ADS)

To make a complex optical system meet its design requirements, such as a space camera used in remote sensing or UVX photolithography, and especially for an off-axis all-reflective optical system, alignment technology is essential. In this paper, a method is presented. Based on the ideas of linearity instead of non-linearity and of difference quotients instead of differential quotients, a mathematical model for computer-aided alignment is proposed. This model includes the characteristics of the optical system, the wavefront error at its exit pupil, and the misalignments of the misaligned optical system. The self-compiled software was then compared with the alignment package of CODE V and proved considerably more effective. For a large-aperture, long-focal-length, off-axis three-mirror optical system, computer-aided alignment was successful: at λ = 632.8 nm, the wavefront error is 0.094 waves RMS in the middle field, 0.106 waves RMS at the +0.7 field, and 0.125 waves RMS at the -0.7 field.
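The "difference quotient instead of differential quotient" idea can be sketched as follows: perturb each alignment parameter, form a sensitivity matrix from finite differences of the wavefront, and solve the normal equations for the misalignments. The two-parameter linear "optical model" below is a stand-in, not the authors' ray trace:

```python
def wavefront(p):
    # Stand-in for a ray trace: wavefront samples as a function of two
    # misalignment parameters (decenter, tilt); linear for clarity.
    d, t = p
    return [0.8 * d + 0.1 * t, 0.2 * d + 0.9 * t, 0.5 * d - 0.3 * t]

true_misalign = [0.04, -0.02]
w_meas = wavefront(true_misalign)   # "measured" exit-pupil wavefront
w0 = wavefront([0.0, 0.0])          # nominal (aligned) system

# Sensitivity matrix by difference quotients: S[j][i] = dW_i / dp_j.
eps = 1e-3
S = []
for j in range(2):
    p = [0.0, 0.0]
    p[j] = eps
    S.append([(a - b) / eps for a, b in zip(wavefront(p), w0)])

# Normal equations (2x2), solved by Cramer's rule: (S S^T) x = S r.
r = [a - b for a, b in zip(w_meas, w0)]
ata = [[sum(S[j][i] * S[k][i] for i in range(3)) for k in range(2)]
       for j in range(2)]
atb = [sum(S[j][i] * r[i] for i in range(3)) for j in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
sol = [(atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det,
       (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det]
```

Because the model here is linear, the difference quotients are exact and the misalignments are recovered in one step; a real system iterates this solve.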

Yang, Xiaofei; Han, ChangYuan; Yu, Jingchi

2006-02-01

288

Density functional methods as computational tools in materials design

NASA Astrophysics Data System (ADS)

This article gives a brief overview of density functional theory and discusses two specific implementations: a numerical localized-basis approach (DMol) and the pseudopotential plane-wave method. Characteristic examples include Cu clusters, CO and NO dissociation on copper surfaces, Li-, K-, and O-endohedral fullerenes, tris-quaternary ammonium cations as zeolite templates, and oxygen defects in bulk SiO2. The calculations reveal the energetically favorable structures (estimated to be within ±0.02 Å of experiment), the energetics of geometric changes, and the electronic structures underlying the bonding mechanisms. A characteristic DMol calculation on a 128-node nCUBE 2 parallel computer shows a speedup of about 107 over a single processor. A plane-wave calculation on a unit cell with 64 silicon atoms using 1024 nCUBE 2 processors runs about five times faster than on a single-processor CRAY YMP.

Li, Y. S.; van Daelen, M. A.; Wrinn, M.; King-Smith, D.; Newsam, J. M.; Delley, B.; Wimmer, E.; Klitsner, T.; Sears, M. P.; Carlson, G. A.; Nelson, J. S.; Allan, D. C.; Teter, M. P.

1994-04-01

289

Computational and experimental methods to decipher the epigenetic code

A multi-layered set of epigenetic marks, including post-translational modifications of histones and methylation of DNA, is finely tuned to define the epigenetic state of chromatin in any given cell type under specific conditions. Recently, the knowledge about the combinations of epigenetic marks occurring in the genome of different cell types under various conditions is rapidly increasing. Computational methods were developed for the identification of these states, unraveling the combinatorial nature of epigenetic marks and their association to genomic functional elements and transcriptional states. Nevertheless, the precise rules defining the interplay between all these marks remain poorly characterized. In this perspective we review the current state of this research field, illustrating the power and the limitations of current approaches. Finally, we sketch future avenues of research illustrating how the adoption of specific experimental designs coupled with available experimental approaches could be critical for a significant progress in this area. PMID:25295054

de Pretis, Stefano; Pelizzola, Mattia

2014-01-01

290

1.1 This practice facilitates the interoperability of computed radiography (CR) imaging and data acquisition equipment by specifying image data transfer and archival storage methods in commonly accepted terms. This practice is intended to be used in conjunction with Practice E2339 on Digital Imaging and Communication in Nondestructive Evaluation (DICONDE). Practice E2339 defines an industrial adaptation of the NEMA Standards Publication titled Digital Imaging and Communications in Medicine (DICOM, see http://medical.nema.org), an international standard for image data acquisition, review, storage and archival storage. The goal of Practice E2339, commonly referred to as DICONDE, is to provide a standard that facilitates the display and analysis of NDE results on any system conforming to the DICONDE standard. Toward that end, Practice E2339 provides a data dictionary and a set of information modules that are applicable to all NDE modalities. This practice supplements Practice E2339 by providing information objec...

American Society for Testing and Materials. Philadelphia

2010-01-01

291

One-to-One Computing in Public Schools: Lessons from "Laptops for All" Programs

ERIC Educational Resources Information Center

The basic tenet of one-to-one computing is that the student and teacher have Internet-connected, wireless computing devices in the classroom and optimally at home as well. Also known as "ubiquitous computing," this strategy assumes that every teacher and student has her own computing device and obviates the need for moving classes to computer…

Abell Foundation, 2008

2008-01-01

292

Development of computational methods for heavy lift launch vehicles

NASA Technical Reports Server (NTRS)

The research effort has been focused on the development of an advanced flow solver for complex viscous turbulent flows with shock waves. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. A new computer program named CENS3D has been developed for viscous turbulent flows with discontinuities. Details of the code are described in Appendix A and Appendix B. With the developments of the numerical algorithm and dissipation model, the simulation of three-dimensional viscous compressible flows has become more efficient and accurate. The results of the research are expected to yield a direct impact on the design process of future liquid fueled launch systems.

Yoon, Seokkwan; Ryan, James S.

1993-01-01

293

Dynamically Adaptive Wavelet Collocation Method for Shock Computations

NASA Astrophysics Data System (ADS)

Most explicit TVD schemes make use of artificial viscosity to reduce oscillations and to avoid the stability requirements that an explicitly written dissipation term would impose when solving hyperbolic conservation equations. In this talk an adaptive wavelet collocation method for shock computation is described. The method for determining a shock's location is similar to Harten's multiresolution algorithm, but its implementation is more continuous. The presence of wavelet coefficients on the finest level of resolution indicates that the maximum allowed resolution has been reached and localized artificial viscous terms should be added to smooth the solution. The localized viscosity is constructed by creating a mask of the wavelet coefficients on the finest level that are greater than a given threshold parameter. The mask is smoothed to reduce oscillations that can be induced by spatial discontinuities in the second derivative. The main advantages of this technique are its generality and zero losses away from shocks. Since the viscosity is written explicitly, sonic points are no longer problematic and there is no need to track wind direction or introduce flux splitting. One- and two-dimensional examples are given and discussed.

Regele, Jonathan

2005-11-01

294

Computational intelligence methods for information understanding and information management

Wlodzislaw Duch (1,2), Norbert Jankowski (1), and Krzysztof Grabczewski (1). (1) Department of Informatics, Nicolaus Copernicus University, Torun, Poland; (2) Department of Computer Science, School of Computer Engineering

Jankowski, Norbert

295

Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
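For comparison with the sense-selection step in the claim, here is the classic simplified Lesk baseline (not the patented method): choose the sense whose gloss shares the most words with the context. The senses and glosses are invented for illustration:

```python
def simplified_lesk(context, senses):
    """senses: mapping sense-id -> gloss string. Returns the sense whose
    gloss overlaps the context in the most words (a classic WSD baseline)."""
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in senses.items():
        overlap = len(ctx & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

senses = {
    "bank.financial": "an institution that accepts deposits of money",
    "bank.river": "sloping land beside a river or other body of water",
}
choice = simplified_lesk("she deposited her money at the bank", senses)
```

The patented method goes further by mapping the chosen word sense onto an event class in a lexical database ontology; the overlap heuristic above only illustrates the disambiguation step itself.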

Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

2011-10-11

296

Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

The main (long-term) goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed using an objective function that encodes the expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis that proceeds from the observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the problem is computed. An application to oceanic flow based on sea surface temperature is presented.

Luttman, A.

2012-03-30

297

Non-unitary probabilistic quantum computing circuit and method

NASA Technical Reports Server (NTRS)

A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
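The repeat-until-success loop in the claim can be sketched for a diagonal non-unitary operator M = diag(1, 1/2), chosen for illustration. The unitary dilation rotates an ancilla so that the success measurement outcome applies M up to normalization; on failure, the circuit is applied again to the state obtained in the previous attempt, as the abstract describes:

```python
import math
import random

def attempt(psi, m, rng):
    """One application of the dilated unitary followed by ancilla measurement.
    Success leaves the state proportional to [m_i * psi_i]; failure leaves
    the complementary branch, and the loop tries again from there."""
    succ = [mi * a for mi, a in zip(m, psi)]
    fail = [math.sqrt(1.0 - mi * mi) * a for mi, a in zip(m, psi)]
    p_succ = sum(a * a for a in succ)          # amplitudes real here
    if rng.random() < p_succ:
        norm = math.sqrt(p_succ)
        return True, [a / norm for a in succ]
    norm = math.sqrt(1.0 - p_succ)
    return False, [a / norm for a in fail]

rng = random.Random(1)
m = [1.0, 0.5]                                 # diagonal non-unitary operator
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]     # initial single-qubit state
attempts, ok = 0, False
while not ok:
    attempts += 1
    ok, psi = attempt(psi, m, rng)
```

For this initial state the first-attempt success probability is |M psi|^2 = 0.625; the final state is always normalized, whichever attempt succeeds.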

Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)

2009-01-01

298

How to Delegate and Verify in Public: Verifiable Computation from Attribute-based Encryption

In the modern age of cloud computing and smartphones, asymmetry in computing power is the norm: weak client devices delegate computation to a large and powerful server (a "cloud", in modern parlance). Typically, the clients pay for the computation. One of the main security issues that arises in this setting is: how can the clients trust

299

A Robust Sensor Selection Method for P300 Brain-Computer Interfaces

A Robust Sensor Selection Method for P300 Brain-Computer Interfaces. H. Cecotti, B. Rivet; Université Lyon 1, Lyon, F-69000, France.

Paris-Sud XI, Université de

300

The Repeated Replacement Method: A Pure Lagrangian Meshfree Method for Computational Fluid Dynamics

In this paper we describe the repeated replacement method (RRM), a new meshfree method for computational fluid dynamics (CFD). RRM simulates fluid flow by modeling compressible fluids’ tendency to evolve towards a state of constant density, velocity, and pressure. To evolve a fluid flow simulation forward in time, RRM repeatedly “chops out” fluid from active areas and replaces it with new “flattened” fluid cells with the same mass, momentum, and energy. We call the new cells “flattened” because we give them constant density, velocity, and pressure, even though the chopped-out fluid may have had gradients in these primitive variables. RRM adaptively chooses the sizes and locations of the areas it chops out and replaces. It creates more and smaller new cells in areas of high gradient, and fewer and larger new cells in areas of lower gradient. This naturally leads to an adaptive level of accuracy, where more computational effort is spent on active areas of the fluid, and less effort is spent on inactive areas. We show that for common test problems, RRM produces results similar to other high-resolution CFD methods, while using a very different mathematical framework. RRM does not use Riemann solvers, flux or slope limiters, a mesh, or a stencil, and it operates in a purely Lagrangian mode. RRM also does not evaluate numerical derivatives, does not integrate equations of motion, and does not solve systems of equations. PMID:22866175
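The "flattening" step described above is conservation arithmetic: the replacement cell carries the summed mass, momentum, and energy of the chopped-out fluid, and its constant pressure follows from the internal energy left after the bulk kinetic energy is removed. A sketch assuming an ideal gas (the cell values are invented):

```python
GAMMA = 1.4  # ideal-gas ratio of specific heats (assumed)

def flatten(cells):
    """cells: list of (mass, momentum, total_energy, volume) tuples.
    Returns the constant (density, velocity, pressure) of the new cell,
    chosen so that total mass, momentum, and energy are conserved."""
    M = sum(c[0] for c in cells)
    P = sum(c[1] for c in cells)
    E = sum(c[2] for c in cells)
    V = sum(c[3] for c in cells)
    v = P / M                        # conserved momentum fixes the velocity
    e_int = E - 0.5 * M * v * v      # internal energy left after bulk KE
    return M / V, v, (GAMMA - 1.0) * e_int / V

cells = [(1.0, 0.5, 2.0, 1.0), (2.0, 2.5, 5.0, 1.5)]
rho, v, p = flatten(cells)
```

Note that flattening gradients necessarily converts some kinetic energy into internal energy, which is how the method dissipates without an explicit viscosity term.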

Walker, Wade A.

2012-01-01

301

Improved computational methods for simulating inertial confinement fusion

NASA Astrophysics Data System (ADS)

This dissertation describes the development of two multidimensional Lagrangian codes for simulating inertial confinement fusion (ICF) on structured meshes. The first is DRACO, a production code primarily developed by the Laboratory for Laser Energetics. Several significant new capabilities were implemented, including the ability to model radiative transfer using Implicit Monte Carlo [Fleck et al., JCP 8, 313 (1971)]. DRACO, originally used only in 2D cylindrical geometry, was also extended to operate in 3D Cartesian geometry on hexahedral meshes; this included implementing thermal conduction and a flux-limited multigroup diffusion model for radiative transfer. Diffusion equations are solved by extending the 2D Kershaw method [Kershaw, JCP 39, 375 (1981)] to three dimensions. The second radiation-hydrodynamics code developed as part of this thesis is Cooper, a new 3D code which operates on structured hexahedral meshes. Cooper supports the compatible hydrodynamics framework [Caramana et al., JCP 146, 227 (1998)] to obtain round-off-error levels of global energy conservation. This level of energy conservation is maintained even when two-temperature thermal conduction, ion/electron equilibration, and multigroup-diffusion-based radiative transfer are active. Cooper is parallelized using domain decomposition and photon energy group decomposition. The Mesh Oriented datABase (MOAB) computational library is used to exchange information between processes when domain decomposition is used. Cooper's performance is analyzed through direct comparisons with DRACO. Cooper also contains a method for preserving spherical symmetry during target implosions [Caramana et al., JCP 157, 89 (1999)]. Several deceleration-phase implosion simulations were used to compare instability growth using traditional hydrodynamics and compatible hydrodynamics with/without symmetry modification.
These simulations demonstrate increased symmetry preservation errors when traditional hydrodynamics is used. The symmetry preservation errors are not as significant when physical instability growth dominates numerical instability growth. In this case, traditional and compatible hydrodynamics produce similar results.

Fatenejad, Milad

302

Computational methods for microRNA target prediction.

MicroRNAs (miRNAs) are important players in gene regulation. The final and perhaps most important step in their regulatory pathway is targeting: the binding of the miRNA to the mature RNA via the RNA-induced silencing complex. Expression patterns of miRNAs are highly specific with respect to external stimuli, developmental stage, or tissue. This is used to diagnose diseases such as cancer, in which the expression levels of miRNAs are known to change considerably. Newly identified miRNAs are increasing in number with every new release of miRBase, the main online database providing miRNA sequences and annotation. Many of these newly identified miRNAs do not yet have identified targets. This is especially the case in animals, where the miRNA does not bind to its target as perfectly as it does in plants. Valid targets need to be identified for miRNAs in order to properly understand their role in cellular pathways. Experimental methods for target validation are difficult, expensive, and time consuming. Considering all these facts, accurate computational miRNA target predictions are of crucial importance. There are many proposed methods and algorithms available for predicting targets of miRNAs, but only a few have been developed into independently available tools and software. There are also databases which collect and store information regarding predicted miRNA targets. Current approaches to miRNA target prediction produce a huge number of false positive results and an unknown number of false negatives, and thus the need for better approaches is ever more evident. This chapter aims to give some detail about the current tools and approaches used for miRNA target prediction, provides some grounds for their comparison, and outlines a possible future. PMID:24272439

Hamzeiy, Hamid; Allmer, Jens; Yousef, Malik

2014-01-01

303

Recent advances in computational structural reliability analysis methods

NASA Technical Reports Server (NTRS)

The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
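The contrast with safety factors can be made concrete with the simplest reliability model: a normal resistance R against a normal load S, where the failure probability has a closed form via the reliability index beta and can be checked by Monte Carlo. The distribution parameters are invented:

```python
import math
import random

# Invented capacity (R) and demand (S) distributions.
mu_r, sd_r = 5.0, 0.5
mu_s, sd_s = 3.0, 0.5

# Reliability index and exact failure probability P(R - S < 0)
# for independent normals: beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2).
beta = (mu_r - mu_s) / math.hypot(sd_r, sd_s)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))

# Monte Carlo check of the same probability.
rng = random.Random(42)
n = 200_000
fails = sum(rng.gauss(mu_r, sd_r) < rng.gauss(mu_s, sd_s) for _ in range(n))
pf_mc = fails / n
```

Unlike a safety factor, beta quantifies the margin in units of combined uncertainty, which is exactly the information a deterministic design never exposes.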

Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

1993-01-01

304

16.901 Computational Methods in Aerospace Engineering, Spring 2003

Introduction to computational techniques arising in aerospace engineering. Applications drawn from aerospace structures, aerodynamics, dynamics and control, and aerospace systems. Techniques include: numerical integration ...

Darmofal, David L.

305

Applications of Automatic Mesh Generation and Adaptive Methods in Computational

tissues), biomechanical modeling (biomechanics, ergonomics, and hemodynamics), molecular biology of computational medicine, but to investigate accuracy issues in bioelectric simulations -- issues that impact most

Utah, University of

306

Computational methods for deciphering genomic structures in prokaryotes.

High-throughput sequencing technologies have generated huge amounts of genomic data. This wealth of genomic data provides computational biologists unprecedented opportunities to unveil the biological machinery…

Che, Dongsheng

2008-01-01

307

A comparison of shielding calculation methods for multi-slice computed tomography (CT) systems.

Currently in the UK, shielding calculations for computed tomography (CT) systems are based on the BIR-IPEM (British Institute of Radiology and Institute of Physics and Engineering in Medicine) working group publication from 2000. Concerns have been raised internationally regarding the accuracy of the dose plots on which this method depends and the effect that new scanner technologies may have. Additionally, more recent shielding methods have been proposed by the NCRP (National Council on Radiation Protection) in the USA. Thermoluminescent dosimeters (TLDs) were placed at different positions in three CT scanner rooms for several weeks before being processed. Patient workload and dose data (DLP, the dose-length product, and mAs, the tube current-time product) were collected for this period. Individual dose data were available for more than 95% of patients scanned; the remainder were estimated. The patient workload data were used to calculate the expected scattered radiation at each TLD location by both the NCRP and BIR-IPEM methods, and the results were compared to the measured scattered radiation. Calculated scattered air kerma, and hence the minimum required lead shielding, were frequently overestimated: on average the calculated value was almost five times the measured scattered air kerma. PMID:19029585

Cole, J A; Platten, D J

2008-12-01

308

A study of public health indicators of Morang, Nepal by the lot quality assurance sampling method.

This article presents the findings on public health status indicators of Morang District as studied by the Lot Quality Assurance Sampling (LQAS) method in 2006. The contraceptive prevalence rate (CPR) and the proportion of women receiving antenatal care from health workers were 42.0% and 46.0%, respectively. A total of 80.0% of mothers received iron tablets, whereas 55.0% of mothers reported taking vitamin A during their last pregnancy. Nearly three-fifths (57.0%) of deliveries were conducted by health workers. Thirty-one percent of mothers began breastfeeding within 1 hour of their last delivery. These figures were higher than in previous years. PMID:17899962
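An LQAS survey classifies a "lot" (here, a health area) as acceptable or not from a small sample and a decision rule. The sketch below computes the acceptance probability for a design often cited in the LQAS literature (n = 19, decision rule d = 13 for an 80% coverage target); treat the specific numbers as illustrative rather than those used in the Morang study:

```python
from math import comb

def lqas_accept(n, d, p):
    """Probability that at least d of n sampled individuals are
    'covered' when true coverage is p, i.e. the chance the lot is
    judged acceptable under the decision rule d."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d, n + 1))

n, d = 19, 13
# High true coverage should usually pass; low coverage should usually fail.
print(round(lqas_accept(n, d, 0.80), 3),
      round(lqas_accept(n, d, 0.50), 3))
```

The two printed probabilities are the operating characteristics of the design: the first is the chance of correctly accepting an area at the 80% target, the second the chance of wrongly accepting an area with only 50% coverage.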

Subba, Nawa Raj; Gurung, Gagan

2007-06-01

309

Students' Attitudes towards Control Methods in Computer-Assisted Instruction.

ERIC Educational Resources Information Center

Describes study designed to investigate dental students' attitudes toward computer-assisted teaching as applied in programs for oral radiology in Denmark. Programs using personal computers and slide projectors with varying degrees of learner and teacher control are described, and differences in attitudes between male and female students are…

Hintze, Hanne; And Others

1988-01-01

310

ACM Journal of Educational Resources in Computing, Vol. 7, No. 3, Art. 2. Application in the domain of cyber security education. Categories and Subject Descriptors: I.6 [Simulation]: Information Systems - Animation; Evaluation/Methodology; K.3 [Computers and Education]: Computer Uses

311

Federal Register 2010, 2011, 2012, 2013, 2014

...SERVICES Administration for Children and Families Privacy Act of...Law 100-503; Notice of a Computer Matching Program AGENCY: Office...System (PARIS) notice of a computer matching program between the...Public Law 100-503, the Computer Matching and Privacy...

2012-04-13

312

Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

ERIC Educational Resources Information Center

Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
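The idea of locating equilibrium by minimizing total Gibbs free energy under a material balance can be sketched for a single ideal reaction A ⇌ B. The ΔG° value and conditions below are made up for illustration; the article's program handles general multicomponent systems:

```python
import math

R, T = 8.314, 298.15     # gas constant J/(mol·K), temperature K
dG0 = -2000.0            # standard free energy of A -> B, J/mol (made up)

def gibbs(xi):
    """Total free energy (relative) of a 1 mol ideal A/B mixture
    at reaction extent xi (moles of B); mu°_A taken as 0."""
    a, b = 1.0 - xi, xi
    return a * R * T * math.log(a) + b * (dG0 + R * T * math.log(b))

# Brute-force minimization over the physically allowed range (0, 1);
# the material balance a + b = 1 is built into gibbs().
grid = [i / 100000 for i in range(1, 100000)]
xi_min = min(grid, key=gibbs)

# The minimizer must reproduce the equilibrium constant K = exp(-dG0/RT).
K_analytic = math.exp(-dG0 / (R * T))
K_numeric = xi_min / (1.0 - xi_min)
print(round(K_analytic, 3), round(K_numeric, 3))
```

Setting the derivative of `gibbs` to zero recovers the familiar condition mu_B = mu_A, which is why the grid minimum lands on the equilibrium composition.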

Heald, Emerson F.

1978-01-01

313

Computational Solid Mechanics using a Vertex-based Finite Volume Method

Computational Solid Mechanics using a Vertex-based Finite Volume Method. G. A. Taylor, C. Bailey ... using finite volume (FV) methods for computational solid mechanics (CSM). These methods are proving ... will be given. Key Words: Vertex-based, Finite Volume, Solid Mechanics, Elasto-plastic. 1. Introduction. Over

Taylor, Gary

314

A method for computing the leading-edge suction in a higher-order panel method

NASA Technical Reports Server (NTRS)

Experimental data show that the phenomenon of a separation-induced leading-edge vortex is influenced by the wing thickness and the shape of the leading edge. Both thickness and leading-edge shape (rounded rather than pointed) delay the formation of a vortex. Existing computer programs used to predict the effect of a leading-edge vortex do not include a procedure for determining whether or not a vortex actually exists. Studies under NASA Contract NAS1-15678 have shown that vortex development can be predicted by using the relationship between the leading-edge suction coefficient and the parabolic nose drag. The linear-theory code FLEXSTAB was used to calculate the leading-edge suction coefficient. This report describes the development of a method for calculating leading-edge suction using the capabilities of higher-order panel methods (exact boundary conditions). For a two-dimensional case, numerical methods were developed using the doublet strength and downwash distribution along the chord. A Gaussian quadrature formula that directly incorporates the logarithmic singularity in the downwash distribution, at all panel edges, was found to be the best method.

Ehlers, F. E.; Manro, M. E.

1984-01-01

315

Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing

Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact

Cong Wang; Qian Wang; Kui Ren; Wenjing Lou

2010-01-01

316

Multi-centred mixed-methods PEPFAR HIV care & support public health evaluation: study protocol

Background: A public health response is essential to meet the multidimensional needs of patients and families affected by HIV disease in sub-Saharan Africa. In order to appraise current provision of HIV care and support in East Africa, and to provide evidence-based direction to future care programming, a Public Health Evaluation was commissioned by the PEPFAR programme of the US Government. Methods/Design: This paper describes the two-phase international mixed-methods study protocol, utilising longitudinal outcome measurement, surveys, patient and family qualitative interviews and focus groups, staff qualitative interviews, health economics, and document analysis. Aim 1: To describe the nature and scope of HIV care and support in two African countries, including the types of facilities available, clients seen, and availability of specific components of care [Study Phase 1]. Aim 2: To determine patient health outcomes over time and principal cost drivers [Study Phase 2]. The study objectives are as follows. 1) To undertake a cross-sectional survey of service configuration and activity by sampling 10% of the facilities being funded by PEPFAR to provide HIV care and support in Kenya and Uganda (Phase 1) in order to describe care currently provided, including pharmacy drug reviews to determine availability and supply of essential drugs in HIV management. 2) To conduct patient focus group discussions at each of these facilities (Phase 1) to determine care received. 3) To undertake a longitudinal prospective study of 1200 patients who are newly diagnosed with HIV, or patients with HIV who present with a new problem, attending PEPFAR care and support services. Data collection includes self-reported quality of life, core palliative outcomes and components of care received (Phase 2). 4) To conduct qualitative interviews with staff, patients and carers in order to explore and understand service issues and care provision in more depth (Phase 2). 5) To undertake document analysis to appraise the clinical care procedures at each facility (Phase 2). 6) To determine principal cost drivers including staff, overhead and laboratory costs (Phase 2). Discussion: This novel mixed-methods protocol will permit transparent presentation of the subsequent results, and offers a substantive model of protocol design to measure and integrate key activities and outcomes that underpin a public health approach to disease management in a low-income setting. PMID:20920241

2010-01-01

317

The Safe Drinking Water Act of 1974, as amended in 1996, gave each State the responsibility of developing a Source-Water Assessment Plan (SWAP) that is designed to protect public-water supplies from contamination. Each SWAP must include three elements: (1) a delineation of the source-water protection area, (2) an inventory of potential sources of contaminants within the area, and (3) a determination of the susceptibility of the public-water supply to contamination from the inventoried sources. The Indiana Department of Environmental Management (IDEM) was responsible for preparing a SWAP for all public-water supplies in Indiana, including about 2,400 small public ground-water supplies that are designated transient, non-community (TNC) supplies. In cooperation with IDEM, the U.S. Geological Survey compiled information on conditions near the TNC supplies and helped IDEM complete source-water assessments for each TNC supply. The delineation of a source-water protection area (called the assessment area) for each TNC ground-water supply was defined by IDEM as a circular area enclosed by a 300-foot radius centered at the TNC supply well. Contaminants of concern (COCs) were defined by IDEM as any of the 90 contaminants for which the U.S. Environmental Protection Agency has established primary drinking-water standards. Two of these, nitrate as nitrogen and total coliform bacteria, are Indiana State-regulated contaminants for TNC water supplies. IDEM representatives identified potential point and nonpoint sources of COCs within the assessment area, and computer database retrievals were used to identify potential point sources of COCs in the area outside the assessment area. Two types of methods, subjective and subjective hybrid, were used in the SWAP to determine susceptibility to contamination.
Subjective methods involve decisions based upon professional judgment, prior experience, and (or) the application of a fundamental understanding of processes without the collection and analysis of data for a specific condition. Subjective hybrid methods combine subjective methods with quantitative hydrologic analyses. The subjective methods included an inventory of potential sources and associated contaminants, and a qualitative description of the inherent susceptibility of the area around the TNC supply. The description relies on a classification of the hydrogeologic and geomorphic characteristics of the general area around the TNC supply in terms of its surficial geology, regional aquifer system, the occurrence of fine- and coarse-grained geologic materials above the screen of the TNC well, and the potential for infiltration of contaminants. The subjective hybrid method combined the results of a logistic regression analysis with a subjective analysis of susceptibility and a subjective set of definitions that classify the thickness of fine-grained geologic materials above the screen of a TNC well in terms of impedance to vertical flow. The logistic regression determined the probability of elevated concentrations of nitrate as nitrogen (greater than or equal to 3 milligrams per liter) in ground water associated with specific thicknesses of fine-grained geologic materials above the screen of a TNC well. In this report, fine-grained geologic materials are referred to as a geologic barrier that generally impedes vertical flow through an aquifer. A geologic barrier was defined to be thin for fine-grained materials between 0 and 45 feet thick, moderate for materials between 45 and 75 feet thick, and thick if the fine-grained materials were greater than 75 feet thick. A flow chart was used to determine the susceptibility rating for each TNC supply. 
The flow chart indicated a susceptibility rating using (1) concentrations of nitrate as nitrogen and total coliform bacteria reported from routine compliance monitoring of the TNC supply, (2) the presence or absence of potential sources of regulated contaminants (nitrate as nitrogen and coliform bac

Arihood, Leslie D.; Cohen, David A.

2006-01-01

318

3D modeling method for computer animate based on modified weak structured light method

NASA Astrophysics Data System (ADS)

A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while precision is not as critical a factor. In this paper, a new, inexpensive 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. For an ordinary weak-structured-light configuration, one or two reference planes are required, and the shadows on these planes must be tracked during scanning, which undermines the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of scannable objects is widely expanded. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints into a full description of the object, and after a series of operations, a NURBS surface model is generated. A complex toy bear is used to verify the efficiency of the method; errors range from 0.7783 mm to 1.4326 mm compared with ground-truth measurements.
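The rigid-alignment step inside each ICP iteration has a closed form once correspondences are fixed. A 2D sketch of that single step follows (the paper's two-stage ICP also solves the harder correspondence problem, omitted here; point values are made up):

```python
import math

def align_2d(src, dst):
    """One ICP update with known correspondences: the rotation angle
    and translation minimizing sum |R p + t - q|^2 over point pairs."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n; cdy = sum(q[1] for q in dst) / n
    # Cross-covariance terms of the centred point sets.
    sxx = sxy = syx = syy = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - csx, py - csy
        bx, by = qx - cdx, qy - cdy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)   # closed form in 2D
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)             # t = q_bar - R p_bar
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

# Recover a known rotation/translation from noiseless correspondences.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
th, (tx0, ty0) = math.radians(30), (0.5, -1.0)
moved = [(math.cos(th) * x - math.sin(th) * y + tx0,
          math.sin(th) * x + math.cos(th) * y + ty0) for x, y in pts]
theta, t = align_2d(pts, moved)
print(round(math.degrees(theta), 3), round(t[0], 3), round(t[1], 3))
```

Full ICP alternates this solve with nearest-neighbour matching until the alignment error stops improving; in 3D the rotation comes from an SVD of the cross-covariance matrix instead of a single angle.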

Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

2010-11-01

319

Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. PMID:21807120

Reeder, Blaine; Turner, Anne M

2011-01-01

320

A METHOD OF ASSESSING USERS' VS MANAGERS' PERCEPTIONS OF SAFETY AND SECURITY PROBLEMS IN PUBLIC BEACH PARK SETTINGS. A Thesis by ROBERT JAMES SCOTT STEELE. Submitted to the Graduate College of Texas A&M University in Partial Fulfillment of the Requirements for the Degree of MASTER OF SCIENCE, August 1986. Major Subject: Recreation and Resource Development.

Steele, Robert James Scott

2012-06-07

321

Astronomical refraction: Computational methods for all zenith angles

NASA Technical Reports Server (NTRS)

It is shown that the problem of computing astronomical refraction for any value of the zenith angle may be reduced to a simple, nonsingular, numerical quadrature when the proper choice is made for the independent variable of integration.
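The abstract's central point, that a proper choice of integration variable turns a singular integrand into a harmless quadrature, can be shown on a model integral with the same kind of endpoint blow-up a refraction integral exhibits near 90 degrees zenith angle. The substitution below is generic, not the specific variable change used by Auer and Standish:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Model problem: integral of 1/sqrt(x) on (0, 1], exact value 2,
# with an inverse-square-root singularity at the endpoint.
f = lambda x: 1.0 / math.sqrt(x)

# Substituting x = u^2 (dx = 2u du) removes the singularity entirely:
# the transformed integrand f(u^2) * 2u is the constant 2.
g = lambda u: 2.0

naive = simpson(f, 1e-12, 1.0)       # blows up near the endpoint
transformed = simpson(g, 0.0, 1.0)   # trivially exact after substitution
print(round(transformed, 10))
```

The naive quadrature is off by orders of magnitude because the endpoint sample dominates the sum, while the transformed integral is computed exactly; the refraction integral behaves analogously once the right integration variable is chosen.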

Auer, L. H.; Standish, E. M.

2000-01-01

322

Computational Fluid Dynamics. [numerical methods and algorithm development

NASA Technical Reports Server (NTRS)

This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

1992-01-01

323

Leading Computational Methods on Scalar and Vector HEC Platforms

The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end computing (HEC) platforms, primarily because of their generality, scalability, and cost effectiveness. However, the growing gap between sustained and peak performance for full-scale scientific applications on conventional supercomputers has become a major concern in high performance computing, requiring significantly larger systems and application scalability than

Leonid Oliker; Jonathan Carter; Michael Wehner; Andrew Canning; Stéphane Ethier; Arthur Mirin; David Parks; Patrick H. Worley; Shigemune Kitawaki; Yoshinori Tsuda

2005-01-01

324

A MEASURE-THEORETIC COMPUTATIONAL METHOD FOR INVERSE SENSITIVITY PROBLEMS I: METHOD AND ANALYSIS

We consider the inverse sensitivity analysis problem of quantifying the uncertainty of inputs to a deterministic map given specified uncertainty in a linear functional of the output of the map. This is a version of the model calibration or parameter estimation problem for a deterministic map. We assume that the uncertainty in the quantity of interest is represented by a random variable with a given distribution, and we use the law of total probability to express the inverse problem for the corresponding probability measure on the input space. Assuming that the map from the input space to the quantity of interest is smooth, we solve the generally ill-posed inverse problem by using the implicit function theorem to derive a method for approximating the set-valued inverse that provides an approximate quotient space representation of the input space. We then derive an efficient computational approach to compute a measure theoretic approximation of the probability measure on the input space imparted by the approximate set-valued inverse that solves the inverse problem. PMID:23637467
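The pull-back construction can be illustrated with a one-dimensional toy map where the inverse is single-valued. The map, the output distribution, and the sample count are invented for illustration; the paper's method handles the general ill-posed case in which the inverse is set-valued and a quotient-space representation is needed:

```python
import random

# Forward map from input lambda to quantity of interest q.
Q = lambda lam: lam ** 2           # smooth and monotone on [0, 1]

# Specified uncertainty on the output: q ~ Uniform(0, 1).
rng = random.Random(1)
samples_q = [rng.random() for _ in range(100_000)]

# Pull each output sample back through the (here single-valued)
# inverse Q^{-1}(q) = sqrt(q) to obtain a measure on the input space.
samples_lam = [q ** 0.5 for q in samples_q]

# The induced density on lambda is 2*lambda, whose mean is 2/3;
# the sample mean should approximate that value.
mean_lam = sum(samples_lam) / len(samples_lam)
print(round(mean_lam, 3))
```

Even this trivial case shows the key phenomenon: uncertainty specified on the output induces a non-uniform measure on the input, which is what the measure-theoretic approximation in the paper computes for maps without a usable closed-form inverse.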

Breidt, J.; Butler, T.; Estep, D.

2012-01-01

325

A Spectral Time-Domain Method for Computational Electrodynamics

Proposed by Kane Yee over forty years ago, the finite-difference time-domain (FDTD) method has been widely used for solving the equations (1). Nonetheless, it remains a widely used method to this day, and has inspired a host of related

Lambers, James

326

FINITE VOLUME METHODS APPLIED TO THE COMPUTATIONAL MODELLING OF WELDING PHENOMENA

FINITE VOLUME METHODS APPLIED TO THE COMPUTATIONAL MODELLING OF WELDING PHENOMENA. Gareth A. Taylor (Taylor@brunel.ac.uk). ABSTRACT: This paper presents the computational modelling of welding phenomena within a versatile numerical framework involving Computational Fluid Dynamics (CFD) and Computational Solid Mechanics (CSM). With regard to the CFD modelling of the weld pool fluid dynamics, heat

Taylor, Gary

327

Mixing-Plane Method for Flutter Computation in Multi-stage Turbomachines

Mixing-Plane Method for Flutter Computation in Multi-stage Turbomachines. Roy Culver and Feng Liu. ... to perform flutter analysis on a single-stage transonic compressor. The turbomachine considered is composed ... computations. Forced-motion flutter computations are performed for both the isolated compressor

Liu, Feng

328

On the error of computing ab + cd using Cornea, Harrison and Tang's method

On the error of computing ab + cd using Cornea, Harrison and Tang's method. Jean-Michel Muller, CNRS, 2013. Abstract: The method analysed here was introduced by Cornea, Harrison and Tang in their book Scientific Computing on the Itanium [1]. Cornea et al

Paris-Sud XI, UniversitÃ© de

329

SUBSPACE METHODS AND EQUILIBRATION IN COMPUTER VISION Matthias Muhlich and Rudolf Mester

SUBSPACE METHODS AND EQUILIBRATION IN COMPUTER VISION. Matthias Mühlich and Rudolf Mester, Germany. (Muehlich|Mester)@iap.uni-frankfurt.de. ABSTRACT: Many computer vision problems (e.g. homography matrix estimation) ... 1. INTRODUCTION. The mathematical core of numerous computer vision problems

Mester, Rudolf

330

Enforcing Trust-based Intrusion Detection in Cloud Computing Using Algebraic Methods

Enforcing Trust-based Intrusion Detection in Cloud Computing Using Algebraic Methods. Amira Bradai. ... A trust-based intrusion detection scheme for hybrid cloud computing is proposed. We consider a trust metric based on honesty and cooperation ... Keywords: intrusion detection, Perron-Frobenius, cloud computing, hybrid execution, false alarms, security scores.

Paris-Sud XI, UniversitÃ© de

331

Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows

NASA Technical Reports Server (NTRS)

This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.

Herrick, Gregory P.; Chen, Jen-Ping

2012-01-01

332

Tracking Replicability as a Method of Post-Publication Open Evaluation

Recent reports have suggested that many published results are unreliable. To increase the reliability and accuracy of published papers, multiple changes have been proposed, such as changes in statistical methods. We support such reforms. However, we believe that the incentive structure of scientific publishing must change for such reforms to be successful. Under the current system, the quality of individual scientists is judged on the basis of their number of publications and citations, with journals similarly judged via numbers of citations. Neither of these measures takes into account the replicability of the published findings, as false or controversial results are often particularly widely cited. We propose tracking replications as a means of post-publication evaluation, both to help researchers identify reliable findings and to incentivize the publication of reliable results. Tracking replications requires a database linking published studies that replicate one another. As any such database is limited by the number of replication attempts published, we propose establishing an open-access journal dedicated to publishing replication attempts. Data quality of both the database and the affiliated journal would be ensured through a combination of crowd-sourcing and peer review. As reports in the database are aggregated, ultimately it will be possible to calculate replicability scores, which may be used alongside citation counts to evaluate the quality of work published in individual journals. In this paper, we lay out a detailed description of how this system could be implemented, including mechanisms for compiling the information, ensuring data quality, and incentivizing the research community to participate. PMID:22403538

Hartshorne, Joshua K.; Schachner, Adena

2011-01-01

333

The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances

NASA Technical Reports Server (NTRS)

In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.
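Graph-isomorphism-style invariants of the kind applied here to chemical structures can be sketched with Weisfeiler-Lehman colour refinement, a standard invariant chosen for brevity rather than the Ulam index itself; note it is a necessary but not sufficient test for isomorphism:

```python
from collections import Counter

def wl_signature(adj, rounds=3):
    """Weisfeiler-Lehman colour refinement: maps a graph (adjacency
    dict) to a label multiset that is identical for isomorphic graphs,
    regardless of how the vertices are named."""
    colors = {v: str(len(nbrs)) for v, nbrs in adj.items()}  # degrees
    for _ in range(rounds):
        # Refine each colour by the sorted colours of the neighbours.
        colors = {
            v: colors[v] + "|" + ",".join(sorted(colors[u] for u in adj[v]))
            for v in adj
        }
    return tuple(sorted(Counter(colors.values()).items()))

# Two drawings of the same 4-cycle with different vertex names,
# plus a 4-vertex path as a structurally different graph.
g1 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
g2 = {"a": ["c", "d"], "c": ["a", "b"], "b": ["c", "d"], "d": ["b", "a"]}
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

print(wl_signature(g1) == wl_signature(g2))   # isomorphic pair
print(wl_signature(g1) == wl_signature(path)) # structurally different
```

Matching signatures prove nothing by themselves (some non-isomorphic graphs collide), but differing signatures certify non-isomorphism cheaply, which is exactly how such invariants are used to screen candidate chemical substances.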

Beltran, Adriana; Salvador, James

1997-01-01

334

Methods for design and evaluation of integrated hardware-software systems for concurrent computation

NASA Technical Reports Server (NTRS)

Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) a sparse matrix iterative solver in PISCES Fortran; (5) an image processing application of PISCES; and (6) a formal model of concurrent computation under development.

Pratt, T. W.

1985-01-01

335

ERIC Educational Resources Information Center

Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

Siu, Kin Wai Michael; Lam, Mei Seung

2012-01-01

336

Toward dynamic and attribute based publication, discovery and selection for cloud computing

Cloud computing is an emerging paradigm where computing resources are offered over the Internet as scalable, on-demand (Web) services. While cloud vendors have concentrated their efforts on the improvement of performance, resource consumption and scalability, other cloud characteristics have been neglected. On the one hand cloud service providers face difficult problems of publishing services that expose resources, and on the

Andrzej Goscinski; Michael Brock

2010-01-01

337

NASA Astrophysics Data System (ADS)

Given the recent emphasis and significant expenditures on technology as a tool in educational reform, policymakers, educators, and taxpayers are seeking accountability in terms of evaluation of its impact. With a view to investigating how the presence of computers in the classroom has affected the process of teaching and learning, this study aims to determine whether and how computer use by public middle school students in the science classroom might facilitate the individualization of students' instructional experiences. Questionnaires from 50 middle school science teachers located in 20 Manhattan public schools were collected to provide background information on each teacher's teaching philosophy, teaching practices, attitude toward technology, technology skills, and technology use in the science classroom. Questionnaires from 673 students of these teachers provided information regarding the students' computer use and skills and addressed issues of classroom environment deemed to be indicators of individualization of instruction. A classroom observation instrument was used to quantitatively track how 191 of these students interacted and worked with peers, the teacher, and resources in the classroom. The relationships between degree of computer use and the indicators of individualization of instruction were investigated using multilevel statistics, accounting for the clustering effect caused by students being grouped together in classrooms, to provide a more reliable analysis than traditional single level, fixed effects models. Random intercept analyses allowed an investigation into the mediating effects of teacher and classroom variables on the various outcomes. 
An increase in computer use was found to be associated with changes in certain aspects of the learning environment: fewer but more protracted verbal interactions in the classroom; more one-on-one interactions among students and between individual students and the teacher; more time spent working independently; more time spent working on assignments that varied according to the student's interests; fewer shifts in activity during a given time period; greater flexibility for students to work at their own pace; use of a wider range of resources; and greater student initiative in selecting resources to use.

Hollands, Fiona Mae

338

Performance of particle in cell methods on highly concurrent computational architectures

Particle-in-cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo-type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for

M. F. Adams; S. Ethier; N. Wichmann

2007-01-01

339

Quasi-Monte Carlo methods for computing flow in random porous media

Quasi-Monte Carlo methods for computing flow in random porous media. I. G. Graham, F. Y. Kuo, D. ... (http://www.bath.ac.uk/math-sci/BICS) ... where classical Monte Carlo methods with random sampling are currently the method of choice

Burton, Geoffrey R.

340

Multi-Level iterative methods in computational plasma physics

Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods, and large time steps can challenge their robustness. To meet these challenges, the authors are developing a hybrid approach in which multigrid methods are used as preconditioners for Krylov-subspace iterative methods such as conjugate gradients or GMRES. For nonlinear problems, they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for the application of these multilevel iterative methods to the field solves in implicit-moment-method PIC, to multidimensional nonlinear Fokker-Planck problems, and to initial efforts in particle MHD.
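The hybrid idea, a cheap approximate solver used as a preconditioner inside a Krylov iteration, can be sketched with preconditioned conjugate gradients on a 1D Poisson "field solve". A Jacobi (diagonal) preconditioner stands in for the multigrid V-cycle, and the problem size is illustrative:

```python
def pcg(matvec, b, precond, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for SPD systems.  `precond`
    plays the role of the multigrid cycle; here it will be a simple
    diagonal scaling, kept cheap for the sketch."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x for x = 0
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            return x, it + 1
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, maxit

# 1D Poisson: tridiagonal [-1, 2, -1] operator, applied matrix-free.
n = 50
def A(v):
    out = []
    for i in range(n):
        s = 2.0 * v[i]
        if i > 0: s -= v[i - 1]
        if i < n - 1: s -= v[i + 1]
        out.append(s)
    return out

jacobi = lambda r: [ri / 2.0 for ri in r]   # diagonal of A is 2
b = [1.0] * n
x, iters = pcg(A, b, jacobi)
res = max(abs(bi - ai) for bi, ai in zip(b, A(x)))
print(iters, res < 1e-8)
```

Swapping `jacobi` for a multigrid cycle leaves the Krylov loop untouched, which is the structural point of the paper: the preconditioner supplies grid-scale robustness while CG/GMRES supplies the convergence guarantee.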

Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

1999-03-01

341

Permeability computation on a REV with an immersed finite element method

An efficient method to compute permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale and the flow motion is computed with a stabilized mixed finite element method. Therefore the Stokes equation is solved on the whole domain (including solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the bibliography.

Laure, P. [Laboratoire J.-A. Dieudonne, CNRS UMR 6621, Universite de Nice-Sophia Antipolis, Parc Valrose, 06108 Nice, Cedex 02 (France); Puaux, G.; Silva, L.; Vincent, M. [MINES ParisTech, CEMEF-Centre de Mise en Forme des Materiaux, CNRS UMR 7635, BP 207 1 rue Claude, Daunesse 06904 Sophia Antipolis cedex (France)

2011-05-04

342

ERIC Educational Resources Information Center

The authors present reports of current research on distance education at the FernUniversitat in West Germany. Fritsch discusses adapting distance education techniques for small classes. Kuffner describes procedures for providing feedback to students using personalized computer-generated letters. Klute discusses using a computer with tutorial…

Fritsch, Helmut; And Others

1989-01-01

343

Computer-aided tissue engineering -- a review (Wei Sun and Pallavi Lal, Department of Mechanical Engineering and Mechanics, 30 October 2000)

The utilization of computer-aided technologies in tissue engineering has evolved into the development of a new field of computer-aided tissue engineering (CATE). This article…

Sun, Wei

344

A rational interpolation method to compute frequency response

NASA Technical Reports Server (NTRS)

A rational interpolation method for approximating a frequency response is presented. The method is based on a product formulation of finite differences, thereby avoiding the numerical problems incurred by near-equal-valued subtraction. Also, resonant pole and zero cancellation schemes are developed that increase the accuracy and efficiency of the interpolation method. Selection techniques of interpolation points are also discussed.

Kenney, Charles; Stubberud, Stephen; Laub, Alan J.

1993-01-01

345

Infodemiology can be defined as the science of the distribution and determinants of information in an electronic medium, specifically the Internet, or in a population, with the ultimate aim of informing public health and public policy. Infodemiology data can be collected and analyzed in near real time. Examples of infodemiology applications include: the analysis of queries from Internet search engines to predict disease outbreaks (e.g., influenza); monitoring people's status updates on microblogs such as Twitter for syndromic surveillance; detecting and quantifying disparities in health information availability; identifying and monitoring public-health-relevant publications on the Internet (e.g., anti-vaccination sites, but also news articles or expert-curated outbreak reports); automated tools to measure information diffusion and knowledge translation; and tracking the effectiveness of health marketing campaigns. Moreover, analyzing how people search and navigate the Internet for health-related information, as well as how they communicate and share this information, can provide valuable insights into the health-related behavior of populations. Seven years after the infodemiology concept was first introduced, this paper revisits the emerging fields of infodemiology and infoveillance and proposes an expanded framework, introducing some basic metrics such as information prevalence, concept occurrence ratios, and information incidence. The framework distinguishes supply-based applications (analyzing what is being published on the Internet, e.g., on Web sites, newsgroups, blogs, microblogs, and social media) from demand-based methods (search and navigation behavior), and further distinguishes passive from active infoveillance methods. Infodemiology metrics follow population-health-relevant events or predict them. Thus, these metrics and methods are potentially useful for public health practice and research, and should be further developed and standardized. PMID:19329408

2009-01-01

346

The Simplex Method - Computational Checks for the Simplex ...

As a nice side benefit, we will derive some computational checks as we ... columns. For each column j not in 0-1 form, set x_j = 0. The optimal value is z = c_B x_B = y_{m+1,n+1}. ... solution U to (36), the value of the objective function of the...

gabi

2005-07-25

347

New Methods of Mobile Computing: From Smartphones to Smart Education

ERIC Educational Resources Information Center

Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…

Sykes, Edward R.

2014-01-01

348

COMPUTER VISION BASED METHOD FOR FIRE DETECTION IN COLOR VIDEOS

This paper presents a computer vision based system for automatically detecting the presence of fire in stable video sequences. The algorithm is based not only on the color and movement attributes of fire but also analyzes the temporal variation of fire intensity, the spatial color variation of fire and the tendency of fire to be grouped around a central point.

Jessica Ebert; Jennie Shipley

349

Simple computer method provides contours for radiological images

NASA Technical Reports Server (NTRS)

Computer is provided with information concerning boundaries in total image. Gradient of each point in digitized image is calculated with aid of threshold technique; then there is invoked set of algorithms designed to reduce number of gradient elements and to retain only major ones for definition of contour.

Newell, J. D.; Keller, R. A.; Baily, N. A.

1975-01-01
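
The thresholded-gradient step this entry describes is simple enough to sketch. The following is a hypothetical Python re-implementation of that one step (central differences on a tiny synthetic image), not the authors' NASA code, and it omits the element-reduction algorithms:

```python
import math

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2D list-of-lists image."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            g[y][x] = math.hypot(gx, gy)
    return g

def contour_points(img, threshold):
    """Keep only pixels whose gradient magnitude exceeds the threshold."""
    g = gradient_magnitude(img)
    return [(x, y) for y, row in enumerate(g)
            for x, v in enumerate(row) if v > threshold]

# Synthetic 8x8 "image": a bright 4x4 region on a dark background.
img = [[100 if 2 <= y <= 5 and 2 <= x <= 5 else 0 for x in range(8)]
       for y in range(8)]
edges = contour_points(img, threshold=25.0)
```

Pixels in the flat interior have zero gradient and are discarded; only the boundary band survives the threshold, which is the "contour" the entry refers to.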

350

Random Numbers for Parallel Computers: Requirements and Methods

… Richard Simard, DIRO, Pavillon Aisenstadt, Université de Montréal, C.P. 6128, Succ. Centre-ville. Conceptually, these RNGs are designed to produce sequences of real numbers that behave approximately … originally designed for fast high-quality image rendering on computer screens and video-game consoles…

L'Ecuyer, Pierre

351

Computational methods for rapid prototyping of analytic solid models

Looks at how layered fabrication processes typically entail extensive computations and large memory requirements in the reduction of three-dimensional part descriptions to area-filling paths that cover the interior of each of a sequence of planar slices. Notes that the polyhedral “STL” representation exacerbates this problem by necessitating large input data volumes to describe curved surface models at acceptable levels of

Rida T. Farouki; Thomas König

1996-01-01

352

V International Conference on Computational Methods in Marine Engineering MARINE 2013

V International Conference on Computational Methods in Marine Engineering (MARINE 2013), B. Brinkmann … P. D. Kaklis, School of Naval Architecture & Marine Engineering, National Technical University of Athens (NTUA) … International Conference on Computational Methods in Marine Engineering (2013) … A.-A. I. Ginnis, R. Duvigneau…

Paris-Sud XI, UniversitÃ© de

353

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2011 CFR

...2011-07-01. Method of computing annual volume of sales. 794.123 Section 794.123...7(b)(3) of the Act. Annual Gross Volume of Sales. § 794.123 Method of computing annual volume of sales. (a) Where the...

2011-07-01

354

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2013 CFR

...2013-07-01. Method of computing annual volume of sales. 794.123 Section 794.123...7(b)(3) of the Act. Annual Gross Volume of Sales. § 794.123 Method of computing annual volume of sales. (a) Where the...

2013-07-01

355

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2012 CFR

...2012-07-01. Method of computing annual volume of sales. 794.123 Section 794.123...7(b)(3) of the Act. Annual Gross Volume of Sales. § 794.123 Method of computing annual volume of sales. (a) Where the...

2012-07-01

356

29 CFR 794.123 - Method of computing annual volume of sales.

Code of Federal Regulations, 2014 CFR

...2014-07-01. Method of computing annual volume of sales. 794.123 Section 794.123...7(b)(3) of the Act. Annual Gross Volume of Sales. § 794.123 Method of computing annual volume of sales. (a) Where the...

2014-07-01

357

Verifying a Computational Method for Predicting Extreme Ground Motion

Harris, R.A., M. Barall, et al., Verifying a Computational Method for Predicting Extreme Ground Motion, SRL, accepted. … difficult to predict the ground motion very close to earthquake-generating faults, if the prediction…

Ampuero, Jean Paul

358

In this paper, we describe different methods of computing the eigenvalues associated with the prolate spheroidal wave functions (PSWFs). These eigenvalues play an important role in computing the values of PSWFs as well as in the different numerical applications based on the latter. The methods given in this work are accurate, fast, and valid for small as well as for…

Abderrazek Karoui; Tahar Moumni

2008-01-01

359

Background: Computational models play an increasingly important role in the assessment and control of public health crises, as demonstrated during the 2009 H1N1 influenza pandemic. Much research has been done in recent years on the development of sophisticated data-driven models for realistic computer-based simulations of infectious disease spreading. However, only a few computational tools are presently available for assessing scenarios, predicting epidemic evolutions, and managing health emergencies that can benefit a broad audience of users including policy makers and health institutions. Results: We present "GLEaMviz", a publicly available software system that simulates the spread of emerging human-to-human infectious diseases across the world. The GLEaMviz tool comprises three components: the client application, the proxy middleware, and the simulation engine. The latter two components constitute the GLEaMviz server. The simulation engine leverages the Global Epidemic and Mobility (GLEaM) framework, a stochastic computational scheme that integrates worldwide high-resolution demographic and mobility data to simulate disease spread on the global scale. The GLEaMviz design aims at maximizing flexibility in defining the disease compartmental model and configuring the simulation scenario; it allows the user to set a variety of parameters including compartment-specific features, transition values, and environmental effects. The output is a dynamic map and a corresponding set of charts that quantitatively describe the geo-temporal evolution of the disease. The software is designed as a client-server system. The multi-platform client, which can be installed on the user's local machine, is used to set up simulations that will be executed on the server, thus avoiding specific requirements for large computational capabilities on the user side.
Conclusions: The user-friendly graphical interface of the GLEaMviz tool, along with its high level of detail and the realism of its embedded modeling approach, opens up the platform to the simulation of realistic epidemic scenarios. These features make the GLEaMviz computational tool a convenient teaching/training tool as well as a first step toward the development of a computational tool aimed at facilitating the use and exploitation of computational models for the policy making and scenario analysis of infectious disease outbreaks. PMID:21288355

2011-01-01
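
GLEaM itself is a stochastic metapopulation scheme; as a much simpler illustration of the compartmental-model idea it builds on, here is the textbook deterministic SIR model integrated with forward Euler steps (all names and parameter values are illustrative, not taken from GLEaMviz):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the deterministic SIR compartmental model."""
    new_infections = beta * s * i * dt   # transmission: S -> I
    new_recoveries = gamma * i * dt      # recovery:     I -> R
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def simulate_sir(s0, i0, beta, gamma, dt=0.1, steps=1000):
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r

# Basic reproduction number R0 = beta/gamma = 3, so a large outbreak is expected.
s, i, r = simulate_sir(s0=0.99, i0=0.01, beta=0.3, gamma=0.1)
```

A full framework like GLEaM replaces this single well-mixed population with thousands of coupled subpopulations linked by mobility data, but the per-compartment bookkeeping is the same.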

360

Computational methods for constructing protein structure models from 3D electron microscopy maps

Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3 Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. PMID:23796504

Esquivel-Rodríguez, Juan; Kihara, Daisuke

2013-01-01

361

An Overview of Public Access Computer Software Management Tools for Libraries

ERIC Educational Resources Information Center

An IT decision maker gives an overview of public access PC software that's useful in controlling session length and scheduling, Internet access, print output, security, and the latest headaches: spyware and adware. In this article, the author describes a representative sample of software tools in several important categories such as setup…

Wayne, Richard

2004-01-01

362

ERIC Educational Resources Information Center

A method is proposed for using computer systems to introduce students in college-level geography courses to quantitative methods. Two computer systems are discussed--Interactive Computer Systems (computer packages which enhance student learning by providing instantaneous feedback) and Computer Enhancement of Instruction (CEI) (standard…

Rivizzigno, Victoria L.

363

Multiple leaf tracking using computer vision methods with shape constraints

NASA Astrophysics Data System (ADS)

Accurate monitoring of leaves and plants is a necessity for research on plant physiology. To aid this biological research, we propose a new active contour method to track individual leaves in chlorophyll fluorescence time-lapse sequences. The proposed active contour algorithm is developed such that it can handle sequences with low temporal resolution. This paper proposes a novel optimization method which incorporates prior knowledge about the plant shape. Tests show that the proposed framework outperforms state-of-the-art tracking methods.

De Vylder, Jonas; Van Der Straeten, Dominique; Philips, Wilfried

2013-05-01

364

Computing short-time aircraft maneuvers using direct methods

This paper analyzes the applicability of direct methods to design optimal short-term spatial maneuvers for an unmanned vehicle in a faster than real-time scale. It starts by introducing different basic control schemes, which employ online trajectory generation. Next, it presents and analyzes the results obtained through two recently developed direct transcription (collocation) methods: the Gauss pseudospectral method and the Legendre-Gauss-Lobatto…

G. Basset; Y. Xu; O. A. Yakimenko

2010-01-01

365

Adaptive computational methods for SSME internal flow analysis

NASA Technical Reports Server (NTRS)

Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a-posterior error estimation is also given and some pertinent conclusions of the study are listed.

Oden, J. T.

1986-01-01

366

Background: Environmental public health disasters involving hazardous contaminants may have devastating effects. While much is known about their immediate devastation, far less is known about long-term impacts of these disasters. Extensive latent and chronic long-term public health effects may occur. Careful evaluation of contaminant exposures and long-term health outcomes within the constraints imposed by limited financial resources is essential. Methods: Here, we review epidemiologic methods lessons learned from conducting long-term evaluations of four environmental public health disasters involving hazardous contaminants at Chernobyl, the World Trade Center, Bhopal, and Graniteville (South Carolina, USA). Findings: We found several lessons learned which have direct implications for the on-going disaster recovery work following the Fukushima radiation disaster or for future disasters. Interpretation: These lessons should prove useful in understanding and mitigating latent health effects that may result from the nuclear reactor accident in Japan or future environmental public health disasters. PMID:23066404

Svendsen, Erik R.; Runkle, Jennifer R.; Dhara, Venkata Ramana; Lin, Shao; Naboka, Marina; Mousseau, Timothy A.; Bennett, Charles

2012-01-01

367

Computer vision methods for visual MIMO optical system

Cameras have become commonplace in phones, laptops, music players and handheld games. Similarly, light-emitting displays are prevalent in the form of electronic billboards, televisions, computer monitors, and hand-held devices. The prevalence of cameras and displays in our society creates a novel opportunity to build camera-based optical wireless communication systems based on a concept called visual MIMO. We extend the common…

Wenjia Yuan; Kristin Dana; Michael Varga; Ashwin Ashok; Marco Gruteser; Narayan Mandayam

2011-01-01

368

Extrapolation Methods to Compute the Hypersingular Integral on Interval

The composite trapezoidal rule for the computation of the Hadamard finite-part integral on an interval with the hypersingular kernel 1/(t-s)^2 is discussed, and the case of the mesh point coinciding with the singular point, handled via a generalized finite-part definition, is considered. The asymptotic expansion is obtained and an extrapolation algorithm is presented to accelerate the convergence rate. Based on the Toeplitz matrix…

Jin Li; Dehao Yu

2010-01-01
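
The entry's method targets the finite-part kernel, which needs special quadrature; the extrapolation principle itself, though, can be shown on an ordinary integral. One Richardson step combines two composite-trapezoidal values to cancel the leading O(h^2) term of the error expansion (a generic sketch, not the authors' algorithm):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def richardson(f, a, b, n):
    """Combine T(h) and T(h/2) to cancel the trapezoidal rule's O(h^2) error."""
    t1 = trapezoid(f, a, b, n)
    t2 = trapezoid(f, a, b, 2 * n)
    return t2 + (t2 - t1) / 3.0

approx = richardson(math.sin, 0.0, math.pi, 8)  # exact value is 2
```

The same idea iterates: each further level of extrapolation cancels the next term of the asymptotic expansion, which is exactly what the paper does with the expansion it derives for the hypersingular case.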

369

A memory based method for computing robot-arm configuration

to model the cerebellar cortex in its function as a high-order motion controller. The model was based on the current theory of how the nervous system operated in that part of the brain. He named this model CMAC, for Cerebellar Model Articulation... for robot manipulators. In addition, the technique had to have low computational overhead so that real-time control based on the true kinematic model could be achieved. In particular, the objectives were to study the feasibility of applying CMAC...

Karimjee, Saleem

2012-06-07

370

Calculating computer-generated holograms takes a tremendous amount of computation time. We propose a fast method for calculating object lights for Fresnel holograms without the use of a Fourier transform. This method generates object lights of variously shaped patches from a basic object light for a fixed-shape patch by using three-dimensional affine transforms. It can thus calculate holograms that display complex objects including patches of various shapes. Computer simulations and optical experiments demonstrate the effectiveness of this method. The results show that it performs twice as fast as a method that uses a Fourier transform. PMID:19956293

Sakata, Hironobu; Sakamoto, Yuji

2009-12-01

371

Analysis of multigrid methods on massively parallel computers: Architectural implications

NASA Technical Reports Server (NTRS)

We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain-parallel version of the standard V-cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block-structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message-passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single-stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation-derived parameters. With the medium-grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable-length message capability, our analysis suggests the low-diameter multistage networks provide little or no advantage over a simple single-stage communications network.

Matheson, Lesley R.; Tarjan, Robert E.

1993-01-01
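
The V-cycle analyzed in this entry is built from one recurring unit: smooth, restrict the residual, solve a coarse correction, prolongate, smooth again. A minimal serial two-grid cycle for the 1D Poisson problem -u'' = f sketches that unit (purely illustrative; the paper's parallel cost models are not reproduced here):

```python
import math

def residual(u, f, h):
    """Residual of -u'' = f discretized on a uniform grid, zero boundaries."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def weighted_jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Damped Jacobi smoothing; w = 2/3 is the classic smoother choice."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def two_grid_cycle(u, f, h):
    u = weighted_jacobi(u, f, h, 3)                # pre-smoothing
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2 + 1
    rc = [0.0] * nc
    for j in range(1, nc - 1):                     # full-weighting restriction
        rc[j] = 0.25 * (r[2 * j - 1] + 2 * r[2 * j] + r[2 * j + 1])
    ec = weighted_jacobi([0.0] * nc, rc, 2 * h, 200)  # cheap "exact" coarse solve
    for j in range(nc - 1):                        # linear-interpolation prolongation
        u[2 * j] += ec[j]
        u[2 * j + 1] += 0.5 * (ec[j] + ec[j + 1])
    return weighted_jacobi(u, f, h, 3)             # post-smoothing

n, h = 17, 1.0 / 16.0
f = [math.sin(math.pi * i * h) for i in range(n)]
u = [0.0] * n
r0 = max(abs(v) for v in residual(u, f, h))
for _ in range(5):
    u = two_grid_cycle(u, f, h)
r5 = max(abs(v) for v in residual(u, f, h))
```

Recursing on the coarse solve instead of smoothing it to convergence turns this two-grid cycle into the V-cycle; the paper's question is how the restriction/prolongation traffic maps onto parallel interconnects.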

372

Frequency response modeling and control of flexible structures: Computational methods

NASA Technical Reports Server (NTRS)

The dynamics of vibrations in flexible structures can be conventiently modeled in terms of frequency response models. For structural control such models capture the distributed parameter dynamics of the elastic structural response as an irrational transfer function. For most flexible structures arising in aerospace applications the irrational transfer functions which arise are of a special class of pseudo-meromorphic functions which have only a finite number of right half place poles. Computational algorithms are demonstrated for design of multiloop control laws for such models based on optimal Wiener-Hopf control of the frequency responses. The algorithms employ a sampled-data representation of irrational transfer functions which is particularly attractive for numerical computation. One key algorithm for the solution of the optimal control problem is the spectral factorization of an irrational transfer function. The basis for the spectral factorization algorithm is highlighted together with associated computational issues arising in optimal regulator design. Options for implementation of wide band vibration control for flexible structures based on the sampled-data frequency response models is also highlighted. A simple flexible structure control example is considered to demonstrate the combined frequency response modeling and control algorithms.

Bennett, William H.

1989-01-01

373

PUBLICATIONS: Asymptotic Methods for MOSFET Modeling (M. J. Ward, F. M. Odeh, D. S. Cohen)

[1] M. J. Ward, F. M. Odeh, D. S. Cohen, Asymptotic Methods for MOSFET Modeling, NASEC… [4] M. J. Ward, F. M. Odeh, D. S. Cohen, Asymptotic Methods for MOSFET Modeling, SIAM J. Appl. Math. … M. J. Ward, Low-Voltage Backscattered Electron Collection for Package Substrates and Integrated…

Jellinek, Mark

374

NASA Astrophysics Data System (ADS)

The load-stepped method is a new full-field automatic photoelasticity image-processing method which can obtain the phase value directly. The principle of the load-stepped photoelasticity technique is introduced. Computer-simulated photoelasticity images are used to describe the method, and the results are satisfactory. The authors expect the method will play an important role in dynamic photoelasticity image processing.

Ji, Xinhua; Huang, Kai; Li, Jun

2003-09-01

375

Systematic Methods for the Computation of the Directional Fields and Singular Points of Fingerprints

The first subject of the paper is the estimation of a high resolution directional field of fingerprints. Traditional methods are discussed and a method, based on principal component analysis, is proposed. The method not only computes the direction in any pixel location, but its coherence as well. It is proven that this method provides exactly the same results as the

Asker M. Bazen; Sabih H. Gerez

2002-01-01
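
A toy version of gradient-based orientation estimation can be written with the structure tensor, which is algebraically the principal-component direction of the per-pixel gradient vectors (so it is PCA in the sense the entry mentions, though not the authors' exact per-pixel formulation, and it returns one global angle rather than a full directional field):

```python
import math

def dominant_orientation(img):
    """Dominant gradient direction via the structure tensor: accumulate the
    gradient covariance (Gxx, Gyy, Gxy) and take half the doubled angle.
    For fingerprint ridges, the ridge orientation is perpendicular to this."""
    h, w = len(img), len(img[0])
    gxx = gyy = gxy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            gxx += gx * gx
            gyy += gy * gy
            gxy += gx * gy
    return (0.5 * math.atan2(2.0 * gxy, gxx - gyy)) % math.pi

# Synthetic "ridge" pattern whose gradient direction is theta0 = 30 degrees.
theta0 = math.radians(30)
img = [[math.cos(2 * math.pi * (x * math.cos(theta0) + y * math.sin(theta0)) / 8.0)
        for x in range(32)] for y in range(32)]
est = dominant_orientation(img)
```

The coherence the paper computes corresponds to how concentrated the gradient covariance is along its principal axis; accumulating (Gxx, Gyy, Gxy) per local window instead of globally yields the directional field.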

376

A comparison of methods for the assessment of postural load and duration of computer use

Aim: To compare two different methods for assessment of postural load and duration of computer use in office workers. Methods: The study population consisted of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring true exposure to postural load consisted of an observation of the workstation design and posture by a trained observer. A software program was used to record individual computer use. Results: Comparing the answers for each item of postural load, six of eleven items showed low agreement (kappa <0.20). For six items the sensitivity was below 50%, while for eight items the specificity was 80% or higher. Computer workers were unable to identify risk factors in their workplace and work posture. On average, computer workers overestimated their total computer use by 1.6 hours. The agreement among employees who reported a maximum of three hours of computer use per day was higher than the agreement among employees with a high duration of computer use. Conclusions: Self-report by means of this questionnaire is not a very reliable method to measure postural load and duration of computer use. This study emphasises that the challenge to develop quick and inexpensive techniques for assessing exposure to postural load and duration of computer use is still open. PMID:15550610

Heinrich, J; Blatter, B; Bongers, P

2004-01-01
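
The agreement statistics named in this abstract (kappa, sensitivity, specificity) reduce to simple arithmetic on a 2x2 table of self-report versus observation; a generic sketch with made-up counts, not the study's data:

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

def sensitivity(tp, fn):
    """Proportion of truly exposed workers that self-report detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of truly unexposed workers that self-report rules out."""
    return tn / (tn + fp)
```

Kappa below 0.20, as reported for six items, means agreement barely above what chance alone would produce.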

377

Still today, the majority of publications in computer vision focus on component technologies. However, computer vision has reached a level of maturity that allows not only research on individual methods and system components but also the construction of fully integrated computer vision systems of significant complexity. This opens a number of new problems related to system architecture and integration,

Bernt Schiele; Gerhard Sagerer

2003-01-01

378

Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

Alex J. Dragt

2012-08-31

379

Bayesian methods in bioinformatics and computational systems biology

Bayesian methods are valuable, inter alia, whenever there is a need to extract information from data that are uncertain or subject to any kind of error or noise (including measurement error and experimental error, as well as noise or random variation intrinsic to the process of interest). Bayesian methods offer a number of advantages over more conventional statistical techniques that

Darren J. Wilkinson

2007-01-01
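
As a minimal example of the machinery such Bayesian methods build on, here is a conjugate Beta-binomial update: a Beta prior combined with binomial count data yields a Beta posterior in closed form (the numbers are illustrative, not from the entry):

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate update: Beta(alpha, beta) prior + binomial data -> Beta posterior."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform Beta(1, 1) prior, then observe 7 successes and 3 failures.
a, b = beta_binomial_update(1, 1, 7, 3)
```

Real bioinformatics applications rarely stay conjugate, which is why the field leans on MCMC and related computation, but the prior-to-posterior logic is the same.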

380

Equations of motion methods for computing electron affinities and ionization potentials

or IP is an infinitesimal fraction of the total energy. Equations-of-motion (EOM) methods and other … of a set of working equations. A history of the development of EOM theories as applied to EAs and IPs is given in this contribution. EOM methods based upon Møller-Plesset, multiconfiguration self-consistent field, and coupled…

Simons, Jack

381

Computational Aspects of the LAMBDA Method for GPS Ambiguity Resolution

Precise relative positioning based on a short observation time span yields ambiguities that are heavily correlated, together with position estimates of poor precision. For an efficient estimation of the integer values of the GPS double difference ambiguities, the LAMBDA method has been developed and applied since 1993. In the context of the LAMBDA method, from a compu…

P. J. de Jonge; C. C. J. M. Tiberius; P. J. G. Teunissen

1996-01-01

382

Computational Method for Electrical Potential and Other Field Problems

ERIC Educational Resources Information Center

Proposes the finite differences relaxation method as a teaching tool in secondary and university level courses discussing electrical potential, temperature distribution in a region, and similar problems. Outlines the theory and operating procedures of the method, and discusses examples of teaching applications, including possible laboratory…

Hastings, David A.

1975-01-01

383

Yeast Ancestral Genome Reconstructions: The Possibilities of Computational Methods

NASA Astrophysics Data System (ADS)

In 2006, a debate arose over the efficiency of bioinformatics methods for reconstructing mammalian ancestral genomes. Three years later, Gordon et al. (PLoS Genetics, 5(5), 2009) chose not to use automatic methods to build up the genome of a 100-million-year-old Saccharomyces cerevisiae ancestor. Their manually constructed ancestor provides a reference genome to test whether automatic methods are indeed unable to approach confident reconstructions. Adapting several methodological frameworks to the same yeast gene order data, I discuss the possibilities, differences and similarities of the available algorithms for ancestral genome reconstruction. The methods can be classified into two types: local and global. Studying the properties of both helps to clarify what we can expect from their usage. Both methods propose contiguous ancestral regions that come very close (> 95% identity) to the manually predicted ancestral yeast chromosomes, with good coverage of the extant genomes.

Tannier, Eric

384

A Combined Method to Compute the Proximities of Asteroids

NASA Astrophysics Data System (ADS)

We describe a simple and efficient numerical-analytical method to find all of the proximities and critical points of the distance function in the case of two elliptical orbits with a common focus. Our method is based on the solutions of Simovljević's (1974) graphical method and on the transcendent equations developed by Lazović (1993). The method is tested on 2 997 576 pairs of asteroid orbits and compared with the algebraic and polynomial solutions of Gronchi (2005). The model with four proximities was obtained by Gronchi (2002) only by applying the method of random samples, i.e., after many simulations and trials with various values of elliptical elements. We found real pairs with four proximities.

Šegan, S.; Milisavljević, S.; Marčeta, D.

2011-09-01

385

NASA Astrophysics Data System (ADS)

With the recent releases of both Google's "Sky" and Microsoft's "WorldWide Telescope" and the large and increasing popularity of video games, the time is now for using these tools, and those crafted at NASA's Jet Propulsion Laboratory, to engage the public in astronomy like never before. This presentation will use "Cassini at Saturn Interactive Explorer" (CASSIE) to demonstrate the power of web-based video-game engine technology in providing the public a "first-person" look at space exploration. The concept of virtual space exploration is to allow the public to "see" objects in space as if they were either riding aboard or "flying" next to an ESA/NASA spacecraft. Using this technology, people are able to immediately "look" in any direction from their virtual location in space and "zoom in" at will. Users can position themselves near Saturn's moons and observe the Cassini spacecraft's "encounters" as they happened. Whenever real data for their "view" exists, it is incorporated into the scene. Where data is missing, a high-fidelity simulation of the view is generated to fill in the scene. The observer can also change the time of observation into the past or future. Our approach is to utilize and extend the Unity 3d game development tool, currently in use by the computer gaming industry, along with JPL mission-specific telemetry and instrument data to build our virtual explorer. The potential of applying game technology to the development of educational curricula and to public engagement is enormous. We believe this technology can revolutionize the way the general public and the planetary science community view ESA/NASA missions and provides an educational context that is attractive to the younger generation. This technology is currently under development and application at JPL to assist our missions in viewing their data, communicating with the public and visualizing future mission plans. Real-time demonstrations of CASSIE and other applications in development will be shown. Astronomy is one of the oldest basic sciences; we should use one of today's newest communications technologies to engage the public, and embrace web-based gaming technology to prepare the world for the International Year of Astronomy 2009.

Hussey, K.; Doronila, P.; Kulikov, A.; Lane, K.; Upchurch, P.; Howard, J.; Harvey, S.; Woodmansee, L.

2008-09-01

386

The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into one single model. This allows users to run all of their chosen methods simultaneously without having to arbitrarily settle on one of them. The MSA is delivered along with a local estimation of

Sébastien Moretti; Fabrice Armougom; Iain M. Wallace; Desmond G. Higgins; C. Victor Jongeneel; Cédric Notredame

2007-01-01

387

An accurate and efficient solution method using spectral collocation method with domain decomposition is proposed for computing optical waveguides with discontinuous refractive index profiles. The use of domain decomposition divides the usual single domain into a few subdomains at the interfaces of discontinuous refractive index profiles. Each subdomain can be expanded by a suitable set of orthogonal basis functions and

Chia-Chien Huang; Chia-Chih Huang; Jaw-Yen Yang

2003-01-01

388

Shielding analysis methods available in the scale computational system

Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.

Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.

1986-01-01

389

Digital Computer Methods for Processing Neutron Radioactivation Analysis Data

Portions of the program are stored on logical drum 1, and Main Program Part 3 is read into the core. [Remainder of abstract is garbled OCR of program listings and figure captions (Subroutine Test 3; Figures 8 and 19).]

Kuykendall, William E

1960-01-01

390

Computational methods for high-throughput pooled genetic experiments

Advances in high-throughput DNA sequencing have created new avenues of attack for classical genetics problems. This thesis develops and applies principled methods for analyzing DNA sequencing data from multiple pools of ...

Edwards, Matthew Douglas

2011-01-01

391

A low computation cost method for seizure prediction.

The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in seizure prediction. This paper proposes a low-computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented seizure prediction algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as features to classify the patient's preictal or interictal state with Bayesian linear discriminant analysis (BLDA) as a classifier. The outputs of BLDA are smoothed by a Kalman filter to reduce possible sporadic and isolated false alarms, and the final prediction results are then produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence periods of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and an average prediction time of 24.47 min and 39.39 min, respectively. The results confirm that the changes of HFD can serve as a precursor of ictal activities and be used for distinguishing between interictal and preictal epochs. Both the HFD and the BLDA classifier have low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction. PMID:25062892

Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

2014-10-01

392

NASA Astrophysics Data System (ADS)

Watershed management is a participatory process that requires collaboration among multiple groups of people. Environmental decision support systems (EDSS) have long been used to support such co-management and co-learning processes in watershed management. However, implementing and maintaining EDSS in-house can be a significant burden to many water agencies because of budget, technical, and policy constraints. Drawing on experiences from several web-GIS environmental management projects in Texas, we showcase how cloud-computing services can help shift the design and hosting of EDSS from traditional client-server platforms to lightweight clients of cloud-computing services.

Sun, A. Y.; Scanlon, B. R.; Uhlman, K.

2013-12-01

393

Computation of molecular electrostatics with boundary element methods.

In continuum approaches to molecular electrostatics, the boundary element method (BEM) can provide accurate solutions to the Poisson-Boltzmann equation. However, the numerical aspects of this method pose significant problems. We describe our approach, applying an alpha shape-based method to generate a high-quality mesh, which represents the shape and topology of the molecule precisely. We also describe an analytical method for mapping points from the planar mesh to their exact locations on the surface of the molecule. We demonstrate that the derivative boundary integral formulation has numerical advantages over the nonderivative formulation: the well-conditioned influence matrix can be maintained without deterioration of the condition number when the number of the mesh elements scales up. Singular integrand kernels are characteristic of the BEM, and their accurate integration is an important issue. We describe variable transformations that allow accurate numerical integration; the latter is the only plausible integral evaluation method when using curve-shaped boundary elements. PMID:9336178
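The variable-transformation idea for singular kernels can be illustrated in one dimension. The sketch below is a generic example, not the paper's actual transformation: it integrates the singular function 1/√x on (0, 1] with a midpoint rule, then applies the substitution x = t², dx = 2t dt, which cancels the singularity so the same rule becomes essentially exact.

```python
def midpoint(f, a, b, n=1000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# naive quadrature: the integrand blows up at x = 0, so convergence is poor
naive = midpoint(lambda x: x ** -0.5, 0.0, 1.0)

# after the substitution x = t^2 the integrand becomes 2*t*f(t^2) = 2, smooth
transformed = midpoint(lambda t: 2.0 * t * (t * t) ** -0.5, 0.0, 1.0)
```

The exact value is 2; the naive rule is off by roughly one percent at n = 1000, while the transformed rule reproduces it to machine precision — the same effect the singular BEM kernels rely on in higher dimensions.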

Liang, J; Subramaniam, S

1997-01-01

394

The Evolution of Computer Forensic Best Practices: An Update on Programs and Publications

The field of computer forensics is one of the newer disciplines in the area of forensic science. Like all of the others, it is going through a transition from an art practiced by individuals to a more standardized set of techniques for which “best practices” can be defined. Over the past few years, a number of documents have been published

Alan E. Brill; Mark Pollitt; Carrie Morgan Whitcomb

2006-01-01

395

A great deal of research has examined computer-mediated communication discussions in educational environments for evidence of learning. These studies have often been disappointing, with analysts not finding the kinds of ‘quality’ talk that they had hoped for. In this study we draw upon elements of discursive psychology as we oriented to what was happening in the talk from the participants’

Jessica N. Lester; Trena M. Paulus

2011-01-01

396

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing

Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring

Qian Wang; Cong Wang; Kui Ren; Wenjing Lou; Jin Li

2011-01-01

397

ERIC Educational Resources Information Center

Describes the interactive computer Project BARN (developed by University of Wisconsin Center for Health Systems Research and Analysis) that provides health information to adolescents practicing high-risk behaviors in sensitive areas of human sexuality, drugs, and cigarette smoking. Poses the question that such interaction could be a compromise…

Hawkins, Robert P.; And Others

1987-01-01

398

Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing

Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of

Qian Wang; Cong Wang; Jin Li; Kui Ren; Wenjing Lou

2009-01-01

399

Computer Based Learning in FE. A Staff Development Model. A Staff Development Publication.

ERIC Educational Resources Information Center

This booklet describes the development and content of a model staff development pack for use in training teachers to incorporate the techniques of computer-based learning into their subject teaching. The guide consists of three parts. Part 1 outlines the aims and objectives, content, and use of the pack. Described next are seven curriculum samples…

Further Education Unit, London (England).

400

Computation of spectroscopic factors with the coupled-cluster method

We present a calculation of spectroscopic factors within coupled-cluster theory. Our derivation of algebraic equations for the one-body overlap functions is based on coupled-cluster equation-of-motion solutions for the ground and excited states of the doubly magic nucleus with mass number $A$ and the odd-mass neighbor with mass $A-1$. As a proof-of-principle calculation, we consider $^{16}$O and the odd neighbors $^{15}$O and $^{15}$N, and compute the spectroscopic factor for nucleon removal from $^{16}$O. We employ a renormalized low-momentum interaction of the $V_{\mathrm{low-}k}$ type derived from a chiral interaction at next-to-next-to-next-to-leading order. We study the sensitivity of our results by variation of the momentum cutoff, and then discuss the treatment of the center of mass.

Ø. Jensen; G. Hagen; T. Papenbrock; D. J. Dean; J. S. Vaagen

2010-04-15

401

Standardized development of computer software. Part 1: Methods

NASA Technical Reports Server (NTRS)

This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

Tausworthe, R. C.

1976-01-01

402

Static Dependency Pair Method Based on Strong Computability for Higher-Order Rewrite Systems

NASA Astrophysics Data System (ADS)

Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.

Kusakari, Keiichirou; Isogai, Yasuo; Sakai, Masahiko; Blanqui, Frédéric

403

A method for computing E-plane patterns of horn antennas

This paper introduces a method for calculating the total antenna pattern of a horn in the E-plane, including the backlobe region, by applying diffraction theory. Treatment of diffraction by a thick edge permits horns of various edge thicknesses to be treated. The diffraction concepts developed by Sommerfeld and Pauli, which treated plane wave diffraction by a wedge, are extended so that

P. Russo; R. Rudduck

1965-01-01

404

Monocular computer vision image calibration method and its application

Large errors in locating the control points of anamorphic vision images make it difficult to calibrate dynamic images automatically and accurately. Accordingly, a new calibration method based on pattern matching is put forward. It locates the control points accurately, builds a polynomial model of the anamorphic image, chooses the optimum polynomial order, and calibrates the image distortion with

Jun Liu; Dingguo Li; Zhenwei Hu; Zhi Xie

2010-01-01

405

Submitted to: Journal of Computer Methods in Applied

… of Oxidation and its Effect on Crack Growth in Titanium Alloys. Dimitris C. Lagoudas, Pavlin Entchev. The oxidation of titanium alloys is investigated in this work. The oxidation process is modeled by modifying the Fickian … Different variants of a fixed grid finite element method for numerical simulation of oxidation are used

406

Computational methods for large-scale data in medical diagnostics

… organization of the genome affects the variety of proteins in the organism; on the other hand, proteins … a molecular biology experimental method called Microarray-based Comparative Genomic Hybridization … Poisson regression … (These rearrangements often carry the prefix "micro," referring to their sub-microscopic size.)

Bechler, Pawel

407

Multipole Method to Compute Heat Losses from District Heating Pipes

SUMMARY: The district heating industry is currently searching for new installation fashions for district heat distribution networks in order to decrease the cost of installation. The heat losses cause a large part of the lifetime cost and environmental impacts of district heating networks. This paper presents how the multipole method can be used for quick and accurate determination of heat

Camilla Persson; Johan Claesson

408

New developments in adaptive methods for computational fluid dynamics

NASA Technical Reports Server (NTRS)

New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.

Oden, J. T.; Bass, Jon M.

1990-01-01

409

Comparison of Monte Carlo methods for fluorescence molecular tomography—computational efficiency

Purpose: The Monte Carlo method is an accurate model for time-resolved quantitative fluorescence tomography. However, this method suffers from low computational efficiency due to the large number of photons required for reliable statistics. This paper presents a comparison study on the computational efficiency of three Monte Carlo-based methods for time-domain fluorescence molecular tomography. Methods: The methods investigated to generate time-gated Jacobians were the perturbation Monte Carlo (pMC) method, the adjoint Monte Carlo (aMC) method and the mid-way Monte Carlo (mMC) method. The effects of the different parameters that affect the computation time and statistics reliability were evaluated. Also, the methods were applied to a set of experimental data for tomographic application. Results: In silico results establish that the investigated parameters affect the computational time for the three methods differently (linearly, quadratically, or not significantly). Moreover, the noise level of the Jacobian varies when these parameters change. The experimental results in preclinical settings demonstrate the feasibility of using both aMC and pMC methods for time-resolved whole body studies in small animals within a few hours. Conclusions: Among the three Monte Carlo methods, the mMC method is a computationally prohibitive technique that is not well suited for time-domain fluorescence tomography applications. The pMC method is advantageous over the aMC method when the early gates are employed and a large number of detectors is present. Alternatively, the aMC method is the method of choice when a small number of source-detector pairs are used. PMID:21992393

Chen, Jin; Intes, Xavier

2011-01-01

410

This report presents results of computations of doses and the associated health risks of postulated accidental atmospheric releases from the Rocky Flats Plant (RFP) of one gram of weapons-grade plutonium in a form that is respirable. These computations are intended to be reference computations that can be used to evaluate a variety of accident scenarios by scaling the dose and health risk results presented here according to the amount of plutonium postulated to be released, instead of repeating the computations for each scenario. The MACCS2 code has been used as the basis of these computations. The basis and capabilities of MACCS2 are summarized, the parameters used in the evaluations are discussed, and results are presented for the doses and health risks to the public, both the Maximum Offsite Individual (a maximally exposed individual at or beyond the plant boundaries) and the population within 50 miles of RFP. A number of different weather scenarios are evaluated, including constant weather conditions and observed weather for 1990, 1991, and 1992. The isotopic mix of weapons-grade plutonium will change as it ages, the ²⁴¹Pu decaying into ²⁴¹Am. The ²⁴¹Am reaches a peak concentration after about 72 years. The doses to the bone surface, liver, and whole body will increase slightly but the dose to the lungs will decrease slightly. The overall cancer risk will show almost no change over this period. This change in cancer risk is much smaller than the year-to-year variations in cancer risk due to weather. Finally, χ/Q values are also presented for other applications, such as for hazardous chemical releases. These include the χ/Q values for the MOI, for a collocated worker at 100 meters downwind of an accident site, and the χ/Q value integrated over the population out to 50 miles.
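The reference-computation idea amounts to simple linear scaling: consequences computed for a 1 gram release are multiplied by the postulated release mass. A minimal sketch, using made-up illustrative numbers rather than actual MACCS2 outputs:

```python
def scale_reference(per_gram, release_grams):
    """Scale reference consequences (computed for a 1 g release)
    to a postulated release, assuming linearity in released mass."""
    return {key: value * release_grams for key, value in per_gram.items()}

# hypothetical reference values for a 1 g respirable release
reference = {"lung_dose_sv": 1.2e-4, "latent_cancer_risk": 3.0e-8}
scenario = scale_reference(reference, 25.0)  # a postulated 25 g release
```

This shortcut is valid only as long as the consequences scale linearly with source term, which is the premise the report states for its reference computations.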

Peterson, V.L.

1993-12-23

411

An implementation method of parallel finite element computation based on overlapping domain decomposition was presented to improve the parallel computing efficiency of the finite element method and to lower the cost and difficulty of parallel programming. By secondary processing of the nodal partition obtained using Metis, the overlapping domain decomposition of the finite element mesh was obtained. Through the redundant computation of overlapping elements,

Jianfei Zhang; Lei Zhang; Hongdao Jiang

2009-01-01

412

A method for computer aided planning of community college instructional space requirements

A Method for Computer Aided Planning of Community College Instructional Space Requirements. A Thesis by Donald Gustave Rapp, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1969. Major Subject: Computer Science.

Rapp, Donald Gustave

1969-01-01

413

UCLA Computational and Applied Mathematics. A Fourier-Wachspress Method for Solving Helmholtz's Equation in Three Dimensional Layered Domains. … for solving Poisson's or Helmholtz's equation in three dimensional layered domains. The method combines

Soatto, Stefano

414

Application of computer-assisted modified coupled-mode method for the design of polarimetric sensors

A method for modeling multiply perturbed fibers was developed as an extension to the modified coupled-mode method. Being based on numerical solution of the coupled-mode equations, the method is not limited in the scope of its applications to cases in which the coupling coefficients are constant along the fiber. A short computation time was achieved as a result of modification

Pawel Wierzba; Bogdan B. Kosmowski; Jerzy Plucinski

2003-01-01

415

A Monte Carlo method to compute the exchange coefficient in the double porosity model

A Monte Carlo method to compute the exchange coefficient in the double porosity model. Fabien … Keywords: Monte Carlo methods, double porosity model, random walk on squares, fissured media. AMS Classification: 76S05 (65C05 76M35). Published in Monte Carlo Methods Appl. Proc. of Monte Carlo and probabilistic

Paris-Sud XI, UniversitÃ© de

416

A Numerov-type Method for Computing Eigenvalues and Resonances of the Radial Schrödinger Equation

A two-step method is developed for computing eigenvalues and resonances of the radial Schrödinger equation. Numerical results obtained for the integration of the eigenvalue and the resonance problems for several potentials show that this new method is better than other similar methods.

Tom E. Simos; G. Tougelidis

1996-01-01

417

A numerical method for computing unsteady 2-D boundary layer flows

A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the

Andreas Krainer

1988-01-01

418

There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modeling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty

David E. Thompson

419

Computing a partial generalized real Schur form using the Jacobi-Davidson method

Computing a partial generalized real Schur form using the Jacobi-Davidson method. T.L. van Noorden and J. Rommes. Abstract: In this paper, a new variant of the Jacobi-Davidson method is presented … whenever the inner iteration, which may consist of the Jacobi-Davidson method applied to a deflated matrix

Eindhoven, Technische Universiteit

420

Equations of Motion (EOM) Methods for Computing Electron Affinities Jack Simons

Equations of Motion (EOM) Methods for Computing Electron Affinities. Jack Simons, Chemistry … for which the EA is an infinitesimal fraction of the total energy. The equations of motion (EOM) methods … EOM methods based upon Møller-Plesset, multiconfiguration self-consistent field, and coupled

Simons, Jack

421

An overview of computational simulation methods for composite structures failure and life analysis

Three parallel computational simulation methods are being developed at the LeRC Structural Mechanics Branch (SMB) for composite structures failure and life analysis: progressive fracture CODSTRAN; hierarchical methods for high-temperature composites; and probabilistic evaluation. Results to date demonstrate that these methods are effective in simulating composite structures failure/life/reliability.

Chamis, C.C.

1993-10-01

422

Fast methods for computing scene raw signals in millimeter-wave sensor simulations

NASA Astrophysics Data System (ADS)

Modern millimeter wave (mmW) radar sensor systems employ wideband transmit waveforms and efficient receiver signal processing methods for resolving accurate measurements of targets embedded in complex backgrounds. Fast Fourier Transform processing of pulse return signal samples is used to resolve range and Doppler locations, and amplitudes of scattered RF energy. Angle glint from RF scattering centers can be measured by performing monopulse arithmetic on signals resolved in both delta and sum antenna channels. Environment simulations for these sensors - including all-digital and hardware-in-the-loop (HWIL) scene generators - require fast, efficient methods for computing radar receiver input signals to support accurate simulations with acceptable execution time and computer cost. Although all-digital and HWIL simulations differ in their representations of the radar sensor (which is itself a simulation in the all-digital case), the signal computations for mmW scene modeling are closely related for both types. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) have developed various fast methods for computing mmW scene raw signals to support both HWIL scene projection and all-digital receiver model input signal synthesis. These methods range from high level methods of decomposing radar scenes for accurate application of spatially-dependent nonlinear scatterer phase history, to low-level methods of efficiently computing individual scatterer complex signals and single precision transcendental functions. The efficiencies of these computations are intimately tied to math and memory resources provided by computer architectures. The paper concludes with a summary of radar scene computing performance on available computer architectures, and an estimate of future growth potential for this computational performance.
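The core of computing a scene raw signal and then resolving a scatterer can be shown in miniature. The sketch below is an illustration, not the AMRDEC implementation: it synthesizes a linear-FM pulse, places one point scatterer at a known sample delay, and recovers that delay by direct matched-filter correlation (a production system would use FFT-based processing). All names and parameter values are illustrative.

```python
import cmath

def lfm_chirp(n, bandwidth=0.2):
    """Discrete linear-FM (chirp) pulse of n samples; 'bandwidth' is a
    normalized sweep-rate parameter (illustrative value)."""
    return [cmath.exp(1j * cmath.pi * bandwidth * k * k / n) for k in range(n)]

def matched_filter_peak(rx, tx):
    """Correlate the received samples with the transmit pulse and return
    the lag with the largest magnitude: the estimated range delay."""
    n, m = len(rx), len(tx)
    best_lag, best_mag = 0, -1.0
    for lag in range(n - m + 1):
        acc = sum(rx[lag + k] * tx[k].conjugate() for k in range(m))
        if abs(acc) > best_mag:
            best_lag, best_mag = lag, abs(acc)
    return best_lag
```

Summing each scatterer's delayed, scaled copy of the transmit waveform into the receive buffer is exactly the raw-signal synthesis step; the matched filter then compresses the pulse back into a sharp range response.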

Olson, Richard F.; Reynolds, Terry M.; Satterfield, H. Dewayne

2010-04-01

423

… spondylitis in the UK. Rheumatology 2007;46(8):1338-44. doi:10.1093/rheumatology/kem133. Ara R, Ward S, Lloyd- … alpha antagonists in the management of rheumatoid arthritis: results from the British Society for Rheumatology Biologics Registry. Rheumatology 2007;46(8):1345-54. Brennan A, Kharroubi S. Efficient computation

Li, Yi

424

CLOUD COMPUTING TECHNOLOGIES PROGRAM An eleven-week in-depth program in the principles, methods, and technologies of Cloud Computing DePaul University's Cloud Computing Technologies Program provides a broad understanding of the different leading Cloud Computing technologies. The program is designed to quickly educate

Schaefer, Marcus

425

CLOUD COMPUTING FUNDAMENTALS PROGRAM An eleven-week in-depth program in the principles, methods, and technologies of Cloud Computing DePaul University's Cloud Computing Fundamentals Program provides a comprehensive introduction to essential aspects of Cloud Computing. The program is designed to quickly educate

Schaefer, Marcus

426

Computational methods for the verification of adaptive control systems

Intelligent and adaptive control systems will significantly challenge current verification and validation (V&V) processes, tools, and methods for flight certification. Although traditional certification practices have produced safe and reliable flight systems, they will not be cost effective for next-generation autonomous unmanned air vehicles (UAVs) due to inherent size and complexity increases from added functionality. Affordable V&V of intelligent control systems

Ravi K. Prasanth; Jovan Boskovic; Raman K. Mehra

2004-01-01

427

Efficient Field-Computer File Transfer Methods for Suboptimum Conditions

This paper describes a project to upgrade the file-transmission capabilities of a system using modem-linked PCs to acquire production data in remote oilfields and subsequently transfer these data to an area production office for further processing. The method initially specified for accomplishing this task failed repeatedly under adverse conditions. After the modems and file-transfer software were replaced, communications became much

L. B. Sisk

1991-01-01

428

Density functional methods as computational tools in materials design

This article gives a brief overview of density functional theory and discusses two specific implementations: a numerical localized basis approach (DMol) and the pseudopotential plane-wave method. Characteristic examples include Cu, clusters, CO and NO dissociation on copper surfaces, Li-, K-, and O-endohedral fullerenes, tris-quaternary ammonium cations as zeolite template, and oxygen defects in bulk SiO2. The calculations reveal the energetically

Y. S. Li; M. A. Daelen; M. Wrinn; D. King-Smith; J. M. Newsam; B. Delley; E. Wimmer; T. Klitsner; M. P. Sears; G. A. Carlson; J. S. Nelson; D. C. Allan; M. P. Teter

1994-01-01

429

A Moving Least Squares method for implant model deformation in Computer Aided Orthopedic Surgery for

A Moving Least Squares method for implant model deformation in Computer Aided Orthopedic Surgery … surgical procedure. Computer Aided Orthopedic Surgery (CAOS) systems are extensively used for the planning of surgeries for fractures of the lower extremities. These systems take as input an X-Ray image

Coto, Ernesto

430

On the error of computing ab + cd using Cornea, Harrison and Tang's method

On the error of computing ab + cd using Cornea, Harrison and Tang's method. Jean-Michel Muller. September 2013. Abstract: In their book Scientific Computing on The Itanium [1], Cornea, Harrison and Tang introduced a method for computing ab + cd that requires the availability of an FMA instruction.
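The kind of FMA-based ab + cd algorithm analyzed here can be sketched directly. The code below shows Kahan's well-known method, used as a stand-in for the Cornea-Harrison-Tang algorithm the paper actually studies; since `math.fma` only appeared in Python 3.13, the FMA is emulated exactly with rational arithmetic.

```python
from fractions import Fraction

def fma(a, b, c):
    """Emulated fused multiply-add: a*b + c with a single rounding.
    float(Fraction) rounds correctly to nearest, matching a hardware
    FMA on doubles."""
    return float(Fraction(a) * Fraction(b) + Fraction(c))

def kahan_ab_plus_cd(a, b, c, d):
    """Kahan's FMA-based algorithm for a*b + c*d: compute c*d, recover
    its rounding error exactly with an FMA, and fold it back in."""
    w = c * d
    e = fma(c, d, -w)   # exact rounding error of the product c*d
    f = fma(a, b, w)    # a*b + w, rounded only once
    return f + e
```

On a*b + c*d = (2²⁷+1)(2²⁷-1) - 2²⁷·2²⁷ the naive expression returns 0 because of catastrophic cancellation, while the FMA-based version returns the exact answer -1.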

Muller, Jean-Michel

431

Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging.

Chen, Tsuhan

432

NASA Technical Reports Server (NTRS)

The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.

Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas

1986-01-01

433

The computational complexity of elliptic curve integer sub-decomposition (ISD) method

NASA Astrophysics Data System (ADS)

The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) is considered a foundation stone for building a new procedure to compute the elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over the prime field Fp. The sub-decomposition of values k1 and k2, not bounded by ±C√n, gives new integers k11, k12, k21 and k22 which are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful computations of the scalar multiplication increases with the ISD method, which improves computational efficiency in comparison with the general method for computing scalar multiplication in elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, determined by computing the cost of operations. These operations include elliptic curve operations and finite field operations.

Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza

2014-07-01
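
The ISD procedure above still rests on ordinary scalar multiplication kP. As a point of reference, here is a minimal textbook double-and-add sketch over a toy prime field; the curve parameters and base point are illustrative assumptions, not values from the paper:

```python
# Toy curve y^2 = x^3 + a*x + b over F_p (assumed small parameters).
P_MOD, A, B = 97, 2, 3

def inv_mod(x, p=P_MOD):
    return pow(x, p - 2, p)  # Fermat inverse, valid since p is prime

def ec_add(P, Q):
    """Affine point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Right-to-left double-and-add computation of kP."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R
```

The GLV/ISD machinery reduces the effective bit-length of the loop above by splitting k across endomorphisms, which is where the efficiency gain comes from.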

434

Applications of level set methods in computational biophysics Emmanuel Maitre1

This paper applies level set methods to fluid-structure problems arising in biophysics, presenting three computational tools for biophysics applications. Biophysics and biomechanics are two fields where

Paris-Sud XI, Université de

435

Computation of Impedance and Attenuation of TEM-Lines by Finite Difference Methods

The characteristic impedance and the attenuation of transmission lines supporting TEM modes can be computed by using finite difference methods for solving the Laplace equation for the domain defined by the inner and the outer conductor. The difference equations can be solved by machine computation and the impedance and the attenuation is obtained by integrating the field gradients and the

M. V. Schneider

1965-01-01
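
The relaxation idea described above can be sketched as a Jacobi iteration on the potential between the two conductors. The square coax geometry, grid size, and conductor blocks below are illustrative assumptions rather than the paper's setup:

```python
# Jacobi relaxation of Laplace's equation between a square inner conductor
# held at V = 1 and a grounded square outer boundary (V = 0).
N = 21                    # grid points per side (assumed)
INNER = range(8, 13)      # inner-conductor block (assumed)

def is_inner(i, j):
    return i in INNER and j in INNER

def relax(n_iter=2000):
    v = [[1.0 if is_inner(i, j) else 0.0 for j in range(N)] for i in range(N)]
    for _ in range(n_iter):
        new = [row[:] for row in v]
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                if not is_inner(i, j):
                    # Five-point Laplace stencil average
                    new[i][j] = 0.25 * (v[i-1][j] + v[i+1][j]
                                        + v[i][j-1] + v[i][j+1])
        v = new
    return v

v = relax()
```

From the converged potential, the field gradients at the conductors would then be integrated to obtain the charge, and hence the impedance, as the abstract describes.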

436

NASA Astrophysics Data System (ADS)

Many multimedia processing algorithms as well as communication algorithms implemented in mobile devices are based on intensive implementation of linear algebra methods, in particular, implying implementation of a large number of inner products in real time. Among most efficient approaches to perform inner products are the Associative Computing (ASC) approach and Distributed Arithmetic (DA) approach. In ASC, computations are performed on Associative Processors (ASP), where Content-Addressable memories (CAMs) are used instead of traditional processing elements to perform basic arithmetic operations. In the DA approach, computations are reduced to look-up table reads with respect to binary planes of inputs. In this work, we propose a modification of Associative processors that supports efficient implementation of the DA method. Thus, the two powerful methods are combined to further improve the efficiency of multiple inner product computation. Computational complexity analysis of the proposed method illustrates significant speed-up when computing multiple inner products as compared both to the pure ASC method and to the pure DA method as well as to other state-of-the-art traditional methods for inner product calculation.

Guevorkian, David; Yli-Pietilä, Timo; Liuha, Petri; Egiazarian, Karen

2012-02-01
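
The DA idea described above, reducing inner products to look-up table reads over the binary planes of the inputs, can be sketched as follows for unsigned fixed-point inputs. This is a toy software model; real DA implementations realize the table in hardware:

```python
# Distributed-arithmetic inner product sum_i a[i]*x[i] for unsigned B-bit
# inputs x[i]: precompute all 2^n subset sums of the fixed coefficients,
# then assemble the result with one table lookup per bit plane.

def da_inner_product(coeffs, xs, bits=8):
    n = len(coeffs)
    # Table entry m holds the sum of coeffs[i] over the set bits i of m.
    table = [sum(c for i, c in enumerate(coeffs) if m >> i & 1)
             for m in range(1 << n)]
    acc = 0
    for b in range(bits):                  # one lookup per bit plane
        addr = 0
        for i, x in enumerate(xs):
            addr |= ((x >> b) & 1) << i    # gather bit b of every input
        acc += table[addr] << b            # shift-and-accumulate
    return acc
```

The multiply-free inner loop (lookups and shifts only) is what makes DA attractive for the CAM-based associative processors discussed in the abstract.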

437

Although research examining the effects of pretrial publicity (PTP) on individuals' appraisals of a defendant and verdict decision making generally has been found to be internally valid, the external validity has been questioned by some social scientists as well as lawyers and judges. It is often proposed that the verisimilitude (or ecological validity) of the research should be increased in the service of increasing external validity; however, increasing verisimilitude can be costly in terms of both time and money. It is proposed that the Internet is a viable means of conducting PTP research that allows high verisimilitude without high costs. This is demonstrated with a study in which we used the Internet to examine PTP effects in an actual trial as it was taking place. Successful use of the Internet to conduct experimental research in other areas of psychology and law is discussed, as well as the importance of future research examining whether independent variables interact with methods in ways that undermine the generalizability of research findings. PMID:11868618

Studebaker, Christina A; Robbennolt, Jennifer K; Penrod, Steven D; Pathak-Sharma, Maithilee K; Groscup, Jennifer L; Devenport, Jennifer L

2002-02-01

438

Public bookmarks and private benefits: An analysis of incentives in social computing

Users of social computing websites are both producers and consumers of the information found on the site. This creates a novel problem for web-based software applications: how can website designers induce users to produce information that is useful for others? We study this question by interviewing users of the social bookmarking website del.icio.us. We find that for the users in

Rick Wash; Emilee Rader

2007-01-01

439

Theoretical studies of potential energy surfaces and computational methods

This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.

Shepard, R. [Argonne National Laboratory, IL (United States)

1993-12-01

440

The Role of Analytic Methods in Computational Aeroacoustics

NASA Technical Reports Server (NTRS)

As air traffic grows, annoyance produced by aircraft noise will grow unless new aircraft produce no objectionable noise outside airport boundaries. Such ultra-quiet aircraft must be of revolutionary design, having unconventional planforms and most likely with propulsion systems highly integrated with the airframe. Sophisticated source and propagation modeling will be required to properly account for effects of the airframe on noise generation, reflection, scattering, and radiation. It is tempting to say that since all the effects are included in the Navier-Stokes equations, time-accurate CFD can provide all the answers. Unfortunately, the computational time required to solve a full aircraft noise problem will be prohibitive for many years to come. On the other hand, closed form solutions are not available for such complicated problems. Therefore, a hybrid approach is recommended in which analysis is taken as far as possible without omitting relevant physics or geometry. Three examples are given of recently reported work in broadband noise prediction, ducted fan noise propagation and radiation, and noise prediction for complex three-dimensional jets.

Farassat, F.; Posey, J. W.

2003-01-01

441

Pragmatic approaches to using computational methods to predict xenobiotic metabolism.

In this study the performance of a selection of computational models for the prediction of metabolites and/or sites of metabolism was investigated. These included models incorporated in the MetaPrint2D-React, Meteor, and SMARTCyp software. The algorithms were assessed using two data sets: one a homogeneous data set of 28 Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) and paracetamol (DS1) and the second a diverse data set of 30 top-selling drugs (DS2). The prediction of metabolites for the diverse data set (DS2) was better than for the more homogeneous DS1 for each model, indicating that some areas of chemical space may be better represented than others in the data used to develop and train the models. The study also identified compounds for which none of the packages could predict metabolites, again indicating areas of chemical space where more information is needed. Pragmatic approaches to using metabolism prediction software have also been proposed based on the results described here. These approaches include using cutoff values instead of restrictive reasoning settings in Meteor to reduce the output with little loss of sensitivity and for directing metabolite prediction by preselection based on likely sites of metabolism. PMID:23718189

Piechota, Przemyslaw; Cronin, Mark T D; Hewitt, Mark; Madden, Judith C

2013-06-24

442

An Improved Computer Vision Method for White Blood Cells Detection

The automatic detection of white blood cells (WBCs) still remains as an unsolved issue in medical imaging. The analysis of WBC images has engaged researchers from fields of medicine and computer vision alike. Since WBC can be approximated by an ellipsoid form, an ellipse detector algorithm may be successfully applied in order to recognize such elements. This paper presents an algorithm for the automatic detection of WBC embedded in complicated and cluttered smear images that considers the complete process as a multiellipse detection problem. The approach, which is based on the differential evolution (DE) algorithm, transforms the detection task into an optimization problem whose individuals represent candidate ellipses. An objective function evaluates if such candidate ellipses are actually present in the edge map of the smear image. Guided by the values of such function, the set of encoded candidate ellipses (individuals) are evolved using the DE algorithm so that they can fit into the WBCs which are enclosed within the edge map of the smear image. Experimental results from white blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique in terms of its accuracy and robustness. PMID:23762178

Cuevas, Erik; Díaz, Margarita; Manzanares, Miguel; Zaldivar, Daniel; Perez-Cisneros, Marco

2013-01-01
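
The differential evolution optimizer underlying the detector can be sketched in a few lines. For brevity the example minimizes a simple two-variable test function instead of an ellipse-fit objective over an edge map; the population, mutation, and crossover machinery is the same:

```python
import random

# Minimal DE/rand/1/bin differential evolution sketch (illustrative
# hyperparameters; the paper's objective evaluates candidate ellipses
# against the edge map of a smear image instead of this test function).

def de_minimize(f, bounds, pop_size=20, F=0.6, CR=0.9, gens=150, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                else:
                    v = pop[i][j]                                # inherit
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))                # clamp
            ft = f(trial)
            if ft <= fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda k: fit[k])
    return pop[best], fit[best]

best, val = de_minimize(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2,
                        [(-5, 5), (-5, 5)])
```

In the paper's setting each individual would encode ellipse parameters, and f would score how well that ellipse lies on edge pixels.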

443

This thesis presents a number of novel computational methods for the analysis and design of protein-protein complexes, and their application to the study of the interactions of phosphopeptides with phosphopeptide-binding ...

Joughin, Brian Alan

2007-01-01

444

MARIE CURIE Research Training Network (RTN) COMISEF (Computational Optimization Methods). This new RTN has been established to develop novel optimization procedures for applications in statistics. Researchers recruited into this RTN will experience an outstanding interdisciplinary training in quantitative

Nagurney, Anna

445

Modeling of the Aging Viscoelastic Properties of Cement Paste Using Computational Methods

A computational model using the finite element method is developed to predict the viscoelastic behavior of cement paste; using this model, virtual tests can be carried out to improve understanding of the mechanisms of viscoelastic behavior. The primary finding from...

Li, Xiaodan

2012-07-16

446

Computational studies of hydrogen storage materials and the development of related methods

Computational methods, including density functional theory and the cluster expansion formalism, are used to study materials for hydrogen storage. The storage of molecular hydrogen in the metal-organic framework with formula ...

Mueller, Timothy Keith

2007-01-01

447

Review of the Use of Electroencephalography as an Evaluation Method for Human-Computer Interaction

This review focuses on electroencephalography (EEG), as it could be handled effectively during a dedicated evaluation phase. We identify

Paris-Sud XI, Université de

448

NASA Astrophysics Data System (ADS)

Solving unconstrained optimization problems is not easy, and the DFP update method is one of the methods available for solving them. In unconstrained optimization, the computation time needed by a method's algorithm to solve the problems is vital, so we propose a hybrid search direction for the DFP update method in order to reduce the computation time needed for solving unconstrained optimization problems. Convergence analysis and numerical results for the hybrid search direction show that it strictly reduces the computation time needed by the DFP update method while increasing the efficiency of the method, which otherwise sometimes fails for complicated unconstrained optimization problems.

Sofi, A. Z. M.; Mamat, M.; Ibrahim, M. A. H.

2013-04-01
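
For reference, the classical DFP inverse-Hessian update the abstract builds on can be sketched as below. The paper's hybrid search direction is not reproduced here; the unit step and the quadratic test problem are simplifying assumptions:

```python
import numpy as np

# DFP rank-two update of the inverse-Hessian approximation H, with
# s = x_new - x_old and y = g_new - g_old:
#   H_new = H + (s s^T)/(s^T y) - (H y y^T H)/(y^T H y)

def dfp_update(H, s, y):
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

def dfp_minimize(grad, x0, iters=20):
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))               # initial inverse-Hessian guess
    g = grad(x)
    for _ in range(iters):
        d = -H @ g                   # quasi-Newton search direction
        x_new = x + d                # unit step (no line search, for brevity)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if abs(s @ y) > 1e-12:       # skip update once converged
            H = dfp_update(H, s, y)
        x, g = x_new, g_new
    return x
```

A practical implementation would replace the unit step with a line search; the hybrid direction proposed in the abstract modifies how d is formed.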

449

A coarse-grid projection method for accelerating incompressible flow computations

NASA Astrophysics Data System (ADS)

We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.

San, Omer; Staples, Anne E.

2013-01-01
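
The core CGP step, solving the Poisson problem on a coarsened grid and interpolating back to the fine grid, can be illustrated in one dimension. The grid sizes and forcing are illustrative assumptions, not the paper's benchmarks:

```python
import numpy as np

def poisson_solve(f, h):
    """Solve -u'' = f with u(0) = u(1) = 0 on interior nodes, spacing h."""
    n = len(f)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, f)

def cgp_poisson(N=64):
    x_f = np.linspace(0.0, 1.0, N + 1)[1:-1]       # fine interior nodes
    x_c = np.linspace(0.0, 1.0, N // 2 + 1)[1:-1]  # coarse interior nodes
    # Solve the Poisson equation on the coarsened grid only.
    u_c = poisson_solve(np.sin(np.pi * x_c), 2.0 / N)
    # CGP step: interpolate the coarse solution back to the fine grid.
    u_f = np.interp(x_f, np.concatenate(([0.0], x_c, [1.0])),
                    np.concatenate(([0.0], u_c, [0.0])))
    return x_f, u_f

x_f, u = cgp_poisson()
```

For -u'' = sin(πx) the exact solution is sin(πx)/π², so the interpolated coarse solve can be checked against it; in the paper this Poisson solve is the incompressibility constraint inside a full flow solver.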

450

17 CFR 43.3 - Method and timing for real-time public reporting.

Code of Federal Regulations, 2013 CFR

...transaction and pricing data in real-time for all publicly reportable...transaction and pricing data in real-time shall perform, on an annual...in a consistent, usable and machine-readable electronic format...publicly disseminated in real-time shall be corrected or...

2013-04-01

451

17 CFR 43.3 - Method and timing for real-time public reporting.

Code of Federal Regulations, 2014 CFR

...transaction and pricing data in real-time for all publicly reportable...transaction and pricing data in real-time shall perform, on an annual...in a consistent, usable and machine-readable electronic format...publicly disseminated in real-time shall be corrected or...

2014-04-01

452

17 CFR 43.3 - Method and timing for real-time public reporting.

Code of Federal Regulations, 2012 CFR

...transaction and pricing data in real-time for all publicly reportable...transaction and pricing data in real-time shall perform, on an annual...in a consistent, usable and machine-readable electronic format...publicly disseminated in real-time shall be corrected or...

2012-04-01

453

Consumer Health Information Behavior in Public Libraries: A Mixed Methods Study

ERIC Educational Resources Information Center

Previous studies indicated inadequate health literacy of American adults as one of the biggest challenges for consumer health information services provided in public libraries. Little attention, however, has been paid to public users' health literacy and health information behaviors. In order to bridge the research gap, the study aims to…

Yi, Yong Jeong

2012-01-01

454

A rigid motion correction method for helical computed tomography (CT).

We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data. PMID:25674780

Kim, J-H; Nuyts, J; Kyme, A; Kuncic, Z; Fulton, R

2015-03-01

455

Novel systems biology and computational methods for lipidomics

NASA Astrophysics Data System (ADS)

The analysis and interpretation of large lipidomic data sets requires the development of new dynamical systems, data mining and visualization approaches. Traditional techniques are insufficient to study co-regulations and stochastic fluctuations observed in lipidomic networks and resulting experimental data. The emphasis of this paper lies in the presentation of novel approaches for the dynamical analysis and projection representation. Different paradigms describing kinetic models and providing context-based information are described and at the same time their interrelations are revealed. These qualitative and quantitative methods are applied to the lipidomic analysis of U87 MG glioblastoma cells. The achieved results provide a more detailed insight into the data structure of the lipidomic system.

Meyer-Bäse, Anke; Lespinats, Sylvain

2010-04-01

456

We have developed a coherent set of techniques for parallel computing. We used the Finite Element Method in association with C++ object-oriented programming and a single database. A data-selection technique is used to determine the data dedicated to each processor. This method is implemented with SIMD technology and MPI capabilities. This parallel computing is

André Chambarel; Hervé Bolvin

2002-01-01

457

Two public chest X-ray datasets for computer-aided screening of pulmonary diseases.

The U.S. National Library of Medicine has made two datasets of postero-anterior (PA) chest radiographs available to foster research in computer-aided diagnosis of pulmonary diseases with a special focus on pulmonary tuberculosis (TB). The radiographs were acquired from the Department of Health and Human Services, Montgomery County, Maryland, USA and Shenzhen No. 3 People's Hospital in China. Both datasets contain normal and abnormal chest X-rays with manifestations of TB and include associated radiologist readings. PMID:25525580

Jaeger, Stefan; Candemir, Sema; Antani, Sameer; Wáng, Yì-Xiáng J; Lu, Pu-Xuan; Thoma, George

2014-12-01

458

NASA Astrophysics Data System (ADS)

Many of the fundamental processes underlying hazards such as earthquakes and volcanoes are poorly understood. Hazard systems are difficult to replicate in lab environments, and so we need to observe them in 'natural laboratories'. The global coverage offered by satellite-based SAR missions, and rapidly expanding GPS networks can provide orders of magnitude more observations. These combined geodetic data products will enable greater understanding of processes leading up to, during, and after natural and man-made disasters. However, a science data system is needed that can efficiently monitor & analyze the voluminous data, and provide users the tools to access the data products. In the interpretation process from observations to decision-making, data from observations are first used to improve the understanding of the physical processes, which then lead to more informed knowledge. However the need for handling high data volumes and processing expertise are often bottlenecks to providing the data product streams needed for improved decision-making. To help address lower latency and high data volume needs for monitoring and response to globally distributed hazards, we leveraged a hybrid-cloud computing approach that utilizes a seamless mixture of an on-premise Eucalyptus-based cloud computing environment with public Amazon Web Services (AWS) cloud computing resources. We will present some findings on the automation of geodetic processing, use of hybrid-cloud computing to address on-premise resource constraint issues, scalability issues in processing latency and data movement, as well as data discovery, access, and integration for other tools for location analytics.

Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Moore, A. W.; Fielding, E. J.; Agram, P.; Manipon, G.; Simons, M.; Rosen, P. A.; Stough, T. M.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.

2013-12-01

459

ERIC Educational Resources Information Center

Wayfinding is the method by which humans orient and navigate in space, and particularly in built environments such as cities and complex buildings, including public libraries. In order to wayfind successfully in the built environment, humans need information provided by wayfinding systems and tools, for instance architectural cues, signs, and…

Mandel, Lauren Heather

2012-01-01

460

A corresponding performance assessment index system is structured by dividing the main responsibility bodies and selecting performance assessment indices according to principal component analysis. Then, based on a comprehensive performance assessment obtained by the fuzzy comprehensive evaluation method, a performance assessment model for supervising public projects is constructed on the ideas of activity-based costing,

Yang Hong-xiong; Liu Yi-liu; Sun Chun-ling

2010-01-01

461

Skin Burns Degree Determined by Computer Image Processing Method

NASA Astrophysics Data System (ADS)

In this paper a new method for quantitatively determining the degree of skin burns is put forward. First, using Photoshop 9.0, we analyzed the statistical character of the histograms of skin-burn images, then converted the images of burned skin from RGB color space to HSV space and analyzed the transformed color histograms. Finally, again using Photoshop 9.0, we obtained the percentage of the burned skin area. We took the mean of the image histogram, the standard deviation of the color maps, and the percentage of burned area as indicators for evaluating burns, assigned each indicator a weight, and obtained a burn score by summing the products of each indicator and its weight. From the classification of burn scores, the degree of burns can be evaluated.

Li, Hong-yan
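
The weighted-score idea in the abstract can be sketched as follows. The weights, the saturation threshold marking a pixel as "burned", and the per-pixel workflow are illustrative assumptions, since the paper derives these quantities interactively in Photoshop:

```python
import colorsys
import statistics

# Convert RGB pixels to HSV, derive simple statistics plus a burned-area
# fraction, and combine them with assumed weights into one burn score.

def burn_score(pixels, weights=(0.4, 0.3, 0.3), sat_threshold=0.5):
    # pixels: iterable of (r, g, b) with components in [0, 1]
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
    values = [v for _, _, v in hsv]
    mean_v = statistics.fmean(values)      # histogram mean indicator
    std_v = statistics.pstdev(values)      # spread of the value map
    # Assumed proxy for burned area: fraction of strongly saturated pixels.
    burned = sum(1 for _, s, _ in hsv if s > sat_threshold) / len(hsv)
    w1, w2, w3 = weights
    return w1 * mean_v + w2 * std_v + w3 * burned
```

Classifying the resulting score into ranges would then give the burn-degree evaluation the abstract describes.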

462

Computational Methods for Estimating Energy Expenditure in Human Physical Activities

Accurate and reliable methods for assessing human physical activity energy expenditure (PAEE) are informative and essential for understanding individual behaviors and quantifying the impact of physical activity (PA) on disease, PA surveillance and for examining determinants of PA in different populations. This paper reviews recent advances in the estimation of PAEE, in three interrelated areas: 1) types of sensors worn by human subjects, 2) features extracted from the measured sensor signals, and 3) modeling techniques to estimate the PAEE using these features. The review illustrates three directions in the PAEE studies, and provides recommendations for future research, with the aim to produce valid, reliable, and accurate assessment of PAEE from wearable sensors. PMID:22617402

Liu, Shaopeng; Gao, Robert; Freedson, Patty

2012-01-01

463

A parallel finite-difference method for computational aerodynamics

NASA Technical Reports Server (NTRS)

A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.

Swisshelm, Julie M.

1989-01-01

464

Data-Driven Computational Methods for Materials Characterization, Classification, and Discovery

NASA Astrophysics Data System (ADS)

Many major technological challenges facing contemporary society, in fields from energy to medicine, contain within them a materials discovery requirement. While, historically, these discoveries emerged from intuition and experimentation in the laboratory, modern computational methods and hardware hold the promise to dramatically accelerate materials discovery efforts. However, a number of key questions must be answered in order for computation to approach its full potential in new materials development. This thesis explores some of these questions, including: 1) How can we ensure that computational methods are amenable to as broad a range of materials as possible? 2) How can computational techniques assist experimental materials characterization? 3) Can computation readily predict properties indicative of real-world materials performance? 4) How do we glean actionable insights from the vast stores of data that computational methods generate? and 5) Can we lift some of the burdensome requirements for computational study of compounds that are entirely uncharacterized experimentally? In addressing these points, we turn frequently to concepts from statistics, computer science, and applied mathematics to shed new light on traditional topics in materials science, and offer a data-driven approach to steps in materials discovery.

Meredig, Bryce

465

Subtraction method of computing QCD jet cross sections at NNLO accuracy

We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report of the implementation of the method for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.

Zoltan Trocsanyi; Gabor Somogyi

2008-07-03

466

A combined direct/inverse three-dimensional transonic wing design method for vector computers

NASA Technical Reports Server (NTRS)

A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

Weed, R. A.; Carlson, L. A.; Anderson, W. K.

1984-01-01

467

Computational methods to compute wavefront error due to aero-optic effects

NASA Astrophysics Data System (ADS)

Aero-optic effects can have deleterious effects on high performance airborne optical sensors that must view through turbulent flow fields created by the aerodynamic effects of windows and domes. Evaluating aero-optic effects early in the program during the design stages allows mitigation strategies and optical system design trades to be performed to optimize system performance. This necessitates a computationally efficient means to evaluate the impact of aero-optic effects such that the resulting dynamic pointing errors and wavefront distortions due to the spatially and temporally varying flow field can be minimized or corrected. To this end, an aero-optic analysis capability was developed within the commercial software SigFit that couples CFD results with optical design tools. SigFit reads the CFD generated density profile using the CGNS file format. OPD maps are then created by converting the three-dimensional density field into an index of refraction field and then integrating along specified paths to compute OPD errors across the optical field. The OPD maps may be evaluated directly against system requirements or imported into commercial optical design software including Zemax® and Code V® for a more detailed assessment of the impact on optical performance from which design trades may be performed.

Genberg, Victor; Michels, Gregory; Doyle, Keith; Bury, Mark; Sebastian, Thomas

2013-09-01
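
The density-to-OPD pipeline described above can be sketched in a few lines. This is a minimal illustration, not SigFit's API: the function name and array shapes are assumptions, the Gladstone-Dale relation n = 1 + K·ρ is the standard density-to-index conversion, and the path integral is a simple sum along one grid axis rather than ray tracing along arbitrary paths.

```python
import numpy as np

# Gladstone-Dale constant for air (approximate; wavelength-dependent)
K_GD = 2.27e-4  # m^3/kg

def opd_map(density, dz, rho_ref):
    """Compute an optical path difference (OPD) map from a 3D density
    field: convert density to refractive index via the Gladstone-Dale
    relation n = 1 + K*rho, then integrate (n - n_ref) along z."""
    n = 1.0 + K_GD * density              # index-of-refraction field
    n_ref = 1.0 + K_GD * rho_ref          # undisturbed reference index
    opl = np.sum(n - n_ref, axis=2) * dz  # optical path length per column
    return opl - opl.mean()               # remove piston to get OPD

# A uniform density field produces zero OPD everywhere
rho = np.full((8, 8, 50), 1.2)
assert np.allclose(opd_map(rho, 1e-3, 1.2), 0.0)
```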

468

Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

NASA Astrophysics Data System (ADS)

An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density of less than 5% with respect to an exact solution, while the computation time is about 10 orders of magnitude smaller. This method is followed by a spectral method based on a rearrangement of the line profiles. Results are shown for a Lorentzian profile; they demonstrate a relative error of less than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

2015-01-01

469

Performance of particle in cell methods on highly concurrent computational architectures

Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for

M. F. Adams; S. Ethier; N. Wichmann

2009-01-01

470

GENERALIZATIONS OF DAVIDSON’S METHOD FOR COMPUTING EIGENVALUES OF SPARSE SYMMETRIC MATRICES*

Abstract. This paper analyzes Davidson's method for computing a few eigenpairs of large sparse symmetric matrices. An explanation is given for why Davidson's method often performs well but occasionally performs very badly. Davidson's method is then generalized to a method which offers a powerful way of applying preconditioning techniques developed for solving systems of linear equations to solving eigenvalue problems. Key words: eigenvalues, eigenvectors, sparse matrices. AMS(MOS) subject classifications: 65, 15

Ronald B. Morgan; David S. Scott
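
A minimal sketch of the basic Davidson iteration the abstract analyzes, using the classical diagonal preconditioner (D − θI)⁻¹ to expand the search space. This is an illustrative dense-matrix toy for the smallest eigenpair, not the generalized preconditioned method the paper develops:

```python
import numpy as np

def davidson(A, tol=1e-8, max_iter=100):
    """Davidson's method for the smallest eigenpair of a symmetric
    matrix A, expanding the subspace with the diagonally
    preconditioned residual t = (D - theta*I)^-1 r."""
    n = A.shape[0]
    D = np.diag(A)
    V = np.zeros((n, 1))
    V[np.argmin(D), 0] = 1.0              # start from a unit vector
    for _ in range(max_iter):
        Q, _ = np.linalg.qr(V)            # orthonormal basis
        H = Q.T @ A @ Q                   # projected (Rayleigh-Ritz) matrix
        theta, s = np.linalg.eigh(H)
        theta, s = theta[0], s[:, 0]      # smallest Ritz value/vector
        x = Q @ s
        r = A @ x - theta * x             # residual
        if np.linalg.norm(r) < tol:
            return theta, x
        denom = D - theta                 # diagonal preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12
        V = np.column_stack([Q, r / denom])
    return theta, x

# Diagonally dominant test matrix, where Davidson performs well
A = np.diag(np.arange(1.0, 11.0)) + 0.01 * np.ones((10, 10))
lam, vec = davidson(A)
assert abs(lam - np.linalg.eigvalsh(A)[0]) < 1e-6
```

For matrices that are far from diagonally dominant, the diagonal preconditioner can be poor, which is the failure mode the paper's generalization addresses.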

471

Simplified methods for computing total sediment discharge with the modified Einstein procedure

A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.

Colby, Bruce R.; Hubbell, David Wellington

1961-01-01

472

A Cognition-Based Method to Ease the Computational Load for an Extended Kalman Filter

The extended Kalman filter (EKF) is the nonlinear extension of the Kalman filter (KF). It is a useful parameter-estimation method when the observation model and/or the state transition model is not a linear function. However, the computational requirements of the EKF burden the system. With the help of cognition-based designation and the Taylor expansion method, a novel algorithm is proposed to ease the computational load of the EKF in azimuth prediction and localization under a nonlinear observation model. When nonlinear functions and matrix inversions are required, this method uses only the major components (according to current performance and the performance requirements) of the Taylor expansion. As a result, the computational load is greatly lowered while performance is maintained. Simulation results show that the proposed measure delivers filtering output with precision similar to that of the regular EKF, while the computational load is substantially lowered. PMID:25479332

Li, Yanpeng; Li, Xiang; Deng, Bin; Wang, Hongqiang; Qin, Yuliang

2014-01-01
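
The EKF predict/update cycle referred to above, with the nonlinear measurement model linearized by a first-order Taylor expansion (its Jacobian), can be sketched as follows. The bearing-only tracking example is hypothetical and illustrates the standard EKF, not the reduced-load algorithm the paper proposes:

```python
import numpy as np

def ekf_step(x, P, z, F, Q, R, h, H_jac):
    """One predict/update cycle of an extended Kalman filter with a
    linear state transition F and a nonlinear measurement model h,
    linearized about the predicted state via its Jacobian H_jac."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update (first-order Taylor expansion of h around x)
    H = H_jac(x)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - h(x))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical example: estimate a 2D position from azimuth (bearing)
# measurements taken at the origin.
h = lambda x: np.array([np.arctan2(x[1], x[0])])
H_jac = lambda x: np.array([[-x[1], x[0]]]) / (x[0]**2 + x[1]**2)
F = np.eye(2); Q = 1e-4 * np.eye(2); R = np.array([[1e-4]])
x, P = np.array([1.0, 0.5]), np.eye(2)
true_bearing = np.arctan2(1.0, 1.0)      # target actually at (1, 1)
for _ in range(50):
    x, P = ekf_step(x, P, np.array([true_bearing]), F, Q, R, h, H_jac)
assert abs(np.arctan2(x[1], x[0]) - true_bearing) < 1e-3
```

The matrix inversion in the gain computation is one of the costs the cognition-based approach aims to reduce.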

474

A mesh-decoupled height function method for computing interface curvature

NASA Astrophysics Data System (ADS)

In this paper, a mesh-decoupled height function method is proposed and tested. The method is based on computing height functions within columns that are not aligned with the underlying mesh and have variable dimensions. Because they are decoupled from the computational mesh, the columns can be aligned with the interface normal vector, which is found to improve the curvature calculation for under-resolved interfaces where the standard height function method often fails. A computational geometry toolbox is used to compute the heights in the complex geometry that is formed at the intersection of the computational mesh and the columns. The toolbox reduces the complexity of the problem to a series of straightforward geometric operations using simplices. The proposed scheme is shown to compute more accurate curvatures than the standard height function method on coarse meshes. A combined method that uses the standard height function where it is well defined and the proposed scheme in under-resolved regions is tested. This approach achieves accurate and robust curvatures for under-resolved interface features and second-order converging curvatures for well-resolved interfaces.

Owkes, Mark; Desjardins, Olivier

2015-01-01
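
The standard height-function curvature formula that the proposed method builds on can be illustrated directly. Heights are sampled here from a circle, so the exact curvature 1/R is known; the function name is illustrative:

```python
import numpy as np

def curvature_from_heights(h, dx):
    """Standard height-function curvature estimate: given interface
    heights in three adjacent columns of width dx, compute
    kappa = h'' / (1 + h'^2)^(3/2) with central differences."""
    hp = (h[2] - h[0]) / (2 * dx)            # first derivative h'
    hpp = (h[2] - 2 * h[1] + h[0]) / dx**2   # second derivative h''
    return hpp / (1 + hp**2) ** 1.5

# Sample heights from a circle of radius R; exact curvature is 1/R.
R, dx = 2.0, 0.05
xs = np.array([-dx, 0.0, dx])
h = R - np.sqrt(R**2 - xs**2)                # circular interface y(x)
kappa = curvature_from_heights(h, dx)
assert abs(kappa - 1.0 / R) < 1e-3
```

The mesh-decoupled method of the abstract generalizes where these column heights come from: the columns are aligned with the interface normal rather than with the grid.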

475

A Review of Computational Methods in Materials Science: Examples from Shock-Wave and Polymer Physics

This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiscales is discussed. Then, computational methods used on different scales are briefly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey in a tutorial-like fashion some key issues including several MD optimization techniques. Thereafter, computational examples for the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock wave simulations of a solid which are based on two different modeling approaches, and we discuss their respective assets and drawbacks with a view to their application on multiscales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures including star-polymers, biomacromolecules such as polyelectrolytes, and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering, where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467

Steinhauser, Martin O.; Hiermaier, Stefan

2009-01-01

476

Federal Register 2010, 2011, 2012, 2013, 2014

...Health/ National Science Foundation Public Workshop on Computer Methods for Medical...device. The use of computer models to simulate...Nonproprietary computer models could benchmark...to foster good science for M&S in...

2013-04-05

477

NASA Technical Reports Server (NTRS)

A manipulator and its control system (modeled after a Stanford design) are being developed as part of an artificial intelligence project. This development includes an analytical study of the control-system software. A comparison is presented of the computed-torque method and the conventional position servo. No conclusion is drawn as to the preference of one system over the other, as this depends on the application and on the results of a sampled-data analysis.

Markiewicz, B. R.

1973-01-01

478

Background Photographs are an effective way to collect detailed and objective information about the environment, particularly for public health surveillance. However, accurately and reliably annotating (ie, extracting information from) photographs remains difficult, a critical bottleneck inhibiting the use of photographs for systematic surveillance. The advent of distributed human computation (ie, crowdsourcing) platforms represents a veritable breakthrough, making it possible for the first time to accurately, quickly, and repeatedly annotate photos at relatively low cost. Objective This paper describes a methods protocol, using photographs from point-of-sale surveillance studies in the field of tobacco control to demonstrate the development and testing of custom-built tools that can greatly enhance the quality of crowdsourced annotation. Methods Enhancing the quality of crowdsourced photo annotation requires a number of approaches and tools. The crowdsourced photo annotation process is greatly simplified by decomposing the overall process into smaller tasks, which improves accuracy and speed and enables adaptive processing, in which irrelevant data is filtered out and more difficult targets receive increased scrutiny. Additionally, zoom tools enable users to see details within photographs and crop tools highlight where within an image a specific object of interest is found, generating a set of photographs that answer specific questions. Beyond such tools, optimizing the number of raters (ie, crowd size) for accuracy and reliability is an important facet of crowdsourced photo annotation. This can be determined in a systematic manner based on the difficulty of the task and the desired level of accuracy, using receiver operating characteristic (ROC) analyses. Usability tests of the zoom and crop tool suggest that these tools significantly improve annotation accuracy. 
The tests asked raters to extract data from photographs, not for the purposes of assessing the quality of that data, but rather to assess the usefulness of the tool. The proportion of individuals accurately identifying the presence of a specific advertisement was higher when provided with pictures of the product’s logo and an example of the ad, and even higher when also provided the zoom tool (χ²(2)=155.7, P<.001). Similarly, when provided cropped images, a significantly greater proportion of respondents accurately identified the presence of cigarette product ads (χ²(1)=75.14, P<.001), as well as reported being able to read prices (χ²(2)=227.6, P<.001). Comparing the results of crowdsourced photo-only assessments to traditional field survey data, an excellent level of correspondence was found, with area under the ROC curves produced by sensitivity analyses averaging over 0.95, requiring on average 10 to 15 crowdsourced raters to achieve values of over 0.90. Results Further testing and improvement of these tools and processes is currently underway. This includes conducting systematic evaluations that crowdsource photograph annotation and methodically assess the quality of raters’ work. Conclusions Overall, the combination of crowdsourcing technologies with tiered data flow and tools that enhance annotation quality represents a breakthrough solution to the problem of photograph annotation, vastly expanding opportunities for the use of photographs rich in public health and other data on a scale previously unimaginable. PMID:24717168

Tacelosky, Michael; Ivey, Keith C; Pearson, Jennifer L; Cantrell, Jennifer; Vallone, Donna M; Abrams, David B; Kirchner, Thomas R

2014-01-01
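
The relationship between crowd size and annotation accuracy can be sketched with a simple independent-rater majority-vote model. This is an illustrative calculation under an independence assumption, not the ROC methodology used in the study:

```python
from math import comb

def majority_accuracy(p, k):
    """Probability that a majority of k independent raters (each
    correct with probability p) returns the correct annotation.
    Ties count as incorrect, so odd k is used in practice."""
    m = k // 2 + 1
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(m, k + 1))

def raters_needed(p, target, max_k=101):
    """Smallest odd crowd size whose majority vote reaches the
    target accuracy, assuming independent raters of accuracy p."""
    for k in range(1, max_k + 1, 2):
        if majority_accuracy(p, k) >= target:
            return k
    return None

assert majority_accuracy(0.75, 1) == 0.75
# With 75%-accurate raters, a modest crowd pushes accuracy past 0.95,
# consistent in spirit with the 10-15 raters reported above.
assert raters_needed(0.75, 0.95) <= 15
```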

479

A Computationally Efficient Meshless Local Petrov-Galerkin Method for Axisymmetric Problems

NASA Technical Reports Server (NTRS)

The Meshless Local Petrov-Galerkin (MLPG) method is one of the recently developed element-free methods. The method is convenient and can produce accurate results with continuous secondary variables, but is more computationally expensive than the finite element method. To overcome this disadvantage, a simple Heaviside test function is chosen. The computational effort is significantly reduced by eliminating the domain integral for the axisymmetric potential problems and by simplifying the domain integral for the axisymmetric elasticity problems. The method is evaluated through several patch tests for axisymmetric problems and example problems for which the exact solutions are available. The present method yielded very accurate solutions. The sensitivity of several parameters of the method is also studied.

Raju, I. S.; Chen, T.

2003-01-01

480

This report presents "reference" computations that can be used by safety analysts in the evaluation of the consequences of postulated atmospheric releases of radionuclides from the Rocky Flats Environmental Technology Site. These computations deal specifically with doses and health risks to the public. The radionuclides considered are Class W Plutonium, all classes of Enriched Uranium, and all classes of Depleted Uranium. (The other class of plutonium, Y, was treated in an earlier report.) In each case, one gram of the respirable material is assumed to be released at ground level, both with and without fire. The resulting doses and health risks can be scaled to whatever amount of release is appropriate for a postulated accident being investigated. The report begins with a summary of the organ-specific stochastic risk factors appropriate for alpha radiation, which poses the main health risk of plutonium and uranium. This is followed by a summary of the atmospheric dispersion factors for unfavorable and typical weather conditions for the calculation of consequences to both the Maximum Offsite Individual and the general population within 80 km (50 miles) of the site.

Peterson, V.L.

1995-06-06

481

Schistosomiasis, a group of parasitic diseases caused by Schistosoma parasites, is associated with water resources development and affects more than 200 million people in 76 countries. Depending on the species of parasite involved, disease of the liver, spleen, gastrointestinal or urinary tract, or kidneys may result. A computer-assisted teaching package has been developed by WHO for use in the training of public health workers involved in schistosomiasis control. The package consists of the software, ZOOM, and a schistosomiasis information file, Dr Schisto, and uses hypermedia technology to link pictures and text. ZOOM runs on the IBM-PC and IBM-compatible computers, is user-friendly, requires a minimal hardware configuration, and can interact with the user in English, French, Spanish or Portuguese. The information files for ZOOM can be created or modified by the instructor using a word processor, and thus can be designed to suit the need of students. No programming knowledge is required to create the stacks. PMID:1786618

Martin, G T; Yoon, S S; Mott, K E

1991-01-01

482

Computation of Point of Application of Seismic Passive Resistance by Pseudo-dynamic Method

Computation of the seismic passive resistance and its point of application is an important aspect of the seismic design of retaining walls. Several researchers in the past obtained seismic passive earth pressures by using the conventional pseudo-static method. In this pseudo-static method, the peak ground acceleration is assumed constant, and the seismic passive pressure thus obtained shows a linear variation along the

Sanjay S. Nimbalkar; Deepankar Choudhury

2008-01-01

483

Ab initio modeling of carbohydrates: on the proper selection of computational methods and basis sets

Technology Transfer Automated Retrieval System (TEKTRAN)

With the development of faster computer hardware and quantum mechanical software it has become more feasible to study large carbohydrate molecules via quantum mechanical methods. In the past, studies of carbohydrates were restricted to empirical/semiempirical methods and Hartree Fock. In the last ...

484

Monte Carlo Methods for Computation and Optimization (048715) Winter 2013/4

Monte Carlo Methods for Computation and Optimization (048715), Winter 2013/4. Lecture notes, Nahum Shimkin. PREFACE: These lecture notes are intended for a first, graduate-level course on Monte Carlo methods. Among the references are Simulation and the Monte Carlo Method, Wiley, 2008, and S. Asmussen and P. Glynn, Stochastic Simulation

Shimkin, Nahum
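
A minimal example of the plain Monte Carlo estimator such a course begins with, including the O(1/sqrt(n)) standard-error estimate that motivates the variance-reduction topics:

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b]:
    average f at n uniform samples and scale by the interval length.
    Returns the estimate and its estimated standard error."""
    rng = random.Random(seed)
    total = sq = 0.0
    for _ in range(n):
        y = f(a + (b - a) * rng.random())
        total += y
        sq += y * y
    mean = total / n
    var = sq / n - mean * mean          # sample variance of f(X)
    est = (b - a) * mean
    stderr = (b - a) * (var / n) ** 0.5  # shrinks as O(1/sqrt(n))
    return est, stderr

# Integral of x^2 over [0, 1] is exactly 1/3
est, se = mc_integrate(lambda x: x * x, 0.0, 1.0, 200_000)
assert abs(est - 1/3) < 0.01
```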

485

Monte Carlo methods designed for parallel computation. Sheldon B. Opps and Jeremy Schofield

Monte Carlo methods designed for parallel computation, Sheldon B. Opps and Jeremy Schofield. ... of these methods is that individual Monte Carlo chains, which are run on separate nodes, are coupled together ... rate calculation, for example to improve the statistics of a Monte Carlo simulation, one inherent bene

Schofield, Jeremy

486

Conjugate gradient methods for power system dynamic simulation on parallel computers

Parallel processing is a promising technology for the speedup of the dynamic simulations required in power system transient stability analysis. In this paper, three methods for dynamic simulation on parallel computers are described and compared. The methods are based on the concepts of spatial and/or time parallelization. In all of them, sets of linear algebraic equations are solved using different

I. C. Decker; D. M. Falcao; E. Kaszkurewicz

1996-01-01
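
For reference, the serial conjugate gradient iteration that such parallel schemes accelerate. The dominant per-step cost is the matrix-vector product, which is exactly the operation that spatial parallelization distributes across processors:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient iteration for A x = b with symmetric
    positive definite A. Each step needs one matrix-vector
    product plus a few vector updates and dot products."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                # the expensive (parallelizable) step
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol**2:
            break
        p = r + (rs_new / rs) * p # conjugate direction update
        rs = rs_new
    return x

# SPD test system: 1D tridiagonal Laplacian
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
assert np.linalg.norm(A @ x - b) < 1e-8
```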

487

Methods, systems, and computer program products for network firewall policy optimization

Methods, systems, and computer program products for firewall policy optimization are disclosed. According to one method, a firewall policy including an ordered list of firewall rules is defined. For each rule, a probability indicating a likelihood of receiving a packet matching the rule is determined. The rules are sorted in order of non-increasing probability in a manner that preserves the firewall policy.

Fulp, Errin W. (Winston-Salem, NC); Tarsa, Stephen J. (Duxbury, MA)

2011-10-18
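
The core idea, sorting rules toward non-increasing match probability while never reordering rules whose match sets intersect, can be sketched as follows. The port-range rule model and the adjacent-swap pass are illustrative simplifications, not the patented algorithm:

```python
def overlaps(r1, r2):
    """Two rules intersect if some packet (here, a port number) can
    match both; reordering intersecting rules could change which
    rule fires first and thus alter the policy."""
    return max(r1["lo"], r2["lo"]) <= min(r1["hi"], r2["hi"])

def optimize_policy(rules):
    """Sort firewall rules toward non-increasing match probability,
    swapping adjacent rules only when they do not intersect, so the
    set of packets accepted/denied by the policy is unchanged."""
    rules = list(rules)
    changed = True
    while changed:
        changed = False
        for i in range(len(rules) - 1):
            a, b = rules[i], rules[i + 1]
            if a["p"] < b["p"] and not overlaps(a, b):
                rules[i], rules[i + 1] = b, a
                changed = True
    return rules

# Hypothetical port-range rules with match probabilities
policy = [
    {"name": "A", "lo": 1,  "hi": 10, "p": 0.1},
    {"name": "B", "lo": 20, "hi": 30, "p": 0.7},
    {"name": "C", "lo": 5,  "hi": 15, "p": 0.9},  # intersects A
]
opt = optimize_policy(policy)
names = [r["name"] for r in opt]
assert names.index("A") < names.index("C")  # precedence preserved
assert names[0] == "B"                      # B floated to the front
```

Frequently matched rules move earlier in the list, reducing the expected number of comparisons per packet, while precedence among intersecting rules is preserved.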

488

A Coupled Discrete/Continuous Method for Computing Lattices. Application to a Masonry-Like Structure

A Coupled Discrete/Continuous Method for Computing Lattices and its application to a masonry-like structure. This method was proposed and validated in the case ... Keywords: Homogenization, Masonry, Interface, Coupling. hal-00668467, version 1, 9 Feb 2012. 1. Introduction: The aim

Paris-Sud XI, UniversitÃ© de

489

Latent Class Models for Diary Method Data: Parameter Estimation by Local Computations

ERIC Educational Resources Information Center

The increasing use of diary methods calls for the development of appropriate statistical methods. For the resulting panel data, latent Markov models can be used to model both individual differences and temporal dynamics. The computational burden associated with these models can be overcome by exploiting the conditional independence relations…

Rijmen, Frank; Vansteelandt, Kristof; De Boeck, Paul

2008-01-01

490

A numerical method is presented for the computation of unsteady, three-dimensional potential flows in hydraulic pumps and turbines. The superelement method has been extended in order to eliminate slave degrees of freedom not only from the governing Laplace equation, but also from the Kutta conditions. The resulting superelement formulation is invariant under rotation. Therefore the geometrical symmetry of the flow

N. P. Kruyt; Esch van B. P. M; J. B. Jonker

1999-01-01

491

Phenomenography and Grounded Theory as Research Methods in Computing Education Research Field

ERIC Educational Resources Information Center

This paper discusses two qualitative research methods, phenomenography and grounded theory. We introduce both methods' data collection and analysis processes and the types of results you may get at the end, using examples from computing education research. We highlight some of the similarities and differences between the aim, data collection and…

Kinnunen, Paivi; Simon, Beth

2012-01-01

492

A Concise Method for Computing Normal Curve Areas Using a Calculator.

ERIC Educational Resources Information Center

A concise method for computing areas under the normal curve using only functions typically found on desk and hand calculators is given. One version of this method gives three-decimal-place accuracy, and a second, simpler version gives accuracy to two places. (Author/JKS)

Coons, David F.

1978-01-01
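
Two calculator-friendly approximations of the kind described above can be sketched as follows. The specific formulas, a logistic fit and a Hastings-type polynomial (Abramowitz & Stegun 26.2.16), are standard choices and not necessarily the ones in the paper:

```python
from math import exp, pi, sqrt

def phi_simple(x):
    """Two-place approximation to the standard normal CDF using only
    the exponential key: a logistic curve (max error about 0.01)."""
    return 1.0 / (1.0 + exp(-1.702 * x))

def phi_hastings(x):
    """Hastings-type approximation (Abramowitz & Stegun 26.2.16):
    accurate to better than three decimal places for all x."""
    if x < 0:
        return 1.0 - phi_hastings(-x)
    t = 1.0 / (1.0 + 0.33267 * x)
    pdf = exp(-x * x / 2.0) / sqrt(2.0 * pi)  # normal density at x
    poly = t * (0.4361836 + t * (-0.1201676 + t * 0.9372980))
    return 1.0 - pdf * poly

assert abs(phi_hastings(0.0) - 0.5) < 1e-4
assert abs(phi_hastings(1.96) - 0.9750) < 1e-3
assert abs(phi_simple(1.0) - 0.8413) < 0.01
```

Both use only multiplication, division, and the exponential, which is what makes them practical on a basic calculator.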

493

Discontinuous Galerkin Methods for Computational Aerodynamics: 3D Adaptive Flow Simulation, an adaptive discontinuous Galerkin solver for 3D turbulent flow. In the following, we present the results; the discontinuous Galerkin method (DGM) has demonstrated its excellence in accurate, higher-order numerical

Hartmann, Ralf

494

Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

NASA Technical Reports Server (NTRS)

Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

Min, J. B.; Bass, J. M.; Spradley, L. W.

1994-01-01

495

FAST MARCHING METHOD TO CORRECT FOR REFRACTION IN ULTRASOUND COMPUTED TOMOGRAPHY

FAST MARCHING METHOD TO CORRECT FOR REFRACTION IN ULTRASOUND COMPUTED TOMOGRAPHY, Shengying Li. ... the interaction of sound with refractive media. UCT is susceptible to refraction effects, making it difficult to reconstruct images ... In this paper, we propose the use of the Fast Marching Method (FMM

Mueller, Klaus

496

A survey of computational and physical methods applied to solid-state fermentation

During the last decade, significant effort has been made to apply computational and physical methods to solid-state fermentation (SSF). This had positive impact both on our understanding of the basic principles underlying this old technology, and on the latest progress made in industrial bioengineering. Guidelines on bioreactor design and operation including scale-up, new methods for biomonitoring and advanced control strategies

J. Lenz; M. Höfer; J.-B. Krasenbrink; U. Hölker

2004-01-01