Sample records for computationally intensive part

  1. Single Crystal Diffractometry

    NASA Astrophysics Data System (ADS)

    Arndt, U. W.; Willis, B. T. M.

    2009-06-01

    Preface; Acknowledgements; Part I. Introduction; Part II. Diffraction Geometry; Part III. The Design of Diffractometers; Part IV. Detectors; Part V. Electronic Circuits; Part VI. The Production of the Primary Beam (X-rays); Part VII. The Production of the Primary Beam (Neutrons); Part VIII. The Background; Part IX. Systematic Errors in Measuring Relative Integrated Intensities; Part X. Procedure for Measuring Integrated Intensities; Part XI. Derivation and Accuracy of Structure Factors; Part XII. Computer Programs and On-line Control; Appendix; References; Index.

  2. Computational approaches to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The various techniques by which the goal of computational aeroacoustics (the calculation and prediction of the noise produced by a fluctuating fluid flow) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models, and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part, are presented. The choice of approach is shown to be problem dependent.

  3. Thermal Convection on an Irradiated Target

    NASA Astrophysics Data System (ADS)

    Mehmedagic, Igbal; Thangam, Siva

    2016-11-01

    The present work involves the computational modeling of metallic targets subject to steady and high intensity heat flux. The ablation and associated fluid dynamics that occur when metallic surfaces are exposed to high intensity laser fluence at normal atmospheric conditions are modelled. The incident energy from the laser is partly absorbed and partly reflected by the surface during ablation and subsequent vaporization of the melt. Computational findings based on effective representation and prediction of the heat transfer, melting and vaporization of the target material as well as plume formation and expansion are presented and discussed in the context of various ablation mechanisms, variable thermo-physical and optical properties, plume expansion and surface geometry. The energy distribution during the process between the bulk and vapor phase strongly depends on the optical and thermodynamic properties of the irradiated material, the radiation wavelength, and the laser intensity. The relevance of the findings to various manufacturing processes as well as to the development of protective shields is discussed. Funded in part by U. S. Army ARDEC, Picatinny Arsenal, NJ.

  4. Information Technology in Critical Care: Review of Monitoring and Data Acquisition Systems for Patient Care and Research

    PubMed Central

    De Georgia, Michael A.; Kaffashi, Farhad; Jacono, Frank J.; Loparo, Kenneth A.

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes. PMID:25734185

  5. Information technology in critical care: review of monitoring and data acquisition systems for patient care and research.

    PubMed

    De Georgia, Michael A; Kaffashi, Farhad; Jacono, Frank J; Loparo, Kenneth A

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes.

  6. Measuring and Estimating Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2013-01-01

    Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of the pixel intensity of image pixels. The technique involves recording the IR video image data and analyzing the data with the normalized pixel intensity and temperature contrast analysis method to characterize void-like flaws in terms of depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and the normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements made during data acquisition.
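
    A minimal sketch of this kind of contrast computation, using a generic baseline-subtracted running contrast rather than the specific definition introduced in the paper (the pre-flash frame count and the choice of a flaw-free reference region are assumptions):

      import numpy as np

      def normalized_contrast(defect_px, reference_px, pre_flash_frames=5):
          # defect_px, reference_px: 1-D arrays of pixel intensity vs. frame for a
          # pixel over a suspected flaw and over a nearby flaw-free region.
          # Subtract the pre-flash level so only the flash-induced rise is used.
          d = defect_px - defect_px[:pre_flash_frames].mean()
          r = reference_px - reference_px[:pre_flash_frames].mean()
          return (d - r) / np.clip(r, 1e-9, None)  # contrast relative to the reference rise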

  7. Race, gender, and information technology use: the new digital divide.

    PubMed

    Jackson, Linda A; Zhao, Yong; Kolenic, Anthony; Fitzgerald, Hiram E; Harold, Rena; Von Eye, Alexander

    2008-08-01

    This research examined race and gender differences in the intensity and nature of IT use and whether IT use predicted academic performance. A sample of 515 children (172 African Americans and 343 Caucasian Americans), average age 12 years old, completed surveys as part of their participation in the Children and Technology Project. Findings indicated race and gender differences in the intensity of IT use; African American males were the least intense users of computers and the Internet, and African American females were the most intense users of the Internet. Males, regardless of race, were the most intense videogame players, and females, regardless of race, were the most intense cell phone users. IT use predicted children's academic performance. Length of time using computers and the Internet was a positive predictor of academic performance, whereas amount of time spent playing videogames was a negative predictor. Implications of the findings for bringing IT to African American males and bringing African American males to IT are discussed.

  8. From Users to Choosers: Central IT and the Challenge of Consumer Choice

    ERIC Educational Resources Information Center

    Yanosky, Ronald

    2010-01-01

    Is the era of personal computing ending, or is it only just beginning? Certainly, cyberlife seems to have become more intensely personal over the last few years, partly because it has also become so much more social. The rise of the new consumer-oriented ubiquitous computing will reshape--and reduce--users' reliance on enterprise IT. Much of the…

  9. Perceived competence in computer use as a moderator of musculoskeletal strain in VDU work: an ergonomics intervention case.

    PubMed

    Tuomivaara, S; Ketola, R; Huuhtanen, P; Toivonen, R

    2008-02-01

    Musculoskeletal strain and other symptoms are common in visual display unit (VDU) work. Psychosocial factors are closely related to the outcome and experience of musculoskeletal strain. The user-computer relationship from the viewpoint of the quality of perceived competence in computer use was assessed as a psychosocial stress indicator. It was assumed that the perceived competence in computer use moderates the experience of musculoskeletal strain and the success of the ergonomics intervention. The participants (n = 124, female 58%, male 42%) worked with VDU for more than 4 h per week. They took part in an ergonomics intervention and were allocated into three groups: intensive; education; and reference group. Musculoskeletal strain, the level of ergonomics of the workstation assessed by the experts in ergonomics and amount of VDU work were estimated at the baseline and at the 10-month follow-up. Age, gender and the perceived competence in computer use were assessed at the baseline. The perceived competence in computer use predicted strain in the upper and the lower part of the body at the follow-up. The interaction effect shows that the intensive ergonomics intervention procedure was the most effective among participants with high perceived competence. The interpretation of the results was that an anxiety-provoking and stressful user-computer relationship prevented the participants from being motivated and from learning in the ergonomics intervention. In the intervention it is important to increase the computer competence along with the improvements of physical workstation and work organization.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aliaga, José I., E-mail: aliaga@uji.es; Alonso, Pedro; Badía, José M.

    We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced to a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to be computed on the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousand degrees of freedom, and when the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.
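
    The memory-bound Krylov stage described above is, in spirit, a Lanczos-type iteration whose cost is dominated by matrix-vector products (the kernel that a GPU offload targets). A plain NumPy sketch of such an iteration, for illustration only and not the authors' solver:

      import numpy as np

      def lanczos_ritz(matvec, n, k, seed=0):
          # Build a k x k tridiagonal matrix whose eigenvalues (Ritz values)
          # approximate extremal eigenvalues of the symmetric operator behind matvec.
          # Sketch only: no reorthogonalization, assumes no breakdown (beta stays nonzero).
          rng = np.random.default_rng(seed)
          Q = np.zeros((n, k + 1))
          alpha, beta = np.zeros(k), np.zeros(k)
          q = rng.standard_normal(n)
          Q[:, 0] = q / np.linalg.norm(q)
          for j in range(k):
              w = matvec(Q[:, j])              # dominant cost: one matvec per step
              if j > 0:
                  w -= beta[j - 1] * Q[:, j - 1]
              alpha[j] = Q[:, j] @ w
              w -= alpha[j] * Q[:, j]
              beta[j] = np.linalg.norm(w)
              Q[:, j + 1] = w / beta[j]
          T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
          return np.linalg.eigvalsh(T)

      # usage: lanczos_ritz(lambda v: A @ v, A.shape[0], k=50) for a symmetric matrix A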

  11. QPROT: Statistical method for testing differential expression using protein-level intensity data in label-free quantitative proteomics.

    PubMed

    Choi, Hyungwon; Kim, Sinae; Fermin, Damian; Tsou, Chih-Chiang; Nesvizhskii, Alexey I

    2015-11-03

    We introduce QPROT, a statistical framework and computational tool for differential protein expression analysis using protein intensity data. QPROT is an extension of the QSPEC suite, originally developed for spectral count data, adapted for the analysis of continuously measured protein-level intensity data. QPROT offers a new intensity normalization procedure and model-based differential expression analysis, both of which account for missing data. Determination of differential expression of each protein is based on a standardized Z-statistic computed from the posterior distribution of the log fold change parameter, guided by the false discovery rate estimated by a well-known Empirical Bayes method. We evaluated the classification performance of QPROT using the quantification calibration data from the clinical proteomic technology assessment for cancer (CPTAC) study and a recently published Escherichia coli benchmark dataset, with evaluation of FDR accuracy in the latter. QPROT is a statistical framework with an accompanying software tool for comparative quantitative proteomics analysis. It features various extensions of the QSPEC method originally built for spectral count data analysis, including probabilistic treatment of missing values in protein intensity data. With the increasing popularity of label-free quantitative proteomics data, the proposed method and accompanying software suite will be immediately useful for many proteomics laboratories. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
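
    A toy illustration of the posterior-based test statistic described above: given posterior samples of each protein's log fold change, form a standardized Z statistic and an FDR estimate. Here Benjamini-Hochberg is used as a simple stand-in for QPROT's empirical-Bayes FDR; none of this is the QPROT code itself:

      import numpy as np
      from scipy.stats import norm

      def differential_z(log_fc_samples):
          # log_fc_samples: array of shape (n_proteins, n_posterior_samples)
          z = log_fc_samples.mean(axis=1) / log_fc_samples.std(axis=1, ddof=1)
          p = 2 * norm.sf(np.abs(z))                     # two-sided tail probabilities
          order = np.argsort(p)
          m = len(p)
          adj = p[order] * m / np.arange(1, m + 1)       # Benjamini-Hochberg adjustment
          q_sorted = np.minimum.accumulate(adj[::-1])[::-1]
          q = np.empty(m)
          q[order] = np.clip(q_sorted, 0.0, 1.0)
          return z, q                                    # statistic and q-value per protein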

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D. P.

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  13. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    PubMed Central

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    Through reorganizing the execution order and optimizing the data structure, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we proposed serial optimization methods, including the multiresolution multiwindow for motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program with a speedup of about 20 times and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB when keeping the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is closely related to the memory bandwidth, which gives an insight for new architecture design. PMID:24757432

  14. CFD Simulations for the Effect of Unsteady Wakes on the Boundary Layer of a Highly Loaded Low-Pressure Turbine Airfoil (L1A)

    NASA Technical Reports Server (NTRS)

    Vinci, Samuel, J.

    2012-01-01

    This report is the third part of a three-part final report of research performed under an NRA cooperative agreement contract. The first part was published as NASA/CR-2012-217415. The second part was published as NASA/CR-2012-217416. The study of the very high lift low-pressure turbine airfoil L1A in the presence of unsteady wakes was performed computationally and compared against experimental results. The experiments were conducted in a low speed wind tunnel under high (4.9%) and then low (0.6%) freestream turbulence intensity for Reynolds numbers equal to 25,000 and 50,000. The experimental and computational data have shown that in cases without wakes, the boundary layer separated without reattachment. The CFD was done with LES and URANS utilizing the finite-volume code ANSYS Fluent (ANSYS, Inc.) under the same freestream turbulence and Reynolds number conditions as the experiment, but only at a rod-to-blade spacing of 1. With wakes, separation was largely suppressed, particularly if the wake passing frequency was sufficiently high. This was validated in the 3D CFD efforts by comparing against the experimental results for the pressure coefficients and velocity profiles; the agreement was reasonable for all cases examined. The 2D CFD efforts failed to capture the three-dimensionality effects of the wake and thus were less consistent with the experimental data. The effect of the freestream turbulence intensity levels also showed somewhat more consistency with the experimental data at higher intensities than in the low intensity cases. Additional cases with higher wake passing frequencies, which were not run experimentally, were simulated. The results showed that an initial 25% increase over the experimental wake passing frequency greatly reduced the size of the separation bubble, nearly completely suppressing it.

  15. Technicians for Intelligent Buildings. Final Report.

    ERIC Educational Resources Information Center

    Prescott, Carolyn; Thomson, Ron

    "Intelligent building" is a term that has been coined in recent years to describe buildings in which computer technology is intensely applied in two areas of building operations: control systems and shared tenant services. This two-part study provides an overview of the intelligent building industry and reports on issues related to the…

  16. CAI and Developmental Education.

    ERIC Educational Resources Information Center

    Anderson, Rick

    This paper discusses the problems and achievements of computer assisted instruction (CAI) projects at University College, University of Cincinnati. The most intensive use of CAI on campus, the CAI Lab, is part of the Developmental Education Center's effort to serve students who lack mastery of basic college-level skills in mathematics and English.…

  17. Implementation of High-Order Multireference Coupled-Cluster Methods on Intel Many Integrated Core Architecture.

    PubMed

    Aprà, E; Kowalski, K

    2016-03-08

    In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.
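
    The task-reordering idea mentioned above can be illustrated with a generic longest-processing-time-first schedule: sort the task cost estimates in decreasing order and always hand the next task to the least-loaded worker. This is a sketch of the load-balancing principle only, not the actual MRCCSD(T) scheduler:

      import heapq

      def lpt_schedule(task_costs, n_workers):
          # Greedy longest-processing-time-first assignment of tasks to workers.
          loads = [(0.0, w) for w in range(n_workers)]   # (current load, worker id)
          heapq.heapify(loads)
          assignment = {w: [] for w in range(n_workers)}
          for tid, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
              load, w = heapq.heappop(loads)             # least-loaded worker so far
              assignment[w].append(tid)
              heapq.heappush(loads, (load + cost, w))
          return assignment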

  18. Mechanics of Brittle Materials. Part 1. Preliminary Mechanical Properties and Statistical Representations

    DTIC Science & Technology

    1973-10-01

    intensity computation are shown in Figure 17. Using the same formal procedure outlined by Winne & Wundt, a notch geometry can be chosen to induce... Nitride at Elevated Temperatures. Winne, D.H. and Wundt, B.M., "Application of the Griffith-Irwin Theory of Crack Propagation to the Bursting Behavior

  19. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  20. Combination of the discontinuous Galerkin method with finite differences for simulation of seismic wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lisitsa, Vadim, E-mail: lisitsavv@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk; Tcheverda, Vladimir

    We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near surface part and free surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general, they require rectangular grids, leading to the stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.
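
    For illustration, the finite-difference half of such a hybrid scheme reduces, in one dimension, to the familiar second-order leapfrog update below (the discontinuous Galerkin part that handles the complex free surface is not shown; stability requires c*dt/dx <= 1):

      import numpy as np

      def fd_wave_1d(c, dx, dt, nt, n=400):
          # Leapfrog update for the 1-D acoustic wave equation on a regular grid.
          u_prev, u = np.zeros(n), np.zeros(n)
          u[n // 2] = 1.0                                # initial point disturbance
          r2 = (c * dt / dx) ** 2
          for _ in range(nt):
              u_next = np.zeros(n)
              u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                              + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
              u_prev, u = u, u_next                      # advance one time step
          return u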

  1. Scientific Services on the Cloud

    NASA Astrophysics Data System (ADS)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first applications of parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  2. Computational Investigation of Soot and Radiation in Turbulent Reacting Flows

    NASA Astrophysics Data System (ADS)

    Lalit, Harshad

    This study delves into computational modeling of soot and infrared radiation for turbulent reacting flows, detailed understanding of both of which is paramount in the design of cleaner engines and pollution control. In the first part of the study, the concept of Stochastic Time and Space Series Analysis (STASS) as a numerical tool to compute time dependent statistics of radiation intensity is introduced for a turbulent premixed flame. In the absence of high fidelity codes for large eddy simulation or direct numerical simulation of turbulent flames, the utility of STASS for radiation imaging of reacting flows to understand the flame structure is assessed by generating images of infrared radiation in spectral bands dominated by radiation from gas phase carbon dioxide and water vapor using an assumed PDF method. The study elucidates the need for time dependent computation of radiation intensity for validation with experiments and the need for accounting for turbulence radiation interactions for correctly predicting radiation intensity and consequently the flame temperature and NOx in a reacting fluid flow. Comparison of single point statistics of infrared radiation intensity with measurements show that STASS can not only predict the flame structure but also estimate the dynamics of thermochemical scalars in the flame with reasonable accuracy. While a time series is used to generate realizations of thermochemical scalars in the first part of the study, in the second part, instantaneous realizations of resolved scale temperature, CO2 and H2O mole fractions and soot volume fractions are extracted from a large eddy simulation (LES) to carry out quantitative imaging of radiation intensity (QIRI) for a turbulent soot generating ethylene diffusion flame. A primary motivation of the study is to establish QIRI as a computational tool for validation of soot models, especially in the absence of conventional flow field and measured scalar data for sooting flames. Realizations of scalars from the LES are used in conjunction with the radiation heat transfer equation and a narrow band radiation model to compute time dependent and time averaged images of infrared radiation intensity in spectral bands corresponding to molecular radiation from gas phase carbon dioxide and soot particles exclusively. While qualitative and quantitative comparisons with measured images in the CO2 radiation band show that the flame structure is correctly computed, images computed in the soot radiation band illustrate that the soot volume fraction is under predicted by the computations. The effect of the soot model and cause of under prediction is investigated further by correcting the soot volume fraction using an empirical state relationship. By comparing default simulations with computations using the state relation, it is shown that while the soot model under-estimates the soot concentration, it correctly computes the intermittency of soot in the flame. The study of sooting flames is extended further by performing a parametric analysis of physical and numerical parameters that affect soot formation and transport in two laboratory scale turbulent sooting flames, one fueled by natural gas and the other by ethylene. The study is focused on investigating the effect of molecular diffusion of species, dilution of fuel with hydrogen gas and the effect of chemical reaction mechanism on the soot concentration in the flame. 
The effect of species Lewis numbers on soot evolution and transport is investigated by carrying out simulations, first with the default equal diffusivity (ED) assumption and then by incorporating a differential diffusion (DD) model. Computations using the DD model over-estimate the concentration of the soot precursor and soot oxidizer species, leading to inconsistencies in the estimate of the soot concentration. The linear differential diffusion (LDD) model, reported previously to consistently model differential diffusion effects is implemented to correct the over prediction effect of the DD model. It is shown that the effect of species Lewis number on soot evolution is a secondary phenomenon and that soot is primarily transported by advection of the fluid in a turbulent flame. The effect of hydrogen dilution on the soot formation and transport process is also studied. It is noted that the decay of soot volume fraction and flame length with hydrogen addition follows trends observed in laminar sooting flame measurements. While hydrogen enhances mixing shown by the laminar flamelet solutions, the mixing effect does not significantly contribute to differential molecular diffusion effects in the soot nucleation regions downstream of the flame and has a negligible effect on soot transport. The sensitivity of computations of soot volume fraction towards the chemical reaction mechanism is shown. It is concluded that modeling reaction pathways of C3 and C4 species that lead up to Polycyclic Aromatic Hydrocarbon (PAH) molecule formation is paramount for accurate predictions of soot in the flame. (Abstract shortened by ProQuest.).

  3. Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak

    2011-01-01

    This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s era shock-tube and constricted-arc experimental cases. It is shown that experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T > 13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength dependent and wavelength integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations. 
Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that this existing data is not sufficient for the present uncertainty analysis. Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.

  4. Computational Modeling of Ablation on an Irradiated Target

    NASA Astrophysics Data System (ADS)

    Mehmedagic, Igbal; Thangam, Siva

    2017-11-01

    Computational modeling of pulsed nanosecond laser interaction with an irradiated metallic target is presented. The model formulation involves ablation of the metallic target irradiated by pulsed high intensity laser at normal atmospheric conditions. Computational findings based on effective representation and prediction of the heat transfer, melting and vaporization of the targeting material as well as plume formation and expansion are presented along with its relevance for the development of protective shields. In this context, the available results for a representative irradiation from 1064 nm laser pulse is used to analyze various ablation mechanisms, variable thermo-physical and optical properties, plume expansion and surface geometry. Funded in part by U. S. Army ARDEC, Picatinny Arsenal, NJ.

  5. Efficient computation of the Grünwald-Letnikov fractional diffusion derivative using adaptive time step memory

    NASA Astrophysics Data System (ADS)

    MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.

    2015-09-01

    Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy, but with fewer points actually calculated, greatly improving computational efficiency.
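
    For reference, the full-memory Grünwald-Letnikov approximation that the adaptive-memory method accelerates can be written in a few lines (this sketch keeps the entire history; the paper's contribution, thinning the older part of the history, is not reproduced here):

      import numpy as np

      def gl_derivative(f_hist, alpha, h):
          # Grünwald-Letnikov derivative of order alpha at the latest time point,
          # given the uniformly sampled history f_hist = [f(t0), ..., f(tn)], step h.
          n = len(f_hist) - 1
          w = np.empty(n + 1)
          w[0] = 1.0
          for k in range(1, n + 1):                      # w[k] = (-1)^k * binom(alpha, k)
              w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
          return h ** (-alpha) * np.dot(w, np.asarray(f_hist)[::-1])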

  6. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a highspeed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  7. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  8. A world-wide databridge supported by a commercial cloud provider

    NASA Astrophysics Data System (ADS)

    Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio

    2017-10-01

    Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU intensive applications that are typically used by other volunteer computing projects. While the so-called databridge has already been successfully proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.

  9. Impact of Computer Related Posture on the Occurrence of Musculoskeletal Discomfort among Secondary School Students in Lagos, Nigeria.

    PubMed

    Odebiyi, D O; Olawale, O A; Adeniji, Y M

    2013-01-01

    Computers have become an essential part of life, particularly in industrially advanced countries of the world. Children now have greater accessibility to computers both at school and at home. Recent studies suggest that with this increased exposure, there are associated musculoskeletal disorders (MSDs) in both school-aged children and adults. The aim was to assess the posture assumed by secondary school students during computer use and its impact on the occurrence and severity of reported musculoskeletal discomforts. The posture assumed during a normal computer class, the occurrence of discomforts, the body parts involved and the intensity of discomforts were evaluated in 235 school-aged children using the Rapid Upper Limb Assessment (RULA) scale, the Body Discomfort Chart (BDC) and the Visual Analogue Scale (VAS) before and after a normal computer class. Inferential statistics (t-test and chi-square) were used to determine significant differences between variables, with the level of significance set at p < 0.05. None of the participants demonstrated acceptable posture. Computer use produced significant discomforts in the neck, shoulder and low back. There was a significant relationship between participants' height and the posture assumed. Two hundred and eleven (89.8%) participants reported discomforts/pain during computer use. Weight and height were contributory factors to the occurrence of musculoskeletal discomfort/pain (p < 0.05) in some of the body parts studied. Musculoskeletal discomfort was found to be a problem among the school-aged children during computer use. Weight and height were implicated as factors that influenced the form of posture and the nature of the reported discomfort. Creating awareness of ergonomics and safety to promote good posture was therefore recommended.

  10. a Linux PC Cluster for Lattice QCD with Exact Chiral Symmetry

    NASA Astrophysics Data System (ADS)

    Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren

    A computational system for lattice QCD with overlap Dirac quarks is described. The platform is a home-made Linux PC cluster, built with off-the-shelf components. At present the system consists of 64 nodes, with each node consisting of one Pentium 4 processor (1.6/2.0/2.5 GHz), one Gbyte of PC800/1066 RDRAM, one 40/80/120 Gbyte hard disk, and a network card. The computationally intensive parts of our program are written in SSE2 code. The speed of our system is estimated to be 70 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched QCD. We discuss how to optimize its hardware and software for computing propagators of overlap Dirac quarks.
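
    A rough reading of the quoted figures (an inference from the numbers in the abstract only, not a reported budget): at better than $1.0/Mflops,

      \[
        70\ \text{Gflops} \times \frac{\$1.0}{\text{Mflops}}
          = 7\times 10^{4}\ \text{Mflops} \times \frac{\$1.0}{\text{Mflops}}
          = \$70{,}000,
      \]

    so the 64-node cluster would have cost at most roughly $70k in total, i.e. on the order of $1k per node.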

  11. Virtual reality and brain computer interface in neurorehabilitation

    PubMed Central

    Dahdah, Marie; Driver, Simon; Parsons, Thomas D.; Richter, Kathleen M.

    2016-01-01

    The potential benefit of technology to enhance recovery after central nervous system injuries is an area of increasing interest and exploration. The primary emphasis to date has been motor recovery/augmentation and communication. This paper introduces two original studies to demonstrate how advanced technology may be integrated into subacute rehabilitation. The first study addresses the feasibility of brain computer interface with patients on an inpatient spinal cord injury unit. The second study explores the validity of two virtual environments with acquired brain injury as part of an intensive outpatient neurorehabilitation program. These preliminary studies support the feasibility of advanced technologies in the subacute stage of neurorehabilitation. These modalities were well tolerated by participants and could be incorporated into patients' inpatient and outpatient rehabilitation regimens without schedule disruptions. This paper expands the limited literature base regarding the use of advanced technologies in the early stages of recovery for neurorehabilitation populations and speaks favorably to the potential integration of brain computer interface and virtual reality technologies as part of a multidisciplinary treatment program. PMID:27034541

  12. Full-tree utilization of southern pine and hardwoods growing on southern pine sites

    Treesearch

    Peter Koch

    1974-01-01

    In 1963, approximately 30 percent of the dry weight of above- and below-ground parts of southern pine trees ended as dry-surfaced lumber or paper; the remaining 70 percent was largely unused. By 1980, computer-controlled chipping headrigs, thin-kerf saws, lamination of lumber from rotary-cut veneer, high-yield pulping processes, and more intensive use of roots, bark,...

  13. Please Reduce Cycle Time

    DTIC Science & Technology

    2014-12-01

    observed an ERP system implementation that encountered this exact model. The modified COTS software worked and passed the acceptance tests but never... software-intensive program. We decided to create a very detailed master schedule with multiple supporting subschedules that linked and... Implementing ...processes in place as part of the COTS implementation. For hardware, COTS can also present some risks. Many programs use COTS computers and servers

  14. Whole-tree utilization of southern pine advanced by developments in mechanical conversion

    Treesearch

    P. Koch

    1973-01-01

    In 1963 approximately 30 percent of the dry weight of above- and below-ground parts of southern pine trees ended as dry-surfaced lumber or paper; the remaining 70 percent was largely unused. By 1980, computer-controlled chipping headrigs, thin-kerf saws, lamination of lumber from rotary-cut veneer, high-yield pulping processes, and more intensive use of roots, bark,...

  15. Whole-tree utilization of southern pine advanced by developments in mechanical conversion

    Treesearch

    Peter Koch

    1973-01-01

    In 1963 approximately 30 percent of the dry weight of above- and below-ground parts of southern pine trees ended as dry-surfaced lumber or paper; the remaining 70 percent was largely unused. By 1980, computer-controlled chipping headrigs, thin-kerf saws, lamination of lumber from rotary-cut veneer, high-yield pulping processes, and more intensive use of roots, bark,...

  16. Numerical characteristics of quantum computer simulation

    NASA Astrophysics Data System (ADS)

    Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.

    2016-12-01

    The simulation of quantum circuits is important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality, so the use of modern high-performance parallel computation is essential. As is well known, arbitrary quantum computation in the circuit model can be performed with only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We show how the unique properties of quantum mechanics lead to the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability, and more specific dynamic characteristics) parts. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for research on and testing of development methods for data-intensive parallel software, and that the considered analysis methodology can be used to improve algorithms in quantum information science.
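
    The highly parallel structure of single-qubit gate simulation mentioned above can be seen in a few lines of NumPy: applying a 2x2 unitary to one qubit of an n-qubit state vector is a small tensor contraction repeated over all other amplitudes. A sketch (qubit 0 taken as the most significant index bit; not the authors' code):

      import numpy as np

      def apply_single_qubit_gate(state, gate, q, n):
          # Apply a 2x2 unitary `gate` to qubit q of an n-qubit state vector (length 2**n).
          psi = state.reshape([2] * n)
          psi = np.moveaxis(psi, q, 0)                    # expose the target-qubit axis
          psi = np.tensordot(gate, psi, axes=([1], [0]))  # contract the gate with that axis
          psi = np.moveaxis(psi, 0, q)
          return psi.reshape(-1)

      # example: Hadamard on qubit 1 of a 3-qubit register initialized to |000>
      H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
      state = np.zeros(8); state[0] = 1.0
      state = apply_single_qubit_gate(state, H, q=1, n=3)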

  17. Calculation of stress intensity factors in an isotropic multicracked plate: Part 2: Symbolic/numeric implementation

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Binienda, W. K.; Tan, H. Q.; Xu, M. H.

    1992-01-01

    Analytical derivations of stress intensity factors (SIF's) of a multicracked plate can be complex and tedious. Recent advances, however, in intelligent application of symbolic computation can overcome these difficulties and provide the means to rigorously and efficiently analyze this class of problems. Here, the symbolic algorithm required to implement the methodology described in Part 1 is presented. The special problem-oriented symbolic functions to derive the fundamental kernels are described, and the associated automatically generated FORTRAN subroutines are given. As a result, a symbolic/FORTRAN package named SYMFRAC, capable of providing accurate SIF's at each crack tip, was developed and validated. Simple illustrative examples using SYMFRAC show the potential of the present approach for predicting the macrocrack propagation path due to existing microcracks in the vicinity of a macrocrack tip, when the influence of the microcrack's location, orientation, size, and interaction are taken into account.
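
    The symbolic-derivation-to-FORTRAN workflow described above can be illustrated with generic SymPy (this is not SYMFRAC; the toy "kernel" expression is invented purely for illustration):

      import sympy as sp

      # Derive a toy kernel expression symbolically, simplify it, and emit Fortran source.
      x, a = sp.symbols('x a', positive=True)
      kernel = sp.simplify(sp.diff(sp.sqrt(a**2 - x**2), x))
      print(sp.fcode(kernel, assign_to='dkdx', source_format='free'))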

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolic, R J

    This month's issue has the following articles: (1) Dawn of a New Era of Scientific Discovery - Commentary by Edward I. Moses; (2) At the Frontiers of Fundamental Science Research - Collaborators from national laboratories, universities, and international organizations are using the National Ignition Facility to probe key fundamental science questions; (3) Livermore Responds to Crisis in Post-Earthquake Japan - More than 70 Laboratory scientists provided round-the-clock expertise in radionuclide analysis and atmospheric dispersion modeling as part of the nation's support to Japan following the March 2011 earthquake and nuclear accident; (4) A Comprehensive Resource for Modeling, Simulation, and Experiments - A new Web-based resource called MIDAS is a central repository for material properties, experimental data, and computer models; and (5) Finding Data Needles in Gigabit Haystacks - Livermore computer scientists have developed a novel computer architecture based on 'persistent' memory to ease data-intensive computations.

  19. Cumulative Effects of Human Activities on Marine Mammal Populations

    DTIC Science & Technology

    2015-09-30

    marine mammals and sea turtles. She has studied habitat use of whales and dolphins, underwater sound levels and environmental impacts of offshore wind ... turbines on marine mammals, and migration pathways and hot spots of marine predators at the National Oceanic and Atmospheric Administration as part...distribution of wild animal and plant populations, and the use of computer-intensive methods to fit and compare stochastic models of wildlife population

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, I.M.; Pichakhchi, L.D.

    It is shown that the emission spectrum of T Tauri stars with anomalous continuous radiation in the ultraviolet can be explained by assuming that it is a negative absorption spectrum of hydrogen excited by synchrotron radiation of great intensity in a small part of the star's atmosphere--in its active zone. A method was also proposed for the determination of the spectrum of synchrotron radiation from the observed hydrogen emission spectrum. The intensity in the infrared part of the spectrum was determined from the broadening of the higher terms of the Balmer series that form the quasicontinuum, while the intensity in the ultraviolet was determined from hydrogen ionization. In the present study the distribution of hydrogen atoms among the excited levels in the field of such radiation is calculated using an electronic computer. The calculations show that the Balmer lines will in fact be observed in emission due to induced transitions, i.e., as a sequence of negative absorption lines. The considerable overpopulation of the upper levels is responsible for the small Balmer decrement and the appearance of anomalous emission in the ultraviolet and also for the increase in intensity of the latter when approaching the Balmer discontinuity. Thus the theory of the excitation of the emission spectrum of T Tauri stars is confirmed quantitatively. (auth)

  1. Stress Intensity Factors for Part-Through Surface Cracks in Hollow Cylinders

    NASA Technical Reports Server (NTRS)

    Mettu, Sambi R.; Raju, Ivatury S.; Forman, Royce G.

    1992-01-01

    Flaws resulting from improper welding and forging are usually modeled as cracks in flat plates, hollow cylinders or spheres. The stress intensity factor solutions for these crack cases are of great practical interest. This report describes some recent efforts at improving the stress intensity factor solutions for cracks in such geometries with emphasis on hollow cylinders. Specifically, two crack configurations for cylinders are documented. One is that of a surface crack in an axial plane and the other is a part-through thumb-nail crack in a circumferential plane. The case of a part-through surface crack in flat plates is used as a limiting case for very thin cylinders. A combination of the two cases for cylinders is used to derive a relation for the case of a surface crack in a sphere. Solutions were sought which cover the entire range of the geometrical parameters such as cylinder thickness, crack aspect ratio and crack depth. Both the internal and external position of the cracks are considered for cylinders and spheres. The finite element method was employed to obtain the basic solutions. Power-law form of loading was applied in the case of flat plates and axial cracks in cylinders and uniform tension and bending loads were applied in the case of circumferential (thumb-nail) cracks in cylinders. In the case of axial cracks, the results for tensile and bending loads were used as reference solutions in a weight function scheme so that the stress intensity factors could be computed for arbitrary stress gradients in the thickness direction. For circumferential cracks, since the crack front is not straight, the above technique could not be used. Hence for this case, only the tension and bending solutions are available at this time. The stress intensity factors from the finite element method were tabulated so that results for various geometric parameters such as crack depth-to-thickness ratio (a/t), crack aspect ratio (a/c) and internal radius-to-thickness ratio (R/t) or the crack length-to-width ratio (2c/W) could be obtained by interpolation and extrapolation. Such complete tables were then incorporated into the NASA/FLAGRO computer program which is widely used by the aerospace community for fracture mechanics analysis.
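
    The interpolation over tabulated geometric parameters described above amounts to a multidimensional table lookup; a sketch with placeholder values (the grid points and factors below are invented, not data from the report or NASA/FLAGRO):

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      a_t = np.array([0.2, 0.5, 0.8])     # crack depth / thickness
      a_c = np.array([0.4, 1.0])          # crack aspect ratio
      R_t = np.array([2.0, 10.0])         # internal radius / thickness
      F = np.random.default_rng(1).uniform(0.8, 1.6, size=(3, 2, 2))  # placeholder normalized SIFs
      interp = RegularGridInterpolator((a_t, a_c, R_t), F)
      K_norm = interp([[0.35, 0.7, 5.0]])[0]   # interpolated value between tabulated points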

  2. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
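
    The computational point of the abstract is that a high-dimensional rectangle probability of a multivariate normal can be evaluated directly rather than simulated. A toy illustration with an AR(1)-style correlation over 20 "visits" (the numbers are made up and this is not the paper's model):

      import numpy as np
      from scipy.stats import multivariate_normal

      d, rho = 20, 0.7
      idx = np.arange(d)
      cov = rho ** np.abs(np.subtract.outer(idx, idx))   # AR(1)-style correlation matrix
      upper = np.full(d, 0.5)
      # P(Z_1 <= 0.5, ..., Z_20 <= 0.5): a 20-dimensional integral evaluated directly
      p = multivariate_normal(mean=np.zeros(d), cov=cov).cdf(upper)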

  3. Ultraviolet continuum absorption /less than about 1000 A/ above the quiet sun transition region

    NASA Technical Reports Server (NTRS)

    Doschek, G. A.; Feldman, U.

    1982-01-01

    Lyman continuum absorption shortward of 912 A in the quiet sun solar transition region is investigated by combining spectra obtained from the Apollo Telescope Mount experiments on Skylab. The most recent atomic data are used to compute line intensities for lines that fall on both sides of the Lyman limit. Lines of O III, O IV, O V, and S IV are considered. The computed intensity ratios of most lines from O IV, O V, and S IV agree with the experimental ratios to within a factor of 2. However, the discrepancies show no apparent wavelength dependence. From this fact, it is concluded that at least part of the discrepancy between theory and observation for lines of these ions can be accounted for by uncertainties in instrumental calibration and atomic data. However, difficulties remain in reconciling observation and theory, particularly for lines of O III, and one line of S IV. The other recent results of Schmahl and Orrall (1979) are also discussed in terms of newer atomic data.

  4. Transforming parts of a differential equations system to difference equations as a method for run-time savings in NONMEM.

    PubMed

    Petersson, K J F; Friberg, L E; Karlsson, M O

    2010-10-01

    Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive and time consuming, and can account for a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equations system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole of, the differential equations system at given time intervals, outside of the differential equations solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equations solver, were less than 12% for all fixed effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation such as covariate inclusions or bootstraps.
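
    The sketch below is a generic Python illustration (not NONMEM/NM-TRAN code) of the idea: a slowly varying part of the right-hand side is evaluated only at coarse, pre-defined time intervals outside the solver and held fixed within each interval, instead of being recomputed at every internal solver step. The example dynamics and update grid are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def slow_part(t):
    """A slowly varying input, e.g. a covariate effect."""
    return 1.0 + 0.1 * np.sin(0.01 * t)

def make_rhs(frozen_slow):
    # the fast dynamics see the slow term as a constant within each interval
    return lambda t, y: -0.5 * y + frozen_slow

t_grid = np.linspace(0.0, 100.0, 21)   # coarse update times (difference-equation grid)
y = np.array([10.0])
for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
    sol = solve_ivp(make_rhs(slow_part(t0)), (t0, t1), y, rtol=1e-8)
    y = sol.y[:, -1]
print(float(y[0]))
```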

  5. Tutorial: Asteroseismic Stellar Modelling with AIMS

    NASA Astrophysics Data System (ADS)

    Lund, Mikkel N.; Reese, Daniel R.

    The goal of aims (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm—interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. aims is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
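
    A sketch of the interpolation step described above, on a synthetic two-parameter grid rather than an actual AIMS model grid: the grid is tessellated with a Delaunay triangulation and values are interpolated with barycentric coordinates inside the matching simplex.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
grid_params = rng.uniform(0.0, 1.0, size=(200, 2))                 # e.g. (mass, age) pairs
grid_values = np.sin(grid_params[:, 0]) + grid_params[:, 1] ** 2   # a model output per grid point

tri = Delaunay(grid_params)

def interpolate(x):
    """Linear barycentric interpolation of grid_values at point x."""
    x = np.asarray(x, dtype=float)
    s = int(tri.find_simplex(x))
    if s == -1:
        raise ValueError("point outside the model grid")
    ndim = grid_params.shape[1]
    T = tri.transform[s, :ndim, :]
    r = tri.transform[s, ndim, :]
    b = T @ (x - r)
    bary = np.append(b, 1.0 - b.sum())          # barycentric coordinates in the simplex
    return float(bary @ grid_values[tri.simplices[s]])

print(interpolate([0.4, 0.7]))
```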

  6. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as its preferred domain of application.
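
    A minimal sketch of how such feature maps might be fused into a saliency map, with synthetic maps standing in for real color/intensity/orientation conspicuity maps and optical flow; the "magnitude" variant uses speed only, while the "vector" variant also lets direction contrast contribute. The weights and the direction-contrast measure are illustrative assumptions.

```python
import numpy as np

def normalize(m):
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

h, w = 60, 80
rng = np.random.default_rng(0)
intensity_map = normalize(rng.random((h, w)))          # stand-in static conspicuity maps
color_map = normalize(rng.random((h, w)))
flow_x, flow_y = rng.normal(size=(h, w)), rng.normal(size=(h, w))  # stand-in optical flow

# "magnitude" variant: motion conspicuity from speed only
motion_mag = normalize(np.hypot(flow_x, flow_y))
# "vector" variant: deviation of local direction from the mean direction also contributes
direction = np.arctan2(flow_y, flow_x)
motion_dir = normalize(np.abs(np.angle(np.exp(1j * (direction - direction.mean())))))

saliency_magnitude = normalize(intensity_map + color_map + motion_mag)
saliency_vector = normalize(intensity_map + color_map + 0.5 * motion_mag + 0.5 * motion_dir)
print(saliency_magnitude.shape, saliency_vector.shape)
```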

  7. Applications in Data-Intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.

    2010-04-01

    This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

  8. On the Fast Evaluation Method of Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Planetary Atmospheres in Thermal IR and Microwave

    NASA Technical Reports Server (NTRS)

    Ustinov, E. A.

    1999-01-01

    Evaluation of weighting functions in atmospheric remote sensing is usually the most computer-intensive part of inversion algorithms. We present an analytic approach to computing temperature and mixing ratio weighting functions that is based on our previous results; the resulting expressions reuse intermediate variables that are generated in the computation of the observable radiances themselves. Upwelling radiances at the given level in the atmosphere and atmospheric transmittances from space to the given level are combined with local values of the total absorption coefficient and its components due to absorption by the atmospheric constituents under study. This makes it possible to evaluate the temperature and mixing ratio weighting functions in parallel with the evaluation of radiances. This substantially decreases the computer time required for evaluation of weighting functions. Implications for the nadir and limb viewing geometries are discussed.
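
    The following simplified, single-angle, non-scattering sketch illustrates the general idea of evaluating weighting functions in the same sweep as the radiance, reusing the per-layer transmittances and the upwelling radiance below each layer. The profiles, absorption coefficients and layer discretization are made-up examples, and this is not the paper's exact formulation.

```python
import numpy as np

nlay = 20
B = np.linspace(100.0, 60.0, nlay)   # layer Planck radiances, ordered surface -> top
k = np.full(nlay, 0.02)              # absorption coefficient per unit absorber amount
u = np.linspace(5.0, 1.0, nlay)      # absorber amount in each layer
t = np.exp(-k * u)                   # layer transmittances

I = 120.0                            # upwelling radiance leaving the surface
I_below = np.empty(nlay)             # upwelling radiance entering each layer from below
for i in range(nlay):
    I_below[i] = I
    I = I * t[i] + B[i] * (1.0 - t[i])   # radiative transfer through layer i
I_toa = I                            # radiance observed from space

# transmittance from space down to the bottom of each layer
T_down = np.cumprod(t[::-1])[::-1]

# mixing-ratio weighting function: dI_toa/du_i = k_i * T_down_i * (B_i - I_below_i)
W_q = k * T_down * (B - I_below)
# temperature weighting function (per unit Planck radiance): dI_toa/dB_i
W_T = (T_down / t) * (1.0 - t)

print(I_toa, W_q[:3], W_T[:3])
```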

  9. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for the error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading/interpreting certain drug labels were more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  10. Participation in the Apollo passive seismic experiment

    NASA Technical Reports Server (NTRS)

    Press, F.; Toksoez, M. N.; Dainty, A.

    1972-01-01

    Computer programs which were written to read digital tapes containing lunar seismic data were studied. Interpreting very early parts of the lunar seismogram as seismic body-wave phases enabled the determination of the structure of the outer part of the moon in the Fra Mauro region. The crust in the Fra Mauro region is 60 to 65 km-thick, overlaying a high velocity mantle. The crust is further divided into an upper part, 25 km thick, apparently made of material similar to the surficial basalts, and a lower part of seemingly different composition, possibly an anorthositic gabbro. The generation of the exceedingly long reverberating wave-train observed in lunar seismogram was also studied. This is believed to be due to an intense scattering layer with very high quality coefficient overlying a more homogeneous elastic medium. Titles and abstracts of related published papers are included.

  11. LANDSAT-1 data, its use in a soil survey program

    NASA Technical Reports Server (NTRS)

    Westin, F. C.; Frazee, C. J.

    1975-01-01

    The following applications of LANDSAT imagery were investigated: assistance in recognizing soil survey boundaries, low intensity soil surveys, and preparation of a base map for publishing thematic soils maps. The following characteristics of LANDSAT imagery were tested as they apply to the recognition of soil boundaries in South Dakota and western Minnesota: synoptic views due to the large areas covered, near-orthography and lack of distortion, flexibility of selecting the proper season, data recording in four parts of the spectrum, and the use of computer compatible tapes. A low intensity soil survey of Pennington County, South Dakota was completed in 1974. Low intensity inexpensive soil surveys can provide the data needed to evaluate agricultural land for the remaining counties until detailed soil surveys are completed. In using LANDSAT imagery as a base map for publishing thematic soil maps, the first step was to prepare a mosaic with 20 LANDSAT scenes from several late spring passes in 1973.

  12. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
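
    A small sketch of the two-process structure described above: a Bernoulli (prevalence) part and a Poisson (intensity) part whose product gives the unconditional intensity used for mapping. Covariates and coefficients are illustrative placeholders, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(5), rng.normal(size=(5, 2))])  # intercept + 2 covariates

beta_bernoulli = np.array([0.5, -1.0, 0.3])   # prevalence (logit scale) coefficients
beta_poisson = np.array([1.2, 0.4, -0.2])     # intensity (log scale) coefficients

prevalence = 1.0 / (1.0 + np.exp(-X @ beta_bernoulli))  # P(presence)
intensity = np.exp(X @ beta_poisson)                     # E[count | presence]
unconditional = prevalence * intensity                   # predicted mean count per site
print(np.column_stack([prevalence, intensity, unconditional]))
```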

  13. Imaging simulation of active EO-camera

    NASA Astrophysics Data System (ADS)

    Pérez, José; Repasi, Endre

    2018-04-01

    A modeling scheme for active imaging through atmospheric turbulence is presented. The model consists of two parts: In the first part, the illumination laser beam is propagated to a target that is described by its reflectance properties, using the well-known split-step Fourier method for wave propagation. In the second part, the reflected intensity distribution imaged on a camera is computed using an empirical model developed for passive imaging through atmospheric turbulence. The split-step Fourier method requires carefully chosen simulation parameters. These simulation requirements together with the need to produce dynamic scenes with a large number of frames led us to implement the model on GPU. Validation of this implementation is shown for two different metrics. This model is well suited for Gated-Viewing applications. Examples of imaging simulation results are presented here.
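
    A minimal split-step (angular-spectrum) sketch of the propagation part of such a model, alternating free-space diffraction steps with thin random phase screens standing in for turbulence; grid size, beam parameters and screen statistics are arbitrary example values, not the authors' settings.

```python
import numpy as np

N, dx = 256, 2e-3                      # grid points and spacing [m]
wavelength, dz, nsteps = 1.55e-6, 100.0, 10

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (2 * 0.05**2)).astype(complex)   # Gaussian beam

fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))      # paraxial step transfer function

rng = np.random.default_rng(3)
for _ in range(nsteps):
    field = np.fft.ifft2(np.fft.fft2(field) * H)   # free-space diffraction over dz
    screen = rng.normal(scale=0.1, size=(N, N))    # crude random phase screen [rad]
    field *= np.exp(1j * screen)                   # turbulence phase kick

intensity = np.abs(field) ** 2
print(float(intensity.max()))
```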

  14. Real-time implementation of a color sorting system

    NASA Astrophysics Data System (ADS)

    Srikanteswara, Srikathyanyani; Lu, Qiang O.; King, William; Drayer, Thomas H.; Conners, Richard W.; Kline, D. Earl; Araman, Philip A.

    1997-09-01

    Wood edge glued panels are used extensively in the furniture and cabinetry industries. They are used to make doors, tops, and sides of solid wood furniture and cabinets. Since lightly stained furniture and cabinets are gaining in popularity, there is an increasing demand to color sort the parts used to make these edge glued panels. The goal of the sorting process is to create panels that are uniform in both color and intensity across their visible surface. If performed manually, the color sorting of edge-glued panel parts is very labor intensive and prone to error. This paper describes a complete machine vision system for performing this sort. This system uses two color line scan cameras for image input and a specially designed custom computing machine to allow real-time implementation. Users define the number of color classes that are to be used. An 'out' class is provided to handle unusually colored parts. The system removes areas of character mark, e.g., knots, mineral streak, etc., from consideration when assigning a color class to a part. The system also includes a better-face algorithm for determining which part face would be the better one to put on the side of the panel that will show. The throughput is two linear feet per second. Only a four-inch between-part spacing is required. This system has undergone extensive in-plant testing and will be commercially available in the very near future. The results of this testing will be presented.

  15. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted for computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to build a system for such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems using this architecture. Regarding the software part of the system, a library of pre-designed and general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  16. Plates and shells containing a surface crack under general loading conditions

    NASA Technical Reports Server (NTRS)

    Joseph, Paul F.; Erdogan, Fazil

    1987-01-01

    Various through and part-through crack problems in plates and shells are considered. The line-spring model of Rice and Levy is generalized to the skew-symmetric case to solve surface crack problems involving mixed-mode, coplanar crack growth. Compliance functions are introduced which are valid for crack depth-to-thickness ratios at least up to 0.95. This includes expressions for tension and bending as well as expressions for in-plane shear, out-of-plane shear, and twisting. Transverse shear deformation is taken into account in the plate and shell theories and this effect is shown to be important in comparing stress intensity factors obtained from the plate theory with three-dimensional solutions. Stress intensity factors for cylinders obtained by the line-spring model also compare well with three-dimensional solutions. By using the line-spring approach, stress intensity factors can be obtained for the through crack and for a part-through crack of any crack front shape, without recalculating the integrals that take up the bulk of the computer time. Therefore, parameter studies involving crack length, crack depth, shell type, and shell curvature are made in some detail. The results will be useful in brittle fracture and in fatigue crack propagation studies. All problems considered are of the mixed boundary value type and are reduced to strongly singular integral equations which make use of the finite-part integrals of Hadamard. The equations are solved numerically in a manner that is very efficient.

  17. Laser Velocimeter Measurements and Analysis in Turbulent Flows with Combustion. Part 2.

    DTIC Science & Technology

    1983-07-01

    Mean velocities and turbulence intensities were found to be statistically accurate to ±1% and ±13%, respectively, for this sample size, although there can be additional errors beyond the statistical ones. A cited reference: "Computational and Experimental Study of a Captive Annular Eddy," Journal of Fluid Mechanics, Vol. 28, Pt. 1, pp. 43-63, 12 April 1967.

  18. Three-Dimensional Computed Tomography as a Method for Finding Die Attach Voids in Diodes

    NASA Technical Reports Server (NTRS)

    Brahm, E. N.; Rolin, T. D.

    2010-01-01

    NASA analyzes electrical, electronic, and electromechanical (EEE) parts used in space vehicles to understand failure modes of these components. The diode is an EEE part critical to NASA missions that can fail due to excessive voiding in the die attach. Metallography, one established method for studying the die attach, is a time-intensive, destructive, and equivocal process whereby mechanical grinding of the diodes is performed to reveal voiding in the die attach. Problems such as die attach pull-out tend to complicate results and can lead to erroneous conclusions. The objective of this study is to determine if three-dimensional computed tomography (3DCT), a nondestructive technique, is a viable alternative to metallography for detecting die attach voiding. The die attach voiding in two-dimensional planes created from 3DCT scans was compared to several physical cross sections of the same diode to determine if the 3DCT scan accurately recreates die attach volumetric variability.

  19. Rapid and Accurate Machine Learning Recognition of High Performing Metal Organic Frameworks for CO2 Capture.

    PubMed

    Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K

    2014-09-04

    In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
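
    An illustrative QSPR-style classifier sketch in the spirit of the approach above: a model trained on structural descriptors flags likely high-capacity candidates before expensive simulation. The random "descriptors", uptake relation, cutoff and choice of random forest are assumptions for the example, not the paper's descriptor set or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
descriptors = rng.uniform(size=(n, 6))   # e.g. void fraction, pore size, density, ... (synthetic)
uptake = 3.0 * descriptors[:, 0] + 2.0 * descriptors[:, 1] + rng.normal(0, 0.3, n)
labels = (uptake > 4.0).astype(int)      # "high performing" above an assumed capacity cutoff

X_train, X_test, y_train, y_test = train_test_split(descriptors, labels,
                                                    test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

flagged = clf.predict_proba(X_test)[:, 1] > 0.5
print(f"flagged {flagged.mean():.1%} of the test library for detailed screening")
```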

  20. A Longitudinal Investigation of the Effects of Computer Anxiety on Performance in a Computing-Intensive Environment

    ERIC Educational Resources Information Center

    Buche, Mari W.; Davis, Larry R.; Vician, Chelley

    2007-01-01

    Computers are pervasive in business and education, and it would be easy to assume that all individuals embrace technology. However, evidence shows that roughly 30 to 40 percent of individuals experience some level of computer anxiety. Many academic programs involve computing-intensive courses, but the actual effects of this exposure on computer…

  1. Mechanics of the acoustic radiation force in tissue-like solids

    NASA Astrophysics Data System (ADS)

    Dontsov, Egor V.

    The acoustic radiation force (ARF) is a phenomenon affiliated with the nonlinear effects of high-intensity wave propagation. It represents the mean momentum transfer from the sound wave to the medium, and allows for an effective computation of the mean motion (e.g. acoustic streaming in fluids) induced by a high-intensity sound wave. Nowadays, the high-intensity focused ultrasound is frequently used in medical diagnosis applications due to its ability to "push" inside the tissue with the radiation body force and facilitate the local quantification of tissue's viscoelastic properties. The main objectives of this study include: i) the theoretical investigation of the ARF in fluids and tissue-like solids generated respectively by the amplitude modulated plane wave and focused ultrasound; ii) computation of the nonlinear acoustic wave propagation when the amplitude of the focused ultrasound field is modulated by a low-frequency signal, and iii) modeling of the ARF-induced motion in tissue-like solids for the purpose of quantifying their nonlinear elasticity via the magnitude of the ARF. Regarding the first part, a comparison with the existing theory of the ARF reveals a number of key features that are brought to light by the new formulation, including the contributions to the ARF of ultrasound modulation and thermal expansion, as well as the precise role of constitutive nonlinearities in generating the sustained body force in tissue-like solids by a focused ultrasound beam. In the second part, the hybrid time-frequency domain algorithm for the numerical analysis of the nonlinear wave equation is proposed. The approach is validated by comparing the results to the finite-difference modeling in time domain. Regarding the third objective, the Fourier transform approach is used to compute the ARF-induced shear wave motion in tissue-mimicking phantoms. A comparison between the experiment (tests performed at the Mayo Clinic) and model permitted the estimation of a particular coefficient of nonlinear tissue elasticity from the amplitude of the ARF-generated shear waves. For completeness, the ARF estimates of this coefficient are verified via an established technique known as acoustoelasticity.

  2. [Upper extremities, neck and back symptoms in office employees working at computer stations].

    PubMed

    Zejda, Jan E; Bugajska, Joanna; Kowalska, Małgorzata; Krzych, Lukasz; Mieszkowska, Marzena; Brozek, Grzegorz; Braczkowska, Bogumiła

    2009-01-01

    To obtain current data on the occurrence of work-related symptoms of office computer users in Poland, we implemented a questionnaire survey. Its goal was to assess the prevalence and intensity of symptoms of upper extremities, neck and back in office workers who use computers on a regular basis, and to find out if the occurrence of symptoms depends on the duration of computer use and other work-related factors. Office workers in two towns (Warszawa and Katowice), employed in large social services companies, were invited to fill in the Polish version of the Nordic Questionnaire. The questions included work history and history of last-week symptoms of pain of hand/wrist, elbow, arm, neck and upper and lower back (occurrence and intensity measured by a visual scale). Altogether 477 men and women returned the completed questionnaires. Between-group symptom differences (chi-square test) were verified by multivariate analysis (GLM). The prevalence of symptoms in individual body parts was as follows: neck, 55.6%; arm, 26.9%; elbow, 13.3%; wrist/hand, 29.9%; upper back, 49.6%; and lower back, 50.1%. Multivariate analysis confirmed the effect of gender, age and years of computer use on the occurrence of symptoms. Among other determinants, forearm support explained pain of wrist/hand, wrist support explained elbow pain, and chair adjustment explained arm pain. An association was also found between low back pain and chair adjustment and keyboard position. The findings revealed frequent occurrence of symptoms of pain in upper extremities and neck in office workers who use computers on a regular basis. Seating position could also contribute to the frequent occurrence of back pain in the examined population.

  3. Three layers multi-granularity OCDM switching system based on learning-stateful PCE

    NASA Astrophysics Data System (ADS)

    Wang, Yubao; Liu, Yanfei; Sun, Hao

    2017-10-01

    In the existing three-layer multi-granularity OCDM switching system (TLMG-OCDMSS), F-LSP, L-LSP and OC-LSP can be bundled as switching granularities. In a CPU-intensive network, a node not only needs to compute the path but also needs to bundle the switching granularity, so the load on a single node is heavy. A node can become paralysed when its traffic is too heavy, which seriously impacts the performance of the whole network. The introduction of a stateful PCE (S-PCE) effectively solves these problems. The PCE is composed of two parts, namely the path computation element and the database (TED and LSPDB), and returns the result of path computation to the PCC (path computation client) after the PCC sends a path computation request to it. In this way, the pressure of distributed path computation on each node is reduced. In this paper, we propose the concept of a Learning PCE (L-PCE), which uses the existing LSPDB as the data source for the PCE's learning. By this means, we can simplify the path computation and reduce the network delay, thereby improving the performance of the network.

  4. Real-time fuzzy inference based robot path planning

    NASA Technical Reports Server (NTRS)

    Pacini, Peter J.; Teichrow, Jon S.

    1990-01-01

    This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.
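
    A tiny Mamdani-style fuzzy inference sketch of the kind of rule evaluation described above, for a single variable: linguistic rules map a normalized obstacle distance to a speed command with triangular memberships and centroid defuzzification. The memberships and rules are illustrative, not the project's rulebase.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

speed_axis = np.linspace(0.0, 1.0, 201)

def infer_speed(distance):
    """Map a normalized obstacle distance in [0, 1] to a speed command in [0, 1]."""
    # rule firing strengths from the antecedent memberships
    near = tri(distance, -0.5, 0.0, 0.5)
    middle = tri(distance, 0.2, 0.5, 0.8)
    far = tri(distance, 0.5, 1.0, 1.5)
    # clipped consequents: IF near THEN slow, IF middle THEN medium, IF far THEN fast
    slow = np.minimum(near, tri(speed_axis, -0.4, 0.0, 0.4))
    medium = np.minimum(middle, tri(speed_axis, 0.3, 0.5, 0.7))
    fast = np.minimum(far, tri(speed_axis, 0.6, 1.0, 1.4))
    aggregated = np.maximum.reduce([slow, medium, fast])
    # centroid defuzzification of the aggregated output set
    return float((speed_axis * aggregated).sum() / (aggregated.sum() + 1e-12))

print(infer_speed(0.15), infer_speed(0.9))
```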

  5. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  6. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  7. Data intensive computing at Sandia.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew T.

    2010-09-01

    Data-Intensive Computing is parallel computing where you design your algorithms and your software around efficient access and traversal of a data set, and where hardware requirements are dictated by data size as much as by desired run times, usually distilling compact results from massive data.

  8. CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction

    PubMed Central

    Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.

    2012-01-01

    Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
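
    A sketch of the patch-wise moment-matching idea described above: each CBCT voxel is rescaled so that the mean and standard deviation of the surrounding patch match those of the corresponding CT patch. The arrays are synthetic and the brute-force loop is for clarity only; it is not the paper's GPU/CUDA implementation.

```python
import numpy as np

def correct_patchwise(cbct, ct, patch=7, eps=1e-6):
    """Return a CBCT volume whose local mean/std match the CT's, patch by patch."""
    corrected = np.empty(cbct.shape, dtype=float)
    r = patch // 2
    padded_cbct = np.pad(cbct.astype(float), r, mode="reflect")
    padded_ct = np.pad(ct.astype(float), r, mode="reflect")
    for idx in np.ndindex(cbct.shape):
        sl = tuple(slice(i, i + patch) for i in idx)   # patch centered on this voxel
        pc, pt = padded_cbct[sl], padded_ct[sl]
        corrected[idx] = (cbct[idx] - pc.mean()) / (pc.std() + eps) * pt.std() + pt.mean()
    return corrected

ct = np.random.default_rng(5).normal(0.0, 50.0, (16, 16, 16)) + 1000.0
cbct = 0.8 * ct + 30.0 + np.random.default_rng(6).normal(0.0, 20.0, ct.shape)
print(np.abs(correct_patchwise(cbct, ct) - ct).mean())
```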

  9. A computational model for estimating recruitment of primary afferent fibers by intraneural stimulation in the dorsal root ganglia

    NASA Astrophysics Data System (ADS)

    Bourbeau, D. J.; Hokanson, J. A.; Rubin, J. E.; Weber, D. J.

    2011-10-01

    Primary afferent microstimulation has been proposed as a method for activating cutaneous and muscle afferent fibers to restore tactile and proprioceptive feedback after limb loss or peripheral neuropathy. Large populations of primary afferent fibers can be accessed directly by implanting microelectrode arrays in the dorsal root ganglia (DRG), which provide a compact and stable target for stimulating a diverse group of sensory fibers. To gain insight into factors affecting the number and types of primary afferents activated, we developed a computational model that simulates the recruitment of fibers in the feline L7 DRG. The model comprises two parts. The first part is a single-fiber model used to describe the current-distance relation and was based on the McIntyre-Richardson-Grill model for excitability. The second part uses the results of the single-fiber model and published data on fiber size distributions to predict the probability of recruiting a given number of fibers as a function of stimulus intensity. The range of intensities over which exactly one fiber was recruited was approximately 0.5-5 µA (0.1-1 nC per phase); the stimulus intensity at which the probability of recruiting exactly one fiber was maximized was 2.3 µA. However, at 2.3 µA, it was also possible to recruit up to three fibers, albeit with a lower probability. Stimulation amplitudes up to 6 µA were tested with the population model, which showed that as the amplitude increased, the number of fibers recruited increased exponentially. The distribution of threshold amplitudes predicted by the model was similar to that previously reported by in vivo experimentation. Finally, the model suggested that medium diameter fibers (7.3-11.5 µm) may be recruited with much greater probability than large diameter fibers (12.8-16 µm). This model may be used to efficiently test a range of stimulation parameters and nerve morphologies to complement results from electrophysiology experiments and to aid in the design of microelectrode arrays for neural interfaces.

  10. The 'last mile' of data handling: Fermilab's IFDH tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyon, Adam L.; Mengel, Marc W.

    2014-01-01

    IFDH (Intensity Frontier Data Handling) is a suite of tools for data movement tasks for Fermilab experiments and is an important part of the FIFE[2] (Fabric for Intensity Frontier [1] Experiments) initiative described at this conference. IFDH encompasses moving input data from caches or storage elements to compute nodes (the 'last mile' of data movement) and moving output data potentially to those caches as part of the journey back to the user. IFDH also involves throttling and locking to ensure that large numbers of jobs do not cause data movement bottlenecks. IFDH is realized as an easy-to-use layer that users call in their job scripts (e.g. 'ifdh cp'), hiding the low-level data movement tools. One advantage of this layer is that the underlying low-level tools can be selected or changed without the need for the user to alter their scripts. Logging and performance monitoring can also be added easily. This system will be presented in detail as well as its impact on the ease of data handling at Fermilab experiments.

  11. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study

    PubMed Central

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-01-01

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a significantly strong negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate the significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia. PMID:26262633

  12. Wrist Hypothermia Related to Continuous Work with a Computer Mouse: A Digital Infrared Imaging Pilot Study.

    PubMed

    Reste, Jelena; Zvagule, Tija; Kurjane, Natalja; Martinsone, Zanna; Martinsone, Inese; Seile, Anita; Vanadzins, Ivars

    2015-08-07

    Computer work is characterized by sedentary static workload with low-intensity energy metabolism. The aim of our study was to evaluate the dynamics of skin surface temperature in the hand during prolonged computer mouse work under different ergonomic setups. Digital infrared imaging of the right forearm and wrist was performed during three hours of continuous computer work (measured at the start and every 15 minutes thereafter) in a laboratory with controlled ambient conditions. Four people participated in the study. Three different ergonomic computer mouse setups were tested on three different days (horizontal computer mouse without mouse pad; horizontal computer mouse with mouse pad and padded wrist support; vertical computer mouse without mouse pad). The study revealed a significantly strong negative correlation between the temperature of the dorsal surface of the wrist and time spent working with a computer mouse. Hand skin temperature decreased markedly after one hour of continuous computer mouse work. Vertical computer mouse work preserved more stable and higher temperatures of the wrist (>30 °C), while continuous use of a horizontal mouse for more than two hours caused an extremely low temperature (<28 °C) in distal parts of the hand. The preliminary observational findings indicate the significant effect of the duration and ergonomics of computer mouse work on the development of hand hypothermia.

  13. surf3d: A 3-D finite-element program for the analysis of surface and corner cracks in solids subjected to mode-1 loadings

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1993-01-01

    A computer program, surf3d, that uses the 3D finite-element method to calculate the stress-intensity factors for surface, corner, and embedded cracks in finite-thickness plates with and without circular holes, was developed. The cracks are assumed to be either elliptic or part-elliptic in shape. The computer program uses eight-noded hexahedral elements to model the solid. The program uses a skyline storage and solver. The stress-intensity factors are evaluated using the force method, the crack-opening displacement method, and the 3-D virtual crack closure method. In this manual, the input to and the output of the surf3d program are described. This manual also demonstrates the use of the program and describes the calculation of the stress-intensity factors. Several examples with sample data files are included with the manual. To facilitate modeling of the user's crack configuration and loading, a companion preprocessor program, called gensurf, that generates the data for surf3d was also developed. The gensurf program is a three-dimensional mesh generator that requires minimal input and builds a complete data file for surf3d. The program surf3d is operational on Unix machines such as the CRAY Y-MP, CRAY-2, and Convex C-220.

  14. Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.

    PubMed

    Das, Biman; Drake, Eli; Jack, John

    2004-02-01

    Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where under heavy gain saturation the total output approaches a constant intensity, although intensity of any mode fluctuates rapidly over the average intensity. The relations between trivariate cumulants and central moments that were needed for the computation of trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental triavariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.

  15. Blood vessel-based liver segmentation through the portal phase of a CT dataset

    NASA Astrophysics Data System (ADS)

    Maklad, Ahmed S.; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Moriyama, Noriyuki; Utsunomiya, Toru; Shimada, Mitsuo

    2013-02-01

    Blood vessels are dispersed throughout the human body organs and carry unique information for each person. This information can be used to delineate organ boundaries. The proposed method relies on abdominal blood vessels (ABV) to segment the liver considering the potential presence of tumors through the portal phase of a CT dataset. ABV are extracted and classified into hepatic (HBV) and nonhepatic (non-HBV) with a small number of interactions. HBV and non-HBV are used to guide an automatic segmentation of the liver. HBV are used to individually segment the core region of the liver. This region and non-HBV are used to construct a boundary surface between the liver and other organs to separate them. The core region is classified based on extracted posterior distributions of its histogram into low intensity tumor (LIT) and non-LIT core regions. Non-LIT case includes normal part of liver, HBV, and high intensity tumors if exist. Each core region is extended based on its corresponding posterior distribution. Extension is completed when it reaches either a variation in intensity or the constructed boundary surface. The method was applied to 80 datasets (30 Medical Image Computing and Computer Assisted Intervention (MICCAI) and 50 non-MICCAI data) including 60 datasets with tumors. Our results for the MICCAI-test data were evaluated by sliver07 [1] with an overall score of 79.7, which ranks seventh best on the site (December 2013). This approach seems a promising method for extraction of liver volumetry of various shapes and sizes and low intensity hepatic tumors.

  16. Advanced ballistic range technology

    NASA Technical Reports Server (NTRS)

    Yates, Leslie A.

    1993-01-01

    Experimental interferograms, schlieren, and shadowgraphs are used for quantitative and qualitative flow-field studies. These images are created by passing light through a flow field, and the recorded intensity patterns are functions of the phase shift and angular deflection of the light. As part of the grant NCC2-583, techniques and software have been developed for obtaining phase shifts from finite-fringe interferograms and for constructing optical images from Computational Fluid Dynamics (CFD) solutions. During the period from 1 Nov. 1992 - 30 Jun. 1993, research efforts have been concentrated in improving these techniques.

  17. Numerical Analysis of Crack Tip Plasticity and History Effects under Mixed Mode Conditions

    NASA Astrophysics Data System (ADS)

    Lopez-Crespo, Pablo; Pommier, Sylvie

    The plastic behaviour in the crack tip region has a strong influence on the fatigue life of engineering components. In general, residual stresses developed as a consequence of the plasticity being constrained around the crack tip have a significant role in both the direction of crack propagation and the propagation rate. Finite element methods (FEM) are commonly employed in order to model plasticity. However, if millions of cycles need to be modelled to predict the fatigue behaviour of a component, the method becomes computationally too expensive. By employing a multiscale approach, very precise analyses computed by FEM can be brought to a global scale. The data generated using the FEM enable us to identify a global cyclic elastic-plastic model for the crack tip region. Once this model is identified, it can be employed directly, with no need for additional FEM computations, resulting in fast computations. This is done by partitioning local displacement fields computed by FEM into intensity factors (global data) and spatial fields. A Karhunen-Loeve algorithm developed for image processing was employed for this purpose. In addition, the partitioning is done so as to separate the elastic and plastic components. Each of them is further divided into opening mode and shear mode parts. The plastic flow direction was determined with the above approach on a centre-cracked panel subjected to a wide range of mixed-mode loading conditions. It was found to agree well with the maximum tangential stress criterion developed by Erdogan and Sih, provided that the loading direction is corrected for residual stresses. In this approach, residual stresses are measured at the global scale through internal intensity factors.
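
    A sketch of the partitioning idea using a proper-orthogonal (Karhunen-Loeve / SVD) decomposition of displacement-field snapshots into fixed spatial fields and per-snapshot intensity factors. The snapshot matrix below is synthetic rather than FEM output, and the further split into elastic/plastic and opening/shear parts is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_nodes, n_snapshots = 500, 40
# two "true" spatial fields mixed with time-varying amplitudes, plus noise
phi_true = rng.normal(size=(n_nodes, 2))
amp_true = np.column_stack([np.sin(np.linspace(0, 6, n_snapshots)),
                            np.linspace(0, 1, n_snapshots) ** 2])
snapshots = phi_true @ amp_true.T + 0.01 * rng.normal(size=(n_nodes, n_snapshots))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
spatial_fields = U[:, :2]                    # dominant spatial modes
intensity_factors = s[:2, None] * Vt[:2]     # their amplitudes per snapshot (global data)

reconstruction = spatial_fields @ intensity_factors
print(np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots))
```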

  18. Surrogate-Assisted Genetic Programming With Simplified Models for Automated Design of Dispatching Rules.

    PubMed

    Nguyen, Su; Zhang, Mengjie; Tan, Kay Chen

    2017-09-01

    Automated design of dispatching rules for production systems has been an interesting research topic over the last several years. Machine learning, especially genetic programming (GP), has been a powerful approach to dealing with this design problem. However, intensive computational requirements, accuracy and interpretability are still its limitations. This paper aims at developing a new surrogate-assisted GP to help improve the quality of the evolved rules without significant computational costs. The experiments have verified the effectiveness and efficiency of the proposed algorithms as compared to those in the literature. Furthermore, new simplification and visualisation approaches have also been developed to improve the interpretability of the evolved rules. These approaches have shown great potential and proved to be a critical part of the automated design system.

  19. Fermilab computing at the Intensity Frontier

    DOE PAGES

    Group, Craig; Fuess, S.; Gutsche, O.; ...

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I will focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. In addition, the experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  20. Design of magneto-rheological mount for a cabin of heavy equipment vehicles

    NASA Astrophysics Data System (ADS)

    Yang, Soon-Yong; Do, Xuan Phu; Choi, Seung-Bok

    2016-04-01

    In this paper, a magneto-rheological (MR) mount for a cabin of heavy equipment vehicles is designed to improve vibration isolation in both the low and high frequency domains. The proposed mount consists of two principal parts: a rubber part and an MR fluid path. The rubber part of the existing mount and a spring are used to change the stiffness and frequency characteristics in the low vibration frequency range. The MR fluid path is a valve-type structure operating in flow mode. In order to control the external magnetic field, a solenoid coil is placed in the MR mount. Magnetic intensity analysis is then conducted to optimize dimensions using computer simulation. Experimental results show that the magnetic field can reduce low frequency vibration. The results presented in this work indicate that proper application of MR fluid and rubber characteristics in devising an MR mount can lead to the improvement of vibration control performance in both the low and high frequency ranges.

  1. Understanding light scattering by a coated sphere part 2: time domain analysis.

    PubMed

    Laven, Philip; Lock, James A

    2012-08-01

    Numerical computations were made of scattering of an incident electromagnetic pulse by a coated sphere that is large compared to the dominant wavelength of the incident light. The scattered intensity was plotted as a function of the scattering angle and delay time of the scattered pulse. For fixed core and coating radii, the Debye series terms that most strongly contribute to the scattered intensity in different regions of scattering angle-delay time space were identified and analyzed. For a fixed overall radius and an increasing core radius, the first-order rainbow was observed to evolve into three separate components. The original component faded away, while the two new components eventually merged together. The behavior of surface waves generated by grazing incidence at the core/coating and coating/exterior interfaces was also examined and discussed.

  2. Comparison of fatigue crack growth of riveted and bonded aircraft lap joints made of Aluminium alloy 2024-T3 substrates - A numerical study

    NASA Astrophysics Data System (ADS)

    Pitta, S.; Rojas, J. I.; Crespo, D.

    2017-05-01

    Aircraft lap joints play an important role in minimizing the operational cost of airlines. Hence, airlines pay more attention to these technologies to improve efficiency. A major time-consuming and costly process is the maintenance of aircraft between flights, for instance to detect early crack formation, monitor crack growth, and, if necessary, repair the corresponding parts with joints. This work is focused on the study of repairs of cracked aluminium alloy (AA) 2024-T3 plates to regain their original strength; particularly, cracked AA 2024-T3 substrate plates repaired with doublers of AA 2024-T3 in two configurations (riveted and adhesively bonded) are analysed. The fatigue life of the substrate plates with cracks of 1, 2, 5, 10 and 12.7 mm is computed using the Fracture Analysis 3D (FRANC3D) tool. The stress intensity factors for the repaired AA 2024-T3 plates are computed for different crack lengths and compared using the commercial FEA tool ABAQUS. The results for the bonded repairs showed significantly lower stress intensity factors compared with the riveted repairs. This improves the overall fatigue life of the bonded joint.

  3. Precise signal amplitude retrieval for a non-homogeneous diagnostic beam using complex interferometry approach

    NASA Astrophysics Data System (ADS)

    Krupka, M.; Kalal, M.; Dostal, J.; Dudzak, R.; Juha, L.

    2017-08-01

    Classical interferometry has become a widely used method of active optical diagnostics. Its more advanced version, allowing reconstruction of three sets of data from just one specially designed interferogram (a so-called complex interferogram), was developed in the past and became known as complex interferometry. Along with the phase shift, which can also be retrieved using classical interferometry, the amplitude modifications of the probing part of the diagnostic beam caused by the object under study (to be called the signal amplitude) as well as the contrast of the interference fringes can be retrieved using the complex interferometry approach. In order to partially compensate for errors in the reconstruction due to imperfections in the diagnostic beam intensity structure, as well as for errors caused by a non-ideal optical setup of the interferometer itself (including the quality of its optical components), a reference interferogram can be put to good use. This method of interferogram analysis of experimental data has been successfully implemented in practice. However, in the majority of interferometer setups (especially those employing wavefront division) the probe and the reference part of the diagnostic beam would feature different intensity distributions over their respective cross sections. This introduces an additional error into the reconstruction of the signal amplitude and the fringe contrast, which cannot be resolved using the reference interferogram only. In order to deal with this error, it was found that additional, separately recorded images of the intensity distribution of the probe and the reference part of the diagnostic beam (with no signal present) are needed. For the best results, sufficient shot-to-shot stability of the whole diagnostic system is required. In this paper, the efficiency of the complex interferometry approach for obtaining the highest possible accuracy of the signal amplitude reconstruction is verified using computer-generated complex and reference interferograms containing artificially introduced intensity variations in the probe and the reference part of the diagnostic beam. These sets of data are subsequently analyzed and the errors of the signal amplitude reconstruction are evaluated.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael; Lethin, Richard

    Programming models and environments play the essential roles in high performance computing of enabling the conception, design, implementation and execution of science and engineering application codes. Programmer productivity is strongly influenced by the effectiveness of our programming models and environments, as is software sustainability since our codes have lifespans measured in decades, so the advent of new computing architectures, increased concurrency, concerns for resilience, and the increasing demands for high-fidelity, multi-physics, multi-scale and data-intensive computations mean that we have new challenges to address as part of our fundamental R&D requirements. Fortunately, we also have new tools and environments that make design, prototyping and delivery of new programming models easier than ever. The combination of new and challenging requirements and new, powerful toolsets enables significant synergies for the next generation of programming models and environments R&D. This report presents the topics discussed and results from the 2014 DOE Office of Science Advanced Scientific Computing Research (ASCR) Programming Models & Environments Summit, and subsequent discussions among the summit participants and contributors to topics in this report.

  5. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

    A velocity vector is denoted by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. Structure analysis often involves computation-intensive computer vision tasks, such as feature extraction and geometric modeling. First, a set of features is extracted from each block; second, the distance between these two sets of features is computed.

  6. From cosmos to connectomes: the evolution of data-intensive science.

    PubMed

    Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S

    2014-09-17

    The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Fourth-order self-energy contribution to the Lamb shift

    NASA Astrophysics Data System (ADS)

    Mallampalli, S.; Sapirstein, J.

    1998-03-01

    Two-loop self-energy contributions to the fourth-order Lamb shift of ground-state hydrogenic ions are treated to all orders in Zα by using exact Dirac-Coulomb propagators. A rearrangement of the calculation into four ultraviolet finite parts, the M, P, F, and perturbed orbital (PO) terms, is made. Reference-state singularities present in the M and P terms are shown to cancel. The most computationally intensive part of the calculation, the M term, is evaluated for hydrogenlike uranium and bismuth, the F term is evaluated for a range of Z values, but the P term is left for a future calculation. For hydrogenlike uranium, previous calculations of the PO term give -0.971 eV: the contributions from the M and F terms calculated here sum to -0.325 eV.

  8. The 1895 Ljubljana earthquake: can the intensity data points discriminate which one of the nearby faults was the causative one?

    NASA Astrophysics Data System (ADS)

    Tiberi, Lara; Costa, Giovanni; Jamšek Rupnik, Petra; Cecić, Ina; Suhadolc, Peter

    2018-05-01

    The earthquake (Mw 6 in the SHEEC, as defined by the MDPs) that occurred in the central part of Slovenia on 14 April 1895 affected a broad region, causing deaths, injuries, and destruction. This event was much studied but not fully explained; in particular, its causative source model is still debated. The aim of this work is to contribute to the identification of the seismogenic source of this destructive event, calculating peak ground velocity values through the use of different ground motion prediction equations (GMPEs) and computing a series of ground motion scenarios based on the result of an inversion work proposed by Jukić in 2009 and on various fault models in the surroundings of Ljubljana: the Vič, Želimlje, Borovnica, Vodice, Ortnek, Mišjedolski, and Dobrepolje faults. The synthetic seismograms, at the basis of our computations, are calculated using the multi-modal summation technique and a kinematic approach for extended sources, with a maximum frequency of 1 Hz. The qualitative and quantitative comparison of these simulations with the macroseismic intensity database allows us to discriminate between various sources and configurations. The quantitative validation of the seismic source is done using ad hoc ground motion to intensity conversion equations (GMICEs), expressly calculated for this study. This study allows us to identify the most probable causative source model of this event, contributing to the improvement of the seismotectonic knowledge of this region. The candidate fault with the lowest average differences between observed and calculated intensities and the lowest chi-squared values is the Ortnek fault, a strike-slip fault with a northward-propagating rupture.
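
    The GMICEs themselves are not reproduced in the abstract; the Python fragment below only sketches the final ranking step under an assumed log-linear GMICE. The coefficients, misfit definitions, and PGV values are placeholders for illustration, not those calibrated in the study.

```python
import numpy as np

# Hypothetical GMICE of the usual log-linear form I = a + b*log10(PGV);
# the coefficients are placeholders, not the equation derived in the paper.
A_GMICE, B_GMICE = 5.1, 2.3

def intensity_from_pgv(pgv_cm_s):
    """Convert simulated peak ground velocity (cm/s) to macroseismic intensity."""
    return A_GMICE + B_GMICE * np.log10(pgv_cm_s)

def score_scenario(pgv_simulated, intensity_observed):
    """Average residual and a chi-squared-like misfit used to rank candidate faults
    (placeholder definitions)."""
    resid = intensity_observed - intensity_from_pgv(pgv_simulated)
    return resid.mean(), np.sum(resid ** 2 / np.var(intensity_observed))

# Example: rank two hypothetical rupture scenarios against the same set of MDPs.
obs = np.array([7.0, 6.5, 8.0, 5.5])
for name, pgv in {"scenario_A": np.array([12.0, 8.0, 30.0, 4.0]),
                  "scenario_B": np.array([25.0, 3.0, 10.0, 9.0])}.items():
    print(name, score_scenario(pgv, obs))
```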

  9. An infrastructure for data-intensive seismology using ADMIRE: laying the bricks for a new data highway

    NASA Astrophysics Data System (ADS)

    Trani, L.; Spinuso, A.; Galea, M.; Atkinson, M.; Van Eck, T.; Vilotte, J.

    2011-12-01

    The data bonanza generated by today's digital revolution is forcing scientists to rethink their methodologies and working practices. Traditional approaches to knowledge discovery are pushed to their limit and struggle to keep pace with the data flows produced by modern systems. This work shows how the ADMIRE data-intensive architecture supports seismologists by enabling them to focus on their scientific goals and questions, abstracting away the underlying technology platform that enacts their data integration and analysis tasks. ADMIRE accomplishes this partly by recognizing three different types of expert and providing clearly defined interfaces for their interaction: the domain expert who is the application specialist, the data-analysis expert who is a specialist in extracting information from data, and the data-intensive engineer who develops the infrastructure for data-intensive computation. In order to provide a context in which each category of expert may flourish, ADMIRE uses a 3-level architecture: the upper - tool - level supports the work of both domain and data-analysis experts, housing an extensive and evolving set of portals, tools and development environments; the lower - enactment - level houses a large and dynamic community of providers delivering data and data-intensive enactment environments as an evolving infrastructure that supports all of the work underway in the upper layer. Most data-intensive engineers work here; the crucial innovation lies in the middle level, a gateway that is a tightly defined and stable interface through which the two diverse and dynamic upper and lower layers communicate. This is a minimal and simple protocol and language (DISPEL), ultimately to be controlled by standards, so that the upper and lower communities may invest, secure in the knowledge that changes in this interface will be carefully managed. We implemented a well-established procedure for processing seismic ambient noise on the prototype architecture. The primary goal was to evaluate its capabilities for large-scale integration and analysis of distributed data. A secondary goal was to gauge its potential and the added value that it might bring to the seismological community. Though still in its infancy, the architecture met the demands of our use case and promises to cater for our future requirements. We shall continue to develop its capabilities as part of the EU-funded project VERCE - Virtual Earthquake and Seismology Research Community for Europe. VERCE aims to significantly advance our understanding of the Earth in order to aid society in its management of natural resources and hazards. Its strategy is to enable seismologists to fully exploit the under-utilized wealth of seismic data, and key to this is a data-intensive computation framework adapted to the scale and diversity of the community. This is a first step in building a data-intensive highway for geoscientists, smoothing their travel from the primary sources of data to new insights and rapid delivery of actionable information.

  10. Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.

    PubMed

    Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar

    2017-11-03

    Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre- and post-surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or registering across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used for providing information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration is developed. It employs a modified Mutual Information (MI) measure as the similarity metric and the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is carried out using PSO, which is simple to implement and requires few tuning parameters. The developed approach has been tested and verified successfully on a number of medical image data sets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the approach.
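
    As a sketch of how the pieces fit together, the following Python code (not the authors' implementation) combines a histogram-based MI with a gradient-based MI term, using plain gradient magnitude as a stand-in for the GVF intensity, and maximizes the weighted sum with a minimal PSO over a 2-D translation only. All parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two equally shaped images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def similarity(fixed, moving, shift, w=0.5):
    """Weighted combination of intensity MI and gradient-based MI
    (gradient magnitude stands in for the GVF intensity of the paper)."""
    warped = ndimage.shift(moving, shift, order=1)
    grad = lambda im: np.hypot(*np.gradient(im))
    return (w * mutual_information(fixed, warped)
            + (1 - w) * mutual_information(grad(fixed), grad(warped)))

def pso_register(fixed, moving, n_particles=20, n_iter=50, bounds=10.0):
    """Minimal PSO over a 2-D translation; returns the best (dy, dx)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-bounds, bounds, (n_particles, 2))    # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest = x.copy()
    pbest_val = np.array([similarity(fixed, moving, p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, -bounds, bounds)
        vals = np.array([similarity(fixed, moving, p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest
```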

  11. spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains

    NASA Astrophysics Data System (ADS)

    Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo

    2016-09-01

    The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Simulation methods based on well-known prediction techniques (such as indicator kriging and cokriging) are also implemented in the package, and more advanced methods are available for simulation, e.g. path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores on the example data set of the case study included in the package.
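
    spMC itself is an R package; the Python fragment below is only a language-agnostic illustration of the basic quantity it estimates, the empirical one-step transition probability matrix of a categorical lithological log, and does not reflect the package's API or its continuous-lag estimators.

```python
import numpy as np

def transition_matrix(sequence, categories):
    """Empirical one-step transition probabilities of a categorical
    lithological log sampled at a regular vertical spacing."""
    idx = {c: i for i, c in enumerate(categories)}
    counts = np.zeros((len(categories), len(categories)))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[idx[a], idx[b]] += 1
    # Row-normalise; rows with no observations are left as zeros.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Example: clay/sand/gravel log from a single borehole.
log = ["clay", "clay", "sand", "sand", "gravel", "sand", "clay"]
print(transition_matrix(log, ["clay", "sand", "gravel"]))
```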

  12. High performance in silico virtual drug screening on many-core processors.

    PubMed

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.

  13. High performance in silico virtual drug screening on many-core processors

    PubMed Central

    Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-01-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel’s Xeon Phi and multi-core CPUs with SIMD instruction sets. PMID:25972727

  14. The M-Integral for Computing Stress Intensity Factors in Generally Anisotropic Materials

    NASA Technical Reports Server (NTRS)

    Warzynek, P. A.; Carter, B. J.; Banks-Sills, L.

    2005-01-01

    The objective of this project is to develop and demonstrate a capability for computing stress intensity factors in generally anisotropic materials. These objectives have been met. The primary deliverable of this project is this report and the information it contains. In addition, we have delivered the source code for a subroutine that will compute stress intensity factors for anisotropic materials encoded in both the C and Python programming languages and made available a version of the FRANC3D program that incorporates this subroutine. Single crystal super alloys are commonly used for components in the hot sections of contemporary jet and rocket engines. Because these components have a uniform atomic lattice orientation throughout, they exhibit anisotropic material behavior. This means that stress intensity solutions developed for isotropic materials are not appropriate for the analysis of crack growth in these materials. Until now, a general numerical technique did not exist for computing stress intensity factors of cracks in anisotropic materials and cubic materials in particular. Such a capability was developed during the project and is described and demonstrated herein.

  15. Effects of Computer-Based Practice on the Acquisition and Maintenance of Basic Academic Skills for Children with Moderate to Intensive Educational Needs

    ERIC Educational Resources Information Center

    Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee

    2011-01-01

    This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…

  16. Application verification research of cloud computing technology in the field of real time aerospace experiment

    NASA Astrophysics Data System (ADS)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    To meet the real-time, reliability and safety requirements of aerospace experiments, a single-center cloud computing application verification platform is constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of real-time aerospace experiments is tested and verified. Analysis of the test results leads to a preliminary conclusion: the cloud computing platform is suitable for computing-intensive aerospace experiment workloads, whereas traditional physical machines are recommended for I/O-intensive workloads.

  17. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
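
    The authors' Hadoop/HDFS implementation is not reproduced in the abstract; the pure-Python fragment below only illustrates the map/reduce decomposition described above, with each "map" simulating the raw-data contribution of one block of point targets and the "reduce" accumulating the partial matrices. The toy geometry, wavelength, and signal model are placeholders.

```python
import numpy as np
from functools import reduce

# Toy geometry and signal parameters -- placeholders, not the paper's system model.
N_RANGE, N_AZIMUTH = 256, 128
WAVENUMBER = 2 * np.pi / 0.03            # ~3 cm wavelength

def map_block(targets):
    """'Map' step: simulate the raw-data contribution of one block of point
    targets and emit a partial raw-data matrix."""
    raw = np.zeros((N_AZIMUTH, N_RANGE), dtype=complex)
    az = np.arange(N_AZIMUTH)[:, None]
    rg = np.arange(N_RANGE)[None, :]
    for x, y, sigma in targets:
        r = np.hypot(rg - x, az - y)      # crude slant-range model
        raw += sigma * np.exp(-2j * WAVENUMBER * r)
    return raw

def reduce_partials(a, b):
    """'Reduce' step: accumulate partial raw-data matrices."""
    return a + b

targets = np.random.default_rng(1).uniform(0, 100, (2_000, 3))
blocks = np.array_split(targets, 16)      # stands in for the HDFS input splits
raw_data = reduce(reduce_partials, map(map_block, blocks))
print(raw_data.shape, np.abs(raw_data).max())
```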

  18. Analysis of intensity variability in multislice and cone beam computed tomography.

    PubMed

    Nackaerts, Olivia; Maes, Frederik; Yan, Hua; Couto Souza, Paulo; Pauwels, Ruben; Jacobs, Reinhilde

    2011-08-01

    The aim of this study was to evaluate the variability of intensity values in cone beam computed tomography (CBCT) imaging compared with multislice computed tomography Hounsfield units (MSCT HU) in order to assess the reliability of density assessments using CBCT images. A quality control phantom was scanned with an MSCT scanner and five CBCT scanners. In one CBCT scanner, the phantom was scanned repeatedly in the same and in different positions. Images were analyzed using registration to a mathematical model. MSCT images were used as a reference. Density profiles of MSCT showed stable HU values, whereas in CBCT imaging the intensity values were variable over the profile. Repositioning of the phantom resulted in large fluctuations in intensity values. The use of intensity values in CBCT images is not reliable, because the values are influenced by device, imaging parameters and positioning. © 2011 John Wiley & Sons A/S.

  19. Development and Application of a Parallel LCAO Cluster Method

    NASA Astrophysics Data System (ADS)

    Patton, David C.

    1997-08-01

    CPU intensive steps in the SCF electronic structure calculations of clusters and molecules with a first-principles LCAO method have been fully parallelized via a message passing paradigm. Identification of the parts of the code composed of many independent compute-intensive steps is discussed in detail, as these are the most readily parallelized. Most of the parallelization involves spatially decomposing numerical operations on a mesh. One exception is the solution of Poisson's equation which relies on distribution of the charge density and multipole methods. The method we use to parallelize this part of the calculation is quite novel and is covered in detail. We present a general method for dynamically load-balancing a parallel calculation and discuss how we use this method in our code. The results of benchmark calculations of the IR and Raman spectra of PAH molecules such as anthracene (C_14H_10) and tetracene (C_18H_12) are presented. These benchmark calculations were performed on an IBM SP2 and a SUN Ultra HPC server with both MPI and PVM. Scalability and speedup for these calculations are analyzed to determine the efficiency of the code.
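
    The paper's code is an LCAO electronic-structure program using MPI and PVM; the sketch below (Python with mpi4py, purely illustrative) shows only the generic master-worker pattern that underlies dynamic load balancing: the master hands out blocks of work as workers finish, so faster workers automatically receive more blocks. The task itself is a placeholder.

```python
# Run with, e.g.:  mpirun -n 4 python dynamic_balance.py
from mpi4py import MPI

def do_work(block_id):
    # Placeholder for a compute-intensive operation on one mesh block.
    return sum(i * i for i in range(10_000 + block_id))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TAG_WORK, TAG_STOP = 1, 2

if rank == 0:                                   # master
    n_blocks, next_block, results = 100, 0, []
    status = MPI.Status()
    # Seed every worker with one block, then reassign blocks on completion.
    for worker in range(1, size):
        comm.send(next_block, dest=worker, tag=TAG_WORK)
        next_block += 1
    while len(results) < n_blocks:
        results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
        worker = status.Get_source()
        if next_block < n_blocks:
            comm.send(next_block, dest=worker, tag=TAG_WORK)
            next_block += 1
        else:
            comm.send(None, dest=worker, tag=TAG_STOP)
    print("collected", len(results), "block results")
else:                                           # worker
    status = MPI.Status()
    while True:
        block = comm.recv(source=0, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(do_work(block), dest=0)
```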

  20. Knowing when to give up: early-rejection stratagems in ligand docking

    NASA Astrophysics Data System (ADS)

    Skone, Gwyn; Voiculescu, Irina; Cameron, Stephen

    2009-10-01

    Virtual screening is an important resource in the drug discovery community, of which protein-ligand docking is a significant part. Much software has been developed for this purpose, largely by biochemists and those in related disciplines, who pursue ever more accurate representations of molecular interactions. The resulting tools, however, are very processor-intensive. This paper describes some initial results from a project to review computational chemistry techniques for docking from a non-chemistry standpoint. An abstract blueprint for protein-ligand docking using empirical scoring functions is suggested, and this is used to discuss potential improvements. By introducing computer science tactics such as lazy function evaluation, dramatic increases to throughput can and have been realized using a real-world docking program. Naturally, they can be extended to any system that approximately corresponds to the architecture outlined.
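
    As a sketch of the early-rejection idea (hypothetical scoring terms, not the architecture of any real docking program), the fragment below evaluates cheap score terms first and abandons a pose as soon as an optimistic bound on its final score can no longer beat the current cutoff, so the expensive terms are evaluated lazily, only when they can still matter.

```python
def score_pose(pose, cutoff, terms):
    """`terms` is a list of (evaluate, lower_bound) pairs ordered from cheapest
    to most expensive; lower_bound is the most negative (best) contribution the
    corresponding term could still add. Lower total scores are better."""
    total = 0.0
    for i, (evaluate, _) in enumerate(terms):
        best_possible = total + sum(lb for _, lb in terms[i:])
        if best_possible > cutoff:          # cannot beat the cutoff: give up early
            return None
        total += evaluate(pose)             # lazily evaluate only when needed
    return total if total <= cutoff else None

# Usage with toy terms: a steric clash penalty (cheap) is checked before a
# pairwise interaction energy (expensive).
steric = (lambda pose: 10.0 * pose["clashes"], 0.0)
pairwise = (lambda pose: sum(pose["pair_energies"]), -50.0)
print(score_pose({"clashes": 0, "pair_energies": [-1.2, -3.4]},
                 cutoff=-1.0, terms=[steric, pairwise]))
```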

  1. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix and full matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
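
    For a present-day illustration of why fill-reducing reorderings such as Multiple Minimum Degree matter, the sketch below factors a Laplacian-like sparse matrix with SciPy's SuperLU (a supernodal rather than multifrontal solver, used here only because it is readily available) under different column orderings and compares the fill-in of the factors.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 50
# 2-D Laplacian-like "stiffness" matrix on an n x n grid.
A = sp.diags([-1, -1, 4, -1, -1], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csc")

for ordering in ("NATURAL", "MMD_AT_PLUS_A", "COLAMD"):
    lu = splu(A, permc_spec=ordering)
    fill = lu.L.nnz + lu.U.nnz          # nonzeros created during factorization
    print(f"{ordering:>14s}: {fill:8d} nonzeros in L+U")
```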

  2. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  3. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by the higher LHC energies from April 2015 (LHC Run 2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is part of WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014, the development of a portal combining the Tier-1 and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences such as biology (genome sequencing analysis) and astrophysics (cosmic-ray analysis, antimatter and dark matter searches).

  4. Number crunching vs. number theory: computers and FLT, from Kummer to SWAC (1850-1960), and beyond

    NASA Astrophysics Data System (ADS)

    Corry, Leo

    2008-07-01

    The article discusses the computational tools (both conceptual and material) used in various attempts to deal with individual cases of FLT [Fermat's Last Theorem], as well as the changing historical contexts in which these tools were developed and used, and affected research. It also explores the changing conceptions about the role of computations within the overall disciplinary picture of number theory, how they influenced research on the theorem, and the kinds of general insights thus achieved. After an overview of Kummer's contributions and its immediate influence, the author presents work that favored intensive computations of particular cases of FLT as a legitimate, fruitful, and worth-pursuing number-theoretical endeavor, and that were part of a coherent and active, but essentially low-profile tradition within nineteenth century number theory. This work was related to table making activity that was encouraged by institutions and individuals whose motivations came mainly from applied mathematics, astronomy, and engineering, and seldom from number theory proper. A main section of the article is devoted to the fruitful collaboration between Harry S. Vandiver and Emma and Dick Lehmer. The author shows how their early work led to the hesitant introduction of electronic computers for research related with FLT. Their joint work became a milestone for computer-assisted activity in number theory at large.

  5. Simulating Local and Intercontinental Pollutant Effects of Biomass Burning: Integration of Several Remotely Sensed Datasets

    NASA Technical Reports Server (NTRS)

    Chatfield, Robert B.; Vastano, John A.; Guild, Liane; Hlavka, Christine; Brass, James A.; Russell, Philip B. (Technical Monitor)

    1994-01-01

    Burning to clear land for crops and to destroy pests is an integral and largely unavoidable part of tropical agriculture. It is easy to note but difficult to quantify using remote sensing. This report describes our efforts to integrate remotely sensed data into our computer model of tropical chemical trace-gas emissions, weather, and reaction chemistry (using the MM5 mesoscale model and our own Global-Regional Atmospheric Chemistry Simulator). The effects of burning over the continents of Africa and South America have been noticed in observations from several satellites. Smoke plumes hundreds of kilometers long may be seen individually, or may merge into a large smoke pall over thousands of kilometers of these continents. These features are related to intense pollution in the much more confined regions with heavy burning. These emissions also translocate nitrogen thousands of kilometers in the tropical ecosystems, with large fixed-nitrogen losses balanced partially by locally intense fertilization downwind, where nitric acid is rained out. At a much larger scale, various satellite measurements have indicated the escape of carbon monoxide and ozone into large filaments which extend across the Tropical and Southern Atlantic Ocean. Our work relates the source emissions, estimated in part from remote sensing, in part from conventional surface reports, to the concentrations of these gases over these intercontinental regions. We will mention work in progress to use meteorological satellite data (AVHRR, GOES, and Meteosat) to estimate the surface temperature and extent and height of clouds, and explain why these uses are so important in our computer simulations of global biogeochemistry. We will compare our simulations and interpretation of remote observations to the international cooperation involving Brazil, South Africa, and the USA in the TRACE-A (Transport and Atmospheric Chemistry near the Equator - Atlantic) and SAFARI (Southern Africa Fire Atmosphere Research Initiative) and remote-sensing /aircraft/ecosystem observational campaigns.

  6. A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.

    2015-04-01

    Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc). Grids of models are performed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids are published, and they are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol, and containing a lot of useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.
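
    Access is through the standard MySQL protocol, so a query can be issued from any MySQL client; the Python sketch below is illustrative only, and the host, credentials, table and column names are placeholders rather than the actual 3MdB schema.

```python
import pymysql

# Placeholder connection details -- consult the 3MdB documentation for the
# real host, credentials and schema.
conn = pymysql.connect(host="3mdb.example.org", user="guest",
                       password="guest", database="3MdB")
try:
    with conn.cursor() as cur:
        # Hypothetical query: a line-ratio diagnostic for one model grid.
        cur.execute(
            "SELECT O__3__5007A / H__1__4861A AS o3_hb "
            "FROM models WHERE ref = %s LIMIT 10", ("HII_grid",))
        for (o3_hb,) in cur.fetchall():
            print(o3_hb)
finally:
    conn.close()
```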

  7. A clinical decision support system prototype for cardiovascular intensive care.

    PubMed

    Lau, F

    1994-08-01

    This paper describes the development and validation of a decision-support system prototype that can help manage hypovolemic hypotension in the Cardiovascular Intensive Care Unit (CVICU). The prototype uses physiologic pattern-matching, therapeutic protocols, computational drug-dosage response modeling and expert reasoning heuristics in its selection of intervention strategies and choices. As part of model testing, the prototype simulated real-time operation by processing historical physiologic and intervention data on a patient sequentially, generating alerts on questionable data, critiques of interventions instituted and recommendations on preferred interventions. Bench-testing with 399 interventions from 13 historical cases showed therapies for bleeding and fluid replacement proposed by the prototype were significantly more consistent (p < 0.0001) than those instituted by the staff when compared against expert critiques (80% versus 44%). This study has demonstrated the feasibility of formalizing hemodynamic management of CVICU patients in a manner that may be implemented and evaluated in a clinical setting.

  8. Sines and Cosines. Part 1 of 3

    NASA Technical Reports Server (NTRS)

    Apostol, Tom M. (Editor)

    1992-01-01

    Applying the concept of similarities, the mathematical principles of circular motion and sine and cosine waves are presented utilizing both film footage and computer animation in this 'Project Mathematics' series video. Concepts presented include: the symmetry of sine waves; the cosine (complementary sine) and cosine waves; the use of sines and cosines on coordinate systems; the relationship they have to each other; the definitions and uses of periodic waves, square waves, sawtooth waves; the Gibbs phenomena; the use of sines and cosines as ratios; and the terminology related to sines and cosines (frequency, overtone, octave, intensity, and amplitude).

  9. Mobile computing device configured to compute irradiance, glint, and glare of the sun

    DOEpatents

    Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

    2014-03-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.

  10. Energy intensity of computer manufacturing: hybrid assessment combining process and economic input-output methods.

    PubMed

    Williams, Eric

    2004-11-15

    The total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively. This indicates that computer manufacturing is energy intensive: the ratio of fossil fuel use to product weight is 11, an order of magnitude larger than the factor of 1-2 for many other manufactured goods. This high energy intensity of manufacturing, combined with rapid turnover in computers, results in an annual life cycle energy burden that is surprisingly high: about 2600 MJ per year, 1.3 times that of a refrigerator. In contrast with many home appliances, life cycle energy use of a computer is dominated by production (81%) as opposed to operation (19%). Extension of usable lifespan (e.g. by reselling or upgrading) is thus a promising approach to mitigating energy impacts as well as other environmental burdens associated with manufacturing and disposal.

  11. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    PubMed Central

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-01

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data intensive and computing intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing intensive and data intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration. PMID:28075343

  12. Retrieval of complex χ(2) parts for quantitative analysis of sum-frequency generation intensity spectra

    PubMed Central

    Hofmann, Matthias J.; Koelsch, Patrick

    2015-01-01

    Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to probing geometries and properties of the system under investigation such as the absolute square of the second-order susceptibility, |χ(2)|². A conventional SFG intensity measurement does not grant access to the complex parts of χ(2) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG or heterodyne detection methods have been introduced to measure real and imaginary parts of χ(2) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ(2) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase sensitive experimental data shows excellent agreement between the two methods. PMID:26450297
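
    To make the fitting problem concrete, the sketch below fits a synthetic SFG intensity spectrum with the usual model, the absolute square of a nonresonant background plus a complex Lorentzian resonance. The parameter names, band position and noise level are illustrative, and the code does not implement iMEMfit or the maximum entropy method itself; it only shows the conventional intensity fit whose non-uniqueness motivates the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def chi2_model(omega, a_nr, phi, A1, w1, G1):
    """|chi(2)|^2 with one resonance; extend with more (A, w, G) triples as needed."""
    chi2 = a_nr * np.exp(1j * phi) + A1 / (omega - w1 + 1j * G1)
    return np.abs(chi2) ** 2

omega = np.linspace(2800, 3000, 400)                       # wavenumber axis, cm^-1
measured = chi2_model(omega, 0.5, 0.8, 40.0, 2875.0, 6.0)  # synthetic "data"
measured += np.random.default_rng(0).normal(0, 0.02, omega.size)

popt, _ = curve_fit(chi2_model, omega, measured,
                    p0=[0.4, 0.5, 30.0, 2870.0, 5.0])
print(popt)   # several distinct parameter sets can reproduce the intensity
              # equally well, which is why phase retrieval is needed
```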

  13. Association of Parkinson's Disease and Its Subtypes with Agricultural Pesticide Exposures in Men: A Case-Control Study in France.

    PubMed

    Moisan, Frédéric; Spinosi, Johan; Delabre, Laurène; Gourlet, Véronique; Mazurie, Jean-Louis; Bénatru, Isabelle; Goldberg, Marcel; Weisskopf, Marc G; Imbernon, Ellen; Tzourio, Christophe; Elbaz, Alexis

    2015-11-01

    Pesticides have been associated with Parkinson's disease (PD), but there are few data on important exposure characteristics such as dose-effect relations. It is unknown whether associations depend on clinical PD subtypes. We examined quantitative aspects of occupational pesticide exposure associated with PD and investigated whether associations were similar across PD subtypes. As part of a French population-based case-control study including men enrolled in the health insurance plan for farmers and agricultural workers, cases with clinically confirmed PD were identified through antiparkinsonian drug claims. Two controls were matched to each case. Using a comprehensive occupational questionnaire, we computed indicators for different dimensions of exposure (duration, cumulative exposure, intensity). We used conditional logistic regression to compute odds ratios (ORs) and 95% confidence intervals (CIs) among exposed male farmers (133 cases, 298 controls). We examined the relation between pesticides and PD subtypes (tremor dominant/non-tremor dominant) using polytomous logistic regression. There appeared to be a stronger association with intensity than duration of pesticide exposure based on separate models, as well as a synergistic interaction between duration and intensity (p-interaction = 0.04). High-intensity exposure to insecticides was positively associated with PD among those with low-intensity exposure to fungicides and vice versa, suggesting independent effects. Pesticide exposure in farms that specialized in vineyards was associated with PD (OR = 2.56; 95% CI: 1.31, 4.98). The association with intensity of pesticide use was stronger, although not significantly (p-heterogeneity = 0.60), for tremor-dominant (p-trend < 0.01) than for non-tremor-dominant PD (p-trend = 0.24). This study helps to better characterize different aspects of pesticide exposure associated with PD, and shows a significant association of pesticides with tremor-dominant PD in men, the most typical PD presentation. Moisan F, Spinosi J, Delabre L, Gourlet V, Mazurie JL, Bénatru I, Goldberg M, Weisskopf MG, Imbernon E, Tzourio C, Elbaz A. 2015. Association of Parkinson's disease and its subtypes with agricultural pesticide exposures in men: a case-control study in France. Environ Health Perspect 123:1123-1129; http://dx.doi.org/10.1289/ehp.1307970.

  14. Eigen Spreading

    DTIC Science & Technology

    2008-02-27

    between the PHY layer and for example a host PC computer . The PC wants to generate and receive a sequence of data packets. The PC may also want to send...the testbed is quite similar. Given the intense computational requirements of SVD and other matrix mode operations needed to support eigen spreading a...platform for real time operation. This task is probably the major challenge in the development of the testbed. All compute intensive tasks will be

  15. Comparison of high intensity focused ultrasound (HIFU) exposures using empirical and backscatter attenuation estimation methods

    NASA Astrophysics Data System (ADS)

    Civale, John; Ter Haar, Gail; Rivens, Ian; Bamber, Jeff

    2005-09-01

    Currently, the intensity to be used in our clinical HIFU treatments is calculated from the acoustic path lengths in different tissues measured on diagnostic ultrasound images of the patient in the treatment position, and published values of ultrasound attenuation coefficients. This yields an approximate value for the acoustic power at the transducer required to give a stipulated focal intensity in situ. Estimation methods for the actual acoustic attenuation have been investigated in large parts of the tissue path overlying the target volume from the backscattered ultrasound signal for each patient (backscatter attenuation estimation: BAE). Several methods have been investigated. The backscattered echo information acquired from an Acuson scanner has been used to compute the diffraction-corrected attenuation coefficient at each frequency using two methods: a substitution method and an inverse diffraction filtering process. A homogeneous sponge phantom was used to validate the techniques. The use of BAE to determine the correct HIFU exposure parameters for lesioning has been tested in ex vivo liver. HIFU lesions created with a 1.7-MHz therapy transducer have been studied using a semiautomated image processing technique. The reproducibility of lesion size for given in situ intensities determined using BAE and empirical techniques has been compared.
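
    As a rough illustration of backscatter attenuation estimation (not the substitution or inverse-diffraction-filter methods used in the paper, which additionally correct for diffraction), the fragment below estimates an attenuation slope from the log-spectral difference between echoes gated at two depths. Sampling rate, bandwidth and gate separation are assumed inputs.

```python
import numpy as np

def attenuation_slope(echo_shallow, echo_deep, fs, gate_separation_cm):
    """Attenuation coefficient slope in dB/(cm*MHz) from two gated RF echoes."""
    n = min(len(echo_shallow), len(echo_deep))
    f = np.fft.rfftfreq(n, d=1.0 / fs) / 1e6               # frequency axis in MHz
    S1 = np.abs(np.fft.rfft(echo_shallow, n)) + 1e-12
    S2 = np.abs(np.fft.rfft(echo_deep, n)) + 1e-12
    ratio_db = 20.0 * np.log10(S1 / S2)                    # dB lost between the gates
    band = (f > 1.0) & (f < 5.0)                           # usable bandwidth (assumed)
    # Straight-line fit of dB loss vs frequency; divide by the two-way path length.
    slope, _ = np.polyfit(f[band], ratio_db[band], 1)
    return slope / (2.0 * gate_separation_cm)
```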

  16. A preliminary study of molecular dynamics on reconfigurable computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolinski, C.; Trouw, F. R.; Gokhale, M.

    2003-01-01

    In this paper we investigate the performance of platform FPGAs on a compute-intensive, floating-point-intensive supercomputing application, Molecular Dynamics (MD). MD is a popular simulation technique to track interacting particles through time by integrating their equations of motion. One part of the MD algorithm was implemented using the Fabric Generator (FG) [11] and mapped onto several reconfigurable logic arrays. FG is a Java-based toolset that greatly accelerates construction of the fabrics from an abstract technology-independent representation. Our experiments used technology-independent IEEE 32-bit floating point operators so that the design could be easily re-targeted. Experiments were performed using both non-pipelined and pipelined floating point modules. We present results for the Altera Excalibur ARM System on a Programmable Chip (SoPC), the Altera Stratix EP1S80, and the Xilinx Virtex-II Pro 2VP50. The best results obtained were 5.69 GFlops at 80 MHz (Altera Stratix EP1S80), and 4.47 GFlops at 82 MHz (Xilinx Virtex-II Pro 2VP50). Assuming a 10 W power budget, these results compare very favorably to a 4 GFlop/40 W processing/power rate for a modern Pentium, suggesting that reconfigurable logic can achieve high performance at low power on floating-point-intensive applications.
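
    As a reminder of what the MD kernel actually computes, independent of the FPGA mapping, the sketch below integrates the equations of motion of a few particles with the velocity-Verlet scheme and a toy harmonic pair force; it is illustrative only and unrelated to the kernels benchmarked in the paper.

```python
import numpy as np

def pair_forces(pos, k=1.0, r0=1.0):
    """Harmonic forces between all particle pairs (toy interaction)."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            fij = k * (r - r0) * d / r
            f[i] += fij
            f[j] -= fij
    return f

def velocity_verlet(pos, vel, dt=0.01, steps=1000, mass=1.0):
    """Integrate the equations of motion with the velocity-Verlet scheme."""
    f = pair_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = pair_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

rng = np.random.default_rng(0)
pos, vel = rng.normal(size=(8, 3)), np.zeros((8, 3))
print(velocity_verlet(pos, vel)[0])
```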

  17. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic pain in the upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as conditioning pain stimulus and PPT at TA as test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups neither in PPTs nor pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  18. The application of LANDSAT remote sensing technology to natural resources management. Section 1: Introduction to VICAR - Image classification module. Section 2: Forest resource assessment of Humboldt County.

    NASA Technical Reports Server (NTRS)

    Fox, L., III (Principal Investigator); Mayer, K. E.

    1980-01-01

    A teaching module on image classification procedures using the VICAR computer software package was developed to optimize the training benefits for users of the VICAR programs. The field test of the module is discussed. An intensive forest land inventory strategy was developed for Humboldt County. The results indicate that LANDSAT data can be computer classified to yield site specific forest resource information with high accuracy (82%). The "Douglas-fir 80%" category was found to cover approximately 21% of the county and "Mixed Conifer 80%" covering about 13%. The "Redwood 80%" resource category, which represented dense old growth trees as well as large second growth, comprised 4.0% of the total vegetation mosaic. Furthermore, the "Brush" and "Brush-Regeneration" categories were found to be a significant part of the vegetative community, with area estimates of 9.4 and 10.0%.

  19. Ames S-32 O-16 O-18 Line List for High-Resolution Experimental IR Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Schwenke, David W.; Lee, Timothy J.

    2016-01-01

    By comparing to the most recent experimental data and spectra of the SO2 628 ν1/ν3 bands (see Ulenikov et al., JQSRT 168 (2016) 29-39), this study illustrates the reliability and accuracy of the Ames-296K SO2 line list, which is accurate enough to facilitate such high-resolution spectroscopic analysis. The SO2 628 IR line list is computed on a recently improved potential energy surface (PES) refinement, denoted Ames-Pre2, and the published purely ab initio CCSD(T)/aug-cc-pVQZ dipole moment surface. Progress has been made in both energy level convergence and rovibrational quantum number assignments agreeing with laboratory analysis models. The accuracy of the computed 628 energy levels and line list is similar to what has been achieved and reported for SO2 626 and 646, i.e. 0.01-0.03 cm(exp -1) for bands up to 5500 cm(exp -1). During the comparison, we found some discrepancies in addition to overall good agreements. The three-IR-list based feature-by-feature analysis in a 0.25 cm(exp -1) spectral window clearly demonstrates the power of the current Ames line lists with new assignments, correction of some errors, and intensity contributions from varied sources including other isotopologues. We are inclined to attribute part of the detected discrepancies to an incomplete experimental analysis and missing intensity in the model. With complete line position, intensity, and rovibrational quantum numbers determined at 296 K, spectroscopic analysis is significantly facilitated, especially for a spectral range exhibiting such an unusually high density of lines. The computed 628 rovibrational levels and line list are accurate enough to provide alternatives for the missing bands or suspicious assignments, as well as being helpful in identifying these isotopologues in various celestial environments. The next step will be to revisit the SO2 828 and 646 spectral analyses.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, A.; Sengupta, M.; Wilcox, S.

    This report was part of a multiyear collaboration with the University of Wisconsin and the National Oceanic and Atmospheric Administration (NOAA) to produce high-quality, satellite-based, solar resource datasets for the United States. High-quality solar resource assessment accelerates technology deployment by making a positive impact on decision making and reducing uncertainty in investment decisions. Satellite-based solar resource datasets are used as a primary source in solar resource assessment. This is mainly because satellites provide larger areal coverage and longer periods of record than ground-based measurements. With the advent of newer satellites with increased information content and faster computers that can process increasingly higher data volumes, methods that were considered too computationally intensive are now feasible. One class of sophisticated methods for retrieving solar resource information from satellites is a two-step, physics-based method that computes cloud properties and uses the information in a radiative transfer model to compute solar radiation. This method has the advantage of adding additional information as satellites with newer channels come on board. This report evaluates the two-step method developed at NOAA and adapted for solar resource assessment for renewable energy with the goal of identifying areas that can be improved in the future.

  1. Differences in muscle load between computer and non-computer work among office workers.

    PubMed

    Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A

    2009-12-01

    Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds (NCTs)). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.

  2. Phased Array Imaging of Complex-Geometry Composite Components.

    PubMed

    Brath, Alex J; Simonetti, Francesco

    2017-10-01

    Progress in computational fluid dynamics and the availability of new composite materials are driving major advances in the design of aerospace engine components which now have highly complex geometries optimized to maximize system performance. However, shape complexity poses significant challenges to traditional nondestructive evaluation methods whose sensitivity and selectivity rapidly decrease as surface curvature increases. In addition, new aerospace materials typically exhibit an intricate microstructure that further complicates the inspection. In this context, an attractive solution is offered by combining ultrasonic phased array (PA) technology with immersion testing. Here, the water column formed between the complex surface of the component and the flat face of a linear or matrix array probe ensures ideal acoustic coupling between the array and the component as the probe is continuously scanned to form a volumetric rendering of the part. While the immersion configuration is desirable for practical testing, the interpretation of the measured ultrasonic signals for image formation is complicated by reflection and refraction effects that occur at the water-component interface. To account for refraction, the geometry of the interface must first be reconstructed from the reflected signals and subsequently used to compute suitable delay laws to focus inside the component. These calculations are based on ray theory and can be computationally intensive. Moreover, strong reflections from the interface can lead to a thick dead zone beneath the surface of the component which limits sensitivity to shallow subsurface defects. This paper presents a general approach that combines advanced computing for rapid ray tracing in anisotropic media with a 256-channel parallel array architecture. The full-volume inspection of complex-shape components is enabled through the combination of both reflected and transmitted signals through the part using a pair of arrays held in a yoke configuration. Experimental results are provided for specimens of increasing complexity relevant to aerospace applications such as fan blades. It is shown that PA technology can provide a robust solution to detect a variety of defects including porosity and waviness in composite parts.
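
    The delay-law computation mentioned above can be illustrated with a much-simplified case: a flat water-solid interface, an isotropic solid, and a 2-D geometry, where Fermat's principle reduces to a one-dimensional minimization of the two-medium travel time for each element. The wave speeds, geometry and function names below are assumptions, not the paper's anisotropic ray-tracing solver.

```python
import numpy as np
from scipy.optimize import minimize_scalar

C_WATER, C_SOLID = 1480.0, 5900.0          # m/s (longitudinal, assumed)

def travel_time(x_cross, element, focus, interface_z=0.0):
    """Time from an array element (above the interface) to a focal point
    (below it), crossing the interface at horizontal position x_cross."""
    p = np.array([x_cross, interface_z])
    return (np.linalg.norm(p - element) / C_WATER +
            np.linalg.norm(focus - p) / C_SOLID)

def delay_law(elements, focus):
    """Focusing delays: latest-arriving ray minus each element's minimal time."""
    times = []
    for e in elements:
        res = minimize_scalar(travel_time, args=(e, focus),
                              bounds=(e[0] - 0.1, focus[0] + 0.1),
                              method="bounded")          # Fermat: minimise time
        times.append(res.fun)
    times = np.array(times)
    return times.max() - times

# 16-element linear array 30 mm above the interface, focus 15 mm deep.
elements = np.stack([np.linspace(-0.02, 0.02, 16), np.full(16, 0.03)], axis=1)
print(delay_law(elements, focus=np.array([0.0, -0.015])))
```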

  3. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  4. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern smartphones. However, such applications are still constrained by the limited processing power, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Mobile Cloud Computing (MCC) therefore leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. A number of computational offloading frameworks have been proposed for MCC in which the intensive components of an application are outsourced to computational clouds. Nevertheless, such frameworks rely on runtime partitioning of the application, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. This paper therefore presents a lightweight framework that focuses on minimizing the additional resource utilization incurred by computational offloading in MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment, and its lightweight nature is validated by performing computational offloading with both the proposed framework and the latest existing frameworks. The analysis shows that, compared with existing offloading frameworks, the proposed framework reduces the size of data transmission by 91%, energy consumption cost by 81% and application turnaround time by 83.5%. Hence, the proposed framework minimizes additional resource utilization and offers a lightweight solution for computational offloading in MCC. PMID:25127245

  5. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
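    For readers unfamiliar with ABC, the following minimal sketch (Python, illustrative only) shows plain rejection-sampling ABC on a toy one-parameter problem: candidate parameters are drawn from a prior, pushed through a forward model, and kept only when summary statistics of the simulated data fall within a tolerance of the observed summaries. It is not the ABC-enabled autonomic closure formulation itself.

      # Sketch: rejection-sampling ABC on a toy one-parameter problem (illustration of
      # the ABC idea only, not the autonomic-closure formulation).
      import numpy as np

      rng = np.random.default_rng(0)

      def forward_model(theta, n=200):
          """Toy forward model: noisy observations with mean theta."""
          return theta + rng.normal(0.0, 1.0, n)

      summary = lambda x: np.array([x.mean(), x.std()])   # summary statistics
      s_obs = summary(forward_model(2.5))                 # pretend "measurements"

      accepted = []
      eps = 0.15                                          # acceptance tolerance
      for _ in range(20000):
          theta = rng.uniform(-5.0, 5.0)                  # draw from the prior
          if np.linalg.norm(summary(forward_model(theta)) - s_obs) < eps:
              accepted.append(theta)                      # keep parameters whose simulated
                                                          # summaries are close to the data
      print(len(accepted), np.mean(accepted), np.std(accepted))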

  6. The large-scale structure of software-intensive systems

    PubMed Central

    Booch, Grady

    2012-01-01

    The computer metaphor is dominant in most discussions of neuroscience, but the semantics attached to that metaphor are often quite naive. Herein, we examine the ontology of software-intensive systems, the nature of their structure and the application of the computer metaphor to the metaphysical questions of self and causation. PMID:23386964

  7. Complex dark-field contrast and its retrieval in x-ray phase contrast imaging implemented with Talbot interferometry.

    PubMed

    Yang, Yi; Tang, Xiangyang

    2014-10-01

    Under the existing theoretical framework of x-ray phase contrast imaging methods implemented with Talbot interferometry, the dark-field contrast refers to the reduction in interference fringe visibility due to small-angle x-ray scattering of the subpixel microstructures of an object to be imaged. This study investigates how an object's subpixel microstructures can also affect the phase of the intensity oscillations. Instead of assuming that the object's subpixel microstructures distribute in space randomly, the authors' theoretical derivation starts by assuming that an object's attenuation projection and phase shift vary at a characteristic size that is not smaller than the period of analyzer grating G₂ and a characteristic length dc. Based on the paraxial Fresnel-Kirchhoff theory, the analytic formulae to characterize the zeroth- and first-order Fourier coefficients of the x-ray irradiance recorded at each detector cell are derived. Then the concept of complex dark-field contrast is introduced to quantify the influence of the object's microstructures on both the interference fringe visibility and the phase of intensity oscillations. A method based on the phase-attenuation duality that holds for soft tissues and high x-ray energies is proposed to retrieve the imaginary part of the complex dark-field contrast for imaging. Through computer simulation study with a specially designed numerical phantom, they evaluate and validate the derived analytic formulae and the proposed retrieval method. Both theoretical analysis and computer simulation study show that the effect of an object's subpixel microstructures on x-ray phase contrast imaging method implemented with Talbot interferometry can be fully characterized by a complex dark-field contrast. The imaginary part of complex dark-field contrast quantifies the influence of the object's subpixel microstructures on the phase of intensity oscillations. Furthermore, at relatively high energies, for soft tissues it can be retrieved for imaging with a method based on the phase-attenuation duality. The analytic formulae derived in this work to characterize the complex dark-field contrast in x-ray phase contrast imaging method implemented with Talbot interferometry are of significance, which may initiate more activities in the research and development of x-ray differential phase contrast imaging for extensive biomedical applications.
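    The retrieval idea can be illustrated with the standard phase-stepping analysis: the zeroth- and first-order Fourier coefficients of the stepping curve give the mean intensity, the fringe visibility and the phase of the intensity oscillation. The sketch below uses synthetic numbers and the generic formulae, not the analytic expressions derived in the paper.

      # Sketch: mean intensity, fringe visibility and oscillation phase from a
      # phase-stepping curve via its zeroth- and first-order Fourier coefficients.
      import numpy as np

      N = 8                                       # phase steps over one grating period
      k = np.arange(N)
      a0_true, a1_true, phi_true = 100.0, 30.0, 0.7
      I = a0_true + a1_true * np.cos(2 * np.pi * k / N + phi_true)
      I = I + np.random.default_rng(1).normal(0.0, 0.5, N)   # detector noise

      F = np.fft.fft(I)
      a0 = F[0].real / N                          # zeroth-order coefficient: mean intensity
      a1 = 2.0 * np.abs(F[1]) / N                 # first-order amplitude
      phi = np.angle(F[1])                        # phase of the intensity oscillation
      visibility = a1 / a0                        # reduced by small-angle scattering

      print(round(a0, 2), round(visibility, 3), round(phi, 3))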

  8. Comment on the paper "NDSD-1000: High-resolution, high-temperature nitrogen dioxide spectroscopic databank" by A.A. Lukashevskaya, N.N. Lavrentieva, A.C. Dudaryonok, V.I. Perevalov, J Quant Spectrosc Radiat Transfer 2016;184:205-17

    NASA Astrophysics Data System (ADS)

    Perrin, A.; Ndao, M.; Manceron, L.

    2017-10-01

    A recent paper [1] presents a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The NDSD-1000 database contains line parameters (positions, intensities, self- and air-broadening coefficients, exponents of the temperature dependence of self- and air-broadening coefficients) for numerous cold and hot bands of the 14N16O2 isotopomer of nitrogen dioxide. The parameters used for the calculation of line positions and intensities were generated through a global modeling of experimental data collected in the literature within the framework of the method of effective operators. However, the form of the effective dipole moment operator used to compute the NO2 line intensities in the NDSD-1000 database differs from the classical one used for line intensity calculations in the NO2 infrared literature [12]. Using Fourier transform spectra recorded at high resolution in the 6.3 μm region, it is shown here that the NDSD-1000 formulation is incorrect, since the computed intensities do not properly account for the Int(+)/Int(-) intensity ratio between the (+) (J = N + 1/2) and (-) (J = N - 1/2) electron spin-rotation subcomponents of the computed vibration-rotation transitions. On the other hand, in the HITRAN or GEISA spectroscopic databases, the NO2 line intensities were computed using the classical theoretical approach, and it is shown here that these data lead to significantly better agreement between the observed and calculated spectra.

  9. Decision making and preferences for acoustic signals in choice situations by female crickets.

    PubMed

    Gabel, Eileen; Kuntze, Janine; Hennig, R Matthias

    2015-08-01

    Multiple attributes usually have to be assessed when choosing a mate. Efficient choice of the best mate is complicated if the available cues are not positively correlated, as is often the case during acoustic communication. Because of varying distances of signalers, a female may be confronted with signals of diverse quality at different intensities. Here, we examined how available cues are weighted for a decision by female crickets. Two songs with different temporal patterns and/or sound intensities were presented in a choice paradigm and compared with female responses from a no-choice test. When both patterns were presented at equal intensity, preference functions became wider in choice situations compared with a no-choice paradigm. When the stimuli in two-choice tests were presented at different intensities, this effect was counteracted as preference functions became narrower compared with choice tests using stimuli of equal intensity. The weighting of intensity differences depended on pattern quality and was therefore non-linear. A simple computational model based on pattern and intensity cues reliably predicted female decisions. A comparison of processing schemes suggested that the computations for pattern recognition and directionality are performed in a network with parallel topology. However, the computational flow of information corresponded to serial processing. © 2015. Published by The Company of Biologists Ltd.

  10. Integrating O/S models during conceptual design, part 3

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles E.

    1994-01-01

    Space vehicles, such as the Space Shuttle, require intensive ground support prior to, during, and after each mission. Maintenance is a significant part of that ground support. All space vehicles require scheduled maintenance to ensure operability and performance. In addition, components of any vehicle are not one-hundred percent reliable, so they exhibit random failures. Once detected, a failure initiates unscheduled maintenance on the vehicle. Maintenance decreases the number of missions which can be completed by keeping vehicles out of service, so that the time between the completion of one mission and the start of the next is increased. Maintenance also requires resources such as people, facilities, tooling, and spare parts. Assessing the mission capability and resource requirements of any new space vehicle, in addition to performance specification, is necessary to predict the life cycle cost and success of the vehicle. Maintenance and logistics support has been modeled by computer simulation to estimate mission capability and resource requirements for evaluation of proposed space vehicles. The simulation was written in Simulation Language for Alternative Modeling II (SLAM II) for execution on a personal computer. For either one or a fleet of space vehicles, the model simulates the preflight maintenance checks, the mission and return to earth, and the post-flight maintenance in preparation to be sent back into space. The model enables prediction of the number of missions possible and vehicle turn-time (the time between completion of one mission and the start of the next) given estimated values for component reliability and maintainability. The model also facilitates study of the manpower and vehicle requirements for the proposed vehicle to meet its desired mission rate. This is the third part of a three-part technical report.
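    A toy Monte Carlo version of such a maintenance/mission cycle is sketched below (Python). All component reliabilities, repair times and schedule lengths are invented for illustration; the actual study used a SLAM II discrete-event model.

      # Sketch: toy Monte Carlo of a reusable vehicle's mission/maintenance cycle
      # (all reliabilities, repair times and schedules are invented for illustration).
      import random

      random.seed(42)

      COMPONENTS = [                       # (name, failure probability per mission, repair days)
          ("thermal tiles", 0.30, 5.0),
          ("main engine", 0.10, 12.0),
          ("avionics", 0.05, 3.0),
      ]
      SCHEDULED_MAINT_DAYS = 20.0          # pre- and post-flight checks
      MISSION_DAYS = 10.0
      HORIZON_DAYS = 5 * 365.0

      def simulate(n_runs=2000):
          missions, turn_times = [], []
          for _ in range(n_runs):
              t, n = 0.0, 0
              while True:
                  turn = SCHEDULED_MAINT_DAYS
                  for _, p_fail, repair_days in COMPONENTS:     # unscheduled maintenance
                      if random.random() < p_fail:
                          turn += repair_days
                  if t + turn + MISSION_DAYS > HORIZON_DAYS:
                      break
                  t += turn + MISSION_DAYS
                  n += 1
                  turn_times.append(turn)
              missions.append(n)
          return sum(missions) / n_runs, sum(turn_times) / len(turn_times)

      avg_missions, avg_turn = simulate()
      print(f"missions per vehicle over 5 years: {avg_missions:.1f}, mean turn-time: {avg_turn:.1f} days")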

  11. Modes competition in superradiant emission from an inverted sub-wavelength thick slab of two-level atoms

    NASA Astrophysics Data System (ADS)

    Manassah, Jamal T.

    2016-08-01

    Using the expansion in the eigenmodes of the 1-D Lienard-Wiechert kernel, the temporal and spectral profiles of the radiation emitted by a fully inverted collection of two-level atoms in a sub-wavelength slab geometry are computed. The initial number of amplifying modes determines the specific regime of radiation. In particular, the temporal profile of the field intensity is oscillatory, and the spectral profile is non-Lorentzian, with two unequal-height peaks in a narrow band centered at the slab thickness value at which the real parts of the lowest-order odd and even eigenvalues are equal.

  12. Robust feature extraction for rapid classification of damage in composites

    NASA Astrophysics Data System (ADS)

    Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi

    2009-03-01

    The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require a lot of training examples or are very computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) along with Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage state determination. LDA was used for feature extraction of damage signals from piezoelectric sensors on a composite plate and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that needs to be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
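    A minimal sketch of the LDA-then-SVM idea, using scikit-learn on synthetic data, is shown below. It omits the binary-tree organization of SVM classifiers and, of course, the piezoelectric sensor data used in the paper.

      # Sketch: LDA feature extraction followed by SVM classification on synthetic data
      # (the paper's binary-tree arrangement of SVMs and its sensor data are omitted).
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # Stand-in for sensor signals: 600 samples, 200 features, 4 "damage states".
      X, y = make_classification(n_samples=600, n_features=200, n_informative=20,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # LDA projects onto at most (n_classes - 1) discriminant directions, shrinking the
      # feature space before the quadratic SVM training problem is solved.
      clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3), SVC(kernel="rbf", C=1.0))
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", round(clf.score(X_te, y_te), 3))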

  13. Integrating Intelligent Systems Domain Knowledge Into the Earth Science Curricula

    NASA Astrophysics Data System (ADS)

    Güereque, M.; Pennington, D. D.; Pierce, S. A.

    2017-12-01

    High-volume heterogeneous datasets have become ubiquitous, migrating to center stage over the last ten years and spreading beyond computationally intensive disciplines into the mainstream to become a fundamental part of every science discipline. Despite the fact that large datasets are now pervasive across industries and academic disciplines, the corresponding skill set is generally absent from earth science programs. This has left the bulk of the student population without access to curricula that systematically teach appropriate intelligent-systems skills, creating a void for skill sets that should be universal given their need and marketability. While some guidance regarding appropriate computational thinking and pedagogy is appearing, there are few examples that have been specifically designed and tested within the earth science domain. Furthermore, best practices from learning science have not yet been widely tested for developing intelligent-systems thinking skills. This research developed and tested evidence-based computational skill modules that target this deficit, with the intention of informing the earth science community as it continues to incorporate intelligent-systems techniques and reasoning into its research and classrooms.

  14. Computing the Evans function via solving a linear boundary value ODE

    NASA Astrophysics Data System (ADS)

    Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn

    2015-11-01

    Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
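    The basic object can be illustrated by simple two-sided shooting for a scalar Schrödinger-type operator whose eigenvalue is known analytically: solutions decaying at minus and plus infinity are integrated toward the origin and their Wronskian is evaluated there. The sketch below (Python/SciPy) shows this construction only; it is not the boundary-value algorithm proposed by the authors.

      # Sketch: an Evans-type function for L u = u'' + 2 sech(x)^2 u, formed as the
      # Wronskian at x = 0 of solutions decaying at -inf and +inf (two-sided shooting;
      # the cheaper boundary-value formulation proposed in the work is not reproduced).
      import numpy as np
      from scipy.integrate import solve_ivp

      X_INF = 15.0                               # truncated "infinity"

      def rhs(x, y, lam):
          u, up = y
          return [up, (lam - 2.0 / np.cosh(x) ** 2) * u]

      def evans(lam):
          mu = np.sqrt(lam)
          # solution decaying as x -> -inf (behaves like exp(+mu*x) there)
          left = solve_ivp(rhs, [-X_INF, 0.0], [1.0, mu], args=(lam,), rtol=1e-10, atol=1e-12)
          # solution decaying as x -> +inf (behaves like exp(-mu*x) there)
          right = solve_ivp(rhs, [X_INF, 0.0], [1.0, -mu], args=(lam,), rtol=1e-10, atol=1e-12)
          uL, uLp = left.y[:, -1]
          uR, uRp = right.y[:, -1]
          return uL * uRp - uLp * uR             # Wronskian; zero <=> lam is an eigenvalue

      for lam in (0.5, 0.9, 1.0, 1.1, 1.5):
          print(lam, f"{evans(lam):+.3e}")       # vanishes near lam = 1 (eigenfunction sech x)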

  15. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.

  16. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
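    The multi-algorithm idea can be imitated in a few lines by tagging intermediate keys with an algorithm identifier, so that several analyses share a single map and reduce pass. The sketch below is a toy in-process illustration in Python, not the Hadoop-based MRPack implementation.

      # Sketch: two "algorithms" sharing one map/reduce pass by tagging intermediate keys
      # with an algorithm id (toy in-process illustration, not the Hadoop-based MRPack).
      from collections import defaultdict

      documents = ["big data needs big clusters", "map reduce maps and reduces data"]

      def mapper(doc):
          for word in doc.split():
              yield ("wordcount", word), 1           # algorithm 1: word count
              yield ("lengths", len(word)), 1        # algorithm 2: word-length histogram

      def reducer(key, values):
          return key, sum(values)

      groups = defaultdict(list)                     # "shuffle": group by composite key
      for doc in documents:
          for key, value in mapper(doc):
              groups[key].append(value)

      results = dict(reducer(k, v) for k, v in groups.items())
      print({k[1]: v for k, v in results.items() if k[0] == "wordcount"})
      print({k[1]: v for k, v in results.items() if k[0] == "lengths"})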

  17. Quantitative ROESY analysis of computational models: structural studies of citalopram and β-cyclodextrin complexes by 1H-NMR and computational methods.

    PubMed

    Ali, Syed Mashhood; Shamim, Shazia

    2015-07-01

    Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine an atom-accurate structure of the inclusion complexes. 1H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1:1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but whether one or both rings were included was not clear. Molecular mechanics and molecular dynamics calculations showed entry of the fluoro-ring from the wider side of the β-CD cavity as the most favored mode of inclusion. Minimum-energy computational models were analyzed for the accuracy of their atomic coordinates by comparison of calculated and experimental intermolecular ROESY peak intensities, which were not found to be in agreement. Several least-energy computational models were refined and analyzed until the calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom accuracy and that quantitative ROESY analysis is a promising method. Moreover, the study also validates that quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios rather than absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.

  18. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data, in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

  19. Hue-saturation-density (HSD) model for stain recognition in digital images from transmitted light microscopy.

    PubMed

    van Der Laak, J A; Pahlplatz, M M; Hanselaar, A G; de Wilde, P C

    2000-04-01

    Transmitted light microscopy is used in pathology to examine stained tissues. Digital image analysis is gaining importance as a means to quantify alterations in tissues. A prerequisite for accurate and reproducible quantification is the possibility to recognise stains in a standardised manner, independently of variations in the staining density. The usefulness of three colour models was studied using data from computer simulations and experimental data from an immuno-doublestained tissue section. Direct use of the three intensities obtained by a colour camera results in the red-green-blue (RGB) model. By decoupling the intensity from the RGB data, the hue-saturation-intensity (HSI) model is obtained. However, the major part of the variation in perceived intensities in transmitted light microscopy is caused by variations in staining density. Therefore, the hue-saturation-density (HSD) transform was defined as the RGB to HSI transform, applied to optical density values rather than intensities for the individual RGB channels. In the RGB model, the mixture of chromatic and intensity information hampers standardisation of stain recognition. In the HSI model, mixtures of stains that could be distinguished from other stains in the RGB model could not be separated. The HSD model enabled all possible distinctions in a two-dimensional, standardised data space. In the RGB model, standardised recognition is only possible by using complex and time-consuming algorithms. The HSI model is not suitable for stain recognition in transmitted light microscopy. The newly derived HSD model was found superior to the existing models for this purpose. Copyright 2000 Wiley-Liss, Inc.
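    A minimal sketch of the HSD idea is given below: per-channel optical densities are computed from the RGB intensities, and hue/saturation-like chromatic coordinates are then formed on the densities rather than on the intensities. The particular chromatic-coordinate formulae follow one common formulation and may differ in detail from the published definition.

      # Sketch: a hue-saturation-density style transform - per-channel optical densities,
      # then chromatic coordinates computed on the densities rather than the intensities.
      # (One common formulation; details may differ from the published HSD definition.)
      import numpy as np

      def hsd(rgb, i0=255.0, eps=1e-6):
          """rgb: float array (..., 3) of transmitted intensities; i0: incident intensity."""
          od = -np.log(np.clip(rgb, 1.0, i0) / i0)       # optical density per channel
          d = od.mean(axis=-1)                           # overall staining density
          cx = od[..., 0] / (d + eps) - 1.0              # chromatic coordinates, intended to be
          cy = (od[..., 1] - od[..., 2]) / (np.sqrt(3.0) * (d + eps))   # density independent
          return np.arctan2(cy, cx), np.hypot(cx, cy), d # hue, saturation, density

      # Example: two pixels of roughly the same stain at different staining densities.
      pixels = np.array([[180.0, 90.0, 120.0], [120.0, 30.0, 55.0]])
      hue, sat, density = hsd(pixels)
      print(np.round(hue, 3), np.round(sat, 3), np.round(density, 3))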

  20. Assessing Linearity in the Loudness Envelope of the Messa di Voce Singing Exercise Through Acoustic Signal Analysis.

    PubMed

    Yadav, Manuj; Cabrera, Densil; Kenny, Dianna T

    2015-09-01

    Messa di voce (MDV) is a singing exercise that involves sustaining a single pitch with a linear change in loudness from silence to maximum intensity (the crescendo part) and back to silence again (the decrescendo part), with time symmetry between the two parts. Previous studies have used the sound pressure level (SPL, in decibels) of a singer's voice to measure loudness, so as to assess the linearity of each part, an approach that has limitations because loudness and SPL are not linearly related. This article studies the loudness envelope shapes of MDVs, comparing the SPL approach with approaches that are more closely related to human loudness perception. The MDVs were performed by a cohort of tertiary singing students, recorded six times (once per semester) over a period of 3 years. The loudness envelopes were derived for a typical audience listening position, and for listening to one's own singing, using three models: SPL, a Stevens' power-law-based model, and a computational loudness model. The effects on the envelope shape due to room acoustics (an important effect) and vibrato (minimal effect) were also considered. The results showed that the SPL model yielded a lower proportion of linear crescendi and decrescendi than the other models. The Stevens' power-law-based model provided results similar to the more complicated computational loudness model. Longitudinally, there was no consistent trend in the shape of the MDV loudness envelope for the cohort, although there were some individual singers who exhibited improvements in linearity. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
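    The difference between judging linearity on SPL and on a loudness scale can be illustrated with a toy crescendo and Stevens' power law (loudness roughly doubling per 10 dB). The sketch below uses synthetic data and a simple R-squared linearity measure, not the study's recordings or its computational loudness model.

      # Sketch: judging crescendo linearity in SPL (dB) versus on a Stevens power-law
      # loudness scale (sones, roughly doubling per 10 dB); synthetic envelope only.
      import numpy as np

      t = np.linspace(0.0, 3.0, 200)                 # a 3-second crescendo
      spl = 40.0 + 15.0 * t                          # envelope that is linear in dB
      loudness = 2.0 ** ((spl - 40.0) / 10.0)        # approximate sones (Stevens' law)

      def r_squared(x, y):
          """Coefficient of determination of a straight-line fit y ~ a*x + b."""
          a, b = np.polyfit(x, y, 1)
          return 1.0 - np.var(y - (a * x + b)) / np.var(y)

      print("linearity judged on SPL:     ", round(r_squared(t, spl), 4))
      print("linearity judged on loudness:", round(r_squared(t, loudness), 4))
      # The same envelope can look linear on one scale and curved on the other, which is
      # why the choice of loudness model changes the proportion of "linear" MDV halves.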

  1. Simulating Quantile Models with Applications to Economics and Management

    NASA Astrophysics Data System (ADS)

    Machado, José A. F.

    2010-05-01

    The massive increase in the speed of computers over the past forty years has changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they can feasibly tackle. The new methods that make intensive use of computer power go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
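    The core simulation idea (estimate the conditional quantile process, then draw covariates and uniform quantile indices to simulate the implied marginal distribution) can be sketched as follows with synthetic data and the statsmodels QuantReg estimator; the counterfactual step of swapping covariate distributions or coefficient sets is omitted.

      # Sketch: simulate a marginal distribution from an estimated conditional quantile
      # process (fit quantile regressions on a grid of quantiles, then draw covariates
      # and uniform quantile indices). Synthetic data; statsmodels QuantReg estimator.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.uniform(0.0, 1.0, n)
      y = 1.0 + 2.0 * x + (0.5 + x) * rng.normal(size=n)    # heteroskedastic outcome
      X = sm.add_constant(x)

      taus = np.linspace(0.05, 0.95, 19)
      fits = [sm.QuantReg(y, X).fit(q=tau) for tau in taus]  # conditional quantile process

      draws = []                                             # Monte Carlo marginal
      for _ in range(5000):
          i = rng.integers(n)                                # random covariate row
          fit = fits[rng.integers(len(taus))]                # random quantile index
          draws.append(fit.predict(X[i:i + 1])[0])
      draws = np.array(draws)

      for q in (0.1, 0.5, 0.9):
          print(q, round(np.quantile(draws, q), 2), round(np.quantile(y, q), 2))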

  2. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity.

    PubMed

    Madeleine, Pascal; Vangsgaard, Steffen; Hviid Andersen, Johan; Ge, Hong-You; Arendt-Nielsen, Lars

    2013-08-01

    Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis of short- and long-term pain complaints and work-related variables in a cohort of Danish computer users. A structured web-based questionnaire was designed, including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables and physical activity during leisure time. Six hundred and ninety office workers completed the questionnaire in response to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Women reported higher pain intensity, longer pain duration and more locations with pain than men (P < 0.05). In parallel, women scored lower on work ability and on the ability to fulfil productivity requirements than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations, as well as the poorer work ability reported by women workers, relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users.

  3. Numerical estimation of cavitation intensity

    NASA Astrophysics Data System (ADS)

    Krumenacker, L.; Fortes-Patella, R.; Archer, A.

    2014-03-01

    Cavitation may appear in turbomachinery and in hydraulic orifices, venturis or valves, leading to performance losses, vibrations and material erosion. This study proposes a new method to predict the cavitation intensity of the flow, based on post-processing of unsteady CFD calculations. The paper presents analyses of the evolution of cavitating structures at two different scales: • a macroscopic one, in which the growth of cavitating structures is calculated using a URANS solver based on a homogeneous model; simulations of cavitating flows are computed using a barotropic law that accounts for the presence of air and interfacial tension, with Reboud's correction on the turbulence model; • then a small one, in which a Rayleigh-Plesset solver calculates the acoustic energy generated by the implosion of the vapor/gas bubbles, with input parameters taken from the macroscopic scale. The volumetric damage rate of the material during the incubation time is assumed to be a fraction of the cumulative acoustic energy received by the solid wall. The proposed analysis method is applied to calculations on hydrofoil and orifice geometries. Comparisons between model results and experimental work concerning flow characteristics (cavity size, pressure, velocity) as well as pitting (erosion area, relative cavitation intensity) are presented.
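    The small-scale ingredient can be illustrated by integrating the Rayleigh-Plesset equation for a single bubble driven by a prescribed far-field pressure history, as sketched below; all fluid properties and the pressure signal are illustrative assumptions, and the coupling to URANS output used in the paper is not included.

      # Sketch: single-bubble Rayleigh-Plesset integration under a prescribed far-field
      # pressure drop (illustrative properties; the URANS coupling is not included).
      import numpy as np
      from scipy.integrate import solve_ivp

      RHO, SIGMA, MU = 998.0, 0.0728, 1.0e-3      # water density, surface tension, viscosity
      P_V, P0, R0, KAPPA = 2.3e3, 101.3e3, 50e-6, 1.4
      PG0 = P0 + 2 * SIGMA / R0 - P_V             # initial gas pressure (static equilibrium)

      def p_inf(t):
          """Far-field pressure: a short low-pressure excursion that drives bubble growth."""
          return P0 - 30e3 * np.exp(-((t - 15e-6) / 5e-6) ** 2)

      def rp(t, y):
          R, Rdot = y
          p_b = PG0 * (R0 / R) ** (3 * KAPPA) + P_V          # pressure inside the bubble
          Rddot = ((p_b - p_inf(t)) / RHO - 1.5 * Rdot ** 2
                   - 4 * MU * Rdot / (RHO * R) - 2 * SIGMA / (RHO * R)) / R
          return [Rdot, Rddot]

      sol = solve_ivp(rp, [0.0, 60e-6], [R0, 0.0], rtol=1e-7, atol=1e-10, max_step=1e-7)
      print("max radius (um):", round(sol.y[0].max() * 1e6, 1),
            " min radius (um):", round(sol.y[0].min() * 1e6, 2))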

  4. Computation of the intensities of parametric holographic scattering patterns in photorefractive crystals.

    PubMed

    Schwalenberg, Simon

    2005-06-01

    The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.

  5. Imaging of cerebellopontine angle lesions: an update. Part 1: enhancing extra-axial lesions.

    PubMed

    Bonneville, Fabrice; Savatovsky, Julien; Chiras, Jacques

    2007-10-01

    Computed tomography (CT) and magnetic resonance (MR) imaging reliably demonstrate typical features of vestibular schwannomas or meningiomas in the vast majority of mass lesions in the cerebellopontine angle (CPA). However, a large variety of unusual lesions can also be encountered in the CPA. Covering the entire spectrum of lesions potentially found in the CPA, these articles explain the pertinent neuroimaging features that radiologists need to know to make clinically relevant diagnoses in these cases, including data from diffusion and perfusion-weighted imaging or MR spectroscopy, when available. A diagnostic algorithm based on the lesion's site of origin, shape and margins, density, signal intensity and contrast material uptake is also proposed. Part 1 describes the different enhancing extra-axial CPA masses primarily arising from the cerebellopontine cistern and its contents, including vestibular and non-vestibular schwannomas, meningioma, metastasis, aneurysm, tuberculosis and other miscellaneous meningeal lesions.

  6. Dimensional Measurements of Three Tubes by Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneberk, D J; Martz, Jr., H E; Brown, W D

    2004-10-05

    Low-density polyethylene (LDPE), copper (Cu), and gold (Au) tubes were scanned on KCAT to identify and evaluate the impact of phase effects on quantitative object recovery. These tubes are phantoms for high-energy-density capsules [Logan et al., 2004]. Digital radiographs for each tube are shown in Figure 1. The LDPE tube was scanned at 60 kV, while the Cu and Au tubes were scanned at 140 kV. All tubes were scanned at a magnification of 3, with approximately 100 mm between the exit plane of the tube and the scintillator. Notice the prominence of the outer bright and inner dark edges for the LDPE tube DR, and their absence from the Cu and Au tube DRs. The bright and dark edges are a result of a change in phase of the x-rays. The x-ray fluence is partly attenuated and partly refracted. The location near the outer edge of the tube appears to be more attenuating since those x-rays have refracted to locations just outside the tube. Alternatively, the added counts from the refraction result in intensities that are greater than the incident intensity, effectively representing a 'negative attenuation'. This produces more counts in that location than in the incident intensity image, violating the 'positive-definite' requirement for standard CT reconstruction methodologies. One aspect of our CT processing techniques removes some of this signal on the outside of the object. The goal of this paper is to evaluate the accuracy of our dimensional measurement methods for mesoscale object inspection.

  7. Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception

    DTIC Science & Technology

    2017-12-01

    computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650’s Intel...Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead

  8. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.

  9. Lithographic image simulation for the 21st century with 19th-century tools

    NASA Astrophysics Data System (ADS)

    Gordon, Ronald L.; Rosenbluth, Alan E.

    2004-01-01

    Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well understood and need little further investigation. The imaging process in optical lithography is modeled as a partially coherent, Köhler illumination system. As Hopkins has shown, we can separate the computation into two pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and the other that only needs the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of quantities called Transmission Cross-Coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The downside, however, is that the number of these expensive double integrals that must be performed increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will purportedly represent a manufacturing process. This work will review the current methodologies used to simulate the intensity distribution in air above the resist and address the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can either be evaluated analytically or replaced by analytical functions accurate to within machine precision. With the burden of computing these quantities lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
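    The structure of the Hopkins decomposition is easy to see in one dimension: the TCC matrix is an expensive, mask-independent sum over source points, while the aerial image is a cheap bilinear form in the mask's Fourier coefficients. The sketch below is a scalar 1D illustration with assumed wavelength, NA and coherence settings; it uses brute-force numerical summation rather than the analytic evaluation developed in this work.

      # Sketch (scalar, 1D): Hopkins partially coherent imaging. The TCC matrix depends
      # only on the source and pupil and is computed once (the expensive step); the mask
      # enters later through its Fourier coefficients (the cheap, repeatable step).
      import numpy as np

      wavelength, NA, sigma = 193.0, 0.75, 0.5       # nm; pupil NA; partial coherence factor
      pitch = 600.0                                  # mask period, nm
      M = 7                                          # keep diffraction orders -M..M
      orders = np.arange(-M, M + 1)
      f = orders / pitch                             # spatial frequencies of the orders

      pupil = lambda g: (np.abs(g) <= NA / wavelength).astype(float)
      source = np.linspace(-sigma * NA / wavelength, sigma * NA / wavelength, 201)

      # TCC(f1, f2) = sum over source points of S(g) P(g + f1) P*(g + f2)
      TCC = np.zeros((f.size, f.size), dtype=complex)
      for g in source:
          pg = pupil(g + f)
          TCC += np.outer(pg, pg.conj())
      TCC /= source.size

      # Mask: equal lines and spaces, centered line -> real Fourier coefficients.
      mask = np.where(orders == 0, 0.5, np.sin(np.pi * orders * 0.5) / (np.pi * orders + 1e-30))

      # Aerial image I(x) = sum_{f1,f2} TCC(f1,f2) M(f1) M*(f2) exp(2*pi*i*(f1-f2)*x)
      x = np.linspace(0.0, pitch, 200)
      phase = np.exp(2j * np.pi * np.subtract.outer(f, f)[None, :, :] * x[:, None, None])
      image = np.einsum('jk,j,k,xjk->x', TCC, mask, np.conj(mask), phase).real
      print("image contrast:", round((image.max() - image.min()) / (image.max() + image.min()), 3))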

  10. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Methods of computational physics in the problem of mathematical interpretation of laser investigations

    NASA Astrophysics Data System (ADS)

    Brodyn, M. S.; Starkov, V. N.

    2007-07-01

    It is shown that, in laser experiments performed with an 'imperfect' setup in which instrumental distortions are considerable, sufficiently accurate results can still be obtained with modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function, a 'sister' of the Gaussian curve, is precisely what is required in laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured one.

  11. Java Performance for Scientific Applications on LLNL Computer Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapfer, C; Wissink, A

    2002-05-10

    Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

  12. Computer-based psychological treatment for comorbid depression and problematic alcohol and/or cannabis use: a randomized controlled trial of clinical efficacy.

    PubMed

    Kay-Lambkin, Frances J; Baker, Amanda L; Lewin, Terry J; Carr, Vaughan J

    2009-03-01

    To evaluate computer- versus therapist-delivered psychological treatment for people with comorbid depression and alcohol/cannabis use problems. Randomized controlled trial. Community-based participants in the Hunter Region of New South Wales, Australia. Ninety-seven people with comorbid major depression and alcohol/cannabis misuse. All participants received a brief intervention (BI) for depressive symptoms and substance misuse, followed by random assignment to: no further treatment (BI alone); or nine sessions of motivational interviewing and cognitive behaviour therapy (intensive MI/CBT). Participants allocated to the intensive MI/CBT condition were selected at random to receive their treatment 'live' (i.e. delivered by a psychologist) or via a computer-based program (with brief weekly input from a psychologist). Depression, alcohol/cannabis use and hazardous substance use index scores measured at baseline, and 3, 6 and 12 months post-baseline assessment. (i) Depression responded better to intensive MI/CBT compared to BI alone, with 'live' treatment demonstrating a strong short-term beneficial effect which was matched by computer-based treatment at 12-month follow-up; (ii) problematic alcohol use responded well to BI alone and even better to the intensive MI/CBT intervention; (iii) intensive MI/CBT was significantly better than BI alone in reducing cannabis use and hazardous substance use, with computer-based therapy showing the largest treatment effect. Computer-based treatment, targeting both depression and substance use simultaneously, results in at least equivalent 12-month outcomes relative to a 'live' intervention. For clinicians treating people with comorbid depression and alcohol problems, BIs addressing both issues appear to be an appropriate and efficacious treatment option. Primary care of those with comorbid depression and cannabis use problems could involve computer-based integrated interventions for depression and cannabis use, with brief regular contact with the clinician to check on progress.

  13. Perspectives on Emerging/Novel Computing Paradigms and Future Aerospace Workforce Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2003-01-01

    The accelerating pace of computing technology development shows no signs of abating. Computing power of 100 Tflop/s is likely to be reached by 2004 and Pflop/s (10^15 Flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits and computation rate limits, will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of the development of computer and networking technologies. The second part provides brief overviews of the three emerging computing paradigms: grid, ubiquitous and autonomic computing. The third part lists future computing alternatives and the characteristics of the future computing environment. The fourth part describes future aerospace workforce research, learning and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.

  14. Design ATE systems for complex assemblies

    NASA Astrophysics Data System (ADS)

    Napier, R. S.; Flammer, G. H.; Moser, S. A.

    1983-06-01

    The use of ATE systems in radio specification testing can reduce the test time by approximately 90 to 95 percent. What is more, the test station does not require a highly trained operator. Since the system controller has full power over all the measurements, human errors are not introduced into the readings. The controller is immune to any need to increase output by allowing marginal units to pass through the system. In addition, the software compensates for predictable, repeatable system errors, for example, cabling losses, which are an inherent part of the test setup. With no variation in test procedures from unit to unit, there is a constant repeatability factor. Preparing the software, however, usually entails considerable expense. It is pointed out that many of the problems associated with ATE system software can be avoided with the use of a software-intensive, or computer-intensive, system organization. Its goal is to minimize the user's need for software development, thereby saving time and money.

  15. Simulation of the modulation transfer function dependent on the partial Fourier fraction in dynamic contrast enhancement magnetic resonance imaging.

    PubMed

    Takatsu, Yasuo; Ueyama, Tsuyoshi; Miyati, Tosiaki; Yamamura, Kenichirou

    2016-12-01

    The image characteristics in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) depend on the partial Fourier fraction and the contrast medium concentration. These characteristics were assessed and the modulation transfer function (MTF) was calculated by computer simulation. A digital phantom was created from signal intensity data acquired at different contrast medium concentrations on a breast model. The frequency images [created by fast Fourier transform (FFT)] were divided into 512 parts and rearranged to form a new image. The inverse FFT of this image yielded the MTF. From the reference data, three linear models (low, medium, and high) and three exponential models (slow, medium, and rapid) of the signal intensity were created. Smaller partial Fourier fractions, and higher gradients in the linear models, corresponded to faster MTF decline. The MTF decreased more gradually in the exponential models than in the linear models. The MTF, which reflects the image characteristics in DCE-MRI, became more degraded as the partial Fourier fraction decreased.
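    A one-dimensional toy version of the partial Fourier effect is sketched below: the k-space of a point object is partially zero-filled, and an MTF-like curve is taken from the magnitude spectrum of the resulting point spread function. The 512-segment rearrangement and contrast-enhancement signal models of the study are not reproduced.

      # Sketch (1D toy): effect of the partial Fourier fraction on an MTF-like curve.
      # K-space of a point object is partially zero-filled, and the "MTF" is taken as the
      # magnitude spectrum of the resulting point spread function.
      import numpy as np

      N = 512
      obj = np.zeros(N)
      obj[N // 2] = 1.0                               # point object
      kspace = np.fft.fftshift(np.fft.fft(obj))       # full k-space

      def mtf_for_fraction(fraction):
          k = kspace.copy()
          k[int(round(fraction * N)):] = 0.0          # drop the late part of k-space
          psf = np.abs(np.fft.ifft(np.fft.ifftshift(k)))   # magnitude point spread function
          mtf = np.abs(np.fft.fftshift(np.fft.fft(psf)))
          return mtf / mtf.max()

      for frac in (1.0, 0.75, 0.6):
          mtf = mtf_for_fraction(frac)
          print(f"fraction {frac}: response at half Nyquist = {mtf[3 * N // 4]:.3f}")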

  16. Turbulence and Coherent Structure in the Atmospheric Boundary Layer near the Eyewall of Hurricane Hugo (1989)

    NASA Astrophysics Data System (ADS)

    Zhang, J. A.; Marks, F. D.; Montgomery, M. T.; Black, P. G.

    2008-12-01

    In this talk we present an analysis of observational data collected from NOAA's WP-3D research aircraft during the eyewall penetration of category five Hurricane Hugo (1989). The 1 Hz flight-level data near 450 m above the sea surface, comprising wind velocity, temperature, pressure and relative humidity, are used to estimate the turbulence intensity and fluxes. In the turbulent flux calculation, the universal shape spectra and co-spectra derived from the 40 Hz data collected during the Coupled Boundary Layer Air-Sea Transfer (CBLAST) Hurricane experiment are applied to correct the high-frequency part of the data collected in Hurricane Hugo. Since the stationarity assumption required for standard eddy correlations is not always satisfied, different methods for computing the turbulence parameters are summarized. In addition, a wavelet analysis is conducted to investigate the temporal and spatial scales of roll vortices, or coherent structures, that are believed to be important elements of the eye/eyewall mixing processes that support intense storms.
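    For reference, a minimal eddy-covariance calculation (detrend, form fluctuations, compute variances and the w'T' covariance) is sketched below on synthetic 1 Hz series; the CBLAST-based spectral correction for the unresolved high-frequency contribution is not applied.

      # Sketch: eddy-covariance estimates of turbulence intensity, TKE and kinematic heat
      # flux from detrended series (synthetic 1 Hz data; no spectral correction applied).
      import numpy as np
      from scipy.signal import detrend

      rng = np.random.default_rng(3)
      n = 600                                         # 10 minutes of 1 Hz data
      u = 55.0 + 3.0 * rng.standard_normal(n)         # along-wind speed, m/s
      w = 1.2 * rng.standard_normal(n)                # vertical velocity, m/s
      T = 300.0 + 0.5 * rng.standard_normal(n) + 0.1 * w   # temperature, K (correlated with w)

      u_p, w_p, T_p = detrend(u), detrend(w), detrend(T)   # fluctuations about a linear trend

      tke = 0.5 * (u_p.var() + w_p.var())             # two-component turbulent kinetic energy
      ti = u_p.std() / u.mean()                       # streamwise turbulence intensity
      wT = np.mean(w_p * T_p)                         # kinematic heat flux, K m/s
      print(f"TKE ~ {tke:.2f} m2/s2, turbulence intensity ~ {ti:.3f}, w'T' ~ {wT:.3f} K m/s")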

  17. [Management of acute respiratory distress syndrome in Midi-Pyrénées].

    PubMed

    Fuzier, R; Mercier-Fuzier, V; Chaminade, B; Georges, B; Decun, J F; Cougot, P; Ducassé, J L; Virenque, C

    2000-10-07

    To assess the management of acute respiratory distress syndrome (ARDS) in Midi-Pyrénées, France. A prospective study using a questionnaire divided into 10 parts (definition, etiology, radiography, computed tomography, management) was conducted in 26 intensive care units in Midi-Pyrénées. Management of ARDS in Midi-Pyrénées was compared with management elsewhere as described in the literature. The overall participation rate was 73%. Disparities were found concerning the definition. Four etiologies accounted for 75% of all ARDS cases. Chest x-rays were used for positive diagnosis and thoracic scans for complications. Ventilatory and hemodynamic optimization were the first-line therapies used. Twenty-nine percent and 41% of the intensive care units used nitric oxide and prone positioning, respectively. There are differences between ARDS management in Midi-Pyrénées and that described in the current literature. Epidemiologic studies such as this one are necessary before guidelines for the management of ARDS are published.

  18. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    NASA Astrophysics Data System (ADS)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts into the mainstream of computing.

  19. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
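    As a worked example of the resampling-with-replacement idea, the sketch below computes a nonparametric bootstrap percentile confidence interval for the median of a skewed sample.

      # Sketch: nonparametric bootstrap percentile confidence interval for the median,
      # illustrating resampling with replacement.
      import numpy as np

      rng = np.random.default_rng(7)
      sample = rng.lognormal(mean=1.0, sigma=0.6, size=80)    # skewed "measurements"

      B = 5000
      boot_medians = np.array([np.median(rng.choice(sample, size=sample.size, replace=True))
                               for _ in range(B)])
      lo, hi = np.percentile(boot_medians, [2.5, 97.5])
      print(f"median = {np.median(sample):.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")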

  20. User's manual for a computer program for simulating intensively managed allowable cut.

    Treesearch

    Robert W. Sassaman; Ed Holt; Karl Bergsvik

    1972-01-01

    Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....

  1. PNNL's Data Intensive Computing research battles Homeland Security threats

    ScienceCinema

    David Thurman; Joe Kielman; Katherine Wolf; David Atkinson

    2018-05-11

    The Pacific Northwest National Laboratory's (PNNL's) approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architecture, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  2. PNNL pushing scientific discovery through data intensive computing breakthroughs

    ScienceCinema

    Deborah Gracio; David Koppenaal; Ruby Leung

    2018-05-18

    The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  3. Self-Administered Cued Naming Therapy: A Single-Participant Investigation of a Computer-Based Therapy Program Replicated in Four Cases

    ERIC Educational Resources Information Center

    Ramsberger, Gail; Marie, Basem

    2007-01-01

    Purpose: This study examined the benefits of a self-administered, clinician-guided, computer-based, cued naming therapy. Results of intense and nonintense treatment schedules were compared. Method: A single-participant design with multiple baselines across behaviors and varied treatment intensity for 2 trained lists was replicated over 4…

  4. Accurate optimization of amino acid form factors for computing small-angle X-ray scattering intensity of atomistic protein structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Dudu; Yang, Sichun; Lu, Lanyuan

    2016-06-20

    Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell, represented by explicit CG water molecules, and a correction for the protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.
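    A generic coarse-grained SAXS evaluation can be sketched with the Debye formula, I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij)/(q r_ij), over bead coordinates; the sketch below uses one bead per residue with constant toy form factors, not the EDM-optimized form factors or the hydration-shell treatment described here.

      # Sketch: coarse-grained SAXS intensity via the Debye formula,
      # I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij)/(q r_ij), with one bead per residue and
      # constant toy form factors (not the EDM-optimized form factors).
      import numpy as np

      rng = np.random.default_rng(1)
      beads = rng.normal(scale=15.0, size=(120, 3))        # toy CG bead coordinates, Angstrom
      f = np.full(len(beads), 10.0)                        # toy, q-independent form factors

      diff = beads[:, None, :] - beads[None, :, :]
      r = np.sqrt((diff ** 2).sum(axis=-1))                # pairwise distances

      q = np.linspace(1e-3, 0.5, 100)                      # 1/Angstrom
      qr = q[:, None, None] * r[None, :, :]
      sinc = np.where(qr > 1e-8, np.sin(qr) / np.where(qr > 1e-8, qr, 1.0), 1.0)
      I = np.einsum('i,j,qij->q', f, f, sinc)              # Debye double sum for each q

      print("I(0) ~", round(float(I[0]), 1), " I(q~0.3)/I(0) ~", round(float(I[60] / I[0]), 4))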

  5. Grid-Enabled High Energy Physics Research using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Mahmood, Akhtar

    2005-04-01

    At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf Cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the Cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton anti-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates the real physical collision event inside a particle detector. The Grid is the new IT infrastructure for 21st century science: a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  6. Impedance computations and beam-based measurements: A problem of discrepancy

    NASA Astrophysics Data System (ADS)

    Smaluk, Victor

    2018-04-01

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  7. Unsteady thermal blooming of intense laser beams

    NASA Astrophysics Data System (ADS)

    Ulrich, J. T.; Ulrich, P. B.

    1980-01-01

    A four dimensional (three space plus time) computer program has been written to compute the nonlinear heating of a gas by an intense laser beam. Unsteady, transient cases are capable of solution and no assumption of a steady state need be made. The transient results are shown to asymptotically approach the steady-state results calculated by the standard three dimensional thermal blooming computer codes. The report discusses the physics of the laser-absorber interaction, the numerical approximation used, and comparisons with experimental data. A flowchart is supplied in the appendix to the report.

  8. MSFC crack growth analysis computer program, version 2 (users manual)

    NASA Technical Reports Server (NTRS)

    Creager, M.

    1976-01-01

    An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.
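
    The quantities named above can be illustrated with a textbook example rather than the MSFC program itself: a center-crack stress intensity factor K = Y * sigma * sqrt(pi * a) fed into a Paris-law cycle count. All constants below are illustrative, not values from the report.

    ```python
    import numpy as np

    def stress_intensity(sigma, a, Y=1.0):
        """K = Y * sigma * sqrt(pi * a); sigma in MPa, a in metres -> K in MPa*sqrt(m)."""
        return Y * sigma * np.sqrt(np.pi * a)

    def cycles_to_grow(a0, af, delta_sigma, C=1e-11, m=3.0, Y=1.0, da=1e-5):
        """Integrate the Paris law da/dN = C * (delta_K)**m from crack length a0 to af."""
        a_grid = np.arange(a0, af, da)
        delta_K = stress_intensity(delta_sigma, a_grid, Y)   # stress-intensity range at each length
        dN = da / (C * delta_K**m)                           # cycles spent in each length increment
        return dN.sum()

    # illustrative numbers only: a 2 mm crack growing to 10 mm under a 100 MPa stress range
    print(f"{cycles_to_grow(2e-3, 10e-3, 100.0):.3e} cycles")
    ```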

  9. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2017-08-01

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  10. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  11. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
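
    As a concrete illustration of this recipe in a Poisson counting setting: fix the detection threshold from the allowed Type I error over the background, then raise the source intensity until the detection probability reaches the required power. The background level, alpha and beta below are arbitrary choices, not values from the paper.

    ```python
    from scipy.stats import poisson

    def upper_limit(background, alpha=0.05, beta=0.5, step=0.01):
        """Smallest source intensity detectable at the given Type I/II error levels.

        background : expected background counts
        alpha      : allowed Type I (false positive) probability
        beta       : allowed Type II (missed detection) probability
        """
        # detection threshold: smallest count with P(N >= threshold | background) <= alpha
        threshold = poisson.ppf(1.0 - alpha, background) + 1
        # raise the source intensity until the detection probability reaches 1 - beta
        s = 0.0
        while poisson.sf(threshold - 1, background + s) < 1.0 - beta:
            s += step
        return threshold, s

    thr, s_min = upper_limit(background=3.0)
    print(f"threshold = {thr:.0f} counts, upper limit = {s_min:.2f} counts")
    ```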

  12. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. Through the experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications: spectral unmixing and classification for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction.

  13. An updated climatology of explosive cyclones using alternative measures of cyclone intensity

    NASA Astrophysics Data System (ADS)

    Hanley, J.; Caballero, R.

    2009-04-01

    Using a novel cyclone tracking and identification method, we compute a climatology of explosively intensifying cyclones or 'bombs' using the ERA-40 and ERA-Interim datasets. Traditionally, 'bombs' have been identified using a central pressure deepening rate criterion (Sanders and Gyakum, 1980). We investigate alternative methods of capturing such extreme cyclones. These methods include using the maximum wind contained within the cyclone, and using a potential vorticity column measure within such systems, as a measure of intensity. Using the different measures of cyclone intensity, we construct and intercompare maps of peak cyclone intensity. We also compute peak intensity probability distributions, and assess the evidence for the bi-modal distribution found by Roebber (1984). Finally, we address the question of the relationship between storm intensification rate and storm destructiveness: are 'bombs' the most destructive storms?
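
    For reference, the Sanders and Gyakum (1980) criterion cited above normalizes the 24 h central-pressure fall to 60 degrees latitude; a fall of one bergeron (24 hPa per 24 h at 60 degrees) or more marks a 'bomb'. A minimal check of that criterion is sketched below.

    ```python
    import numpy as np

    def deepening_rate_bergerons(p_start_hpa, p_end_hpa, hours, lat_deg):
        """Normalized deepening rate in bergerons (Sanders and Gyakum, 1980)."""
        rate_24h = (p_start_hpa - p_end_hpa) / hours * 24.0          # hPa per 24 h
        return rate_24h * np.sin(np.radians(60.0)) / np.sin(np.radians(abs(lat_deg))) / 24.0

    def is_bomb(p_start_hpa, p_end_hpa, hours, lat_deg):
        return deepening_rate_bergerons(p_start_hpa, p_end_hpa, hours, lat_deg) >= 1.0

    # a cyclone deepening from 990 to 960 hPa in 24 h at 45 N: about 1.5 bergerons
    print(is_bomb(990.0, 960.0, 24.0, 45.0))
    ```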

  14. Investigating power capping toward energy-efficient scientific applications

    DOE PAGES

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil; ...

    2018-03-22

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  15. Investigating power capping toward energy-efficient scientific applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haidar, Azzam; Jagode, Heike; Vaccaro, Phil

    The emergence of power efficiency as a primary constraint in processor and system design poses new challenges concerning power and energy awareness for numerical libraries and scientific applications. Power consumption also plays a major role in the design of data centers, which may house petascale or exascale-level computing systems. At these extreme scales, understanding and improving the energy efficiency of numerical libraries and their related applications becomes a crucial part of the successful implementation and operation of the computing system. In this paper, we study and investigate the practice of controlling a compute system's power usage, and we explore how different power caps affect the performance of numerical algorithms with different computational intensities. Further, we determine the impact, in terms of performance and energy usage, that these caps have on a system running scientific applications. This analysis will enable us to characterize the types of algorithms that benefit most from these power management schemes. Our experiments are performed using a set of representative kernels and several popular scientific benchmarks. Lastly, we quantify a number of power and performance measurements and draw observations and conclusions that can be viewed as a roadmap to achieving energy efficiency in the design and execution of scientific algorithms.

  16. Computational Aspects of Data Assimilation and the ESMF

    NASA Technical Reports Server (NTRS)

    daSilva, A.

    2003-01-01

    Developing advanced data assimilation applications is a daunting scientific challenge. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computer (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical and, to some extent, the cultural aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.

  17. Accelerating next generation sequencing data analysis with system level optimizations.

    PubMed

    Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid

    2017-08-22

    Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer, and CPU frequency scaling are some of the hardware features of modern computing architectures. To get the best execution time and to utilize these hardware features, it is necessary to tune system-level parameters before running the application. We studied GATK-HaplotypeCaller, a part of common NGS workflows that consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked, and the execution time of HaplotypeCaller was optimized through various system-level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing, (ii) architecture-specific tuning in the PairHMM library for vectorization, (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer, and (iv) switching the default 'on-demand' CPU frequency governor to 'performance' mode to accelerate the Java multi-threads. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7, respectively.

  18. What does the amygdala contribute to social cognition?

    PubMed Central

    Adolphs, Ralph

    2010-01-01

    The amygdala has received intense recent attention from neuroscientists investigating its function at the molecular, cellular, systems, cognitive, and clinical level. It clearly contributes to processing emotionally and socially relevant information, yet a unifying description and computational account have been lacking. The difficulty of tying together the various studies stems in part from the sheer diversity of approaches and species studied, in part from the amygdala’s inherent heterogeneity in terms of its component nuclei, and in part because different investigators have simply been interested in different topics. Yet, a synthesis now seems close at hand in combining new results from social neuroscience with data from neuroeconomics and reward learning. The amygdala processes a psychological stimulus dimension related to saliency or relevance; mechanisms have been identified to link it to processing unpredictability; and insights from reward learning have situated it within a network of structures that include the prefrontal cortex and the ventral striatum in processing the current value of stimuli. These aspects help to clarify the amygdala’s contributions to recognizing emotion from faces, to social behavior toward conspecifics, and to reward learning and instrumental behavior. PMID:20392275

  19. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.

  20. Homo Heuristicus: Less-is-More Effects in Adaptive Cognition

    PubMed Central

    Brighton, Henry; Gigerenzer, Gerd

    2012-01-01

    Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We discuss some of the major progress made so far, focusing on the discovery of less-is-more effects and the study of the ecological rationality of heuristics which examines in which environments a given strategy succeeds or fails, and why. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies. PMID:23613644

  1. NULL Convention Floating Point Multiplier

    PubMed Central

    Ramachandran, Seshasayanan

    2015-01-01

    Floating point multiplication is a critical part in high dynamic range and computationally intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using the asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first-ever NULL convention logic multiplier designed to perform floating point multiplication. The proposed multiplier offers a substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069

  2. NULL convention floating point multiplier.

    PubMed

    Albert, Anitha Juliette; Ramachandran, Seshasayanan

    2015-01-01

    Floating point multiplication is a critical part in high dynamic range and computationally intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using the asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first-ever NULL convention logic multiplier designed to perform floating point multiplication. The proposed multiplier offers a substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
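
    The records above describe an asynchronous hardware design, but the arithmetic it implements can be sketched in software. Below is a bit-level IEEE 754 single-precision multiply that, like the design above, truncates rather than rounds; subnormals, infinities and NaNs are deliberately ignored to keep the sketch short, so it illustrates the data path only and is not a model of the NCL circuit.

    ```python
    import struct

    def f32_bits(x):
        return struct.unpack(">I", struct.pack(">f", x))[0]

    def bits_f32(b):
        return struct.unpack(">f", struct.pack(">I", b & 0xFFFFFFFF))[0]

    def fp32_multiply_truncated(a, b):
        """Bit-level single-precision multiply with truncation (no rounding).
        Normal numbers only: subnormals, infinities and NaNs are not handled."""
        xa, xb = f32_bits(a), f32_bits(b)
        sign = ((xa >> 31) ^ (xb >> 31)) & 1
        exp = ((xa >> 23) & 0xFF) + ((xb >> 23) & 0xFF) - 127    # re-biased exponent
        ma = (xa & 0x7FFFFF) | 0x800000                          # restore implicit leading 1
        mb = (xb & 0x7FFFFF) | 0x800000
        prod = (ma * mb) >> 23                                   # keep the top bits of the 48-bit product
        if prod & 0x1000000:                                     # product in [2, 4): renormalize
            prod >>= 1
            exp += 1
        frac = prod & 0x7FFFFF                                   # drop the implicit 1, truncate the rest
        return bits_f32((sign << 31) | ((exp & 0xFF) << 23) | frac)

    print(fp32_multiply_truncated(3.25, -2.5))   # -8.125 (exact, so truncation loses nothing here)
    ```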

  3. Effects of a Pedagogical Agent's Emotional Expressiveness on Learner Perceptions

    NASA Technical Reports Server (NTRS)

    Romero, Enilda J.; Watson, Ginger S.

    2012-01-01

    The use of animated pedagogical agents or avatars in instruction has lagged behind their use in entertainment. This is due in part to the cost and complexity of development and implementation of agents in educational settings, but also results from a lack of research to understand how emotions from animated agents influence instructional effectiveness. The phenomenological study presented here assesses the perceptions of eight learners interacting with low and high intensity emotionally expressive pedagogical agents in a computer-mediated environment. Research methods include maximum variation and snowball sampling with random assignment to treatment. The resulting themes incorporate perceptions of importance, agent humanness, enjoyment, implementation barriers, and suggested improvements. Design recommendations and implications for future research are presented.

  4. Social, Organizational, and Contextual Characteristics of Clinical Decision Support Systems for Intensive Insulin Therapy: A Literature Review and Case Study

    PubMed Central

    Campion, Thomas R.; Waitman, Lemuel R.; May, Addison K.; Ozdas, Asli; Lorenzi, Nancy M.; Gadd, Cynthia S.

    2009-01-01

    Introduction: Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. Results: This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Discussion: Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. Conclusion: This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. PMID:19815452

  5. Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit

    PubMed Central

    Lawrie, David S.

    2017-01-01

    Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processor Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
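
    The single-locus forward algorithm that GO Fish parallelizes is, at its core, repeated binomial resampling of allele frequencies. A serial CPU sketch of that loop over many independent sites is shown below; the population size, selection coefficient and site count are illustrative, and none of the GPU machinery is reproduced.

    ```python
    import numpy as np

    def wright_fisher(n_sites=1000, pop_size=10000, generations=500,
                      s=0.001, init_freq=0.05, seed=0):
        """Forward Wright-Fisher simulation of many independent biallelic sites.

        Each generation, the allele frequency at every site is perturbed by a
        simple additive selection coefficient s and then resampled binomially
        (genetic drift) from 2N gametes."""
        rng = np.random.default_rng(seed)
        freq = np.full(n_sites, init_freq)
        for _ in range(generations):
            p_sel = freq * (1.0 + s) / (freq * (1.0 + s) + (1.0 - freq))
            freq = rng.binomial(2 * pop_size, p_sel) / (2.0 * pop_size)
        return freq

    final = wright_fisher()
    print("fraction fixed:", np.mean(final == 1.0), " fraction lost:", np.mean(final == 0.0))
    ```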

  6. Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; García, Sebastián Gimeno

    2013-05-01

    Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
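
    The last two steps of that workflow (optical depth and transmission along the line of sight) reduce to a Beer-Lambert sum over layers. The sketch below shows only that bookkeeping with made-up absorption coefficients; it is not Py4CAtS code.

    ```python
    import numpy as np

    def transmission(abs_coef, path_length):
        """Monochromatic transmission exp(-tau) along a layered line of sight.

        abs_coef    : (n_layers, n_spectral_points) absorption coefficient per layer [1/km]
        path_length : (n_layers,) geometric path through each layer [km]
        """
        tau = np.sum(abs_coef * path_length[:, None], axis=0)   # total optical depth per spectral point
        return np.exp(-tau)

    # toy atmosphere: three layers, five spectral points, made-up absorption coefficients
    abs_coef = np.array([[0.10, 0.50, 2.00, 0.50, 0.10],
                         [0.05, 0.20, 1.00, 0.20, 0.05],
                         [0.01, 0.05, 0.30, 0.05, 0.01]])
    path = np.array([1.0, 2.0, 5.0])
    print(transmission(abs_coef, path))
    ```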

  7. 49 CFR Appendix A to Part 227 - Noise Exposure Computation

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 49 (Transportation), Appendix A to Part 227—Noise Exposure Computation (Occupational Noise Exposure, Department of Transportation). This appendix is mandatory. I. Computation of Employee Noise Exposure ...

  8. 49 CFR Appendix A to Part 227 - Noise Exposure Computation

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 49 (Transportation), Appendix A to Part 227—Noise Exposure Computation (Occupational Noise Exposure, Department of Transportation). This appendix is mandatory. I. Computation of Employee Noise Exposure ...

  9. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity

    PubMed Central

    2013-01-01

    Background Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. Methods A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Results Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). Conclusions The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users. PMID:23915209

  10. A computational model of pupil dilation

    NASA Astrophysics Data System (ADS)

    Johansson, Birger; Balkenius, Christian

    2018-01-01

    We present a system-level connectionist model of pupil control that includes brain regions believed to influence the size of the pupil. It includes parts of the sympathetic and parasympathetic nervous system together with the hypothalamus, amygdala, locus coeruleus, and cerebellum. Computer simulations show that the model is able to reproduce a number of important aspects of how the pupil reacts to different stimuli: (1) It reproduces the characteristic shape and latency of the light-reflex. (2) It elicits pupil dilation as a response to novel stimuli. (3) It produces pupil dilation when shown emotionally charged stimuli, and can be trained to respond to initially neutral stimuli through classical conditioning. (4) The model can learn to expect light changes for particular stimuli, such as images of the sun, and produces a "light-response" to such stimuli even when there is no change in light intensity. (5) It also reproduces the fear-inhibited light reflex effect where reactions to light increase is weaker after presentation of a conditioned stimulus that predicts punishment.

  11. Development of a General Form CO2 and Brine Flux Input Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansoor, K.; Sun, Y.; Carroll, S.

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.

  12. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.

  13. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi

    2015-09-02

    This paper presents a nonlinear analytical model of a novel double sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets (PM), stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry, which makes it a good alternative for evaluating prospective designs of TFM as compared to finite element solvers which are numerically intensive and require more computation time. A single phase, 1 kW, 400 rpm machine is analytically modeled and its resulting flux distribution, no-load EMF and torque verified with Finite Element Analysis (FEA). The results are found to be in agreement with less than 5% error, while reducing the computation time by 25 times.
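
    The magnetic-equivalent-circuit idea in both records amounts to solving a reluctance network: each flux tube contributes a reluctance R = l / (mu * A), permanent magnets act as MMF sources, and the flux follows from the network solution just as current does in a resistive circuit. The sketch below solves a deliberately tiny series loop with illustrative dimensions; the actual model is a much larger series-parallel network with saturation handled iteratively.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi

    def reluctance(length_m, area_m2, mu_r=1.0):
        """Reluctance of one flux tube: R = l / (mu0 * mu_r * A)."""
        return length_m / (MU0 * mu_r * area_m2)

    # illustrative series loop: permanent magnet -> air gap -> iron return path
    Br, mu_r_pm = 1.2, 1.05                     # remanence [T] and PM relative permeability
    l_pm, A_pm = 5e-3, 4e-4                     # magnet length [m] and cross-section [m^2]
    l_gap, A_gap = 1e-3, 4e-4                   # air gap
    l_fe, A_fe, mu_r_fe = 0.10, 4e-4, 2000.0    # unsaturated iron path

    mmf = Br * l_pm / (MU0 * mu_r_pm)           # magnet modelled as an MMF source plus internal reluctance
    R_total = (reluctance(l_pm, A_pm, mu_r_pm)
               + reluctance(l_gap, A_gap)
               + reluctance(l_fe, A_fe, mu_r_fe))

    flux = mmf / R_total                        # flux [Wb] around the series loop
    print(f"air-gap flux density ~ {flux / A_gap:.2f} T")
    ```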

  14. Adapting bioinformatics curricula for big data.

    PubMed

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.

  15. Adapting bioinformatics curricula for big data

    PubMed Central

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  16. An automated optical wedge calibrator for Dobson ozone spectrophotometers

    NASA Technical Reports Server (NTRS)

    Evans, R. D.; Komhyr, W. D.; Grass, R. D.

    1994-01-01

    The Dobson ozone spectrophotometer measures the difference of intensity between selected wavelengths in the ultraviolet. The method uses an optical attenuator (the 'Wedge') in this measurement. The knowledge of the relationship of the wedge position to the attenuation is critical to the correct calculation of ozone from the measurement. The procedure to determine this relationship is time-consuming, and requires a highly skilled person to perform it correctly. The relationship has been found to change with time. For reliable ozone values, the procedure should be done on a Dobson instrument at regular intervals. Due to the skill and time necessary to perform this procedure, many instruments have gone as long as 15 years between procedures. This article describes an apparatus that performs the procedure under computer control, and is adaptable to the majority of existing Dobson instruments. Part of the apparatus is usable for normal operation of the Dobson instrument, and would allow computer collection of the data and real-time ozone measurements.

  17. The visual light field in real scenes

    PubMed Central

    Xia, Ling; Pont, Sylvia C.; Heynderickx, Ingrid

    2014-01-01

    Human observers' ability to infer the light field in empty space is known as the “visual light field.” While most relevant studies were performed using images on computer screens, we investigate the visual light field in a real scene by using a novel experimental setup. A “probe” and a scene were mixed optically using a semitransparent mirror. Twenty participants were asked to judge whether the probe fitted the scene with regard to the illumination intensity, direction, and diffuseness. Both smooth and rough probes were used to test whether observers use the additional cues for the illumination direction and diffuseness provided by the 3D texture over the rough probe. The results confirmed that observers are sensitive to the intensity, direction, and diffuseness of the illumination also in real scenes. For some lighting combinations on scene and probe, the awareness of a mismatch between the probe and scene was found to depend on which lighting condition was on the scene and which on the probe, which we called the “swap effect.” For these cases, the observers judged the fit to be better if the average luminance of the visible parts of the probe was closer to the average luminance of the visible parts of the scene objects. The use of a rough instead of smooth probe was found to significantly improve observers' abilities to detect mismatches in lighting diffuseness and directions. PMID:25926970

  18. Ray tracing on the MPP

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.

  19. Deformable registration of CT and cone-beam CT with local intensity matching.

    PubMed

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-07

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.

  20. Deformable registration of CT and cone-beam CT with local intensity matching

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-01

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.
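
    A minimal sketch of the local intensity-matching step described in both records: for each slice, the CBCT histogram is mapped onto the corresponding CT histogram through their cumulative distributions. The alternating deformable-registration step is omitted, and the toy arrays below merely stand in for resampled CT/CBCT volumes.

    ```python
    import numpy as np

    def match_histogram(source, reference):
        """Map source intensities so that their CDF matches that of reference."""
        s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
        r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_counts) / source.size
        r_cdf = np.cumsum(r_counts) / reference.size
        mapped = np.interp(s_cdf, r_cdf, r_vals)          # source quantile -> reference intensity
        return np.interp(source.ravel(), s_vals, mapped).reshape(source.shape)

    def correct_cbct_slicewise(cbct, ct):
        """Slice-by-slice intensity correction of a CBCT volume against a CT volume.
        Assumes both volumes are already resampled onto the same (z, y, x) grid."""
        out = np.empty(cbct.shape, dtype=float)
        for z in range(cbct.shape[0]):
            out[z] = match_histogram(cbct[z], ct[z])
        return out

    # toy volumes standing in for co-registered CT / CBCT slices
    rng = np.random.default_rng(1)
    ct = rng.normal(0.0, 100.0, (4, 64, 64))
    cbct = 0.7 * ct + rng.normal(30.0, 40.0, ct.shape)    # biased, noisier copy of the CT
    corrected = correct_cbct_slicewise(cbct, ct)
    print(abs(corrected.mean() - ct.mean()) < abs(cbct.mean() - ct.mean()))
    ```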

  1. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    ERIC Educational Resources Information Center

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…
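
    As a concrete illustration of the single sample method named above, the test of joint significance declares the mediated effect a*b significant when both the X-to-M path (a) and the M-to-Y path controlling for X (b) are individually significant. A small simulated example is sketched below; statsmodels is assumed to be available, and the effect sizes and sample size are arbitrary.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 200
    x = rng.normal(size=n)
    m = 0.4 * x + rng.normal(size=n)          # a-path: X -> M
    y = 0.3 * m + rng.normal(size=n)          # b-path: M -> Y (no direct effect simulated)

    fit_a = sm.OLS(m, sm.add_constant(x)).fit()                             # regress M on X
    fit_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()       # regress Y on M and X

    p_a = fit_a.pvalues[1]        # p-value of the coefficient on X (a-path)
    p_b = fit_b.pvalues[1]        # p-value of the coefficient on M (b-path)
    joint_significant = (p_a < 0.05) and (p_b < 0.05)
    print(f"p_a = {p_a:.4f}, p_b = {p_b:.4f}, mediated effect significant: {joint_significant}")
    ```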

  2. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
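
    A minimal sketch of the general idea (not the authors' incremental estimator): fit a parametric decay to the per-section mean intensities with a robust loss so that outlying sections are down-weighted, then rescale each section by the fitted attenuation. The exponential model and all constants below are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_decay(section_means):
        """Robustly fit I(z) = I0 * exp(-k z) to the per-section mean intensities."""
        z = np.arange(section_means.size, dtype=float)

        def residuals(p):
            i0, k = p
            return i0 * np.exp(-k * z) - section_means

        # the soft_l1 loss down-weights sections that deviate from the decay model
        fit = least_squares(residuals, x0=[section_means[0], 0.01], loss="soft_l1")
        return fit.x                                     # fitted (I0, k)

    # synthetic stack: exponential decay with depth plus two bright outlier sections
    rng = np.random.default_rng(0)
    depth = np.arange(60)
    means = 200.0 * np.exp(-0.03 * depth) + rng.normal(0.0, 3.0, depth.size)
    means[[10, 30]] += 80.0                              # outliers the robust fit should ignore

    i0, k = fit_decay(means)
    corrected = means * np.exp(k * depth)                # undo the fitted attenuation
    print(f"fitted I0 = {i0:.1f}, k = {k:.3f}")
    ```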

  3. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.

  4. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
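
    A compact sketch of the Soft-Impute iteration described above: missing entries are repeatedly filled in from the current estimate, and the completed matrix is replaced by its soft-thresholded SVD. A dense SVD is used here for brevity, whereas the paper's key point is that this step can exploit the sparse-plus-low-rank structure and a low-rank SVD for very large matrices.

    ```python
    import numpy as np

    def soft_impute(X, mask, lam, n_iters=100, tol=1e-4):
        """Soft-Impute: low-rank matrix completion by soft-thresholded SVD.

        X    : matrix with observed values (entries where mask is False are ignored)
        mask : boolean array, True where X is observed
        lam  : soft-threshold level applied to the singular values
        """
        Z = np.where(mask, X, 0.0)                        # start by filling missing entries with zeros
        for _ in range(n_iters):
            filled = np.where(mask, X, Z)                 # keep observed data, impute the rest
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt   # soft-threshold the singular values
            if np.linalg.norm(Z_new - Z) <= tol * (np.linalg.norm(Z) + 1e-12):
                return Z_new
            Z = Z_new
        return Z

    # toy example: a rank-2 matrix with roughly 60% of the entries observed
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 40))
    mask = rng.random(A.shape) < 0.6
    A_hat = soft_impute(A, mask, lam=1.0)
    print("RMSE on missing entries:", np.sqrt(np.mean((A_hat - A)[~mask] ** 2)))
    ```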

  5. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
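
    As a minimal illustration of the surrogate idea in its Gaussian-process flavour, a handful of expensive simulator runs train a GP that is then queried cheaply inside a Monte Carlo loop. scikit-learn is assumed to be available, and the "simulator" below is only a placeholder function, not the challenge-problem model.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulator(x):
        """Stand-in for a long-running physics simulation (placeholder only)."""
        return np.sin(3.0 * x) + 0.5 * x**2

    # a small design of "expensive" runs used to train the surrogate
    X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)
    y_train = expensive_simulator(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X_train, y_train)

    # cheap Monte Carlo on the surrogate instead of the simulator
    rng = np.random.default_rng(0)
    X_mc = rng.uniform(0.0, 2.0, size=(100_000, 1))
    y_mc, y_std = gp.predict(X_mc, return_std=True)
    print(f"surrogate mean response: {y_mc.mean():.3f}, mean GP uncertainty: {y_std.mean():.3f}")
    ```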

  6. FUNCTION GENERATOR FOR ANALOGUE COMPUTERS

    DOEpatents

    Skramstad, H.K.; Wright, J.H.; Taback, L.

    1961-12-12

    An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)

  7. RAINLINK: Retrieval algorithm for rainfall monitoring employing microwave links from a cellular communication network

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Overeem, A.; Leijnse, H.; Rios Gaona, M. F.

    2017-12-01

    The basic principle of rainfall estimation using microwave links is as follows. Rainfall attenuates the electromagnetic signals transmitted from one telephone tower to another. By measuring the received power at one end of a microwave link as a function of time, the path-integrated attenuation due to rainfall can be calculated, which can be converted to average rainfall intensities over the length of a link. Microwave links from cellular communication networks have been proposed as a promising new rainfall measurement technique for about a decade. They are particularly interesting for those countries where few surface rainfall observations are available. Yet to date no operational (real-time) link-based rainfall products are available. To advance the process towards operational application and upscaling of this technique, there is a need for freely available, user-friendly computer code for microwave link data processing and rainfall mapping. Such software is now available as the R package "RAINLINK" on GitHub (https://github.com/overeem11/RAINLINK). It contains a working example to compute link-based 15-min rainfall maps for the entire surface area of The Netherlands for 40 hours from real microwave link data. This is the first working example that uses actual data from an extensive network of commercial microwave links, and it will allow users to test their own algorithms and compare their results with ours. The package consists of modular functions, which facilitates running only part of the algorithm. The main processing steps are: 1) Preprocessing of link data (initial quality and consistency checks); 2) Wet-dry classification using link data; 3) Reference signal determination; 4) Removal of outliers; 5) Correction of received signal powers; 6) Computation of mean path-averaged rainfall intensities; 7) Interpolation of rainfall intensities; 8) Rainfall map visualisation. Some applications of RAINLINK will be shown based on microwave link data from a temperate climate (the Netherlands), and from a subtropical climate (Brazil). We hope that RAINLINK will promote the application of rainfall monitoring using microwave links in poorly gauged regions around the world. We invite researchers to contribute to RAINLINK to make the code more generally applicable to data from different networks and climates.
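
    Step 6 of the processing chain above typically rests on the power law k = a * R**b between specific attenuation k [dB/km] and rain rate R [mm/h]. The sketch below inverts that relation for a single link; the coefficients and the wet-antenna offset are illustrative (in practice they depend on frequency, polarization and antenna wetting), and it is not RAINLINK code (RAINLINK itself is an R package).

    ```python
    def rain_rate_from_attenuation(path_attenuation_db, link_length_km,
                                   a=0.33, b=1.05, wet_antenna_db=1.0):
        """Path-averaged rain rate R [mm/h] from a link's rain-induced attenuation.

        Uses the power law k = a * R**b between specific attenuation k [dB/km] and
        rain rate; a, b and the wet-antenna offset are illustrative placeholders."""
        k = max(path_attenuation_db - wet_antenna_db, 0.0) / link_length_km
        return (k / a) ** (1.0 / b)

    # a 10 dB attenuation measured over a 5 km link
    print(f"{rain_rate_from_attenuation(10.0, 5.0):.1f} mm/h")
    ```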

  8. Design of a fault tolerant airborne digital computer. Volume 2: Computational requirements and technology

    NASA Technical Reports Server (NTRS)

    Ratner, R. S.; Shapiro, E. B.; Zeidler, H. M.; Wahlstrom, S. E.; Clark, C. B.; Goldberg, J.

    1973-01-01

    This final report summarizes the work on the design of a fault tolerant digital computer for aircraft. Volume 2 is composed of two parts. Part 1 is concerned with the computational requirements associated with an advanced commercial aircraft. Part 2 reviews the technology that will be available for the implementation of the computer in the 1975-1985 period. With regard to the computational task, 26 computations have been categorized according to computational load, memory requirements, criticality, permitted down-time, and the need to save data in order to effect a roll-back. The technology part stresses the impact of large scale integration (LSI) on the realization of logic and memory. Module interconnection possibilities were also considered so as to minimize fault propagation.

  9. Effectiveness of speech language therapy either alone or with add-on computer-based language therapy software (Malayalam version) for early post stroke aphasia: A feasibility study.

    PubMed

    Kesav, Praveen; Vrinda, S L; Sukumaran, Sajith; Sarma, P S; Sylaja, P N

    2017-09-15

    This study aimed to assess the feasibility of professional based conventional speech language therapy (SLT) either alone (Group A/less intensive) or assisted by novel computer based local language software (Group B/more intensive) for rehabilitation in early post stroke aphasia. Comprehensive Stroke Care Center of a tertiary health care institute situated in South India, with the study design being a prospective open randomised controlled trial with blinded endpoint evaluation. This study recruited 24 right handed first ever acute ischemic stroke patients above 15 years of age affecting middle cerebral artery territory within 90 days of stroke onset with baseline Western Aphasia Battery (WAB) Aphasia Quotient (AQ) score of <93.8 between September 2013 and January 2016. The recruited subjects were block randomised into either Group A/less intensive or Group B/more intensive therapy arms, in order to receive 12 therapy sessions of conventional professional based SLT of 1 h each in both groups, with an additional 12 h of computer based language therapy in Group B over 4 weeks on a thrice weekly basis, with a follow-up WAB performed at four and twelve weeks after baseline assessment. The trial was registered with the Clinical Trials Registry India [2016/08/0120121]. All the statistical analysis was carried out with IBM SPSS Statistics for Windows version 21. 20 subjects [14 (70%) Males; mean age: 52.8 years ± SD 12.04] completed the study (9 in the less intensive and 11 in the more intensive arm). The mean four-week follow-up AQ showed a significant improvement from the baseline in the total group (p value: 0.01). The rate of rise of AQ from the baseline to the four-week follow-up (ΔAQ%) showed a significantly greater value for the less intensive treatment group as against the more intensive treatment group [155% (SD: 150; 95% CI: 34-275) versus 52% (SD: 42%; 95% CI: 24-80) respectively; p value: 0.053]. Even though the more intensive treatment arm incorporating combined professional based SLT and computer software based training fared poorer than the less intensive therapy group, this study nevertheless reinforces the feasibility of SLT in augmenting recovery of early post stroke aphasia. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Tropical Cyclone Intensity in Global Models

    NASA Astrophysics Data System (ADS)

    Davis, C. A.; Wang, W.; Ahijevych, D.

    2017-12-01

    In recent years, global prediction and climate models have begun to depict intense tropical cyclones, even up to Category 5 on the Saffir-Simpson scale. In light of the limitation of horizontal resolution in such models, we examine how well these models treat tropical cyclone intensity, measured from several different perspectives. The models evaluated include the operational Global Forecast System, with a grid spacing of about 13 km, and the Model for Prediction Across Scales, with a variable resolution of 15 km over the Northwest Pacific transitioning to 60 km elsewhere. We focus on the Northwest Pacific for the period July-October, 2016. Results indicate that discrimination of tropical cyclone intensity is reasonably good up to roughly category 3 storms. The models are able to capture storms of category 4 intensity, but still exhibit a negative intensity bias of 20-30 knots at lead times beyond 5 days. This is partly indicative of the large number of super-typhoons that occurred in 2016. The question arises of how well global models should represent intensity, given that it is unreasonable for them to depict the inner core of many intense tropical cyclones with a grid increment of 13-15 km. We compute an expected "best-case" prediction of intensity based on filtering the observed wind profiles of Atlantic tropical cyclones according to different hypothetical model resolutions. The Atlantic is used because of the significant number of reconnaissance missions and more reliable estimates of wind radii. Results indicate that, even under the most optimistic assumptions, models with horizontal grid spacing of 1/4 degree or coarser should not produce a realistic number of category 4 and 5 storms unless there are errors in spatial attributes of the wind field. Furthermore, models with a grid spacing of 1/4 degree or coarser are unlikely to systematically discriminate hurricanes with differing intensity. Finally, for simple wind profiles, it is shown how an accurate representation of maximum wind on a coarse grid will lead to an overestimate of horizontally integrated kinetic energy by a factor of two or more.
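
    A minimal sketch of the filtering idea, assuming an idealized Rankine-type vortex rather than observed reconnaissance profiles: the tangential wind is sampled on a fine radial grid, block-averaged to coarser grid spacings, and the resolvable maximum wind is compared. The profile parameters and grid spacings are illustrative.

```python
import numpy as np

def rankine_wind(r_km, v_max=70.0, r_max=25.0, decay=0.6):
    """Idealized tangential wind profile (m/s): linear inside r_max, power-law decay outside."""
    r = np.asarray(r_km, dtype=float)
    return np.where(r <= r_max, v_max * r / r_max, v_max * (r_max / r) ** decay)

def coarse_max_wind(dx_km, r_outer=500.0):
    """Average the fine-scale profile over cells of width dx_km and return the peak cell value."""
    fine = 0.5  # km, "truth" resolution
    r = np.arange(fine / 2, r_outer, fine)
    v = rankine_wind(r)
    n = int(dx_km / fine)
    n_cells = len(v) // n
    v_cells = v[: n_cells * n].reshape(n_cells, n).mean(axis=1)
    return v_cells.max()

for dx in (3, 13, 28):  # roughly convection-permitting, GFS-like, and ~1/4-degree grids
    print(f"grid spacing {dx:>2d} km -> resolvable max wind {coarse_max_wind(dx):5.1f} m/s")
```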

  11. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 40, Protection of Environment; Assessment and Collection of Noncompliance Penalties by EPA; Pt. 66, App. C, Appendix C to Part 66—Computer Program. Note: For the text of appendix C, see appendix C to part 67.

  12. A new spherical model for computing the radiation field available for photolysis and heating at twilight

    NASA Technical Reports Server (NTRS)

    Dahlback, Arne; Stamnes, Knut

    1991-01-01

    Accurate computation of atmospheric photodissociation and heating rates is needed in photochemical models. These quantities are proportional to the mean intensity of the solar radiation penetrating to various levels in the atmosphere. For large solar zenith angles a solution of the radiative transfer equation valid for a spherical atmosphere is required in order to obtain accurate values of the mean intensity. Such a solution based on a perturbation technique combined with the discrete ordinate method is presented. Mean intensity calculations are carried out for various solar zenith angles. These results are compared with calculations from a plane parallel radiative transfer model in order to assess the importance of using correct geometry around sunrise and sunset. This comparison shows, in agreement with previous investigations, that for solar zenith angles less than 90 deg adequate solutions are obtained for plane parallel geometry as long as spherical geometry is used to compute the direct beam attenuation; but for solar zenith angles greater than 90 deg this pseudospherical plane parallel approximation overestimates the mean intensity.
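
    The distinction between the two geometries for the direct beam can be illustrated numerically. The sketch below, under the assumption of a simple exponential extinction profile, integrates the slant-path optical depth through spherical shells and compares it with the plane-parallel sec(θ) scaling for zenith angles approaching 90 deg; it is not the perturbation/discrete-ordinate solver described above.

```python
import numpy as np

R_EARTH = 6371.0   # km
H_SCALE = 8.0      # km, exponential scale height of the extinction coefficient
TAU_VERT = 0.3     # vertical optical depth of the absorber (illustrative)

def extinction(z_km):
    """Extinction coefficient (1/km) of an exponential atmosphere, normalized so the
    vertical optical depth from the ground to the top equals TAU_VERT."""
    return (TAU_VERT / H_SCALE) * np.exp(-z_km / H_SCALE)

def tau_spherical(theta_deg, z0=0.0, z_top=100.0, n=20000):
    """Optical depth along the direct beam in spherical-shell geometry (theta < 90 deg)."""
    theta = np.radians(theta_deg)
    b = (R_EARTH + z0) * np.sin(theta)          # impact parameter of the ray
    z = np.linspace(z0, z_top, n)
    r = R_EARTH + z
    ds_dz = r / np.sqrt(r**2 - b**2)            # geometric path stretching in each shell
    return np.trapz(extinction(z) * ds_dz, z)

def tau_plane_parallel(theta_deg, z0=0.0, z_top=100.0, n=20000):
    """Plane-parallel approximation: vertical optical depth divided by cos(theta)."""
    z = np.linspace(z0, z_top, n)
    return np.trapz(extinction(z), z) / np.cos(np.radians(theta_deg))

for theta in (60.0, 80.0, 88.0, 89.5):
    print(f"theta = {theta:5.1f} deg | spherical tau = {tau_spherical(theta):7.3f}"
          f" | plane-parallel tau = {tau_plane_parallel(theta):7.3f}")
```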

  13. GPU accelerated dynamic functional connectivity analysis for functional MRI data.

    PubMed

    Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu

    2015-07-01

    Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to a 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate DFC analyses significantly. The developed algorithms make DFC analysis more practical for multi-subject studies and for more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
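
    A minimal numpy sketch of the sliding-window correlation that underlies DFC analysis is given below; it is not the OpenMP/CUDA implementation evaluated in the study, and the window length, step, and toy time courses are illustrative assumptions.

```python
import numpy as np

def dynamic_connectivity(ts, window=30, step=1):
    """Sliding-window dynamic functional connectivity.

    ts     : (T, R) array of fMRI time courses for R regions/networks.
    window : window length in time points.
    step   : shift between consecutive windows.
    Returns an array of shape (n_windows, R, R) of Pearson correlation matrices.
    """
    T, R = ts.shape
    starts = range(0, T - window + 1, step)
    return np.stack([np.corrcoef(ts[s:s + window].T) for s in starts])

# Toy example: 2 correlated regions plus 1 independent region, 200 time points
rng = np.random.default_rng(1)
base = rng.standard_normal((200, 1))
ts = np.hstack([base + 0.5 * rng.standard_normal((200, 1)),
                base + 0.5 * rng.standard_normal((200, 1)),
                rng.standard_normal((200, 1))])

dfc = dynamic_connectivity(ts, window=30)
print(dfc.shape)                     # (171, 3, 3)
print(dfc[:, 0, 1].mean().round(2))  # time-averaged coupling between regions 0 and 1
```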

  14. Impedance computations and beam-based measurements: A problem of discrepancy

    DOE PAGES

    Smaluk, Victor

    2018-04-21

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  15. Impedance computations and beam-based measurements: A problem of discrepancy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smaluk, Victor

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  16. The Montage architecture for grid-enabled science processing of large, distributed datasets

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S .; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui

    2004-01-01

    Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and Space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive, and are stored in distributed archives that are, in most cases, remote from the available computational resources. Therefore, state-of-the-art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and Space science communities.

  17. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  18. [Mobile computing in anaesthesiology and intensive care medicine. The practical relevance of portable digital assistants].

    PubMed

    Pazhur, R J; Kutter, B; Georgieff, M; Schraag, S

    2003-06-01

    Portable digital assistants (PDAs) may be of value to the anaesthesiologist as development in medical care is moving towards "bedside computing". Many different portable computers are currently available, and it is now possible for the physician to carry a mobile computer at all times. It serves as a database, reference book, patient-tracking aid, date planner, computer, book, magazine, calculator, and much more in one mobile device. With the help of a PDA, information that is required for our work may be available at all times and everywhere at the point of care within seconds. In this overview the possibilities for the use of PDAs in anaesthesia and intensive care medicine are discussed. Developments in other countries, possibilities in use, but also problems such as data security and network technology are evaluated.

  19. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for computation of three-dimensional magnetostatic fields is introduced. In this method, the curl-component of the magnetic field intensity is computed by a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. This method allows one to include iron portions sandwiched in between conductors within partitioned current-carrying subregions. The method is most suited for large-scale global-type 3-D magnetostatic field computations in electrical devices, and in particular rotating electric machinery.

  20. Calculus: A Computer Oriented Presentation, Part 1 [and] Part 2.

    ERIC Educational Resources Information Center

    Stenberg, Warren; Walker, Robert J.

    Parts one and two of a one-year computer-oriented calculus course (without analytic geometry) are presented. The ideas of calculus are introduced and motivated through computer (i.e., algorithmic) concepts. An introduction to computing via algorithms and a simple flow chart language allows the book to be self-contained, except that material on…

  1. Social, organizational, and contextual characteristics of clinical decision support systems for intensive insulin therapy: a literature review and case study.

    PubMed

    Campion, Thomas R; Waitman, Lemuel R; May, Addison K; Ozdas, Asli; Lorenzi, Nancy M; Gadd, Cynthia S

    2010-01-01

    Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. Copyright (c) 2009. Published by Elsevier Ireland Ltd.

  2. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm to allow parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. The approach involves performing matrix calculations on nVidia graphics cards. The graphical processor unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphic cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
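
    For orientation, the sketch below implements the classic two-plane Gerchberg-Saxton iteration in numpy, not the Modified Gerchberg-Saxton algorithm or its GPU implementation; it illustrates why the inner loop is dominated by forward and inverse FFTs, which is what the graphics hardware accelerates. The pupil, aberration, and iteration count are illustrative.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200, seed=0):
    """Classic two-plane Gerchberg-Saxton iteration: recover a pupil-plane phase
    consistent with a measured focal-plane intensity (amplitude = sqrt(intensity)).
    Each iteration is dominated by one forward and one inverse FFT."""
    rng = np.random.default_rng(seed)
    field = pupil_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, pupil_amp.shape))
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))   # impose measured focal amplitude
        field = np.fft.ifft2(focal)
        field = pupil_amp * np.exp(1j * np.angle(field))   # impose known pupil amplitude
    return np.angle(field)

# Toy problem: circular pupil with a known smooth aberration; "measure" the focal intensity
n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil = (x**2 + y**2 <= 0.8**2).astype(float)
true_phase = 1.5 * (x**2 - y**2) * pupil                   # astigmatism-like aberration
focal_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))

est_phase = gerchberg_saxton(pupil, focal_amp)
err = np.angle(np.exp(1j * (est_phase - true_phase)))[pupil > 0]
print(f"RMS phase residual (rad, up to the usual piston/twin ambiguity): {err.std():.2f}")
```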

  3. Towards a Unified Architecture for Data-Intensive Seismology in VERCE

    NASA Astrophysics Data System (ADS)

    Klampanos, I.; Spinuso, A.; Trani, L.; Krause, A.; Garcia, C. R.; Atkinson, M.

    2013-12-01

    Modern seismology involves managing, storing and processing large datasets, typically geographically distributed across organisations. Performing computational experiments using these data generates more data, which in turn have to be managed, further analysed and frequently be made available within or outside the scientific community. As part of the EU-funded project VERCE (http://verce.eu), we research and develop a number of use-cases, interfacing technologies to satisfy the data-intensive requirements of modern seismology. Our solution seeks to support: (1) familiar programming environments to develop and execute experiments, in particular via Python/ObsPy, (2) a unified view of heterogeneous computing resources, public or private, through the adoption of workflows, (3) monitoring the experiments and validating the data products at varying granularities, via a comprehensive provenance system, (4) reproducibility of experiments and consistency in collaboration, via a shared registry of processing units and contextual metadata (computing resources, data, etc.) Here, we provide a brief account of these components and their roles in the proposed architecture. Our design integrates heterogeneous distributed systems, while allowing researchers to retain current practices and control data handling and execution via higher-level abstractions. At the core of our solution lies the workflow language Dispel. While Dispel can be used to express workflows at fine detail, it may also be used as part of meta- or job-submission workflows. User interaction can be provided through a visual editor or through custom applications on top of parameterisable workflows, which is the approach VERCE follows. According to our design, the scientist may use versions of Dispel/workflow processing elements offered by the VERCE library or override them introducing custom scientific code, using ObsPy. This approach has the advantage that, while the scientist uses a familiar tool, the resulting workflow can be executed on a number of underlying stream-processing engines, such as STORM or OGSA-DAI, transparently. While making efficient use of arbitrarily distributed resources and large data-sets is of priority, such processing requires adequate provenance tracking and monitoring. Hiding computation and orchestration details via a workflow system, allows us to embed provenance harvesting where appropriate without impeding the user's regular working patterns. Our provenance model is based on the W3C PROV standard and can provide information of varying granularity regarding execution, systems and data consumption/production. A video demonstrating a prototype provenance exploration tool can be found at http://bit.ly/15t0Fz0. Keeping experimental methodology and results open and accessible, as well as encouraging reproducibility and collaboration, is of central importance to modern science. As our users are expected to be based at different geographical locations, to have access to different computing resources and to employ customised scientific codes, the use of a shared registry of workflow components, implementations, data and computing resources is critical.

  4. 5 CFR 831.703 - Computation of annuities for part-time service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 5, Administrative Personnel; § 831.703 Computation of annuities for part-time service. (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. ...

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, Reinhold C.

    This is the first formal progress report issued by the ORNL Life Sciences Division. It covers the period from February 1997 through December 1998, which has been critical in the formation of our new division. The legacy of 50 years of excellence in biological research at ORNL has been an important driver for everyone in the division to do their part so that this new research division can realize the potential it has to make seminal contributions to the life sciences for years to come. This reporting period is characterized by intense assessment and planning efforts. They included thorough scrutiny of our strengths and weaknesses, analyses of our situation with respect to comparative research organizations, and identification of major thrust areas leading to core research efforts that take advantage of our special facilities and expertise. Our goal is to develop significant research and development (R&D) programs in selected important areas to which we can make significant contributions by combining our distinctive expertise and resources in the biological sciences with those in the physical, engineering, and computational sciences. Significant facilities in mouse genomics, mass spectrometry, neutron science, bioanalytical technologies, and high performance computing are critical to the success of our programs. Research and development efforts in the division are organized in six sections. These cluster into two broad areas of R&D: systems biology and technology applications. The systems biology part of the division encompasses our core biological research programs. It includes the Mammalian Genetics and Development Section, the Biochemistry and Biophysics Section, and the Computational Biosciences Section. The technology applications part of the division encompasses the Assessment Technology Section, the Environmental Technology Section, and the Toxicology and Risk Analysis Section. These sections are the stewards of the division's core competencies. The common mission of the division is to advance science and technology to understand complex biological systems and their relationship with human health and the environment.

  6. Genetic circuit design automation.

    PubMed

    Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A

    2016-04-01

    Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits for Escherichia coli (880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization. Copyright © 2016, American Association for the Advancement of Science.

  7. Determination of collagen fibril structure and orientation in connective tissues by X-ray diffraction

    NASA Astrophysics Data System (ADS)

    Wilkinson, S. J.; Hukins, D. W. L.

    1999-08-01

    Elastic scattering of X-rays can provide the following information on the fibrous protein collagen: its molecular structure, the axial arrangement of rod-like collagen molecules in a fibril, the lateral arrangement of molecules within a fibril, and the orientation of fibrils within a biological tissue. The first part of the paper reviews the principles involved in deducing this information. The second part describes a new computer program for measuring the equatorial intensity distribution, which provides information on the lateral arrangement of molecules within a fibril, and the angular distribution of the equatorial peaks, which provides information on the orientation of fibrils. Orientation of fibrils within a tissue is quantified by the orientation distribution function, g(φ), which represents the probability of finding a fibril oriented between φ and φ + δφ. The application of the program is illustrated by measurement of g(φ) for the collagen fibrils in demineralised cortical bone from cow tibia.
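
    A minimal sketch of turning an azimuthal intensity profile into an orientation distribution function is shown below; the synthetic profile and the absence of background subtraction and peak fitting are simplifying assumptions and do not reproduce the program described in the paper.

```python
import numpy as np

def orientation_distribution(phi_deg, intensity):
    """Normalize an azimuthal intensity profile I(phi) into an orientation
    distribution function g(phi) with integral 1 over the measured range."""
    phi = np.radians(phi_deg)
    area = np.trapz(intensity, phi)
    return intensity / area

# Illustrative azimuthal profile: fibrils preferentially aligned near phi = 0
phi_deg = np.linspace(-90, 90, 181)
intensity = 100.0 + 900.0 * np.exp(-(phi_deg / 20.0) ** 2)   # background + oriented peak

g = orientation_distribution(phi_deg, intensity)
print(f"integral of g over the range: {np.trapz(g, np.radians(phi_deg)):.3f}")   # ~1.0
print(f"fraction of fibrils within ±20 deg of the mean direction: "
      f"{np.trapz(g[70:111], np.radians(phi_deg[70:111])):.2f}")
```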

  8. Parent-child attitude congruence on type and intensity of physical activity: testing multiple mediators of sedentary behavior in older children.

    PubMed

    Anderson, Cheryl B; Hughes, Sheryl O; Fuemmeler, Bernard F

    2009-07-01

    This study examined parent-child attitudes on the value of specific types and intensities of physical activity, which may explain gender differences in child activity, and evaluated physical activity as a mechanism to reduce time spent in sedentary behaviors. A community sample of 681 parents and 433 children (mean age 9.9 years) reported attitudes on the importance of vigorous and moderate intensity team and individually performed sports/activities, as well as household chores. Separate structural models (LISREL 8.7) for girls and boys tested whether parental attitudes were related to child TV and computer time via child attitudes, sport team participation, and physical activity, controlling for demographic factors. Outcome measures were child 7-day physical activity, sport team participation, weekly TV, and computer use. Parent-child attitude congruence was more prevalent among boys, and attitudes varied by ethnicity, parent education, and number of children. Positive parent-child attitudes for vigorous team sports were related to increased team participation and physical activity, as well as reduced TV and computer use in boys and girls. Value of moderate intensity household chores, such as cleaning house and doing laundry, was related to decreased team participation and increased TV in boys. Only organized team sports, not general physical activity, was related to reduced TV and computer use. Results support parents' role in socializing children's achievement task values, affecting child activity by transferring specific attitudes. Value of vigorous intensity sports provided the most benefits to activity and reduction of sedentary behavior, while valuing household chores had unexpected negative effects.

  9. 12 CFR 516.10 - How does OTS compute time periods under this part?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 12, Banks and Banking; Application Processing Procedures, § 516.10 How does OTS compute time periods under this part? In computing time periods under this part, OTS does not include the day of the act or event that commences the time period. ...

  10. Integrating Computing across the Curriculum: The Impact of Internal Barriers and Training Intensity on Computer Integration in the Elementary School Classroom

    ERIC Educational Resources Information Center

    Coleman, LaToya O.; Gibson, Philip; Cotten, Shelia R.; Howell-Moroney, Michael; Stringer, Kristi

    2016-01-01

    This study examines the relationship between internal barriers, professional development, and computer integration outcomes among a sample of fourth- and fifth-grade teachers in an urban, low-income school district in the Southeastern United States. Specifically, we examine the impact of teachers' computer attitudes, computer anxiety, and computer…

  11. Overview 1993: Computational applications

    NASA Technical Reports Server (NTRS)

    Benek, John A.

    1993-01-01

    Computational applications include projects that apply or develop computationally intensive computer programs. Such programs typically require supercomputers to obtain solutions in a timely fashion. This report describes two CSTAR projects involving Computational Fluid Dynamics (CFD) technology. The first, the Parallel Processing Initiative, is a joint development effort and the second, the Chimera Technology Development, is a transfer of government developed technology to American industry.

  12. Evaluating virtual hosted desktops for graphics-intensive astronomy

    NASA Astrophysics Data System (ADS)

    Meade, B. F.; Fluke, C. J.

    2018-04-01

    Visualisation of data is critical to understanding astronomical phenomena. Today, many instruments produce datasets that are too big to be downloaded to a local computer, yet many of the visualisation tools used by astronomers are deployed only on desktop computers. Cloud computing is increasingly used to provide a computation and simulation platform in astronomy, but it also offers great potential as a visualisation platform. Virtual hosted desktops, with graphics processing unit (GPU) acceleration, allow interactive, graphics-intensive desktop applications to operate co-located with astronomy datasets stored in remote data centres. By combining benchmarking and user experience testing, with a cohort of 20 astronomers, we investigate the viability of replacing physical desktop computers with virtual hosted desktops. In our work, we compare two Apple MacBook computers (one old and one new, representing hardware at opposite ends of the useful lifetime) with two virtual hosted desktops: one commercial (Amazon Web Services) and one in a private research cloud (the Australian NeCTAR Research Cloud). For two-dimensional image-based tasks and graphics-intensive three-dimensional operations - typical of astronomy visualisation workflows - we found that benchmarks do not necessarily provide the best indication of performance. When compared to typical laptop computers, virtual hosted desktops can provide a better user experience, even with lower performing graphics cards. We also found that virtual hosted desktops are equally simple to use, provide greater flexibility in choice of configuration, and may actually be a more cost-effective option for typical usage profiles.

  13. Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2014-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the Maximum-Likelihood Estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data-modeling purposes, using the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions lead to similar issues. My program uses a robust method to preset a parameter to overcome the non-convergence computational issue. In addition to model fitting, the software is equipped with useful tools for examining model-fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has potential to be hosted online. The Java language is used for the software's core computing part and an optional interface to the statistical package R is provided.
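
    For reference, the temporal part of the ETAS conditional intensity can be written as λ(t) = μ + Σ_{t_i < t} K e^{α(m_i − m0)} (t − t_i + c)^{−p}. The sketch below evaluates this quantity for a toy catalog; the parameter values are illustrative, and the actual software fits the full space-time model by the EM algorithm rather than evaluating a fixed parameter set.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.2, K=0.05, alpha=1.0,
                   c=0.01, p=1.2, m0=3.0):
    """Temporal ETAS conditional intensity (events per day) at time t:
    background rate mu plus modified-Omori aftershock triggering from all
    earlier events, with productivity growing exponentially in magnitude."""
    t_i = np.asarray(event_times)
    m_i = np.asarray(event_mags)
    past = t_i < t
    trig = K * np.exp(alpha * (m_i[past] - m0)) * (t - t_i[past] + c) ** (-p)
    return mu + trig.sum()

# A tiny synthetic catalog: times in days, magnitudes above completeness m0 = 3.0
times = np.array([0.0, 0.5, 0.6, 2.0, 2.05])
mags = np.array([5.5, 3.2, 3.4, 4.1, 3.1])

for t in (0.7, 2.1, 10.0):
    print(f"lambda(t = {t:4.1f} d) = {etas_intensity(t, times, mags):.3f} events/day")
```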

  14. BarraCUDA - a fast short read sequence aligner using graphics processing units

    PubMed Central

    2012-01-01

    Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net PMID:22244497

  15. [A personal computer-based system for online monitoring of neurologic intensive care patients].

    PubMed

    Stoll, M; Hamann, G; Jost, V; Schimrigk, K

    1992-03-01

    In the management of neurological intensive care patients with an intracranial space-consuming process the measurement and recording of intracranial pressure together with arterial blood pressure is of special interest. These parameters can be used to monitor the treatment of brain edema and hypertension. Intracranial pressure measurement is also important in the diagnosis of the various subtypes of hydrocephalus. Not only the absolute figures, but also the recognition of specific pressure-patterns is of particular clinical and scientific interest. This new, easily installed and inexpensive system comprises a PC and a conventional monitor, which are connected by an AD-conversion card. Our software, specially developed for this system demonstrates, stores and prints the online-course and the trend of the measurements. In addition it is also possible to view the online-course of conspicuous parts of the trend curve retrospectively and to use these values for statistical analyses. Object-orientated software development techniques were used for flexible graphic output on the screen, printer or to a file. Though developed for this specific purpose, this system is also suitable for recording continuous, longer-term measurements in general.

  16. Two frameworks for integrating knowledge in induction

    NASA Technical Reports Server (NTRS)

    Rosenbloom, Paul S.; Hirsh, Haym; Cohen, William W.; Smith, Benjamin D.

    1994-01-01

    The use of knowledge in inductive learning is critical for improving the quality of the concept definitions generated, reducing the number of examples required in order to learn effective concept definitions, and reducing the computation needed to find good concept definitions. Relevant knowledge may come in many forms (such as examples, descriptions, advice, and constraints) and from many sources (such as books, teachers, databases, and scientific instruments). How to extract the relevant knowledge from this plethora of possibilities, and then to integrate it together so as to appropriately affect the induction process, is perhaps the key issue at this point in inductive learning. Here the focus is on the integration part of this problem; that is, how induction algorithms can, and do, utilize a range of extracted knowledge. Preliminary work on a transformational framework for defining knowledge-intensive inductive algorithms out of relatively knowledge-free algorithms is described, as is a more tentative problem-space framework that attempts to cover all induction algorithms within a single general approach. These frameworks help to organize what is known about current knowledge-intensive induction algorithms, and to point towards new algorithms.

  17. Notable local floods of 1942-43, Floods of July 18, 1942 in north-central Pennsylvania, with a section on descriptive details of the storm and floods

    USGS Publications Warehouse

    Eisenlohr, William Stewart; Stewart, J.E.

    1952-01-01

    During the night of August 4-5, 1943, a violent thunderstorm of unusual intensity occurred in parts of Braxton, Calhoun, Gilmer, Ritchie, and Wirth Counties in the Little Kanawha River Basin in central West Virginia. Precipitation amounted to as much as 15 inches in 2 hours in some sections. As a result, many small streams and a reach of the Little Kanawha River in the vicinity of Burnsville and Gilmer reached the highest stages known. Computations based on special surveys made at suitable sites on representative small streams in the areas of intense flooding indicate that peak discharges closely approach 50 percent of the Jarvis scale. Twenty-three lives were lost on the small tributaries as numerous homes were swept away by the flood, which developed with incredible rapidity during the early morning hours. Damage estimated at $1,300,000 resulted to farm buildings, crops, land, livestock, railroads, highways, and gas- and oil-producing facilities. Considerable permanent land damage resulted from erosion and deposition of sand and gravel.

  18. On the Value of Reptilian Brains to Map the Evolution of the Hippocampal Formation.

    PubMed

    Reiter, Sam; Liaw, Hua-Peng; Yamawaki, Tracy M; Naumann, Robert K; Laurent, Gilles

    2017-01-01

    Our ability to navigate through the world depends on the function of the hippocampus. This old cortical structure plays a critical role in spatial navigation in mammals and in a variety of processes, including declarative and episodic memory and social behavior. Intense research has revealed much about hippocampal anatomy, physiology, and computation; yet, even intensely studied phenomena such as the shaping of place cell activity or the function of hippocampal firing patterns during sleep remain incompletely understood. Interestingly, while the hippocampus may be a 'higher order' area linked to a complex cortical hierarchy in mammals, it is an old cortical structure in evolutionary terms. The reptilian cortex, structurally much simpler than the mammalian cortex and hippocampus, therefore presents a good alternative model for exploring hippocampal function. Here, we trace common patterns in the evolution of the hippocampus of reptiles and mammals and ask which parts can be profitably compared to understand functional principles. In addition, we describe a selection of the highly diverse repertoire of reptilian behaviors to illustrate the value of a comparative approach towards understanding hippocampal function. © 2017 S. Karger AG, Basel.

  19. Towards ethical decision support and knowledge management in neonatal intensive care.

    PubMed

    Yang, L; Frize, M; Eng, P; Walker, R; Catley, C

    2004-01-01

    Recent studies in neonatal medicine, clinical nursing, and cognitive psychology have indicated the need to augment current decision-making practice in neonatal intensive care units with computerized, intelligent decision support systems. Rapid progress in artificial intelligence and knowledge management facilitates the design of collaborative ethical decision-support tools that allow clinicians to provide better support for parents facing inherently difficult choices, such as when to withdraw aggressive treatment. The appropriateness of using computers to support ethical decision-making is critically analyzed through research and literature review. In ethical dilemmas, multiple diverse participants need to communicate and function as a team to select the best treatment plan. In order to do this, physicians require reliable estimations of prognosis, while parents need a highly useable tool to help them assimilate complex medical issues and address their own value system. Our goal is to improve and structuralize the ethical decision-making that has become an inevitable part of modern neonatal care units. The paper contributes to clinical decision support by outlining the needs and basis for ethical decision support and justifying the proposed development efforts.

  20. Suggested Approaches to the Measurement of Computer Anxiety.

    ERIC Educational Resources Information Center

    Toris, Carol

    Psychologists can gain insight into human behavior by examining what people feel about, know about, and do with, computers. Two extreme reactions to computers are computer phobia, or anxiety, and computer addiction, or "hacking". A four-part questionnaire was developed to measure computer anxiety. The first part is a projective technique which…

  1. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations complemented by mathematical/statistical libraries is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
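
    A minimal one-dimensional sketch of the Richardson-Lucy update at the heart of such restoration codes is given below; it is plain numpy on toy data, not the i860 vector-processor implementation, and the PSF and noise level are illustrative assumptions.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """1-D Richardson-Lucy deconvolution: each iteration multiplies the current
    estimate by the back-projected ratio of the observed data to the re-blurred
    estimate, preserving non-negativity."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy data: two point sources blurred by a Gaussian PSF plus a little noise
rng = np.random.default_rng(2)
truth = np.zeros(200); truth[60] = 50.0; truth[65] = 30.0
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2); psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same") + 0.05 * rng.standard_normal(200)

restored = richardson_lucy(np.clip(observed, 0, None), psf)
print(f"brightest restored pixel at x = {restored.argmax()}")   # expected near 60
```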

  2. Information granules in image histogram analysis.

    PubMed

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. A Metric for Reducing False Positives in the Computer-Aided Detection of Breast Cancer from Dynamic Contrast-Enhanced Magnetic Resonance Imaging Based Screening Examinations of High-Risk Women.

    PubMed

    Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L

    2016-02-01

    Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.
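
    For context, the signal enhancement ratio compared against is commonly defined voxel-wise as SER = (S_early − S_pre)/(S_late − S_pre) from three DCE-MRI time points. The sketch below evaluates it on toy data; the time-point naming, the stabilizing constant, and the detection threshold are illustrative assumptions, and the proposed second-derivative metric is not reproduced here.

```python
import numpy as np

def signal_enhancement_ratio(s_pre, s_early, s_late, eps=1e-6):
    """Voxel-wise signal enhancement ratio for DCE-MRI:
    SER = (S_early - S_pre) / (S_late - S_pre).
    Values well above 1 indicate wash-out kinetics (early enhancement followed
    by decline), the behaviour typically flagged as suspicious."""
    return (s_early - s_pre) / (s_late - s_pre + eps)

# Toy 2x2 "images": one wash-out voxel, one persistent-enhancement voxel, two background voxels
s_pre   = np.array([[100.0, 100.0], [100.0, 100.0]])
s_early = np.array([[220.0, 150.0], [101.0, 102.0]])
s_late  = np.array([[160.0, 190.0], [102.0, 103.0]])

ser = signal_enhancement_ratio(s_pre, s_early, s_late)
print(np.round(ser, 2))            # the wash-out voxel has SER > 1
print(ser > 1.1)                   # a simple (illustrative) detection threshold
```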

  4. Web-client based distributed generalization and geoprocessing

    USGS Publications Warehouse

    Wolf, E.B.; Howe, K.

    2009-01-01

    Generalization and geoprocessing operations on geospatial information were once the domain of complex software running on high-performance workstations. Currently, these computationally intensive processes are the domain of desktop applications. Recent efforts have been made to move geoprocessing operations server-side in a distributed, web accessible environment. This paper initiates research into portable client-side generalization and geoprocessing operations as part of a larger effort in user-centered design for the US Geological Survey's The National Map. An implementation of the Ramer-Douglas-Peucker (RDP) line simplification algorithm was created in the open source OpenLayers geoweb client. This algorithm implementation was benchmarked using differing data structures and browser platforms. The implementation and results of the benchmarks are discussed in the general context of client-side geoprocessing.
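
    A pure-Python sketch of the Ramer-Douglas-Peucker recursion is given below for reference; the OpenLayers implementation discussed in the paper is JavaScript and was benchmarked across data structures and browsers, which this sketch does not attempt to reproduce.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker line simplification: keep the point farthest from the
    chord between the endpoints if it exceeds epsilon, and recurse on both halves."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)

    def dist(p):
        # Perpendicular distance from p to the line through the endpoints
        # (falls back to distance from the first endpoint for a degenerate chord).
        if norm == 0.0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs(dy * p[0] - dx * p[1] + x2 * y1 - y2 * x1) / norm

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], start=1)),
                    key=lambda t: t[1])
    if dmax > epsilon:
        return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)
    return [points[0], points[-1]]

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9), (8, 9), (9, 9)]
print(rdp(line, epsilon=1.0))   # endpoints plus the significant corner points survive
```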

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Petersson, N. A.; Rodgers, A.

    Acoustic waveform modeling is a computationally intensive task and full three-dimensional simulations are often impractical for some geophysical applications such as long-range wave propagation and high-frequency sound simulation. In this study, we develop a two-dimensional high-order accurate finite-difference code for acoustic wave modeling. We solve the linearized Euler equations by discretizing them with the sixth order accurate finite difference stencils away from the boundary and the third order summation-by-parts (SBP) closure near the boundary. Non-planar topographic boundary is resolved by formulating the governing equation in curvilinear coordinates following the interface. We verify the implementation of the algorithm by numerical examples and demonstrate the capability of the proposed method for practical acoustic wave propagation problems in the atmosphere.
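
    For illustration, the sixth-order interior stencil for a first derivative can be applied as below; this sketch covers only the interior points and a convergence check on a smooth field, not the third-order SBP boundary closure or the curvilinear formulation used in the actual code.

```python
import numpy as np

def d1_sixth_order_interior(f, h):
    """Sixth-order accurate centered first derivative on the interior points of a
    1-D grid (indices 3 .. n-4).  Boundary points are left untouched here; the
    production code closes the boundary with third-order SBP operators instead."""
    df = np.zeros_like(f)
    df[3:-3] = (45.0 * (f[4:-2] - f[2:-4])
                - 9.0 * (f[5:-1] - f[1:-5])
                + (f[6:] - f[:-6])) / (60.0 * h)
    return df

# Convergence check on a smooth pressure-like field p(x) = sin(2*pi*x)
for n in (51, 101, 201):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    err = d1_sixth_order_interior(np.sin(2 * np.pi * x), h)[3:-3] \
          - 2 * np.pi * np.cos(2 * np.pi * x[3:-3])
    print(f"n = {n:4d}  max interior error = {np.abs(err).max():.2e}")
```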

  6. RNA-Seq for Bacterial Gene Expression.

    PubMed

    Poulsen, Line Dahl; Vinther, Jeppe

    2018-06-01

    RNA sequencing (RNA-seq) has become the preferred method for global quantification of bacterial gene expression. With the continued improvements in sequencing technology and data analysis tools, the most labor-intensive and expensive part of an RNA-seq experiment is the preparation of sequencing libraries, which is also essential for the quality of the data obtained. Here, we present a straightforward and inexpensive basic protocol for preparation of strand-specific RNA-seq libraries from bacterial RNA as well as a computational pipeline for the data analysis of sequencing reads. The protocol is based on the Illumina platform and allows easy multiplexing of samples and the removal of sequencing reads that are PCR duplicates. © 2018 by John Wiley & Sons, Inc. © 2018 John Wiley & Sons, Inc.

  7. Potential energy surface, dipole moment surface and the intensity calculations for the 10 μm, 5 μm and 3 μm bands of ozone

    NASA Astrophysics Data System (ADS)

    Polyansky, Oleg L.; Zobov, Nikolai F.; Mizus, Irina I.; Kyuberis, Aleksandra A.; Lodi, Lorenzo; Tennyson, Jonathan

    2018-05-01

    Monitoring ozone concentrations in the Earth's atmosphere using spectroscopic methods is a major activity which is undertaken both from the ground and from space. However there are long-running issues of consistency between measurements made at infrared (IR) and ultraviolet (UV) wavelengths. In addition, key O3 IR bands at 10 μm, 5 μm and 3 μm also yield results which differ by a few percent when used for retrievals. These problems stem from the underlying laboratory measurements of the line intensities. Here we use quantum chemical techniques, first principles electronic structure and variational nuclear-motion calculations, to address this problem. A new high-accuracy ab initio dipole moment surface (DMS) is computed. Several spectroscopically-determined potential energy surfaces (PESs) are constructed by fitting to empirical energy levels in the region below 7000 cm-1 starting from an ab initio PES. Nuclear motion calculations using these new surfaces allow the unambiguous determination of the intensities of 10 μm band transitions, and the computation of the intensities of the 10 μm and 5 μm bands within their experimental error. A decrease in intensities within the 3 μm band is predicted, which appears consistent with atmospheric retrievals. The PES and DMS form a suitable starting point both for the computation of comprehensive ozone line lists and for future calculations of electronic transition intensities.

  8. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  9. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  10. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin; Larsen, Marvin

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  11. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    PubMed Central

    Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Big data has recently gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data intensive applications. Three data intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster in terms of classification accuracy and computational efficiency. PMID:26681933
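
    As a rough illustration of the data-parallel scheme described above, the sketch below trains a single logistic unit with MapReduce-style map (per-split gradients) and reduce (gradient averaging) steps, using Python's multiprocessing pool as a stand-in for a Hadoop cluster. It is not the paper's implementation; the model, split count and learning rate are arbitrary choices for the example.

    ```python
    import numpy as np
    from multiprocessing import Pool

    # MapReduce-style data-parallel training of a one-layer logistic unit.
    # Each "map" task computes the gradient on its data split; the "reduce"
    # step averages the partial gradients.

    def map_gradient(args):
        w, X, y = args
        p = 1.0 / (1.0 + np.exp(-X @ w))        # forward pass on this split
        return X.T @ (p - y) / len(y)           # partial gradient (the "map" output)

    def reduce_gradients(partials):
        return np.mean(partials, axis=0)        # the "reduce" step

    def train(X, y, n_splits=4, epochs=200, lr=0.5):
        w = np.zeros(X.shape[1])
        X_splits, y_splits = np.array_split(X, n_splits), np.array_split(y, n_splits)
        with Pool(n_splits) as pool:
            for _ in range(epochs):
                partials = pool.map(map_gradient,
                                    [(w, Xs, ys) for Xs, ys in zip(X_splits, y_splits)])
                w -= lr * reduce_gradients(partials)
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 5))
        y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)
        w = train(X, y)
        print("training accuracy:", np.mean(((X @ w) > 0) == y))
    ```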

  12. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    PubMed

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Big data has recently gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data intensive applications. Three data intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster in terms of classification accuracy and computational efficiency.

  13. Real Time Earthquake Information System in Japan

    NASA Astrophysics Data System (ADS)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan was developed by the Japan Meteorological Agency (JMA), the governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of tsunami forecasts to the public, locating an earthquake and estimating its magnitude as quickly as possible. Some years later, a system for the prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed so that the governmental organizations concerned can decide whether they need to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs: 1) a prompt report of the occurrence of a large earthquake and the major seismic intensities caused by it, about two minutes after the earthquake occurrence; 2) a tsunami forecast in around three minutes; 3) information on expected arrival times and maximum heights of tsunami waves in around five minutes; 4) information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established: an advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments; data telemetry networks via landlines and partly via a satellite communication link; real-time data processing techniques, for example the automatic calculation of earthquake location and magnitude and the database-driven method for quantitative tsunami estimation; and dissemination networks via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally monitors earthquake data and analyzes earthquake activity and tsunami occurrence round-the-clock on a real-time basis. In addition, JMA has for a decade been developing a Nowcast Earthquake Information system which can notify its users of the occurrence of an earthquake prior to the arrival of strong ground motion. The Earthquake Research Institute of the University of Tokyo is preparing a demonstrative experiment in collaboration with JMA for better utilization of Nowcast Earthquake Information, applying actual measures to reduce earthquake disasters caused by strong ground motion.

  14. Computational laser intensity stabilisation for organic molecule concentration estimation in low-resource settings

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Kazemzadeh, Farnoud; Wong, Alexander

    2017-03-01

    An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization property of lasers allows the concentrations of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always work as intended, and effects such as mode hopping and thermal drift can cause time-varying intensity fluctuations. These effects can originate in the surrounding environment, where either an unstable current source is used or the ambient temperature is not temporally stable. Such intensity fluctuations introduce bias and error into typical organic molecule concentration estimation techniques. In a low-resource setting, where cost must be limited and environmental factors like unregulated power supplies and temperature cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations from electrical and thermal instabilities without the use of additional hardware. This method allows for consistent intensities across all polarization measurements and hence accurate estimates of organic molecule concentrations.
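
    A minimal sketch of the idea follows, assuming a simple random-walk drift model and a scalar Kalman filter as the Bayesian state estimator. The paper's actual measurement model and parameters are not given here, so the noise variances, synthetic drift and final normalisation step are illustrative assumptions only.

    ```python
    import numpy as np

    # Scalar Kalman filter for smoothing a drifting, noisy laser intensity record.
    # A sketch of the general Bayesian state-estimation idea, not the paper's model.

    def kalman_smooth_intensity(measurements, process_var=1e-4, meas_var=1e-2):
        x, p = measurements[0], 1.0          # initial state estimate and variance
        estimates = []
        for z in measurements:
            p += process_var                 # predict: random-walk drift model
            k = p / (p + meas_var)           # Kalman gain
            x += k * (z - x)                 # update with the new measurement
            p *= (1.0 - k)
            estimates.append(x)
        return np.array(estimates)

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 500)
    true_intensity = 1.0 + 0.05 * np.sin(0.5 * t)              # slow thermal drift
    measured = true_intensity + rng.normal(0.0, 0.1, t.size)   # noisy intensity record
    corrected = measured / kalman_smooth_intensity(measured)   # normalise out the drift
    print("residual std after correction:", corrected.std())
    ```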

  15. The effective application of a discrete transition model to explore cell-cycle regulation in yeast

    PubMed Central

    2013-01-01

    Background Bench biologists often do not take part in the development of computational models for their systems and therefore frequently employ them as “black-boxes”. Our aim was to construct and test a model that does not depend on the availability of quantitative data and can be used directly without the need for an intensive computational background. Results We present a discrete transition model. We used the cell cycle in budding yeast as a paradigm for a complex network, demonstrating phenomena such as sequential protein expression and activity, and cell-cycle oscillation. The structure of the network was validated by its response to computational perturbations such as mutations, and by its response to mating pheromone or nitrogen depletion. The model has a strong predictive capability, demonstrating how the activity of a specific transcription factor, Hcm1, is regulated, and what determines commitment of cells to enter and complete the cell cycle. Conclusion The model presented herein is intuitive, yet expressive enough to elucidate the intrinsic structure and qualitative behavior of large and complex regulatory networks. Moreover, our model allowed us to examine multiple hypotheses in a simple and intuitive manner, giving rise to testable predictions. This methodology can easily be integrated as a useful approach for the study of networks, enriching experimental biology with computational insights. PMID:23915717

  16. The neural processing of voluntary completed, real and virtual violent and nonviolent computer game scenarios displaying predefined actions in gamers and nongamers.

    PubMed

    Regenbogen, Christina; Herrmann, Manfred; Fehr, Thorsten

    2010-01-01

    Studies investigating the effects of violent computer and video game playing have resulted in heterogeneous outcomes. It has been assumed that there is a decreased ability to differentiate between virtuality and reality in people that play these games intensively. FMRI data of a group of young males with (gamers) and without (controls) a history of long-term violent computer game playing experience were obtained during the presentation of computer game and realistic video sequences. In gamers the processing of real violence in contrast to nonviolence produced activation clusters in right inferior frontal, left lingual and superior temporal brain regions. Virtual violence activated a network comprising bilateral inferior frontal, occipital, postcentral, right middle temporal, and left fusiform regions. Control participants showed extended left frontal, insula and superior frontal activations during the processing of real, and posterior activations during the processing of virtual violent scenarios. The data suggest that the ability to differentiate automatically between real and virtual violence has not been diminished by a long-term history of violent video game play, nor have gamers' neural responses to real violence in particular been subject to desensitization processes. However, analyses of individual data indicated that group-related analyses reflect only a small part of actual individual different neural network involvement, suggesting that the consideration of individual learning history is sufficient for the present discussion.

  17. First-principles calculations on anharmonic vibrational frequencies of polyethylene and polyacetylene in the Gamma approximation.

    PubMed

    Keçeli, Murat; Hirata, So; Yagi, Kiyoshi

    2010-07-21

    The frequencies of the infrared- and/or Raman-active (k=0) vibrations of polyethylene and polyacetylene are computed by taking account of the anharmonicity in the potential energy surfaces (PESs) and the resulting phonon-phonon couplings explicitly. The electronic part of the calculations is based on Gaussian-basis-set crystalline orbital theory at the Hartree-Fock and second-order Møller-Plesset (MP2) perturbation levels, providing one-, two-, and/or three-dimensional slices of the PES (namely, using the so-called n-mode coupling approximation with n=3), which are in turn expanded in the fourth-order Taylor series with respect to the normal coordinates. The vibrational part uses the vibrational self-consistent field, vibrational MP2, and vibrational truncated configuration-interaction (VCI) methods within the Gamma approximation, which amounts to including only k=0 phonons. It is shown that accounting for both electron correlation and anharmonicity is essential in achieving good agreement (the mean and maximum absolute deviations less than 50 and 90 cm(-1), respectively, for polyethylene and polyacetylene) between computed and observed frequencies. The corresponding values for the calculations including only one of such effects are in excess of 120 and 300 cm(-1), respectively. The VCI calculations also reproduce semiquantitatively the frequency separation and intensity ratio of the Fermi doublet involving the nu(2)(0) fundamental and nu(8)(pi) first overtone in polyethylene.

  18. A new method for designing dual foil electron beam forming systems. II. Feasibility of practical implementation of the method

    NASA Astrophysics Data System (ADS)

    Adrich, Przemysław

    2016-05-01

    In Part I of this work a new method for designing dual foil electron beam forming systems was introduced. In this method, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of system performance as a function of its parameters. At each point of the scan, the Monte Carlo method is used to calculate the off-axis dose profile in water, taking into account the detailed and complete geometry of the system. The new method, while being computationally intensive, minimizes the involvement of the designer. In this Part II paper, the feasibility of practical implementation of the new method is demonstrated. For this, prototype software tools were developed and applied to solve a real-life design problem. It is demonstrated that system optimization can be completed within a few hours using rather moderate computing resources. It is also demonstrated that, perhaps for the first time, the designer can gain deep insight into system behavior, such that the construction can be simultaneously optimized with respect to a number of functional characteristics besides the flatness of the off-axis dose profile. In the presented example, the system is optimized with respect to both the flatness of the off-axis dose profile and the beam transmission. A number of practical issues related to the application of the new method, as well as its possible extensions, are discussed.

  19. 5 CFR 839.1101 - How are my retirement benefits computed if I elect CSRS or CSRS Offset under this part?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false How are my retirement benefits computed... Provisions § 839.1101 How are my retirement benefits computed if I elect CSRS or CSRS Offset under this part? Unless otherwise stated in this part, your retirement benefit is computed as if you were properly put in...

  20. Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.

    ERIC Educational Resources Information Center

    Scheeline, Alexander; Mork, Brian J.

    1988-01-01

    Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)

  1. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  2. Factors associated with success in the oral part of the European Diploma in Intensive Care

    PubMed Central

    Waldauf, Petr; Rubulotta, Francesca; Sitzwohl, Christian; Elbers, Paul; Girbes, Armand; Saha, Rajnish; Marsh, Brian; Kumar, Ravindra; Maggiorini, Marco

    2017-01-01

    Introduction The oral part of the European Diploma in Intensive Care examination changed in 2013 into an objective structured clinical examination-type exam. This step was undertaken to provide a fair and reproducible clinical exam. All candidates face identical questions with predefined correct answers simultaneously in seven high-throughput exam centres on the same day. We describe the factors that are associated with success in the part 2 European Diploma in Intensive Care exam. Methods We prospectively collected self-reported data from all candidates sitting European Diploma in Intensive Care part 2 in 2015, namely demographics, professional background and attendance at a European Diploma in Intensive Care part 2 or generic objective structured clinical examination preparatory course. After testing association with success (with cutoff at p < 0.10) and co-linearity of these factors as independent variables, we performed a multivariate logistic regression analysis, with binary exam outcome (pass/fail) as the dependent variable. Structural equation modelling was used to gain further insight into relations among determinants of success in the oral part of the European Diploma in Intensive Care. Results Out of 427 candidates sitting the exam, completed data from 341 (80%) were available for analysis. The following candidates' factors were associated with an increased chance of success: English as native language (odds ratio 4.3 (95% CI 1.7–10.7)), use of the Patient-centred Acute Care Training e-learning programme module (odds ratio 2.0 (1.2–3.3)), working in an EU country (odds ratio 2.5 (1.5–4.3)), and better results in the written part of the European Diploma in Intensive Care (for each additional SD of 6.1 points, odds ratio 1.9 (1.4–2.4)). The chance of success in the European Diploma in Intensive Care part 2 decreased with increasing candidates' age (for each additional SD of 5.5 years, odds ratio 0.67 (0.51–0.87)). Exam centres (7 in total) could be clustered into 3 groups with similar success rates. There were significant differences in exam outcomes among these 3 groups of exam centres even after adjustment for known candidates' factors (G1 vs G2 odds ratio 2.4 (1.4–4.1); G1 vs G3 odds ratio 9.7 (4.0–23.1); and G2 vs G3 odds ratio 3.9 (1.7–9.2)). A short data collection period (only one year) and 20% missing candidates' data are the main limitations of this study. Conclusions Younger age, English as native language, better results in the written part of the exam, working in a European country and the use of PACT for preparation were factors associated with success in the oral part of the European Diploma in Intensive Care exam. Despite the limitations of this study, the differences in outcome among the exam centres will need further investigation. PMID:29123559
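
    For readers unfamiliar with how such odds ratios are produced, the sketch below fits a multivariate logistic model to synthetic data with statsmodels and converts the coefficients and confidence intervals to odds ratios. The variable names echo the factors in the abstract, but the data, coefficients and resulting numbers are fabricated purely for illustration and do not reproduce the study's analysis.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic cohort with made-up effects; names mirror the factors in the abstract.
    rng = np.random.default_rng(42)
    n = 341
    df = pd.DataFrame({
        "native_english": rng.integers(0, 2, n),
        "used_pact":      rng.integers(0, 2, n),
        "works_in_eu":    rng.integers(0, 2, n),
        "written_score_z": rng.normal(0.0, 1.0, n),   # standardised written-exam score
        "age_z":           rng.normal(0.0, 1.0, n),   # standardised age
    })
    linpred = (1.4 * df.native_english + 0.7 * df.used_pact + 0.9 * df.works_in_eu
               + 0.6 * df.written_score_z - 0.4 * df.age_z - 1.0)
    df["passed"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

    # Multivariate logistic regression; exponentiated coefficients are odds ratios.
    model = sm.Logit(df["passed"], sm.add_constant(df.drop(columns="passed"))).fit(disp=0)
    odds_ratios = pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_low": np.exp(model.conf_int()[0]),
        "CI_high": np.exp(model.conf_int()[1]),
    })
    print(odds_ratios.round(2))
    ```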

  3. Factors associated with success in the oral part of the European Diploma in Intensive Care.

    PubMed

    Waldauf, Petr; Rubulotta, Francesca; Sitzwohl, Christian; Elbers, Paul; Girbes, Armand; Saha, Rajnish; Marsh, Brian; Kumar, Ravindra; Maggiorini, Marco; Duška, František

    2017-11-01

    The oral part of the European Diploma in Intensive Care examination changed in 2013 into an objective structured clinical examination-type exam. This step was undertaken to provide a fair and reproducible clinical exam. All candidates face identical questions with predefined correct answers simultaneously in seven high-throughput exam centres on the same day. We describe the factors that are associated with success in the part 2 European Diploma in Intensive Care exam. We prospectively collected self-reported data from all candidates sitting European Diploma in Intensive Care part 2 in 2015, namely demographics, professional background and attendance at a European Diploma in Intensive Care part 2 or generic objective structured clinical examination preparatory course. After testing association with success (with cutoff at p < 0.10) and co-linearity of these factors as independent variables, we performed a multivariate logistic regression analysis, with binary exam outcome (pass/fail) as the dependent variable. Structural equation modelling was used to gain further insight into relations among determinants of success in the oral part of the European Diploma in Intensive Care. Out of 427 candidates sitting the exam, completed data from 341 (80%) were available for analysis. The following candidates' factors were associated with an increased chance of success: English as native language (odds ratio 4.3 (95% CI 1.7-10.7)), use of the Patient-centred Acute Care Training e-learning programme module (odds ratio 2.0 (1.2-3.3)), working in an EU country (odds ratio 2.5 (1.5-4.3)), and better results in the written part of the European Diploma in Intensive Care (for each additional SD of 6.1 points, odds ratio 1.9 (1.4-2.4)). The chance of success in the European Diploma in Intensive Care part 2 decreased with increasing candidates' age (for each additional SD of 5.5 years, odds ratio 0.67 (0.51-0.87)). Exam centres (7 in total) could be clustered into 3 groups with similar success rates. There were significant differences in exam outcomes among these 3 groups of exam centres even after adjustment for known candidates' factors (G1 vs G2 odds ratio 2.4 (1.4-4.1); G1 vs G3 odds ratio 9.7 (4.0-23.1); and G2 vs G3 odds ratio 3.9 (1.7-9.2)). A short data collection period (only one year) and 20% missing candidates' data are the main limitations of this study. Younger age, English as native language, better results in the written part of the exam, working in a European country and the use of PACT for preparation were factors associated with success in the oral part of the European Diploma in Intensive Care exam. Despite the limitations of this study, the differences in outcome among the exam centres will need further investigation.

  4. mapDIA: Preprocessing and statistical analysis of quantitative proteomics data from data independent acquisition mass spectrometry.

    PubMed

    Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon

    2015-11-03

    Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums, as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rate. We also describe the analysis procedure in detail using two recently published DIA datasets generated for the 14-3-3β dynamic interaction network and the prostate cancer glycoproteome. The software was written in C++ and the source code is available for free through the SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
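
    As a small illustration of the first step in this workflow, the snippet below normalises a toy fragment-intensity table by total intensity sums. The sample and fragment names are hypothetical, and this is a sketch of the idea rather than the mapDIA implementation, which additionally offers retention-time-local normalisation.

    ```python
    import pandas as pd

    # Toy fragment-level intensity table: rows are fragments, columns are samples.
    fragments = pd.DataFrame(
        {"sample_A": [1.2e6, 3.4e5, 8.9e5], "sample_B": [2.0e6, 5.1e5, 1.3e6]},
        index=["PEPTIDEK_y4", "PEPTIDEK_y5", "ANOTHERPEPR_b3"],
    )

    totals = fragments.sum(axis=0)               # total intensity per sample
    target = totals.mean()                       # common scale to map every sample onto
    normalised = fragments * (target / totals)   # rescale each sample's column

    print(normalised)
    print("column sums after normalisation:", normalised.sum(axis=0).round(1).tolist())
    ```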

  5. Computing Models for FPGA-Based Accelerators

    PubMed Central

    Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt

    2011-01-01

    Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152

  6. Supporting Positive Behaviour in Alberta Schools: An Intensive Individualized Approach

    ERIC Educational Resources Information Center

    Souveny, Dwaine

    2008-01-01

    Drawing on current research and best practices, this third part of the three-part resource, "Supporting Positive Behaviour in Alberta Schools," provides information and strategies for providing intensive, individualized support and instruction for the small percentage of students requiring a high degree of intervention. This system of…

  7. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  8. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  9. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  10. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  11. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  12. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  13. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  14. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  15. The Effects of Computer Assisted English Instruction on High School Preparatory Students' Attitudes towards Computers and English

    ERIC Educational Resources Information Center

    Ates, Alev; Altunay, Ugur; Altun, Eralp

    2006-01-01

    The aim of this research was to discern the effects of computer assisted English instruction on English language preparatory students' attitudes towards computers and English in a Turkish-medium high school with an intensive English program. A quasi-experimental time series research design, also called "before-after" or "repeated…

  16. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  17. Nonlinear histogram binning for quantitative analysis of lung tissue fibrosis in high-resolution CT data

    NASA Astrophysics Data System (ADS)

    Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.

    2007-03-01

    Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high-resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer-aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second- and higher-order statistics, which relate the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans range between [-1024, 1024]. Calculation of second-order statistics over this range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray-level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second- and higher-order statistics for more accurate quantification of diffuse lung disease.
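
    The dynamic programming formulation can be sketched as follows, assuming the common objective of minimising the total weighted within-bin variance over contiguous gray-level bins; the paper's exact cost function may differ, and the CT-like histogram below is fabricated for the example.

    ```python
    import numpy as np

    def optimal_bins(values, counts, n_bins):
        """Nonlinear histogram binning by dynamic programming: choose bin edges over
        the sorted gray levels that minimise the total weighted within-bin variance."""
        w = counts.astype(float)
        cw, cwx, cwx2 = np.cumsum(w), np.cumsum(w * values), np.cumsum(w * values ** 2)

        def sse(i, j):  # weighted sum of squared deviations for levels i..j (inclusive)
            W = cw[j] - (cw[i - 1] if i else 0.0)
            S = cwx[j] - (cwx[i - 1] if i else 0.0)
            S2 = cwx2[j] - (cwx2[i - 1] if i else 0.0)
            return S2 - S * S / W if W > 0 else 0.0

        n = len(values)
        cost = np.full((n_bins, n), np.inf)
        split = np.zeros((n_bins, n), dtype=int)
        for j in range(n):
            cost[0, j] = sse(0, j)
        for k in range(1, n_bins):
            for j in range(k, n):
                for i in range(k, j + 1):          # bin k starts at gray level i
                    c = cost[k - 1, i - 1] + sse(i, j)
                    if c < cost[k, j]:
                        cost[k, j], split[k, j] = c, i
        edges, j = [], n - 1                       # recover bin edges by backtracking
        for k in range(n_bins - 1, 0, -1):
            i = split[k, j]
            edges.append(values[i])
            j = i - 1
        return sorted(edges)

    # Toy CT-like intensity histogram in [-1024, 1024]: most mass near air and soft tissue.
    rng = np.random.default_rng(0)
    hu = np.concatenate([rng.normal(-900, 40, 5000), rng.normal(40, 60, 3000)])
    counts, bin_edges = np.histogram(hu, bins=128, range=(-1024, 1024))
    levels = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    print("nonlinear bin edges:", np.round(optimal_bins(levels, counts, 8), 1))
    ```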

  19. Data communication network at the ASRM facility

    NASA Astrophysics Data System (ADS)

    Moorhead, Robert J., II; Smith, Wayne D.

    1993-08-01

    This report describes the simulation of the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi as of today. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories viz. manufacturing intensive and manufacturing non-intensive. The manufacturing intensive buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B_1000. The manufacturing non-intensive buildings will be connected by 10BASE-FL to the OIS through the Business Information System (BIS) hub in the main computing center. All the devices inside B_1000 will communicate with the BIS. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing intensive hub and one of the OIS hubs. Comdisco's Block Oriented Network Simulator (BONeS) has been used to simulate the performance of the network. BONeS models a network topology, traffic, data structures, and protocol functions using a graphical interface. The main aim of the simulations was to evaluate the loading of the OIS, the BIS, and the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.

  20. Data communication network at the ASRM facility

    NASA Technical Reports Server (NTRS)

    Moorhead, Robert J., II; Smith, Wayne D.

    1993-01-01

    This report describes the simulation of the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, Mississippi as of today. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories viz. manufacturing intensive and manufacturing non-intensive. The manufacturing intensive buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B_1000. The manufacturing non-intensive buildings will be connected by 10BASE-FL to the OIS through the Business Information System (BIS) hub in the main computing center. All the devices inside B_1000 will communicate with the BIS. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing intensive hub and one of the OIS hubs. Comdisco's Block Oriented Network Simulator (BONeS) has been used to simulate the performance of the network. BONeS models a network topology, traffic, data structures, and protocol functions using a graphical interface. The main aim of the simulations was to evaluate the loading of the OIS, the BIS, and the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.

  1. Computer-aided vs. tutor-delivered teaching of exposure therapy for phobia/panic: randomized controlled trial with pre-registration nursing students.

    PubMed

    Gega, L; Norman, I J; Marks, I M

    2007-03-01

    Exposure therapy is effective for phobic anxiety disorders (specific phobias, agoraphobia, social phobia) and panic disorder. Despite their high prevalence in the community, sufferers often get no treatment or, if they do, it is usually after a long delay. This is largely due to the scarcity of healthcare professionals trained in exposure therapy, which is due, in part, to the high cost of training. The traditional teaching methods employed are labour intensive, being based mainly on role-play in small groups with feedback and coaching from experienced trainers. In an attempt to increase knowledge and skills in exposure therapy, there is now some interest in providing relevant teaching as part of pre-registration nurse education. Computer programs have been developed to teach terminology and simulate clinical scenarios for health professionals, and offer a potentially cost-effective alternative to traditional teaching methods. The aims were to test whether student nurses would learn about exposure therapy for phobia/panic as well by computer-aided self-instruction as by face-to-face teaching, and to compare the individual and combined effects of two educational methods on students' knowledge, skills and satisfaction: traditional face-to-face teaching, comprising a presentation with discussion and questions/answers by a specialist cognitive behaviour nurse therapist, and a computer-aided self-instructional programme based on FearFighter, a self-help programme for patients with phobia/panic. The design was a randomised controlled trial with a crossover, completed over 2 consecutive days of 4 hours each. Participants were ninety-two mental health pre-registration nursing students of mixed gender, age and ethnic origin, with no previous training in cognitive behaviour therapy, studying at one UK university. The two teaching methods led to similar improvements in knowledge and skills, and to similar satisfaction, when used alone. Using them in tandem conferred no added benefit. Computer-aided self-instruction was more efficient as it saved teacher preparation and delivery time and needed no specialist tutor; it saved almost all preparation time and delivery effort for the expert teacher. When added to past results in medical students, the present results in nurses justify the use of computer-aided self-instruction for learning about exposure therapy and phobia/panic, and research into its value for other areas of health education.

  2. Does aging with a cortical lesion increase fall-risk: Examining effect of age versus stroke on intensity modulation of reactive balance responses from slip-like perturbations.

    PubMed

    Patel, Prakruti J; Bhatt, Tanvi

    2016-10-01

    We examined whether aging with and without a cerebral lesion such as stroke affects modulation of the reactive balance response for recovery from increasing intensities of sudden slip-like stance perturbations. Ten young adults, older age-matched adults and older chronic stroke survivors were exposed to three different levels of slip-like perturbations in stance: level I (7.75m/s(2)), level II (12.00m/s(2)) and level III (16.75m/s(2)). The center of mass (COM) state stability was computed as the shortest distance of the instantaneous COM position and velocity relative to the base of support (BOS) from a theoretical threshold for backward loss of balance (BLOB). The COM position (XCOM/BOS) and velocity (ẊCOM/BOS) relative to the BOS at compensatory step touchdown, compensatory step length and trunk angle at touchdown were also recorded. At liftoff, stability decreased with increasing perturbation intensity across all groups (main effect of intensity, p<0.05). At touchdown, while the young group showed a linear improvement in stability with increasing perturbation intensity, such a trend was absent in the other groups (intensity×group interaction, p<0.05). Between-group differences in stability at touchdown were thus observed at levels II and III. Further, greater stability at touchdown positively correlated with anterior XCOM/BOS but not with ẊCOM/BOS. Young adults maintained anterior XCOM/BOS by increasing compensatory step length and preventing greater trunk extension at higher perturbation intensities. The age-matched group attempted to increase step length from intensity I to II to maintain stability but could not further increase step length at intensity III, resulting in lower stability at this level compared with the young group. The stroke group, on the other hand, was unable to modulate compensatory step length or control trunk extension at higher perturbation intensities, resulting in reduced stability at levels II and III compared with the other groups. The findings reflect impaired modulation of the recovery response with increasing intensity of sudden perturbations among stroke survivors compared with their healthy counterparts. Thus, aging superimposed with a cortical lesion could further impair reactive balance control, potentially contributing toward a higher fall risk in older stroke survivors. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
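
    A minimal sketch of the COM-state stability margin is shown below: the shortest (perpendicular) distance of the instantaneous, BOS-normalised COM position and velocity from an assumed linear backward-loss-of-balance boundary. The boundary coefficients and example states are placeholders, not the study's published threshold, which is derived from a dynamic model of the body.

    ```python
    import numpy as np

    def stability_margin(x_com_bos, v_com_bos, a=0.7, b=-0.3):
        # Distance of the state (x, v) from the assumed BLOB boundary v = a*x + b;
        # positive values lie above the boundary (more stable against backward loss).
        return (v_com_bos - a * x_com_bos - b) / np.hypot(a, 1.0)

    # Hypothetical states at compensatory-step touchdown for three perturbation levels.
    states = {"level I": (0.10, -0.05), "level II": (0.02, -0.15), "level III": (-0.05, -0.30)}
    for level, (x, v) in states.items():
        print(f"{level}: stability margin = {stability_margin(x, v):+.3f}")
    ```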

  3. News in engineering education in Spain effective from 2010 in presence of external changes and mixed crisis, looking mostly to agro and civil engineers

    NASA Astrophysics Data System (ADS)

    Anton, J. M.; Sanchez, M. E.; Grau, J. B.; Andina, D.

    2012-04-01

    Engineering career models have been diverse across Europe, and Spain is now adopting the Bologna process for European universities. Separate from the older universities, which are in part technically active, Civil Engineering (Caminos, Canales y Puertos) started in Spain at the end of the 18th century, adopting the French model of Upper Schools for state civil servants with an entry examination. After the intense wars that followed 1800, the Ingenieros de Montes appeared as an Upper School to conserve forest regions, and in 1855 the Ingenieros Agrónomos followed to advance related techniques and practices. Other engineering Upper Schools appeared, oriented more towards private industry. All of these Upper Schools acquired associated Lower Schools of Ingeniero Técnico. Recently both types grew considerably in number and evolved, linked also to recognized professions. Spanish society, within the European Community, evolved around the year 2000, in part very well, but with severe imbalances that led to severe youth unemployment during the 2008-2011 crisis. With the Bologna process, major formal changes took effect from 2010-11 and were accepted with intense adaptation. The Lower Schools are converging towards the Upper Schools, and since 2010-11 both have shifted to various 4-year degrees (Grado), some tied to the pre-existing professions, together with diverse Masters. Their acceptance by students has started relatively well and will evolve, and the acceptance of the new degrees for employment in Spain, Europe or elsewhere will be essential. Each Grado now has a rather rigid curriculum and programme; MOODLE was introduced to connect pupils, and specific uses of personal computers are taught in each subject. The Escuela de Agrónomos centre, reorganized under its old name in its former buildings at the entrance of Campus Moncloa, offers Grados in Agronomic Engineering and Science for various public and private agricultural activities, Alimentary Engineering for food-related activities and control, Agro-Environmental Engineering, more related to environmental activities, and in part Biotechnology, also using laboratories on Campus Monte-Gancedo for Plant Biotechnology and Computational Biotechnology. Curricula include basics, engineering, practices, visits, English, a final-year project, and stays. Some Masters will lead to specific professional diplomas; the list now includes Agro-Engineering, Agro-Forestal Biotechnology, Agro and Natural Resources Economy, Complex Physical Systems, Gardening and Landscaping, Rural Engineering, Phytogenetic Resources, Plant Genetic Resources, Environmental Technology for Sustainable Agriculture, and Technology for Human Development and Cooperation.

  4. Making Ceramic/Polymer Parts By Extrusion Stereolithography

    NASA Technical Reports Server (NTRS)

    Stuffle, Kevin; Mulligan, A.; Creegan, P.; Boulton, J. M.; Lombardi, J. L.; Calvert, P. D.

    1996-01-01

    Extrusion stereolithography is a developmental method for computer-controlled manufacturing of objects out of ceramic/polymer composite materials. Computer-aided design/computer-aided manufacturing (CAD/CAM) software is used to create an image of the desired part and translate the image into motion commands for a combination of mechanisms moving a resin dispenser. Extrusion is performed in coordination with the motion of the dispenser so that the buildup of extruded material takes on the size and shape of the desired part. The part is thermally cured after deposition.

  5. Predictors of change in life skills in schizophrenia after cognitive remediation.

    PubMed

    Kurtz, Matthew M; Seltzer, James C; Fujimoto, Marco; Shagan, Dana S; Wexler, Bruce E

    2009-02-01

    Few studies have investigated predictors of response to cognitive remediation interventions in patients with schizophrenia. Predictor studies to date have selected treatment outcome measures that were either part of the remediation intervention itself or closely linked to the intervention with few studies investigating factors that predict generalization to measures of everyday life-skills as an index of treatment-related improvement. In the current study we investigated the relationship between four measures of neurocognitive function, crystallized verbal ability, auditory sustained attention and working memory, verbal learning and memory, and problem-solving, two measures of symptoms, total positive and negative symptoms, and the process variables of treatment intensity and duration, to change on a performance-based measure of everyday life-skills after a year of computer-assisted cognitive remediation offered as part of intensive outpatient rehabilitation treatment. Thirty-six patients with schizophrenia or schizoaffective disorder were studied. Results of a linear regression model revealed that auditory attention and working memory predicted a significant amount of the variance in change in performance-based measures of everyday life skills after cognitive remediation, even when variance for all other neurocognitive variables in the model was controlled. Stepwise regression revealed that auditory attention and working memory predicted change in everyday life-skills across the trial even when baseline life-skill scores, symptoms and treatment process variables were controlled. These findings emphasize the importance of sustained auditory attention and working memory for benefiting from extended programs of cognitive remediation.

  6. A new method for designing dual foil electron beam forming systems. I. Introduction, concept of the method

    NASA Astrophysics Data System (ADS)

    Adrich, Przemysław

    2016-05-01

    In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of the system performance as a function of the foil parameters. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
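
    A minimal sketch of such an automatized scan is shown below. The `mc_dose_profile` function is a crude analytic stand-in for a real Monte Carlo transport run and is not part of the paper; the flatness and transmission scoring, the parameter grid and the tolerance are likewise illustrative assumptions about how a scan of this kind can be organised.

    ```python
    import itertools
    import numpy as np

    def mc_dose_profile(primary_thickness_mm, secondary_thickness_mm, n_bins=41):
        """Placeholder for a full Monte Carlo run that would return the off-axis
        dose profile in water for one dual-foil configuration (toy model only)."""
        r = np.linspace(-20.0, 20.0, n_bins)                     # off-axis position, cm
        sigma = 6.0 + 8.0 * primary_thickness_mm                 # thicker foil, wider profile
        flattening = 1.0 - 0.5 * secondary_thickness_mm * np.exp(-(r / 10.0) ** 2)
        transmission = np.exp(-0.8 * primary_thickness_mm - 0.3 * secondary_thickness_mm)
        return transmission * flattening * np.exp(-(r / sigma) ** 2)

    def score(profile, central=10):
        """Flatness over the central bins (lower is flatter) and central dose
        (used here as a proxy for beam transmission)."""
        mid = len(profile) // 2
        window = profile[mid - central // 2: mid + central // 2 + 1]
        return (window.max() - window.min()) / window.max(), profile[mid]

    # Systematic scan over a grid of foil parameters; among configurations meeting
    # the flatness tolerance, keep the one with the highest central dose.
    best = None
    for t1, t2 in itertools.product(np.linspace(0.1, 1.0, 10), np.linspace(0.2, 1.6, 8)):
        flatness, central_dose = score(mc_dose_profile(t1, t2))
        if flatness < 0.05 and (best is None or central_dose > best[2]):
            best = (t1, t2, central_dose, flatness)

    print("best configuration (primary mm, secondary mm, central dose, flatness):", best)
    ```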

  7. Quantitative accuracy of the closed-form least-squares solution for targeted SPECT.

    PubMed

    Shcherbinin, S; Celler, A

    2010-10-07

    The aim of this study is to investigate the quantitative accuracy of the closed-form least-squares solution (LSS) for single photon emission computed tomography (SPECT). The main limitation for employing this method in actual clinical reconstructions is the computational cost related to operations with a large-sized system matrix. However, in some clinical situations, the size of the system matrix can be decreased using targeted reconstruction. For example, some oncology SPECT studies are characterized by intense tracer uptakes that are localized in relatively small areas, while the remaining parts of the patient body have only a low activity background. Conventional procedures reconstruct the activity distribution in the whole object, which leads to relatively poor image accuracy/resolution for tumors while computer resources are wasted, trying to rebuild diagnostically useless background. In this study, we apply a concept of targeted reconstruction to SPECT phantom experiments imitating such oncology scans. Our approach includes two major components: (i) disconnection of the entire imaging system of equations and extraction of only those parts that correspond to the targets, i.e., regions of interest (ROI) encompassing active containers/tumors and (ii) generation of the closed-form LSS for each target ROI. We compared these ROI-based LSS with those reconstructed by the conventional MLEM approach. The analysis of the five processed cases from two phantom experiments demonstrated that the LSS approach outperformed MLEM in terms of the noise level inside ROI. On the other hand, MLEM better recovered total activity if the number of iterations was large enough. For the experiment without background activity, the ROI-based LSS led to noticeably better spatial activity distribution inside ROI. However, the distributions pertaining to both approaches were practically identical for the experiment with the concentration ratio 7:1 between the containers and the background.
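
    The core of the targeted idea can be sketched as extracting the ROI columns of the system matrix and solving the small normal equations in closed form. The random system matrix, ROI definition and noise level below are stand-ins invented for the example, and the background outside the ROI is ignored entirely for simplicity, which the paper treats more carefully.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_bins, n_voxels = 2000, 1024                    # projection bins, image voxels
    # Random sparse stand-in for a SPECT system matrix (not a real projector).
    A = rng.random((n_bins, n_voxels)) * (rng.random((n_bins, n_voxels)) < 0.01)

    roi = np.zeros(n_voxels, dtype=bool)
    roi[400:500] = True                              # voxels covering the hot container(s)

    x_true = np.zeros(n_voxels)
    x_true[roi] = 10.0                               # activity concentrated in the ROI
    projections = A @ x_true + rng.normal(0.0, 0.5, n_bins)

    A_roi = A[:, roi]                                # targeted (reduced) system matrix
    # Closed-form LSS restricted to the ROI: x_roi = (A_roi^T A_roi)^-1 A_roi^T p
    x_roi = np.linalg.solve(A_roi.T @ A_roi, A_roi.T @ projections)
    print(f"mean reconstructed ROI activity: {float(x_roi.mean()):.2f}")
    ```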

  8. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 39 Postal Service 1 2013-07-01 2013-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  9. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 39 Postal Service 1 2012-07-01 2012-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  10. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 39 Postal Service 1 2011-07-01 2011-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  11. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 39 Postal Service 1 2014-07-01 2014-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  12. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  13. 10 CFR Appendix II to Part 504 - Fuel Price Computation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Fuel Price Computation II Appendix II to Part 504 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS EXISTING POWERPLANTS Pt. 504, App. II Appendix II to Part 504—Fuel Price Computation (a) Introduction. This appendix provides the equations and parameters...

  14. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 15 2011-07-01 2011-07-01 false Computer Program C Appendix C to Part... APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note: EPA will make copies of appendix C available from: Director, Stationary Source Compliance Division, EN...

  15. Data Intensive Computing on Amazon Web Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magana-Zook, S. A.

    The Geophysical Monitoring Program (GMP) has spent the past few years building up the capability to perform data intensive computing using what have been referred to as “big data” tools. These big data tools would be used against massive archives of seismic signals (>300 TB) to conduct research not previously possible. Examples of such tools include Hadoop (HDFS, MapReduce), HBase, Hive, Storm, Spark, Solr, and many more by the day. These tools are useful for performing data analytics on datasets that exceed the resources of traditional analytic approaches. To this end, a research big data cluster (“Cluster A”) was set up as a collaboration between GMP and Livermore Computing (LC).

  16. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations, such as read/write/transfer, and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and to reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.

  17. Volume and intensity of Medicare physicians' services: An overview

    PubMed Central

    Kay, Terrence L.

    1990-01-01

    From 1978 to 1987, Medicare spending for physicians' services increased at annual compound rates of 16 percent, far exceeding increases expected based on inflation and increases in beneficiaries. As a result, Medicare spending for Part B physicians' services has attracted considerable attention. This article contains an overview of expenditure trends for Part B physicians' services, a summary of recent research findings on issues related to volume and intensity of physicians' services, and a discussion of options for controlling volume and intensity. The possible impact of the recently enacted relative-value-based fee schedule on volume and intensity of services is discussed briefly. PMID:10113398

  18. Resampling: A Marriage of Computers and Statistics. ERIC/TM Digest.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Shafer, Mary Morello

    Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…

  19. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  20. BASIC Language Flow Charting Program (BASCHART). Technical Note 3-82.

    ERIC Educational Resources Information Center

    Johnson, Charles C.; And Others

    This document describes BASCHART, a computer aid designed to decipher and automatically flow chart computer program logic; it also provides the computer code necessary for this process. Developed to reduce the labor intensive manual process of producing a flow chart for an undocumented or inadequately documented program, BASCHART will…

  1. Computer Academy. Western Michigan University: Summer 1985-Present.

    ERIC Educational Resources Information Center

    Kramer, Jane E.

    The Computer Academy at Western Michigan University (Kalamazoo) is a series of intensive, one-credit-hour workshops to assist professionals in increasing their level of computer competence. At the time they were initiated, in 1985, the workshops targeted elementary and secondary school teachers and administrators, were offered on Apple IIe…

  2. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction, using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA, and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.
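
    The displacement-estimation step described above can be illustrated with a small sketch. This is not the authors' ICA pipeline: it assumes the time-varying reference image has already been computed, and it estimates a per-frame translation by phase correlation, a common stand-in for such registration steps; all names and sizes are illustrative.

```python
import numpy as np

def estimate_shift(reference, frame):
    """Estimate the (row, col) translation of `frame` relative to `reference`
    by phase correlation; the ICA-derived reference image is assumed given."""
    R = np.fft.fft2(frame) * np.conj(np.fft.fft2(reference))
    R /= np.abs(R) + 1e-12                      # keep phase information only
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond half the image size correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Toy usage: a frame shifted by (3, -5) pixels relative to the reference.
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
frame = np.roll(reference, shift=(3, -5), axis=(0, 1))
print(estimate_shift(reference, frame))         # -> (3, -5)
```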

  3. Controlling Light Transmission Through Highly Scattering Media Using Semi-Definite Programming as a Phase Retrieval Computation Method.

    PubMed

    N'Gom, Moussa; Lien, Miao-Bin; Estakhri, Nooshin M; Norris, Theodore B; Michielssen, Eric; Nadakuditi, Raj Rao

    2017-05-31

    Complex Semi-Definite Programming (SDP) is introduced as a novel approach to phase retrieval enabled control of monochromatic light transmission through highly scattering media. In a simple optical setup, a spatial light modulator is used to generate a random sequence of phase-modulated wavefronts, and the resulting intensity speckle patterns in the transmitted light are acquired on a camera. The SDP algorithm allows computation of the complex transmission matrix of the system from this sequence of intensity-only measurements, without need for a reference beam. Once the transmission matrix is determined, optimal wavefronts are computed that focus the incident beam to any position or sequence of positions on the far side of the scattering medium, without the need for any subsequent measurements or wavefront shaping iterations. The number of measurements required and the degree of enhancement of the intensity at focus is determined by the number of pixels controlled by the spatial light modulator.
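
    The underlying idea, recovering a transmission vector from intensity-only measurements via semi-definite programming, can be illustrated with a toy, real-valued PhaseLift-style sketch. The actual work operates on complex fields without a reference beam and uses its own SDP formulation; the sketch below is only a conceptual analogue, and the sizes, data and library choice (cvxpy) are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 8, 60                        # unknown size, number of random probes
t_true = rng.standard_normal(n)     # "transmission" vector to recover
A = rng.standard_normal((m, n))     # random probe patterns (SLM-like)
y = (A @ t_true) ** 2               # intensity-only measurements

# Lift: X = t t^T is PSD and satisfies a_i^T X a_i = y_i; minimizing the trace
# promotes a rank-one solution (the PhaseLift relaxation).
X = cp.Variable((n, n), PSD=True)
constraints = [cp.quad_form(A[i], X) == y[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()

# Recover t (up to a global sign) from the leading eigenvector of X.
w, V = np.linalg.eigh(X.value)
t_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
if np.dot(t_hat, t_true) < 0:
    t_hat = -t_hat
print("relative error:", np.linalg.norm(t_hat - t_true) / np.linalg.norm(t_true))
```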

  4. MS2PIP prediction server: compute and visualize MS2 peak intensity predictions for CID and HCD fragmentation.

    PubMed

    Degroeve, Sven; Maddelein, Davy; Martens, Lennart

    2015-07-01

    We present an MS(2) peak intensity prediction server that computes MS(2) charge 2+ and 3+ spectra from peptide sequences for the most common fragment ions. The server integrates the Unimod public domain post-translational modification database for modified peptides. The prediction model is an improvement of the previously published MS(2)PIP model for Orbitrap-LTQ CID spectra. Predicted MS(2) spectra can be downloaded as a spectrum file and can be visualized in the browser for comparisons with observations. In addition, we added prediction models for HCD fragmentation (Q-Exactive Orbitrap) and show that these models compute accurate intensity predictions on par with CID performance. We also show that training prediction models for CID and HCD separately improves the accuracy for each fragmentation method. The MS(2)PIP prediction server is accessible from http://iomics.ugent.be/ms2pip. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. Exact analytical formulae for linearly distributed vortex and source sheets influence computation in 2D vortex methods

    NASA Astrophysics Data System (ADS)

    Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.

    2017-11-01

    We consider the methodology of numerical schemes development for the two-dimensional vortex method. We describe two different approaches to deriving the integral equation for the unknown vortex sheet intensity. We simulate the velocity of the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume the intensity distributions of the free and attached vortex sheets and the attached source sheet to be approximated with piecewise constant or piecewise linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have a different computational cost. The study shows that a Galerkin-type approach to solving the boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to significantly raise the accuracy of vortex sheet intensity computation and improve the quality of the velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written in invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.
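
    For orientation, the kind of panel integral that the paper evaluates in closed form can also be approximated numerically. The sketch below (not the paper's formulae) computes the velocity induced at a field point by a straight panel carrying a constant-strength vortex sheet using Gauss-Legendre quadrature; the exact analytical expressions derived in the paper replace precisely this kind of quadrature.

```python
import numpy as np

def panel_induced_velocity(p0, p1, gamma, field_point, n_quad=16):
    """Velocity induced at `field_point` by a straight 2D panel carrying a
    vortex sheet of constant strength `gamma`, via Gauss-Legendre quadrature."""
    p0, p1, xp = (np.asarray(p, dtype=float) for p in (p0, p1, field_point))
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    pts = 0.5 * (p0 + p1) + 0.5 * np.outer(nodes, p1 - p0)   # quadrature points on panel
    jac = 0.5 * np.linalg.norm(p1 - p0)                      # arc-length Jacobian
    r = xp - pts                                             # sheet-to-field-point vectors
    r2 = np.einsum("ij,ij->i", r, r)
    kernel = np.column_stack((-r[:, 1], r[:, 0])) / r2[:, None]   # 2D point-vortex kernel
    return gamma / (2.0 * np.pi) * jac * (weights @ kernel)

# Velocity induced at a point half a chord above the middle of a unit panel.
print(panel_induced_velocity((0.0, 0.0), (1.0, 0.0), 1.0, (0.5, 0.5)))
```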

  6. Evaluating open-source cloud computing solutions for geosciences

    NASA Astrophysics Data System (ADS)

    Huang, Qunying; Yang, Chaowei; Liu, Kai; Xia, Jizhe; Xu, Chen; Li, Jing; Gui, Zhipeng; Sun, Min; Li, Zhenglong

    2013-09-01

    Many organizations are starting to adopt cloud computing to make better use of computing resources by taking advantage of its scalability, cost reduction, and easy-to-access characteristics. Many private or community cloud computing platforms are being built using open-source cloud solutions. However, little has been done to systematically compare and evaluate the features and performance of open-source solutions in supporting the geosciences. This paper provides a comprehensive study of three open-source cloud solutions: OpenNebula, Eucalyptus, and CloudStack. We compared a variety of features, capabilities, technologies and performances including: (1) general features and supported services for cloud resource creation and management, (2) advanced capabilities for networking and security, and (3) the performance of the cloud solutions in provisioning and operating the cloud resources as well as the performance of virtual machines initiated and managed by the cloud solutions in supporting selected geoscience applications. Our study found that: (1) there are no significant performance differences in central processing unit (CPU), memory and I/O of virtual machines created and managed by different solutions, (2) OpenNebula has the fastest internal network while both Eucalyptus and CloudStack have better virtual machine isolation and security strategies, (3) CloudStack has the fastest operations in handling virtual machines, images, snapshots, volumes and networking, followed by OpenNebula, and (4) the selected cloud computing solutions are capable of supporting concurrent intensive web applications, computing-intensive applications, and small-scale model simulations without intensive data communication.

  7. Computer-Intensive School Environments and the Reorganization of Knowledge and Learning: A Qualitative Assessment of Apple Computer's Classroom of Tomorrow.

    ERIC Educational Resources Information Center

    Levine, Harold G.

    The Apple Classroom of Tomorrow (ACOT) project is an attempt to alter the instructional premises of a selected group of seven experimental classrooms in the United States by saturating them with computer technology. A recent proposal submitted to Apple Computer described STAR (Sensible Technology Assessment/Research), which includes both…

  8. Randomised controlled trial of a digitally assisted low intensity intervention to promote personal recovery in persisting psychosis: SMART-Therapy study protocol.

    PubMed

    Thomas, Neil; Farhall, John; Foley, Fiona; Rossell, Susan L; Castle, David; Ladd, Emma; Meyer, Denny; Mihalopoulos, Cathrine; Leitan, Nuwan; Nunan, Cassy; Frankish, Rosalie; Smark, Tara; Farnan, Sue; McLeod, Bronte; Sterling, Leon; Murray, Greg; Fossey, Ellie; Brophy, Lisa; Kyrios, Michael

    2016-09-07

    Psychosocial interventions have an important role in promoting recovery in people with persisting psychotic disorders such as schizophrenia. Readily available, digital technology provides a means of developing therapeutic resources for use together by practitioners and mental health service users. As part of the Self-Management and Recovery Technology (SMART) research program, we have developed an online resource providing materials on illness self-management and personal recovery based on the Connectedness-Hope-Identity-Meaning-Empowerment (CHIME) framework. Content is communicated using videos featuring persons with lived experience of psychosis discussing how they have navigated issues in their own recovery. This was developed to be suitable for use on a tablet computer during sessions with a mental health worker to promote discussion about recovery. This is a rater-blinded randomised controlled trial comparing a low intensity recovery intervention of eight one-to-one face-to-face sessions with a mental health worker using the SMART website alongside routine care, versus an eight-session comparison condition, befriending. The recruitment target is 148 participants with a schizophrenia-related disorder or mood disorder with a history of psychosis, recruited from mental health services in Victoria, Australia. Following baseline assessment, participants are randomised to intervention, and complete follow up assessments at 3, 6 and 9 months post-baseline. The primary outcome is personal recovery measured using the Process of Recovery Questionnaire (QPR). Secondary outcomes include positive and negative symptoms assessed with the Positive and Negative Syndrome Scale, subjective experiences of psychosis, emotional symptoms, quality of life and resource use. Mechanisms of change via effects on self-stigma and self-efficacy will be examined. This protocol describes a novel intervention which tests new therapeutic methods including in-session tablet computer use and video-based peer modelling. It also informs a possible low intensity intervention model potentially viable for delivery across the mental health workforce. NCT02474524 , 24 May 2015, retrospectively registered during the recruitment phase.

  9. Measurement of turbulent spatial structure and kinetic energy spectrum by exact temporal-to-spatial mapping

    NASA Astrophysics Data System (ADS)

    Buchhave, Preben; Velte, Clara M.

    2017-08-01

    We present a method for converting a time record of turbulent velocity measured at a point in a flow to a spatial velocity record consisting of consecutive convection elements. The spatial record allows computation of dynamic statistical moments such as turbulent kinetic wavenumber spectra and spatial structure functions in a way that completely bypasses the need for Taylor's hypothesis. The spatial statistics agree with the classical counterparts, such as the total kinetic energy spectrum, at least for spatial extents up to the Taylor microscale. The requirements for applying the method are access to the instantaneous velocity magnitude, in addition to the desired flow quantity, and a high temporal resolution in comparison to the relevant time scales of the flow. We map, without distortion and bias, notoriously difficult developing turbulent high intensity flows using three main aspects that distinguish these measurements from previous work in the field: (1) The measurements are conducted using laser Doppler anemometry and are therefore not contaminated by directional ambiguity (in contrast to, e.g., frequently employed hot-wire anemometers); (2) the measurement data are extracted using a correctly and transparently functioning processor and are analysed using methods derived from first principles to provide unbiased estimates of the velocity statistics; (3) the exact mapping proposed herein has been applied to the high turbulence intensity flows investigated to avoid the significant distortions caused by Taylor's hypothesis. The method is first confirmed to produce the correct statistics using computer simulations and later applied to measurements in some of the most difficult regions of a round turbulent jet—the non-equilibrium developing region and the outermost parts of the developed jet. The proposed mapping is successfully validated using corresponding directly measured spatial statistics in the fully developed jet, even in the difficult outer regions of the jet where the average convection velocity is negligible and turbulence intensities increase dramatically. The measurements in the developing region reveal interesting features of an incomplete Richardson-Kolmogorov cascade under development.
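
    A minimal sketch of the temporal-to-spatial conversion idea follows: each sample is assigned a convection element of length |u|·dt, the cumulative length provides the spatial coordinate, and the record is resampled onto a uniform spatial grid before wavenumber statistics are computed. This illustrates only the mapping step, not the authors' full bias-free estimation procedure, and all numbers are synthetic.

```python
import numpy as np

def temporal_to_spatial(u_component, u_magnitude, dt, n_points=None):
    """Map a time record sampled at interval dt onto a uniformly spaced
    spatial record using the instantaneous velocity magnitude."""
    ds = u_magnitude * dt                            # length of each convection element
    s = np.concatenate(([0.0], np.cumsum(ds)[:-1]))  # spatial coordinate of each sample
    n_points = n_points or len(u_component)
    s_uniform = np.linspace(s[0], s[-1], n_points)
    return s_uniform, np.interp(s_uniform, s, u_component)

# Toy usage: a synthetic record with a fluctuating convection velocity.
dt, n = 1e-3, 4096
t = np.arange(n) * dt
u = 1.0 + 0.5 * np.sin(2 * np.pi * 40 * t) \
    + 0.1 * np.random.default_rng(2).standard_normal(n)
s_uniform, u_spatial = temporal_to_spatial(u, np.abs(u), dt)
spectrum = np.abs(np.fft.rfft(u_spatial - u_spatial.mean())) ** 2   # wavenumber spectrum
```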

  10. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore desirable to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  11. Computer Assistance in Information Work. Part I: Conceptual Framework for Improving the Computer/User Interface in Information Work. Part II: Catalog of Acceleration, Augmentation, and Delegation Functions in Information Work.

    ERIC Educational Resources Information Center

    Paisley, William; Butler, Matilda

    This study of the computer/user interface investigated the role of the computer in performing information tasks that users now perform without computer assistance. Users' perceptual/cognitive processes are to be accelerated or augmented by the computer; a long term goal is to delegate information tasks entirely to the computer. Cybernetic and…

  12. The Computer and Its Functions; How to Communicate with the Computer.

    ERIC Educational Resources Information Center

    Ward, Peggy M.

    A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…

  13. Click! 101 Computer Activities and Art Projects for Kids and Grown-Ups.

    ERIC Educational Resources Information Center

    Bundesen, Lynne; And Others

    This book presents 101 computer activities and projects geared toward children and adults. The activities for both personal computers (PCs) and Macintosh were developed on the Windows 95 computer operating system, but they are adaptable to non-Windows personal computers as well. The book is divided into two parts. The first part provides an…

  14. Computing moment to moment BOLD activation for real-time neurofeedback

    PubMed Central

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
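
    A simplified sketch of the feedback computation described above: fit a GLM of nuisance regressors to the voxel time series acquired so far, predict the newest acquisition, and convert the residual into a z-like statistic. The published method updates the GLM incrementally and scales by the estimator variance; this version refits at every step for clarity, and the regressors and numbers are illustrative.

```python
import numpy as np

def feedback_statistic(signal, design):
    """signal: 1-D voxel/ROI intensities up to and including the current TR.
    design: (len(signal), k) nuisance design matrix (e.g. constant + drift)."""
    past_y, past_X = signal[:-1], design[:-1]
    beta, *_ = np.linalg.lstsq(past_X, past_y, rcond=None)   # GLM fit on past data
    residuals = past_y - past_X @ beta
    sigma = residuals.std(ddof=past_X.shape[1]) + 1e-12
    expected = design[-1] @ beta                # expected intensity at this TR
    return (signal[-1] - expected) / sigma      # scaled residual ~ "activation"

# Toy usage: constant + linear drift regressors, with extra signal at the last TR.
rng = np.random.default_rng(3)
n = 120
drift = np.linspace(0.0, 1.0, n)
y = 100 + 3 * drift + rng.normal(0, 0.5, n)
y[-1] += 2.0                                    # simulated activation
X = np.column_stack((np.ones(n), drift))
print(feedback_statistic(y, X))
```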

  15. On a 3-D singularity element for computation of combined mode stress intensities

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Kathiresan, K.

    1976-01-01

    A special three-dimensional singularity element is developed for the computation of combined modes 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three dimensional linear elastic fracture problems. The finite element method is based on a displacement-hybrid finite element model, based on a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square root singularity in strains and stresses, where the stress-intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved directly along with the unknown nodal displacements.

  16. New variational bounds on convective transport. II. Computations and implications

    NASA Astrophysics Data System (ADS)

    Souza, Andre; Tobasco, Ian; Doering, Charles R.

    2016-11-01

    We study the maximal rate of scalar transport between parallel walls separated by distance h, by an incompressible fluid with scalar diffusion coefficient κ. Given a velocity vector field u with intensity measured by the Péclet number Pe = h²⟨|∇u|²⟩^(1/2)/κ (where ⟨·⟩ is the space-time average), the challenge is to determine the largest enhancement of wall-to-wall scalar flux over purely diffusive transport, i.e., the Nusselt number Nu. Variational formulations of the problem are studied numerically and optimizing flow fields are computed over a range of Pe. Implications of this optimal wall-to-wall transport problem for the classical problem of Rayleigh-Bénard convection are discussed: the maximal scaling Nu ∼ Pe^(2/3) corresponds, via the identity Pe² = Ra(Nu − 1) where Ra is the usual Rayleigh number, to Nu ∼ Ra^(1/2) as Ra → ∞. Supported in part by National Science Foundation Graduate Research Fellowship DGE-0813964, awards OISE-0967140, PHY-1205219, DMS-1311833, and DMS-1515161, and the John Simon Guggenheim Memorial Foundation.
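
    For reference, the scaling argument quoted at the end of the abstract follows in one line (a restatement of the stated relations, not additional material from the paper):

```latex
\mathrm{Nu} \sim \mathrm{Pe}^{2/3} = \bigl[\mathrm{Ra}\,(\mathrm{Nu}-1)\bigr]^{1/3}
\;\Longrightarrow\;
\mathrm{Nu}^{3} \sim \mathrm{Ra}\,(\mathrm{Nu}-1) \sim \mathrm{Ra}\,\mathrm{Nu}
\;\Longrightarrow\;
\mathrm{Nu} \sim \mathrm{Ra}^{1/2} \quad (\mathrm{Ra}\to\infty).
```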

  17. Homo heuristicus: why biased minds make better inferences.

    PubMed

    Gigerenzer, Gerd; Brighton, Henry

    2009-01-01

    Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: (a) the discovery of less-is-more effects; (b) the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; (c) an advancement from vague labels to computational models of heuristics; (d) the development of a systematic theory of heuristics that identifies their building blocks and the evolved capacities they exploit, and views the cognitive system as relying on an "adaptive toolbox;" and (e) the development of an empirical methodology that accounts for individual differences, conducts competitive tests, and has provided evidence for people's adaptive use of heuristics. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies. Copyright © 2009 Cognitive Science Society, Inc.

  18. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    DOE PAGES

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...

    2017-02-28

    In this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations were conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For the simulation cases, it was found that for accurate RTD prediction, the Johnson and Jackson partial-slip solids boundary condition was required for all models, and a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.

  19. Oxidoreductive capability of boar sperm mitochondria in fresh semen and during their preservation in BTS extender.

    PubMed

    Gaczarzewicz, Dariusz; Piasecka, Małgorzata; Udała, Jan; Błaszczyk, Barbara; Laszczyńska, Maria; Kram, Andrzej

    2003-07-01

    The purpose of our study was to determine the effect of dilution and liquid preservation of boar sperm on the oxidoreductive capability of their mitochondria. The semen was diluted with BTS extender produced from water purified by distillation or by reverse osmosis. The spermatozoa were stored over a four-day period at 16-18 degrees C. The function of sperm mitochondria was assessed using the screening cytochemical test for NADH-dependent oxidoreductases (diaphorase/NADH, related to flavoprotein). Morphological assessment of the cytochemical reaction was carried out using a light microscope. The intensity of the reaction was evaluated by means of a computer image analysing system (Quantimet 600S), measuring the integrated optical density (IOD) and mean optical density (MOD) of the reaction product (formazans) occurring in the sperm midpieces. In the non-diluted semen, an intensive cytochemical reaction throughout the length of the sperm midpiece was observed. Furthermore, spermatozoa with the intensive reaction displayed high optical density values. After dilution of the semen with the two variants of experimental extender, and as the preservation time elapsed, the cytochemical reaction was less intensive. Moreover, the absence of formazan deposits in various parts of the sperm midpiece was also noted. These morphological features corresponded to low values of optical density. These findings suggest that the dilution of semen and the time of sperm preservation may be critical factors that handicap the energy metabolism of sperm mitochondria. The type of water used in preparing BTS extender does not have any significant effect on the oxidoreductive capability of boar sperm mitochondria.

  20. The dynamics of color signals in male threespine sticklebacks Gasterosteus aculeatus

    PubMed Central

    Hiermes, Meike

    2016-01-01

    Body coloration and color patterns are ubiquitous throughout the animal kingdom and vary between and within species. Recent studies have dealt with individual dynamics of various aspects of coloration, as it is in many cases a flexible trait and changes in color expression may be context-dependent. During the reproductive phase, temporal changes of coloration in the visible spectral range (400–700 nm) have been shown for many animals but corresponding changes in the ultraviolet (UV) waveband (300–400 nm) have rarely been studied. Threespine stickleback Gasterosteus aculeatus males develop conspicuous orange–red breeding coloration combined with UV reflectance in the cheek region. We investigated dynamics of color patterns including UV throughout a male breeding cycle, as well as short-term changes in coloration in response to a computer-animated rival using reflectance spectrophotometry and visual modeling, to estimate how colors would be perceived by conspecifics. We found the orange–red component of coloration to vary during the breeding cycle with respect to hue (theta/R50) and intensity (achieved chroma/red chroma). Furthermore, color intensity in the orange–red spectral part (achieved chroma) tended to be increased after the presentation of an artificial rival. Dynamic changes in specific measures of hue and intensity in the UV waveband were not found. In general, the orange–red component of the signal seems to be dynamic with respect to color intensity and hue. This accounts in particular for color changes during the breeding cycle, presumably to signal reproductive status, and with limitations as well in the intrasexual context, most likely to signal dominance or inferiority. PMID:29491887

  1. The dynamics of color signals in male threespine sticklebacks Gasterosteus aculeatus.

    PubMed

    Hiermes, Meike; Rick, Ingolf P; Mehlis, Marion; Bakker, Theo C M

    2016-02-01

    Body coloration and color patterns are ubiquitous throughout the animal kingdom and vary between and within species. Recent studies have dealt with individual dynamics of various aspects of coloration, as it is in many cases a flexible trait and changes in color expression may be context-dependent. During the reproductive phase, temporal changes of coloration in the visible spectral range (400-700 nm) have been shown for many animals but corresponding changes in the ultraviolet (UV) waveband (300-400 nm) have rarely been studied. Threespine stickleback Gasterosteus aculeatus males develop conspicuous orange-red breeding coloration combined with UV reflectance in the cheek region. We investigated dynamics of color patterns including UV throughout a male breeding cycle, as well as short-term changes in coloration in response to a computer-animated rival using reflectance spectrophotometry and visual modeling, to estimate how colors would be perceived by conspecifics. We found the orange-red component of coloration to vary during the breeding cycle with respect to hue (theta/R50) and intensity (achieved chroma/red chroma). Furthermore, color intensity in the orange-red spectral part (achieved chroma) tended to be increased after the presentation of an artificial rival. Dynamic changes in specific measures of hue and intensity in the UV waveband were not found. In general, the orange-red component of the signal seems to be dynamic with respect to color intensity and hue. This accounts in particular for color changes during the breeding cycle, presumably to signal reproductive status, and with limitations as well in the intrasexual context, most likely to signal dominance or inferiority.

  2. Today's Personal Computers: Products for Every Need--Part II.

    ERIC Educational Resources Information Center

    Personal Computing, 1981

    1981-01-01

    Looks at microcomputers manufactured by Altos Computer Systems, Cromemco, Exidy, Intelligent Systems, Intertec Data Systems, Mattel, Nippon Electronics, Northstar, Personal Micro Computers, and Sinclair. (Part I of this article, examining other computers, appeared in the May 1981 issue.) Journal availability: Hayden Publishing Company, 50 Essex…

  3. The dipole moment surface for hydrogen sulfide H2S

    NASA Astrophysics Data System (ADS)

    Azzam, Ala`a. A. A.; Lodi, Lorenzo; Yurchenko, Sergey N.; Tennyson, Jonathan

    2015-08-01

    In this work we perform a systematic ab initio study of the dipole moment surface (DMS) of H2S at various levels of theory and of its effect on the intensities of vibration-rotation transitions; H2S intensities are known from the experiment to display anomalies which have so far been difficult to reproduce by theoretical calculations. We use the transition intensities from the HITRAN database of 14 vibrational bands for our comparisons. The intensities of all fundamental bands show strong sensitivity to the ab initio method used for constructing the DMS while hot, overtone and combination bands up to 4000 cm-1 do not. The core-correlation and relativistic effects are found to be important for computed line intensities, for instance affecting the most intense fundamental band (ν2) by about 20%. Our recommended DMS, called ALYT2, is based on the CCSD(T)/aug-cc-pV(6+d)Z level of theory supplemented by a core-correlation/relativistic corrective surface obtained at the CCSD[T]/aug-cc-pCV5Z-DK level. The corresponding computed intensities agree significantly better (to within 10%) with experimental data taken directly from original papers. Worse agreement (differences of about 25%) is found for those HITRAN intensities obtained from fitted effective dipole models, suggesting the presence of underlying problems in those fits.

  4. A Set of Computer Projects for an Electromagnetic Fields Class.

    ERIC Educational Resources Information Center

    Gleeson, Ronald F.

    1989-01-01

    Presented are three computer projects: vector analysis, electric field intensities at various distances, and the Biot-Savart law. Programing suggestions and project results are provided. One month is suggested for each project. (MVL)
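
    As a flavour of the kind of exercise the digest describes (electric field intensity at various distances), here is a minimal sketch; it is not taken from the digest itself, and the charge and distances are arbitrary.

```python
import numpy as np

EPS0 = 8.8541878128e-12                    # vacuum permittivity, F/m
q = 1e-6                                   # 1 microcoulomb point charge
r = np.array([0.1, 0.5, 1.0, 2.0, 5.0])    # distances in metres

E = q / (4 * np.pi * EPS0 * r ** 2)        # Coulomb field magnitude, V/m
for ri, Ei in zip(r, E):
    print(f"r = {ri:4.1f} m   E = {Ei:12.2f} V/m")
```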

  5. Space Spurred Computer Graphics

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record CAD (computer aided design) and CAM (computer aided manufacturing) equipment, to update maps and produce computer generated animation.

  6. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  7. Geometry Modeling and Grid Generation for Design and Optimization

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1998-01-01

    Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.

  8. Point spread function based classification of regions for linear digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Israni, Kenny; Avinash, Gopal; Li, Baojun

    2007-03-01

    In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered linear tomosynthesis geometry and simple back projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human observer classified regions as gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77+/-8.44% and 91+/-4.13% respectively (t=-0.64, p=0.56, DF=4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.
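
    The classification rules summarized above can be rendered schematically as follows. The thresholds and tolerances are not given in the abstract, so the ones below are placeholders, and the function name is invented for illustration.

```python
import numpy as np

def classify_pixel(profile, pixel_intensity,
                   bg_thresh=0.1, flat_tol=0.05, match_tol=0.02):
    """profile: PSF-weighted intensity profile across slices for one pixel.
    pixel_intensity: PSF-weighted intensity of the pixel in its own slice."""
    profile = np.asarray(profile, dtype=float)
    peak = profile.max()
    is_flat = (peak - profile.min()) < flat_tol * max(peak, 1e-12)
    if is_flat and peak < bg_thresh:
        return "background"            # flat profile, relatively low maximum
    if is_flat:
        return "low-frequency"         # flat profile, relatively high maximum
    if abs(peak - pixel_intensity) <= match_tol * peak:
        return "in-focus"              # maximum coincides with the pixel itself
    return "out-of-focus"              # pixel intensity below the profile maximum
```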

  9. Evaluation of the infrared test method for the olympus thermal balance tests

    NASA Technical Reports Server (NTRS)

    Donato, M.; Stpierre, D.; Green, J.; Reeves, M.

    1986-01-01

    The performance of the infrared (IR) rig used for the thermal balance testing of the Olympus S/C thermal model is discussed. Included in this evaluation are the rig effects themselves, the IRFLUX computer code used to predict the radiation inputs, the Monitored Background Radiometers (MBR's) developed to measure the absorbed radiation flux intensity, the Uniform Temperature Reference (UTR) based temperature measurement system and the data acquisition system. A preliminary set of verification tests were performed on a 1 m x 1 m zone to assess the performance of the IR lamps, calrods, MBR's and aluminized baffles. The results were used, in part, to obtain some empirical data required for the IRFLUX code. This data included lamp and calrod characteristics, the absorptance function for various surface types, and the baffle reflectivities.

  10. A Numerical Simulation and Statistical Modeling of High Intensity Radiated Fields Experiment Data

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.

    2004-01-01

    Tests are conducted on a quad-redundant fault-tolerant flight control computer to establish the upset characteristics of an avionics system in an electromagnetic field. A numerical simulation and a statistical model are described in this work to analyze the open-loop experiment data collected in the reverberation chamber at NASA LaRC as part of an effort to examine the effects of electromagnetic interference on fly-by-wire aircraft control systems. By comparing thousands of simulation and model outputs, the models that best describe the data are first identified, and then a systematic statistical analysis is performed on the data. All of these efforts culminate in an extrapolation of values that are in turn used to support previous efforts in evaluating the data.

  11. A Semantic Based Policy Management Framework for Cloud Computing Environments

    ERIC Educational Resources Information Center

    Takabi, Hassan

    2013-01-01

    Cloud computing paradigm has gained tremendous momentum and generated intensive interest. Although security issues are delaying its fast adoption, cloud computing is an unstoppable force and we need to provide security mechanisms to ensure its secure adoption. In this dissertation, we mainly focus on issues related to policy management and access…

  12.  The application of computational chemistry to lignin

    Treesearch

    Thomas Elder; Laura Berstis; Nele Sophie Zwirchmayr; Gregg T. Beckham; Michael F. Crowley

    2017-01-01

    Computational chemical methods have become an important technique in the examination of the structure and reactivity of lignin. The calculations can be based either on classical or quantum mechanics, with concomitant differences in computational intensity and size restrictions. The current paper will concentrate on results developed from the latter type of calculations...

  13. A Comparison of Student Perceptions of Their Computer Skills to Their Actual Abilities

    ERIC Educational Resources Information Center

    Grant, Donna M.; Malloy, Alisha D.; Murphy, Marianne C.

    2009-01-01

    In this technology intensive society, most students are required to be proficient in computer skills to compete in today's global job market. These computer skills usually consist of basic to advanced knowledge in word processing, presentation, and spreadsheet applications. In many U.S. states, students are required to demonstrate computer…

  14. Gender Differences in Computer Attitudes and the Choice of Technology-Related Occupations in a Sample of Secondary Students in Spain

    ERIC Educational Resources Information Center

    Sainz, Milagros; Lopez-Saez, Mercedes

    2010-01-01

    The dearth of women in technology and ICT-related fields continues to be a topic of interest for both the scientific community and decision-makers. Research on attitudes towards computers proves that women display more negative computer attitudes than men and also make less intense use of technology and computers than their male counterparts. For…

  15. Hot prominence detected in the core of a coronal mass ejection. II. Analysis of the C III line detected by SOHO/UVCS

    NASA Astrophysics Data System (ADS)

    Jejčič, S.; Susino, R.; Heinzel, P.; Dzifčáková, E.; Bemporad, A.; Anzer, U.

    2017-11-01

    Context. We study the physics of erupting prominences in the core of coronal mass ejections (CMEs) and present a continuation of a previous analysis. Aims: We determine the kinetic temperature and microturbulent velocity of an erupting prominence embedded in the core of a CME that occurred on August 2, 2000 using the Ultraviolet Coronagraph and Spectrometer observations (UVCS) on board the Solar and Heliospheric Observatory (SOHO) simultaneously in the hydrogen Lα and C III lines. We develop the non-LTE (departures from the local thermodynamic equilibrium - LTE) spectral diagnostics based on Lα and Lβ measured integrated intensities to derive other physical quantities of the hot erupting prominence. Based on this, we synthesize the C III line intensity to compare it with observations. Methods: Our method is based on non-LTE modeling of eruptive prominences. We used a general non-LTE radiative-transfer code only for optically thin prominence points because optically thick points do not allow the direct determination of the kinetic temperature and microturbulence from the line profiles. The input parameters of the code were the kinetic temperature and microturbulent velocity derived from the Lα and C III line widths, as well as the integrated intensity of the Lα and Lβ lines. The code runs in three loops to compute the radial flow velocity, electron density, and effective thickness as the best fit to the Lα and Lβ integrated intensities within the accuracy defined by the absolute radiometric calibration of UVCS data. Results: We analyzed 39 observational points along the whole erupting prominence because for these points we found a solution for the kinetic temperature and microturbulent velocity. For these points we ran the non-LTE code to determine best-fit models. All models with τ0(Lα) ≤ 0.3 and τ0(C III) ≤ 0.3 were analyzed further, for which we computed the integrated intensity of the C III line using a two-level atom. The best agreement between computed and observed integrated intensity led to 30 optically thin points along the prominence. The results are presented as histograms of the kinetic temperature, microturbulent velocity, effective thickness, radial flow velocity, electron density, and gas pressure. We also show the relation between the microturbulence and kinetic temperature together with a scatter plot of computed versus observed C III integrated intensities and the ratio of the computed to observed C III integrated intensities versus kinetic temperature. Conclusions: The erupting prominence embedded in the CME is relatively hot with a low electron density, a wide range of effective thicknesses, a rather narrow range of radial flow velocities, and a microturbulence of about 25 km s-1. This analysis shows a disagreement between observed and synthetic intensities of the C III line, the reason for which most probably is that photoionization is neglected in calculations of the ionization equilibrium. Alternatively, the disagreement might be due to non-equilibrium processes.

  16. Personalized, Shareable Geoscience Dataspaces For Simplifying Data Management and Improving Reproducibility

    NASA Astrophysics Data System (ADS)

    Malik, T.; Foster, I.; Goodall, J. L.; Peckham, S. D.; Baker, J. B. H.; Gurnis, M.

    2015-12-01

    Research activities are iterative, collaborative, and now data- and compute-intensive. Such research activities mean that even the many researchers who work in small laboratories must often create, acquire, manage, and manipulate much diverse data and keep track of complex software. They face difficult data and software management challenges, and data sharing and reproducibility are neglected. There is significant federal investment in powerful cyberinfrastructure, in part to lessen the burden associated with modern data- and compute-intensive research. Similarly, geoscience communities are establishing research repositories to facilitate data preservation. Yet we observe that a large fraction of the geoscience community continues to struggle with data and software management. The reason, studies suggest, is not lack of awareness but rather that tools do not adequately support time-consuming data life cycle activities. Through the NSF/EarthCube-funded GeoDataspace project, we are building personalized, shareable dataspaces that help scientists connect their individual or research group efforts with the community at large. The dataspaces provide a lightweight multiplatform research data management system with tools for recording research activities in what we call geounits, so that a geoscientist can at any time snapshot and preserve, both for their own use and to share with the community, all data and code required to understand and reproduce a study. A software-as-a-service (SaaS) deployment model enhances usability of the core components and integration with widely used software systems. In this talk we will present the open-source GeoDataspace project and demonstrate how it is enabling reproducibility across the geoscience domains of hydrology, space science, and modeling toolkits.

  17. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...

  18. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...

  19. All-optical computation system for solving differential equations based on optical intensity differentiator.

    PubMed

    Tan, Sisi; Wu, Zhao; Lei, Lei; Hu, Shoujin; Dong, Jianji; Zhang, Xinliang

    2013-03-25

    We propose and experimentally demonstrate an all-optical differentiator-based computation system for solving constant-coefficient first-order linear ordinary differential equations. It consists of an all-optical intensity differentiator and a wavelength converter, both based on a semiconductor optical amplifier (SOA) and an optical filter (OF). The equation is solved for various values of the constant coefficient and for two input waveforms, namely, super-Gaussian and Gaussian signals. Excellent agreement between the numerical simulation and the experimental results is obtained.

  20. Colour based fire detection method with temporal intensity variation filtration

    NASA Astrophysics Data System (ADS)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within that project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not sufficient to detect fire reliably, mainly because the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels is therefore used to separate such objects from fire, with the variations averaged over a series of several frames. The algorithm works robustly and was implemented as a computer program using the OpenCV library.
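
    A rough sketch of the two-stage idea described above follows: a colour rule selects fire-coloured pixels, and a temporal intensity-variation filter averaged over several frames rejects static fire-coloured objects. The specific colour rule, the thresholds and the file name are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def fire_mask(frames, r_min=180, var_min=15.0):
    """frames: sequence of consecutive BGR frames (uint8) of equal size."""
    frames = np.asarray(frames)
    last = frames[-1].astype(np.int16)
    b, g, r = last[..., 0], last[..., 1], last[..., 2]
    colour = (r > r_min) & (r > g) & (g > b)                 # simple R > G > B rule
    gray = frames.mean(axis=-1)                              # per-frame intensity
    variation = np.abs(np.diff(gray, axis=0)).mean(axis=0)   # averaged over frames
    return (colour & (variation > var_min)).astype(np.uint8) * 255

# Toy usage on a video file (the path is a placeholder).
cap = cv2.VideoCapture("candidate_video.avi")
buffer = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    buffer.append(frame)
    if len(buffer) == 8:                 # sliding window of 8 frames
        mask = fire_mask(buffer)
        print("fire-like pixels:", int(mask.sum() // 255))
        buffer.pop(0)
cap.release()
```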

  1. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilerto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.

  2. Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound

    NASA Astrophysics Data System (ADS)

    Omidi, Nazanin

    In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates for acoustic pressures and intensities present in vivo during those experimental exposures by estimating them using nonlinear rather than linear theory. In our current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f1-transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.

  3. Importance of selecting archaeomagnetic data for geomagnetic modelling: example of the new Western Europe directional and intensity secular variation curves from 1500 BC to 200 AD

    NASA Astrophysics Data System (ADS)

    Herve, Gwenael; Chauvin, Annick; Lanos, Philippe

    2014-05-01

    At the regional scale, the dispersion between archaeomagnetic data, and especially archaeointensities, suggests that some of them may be biased. As a consequence, it appears necessary to perform a selection of the available data before computing mean regional secular variation curves or geomagnetic models. However, the definition of suitable selection criteria is not obvious, and we need to know how to manage "old" data acquired during the 60-70s. The Western Europe directional and intensity data set from 1500 BC to 200 AD allows us to discuss these issues. It has recently been enhanced by 39 new archaeodirections and 23 new archaeointensities (Hervé et al., 2013a and 2013b data sets and 5 unpublished data). First, the whole Western Europe data set was selected, but the strong dispersion restricted the accuracy and the reliability of the new Western Europe secular variation curves at Paris. The causes of the dispersion appear different for archaeodirections and archaeointensities. In the directional data set, the main problem comes from age errors in some of the oldest published data: since their publication, their archaeological dating may have changed by 50 years or more. For the intensity data, which were acquired much more recently, the dispersion mainly results from the use of unreliable archaeointensity protocols. We propose a weighting approach based on the number of specimens and the use of pTRM-checks, anisotropy and cooling rate corrections. Only 63% of the available archaeodirections and 32% of the archaeointensities were used to build the new Western Europe secular variation curves from 1500 BC to 200 AD. These curves reveal that selecting the reference data avoids wrong estimations of the shape of the secular variation curves, the secular variation rate, the dating of archaeomagnetic jerks... Finally, it is worth pointing out that current global geomagnetic models take into account almost all of the data that we decided to reject, which could partly explain why their predictions at Paris do not fit our local secular variation curves. Hervé, G., Chauvin, A. & Lanos, P., 2013a. Geomagnetic field variations in Western Europe from 1500BC to 200AD. Part I: Directional secular variation curve, Phys. Earth Planet. Inter., 218, 1-13. Hervé, G., Chauvin, A. & Lanos, P., 2013b. Geomagnetic field variations in Western Europe from 1500BC to 200AD. Part II: New intensity secular variation curve, Phys. Earth Planet. Inter., 218, 51-65.

  4. Wind velocity profile reconstruction from intensity fluctuations of a plane wave propagating in a turbulent atmosphere.

    PubMed

    Banakh, V A; Marakasov, D A

    2007-08-01

    Reconstruction of a wind profile based on the statistics of plane-wave intensity fluctuations in a turbulent atmosphere is considered. The algorithm for wind profile retrieval from the spatiotemporal spectrum of plane-wave weak intensity fluctuations is described, and the results of end-to-end computer experiments on wind profiling based on the developed algorithm are presented. It is shown that the reconstructing algorithm allows retrieval of a wind profile from turbulent plane-wave intensity fluctuations with acceptable accuracy.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepke, Scott M.

    In this note, the laser focal-plane intensity profile for a beam modeled using the 3D ray-trace package in HYDRA is determined. First, the analytical model is developed, followed by a practical numerical model for evaluating the resulting computationally intensive normalization factor for all possible input parameters.

  6. Simulation Needs and Priorities of the Fermilab Intensity Frontier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elvira, V. D.; Genser, K. L.; Hatcher, R.

    2015-06-11

    Over a two-year period, the Physics and Detector Simulations (PDS) group of the Fermilab Scientific Computing Division (SCD) collected information from Fermilab Intensity Frontier experiments on their simulation needs and concerns. The process and results of these activities are documented here.

  7. Advances in medical image computing.

    PubMed

    Tolxdorff, T; Deserno, T M; Handels, H; Meinzer, H-P

    2009-01-01

    Medical image computing has become a key technology in high-tech applications in medicine and a ubiquitous part of modern imaging systems and the related processes of clinical diagnosis and intervention. Over the past years significant progress has been made in the field, at both the methodological and the application level. Despite this progress, there are still big challenges to meet in order to establish image processing routinely in health care. In this issue, selected contributions of the German Conference on Medical Image Processing (BVM) are assembled to present the latest advances in the field of medical image computing. The winners of the scientific awards of the German Conference on Medical Image Processing (BVM) 2008 were invited to submit a manuscript on their latest developments and results for possible publication in Methods of Information in Medicine. Finally, seven excellent papers were selected to describe important aspects of recent advances in the field of medical image processing. The selected papers give an impression of the breadth and heterogeneity of new developments. New methods for improved image segmentation, non-linear image registration and modeling of organs are presented together with applications of image analysis methods in different medical disciplines. Furthermore, state-of-the-art tools and techniques to support the development and evaluation of medical image processing systems in practice are described. The selected articles describe different aspects of the intense development in medical image computing. The image processing methods presented enable new insights into the patient's image data and have the potential to improve medical diagnostics and patient treatment.

  8. Numerical developments for short-pulsed Near Infra-Red laser spectroscopy. Part I: direct treatment

    NASA Astrophysics Data System (ADS)

    Boulanger, Joan; Charette, André

    2005-03-01

    This two-part study is devoted to the numerical treatment of short-pulsed near-infrared laser spectroscopy. The overall goal is to address the possibility of numerical inverse treatment based on a recently developed direct model that solves the transient radiative transfer equation. This model has been constructed to incorporate the latest improvements in short-pulsed laser interaction with semi-transparent media; it combines a discrete-ordinates computation of the implicit source term appearing in the radiative transfer equation with an explicit treatment of the transport of the light intensity using advection schemes, a method encountered in reactive flow dynamics. The incident collimated beam is treated analytically through the Bouguer-Beer-Lambert extinction law. In this first part, the direct model is extended to fully non-homogeneous materials and tested with two different spatial schemes in order to be adapted to the inversion methods presented in the second part. First, the fundamental methods and schemes used in the direct model are presented. Then, tests are conducted by comparison with numerical simulations given as references. In a third and last part, multi-dimensional extensions of the code are provided. This allows the presentation of numerical results for short-pulse propagation in 1, 2 and 3D homogeneous and non-homogeneous materials, together with parametric studies on medium properties and pulse shape. For comparison, an integral method adapted to non-homogeneous media irradiated by a pulsed laser beam is also developed for the 3D case.
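
    The analytical treatment of the collimated beam mentioned above follows the Bouguer-Beer-Lambert extinction law; a minimal numerical sketch of that exponential attenuation through a non-homogeneous slab (grid spacing and coefficient values are illustrative):

```python
import numpy as np

# Bouguer-Beer-Lambert extinction of a collimated beam through a non-homogeneous
# slab: I(s) = I0 * exp(-integral of beta(s') ds'), discretized with the
# trapezoidal rule. beta = kappa + sigma_s is the extinction coefficient [1/m].
def collimated_intensity(I0, beta, ds):
    optical_depth = np.concatenate(([0.0], np.cumsum(0.5 * (beta[1:] + beta[:-1]) * ds)))
    return I0 * np.exp(-optical_depth)

s = np.linspace(0.0, 0.01, 101)           # 1 cm slab, 100 intervals
beta = 200.0 + 300.0 * s / s[-1]          # linearly varying extinction [1/m]
I = collimated_intensity(1.0, beta, s[1] - s[0])
print(I[-1])   # transmitted fraction at the back face
```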

  9. M = +1, ± 1 and ± 2 mode helicon wave excitation.

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Yun, S.-M.; Chang, H.-Y.

    1996-11-01

    The characteristics of the M = +1, ±1 and ±2 helicon wave modes, excited using a solenoid antenna, a Nagoya type III antenna and a quadrupole antenna, respectively, are investigated for the first time. The solenoid antenna is constructed by winding a copper cable on a quartz discharge tube. Two-dimensional cross-field measurements of Ar II optical emission induced by hot electrons are made to investigate RF power deposition. Components of the wave magnetic field measured with a single-turn, coaxial magnetic probe were compared with the field patterns computed for the M = +1, ±1 and ±2 modes. The M = +1 mode plasma produced by the solenoid antenna has a cylindrical high-intensity plasma column whose center is empty. This cylindrical high-intensity column results from the rotation of the cross-sectional electric field pattern (right-hand circular polarization). The radial plasma density profile has a peak at r = 2.5 cm with axisymmetry. The radial profile of the plasma density is found to be in good agreement with the computed power deposition profile, and the radial profiles of the wave magnetic field are in good agreement with computations. The plasma excited by the Nagoya type III antenna has two high-intensity columns, which result from the linear combination of the M = +1 and -1 modes (i.e. plane polarization). The radial plasma density profile is in good agreement with the emission intensity profile of the Ar II line (488 nm). The plasma excited by the quadrupole antenna has four high-intensity columns, which result from the linear combination of the M = +2 and -2 modes (i.e. plane polarization). In the M = ±2 modes, the radial plasma density profile is also in good agreement with the emission intensity profile of the Ar II line.

  10. Protofit: A program for determining surface protonation constants from titration data

    NASA Astrophysics Data System (ADS)

    Turner, Benjamin F.; Fein, Jeremy B.

    2006-11-01

    Determining the surface protonation behavior of natural adsorbents is essential to understanding how they interact with their environments. ProtoFit is a tool for the analysis of acid-base titration data and the optimization of surface protonation models. The program offers a number of useful features: (1) it enables visualization of adsorbent buffering behavior; (2) it uses an optimization approach that is independent of starting titration conditions or initial surface charge; (3) it does not require an initial surface charge to be defined or treated as an optimizable parameter; (4) it includes an error analysis intrinsically as part of the computational methods; and (5) it generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data are reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving the protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum-of-squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
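
    A minimal sketch of the central computation described above, i.e. estimating the buffering intensity as the slope of the reduced titration curve and fitting a one-site protonation model to it by least squares; the synthetic data, model form and names are illustrative and do not reproduce ProtoFit's actual code:

```python
import numpy as np
from scipy.optimize import least_squares

# Reduced titration curve: protons exchanged with the surface per gram of
# adsorbent as a function of pH (synthetic example, not real data).
pH = np.linspace(3.0, 10.0, 50)
q_ads = 1e-4 * np.tanh(1.5 * (6.5 - pH))        # mol/g

# Buffering intensity Q* as the (negative) instantaneous slope of q_ads vs pH.
Q_star = -np.gradient(q_ads, pH)

# One-site surface protonation model, >SOH <-> >SO- + H+, with constant Ka and
# total site density n_sites; its predicted buffering intensity is
# ln(10) * n_sites * theta * (1 - theta), where theta is the protonated fraction.
def model_Q(params, pH):
    log_Ka, n_sites = params
    theta = 1.0 / (1.0 + 10.0 ** (log_Ka + pH))   # fraction of protonated sites
    return np.log(10.0) * n_sites * theta * (1.0 - theta)

fit = least_squares(lambda p: model_Q(p, pH) - Q_star, x0=[-6.5, 2e-4])
print(fit.x)   # optimized log Ka and site density
```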

  11. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capability does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe the system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization of structural models and assembly sequences using virtual reality techniques; the software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  12. The fluid dynamics of canine olfaction: unique nasal airflow patterns as an explanation of macrosmia

    PubMed Central

    Craven, Brent A.; Paterson, Eric G.; Settles, Gary S.

    2010-01-01

    The canine nasal cavity contains hundreds of millions of sensory neurons, located in the olfactory epithelium that lines convoluted nasal turbinates recessed in the rear of the nose. Traditional explanations for canine olfactory acuity, which include large sensory organ size and receptor gene repertoire, overlook the fluid dynamics of odorant transport during sniffing. But odorant transport to the sensory part of the nose is the first critical step in olfaction. Here we report new experimental data on canine sniffing and demonstrate allometric scaling of sniff frequency, inspiratory airflow rate and tidal volume with body mass. Next, a computational fluid dynamics simulation of airflow in an anatomically accurate three-dimensional model of the canine nasal cavity, reconstructed from high-resolution magnetic resonance imaging scans, reveals that, during sniffing, spatially separate odour samples are acquired by each nostril that may be used for bilateral stimulus intensity comparison and odour source localization. Inside the nose, the computation shows that a unique nasal airflow pattern develops during sniffing, which is optimized for odorant transport to the olfactory part of the nose. These results contrast sharply with nasal airflow in the human. We propose that mammalian olfactory function and acuity may largely depend on odorant transport by nasal airflow patterns resulting from either the presence of a highly developed olfactory recess (in macrosmats such as the canine) or the lack of one (in microsmats including humans). PMID:20007171

  13. ICAN/PART: Particulate composite analyzer, user's manual and verification studies

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Murthy, Pappu L. N.; Mital, Subodh K.

    1996-01-01

    A methodology for predicting the equivalent properties and constituent microstresses of particulate matrix composites, based on the micromechanics approach, is developed. These equations are integrated into a computer code, originally developed to predict the equivalent properties and microstresses of fiber-reinforced polymer matrix composites, to form a new computer code, ICAN/PART. Details of the flowchart, input and output for ICAN/PART are described, along with examples of the input and output. Only the differences between ICAN/PART and the original ICAN code are described in detail, and the user is assumed to be familiar with the structure and usage of the original ICAN code. Detailed verification studies, utilizing finite element and boundary element analyses, are conducted in order to verify that the micromechanics methodology accurately models the mechanics of particulate matrix composites. The equivalent properties computed by ICAN/PART fall within the bounds established by the finite element and boundary element results. Furthermore, the constituent microstresses computed by ICAN/PART agree in an average sense with results computed using the finite element method. The verification studies indicate that the micromechanics programmed into ICAN/PART do indeed accurately model the mechanics of particulate matrix composites.
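
    As a generic illustration of micromechanics homogenization (not the relations actually programmed into ICAN/PART), the classical Voigt and Reuss mixture rules bound the equivalent modulus of a particle/matrix composite from constituent properties and volume fractions:

```python
# Generic Voigt (iso-strain) and Reuss (iso-stress) bounds on the equivalent
# Young's modulus of a particle/matrix mixture. These classical bounds only
# illustrate the flavor of micromechanics homogenization; they are not the
# equations implemented in ICAN/PART.
def voigt_reuss_bounds(E_particle, E_matrix, v_particle):
    v_matrix = 1.0 - v_particle
    E_voigt = v_particle * E_particle + v_matrix * E_matrix            # upper bound
    E_reuss = 1.0 / (v_particle / E_particle + v_matrix / E_matrix)    # lower bound
    return E_reuss, E_voigt

# Example: stiff particles (70 GPa) in a polymer matrix (3.5 GPa), 30% by volume.
print(voigt_reuss_bounds(E_particle=70e9, E_matrix=3.5e9, v_particle=0.3))
```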

  14. TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Robinson, A; McNutt, T

    2015-06-15

    Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules, including DIR and dose computation, have been implemented on a graphics processing unit (GPU). All required patient-specific data, including the planning CT (pCT) with contours, daily cone-beam CTs (CBCTs), and the treatment plan, are automatically queried and retrieved from their own databases. To improve the accuracy of DIR between pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of the daily CBCT is then obtained by propagating contours from the pCT. The daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures, and our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction was around one minute (30-50 seconds for registration and 15-25 seconds for dose computation). Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
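
    The NMI and NCC similarity measures reported above have standard definitions; a small sketch of both for a pair of images, with NMI taken here as (H(A)+H(B))/H(A,B), one common convention that may differ in detail from the authors' implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def nmi(a, b, bins=64):
    """Normalized mutual information, (H(A)+H(B))/H(A,B) convention."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy

rng = np.random.default_rng(0)
img = rng.random((128, 128))
print(ncc(img, img), nmi(img, img))   # identical images: NCC ~ 1, NMI ~ 2
```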

  15. Task Assignment Heuristics for Distributed CFD Applications

    NASA Technical Reports Server (NTRS)

    Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    CFD applications require high-performance computational platforms: (1) complex physics and domain configurations demand strongly coupled solutions; (2) applications are CPU- and memory-intensive; and (3) huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.

  16. A new procedure for investigating three-dimensional stress fields in a thin plate with a through-the-thickness crack

    NASA Astrophysics Data System (ADS)

    Yi, Dake; Wang, TzuChiang

    2018-06-01

    In this paper, a new procedure is proposed to investigate three-dimensional fracture problems of a thin elastic plate with a long through-the-thickness crack under remote uniform tensile loading. The new procedure combines a new analytical method with high-accuracy finite element simulations. In the theoretical analysis, three-dimensional Maxwell stress functions are employed in order to derive the three-dimensional crack-tip fields. Based on the theoretical analysis, an equation is first derived that describes the relationship among the three-dimensional J-integral J(z), the stress intensity factor K(z) and the tri-axial stress constraint level Tz(z). In the finite element simulations, a fine mesh of 153,360 elements is constructed to compute the stress field near the crack front, J(z) and Tz(z). Numerical results show that in the plane very close to the free surface, the K-field solution is still valid for the in-plane stresses. Comparison with the numerical results shows that the analytical results are valid.

  17. Architecutres, Models, Algorithms, and Software Tools for Configurable Computing

    DTIC Science & Technology

    2000-03-06

    The Models, Algorithms, and Architectures for Reconfigurable Computing (MAARC) project developed a sound framework for ...

  18. A Study of Computer-Aided Geometric Optical Design.

    DTIC Science & Technology

    1982-10-01

    ... short programs on tape. A computer account number and Cyber computer manuals were obtained. A familiarity with the use and maintenance of computer files ... in the interpretation of the information. Ray fans, spot diagrams, wavefront variance, Strehl ratio, vignetting diagrams and optical transfer ... other surface begins to cut off these rays (20:113). This is characterized by a loss of intensity at the outside of the image. A known manual ...

  19. Effects of job-related stress and burnout on asthenopia among high-tech workers.

    PubMed

    Ostrovsky, Anat; Ribak, Joseph; Pereg, Avihu; Gaton, Dan

    2012-01-01

    Eye- and vision-related symptoms are the most frequent health problems among computer users. The findings of eye strain, tired eyes, eye irritation, burning sensation, redness, blurred vision and double vision, when appearing together, have recently been termed 'computer vision syndrome', or asthenopia. To examine the frequency and intensity of asthenopia among individuals employed in research and development departments of high-tech firms, and the effects of job stress and burnout on ocular complaints, this study included 106 subjects: 42 high-tech workers (study group) and 64 bank employees (control group). All participants completed self-report questionnaires covering demographics, asthenopia, satisfaction with work environmental conditions, job-related stress and burnout. There was a significant between-group difference in the intensity of asthenopia, but not in its frequency. Burnout appeared to be a significant contributing factor to the intensity and frequency of asthenopia. This study shows that burnout is a significant factor in asthenopic complaints in high-tech workers. The study analyses the effects of psychological environmental factors, such as job stress and burnout, on ocular complaints in the workplace of computer users. The findings may inform ergonomic measures to improve the health, safety and comfort of the working environment of computer users, and thereby their perception of the job environment, efficacy and productivity.

  20. Modeling and measurements of XRD spectra of extended solids under high pressure

    NASA Astrophysics Data System (ADS)

    Batyrev, I. G.; Coleman, S. P.; Stavrou, E.; Zaug, J. M.; Ciezak-Jenkins, J. A.

    2017-06-01

    We present results of evolutionary simulations, based on density functional calculations, of various extended solids (N-Si and N-H) using the variable- and fixed-concentration methods of USPEX. The structures predicted by the evolutionary simulations were analyzed in terms of thermodynamic stability and agreement with experimental X-ray diffraction (XRD) spectra. The stability of the predicted systems was estimated from convex-hull plots. X-ray diffraction spectra were calculated using a virtual diffraction algorithm, which computes the kinematic diffraction intensity in three-dimensional reciprocal space before reducing it to a two-theta line profile. Calculations of thousands of XRD spectra were used to search for the structures of the extended solids at given pressures that best fit the experimental data, according to the experimental XRD peak positions, peak intensities and the theoretically calculated enthalpy. Comparison of the Raman and IR spectra calculated for the best-fitting structures with the available experimental data shows reasonable agreement for certain vibration modes. Part of this work was performed by LLNL, Contract DE-AC52-07NA27344. We thank the Joint DoD/DOE Munitions Technology Development Program, the HE C-II research program at LLNL and the Advanced Light Source, supported by BES DOE, Contract No. DE-AC02-05CH112.
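
    The kinematic diffraction intensities referred to above follow from the structure factor; a toy sketch for an illustrative rock-salt-like cell with constant atomic scattering factors (a real calculation would use tabulated form factors, thermal factors and the full reciprocal-space treatment described in the abstract):

```python
import numpy as np

# Kinematic diffraction: I(hkl) is proportional to |F(hkl)|^2, with
# F(hkl) = sum_j f_j * exp(2*pi*i*(h*x_j + k*y_j + l*z_j)).
# Toy rock-salt-like cell with constant scattering factors (illustrative only).
atoms = [
    ("Na", 11.0, (0.0, 0.0, 0.0)), ("Na", 11.0, (0.5, 0.5, 0.0)),
    ("Na", 11.0, (0.5, 0.0, 0.5)), ("Na", 11.0, (0.0, 0.5, 0.5)),
    ("Cl", 17.0, (0.5, 0.0, 0.0)), ("Cl", 17.0, (0.0, 0.5, 0.0)),
    ("Cl", 17.0, (0.0, 0.0, 0.5)), ("Cl", 17.0, (0.5, 0.5, 0.5)),
]

def kinematic_intensity(hkl):
    h, k, l = hkl
    F = sum(f * np.exp(2j * np.pi * (h * x + k * y + l * z))
            for _, f, (x, y, z) in atoms)
    return abs(F) ** 2

# Mixed-index reflections such as (100) vanish for this fcc-based cell.
for hkl in [(1, 0, 0), (1, 1, 1), (2, 0, 0)]:
    print(hkl, kinematic_intensity(hkl))
```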

  1. Computer Games and Instruction

    ERIC Educational Resources Information Center

    Tobias, Sigmund, Ed.; Fletcher, J. D., Ed.

    2011-01-01

    There is intense interest in computer games. A total of 65 percent of all American households play computer games, and sales of such games increased 22.9 percent last year. The average amount of game playing time was found to be 13.2 hours per week. The popularity and market success of games is evident from both the increased earnings from games,…

  2. Task-Relevant Sound and User Experience in Computer-Mediated Firefighter Training

    ERIC Educational Resources Information Center

    Houtkamp, Joske M.; Toet, Alexander; Bos, Frank A.

    2012-01-01

    The authors added task-relevant sounds to a computer-mediated instructor in-the-loop virtual training for firefighter commanders in an attempt to raise the engagement and arousal of the users. Computer-mediated training for crew commanders should provide a sensory experience that is sufficiently intense to make the training viable and effective.…

  3. Functional dissociation of stimulus intensity encoding and predictive coding of pain in the insula

    PubMed Central

    Geuter, Stephan; Boll, Sabrina; Eippert, Falk; Büchel, Christian

    2017-01-01

    The computational principles by which the brain creates a painful experience from nociception are still unknown. Classic theories suggest that cortical regions reflect either stimulus intensity alone or additive effects of intensity and expectations. By contrast, predictive coding theories provide a unified framework explaining how perception is shaped by the integration of beliefs about the world with mismatches resulting from the comparison of these beliefs against sensory input. Using functional magnetic resonance imaging during a probabilistic heat pain paradigm, we investigated which computations underlie pain perception. Skin conductance, pupil dilation, and anterior insula responses to cued pain stimuli strictly followed the response patterns hypothesized by the predictive coding model, whereas the posterior insula encoded stimulus intensity. This novel functional dissociation of pain processing within the insula, together with previously observed alterations in chronic pain, offers a novel interpretation of aberrant pain processing as disturbed weighting of predictions and prediction errors. DOI: http://dx.doi.org/10.7554/eLife.24770.001

  4. AIP1OGREN: Aerosol Observing Station Intensive Properties Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koontz, Annette; Flynn, Connor

    The aip1ogren value-added product (VAP) computes several aerosol intensive properties. It requires as input calibrated, corrected aerosol extensive properties (primarily scattering and absorption coefficients) from the Aerosol Observing Station (AOS). Aerosol extensive properties depend on both the nature of the aerosol and the amount of the aerosol. We compute several properties as relationships between the various extensive properties; these intensive properties are independent of the aerosol amount and instead relate to intrinsic properties of the aerosol itself. Along with the original extensive properties, we report the aerosol single-scattering albedo, hemispheric backscatter fraction, asymmetry parameter, and Ångström exponent for scattering and absorption with one-minute averaging. An hourly averaged file is produced from the one-minute files that includes all extensive and intensive properties as well as the submicron scattering and submicron absorption fractions. Finally, in both the minutely and hourly files, the aerosol radiative forcing efficiency is provided.
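
    The intensive properties listed above are simple ratios of the extensive inputs; a sketch of the standard definitions (the operational VAP adds quality control, instrument corrections and averaging that are not shown here):

```python
import numpy as np

# Standard aerosol intensive properties from extensive scattering/absorption
# coefficients (all in Mm^-1). These are the usual textbook definitions; the
# operational VAP adds QC, corrections and averaging that are omitted here.
def single_scattering_albedo(scat, absorp):
    return scat / (scat + absorp)

def backscatter_fraction(backscat, scat):
    return backscat / scat

def angstrom_exponent(coef_1, coef_2, wl_1, wl_2):
    # e.g. scattering Angstrom exponent between 450 and 700 nm channels
    return -np.log(coef_1 / coef_2) / np.log(wl_1 / wl_2)

scat_450, scat_700 = 42.0, 21.0     # illustrative values
absorp_450 = 4.0
print(single_scattering_albedo(scat_450, absorp_450))        # ~0.91
print(angstrom_exponent(scat_450, scat_700, 450.0, 700.0))   # ~1.57
```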

  5. Extracting the Data From the LCM vk4 Formatted Output File

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These are slides about extracting the data from the LCM vk4 formatted output file. The following topics are covered: the vk4 file produced by the Keyence VK software; custom analysis (there is no off-the-shelf way to read the file); reading the binary data in a vk4 file; the various offsets in decimal; finding the height image data directly in MATLAB; the binary output at the beginning of the height image data; color image information; color image binary data; color image decimal and binary data; MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image and laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display the RGB laser optical images, display the RGB optical images, display the beginning data and save images to the workspace, gamma correction subroutine); reading intensity from the vk4 file (linear in the low range, linear in the high range); gamma correction for vk4 files; computing the gamma intensity correction; and observations.

  6. Wavelet energy-guided level set-based active contour: a segmentation method to segment highly similar regions.

    PubMed

    Achuthan, Anusha; Rajeswari, Mandava; Ramachandram, Dhanesh; Aziz, Mohd Ezane; Shuaib, Ibrahim Lutfi

    2010-07-01

    This paper introduces an approach for segmenting regions in computed tomography (CT) images that exhibit intra-region intensity variations while having intensity distributions similar to those of surrounding/adjacent regions. In this work, we adapt a feature computed from the wavelet transform, called wavelet energy, to represent the region information. The wavelet energy is embedded into a level set model to formulate the segmentation model, called the wavelet energy-guided level set-based active contour (WELSAC). The WELSAC model is evaluated using several synthetic and CT images focusing on tumour cases, which contain regions exhibiting intra-region intensity variations and high similarity in intensity distributions with the adjacent regions. The obtained results show that the proposed WELSAC model is able to segment the regions of interest in close correspondence with the manual delineation provided by the medical experts, and thus provides a solution for tumour detection. Copyright 2010 Elsevier Ltd. All rights reserved.
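
    A minimal sketch of a block-wise wavelet-energy feature of the kind described above, using PyWavelets; the block size, wavelet choice and normalization are arbitrary assumptions rather than the authors' exact formulation:

```python
import numpy as np
import pywt

def wavelet_energy_map(image, block=16, wavelet="haar"):
    """Block-wise wavelet energy: sum of squared detail coefficients of a
    single-level 2D DWT computed on each block (illustrative feature only)."""
    rows, cols = image.shape
    energy = np.zeros((rows // block, cols // block))
    for i in range(0, rows - block + 1, block):
        for j in range(0, cols - block + 1, block):
            _, (cH, cV, cD) = pywt.dwt2(image[i:i + block, j:j + block], wavelet)
            energy[i // block, j // block] = np.sum(cH**2 + cV**2 + cD**2)
    return energy

rng = np.random.default_rng(1)
img = rng.random((128, 128))
print(wavelet_energy_map(img).shape)   # (8, 8) blocks of energy values
```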

  7. Development of a numerical procedure for mixed mode K-solutions and fatigue crack growth in FCC single crystal superalloys

    NASA Astrophysics Data System (ADS)

    Ranjan, Srikant

    2005-11-01

    Fatigue-induced failures in aircraft gas turbine and rocket engine turbopump blades and vanes are a pervasive problem. Turbine blades and vanes represent perhaps the most demanding structural applications due to the combination of high operating temperature, corrosive environment, high monotonic and cyclic stresses, long expected component lifetimes and the enormous consequence of structural failure. Single crystal nickel-base superalloy turbine blades are being utilized in rocket engine turbopumps and jet engines because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities over polycrystalline alloys. These materials have orthotropic properties making the position of the crystal lattice relative to the part geometry a significant factor in the overall analysis. Computation of stress intensity factors (SIFs) and the ability to model fatigue crack growth rate at single crystal cracks subject to mixed-mode loading conditions are important parts of developing a mechanistically based life prediction for these complex alloys. A general numerical procedure has been developed to calculate SIFs for a crack in a general anisotropic linear elastic material subject to mixed-mode loading conditions, using three-dimensional finite element analysis (FEA). The procedure does not require an a priori assumption of plane stress or plane strain conditions. The SIFs KI, KII, and KIII are shown to be a complex function of the coupled 3D crack tip displacement field. A comprehensive study of variation of SIFs as a function of crystallographic orientation, crack length, and mode-mixity ratios is presented, based on the 3D elastic orthotropic finite element modeling of tensile and Brazilian Disc (BD) specimens in specific crystal orientations. Variation of SIF through the thickness of the specimens is also analyzed. The resolved shear stress intensity coefficient or effective SIF, Krss, can be computed as a function of crack tip SIFs and the resolved shear stress on primary slip planes. The maximum value of Krss and DeltaKrss was found to determine the crack growth direction and the fatigue crack growth rate respectively. The fatigue crack driving force parameter, DeltaK rss, forms an important multiaxial fatigue damage parameter that can be used to predict life in superalloy components.

  8. Multimodal computational microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2016-12-01

    The transport of intensity equation (TIE) is a powerful tool for phase retrieval and quantitative phase imaging, which requires intensity measurements only at axially closely spaced planes, without a separate reference beam. It does not require coherent illumination and works well on conventional bright-field microscopes. The quantitative phase reconstructed by TIE carries valuable information that has been encoded in the complex wave field by passage through a sample of interest. Such information may provide tremendous flexibility to emulate various microscopy modalities computationally, without requiring specialized hardware components. We develop the requisite theory to describe such a hybrid computational multimodal imaging system, which yields quantitative phase, Zernike phase contrast, differential interference contrast, and light-field moment imaging simultaneously, making a variety of observations of biomedical samples straightforward. We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. The experimental results verify that a tunable lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
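
    Under the common simplifying assumption of a nearly uniform in-focus intensity I0, the TIE reduces to a Poisson equation, I0 ∇²φ = -k ∂I/∂z, which can be inverted with FFTs; a minimal sketch under that assumption (regularization and boundary handling are deliberately simplified):

```python
import numpy as np

def tie_phase(dIdz, I0, wavelength, dx, reg=1e-9):
    """Solve I0 * laplacian(phi) = -k * dI/dz for phi by FFT (uniform-intensity TIE).
    dIdz: axial intensity derivative estimated from two defocused images."""
    k = 2.0 * np.pi / wavelength
    ny, nx = dIdz.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    lap = -4.0 * np.pi**2 * (FX**2 + FY**2)      # Fourier symbol of the Laplacian
    rhs = -k * dIdz / I0
    phi_hat = np.fft.fft2(rhs) / (lap - reg)      # small reg avoids division by zero at DC
    phi_hat[0, 0] = 0.0                           # fix the arbitrary phase offset (zero mean)
    return np.real(np.fft.ifft2(phi_hat))

# In practice dIdz would be estimated as (I_plus - I_minus) / (2 * dz)
# from two images captured slightly above and below focus.
```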

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    De La Pierre, Marco; Maschio, Lorenzo; Orlando, Roberto

    Powder and single crystal Raman spectra of the two most common phases of calcium carbonate are calculated with ab initio techniques (using a "hybrid" functional and a Gaussian-type basis set) and measured both at 80 K and at room temperature. The frequencies of the Raman modes are in very good agreement between calculations and experiments: the mean absolute deviation at 80 K is 4 and 8 cm⁻¹ for calcite and aragonite, respectively. As regards intensities, the agreement is in general good, although the computed values overestimate the measured ones in many cases. The combined analysis permits the identification of almost all the fundamental experimental Raman peaks of the two compounds, with the exception of either modes with zero computed intensity or modes overlapping with more intense peaks. Additional peaks have been identified in both calcite and aragonite, which have been assigned to ¹⁸O satellite modes or overtones. The agreement between the computed and measured spectra is quite satisfactory; in particular, simulation permits a clear distinction between calcite and aragonite in the case of powder spectra, and among different polarization directions of each compound in the case of single crystal spectra.

  10. Quantitative computational infrared imaging of buoyant diffusion flames

    NASA Astrophysics Data System (ADS)

    Newale, Ashish S.

    Studies of infrared radiation from turbulent buoyant diffusion flames impinging on structural elements have applications to the development of fire models. A numerical and experimental study of radiation from buoyant diffusion flames with and without impingement on a flat plate is reported. Quantitative images of the radiation intensity from the flames are acquired using a high-speed infrared camera. Large eddy simulations are performed using the Fire Dynamics Simulator (FDS version 6). The species concentrations and temperatures from the simulations are used in conjunction with a narrow-band radiation model (RADCAL) to solve the radiative transfer equation. The computed infrared radiation intensities are rendered in the form of images and compared with the measurements. The measured and computed radiation intensities reveal necking and bulging with a characteristic frequency of 7.1 Hz, which is in agreement with previous empirical correlations. The results demonstrate the effects of the stagnation-point boundary layer on the upstream buoyant shear layer. The coupling between these two shear layers presents a model problem for the sub-grid scale modeling necessary for future large eddy simulations.
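
    The rendered intensities come from integrating the radiative transfer equation along lines of sight through the computed temperature and absorption fields; a simplified gray-gas sketch of that integration is shown below (the study itself uses the RADCAL narrow-band model rather than a single gray coefficient):

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def line_of_sight_intensity(T, kappa, ds):
    """Integrate dI/ds = kappa * (Ib - I) along a ray through cells of size ds.
    Gray-gas emission/absorption only; Ib = sigma*T^4/pi is the blackbody intensity."""
    I = 0.0
    for T_cell, k_cell in zip(T, kappa):
        Ib = SIGMA * T_cell**4 / np.pi
        trans = np.exp(-k_cell * ds)
        I = I * trans + Ib * (1.0 - trans)   # exact solution across a uniform cell
    return I

# Illustrative cell values along one ray through a flame.
T = np.array([300.0, 900.0, 1400.0, 1100.0, 300.0])      # temperature, K
kappa = np.array([0.01, 0.5, 0.8, 0.4, 0.01])            # absorption coefficient, 1/m
print(line_of_sight_intensity(T, kappa, ds=0.05), "W m^-2 sr^-1")
```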

  11. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    PubMed

    Barabanschikov, Vladimir A

    2015-01-01

    We studied the preferentially fixated parts and features of the human face in the process of recognizing facial expressions of emotion. Photographs of facial expressions were used. Participants were asked to categorize these as basic emotions while their eye movements were recorded. It was found that variation in the intensity of an expression is mirrored in the accuracy of emotion recognition; it was also reflected in several indices of oculomotor function: the duration of inspection of certain areas of the face (its upper and bottom or right parts, right and left sides), and the location, number and duration of fixations and the viewing trajectory. In particular, for low-intensity expressions, the right side of the face was attended to predominantly (right-side dominance); the right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions, the upper part of the face was predominantly fixated, though more so for high-intensity expressions. The majority of trials (70%), in line with findings of previous studies, revealed a V-shaped inspection trajectory. No relationship was found, however, between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze direction on the face. © The Author(s) 2015.

  12. A single exercise bout and locomotor learning after stroke: physiological, behavioural, and computational outcomes.

    PubMed

    Charalambous, Charalambos C; Alcantara, Carolina C; French, Margaret A; Li, Xin; Matt, Kathleen S; Kim, Hyosub E; Morton, Susanne M; Reisman, Darcy S

    2018-05-15

    Previous work demonstrated an effect of a single high-intensity exercise bout coupled with motor practice on the retention of a newly acquired skilled arm movement, in both neurologically intact and impaired adults. In the present study, using behavioural and computational analyses we demonstrated that a single exercise bout, regardless of its intensity and timing, did not increase the retention of a novel locomotor task after stroke. Considering both present and previous work, we postulate that the benefits of exercise effect may depend on the type of motor learning (e.g. skill learning, sensorimotor adaptation) and/or task (e.g. arm accuracy-tracking task, walking). Acute high-intensity exercise coupled with motor practice improves the retention of motor learning in neurologically intact adults. However, whether exercise could improve the retention of locomotor learning after stroke is still unknown. Here, we investigated the effect of exercise intensity and timing on the retention of a novel locomotor learning task (i.e. split-belt treadmill walking) after stroke. Thirty-seven people post stroke participated in two sessions, 24 h apart, and were allocated to active control (CON), treadmill walking (TMW), or total body exercise on a cycle ergometer (TBE). In session 1, all groups exercised for a short bout (∼5 min) at low (CON) or high (TMW and TBE) intensity and before (CON and TMW) or after (TBE) the locomotor learning task. In both sessions, the locomotor learning task was to walk on a split-belt treadmill in a 2:1 speed ratio (100% and 50% fast-comfortable walking speed) for 15 min. To test the effect of exercise on 24 h retention, we applied behavioural and computational analyses. Behavioural data showed that neither high-intensity group showed greater 24 h retention compared to CON, and computational data showed that 24 h retention was attributable to a slow learning process for sensorimotor adaptation. Our findings demonstrated that acute exercise coupled with a locomotor adaptation task, regardless of its intensity and timing, does not improve retention of the novel locomotor task after stroke. We postulate that exercise effects on motor learning may be context specific (e.g. type of motor learning and/or task) and interact with the presence of genetic variant (BDNF Val66Met). © 2018 The Authors. The Journal of Physiology © 2018 The Physiological Society.
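
    The slow learning process referred to above is commonly captured with a two-state (fast/slow) state-space model of adaptation; the following is a generic sketch of that class of model with illustrative parameter values, not the authors' fitted implementation:

```python
import numpy as np

# Generic two-state model of motor adaptation (fast and slow processes):
#   x_f(n+1) = A_f * x_f(n) + B_f * e(n),   x_s(n+1) = A_s * x_s(n) + B_s * e(n)
#   net adaptation x = x_f + x_s,  error e = perturbation - x.
# Parameter values are illustrative, not fitted to the study's data.
A_f, B_f = 0.92, 0.10     # fast process: forgets quickly, learns quickly
A_s, B_s = 0.996, 0.02    # slow process: retains well, learns slowly

def simulate(n_trials, perturbation=1.0, x_f=0.0, x_s=0.0):
    adaptation = []
    for _ in range(n_trials):
        e = perturbation - (x_f + x_s)
        x_f = A_f * x_f + B_f * e
        x_s = A_s * x_s + B_s * e
        adaptation.append(x_f + x_s)
    return np.array(adaptation), x_f, x_s

adapt, x_f, x_s = simulate(300)
print(adapt[-1], "slow-state share:", x_s / adapt[-1])   # retention is carried mostly by x_s
```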

  13. Material and shape perception based on two types of intensity gradient information

    PubMed Central

    Nishida, Shin'ya

    2018-01-01

    Visual estimation of the material and shape of an object from a single image includes a hard ill-posed computational problem. However, in our daily life we feel we can estimate both reasonably well. The neural computation underlying this ability remains poorly understood. Here we propose that the human visual system uses different aspects of object images to separately estimate the contributions of the material and shape. Specifically, material perception relies mainly on the intensity gradient magnitude information, while shape perception relies mainly on the intensity gradient order information. A clue to this hypothesis was provided by the observation that luminance-histogram manipulation, which changes luminance gradient magnitudes but not the luminance-order map, effectively alters the material appearance but not the shape of an object. In agreement with this observation, we found that the simulated physical material changes do not significantly affect the intensity order information. A series of psychophysical experiments further indicate that human surface shape perception is robust against intensity manipulations provided they do not disturb the intensity order information. In addition, we show that the two types of gradient information can be utilized for the discrimination of albedo changes from highlights. These findings suggest that the visual system relies on these diagnostic image features to estimate physical properties in a distal world. PMID:29702644
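
    The two image descriptors contrasted above can be made concrete: a monotonic histogram (tone) manipulation changes the gradient-magnitude map but leaves the intensity-order map unchanged. A small, purely schematic numpy illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((64, 64))

def gradient_magnitude(img):
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def intensity_order(img):
    # Rank of each pixel's intensity (the intensity-order map).
    ranks = np.argsort(np.argsort(img.ravel(), kind="stable"), kind="stable")
    return ranks.reshape(img.shape)

# A monotonic tone-mapping (e.g. a gamma curve) changes gradient magnitudes...
remapped = image ** 0.4
print(np.allclose(gradient_magnitude(image), gradient_magnitude(remapped)))  # False
# ...but leaves the intensity-order map untouched.
print(np.array_equal(intensity_order(image), intensity_order(remapped)))     # True
```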

  14. Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.

    PubMed

    Maryn, Youri; Zarowski, Andrzej

    2015-11-01

    Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
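
    The calibration described above amounts to fitting a linear mapping from each system's uncalibrated readings onto the sound-level-meter reference and then applying it to new recordings; a minimal sketch with illustrative values:

```python
import numpy as np

# Paired recordings of the same calibration signals: uncalibrated intensity
# estimates from one microphone+computer system vs. the sound-level meter (dB SPL).
system_dB = np.array([52.1, 60.8, 69.7, 78.2, 86.9])      # illustrative values
slm_dB    = np.array([55.0, 65.0, 75.0, 85.0, 95.0])

# Least-squares linear calibration: slm ~ slope * system + offset.
slope, offset = np.polyfit(system_dB, slm_dB, deg=1)

def calibrate(uncalibrated_dB):
    return slope * np.asarray(uncalibrated_dB) + offset

print(round(slope, 3), round(offset, 2))
print(calibrate([64.0, 72.5]))    # calibrated intensity estimates in dB SPL
```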

  15. Cooperative and competitive concurrency in scientific computing. A full open-source upgrade of the program for dynamical calculations of RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2011-06-01

    A computational model is a computer program that attempts to simulate an abstract model of a particular system. Computational models use enormous calculations and often require supercomputer speed. As personal computers become more and more powerful, more laboratory experiments can be converted into computer models that can be interactively examined by scientists and students without the risk and cost of the actual experiments. The future of programming is concurrent programming. The threaded programming model provides application programmers with a useful abstraction of concurrent execution of multiple tasks. The objective of this release is to address the design of an architecture for a scientific application that may execute as multiple threads, as well as implementations of the related shared data structures.
    New version program summary
    Program title: GrowthCP
    Catalogue identifier: ADVL_v4_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v4_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 32 269
    No. of bytes in distributed program, including test data, etc.: 8 234 229
    Distribution format: tar.gz
    Programming language: Free Object Pascal
    Computer: multi-core x64-based PC
    Operating system: Windows XP, Vista, 7
    Has the code been vectorised or parallelized?: No
    RAM: More than 1 GB. The program requires a 32-bit or 64-bit processor to run the generated code. Memory is addressed using 32-bit (on 32-bit processors) or 64-bit (on 64-bit processors with 64-bit addressing) pointers. The amount of addressed memory is limited only by the available amount of virtual memory.
    Supplementary material: The figures mentioned in the "Summary of revisions" section can be obtained here.
    Classification: 4.3, 7.2, 6.2, 8, 14
    External routines: Lazarus [1]
    Catalogue identifier of previous version: ADVL_v3_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 709
    Does the new version supersede the previous version?: Yes
    Nature of problem: Reflection high-energy electron diffraction (RHEED) is an important in-situ analysis technique, which is capable of giving quantitative information about the growth process of thin layers and its control. It can be used to calibrate the growth rate, analyze surface morphology, calibrate the surface temperature, monitor the arrangement of the surface atoms, and provide information about growth kinetics. Such control allows the development of structures where the electrons can be confined in space, giving quantum wells or even quantum dots. In order to determine the atomic positions of atoms in the first few layers, the RHEED intensity must be measured as a function of the scattering angles and then compared with dynamic calculations. The objective of this release is to address the design of the architecture of an application that simulates the rocking-curve RHEED intensities during the hetero-epitaxial growth of thin films.
    Solution method: GrowthCP is a complex numerical model that uses multiple threads for the simulation of epitaxial growth of thin layers. The model consists of two transactional parts. The first part is a mathematical model based on the Runge-Kutta method with adaptive step-size control. The second part represents the first principles of the one-dimensional RHEED computational model. This model is based on solving a one-dimensional Schrödinger equation. Several problems can arise when applications contain a mixture of data access code, numerical code, and presentation code. Such applications are difficult to maintain, because interdependencies between all the components cause strong ripple effects whenever a change is made anywhere. Adding new data views often requires reimplementing the numerical code, which then requires maintenance in multiple places. In order to solve problems of this type, the computational and threading layers of the project have been implemented in the form of one design pattern as a part of a Model-View-Controller architecture.
    Reasons for new version: Responding to users' feedback, the Growth09 project has been upgraded to a standard that allows sample computations of the RHEED intensities for a disordered surface to be carried out for a wide range of single- and hetero-epitaxial structures. The design pattern on which the project is based has also been improved. It is shown that this model can be effectively used for multithreaded growth simulations of thin epitaxial layers and the corresponding RHEED intensities for a wide range of single- and hetero-structures. Responding to users' feedback, the present release has been implemented using a well-documented free compiler [1] that does not require special configuration or the installation of additional libraries.
    Summary of revisions: The logical structure of the Growth09 program has been modified according to the scheme shown in Fig. 1. The class diagram in Fig. 1 is a static view of the main platform-specific elements of the GrowthCP architecture. Fig. 2 provides a dynamic view by showing a simplified creation and destruction sequence diagram for the process. The program requires the user to provide the appropriate parameters in the form of a knowledge base for the crystal structures under investigation. These parameters are loaded from the parameters.ini files at run-time. Instructions for preparing the .ini files can be found in the new distribution. The program enables carrying out different growth models and one-dimensional dynamical RHEED calculations for the fcc lattice with a basis of three atoms, the fcc lattice with a basis of two atoms, the fcc lattice with a single-atom basis, Zinc-Blende, Sodium Chloride, and Wurtzite crystalline structures and hetero-structures; the Fourier component of the scattering potential in the TRHEEDCalculations.crystPotUgXXX() procedure can, however, be modified and implemented according to users' specific application requirements. The Fourier component of the scattering potential of the whole crystalline hetero-structure can be determined as a sum of contributions coming from all thin slices of the individual atomic layers. To carry out one-dimensional calculations of the scattering potentials, the program uses properly constructed self-consistent procedures. Each component of the system shown in Figs. 1 and 2 is fully extendable and can easily be adapted to new, changing requirements. The two essential logical elements of the system, i.e. the TGrowthTransaction and TRHEEDCalculations classes, were designed and implemented so that they pass information between themselves without the need for data-exchange files. In consequence, each of them can be independently modified and/or extended. Implementing other types of differential equations, and different algorithms for solving them, in the TGrowthTransaction class does not require another implementation of the TRHEEDCalculations class. Similarly, implementing other forms of the scattering potential and a different algorithm for the RHEED calculation has no influence on the construction of the TGrowthTransaction class.
    Unusual features: The program is distributed in the form of a main project, GrowthCP.lpr, with associated files, and should be compiled using the Lazarus IDE. The program should be compiled with English/USA regional and language options.
    Running time: The typical running time is machine and user-parameter dependent.
    References: [1] http://sourceforge.net/projects/lazarus/files/.
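
    The adaptive step-size control mentioned under "Solution method" is a standard ingredient of Runge-Kutta integrators; the sketch below illustrates the generic idea with RK4 and step doubling in Python (GrowthCP itself is written in Free Object Pascal, and this is not its code):

```python
import numpy as np

# Generic adaptive step-size control: take one full RK4 step and two half steps,
# use their difference as a local error estimate, and grow/shrink the step size.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, h=1e-3, tol=1e-8):
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = np.max(np.abs(y_half - y_big)) + 1e-16
        if err <= tol:                                  # accept the step
            t, y = t + h, y_half
        h *= min(5.0, 0.9 * (tol / err) ** 0.2)         # grow or shrink the next step
    return t, y

# Example: dy/dt = -y, exact solution exp(-t).
print(integrate(lambda t, y: -y, 0.0, [1.0], 2.0)[1], np.exp(-2.0))
```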

  16. NEW GIS WATERSHED ANALYSIS TOOLS FOR SOIL CHARACTERIZATION AND EROSION AND SEDIMENTATION MODELING

    EPA Science Inventory

    A comprehensive procedure for computing soil erosion and sediment delivery metrics has been developed which utilizes a suite of automated scripts and a pair of processing-intensive executable programs operating on a personal computer platform.

  17. Surface acoustical intensity measurements on a diesel engine

    NASA Technical Reports Server (NTRS)

    Mcgary, M. C.; Crocker, M. J.

    1980-01-01

    The use of surface intensity measurements as an alternative to the conventional selective wrapping technique for noise source identification and ranking on diesel engines was investigated. A six-cylinder, in-line, turbocharged, 350-horsepower diesel engine was used. Sound power was measured under anechoic conditions for eight separate parts of the engine at steady-state operating conditions using the conventional technique. The sound power measurements were then repeated on five separate parts of the engine using the surface intensity technique at the same steady-state operating conditions. The results were compared by plotting sound power level against frequency and by comparing the noise source rankings for the two methods.
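
    Ranking parts by radiated sound power from surface intensity data amounts to an area-weighted summation over measurement patches; a small sketch with made-up part names and values:

```python
import numpy as np

W_REF = 1e-12   # reference sound power, W

# Area-weighted sum of measured normal surface intensities gives the radiated
# sound power of each engine part; all values below are purely illustrative.
parts = {
    "oil pan":     (np.array([2.1e-4, 3.4e-4, 1.8e-4]), np.array([0.04, 0.05, 0.03])),
    "valve cover": (np.array([0.9e-4, 1.2e-4]),          np.array([0.06, 0.05])),
}   # part -> (surface intensity W/m^2 per patch, patch area m^2)

for name, (intensity, area) in parts.items():
    W = float(np.sum(intensity * area))          # radiated sound power, W
    Lw = 10.0 * np.log10(W / W_REF)              # sound power level
    print(f"{name}: {Lw:.1f} dB re 1 pW")
```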

  18. Computers in anesthesia and intensive care: lack of evidence that the central unit serves as reservoir of pathogens.

    PubMed

    Quinzio, Lorenzo; Blazek, Michael; Hartmann, Bernd; Röhrig, Rainer; Wille, Burkhard; Junger, Axel; Hempelmann, Gunter

    2005-01-01

    Computers are becoming increasingly visible in operating rooms (OR) and intensive care units (ICU) for use in bedside documentation. Recently, they have been suspected of acting as reservoirs for microorganisms and vehicles for the transfer of pathogens to patients, causing nosocomial infections. The purpose of this study was to examine the microbiological (bacteriological and mycological) contamination of the central unit of computers used in an OR, a surgical ICU and a pediatric ICU of a tertiary teaching hospital. Sterile swab samples were taken from five sites in each of 13 computers stationed at the two ICUs and 12 computers in the OR. Sample sites within the chassis housing of the computer processing unit (CPU) included the CPU fan, ventilator, and metal casing. External sites were the ventilator and the bottom of the computer tower. Quantitative and qualitative microbiological analyses were performed according to commonly used methods. One hundred and ninety sites were cultured for bacteria and fungi. Analyses of swabs taken at five equivalent sites inside and outside the computer chassis did not find any significant number of potentially pathogenic bacteria or fungi. This can probably be attributed to the absence, or the low number, of pathogens on the surfaces. Microbial contamination in the CPU of OR and ICU computers is too low to designate them as a reservoir for microorganisms.

  19. Development of computer games for assessment and training in post-stroke arm telerehabilitation.

    PubMed

    Rodriguez-de-Pablo, Cristina; Perry, Joel C; Cavallaro, Francesca I; Zabaleta, Haritz; Keller, Thierry

    2012-01-01

    Stroke is the leading cause of long term disability among adults in industrialized nations. The majority of these disabilities include deficiencies in arm function, which can make independent living very difficult. Research shows that better results in rehabilitation are obtained when patients receive more intensive therapy. However this intensive therapy is currently too expensive to be provided by the public health system, and at home few patients perform the repetitive exercises recommended by their therapists. Computer games can provide an affordable, enjoyable, and effective way to intensify treatment, while keeping the patient as well as their therapists informed about their progress. This paper presents the study, design, implementation and user-testing of a set of computer games for at-home assessment and training of upper-limb motor impairment after stroke.

  20. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
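
    A toy illustration of the idea that inexpensive, approximate gradients can still drive a nonlinear programming method to the correct optimum: a deliberately perturbed gradient is supplied to a standard optimizer in place of the exact one (a schematic analogue only, not the structural-optimization formulation of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2

def exact_gradient(x):
    return np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])

def approximate_gradient(x):
    # Deliberately crude sensitivity estimate (10% error) standing in for the
    # inexpensive gradient approximations discussed in the abstract.
    return exact_gradient(x) * 1.1

x0 = np.zeros(2)
exact = minimize(objective, x0, jac=exact_gradient, method="BFGS")
approx = minimize(objective, x0, jac=approximate_gradient, method="BFGS")
print(exact.x, approx.x)   # both converge to (3, -1)
```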

  1. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes solvers, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bisection bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. Holographic photorefractive storage technologies provide high-density memory with access times a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  2. Influence of Prior Intense Exercise and Cold Water Immersion in Recovery for Performance and Physiological Response during Subsequent Exercise

    PubMed Central

    Christensen, Peter M.; Bangsbo, Jens

    2016-01-01

    Athletes in intense endurance sports (e.g., 4000-m track cycling) often perform maximally (~4 min) twice a day due to qualifying and finals being placed on the same day. The purpose of the present study was to evaluate repeated performance on the same day in a competitive setting (part A) and the influence of prior intense exercise on subsequent performance and physiological response to moderate and maximal exercise with and without the use of cold water immersion (CWI) in recovery (part B). In part A, performance times during eight World championships for male track cyclists were extracted from the qualifying and final races in 4000-m individual pursuit. In part B, twelve trained cyclists with an average (±SD) V̇O2-peak of 67 ± 5 mL/min/kg performed a protocol mimicking a qualifying race (QUAL) followed 3 h later by a performance test (PT), with each exercise period encompassing intense exercise for ~4 min preceded by an identical warm-up period in both a control setting (CON) and using cold water immersion in recovery (CWI; 15 min at 15°C). Performance was lowered (P < 0.001) from qualification to finals (259 ± 3 vs. 261 ± 3 s) for the track cyclists during World championships in part A. In part B, mean power in PT was not different in CWI relative to CON (406 ± 43 vs. 405 ± 38 W). Peak V̇O2 (5.04 ± 0.50 vs. 5.00 ± 0.49 L/min) and blood lactate (13 ± 3 vs. 14 ± 3 mmol/L) did not differ between QUAL and PT, and cycling economy and potassium handling were not impaired by prior intense exercise. In conclusion, performance is reduced with repeated maximal exercise in world-class track cyclists during 4000-m individual pursuit lasting ~4 min; however, prior intense exercise does not appear to impair peak V̇O2, peak lactate, cycling economy, or potassium handling in trained cyclists, and CWI in recovery does not improve subsequent performance. PMID:27445857

  3. A study of real-time computer graphic display technology for aeronautical applications

    NASA Technical Reports Server (NTRS)

    Rajala, S. A.

    1981-01-01

    The development, simulation, and testing of an algorithm for anti-aliasing vector drawings are discussed. The pseudo anti-aliasing line drawing algorithm is an extension to Bresenham's algorithm for computer control of a digital plotter. The algorithm produces a series of overlapping line segments in which the display intensity shifts from one segment to the other within the overlap (transition region). In this algorithm, the length of the overlap and the intensity shift are essentially constant; the transition region serves as an aid to the eye in integrating the segments into a single smooth line.
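
    The report's pseudo anti-aliasing extension is not reproduced in the abstract; for reference only, a minimal sketch of the underlying Bresenham rasterizer being extended (not the report's algorithm) is:

```python
def bresenham(x0, y0, x1, y1):
    """Classic integer Bresenham line rasterization (all octants via the error form).
    The report's extension adds overlapping segments with an intensity ramp on top of this."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 7, 3))
```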

  4. Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Gatski, Thomas B.

    1997-01-01

    A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
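
    The abstract refers to the tridiagonal systems inherent in the compact finite-difference scheme; shown below for reference is a minimal serial Thomas-algorithm sketch for such systems (not the authors' parallel formulation).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a (a[0] unused), diagonal b,
    super-diagonal c (c[-1] unused), right-hand side d."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# small test: tridiag(-1, 2, -1) x = d, with known solution x = [1, 2, 3, 4]
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
d = np.array([0.0, 0.0, 0.0, 5.0])
print(thomas(a, b, c, d))
```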

  5. New Media for Workers' Education and Training.

    ERIC Educational Resources Information Center

    Labour Education, 1995

    1995-01-01

    Includes "Introduction"; "Part One: Towards a New Technological System"; "Part Two: Communications and Training"; "Part Three: Production and Distribution of Media"; glossary; and 77-item bibliography. Covers the use of audiovisual aids, computer-assisted instruction, telecommunications, video, optical media, and computer networks for labor…

  6. Automated procedure for developing hybrid computer simulations of turbofan engines. Part 1: General description

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Krosel, S. M.; Bruton, W. M.

    1982-01-01

    A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology presented makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.

  7. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
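
    The thermal model itself is not given in the abstract; a minimal fixed-point sketch of the coupled temperature/efficiency iteration it describes, with illustrative (made-up) coefficients, might look like:

```python
def cell_operating_point(intensity, t_ambient=300.0, eta_ref=0.15, beta=0.0005,
                         k_thermal=0.01, tol=1e-6, max_iter=100):
    """Iterate cell temperature and efficiency until they are self-consistent.

    intensity: concentrated flux on the cell [W/m^2]; all coefficients are illustrative only.
    """
    t_cell = t_ambient
    eta = eta_ref
    for _ in range(max_iter):
        eta = eta_ref * (1.0 - beta * (t_cell - 298.0))         # efficiency drops with temperature
        t_new = t_ambient + k_thermal * intensity * (1.0 - eta)  # unconverted absorbed power heats the cell
        if abs(t_new - t_cell) < tol:
            break
        t_cell = t_new
    return t_cell, eta, eta * intensity   # temperature [K], efficiency, electrical output per unit area

print(cell_operating_point(5000.0))
```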

  8. A study of Mariner 10 flight experiences and some flight piece part failure rate computations

    NASA Technical Reports Server (NTRS)

    Paul, F. A.

    1976-01-01

    The problems and failures encountered during the Mariner 10 flight are discussed, and the data available through a quantitative accounting of all electronic piece parts on the spacecraft are summarized. Computed failure rates for electronic piece parts are also presented. It is intended that these computed data be used in the continued updating of the failure rate base used for trade-off studies and predictions for future JPL space missions.

  9. 10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Procedures for the Computation of the Real Cost of Capital I Appendix I to Part 504 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS EXISTING POWERPLANTS Pt. 504, App. I Appendix I to Part 504—Procedures for the Computation of the Real Cost of Capital (a) The firm's real after-tax weighted average...

  10. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. The computational cost and complexity associated with these models increases with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a smaller computational load, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraint. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and the time scales of hydrologic processes which are dominant in different parts of the basin differ. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, as sketched below. Also discussed are the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and the parallel numerical algorithms used to obtain high performance.
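
    The parallel implementation details are not given in the abstract; a minimal sketch of the kind of domain decomposition and interprocessor (halo) exchange it alludes to, assuming mpi4py and a toy one-dimensional partition of basin cells, is:

```python
# Minimal sketch (not the authors' code): 1-D decomposition of basin cells with
# ghost-cell exchange of boundary heads between neighbouring MPI ranks.
# Run under mpiexec for multiple ranks; with a single rank it simply runs locally.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                                       # cells owned by this rank
head = np.full(n_local + 2, float(rank))             # +2 ghost cells at the ends

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # exchange boundary values with neighbours (domain boundaries keep their values here;
    # a real model would apply boundary conditions instead)
    if right != MPI.PROC_NULL:
        head[-1] = comm.sendrecv(head[-2], dest=right, source=right)
    if left != MPI.PROC_NULL:
        head[0] = comm.sendrecv(head[1], dest=left, source=left)
    # local explicit update (placeholder for the real coupled process equations)
    head[1:-1] += 0.25 * (head[:-2] - 2.0 * head[1:-1] + head[2:])
```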

  11. Writing. A Research-Based Writing Program for Students with High Access to Computers. ACOT Report #2.

    ERIC Educational Resources Information Center

    Hiebert, Elfrieda H.; And Others

    This report summarizes the curriculum development and research effort that took place at the Cupertino Apple Classrooms of Tomorrow (ACOT) site from January through June 1987. Based on the premise that computers make revising and editing much easier, the four major objectives emphasized by the computer-intensive writing program are fluency,…

  12. Fault-Tolerant Computing: An Overview

    DTIC Science & Technology

    1991-06-01

    (Addison Wesley, Reading, MA) 1984. [8] J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications (Elsevier North Holland, Inc., New York)... applicable to bit-sliced organizations of hardware. In the first time step, the normal computation is performed on the operands and the results... for error detection and fault tolerance in parallel processor systems while performing specific computation-intensive applications [11]. Contrary to

  13. Accelerating Climate and Weather Simulations through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  14. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of problems involved or the lessons learned in using individual Grid resources as it has already been published in our paper on genome analysis research environment (GNARE) and will focus primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  15. A tesselation-based model for intensity estimation and laser plasma interactions calculations in three dimensions

    NASA Astrophysics Data System (ADS)

    Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.

    2018-03-01

    A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tesselation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.

  16. Accessing the public MIMIC-II intensive care relational database for clinical research.

    PubMed

    Scott, Daniel J; Lee, Joon; Silva, Ikaro; Park, Shinhyuk; Moody, George B; Celi, Leo A; Mark, Roger G

    2013-01-10

    The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database is a free, public resource for intensive care research. The database was officially released in 2006, and has attracted a growing number of researchers in academia and industry. We present the two major software tools that facilitate accessing the relational database: the web-based QueryBuilder and a downloadable virtual machine (VM) image. QueryBuilder and the MIMIC-II VM have been developed successfully and are freely available to MIMIC-II users. Simple example SQL queries and the resulting data are presented. Clinical studies pertaining to acute kidney injury and prediction of fluid requirements in the intensive care unit are shown as typical examples of research performed with MIMIC-II. In addition, MIMIC-II has also provided data for annual PhysioNet/Computing in Cardiology Challenges, including the 2012 Challenge "Predicting mortality of ICU Patients". QueryBuilder is a web-based tool that provides easy access to MIMIC-II. For more computationally intensive queries, one can locally install a complete copy of MIMIC-II in a VM. Both publicly available tools provide the MIMIC-II research community with convenient querying interfaces and complement the value of the MIMIC-II relational database.

  17. Small-angle x-ray scattering in amorphous silicon: A computational study

    NASA Astrophysics Data System (ADS)

    Paudel, Durga; Atta-Fynn, Raymond; Drabold, David A.; Elliott, Stephen R.; Biswas, Parthapratim

    2018-05-01

    We present a computational study of small-angle x-ray scattering (SAXS) in amorphous silicon (a-Si) with particular emphasis on the morphology and microstructure of voids. The relationship between the scattering intensity in SAXS and the three-dimensional structure of nanoscale inhomogeneities or voids is addressed by generating large high-quality a-Si networks with 0.1%-0.3% volume concentration of voids, as observed in experiments using SAXS and positron annihilation spectroscopy. A systematic study of the variation of the scattering intensity in the small-angle scattering region with the size, shape, number density, and the spatial distribution of the voids in the networks is presented. Our results suggest that the scattering intensity in the small-angle region is particularly sensitive to the size and the total volume fraction of the voids, but the effect of the geometry or shape of the voids is less pronounced in the intensity profiles. A comparison of the average size of the voids obtained from the simulated values of the intensity, using the Guinier approximation and Kratky plots, with the corresponding values obtained directly from the spatial distribution of the atoms in the vicinity of void surfaces is presented.
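
    The Guinier analysis mentioned above fits ln I(q) against q^2 at low q, where the slope is -Rg^2/3; a short sketch of such a fit (a generic illustration, not the authors' code) is:

```python
import numpy as np

def guinier_radius(q, intensity, q_rg_max=1.3):
    """Estimate the radius of gyration Rg from the low-q Guinier region:
    ln I(q) ~ ln I(0) - (Rg^2 / 3) q^2, valid roughly for q*Rg below ~1.3."""
    # crude first pass on the lowest-q points, then refit on points with q*Rg < q_rg_max
    slope, _ = np.polyfit(q[:10] ** 2, np.log(intensity[:10]), 1)
    rg = np.sqrt(-3.0 * slope)
    mask = q * rg < q_rg_max
    slope, intercept = np.polyfit(q[mask] ** 2, np.log(intensity[mask]), 1)
    return np.sqrt(-3.0 * slope), np.exp(intercept)   # Rg and I(0)

# synthetic test: a Guinier profile with Rg = 10 (arbitrary units)
q = np.linspace(0.01, 0.3, 200)
i_q = 100.0 * np.exp(-(q ** 2) * 10.0 ** 2 / 3.0)
print(guinier_radius(q, i_q))   # ~ (10.0, 100.0)
```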

  18. Barriers and Incentives to Computer Usage in Teaching

    DTIC Science & Technology

    1988-09-29

    classes with one or two computers. Research Methods The two major methods of data-gathering employed in this study were intensive and extensive classroom ... observation and repeated extended interviews with students and teachers. Administrators were also interviewed when appropriate. Classroom observers used

  19. Estimation of Radiative Efficiency of Chemicals with Potentially Significant Global Warming Potential.

    PubMed

    Betowski, Don; Bevington, Charles; Allison, Thomas C

    2016-01-19

    Halogenated chemical substances are used in a broad array of applications, and new chemical substances are continually being developed and introduced into commerce. While recent research has considerably increased our understanding of the global warming potentials (GWPs) of multiple individual chemical substances, this research inevitably lags behind the development of new chemical substances. There are currently over 200 substances known to have high GWP. Evaluation of schemes to estimate radiative efficiency (RE) based on computational chemistry is useful where no measured IR spectrum is available. This study assesses the reliability of values of RE calculated using computational chemistry techniques for 235 chemical substances against the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models, and reasonable agreement with reported values is found. Significant improvement is obtained through scaling of both vibrational frequencies and intensities. The effect of varying the computational method and basis set used to calculate the frequency data is discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed RE values.
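
    Pinnock-type models estimate RE by weighting each band's integrated absorption intensity with a tabulated instantaneous radiative forcing per unit intensity at the band position; the sketch below shows only that structure, with forcing_per_unit_intensity standing in as a hypothetical placeholder for the published lookup table (not taken from this paper).

```python
def radiative_efficiency(bands, forcing_per_unit_intensity):
    """Pinnock-type estimate: sum each band's integrated intensity weighted by the
    tabulated radiative forcing per unit absorption intensity at its wavenumber.

    bands: list of (wavenumber_cm1, integrated_intensity) from a frequency calculation.
    forcing_per_unit_intensity: callable returning the tabulated weight at a wavenumber
    (hypothetical placeholder for the published lookup table).
    """
    return sum(intensity * forcing_per_unit_intensity(nu) for nu, intensity in bands)

# toy usage with a flat placeholder weighting (illustration only, arbitrary units)
bands = [(1100.0, 450.0), (1250.0, 600.0)]
print(radiative_efficiency(bands, lambda nu: 1.0e-4))
```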

  20. Cutting tool form compensation system and method

    DOEpatents

    Barkman, W.E.; Babelay, E.F. Jr.; Klages, E.J.

    1993-10-19

    A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed. 9 figures.

  1. Cutting tool form compensation system and method

    DOEpatents

    Barkman, William E.; Babelay, Jr., Edwin F.; Klages, Edward J.

    1993-01-01

    A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed.

  2. Effect of Intensive Statin Therapy on Coronary High-Intensity Plaques Detected by Noncontrast T1-Weighted Imaging: The AQUAMARINE Pilot Study.

    PubMed

    Noguchi, Teruo; Tanaka, Atsushi; Kawasaki, Tomohiro; Goto, Yoichi; Morita, Yoshiaki; Asaumi, Yasuhide; Nakao, Kazuhiro; Fujiwara, Reiko; Nishimura, Kunihiro; Miyamoto, Yoshihiro; Ishihara, Masaharu; Ogawa, Hisao; Koga, Nobuhiko; Narula, Jagat; Yasuda, Satoshi

    2015-07-21

    Coronary high-intensity plaques detected by noncontrast T1-weighted imaging may represent plaque instability. High-intensity plaques can be quantitatively assessed by a plaque-to-myocardium signal-intensity ratio (PMR). This pilot, hypothesis-generating study sought to investigate whether intensive statin therapy would lower PMR. Prospective serial noncontrast T1-weighted magnetic resonance imaging and computed tomography angiography were performed in 48 patients with coronary artery disease at baseline and after 12 months of intensive pitavastatin treatment with a target low-density lipoprotein cholesterol level <80 mg/dl. The control group consisted of coronary artery disease patients not treated with statins that were matched by propensity scoring (n = 48). The primary endpoint was the 12-month change in PMR. Changes in computed tomography angiography parameters and high-sensitivity C-reactive protein levels were analyzed. In the statin group, 12 months of statin therapy significantly improved low-density lipoprotein cholesterol levels (125 to 70 mg/dl; p < 0.001), PMR (1.38 to 1.11, an 18.9% reduction; p < 0.001), low-attenuation plaque volume, and the percentage of total atheroma volume on computed tomography. In the control group, the PMR increased significantly (from 1.22 to 1.49, a 19.2% increase; p < 0.001). Changes in PMR were correlated with changes in low-density lipoprotein cholesterol (r = 0.533; p < 0.001), high-sensitivity C-reactive protein (r = 0.347; p < 0.001), percentage of atheroma volume (r = 0.477; p < 0.001), and percentage of low-attenuation plaque volume (r = 0.416; p < 0.001). Statin treatment significantly reduced the PMR of high-intensity plaques. Noncontrast T1-weighted magnetic resonance imaging could become a useful technique for repeated quantitative assessment of plaque composition. (Attempts at Plaque Vulnerability Quantification with Magnetic Resonance Imaging Using Noncontrast T1-weighted Technique [AQUAMARINE]; UMIN000003567). Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  3. Limb Darkening and Planetary Transits: Testing Center-to-limb Intensity Variations and Limb-darkening Directly from Model Stellar Atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neilson, Hilding R.; Lester, John B.; McNeil, Joseph T.

    The transit method, employed by Microvariability and Oscillation of Stars (MOST), Kepler, and various ground-based surveys, has enabled the characterization of extrasolar planets to unprecedented precision. These results are precise enough to begin to measure planet atmosphere composition, planetary oblateness, starspots, and other phenomena at the level of a few hundred parts per million. However, these results depend on our understanding of stellar limb darkening, that is, the intensity distribution across the stellar disk that is sequentially blocked as the planet transits. Typically, stellar limb darkening is assumed to be a simple parameterization with two coefficients that are derived from stellar atmosphere models or fit directly. In this work, we revisit this assumption and compute synthetic planetary-transit light curves directly from model stellar atmosphere center-to-limb intensity variations (CLIVs) using the plane-parallel Atlas and spherically symmetric SAtlas codes. We compare these light curves to those constructed using best-fit limb-darkening parameterizations. We find that adopting parametric stellar limb-darkening laws leads to systematic differences from the more geometrically realistic model stellar atmosphere CLIV of about 50–100 ppm at the transit center and up to 300 ppm at ingress/egress. While these errors are small, they are systematic, and they appear to limit the precision necessary to measure secondary effects. Our results may also have a significant impact on transit spectra.

  4. Development and Validation of 2D Difference Intensity Analysis for Chemical Library Screening by Protein-Detected NMR Spectroscopy.

    PubMed

    Egner, John M; Jensen, Davin R; Olp, Michael D; Kennedy, Nolan W; Volkman, Brian F; Peterson, Francis C; Smith, Brian C; Hill, R Blake

    2018-03-02

    An academic chemical screening approach was developed by using 2D protein-detected NMR, and a 352-chemical fragment library was screened against three different protein targets. The approach was optimized against two protein targets with known ligands: CXCL12 and BRD4. Principal component analysis reliably identified compounds that induced nonspecific NMR crosspeak broadening but did not unambiguously identify ligands with specific affinity (hits). For improved hit detection, a novel scoring metric, difference intensity analysis (DIA), was devised that sums all positive and negative intensities from 2D difference spectra. Applying DIA quickly discriminated potential ligands from compounds inducing nonspecific NMR crosspeak broadening and other nonspecific effects. Subsequent NMR titrations validated chemotypes important for binding to CXCL12 and BRD4. A novel target, mitochondrial fission protein Fis1, was screened, and six hits were identified by using DIA. Screening these diverse protein targets identified quinones and catechols that induced nonspecific NMR crosspeak broadening, hampering NMR analyses, but that are currently not computationally identified as pan-assay interference compounds. The results established a streamlined screening workflow that can easily be scaled and adapted as part of a larger screening pipeline to identify fragment hits and assess relative binding affinities in the range of 0.3-1.6 mM. DIA could prove useful in library screening and other applications in which NMR chemical shift perturbations are measured. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Lohani, B.

    2014-05-01

    Mobile augmented reality is a next-generation technology for intelligently visualising the 3D real world. The technology is expanding at a fast pace, upgrading the smart phone to an intelligent device. The research problem identified and addressed in the current work is to view the actual dimensions of objects captured by a smart phone in real time. The proposed methodology first establishes a correspondence between the LiDAR point cloud, which is stored on a server, and the image captured by the mobile device. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points that lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensities. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, thereby providing a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
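
    Once the registration pipeline maps image pixels to LiDAR points, the dimensional computation reduces to distances between the corresponding 3D points; a toy sketch (with a hypothetical pixel-to-point lookup, not the paper's implementation) is:

```python
import numpy as np

def measured_dimension(pixel_a, pixel_b, pixel_to_point):
    """Distance between the LiDAR points corresponding to two pixels picked on the
    mobile image. pixel_to_point is a hypothetical lookup produced by the image
    registration step (pixel -> 3D point in the georeferenced point cloud)."""
    p_a = np.asarray(pixel_to_point[pixel_a], dtype=float)
    p_b = np.asarray(pixel_to_point[pixel_b], dtype=float)
    return np.linalg.norm(p_a - p_b)

# toy lookup: two pixels mapped to points 2.5 m apart
lookup = {(120, 340): (10.0, 5.0, 2.0), (480, 350): (12.0, 6.5, 2.0)}
print(measured_dimension((120, 340), (480, 350), lookup))   # 2.5
```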

  6. Sound Velocity and Diffraction Intensity Measurements Based on Raman-Nath Theory of the Interaction of Light and Ultrasound

    ERIC Educational Resources Information Center

    Neeson, John F.; Austin, Stephen

    1975-01-01

    Describes a method for the measurement of the velocity of sound in various liquids based on the Raman-Nath theory of light-sound interaction. Utilizes an analog computer program to calculate the intensity of light scattered into various diffraction orders. (CP)
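
    In the Raman-Nath regime the intensity diffracted into order n is proportional to Jn(v) squared, where v is the Raman-Nath parameter; a short numerical illustration (not the article's analog computer program) is:

```python
from scipy.special import jv

def order_intensities(v, n_max=5):
    """Relative intensities of diffraction orders -n_max..n_max in the Raman-Nath
    regime: I_n is proportional to J_n(v)**2, with v the Raman-Nath parameter."""
    return {n: jv(n, v) ** 2 for n in range(-n_max, n_max + 1)}

intensities = order_intensities(v=2.0)
print(intensities[0], intensities[1], intensities[2])
print(sum(intensities.values()))   # approaches 1 as n_max grows (energy conservation)
```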

  7. Field Trip.

    ERIC Educational Resources Information Center

    Sanders, Bill

    1993-01-01

    Reports the results of a field trip to measure the intensity of electromagnetic fields generated by electronic devices in the home, in cars, at work, outside, and in places people visit during the day. Found that a person gets more intense exposure while working at a computer than by living next to an electrical substation. (MDH)

  8. Computer Generated Holography with Intensity-Graded Patterns

    PubMed Central

    Conti, Rossella; Assayag, Osnath; de Sars, Vincent; Guillon, Marc; Emiliani, Valentina

    2016-01-01

    Computer Generated Holography achieves patterned illumination at the sample plane through phase modulation of the laser beam at the objective back aperture. This is obtained by using liquid crystal-based spatial light modulators (LC-SLMs), which modulate the spatial phase of the incident laser beam. A variety of algorithms is employed to calculate the phase modulation masks addressed to the LC-SLM. These algorithms range from simple gratings-and-lenses to generate multiple diffraction-limited spots, to iterative Fourier-transform algorithms capable of generating arbitrary illumination shapes perfectly tailored to the target contour. Applications for holographic light patterning include multi-trap optical tweezers, patterned voltage imaging and optical control of neuronal excitation using uncaging or optogenetics. These past implementations of computer generated holography used binary input profiles to generate binary light distributions at the sample plane. Here we demonstrate that using graded input sources enables the generation of intensity-graded light patterns and extends the range of application of holographic light illumination. First, we use intensity-graded holograms to compensate for LC-SLM position-dependent diffraction efficiency or sample fluorescence inhomogeneity. Finally, we show that intensity-graded holography can be used to equalize photo-evoked currents from cells expressing different levels of channelrhodopsin-2 (ChR2), one of the most commonly used optogenetic light-gated channels, taking into account the non-linear dependence of channel opening on incident light. PMID:27799896
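
    The iterative Fourier-transform algorithms mentioned are typically Gerchberg-Saxton-type loops; a minimal sketch (not the authors' implementation) that computes a phase mask approximating a graded target intensity is:

```python
import numpy as np

def gerchberg_saxton(target_intensity, illumination_amplitude, n_iter=50):
    """Iteratively compute an SLM phase mask whose far field approximates the
    (possibly intensity-graded) target pattern."""
    target_amp = np.sqrt(target_intensity)
    phase = 2 * np.pi * np.random.rand(*target_amp.shape)            # random initial phase
    for _ in range(n_iter):
        far_field = np.fft.fft2(illumination_amplitude * np.exp(1j * phase))
        constrained = target_amp * np.exp(1j * np.angle(far_field))  # impose target amplitude
        near_field = np.fft.ifft2(constrained)
        phase = np.angle(near_field)                                 # keep phase, discard amplitude
    return phase

# toy example: uniform illumination, two spots of different (graded) intensity
target = np.zeros((64, 64))
target[16, 16], target[48, 48] = 1.0, 0.5
mask = gerchberg_saxton(target, np.ones((64, 64)))
```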

  9. Room temperature linelists for CO2 asymmetric isotopologues with ab initio computed intensities

    NASA Astrophysics Data System (ADS)

    Zak, Emil J.; Tennyson, Jonathan; Polyansky, Oleg L.; Lodi, Lorenzo; Zobov, Nikolay F.; Tashkun, Sergei A.; Perevalov, Valery I.

    2017-12-01

    The present paper reports room temperature line lists for six asymmetric isotopologues of carbon dioxide: 16O12C18O (628), 16O12C17O (627), 16O13C18O (638), 16O13C17O (637), 17O12C18O (728) and 17O13C18O (738), covering the range 0-8000 cm-1. Variational rotation-vibration wavefunctions and energy levels are computed using the DVR3D software suite and a high quality semi-empirical potential energy surface (PES), followed by computation of intensities using an ab initio dipole moment surface (DMS). A theoretical procedure for quantifying the sensitivity of line intensities to minor distortions of the PES/DMS allows our theoretical model to be critically evaluated. Several recent high quality measurements and theoretical approaches are discussed to provide a benchmark of our results against the most accurate available data. Indeed, the thesis of transferability of accuracy among different isotopologues with the use of a mass-independent PES is supported by several examples. We therefore conclude that the majority of line intensities for strong bands are predicted with sub-percent accuracy. Accurate line positions are generated using an effective Hamiltonian constructed from the latest experiments. This study completes the list of relevant isotopologues of carbon dioxide; these line lists are available for remote sensing studies and for inclusion in databases.

  10. In Situ Three-Dimensional Reciprocal-Space Mapping of Diffuse Scattering Intensity Distribution and Data Analysis for Precursor Phenomenon in Shape-Memory Alloy

    NASA Astrophysics Data System (ADS)

    Cheng, Tian-Le; Ma, Fengde D.; Zhou, Jie E.; Jennings, Guy; Ren, Yang; Jin, Yongmei M.; Wang, Yu U.

    2012-01-01

    Diffuse scattering contains rich information on various structural disorders, thus providing a useful means to study the nanoscale structural deviations from the average crystal structures determined by Bragg peak analysis. Extraction of maximal information from diffuse scattering requires concerted efforts in high-quality three-dimensional (3D) data measurement, quantitative data analysis and visualization, theoretical interpretation, and computer simulations. Such an endeavor is undertaken to study the correlated dynamic atomic position fluctuations caused by thermal vibrations (phonons) in precursor state of shape-memory alloys. High-quality 3D diffuse scattering intensity data around representative Bragg peaks are collected by using in situ high-energy synchrotron x-ray diffraction and two-dimensional digital x-ray detector (image plate). Computational algorithms and codes are developed to construct the 3D reciprocal-space map of diffuse scattering intensity distribution from the measured data, which are further visualized and quantitatively analyzed to reveal in situ physical behaviors. Diffuse scattering intensity distribution is explicitly formulated in terms of atomic position fluctuations to interpret the experimental observations and identify the most relevant physical mechanisms, which help set up reduced structural models with minimal parameters to be efficiently determined by computer simulations. Such combined procedures are demonstrated by a study of phonon softening phenomenon in precursor state and premartensitic transformation of Ni-Mn-Ga shape-memory alloy.

  11. Guidelines for composite materials research related to general aviation aircraft

    NASA Technical Reports Server (NTRS)

    Dow, N. F.; Humphreys, E. A.; Rosen, B. W.

    1983-01-01

    Guidelines for research on composite materials directed toward the improvement of all aspects of their applicability for general aviation aircraft were developed from extensive studies of their performance, manufacturability, and cost effectiveness. Specific areas for research and for manufacturing development were identified and evaluated. Inputs developed from visits to manufacturers were used in part to guide these evaluations, particularly in the area of cost effectiveness. Throughout, the emphasis was on directing the research toward the requirements of general aviation aircraft, for which relatively low load intensities are encountered and economy of production is a prime requirement, yet performance still commands a premium. A number of implications regarding further directions for developments in composites to meet these requirements also emerged from the studies. Chief among these is the need for an integrated (computer program) aerodynamic/structures approach to aircraft design.

  12. Tools for Physiology Labs: Inexpensive Equipment for Physiological Stimulation

    PubMed Central

    Land, Bruce R.; Johnson, Bruce R.; Wyttenbach, Robert A.; Hoy, Ronald R.

    2004-01-01

    We describe the design of inexpensive equipment and software for physiological stimulation in the neurobiology teaching laboratory. The core component is a stimulus isolation unit (SIU) that uses DC-DC converters, rather than expensive high-voltage batteries, to generate isolated power at high voltage. The SIU has no offset when inactive and produces pulses up to 100 V with moderately fast (50 μs) rise times. We also describe two methods of stimulus timing control. The first is a simplified conventional, stand-alone analog pulse generator. The second uses a digital microcontroller interfaced with a personal computer. The SIU has performed well and withstood intensive use in our undergraduate physiology laboratory. This project is part of our ongoing effort to make reliable low-cost physiology equipment available for both student teaching and faculty research laboratories. PMID:23493817

  13. Processors for wavelet analysis and synthesis: NIFS and TI-C80 MVP

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey W.

    1996-03-01

    Two processors are considered for image quadrature mirror filtering (QMF). The neuromorphic infrared focal-plane sensor (NIFS) is an existing prototype analog processor offering high speed spatio-temporal Gaussian filtering, which could be used for the QMF low-pass function, and difference of Gaussian filtering, which could be used for the QMF high-pass function. Although not designed specifically for wavelet analysis, the biologically-inspired system accomplishes the most computationally intensive part of QMF processing. The Texas Instruments (TI) TMS320C80 Multimedia Video Processor (MVP) is a 32-bit RISC master processor with four advanced digital signal processors (DSPs) on a single chip. Algorithm partitioning, memory management and other issues are considered for optimal performance. This paper presents these considerations with simulated results leading to processor implementation of high-speed QMF analysis and synthesis.
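
    As a reminder of what the QMF step computes (independent of the NIFS or MVP hardware discussed), a one-level Haar analysis/synthesis in one dimension can be sketched as:

```python
import numpy as np

def qmf_analysis(x):
    """One-level Haar quadrature mirror filtering: low-pass and high-pass sub-bands."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # approximation coefficients
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return low, high

def qmf_synthesis(low, high):
    """Perfect reconstruction of the signal from the two sub-bands."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

signal = np.arange(8, dtype=float)
low, high = qmf_analysis(signal)
print(np.allclose(qmf_synthesis(low, high), signal))   # True
```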

  14. Revisiting an old concept: the coupled oscillator model for VCD. Part 1: the generalised coupled oscillator mechanism and its intrinsic connection to the strength of VCD signals.

    PubMed

    Nicu, Valentin Paul

    2016-08-03

    Motivated by the renewed interest in the coupled oscillator (CO) model for VCD, in this work a generalised coupled oscillator (GCO) expression is derived by introducing the concept of a coupled oscillator origin. Unlike the standard CO expression, the GCO expression is exact within the harmonic approximation. Using two illustrative example molecules, the theoretical concepts introduced here are demonstrated by performing a GCO decomposition of the rotational strengths computed using DFT. This analysis shows that: (1) the contributions to the rotational strengths that are normally neglected in the standard CO model can be comparable to or larger than the CO contribution, and (2) the GCO mechanism introduced here can affect the VCD intensities of all types of modes in symmetric and asymmetric molecules.

  15. Optical determination of material abundances by using neural networks for the derivation of spectral filters

    NASA Astrophysics Data System (ADS)

    Krippner, Wolfgang; Wagner, Felix; Bauer, Sebastian; Puente León, Fernando

    2017-06-01

    Using appropriately designed spectral filters makes it possible to determine material abundances optically. While an infinite number of possibilities exist for determining spectral filters, we take advantage of neural networks to derive spectral filters that lead to precise estimates. To overcome some drawbacks that regularly influence the determination of material abundances using hyperspectral data, we incorporate the spectral variability of the raw materials into the training of the considered neural networks. As a main result, we successfully classify quantized material abundances optically. Thus, the main part of the high computational load associated with the use of neural networks is avoided. In addition, the derived material abundances become invariant to spatially varying illumination intensity, a notable benefit in comparison with spectral filters based on, for instance, the Moore-Penrose pseudoinverse.

  16. Building the Scientific Modeling Assistant: An interactive environment for specialized software design

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    The construction of scientific software models is an integral part of doing science, both within NASA and within the scientific community at large. Typically, model-building is a time-intensive and painstaking process, involving the design of very large, complex computer programs. Despite the considerable expenditure of resources involved, completed scientific models cannot easily be distributed and shared with the larger scientific community due to the low-level, idiosyncratic nature of the implemented code. To address this problem, we have initiated a research project aimed at constructing a software tool called the Scientific Modeling Assistant. This tool provides automated assistance to the scientist in developing, using, and sharing software models. We describe the Scientific Modeling Assistant, and also touch on some human-machine interaction issues relevant to building a successful tool of this type.

  17. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
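
    The space-time kernel density at a location sums spatial and temporal kernel weights over nearby events; a minimal sketch using Epanechnikov kernels (a common choice, not necessarily the kernels used in this study) is:

```python
import numpy as np

def stkde(eval_xyt, events_xyt, hs=1.0, ht=1.0):
    """Space-time kernel density at one (x, y, t) location with spatial bandwidth hs
    and temporal bandwidth ht, using Epanechnikov kernels in space and time."""
    x, y, t = eval_xyt
    ex, ey, et = events_xyt[:, 0], events_xyt[:, 1], events_xyt[:, 2]
    ds2 = ((x - ex) ** 2 + (y - ey) ** 2) / hs ** 2              # squared scaled spatial distance
    dt2 = ((t - et) / ht) ** 2                                   # squared scaled temporal distance
    ks = np.where(ds2 < 1.0, 2.0 / np.pi * (1.0 - ds2), 0.0)     # 2D Epanechnikov kernel
    kt = np.where(dt2 < 1.0, 0.75 * (1.0 - dt2), 0.0)            # 1D Epanechnikov kernel
    return np.sum(ks * kt) / (len(events_xyt) * hs ** 2 * ht)

events = np.array([[0.0, 0.0, 0.0], [0.5, 0.2, 1.0], [3.0, 3.0, 5.0]])
print(stkde((0.25, 0.1, 0.5), events, hs=1.0, ht=2.0))
```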

  18. 15 CFR Supplement No. 1 to Part 730 - Information Collection Requirements Under the Paperwork Reduction Act: OMB Control Numbers

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Report of Requests for Restrictive Trade Practice or Boycott—Single or Multiple Transactions part 760 and § 762.2(b). 0694-0013 Computers and Related Equipment EAR Supplement 2 to Part 748 part 774. 0694-0016... §§ 762.2(b) and 764.5. 0694-0073 Export Controls of High Performance Computers Supplement No. 2 to part...

  19. 15 CFR Supplement No. 1 to Part 730 - Information Collection Requirements Under the Paperwork Reduction Act: OMB Control Numbers

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Report of Requests for Restrictive Trade Practice or Boycott—Single or Multiple Transactions part 760 and § 762.2(b). 0694-0013 Computers and Related Equipment EAR Supplement 2 to Part 748 part 774. 0694-0016... §§ 762.2(b) and 764.5. 0694-0073 Export Controls of High Performance Computers Supplement No. 2 to part...

  20. 15 CFR Supplement No. 1 to Part 730 - Information Collection Requirements Under the Paperwork Reduction Act: OMB Control Numbers

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Report of Requests for Restrictive Trade Practice or Boycott—Single or Multiple Transactions part 760 and § 762.2(b). 0694-0013 Computers and Related Equipment EAR Supplement 2 to Part 748 part 774. 0694-0016... §§ 762.2(b) and 764.5. 0694-0073 Export Controls of High Performance Computers Supplement No. 2 to part...

  1. The Dynamo package for tomography and subtomogram averaging: components for MATLAB, GPU computing and EC2 Amazon Web Services

    PubMed Central

    Castaño-Díez, Daniel

    2017-01-01

    Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance. PMID:28580909

  2. The Dynamo package for tomography and subtomogram averaging: components for MATLAB, GPU computing and EC2 Amazon Web Services.

    PubMed

    Castaño-Díez, Daniel

    2017-06-01

    Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance.

  3. Rationale, design and baseline characteristics of a randomized controlled trial of a web-based computer-tailored physical activity intervention for adults from Quebec City.

    PubMed

    Boudreau, François; Walthouwer, Michel Jean Louis; de Vries, Hein; Dagenais, Gilles R; Turbide, Ginette; Bourlaud, Anne-Sophie; Moreau, Michel; Côté, José; Poirier, Paul

    2015-10-09

    The relationship between physical activity (PA) and cardiovascular disease (CVD) protection is well documented. Numerous factors (e.g. patient motivation, lack of facilities, physician time constraints) can contribute to poor PA adherence. Web-based computer-tailored interventions offer an innovative way to provide tailored feedback and to empower adults to engage in regular moderate- to vigorous-intensity PA. To describe the rationale, design and content of a web-based computer-tailored PA intervention for Canadian adults enrolled in a randomized controlled trial (RCT). A total of 244 men and women aged between 35 and 70 years, without CVD or physical disability, not participating in regular moderate- to vigorous-intensity PA, and familiar with and having access to a computer at home, were recruited from the Quebec City Prospective Urban and Rural Epidemiological (PURE) study centre. Participants were randomized into two study arms: 1) an experimental group receiving the intervention and 2) a waiting list control group. The fully automated web-based computer-tailored PA intervention consists of seven 10- to 15-min sessions over an 8-week period. The theoretical underpinning of the intervention is based on the I-Change Model. The aim of the intervention was to reach a total of 150 min per week of moderate- to vigorous-intensity aerobic PA. This study will provide useful information before engaging in a large RCT to assess the long-term participation and maintenance of PA, the potential impact of regular PA on CVD risk factors and the cost-effectiveness of a web-based computer-tailored intervention. ISRCTN36353353 registered on 24/07/2014.

  4. Blazing Signature Filter: a library for fast pairwise similarity comparisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon-Yong; Fujimoto, Grant M.; Wilson, Ryan

    Identifying similarities between datasets is a fundamental task in data mining and has become an integral part of modern scientific investigation. Whether the task is to identify co-expressed genes in large-scale expression surveys or to predict combinations of gene knockouts which would elicit a similar phenotype, the underlying computational task is often a multi-dimensional similarity test. As datasets continue to grow, improvements to the efficiency, sensitivity or specificity of such computation will have broad impacts as it allows scientists to more completely explore the wealth of scientific data. A significant practical drawback of large-scale data mining is that the vast majority of pairwise comparisons are unlikely to be relevant, meaning that they do not share a signature of interest. It is therefore essential to efficiently identify these unproductive comparisons as rapidly as possible and exclude them from more time-intensive similarity calculations. The Blazing Signature Filter (BSF) is a highly efficient pairwise similarity algorithm which enables extensive data mining within a reasonable amount of time. The algorithm transforms datasets into binary metrics, allowing it to utilize the computationally efficient bit operators and provide a coarse measure of similarity. As a result, the BSF can scale to high dimensionality and rapidly filter unproductive pairwise comparisons. Two bioinformatics applications of the tool are presented to demonstrate the ability to scale to billions of pairwise comparisons and the usefulness of this approach.
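
    As a rough illustration of the filtering idea described above, the sketch below binarizes two expression-like profiles, packs the bits into integers, and ranks pairs with a single bitwise operation before any expensive similarity measure would be applied. The median threshold and the AND-based score are assumptions made for the demonstration, not the published BSF definitions.

        import numpy as np

        def signature(profile):
            """Set bit i when the value exceeds the profile median (assumed binarization rule)."""
            bits = profile > np.median(profile)
            return int("".join("1" if b else "0" for b in bits), 2)

        def coarse_similarity(sig_a, sig_b):
            """Number of co-occurring set bits: a cheap first-pass score."""
            return bin(sig_a & sig_b).count("1")

        rng = np.random.default_rng(1)
        x = rng.normal(size=64)
        y = x + rng.normal(scale=0.1, size=64)   # profile similar to x
        z = rng.normal(size=64)                  # unrelated profile
        print(coarse_similarity(signature(x), signature(y)),
              coarse_similarity(signature(x), signature(z)))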

  5. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    NASA Astrophysics Data System (ADS)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  6. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
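
    The query-distribution idea is simple enough to sketch. The snippet below splits a multi-FASTA query into chunks and launches one BLAST+ process per chunk; the file names, chunk size, database name and the choice of blastp are placeholders, and a real HPC deployment such as DCBLAST would hand these chunks to a job scheduler rather than run them locally.

        import subprocess
        from itertools import islice

        def read_fasta(path):
            """Yield (header, sequence) records from a FASTA file."""
            header, seq = None, []
            with open(path) as fh:
                for line in fh:
                    if line.startswith(">"):
                        if header is not None:
                            yield header, "".join(seq)
                        header, seq = line.strip(), []
                    else:
                        seq.append(line.strip())
            if header is not None:
                yield header, "".join(seq)

        def split_and_blast(query="query.fasta", db="protein_db", n_per_chunk=100):
            records = iter(list(read_fasta(query)))
            for i, chunk in enumerate(iter(lambda: list(islice(records, n_per_chunk)), [])):
                part = f"chunk_{i:04d}.fasta"
                with open(part, "w") as out:
                    out.writelines(f"{header}\n{seq}\n" for header, seq in chunk)
                # One independent BLAST+ job per chunk; tabular outputs are concatenated later.
                subprocess.run(["blastp", "-query", part, "-db", db,
                                "-out", f"{part}.tsv", "-outfmt", "6"], check=True)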

  7. Active Control of Fan Noise: Feasibility Study. Volume 5; Numerical Computation of Acoustic Mode Reflection Coefficients for an Unflanged Cylindrical Duct

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.

    1996-01-01

    A computational method to predict modal reflection coefficients in cylindrical ducts has been developed based on the work of Homicz, Lordi, and Rehm, which uses the Wiener-Hopf method to account for the boundary conditions at the termination of a thin cylindrical pipe. The purpose of this study is to develop a computational routine to predict the reflection coefficients of higher order acoustic modes impinging on the unflanged termination of a cylindrical duct. This effort was conducted under Task Order 5 of the NASA Lewis LET Program, Active Noise Control of Aircraft Engines: Feasibility Study, and will be used as part of the development of an integrated source noise, acoustic propagation, ANC actuator coupling, and control system algorithm simulation. The reflection coefficient prediction will be incorporated into an existing cylindrical duct modal analysis to account for the reflection of modes from the duct termination. This will provide a more accurate, rapid computation design tool for evaluating the effect of reflected waves on active noise control systems mounted in the duct, as well as providing a tool for the design of acoustic treatment in inlet ducts. As an active noise control system design tool, the method can be used as a preliminary step before more accurate but more numerically intensive acoustic propagation models such as finite element methods. The resulting computer program has been shown to give reasonable results, some examples of which are presented. Reliable data to use for comparison is scarce, so complete checkout is difficult, and further checkout is needed over a wider range of system parameters. In future efforts the method will be adapted as a subroutine to the GEAE segmented cylindrical duct modal analysis program.

  8. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  9. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  10. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  11. 5 CFR 839.1102 - How are my retirement benefits computed if I elect FERS under this part?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false How are my retirement benefits computed... Provisions § 839.1102 How are my retirement benefits computed if I elect FERS under this part? OPM will compute your retirement benefit as if you were properly put in FERS on the effective date of the error...

  12. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  13. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  14. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  15. L'ordinateur a visage humain (The Computer in Human Guise).

    ERIC Educational Resources Information Center

    Otman, Gabriel

    1986-01-01

    Discusses the tendency of humans to describe parts and functions of a computer with terminology that refers to human characteristics; for example, parts of the body (electronic brain), intellectual activities (optical memory), and physical activities (command). Computers are also described through metaphors, connotations, allusions, and analogies…

  16. The IBM PC as an Online Search Machine--Part 2: Physiology for Searchers.

    ERIC Educational Resources Information Center

    Kolner, Stuart J.

    1985-01-01

    Enumerates "hardware problems" associated with use of the IBM personal computer as an online search machine: purchase of machinery, unpacking of parts, and assembly into a properly functioning computer. Components that allow transformations of computer into a search machine (combination boards, printer, modem) and diagnostics software…

  17. Computer Modeling of Direct Metal Laser Sintering

    NASA Technical Reports Server (NTRS)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
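
    For readers unfamiliar with this class of model, the toy sketch below marches a two-dimensional explicit finite-difference heat equation while a Gaussian laser spot scans across the surface. It only illustrates the moving-heat-source idea; the grid, material constants, heating rate and periodic boundaries are placeholders and bear no relation to the MSC SINDA/FORTRAN model described in the abstract.

        import numpy as np

        nx = 100
        dx = 1e-4                        # grid spacing [m] (assumed)
        alpha = 5e-6                     # thermal diffusivity [m^2/s] (assumed)
        dt = 0.2 * dx**2 / (4 * alpha)   # explicit stability limit with a safety margin
        T = np.full((nx, nx), 300.0)     # temperature field [K]
        src_peak, r_spot, speed = 5e4, 5e-4, 0.02   # heating rate [K/s], spot radius [m], scan speed [m/s]

        x = np.arange(nx) * dx
        X, Y = np.meshgrid(x, x, indexing="ij")

        for step in range(2000):
            cx = speed * step * dt                      # laser centre scans along x
            source = src_peak * np.exp(-((X - cx)**2 + (Y - x.mean())**2) / r_spot**2)
            lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                   np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
            T += dt * (alpha * lap + source)            # np.roll implies periodic edges (toy choice)

        print(f"peak temperature after the scan: {T.max():.0f} K")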

  18. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  19. From sequencer to supercomputer: an automatic pipeline for managing and processing next generation sequencing data.

    PubMed

    Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun

    2012-01-01

    Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.

  20. Analysis and trends of precipitation lapse rate and extreme indices over north Sikkim eastern Himalayas under CMIP5ESM-2M RCPs experiments

    NASA Astrophysics Data System (ADS)

    Singh, Vishal; Goyal, Manish Kumar

    2016-01-01

    This paper draws attention to the spatial and temporal variability in precipitation lapse rate (PLR) and precipitation extreme indices (PEIs) through the mesoscale characterization of the Teesta river catchment, which corresponds to the north Sikkim eastern Himalayas. The PLR is an important variable for snowmelt runoff models. In a mountainous region, the PLR can vary from the lower elevation parts to the high elevation parts. In this study, the PLR was computed by accounting for elevation differences, which range from around 1500 m to 7000 m. Precipitation variability and extremity were analysed using multiple mathematical functions, viz. quantile regression, spatial mean, spatial standard deviation, the Mann-Kendall test and Sen's estimation. For this purpose, daily precipitation was used, both historical (years 1980-2005), as measured/observed gridded points, and projected for the 21st century (years 2006-2100), as simulated by the CMIP5 ESM-2M model (Coupled Model Intercomparison Project Phase 5 Earth System Model 2) under three different radiative forcing scenarios (Representative Concentration Pathways, RCPs). The outcomes of this study suggest that the PLR varies significantly from the lower elevation to the high elevation parts. The PEI-based analysis showed that extreme high-intensity events increase significantly, especially after the 2040s. The PEI-based observations also showed that the number of wet days increases for all the RCPs. The quantile regression plots showed significant increments in the upper and lower quantiles of the various extreme indices. The Mann-Kendall and Sen's estimation tests clearly indicated significant changing patterns in the frequency and intensity of the precipitation indices across all the sub-basins and RCP scenarios in an intra-decadal time series domain. RCP8.5 showed the greatest extremity in the projected outcomes.
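
    Two of the quantities named in this abstract are easy to illustrate: a precipitation lapse rate obtained as the slope of precipitation against elevation, and a Sen's (Theil-Sen) slope as a robust trend estimate for an annual index series. The sketch below uses synthetic placeholder data and is not tied to the Teesta catchment analysis itself.

        import numpy as np

        def precipitation_lapse_rate(elevation_m, precip_mm):
            """Least-squares slope of precipitation against elevation [mm per m]."""
            slope, _intercept = np.polyfit(elevation_m, precip_mm, 1)
            return slope

        def sens_slope(series):
            """Median of all pairwise slopes (Theil-Sen): a robust trend per time step."""
            series = np.asarray(series, dtype=float)
            n = len(series)
            slopes = [(series[j] - series[i]) / (j - i)
                      for i in range(n - 1) for j in range(i + 1, n)]
            return float(np.median(slopes))

        rng = np.random.default_rng(2)
        elev = np.linspace(1500, 7000, 30)
        precip = 2000 - 0.15 * elev + rng.normal(scale=50, size=elev.size)
        print("PLR [mm/m]:", precipitation_lapse_rate(elev, precip))
        annual_index = 10 + 0.08 * np.arange(95) + rng.normal(scale=1.0, size=95)
        print("Sen's slope:", sens_slope(annual_index))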

  1. The European-Mediterranean Earthquake Catalogue (EMEC) for the last millennium

    NASA Astrophysics Data System (ADS)

    Grünthal, Gottfried; Wahlström, Rutger

    2012-07-01

    The catalogue by Grünthal et al. (J Seismol 13:517-541, 2009a) of earthquakes in central, northern, and north-western Europe with Mw ≥ 3.5 (CENEC) has been expanded to cover also southern Europe and the Mediterranean area. It has also been extended in time (1000-2006). Due to the strongly increased seismicity in the new area, the threshold for events south of the latitude 44°N has here been set at Mw ≥ 4.0, keeping the lower threshold in the northern catalogue part. This part has been updated with data from new and revised national and regional catalogues. The new Euro-Mediterranean Earthquake Catalogue (EMEC) is based on data from some 80 domestic catalogues and data files and over 100 special studies. Available original Mw and M0 data have been introduced. The analysis largely followed the lines of the Grünthal et al. (J Seismol 13:517-541, 2009a) study, i.e., fake and duplicate events were identified and removed, polygons were specified within each of which one or more of the catalogues or data files have validity, and existing magnitudes and intensities were converted to Mw. Algorithms to compute Mw are based on relations provided locally, or more commonly on those derived by Grünthal et al. (J Seismol 13:517-541, 2009a) or in the present study. The homogeneity of EMEC with respect to Mw for the different constituents was investigated and improved where feasible. EMEC contains entries of some 45,000 earthquakes. For each event, the date, time, location (including focal depth if available), intensity I0 (if given in the original catalogue), magnitude Mw (with uncertainty when given), and source (catalogue or special study) are presented. Besides the main EMEC catalogue, large events before year 1000 in the SE part of the investigated area and fake events, respectively, are given in separate lists.
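
    The mechanics of two compilation steps mentioned here, converting heterogeneous magnitudes to Mw and flagging duplicate entries, are sketched below. The conversion coefficients, time window and distance window are hypothetical placeholders for illustration only, not the relations derived for EMEC.

        from dataclasses import dataclass
        from datetime import datetime, timedelta
        from math import radians, cos, sqrt

        @dataclass
        class Event:
            time: datetime
            lat: float
            lon: float
            mag: float
            mag_type: str        # e.g. "ML", "Ms" or "Mw"

        def to_mw(event):
            """Linear magnitude conversion a*m + b with placeholder coefficients."""
            table = {"Mw": (1.00, 0.0), "ML": (0.95, 0.3), "Ms": (0.90, 0.6)}   # assumed values
            a, b = table.get(event.mag_type, (1.0, 0.0))
            return a * event.mag + b

        def is_duplicate(e1, e2, dt=timedelta(seconds=30), dkm=30.0):
            """Treat two entries as one event when origin times and epicentres are close."""
            dy = (e1.lat - e2.lat) * 111.0
            dx = (e1.lon - e2.lon) * 111.0 * cos(radians(0.5 * (e1.lat + e2.lat)))
            return abs(e1.time - e2.time) <= dt and sqrt(dx * dx + dy * dy) <= dkm

        e1 = Event(datetime(2000, 1, 1, 12, 0, 0), 45.00, 10.00, 4.8, "ML")
        e2 = Event(datetime(2000, 1, 1, 12, 0, 10), 45.05, 10.08, 4.6, "Ms")
        print(to_mw(e1), to_mw(e2), is_duplicate(e1, e2))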

  2. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, in terms of both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  3. VERCE: a productive e-Infrastructure and e-Science environment for data-intensive seismology research

    NASA Astrophysics Data System (ADS)

    Vilotte, J. P.; Atkinson, M.; Spinuso, A.; Rietbrock, A.; Michelini, A.; Igel, H.; Frank, A.; Carpené, M.; Schwichtenberg, H.; Casarotti, E.; Filgueira, R.; Garth, T.; Germünd, A.; Klampanos, I.; Krause, A.; Krischer, L.; Leong, S. H.; Magnoni, F.; Matser, J.; Moguilny, G.

    2015-12-01

    Seismology addresses both fundamental problems in understanding the Earth's internal wave sources and structures and societal applications, such as earthquake and tsunami hazard assessment and risk mitigation; it puts a premium on open data accessible through the Federated Digital Seismological Networks. The VERCE project, "Virtual Earthquake and seismology Research Community e-science environment in Europe", has initiated a virtual research environment to support complex orchestrated workflows combining state-of-the-art wave simulation codes and data analysis tools on distributed computing and data infrastructures (DCIs) along with multiple sources of observational data and new capabilities to combine simulation results with observational data. The VERCE Science Gateway provides a view of all the available resources, supporting collaboration with shared data and methods, with data access controls. The mapping to DCIs handles identity management, authority controls, transformations between representations and controls, and access to resources. The framework for computational science that provides simulation codes, like SPECFEM3D, democratizes their use by getting data from multiple sources, managing Earth models and meshes, distilling them as input data, and capturing results with meta-data. The dispel4py data-intensive framework allows for developing data-analysis applications using Python and the ObsPy library, which can be executed on different DCIs. A set of tools allows coupling with seismology and external data services. Provenance-driven tools validate results and show relationships between data to facilitate method improvement. Lessons learned from VERCE training lead us to conclude that solid-Earth scientists could make significant progress by using the VERCE e-science environment. VERCE has already contributed to the European Plate Observation System (EPOS), and is part of the EPOS implementation phase. Its cross-disciplinary capabilities are being extended for the EPOS implementation phase.

  4. A comprehensive PIV measurement campaign on a fully equipped helicopter model

    NASA Astrophysics Data System (ADS)

    De Gregorio, Fabrizio; Pengel, Kurt; Kindler, Kolja

    2012-07-01

    The flow field around a helicopter is characterised by its inherent complexity including effects of fluid-structure interference, shock-boundary layer interaction, and dynamic stall. Since the advancement of computational fluid dynamics and computing capabilities has led to an increasing demand for experimental validation data, a comprehensive wind tunnel test campaign of a fully equipped and motorised generic medium transport helicopter was conducted in the framework of the GOAHEAD project. Different model configurations (with or without main/tail rotor blades) and several flight conditions were investigated. In this paper, the results of the three-component velocity field measurements around the model are surveyed. The effect of the interaction between the main rotor wake and the fuselage for cruise/tail shake flight conditions was analysed based on the flow characteristics downstream from the rotor hub and the rear fuselage hatch. The results indicated a noticeable increase in the intensity of the vortex shedding from the lower part of the fuselage and a strong interaction between the blade vortex filaments and the wakes shed by the rotor hub and by the engine exhaust areas. The pitch-up phenomenon was addressed, detecting the blade tip vortices impacting on the horizontal tail plane. For high-speed forward flight, the shock wave formation on the advancing blade was detected, and its location on the blade chord and its intensity were measured. Furthermore, dynamic stall on the retreating main rotor blade in high-speed forward flight was observed at r/R = 0.5 and 0.6. The analysis of the substructures forming the dynamic stall vortex revealed an unexpected spatial concentration suggesting a rotational stabilisation of large-scale structures on the blade.

  5. Computational Cardiac Anatomy Using MRI

    PubMed Central

    Beg, Mirza Faisal; Helm, Patrick A.; McVeigh, Elliot; Miller, Michael I.; Winslow, Raimond L.

    2005-01-01

    Ventricular geometry and fiber orientation may undergo global or local remodeling in cardiac disease. However, there are as yet no mathematical and computational methods for quantifying variation of geometry and fiber orientation or the nature of their remodeling in disease. Toward this goal, a landmark and image intensity-based large deformation diffeomorphic metric mapping (LDDMM) method to transform heart geometry into common coordinates for quantification of shape and form was developed. Two automated landmark placement methods for modeling tissue deformations expected in different cardiac pathologies are presented. The transformations, computed using landmarks and image intensities in combination, yield high registration accuracy of heart anatomies even in the presence of significant variation of cardiac shape and form. Once heart anatomies have been registered, properties of tissue geometry and cardiac fiber orientation in corresponding regions of different hearts may be quantified. PMID:15508155

  6. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research, GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  7. Big Data, Big Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Bill

    Data, lots of data, generated in seconds and piling up on the internet, streaming and stored in countless databases. Big data is important for commerce, society and our nation's security. Yet the volume, velocity, variety and veracity of data are simply too great for any single analyst to make sense of alone. It requires advanced, data-intensive computing. Simply put, data-intensive computing is the use of sophisticated computers to sort through mounds of information and present analysts with solutions in the form of graphics, scenarios, formulas, new hypotheses and more. This scientific capability is foundational to PNNL's energy, environment and security missions. Senior Scientist and Division Director Bill Pike and his team are developing analytic tools that are used to solve important national challenges, including cyber systems defense, power grid control systems, intelligence analysis, climate change and scientific exploration.

  8. Ermittlung von Wortstaemmen in russischen wissenschaftlichen Fachsprachen mit Hilfe des Computers (Establishing Word Stems in Scientific Russian With the Aid of a Computer)

    ERIC Educational Resources Information Center

    Halbauer, Siegfried

    1976-01-01

    It was considered that students of intensive scientific Russian courses could learn vocabulary more efficiently if they were taught word stems and how to combine them with prefixes and suffixes to form scientific words. The computer programs developed to identify the most important stems are discussed. (Text is in German.) (FB)

  9. Validation of coupled atmosphere-fire behavior models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossert, J.E.; Reisner, J.M.; Linn, R.R.

    1998-12-31

    Recent advances in numerical modeling and computer power have made it feasible to simulate the dynamical interaction and feedback between the heat and turbulence induced by wildfires and the local atmospheric wind and temperature fields. At Los Alamos National Laboratory, the authors have developed a modeling system that includes this interaction by coupling a high resolution atmospheric dynamics model, HIGRAD, with a fire behavior model, BEHAVE, to predict the spread of wildfires. The HIGRAD/BEHAVE model is run at very high resolution to properly resolve the fire/atmosphere interaction. At present, these coupled wildfire model simulations are computationally intensive. The additional complexity of these models requires sophisticated methods for assuring their reliability in real-world applications. With this in mind, a substantial part of the research effort is directed at model validation. Several instrumented prescribed fires have been conducted with multi-agency support and participation from chaparral, marsh, and scrub environments in coastal areas of Florida and inland California. In this paper, the authors first describe the data required to initialize the components of the wildfire modeling system. Then they present results from one of the Florida fires, and discuss a strategy for further testing and improvement of coupled weather/wildfire models.

  10. Optimum Design of Forging Process Parameters and Preform Shape under Uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2004-06-01

    Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
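
    The surrogate-plus-sampling workflow described here can be reduced to a few lines. In the sketch below, a quadratic response surface is fitted to a handful of "expensive" evaluations and then sampled by Monte Carlo to estimate output variability; the stand-in response function, parameter ranges and input distributions are assumptions for illustration, not a forging simulation.

        import numpy as np

        def expensive_simulation(temp, speed):
            """Stand-in for a forging FE run returning a die-stress-like response."""
            return 500 + 0.8 * (temp - 900.0)**2 / 100 + 30 * np.sin(speed) + 0.005 * temp * speed

        rng = np.random.default_rng(3)

        # A small design of experiments over die temperature [C] and ram speed [mm/s] (assumed ranges).
        temps = rng.uniform(850, 950, 40)
        speeds = rng.uniform(1.0, 10.0, 40)
        y = expensive_simulation(temps, speeds)

        # Quadratic response surface y ~ [1, t, s, t^2, s^2, t*s], fitted by least squares.
        design = lambda t, s: np.column_stack([np.ones_like(t), t, s, t**2, s**2, t * s])
        coef, *_ = np.linalg.lstsq(design(temps, speeds), y, rcond=None)

        # Monte Carlo on the cheap surrogate with assumed normal scatter in the inputs.
        t_mc = rng.normal(900.0, 10.0, 100_000)
        s_mc = rng.normal(5.0, 0.5, 100_000)
        samples = design(t_mc, s_mc) @ coef
        print(f"surrogate mean {samples.mean():.1f}, standard deviation {samples.std():.1f}")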

  11. A Contrast-Based Computational Model of Surprise and Its Applications.

    PubMed

    Macedo, Luis; Cardoso, Amílcar

    2017-11-19

    We review our work on a contrast-based computational model of surprise and its applications. The review is contextualized within related research from psychology, philosophy, and particularly artificial intelligence. Influenced by psychological theories of surprise, the model assumes that surprise-eliciting events initiate a series of cognitive processes that begin with the appraisal of the event as unexpected, continue with the interruption of ongoing activity and the focusing of attention on the unexpected event, and culminate in the analysis and evaluation of the event and the revision of beliefs. It is assumed that the intensity of surprise elicited by an event is a nonlinear function of the difference or contrast between the subjective probability of the event and that of the most probable alternative event (which is usually the expected event); and that the agent's behavior is partly controlled by actual and anticipated surprise. We describe applications of artificial agents that incorporate the proposed surprise model in three domains: the exploration of unknown environments, creativity, and intelligent transportation systems. These applications demonstrate the importance of surprise for decision making, active learning, creative reasoning, and selective attention. Copyright © 2017 Cognitive Science Society, Inc.
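
    The contrast-based definition lends itself to a one-line function. The sketch below reads the description literally: surprise grows logarithmically with the gap between the probability of the most expected alternative and that of the observed event. The exact log form is an assumption consistent with the abstract, not necessarily the authors' published equation, and the weather forecast is an invented example.

        import math

        def surprise(p_event, probabilities):
            """Surprise of an observed event, given the probabilities of all alternatives."""
            p_highest = max(probabilities)
            return math.log2(1.0 + p_highest - p_event)

        forecast = {"sunny": 0.7, "cloudy": 0.2, "rain": 0.1}
        print(surprise(forecast["sunny"], forecast.values()))  # expected outcome -> 0.0
        print(surprise(forecast["rain"], forecast.values()))   # unlikely outcome -> ~0.68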

  12. A Distance Measure for Genome Phylogenetic Analysis

    NASA Astrophysics Data System (ADS)

    Cao, Minh Duc; Allison, Lloyd; Dix, Trevor

    Phylogenetic analyses of species based on single genes or parts of the genomes are often inconsistent because of factors such as variable rates of evolution and horizontal gene transfer. The availability of more and more sequenced genomes allows phylogeny construction from complete genomes that is less sensitive to such inconsistency. For such long sequences, construction methods like maximum parsimony and maximum likelihood are often not possible due to their intensive computational requirements. Another class of tree construction methods, namely distance-based methods, require a measure of distances between any two genomes. Some measures such as evolutionary edit distance of gene order and gene content are computationally expensive or do not perform well when the gene content of the organisms is similar. This study presents an information theoretic measure of genetic distances between genomes based on the biological compression algorithm expert model. We demonstrate that our distance measure can be applied to reconstruct the consensus phylogenetic tree of a number of Plasmodium parasites from their genomes, the statistical bias of which would mislead conventional analysis methods. Our approach is also used to successfully construct a plausible evolutionary tree for the γ-Proteobacteria group whose genomes are known to contain many horizontally transferred genes.
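
    The general shape of such a compression-based distance is shown below as a normalized compression distance; zlib stands in for the expert-model compressor used in the study, purely to keep the example self-contained, so the numbers are illustrative only.

        import zlib

        def c(data):
            """Compressed size in bytes; zlib is only a stand-in for the expert-model compressor."""
            return len(zlib.compress(data, 9))

        def ncd(x, y):
            """Normalized compression distance: small for sequences that compress well together."""
            cx, cy, cxy = c(x), c(y), c(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        a = b"ACGT" * 500
        b = b"ACGT" * 480 + b"TTGACCAA" * 10   # sequence closely related to a
        d = b"GATTACA" * 285                   # different repeat structure
        print(ncd(a, b), ncd(a, d))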

  13. Measuring sperm movement within the female reproductive tract using Fourier analysis.

    PubMed

    Nicovich, Philip R; Macartney, Erin L; Whan, Renee M; Crean, Angela J

    2015-02-01

    The adaptive significance of variation in sperm phenotype is still largely unknown, in part due to the difficulties of observing and measuring sperm movement in its natural, selective environment (i.e., within the female reproductive tract). Computer-assisted sperm analysis systems allow objective and accurate measurement of sperm velocity, but rely on being able to track individual sperm, and are therefore unable to measure sperm movement in species where sperm move in trains or bundles. Here we describe a newly developed computational method for measuring sperm movement using Fourier analysis to estimate sperm tail beat frequency. High-speed time-lapse videos of sperm movement within the female tract of the neriid fly Telostylinus angusticollis were recorded, and a map of beat frequencies was generated by converting the periodic signal of an intensity versus time trace at each pixel to the frequency domain using the Fourier transform. We were able to detect small decreases in sperm tail beat frequency over time, indicating the method is sensitive enough to identify consistent differences in sperm movement. Fourier analysis can be applied to a wide range of species and contexts, and should therefore facilitate novel exploration of the causes and consequences of variation in sperm movement.
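
    The per-pixel Fourier step is illustrated below on a synthetic image stack: each pixel's intensity-versus-time trace is transformed and the strongest non-DC frequency is kept as the local beat frequency. The frame rate, stack size and 25 Hz test signal are arbitrary assumptions, not values from the study.

        import numpy as np

        def beat_frequency_map(video, fps):
            """video has shape (frames, height, width); returns a (height, width) map in Hz."""
            spectrum = np.abs(np.fft.rfft(video - video.mean(axis=0), axis=0))
            freqs = np.fft.rfftfreq(video.shape[0], d=1.0 / fps)
            spectrum[0] = 0.0                     # suppress any residual DC component
            return freqs[np.argmax(spectrum, axis=0)]

        fps, frames = 200.0, 400
        t = np.arange(frames) / fps
        rng = np.random.default_rng(4)
        video = rng.normal(scale=0.1, size=(frames, 16, 16))
        video[:, 4:12, 4:12] += np.sin(2 * np.pi * 25.0 * t)[:, None, None]   # 25 Hz beating region
        fmap = beat_frequency_map(video, fps)
        print(f"estimated beat frequency at the centre: {fmap[8, 8]:.1f} Hz")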

  14. Discrete crack growth analysis methodology for through cracks in pressurized fuselage structures

    NASA Technical Reports Server (NTRS)

    Potyondy, David O.; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    1994-01-01

    A methodology for simulating the growth of long through cracks in the skin of pressurized aircraft fuselage structures is described. Crack trajectories are allowed to be arbitrary and are computed as part of the simulation. The interaction between the mechanical loads acting on the superstructure and the local structural response near the crack tips is accounted for by employing a hierarchical modeling strategy. The structural response for each cracked configuration is obtained using a geometrically nonlinear shell finite element analysis procedure. Four stress intensity factors, two for membrane behavior and two for bending using Kirchhoff plate theory, are computed using an extension of the modified crack closure integral method. Crack trajectories are determined by applying the maximum tangential stress criterion. Crack growth results in localized mesh deletion, and the deletion regions are remeshed automatically using a newly developed all-quadrilateral meshing algorithm. The effectiveness of the methodology and its applicability to performing practical analyses of realistic structures is demonstrated by simulating curvilinear crack growth in a fuselage panel that is representative of a typical narrow-body aircraft. The predicted crack trajectory and fatigue life compare well with measurements of these same quantities from a full-scale pressurized panel test.

  15. Monte Carlo simulation of electrothermal atomization on a desktop personal computer

    NASA Astrophysics Data System (ADS)

    Histen, Timothy E.; Güell, Oscar A.; Chavez, Iris A.; Holcombe, James A.

    1996-07-01

    Monte Carlo simulations have been applied to electrothermal atomization (ETA) using a tubular atomizer (e.g. graphite furnace) because of the complexity in the geometry, heating, molecular interactions, etc. The intensive computation time needed to accurately model ETA often limited its effective implementation to the use of supercomputers. However, with the advent of more powerful desktop processors, this is no longer the case. A C-based program has been developed and can be used under Windows TM or DOS. With this program, basic parameters such as furnace dimensions, sample placement, furnace heating and kinetic parameters such as activation energies for desorption and adsorption can be varied to show the absorbance profile dependence on these parameters. Even data such as time-dependent spatial distribution of analyte inside the furnace can be collected. The DOS version also permits input of external temperature-time data to permit comparison of simulated profiles with experimentally obtained absorbance data. The run-time versions are provided along with the source code. This article is an electronic publication in Spectrochimica Acta Electronica (SAE), the electronic section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by a diskette with a program (PC format), data files and text files.

  16. "Ask Argonne" - Charlie Catlett, Computer Scientist, Part 2

    ScienceCinema

    Catlett, Charlie

    2018-02-14

    A few weeks back, computer scientist Charlie Catlett talked a bit about the work he does and invited questions from the public during Part 1 of his "Ask Argonne" video set (http://bit.ly/1joBtzk). In Part 2, he answers some of the questions that were submitted. Enjoy!

  17. "Ask Argonne" - Charlie Catlett, Computer Scientist, Part 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catlett, Charlie

    2014-06-17

    A few weeks back, computer scientist Charlie Catlett talked a bit about the work he does and invited questions from the public during Part 1 of his "Ask Argonne" video set (http://bit.ly/1joBtzk). In Part 2, he answers some of the questions that were submitted. Enjoy!

  18. 18 CFR Appendix C to Part 2 - Nationwide Proceeding Computation of Federal Income Tax Allowance Independent Producers, Pipeline...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Continental U.S.-1972 Data (Docket No. R-478) C Appendix C to Part 2 Conservation of Power and Water Resources... INTERPRETATIONS Pt. 2, App. C Appendix C to Part 2—Nationwide Proceeding Computation of Federal Income Tax...

  19. 18 CFR Appendix C to Part 2 - Nationwide Proceeding Computation of Federal Income Tax Allowance Independent Producers, Pipeline...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Continental U.S.-1972 Data (Docket No. R-478) C Appendix C to Part 2 Conservation of Power and Water Resources... INTERPRETATIONS Pt. 2, App. C Appendix C to Part 2—Nationwide Proceeding Computation of Federal Income Tax...

  20. Quantum Spin Glasses, Annealing and Computation

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Bikas K.; Inoue, Jun-ichi; Tamura, Ryo; Tanaka, Shu

    2017-05-01

    List of tables; List of figures; Preface; 1. Introduction; Part I. Quantum Spin Glass, Annealing and Computation: 2. Classical spin models from ferromagnetic spin systems to spin glasses; 3. Simulated annealing; 4. Quantum spin glass; 5. Quantum dynamics; 6. Quantum annealing; Part II. Additional Notes: 7. Notes on adiabatic quantum computers; 8. Quantum information and quenching dynamics; 9. A brief historical note on the studies of quantum glass, annealing and computation.

  1. Efficient Memory Access with NumPy Global Arrays using Local Memory Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.; Berghofer, Dan C.

    This paper discusses work on Global Arrays of data on distributed multi-computer systems and on improving their performance. The tasks were completed at Pacific Northwest National Laboratory in the Science Undergraduate Laboratory Internship program in the summer of 2013 for the Data Intensive Computing Group in the Fundamental and Computational Sciences Directorate. The work was done on the Global Arrays Toolkit developed by this group. This toolkit is an interface that lets programmers more easily create arrays of data on networks of computers. This is useful because scientific computation is often done on large amounts of data, sometimes so large that individual computers cannot hold all of it. This data is held in array form and is best processed on supercomputers, which often consist of a network of individual computers doing their computation in parallel. One major challenge for this sort of programming is that operations on arrays spread over multiple computers are very complex, so an interface is needed to make these arrays appear as if they were on a single computer. This is what Global Arrays does. The work described here uses more efficient operations on that data that require less copying, which saves a lot of time because copying data across many different computers is time intensive. The solution is that when the data to be combined in a binary operation reside on the same computer, they are not copied when accessed; when they are on separate computers, only one set is copied when accessed. This saves time through less copying, although more data access operations are performed.

  2. Realization of rapid debugging for detection circuit of optical fiber gas sensor: Using an analog signal source

    NASA Astrophysics Data System (ADS)

    Tian, Changbin; Chang, Jun; Wang, Qiang; Wei, Wei; Zhu, Cunguang

    2015-03-01

    An optical fiber gas sensor mainly consists of two parts: the optical part and the detection circuit. In debugging the detection circuit, the optical part usually serves as the signal source. However, under debugging conditions the optical part can easily be influenced by many factors: fluctuations of the ambient temperature or the driving current result in instability of the wavelength and intensity of the laser; for a dual-beam sensor, different bends and stresses of the optical fiber lead to fluctuations of the intensity and phase; and the intensity noise from the collimator, coupler, and other optical devices in the system also degrades the purity of the optical-part-based signal source. In order to dramatically improve the debugging efficiency of the detection circuit and shorten the period of research and development, this paper describes an analog signal source consisting of a single chip microcomputer (SCM), an amplifier circuit, and a voltage-to-current conversion circuit. It can be used to realize rapid debugging of the detection circuit of the optical fiber gas sensor in place of the optical-part-based signal source. This analog signal source performs well and offers other advantages, such as simple operation, small size, and light weight.

  3. Computers in the English Classroom.

    ERIC Educational Resources Information Center

    Scioli, Frances; And Others

    Intended to provide help for teachers and supervisors in using computers in English classes as an enhancement of the instructional program, this guide is organized in three parts. Part 1 focuses on the many management issues related to computer use. This section of the guide presents ideas for helping students with limited keyboarding skills as…

  4. Counselor Computer Competence: Future Agenda for Counselor Educators.

    ERIC Educational Resources Information Center

    Dickel, C. Timothy

    This paper asserts that the computer has become an integral part of communication within the world culture and that it has tremendous utility for the counseling profession. Counselor educators are encouraged to incorporate computer competence into their curriculum. This report is divided into four parts. First, there is a brief discussion of the…

  5. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
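
    A generic version of such an analytical performance model is sketched below: compute work scales with the number of GPUs while data transfer over the shared interconnect and serial setup do not, which reproduces the saturation behaviour described. The timing constants and the model form are placeholders, not the published MGCS model.

        def predicted_time(n_gpus, compute_1gpu=120.0, transfer_total=8.0, serial=4.0):
            """Wall time [s]: compute scales as 1/N; transfers on the shared link and setup do not."""
            return serial + compute_1gpu / n_gpus + transfer_total

        t1 = predicted_time(1)
        for n in (1, 2, 4, 8, 14):
            print(f"{n:2d} GPUs: {predicted_time(n):6.1f} s  speedup {t1 / predicted_time(n):4.2f}x")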

  6. Fine-grained parallelization of fitness functions in bioinformatics optimization problems: gene selection for cancer classification and biclustering of gene expression data.

    PubMed

    Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo

    2016-08-31

    Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution within a size-limited population, and that step involves higher latencies than other parts of the algorithms, so the execution time of the application depends mainly on the execution time of the fitness function. In addition, it is usual to find floating-point arithmetic formulations for the fitness functions. This way, a careful parallelization of these functions using reconfigurable hardware technology will accelerate the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, allowed higher speedups and power-reduced computation to be obtained with regard to usual microprocessors. The results show better performance using reconfigurable hardware technology instead of usual microprocessors, in terms of computing time and power consumption, not only because of the parallelization of the arithmetic operations, but also thanks to the concurrent fitness evaluation for several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
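
    A software-level analogy of the bottleneck described above is sketched below: the fitness function is evaluated concurrently for many individuals of a population, since that step dominates run time. The study does this with fine-grained parallelism on reconfigurable hardware; a process pool is only a CPU stand-in, and the fitness function here is a placeholder, not one of the paper's two functions.

        from multiprocessing import Pool
        import random

        def fitness(individual):
            """Placeholder floating-point fitness, e.g. a score for a candidate gene subset."""
            return sum(x * x for x in individual)

        def evaluate_population(population, workers=4):
            """Evaluate the whole population concurrently; one fitness call per individual."""
            with Pool(processes=workers) as pool:
                return pool.map(fitness, population)

        if __name__ == "__main__":
            random.seed(0)
            population = [[random.uniform(-1, 1) for _ in range(50)] for _ in range(200)]
            scores = evaluate_population(population)
            print(min(scores), max(scores))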

  7. Projection-based estimation and nonuniformity correction of sensitivity profiles in phased-array surface coils.

    PubMed

    Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook

    2007-03-01

    To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.

  8. Integrated learning in practical machine element design course: a case study of V-pulley design

    NASA Astrophysics Data System (ADS)

    Tantrabandit, Manop

    2014-06-01

    To achieve effective integrated learning in a Machine Element Design course, it is important to bridge the basic knowledge and skills of element design. The multiple-core learning pathway consists of two main parts. The first part involves teaching documents covering the V-groove formulae, the standard for V-grooved pulleys, and the parallel-key dimension formulae. The second part draws on subjects the students studied before participating in this integrated learning course, namely Material Selection, Manufacturing Process, Applied Engineering Drawing, and CAD (Computer Aided Design) animation software. Moreover, intensive cooperation between the lecturer and the students is another key factor in the success of integrated learning. Last but not least, the students need to share their knowledge within their group and among the other groups, aiming to gain knowledge of and skills in 1) the application of CAD software to produce manufacturing part drawings, 2) assembly drawing, 3) simulation to verify the strength of the loaded pulley by Finite Element Analysis (FEA), 4) software to create animations of mounting and dismounting a pulley on a shaft, and 5) writing an instruction manual. As the end product of this integrated learning, drawing on the knowledge and skills in items 1 to 5, the participating students can create an assembly derived from the manufacturing part drawings and a video presentation with bilingual (English-Thai) audio description of a V-pulley with a datum diameter of 250 mm, 4 grooves, and groove type SPA.

  9. The Influence of Computer-Mediated Communication Systems on Community

    ERIC Educational Resources Information Center

    Rockinson-Szapkiw, Amanda J.

    2012-01-01

    As higher education institutions enter the intense competition of the rapidly growing global marketplace of online education, the leaders within these institutions are challenged to identify factors critical for developing and for maintaining effective online courses. Computer-mediated communication (CMC) systems are considered critical to…

  10. Determinants of Computer Utilization by Extension Personnel: A Structural Equations Approach

    ERIC Educational Resources Information Center

    Sivakumar, Paramasivan Sethuraman; Parasar, Bibudha; Das, Raghu Nath; Anantharaman, Mathevanpillai

    2014-01-01

    Purpose: Information technology (IT) has tremendous potential for fostering grassroots development and the Indian government has created various capital-intensive computer networks to promote agricultural development. However, research studies have shown that information technology investments are not always translated into productivity gains due…

  11. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  12. X-ray luminescence computed tomography imaging via multiple intensity weighted narrow beam irradiation

    NASA Astrophysics Data System (ADS)

    Feng, Bo; Gao, Feng; Zhao, Huijuan; Zhang, Limin; Li, Jiao; Zhou, Zhongxing

    2018-02-01

    The purpose of this work is to introduce and study a novel x-ray beam irradiation pattern for x-ray luminescence computed tomography (XLCT), termed multiple intensity-weighted narrow-beam irradiation. The proposed XLCT imaging method is studied through simulations of x-ray and diffuse light propagation. The optical photons emitted by x-ray excitable nanophosphors were collected by optical fiber bundles from the right-side surface of the phantom. Image reconstruction is based on the simulated measurements from 6 or 12 angular projections using 3-beam or 5-beam scanning modes. The proposed XLCT imaging method is compared against constant intensity-weighted narrow-beam XLCT. From the reconstructed XLCT images, we found that the Dice similarity and the quantitative ratio of the targets improved to a certain degree. The results demonstrate that the proposed method can offer high image quality and fast image acquisition simultaneously.
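
    The Dice similarity used to score the reconstructed targets is a standard overlap metric; a small illustrative sketch (not the authors' code), assuming binary masks for the true and reconstructed targets:

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)."""
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      true_target = np.zeros((64, 64), dtype=bool)
      true_target[20:30, 20:30] = True
      recon_target = np.zeros((64, 64), dtype=bool)
      recon_target[22:32, 21:31] = True
      print(f"Dice = {dice(true_target, recon_target):.3f}")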

  13. Stress Intensity Factors for Cracking Metal Structures under Rapid Thermal Loading. Volume 2. Theoretical Background

    DTIC Science & Technology

    1989-08-01

    thermal pulse loadings. The work couples a Green’s function integration technique for transient thermal stresses with the well-known influence function approach for calculating stress intensity factors. A total of seven of the most commonly used crack models were investigated in this study. A computer ...

  14. Challenges in reusing transactional data for daily documentation in neonatal intensive care.

    PubMed

    Kim, G R; Lawson, E E; Lehmann, C U

    2008-11-06

    The reuse of transactional data for clinical documentation requires navigation of computational, institutional and adaptive barriers. We describe organizational and technical issues in developing and deploying a daily progress note tool in a tertiary neonatal intensive care unit that reuses and aggregates data from a commercial integrated clinical information system.

  15. Intensity dependence of focused ultrasound lesion position

    NASA Astrophysics Data System (ADS)

    Meaney, Paul M.; Cahill, Mark D.; ter Haar, Gail R.

    1998-04-01

    Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical to predicting lesion formation in focused ultrasound surgery. To date, most models have used linear propagation to predict the intensity profiles needed to compute the temporally varying temperature distributions. These can be used to compute thermal dose contours, which can in turn be used to predict the extent of thermal damage. However, these simulations fail to adequately describe the abnormal lesion formation behavior observed in in vitro experiments in which the transducer drive levels are varied over a wide range. In these experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increasing transducer drive levels than would be predicted using linear propagation models. The simulations described herein utilize the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model, with the parabolic approximation for highly focused ultrasound waves, to demonstrate that the positions of the peak intensity and the lesion do indeed move closer to the transducer. This illustrates that, for accurate modeling of heating during FUS, nonlinear effects must be considered.
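
    The thermal dose contours referred to above are conventionally computed as cumulative equivalent minutes at 43 °C (CEM43, the Sapareto-Dewey formulation); a brief illustrative sketch (not from this paper), assuming a sampled temperature history at one tissue location:

      import numpy as np

      def cem43(temps_c, dt_s):
          """Cumulative equivalent minutes at 43 °C for a temperature history.

          temps_c: temperatures in °C sampled every dt_s seconds.
          Uses R = 0.5 for T >= 43 °C and R = 0.25 for T < 43 °C.
          """
          temps_c = np.asarray(temps_c, dtype=float)
          r = np.where(temps_c >= 43.0, 0.5, 0.25)
          return np.sum(r ** (43.0 - temps_c)) * dt_s / 60.0

      # Toy exposure: ramp to 55 °C over 10 s, then exponential cool-down
      t = np.arange(0.0, 20.0, 0.1)
      temps = 37.0 + 18.0 * np.clip(t / 10.0, 0.0, 1.0) * np.exp(-np.maximum(t - 10.0, 0.0) / 5.0)
      print(f"CEM43 = {cem43(temps, 0.1):.1f} equivalent minutes")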

  16. High-intensity positron microprobe at Jefferson Lab

    DOE PAGES

    Golge, Serkan; Vlahovic, Branislav; Wojtsekhowski, Bogdan B.

    2014-06-19

    We present a conceptual design for a novel continuous wave electron-linac based high-intensity slow-positron production source with a projected intensity on the order of 10^10 e+/s. Reaching this intensity in our design relies on the transport of positrons (T+ below 600 keV) from the electron-positron pair production converter target to a low-radiation and low-temperature area for moderation in a high-efficiency cryogenic rare gas moderator, solid Ne. The performance of the integrated beamline has been verified through computational studies. The computational results include Monte Carlo calculations of the optimized electron/positron beam energies, converter target thickness, synchronized raster system, transport of the beam from the converter target to the moderator, extraction of the beam from the channel, and moderation efficiency calculations. For the extraction of positrons from the magnetic channel a magnetic field terminator plug prototype has been built and experimental data on the effectiveness of this prototype are presented. The dissipation of the heat away from the converter target and radiation protection measures are also discussed.

  17. Frozen lattice and absorptive model for high angle annular dark field scanning transmission electron microscopy: A comparison study in terms of integrated intensity and atomic column position measurement.

    PubMed

    Alania, M; Lobato, I; Van Aert, S

    2018-01-01

    In this paper, both the frozen lattice (FL) and the absorptive potential (AP) approximation models are compared in terms of the integrated intensity and the precision with which atomic columns can be located from an image acquired using high angle annular dark field (HAADF) scanning transmission electron microscopy (STEM). The comparison is made for atoms of Cu, Ag, and Au. The integrated intensity is computed for both an isolated atomic column and an atomic column inside an FCC structure. The precision has been computed using the so-called Cramér-Rao Lower Bound (CRLB), which provides a theoretical lower bound on the variance with which parameters can be estimated. It is shown that the AP model results in accurate measurements of the integrated intensity only for small detector ranges at relatively low angles and for small thicknesses. In terms of the attainable precision, both methods show similar results, indicating picometer-range precision under realistic experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Digital image processing for information extraction.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.

  19. Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.

    PubMed

    Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar

    2012-01-01

    Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow such interaction during the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented using the FreeSurfer pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.

  20. Computer program for determining rotational line intensity factors for diatomic molecules

    NASA Technical Reports Server (NTRS)

    Whiting, E. E.

    1973-01-01

    A FORTRAN IV computer program that provides a new research tool for determining reliable rotational line intensity factors (also known as Honl-London factors) for most electric and magnetic dipole allowed diatomic transitions is described in detail. This user's manual includes instructions for preparing the input data, a program listing, detailed flow charts, and three sample cases. The program is applicable to spin-allowed dipole transitions with either or both states intermediate between Hund's case (a) and Hund's case (b) coupling, and to spin-forbidden dipole transitions with either or both states intermediate between Hund's case (c) and Hund's case (b) coupling.

  1. A general-purpose computer program for studying ultrasonic beam patterns generated with acoustic lenses

    NASA Technical Reports Server (NTRS)

    Roberti, Dino; Ludwig, Reinhold; Looft, Fred J.

    1988-01-01

    A 3-D computer model of a piston radiator with lenses for focusing and defocusing is presented. To achieve high-resolution imaging, the frequency of the transmitted and received ultrasound must be as high as 10 MHz. Current ultrasonic transducers produce an extremely narrow beam at these high frequencies and thus are not appropriate for imaging schemes such as synthetic-aperture focus techniques (SAFT). Consequently, a numerical analysis program has been developed to determine field intensity patterns that are radiated from ultrasonic transducers with lenses. Lens shapes are described and the field intensities are numerically predicted and compared with experimental results.

  2. One-step fabrication of nanostructure-covered microstructures using selective aluminum anodization based on non-uniform electric field

    NASA Astrophysics Data System (ADS)

    Park, Yong Min; Kim, Byeong Hee; Seo, Young Ho

    2016-06-01

    This paper presents a selective aluminum anodization technique for the fabrication of microstructures covered by nanoscale dome structures. It is possible to fabricate bulging microstructures, utilizing the different growth rates of anodic aluminum oxide in non-uniform electric fields, because the growth rate of anodic aluminum oxide depends on the intensity of electric field, or current density. After anodizing under a non-uniform electric field, bulging microstructures covered by nanostructures were fabricated by removing the residual aluminum layer. The non-uniform electric field induced by insulative micropatterns was estimated by computational simulations and verified experimentally. Utilizing computational simulations, the intensity profile of the electric field was calculated according to the ratio of height and width of the insulative micropatterns. To compare computational simulation results and experimental results, insulative micropatterns were fabricated using SU-8 photoresist. The results verified that the shape of the bottom topology of anodic alumina was strongly dependent on the intensity profile of the applied electric field, or current density. The one-step fabrication of nanostructure-covered microstructures can be applied to various fields, such as nano-biochip and nano-optics, owing to its simplicity and cost effectiveness.

  3. Empirical improvements for estimating earthquake response spectra with random‐vibration theory

    USGS Publications Warehouse

    Boore, David; Thompson, Eric M.

    2012-01-01

    The stochastic method of ground-motion simulation is often used in combination with the random-vibration theory to directly compute ground-motion intensity measures, thereby bypassing the more computationally intensive time-domain simulations. Key to the application of random-vibration theory to simulate response spectra is determining the duration (D_rms) used in computing the root-mean-square oscillator response. Boore and Joyner (1984) originally proposed an equation for D_rms, which was improved upon by Liu and Pezeshk (1999). Though these equations are both substantial improvements over using the duration of the ground-motion excitation for D_rms, we document systematic differences between the ground-motion intensity measures derived from the random-vibration and time-domain methods for both of these D_rms equations. These differences are generally less than 10% for most magnitudes, distances, and periods of engineering interest. Given the systematic nature of the differences, however, we feel that improved equations are warranted. We empirically derive new equations from time-domain simulations for eastern and western North America seismological models. The new equations improve the random-vibration simulations over a wide range of magnitudes, distances, and oscillator periods.
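
    For background, random-vibration theory turns the Fourier amplitude spectrum of the oscillator response into an rms response using the duration D_rms; a minimal sketch of that single step (illustrative only, not the authors' implementation, and omitting the peak-factor calculation that converts rms to peak response):

      import numpy as np

      def rms_oscillator_response(freqs_hz, fas, d_rms_s):
          """RMS response from an oscillator-filtered Fourier amplitude spectrum.

          Parseval's theorem gives the signal energy as the zeroth spectral moment
          m0 = 2 * integral(|A(f)|^2 df); dividing by D_rms yields the mean square.
          """
          m0 = 2.0 * np.trapz(np.asarray(fas) ** 2, np.asarray(freqs_hz))
          return np.sqrt(m0 / d_rms_s)

      # Toy spectrum peaked near a 1 Hz oscillator, with an assumed D_rms of 10 s
      freqs = np.linspace(0.01, 50.0, 5000)
      fas = np.exp(-((freqs - 1.0) ** 2) / 0.1)
      print(f"rms response ~ {rms_oscillator_response(freqs, fas, 10.0):.3f}")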

  4. Mechanical Properties versus Morphology of Ordered Polymers. Volume III. Part I

    DTIC Science & Technology

    1982-08-01

    measured by wide angle x-ray scattering and differential scanning calorimetry, is unrelated to the diffuse scattered intensity [62]. Cellulose acetate which ... increasing void fraction, in air swollen cellulose. Comparison of the volume fraction of voids calculated from the SAXS integrated intensity with ... 1964). 63. P.H. Hermans, D. Heikens, and A. Weidinger, "A Quantitative Investigation on the X-Ray Small Angle Scattering of Cellulose Fibers. Part II

  5. Strain Measurements within Fibre Boards. Part II: Strain Concentrations at the Crack Tip of MDF Specimens Tested by the Wedge Splitting Method

    PubMed Central

    Sinn, Gerhard; Müller, Ulrich; Konnerth, Johannes; Rathke, Jörn

    2012-01-01

    This is the second part of an article series where the mechanical and fracture mechanical properties of medium density fiberboard (MDF) were studied. While the first part of the series focused on internal bond strength and density profiles, this article discusses the fracture mechanical properties of the core layer. Fracture properties were studied with a wedge splitting setup. The critical stress intensity factors as well as the specific fracture energies were determined. Critical stress intensity factors were calculated from maximum splitting force and two-dimensional isotropic finite elements simulations of the specimen geometry. Size and shape of micro crack zone were measured with electronic laser speckle interferometry. The process zone length was approx. 5 mm. The specific fracture energy was determined to be 45.2 ± 14.4 J/m2 and the critical stress intensity factor was 0.11 ± 0.02 MPa.

  6. GPU-Accelerated Stony-Brook University 5-class Microphysics Scheme in WRF

    NASA Astrophysics Data System (ADS)

    Mielikainen, J.; Huang, B.; Huang, A.

    2011-12-01

    The Weather Research and Forecasting (WRF) model is a next-generation mesoscale numerical weather prediction system. Microphysics plays an important role in weather and climate prediction. Several bulk water microphysics schemes are available within WRF, with different numbers of simulated hydrometeor classes and different methods for estimating their size distributions, fall speeds, and densities. The Stony-Brook University scheme (SBU-YLIN) is a 5-class scheme with predicted riming intensity to account for mixed-phase processes. In the past few years, co-processing on Graphics Processing Units (GPUs) has been a disruptive technology in High Performance Computing (HPC). GPUs use their ever-increasing transistor counts to add more processor cores and are therefore well suited to massively data-parallel processing with high floating-point arithmetic intensity. Thus, it is imperative to update legacy scientific applications to take advantage of this unprecedented increase in computing power. CUDA is an extension to the C programming language for programming GPUs directly; it is designed so that its constructs allow natural expression of data-level parallelism. A CUDA program is organized into two parts: a serial program running on the CPU and a CUDA kernel running on the GPU. The CUDA code consists of three computational phases: transmission of data into the global memory of the GPU, execution of the CUDA kernel, and transmission of results from the GPU into the memory of the CPU. CUDA takes a bottom-up view of parallelism in which the thread is the atomic unit of parallelism. Individual threads are grouped into warps, within which every thread executes exactly the same sequence of instructions. To test SBU-YLIN, we used a CONtinental United States (CONUS) benchmark data set for a 12 km resolution domain for October 24, 2001. A WRF domain is a geographic region of interest discretized into a 2-dimensional grid parallel to the ground; each grid point has multiple levels, which correspond to various vertical heights in the atmosphere. The size of the CONUS 12 km domain is 433 x 308 horizontal grid points with 35 vertical levels. First, the entire SBU-YLIN Fortran code was rewritten in C in preparation for the GPU-accelerated version. The C code was then verified against the Fortran code for identical outputs. Default compiler options from WRF were used for the gfortran and gcc compilers. The processing time is 12274 ms for the original Fortran code and 12893 ms for the C version. The processing times for the GPU implementation of the SBU-YLIN microphysics scheme with I/O are 57.7 ms and 37.2 ms for 1 and 2 GPUs, respectively. The corresponding speedups are 213x and 330x compared to the Fortran implementation. Without I/O, the speedup is 896x on 1 GPU; ignoring I/O time, the speedup scales linearly with the number of GPUs, so 2 GPUs give a speedup of 1788x without I/O. Microphysics computation is just a small part of the whole WRF model. Once WRF has been completely implemented on the GPU, the inputs for SBU-YLIN will not have to be transferred from the CPU; instead they will be the results of previous WRF modules. Therefore, the role of I/O is greatly diminished once all of WRF has been converted to run on GPUs. In the near future, we expect to have WRF running completely on GPUs for superior performance.
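
    The CPU-host/GPU-kernel structure and the three phases described above (copy to GPU global memory, kernel execution, copy of results back to the CPU) can be illustrated with a tiny Python/Numba example; this is only a sketch of the pattern, unrelated to the actual WRF or SBU-YLIN CUDA code, and the kernel shown is a made-up elementwise clipping of a hydrometeor field:

      import numpy as np
      from numba import cuda

      @cuda.jit
      def clip_field(q, qmax):
          # Toy elementwise kernel: cap a mixing-ratio field at qmax.
          i = cuda.grid(1)
          if i < q.size and q[i] > qmax:
              q[i] = qmax

      q_host = np.random.rand(433 * 308 * 35).astype(np.float32)   # CONUS-12km-sized field
      d_q = cuda.to_device(q_host)                          # phase 1: host-to-device transfer
      threads = 256
      blocks = (q_host.size + threads - 1) // threads
      clip_field[blocks, threads](d_q, np.float32(0.9))     # phase 2: kernel execution
      q_host = d_q.copy_to_host()                           # phase 3: device-to-host transfer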

  7. Visual color matching system based on RGB LED light source

    NASA Astrophysics Data System (ADS)

    Sun, Lei; Huang, Qingmei; Feng, Chen; Li, Wei; Wang, Chaofeng

    2018-01-01

    In order to study the properties and performance of LEDs as RGB primary-color light sources for color mixture in visual psychophysical experiments, and to find out how LED light sources differ from traditional light sources, a visual color matching experiment system based on LED RGB primaries has been built. By reproducing the traditional metameric color-matching experiment of the CIE 1931 RGB color system, it can be used in visual color matching experiments to obtain a set of spectral tristimulus values, often called color-matching functions (CMFs). The system consists of three parts: a monochromatic light part using a blazed grating, a light mixing part in which the sum of the three LED illuminations is visually matched to a monochromatic illumination, and a visual observation part. The three narrow-band LEDs have dominant wavelengths of 640 nm (red), 522 nm (green) and 458 nm (blue), respectively, and their intensities can be controlled independently. After calibration of the wavelength and luminance of the LED sources with a spectrophotometer, a series of visual color matching experiments was carried out by 5 observers. The results are compared with those of the CIE 1931 RGB color system and have been used to compute an average locus for the spectral colors in the color triangle, with white at the center. The use of LEDs is shown to be feasible, with the advantages of easy control, good stability and low cost.
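
    The locus of spectral colors in the color triangle is obtained from the matched tristimulus values by normalizing them to chromaticity coordinates; a tiny illustrative sketch of that normalization (not the authors' code, with made-up tristimulus values):

      def chromaticity(r, g, b):
          """Chromaticity coordinates (r, g) from RGB tristimulus values."""
          total = r + g + b
          return r / total, g / total

      # Hypothetical matched tristimulus values for one monochromatic test light
      r_bar, g_bar, b_bar = 0.25, 0.60, 0.15
      print(chromaticity(r_bar, g_bar, b_bar))   # white would map near (1/3, 1/3)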

  8. Travelogue--a newcomer encounters statistics and the computer.

    PubMed

    Bruce, Peter

    2011-11-01

    Computer-intensive methods have revolutionized statistics, giving rise to new areas of analysis and expertise in predictive analytics, image processing, pattern recognition, machine learning, genomic analysis, and more. Interest naturally centers on the new capabilities the computer allows the analyst to bring to the table. This article, instead, focuses on the account of how computer-based resampling methods, with their relative simplicity and transparency, enticed one individual, untutored in statistics or mathematics, on a long journey into learning statistics, then teaching it, then starting an education institution.

  9. Measurement of transmission loss characteristics using acoustic intensity techniques at the KU-FRL Acoustic Test Facility

    NASA Technical Reports Server (NTRS)

    Roskam, J.

    1983-01-01

    The measurement of the transmission loss characteristics of panels using the acoustic intensity technique is presented. The theoretical formulation, installation of hardware, modifications to the test facility, and development of computer programs and test procedures are described. A listing of all the programs is also provided. The initial test results indicate that the acoustic intensity technique is easily adapted to measure the transmission loss characteristics of panels. Use of this method gives average transmission loss values. The fixtures developed to position the microphones along the grid points are very useful in plotting the intensity maps of vibrating panels.

  10. [Features of control of electromagnetic radiation emitted by personal computers].

    PubMed

    Pal'tsev, Iu P; Buzov, A L; Kol'chugin, Iu I

    1996-01-01

    Measurements of the electromagnetic radiation emitted by personal computers show that the main sources are PC units emitting waves at certain frequencies. Using wide-range detectors that measure total field intensity to assess PC electromagnetic radiation gives unreliable results; more precise measurements with selective devices are required. It is therefore expedient to introduce the term "spectral density of field intensity" and a corresponding maximum allowable level. In this approach the frequency spectrum of PC electromagnetic radiation is divided into four ranges; in one of them the field intensity is calculated for each harmonic frequency, while the others are assessed in terms of spectral density of field intensity.

  11. Processing of Signals from Fiber Bragg Gratings Using Unbalanced Interferometers

    NASA Technical Reports Server (NTRS)

    Adamovsky, Grigory; Juergens, Jeff; Floyd, Bertram

    2005-01-01

    Fiber Bragg gratings (FBGs) have become the preferred sensing structures in fiber optic sensing systems. High sensitivity, embeddability, and multiplexing capabilities make FBGs superior to other sensor configurations. The main feature of FBGs is that they respond in the wavelength domain, with the wavelength of the returned signal acting as the indicator of the measured parameter. The wavelength shift must then be converted into an intensity change so that a photodetector can register it. This wavelength-to-intensity conversion is a crucial part of any FBG-based sensing system. Among the various types of wavelength-to-intensity converters, unbalanced interferometers are especially attractive because of their small weight and volume, lack of moving parts, easy integration, and good stability. In this paper we investigate the applicability of unbalanced interferometers to the analysis of signals reflected from Bragg gratings. Analytical and experimental data are presented.
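
    A two-beam unbalanced interferometer performs the wavelength-to-intensity conversion through its path-imbalance-dependent transfer function; a minimal illustrative sketch assuming an ideal two-beam interferometer (not the authors' hardware), with approximate FBG sensitivity figures noted in the comment:

      import numpy as np

      def interferometer_output(wavelength_nm, opd_um, visibility=1.0):
          """Normalized output of an ideal two-beam interferometer.

          I/I0 = 0.5 * (1 + V * cos(2*pi*OPD/lambda)), so a shift of the Bragg
          wavelength changes the phase and hence the detected intensity.
          """
          phase = 2.0 * np.pi * (opd_um * 1.0e3) / wavelength_nm   # OPD converted to nm
          return 0.5 * (1.0 + visibility * np.cos(phase))

      # A 10 pm Bragg shift (roughly 1 degree C or ~8 microstrain for a typical FBG)
      for wl in (1550.000, 1550.010):
          print(f"{wl:.3f} nm -> {interferometer_output(wl, opd_um=500.0):.4f}")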

  12. Analysis of muscle activation in each body segment in response to the stimulation intensity of whole-body vibration.

    PubMed

    Lee, Dae-Yeon

    2017-02-01

    [Purpose] The purpose of this study was to investigate the effects of whole-body vibration exercise and to discuss the scientific basis for establishing an optimal intensity by analyzing the differences in muscle activation in each body segment according to the stimulation intensity of the whole-body vibration. [Subjects and Methods] The subjects were 10 healthy men in their 20s without orthopedic disease. Representative muscles from the subjects' primary body segments were selected while the subjects stood upright on the exercise machines, and electromyography electrodes were attached to the selected muscles. The muscle activities of each segment were then measured at different intensities: combinations of no vibration, volume settings of 50 and 80, and frequencies of 10, 25, and 40 Hz were applied while the subjects stood upright on the whole-body vibration exercise machines. Electromyographic signals were collected and muscle activation was analyzed as the root mean square of the signal. [Results] The analysis showed statistically significant differences in muscle activation with changes in exercise intensity in all 8 muscles. When the no-vibration condition was normalized to 1, muscle activation decreased at higher frequencies but increased at larger volume settings. [Conclusion] Whole-body vibration stimulation promoted muscle activation across the measured body segments, and the exercise effect in each muscle varied depending on the exercise intensity.
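
    The root mean square (RMS) used above to quantify muscle activation from the electromyographic signal is a standard amplitude estimate; a small illustrative sketch (not the study's processing pipeline), assuming a raw EMG trace sampled at 1 kHz:

      import numpy as np

      def emg_rms(signal, fs_hz=1000, window_ms=250):
          """Windowed RMS envelope of an EMG signal after removing its mean."""
          x = np.asarray(signal, dtype=float)
          x = x - x.mean()                          # remove DC offset
          n = int(fs_hz * window_ms / 1000)
          n_windows = x.size // n
          windows = x[: n_windows * n].reshape(n_windows, n)
          return np.sqrt(np.mean(windows ** 2, axis=1))

      rng = np.random.default_rng(2)
      # Synthetic 5 s EMG-like noise with slowly varying amplitude
      emg = rng.normal(scale=0.1, size=5000) * (1 + np.sin(np.linspace(0, 3 * np.pi, 5000)) ** 2)
      print(emg_rms(emg)[:5])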

  13. New evidence of a fast secular variation of the geomagnetic field 1000 BCE: archaeomagnetic study of Bavarian potteries

    NASA Astrophysics Data System (ADS)

    Hervé, G.; Gilder, S.; Fassbinder, J.; Metzler-Nebelsick, C.; Schnepp, E.; Geisweid, L.; Putz, A.; Reuss, S.; Riedel, G.; Westhausen, I.; Wittenborn, F.

    2016-12-01

    This study presents new archaeointensity results obtained on 350 pottery sherds from 45 graves and pits at 12 sites around Munich (Germany). The features are dated between 1400 and 400 BCE by ceramic and metallic artifacts, radiocarbon and dendrochronology. We collected only red- or partly red-colored sherds in order to minimize mineralogical alteration during the laboratory experiments. Rock magnetism analyses show that the remanent magnetization is mainly carried by titanomagnetite. Archaeointensities were determined using the Thellier-Thellier protocol with corrections for TRM anisotropy and cooling rate on one to three specimens per sherd. The experiments were completed using the Triaxe and multispecimen (MSP-DSC) methods. Around 60 per cent of the sherds provide reliable results, allowing the computation of 35 mean archaeointensity values; this quadruples the number of previously published data in Western Europe. The secular variation of the geomagnetic field strength is low from 1400 to 1200 BCE, with intensities close to 50 µT; the intensity then increased to 70 µT around 1000-900 BCE. After a minimum of 50 µT near 750 BCE, the intensity increased again to 90 µT at 650 BCE. This high secular variation rate (0.4 µT/year) is especially apparent in the sherds from a fountain dated between 750 and 650 BCE. The intensity then remained high until 400 BCE before rapidly decreasing towards 200 BCE. As the sharp change in geomagnetic direction around 800 BCE is not contemporaneous with an intensity high, this period is probably not characterized by an archaeomagnetic jerk. The trend of secular variation with two intensity maxima is similar to the one observed in the Near East. The Virtual Axial Dipole Moments of the two regions are approximately the same after 700 BCE, but before that they are systematically 1-2 × 10^22 Am^2 higher in the Near East. This difference may be further proof of a geomagnetic field anomaly in this area around 1000 BCE, yet there is no evidence for a geomagnetic spike in Western Europe. Finally, the fast rate of secular variation will provide an improved dating tool for archaeologists together with the available directional secular variation curves.

  14. MAGIC Computer Simulation. Volume 2: Analyst Manual, Part 1

    DTIC Science & Technology

    1971-05-01

    A review of the subject MAGIC Computer Simulation User and Analyst Manuals has been conducted based upon a request received from the US Army ... The MAGIC computer simulation generates target description data consisting of item-by-item listings of the target's components and air ...

  15. Characteristics of skylight at the zenith during twilight as indicators of atmospheric turbidity. 2: Intensity and color ratio.

    PubMed

    Coulson, K L

    1981-05-01

    This is the second of two papers based on an extensive series of measurements of the intensity and polarization of light from the zenith sky during periods of twilight, made at an altitude of 3400 m on the island of Hawaii. Part 1 dealt with the skylight polarization; part 2 covers the measured intensity and quantities derived from the intensity. The principal results are that (1) the polarization and intensity of light from the zenith during twilight are sensitive indicators of the existence of turbid layers in the stratosphere and upper troposphere, and (2) at least at Mauna Loa, primary scattering of the sunlight incident on the upper atmosphere during twilight is strongly dominant over secondary or multiple scattering at wavelengths beyond ~0.60 µm, whereas this is much less true at shorter wavelengths. It is suggested that the development and general use of a simple twilight polarimeter would greatly facilitate determinations of turbidity in the upper layers of the atmosphere.

  16. Software for Brain Network Simulations: A Comparative Study

    PubMed Central

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

    Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability to high-performance computing reveals that NEST can almost transparently map an existing model onto a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped onto a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in computational performance toward specific types of brain network models. PMID:28775687

  17. Wind profiling based on the optical beam intensity statistics in a turbulent atmosphere.

    PubMed

    Banakh, Victor A; Marakasov, Dimitrii A

    2007-10-01

    Reconstruction of the wind profile from the statistics of intensity fluctuations of an optical beam propagating in a turbulent atmosphere is considered. The equations for the spatiotemporal correlation function and the spectrum of weak intensity fluctuations of a Gaussian beam are obtained. The algorithms of wind profile retrieval from the spatiotemporal intensity spectrum are described and the results of end-to-end computer experiments on wind profiling based on the developed algorithms are presented. It is shown that the developed algorithms allow retrieval of the wind profile from the turbulent optical beam intensity fluctuations with acceptable accuracy in many practically feasible laser measurements set up in the atmosphere.

  18. Method and apparatus for determining the coordinates of an object

    DOEpatents

    Pedersen, Paul S; Sebring, Robert

    2003-01-01

    A method and apparatus is described for determining the coordinates on the surface of an object which is illuminated by a beam having pixels which have been modulated according to predetermined mathematical relationships with pixel position within the modulator. The reflected illumination is registered by an image sensor at a known location which registers the intensity of the pixels as received. Computations on the intensity, which relate the pixel intensities received to the pixel intensities transmitted at the modulator, yield the proportional loss of intensity and planar position of the originating pixels. The proportional loss and position information can then be utilized within triangulation equations to resolve the coordinates of associated surface locations on the object.

  19. Rescattering effects on intensity interferometry and initial conditions in relativistic heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Li, Yang

    The properties of the quark-gluon plasma are being thoroughly studied by means of relativistic heavy ion collisions. After its invention in astronomy in the 1950s, intensity interferometry was also found to be a robust method for probing the spatial and temporal structure of nuclear collisions. Although rescattering effects are negligible in elementary particle collisions, they may be very important for heavy ion collisions at RHIC and at the future LHC. Rescattering after production modifies the measured correlation function and makes it harder to extract the dynamical information from data. To better understand data that are obscured by this final-state process, we derive a general formula for intensity interferometry with which rescattering effects can be calculated easily. The formula can be used both non-relativistically and relativistically. Numerically, we find that rescattering effects on kaon interferometry for RHIC experiments can modify the measured ratio of the outward radius to the sideward radius, a sensitive probe of the equation of state, by as much as 15%. This is a nontrivial contribution that should be included to understand the data more accurately. The second part of this thesis concerns the initial conditions in relativistic heavy ion collisions. Although relativistic hydrodynamics is successful in explaining many aspects of the data, it is only valid after some finite time after nuclear contact, and the results depend on the choice of initial conditions, which so far have been very uncertain. I describe a formula based on the McLerran-Venugopalan model to compute the initial energy density. The soft gluon fields produced immediately after the overlap of the nuclei can be expanded as a power series in the proper time t. Solving the Yang-Mills equations with color current conservation gives analytical formulas for the fields. The local color charges on the transverse plane are stochastic variables and are treated by random walks. It is found that the fields are mainly longitudinal at early times. The initial energy densities are computed for both RHIC and the LHC.

  20. Special interests and subjective wellbeing in autistic adults.

    PubMed

    Grove, Rachel; Hoekstra, Rosa A; Wierda, Marlies; Begeer, Sander

    2018-05-01

    Special interests form part of the core features of autism. However, to date there has been limited research focusing on the role of special interests in the lives of autistic adults. This study surveyed autistic adults on their special interest topics, intensity, and motivation. It also assessed the relationship between special interests and a range of quality of life measures including subjective wellbeing and domain specific life satisfaction. About two thirds of the sample reported having a special interest, with relatively more males reporting a special interest than females. Special interest topics included computers, autism, music, nature and gardening. Most autistic adults engaged in more than one special interest, highlighting that these interests may not be as narrow as previously described. There were no differences in subjective wellbeing between autistic adults with and without special interests. However, for autistic adults who did have special interests, motivation for engaging in special interests was associated with increased subjective wellbeing. This indicates that motivation may play an important role in our understanding of special interests in autism. Special interests had a positive impact on autistic adults and were associated with higher subjective wellbeing and satisfaction across specific life domains including social contact and leisure. However, a very high intensity of engagement with special interests was negatively related to wellbeing. Combined, these findings have important implications for the role of special interests in the lives of autistic adults. Autism Res 2018, 11: 766-775. © 2018 International Society for Autism Research, Wiley Periodicals, Inc. Autistic adults reported having special interests in a range of topics, including computers, music, autism, nature and gardening. Special interests were associated with a number of positive outcomes for autistic adults. They were also related to subjective wellbeing and satisfaction across specific life domains including social contact and leisure. Very high intensity of engagement with special interests was related to lower levels of wellbeing. This highlights the important role that special interests play in the lives of autistic adults.
