Sample records for computationally intensive analysis

  1. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

    denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote... structure analysis often involve computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive... First, we extract a set of features from each block. Second, we compute the distance between these two sets of features. In conventional motion...

  2. Applications in Data-Intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.

    2010-04-01

    This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

  3. MSFC crack growth analysis computer program, version 2 (users manual)

    NASA Technical Reports Server (NTRS)

    Creager, M.

    1976-01-01

    An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.

  4. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform to solve data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.

  5. Information granules in image histogram analysis.

    PubMed

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this concept in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of pixel intensities and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
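
    The abstract does not spell out the granular algorithm, so the sketch below only illustrates the general idea it describes: histogram-based enhancement restricted to a selected intensity range, with the two range limits standing in for the two control parameters mentioned. The function name and parameterization are assumptions for illustration, not the paper's method.

```python
import numpy as np

def range_limited_equalization(img, lo, hi, n_levels=256):
    """Equalize only the intensities inside [lo, hi]; other pixels are kept.

    img    : 2-D array of CT intensities
    lo, hi : selected intensity range (assumed stand-ins for the paper's
             two control parameters)
    """
    out = img.astype(float).copy()
    mask = (img >= lo) & (img <= hi)
    vals = img[mask]
    if vals.size == 0:
        return out
    # histogram and cumulative distribution of the selected range only
    hist, edges = np.histogram(vals, bins=n_levels, range=(lo, hi))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    # map each in-range pixel through the CDF and back onto [lo, hi]
    idx = np.clip(np.searchsorted(edges, vals, side="right") - 1, 0, n_levels - 1)
    out[mask] = lo + cdf[idx] * (hi - lo)
    return out
```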

  6. QPROT: Statistical method for testing differential expression using protein-level intensity data in label-free quantitative proteomics.

    PubMed

    Choi, Hyungwon; Kim, Sinae; Fermin, Damian; Tsou, Chih-Chiang; Nesvizhskii, Alexey I

    2015-11-03

    We introduce QPROT, a statistical framework and computational tool for differential protein expression analysis using protein intensity data. QPROT is an extension of the QSPEC suite, originally developed for spectral count data, adapted for the analysis of continuously measured protein-level intensity data. QPROT offers a new intensity normalization procedure and model-based differential expression analysis, both of which account for missing data. Determination of differential expression for each protein is based on a standardized Z-statistic derived from the posterior distribution of the log fold change parameter, guided by the false discovery rate estimated by a well-known empirical Bayes method. We evaluated the classification performance of QPROT using the quantification calibration data from the Clinical Proteomic Technology Assessment for Cancer (CPTAC) study and a recently published Escherichia coli benchmark dataset, with evaluation of FDR accuracy in the latter. QPROT is a statistical framework with an accompanying software tool for comparative quantitative proteomics analysis. It features various extensions of the QSPEC method originally built for spectral count data analysis, including probabilistic treatment of missing values in protein intensity data. With the increasing popularity of label-free quantitative proteomics data, the proposed method and accompanying software suite will be immediately useful for many proteomics laboratories. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
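
    In broad strokes (a schematic reconstruction, not the exact QPROT formulation), the differential-expression call for protein $i$ standardizes the posterior of its log fold change $\mu_i$:

$$
Z_i = \frac{\operatorname{E}[\mu_i \mid \mathrm{data}]}{\sqrt{\operatorname{Var}[\mu_i \mid \mathrm{data}]}} ,
$$

    after which proteins are ranked by $Z_i$ and thresholded at a target false discovery rate estimated with an empirical Bayes procedure.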

  7. Application verification research of cloud computing technology in the field of real time aerospace experiment

    NASA Astrophysics Data System (ADS)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    To meet the real-time, reliability, and safety requirements of aerospace experiments, a single-center cloud computing application verification platform is constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on analysis of the test results, a preliminary conclusion is obtained: a cloud computing platform can be applied to compute-intensive aerospace experiment workloads, whereas for I/O-intensive workloads the traditional physical machine is recommended.

  8. Modern Computational Techniques for the HMMER Sequence Analysis

    PubMed Central

    2013-01-01

    This paper focuses on the latest research and critical reviews on modern computing architectures and software- and hardware-accelerated algorithms for bioinformatics data analysis, with an emphasis on one of the most important sequence analysis applications—hidden Markov models (HMM). We show a detailed performance comparison of sequence analysis tools on various computing platforms recently developed in the bioinformatics community. The characteristics of sequence analysis, such as its data- and compute-intensive nature, make it very attractive to optimize and parallelize using both traditional software approaches and innovative hardware acceleration technologies. PMID:25937944

  9. Quantitative ROESY analysis of computational models: structural studies of citalopram and β-cyclodextrin complexes by ¹H-NMR and computational methods.

    PubMed

    Ali, Syed Mashhood; Shamim, Shazia

    2015-07-01

    Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine the atom-accurate structure of the inclusion complexes. ¹H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1:1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but it was not clear whether one or both rings were included. Molecular mechanics and molecular dynamics calculations showed the entry of the fluoro-ring from the wider side of the β-CD cavity as the most favored mode of inclusion. Minimum-energy computational models were analyzed for their accuracy in atomic coordinates by comparison of calculated and experimental intermolecular ROESY peak intensities, which were not found to be in agreement. Several least-energy computational models were refined and analyzed until calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom accuracy and that quantitative ROESY analysis is a promising method for doing so. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios instead of absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.

  10. From cosmos to connectomes: the evolution of data-intensive science.

    PubMed

    Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S

    2014-09-17

    The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Nonlinear histogram binning for quantitative analysis of lung tissue fibrosis in high-resolution CT data

    NASA Astrophysics Data System (ADS)

    Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.

    2007-03-01

    Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high-resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer-aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second- and higher-order statistics, which capture the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans lie in the range [-1024, 1024]. Calculation of second-order statistics on this full range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray-level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second- and higher-order statistics for more accurate quantification of diffuse lung disease.
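
    The abstract names dynamic programming but not the exact objective; the sketch below assumes a Jenks-style criterion (minimum total within-bin weighted variance) purely as an illustration of how such a nonlinear binning can be computed.

```python
import numpy as np

def optimal_bins(hist, n_bins):
    """Nonlinear histogram binning by dynamic programming.

    Splits the gray-level axis into n_bins bins minimizing the total
    within-bin weighted variance (a Jenks-style criterion, assumed here).
    hist   : 1-D array of counts per gray level
    returns: indices of the first gray level of each bin
    """
    hist = np.asarray(hist, dtype=float)
    v = np.arange(len(hist), dtype=float)
    # prefix sums of weight, weighted value, and weighted squared value
    W = np.concatenate(([0.0], np.cumsum(hist)))
    S = np.concatenate(([0.0], np.cumsum(hist * v)))
    Q = np.concatenate(([0.0], np.cumsum(hist * v * v)))

    def seg_cost(lo, hi):  # weighted SSE of gray levels in [lo, hi)
        w = W[hi] - W[lo]
        if w == 0.0:
            return 0.0
        s = S[hi] - S[lo]
        return (Q[hi] - Q[lo]) - s * s / w

    n = len(hist)
    cost = np.full((n_bins, n + 1), np.inf)
    split = np.zeros((n_bins, n + 1), dtype=int)
    for i in range(1, n + 1):
        cost[0, i] = seg_cost(0, i)
    for j in range(1, n_bins):
        for i in range(j + 1, n + 1):
            for k in range(j, i):
                c = cost[j - 1, k] + seg_cost(k, i)
                if c < cost[j, i]:
                    cost[j, i], split[j, i] = c, k
    bounds, i = [], n  # backtrack the optimal bin boundaries
    for j in range(n_bins - 1, 0, -1):
        i = split[j, i]
        bounds.append(i)
    return [0] + bounds[::-1]
```

    For the roughly 2000 gray levels of a [-1024, 1024] CT range and 16-32 bins, this cubic-time recurrence remains tractable, though the paper's own formulation may differ.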

  12. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.

  13. Computation of the intensities of parametric holographic scattering patterns in photorefractive crystals.

    PubMed

    Schwalenberg, Simon

    2005-06-01

    The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.

  14. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  15. Analysis of intensity variability in multislice and cone beam computed tomography.

    PubMed

    Nackaerts, Olivia; Maes, Frederik; Yan, Hua; Couto Souza, Paulo; Pauwels, Ruben; Jacobs, Reinhilde

    2011-08-01

    The aim of this study was to evaluate the variability of intensity values in cone beam computed tomography (CBCT) imaging compared with multislice computed tomography Hounsfield units (MSCT HU) in order to assess the reliability of density assessments using CBCT images. A quality control phantom was scanned with an MSCT scanner and five CBCT scanners. In one CBCT scanner, the phantom was scanned repeatedly in the same and in different positions. Images were analyzed using registration to a mathematical model. MSCT images were used as a reference. Density profiles of MSCT showed stable HU values, whereas in CBCT imaging the intensity values were variable over the profile. Repositioning of the phantom resulted in large fluctuations in intensity values. The use of intensity values in CBCT images is not reliable, because the values are influenced by device, imaging parameters and positioning. © 2011 John Wiley & Sons A/S.

  16. CAPER 3.0: A Scalable Cloud-Based System for Data-Intensive Analysis of Chromosome-Centric Human Proteome Project Data Sets.

    PubMed

    Yang, Shuai; Zhang, Xinlei; Diao, Lihong; Guo, Feifei; Wang, Dan; Liu, Zhongyang; Li, Honglei; Zheng, Junjie; Pan, Jingshan; Nice, Edouard C; Li, Dong; He, Fuchu

    2015-09-04

    The Chromosome-centric Human Proteome Project (C-HPP) aims to catalog genome-encoded proteins using a chromosome-by-chromosome strategy. As the C-HPP proceeds, the increasing requirement for data-intensive analysis of the MS/MS data poses a challenge to the proteomic community, especially small laboratories lacking computational infrastructure. To address this challenge, we have updated the previous CAPER browser into a higher version, CAPER 3.0, which is a scalable cloud-based system for data-intensive analysis of C-HPP data sets. CAPER 3.0 uses cloud computing technology to facilitate MS/MS-based peptide identification. In particular, it can use both public and private cloud, facilitating the analysis of C-HPP data sets. CAPER 3.0 provides a graphical user interface (GUI) to help users transfer data, configure jobs, track progress, and visualize the results comprehensively. These features enable users without programming expertise to easily conduct data-intensive analysis using CAPER 3.0. Here, we illustrate the usage of CAPER 3.0 with four specific mass spectral data-intensive problems: detecting novel peptides, identifying single amino acid variants (SAVs) derived from known missense mutations, identifying sample-specific SAVs, and identifying exon-skipping events. CAPER 3.0 is available at http://prodigy.bprc.ac.cn/caper3.

  17. The M-Integral for Computing Stress Intensity Factors in Generally Anisotropic Materials

    NASA Technical Reports Server (NTRS)

    Warzynek, P. A.; Carter, B. J.; Banks-Sills, L.

    2005-01-01

    The objective of this project is to develop and demonstrate a capability for computing stress intensity factors in generally anisotropic materials. These objectives have been met. The primary deliverable of this project is this report and the information it contains. In addition, we have delivered the source code for a subroutine that computes stress intensity factors for anisotropic materials, encoded in both the C and Python programming languages, and made available a version of the FRANC3D program that incorporates this subroutine. Single-crystal superalloys are commonly used for components in the hot sections of contemporary jet and rocket engines. Because these components have a uniform atomic lattice orientation throughout, they exhibit anisotropic material behavior. This means that stress intensity solutions developed for isotropic materials are not appropriate for the analysis of crack growth in these materials. Until now, a general numerical technique did not exist for computing stress intensity factors of cracks in anisotropic materials, and cubic materials in particular. Such a capability was developed during the project and is described and demonstrated herein.

  18. Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.

    ERIC Educational Resources Information Center

    Scheeline, Alexander; Mork, Brian J.

    1988-01-01

    Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)

  19. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.

  20. A Set of Computer Projects for an Electromagnetic Fields Class.

    ERIC Educational Resources Information Center

    Gleeson, Ronald F.

    1989-01-01

    Presented are three computer projects: vector analysis, electric field intensities at various distances, and the Biot-Savart law. Programming suggestions and project results are provided. One month is suggested for each project. (MVL)
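
    As an illustration of the kind of project described (not code from the article itself), the Biot-Savart law can be evaluated numerically by summing the contributions of short wire segments, dB = (μ0/4π) I dl × r / |r|³:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def biot_savart_field(wire_points, current, field_point):
    """Magnetic field at field_point due to a polyline wire carrying `current`.

    wire_points : (N, 3) array of points tracing the wire
    field_point : (3,) observation point
    Sums dB = (mu0 / 4 pi) * I * dl x r / |r|^3 over the segments.
    """
    B = np.zeros(3)
    for a, b in zip(wire_points[:-1], wire_points[1:]):
        dl = b - a                          # segment vector
        r = field_point - 0.5 * (a + b)     # from segment midpoint to field point
        B += MU0 / (4 * np.pi) * current * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return B

# Example: 1 A in a long straight wire along z, field 1 cm away;
# the analytic value mu0*I/(2*pi*d) is about 2e-5 T.
wire = np.column_stack([np.zeros(2001), np.zeros(2001), np.linspace(-5.0, 5.0, 2001)])
print(biot_savart_field(wire, 1.0, np.array([0.01, 0.0, 0.0])))
```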

  1. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction, using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness, and computation speed, adequate for use in a clinical environment.
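
    A minimal sketch of the registration idea, using scikit-learn's FastICA and a plain FFT cross-correlation for per-frame shift estimation; the component handling and the pure-translation shift model are simplified assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA

def estimate_shift(ref, img):
    """Integer translation of img relative to ref via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def register_perfusion_series(frames, n_components=3):
    """frames: (T, H, W) perfusion series; returns one (dy, dx) per frame.

    ICA on the time-by-pixel matrix yields time courses (sources) and spatial
    maps (mixing matrix); their product rebuilds a time-varying reference that
    mimics contrast passage, against which each frame is registered.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, -1).astype(float)
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(X)                    # (T, n_components) time courses
    reference = sources @ ica.mixing_.T + ica.mean_   # (T, H*W) reference frames
    return [estimate_shift(reference[t].reshape(H, W), frames[t]) for t in range(T)]
```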

  2. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper does not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE), and focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  3. Travelogue--a newcomer encounters statistics and the computer.

    PubMed

    Bruce, Peter

    2011-11-01

    Computer-intensive methods have revolutionized statistics, giving rise to new areas of analysis and expertise in predictive analytics, image processing, pattern recognition, machine learning, genomic analysis, and more. Interest naturally centers on the new capabilities the computer allows the analyst to bring to the table. This article, instead, focuses on the account of how computer-based resampling methods, with their relative simplicity and transparency, enticed one individual, untutored in statistics or mathematics, on a long journey into learning statistics, then teaching it, then starting an education institution.

  4. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  5. Simulating Quantile Models with Applications to Economics and Management

    NASA Astrophysics Data System (ADS)

    Machado, José A. F.

    2010-05-01

    The massive increase in the speed of computers over the past forty years changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they could feasibly tackle. The new methods that make intensive use of computing power go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions Using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
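
    A schematic of the simulation idea behind the cited Machado-Mata approach, under simplified assumptions (statsmodels' QuantReg, a small quantile grid): fit the conditional quantile process, then draw quantiles and covariate rows to simulate the implied marginal or counterfactual outcome distribution.

```python
import numpy as np
import statsmodels.api as sm

def simulate_marginal(X, y, X_counterfactual, n_draws=5000, quantiles=None, seed=0):
    """Machado-Mata-style simulation of a (counterfactual) marginal distribution.

    Fits quantile regressions of y on X over a grid of quantiles, then evaluates
    randomly drawn quantiles at covariate rows drawn from X_counterfactual.
    """
    rng = np.random.default_rng(seed)
    if quantiles is None:
        quantiles = np.linspace(0.05, 0.95, 19)
    Xc = sm.add_constant(np.asarray(X))
    betas = {q: sm.QuantReg(y, Xc).fit(q=q).params for q in quantiles}
    Xcf = sm.add_constant(np.asarray(X_counterfactual))
    draws = np.empty(n_draws)
    for i in range(n_draws):
        q = rng.choice(quantiles)               # random quantile of the process
        row = Xcf[rng.integers(len(Xcf))]       # random covariate vector
        draws[i] = row @ np.asarray(betas[q])
    return draws
```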

  6. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, M.S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device. 27 figs.

  7. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.; Wang, Chunwei; Jevons, Luis C.; Bernhart, Derek H.; Lipshutz, Robert J.

    2004-05-11

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  8. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    1998-08-18

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  9. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    2003-08-19

    A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.

  10. Latent Semantic Analysis as a Method of Content-Based Image Retrieval in Medical Applications

    ERIC Educational Resources Information Center

    Makovoz, Gennadiy

    2010-01-01

    The research investigated whether a Latent Semantic Analysis (LSA)-based approach to image retrieval can map pixel intensity into a smaller concept space with good accuracy and reasonable computational cost. From a large set of M computed tomography (CT) images, a retrieval query found all images for a particular patient based on semantic…
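
    A minimal sketch of the LSA idea applied to images (illustrative only, not the study's pipeline): flatten per-image intensity features into an image-by-feature matrix, reduce it with a truncated SVD, and rank images by cosine similarity in the reduced concept space.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def build_concept_space(feature_matrix, n_concepts=50):
    """feature_matrix: (n_images, n_features), e.g. intensity histograms."""
    svd = TruncatedSVD(n_components=n_concepts, random_state=0)
    concept_vectors = svd.fit_transform(feature_matrix)
    return svd, concept_vectors

def retrieve(svd, concept_vectors, query_features, top_k=5):
    """Indices of the top_k images most similar to the query in concept space."""
    q = svd.transform(np.asarray(query_features).reshape(1, -1))
    sims = cosine_similarity(q, concept_vectors).ravel()
    return np.argsort(sims)[::-1][:top_k]
```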

  11. In Situ Three-Dimensional Reciprocal-Space Mapping of Diffuse Scattering Intensity Distribution and Data Analysis for Precursor Phenomenon in Shape-Memory Alloy

    NASA Astrophysics Data System (ADS)

    Cheng, Tian-Le; Ma, Fengde D.; Zhou, Jie E.; Jennings, Guy; Ren, Yang; Jin, Yongmei M.; Wang, Yu U.

    2012-01-01

    Diffuse scattering contains rich information on various structural disorders, thus providing a useful means to study the nanoscale structural deviations from the average crystal structures determined by Bragg peak analysis. Extraction of maximal information from diffuse scattering requires concerted efforts in high-quality three-dimensional (3D) data measurement, quantitative data analysis and visualization, theoretical interpretation, and computer simulations. Such an endeavor is undertaken to study the correlated dynamic atomic position fluctuations caused by thermal vibrations (phonons) in precursor state of shape-memory alloys. High-quality 3D diffuse scattering intensity data around representative Bragg peaks are collected by using in situ high-energy synchrotron x-ray diffraction and two-dimensional digital x-ray detector (image plate). Computational algorithms and codes are developed to construct the 3D reciprocal-space map of diffuse scattering intensity distribution from the measured data, which are further visualized and quantitatively analyzed to reveal in situ physical behaviors. Diffuse scattering intensity distribution is explicitly formulated in terms of atomic position fluctuations to interpret the experimental observations and identify the most relevant physical mechanisms, which help set up reduced structural models with minimal parameters to be efficiently determined by computer simulations. Such combined procedures are demonstrated by a study of phonon softening phenomenon in precursor state and premartensitic transformation of Ni-Mn-Ga shape-memory alloy.

  12. Digital image processing for information extraction.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.
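
    As a small illustration of one correction mentioned above (vignetting and nonuniform intensity response), a standard flat-field correction divides out the camera response measured on a uniformly illuminated target; the frame names below are assumptions, not the article's procedure.

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Remove vignetting / nonuniform response before information extraction.

    raw  : image of the scene
    flat : image of a uniformly illuminated target (captures the vignetting)
    dark : image with the shutter closed (offset / dark current)
    """
    gain = flat.astype(float) - dark
    gain /= gain.mean()                       # normalize so the average gain is 1
    return (raw.astype(float) - dark) / np.clip(gain, 1e-6, None)
```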

  13. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  14. NEW GIS WATERSHED ANALYSIS TOOLS FOR SOIL CHARACTERIZATION AND EROSION AND SEDIMENTATION MODELING

    EPA Science Inventory

    A comprehensive procedure for computing soil erosion and sediment delivery metrics has been developed which utilizes a suite of automated scripts and a pair of processing-intensive executable programs operating on a personal computer platform.

  15. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    1999-10-26

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  16. Computer-aided visualization and analysis system for sequence evaluation

    DOEpatents

    Chee, Mark S.

    2001-06-05

    A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

  17. ASTEC: Controls analysis for personal computers

    NASA Technical Reports Server (NTRS)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  18. Impedance computations and beam-based measurements: A problem of discrepancy

    NASA Astrophysics Data System (ADS)

    Smaluk, Victor

    2018-04-01

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  19. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  20. An independent software system for the analysis of dynamic MR images.

    PubMed

    Torheim, G; Lombardi, M; Rinck, P A

    1997-01-01

    A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis, and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.

  1. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
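
    The abstract does not spell out the sampling scheme; below is only a minimal sketch of importance sampling for a failure probability P[g(X) < 0], with the sampling density recentred toward the failure region (the standard-normal setting and the shift vector are assumptions for illustration).

```python
import numpy as np

def failure_probability(g, dim, shift, n_samples=100_000, seed=0):
    """Estimate P[g(X) < 0] for X ~ N(0, I) by importance sampling.

    Samples are drawn from a standard normal recentred at `shift` (e.g. near a
    design point); the likelihood ratio corrects for the change of density.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, dim)) + shift
    # weight = phi(z) / phi(z - shift) = exp(-z.shift + 0.5*|shift|^2)
    w = np.exp(-z @ shift + 0.5 * shift @ shift)
    failed = np.array([g(x) < 0.0 for x in z], dtype=float)
    return float(np.mean(failed * w))

# toy limit state: failure when the sum of two standard normals exceeds 5
g = lambda x: 5.0 - x.sum()
print(failure_probability(g, dim=2, shift=np.array([2.5, 2.5])))
```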

  2. Preliminary Evaluation of MapReduce for High-Performance Climate Data Analysis

    NASA Technical Reports Server (NTRS)

    Duffy, Daniel Q.; Schnase, John L.; Thompson, John H.; Freeman, Shawn M.; Clune, Thomas L.

    2012-01-01

    MapReduce is an approach to high-performance analytics that may be useful to data intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. We are particularly interested in the potential of MapReduce to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we are prototyping a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. Our initial focus has been on averaging operations over arbitrary spatial and temporal extents within Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. Preliminary results suggest this approach can improve efficiencies within data intensive analytic workflows.
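
    A toy illustration in plain Python (not the MERRA prototype itself) of how an average over an arbitrary spatio-temporal extent decomposes into map and reduce steps: mappers emit partial (sum, count) pairs per grid cell, and the reducer merges them before the final division.

```python
from functools import reduce

def map_block(block):
    """block: iterable of (lat, lon, time, value) records from one data chunk.
    Emits a dict of cell_key -> (partial_sum, partial_count)."""
    partial = {}
    for lat, lon, t, value in block:
        key = (round(lat), round(lon))        # 1-degree cells, as an example
        s, c = partial.get(key, (0.0, 0))
        partial[key] = (s + value, c + 1)
    return partial

def merge_partials(a, b):
    """Reduce step: merge two dicts of (sum, count) partials."""
    out = dict(a)
    for key, (s, c) in b.items():
        s0, c0 = out.get(key, (0.0, 0))
        out[key] = (s0 + s, c0 + c)
    return out

def average(blocks):
    merged = reduce(merge_partials, map(map_block, blocks), {})
    return {key: s / c for key, (s, c) in merged.items()}
```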

  3. Kernel analysis in TeV gamma-ray selection

    NASA Astrophysics Data System (ADS)

    Moriarty, P.; Samuelson, F. W.

    2000-06-01

    We discuss the use of kernel analysis as a technique for selecting gamma-ray candidates in Atmospheric Cherenkov astronomy. The method is applied to observations of the Crab Nebula and Markarian 501 recorded with the Whipple 10 m Atmospheric Cherenkov imaging system, and the results are compared with the standard Supercuts analysis. Since kernel analysis is computationally intensive, we examine approaches to reducing the computational load. Extension of the technique to estimate the energy of the gamma-ray primary is considered.

  4. From sequencer to supercomputer: an automatic pipeline for managing and processing next generation sequencing data.

    PubMed

    Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun

    2012-01-01

    Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.

  5. Evaluating a Computerized Aid for Conducting a Cognitive Task Analysis

    DTIC Science & Technology

    2000-01-01

    in conducting a cognitive task analysis. The conduct of a cognitive task analysis is costly and labor intensive. As a result, a few computerized aids... evaluation of a computerized aid, specifically CAT-HCI (Cognitive Analysis Tool - Human Computer Interface), for the conduct of a detailed cognitive task analysis.

  6. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
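
    A plain-Python sketch of the composite-key idea (Hadoop-specific details such as partitioners and skew mitigation are omitted, and the algorithm names are illustrative assumptions): each mapper runs every registered algorithm on its input and prefixes the emitted key with the algorithm id, so a single job's shuffle keeps the algorithms' intermediate data separate.

```python
from collections import defaultdict

ALGORITHMS = {
    "wordcount": lambda rec: [(w, 1) for w in rec.split()],
    "charcount": lambda rec: [(ch, 1) for ch in rec if not ch.isspace()],
}

def mapper(record):
    """Run every algorithm on the record; emit ((algorithm, key), value)."""
    for name, fn in ALGORITHMS.items():
        for key, value in fn(record):
            yield (name, key), value

def run_job(records):
    """One 'job' executing all algorithms, grouped by the composite key."""
    groups = defaultdict(list)
    for record in records:
        for composite_key, value in mapper(record):
            groups[composite_key].append(value)
    return {k: sum(v) for k, v in groups.items()}   # reducer: sum per key

print(run_job(["to be or not to be"]))
```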

  7. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

  8. Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.

    PubMed

    Maryn, Youri; Zarowski, Andrzej

    2015-11-01

    Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
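
    A minimal sketch of the conversion step described (a least-squares line mapping a system's uncalibrated intensity readings onto sound-level-meter values; the numbers below are hypothetical, not the study's data):

```python
import numpy as np

def fit_calibration(system_db, meter_db):
    """Least-squares line mapping uncalibrated system readings (dB) to dB SPL."""
    slope, intercept = np.polyfit(system_db, meter_db, deg=1)
    return slope, intercept

def calibrate(system_db, slope, intercept):
    return slope * np.asarray(system_db) + intercept

# hypothetical multi-level comparison against a sound-level meter
system = np.array([52.1, 60.3, 68.2, 75.9, 84.0])
meter = np.array([55.0, 63.0, 71.0, 79.0, 87.0])
m, b = fit_calibration(system, meter)
print(calibrate(66.0, m, b))
```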

  9. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  10. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  11. Impedance computations and beam-based measurements: A problem of discrepancy

    DOE PAGES

    Smaluk, Victor

    2018-04-21

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  12. Impedance computations and beam-based measurements: A problem of discrepancy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smaluk, Victor

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  13. Computational chemistry and aeroassisted orbital transfer vehicles

    NASA Technical Reports Server (NTRS)

    Cooper, D. M.; Jaffe, R. L.; Arnold, J. O.

    1985-01-01

    An analysis of the radiative heating phenomena encountered during a typical aeroassisted orbital transfer vehicle (AOTV) trajectory was made to determine the potential impact of computational chemistry on AOTV design technology. Both equilibrium and nonequilibrium radiation mechanisms were considered. This analysis showed that computational chemistry can be used to predict (1) radiative intensity factors and spectroscopic data; (2) the excitation rates of both atoms and molecules; (3) high-temperature reaction rate constants for metathesis and charge exchange reactions; (4) particle ionization and neutralization rates and cross sections; and (5) spectral line widths.

  14. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was on the Cray-YMP using an average of 3.63 processors.
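
    The computationally intensive steps listed above can be made concrete with a minimal Lanczos sketch for a symmetric matrix; it is not the optimized vector/parallel code of the record, and it omits reorthogonalization and the shift-invert factorization used for buckling and vibration problems.

        import numpy as np

        def lanczos(A, k, seed=0):
            """Basic Lanczos tridiagonalization of a symmetric matrix A.

            Returns the k x k tridiagonal matrix T whose extreme eigenvalues
            approximate those of A.  The dominant cost per step is one
            matrix-vector product, one of the steps highlighted above.
            """
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            q = rng.standard_normal(n)
            q /= np.linalg.norm(q)
            q_prev = np.zeros(n)
            alpha, beta = np.zeros(k), np.zeros(k - 1)
            for j in range(k):
                w = A @ q                                  # matrix-vector multiply
                alpha[j] = q @ w
                w -= alpha[j] * q
                if j > 0:
                    w -= beta[j - 1] * q_prev
                if j < k - 1:
                    beta[j] = np.linalg.norm(w)
                    q_prev, q = q, w / beta[j]
            return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

        # Hypothetical usage on a small diagonal test matrix
        A = np.diag(np.arange(1.0, 101.0))
        T = lanczos(A, 20)
        print(np.sort(np.linalg.eigvalsh(T))[-3:])         # approximates the largest eigenvalues of A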

  15. Hot prominence detected in the core of a coronal mass ejection. II. Analysis of the C III line detected by SOHO/UVCS

    NASA Astrophysics Data System (ADS)

    Jejčič, S.; Susino, R.; Heinzel, P.; Dzifčáková, E.; Bemporad, A.; Anzer, U.

    2017-11-01

    Context. We study the physics of erupting prominences in the core of coronal mass ejections (CMEs) and present a continuation of a previous analysis. Aims: We determine the kinetic temperature and microturbulent velocity of an erupting prominence embedded in the core of a CME that occurred on August 2, 2000 using the Ultraviolet Coronagraph and Spectrometer observations (UVCS) on board the Solar and Heliospheric Observatory (SOHO) simultaneously in the hydrogen Lα and C III lines. We develop the non-LTE (departures from the local thermodynamic equilibrium - LTE) spectral diagnostics based on Lα and Lβ measured integrated intensities to derive other physical quantities of the hot erupting prominence. Based on this, we synthesize the C III line intensity to compare it with observations. Methods: Our method is based on non-LTE modeling of eruptive prominences. We used a general non-LTE radiative-transfer code only for optically thin prominence points because optically thick points do not allow the direct determination of the kinetic temperature and microturbulence from the line profiles. The input parameters of the code were the kinetic temperature and microturbulent velocity derived from the Lα and C III line widths, as well as the integrated intensity of the Lα and Lβ lines. The code runs in three loops to compute the radial flow velocity, electron density, and effective thickness as the best fit to the Lα and Lβ integrated intensities within the accuracy defined by the absolute radiometric calibration of UVCS data. Results: We analyzed 39 observational points along the whole erupting prominence because for these points we found a solution for the kinetic temperature and microturbulent velocity. For these points we ran the non-LTE code to determine best-fit models. All models with τ0(Lα) ≤ 0.3 and τ0(C III) ≤ 0.3 were analyzed further, for which we computed the integrated intensity of the C III line using a two-level atom. The best agreement between computed and observed integrated intensity led to 30 optically thin points along the prominence. The results are presented as histograms of the kinetic temperature, microturbulent velocity, effective thickness, radial flow velocity, electron density, and gas pressure. We also show the relation between the microturbulence and kinetic temperature together with a scatter plot of computed versus observed C III integrated intensities and the ratio of the computed to observed C III integrated intensities versus kinetic temperature. Conclusions: The erupting prominence embedded in the CME is relatively hot with a low electron density, a wide range of effective thicknesses, a rather narrow range of radial flow velocities, and a microturbulence of about 25 km s-1. This analysis shows a disagreement between observed and synthetic intensities of the C III line, the reason for which most probably is that photoionization is neglected in calculations of the ionization equilibrium. Alternatively, the disagreement might be due to non-equilibrium processes.

  16. On a 3-D singularity element for computation of combined mode stress intensities

    NASA Technical Reports Server (NTRS)

    Atluri, S. N.; Kathiresan, K.

    1976-01-01

    A special three-dimensional singularity element is developed for the computation of combined modes 1, 2, and 3 stress intensity factors, which vary along an arbitrarily curved crack front in three dimensional linear elastic fracture problems. The finite element method is based on a displacement-hybrid finite element model, based on a modified variational principle of potential energy, with arbitrary element interior displacements, interelement boundary displacements, and element boundary tractions as variables. The special crack-front element used in this analysis contains the square root singularity in strains and stresses, where the stress-intensity factors K(1), K(2), and K(3) are quadratically variable along the crack front and are solved directly along with the unknown nodal displacements.
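
    For reference, the square-root singularity mentioned above is the standard near-tip asymptotic field of linear elastic fracture mechanics. In a generic form (not specific to the hybrid element of this record), the stresses near a point on the crack front behave as

        \sigma_{ij}(r,\theta) = \frac{1}{\sqrt{2\pi r}}\left[ K_1\, f^{(1)}_{ij}(\theta) + K_2\, f^{(2)}_{ij}(\theta) + K_3\, f^{(3)}_{ij}(\theta) \right] + O(1),

    where r is the distance from the crack front, the angular functions f are the standard mode shapes, and K_1, K_2, K_3 (the K(1), K(2), K(3) above) vary along the curved crack front. The strains and stresses therefore scale as r^{-1/2}, which is the singularity embedded in the special crack-front element.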

  17. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity.

    PubMed

    Madeleine, Pascal; Vangsgaard, Steffen; Hviid Andersen, Johan; Ge, Hong-You; Arendt-Nielsen, Lars

    2013-08-01

    Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users.

  18. Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2016-12-01

    Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm with an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
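
    The contrast between the L2-norm and the robust L1-norm adjustment can be illustrated with a small sketch: one standard way to minimize the L1 norm of the residuals is iteratively reweighted least squares, which down-weights observations with large residuals (such as group delays still carrying an integer ambiguity). This is a generic stand-in, not the c5++ implementation.

        import numpy as np

        def irls_l1(A, y, iters=50, eps=1e-6):
            """Approximately solve min_x ||A x - y||_1 by iteratively reweighted least squares."""
            x = np.linalg.lstsq(A, y, rcond=None)[0]        # L2-norm starting point
            for _ in range(iters):
                r = np.abs(A @ x - y)
                w = 1.0 / np.maximum(r, eps)                # large residuals get small weights
                sw = np.sqrt(w)
                x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
            return x

        # Hypothetical usage: a linear model with one gross outlier
        rng = np.random.default_rng(1)
        A = np.column_stack([np.ones(20), np.linspace(0.0, 1.0, 20)])
        y = A @ np.array([0.5, 2.0]) + 0.01 * rng.standard_normal(20)
        y[3] += 5.0                                         # simulated unresolved ambiguity
        print(irls_l1(A, y))                                # close to [0.5, 2.0]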

  19. Development of a Computer-Based Visualised Quantitative Learning System for Playing Violin Vibrato

    ERIC Educational Resources Information Center

    Ho, Tracy Kwei-Liang; Lin, Huann-shyang; Chen, Ching-Kong; Tsai, Jih-Long

    2015-01-01

    Traditional methods of teaching music are largely subjective, with the lack of objectivity being particularly challenging for violin students learning vibrato because of the existence of conflicting theories. By using a computer-based analysis method, this study found that maintaining temporal coincidence between the intensity peak and the target…

  20. A Fuzzy Computing Model for Identifying Polarity of Chinese Sentiment Words

    PubMed Central

    Huang, Yongfeng; Wu, Xian; Li, Xing

    2015-01-01

    With the spurt of online user-generated content on the web, sentiment analysis has become a very active research issue in data mining and natural language processing. As the most important indicator of sentiment, sentiment words which convey positive and negative polarity are quite instrumental for sentiment analysis. However, most of the existing methods for identifying the polarity of sentiment words only consider the positive and negative polarity by the Cantor set, and no attention is paid to the fuzziness of the polarity intensity of sentiment words. In order to improve the performance, we propose a fuzzy computing model to identify the polarity of Chinese sentiment words in this paper. There are three major contributions in this paper. Firstly, we propose a method to compute the polarity intensity of sentiment morphemes and sentiment words. Secondly, we construct a fuzzy sentiment classifier and propose two different methods to compute the parameter of the fuzzy classifier. Thirdly, we conduct extensive experiments on four sentiment words datasets and three review datasets, and the experimental results indicate that our model performs better than the state-of-the-art methods. PMID:26106409

  1. MeDICi Software Superglue for Data Analysis Pipelines

    ScienceCinema

    Ian Gorton

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework is an integrated middleware platform developed to solve data analysis and processing needs of scientists across many domains. MeDICi is scalable, easily modified, and robust to multiple languages, protocols, and hardware platforms, and in use today by PNNL scientists for bioinformatics, power grid failure analysis, and text analysis.

  2. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  3. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching, and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section, and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
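
    The "fit a decreasing parametric function to the section mean intensity curve" baseline that the paper improves upon can be sketched in a few lines; the exponential decay model and the per-section division are assumptions for illustration, and the robust, motion-estimation-based scheme of the paper is not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        def correct_depth_attenuation(stack):
            """Naive depth-intensity correction of a CLSM stack (sections along axis 0)."""
            z = np.arange(stack.shape[0], dtype=float)
            means = stack.reshape(stack.shape[0], -1).mean(axis=1)   # section mean intensities
            decay = lambda z, i0, k: i0 * np.exp(-k * z)
            (i0, k), _ = curve_fit(decay, z, means, p0=(means[0], 0.01))
            factors = decay(z, i0, k) / i0                           # relative attenuation per section
            return stack / factors[:, None, None]

        # Hypothetical usage on a synthetic 32-section stack
        stack = np.exp(-0.05 * np.arange(32))[:, None, None] * np.ones((32, 64, 64))
        corrected = correct_depth_attenuation(stack)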

  4. Quantitative Assay for Starch by Colorimetry Using a Desktop Scanner

    ERIC Educational Resources Information Center

    Matthews, Kurt R.; Landmark, James D.; Stickle, Douglas F.

    2004-01-01

    A procedure is described for producing a standard curve for starch concentration measurement by image analysis, using a color scanner and a computer for data acquisition and color analysis. Color analysis is performed by a Visual Basic program that measures red, green, and blue (RGB) color intensities for pixels within the scanner image.
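
    A sketch of the per-region color measurement that underlies such a standard curve; the file name, region coordinates, and use of Pillow/NumPy are assumptions, not the Visual Basic program of the record.

        import numpy as np
        from PIL import Image

        def mean_rgb(path, box):
            """Mean red, green and blue intensities of a rectangular region of a scanned image.

            `box` is (left, upper, right, lower) in pixels; measuring each standard
            well this way gives one point of the starch standard curve.
            """
            region = np.asarray(Image.open(path).convert("RGB").crop(box), dtype=float)
            return region.reshape(-1, 3).mean(axis=0)       # (R, G, B)

        # Hypothetical usage (file name is illustrative only)
        # r, g, b = mean_rgb("starch_scan.png", (100, 100, 160, 160))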

  5. Measuring and Estimating Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2013-01-01

    Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating up the part surface using a flash of flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of pixel intensity of image pixels. The IR technique involves recording of the IR video image data and analysis of the data using the normalized pixel intensity and temperature contrast analysis method for characterization of void-like flaws for depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements during data acquisition.
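
    For orientation only, a commonly used running contrast in flash thermography compares the pixel intensity (or surface temperature) over a suspected flaw with that over nearby sound material at the same post-flash time,

        C(t) = \frac{I_{\mathrm{def}}(t) - I_{\mathrm{ref}}(t)}{I_{\mathrm{ref}}(t)},

    where I_def is the flaw-pixel intensity and I_ref the sound-material reference. The normalized contrast introduced in this record differs in detail and is not reproduced here; the expression above is only the conventional baseline it refines.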

  6. Computational laser intensity stabilisation for organic molecule concentration estimation in low-resource settings

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Kazemzadeh, Farnoud; Wong, Alexander

    2017-03-01

    An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization property of lasers allows the concentrations of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always work as intended, and effects such as mode hopping and thermal drift can cause time-varying intensity fluctuations. These effects can originate from the surrounding environment, for example when an unstable current source is used or when the ambient temperature is not temporally stable. Such intensity fluctuations can introduce bias and error into typical organic molecule concentration estimation techniques. In a low-resource setting, where cost must be limited and where environmental factors, like unregulated power supplies and temperature, cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations from electrical and thermal instabilities without the use of additional hardware. This method will allow for consistent intensities across all polarization measurements for accurate estimates of organic molecule concentrations.
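
    A minimal example of Bayesian state estimation applied to a drifting intensity reading: a one-dimensional Kalman filter in which the state is the "true" laser intensity. The noise variances and random-walk drift model are assumptions for illustration, not the estimator developed by the authors.

        import numpy as np

        def kalman_smooth_intensity(y, q=1e-5, r=1e-2):
            """Track a slowly drifting laser intensity from noisy readings y."""
            x, p = float(y[0]), 1.0
            out = np.empty(len(y))
            for i, z in enumerate(y):
                p += q                       # predict: intensity drifts as a random walk
                k = p / (p + r)              # Kalman gain
                x += k * (z - x)             # update with the new reading
                p *= 1.0 - k
                out[i] = x
            return out

        # Hypothetical usage: noisy readings around a slowly varying intensity
        t = np.linspace(0.0, 1.0, 500)
        truth = 1.0 + 0.05 * np.sin(2.0 * np.pi * t)
        readings = truth + 0.02 * np.random.default_rng(2).standard_normal(t.size)
        stabilised = kalman_smooth_intensity(readings)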

  7. SU-F-J-94: Development of a Plug-in Based Image Analysis Tool for Integration Into Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, D; Anderson, C; Mayo, C

    Purpose: To extend the functionality of a commercial treatment planning system (TPS) to support (i) direct use of quantitative image-based metrics within treatment plan optimization and (ii) evaluation of dose-functional volume relationships to assist in functional image adaptive radiotherapy. Methods: A script was written that interfaces with a commercial TPS via an Application Programming Interface (API). The script executes a program that performs dose-functional volume analyses. Written in C#, the script reads the dose grid and correlates it with image data on a voxel-by-voxel basis through API extensions that can access registration transforms. A user interface was designed through WinForms to input parameters and display results. To test the performance of this program, image- and dose-based metrics computed from perfusion SPECT images aligned to the treatment planning CT were generated, validated, and compared. Results: The integration of image analysis information was successfully implemented as a plug-in to a commercial TPS. Perfusion SPECT images were used to validate the calculation and display of image-based metrics as well as dose-intensity metrics and histograms for defined structures on the treatment planning CT. Various biological dose correction models, custom image-based metrics, dose-intensity computations, and dose-intensity histograms were applied to analyze the image-dose profile. Conclusion: It is possible to add image analysis features to commercial TPSs through custom scripting applications. A tool was developed to enable the evaluation of image-intensity-based metrics in the context of functional targeting and avoidance. In addition to providing dose-intensity metrics and histograms that can be easily extracted from a plan database and correlated with outcomes, the system can also be extended to a plug-in optimization system, which can directly use the computed metrics for optimization of post-treatment tumor or normal tissue response models. Supported by NIH - P01 - CA059827.
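
    The voxel-by-voxel dose-image correlation described above boils down to collecting (dose, intensity) pairs over a structure mask; a hedged sketch is given below. The synthetic arrays and binning are illustrative, and the plug-in's actual interface is not reproduced.

        import numpy as np

        def dose_intensity_histogram(dose, image, mask, dose_bins, intensity_bins):
            """2-D histogram of (dose, image intensity) over the voxels of a structure.

            `dose`, `image` and the boolean `mask` are assumed to be voxel-aligned,
            i.e. already resampled through the registration transform.
            """
            hist, _, _ = np.histogram2d(dose[mask].ravel(), image[mask].ravel(),
                                        bins=[dose_bins, intensity_bins])
            return hist

        # Hypothetical usage on synthetic volumes
        rng = np.random.default_rng(3)
        dose = rng.uniform(0.0, 70.0, (40, 64, 64))        # Gy
        spect = rng.uniform(0.0, 100.0, (40, 64, 64))      # perfusion counts
        lung = np.zeros((40, 64, 64), dtype=bool)
        lung[:, 10:40, 10:40] = True
        h = dose_intensity_histogram(dose, spect, lung,
                                     np.arange(0, 75, 5), np.arange(0, 110, 10))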

  8. Computational Methods for Analyzing Health News Coverage

    ERIC Educational Resources Information Center

    McFarlane, Delano J.

    2011-01-01

    Researchers that investigate the media's coverage of health have historically relied on keyword searches to retrieve relevant health news coverage, and manual content analysis methods to categorize and score health news text. These methods are problematic. Manual content analysis methods are labor intensive, time consuming, and inherently…

  9. A computer program to evaluate optical systems

    NASA Technical Reports Server (NTRS)

    Innes, D.

    1972-01-01

    A computer program is used to evaluate a 25.4 cm X-ray telescope at a field angle of 20 minutes of arc by geometrical analysis. The object is regarded as a point source of electromagnetic radiation, and the optical surfaces are treated as boundary conditions in the solution of the electromagnetic wave propagation equation. The electric field distribution is then determined in the region of the image and the intensity distribution inferred. A comparison of wave analysis results and photographs taken through the telescope shows excellent agreement.

  10. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
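
    The recipe sketched in the abstract (fix a detection threshold from the Type I error, then find the intensity whose Type II error drops to the chosen level) can be written down directly for Poisson counts. The step size and the simple counting model are assumptions for illustration.

        from scipy.stats import poisson

        def upper_limit(background, alpha=0.05, beta=0.5, step=0.01):
            """Minimum Poisson source intensity detectable over a known background.

            The threshold is the smallest count whose false-positive probability
            (Type I error) under the background-only model is below `alpha`; the
            upper limit is the smallest source intensity whose probability of
            falling below that threshold (Type II error) is at most `beta`.
            """
            n_thr = int(poisson.ppf(1.0 - alpha, background)) + 1   # detection threshold in counts
            s = 0.0
            while poisson.cdf(n_thr - 1, background + s) > beta:    # Type II error still too large
                s += step
            return n_thr, s

        # Hypothetical usage: background of 3 expected counts, 1% false-positive rate, 50% power
        threshold, u_lim = upper_limit(3.0, alpha=0.01, beta=0.5)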

  11. Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.

    PubMed

    Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar

    2012-01-01

    Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow certain interaction in the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer-pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.

  12. A general-purpose computer program for studying ultrasonic beam patterns generated with acoustic lenses

    NASA Technical Reports Server (NTRS)

    Roberti, Dino; Ludwig, Reinhold; Looft, Fred J.

    1988-01-01

    A 3-D computer model of a piston radiator with lenses for focusing and defocusing is presented. To achieve high-resolution imaging, the frequency of the transmitted and received ultrasound must be as high as 10 MHz. Current ultrasonic transducers produce an extremely narrow beam at these high frequencies and thus are not appropriate for imaging schemes such as synthetic-aperture focus techniques (SAFT). Consequently, a numerical analysis program has been developed to determine field intensity patterns that are radiated from ultrasonic transducers with lenses. Lens shapes are described and the field intensities are numerically predicted and compared with experimental results.

  13. Evaluation of factors associated with severe and frequent back pain in high school athletes.

    PubMed

    Noll, Matias; Silveira, Erika Aparecida; Avelar, Ivan Silveira de

    2017-01-01

    Several studies have shown that half of all young athletes experience back pain (BP). However, high intensity and frequency of BP may be harmful, and the factors associated with BP severity have not been investigated in detail. Here, we investigated the factors associated with a high intensity and high frequency of BP in high school athletes. We included 251 athletes (173 boys and 78 girls [14-20 years old]) in this cross-sectional study. The dependent variables were a high frequency and high intensity of BP, and the independent variables were demographic, socioeconomic, psychosocial, hereditary, anthropometric, behavioural, and postural factors and the level of exercise. The effect measure is presented as prevalence ratio (PR) with 95% confidence interval (CI). Of 251 athletes, 104 reported BP; thus, only these athletes were included in the present analysis. Results of multivariable analysis showed an association between high BP intensity and time spent using a computer (PR: 1.15, CI: 1.01-1.33), posture while writing (PR: 1.41, CI: 1.27-1.58), and posture while using a computer (PR: 1.39, CI: 1.26-1.54). Multivariable analysis also revealed an association of high BP frequency with studying in bed (PR: 1.19, CI: 1.01-1.40) and the method of carrying a backpack (PR: 1.19, CI: 1.01-1.40). In conclusion, we found that behavioural and postural factors are associated with a high intensity and frequency of BP. To the best of our knowledge, this study is the first to compare different intensities and frequencies of BP, and our results may help physicians and coaches to better understand BP in high school athletes.

  14. Evaluation of factors associated with severe and frequent back pain in high school athletes

    PubMed Central

    Noll, Matias; Silveira, Erika Aparecida; de Avelar, Ivan Silveira

    2017-01-01

    Several studies have shown that half of all young athletes experience back pain (BP). However, high intensity and frequency of BP may be harmful, and the factors associated with BP severity have not been investigated in detail. Here, we investigated the factors associated with a high intensity and high frequency of BP in high school athletes. We included 251 athletes (173 boys and 78 girls [14–20 years old]) in this cross-sectional study. The dependent variables were a high frequency and high intensity of BP, and the independent variables were demographic, socioeconomic, psychosocial, hereditary, anthropometric, behavioural, and postural factors and the level of exercise. The effect measure is presented as prevalence ratio (PR) with 95% confidence interval (CI). Of 251 athletes, 104 reported BP; thus, only these athletes were included in the present analysis. Results of multivariable analysis showed an association between high BP intensity and time spent using a computer (PR: 1.15, CI: 1.01–1.33), posture while writing (PR: 1.41, CI: 1.27–1.58), and posture while using a computer (PR: 1.39, CI: 1.26–1.54). Multivariable analysis also revealed an association of high BP frequency with studying in bed (PR: 1.19, CI: 1.01–1.40) and the method of carrying a backpack (PR: 1.19, CI: 1.01–1.40). In conclusion, we found that behavioural and postural factors are associated with a high intensity and frequency of BP. To the best of our knowledge, this study is the first to compare different intensities and frequencies of BP, and our results may help physicians and coaches to better understand BP in high school athletes. PMID:28222141

  15. The Virtual Earthquake and Seismology Research Community e-science environment in Europe (VERCE) FP7-INFRA-2011-2 project

    NASA Astrophysics Data System (ADS)

    Vilotte, J.-P.; Atkinson, M.; Michelini, A.; Igel, H.; van Eck, T.

    2012-04-01

    Increasingly dense seismic and geodetic networks are continuously transmitting a growing wealth of data from around the world. The multiple uses of these data have led the seismological community to pioneer globally distributed open-access data infrastructures, standard services and formats, e.g., the Federation of Digital Seismic Networks (FDSN) and the European Integrated Data Archives (EIDA). Our ability to acquire observational data outpaces our ability to manage, analyze and model them. Research in seismology is today facing a fundamental paradigm shift. Enabling advanced data-intensive analysis and modeling applications challenges conventional storage, computation and communication models and requires a new holistic approach. It is instrumental to exploit the cornucopia of data, and to guarantee optimal operation and design of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of the seismological data-intensive applications in data analysis and modeling. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of those applications, integrating the data infrastructures with Grid, Cloud and HPC infrastructures. It will allow prototyping solutions for new use cases as they emerge within the European Plate Observatory Systems (EPOS), the ESFRI initiative of the solid Earth community. Computational seismology, and information management, is increasingly revolving around massive amounts of data that stem from: (1) the flood of data from the observational systems; (2) the flood of data from large-scale simulations and inversions; (3) the ability to economically store petabytes of data online; and (4) the evolving Internet and data-aware computing capabilities. As data-intensive applications rapidly increase in scale and complexity, they require additional service-oriented architectures offering virtualization-based flexibility for complex and re-usable workflows. Scientific information management poses computer science challenges: acquisition, organization, query and visualization tasks scale almost linearly with the data volumes. The commonly used FTP-GREP metaphor allows today to scan gigabyte-sized datasets but will not work for scanning terabyte-sized continuous waveform datasets. New data analysis and modeling methods, exploiting the signal coherence within dense network arrays, are nonlinear. Pair algorithms on N points scale as N². Waveform inversion and stochastic simulations raise computing and data-handling challenges. These applications are infeasible for tera-scale datasets without new parallel algorithms that use near-linear processing, storage and bandwidth, and that can exploit new computing paradigms enabled by the intersection of several technologies (HPC, parallel scalable database crawlers, data-aware HPC). These issues will be discussed on the basis of a number of core pilot data-intensive applications and use cases retained in VERCE. These core applications are related to: (1) data processing and data analysis methods based on correlation techniques; and (2) CPU-intensive applications such as large-scale simulation of synthetic waveforms in complex earth systems, and full waveform inversion and tomography. We shall analyze their workflow and data flow, and their requirements for a new service-oriented architecture and a data-aware platform with services and tools.
Finally, we will outline the importance of a new collaborative environment between seismology and computer science, together with the need for the emergence and the recognition of 'research technologists' mastering the evolving data-aware technologies and the data-intensive research goals in seismology.

  16. Multiscale intensity homogeneity transformation method and its application to computer-aided detection of pulmonary embolism in computed tomographic pulmonary angiography (CTPA)

    NASA Astrophysics Data System (ADS)

    Guo, Yanhui; Zhou, Chuan; Chan, Heang-Ping; Wei, Jun; Chughtai, Aamer; Sundaram, Baskaran; Hadjiiski, Lubomir M.; Patel, Smita; Kazerooni, Ella A.

    2013-04-01

    A 3D multiscale intensity homogeneity transformation (MIHT) method was developed to reduce false positives (FPs) in our previously developed CAD system for pulmonary embolism (PE) detection. In MIHT, the voxel intensity of a PE candidate region was transformed to an intensity homogeneity value (IHV) with respect to the local median intensity. The IHVs were calculated in multiscales (MIHVs) to measure the intensity homogeneity, taking into account vessels of different sizes and different degrees of occlusion. Seven new features including the entropy, gradient, and moments that characterized the intensity distributions of the candidate regions were derived from the MIHVs and combined with the previously designed features that described the shape and intensity of PE candidates for the training of a linear classifier to reduce the FPs. 59 CTPA PE cases were collected from our patient files (UM set) with IRB approval and 69 cases from the PIOPED II data set with access permission. 595 and 800 PEs were identified as reference standard by experienced thoracic radiologists in the UM and PIOPED set, respectively. FROC analysis was used for performance evaluation. Compared with our previous CAD system, at a test sensitivity of 80%, the new method reduced the FP rate from 18.9 to 14.1/scan for the PIOPED set when the classifier was trained with the UM set and from 22.6 to 16.0/scan vice versa. The improvement was statistically significant (p<0.05) by JAFROC analysis. This study demonstrated that the MIHT method is effective in reducing FPs and improving the performance of the CAD system.
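
    The abstract does not give the exact IHV formula, so the sketch below is only a loose interpretation of "intensity relative to the local median at several scales"; the window sizes and difference form are assumptions, not the CAD system's implementation.

        import numpy as np
        from scipy.ndimage import median_filter

        def multiscale_homogeneity(volume, scales=(3, 5, 9)):
            """Per-voxel intensity minus the local median, at several window sizes."""
            return np.stack([volume - median_filter(volume, size=s) for s in scales])

        # Hypothetical usage on a synthetic CT sub-volume
        vol = np.random.default_rng(4).normal(100.0, 30.0, (32, 32, 32))
        ihv = multiscale_homogeneity(vol)                   # shape (3, 32, 32, 32)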

  17. Computational approaches to computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The various techniques by which the goal of computational aeroacoustics (the calculation and noise prediction of a fluctuating fluid flow) may be achieved are reviewed. The governing equations for compressible fluid flow are presented. The direct numerical simulation approach is shown to be computationally intensive for high Reynolds number viscous flows. Therefore, other approaches, such as the acoustic analogy, vortex models and various perturbation techniques that aim to break the analysis into a viscous part and an acoustic part are presented. The choice of the approach is shown to be problem dependent.

  18. Northwest Trajectory Analysis Capability: A Platform for Enhancing Computational Biophysics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Elena S.; Stephan, Eric G.; Corrigan, Abigail L.

    2008-07-30

    As computational resources continue to increase, the ability of computational simulations to effectively complement, and in some cases replace, experimentation in scientific exploration also increases. Today, large-scale simulations are recognized as an effective tool for scientific exploration in many disciplines including chemistry and biology. A natural side effect of this trend has been the need for an increasingly complex analytical environment. In this paper, we describe Northwest Trajectory Analysis Capability (NTRAC), an analytical software suite developed to enhance the efficiency of computational biophysics analyses. Our strategy is to layer higher-level services and introduce improved tools within the user’s familiar environment without preventing researchers from using traditional tools and methods. Our desire is to share these experiences to serve as an example for effectively analyzing data intensive large scale simulation data.

  19. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity

    PubMed Central

    2013-01-01

    Background Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. Methods A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Results Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). Conclusions The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users. PMID:23915209

  20. Extracting the Data From the LCM vk4 Formatted Output File

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off the shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.

  1. SAVLOC, computer program for automatic control and analysis of X-ray fluorescence experiments

    NASA Technical Reports Server (NTRS)

    Leonard, R. F.

    1977-01-01

    A program for a PDP-15 computer is presented which provides for control and analysis of trace element determinations by using X-ray fluorescence. The program simultaneously handles data accumulation for one sample and analysis of data from previous samples. Data accumulation consists of sample changing, timing, and data storage. Analysis requires locating the peaks in X-ray spectra, determining the intensities of the peaks, identifying the origins of the peaks, and determining the areal density of the element responsible for each peak. The program may be run in either a manual (supervised) mode or an automatic (unsupervised) mode.
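
    A modern stand-in for the peak-location and intensity-determination steps (clearly not the original PDP-15 code): peaks are located in the spectrum and a straight-line local background is subtracted to estimate each net peak area. Thresholds and the background model are illustrative assumptions.

        import numpy as np
        from scipy.signal import find_peaks

        def locate_xrf_peaks(spectrum, min_height=50.0, min_prominence=20.0):
            """Locate peaks in an X-ray fluorescence spectrum and estimate their net areas."""
            peaks, props = find_peaks(spectrum, height=min_height, prominence=min_prominence)
            areas = []
            for p, lb, rb in zip(peaks, props["left_bases"], props["right_bases"]):
                # Straight-line background between the bases of each peak
                background = np.linspace(spectrum[lb], spectrum[rb], rb - lb + 1)
                areas.append(float(np.sum(spectrum[lb:rb + 1] - background)))
            return peaks, areas

        # Hypothetical usage on a synthetic two-line spectrum
        x = np.arange(1024)
        spec = (10.0 + 400.0 * np.exp(-0.5 * ((x - 300) / 4.0) ** 2)
                + 250.0 * np.exp(-0.5 * ((x - 610) / 5.0) ** 2))
        channels, net_areas = locate_xrf_peaks(spec)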

  2. Multiscale hidden Markov models for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Nowak, Robert D.

    1999-06-01

    Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.

  3. Ganalyzer: A tool for automatic galaxy image analysis

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2011-05-01

    Ganalyzer is a model-based tool that automatically analyzes and classifies galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large datasets of galaxy images collected by autonomous sky surveys such as SDSS, LSST or DES.
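
    A rough take on the radial intensity plot step: sample the image intensity around circles of increasing radius, so that for a spiral galaxy the angular position of the brightness peak shifts with radius and the slope of that shift measures spirality. Interpolation and pre-processing details are assumptions, not Ganalyzer's implementation.

        import numpy as np

        def radial_intensity_plot(image, center, radii, n_angles=360):
            """Intensity sampled on circles of the given radii (rows) versus angle (columns)."""
            theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            plot = np.empty((len(radii), n_angles))
            for i, r in enumerate(radii):
                # Nearest-pixel sampling around the circle of radius r
                y = np.clip(np.round(center[0] + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
                x = np.clip(np.round(center[1] + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
                plot[i] = image[y, x]
            return theta, plot

        # Hypothetical usage
        img = np.random.default_rng(5).random((128, 128))
        theta, plot = radial_intensity_plot(img, (64, 64), radii=np.arange(5, 60, 5))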

  4. A Cyber-ITS Framework for Massive Traffic Data Analysis Using Cyber Infrastructure

    PubMed Central

    Fontaine, Michael D.

    2013-01-01

    Traffic data is commonly collected from widely deployed sensors in urban areas. This brings up a new research topic, data-driven intelligent transportation systems (ITSs), which means to integrate heterogeneous traffic data from different kinds of sensors and apply it for ITS applications. This research, taking into consideration the significant increase in the amount of traffic data and the complexity of data analysis, focuses mainly on the challenge of solving data-intensive and computation-intensive problems. As a solution to the problems, this paper proposes a Cyber-ITS framework to perform data analysis on Cyber Infrastructure (CI), by nature parallel-computing hardware and software systems, in the context of ITS. The techniques of the framework include data representation, domain decomposition, resource allocation, and parallel processing. All these techniques are based on data-driven and application-oriented models and are organized as a component-and-workflow-based model in order to achieve technical interoperability and data reusability. A case study of the Cyber-ITS framework is presented later based on a traffic state estimation application that uses the fusion of massive Sydney Coordinated Adaptive Traffic System (SCATS) data and GPS data. The results prove that the Cyber-ITS-based implementation can achieve a high accuracy rate of traffic state estimation and provide a significant computational speedup for the data fusion by parallel computing. PMID:23766690

  5. A Cyber-ITS framework for massive traffic data analysis using cyber infrastructure.

    PubMed

    Xia, Yingjie; Hu, Jia; Fontaine, Michael D

    2013-01-01

    Traffic data is commonly collected from widely deployed sensors in urban areas. This brings up a new research topic, data-driven intelligent transportation systems (ITSs), which means to integrate heterogeneous traffic data from different kinds of sensors and apply it for ITS applications. This research, taking into consideration the significant increase in the amount of traffic data and the complexity of data analysis, focuses mainly on the challenge of solving data-intensive and computation-intensive problems. As a solution to the problems, this paper proposes a Cyber-ITS framework to perform data analysis on Cyber Infrastructure (CI), by nature parallel-computing hardware and software systems, in the context of ITS. The techniques of the framework include data representation, domain decomposition, resource allocation, and parallel processing. All these techniques are based on data-driven and application-oriented models and are organized as a component-and-workflow-based model in order to achieve technical interoperability and data reusability. A case study of the Cyber-ITS framework is presented later based on a traffic state estimation application that uses the fusion of massive Sydney Coordinated Adaptive Traffic System (SCATS) data and GPS data. The results prove that the Cyber-ITS-based implementation can achieve a high accuracy rate of traffic state estimation and provide a significant computational speedup for the data fusion by parallel computing.

  6. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  7. Big Data, Big Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Bill

    Data—lots of data—generated in seconds and piling up on the internet, streaming and stored in countless databases. Big data is important for commerce, society and our nation’s security. Yet the volume, velocity, variety and veracity of data is simply too great for any single analyst to make sense of alone. It requires advanced, data-intensive computing. Simply put, data-intensive computing is the use of sophisticated computers to sort through mounds of information and present analysts with solutions in the form of graphics, scenarios, formulas, new hypotheses and more. This scientific capability is foundational to PNNL’s energy, environment and security missions. Senior Scientist and Division Director Bill Pike and his team are developing analytic tools that are used to solve important national challenges, including cyber systems defense, power grid control systems, intelligence analysis, climate change and scientific exploration.

  8. Pteros: fast and easy to use open-source C++ library for molecular analysis.

    PubMed

    Yesylevskyy, Semen O

    2012-07-15

    An open-source Pteros library for molecular modeling and analysis of molecular dynamics trajectories for C++ programming language is introduced. Pteros provides a number of routine analysis operations ranging from reading and writing trajectory files and geometry transformations to structural alignment and computation of nonbonded interaction energies. The library features asynchronous trajectory reading and parallel execution of several analysis routines, which greatly simplifies development of computationally intensive trajectory analysis algorithms. Pteros programming interface is very simple and intuitive while the source code is well documented and easily extendible. Pteros is available for free under open-source Artistic License from http://sourceforge.net/projects/pteros/. Copyright © 2012 Wiley Periodicals, Inc.

  9. Custom blending of lamp phosphors

    NASA Technical Reports Server (NTRS)

    Klemm, R. E.

    1978-01-01

    Spectral output of fluorescent lamps can be precisely adjusted by using computer-assisted analysis for custom blending lamp phosphors. With this technique, the spectrum of the main bank of lamps is measured and stored in computer memory along with the emission characteristics of commonly available phosphors. The computer then calculates the ratio of green and blue intensities for each phosphor according to the manufacturer's specifications and plots them as coordinates on a graph. The same ratios are calculated for the measured spectrum. Once the proper mix is determined, it is applied as a coating to the fluorescent tubing.

  10. A novel visual saliency analysis model based on dynamic multiple feature combination strategy

    NASA Astrophysics Data System (ADS)

    Lv, Jing; Ye, Qi; Lv, Wen; Zhang, Libao

    2017-06-01

    The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and these salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract salient regions and analyze the saliency of objects in an image, which is time-saving and avoids unnecessary use of computing resources. In this paper, a novel visual saliency analysis model based on a dynamic multiple feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color and orientation features using Gaussian pyramids and the center-surround difference. Then, we evaluate the contribution of all feature maps to the saliency map according to the area of salient regions and their average intensity, and attach different weights to different features according to their importance. Finally, we choose the largest salient region generated by the region-growing method to perform the evaluation. Experimental results show that the proposed model not only achieves higher accuracy in saliency map computation compared with other traditional saliency analysis models, but also extracts salient regions with arbitrary shapes, which is of great value for image analysis and understanding.
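
    The Gaussian-pyramid / center-surround step named above can be reduced to a toy single-scale version on the intensity channel; the blur widths are assumptions, and the multi-feature weighting and region-growing stages of the proposed model are omitted.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def center_surround_intensity(image, center_sigma=1.0, surround_sigma=8.0):
            """Single-scale center-surround difference on the intensity channel."""
            intensity = image.mean(axis=2) if image.ndim == 3 else image
            center = gaussian_filter(intensity, center_sigma)       # fine-scale response
            surround = gaussian_filter(intensity, surround_sigma)   # coarse-scale response
            return np.abs(center - surround)

        # Hypothetical usage
        img = np.random.default_rng(6).random((96, 96, 3))
        saliency = center_surround_intensity(img)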

  11. Cloud-based Jupyter Notebooks for Water Data Analysis

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Brazil, L.; Seul, M.

    2017-12-01

    The development and adoption of technologies by the water science community to improve our ability to openly collaborate and share workflows will have a transformative impact on how we address the challenges associated with collaborative and reproducible scientific research. Jupyter notebooks offer one solution by providing an open-source platform for creating metadata-rich toolchains for modeling and data analysis applications. Adoption of this technology within the water sciences, coupled with publicly available datasets from agencies such as USGS, NASA, and EPA enables researchers to easily prototype and execute data intensive toolchains. Moreover, implementing this software stack in a cloud-based environment extends its native functionality to provide researchers a mechanism to build and execute toolchains that are too large or computationally demanding for typical desktop computers. Additionally, this cloud-based solution enables scientists to disseminate data processing routines alongside journal publications in an effort to support reproducibility. For example, these data collection and analysis toolchains can be shared, archived, and published using the HydroShare platform or downloaded and executed locally to reproduce scientific analysis. This work presents the design and implementation of a cloud-based Jupyter environment and its application for collecting, aggregating, and munging various datasets in a transparent, sharable, and self-documented manner. The goals of this work are to establish a free and open source platform for domain scientists to (1) conduct data intensive and computationally intensive collaborative research, (2) utilize high performance libraries, models, and routines within a pre-configured cloud environment, and (3) enable dissemination of research products. This presentation will discuss recent efforts towards achieving these goals, and describe the architectural design of the notebook server in an effort to support collaborative and reproducible science.

  12. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
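
    The claim that the cost is effectively independent of the number of parameters follows from the structure of the adjoint method. In a generic statement for an ODE model (one common sign convention, not the paper's specific formulation), with

        \dot{x} = f(x,\theta), \qquad x(t_0) = x_0(\theta), \qquad J(\theta) = \sum_k \lVert y_k - h(x(t_k),\theta) \rVert^2 ,

    the adjoint state p(t) is integrated backward from p(T) = 0 according to

        \dot{p} = -\left(\frac{\partial f}{\partial x}\right)^{\top} p, \qquad p(t_k^-) = p(t_k^+) + 2\left(\frac{\partial h}{\partial x}\right)^{\top}\bigl(h(x(t_k),\theta) - y_k\bigr),

    and the full gradient follows from one forward and one backward solve,

        \frac{\mathrm{d}J}{\mathrm{d}\theta} = \frac{\partial J}{\partial \theta} + \int_{t_0}^{T} p^{\top}\,\frac{\partial f}{\partial \theta}\,\mathrm{d}t + p(t_0)^{\top}\frac{\partial x_0}{\partial \theta},

    regardless of how many parameters \theta contains.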

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    De La Pierre, Marco; Maschio, Lorenzo; Orlando, Roberto

    Powder and single crystal Raman spectra of the two most common phases of calcium carbonate are calculated with ab initio techniques (using a “hybrid” functional and a Gaussian-type basis set) and measured both at 80 K and room temperature. Frequencies of the Raman modes are in very good agreement between calculations and experiments: the mean absolute deviation at 80 K is 4 and 8 cm^-1 for calcite and aragonite, respectively. As regards intensities, the agreement is in general good, although the computed values overestimate the measured ones in many cases. The combined analysis makes it possible to identify almost all the fundamental experimental Raman peaks of the two compounds, with the exception of either modes with zero computed intensity or modes overlapping with more intense peaks. Additional peaks have been identified in both calcite and aragonite and assigned to 18O satellite modes or overtones. The agreement between the computed and measured spectra is quite satisfactory; in particular, the simulations clearly distinguish between calcite and aragonite in the case of powder spectra, and among different polarization directions of each compound in the case of single crystal spectra.

  14. METCAN: The metal matrix composite analyzer

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Murthy, Pappu L. N.

    1988-01-01

    Metal matrix composites (MMC) are the subject of intensive study and are receiving serious consideration for critical structural applications in advanced aerospace systems. MMC structural analysis and design methodologies are studied. Predicting the mechanical and thermal behavior and the structural response of components fabricated from MMC requires the use of a variety of mathematical models. These models relate stresses to applied forces, stress intensities at the tips of cracks to nominal stresses, buckling resistance to applied force, or vibration response to excitation forces. The extensive research in computational mechanics methods for predicting the nonlinear behavior of MMC is described. This research has culminated in the development of the METCAN (METal Matrix Composite ANalyzer) computer code.

  15. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    PubMed Central

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    By reorganizing the execution order and optimizing the data structures, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture, and implemented it with CUDA on NVIDIA GPUs. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we proposed a series of optimization methods, including multiresolution multiwindow motion estimation, a multilevel parallel strategy to maximize the parallelism of intracoding, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the H.264 encoder workload is offloaded to the GPU. Experimental results show that the parallel implementation achieves a speedup of about 20x over the serial program and satisfies the requirement of real-time HD encoding at 30 fps, with a PSNR loss of 0.14 dB to 0.77 dB at the same bitrate. Analysis of the kernels shows that the speedup of the compute-intensive algorithms scales with the computational power of the GPU, whereas the performance of the control-intensive parts (CAVLC) is largely bound by memory bandwidth, which provides insight for new architecture designs. PMID:24757432

  16. Ames S-32 O-16 O-18 Line List for High-Resolution Experimental IR Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Schwenke, David W.; Lee, Timothy J.

    2016-01-01

    By comparison with the most recent experimental data and spectra of the SO2 628 ν1/ν3 bands (see Ulenikov et al., JQSRT 168 (2016) 29-39), this study illustrates the reliability and accuracy of the Ames-296K SO2 line list, which is accurate enough to facilitate such high-resolution spectroscopic analysis. The SO2 628 IR line list is computed on a recently improved potential energy surface (PES) refinement, denoted Ames-Pre2, and the published purely ab initio CCSD(T)/aug-cc-pVQZ dipole moment surface. Progress has been made in both energy level convergence and rovibrational quantum number assignments agreeing with laboratory analysis models. The accuracy of the computed 628 energy levels and line list is similar to what has been achieved and reported for SO2 626 and 646, i.e. 0.01-0.03 cm^-1 for bands up to 5500 cm^-1. During the comparison, we found some discrepancies in addition to the overall good agreement. A three-IR-list, feature-by-feature analysis in a 0.25 cm^-1 spectral window clearly demonstrates the power of the current Ames line lists, with new assignments, correction of some errors, and intensity contributions from varied sources including other isotopologues. We are inclined to attribute part of the detected discrepancies to an incomplete experimental analysis and missing intensity in the model. With complete line positions, intensities, and rovibrational quantum numbers determined at 296 K, spectroscopic analysis is significantly facilitated, especially for a spectral range exhibiting such an unusually high density of lines. The computed 628 rovibrational levels and line list are accurate enough to provide alternatives for missing bands or suspicious assignments, and to help identify these isotopologues in various celestial environments. The next step will be to revisit the SO2 828 and 646 spectral analyses.

  17. Computing moment to moment BOLD activation for real-time neurofeedback

    PubMed Central

    Hinds, Oliver; Ghosh, Satrajit; Thompson, Todd W.; Yoo, Julie J.; Whitfield-Gabrieli, Susan; Triantafyllou, Christina; Gabrieli, John D.E.

    2013-01-01

    Estimating moment to moment changes in blood oxygenation level dependent (BOLD) activation levels from functional magnetic resonance imaging (fMRI) data has applications for learned regulation of regional activation, brain state monitoring, and brain-machine interfaces. In each of these contexts, accurate estimation of the BOLD signal in as little time as possible is desired. This is a challenging problem due to the low signal-to-noise ratio of fMRI data. Previous methods for real-time fMRI analysis have either sacrificed the ability to compute moment to moment activation changes by averaging several acquisitions into a single activation estimate or have sacrificed accuracy by failing to account for prominent sources of noise in the fMRI signal. Here we present a new method for computing the amount of activation present in a single fMRI acquisition that separates moment to moment changes in the fMRI signal intensity attributable to neural sources from those due to noise, resulting in a feedback signal more reflective of neural activation. This method computes an incremental general linear model fit to the fMRI timeseries, which is used to calculate the expected signal intensity at each new acquisition. The difference between the measured intensity and the expected intensity is scaled by the variance of the estimator in order to transform this residual difference into a statistic. Both synthetic and real data were used to validate this method and compare it to the only other published real-time fMRI method. PMID:20682350
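
    A hedged sketch of the general idea (not the authors' code): keep running sufficient statistics for a GLM of nuisance regressors, predict the expected intensity for each new acquisition from the model fitted to all previous ones, and scale the residual by a running noise estimate to produce the per-acquisition feedback statistic.

```python
import numpy as np

class IncrementalGLM:
    """Running least-squares fit of nuisance regressors to one voxel's time series."""

    def __init__(self, n_regressors):
        self.p = n_regressors
        self.XtX = np.zeros((n_regressors, n_regressors))
        self.Xty = np.zeros(n_regressors)
        self.n = 0
        self.n_pred = 0
        self.press = 0.0          # running sum of squared one-step prediction errors

    def update(self, x_row, y):
        x_row = np.asarray(x_row, float)
        stat = 0.0
        if self.n > self.p:                                   # enough data to fit the model
            beta = np.linalg.solve(self.XtX, self.Xty)        # fit to previous acquisitions
            err = y - x_row @ beta                            # measured minus expected intensity
            if self.n_pred > 0:
                scale = np.sqrt(self.press / self.n_pred)     # crude noise-scale estimate
                stat = err / scale if scale > 0 else 0.0      # scaled residual = feedback value
            self.press += err ** 2
            self.n_pred += 1
        self.XtX += np.outer(x_row, x_row)                    # fold the new acquisition in
        self.Xty += x_row * y
        self.n += 1
        return stat

# Example: intercept + linear drift as nuisance regressors, with a brief "activation" burst.
rng = np.random.default_rng(0)
glm = IncrementalGLM(n_regressors=2)
stats = [glm.update([1.0, t], 100.0 + 0.05 * t + (2.0 if 100 <= t < 120 else 0.0)
                    + rng.normal(0, 0.5)) for t in range(200)]
```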

  18. A Lightweight Remote Parallel Visualization Platform for Interactive Massive Time-varying Climate Data Analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhang, T.; Huang, Q.; Liu, Q.

    2014-12-01

    Today's climate datasets are characterized by large volume, a high degree of spatiotemporal complexity, and rapid evolution over time. Because visualizing large-volume distributed climate datasets is computationally intensive, traditional desktop-based visualization applications fail to handle the computational load. Recently, scientists have developed remote visualization techniques to address this issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver the results to clients over the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our platform was built on ParaView, one of the most popular open source remote visualization and analysis applications. To further enhance scalability and stability, we employed cloud computing techniques to support the deployment of the platform. In this platform, all climate datasets are regular grid data stored in NetCDF format. Three data access methods are supported: accessing remote datasets provided by OpenDAP servers, accessing datasets hosted on the web visualization server, and accessing local datasets. Regardless of the data access method, all visualization tasks are completed on the server side to reduce the workload of clients. As a proof of concept, we have implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.
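
    A minimal, hedged sketch of server-side rendering with ParaView's Python interface (host, port, file path, and file name are placeholders; the platform described above adds OpenDAP access, cloud deployment, and a web client on top of this kind of pipeline):

```python
# Assumes a running pvserver and a NetCDF file readable by ParaView; only the
# rendered image, not the data, travels back to the client.
from paraview.simple import Connect, OpenDataFile, Show, Render, SaveScreenshot

Connect("viz-server.example.org", 11111)        # attach to the remote parallel server
reader = OpenDataFile("/data/climate/tas_monthly.nc")
Show(reader)                                    # pipeline executes on the server side
Render()
SaveScreenshot("tas_preview.png")
```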

  19. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to astonishing growth in both the volume and the complexity of biomedical data collected from various sources. These planet-scale data pose serious challenges to storage and computing technologies. Cloud computing is a promising alternative because it addresses both storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should help biomedical research make the vast amount of diverse data meaningful and usable. PMID:24288665

  20. Evolution of the ATLAS PanDA workload management system for exascale computational science

    NASA Astrophysics Data System (ADS)

    Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration

    2014-06-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.

  1. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase in data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim to unite their resources for future productive work while supporting large physics collaborations. In our project we address the fundamental problem of designing a computing architecture that integrates distributed storage resources for LHC experiments and other data-intensive science applications and provides access to data from heterogeneous computing facilities. Studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. The project intends to implement federated distributed storage for all kinds of operations, such as read, write and transfer, with access via WAN from grid centres, university clusters, supercomputers, and academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and to changes in computing style, for instance how a bioinformatics program running on a supercomputer can read and write data from the federated storage.

  2. A Note on Procrustean Rotation in Exploratory Factor Analysis: A Computer Intensive Approach to Goodness-of-Fit Evaluation.

    ERIC Educational Resources Information Center

    Raykov, Tenko; Little, Todd D.

    1999-01-01

    Describes a method for evaluating results of Procrustean rotation to a target factor pattern matrix in exploratory factor analysis. The approach, based on the bootstrap method, yields empirical approximations of the sampling distributions of: (1) differences between target elements and rotated factor pattern matrices; and (2) the overall…
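
    A hedged Python sketch of the same logic (the note itself uses other software, a real target matrix, and real data; everything below is a synthetic placeholder): resample rows, re-extract the loadings, rotate them toward the target with an orthogonal Procrustes transformation, and collect the element-wise differences and an overall discrepancy measure.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
target = np.zeros((6, 2))                        # hypothesized two-factor pattern matrix
target[:3, 0] = 0.7
target[3:, 1] = 0.7
scores = rng.normal(size=(200, 2))
X = scores @ target.T + rng.normal(scale=0.5, size=(200, 6))     # synthetic item responses

def rotated_loadings(data):
    L = FactorAnalysis(n_components=2).fit(data).components_.T   # p x k unrotated loadings
    R, _ = orthogonal_procrustes(L, target)                      # rotate toward the target
    return L @ R

n_boot = 500
diffs = np.empty((n_boot,) + target.shape)
for b in range(n_boot):
    sample = X[rng.integers(0, len(X), len(X))]                  # resample rows with replacement
    diffs[b] = rotated_loadings(sample) - target

ci_low, ci_high = np.percentile(diffs, [2.5, 97.5], axis=0)      # element-wise bootstrap intervals
overall_fit = np.sqrt((diffs ** 2).sum(axis=(1, 2)))             # one overall misfit value per replicate
```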

  3. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
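
    A hedged toy sketch of the embarrassingly parallel structure the article exploits (illustrative, not the authors' code): the permutation iterations for a single comparison are independent, so they map directly onto local cores here, and onto cluster or cloud nodes in the settings discussed above.

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 1.0, 30)           # placeholder expression values, condition A
group_b = rng.normal(0.4, 1.0, 30)           # placeholder expression values, condition B
observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

def one_permutation(seed):
    r = np.random.default_rng(seed)
    perm = r.permutation(pooled)
    return perm[:30].mean() - perm[30:].mean()

if __name__ == "__main__":
    with Pool() as pool:                      # each worker handles a slice of the iterations
        null = np.array(pool.map(one_permutation, range(10000)))
    p_value = float((np.abs(null) >= abs(observed)).mean())
    print(p_value)
```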

  4. Effectiveness of speech language therapy either alone or with add-on computer-based language therapy software (Malayalam version) for early post stroke aphasia: A feasibility study.

    PubMed

    Kesav, Praveen; Vrinda, S L; Sukumaran, Sajith; Sarma, P S; Sylaja, P N

    2017-09-15

    This study aimed to assess the feasibility of professional-based conventional speech language therapy (SLT) either alone (Group A, less intensive) or assisted by novel computer-based local-language software (Group B, more intensive) for rehabilitation in early post-stroke aphasia. The setting was the Comprehensive Stroke Care Center of a tertiary health care institute in South India, and the study design was a prospective open randomised controlled trial with blinded endpoint evaluation. The study recruited 24 right-handed, first-ever acute ischemic stroke patients above 15 years of age with strokes affecting the middle cerebral artery territory, within 90 days of stroke onset and with a baseline Western Aphasia Battery (WAB) Aphasia Quotient (AQ) score of <93.8, between September 2013 and January 2016. The recruited subjects were block randomised into either the Group A (less intensive) or the Group B (more intensive) therapy arm. Both groups received 12 one-hour sessions of conventional professional-based SLT over 4 weeks on a thrice-weekly basis, with an additional 12 h of computer-based language therapy in Group B, and a follow-up WAB was performed at four and twelve weeks after the baseline assessment. The trial was registered with the Clinical Trials Registry India [2016/08/0120121]. All statistical analysis was carried out with IBM SPSS Statistics for Windows version 21. Twenty subjects [14 (70%) males; mean age 52.8 years, SD 12.04] completed the study (9 in the less intensive and 11 in the more intensive arm). The mean four-week follow-up AQ showed a significant improvement from baseline in the total group (p value: 0.01). The rate of rise of AQ from baseline to the four-week follow-up (ΔAQ %) showed a significantly greater value for the less intensive treatment group than for the more intensive treatment group [155% (SD: 150; 95% CI: 34-275) versus 52% (SD: 42%; 95% CI: 24-80), respectively; p value: 0.053]. Even though the more intensive treatment arm, combining professional-based SLT and computer-software-based training, fared worse than the less intensive therapy group, this study nevertheless reinforces the feasibility of SLT in augmenting recovery from early post-stroke aphasia. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Integrative prescreening in analysis of multiple cancer genomic studies

    PubMed Central

    2012-01-01

    Background In high throughput cancer genomic studies, results from the analysis of single datasets often suffer from a lack of reproducibility because of small sample sizes. Integrative analysis can effectively pool and analyze multiple datasets and provides a cost effective way to improve reproducibility. In integrative analysis, simultaneously analyzing all genes profiled may incur high computational cost. A computationally affordable remedy is prescreening, which fits marginal models, can be conducted in a parallel manner, and has low computational cost. Results An integrative prescreening approach is developed for the analysis of multiple cancer genomic datasets. Simulation shows that the proposed integrative prescreening has better performance than alternatives, particularly including prescreening with individual datasets, an intensity approach and meta-analysis. We also analyze multiple microarray gene profiling studies on liver and pancreatic cancers using the proposed approach. Conclusions The proposed integrative prescreening provides an effective way to reduce the dimensionality in cancer genomic studies. It can be coupled with existing analysis methods to identify cancer markers. PMID:22799431

  6. Analysis of the Effect of Cooling Intensity Under Volume-Surface Hardening on Formation of Hardened Structures in Steel 20GL

    NASA Astrophysics Data System (ADS)

    Evseev, D. G.; Savrukhin, A. V.; Neklyudov, A. N.

    2018-01-01

    Computer simulation of the kinetics of thermal processes and structural and phase transformations in the wall of a bogie side frame produced from steel 20GL is performed with allowance for the differences in the cooling intensity under volume-surface hardening. The simulation is based on the developed method employing the diagram of decomposition of austenite at different cooling rates. The data obtained are used to draw conclusions on the effect of the cooling intensity on the propagation of the martensite structure over the wall section.

  7. 0-6767 : evaluation of existing smartphone applications and data needs for travel survey.

    DOT National Transportation Integrated Search

    2014-08-01

    Current and reliable data on traffic movements play a key role in transportation planning, modeling, and air quality analysis. Traditional travel surveys conducted via paper or computer are costly, time consuming, and labor intensive for su...

  8. Influence of Adaptive Statistical Iterative Reconstruction on coronary plaque analysis in coronary computed tomography angiography.

    PubMed

    Precht, Helle; Kitslaar, Pieter H; Broersen, Alexander; Dijkstra, Jouke; Gerke, Oke; Thygesen, Jesper; Egstrup, Kenneth; Lambrechtsen, Jess

    The purpose of this study was to investigate the effect of iterative reconstruction (IR) software on quantitative plaque measurements in coronary computed tomography angiography (CCTA). Thirty patients with three clinical risk factors for coronary artery disease (CAD) each had one CCTA performed. Images were reconstructed using FBP and 30% and 60% adaptive statistical IR (ASIR). Coronary plaque analysis was performed with per-patient and per-vessel (LM, LAD, CX and RCA) measurements. Lumen and vessel volumes and plaque burden measurements were based on automatically detected contours in each reconstruction. Lumen and plaque intensity measurements and HU-based plaque characterization were based on corrected contours copied to each reconstruction. No significant changes between FBP and 30% ASIR were found except for lumen (-2.53 HU) and plaque (-1.28 HU) intensities. Between FBP and 60% ASIR the total volume increased by 0.94%, 4.36% and 2.01% for lumen, plaque and vessel, respectively. The change in total plaque burden between FBP and 60% ASIR was 0.76%. Lumen and plaque intensities decreased between FBP and 60% ASIR by 9.90 HU and 1.97 HU, respectively. The plaque component volume changes were all small, with a maximum change of -1.13% for the necrotic core between FBP and 60% ASIR. Quantitative plaque measurements showed only modest differences between FBP and the 60% ASIR level: increased lumen, vessel and plaque volumes, decreased lumen and plaque intensities, and small percentage changes in the individual plaque component volumes. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  9. Frequency domain analysis of the random loading of cracked panels

    NASA Technical Reports Server (NTRS)

    Doyle, James F.

    1994-01-01

    The primary effort concerned the development of analytical methods for the accurate prediction of the effect of random loading on a panel with a crack. Of particular concern was the influence of frequency on the stress intensity factor behavior. Many modern structures, such as those found in advanced aircraft, are lightweight and susceptible to critical vibrations, and consequently dynamic response plays a very important role in their analysis. The presence of flaws and cracks can have catastrophic consequences. The stress intensity factor, K, emerges as a very significant parameter that characterizes the crack behavior. In analyzing the dynamic response of panels that contain cracks, the finite element method is used, but because this type of problem is inherently computationally intensive, a number of ways of calculating K more efficiently are explored.

  10. Development of a probabilistic analysis methodology for structural reliability estimation

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.

    1991-01-01

    The novel probabilistic analysis method presented here for assessing structural reliability combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, it establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.

  11. Computer assisted data analysis in intensive care: the ICDEV project--development of a scientific database system for intensive care (Intensive Care Data Evaluation Project).

    PubMed

    Metnitz, P G; Laback, P; Popow, C; Laback, O; Lenz, K; Hiesmayr, M

    1995-01-01

    Patient Data Management Systems (PDMS) for ICUs collect, present and store clinical data. Analysis of these digitally stored data is desirable for various purposes, such as quality control or scientific research. The aim of the Intensive Care Data Evaluation project (ICDEV) was to provide a database tool for the analysis of data recorded at various ICUs of the University Clinics of Vienna (General Hospital of Vienna), where two different PDMSs are used: CareVue 9000 (Hewlett Packard, Andover, USA) at two ICUs (one medical and one neonatal) and PICIS Chart+ (PICIS, Paris, France) at one cardiothoracic ICU. CONCEPT AND METHODS: The development started from a clinically oriented analysis of the data collected in a PDMS at an ICU. After defining the database structure, we established a client-server database system under Microsoft Windows NT and developed a user-friendly data querying application using Microsoft Visual C++ and Visual Basic. ICDEV was successfully installed at three different ICUs; adjustments to the different PDMS configurations were done within a few days. The database structure we developed enables a powerful query concept, an 'EXPERT QUESTION COMPILER', which may help to answer almost any clinical question. Several program modules facilitate queries at the patient, group and unit level. Results from ICDEV queries are automatically transferred to Microsoft Excel for display (in the form of configurable tables and graphs) and further processing. The ICDEV concept is configurable for adjustment to different intensive care information systems and can be used to support computerized quality control. However, as long as there is no adequate artifact recognition or data validation software for automatically recorded patient data, the reliability of these data and their usage for computer-assisted quality control remain unclear and should be studied further.

  12. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  13. Data-intensive computing on numerically-insensitive supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, James P; Fasel, Patricia K; Habib, Salman

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  14. Probabilistic analysis of tsunami hazards

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2006-01-01

    Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
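
    A hedged sketch of the Monte Carlo side of PTHA (all numbers are purely illustrative and unrelated to the Acapulco or Pacific Northwest studies): draw many synthetic sources, convert each to a runup at the site, and turn exceedance counts into annual rates and probabilities.

```python
import numpy as np

rng = np.random.default_rng(2)
years_simulated = 50_000                                    # length of the synthetic catalog
n_events = 100_000                                          # synthetic sources in that catalog
magnitudes = 7.0 + rng.exponential(0.4, n_events)           # placeholder source-size model
runup = 0.02 * 10 ** (0.5 * (magnitudes - 7.0)) * rng.lognormal(0.0, 0.5, n_events)

thresholds = np.linspace(0.5, 10.0, 40)                     # runup levels of interest (m)
annual_rate = np.array([(runup >= h).sum() / years_simulated for h in thresholds])
prob_50yr = 1.0 - np.exp(-annual_rate * 50.0)               # Poissonian 50-year exceedance
print(list(zip(thresholds[:3].round(2), prob_50yr[:3].round(3))))
```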

  15. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density achieves substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
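
    A hedged serial sketch of the space-time kernel density itself (the paper's contribution, i.e. the octree decomposition, buffering, and load balancing, is omitted here; the kernel choice and normalization are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
events = np.column_stack([rng.uniform(0, 10, 500),     # x
                          rng.uniform(0, 10, 500),     # y
                          rng.uniform(0, 30, 500)])    # t, e.g. day of symptom onset
hs, ht = 1.5, 5.0                                      # spatial and temporal bandwidths

def stkde(grid_xyt, pts, hs, ht):
    out = np.zeros(len(grid_xyt))
    for i, (gx, gy, gt) in enumerate(grid_xyt):
        ds = np.hypot(pts[:, 0] - gx, pts[:, 1] - gy) / hs
        dt = np.abs(pts[:, 2] - gt) / ht
        m = (ds < 1) & (dt < 1)          # in the parallel version, subdomain buffers of width
                                         # hs and ht guarantee these points are locally available
        # Separable Epanechnikov-style kernel in space and time (constants approximate).
        out[i] = np.sum((1 - ds[m] ** 2) * (1 - dt[m] ** 2)) / (len(pts) * hs ** 2 * ht)
    return out

voxels = np.array([[5.0, 5.0, 15.0], [1.0, 9.0, 3.0]])  # a couple of evaluation points
print(stkde(voxels, events, hs, ht))
```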

  16. Accurate Vibrational-Rotational Parameters and Infrared Intensities of 1-Bromo-1-fluoroethene: A Joint Experimental Analysis and Ab Initio Study.

    PubMed

    Pietropolli Charmet, Andrea; Stoppa, Paolo; Giorgianni, Santi; Bloino, Julien; Tasinato, Nicola; Carnimeo, Ivan; Biczysko, Malgorzata; Puzzarini, Cristina

    2017-05-04

    The medium-resolution gas-phase infrared (IR) spectra of 1-bromo-1-fluoroethene (BrFC=CH2, 1,1-C2H2BrF) were investigated in the range 300-6500 cm^-1, and the vibrational analysis led to the assignment of all fundamentals as well as many overtone and combination bands up to three quanta, thus giving an accurate description of its vibrational structure. Integrated band intensity data were determined with high precision from the measurements of their corresponding absorption cross sections. The vibrational analysis was supported by high-level ab initio investigations. CCSD(T) computations accounting for extrapolation to the complete basis set and core correlation effects were employed to accurately determine the molecular structure and harmonic force field. The latter was then coupled to B2PLYP and MP2 computations in order to account for mechanical and electrical anharmonicities. Second-order perturbative vibrational theory was then applied to the thus obtained hybrid force fields to support the experimental assignment of the IR spectra.

  17. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…

  18. Graphic Novels in Libraries: An Expert's Opinion

    ERIC Educational Resources Information Center

    Foster, Katy

    2004-01-01

    Barbara Gordon, a librarian and computer expert from Gotham City, has a genius-level intellect and a photographic memory and is an expert at research and analysis. According to her, graphic novels and comics are wildly appealing to readers of all ages and intensely popular with adolescents.

  19. Constructing storyboards based on hierarchical clustering analysis

    NASA Astrophysics Data System (ADS)

    Hasebe, Satoshi; Sami, Mustafa M.; Muramatsu, Shogo; Kikuchi, Hisakazu

    2005-07-01

    There is a growing need for quick previews of video content, both to improve the accessibility of video archives and to reduce network traffic. In this paper, a storyboard that contains a user-specified number of keyframes is produced from a given video sequence. It is based on hierarchical cluster analysis of feature vectors derived from the wavelet coefficients of video frames. Consistent reuse of the extracted feature vectors is the key to avoiding repeated, computationally intensive parsing of the same video sequence. Experimental results suggest that a significant reduction in computation time is gained by this strategy.
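
    A hedged sketch of the keyframe-selection step (the paper derives its feature vectors from wavelet coefficients; the placeholder matrix below stands in for them): cluster the per-frame features hierarchically into the requested number of groups and keep one representative frame per group.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(4)
features = rng.normal(size=(300, 64))        # one feature vector per video frame (placeholder)
n_keyframes = 8                              # user-specified storyboard size

Z = linkage(features, method="ward")
labels = fcluster(Z, t=n_keyframes, criterion="maxclust")

storyboard = []
for c in range(1, n_keyframes + 1):
    members = np.flatnonzero(labels == c)
    centroid = features[members].mean(axis=0)
    # Keyframe = the member frame closest to its cluster centroid.
    nearest = np.argmin(np.linalg.norm(features[members] - centroid, axis=1))
    storyboard.append(int(members[nearest]))
print(sorted(storyboard))
```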

  20. Randomization Procedures Applied to Analysis of Ballistic Data

    DTIC Science & Technology

    1991-06-01

    Keywords: data analysis; computationally intensive statistics; randomization tests; permutation tests; nonparametric statistics. ...Any reasonable statistical procedure would fail to support the notion of improvement of dynamic over standard indexing based on this data... (AD-A238 389, Technical Report BRL-TR-3245: Randomization Procedures Applied to Analysis of Ballistic Data, Malcolm S. Taylor and Barry A. Bodt, June 1991.)

  1. Cloud computing applications for biomedical science: A perspective.

    PubMed

    Navale, Vivek; Bourne, Philip E

    2018-06-01

    Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.

  2. Cloud computing applications for biomedical science: A perspective

    PubMed Central

    2018-01-01

    Biomedical research has become a digital data–intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research. PMID:29902176

  3. Localization of optic disc and fovea in retinal images using intensity based line scanning analysis.

    PubMed

    Kamble, Ravi; Kokare, Manesh; Deshmukh, Girish; Hussin, Fawnizu Azmadi; Mériaudeau, Fabrice

    2017-08-01

    Accurate detection of diabetic retinopathy (DR) depends mainly on the identification of retinal landmarks such as the optic disc and fovea. Existing methods suffer from limited accuracy and high computational complexity. To address this issue, this paper presents a novel approach for fast and accurate localization of the optic disc (OD) and fovea using one-dimensional scanned intensity profile analysis. The proposed method effectively utilizes both time and frequency domain information for localization of the OD. The final OD center is located using signal peak-valley detection in the time domain and discontinuity detection in the frequency domain. With the help of the detected OD location, the fovea center is then located using signal valley analysis. Experiments were conducted on the MESSIDOR dataset, where the OD was successfully located in 1197 out of 1200 images (99.75%) and the fovea in 1196 out of 1200 images (99.66%), with an average computation time of 0.52 s. A large-scale evaluation was carried out on nine publicly available databases. Compared with other state-of-the-art methods, the proposed method localizes the OD and fovea together quickly and accurately. Copyright © 2017 Elsevier Ltd. All rights reserved.
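
    A hedged toy sketch, only loosely inspired by the line-scanning idea (it is not the authors' algorithm, which combines time- and frequency-domain analysis of the profiles): scan the rows of the green channel and report the position of the most prominent bright peak as a crude optic-disc candidate.

```python
import numpy as np
from scipy.signal import find_peaks

def brightest_peak_location(green_channel):
    """Return (row, column) of the most prominent peak across all row intensity profiles."""
    best = (0.0, 0, 0)                                 # (prominence, row, column)
    for r, profile in enumerate(np.asarray(green_channel, float)):
        peaks, props = find_peaks(profile, prominence=10)
        if peaks.size:
            k = int(np.argmax(props["prominences"]))
            if props["prominences"][k] > best[0]:
                best = (props["prominences"][k], r, int(peaks[k]))
    return best[1], best[2]
```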

  4. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.

    PubMed

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2017-03-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n^2). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T Development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor.
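
    For reference, a plain software version of the isotropic semivariogram that the FPGA architectures accelerate (a hedged sketch, not the VHDL design): gamma(h) is half the mean squared difference over all pixel pairs whose separation rounds to lag h, which is exactly the O(n^2) pairwise pass described above.

```python
import numpy as np

def semivariogram(window, max_lag):
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.column_stack([ys.ravel(), xs.ravel()])
    values = window.ravel().astype(float)
    gamma = np.zeros(max_lag + 1)
    counts = np.zeros(max_lag + 1)
    for i in range(len(values)):                     # O(n^2) pairwise pass, as in the text
        d = np.rint(np.hypot(*(coords[i + 1:] - coords[i]).T)).astype(int)
        sq = (values[i + 1:] - values[i]) ** 2
        m = d <= max_lag
        np.add.at(gamma, d[m], sq[m])
        np.add.at(counts, d[m], 1)
    with np.errstate(invalid="ignore"):              # lag 0 has no pairs and stays NaN
        return 0.5 * gamma / counts

window = np.random.default_rng(5).integers(0, 255, (32, 32))
print(semivariogram(window, max_lag=10))
```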

  5. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures

    PubMed Central

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2016-01-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n^2). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T Development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor. PMID:28428829

  6. On the Modeling and Management of Cloud Data Analytics

    NASA Astrophysics Data System (ADS)

    Castillo, Claris; Tantawi, Asser; Steinder, Malgorzata; Pacifici, Giovanni

    A new era is dawning in which vast amounts of data are subjected to intensive analysis in a cloud computing environment. Over the years, data about a myriad of things, ranging from user clicks to galaxies, have been accumulated, and continue to be collected, on storage media. The increasing availability of such data, along with the abundant supply of compute power and the urge to create useful knowledge, gave rise to a new data analytics paradigm in which data is subjected to intensive analysis, and additional data is created in the process. Meanwhile, a new cloud computing environment has emerged where seemingly limitless compute and storage resources are being provided to host computation and data for multiple users through virtualization technologies. Such a cloud environment is becoming the home for data analytics. Consequently, providing good run-time performance to data analytics workloads is an important issue for cloud management. In this paper, we provide an overview of the data analytics and cloud environment landscapes, and investigate the performance management issues related to running data analytics in the cloud. In particular, we focus on topics such as workload characterization, profiling analytics applications and their patterns of data usage, cloud resource allocation, placement of computation and data and their dynamic migration in the cloud, and performance prediction. In solving such management problems one relies on various run-time analytic models. We discuss approaches for modeling and optimizing the dynamic data analytics workload in the cloud environment. All along, we use the Map-Reduce paradigm as an illustration of data analytics.

  7. [Eye symptoms in office employees working at computer stations].

    PubMed

    Kowalska, Małgorzata; Zejda, Jan E; Bugajska, Joanna; Braczkowska, Bogumiła; Brozek, Grzegorz; Malińska, Marzena

    2011-01-01

    The aim of the study was to measure the prevalence and intensity of eye symptoms in office workers who use computers on a regular basis, and to find out if the symptoms depend on the duration of computer use and other work-related factors. Office workers employed at large social services companies in two cities (Warszawa and Katowice) were invited to fill in a questionnaire (cross-sectional study). The questions included work history and a history of last-week eye symptoms and eye-related complaints. Altogether 477 men and women returned the completed questionnaires. Between-group symptom differences were tested by the chi-square test and verified by the results of multivariate logistic analysis. The examined effects included the role of daily computer use and lighting conditions at work stations. The examined persons complained of such eye symptoms as eye strain, visual acuity impairment and mucosal dryness or eye burning. The following values of symptom prevalence were found in women and men, respectively: eye strain 50.7% and 32.6%, disturbed visual acuity 38.3% and 21.2%, mucosal dryness and eye burning 46.5% and 24.2%. The results of multivariate analysis confirmed the statistically significant effects of lighting intensity and screen flickering on the occurrence of symptoms. The frequent occurrence of eye symptoms and their association with some characteristics of the work environment point to the need to observe ergonomic standards for workstations and for the usage of computers at work.

  8. Introduction to bioinformatics.

    PubMed

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistics analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.

  9. A User-Friendly Software Package for HIFU Simulation

    NASA Astrophysics Data System (ADS)

    Soneson, Joshua E.

    2009-04-01

    A freely-distributed, MATLAB (The Mathworks, Inc., Natick, MA)-based software package for simulating axisymmetric high-intensity focused ultrasound (HIFU) beams and their heating effects is discussed. The package (HIFU_Simulator) consists of a propagation module which solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and a heating module which solves Pennes' bioheat transfer (BHT) equation. The pressure, intensity, heating rate, temperature, and thermal dose fields are computed, plotted, the output is released to the MATLAB workspace for further user analysis or postprocessing.
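
    An illustrative, hedged Python sketch of the heating module's physics (HIFU_Simulator itself is MATLAB and couples the KZK solution to the bioheat solver; here a fixed Gaussian focal heating term stands in for the KZK-derived heating rate): one explicit finite-difference march of Pennes' bioheat equation, rho*c dT/dt = k d2T/dz2 - w_b*c_b*(T - T_a) + Q.

```python
import numpy as np

nz, dz, dt = 200, 1e-4, 1e-3                  # grid points, spacing (m), time step (s)
rho, c, k = 1000.0, 4180.0, 0.6               # tissue density, specific heat, conductivity
w_b, c_b, T_a = 0.5, 4180.0, 37.0             # perfusion rate, blood specific heat, arterial T
z = np.arange(nz) * dz
Q = 5e6 * np.exp(-((z - 0.01) / 0.002) ** 2)  # heating rate (W/m^3), peaked at the focus

T = np.full(nz, 37.0)
for _ in range(2000):                         # 2 s of sonication
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz ** 2
    T += dt / (rho * c) * (k * lap - w_b * c_b * (T - T_a) + Q)
print(round(float(T.max()), 2), "degC peak after 2 s")
```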

  10. Computation of energy interaction parameters as well as electric dipole intensity parameters for the absorption spectral study of the interaction of Pr(III) with L-phenylalanine, L-glycine, L-alanine and L-aspartic acid in the presence and absence of Ca 2+ in organic solvents

    NASA Astrophysics Data System (ADS)

    Moaienla, T.; Singh, Th. David; Singh, N. Rajmuhon; Devi, M. Indira

    2009-10-01

    By studying the absorption difference and comparative absorption spectra of the interaction of Pr(III) and Nd(III) with L-phenylalanine, L-glycine, L-alanine and L-aspartic acid in the presence and absence of Ca2+ in organic solvents, various energy interaction parameters such as the Slater-Condon (FK), Racah (Ek), Lande (ξ4f), nephelauxetic ratio (β), bonding (b1/2) and percentage-covalency (δ) parameters have been evaluated by applying partial and multiple regression analysis. The values of the oscillator strength (P) and the Judd-Ofelt electric dipole intensity parameters Tλ (λ = 2, 4, 6) for different 4f-4f transitions have been computed. Analysis of the variation of the various energy interaction parameters, together with the changes in the oscillator strength (P) and Tλ values, reveals the mode of binding with the different ligands.

  11. Streaming support for data intensive cloud-based sequence analysis.

    PubMed

    Issa, Shadi A; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of "resources-on-demand" and "pay-as-you-go", scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
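
    A hedged toy sketch of the "process while transferring" idea (it is not the elastream package): consume a FASTQ stream record by record as it arrives on standard input, so that analysis overlaps with the transfer instead of waiting for the full upload, e.g. `curl -s https://example.org/reads.fastq | python gc_stream.py`, where the URL and script name are placeholders.

```python
import sys

def fastq_records(stream):
    """Yield (header, sequence, quality) tuples as soon as each record is available."""
    while True:
        header = stream.readline()
        if not header:
            return
        seq = stream.readline().rstrip()
        stream.readline()                     # '+' separator line
        qual = stream.readline().rstrip()
        yield header.rstrip(), seq, qual

gc = total = 0
for _, seq, _ in fastq_records(sys.stdin):    # e.g. piped from a download in progress
    gc += seq.count("G") + seq.count("C")
    total += len(seq)
print("GC fraction so far:", gc / max(total, 1))
```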

  12. Time-Dependent Modeling of Underwater Explosions by Convolving Similitude Source with Bandlimited Impulse from the CASS/GRAB Model

    DTIC Science & Technology

    2015-06-30

    ...calculated with a high degree of accuracy, leading to intensive computational calculations and long computational times when dealing with range-depth fields. ...be obtained using similitude analysis; it allows the comparison of differing explosive weights and provides the means to scale the pressure, energy

  13. Quantitative image analysis of immunohistochemical stains using a CMYK color model

    PubMed Central

    Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W

    2007-01-01

    Background Computer image analysis techniques have decreased effects of observer biases, and increased the sensitivity and the throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file) as well as compared to the hematoxylin counterstain was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures. These characteristics are advantageous for both basic as well as clinical research in an unbiased, reproducible and high throughput evaluation of IHC intensity. PMID:17326824
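
    A hedged sketch of the color-model step only (illustrative, not the authors' pipeline, which also handles thresholding, masking, and automation): convert an RGB IHC image to CMYK and keep the Yellow channel on a 0-255 scale as the stain-intensity readout.

```python
import numpy as np

def yellow_channel(rgb_uint8):
    rgb = np.asarray(rgb_uint8, float) / 255.0
    k = 1.0 - rgb.max(axis=-1)                    # CMYK key (black) component
    denom = np.where(k < 1.0, 1.0 - k, 1.0)       # avoid division by zero on pure black
    y = (1.0 - rgb[..., 2] - k) / denom           # standard RGB -> CMYK yellow component
    return np.clip(255.0 * y, 0, 255).astype(np.uint8)

# A mean Yellow intensity inside a (hypothetical) tissue mask could then serve as the score:
# score = yellow_channel(img)[mask].mean()
```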

  14. Probabilistic Structural Analysis Theory Development

    NASA Technical Reports Server (NTRS)

    Burnside, O. H.

    1985-01-01

    The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and space shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer-intensive relative to the finite element approach.

  15. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.

  16. A Longitudinal Investigation of the Effects of Computer Anxiety on Performance in a Computing-Intensive Environment

    ERIC Educational Resources Information Center

    Buche, Mari W.; Davis, Larry R.; Vician, Chelley

    2007-01-01

    Computers are pervasive in business and education, and it would be easy to assume that all individuals embrace technology. However, evidence shows that roughly 30 to 40 percent of individuals experience some level of computer anxiety. Many academic programs involve computing-intensive courses, but the actual effects of this exposure on computer…

  17. AUTOMATED GIS WATERSHED ANALYSIS TOOLS FOR RUSLE/SEDMOD SOIL EROSION AND SEDIMENTATION MODELING

    EPA Science Inventory

    A comprehensive procedure for computing soil erosion and sediment delivery metrics has been developed using a suite of automated Arc Macro Language (AML) scripts and a pair of processing-intensive ANSI C++ executable programs operating on an ESRI ArcGIS 8.x Workstation platform...

  18. Comparison of mixed-mode stress-intensity factors obtained through displacement correlation, J-integral formulation, and modified crack-closure integral

    NASA Astrophysics Data System (ADS)

    Bittencourt, Tulio N.; Barry, Ahmabou; Ingraffea, Anthony R.

    This paper presents a comparison among stress-intensity factors for mixed-mode two-dimensional problems obtained through three different approaches: displacement correlation, J-integral, and modified crack-closure integral. All mentioned procedures involve only one analysis step and are incorporated in the post-processor of a finite element computer code for fracture mechanics analysis (FRANC). Results are presented for a closed-form solution problem under mixed-mode conditions. The accuracy of the described methods is then discussed and analyzed in the framework of their numerical results. The influence of the differences among the three methods on the predicted crack trajectory of general problems is also discussed.

  19. Expression Analysis of p16, c-Myc, and mSin3A in Non-small Cell Lung Cancer by Computer Aided Scoring and Analysis (CASA).

    PubMed

    Salmaninejad, Arash; Estiar, Mehrdad Asghari; Gill, Rajbir K; Shih, Joanna H; Hewitt, Stephen; Jeon, Hyo-Sung; Fukuoka, Junya; Shilo, Konstantin; Shakoori, Abbas; Jen, Jin

    2015-01-01

    Immunohistochemical analysis (IHC) of tissue microarray (TMA) slides enables large sets of tissue samples to be analyzed simultaneously on a single slide. However, manual evaluation of small cores on a TMA slide is time-consuming and error-prone. We describe a computer aided scoring and analysis (CASA) method to allow facile and reliable scoring of IHC staining using a TMA containing 300 non-small cell lung cancer (NSCLC) cases. In two previously published papers utilizing our TMA slides of lung cancer, we examined 18 proteins involved in the chromatin machinery. We developed our study using more proteins of the chromatin complex and several transcription factors that facilitate the chromatin machinery. Then, a total of 78 antibodies were evaluated by CASA to derive a normalized intensity value that correlated with the overall staining status of the targeting protein. The intensity values for TMA cores were then examined for association to clinical variables and predictive significance individually and with other factors. Results: Using our TMA, the intensities of several protein pairs were significantly correlated with an increased risk of death in NSCLC. These included c-Myc with p16, mSin3A with p16 and c-Myc with mSin3A. Predictive values of these pairs remained significant when evaluated based on standard IHC scores. Our results demonstrate the usefulness of CASA as a valuable tool for systematic assessment of TMA slides to identify potential predictive biomarkers using a large set of primary human tissues.

  20. Digital computer processing of peach orchard multispectral aerial photography

    NASA Technical Reports Server (NTRS)

    Atkinson, R. J.

    1976-01-01

    Several methods of analysis using digital computers, applicable to digitized multispectral aerial photography, are described, with particular application to peach orchard test sites. This effort was stimulated by the recent premature death of peach trees in the Southeastern United States. The techniques discussed are: (1) correction of intensity variations by digital filtering, (2) automatic detection and enumeration of trees in five size categories, (3) determination of unhealthy foliage by infrared reflectances, and (4) four band multispectral classification into healthy and declining categories.

  1. Integration of lyoplate based flow cytometry and computational analysis for standardized immunological biomarker discovery.

    PubMed

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R; Nestle, Frank O

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases.

  2. Integration of Lyoplate Based Flow Cytometry and Computational Analysis for Standardized Immunological Biomarker Discovery

    PubMed Central

    Villanova, Federica; Di Meglio, Paola; Inokuma, Margaret; Aghaeepour, Nima; Perucha, Esperanza; Mollon, Jennifer; Nomura, Laurel; Hernandez-Fuentes, Maria; Cope, Andrew; Prevost, A. Toby; Heck, Susanne; Maino, Vernon; Lord, Graham; Brinkman, Ryan R.; Nestle, Frank O.

    2013-01-01

    Discovery of novel immune biomarkers for monitoring of disease prognosis and response to therapy in immune-mediated inflammatory diseases is an important unmet clinical need. Here, we establish a novel framework for immunological biomarker discovery, comparing a conventional (liquid) flow cytometry platform (CFP) and a unique lyoplate-based flow cytometry platform (LFP) in combination with advanced computational data analysis. We demonstrate that LFP had higher sensitivity compared to CFP, with increased detection of cytokines (IFN-γ and IL-10) and activation markers (Foxp3 and CD25). Fluorescent intensity of cells stained with lyophilized antibodies was increased compared to cells stained with liquid antibodies. LFP, using a plate loader, allowed medium-throughput processing of samples with comparable intra- and inter-assay variability between platforms. Automated computational analysis identified novel immunophenotypes that were not detected with manual analysis. Our results establish a new flow cytometry platform for standardized and rapid immunological biomarker discovery with wide application to immune-mediated diseases. PMID:23843942

  3. ASTEC and MODEL: Controls software development at Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Downing, John P.; Bauer, Frank H.; Surber, Jeffrey L.

    1993-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at the Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. In the last three years the ASTEC software has been under development. ASTEC is meant to be an integrated collection of controls analysis tools for use at the desktop level. MODEL (Multi-Optimal Differential Equation Language) is a translator that converts programs written in the MODEL language to FORTRAN. An upgraded version of the MODEL program will be merged into ASTEC. MODEL has not been modified since 1981 and has not kept pace with changes in computers or user interface techniques. This paper describes the changes made to MODEL in order to make it useful in the 1990s and how it relates to ASTEC.

  4. [Upper extremities, neck and back symptoms in office employees working at computer stations].

    PubMed

    Zejda, Jan E; Bugajska, Joanna; Kowalska, Małgorzata; Krzych, Lukasz; Mieszkowska, Marzena; Brozek, Grzegorz; Braczkowska, Bogumiła

    2009-01-01

    To obtain current data on the occurrence of work-related symptoms of office computer users in Poland, we implemented a questionnaire survey. Its goal was to assess the prevalence and intensity of symptoms of upper extremities, neck and back in office workers who use computers on a regular basis, and to find out if the occurrence of symptoms depends on the duration of computer use and other work-related factors. Office workers in two towns (Warszawa and Katowice), employed in large social services companies, were invited to fill in the Polish version of the Nordic Questionnaire. The questions included work history and history of last-week symptoms of pain of hand/wrist, elbow, arm, neck and upper and lower back (occurrence and intensity measured by visual scale). Altogether 477 men and women returned the completed questionnaires. Between-group symptom differences (chi-square test) were verified by multivariate analysis (GLM). The prevalence of symptoms in individual body parts was as follows: neck, 55.6%; arm, 26.9%; elbow, 13.3%; wrist/hand, 29.9%; upper back, 49.6%; and lower back, 50.1%. Multivariate analysis confirmed the effect of gender, age and years of computer use on the occurrence of symptoms. Among other determinants, forearm support explained pain of wrist/hand, wrist support of elbow pain, and chair adjustment of arm pain. Association was also found between low back pain and chair adjustment and keyboard position. The findings revealed frequent occurrence of symptoms of pain in upper extremities and neck in office workers who use computers on a regular basis. Seating position could also contribute to the frequent occurrence of back pain in the examined population.

  5. Person-independent facial expression analysis by fusing multiscale cell features

    NASA Astrophysics Data System (ADS)

    Zhou, Lubing; Wang, Han

    2013-03-01

    Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, the multiscale cell local intensity increasing patterns (MC-LIIP), is presented to represent facial images and conduct person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska directed emotional faces databases show the superiority of the proposed method.
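
    A rough sketch of the cell-averaging idea behind MC-LIIP, assuming an LBP-style centre-versus-neighbour comparison on cell means; the exact LIIP encoding used in the paper differs in detail and is not reproduced here, and the random test image is only a placeholder.

    ```python
    # Sketch: compare each cell's mean intensity with its 8 neighbouring cells
    # and pack the comparisons into one code per position; histograms of these
    # codes then serve as a region descriptor.
    import numpy as np

    def cell_means(img, cell):
        """Average the image over non-overlapping cell x cell blocks."""
        h, w = (img.shape[0] // cell) * cell, (img.shape[1] // cell) * cell
        blocks = img[:h, :w].reshape(h // cell, cell, w // cell, cell)
        return blocks.mean(axis=(1, 3))

    def cell_codes(img, cell=3):
        """8-bit codes from centre-vs-neighbour comparisons of cell averages."""
        m = cell_means(img.astype(float), cell)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        centre = m[1:-1, 1:-1]
        code = np.zeros_like(centre, dtype=int)
        for bit, (dy, dx) in enumerate(offsets):
            neigh = m[1 + dy:m.shape[0] - 1 + dy, 1 + dx:m.shape[1] - 1 + dx]
            code |= ((neigh >= centre).astype(int) << bit)
        return code

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        face = rng.integers(0, 256, size=(48, 48))
        codes = cell_codes(face, cell=3)
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))
        print(codes.shape, hist.sum())   # histogram of codes forms the descriptor
    ```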

  6. Skills and Knowledge for Data-Intensive Environmental Research

    PubMed Central

    Hampton, Stephanie E.; Jones, Matthew B.; Wasser, Leah A.; Schildhauer, Mark P.; Supp, Sarah R.; Brun, Julien; Hernandez, Rebecca R.; Boettiger, Carl; Collins, Scott L.; Gross, Louis J.; Fernández, Denny S.; Budden, Amber; White, Ethan P.; Teal, Tracy K.; Aukema, Juliann E.

    2017-01-01

    The scale and magnitude of complex and pressing environmental issues lend urgency to the need for integrative and reproducible analysis and synthesis, facilitated by data-intensive research approaches. However, the recent pace of technological change has been such that appropriate skills to accomplish data-intensive research are lacking among environmental scientists, who more than ever need greater access to training and mentorship in computational skills. Here, we provide a roadmap for raising data competencies of current and next-generation environmental researchers by describing the concepts and skills needed for effectively engaging with the heterogeneous, distributed, and rapidly growing volumes of available data. We articulate five key skills: (1) data management and processing, (2) analysis, (3) software skills for science, (4) visualization, and (5) communication methods for collaboration and dissemination. We provide an overview of the current suite of training initiatives available to environmental scientists and models for closing the skill-transfer gap. PMID:28584342

  7. Skills and Knowledge for Data-Intensive Environmental Research.

    PubMed

    Hampton, Stephanie E; Jones, Matthew B; Wasser, Leah A; Schildhauer, Mark P; Supp, Sarah R; Brun, Julien; Hernandez, Rebecca R; Boettiger, Carl; Collins, Scott L; Gross, Louis J; Fernández, Denny S; Budden, Amber; White, Ethan P; Teal, Tracy K; Labou, Stephanie G; Aukema, Juliann E

    2017-06-01

    The scale and magnitude of complex and pressing environmental issues lend urgency to the need for integrative and reproducible analysis and synthesis, facilitated by data-intensive research approaches. However, the recent pace of technological change has been such that appropriate skills to accomplish data-intensive research are lacking among environmental scientists, who more than ever need greater access to training and mentorship in computational skills. Here, we provide a roadmap for raising data competencies of current and next-generation environmental researchers by describing the concepts and skills needed for effectively engaging with the heterogeneous, distributed, and rapidly growing volumes of available data. We articulate five key skills: (1) data management and processing, (2) analysis, (3) software skills for science, (4) visualization, and (5) communication methods for collaboration and dissemination. We provide an overview of the current suite of training initiatives available to environmental scientists and models for closing the skill-transfer gap.

  8. Preparing the Next Generation of Environmental Scientists to Work at the Frontier of Data-Intensive Research

    NASA Astrophysics Data System (ADS)

    Hampton, S. E.

    2015-12-01

    The science necessary to unravel complex environmental problems confronts severe computational challenges - coping with huge volumes of heterogeneous data, spanning vast spatial scales at high resolution, and requiring integration of disparate measurements from multiple disciplines. But as cyberinfrastructure advances to support such work, scientists in many fields lack sufficient computational skills to participate in interdisciplinary, data-intensive research. In response, we developed innovative training workshops for early-career scientists, in order to explore both the needs and solutions for training next-generation scientists in skills for data-intensive environmental research. In 2013 and 2014 we ran intensive 3-week training workshops for early-career researchers. One of the workshops was run concurrently in California and North Carolina, connected by virtual technologies and coordinated schedules. We attracted applicants to the workshop with the opportunity to pursue data-intensive small-group research projects that they proposed. This approach presented a realistic possibility that publishable products could result from 3 weeks of focused hands-on classroom instruction combined with self-directed group research in which instructors were present to assist trainees. Instruction addressed 1) collaboration modes and technologies, 2) data management, preservation, and sharing, 3) preparing data for analysis using scripting, 4) reproducible research, 5) sustainable software practices, 6) data analysis and modeling, and 7) communicating results to broad communities. The most dramatic improvements in technical skills were in data management, version control, and working with spatial data outside of proprietary software. In addition, participants built strong networks and collaborative skills that later resulted in a successful student-led grant proposal and published manuscripts, and participants reported that the training was a highly influential experience.

  9. Ganalyzer: A Tool for Automatic Galaxy Image Analysis

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2011-08-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
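
    A minimal sketch of one Ganalyzer-style step, the radial intensity plot: average pixel intensities in thin rings around a given centre. Centre and radius detection and the later peak-slope (spirality) analysis are omitted, and the synthetic galaxy image is only a placeholder.

    ```python
    # Sketch: mean intensity as a function of distance from the galaxy centre.
    import numpy as np

    def radial_intensity_plot(img, cx, cy, n_bins=20):
        """Return (radii, mean intensity per ring) around centre (cx, cy)."""
        yy, xx = np.indices(img.shape)
        r = np.hypot(xx - cx, yy - cy)
        bins = np.linspace(0, r.max(), n_bins + 1)
        which = np.digitize(r.ravel(), bins) - 1
        profile = np.array([img.ravel()[which == i].mean() if np.any(which == i)
                            else 0.0 for i in range(n_bins)])
        return bins[:-1], profile

    if __name__ == "__main__":
        yy, xx = np.indices((64, 64))
        fake_galaxy = np.exp(-np.hypot(xx - 32, yy - 32) / 8.0)   # bright core
        radii, prof = radial_intensity_plot(fake_galaxy, 32, 32)
        print(prof[:5].round(3))   # intensity falls off with radius
    ```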

  10. Data intensive computing at Sandia.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew T.

    2010-09-01

    Data-Intensive Computing is parallel computing where you design your algorithms and your software around efficient access and traversal of a data set; where hardware requirements are dictated by data size as much as by desired run times; and where the goal is usually to distill compact results from massive data.

  11. CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction

    PubMed Central

    Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.

    2012-01-01

    Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
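
    A minimal sketch of the patch-wise moment matching described above: linearly rescale the CBCT intensities in a patch so their mean and standard deviation (first and second moments) match the corresponding CT patch. The full DISC method embeds this step inside each demons iteration and runs on the GPU, neither of which is reproduced here; the patches below are synthetic.

    ```python
    # Sketch: correct a CBCT patch so its first two moments match the CT patch.
    import numpy as np

    def match_patch_moments(cbct_patch, ct_patch, eps=1e-6):
        """Linearly map cbct_patch so its mean/std match ct_patch."""
        mu_c, sd_c = cbct_patch.mean(), cbct_patch.std()
        mu_t, sd_t = ct_patch.mean(), ct_patch.std()
        return (cbct_patch - mu_c) * (sd_t / (sd_c + eps)) + mu_t

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        ct = rng.normal(40.0, 10.0, size=(9, 9, 9))             # HU-like CT patch
        cbct = 0.5 * ct + 200.0 + rng.normal(0, 2, ct.shape)    # shifted/scaled CBCT
        corrected = match_patch_moments(cbct, ct)
        print(round(corrected.mean(), 1), round(corrected.std(), 1))  # ~CT moments
    ```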

  12. Ab initio calculation of the G peak intensity of graphene: Laser-energy and Fermi-energy dependence and importance of quantum interference effects

    NASA Astrophysics Data System (ADS)

    Reichardt, Sven; Wirtz, Ludger

    2017-05-01

    We present the results of a diagrammatic, fully ab initio calculation of the G peak intensity of graphene. The flexibility and generality of our approach enables us to go beyond the previous analytical calculations in the low-energy regime. We study the laser and Fermi energy dependence of the G peak intensity and analyze the contributions from resonant and nonresonant electronic transitions. In particular, we explicitly demonstrate the importance of quantum interference and nonresonant states for the G peak process. Our method of analysis and computational concept is completely general and can easily be applied to study other materials as well.

  13. Measurement of myocardial perfusion and infarction size using computer-aided diagnosis system for myocardial contrast echocardiography.

    PubMed

    Du, Guo-Qing; Xue, Jing-Yi; Guo, Yanhui; Chen, Shuang; Du, Pei; Wu, Yan; Wang, Yu-Hang; Zong, Li-Qiu; Tian, Jia-Wei

    2015-09-01

    Proper evaluation of myocardial microvascular perfusion and assessment of infarct size is critical for clinicians. We have developed a novel computer-aided diagnosis (CAD) approach for myocardial contrast echocardiography (MCE) to measure myocardial perfusion and infarct size. Rabbits underwent 15 min of coronary occlusion followed by reperfusion (group I, n = 15) or 60 min of coronary occlusion followed by reperfusion (group II, n = 15). Myocardial contrast echocardiography was performed before and 7 d after ischemia/reperfusion, and images were analyzed with the CAD system on the basis of eliminating particle swarm optimization clustering analysis. The myocardium was quickly and accurately detected using contrast-enhanced images, myocardial perfusion was quantitatively calibrated and a color-coded map calibrated by contrast intensity and automatically produced by the CAD system was used to outline the infarction region. Calibrated contrast intensity was significantly lower in infarct regions than in non-infarct regions, allowing differentiation of abnormal and normal myocardial perfusion. Receiver operating characteristic curve analysis documented that -54-pixel contrast intensity was an optimal cutoff point for the identification of infarcted myocardium with a sensitivity of 95.45% and specificity of 87.50%. Infarct sizes obtained using myocardial perfusion defect analysis of original contrast images and the contrast intensity-based color-coded map in computerized images were compared with infarct sizes measured using triphenyltetrazolium chloride staining. Use of the proposed CAD approach provided observers with more information. The infarct sizes obtained with myocardial perfusion defect analysis, the contrast intensity-based color-coded map and triphenyltetrazolium chloride staining were 23.72 ± 8.41%, 21.77 ± 7.8% and 18.21 ± 4.40% (% left ventricle) respectively (p > 0.05), indicating that computerized myocardial contrast echocardiography can accurately measure infarct size. On the basis of the results, we believe the CAD method can quickly and automatically measure myocardial perfusion and infarct size and will, it is hoped, be very helpful in clinical therapeutics. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
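
    A hedged sketch of one common way such an intensity cutoff can be derived (the paper does not state its exact criterion): sweep candidate thresholds, compute sensitivity and specificity at each, and keep the threshold that maximizes Youden's J. The data below are synthetic.

    ```python
    # Sketch: choose a calibrated-intensity cutoff from labeled regions by
    # maximizing sensitivity + specificity - 1 (Youden's J).
    import numpy as np

    def best_cutoff(intensity, is_infarct):
        """Return (threshold, sensitivity, specificity) maximizing Youden's J.
        Lower calibrated intensity is treated as indicating infarcted tissue."""
        best = (None, 0.0, 0.0, -1.0)
        for t in np.unique(intensity):
            pred = intensity <= t
            tp = np.sum(pred & is_infarct)
            tn = np.sum(~pred & ~is_infarct)
            sens = tp / max(is_infarct.sum(), 1)
            spec = tn / max((~is_infarct).sum(), 1)
            j = sens + spec - 1.0
            if j > best[3]:
                best = (t, sens, spec, j)
        return best[:3]

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        infarct = rng.normal(-70, 10, 100)      # lower contrast intensity
        normal = rng.normal(-30, 10, 100)
        x = np.concatenate([infarct, normal])
        y = np.concatenate([np.ones(100, bool), np.zeros(100, bool)])
        print(best_cutoff(x, y))
    ```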

  14. Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.

    PubMed

    Das, Biman; Drake, Eli; Jack, John

    2004-02-01

    Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where under heavy gain saturation the total output approaches a constant intensity, although the intensity of any mode fluctuates rapidly over the average intensity. The relations between trivariate cumulants and central moments that were needed for the computation of trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.

  15. Structural study, NCA, FT-IR, FT-Raman spectral investigations, NBO analysis, thermodynamic functions of N-acetyl-l-phenylalanine.

    PubMed

    Raja, B; Balachandran, V; Revathi, B

    2015-03-05

    The FT-IR and FT-Raman spectra of N-acetyl-l-phenylalanine were recorded and analyzed. Natural bond orbital analysis has been carried out for various intramolecular interactions that are responsible for the stabilization of the molecule. The HOMO-LUMO energy gap has been computed with the help of density functional theory. The statistical thermodynamic functions (heat capacity, entropy, vibrational partition function and Gibbs energy) were obtained for the temperature range 100-1000 K. The polarizability, first hyperpolarizability, and anisotropy polarizability invariant have been computed using quantum chemical calculations. The infrared and Raman spectra were also predicted from the calculated intensities. Comparison of the experimental and theoretical spectral values provides important information about the ability of the computational method to describe the vibrational modes. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. A Computational Approach for Probabilistic Analysis of Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2009-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many similar challenges to those worked in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.

  17. A mixed-mode crack analysis of isotropic solids using conservation laws of elasticity

    NASA Technical Reports Server (NTRS)

    Yau, J. F.; Wang, S. S.; Corten, H. T.

    1980-01-01

    A simple and convenient method of analysis for studying two-dimensional mixed-mode crack problems is presented. The analysis is formulated on the basis of conservation laws of elasticity and of fundamental relationships in fracture mechanics. The problem is reduced to the determination of mixed-mode stress-intensity factor solutions in terms of conservation integrals involving known auxiliary solutions. One of the salient features of the present analysis is that the stress-intensity solutions can be determined directly by using information extracted in the far field. Several examples with solutions available in the literature are solved to examine the accuracy and other characteristics of the current approach. This method is demonstrated to be superior in its numerical simplicity and computational efficiency to other approaches. Solutions of more complicated and practical engineering fracture problems dealing with a crack emanating from a circular hole are also presented to illustrate the capacity of this method.

  18. Streaming Support for Data Intensive Cloud-Based Sequence Analysis

    PubMed Central

    Issa, Shadi A.; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J.; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation. PMID:23710461

  19. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    NASA Astrophysics Data System (ADS)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.
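
    A generic illustration, not the authors' parallel implementation: the k-means assignment step written as dense matrix algebra, which is the form that maps well onto wide SIMD lanes and optimized BLAS on manycore processors.

    ```python
    # Sketch: Lloyd's k-means with a vectorised (GEMM-based) assignment step.
    import numpy as np

    def kmeans(x, k, n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        centers = x[rng.choice(len(x), k, replace=False)]
        for _ in range(n_iter):
            # ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2, computed as one matrix product
            d2 = (np.sum(x**2, axis=1)[:, None]
                  - 2.0 * x @ centers.T
                  + np.sum(centers**2, axis=1)[None, :])
            labels = np.argmin(d2, axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean(axis=0)
        return centers, labels

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        pts = np.vstack([rng.normal(0, 0.3, (200, 4)), rng.normal(3, 0.3, (200, 4))])
        c, lab = kmeans(pts, k=2)
        print(np.round(c, 1))
    ```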

  20. GPU-based acceleration of computations in nonlinear finite element deformation analysis.

    PubMed

    Mafi, Ramin; Sirouspour, Shahin

    2014-03-01

    The physics of deformation for biological soft-tissue is best described by nonlinear continuum mechanics-based models, which then can be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphic processing unit-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis. It is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make it particularly suitable for implementation on a parallel computing platform such as a graphic processing unit. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
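
    A minimal sketch of the matrix-free conjugate gradient idea mentioned above: CG only needs the action v -> A v, so the global stiffness matrix never has to be assembled; on a GPU that operator would be an element-wise kernel. The tridiagonal operator below is a simple stand-in, not an FEM stiffness operator.

    ```python
    # Sketch: conjugate gradients driven only by an "apply operator" callback.
    import numpy as np

    def cg(apply_A, b, tol=1e-8, max_iter=200):
        """Matrix-free conjugate gradients for a symmetric positive definite A."""
        x = np.zeros_like(b)
        r = b - apply_A(x)
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = apply_A(p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    if __name__ == "__main__":
        n = 100
        def apply_A(v):                       # tridiagonal [-1, 2, -1] operator
            out = 2.0 * v
            out[:-1] -= v[1:]
            out[1:] -= v[:-1]
            return out
        b = np.ones(n)
        x = cg(apply_A, b)
        print(round(float(np.linalg.norm(apply_A(x) - b)), 10))   # small residual
    ```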

  1. GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.

    PubMed

    Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun

    2018-01-19

    Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the large data size, WGS data analysis is usually compute-intensive and IO-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost by elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing the Amazon Web Services (AWS). Our system won first prize in the Wind and Cloud challenge held by the Genomics and Cloud Technology Alliance conference (GCTA) committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluate the performance of GT-WGS with a 55× WGS dataset (400GB fastq) provided by the GCTA 2017 competition. In the best case, it only took 18.4 min to finish the analysis and the AWS cost of the whole process was only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated the performance of GT-WGS on a real-world dataset provided by the XiangYa hospital, which consists of a 5× whole-genome dataset of 500 samples, and on average GT-WGS managed to finish one 5× WGS analysis task in 2.4 min at a cost of $3.6. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by the time cost and computing cost. GT-WGS excelled as an efficient and affordable WGS analysis tool to address this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo .

  2. Performance analysis of parallel branch and bound search with the hypercube architecture

    NASA Technical Reports Server (NTRS)

    Mraz, Richard T.

    1987-01-01

    With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search-intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method of deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speedup over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.

  3. Automating FEA programming

    NASA Technical Reports Server (NTRS)

    Sharma, Naveen

    1992-01-01

    In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.

  4. Spectral mapping of soil organic matter

    NASA Technical Reports Server (NTRS)

    Kristof, S. J.; Baumgardner, M. F.; Johannsen, C. J.

    1974-01-01

    Multispectral remote sensing data were examined for use in the mapping of soil organic matter content. Computer-implemented pattern recognition techniques were used to analyze data collected in May 1969 and May 1970 by an airborne multispectral scanner over a 40-km flightline. Two fields within the flightline were selected for intensive study. Approximately 400 surface soil samples from these fields were obtained for organic matter analysis. The analytical data were used as training sets for computer-implemented analysis of the spectral data. It was found that within the geographical limitations included in this study, multispectral data and automatic data processing techniques could be used very effectively to delineate and map surface soils areas containing different levels of soil organic matter.

  5. Integration of Russian Tier-1 Grid Center with High Performance Computers at NRC-KI for LHC experiments and beyond HENP

    NASA Astrophysics Data System (ADS)

    Belyaev, A.; Berezhnaya, A.; Betev, L.; Buncic, P.; De, K.; Drizhuk, D.; Klimentov, A.; Lazin, Y.; Lyalin, I.; Mashinistov, R.; Novikov, A.; Oleynik, D.; Polyakov, A.; Poyda, A.; Ryabinkin, E.; Teslyuk, A.; Tkachenko, I.; Yasnopolskiy, L.

    2015-12-01

    The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at Kurchatov Institute (NRC-KI) in Moscow is a part of WLCG and it will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. The delegation of even a fraction of supercomputing resources to LHC computing will notably increase the total capacity. In 2014 the development of a portal combining a Tier-1 and a supercomputer in Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences like biology with genome sequencing analysis; astrophysics with cosmic ray analysis, antimatter and dark matter searches, etc.

  6. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software 'GENOA' is dedicated to parallel and high speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) Utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composites affordable; (2) Computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) Implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, and increasing convergence rates through high- and low-level processor assignment; (4) Creating the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid and distributed workstation types of computers; and (5) Market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.

  7. Stress intensities for cracks emanating from pin-loaded holes

    NASA Technical Reports Server (NTRS)

    Smith, C. W.; Jolles, M.; Peters, W. H.

    1977-01-01

    A series of stress freezing photoelastic experiments were conducted on large plates containing central holes with cracks emanating from the edge formed by the intersection of the hole with the plate surface. Loads were applied through rigid pins with neat fits in the holes. Stress-intensity factors (SIF) were estimated by a computer assisted least squares analysis of the photoelastic data taken from slices near the points of intersection of the flaw border with the hole boundary and the plate surface. Results indicate that the local mode of loading changes from Mode 1 near the hole boundary to mixed mode near the plate surface. The analysis is extended to include mixed mode loading, and results are compared with an existing approximate theory.

  8. Analysis of the effects of periodic forcing in the spike rate and spike correlations in semiconductor lasers with optical feedback

    NASA Astrophysics Data System (ADS)

    Quintero-Quiroz, C.; Sorrentino, Taciano; Torrent, M. C.; Masoller, Cristina

    2016-04-01

    We study the dynamics of semiconductor lasers with optical feedback and direct current modulation, operating in the regime of low frequency fluctuations (LFFs). In the LFF regime the laser intensity displays abrupt spikes: the intensity drops to zero and then gradually recovers. We focus on the inter-spike intervals (ISIs) and use a method of symbolic time-series analysis, which is based on computing the probabilities of symbolic patterns. We show that the variation of the probabilities of the symbols with the modulation frequency and with the intrinsic spike rate of the laser allows us to identify different regimes of noisy locking. Simulations of the Lang-Kobayashi model are in good qualitative agreement with experimental observations.
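
    A hedged sketch of the symbolic (ordinal-pattern) analysis mentioned above: each group of D consecutive inter-spike intervals is mapped to the permutation that sorts it, and the probabilities of the resulting symbols are estimated. The ISI series below is synthetic and the pattern length is an assumption.

    ```python
    # Sketch: estimate ordinal-pattern probabilities for a 1-D interval series.
    import numpy as np
    from collections import Counter
    from itertools import permutations

    def ordinal_pattern_probs(series, d=3):
        """Probabilities of ordinal patterns of length d in a 1-D series."""
        counts = Counter(
            tuple(np.argsort(series[i:i + d])) for i in range(len(series) - d + 1)
        )
        total = sum(counts.values())
        return {p: counts.get(p, 0) / total for p in permutations(range(d))}

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        isi = rng.exponential(1.0, 5000)          # stand-in inter-spike intervals
        probs = ordinal_pattern_probs(isi, d=3)
        for pattern, prob in probs.items():
            print(pattern, round(prob, 3))        # ~1/6 each for uncorrelated ISIs
    ```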

  9. Plasma wave excitation by intense microwave transmission from a space vehicle

    NASA Astrophysics Data System (ADS)

    Kimura, I.; Matsumoto, H.; Kaya, N.; Miyatake, S.

    The impact of intense microwaves on the ionospheric plasma was empirically investigated by an active rocket experiment (MINIX). The rocket carried two high-power (830 W) 2.45 GHz microwave transmitters on the mother section of the rocket. The ionospheric plasma response to the intense microwave was measured by a diagnostic package installed on both the mother and daughter sections. The daughter section was separated from the mother with a slow speed of 15 cm/sec. The plasma wave analyzers revealed that various plasma waves are nonlinearly excited by the microwave. Among them, the most intense are electron cyclotron waves, followed by electron plasma waves. Extremely low frequency waves (several tens of Hz) are also found. The results of the data analysis as well as comparative computer simulations are given in this paper.

  10. Use of application containers and workflows for genomic data analysis.

    PubMed

    Schulz, Wade L; Durant, Thomas J S; Siddon, Alexa J; Torres, Richard

    2016-01-01

    The rapid acquisition of biological data and development of computationally intensive analyses has led to a need for novel approaches to software deployment. In particular, the complexity of common analytic tools for genomics makes them difficult to deploy and decreases the reproducibility of computational experiments. Recent technologies that allow for application virtualization, such as Docker, allow developers and bioinformaticians to isolate these applications and deploy secure, scalable platforms that have the potential to dramatically increase the efficiency of big data processing. While limitations exist, this study demonstrates a successful implementation of a pipeline with several discrete software applications for the analysis of next-generation sequencing (NGS) data. With this approach, we significantly reduced the amount of time needed to perform clonal analysis from NGS data in acute myeloid leukemia.

  11. Climatic response variability and machine learning: development of a modular technology framework for predicting bio-climatic change in Pacific Northwest ecosystems

    NASA Astrophysics Data System (ADS)

    Seamon, E.; Gessler, P. E.; Flathers, E.

    2015-12-01

    The creation and use of large amounts of data in scientific investigations has become common practice. Data collection and analysis for large scientific computing efforts are increasing not only in volume but also in number, and the methods and analysis procedures are evolving toward greater complexity (Bell, 2009, Clarke, 2009, Maimon, 2010). In addition, the growth of diverse data-intensive scientific computing efforts (Soni, 2011, Turner, 2014, Wu, 2008) has demonstrated the value of supporting scientific data integration. Efforts to bridge this gap between the above perspectives have been attempted, in varying degrees, with modular scientific computing analysis regimes implemented with a modest amount of success (Perez, 2009). This constellation of effects - 1) an increasing growth in the volume and amount of data, 2) a growing data-intensive science base that has challenging needs, and 3) disparate data organization and integration efforts - has created a critical gap. Namely, systems of scientific data organization and management typically do not effectively enable integrated data collaboration or data-intensive science-based communications. Our research efforts attempt to address this gap by developing a modular technology framework for data science integration efforts - with climate variation as the focus. The intention is that this model, if successful, could be generalized to other application areas. Our research aim focused on the design and implementation of a modular, deployable technology architecture for data integration. Developed using aspects of R, interactive Python, SciDB, THREDDS, JavaScript, and varied data mining and machine learning techniques, the Modular Data Response Framework (MDRF) was implemented to explore case scenarios for bio-climatic variation as they relate to Pacific Northwest ecosystem regions. Our preliminary results, using historical NetCDF climate data for calibration purposes across the inland Pacific Northwest region (Abatzoglou, Brown, 2011), show clear ecosystem shifts over a ten-year period (2001-2011), based on multiple supervised classifier methods for bioclimatic indicators.

  12. Fluorescence-based enhanced reality (FLER) for real-time estimation of bowel perfusion in minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Diana, Michele

    2016-03-01

    Pre-anastomotic bowel perfusion is a key factor for a successful healing process. Clinical judgment has limited accuracy in evaluating intestinal microperfusion. Fluorescence videography is a promising tool for image-guided intraoperative assessment of bowel perfusion at the future anastomotic site in the setting of minimally invasive procedures. The standard configuration for fluorescence videography includes a near-infrared endoscope able to detect the signal emitted by a fluorescent dye, most frequently Indocyanine Green (ICG), which is administered by intravenous injection. Fluorescence intensity is proportional to the amount of fluorescent dye diffusing in the tissue and consequently is a surrogate marker of tissue perfusion. However, fluorescence intensity alone remains a subjective approach, and an integrated computer-based analysis of the over-time evolution of the fluorescence signal is required to obtain quantitative data. We have developed a solution integrating computer-based analysis for intraoperative evaluation of the optimal resection site, based on bowel perfusion as determined by the dynamic fluorescence intensity. The software can generate a "virtual perfusion cartography" based on the "fluorescence time-to-peak". The virtual perfusion cartography can be overlapped onto real-time laparoscopic images to obtain the Enhanced Reality effect. We have termed this approach FLuorescence-based Enhanced Reality (FLER). This manuscript describes the stepwise development of the FLER concept.
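
    A minimal sketch of a fluorescence time-to-peak map, assuming a time-lapse stack of fluorescence frames: for each pixel, record the time at which intensity peaks; shorter times suggest better perfusion. The synthetic stack and frame interval are illustrative, and the FLER colour-mapping and overlay steps are omitted.

    ```python
    # Sketch: per-pixel time-to-peak from a (T, H, W) fluorescence stack.
    import numpy as np

    def time_to_peak(stack, frame_interval_s=1.0):
        """Return the per-pixel time (seconds) at which intensity peaks."""
        return np.argmax(stack, axis=0) * frame_interval_s

    if __name__ == "__main__":
        t = np.arange(30)[:, None, None].astype(float)
        peak_frame = np.array([[5.0, 20.0]])[None, :, :]          # two pixels
        stack = np.exp(-0.5 * ((t - peak_frame) / 3.0) ** 2)      # synthetic signal
        print(time_to_peak(stack, frame_interval_s=2.0))          # [[10. 40.]] seconds
    ```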

  13. Analysis of computer images in the presence of metals

    NASA Astrophysics Data System (ADS)

    Buzmakov, Alexey; Ingacheva, Anastasia; Prun, Victor; Nikolaev, Dmitry; Chukalina, Marina; Ferrero, Claudio; Asadchikov, Victor

    2018-04-01

    Artifacts caused by intensely absorbing inclusions are encountered in computed tomography via polychromatic scanning and may obscure or simulate pathologies in medical applications. To improve the quality of reconstruction when high-Z inclusions are present, we previously proposed and tested with synthetic data an iterative technique with a soft penalty mimicking linear inequalities on the photon-starved rays. This note reports a test at the tomographic laboratory set-up at the Institute of Crystallography FSRC "Crystallography and Photonics" RAS, in which tomographic scans were successfully made of a temporary tooth without an inclusion and with a Pb inclusion.

  14. mapDIA: Preprocessing and statistical analysis of quantitative proteomics data from data independent acquisition mass spectrometry.

    PubMed

    Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon

    2015-11-03

    Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for the 14-3-3β dynamic interaction network and the prostate cancer glycoproteome. The software was written in C++ language and the source code is available for free through the SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
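
    A hedged sketch of the first step named above, total-intensity-sum normalization: scale each sample so its summed fragment intensity equals the across-sample average. The local retention-time variant and the downstream statistical model are not reproduced; the matrix below is synthetic.

    ```python
    # Sketch: column-wise total-intensity-sum normalization of a fragment matrix.
    import numpy as np

    def total_sum_normalise(intensity):
        """intensity: fragments x samples matrix; returns the rescaled matrix."""
        col_sums = intensity.sum(axis=0)
        target = col_sums.mean()
        return intensity * (target / col_sums)

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        frag = rng.lognormal(10, 1, size=(6, 4))     # 6 fragments, 4 samples
        norm = total_sum_normalise(frag)
        print(np.round(norm.sum(axis=0), 1))         # equal column sums after scaling
    ```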

  15. Erythrocyte 2,3-diphosphoglycerate depletion associated with hypophosphatemia detected by routine arterial blood gas analysis.

    PubMed

    Larsen, V H; Waldau, T; Gravesen, H; Siggaard-Andersen, O

    1996-01-01

    To describe a clinical case where an extremely low erythrocyte 2,3-diphosphoglycerate concentration (2,3-DPG) was discovered by routine blood gas analysis supplemented by computer calculation of derived quantities. The finding of a low 2,3-DPG revealed a severe hypophosphatemia. Open uncontrolled study of a patient case. Intensive care observation during 41 days. A 44-year-old woman with an abdominal abscess. Surgical drainage, antibiotics and parenteral nutrition. Daily routine blood gas analyses with computer calculation of the hemoglobin oxygen affinity and estimation of the 2,3-DPG. An abrupt decline of 2,3-DPG was observed late in the course coincident with a pronounced hypophosphatemia. The fall in 2,3-DPG was verified by enzymatic analysis. 2,3-DPG may be estimated by computer calculation of routine blood gas data. A low 2,3-DPG, which may be associated with hypophosphatemia, causes an unfavorable increase in hemoglobin oxygen affinity which reduces the oxygen release to the tissues.

  16. Fermilab computing at the Intensity Frontier

    DOE PAGES

    Group, Craig; Fuess, S.; Gutsche, O.; ...

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned to take place at the Fermi National Accelerator Laboratory located near Chicago, Illinois. These experiments face unique challenges, but also have overlapping computational needs. In principle, by exploiting this commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently, and scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  17. Integral-moment analysis of the BATSE gamma-ray burst intensity distribution

    NASA Technical Reports Server (NTRS)

    Horack, John M.; Emslie, A. Gordon

    1994-01-01

    We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V_max. If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides for the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
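    For orientation, the quantities named above are the low-order moments of the intensity sample. The toy sketch below computes sample estimates of the mean, variance, and skewness; the paper's integral-moment technique works on the convolved luminosity-number density distribution and propagates uncertainties, which this illustration omits.

    ```python
    # Generic sketch of low-order sample moments of a set of burst peak
    # intensities; illustrative only, not the BATSE integral-moment estimator.
    import numpy as np

    def moments(intensities):
        x = np.asarray(intensities, dtype=float)
        mean = x.mean()
        var = x.var(ddof=1)                            # unbiased sample variance
        skew = np.mean((x - mean) ** 3) / var ** 1.5   # sample skewness
        return mean, var, skew

    print(moments([1.2, 0.8, 3.5, 0.4, 1.0, 2.2]))
    ```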

  18. A Comparative Analysis of Computational Approaches to Relative Protein Quantification Using Peptide Peak Intensities in Label-free LC-MS Proteomics Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzke, Melissa M.; Brown, Joseph N.; Gritsenko, Marina A.

    2013-02-01

    Liquid chromatography coupled with mass spectrometry (LC-MS) is widely used to identify and quantify peptides in complex biological samples. In particular, label-free shotgun proteomics is highly effective for the identification of peptides and subsequently obtaining a global protein profile of a sample. As a result, this approach is widely used for discovery studies. Typically, the objective of these discovery studies is to identify proteins that are affected by some condition of interest (e.g. disease, exposure). However, for complex biological samples, label-free LC-MS proteomics experiments measure peptides and do not directly yield protein quantities. Thus, protein quantification must be inferred from one or more measured peptides. In recent years, many computational approaches to relative protein quantification of label-free LC-MS data have been published. In this review, we examine the most commonly employed quantification approaches to relative protein abundance from peak intensity values, evaluate their individual merits, and discuss challenges in the use of the various computational approaches.
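    One of the simple rollup strategies typically examined in such comparisons is a "top-N" approach: estimate a protein's relative abundance in each sample by summing its N most intense peptides. The sketch below illustrates that idea with made-up data structures; it is not the code of any specific tool reviewed in the paper.

    ```python
    # Top-N peptide rollup to protein-level abundance; illustrative sketch with
    # hypothetical data, not a specific tool's implementation.
    def topn_protein_rollup(peptide_intensities, n=3):
        """peptide_intensities: {(protein, sample): [peptide peak intensities]}."""
        protein_abundance = {}
        for (protein, sample), values in peptide_intensities.items():
            top_n = sorted(values, reverse=True)[:n]   # N most intense peptides
            protein_abundance[(protein, sample)] = sum(top_n)
        return protein_abundance

    data = {('P1', 's1'): [1e6, 4e5, 2e5, 5e4],
            ('P1', 's2'): [2e6, 6e5, 1e5]}
    print(topn_protein_rollup(data))
    ```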

  19. Integrating the Apache Big Data Stack with HPC for Big Data

    NASA Astrophysics Data System (ADS)

    Fox, G. C.; Qiu, J.; Jha, S.

    2014-12-01

    There is perhaps a broad consensus as to important issues in practical parallel computing as applied to large scale simulations; this is reflected in supercomputer architectures, algorithms, libraries, languages, compilers and best practice for application development. However, the same is not so true for data intensive computing, even though commercial clouds devote much more resources to data analytics than supercomputers devote to simulations. We look at a sample of over 50 big data applications to identify characteristics of data intensive applications and to deduce needed runtime and architectures. We suggest a big data version of the famous Berkeley dwarfs and NAS parallel benchmarks and use these to identify a few key classes of hardware/software architectures. Our analysis builds on combining HPC with ABDS, the Apache big data software stack that is widely used in modern cloud computing. Initial results on clouds and HPC systems are encouraging. We propose the development of SPIDAL -- the Scalable Parallel Interoperable Data Analytics Library -- built on system and data abstractions suggested by the HPC-ABDS architecture. We discuss how it can be used in several application areas including Polar Science.

  20. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  1. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP; Soares, Thereza A.

    2007-12-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines.
We are currently using an approach of developing tools specifically intended for use on large scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its own entry within the trajectory, which typically is spread across multiple files, and reads the appropriate frames independently of all other processors.
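    The frame-ownership idea described above can be illustrated with a small sketch: each process computes which contiguous block of frames it owns and reads only those, so the trajectory is traversed once and no process waits on the others. This illustrates the partitioning arithmetic only; it is not the DIANA code.

    ```python
    # Block partitioning of trajectory frames across processes (illustrative).
    def my_frames(total_frames, n_procs, rank):
        """Contiguous block of frame indices owned by process `rank`."""
        base, extra = divmod(total_frames, n_procs)
        start = rank * base + min(rank, extra)
        stop = start + base + (1 if rank < extra else 0)
        return range(start, stop)

    # Example: 10 frames over 4 processes -> blocks of sizes 3, 3, 2, 2
    for r in range(4):
        print(r, list(my_frames(10, 4, r)))
    ```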

  2. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP

    2008-03-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines.
We are currently using an approach of developing tools specifically intended for use on large scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its own entry within the trajectory, which typically is spread across multiple files, and reads the appropriate frames independently of all other processors.

  3. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for data storage, computation and analysis. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build the multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides spatiotemporal computational models and advanced geospatial visualization tools for other domains with spatial properties. We tested the performance of the platform on taxi trajectory analysis. Results suggested that GISpark achieves excellent run-time performance in spatiotemporal big data applications.

  4. Hypervelocity Impact Test Fragment Modeling: Modifications to the Fragment Rotation Analysis and Lightcurve Code

    NASA Technical Reports Server (NTRS)

    Gouge, Michael F.

    2011-01-01

    Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, the fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are now being put to new uses with emerging technologies. Digital models of these fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways that describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. This calls an additional subroutine that graphically displays the intensities and the resulting frequency of those intensities over a range of solar phase angles in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve code concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.

  5. SURROGATE MODEL DEVELOPMENT AND VALIDATION FOR RELIABILITY ANALYSIS OF REACTOR PRESSURE VESSELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, William M.; Riley, Matthew E.; Spencer, Benjamin W.

    In nuclear light water reactors (LWRs), the reactor coolant, core and shroud are contained within a massive, thick-walled steel vessel known as a reactor pressure vessel (RPV). Given the tremendous size of these structures, RPVs typically contain a large population of pre-existing flaws introduced in the manufacturing process. After many years of operation, irradiation-induced embrittlement makes these vessels increasingly susceptible to fracture initiation at the locations of the pre-existing flaws. Because of the uncertainty in the loading conditions, flaw characteristics and material properties, probabilistic methods are widely accepted and used in assessing RPV integrity. The Fracture Analysis of Vessels – Oak Ridge (FAVOR) computer program developed by researchers at Oak Ridge National Laboratory is widely used for this purpose. This program can be used to perform deterministic and probabilistic risk-informed analyses of the structural integrity of an RPV subjected to a range of thermal-hydraulic events. FAVOR uses a one-dimensional representation of the global response of the RPV, which is appropriate for the beltline region, which experiences the most embrittlement, and employs an influence coefficient technique to rapidly compute stress intensity factors for axis-aligned surface-breaking flaws. The Grizzly code is currently under development at Idaho National Laboratory (INL) to be used as a general multiphysics simulation tool to study a variety of degradation mechanisms in nuclear power plant components. The first application of Grizzly has been to study fracture in embrittled RPVs. Grizzly can be used to model the thermo-mechanical response of an RPV under transient conditions observed in a pressurized thermal shock (PTS) scenario. The global response of the vessel provides boundary conditions for local 3D models of the material in the vicinity of a flaw. Fracture domain integrals are computed to obtain stress intensity factors, which can in turn be used to assess whether a fracture would initiate at a pre-existing flaw. To use Grizzly for probabilistic analysis, it is necessary to have a way to rapidly evaluate stress intensity factors. To accomplish this goal, a reduced order model (ROM) has been developed to efficiently represent the behavior of a detailed 3D Grizzly model used to calculate fracture parameters. This approach uses the stress intensity factor influence coefficient method that has been used with great success in FAVOR. Instead of interpolating between tabulated solutions, as FAVOR does, the ROM approach uses a response surface methodology to compute fracture solutions based on a sampled set of results used to train the ROM. The main advantages of this approach are that the process of generating the training data can be fully automated, and the procedure can be readily used to consider more general flaw configurations. This paper demonstrates the procedure used to generate a ROM to rapidly compute stress intensity factors for axis-aligned flaws. The results from this procedure are in good agreement with those produced using the traditional influence coefficient interpolation procedure, which gives confidence in this method. This paves the way for applying this procedure to more general flaw configurations.
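    The response-surface idea behind the ROM can be sketched as follows: fit a low-order surface to stress intensity factors produced by the detailed 3D model at sampled inputs, then evaluate that surface cheaply inside the probabilistic analysis. The two input features (flaw depth and a transient time) and the training values below are invented for illustration; they are not Grizzly or FAVOR data.

    ```python
    # Quadratic response-surface surrogate for sampled stress intensity factors.
    # Inputs (flaw depth a, transient time t) and K_I values are hypothetical.
    import numpy as np

    def quad_features(a, t):
        a = np.atleast_1d(np.asarray(a, float))
        t = np.atleast_1d(np.asarray(t, float))
        return np.column_stack([np.ones_like(a), a, t, a * a, t * t, a * t])

    def fit_rom(a_train, t_train, k_train):
        X = quad_features(a_train, t_train)
        coeffs, *_ = np.linalg.lstsq(X, np.asarray(k_train, float), rcond=None)
        return coeffs

    def predict_rom(coeffs, a, t):
        return quad_features(a, t) @ coeffs

    # Train on a handful of sampled (a, t, K_I) points, then query a new point.
    c = fit_rom([5, 5, 10, 10, 15, 15, 20],
                [100, 300, 100, 300, 100, 300, 200],
                [20, 24, 31, 37, 40, 46, 52])
    print(predict_rom(c, 12, 250))
    ```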

  6. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and the manual gold standard. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, a robustness and a computation speed adequate for use in a clinical environment.

  7. Franck-Condon Factors for Diatomics: Insights and Analysis Using the Fourier Grid Hamiltonian Method

    ERIC Educational Resources Information Center

    Ghosh, Supriya; Dixit, Mayank Kumar; Bhattacharyya, S. P.; Tembe, B. L.

    2013-01-01

    Franck-Condon factors (FCFs) play a crucial role in determining the intensities of the vibrational bands in electronic transitions. In this article, a relatively simple method to calculate the FCFs is illustrated. An algorithm for the Fourier Grid Hamiltonian (FGH) method for computing the vibrational wave functions and the corresponding energy…

  8. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and the Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
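    The all-to-all arises because a two-dimensional FFT factors into one-dimensional FFTs along rows followed by one-dimensional FFTs along columns: when the rows are distributed across processors, the intermediate array must be transposed (an all-to-all exchange) before the second pass. The serial check below demonstrates the factorization itself; the distributed transpose is only indicated in the comments.

    ```python
    # Serial check that a 2-D FFT is two passes of 1-D FFTs; in a distributed
    # setting the data would be transposed (all-to-all) between the two passes.
    import numpy as np

    x = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)
    row_pass = np.fft.fft(x, axis=1)      # local 1-D FFTs over each row
    full = np.fft.fft(row_pass, axis=0)   # second pass needs whole columns
    assert np.allclose(full, np.fft.fft2(x))
    ```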

  9. Hybrid cloud and cluster computing paradigms for life science applications

    PubMed Central

    2010-01-01

    Background Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing especially for parallel data intensive applications. However they have limited applicability to some areas such as data mining because MapReduce has poor performance on problems with an iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system Twister. Results Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability comparisons in several important non iterative cases. These are linked to MPI applications for final stages of the data analysis. Further we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. Conclusions The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment while Twister promises a uniform programming environment for many Life Sciences applications. Methods We used commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments. PMID:21210982

  10. Hybrid cloud and cluster computing paradigms for life science applications.

    PubMed

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing especially for parallel data intensive applications. However they have limited applicability to some areas such as data mining because MapReduce has poor performance on problems with an iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability comparisons in several important non iterative cases. These are linked to MPI applications for final stages of the data analysis. Further we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment while Twister promises a uniform programming environment for many Life Sciences applications. We used commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.

  11. Hadoop for High-Performance Climate Analytics: Use Cases and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Tamkin, Glenn

    2013-01-01

    Scientific data services are a critical aspect of the mission of the NASA Center for Climate Simulation (NCCS). Hadoop, via MapReduce, provides an approach to high-performance analytics that is proving to be useful to data intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. The NCCS is particularly interested in the potential of Hadoop to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we prototyped a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. The initial focus was on averaging operations over arbitrary spatial and temporal extents within Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. After preliminary results suggested that this approach improves efficiencies within data intensive analytic workflows, we invested in building a cyber infrastructure resource for developing a new generation of climate data analysis capabilities using Hadoop. This resource is focused on reducing the time spent in the preparation of reanalysis data used in data-model inter-comparison, a long-sought goal of the climate community. This paper summarizes the related use cases and lessons learned.
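    The canonical averaging operation maps naturally onto MapReduce: mappers emit partial (sum, count) pairs per key (for example, a month or a grid region), and reducers combine them into means. The toy sketch below shows the pattern with made-up keys and values; it is not the NCCS prototype code.

    ```python
    # Map/reduce averaging pattern (illustrative, hypothetical records).
    from collections import defaultdict

    def map_record(record):
        month, value = record                 # e.g., ('1979-01', 273.4)
        return month, (value, 1)

    def reduce_pairs(pairs):
        acc = defaultdict(lambda: [0.0, 0])
        for key, (s, c) in pairs:
            acc[key][0] += s
            acc[key][1] += c
        return {k: s / c for k, (s, c) in acc.items()}

    records = [('1979-01', 273.4), ('1979-01', 275.0), ('1979-02', 280.1)]
    print(reduce_pairs(map(map_record, records)))   # per-month means
    ```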

  12. Thermomechanical analysis of fast-burst reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, J.D.

    1994-08-01

    Fast-burst reactors are designed to provide intense, short-duration pulses of neutrons. The fission reaction also produces extreme time-dependent heating of the nuclear fuel. An existing transient-dynamic finite element code was modified specifically to compute the time-dependent stresses and displacements due to thermal shock loads of reactors. Thermomechanical analysis was then applied to determine structural feasibility of various concepts for an EDNA-type reactor and to optimize the mechanical design of the new SPR III-M reactor.

  13. Fragmentation of care and the use of head computed tomography in patients with ischemic stroke.

    PubMed

    Bekelis, Kimon; Roberts, David W; Zhou, Weiping; Skinner, Jonathan S

    2014-05-01

    Computed tomographic (CT) scans are central diagnostic tests for ischemic stroke. Their inefficient use is a negative quality measure tracked by the Centers for Medicare and Medicaid Services. We performed a retrospective analysis of Medicare fee-for-service claims data for adults admitted for ischemic stroke from 2008 to 2009, with 1-year follow-up. The outcome measures were risk-adjusted rates of high-intensity CT use (≥4 head CT scans) and risk- and price-adjusted Medicare expenditures in the year after admission. The average number of head CT scans in the year after admission, for the 327 521 study patients, was 1.94, whereas 11.9% had ≥4. Risk-adjusted rates of high-intensity CT use ranged from 4.6% (Napa, CA) to 20.0% (East Long Island, NY). These rates were 2.6% higher for blacks than for whites (95% confidence interval, 2.1%-3.1%), with considerable regional variation. Higher fragmentation of care (number of different doctors seen) was associated with high-intensity CT use. Patients living in the top quintile regions of fragmentation experienced a 5.9% higher rate of high-intensity CT use, with the lowest quintile as reference; the corresponding odds ratio was 1.77 (95% confidence interval, 1.71-1.83). Similarly, 1-year risk- and price-adjusted expenditures exhibited considerable regional variation, ranging from $31 175 (Salem, MA) to $61 895 (McAllen, TX). Regional rates of high-intensity CT scans were positively associated with 1-year expenditures (r=0.56; P<0.01). Rates of high-intensity CT use for patients with ischemic stroke exhibit wide variation in practice patterns across regions and races. Medicare expenditures parallel these disparities. Fragmentation of care is associated with high-intensity CT use. © 2014 American Heart Association, Inc.

  14. Integration of drug dosing data with physiological data streams using a cloud computing paradigm.

    PubMed

    Bressan, Nadja; James, Andrew; McGregor, Carolyn

    2013-01-01

    Many drugs are used during the provision of intensive care for the preterm newborn infant. Recommendations for drug dosing in newborns depend upon data from population-based pharmacokinetic research. There is a need to be able to adjust drug dosing according to the preterm infant's response to the standard dosing recommendations. The real-time integration of physiological data with drug dosing data would facilitate individualised drug dosing for these immature infants. This paper proposes the use of a novel computational framework that employs real-time, temporal data analysis for this task. Deployment of the framework within the cloud computing paradigm will enable widespread distribution of individualised drug dosing for newborn infants.

  15. Computer Analysis of Electromagnetic Field Exposure Hazard for Space Station Astronauts during Extravehicular Activity

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Kelley, James S.; Panneton, Robert B.; Arndt, G. Dickey

    1995-01-01

    In order to estimate the RF radiation hazards to astronauts and electronics equipment due to various Space Station transmitters, the electric fields around the various Space Station antennas are computed using rigorous Computational Electromagnetics (CEM) techniques. The Method of Moments (MoM) was applied to the UHF and S-band low gain antennas. The Aperture Integration (AI) method and the Geometrical Theory of Diffraction (GTD) method were used to compute the electric field intensities for the S- and Ku-band high gain antennas. As a result of this study, the regions in which the electric fields exceed the specified exposure levels for the Extravehicular Mobility Unit (EMU) electronics equipment and the Extravehicular Activity (EVA) astronaut are identified for various Space Station transmitters.

  16. Three-dimensional digital mapping of the optic nerve head cupping in glaucoma

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Ramirez, Manuel; Morales, Jose

    1992-08-01

    Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks, including feature extraction, registration by cepstral analysis, and correction for intensity variations, are performed. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching technique used to obtain the translational differences at every step relies on cepstral analysis, with a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the diagnostic criteria for glaucoma.
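    The registration step needs the translational offset between corresponding image blocks. As a simplified stand-in for the cepstral matching described above (not the authors' method), the sketch below estimates the shift from the peak of an FFT-based circular cross-correlation.

    ```python
    # FFT-based cross-correlation peak as a simplified shift estimator; the
    # paper uses cepstral analysis, which is not reproduced here.
    import numpy as np

    def estimate_shift(a, b):
        """Estimated (row, col) displacement of block b relative to block a."""
        A, B = np.fft.fft2(a), np.fft.fft2(b)
        corr = np.fft.ifft2(np.conj(A) * B).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > a.shape[0] // 2:              # map wrap-around to signed shifts
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return int(dy), int(dx)

    a = np.zeros((8, 8)); a[2, 3] = 1.0
    b = np.roll(a, (3, -2), axis=(0, 1))      # b is a displaced copy of a
    print(estimate_shift(a, b))               # -> (3, -2)
    ```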

  17. Prognosis for Survival of Young Women with Breast Cancer by Quantitative p53 Immunohistochemistry

    PubMed Central

    Axelrod, David E.; Shah, Kinsuk; Yang, Qifeng; Haffty, Bruce G.

    2015-01-01

    p53 protein detected immunohistochemically has not been accepted as a biomarker for breast cancer patients because of disparate reports of the relationship between the amount of p53 protein detected and patient survival. The purpose of this study was to determine experimental conditions and methods of data analysis for which p53 stain intensity could be prognostic for survival of young breast cancer patients. A tissue microarray of specimens from 93 patients was stained with anti-p53 antibody, and stain intensity measured with a computer-aided image analysis system. A cut-point at one standard deviation below the mean of the distribution of p53 stain intensity separated patients into two groups with significantly different survival. These results were confirmed by Quantitative Nuclear Grade determined by DNA-specific Feulgen staining. P53 provided information beyond ER and PR status. Therefore, under the conditions reported here, p53 protein can be an effective prognostic factor for young breast cancer patients. PMID:26322145

  18. Analysis of the X-ray emission spectra of copper, germanium and rubidium plasmas produced at the Phelix laser facility

    NASA Astrophysics Data System (ADS)

    Comet, M.; Pain, J.-C.; Gilleron, F.; Piron, R.; Denis-Petit, D.; Méot, V.; Gosselin, G.; Morel, P.; Hannachi, F.; Gobet, F.; Tarisien, M.; Versteegen, M.

    2017-03-01

    We present the analysis of X-ray emission spectra of copper, germanium and rubidium plasmas measured at the Phelix laser facility. The laser intensity was around 6×10^14 W·cm^-2. The analysis is based on the hypothesis of a homogeneous plasma in local thermodynamic equilibrium using an effective temperature. This temperature is deduced from hydrodynamic simulations and collisional-radiative computations. Spectra are then calculated using the LTE opacity codes OPAMCDF and SCO-RCG and compared to experimental data.

  19. Hierarchical Parallelization of Gene Differential Association Analysis

    PubMed Central

    2011-01-01

    Background Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Results Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels. PMID:21936916

  20. Hierarchical parallelization of gene differential association analysis.

    PubMed

    Needham, Mark; Hu, Rui; Dwarkadas, Sandhya; Qiu, Xing

    2011-09-21

    Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels.

  1. An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore

    USGS Publications Warehouse

    Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.
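    The core calculation can be sketched as follows: each mineral's diagnostic peak intensity is divided by a calibration factor determined from standards, and the scaled values are normalized to 100 percent. The minerals and calibration values below are placeholders for illustration; they are not the paper's calibration data and this is not the USGS program itself.

    ```python
    # Semiquantitative abundances from calibrated diagnostic peak intensities
    # (illustrative values only).
    def relative_abundances(peak_intensity, calibration):
        """peak_intensity, calibration: dicts keyed by mineral name."""
        scaled = {m: peak_intensity[m] / calibration[m] for m in peak_intensity}
        total = sum(scaled.values())
        return {m: 100.0 * v / total for m, v in scaled.items()}

    print(relative_abundances({'halite': 1200, 'nitratine': 800, 'gypsum': 150},
                              {'halite': 4.1, 'nitratine': 1.0, 'gypsum': 2.3}))
    ```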

  2. An x-ray diffraction method for semiquantitative mineralogical analysis of chilean nitrate ore

    USGS Publications Warehouse

    Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.

  3. Use of application containers and workflows for genomic data analysis

    PubMed Central

    Schulz, Wade L.; Durant, Thomas J. S.; Siddon, Alexa J.; Torres, Richard

    2016-01-01

    Background: The rapid acquisition of biological data and development of computationally intensive analyses has led to a need for novel approaches to software deployment. In particular, the complexity of common analytic tools for genomics makes them difficult to deploy and decreases the reproducibility of computational experiments. Methods: Recent technologies that allow for application virtualization, such as Docker, allow developers and bioinformaticians to isolate these applications and deploy secure, scalable platforms that have the potential to dramatically increase the efficiency of big data processing. Results: While limitations exist, this study demonstrates a successful implementation of a pipeline with several discrete software applications for the analysis of next-generation sequencing (NGS) data. Conclusions: With this approach, we significantly reduced the amount of time needed to perform clonal analysis from NGS data in acute myeloid leukemia. PMID:28163975

  4. The Integrated Radiation Mapper Assistant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlton, R.E.; Tripp, L.R.

    1995-03-01

    The Integrated Radiation Mapper Assistant (IRMA) system combines state-of-the-art radiation sensors and microprocessor-based analysis techniques to perform radiation surveys. Control of the survey function is from a control station located outside the radiation area, thus reducing time spent in radiation areas performing surveys. The system consists of a directional radiation sensor, a laser range finder, two area radiation sensors, and a video camera mounted on a pan and tilt platform. This sensor package is deployable on a remotely operated vehicle. The outputs of the system are radiation intensity maps identifying both radiation source intensities and radiation levels throughout the room being surveyed. After completion of the survey, the data can be removed from the control station computer for further analysis or archiving.

  5. Prospective clinical observational study evaluating gender-associated differences of preoperative pain intensity.

    PubMed

    Tafelski, Sascha; Kerper, Léonie F; Salz, Anna-Lena; Spies, Claudia; Reuter, Eva; Nachtigall, Irit; Schäfer, Michael; Krannich, Alexander; Krampe, Henning

    2016-07-01

    Previous studies reported conflicting results concerning different pain perceptions of men and women. Recent research found higher pain levels in men after major surgery, contrasted by women after minor procedures. This trial investigates differences in self-reported preoperative pain intensity between genders before surgery. Patients were enrolled in 2011 and 2012 presenting for preoperative evaluation at the anesthesiological assessment clinic at Charité University hospital. Out of 5102 patients completing a computer-assisted self-assessment, 3042 surgical patients with any preoperative pain were included into this prospective observational clinical study. Preoperative pain intensity (0-100 VAS, visual analog scale) was evaluated integrating psychological cofactors into analysis. Women reported higher preoperative pain intensity than men with median VAS scores of 30 (25th-75th percentiles: 10-52) versus 21 (10-46) (P < 0.001). Adjusted multiple regression analysis showed that female gender remained statistically significantly associated with higher pain intensity (P < 0.001). Gender differences were consistent across several subgroups especially with varying patterns in elderly. Women scheduled for minor and moderate surgical procedures showed largest differences in overall pain compared to men. This large clinical study observed significantly higher preoperative pain intensity in female surgical patients. This gender difference was larger in the elderly potentially contradicting the current hypothesis of a primary sex-hormone derived effect. The observed variability in specific patient subgroups may help to explain heterogeneous findings of previous studies.

  6. Effects of Computer-Based Practice on the Acquisition and Maintenance of Basic Academic Skills for Children with Moderate to Intensive Educational Needs

    ERIC Educational Resources Information Center

    Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee

    2011-01-01

    This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…

  7. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a combined data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.

  8. Methods for analysis of cracks in three-dimensional solids

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1984-01-01

    Analytical and numerical methods evaluating the stress-intensity factors for three-dimensional cracks in solids are presented, with reference to fatigue failure in aerospace structures. The exact solutions for embedded elliptical and circular cracks in infinite solids, and the approximate methods, including the finite-element, the boundary-integral equation, the line-spring models, and the mixed methods are discussed. Among the mixed methods, the superposition of analytical and finite element methods, the stress-difference, the discretization-error, the alternating, and the finite element-alternating methods are reviewed. Comparison of the stress-intensity factor solutions for some three-dimensional crack configurations showed good agreement. Thus, the choice of a particular method in evaluating the stress-intensity factor is limited only to the availability of resources and computer programs.
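    As a point of reference for the exact solutions mentioned above, the classical mode-I stress intensity factor for an embedded circular (penny-shaped) crack of radius a in an infinite solid under remote tensile stress σ is recalled below; this particular formula is supplied here for orientation and is not quoted from the paper.

    ```latex
    % Exact mode-I stress intensity factor, embedded penny-shaped crack of
    % radius a in an infinite solid under remote tension \sigma.
    K_I = \frac{2}{\pi}\,\sigma\sqrt{\pi a} = 2\,\sigma\sqrt{\frac{a}{\pi}}
    ```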

  9. A study of model parameters associated with the urban climate using HCMM data. [analysis of St. Louis, Missouri infrared imagery

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Progress in the study of the intensity of the urban heat island is reported. The intensity of the heat island is commonly defined as the temperature difference between the center of the city and the surrounding suburban and rural regions. The intensity is considered as a function of changes in the season and changes in meteorological conditions in order to derive various parameters which may be used in numerical models for urban climate. Twelve case studies were selected and CCT's were ordered. In situ data was obtained from sixteen stations scattered about the city of St. Louis. Upper-air meteorological data were obtained and the water vapor and the temperature data were processed. Atmospheric transmissivities were computed for each of the case studies.

  10. GPU accelerated dynamic functional connectivity analysis for functional MRI data.

    PubMed

    Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu

    2015-07-01

    Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× speed-up once thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate the DFC analyses significantly. The developed algorithms make the DFC analyses more practical for multi-subject studies with more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
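    The core sliding-window computation can be sketched in a few lines: a Pearson correlation between two regional time courses is evaluated inside a window that slides across the scan. The window length and step below are illustrative choices; the parallel implementations described above distribute this work over windows and region pairs.

    ```python
    # Sliding-window functional connectivity between two ROI time courses
    # (window length and step are illustrative).
    import numpy as np

    def sliding_window_fc(ts_a, ts_b, win=30, step=1):
        ts_a = np.asarray(ts_a, float)
        ts_b = np.asarray(ts_b, float)
        corrs = []
        for start in range(0, len(ts_a) - win + 1, step):
            a = ts_a[start:start + win]
            b = ts_b[start:start + win]
            corrs.append(np.corrcoef(a, b)[0, 1])   # Pearson r in this window
        return np.array(corrs)

    t = np.arange(200)
    x = np.sin(0.1 * t) + 0.1 * np.random.randn(200)
    y = np.sin(0.1 * t + 0.5) + 0.1 * np.random.randn(200)
    print(sliding_window_fc(x, y).shape)            # one r value per window
    ```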

  11. Stability assessment of structures under earthquake hazard through GRID technology

    NASA Astrophysics Data System (ADS)

    Prieto Castrillo, F.; Boton Fernandez, M.

    2009-04-01

    This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis; preparation of input data (pre-processing), response computation and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaboration analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database etc.) The dynamical model is described by a set of ordinary differential equations (ODE's) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high level design, subsequent improvements/changes of the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The Metadata of these records is also stored in the GRID federated database. This Metadata contains both relevant information about the earthquake (as it is usual in a seismic repository) and also the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODE's over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding Metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically-based algorithm from the available Metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with the on-going initiatives to build a European earthquake record database. In this context, Grid enables collaboration analysis over shared seismic data and results among different institutions.

  12. Computational Omics Funding Opportunity | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    The National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium (CPTAC) and the NVIDIA Foundation are pleased to announce funding opportunities in the fight against cancer. Each organization has launched a request for proposals (RFP) that will collectively fund up to $2 million to help to develop a new generation of data-intensive scientific tools to find new ways to treat cancer.

  13. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    PubMed Central

    Vukićević, Milan

    2014-01-01

    The rapid growth and storage of biomedical data has created many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selecting the best predictive algorithms for the data at hand with open-source big data technologies for the analysis of biomedical data. PMID:24892101

  14. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
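
    A minimal sketch of the SPPT idea follows: the net physics tendency is multiplied by (1 + r), where r is a zero-mean random perturbation with prescribed variance and decorrelation time. The sketch below is time-only (no spatial correlation) and all parameter values are illustrative assumptions, not those used in the cited simulations.

      import numpy as np

      def sppt_perturb(tendency, steps, sigma=0.3, tau=10.0, dt=1.0, seed=0):
          """Stochastically perturbed parametrization tendency (time-only sketch).

          tendency : unperturbed physics tendency (scalar or array per grid point).
          r follows an AR(1) process with standard deviation sigma and decorrelation
          time tau; the perturbed tendency is (1 + r) * tendency.
          """
          rng = np.random.default_rng(seed)
          phi = np.exp(-dt / tau)                       # AR(1) autocorrelation over one step
          r = 0.0
          out = []
          for _ in range(steps):
              r = phi * r + np.sqrt(1.0 - phi**2) * sigma * rng.standard_normal()
              out.append((1.0 + r) * tendency)
          return np.array(out)

      perturbed = sppt_perturb(tendency=2.5e-5, steps=100)   # e.g. a heating-rate tendency
      print(perturbed.mean(), perturbed.std())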

  15. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    PubMed

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. A graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and identical algorithm that is designed for the single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPU and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. Speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of parallelized MCPEM algorithm developed in this study holds a great promise in serving as the core for the next-generation of modeling software for population PK/PD analysis.

  16. Special purpose computer system with highly parallel pipelines for flow visualization using holography technology

    NASA Astrophysics Data System (ADS)

    Masuda, Nobuyuki; Sugie, Takashige; Ito, Tomoyoshi; Tanaka, Shinjiro; Hamada, Yu; Satake, Shin-ichi; Kunugi, Tomoaki; Sato, Kazuho

    2010-12-01

    We have designed a PC cluster system with special purpose computer boards for visualization of fluid flow using digital holographic particle tracking velocimetry (DHPTV). In this board, there is a Field Programmable Gate Array (FPGA) chip in which is installed a pipeline for calculating the intensity of an object from a hologram by fast Fourier transform (FFT). This cluster system can create 1024 reconstructed images from a 1024×1024-grid hologram in 0.77 s. It is expected that this system will contribute to the analysis of fluid flow using DHPTV.
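
    A conventional way to obtain object intensity from a digital hologram with FFTs is angular-spectrum back-propagation; the sketch below illustrates that computation in NumPy. It is not the FPGA pipeline described above, and the wavelength, pixel pitch, and reconstruction distance are assumed values.

      import numpy as np

      def reconstruct_intensity(hologram, wavelength, pixel_pitch, z):
          """Angular-spectrum reconstruction of intensity at distance z from a hologram."""
          ny, nx = hologram.shape
          fx = np.fft.fftfreq(nx, d=pixel_pitch)
          fy = np.fft.fftfreq(ny, d=pixel_pitch)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
          kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
          transfer = np.where(arg > 0.0, np.exp(1j * kz * z), 0.0)   # drop evanescent components
          field = np.fft.ifft2(np.fft.fft2(hologram) * transfer)
          return np.abs(field) ** 2                                  # reconstructed intensity

      holo = np.random.default_rng(2).random((1024, 1024))           # stand-in for a recorded hologram
      intensity = reconstruct_intensity(holo, wavelength=532e-9, pixel_pitch=6.45e-6, z=0.05)
      print(intensity.shape)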

  17. Computational Investigation of Soot and Radiation in Turbulent Reacting Flows

    NASA Astrophysics Data System (ADS)

    Lalit, Harshad

    This study delves into computational modeling of soot and infrared radiation for turbulent reacting flows, detailed understanding of both of which is paramount in the design of cleaner engines and pollution control. In the first part of the study, the concept of Stochastic Time and Space Series Analysis (STASS) as a numerical tool to compute time dependent statistics of radiation intensity is introduced for a turbulent premixed flame. In the absence of high fidelity codes for large eddy simulation or direct numerical simulation of turbulent flames, the utility of STASS for radiation imaging of reacting flows to understand the flame structure is assessed by generating images of infrared radiation in spectral bands dominated by radiation from gas phase carbon dioxide and water vapor using an assumed PDF method. The study elucidates the need for time dependent computation of radiation intensity for validation with experiments and the need for accounting for turbulence radiation interactions for correctly predicting radiation intensity and consequently the flame temperature and NOx in a reacting fluid flow. Comparison of single point statistics of infrared radiation intensity with measurements show that STASS can not only predict the flame structure but also estimate the dynamics of thermochemical scalars in the flame with reasonable accuracy. While a time series is used to generate realizations of thermochemical scalars in the first part of the study, in the second part, instantaneous realizations of resolved scale temperature, CO2 and H2O mole fractions and soot volume fractions are extracted from a large eddy simulation (LES) to carry out quantitative imaging of radiation intensity (QIRI) for a turbulent soot generating ethylene diffusion flame. A primary motivation of the study is to establish QIRI as a computational tool for validation of soot models, especially in the absence of conventional flow field and measured scalar data for sooting flames. Realizations of scalars from the LES are used in conjunction with the radiation heat transfer equation and a narrow band radiation model to compute time dependent and time averaged images of infrared radiation intensity in spectral bands corresponding to molecular radiation from gas phase carbon dioxide and soot particles exclusively. While qualitative and quantitative comparisons with measured images in the CO2 radiation band show that the flame structure is correctly computed, images computed in the soot radiation band illustrate that the soot volume fraction is under predicted by the computations. The effect of the soot model and cause of under prediction is investigated further by correcting the soot volume fraction using an empirical state relationship. By comparing default simulations with computations using the state relation, it is shown that while the soot model under-estimates the soot concentration, it correctly computes the intermittency of soot in the flame. The study of sooting flames is extended further by performing a parametric analysis of physical and numerical parameters that affect soot formation and transport in two laboratory scale turbulent sooting flames, one fueled by natural gas and the other by ethylene. The study is focused on investigating the effect of molecular diffusion of species, dilution of fuel with hydrogen gas and the effect of chemical reaction mechanism on the soot concentration in the flame. 
The effect of species Lewis numbers on soot evolution and transport is investigated by carrying out simulations, first with the default equal diffusivity (ED) assumption and then by incorporating a differential diffusion (DD) model. Computations using the DD model over-estimate the concentration of the soot precursor and soot oxidizer species, leading to inconsistencies in the estimate of the soot concentration. The linear differential diffusion (LDD) model, reported previously to consistently model differential diffusion effects, is implemented to correct the over-prediction of the DD model. It is shown that the effect of species Lewis number on soot evolution is a secondary phenomenon and that soot is primarily transported by advection of the fluid in a turbulent flame. The effect of hydrogen dilution on the soot formation and transport process is also studied. It is noted that the decay of soot volume fraction and flame length with hydrogen addition follows trends observed in laminar sooting flame measurements. While hydrogen enhances mixing, as shown by the laminar flamelet solutions, the mixing effect does not significantly contribute to differential molecular diffusion effects in the soot nucleation regions downstream of the flame and has a negligible effect on soot transport. The sensitivity of computations of soot volume fraction to the chemical reaction mechanism is shown. It is concluded that modeling reaction pathways of C3 and C4 species that lead up to Polycyclic Aromatic Hydrocarbon (PAH) molecule formation is paramount for accurate predictions of soot in the flame.

  18. Software for Analyzing Sequences of Flow-Related Images

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2004-01-01

    Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.
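
    The kind of object tracking described above can be sketched with standard tools: threshold each frame, label connected regions, take the centroid of the largest region, and difference centroids across frames to obtain velocity. The following minimal Python example illustrates this; it is not Spotlight's actual implementation, and the threshold and synthetic frames are assumptions.

      import numpy as np
      from scipy import ndimage

      def track_largest_object(frames, threshold, dt=1.0):
          """Return centroid positions and frame-to-frame velocities of the largest bright object."""
          centroids = []
          for frame in frames:
              labels, n = ndimage.label(frame > threshold)
              if n == 0:
                  centroids.append((np.nan, np.nan))
                  continue
              sizes = ndimage.sum(frame > threshold, labels, index=range(1, n + 1))
              largest = int(np.argmax(sizes)) + 1
              centroids.append(ndimage.center_of_mass(frame, labels, largest))
          centroids = np.array(centroids)
          velocities = np.diff(centroids, axis=0) / dt      # pixels per frame interval
          return centroids, velocities

      # Tiny synthetic sequence: a bright blob drifting to the right
      frames = np.zeros((5, 64, 64))
      for i in range(5):
          frames[i, 30:34, 10 + 3 * i:14 + 3 * i] = 1.0
      c, v = track_largest_object(frames, threshold=0.5)
      print(c[0], v.mean(axis=0))    # first-frame centroid, mean velocity (rows, cols)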

  19. High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.

    PubMed

    Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C

    2007-10-09

    High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable and quantitative cellular image analysis system developed in-house was employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles. This system has proven to be an essential tool in our study. Cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were sampled using an IN Cell Analyzer 1000. A fully automated cellular image analysis system was developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we achieved the following: (1) Gaussian filtering with a proper scale was applied to the cellular images to generate a single local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field was established; and (3) a statistical-model-based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment images of human H4 neuroglioma cells exposed to CuO nanoparticles. The computational results confirmed our biological finding that human H4 neuroglioma cells had a dose-dependent toxic response to the insult of CuO nanoparticles.
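
    A minimal sketch of the first two steps, Gaussian smoothing so that each nucleus carries a single intensity maximum followed by local-maxima detection, is shown below using standard SciPy filters. It approximates the idea only; the authors' gradient-vector-field maxima detector and statistical splitting model are not reproduced, and the scale and threshold values are assumptions.

      import numpy as np
      from scipy import ndimage

      def detect_nuclei(image, sigma=4.0, min_intensity=0.2):
          """Smooth, then report local intensity maxima as candidate nucleus centres."""
          smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)
          local_max = (smoothed == ndimage.maximum_filter(smoothed, size=9))
          candidates = local_max & (smoothed > min_intensity)
          return np.argwhere(candidates)           # (row, col) coordinates of detected nuclei

      # Synthetic image with two blurred "nuclei"
      img = np.zeros((128, 128))
      img[40, 40] = 1.0
      img[90, 70] = 1.0
      img = ndimage.gaussian_filter(img, sigma=6)
      print(detect_nuclei(img, sigma=2.0, min_intensity=0.001))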

  20. Quantum Interference Effects in Resonant Raman Spectroscopy of Single- and Triple-Layer MoTe2 from First-Principles

    NASA Astrophysics Data System (ADS)

    Miranda, Henrique P. C.; Reichardt, Sven; Froehlicher, Guillaume; Molina-Sánchez, Alejandro; Berciaud, Stéphane; Wirtz, Ludger

    2017-04-01

    We present a combined experimental and theoretical study of resonant Raman spectroscopy in single- and triple-layer MoTe2. Raman intensities are computed entirely from first principles by calculating finite differences of the dielectric susceptibility. In our analysis, we investigate the role of quantum interference effects and the electron-phonon coupling. With this method, we explain the experimentally observed intensity inversion of the A′1 vibrational modes in triple-layer MoTe2 with increasing laser photon energy. Finally, we show that a quantitative comparison with experimental data requires the proper inclusion of excitonic effects.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolic, R J

    This month's issue has the following articles: (1) Dawn of a New Era of Scientific Discovery - Commentary by Edward I. Moses; (2) At the Frontiers of Fundamental Science Research - Collaborators from national laboratories, universities, and international organizations are using the National Ignition Facility to probe key fundamental science questions; (3) Livermore Responds to Crisis in Post-Earthquake Japan - More than 70 Laboratory scientists provided round-the-clock expertise in radionuclide analysis and atmospheric dispersion modeling as part of the nation's support to Japan following the March 2011 earthquake and nuclear accident; (4) A Comprehensive Resource for Modeling, Simulation, and Experiments - A new Web-based resource called MIDAS is a central repository for material properties, experimental data, and computer models; and (5) Finding Data Needles in Gigabit Haystacks - Livermore computer scientists have developed a novel computer architecture based on 'persistent' memory to ease data-intensive computations.

  2. Computer Activities for Persons With Dementia.

    PubMed

    Tak, Sunghee H; Zhang, Hongmei; Patel, Hetal; Hong, Song Hee

    2015-06-01

    The study examined participants' experiences and individual characteristics during a 7-week computer activity program for persons with dementia. This descriptive study with a mixed-methods design collected 612 observational logs of computer sessions from 27 study participants, including individual interviews before and after the program. Quantitative data analysis included descriptive statistics, correlation coefficients, t-tests, and chi-square tests. Content analysis was used to analyze the qualitative data. Each participant averaged 23 sessions and 591 minutes over the 7 weeks. Computer activities included slide shows with music, games, internet use, and emailing. On average, participants showed a high intensity of engagement per session. Women attended significantly more sessions than men. A higher education level was associated with a higher number of different activities used per session and more time spent on online games. Older participants felt more tired. Feeling tired was significantly correlated with a higher number of weeks with only one session attended per week. More anticholinergic medications taken by participants were significantly associated with a higher percentage of sessions with disengagement. The findings were significant at p < .05. Qualitative content analysis indicated that tailoring computer activities to individuals' needs and functioning is critical. All participants needed technical assistance. A framework for tailoring computer activities may provide guidance on developing and maintaining treatment fidelity of tailored computer activity interventions among persons with dementia. Practice guidelines and education protocols may assist caregivers and service providers in integrating computer activities into homes and aging services settings. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Mobile computing device configured to compute irradiance, glint, and glare of the sun

    DOEpatents

    Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

    2014-03-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.

  4. Energy intensity of computer manufacturing: hybrid assessment combining process and economic input-output methods.

    PubMed

    Williams, Eric

    2004-11-15

    The total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively. This indicates that computer manufacturing is energy intensive: the ratio of fossil fuel use to product weight is 11, an order of magnitude larger than the factor of 1-2 for many other manufactured goods. This high energy intensity of manufacturing, combined with rapid turnover in computers, results in an annual life cycle energy burden that is surprisingly high: about 2600 MJ per year, 1.3 times that of a refrigerator. In contrast with many home appliances, life cycle energy use of a computer is dominated by production (81%) as opposed to operation (19%). Extension of usable lifespan (e.g. by reselling or upgrading) is thus a promising approach to mitigating energy impacts as well as other environmental burdens associated with manufacturing and disposal.

  5. An infrastructure for data-intensive seismology using ADMIRE: laying the bricks for a new data highway

    NASA Astrophysics Data System (ADS)

    Trani, L.; Spinuso, A.; Galea, M.; Atkinson, M.; Van Eck, T.; Vilotte, J.

    2011-12-01

    The data bonanza generated by today's digital revolution is forcing scientists to rethink their methodologies and working practices. Traditional approaches to knowledge discovery are pushed to their limit and struggle to keep apace with the data flows produced by modern systems. This work shows how the ADMIRE data-intensive architecture supports seismologists by enabling them to focus on their scientific goals and questions, abstracting away the underlying technology platform that enacts their data integration and analysis tasks. ADMIRE accomplishes this partly by recognizing 3 different types of experts that require clearly defined interfaces between their interaction: the domain expert who is the application specialist, the data-analysis expert who is a specialist in extracting information from data, and the data-intensive engineer who develops the infrastructure for data-intensive computation. In order to provide a context in which each category of expert may flourish, ADMIRE uses a 3-level architecture: the upper - tool - level supports the work of both domain and data-analysis experts, housing an extensive and evolving set of portals, tools and development environments; the lower - enactment - level houses a large and dynamic community of providers delivering data and data-intensive enactment environments as an evolving infrastructure that supports all of the work underway in the upper layer. Most data-intensive engineers work here; the crucial innovation lies in the middle level, a gateway that is a tightly defined and stable interface through which the two diverse and dynamic upper and lower layers communicate. This is a minimal and simple protocol and language (DISPEL), ultimately to be controlled by standards, so that the upper and lower communities may invest, secure in the knowledge that changes in this interface will be carefully managed. We implemented a well-established procedure for processing seismic ambient noise on the prototype architecture. The primary goal was to evaluate its capabilities for large-scale integration and analysis of distributed data. A secondary goal was to gauge its potential and the added value that it might bring to the seismological community. Though still in its infant state, the architecture met the demands of our use case and promises to cater for our future requirements. We shall continue to develop its capabilities as part of an EU funded project VERCE - Virtual Earthquake and Seismology Research Community for Europe. VERCE aims to significantly advance our understanding of the Earth in order to aid society in its management of natural resources and hazards. Its strategy is to enable seismologists to fully exploit the under-utilized wealth of seismic data, and key to this is a data-intensive computation framework adapted to the scale and diversity of the community. This is a first step in building a data-intensive highway for geoscientists, smoothing their travel from the primary sources of data to new insights and rapid delivery of actionable information.

  6. Multimodal Image Analysis in Alzheimer’s Disease via Statistical Modelling of Non-local Intensity Correlations

    NASA Astrophysics Data System (ADS)

    Lorenzi, Marco; Simpson, Ivor J.; Mendelson, Alex F.; Vos, Sjoerd B.; Cardoso, M. Jorge; Modat, Marc; Schott, Jonathan M.; Ourselin, Sebastien

    2016-04-01

    The joint analysis of brain atrophy measured with magnetic resonance imaging (MRI) and hypometabolism measured with positron emission tomography with fluorodeoxyglucose (FDG-PET) is of primary importance in developing models of pathological changes in Alzheimer’s disease (AD). Most of the current multimodal analyses in AD assume a local (spatially overlapping) relationship between MR and FDG-PET intensities. However, it is well known that atrophy and hypometabolism are prominent in different anatomical areas. The aim of this work is to describe the relationship between atrophy and hypometabolism by means of a data-driven statistical model of non-overlapping intensity correlations. For this purpose, FDG-PET and MRI signals are jointly analyzed through a computationally tractable formulation of partial least squares regression (PLSR). The PLSR model is estimated and validated on a large clinical cohort of 1049 individuals from the ADNI dataset. Results show that the proposed non-local analysis outperforms classical local approaches in terms of predictive accuracy while providing a plausible description of disease dynamics: early AD is characterised by non-overlapping temporal atrophy and temporo-parietal hypometabolism, while the later disease stages show overlapping brain atrophy and hypometabolism spread in temporal, parietal and cortical areas.
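
    The cross-modal prediction at the core of the study can be sketched with an off-the-shelf PLSR, as below: MRI-derived features predict FDG-PET features through a small number of latent components. The data here are synthetic stand-ins with assumed dimensions; the formulation is not the authors' computationally tractable PLSR variant.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_subjects, n_mri, n_pet = 500, 120, 90

      # Synthetic stand-ins: MRI grey-matter features and FDG-PET regional uptake,
      # linked through a small number of latent components plus noise.
      latent = rng.standard_normal((n_subjects, 5))
      X_mri = latent @ rng.standard_normal((5, n_mri)) + 0.5 * rng.standard_normal((n_subjects, n_mri))
      Y_pet = latent @ rng.standard_normal((5, n_pet)) + 0.5 * rng.standard_normal((n_subjects, n_pet))

      X_train, X_test, Y_train, Y_test = train_test_split(X_mri, Y_pet, random_state=0)
      pls = PLSRegression(n_components=5)
      pls.fit(X_train, Y_train)
      print("test R^2:", pls.score(X_test, Y_test))   # predictive accuracy of the cross-modal model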

  7. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  9. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  11. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-02-01

    The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating-point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.

  12. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80

    NASA Technical Reports Server (NTRS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-01-01

    The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern-day computers with a single processing unit was estimated at 3 billion floating-point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization, as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.

  13. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    NASA Astrophysics Data System (ADS)

    Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.

    2012-02-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  14. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
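
    The randomized low-rank approximation idea behind such implementations can be sketched in a few lines: project onto a random subspace, optionally run power iterations, orthonormalize, and take an exact SVD of the small projected matrix. The NumPy sketch below illustrates this; it is not the referenced MATLAB package, and parameters such as the oversampling and iteration counts are assumed defaults.

      import numpy as np

      def randomized_pca(A, k, oversample=10, n_iter=2, seed=0):
          """Rank-k principal components of A via a randomized range finder."""
          rng = np.random.default_rng(seed)
          A = A - A.mean(axis=0)                        # centre columns for PCA
          omega = rng.standard_normal((A.shape[1], k + oversample))
          Y = A @ omega
          for _ in range(n_iter):                       # power iterations sharpen the spectrum
              Y = A @ (A.T @ Y)
          Q, _ = np.linalg.qr(Y)
          B = Q.T @ A                                   # small matrix, cheap exact SVD
          Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
          return (Q @ Ub)[:, :k], s[:k], Vt[:k]

      rng = np.random.default_rng(1)
      data = rng.standard_normal((2000, 500)) @ rng.standard_normal((500, 500))
      U, s, Vt = randomized_pca(data, k=10)
      print(s[:3])   # leading singular values of the centred data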

  15. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    PubMed Central

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-01

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation time, making raw data simulation both a data-intensive and a computing-intensive problem. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computation and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed to handle the irregular parallel accumulation in raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward covering the programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration. PMID:28075343
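
    The map/reduce structure described above, simulating each target's echo independently and then accumulating the contributions coherently, can be sketched in plain Python as below. The echo model is a deliberately toy phase history, not the authors' SAR simulator, and all sizes and target parameters are assumptions.

      import numpy as np
      from functools import reduce

      n_az, n_rg = 64, 128                       # azimuth / range samples of the raw-data block

      def simulate_target(target):
          """Map step: raw-data contribution of a single point target (toy model)."""
          az, rg, amp = target
          raw = np.zeros((n_az, n_rg), dtype=complex)
          t = np.arange(n_rg)
          for a in range(n_az):                  # toy range migration and quadratic phase history
              rng_bin = rg + 0.01 * (a - az) ** 2
              raw[a] = amp * np.exp(-1j * np.pi * (t - rng_bin) ** 2 / 50.0) * (np.abs(t - rng_bin) < 20)
          return raw

      def accumulate(block_a, block_b):
          """Reduce step: echoes from different targets add coherently."""
          return block_a + block_b

      targets = [(20, 40, 1.0), (40, 80, 0.7), (10, 100, 0.5)]
      raw_data = reduce(accumulate, map(simulate_target, targets))
      print(raw_data.shape, np.abs(raw_data).max())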

  16. Computer-assisted sperm morphometry fluorescence-based analysis has potential to determine progeny sex.

    PubMed

    Santolaria, Pilar; Pauciullo, Alfredo; Silvestre, Miguel A; Vicente-Fiel, Sandra; Villanova, Leyre; Pinton, Alain; Viruel, Juan; Sales, Ester; Yániz, Jesús L

    2016-01-01

    This study was designed to determine the ability of computer-assisted sperm morphometry analysis (CASA-Morph) with fluorescence to discriminate between spermatozoa carrying different sex chromosomes from the nuclear morphometrics generated and different statistical procedures in the bovine species. The study was divided into two experiments. The first was to study the morphometric differences between X- and Y-chromosome-bearing spermatozoa (SX and SY, respectively). Spermatozoa from eight bulls were processed to assess simultaneously the sex chromosome by FISH and sperm morphometry by fluorescence-based CASA-Morph. SX cells were larger than SY cells on average (P < 0.001) although with important differences between bulls. A simultaneous evaluation of all the measured features by discriminant analysis revealed that nuclear area and average fluorescence intensity were the variables selected by stepwise discriminant function analysis as the best discriminators between SX and SY. In the second experiment, the sperm nuclear morphometric results from CASA-Morph in nonsexed (mixed SX and SY) and sexed (SX) semen samples from four bulls were compared. FISH allowed a successful classification of spermatozoa according to their sex chromosome content. X-sexed spermatozoa displayed a larger size and fluorescence intensity than nonsexed spermatozoa (P < 0.05). We conclude that the CASA-Morph fluorescence-based method has the potential to find differences between X- and Y-chromosome-bearing spermatozoa in bovine species although more studies are needed to increase the precision of sex determination by this technique.

  17. Eigen Spreading

    DTIC Science & Technology

    2008-02-27

    between the PHY layer and for example a host PC computer . The PC wants to generate and receive a sequence of data packets. The PC may also want to send...the testbed is quite similar. Given the intense computational requirements of SVD and other matrix mode operations needed to support eigen spreading a...platform for real time operation. This task is probably the major challenge in the development of the testbed. All compute intensive tasks will be

  18. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important versus unimportant input factors.
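
    The variogram idea underlying VARS can be sketched directly: for each input factor, estimate gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] from paired model runs and compare factors by the size of their variograms. The sketch below is a simplified, one-factor-at-a-time illustration with an assumed toy model; it is not the published VARS algorithm or its sampling strategy.

      import numpy as np

      def directional_variogram(model, x_base, i, h, n_samples=200, seed=0):
          """Estimate gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] for input factor i."""
          rng = np.random.default_rng(seed)
          gamma = 0.0
          for _ in range(n_samples):
              x = x_base + rng.uniform(-0.5, 0.5, size=x_base.size)   # random base points
              x_shift = x.copy()
              x_shift[i] += h
              gamma += 0.5 * (model(x_shift) - model(x)) ** 2
          return gamma / n_samples

      def model(x):                                  # toy response surface with unequal sensitivities
          return 5.0 * x[0] ** 2 + np.sin(3.0 * x[1]) + 0.1 * x[2]

      x0 = np.zeros(3)
      for i in range(3):
          print(f"factor {i}: gamma(0.1) = {directional_variogram(model, x0, i, h=0.1):.4f}")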

  19. Effect of High-Intensity Interval Training on Total, Abdominal and Visceral Fat Mass: A Meta-Analysis.

    PubMed

    Maillard, Florie; Pereira, Bruno; Boisseau, Nathalie

    2018-02-01

    High-intensity interval training (HIIT) is promoted as a time-efficient strategy to improve body composition. The aim of this meta-analysis was to assess the efficacy of HIIT in reducing total, abdominal, and visceral fat mass in normal-weight and overweight/obese adults. Electronic databases were searched to identify all related articles on HIIT and fat mass. Stratified analysis was performed using the nature of HIIT (cycling versus running, target intensity), sex and/or body weight, and the methods of measuring body composition. Heterogeneity was also determined. A total of 39 studies involving 617 subjects were included (mean age 38.8 ± 14.4 years, 52% female). HIIT significantly reduced total (p = 0.003), abdominal (p = 0.007), and visceral (p = 0.018) fat mass, with no differences between the sexes. A comparison showed that running was more effective than cycling in reducing total and visceral fat mass. High-intensity (above 90% peak heart rate) training was more successful in reducing whole body adiposity, while lower intensities had a greater effect on changes in abdominal and visceral fat mass. Our analysis also indicated that only computed tomography scan or magnetic resonance imaging showed significant abdominal and/or visceral fat-mass loss after HIIT interventions. HIIT is a time-efficient strategy to decrease fat-mass deposits, including those of abdominal and visceral fat mass. There was some evidence of the greater effectiveness of HIIT running versus cycling, but owing to the wide variety of protocols used and the lack of full details about cycling training, further comparisons need to be made. Large, multicenter, prospective studies are required to establish the best HIIT protocols for reducing fat mass according to subject characteristics.

  20. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will involve the further integration and analysis of this data across the social sciences to facilitate the impacts across the societal domain, including timely analysis to more accurately predict and forecast future climate and environmental state.

  1. First experiences with model based iterative reconstructions influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography.

    PubMed

    Precht, H; Kitslaar, P H; Broersen, A; Gerke, O; Dijkstra, J; Thygesen, J; Egstrup, K; Lambrechtsen, J

    2017-02-01

    To investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements of plaque volumes and intensities in the coronary arteries. Three patients each had three independent dose-reduced CCTA examinations performed and reconstructed with 30% ASIR (CTDIvol 6.7 mGy), 60% ASIR (CTDIvol 4.3 mGy) and Veo (CTDIvol 1.9 mGy). Coronary plaque analysis was performed for each CCTA, measuring volumes, plaque burden and intensities. Plaque volume and plaque burden show a decreasing tendency from ASIR to Veo: the median plaque volume is 314 mm³ and 337 mm³ for the two ASIR reconstructions versus 252 mm³ for Veo, and the plaque burden is 42% and 44% for ASIR versus 39% for Veo. The lumen and vessel volumes decrease slightly from 30% ASIR to 60% ASIR, from 498 mm³ to 391 mm³ for lumen volume and from 939 mm³ to 830 mm³ for vessel volume. The intensities did not change overall between the different reconstructions for either lumen or plaque. We found a tendency of decreasing plaque volumes and plaque burden but no change in intensities with the use of low-dose Veo CCTA (1.9 mGy) compared to dose-reduced ASIR CCTA (6.7 mGy and 4.3 mGy), although more studies are warranted. Copyright © 2016 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

  2. Evaluation of bed load transport subject to high shear stress fluctuations

    NASA Astrophysics Data System (ADS)

    Cheng, Nian-Sheng; Tang, Hongwu; Zhu, Lijun

    2004-05-01

    Many formulas available in the literature for computing sediment transport rates are often expressed in terms of time mean variables such as time mean bed shear stress or flow velocity, while effects of turbulence intensity, e.g., bed shear stress fluctuation, on sediment transport were seldom considered. This may be due to the fact that turbulence fluctuation is relatively limited in laboratory open-channel flows, which are often used for conducting sediment transport experiments. However, turbulence intensity could be markedly enhanced in practice. This note presents an analytical method to compute bed load transport by including effects of fluctuations in the bed shear stress. The analytical results obtained show that the transport rate enhanced by turbulence can be expressed as a simple function of the relative fluctuation of the bed shear stress. The results are also verified using data that were collected recently from specifically designed laboratory experiments. The present analysis is applicable largely for the condition of a flat bed that is comprised of uniform sand particles subject to unidirectional flows.
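
    The qualitative effect, that a nonlinear transport law yields a higher mean transport rate when the bed shear stress fluctuates, can be illustrated with a Monte Carlo sketch. The example below uses a Meyer-Peter and Mueller-type excess-shear relation as a stand-in, not the note's own formulation, and the Shields-number values and fluctuation levels are assumptions.

      import numpy as np

      def mpm_transport(theta, theta_cr=0.047):
          """Meyer-Peter and Mueller-type dimensionless bed-load relation."""
          excess = np.maximum(theta - theta_cr, 0.0)
          return 8.0 * excess ** 1.5

      rng = np.random.default_rng(0)
      theta_mean = 0.06
      for rel_fluct in (0.0, 0.2, 0.4):                       # relative fluctuation of bed shear stress
          theta = theta_mean * (1.0 + rel_fluct * rng.standard_normal(200000))
          q_mean = mpm_transport(np.maximum(theta, 0.0)).mean()
          print(f"sigma/mean = {rel_fluct:.1f}: mean transport = {q_mean:.4f}")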

  3. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363

  4. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of cloud enabled Trans-Proteomic Pipeline by performing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  5. Analytical stress intensity solution for the Stable Poisson Loaded specimen

    NASA Technical Reports Server (NTRS)

    Ghosn, Louis J.; Calomino, Anthony M.; Brewer, David N.

    1993-01-01

    An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-curve determination. The crack mouth opening displacements (CMODs) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMODs, and crack displacement profiles, are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

  6. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic pain in the upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as conditioning pain stimulus and PPT at TA as test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups neither in PPTs nor pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  7. Structural Loads Analysis for Wave Energy Converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi

    2017-06-03

    This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analysis and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process.

  8. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    PubMed

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
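
    The paper distributes data to cluster nodes by partitioning a spatial index. As a hedged illustration of what such a partitioning step can look like, the sketch below assigns 3-d cuboids to nodes by a Morton (Z-order) key; the function names and node count are invented for the example and are not the Open Connectome Project's actual scheme.

```python
# Minimal sketch of spatial partitioning with a Morton (Z-order) key.
# Illustrative only; not the Open Connectome Project's actual scheme.

def part1by2(n: int) -> int:
    """Spread the bits of a 21-bit integer so they occupy every third bit."""
    n &= 0x1FFFFF
    n = (n | (n << 32)) & 0x1F00000000FFFF
    n = (n | (n << 16)) & 0x1F0000FF0000FF
    n = (n | (n << 8))  & 0x100F00F00F00F00F
    n = (n | (n << 4))  & 0x10C30C30C30C30C3
    n = (n | (n << 2))  & 0x1249249249249249
    return n

def morton3d(x: int, y: int, z: int) -> int:
    """Interleave x, y, z bits into a single Z-order key."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

def node_for_cuboid(x: int, y: int, z: int, n_nodes: int = 8) -> int:
    """Assign a cuboid (given by its corner coordinates) to a cluster node."""
    return morton3d(x, y, z) % n_nodes

if __name__ == "__main__":
    print(node_for_cuboid(512, 1024, 64))
```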

  9. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    PubMed Central

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  10. Are computers effective lie detectors? A meta-analysis of linguistic cues to deception.

    PubMed

    Hauch, Valerie; Blandón-Gitlin, Iris; Masip, Jaume; Sporer, Siegfried L

    2015-11-01

    This meta-analysis investigates linguistic cues to deception and whether these cues can be detected with computer programs. We integrated operational definitions for 79 cues from 44 studies where software had been used to identify linguistic deception cues. These cues were allocated to six research questions. As expected, the meta-analyses demonstrated that, relative to truth-tellers, liars experienced greater cognitive load, expressed more negative emotions, distanced themselves more from events, expressed fewer sensory-perceptual words, and referred less often to cognitive processes. However, liars were not more uncertain than truth-tellers. These effects were moderated by event type, involvement, emotional valence, intensity of interaction, motivation, and other moderators. Although the overall effect size was small, theory-driven predictions for certain cues received support. These findings not only further our knowledge about the usefulness of linguistic cues to detect deception with computers in applied settings but also elucidate the relationship between language and deception. © 2014 by the Society for Personality and Social Psychology, Inc.
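
    The abstract reports pooled effect sizes across studies. As general background on how such pooling works (not the authors' actual moderator analysis), a minimal inverse-variance fixed-effect meta-analysis with invented effect sizes might look like this:

```python
import numpy as np

# Illustrative fixed-effect meta-analysis: pool per-study effect sizes d_i
# with inverse-variance weights. Toy numbers; not data from the paper.
d = np.array([0.12, 0.25, 0.05, 0.18])      # per-study effect sizes
v = np.array([0.02, 0.015, 0.03, 0.01])     # per-study sampling variances

w = 1.0 / v                                  # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)         # weighted mean effect size
se_pooled = np.sqrt(1.0 / np.sum(w))         # standard error of the pooled effect
ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)

print(f"pooled d = {d_pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```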

  11. BioPig: a Hadoop-based analytic toolkit for large-scale sequence data.

    PubMed

    Nordberg, Henrik; Bhatia, Karan; Wang, Kai; Wang, Zhong

    2013-12-01

    The recent revolution in sequencing technologies has led to an exponential growth of sequence data. As a result, most of the current bioinformatics tools become obsolete as they fail to scale with data. To tackle this 'data deluge', here we introduce the BioPig sequence analysis toolkit as one of the solutions that scale to data and computation. We built BioPig on the Apache Hadoop MapReduce system and the Pig data flow language. Compared with traditional serial and MPI-based algorithms, BioPig has three major advantages: first, BioPig's programmability greatly reduces development time for parallel bioinformatics applications; second, testing BioPig with up to 500 Gb of sequences demonstrates that it scales automatically with the size of the data; and finally, BioPig can be ported without modification to many Hadoop infrastructures, as tested with the Magellan system at the National Energy Research Scientific Computing Center and the Amazon Elastic Compute Cloud. In summary, BioPig represents a novel program framework with the potential to greatly accelerate data-intensive bioinformatics analysis.
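
    BioPig expresses sequence analyses as Pig data-flow scripts that compile to Hadoop MapReduce jobs. To give a rough feel for the underlying map/reduce pattern, here is a hypothetical Hadoop-streaming-style k-mer counter in Python; it illustrates the paradigm only, is not BioPig code, and the k-mer length is an arbitrary choice.

```python
import sys
from collections import Counter

K = 21  # k-mer length; illustrative choice

def mapper(lines):
    """Emit (kmer, 1) pairs for every k-mer in every input sequence."""
    for line in lines:
        seq = line.strip().upper()
        if not seq or seq.startswith(">"):   # skip FASTA headers
            continue
        for i in range(len(seq) - K + 1):
            yield seq[i:i + K], 1

def reducer(pairs):
    """Sum counts per k-mer (in a real job, pairs arrive grouped by key)."""
    counts = Counter()
    for kmer, n in pairs:
        counts[kmer] += n
    return counts

if __name__ == "__main__":
    for kmer, n in reducer(mapper(sys.stdin)).most_common(10):
        print(f"{kmer}\t{n}")
```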

  12. On the importance of mathematical methods for analysis of MALDI-imaging mass spectrometry data.

    PubMed

    Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore

    2012-03-21

    In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also known as MALDI-imaging, has proven its potential in proteomics and was successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular, for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we overview some computational methods for analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.
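
    The preprocessing steps named in the abstract (normalization, baseline removal, peak picking) can be sketched for a single spectrum as follows. This is a generic illustration using a simple rolling-minimum baseline and total-ion-count normalization, not the specific algorithms reviewed in the paper, and the spectrum is a random placeholder.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import minimum_filter1d, uniform_filter1d

def preprocess_spectrum(mz, intensity, baseline_window=101, min_prominence=0.001):
    """Toy MALDI preprocessing: baseline removal, TIC normalization, peak picking."""
    # Crude baseline estimate: smoothed rolling minimum.
    baseline = uniform_filter1d(minimum_filter1d(intensity, baseline_window),
                                baseline_window)
    corrected = np.clip(intensity - baseline, 0, None)
    # Total-ion-count (TIC) normalization.
    tic = corrected.sum()
    normalized = corrected / tic if tic > 0 else corrected
    # Simple peak picking on the normalized spectrum.
    peaks, _ = find_peaks(normalized, prominence=min_prominence)
    return normalized, mz[peaks]

if __name__ == "__main__":
    mz = np.linspace(1000, 20000, 5000)
    intensity = np.random.gamma(2.0, 1.0, mz.size)  # placeholder spectrum
    spec, peak_mz = preprocess_spectrum(mz, intensity)
    print(len(peak_mz), "peaks found")
```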

  13. On the Importance of Mathematical Methods for Analysis of MALDI-Imaging Mass Spectrometry Data.

    PubMed

    Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore

    2012-03-01

    In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also known as MALDI-imaging, has proven its potential in proteomics and was successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular, for cancer studies. A typical MALDI-imaging data set contains 10^8 to 10^9 intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we overview some computational methods for analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.

  14. Rasdaman for Big Spatial Raster Data

    NASA Astrophysics Data System (ADS)

    Hu, F.; Huang, Q.; Scheele, C. J.; Yang, C. P.; Yu, M.; Liu, K.

    2015-12-01

    Spatial raster data have grown exponentially over the past decade. Recent advancements in data acquisition technology, such as remote sensing, have allowed us to collect massive observation data of various spatial resolutions and domain coverages. The volume, velocity, and variety of such spatial data, along with the computationally intensive nature of spatial queries, pose a grand challenge to storage technologies for effective big data management. While high performance computing platforms (e.g., cloud computing) can be used to solve the computing-intensive issues in big data analysis, data have to be managed in a way that is suitable for distributed parallel processing. Recently, rasdaman (raster data manager) has emerged as a scalable and cost-effective database solution to store and retrieve massive multi-dimensional arrays, such as sensor, image, and statistics data. Within this paper, the pros and cons of using rasdaman to manage and query spatial raster data will be examined and compared with other common approaches, including file-based systems, relational databases (e.g., PostgreSQL/PostGIS), and NoSQL databases (e.g., MongoDB and Hive). Earth Observing System (EOS) data collected from NASA's Atmospheric Scientific Data Center (ASDC) will be used and stored in these selected database systems, and a set of spatial and non-spatial queries will be designed to benchmark their performance on retrieving large-scale, multi-dimensional arrays of EOS data. Lessons learnt from using rasdaman will be discussed as well.

  15. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high DeltaK levels but fails to capture the mixed-mode effects at DeltaK levels approaching threshold (da/dN approximately 10^-10 meter/cycle).

  16. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    The Solid State Drive (SSD) is a promising storage technology for High Energy Physics parallel analysis farms. Its combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. It also has lower energy consumption and higher vibration tolerance than the Hard Disk Drive (HDD), which makes it an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.

  17. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  18. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available to bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month, even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline, we split the input files into chunks, which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to what ATLAS does to process and simulate data. We dramatically decreased the total walltime through automated job (re)submission and brokering within PanDA. Using software tools initially developed for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
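
    The chunking strategy described above can be sketched as plain split/merge bookkeeping. The snippet below is a hypothetical illustration for FASTQ input; the file layout, chunk size, and function names are invented and are not part of PanDA or PALEOMIX.

```python
from itertools import islice
from pathlib import Path

def split_fastq(path, reads_per_chunk=1_000_000, out_dir="chunks"):
    """Split a FASTQ file into chunk files of a fixed number of reads."""
    Path(out_dir).mkdir(exist_ok=True)
    chunk_paths = []
    with open(path) as fh:
        idx = 0
        while True:
            # One FASTQ read = 4 lines.
            block = list(islice(fh, 4 * reads_per_chunk))
            if not block:
                break
            out = Path(out_dir) / f"chunk_{idx:04d}.fastq"
            out.write_text("".join(block))
            chunk_paths.append(out)
            idx += 1
    return chunk_paths

def merge_outputs(result_paths, merged="merged_results.txt"):
    """Concatenate per-chunk result files back into a single output."""
    with open(merged, "w") as out:
        for p in result_paths:
            out.write(Path(p).read_text())
    return merged
```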

  19. A Computational Approach for Probabilistic Analysis of LS-DYNA Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2010-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those addressed in the 1960s during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the use of the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time consuming, and computationally intensive simulations. Because of the computational cost, these tools are often used to evaluate specific conditions and rarely used for statistical analysis. The challenge is to capture what is learned from a limited number of LS-DYNA simulations to develop models that allow users to conduct interpolation of solutions at a fraction of the computational time. For this problem, response surface models are used to predict the system time responses to a water landing as a function of capsule speed, direction, attitude, water speed, and water direction. Furthermore, these models can also be used to ascertain the adequacy of the design in terms of probability measures. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis of variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
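
    A second-order polynomial response surface of the kind described can be fit by ordinary least squares. The sketch below uses two illustrative input factors and fabricated sample data standing in for LS-DYNA runs; it only shows the fitting mechanics, not the paper's actual surrogate models.

```python
import numpy as np

# Toy response-surface fit: quadratic model y ~ b0 + b1*x1 + b2*x2
#   + b3*x1^2 + b4*x2^2 + b5*x1*x2, fit to a handful of "simulation" runs.
rng = np.random.default_rng(0)
x1 = rng.uniform(5, 15, 30)       # e.g., capsule vertical speed (m/s), invented
x2 = rng.uniform(-10, 10, 30)     # e.g., pitch attitude (deg), invented
y = 3 + 0.8*x1 + 0.1*x2 + 0.05*x1**2 + rng.normal(0, 0.5, 30)  # surrogate response

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(s, a):
    """Evaluate the fitted response surface at speed s and attitude a."""
    return np.array([1, s, a, s**2, a**2, s*a]) @ coef

print(predict(10.0, 2.0))
```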

  20. Differences in muscle load between computer and non-computer work among office workers.

    PubMed

    Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A

    2009-12-01

    Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds (NCTs)). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.

  1. Central Fetal Monitoring With and Without Computer Analysis: A Randomized Controlled Trial.

    PubMed

    Nunes, Inês; Ayres-de-Campos, Diogo; Ugwumadu, Austin; Amin, Pina; Banfield, Philip; Nicoll, Antony; Cunningham, Simon; Sousa, Paulo; Costa-Santos, Cristina; Bernardes, João

    2017-01-01

    To evaluate whether intrapartum fetal monitoring with computer analysis and real-time alerts decreases the rate of newborn metabolic acidosis or obstetric intervention when compared with visual analysis. A randomized clinical trial carried out in five hospitals in the United Kingdom evaluated women with singleton, vertex fetuses of 36 weeks of gestation or greater during labor. Continuous central fetal monitoring by computer analysis and online alerts (experimental arm) was compared with visual analysis (control arm). Fetal blood sampling and electrocardiographic ST waveform analysis were available in both arms. The primary outcome was incidence of newborn metabolic acidosis (pH less than 7.05 and base deficit greater than 12 mmol/L). Prespecified secondary outcomes included operative delivery, use of fetal blood sampling, low 5-minute Apgar score, neonatal intensive care unit admission, hypoxic-ischemic encephalopathy, and perinatal death. A sample size of 3,660 per group (N=7,320) was planned to be able to detect a reduction in the rate of metabolic acidosis from 2.8% to 1.8% (two-tailed α of 0.05 with 80% power). From August 2011 through July 2014, 32,306 women were assessed for eligibility and 7,730 were randomized: 3,961 to computer analysis and online alerts, and 3,769 to visual analysis. Baseline characteristics were similar in both groups. Metabolic acidosis occurred in 16 participants (0.40%) in the experimental arm and 22 participants (0.58%) in the control arm (relative risk 0.69 [0.36-1.31]). No statistically significant differences were found in the incidence of secondary outcomes. Compared with visual analysis, computer analysis of fetal monitoring signals with real-time alerts did not significantly reduce the rate of metabolic acidosis or obstetric intervention. A lower-than-expected rate of newborn metabolic acidosis was observed in both arms of the trial. ISRCTN Registry, http://www.isrctn.com, ISRCTN42314164.

  2. User Guide to RockJock - A Program for Determining Quantitative Mineralogy from X-Ray Diffraction Data

    USGS Publications Warehouse

    Eberl, D.D.

    2003-01-01

    RockJock is a computer program that determines quantitative mineralogy in powdered samples by comparing the integrated X-ray diffraction (XRD) intensities of individual minerals in complex mixtures to the intensities of an internal standard. Analysis without an internal standard (standardless analysis) also is an option. This manual discusses how to prepare and X-ray samples and mineral standards for these types of analyses and describes the operation of the program. Carefully weighed samples containing an internal standard (zincite) are ground in a McCrone mill. Randomly oriented preparations then are X-rayed, and the X-ray data are entered into the RockJock program. Minerals likely to be present in the sample are chosen from a list of standards, and the calculation is begun. The program then automatically fits the sum of stored XRD patterns of pure standard minerals (the calculated pattern) to the measured pattern by varying the fraction of each mineral standard pattern, using the Solver function in Microsoft Excel to minimize a degree of fit parameter between the calculated and measured pattern. The calculation analyzes the pattern (usually 20 to 65 degrees two-theta) to find integrated intensities for the minerals. Integrated intensities for each mineral then are determined from the proportion of each mineral standard pattern required to give the best fit. These integrated intensities then are compared to the integrated intensity of the internal standard, and the weight percentages of the minerals are calculated. The results are presented as a list of minerals with their corresponding weight percent. To some extent, the quality of the analysis can be checked because each mineral is analyzed independently, and, therefore, the sum of the analysis should approach 100 percent. Also, the method has been shown to give good results with artificial mixtures. The program is easy to use, but does require an understanding of mineralogy, of X-ray diffraction practice, and an elementary knowledge of the Excel program.
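
    The core fitting step, finding the fractions of each mineral standard pattern whose weighted sum best matches the measured pattern, is a constrained least-squares problem. A minimal non-negative least-squares sketch with fabricated arrays standing in for the standard and measured XRD patterns (RockJock itself uses the Excel Solver) is:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative RockJock-style mixture fit: measured pattern as a non-negative
# combination of standard patterns. Arrays are placeholders, not real XRD data.
two_theta = np.linspace(20, 65, 2000)
standards = np.vstack([
    np.exp(-0.5 * ((two_theta - c) / 0.3) ** 2)      # fake single-peak "standards"
    for c in (26.6, 29.4, 36.5, 50.1)
])                                                    # shape: (n_minerals, n_points)
true_fracs = np.array([0.5, 0.2, 0.2, 0.1])
measured = true_fracs @ standards + np.random.default_rng(1).normal(0, 0.002, two_theta.size)

# Solve min ||A x - b||  subject to x >= 0, where columns of A are the standards.
fracs, residual = nnls(standards.T, measured)
print("fitted pattern fractions:", np.round(fracs / fracs.sum(), 3))
```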

  3. Intensity Conserving Spectral Fitting

    NASA Technical Reports Server (NTRS)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
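
    One schematic reading of such an iterative correction, not the published ICSI routine itself, is: fit a spline through adjusted node values and update each node by the difference between the observed bin intensity and the spline's average over that bin, repeating until the bin averages reproduce the data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def icsi_fit(bin_centers, observed, bin_width, n_iter=20):
    """Iteratively adjust spline node values so that the spline's average over
    each wavelength bin reproduces the observed (bin-averaged) intensity."""
    nodes = observed.copy().astype(float)
    for _ in range(n_iter):
        spline = CubicSpline(bin_centers, nodes)
        # Average of the spline across each bin.
        bin_avg = np.array([
            spline.integrate(c - bin_width / 2, c + bin_width / 2) / bin_width
            for c in bin_centers
        ])
        nodes += observed - bin_avg   # push bin averages toward the data
    return CubicSpline(bin_centers, nodes)

if __name__ == "__main__":
    centers = np.arange(195.0, 196.0, 0.022)          # toy wavelength grid (nm)
    profile = np.exp(-0.5 * ((centers - 195.5) / 0.05) ** 2)
    spline = icsi_fit(centers, profile, bin_width=0.022)
    print(spline(195.5))
```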

  4. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images show consistency and that the proposed algorithm addresses efficiently depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously-developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
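
    The PSF representation rests on a principal component decomposition of a set of depth-variant PSFs. A compact way to obtain such a basis is an SVD of the mean-subtracted PSF matrix; the sketch below uses random arrays in place of computed PSFs and a component count chosen only for illustration.

```python
import numpy as np

# Illustrative PCA basis for depth-variant PSFs via SVD.
# psf_stack: one flattened PSF per depth; random placeholder data here.
n_depths, psf_shape = 32, (33, 33, 17)
psf_stack = np.random.default_rng(2).random((n_depths, np.prod(psf_shape)))

mean_psf = psf_stack.mean(axis=0)
centered = psf_stack - mean_psf
# Rows of vt are the principal components (orthonormal basis PSFs).
u, s, vt = np.linalg.svd(centered, full_matrices=False)

n_components = 4
basis = vt[:n_components]                      # (n_components, n_voxels)
coeffs = centered @ basis.T                    # per-depth expansion coefficients
approx = mean_psf + coeffs @ basis             # low-rank approximation of every PSF

rel_err = np.linalg.norm(approx - psf_stack) / np.linalg.norm(psf_stack)
print(f"relative reconstruction error with {n_components} components: {rel_err:.3f}")
```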

  5. Automated image quality assessment for chest CT scans.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2018-02-01

    Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.

  6. Interactive signal analysis and ultrasonic data collection system user's manual

    NASA Technical Reports Server (NTRS)

    Smith, G. R.

    1978-01-01

    The interactive signal analysis and ultrasonic data collection system (ECHO1) is a real time data acquisition and display system. ECHO1 executed on a PDP-11/45 computer under the RT11 real time operating system. Extensive operator interaction provided the requisite parameters to the data collection, calculation, and data modules. Data were acquired in real time from a pulse echo ultrasonic system using a Biomation Model 8100 transient recorder. The data consisted of 2084 intensity values representing the amplitude of pulses transmitted and received by the ultrasonic unit.

  7. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory work on the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computer opens a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem to map it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will then be solved experimentally as QUBO instances defined on the Chimera graph of the quantum computer.
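
    For very small instances, a QUBO problem (minimize x^T Q x over binary x) can be checked by exhaustive enumeration. The toy solver below, with an invented 4-variable Q matrix, only illustrates the problem form that the assimilation cost function must be mapped into; it says nothing about the actual D-Wave workflow or the BWT mapping used by the authors.

```python
import itertools
import numpy as np

def solve_qubo_bruteforce(Q):
    """Exhaustively minimize x^T Q x over binary vectors x (tiny problems only)."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    # Invented 4-variable QUBO matrix (upper-triangular coefficients).
    Q = np.array([[-1.0,  2.0,  0.0,  0.0],
                  [ 0.0, -1.0,  2.0,  0.0],
                  [ 0.0,  0.0, -1.0,  2.0],
                  [ 0.0,  0.0,  0.0, -1.0]])
    x, val = solve_qubo_bruteforce(Q)
    print(x, val)
```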

  8. Development of the NASA/FLAGRO computer program for analysis of airframe structures

    NASA Technical Reports Server (NTRS)

    Forman, R. G.; Shivakumar, V.; Newman, J. C., Jr.

    1994-01-01

    The NASA/FLAGRO (NASGRO) computer program was developed for fracture control analysis of space hardware and is currently the standard computer code in NASA, the U.S. Air Force, and the European Space Agency (ESA) for this purpose. The significant attributes of the NASGRO program are the numerous crack case solutions, the large materials file, the improved growth rate equation based on crack closure theory, and the user-friendly promptive input features. In support of the National Aging Aircraft Research Program (NAARP), NASGRO is being further developed to provide advanced state-of-the-art capability for damage tolerance and crack growth analysis of aircraft structural problems, including mechanical systems and engines. The project currently involves a cooperative development effort by NASA, FAA, and ESA. The primary tasks underway are the incorporation of advanced methodology for crack growth rate retardation resulting from spectrum loading and improved analysis for determining crack instability. Also, the current weight function solutions in NASGRO for nonlinear stress gradient problems are being extended to more crack cases, and the 2-d boundary integral routine for stress analysis and stress-intensity factor solutions is being extended to 3-d problems. Lastly, effort is underway to enhance the program to operate on personal computers and workstations in a Windows environment. Because of the increasing and already wide usage of NASGRO, the code offers an excellent mechanism for technology transfer for new fatigue and fracture mechanics capabilities developed within NAARP.

  9. Quantification of protein expression in cells and cellular subcompartments on immunohistochemical sections using a computer supported image analysis system.

    PubMed

    Braun, Martin; Kirsten, Robert; Rupp, Niels J; Moch, Holger; Fend, Falko; Wernert, Nicolas; Kristiansen, Glen; Perner, Sven

    2013-05-01

    Quantification of protein expression based on immunohistochemistry (IHC) is an important step for translational research and clinical routine. Several manual ('eyeballing') scoring systems are used in order to semi-quantify protein expression based on chromogenic intensities and distribution patterns. However, manual scoring systems are time-consuming and subject to significant intra- and interobserver variability. The aim of our study was to explore whether new image analysis software proves to be sufficient as an alternative tool to quantify protein expression. For IHC experiments, one nucleus specific marker (i.e., ERG antibody), one cytoplasmic specific marker (i.e., SLC45A3 antibody), and one marker expressed in both compartments (i.e., TMPRSS2 antibody) were chosen. Stainings were applied on TMAs containing tumor material of 630 prostate cancer patients. A pathologist visually quantified all IHC stainings in a blinded manner, applying a four-step scoring system. For digital quantification, image analysis software (Tissue Studio v.2.1, Definiens AG, Munich, Germany) was applied to obtain a continuous spectrum of average staining intensity. For each of the three antibodies we found a strong correlation of the manual protein expression score and the score of the image analysis software. Spearman's rank correlation coefficient was 0.94, 0.92, and 0.90 for ERG, SLC45A3, and TMPRSS2, respectively (p < 0.01). Our data suggest that the image analysis software Tissue Studio is a powerful tool for quantification of protein expression in IHC stainings. Further, since the digital analysis is precise and reproducible, computer supported protein quantification might help to overcome intra- and interobserver variability and increase objectivity of IHC based protein assessment.
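
    The agreement between the manual four-step score and the continuous software score reported above can be quantified with Spearman's rank correlation; a minimal example with invented scores (not data from the study) is:

```python
import numpy as np
from scipy.stats import spearmanr

# Invented example scores for a handful of TMA cores; not data from the study.
manual_score = np.array([0, 1, 1, 2, 2, 3, 3, 3, 0, 2])                 # 4-step eyeballing score
software_intensity = np.array([12, 35, 30, 58, 61, 88, 95, 80, 8, 55])  # continuous score

rho, p_value = spearmanr(manual_score, software_intensity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```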

  10. Monitoring Engine Vibrations And Spectrum Of Exhaust

    NASA Technical Reports Server (NTRS)

    Martinez, Carol L.; Randall, Michael R.; Reinert, John W.

    1991-01-01

    Real-time computation of intensities of peaks in visible-light emission spectrum of exhaust combined with real-time spectrum analysis of vibrations into developmental monitoring technique providing up-to-the-second information on conditions of critical bearings in engine. Conceived to monitor conditions of bearings in turbopump supplying oxygen to Space Shuttle main engine, based on observations that both vibrations in bearings and intensities of visible light emitted at specific wavelengths by exhaust plume of engine indicate wear and incipient failure of bearings. Applicable to monitoring "health" of other machinery via spectra of vibrations and electromagnetic emissions from exhausts. Concept related to one described in "Monitoring Bearing Vibrations For Signs Of Damage", (MFS-29734).

  11. VERCE: a productive e-Infrastructure and e-Science environment for data-intensive seismology research

    NASA Astrophysics Data System (ADS)

    Vilotte, Jean-Pierre; Atkinson, Malcolm; Carpené, Michele; Casarotti, Emanuele; Frank, Anton; Igel, Heiner; Rietbrock, Andreas; Schwichtenberg, Horst; Spinuso, Alessandro

    2016-04-01

    Seismology pioneers global and open-data access -- with internationally approved data, metadata and exchange standards facilitated worldwide by the Federation of Digital Seismic Networks (FDSN) and in Europe the European Integrated Data Archives (EIDA). The growing wealth of data generated by dense observation and monitoring systems and recent advances in seismic wave simulation capabilities induces a change in paradigm. Data-intensive seismology research requires a new holistic approach combining scalable high-performance wave simulation codes and statistical data analysis methods, and integrating distributed data and computing resources. The European E-Infrastructure project "Virtual Earthquake and seismology Research Community e-science environment in Europe" (VERCE) pioneers the federation of autonomous organisations providing data and computing resources, together with a comprehensive, integrated and operational virtual research environment (VRE) and E-infrastructure devoted to the full path of data use in a research-driven context. VERCE delivers to a broad base of seismology researchers in Europe easily used high-performance full waveform simulations and misfit calculations, together with a data-intensive framework for the collaborative development of innovative statistical data analysis methods, all of which were previously only accessible to a small number of well-resourced groups. It balances flexibility with new integrated capabilities to provide a fluent path from research innovation to production. As such, VERCE is a major contribution to the implementation phase of the ``European Plate Observatory System'' (EPOS), the ESFRI initiative of the solid-Earth community. The VRE meets a range of seismic research needs by eliminating chores and technical difficulties to allow users to focus on their research questions. It empowers researchers to harvest the new opportunities provided by well-established and mature high-performance wave simulation codes of the community. It enables active researchers to invent and refine scalable methods for innovative statistical analysis of seismic waveforms in a wide range of application contexts. The VRE paves the way towards a flexible shared framework for seismic waveform inversion, lowering the barriers to uptake for the next generation of researchers. The VRE can be accessed through the science gateway that puts together computational and data-intensive research into the same framework, integrating multiple data sources and services. It provides a context for task-oriented and data-streaming workflows, and maps user actions to the full gamut of the federated platform resources and procurement policies, activating the necessary behind-the-scene automation and transformation. The platform manages and produces domain metadata, coupling them with the provenance information describing the relationships and the dependencies, which characterise the whole workflow process. This dynamic knowledge base, can be explored for validation purposes via a graphical interface and a web API. Moreover, it fosters the assisted selection and re-use of the data within each phase of the scientific analysis. These phases can be identified as Simulation, Data Access, Preprocessing, Misfit and data processing, and are presented to the users of the gateway as dedicated and interactive workspaces. 
By enabling researchers to share results and provenance information, VERCE steers open-science behaviour, allowing researchers to discover and build on prior work and thereby to progress faster. A key asset is the agile strategy that VERCE deployed in a multi-organisational context, engaging seismologists, data scientists, ICT researchers, HPC and data resource providers, and system administrators in short-lived tasks, each with a goal that is a seismology priority, and intimately coupling research thinking with technical innovation. This changes the focus from HPC production environments and community data services to user-focused scenarios, avoiding wasteful bouts of technology centricity where technologists collect requirements and develop a system that is not used because the ideas of the planned users have moved on. As such, the technologies and concepts developed in VERCE are relevant to many other disciplines in computational and data-driven Earth Sciences and can provide the key technologies for a Europe-wide computational and data-intensive framework in Earth Sciences.

  12. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
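
    A minimal rejection-sampling form of approximate Bayesian computation, which replaces an explicit optimization with repeated simulate-and-compare steps, is sketched below on a toy model. The prior, simulator, summary statistic, and tolerance are all invented for illustration and are far simpler than the LES closure problem described above.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulator(theta, n=200):
    """Toy forward model standing in for the unclosed-term computation."""
    return rng.normal(theta, 1.0, n)

def summary(data):
    """Summary statistic used to compare simulated and reference data."""
    return np.array([data.mean(), data.std()])

def abc_rejection(reference, n_samples=5000, epsilon=0.1):
    """Accept prior draws whose simulated summaries fall within epsilon."""
    ref_summary = summary(reference)
    accepted = []
    for _ in range(n_samples):
        theta = rng.uniform(-5, 5)               # draw from a flat prior
        dist = np.linalg.norm(summary(simulator(theta)) - ref_summary)
        if dist < epsilon:
            accepted.append(theta)
    return np.array(accepted)

if __name__ == "__main__":
    reference_data = rng.normal(1.7, 1.0, 200)    # "observed" data with true theta = 1.7
    posterior = abc_rejection(reference_data)
    print(posterior.mean(), posterior.size)
```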

  13. The large-scale structure of software-intensive systems

    PubMed Central

    Booch, Grady

    2012-01-01

    The computer metaphor is dominant in most discussions of neuroscience, but the semantics attached to that metaphor are often quite naive. Herein, we examine the ontology of software-intensive systems, the nature of their structure and the application of the computer metaphor to the metaphysical questions of self and causation. PMID:23386964

  14. Unified First-Principle Analysis Of Ultraintense Laser-Matter Interactions: Theory, Computation and Experiments

    DTIC Science & Technology

    2017-03-29

    AFRL-AFOSR-VA-TR-2017-0072. Distribution A: approved for public release. Keywords: ablation, high intensity. Responsible person: Parra, Enrique. ...Leibniz Universitat Hannover. These additions have significantly strengthened our team, as evidenced by the high quality publications by those

  15. Comment on the paper "NDSD-1000: High-resolution, high-temperature nitrogen dioxide spectroscopic Databank" by A.A. Lukashevskaya, N.N. Lavrentieva, A.C. Dudaryonok, V.I. Perevalov, J Quant Spectrosc Radiat Transfer 2016;184:205-17

    NASA Astrophysics Data System (ADS)

    Perrin, A.; Ndao, M.; Manceron, L.

    2017-10-01

    A recent paper [1] presents a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The NDSD-1000 database contains line parameters (positions, intensities, self- and air-broadening coefficients, exponents of the temperature dependence of self- and air-broadening coefficients) for numerous cold and hot bands of the 14N16O2 isotopomer of nitrogen dioxide. The parameters used for the line position and intensity calculations were generated through a global modeling of experimental data collected in the literature within the framework of the method of effective operators. However, the form of the effective dipole moment operator used to compute the NO2 line intensities in the NDSD-1000 database differs from the classical one used for line intensity calculations in the NO2 infrared literature [12]. Using Fourier transform spectra recorded at high resolution in the 6.3 μm region, it is shown here that the NDSD-1000 formulation is incorrect, since the computed intensities do not account properly for the (Int(+)/Int(-)) intensity ratio between the (+) (J = N+1/2) and (-) (J = N-1/2) electron spin-rotation subcomponents of the computed vibration-rotation transitions. On the other hand, in the HITRAN or GEISA spectroscopic databases, the NO2 line intensities were computed using the classical theoretical approach, and it is shown here that these data lead to a significantly better agreement between the observed and calculated spectra.

  16. A spherical harmonics intensity model for 3D segmentation and 3D shape analysis of heterochromatin foci.

    PubMed

    Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl

    2016-08-01

    The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. An optical flow-based method for velocity field of fluid flow estimation

    NASA Astrophysics Data System (ADS)

    Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz

    2017-06-01

    The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on the computation of partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low due to the fact that an exact estimation of spatial derivatives is very difficult in the presence of rapid intensity changes in the PIV images caused by particles having small diameters. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation but, more importantly, allows for the evaluation of the derivatives in intermediate points between pixels. Numerical analysis proves that the method is able to estimate even a separate vector for each particle with a 5×5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements well above 1 px.
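
    In each interrogation window, Lucas-Kanade solves a small least-squares system built from the spatial and temporal intensity gradients. The sketch below uses plain finite-difference gradients, i.e., the 'common approach' that the paper improves upon with Gaussian radial basis function interpolation; it is illustrative, not the authors' implementation.

```python
import numpy as np

def lucas_kanade(frame1, frame2, window=5):
    """Estimate a per-pixel velocity field from two gray-scale frames using
    the basic Lucas-Kanade least-squares solution over a square window."""
    Iy, Ix = np.gradient(frame1.astype(float))         # spatial derivatives
    It = frame2.astype(float) - frame1.astype(float)   # temporal derivative
    half = window // 2
    u = np.zeros_like(frame1, dtype=float)
    v = np.zeros_like(frame1, dtype=float)
    for r in range(half, frame1.shape[0] - half):
        for c in range(half, frame1.shape[1] - half):
            ix = Ix[r-half:r+half+1, c-half:c+half+1].ravel()
            iy = Iy[r-half:r+half+1, c-half:c+half+1].ravel()
            it = It[r-half:r+half+1, c-half:c+half+1].ravel()
            A = np.column_stack([ix, iy])
            # Solve A [u, v]^T = -it in the least-squares sense.
            flow, *_ = np.linalg.lstsq(A, -it, rcond=None)
            u[r, c], v[r, c] = flow
    return u, v
```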

  18. Combining various types of classifiers and features extracted from magnetic resonance imaging data in schizophrenia recognition.

    PubMed

    Janousova, Eva; Schwarz, Daniel; Kasparek, Tomas

    2015-06-30

    We investigated a combination of three classification algorithms, namely the modified maximum uncertainty linear discriminant analysis (mMLDA), the centroid method, and the average linkage, with three types of features extracted from three-dimensional T1-weighted magnetic resonance (MR) brain images, specifically MR intensities, grey matter densities, and local deformations for distinguishing 49 first episode schizophrenia male patients from 49 healthy male subjects. The feature sets were reduced using intersubject principal component analysis before classification. By combining the classifiers, we were able to obtain slightly improved results when compared with single classifiers. The best classification performance (81.6% accuracy, 75.5% sensitivity, and 87.8% specificity) was significantly better than classification by chance. We also showed that classifiers based on features calculated using more computation-intensive image preprocessing perform better; mMLDA with classification boundary calculated as weighted mean discriminative scores of the groups had improved sensitivity but similar accuracy compared to the original MLDA; reducing a number of eigenvectors during data reduction did not always lead to higher classification accuracy, since noise as well as the signal important for classification were removed. Our findings provide important information for schizophrenia research and may improve accuracy of computer-aided diagnostics of neuropsychiatric diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Decision making and preferences for acoustic signals in choice situations by female crickets.

    PubMed

    Gabel, Eileen; Kuntze, Janine; Hennig, R Matthias

    2015-08-01

    Multiple attributes usually have to be assessed when choosing a mate. Efficient choice of the best mate is complicated if the available cues are not positively correlated, as is often the case during acoustic communication. Because of varying distances of signalers, a female may be confronted with signals of diverse quality at different intensities. Here, we examined how available cues are weighted for a decision by female crickets. Two songs with different temporal patterns and/or sound intensities were presented in a choice paradigm and compared with female responses from a no-choice test. When both patterns were presented at equal intensity, preference functions became wider in choice situations compared with a no-choice paradigm. When the stimuli in two-choice tests were presented at different intensities, this effect was counteracted as preference functions became narrower compared with choice tests using stimuli of equal intensity. The weighting of intensity differences depended on pattern quality and was therefore non-linear. A simple computational model based on pattern and intensity cues reliably predicted female decisions. A comparison of processing schemes suggested that the computations for pattern recognition and directionality are performed in a network with parallel topology. However, the computational flow of information corresponded to serial processing. © 2015. Published by The Company of Biologists Ltd.

  20. Cloud Computing for Protein-Ligand Binding Site Comparison

    PubMed Central

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery. PMID:23762824

  1. Cloud computing for protein-ligand binding site comparison.

    PubMed

    Hung, Che-Lun; Hua, Guan-Jie

    2013-01-01

    The proteome-wide analysis of protein-ligand binding sites and their interactions with ligands is important in structure-based drug design and in understanding ligand cross reactivity and toxicity. The well-known and commonly used software, SMAP, has been designed for 3D ligand binding site comparison and similarity searching of a structural proteome. SMAP can also predict drug side effects and reassign existing drugs to new indications. However, the computing scale of SMAP is limited. We have developed a high availability, high performance system that expands the comparison scale of SMAP. This cloud computing service, called Cloud-PLBS, combines the SMAP and Hadoop frameworks and is deployed on a virtual cloud computing platform. To handle the vast amount of experimental data on protein-ligand binding site pairs, Cloud-PLBS exploits the MapReduce paradigm as a management and parallelizing tool. Cloud-PLBS provides a web portal and scalability through which biologists can address a wide range of computer-intensive questions in biology and drug discovery.

  2. Characteristic Analysis Light Intensity Sensor Based On Plastic Optical Fiber At Various Configuration

    NASA Astrophysics Data System (ADS)

    Arifin, A.; Lusiana; Yunus, Muhammad; Dewang, Syamsir

    2018-03-01

    This research discusses a light intensity sensor based on plastic optical fiber. The sensor is made of plastic optical fiber of two types, namely with cladding and without cladding. The plastic optical fiber used is a multi-mode step-index type made of polymethyl methacrylate (PMMA). An infrared LED emits light into the plastic optical fiber; the light is subsequently received by a phototransistor and converted to an electric voltage. The sensor configuration is made in three models: straight, U, and gamma configurations, each with cladding and without cladding. The measured light source uses a 30 Watt high power LED with a light intensity of 0 to 10 Klux. The measured light intensity affects the propagation of light inside the optical fiber sensor: the greater the intensity of the measured light, the greater the output voltage read on the computer. The results showed that the best optical fiber sensor characteristics were obtained in the U configuration. The U-configuration sensor without cladding had the best sensitivity and resolution, 0.0307 volts/Klux and 0.0326 Klux, respectively. The advantages of this plastic-optical-fiber light intensity instrument are a simple, easy-to-build operating system, low cost, and high sensitivity and resolution.

  3. The 1943 K emission spectrum of H216O between 6600 and 7050 cm-1

    NASA Astrophysics Data System (ADS)

    Czinki, Eszter; Furtenbacher, Tibor; Császár, Attila G.; Eckhardt, André K.; Mellau, Georg Ch.

    2018-02-01

    An emission spectrum of H216O has been recorded, with Doppler-limited resolution, at 1943 K using Hot Gas Molecular Emission (HOTGAME) spectroscopy. The wavenumber range covered is 6600 to 7050 cm-1. This work reports the analysis and subsequent assignment of close to 3700 H216O transitions out of a total of more than 6700 measured peaks. The analysis is based on the Measured Active Rotational-Vibrational Energy Levels (MARVEL) energy levels of H216O determined in 2013 and emission line intensities obtained from accurate variational nuclear-motion computations. The analysis of the spectrum yields about 1300 transitions not measured previously and 23 experimentally previously unidentified rovibrational energy levels. The accuracy of the line positions and intensities used in the analysis was improved with the spectrum deconvolution software SyMath via creating a peak list corresponding to the dense emission spectrum. The extensive list of labeled transitions and the new experimental energy levels obtained are deposited in the Supplementary Material of this article as well as in the ReSpecTh (http://www.respecth.hu) information system.

  4. A mixed-mode crack analysis of rectilinear anisotropic solids using conservation laws of elasticity

    NASA Technical Reports Server (NTRS)

    Wang, S. S.; Yau, J. F.; Corten, H. T.

    1980-01-01

    A very simple and convenient method of analysis for studying two-dimensional mixed-mode crack problems in rectilinear anisotropic solids is presented. The analysis is formulated on the basis of conservation laws of anisotropic elasticity and of fundamental relationships in anisotropic fracture mechanics. The problem is reduced to a system of linear algebraic equations in mixed-mode stress intensity factors. One of the salient features of the present approach is that it can determine directly the mixed-mode stress intensity solutions from the conservation integrals evaluated along a path removed from the crack-tip region without the need of solving the corresponding complex near-field boundary value problem. Several examples with solutions available in the literature are solved to ensure the accuracy of the current analysis. This method is further demonstrated to be superior to other approaches in its numerical simplicity and computational efficiency. Solutions of more complicated and practical engineering problems dealing with the crack emanating from a circular hole in composites are presented also to illustrate the capacity of this method.

  5. Efficient Data-Worth Analysis Using a Multilevel Monte Carlo Method Applied in Oil Reservoir Simulations

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Evans, K. J.

    2017-12-01

    Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC, which requires a very large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As the data-worth analysis involves a large number of expectation estimates, the cost savings from MLMC can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select the candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimates obtained from the standard MC. Compared to the standard MC, however, the MLMC greatly reduces the computational cost of estimating the uncertainty reduction, with up to 600 days of computing time saved when a single processor is used.
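    The following is a minimal sketch of the multilevel Monte Carlo idea the abstract relies on: the expectation of the finest-level quantity of interest is rewritten as a telescoping sum of level differences, each estimated with its own (decreasing) sample count. The `model` function, level count and sample sizes are illustrative stand-ins, not the reservoir simulator used in the study.

```python
# Minimal multilevel Monte Carlo (MLMC) sketch: the expectation of the finest
# model is written as a telescoping sum of level differences, each estimated
# with its own (decreasing) sample count.  `model(level, rng)` is a hypothetical
# stand-in for a simulator run at a given fidelity level.  In practice the fine
# and coarse runs within a difference share the same random inputs to reduce
# variance; this toy uses independent draws, which only inflates the variance.
import numpy as np

def model(level, rng):
    # Toy surrogate: finer levels add resolution; coarse levels are cheap/biased.
    n = 2 ** (level + 2)                     # "grid size" grows with level
    x = rng.standard_normal(n)
    return np.mean(np.exp(0.1 * x))          # quantity of interest

def mlmc_estimate(n_samples_per_level, seed=0):
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for level, n_samples in enumerate(n_samples_per_level):
        diffs = []
        for _ in range(n_samples):
            fine = model(level, rng)
            coarse = model(level - 1, rng) if level > 0 else 0.0
            diffs.append(fine - coarse)
        estimate += np.mean(diffs)
    return estimate

# Many cheap coarse samples, few expensive fine samples.
print(mlmc_estimate([2000, 400, 80]))
```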

  6. The statistics of identifying differentially expressed genes in Expresso and TM4: a comparison

    PubMed Central

    Sioson, Allan A; Mane, Shrinivasrao P; Li, Pinghua; Sha, Wei; Heath, Lenwood S; Bohnert, Hans J; Grene, Ruth

    2006-01-01

    Background Analysis of DNA microarray data takes as input spot intensity measurements from scanner software and returns differential expression of genes between two conditions, together with a statistical significance assessment. This process typically consists of two steps: data normalization and identification of differentially expressed genes through statistical analysis. The Expresso microarray experiment management system implements these steps with a two-stage, log-linear ANOVA mixed model technique, tailored to individual experimental designs. The complement of tools in TM4, on the other hand, is based on a number of preset design choices that limit its flexibility. In the TM4 microarray analysis suite, normalization, filter, and analysis methods form an analysis pipeline. TM4 computes integrated intensity values (IIV) from the average intensities and spot pixel counts returned by the scanner software as input to its normalization steps. By contrast, Expresso can use either IIV data or median intensity values (MIV). Here, we compare Expresso and TM4 analysis of two experiments and assess the results against qRT-PCR data. Results The Expresso analysis using MIV data consistently identifies more genes as differentially expressed, when compared to Expresso analysis with IIV data. The typical TM4 normalization and filtering pipeline corrects systematic intensity-specific bias on a per microarray basis. Subsequent statistical analysis with Expresso or a TM4 t-test can effectively identify differentially expressed genes. The best agreement with qRT-PCR data is obtained through the use of Expresso analysis and MIV data. Conclusion The results of this research are of practical value to biologists who analyze microarray data sets. The TM4 normalization and filtering pipeline corrects microarray-specific systematic bias and complements the normalization stage in Expresso analysis. The results of Expresso using MIV data have the best agreement with qRT-PCR results. In one experiment, MIV is a better choice than IIV as input to data normalization and statistical analysis methods, as it yields a greater number of statistically significant differentially expressed genes; TM4 does not support the choice of MIV input data. Overall, the more flexible and extensive statistical models of Expresso achieve more accurate analytical results, when judged by the yardstick of qRT-PCR data, in the context of an experimental design of modest complexity. PMID:16626497

  7. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.

  8. Geographic Information System and Remote Sensing Approach with Hydrologic Rational Model for Flood Event Analysis in Jakarta

    NASA Astrophysics Data System (ADS)

    Aditya, M. R.; Hernina, R.; Rokhmatuloh

    2017-12-01

    Rapid development in Jakarta, which generates more impervious surface, has reduced rainfall infiltration into the soil layer and increased run-off. In some events, continuous high rainfall intensity can create sudden floods in Jakarta City. This article used rainfall data for Jakarta on 10 February 2015 to compute rainfall intensity, which was then interpolated with the ordinary kriging technique. The spatial distribution of rainfall intensity was then overlaid with run-off coefficients based on the land use types of the study area. The peak run-off within each cell, obtained from the hydrologic rational model, was then summed over the whole study area to give the total peak run-off. For this study area, the land use types were 51.9% industrial, 37.57% parks, and 10.54% residential, with estimated total peak run-off of 6.04 m3/sec, 0.39 m3/sec, and 0.31 m3/sec, respectively.
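    A minimal sketch of the rational-model computation described above is given below, assuming the common form Q = C * i * A with i in mm/h and A in hectares (so Q = C * i * A / 360 in m3/s); the run-off coefficients and cell values are illustrative, not those of the paper.

```python
# Rational-method sketch: peak run-off per cell is Q = C * i * A, summed over
# the study area.  With i in mm/h and A in hectares, Q = C * i * A / 360 gives
# m^3/s.  The run-off coefficients below are illustrative, not the paper's.
RUNOFF_C = {"industrial": 0.80, "residential": 0.60, "parks": 0.20}

def peak_runoff(cells):
    """cells: iterable of (land_use, intensity_mm_per_hr, area_ha)."""
    total = 0.0
    for land_use, intensity, area in cells:
        total += RUNOFF_C[land_use] * intensity * area / 360.0
    return total

cells = [("industrial", 50.0, 12.0), ("parks", 50.0, 8.7), ("residential", 50.0, 2.4)]
print(f"Total peak run-off: {peak_runoff(cells):.2f} m^3/s")
```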

  9. Multiscale analysis of neural spike trains.

    PubMed

    Ramezan, Reza; Marriott, Paul; Chenouri, Shojaeddin

    2014-01-30

    This paper studies the multiscale analysis of neural spike trains, through both graphical and Poisson process approaches. We introduce the interspike interval plot, which simultaneously visualizes characteristics of neural spiking activity at different time scales. Using an inhomogeneous Poisson process framework, we discuss multiscale estimates of the intensity functions of spike trains. We also introduce the windowing effect for two multiscale methods. Using quasi-likelihood, we develop bootstrap confidence intervals for the multiscale intensity function. We provide a cross-validation scheme, to choose the tuning parameters, and study its unbiasedness. Studying the relationship between the spike rate and the stimulus signal, we observe that adjusting for the first spike latency is important in cross-validation. We show, through examples, that the correlation between spike trains and spike count variability can be multiscale phenomena. Furthermore, we address the modeling of the periodicity of the spike trains caused by a stimulus signal or by brain rhythms. Within the multiscale framework, we introduce intensity functions for spike trains with multiplicative and additive periodic components. Analyzing a dataset from the retinogeniculate synapse, we compare the fit of these models with the Bayesian adaptive regression splines method and discuss the limitations of the methodology. Computational efficiency, which is usually a challenge in the analysis of spike trains, is one of the highlights of these new models. In an example, we show that the reconstruction quality of a complex intensity function demonstrates the ability of the multiscale methodology to crack the neural code. Copyright © 2013 John Wiley & Sons, Ltd.
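    As a generic illustration of viewing spike-train intensity at several time scales (not the authors' quasi-likelihood estimator), the sketch below kernel-smooths binned spike counts with a few different bandwidths; the spike train, bin width and bandwidths are illustrative.

```python
# Illustration of looking at spike-train intensity at several time scales by
# kernel-smoothing the spike counts with different bandwidths.  This is a
# generic sketch, not the paper's quasi-likelihood multiscale estimator.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiscale_intensity(spike_times, t_max, dt=0.001, bandwidths=(0.005, 0.05, 0.5)):
    """Return {bandwidth_s: estimated rate in spikes/s on a grid of step dt}."""
    n_bins = int(np.ceil(t_max / dt))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_max))
    rates = {}
    for bw in bandwidths:
        sigma_bins = bw / dt
        rates[bw] = gaussian_filter1d(counts.astype(float), sigma_bins) / dt
    return rates

rng = np.random.default_rng(1)
spikes = np.sort(rng.uniform(0.0, 10.0, size=400))   # toy spike train, 10 s long
rates = multiscale_intensity(spikes, t_max=10.0)
for bw, r in rates.items():
    print(f"bandwidth {bw:>5} s: mean rate {r.mean():.1f} spikes/s")
```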

  10. Composition analysis by scanning femtosecond laser ultraprobing (CASFLU).

    DOEpatents

    Ishikawa, Muriel Y.; Wood, Lowell L.; Campbell, E. Michael; Stuart, Brent C.; Perry, Michael D.

    2002-01-01

    The composition analysis by scanning femtosecond ultraprobing (CASFLU) technology scans a focused train of extremely short-duration, very intense laser pulses across a sample. The partially-ionized plasma ablated by each pulse is spectrometrically analyzed in real time, determining the ablated material's composition. The steering of the scanned beam is thus computer-directed either to continue ablative material removal at the same site or to successively remove nearby material for the same type of composition analysis. This invention has utility in high-speed chemical-elemental, molecular-fragment and isotopic analyses of the microstructure composition of complex objects, e.g., the oxygen isotopic compositions of large populations of single osteons in bone.

  11. Utilizing lung sounds analysis for the evaluation of acute asthma in small children.

    PubMed

    Tinkelman, D G; Lutz, C; Conner, B

    1991-09-01

    One of the most difficult aspects of management of acute asthma in the small child is the clinician's inability to quantitate the response or lack of response to bronchodilator agents because of the inability of a child this age to perform objective lung measurements in the acute state. The present study was designed to evaluate bronchodilator responsiveness in children between 2 and 6 years of age with wheezing by means of a computerized lung sound analysis, computer digitized airway phonopneumonography. Children between ages 2 and 6 who were experiencing acute exacerbations of asthma were included in this study population. The 43 children were evaluated by physical examination, pulmonary function testing, if possible, by use of (spirometry or peak flow meter) and transmission of lung sounds to a computer using an electronic stethoscope to obtain a phonopneumograph with sound intensity level determinations during tidal breathing. A control group of 20 known asthmatic patients between the ages of 8 and 52 years who also presented to the office with acute asthma were evaluated similarly. In each of these individuals, a physical examination was followed by complete spirometry as well as computer digitized airway phonopneumonography recordings. Following initial measurements, all patients were treated with nebulized albuterol (0.25 mL in 2 mL of saline). Five minutes after completion of the nebulization all patients were reexamined and repeat pulmonary function tests were performed followed by CDAP recordings. In the study group of children, the mean pretreatment sound intensity level was 1,694 (range 557 to 4,950 SD +/- 745).(ABSTRACT TRUNCATED AT 250 WORDS)

  12. Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception

    DTIC Science & Technology

    2017-12-01

    computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650’s Intel...Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead

  13. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Methods of computational physics in the problem of mathematical interpretation of laser investigations

    NASA Astrophysics Data System (ADS)

    Brodyn, M. S.; Starkov, V. N.

    2007-07-01

    It is shown that in laser experiments performed with an 'imperfect' setup, where instrumental distortions are considerable, sufficiently accurate results can still be obtained with modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function, a 'sister' of the Gaussian curve, is particularly well suited to laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured one.

  14. Age-related differences in muscle fatigue vary by contraction type: a meta-analysis.

    PubMed

    Avin, Keith G; Law, Laura A Frey

    2011-08-01

    During senescence, despite the loss of strength (force-generating capability) associated with sarcopenia, muscle endurance may improve for isometric contractions. The purpose of this study was to perform a systematic meta-analysis of young versus older adults, considering likely moderators (ie, contraction type, joint, sex, activity level, and task intensity). A 2-stage systematic review identified potential studies from PubMed, CINAHL, PEDro, EBSCOhost: ERIC, EBSCOhost: Sportdiscus, and The Cochrane Library. Studies reporting fatigue tasks (voluntary activation) performed at a relative intensity in both young (18-45 years of age) and old (≥ 55 years of age) adults who were healthy were considered. Sample size, mean and variance outcome data (ie, fatigue index or endurance time), joint, contraction type, task intensity (percentage of maximum), sex, and activity levels were extracted. Effect sizes were (1) computed for all data points; (2) subgrouped by contraction type, sex, joint or muscle group, intensity, or activity level; and (3) further subgrouped between contraction type and the remaining moderators. Out of 3,457 potential studies, 46 publications (with 78 distinct effect size data points) met all inclusion criteria. A lack of available data limited subgroup analyses (ie, sex, intensity, joint), as did a disproportionate spread of data (most intensities ≥ 50% of maximum voluntary contraction). Overall, older adults were able to sustain relative-intensity tasks significantly longer or with less force decay than younger adults (effect size=0.49). However, this age-related difference was present only for sustained and intermittent isometric contractions, whereas this age-related advantage was lost for dynamic tasks. When controlling for contraction type, the additional modifiers played minor roles. Identifying muscle endurance capabilities in the older adult may provide an avenue to improve functional capabilities, despite a clearly established decrement in peak torque.

  15. A scientific workflow framework for (13)C metabolic flux analysis.

    PubMed

    Dalman, Tolga; Wiechert, Wolfgang; Nöh, Katharina

    2016-08-20

    Metabolic flux analysis (MFA) with (13)C labeling data is a high-precision technique to quantify intracellular reaction rates (fluxes). One of the major challenges of (13)C MFA is the interactivity of the computational workflow according to which the fluxes are determined from the input data (metabolic network model, labeling data, and physiological rates). Here, the workflow assembly is inevitably determined by the scientist who has to consider interacting biological, experimental, and computational aspects. Decision-making is context dependent and requires expertise, rendering an automated evaluation process hardly possible. Here, we present a scientific workflow framework (SWF) for creating, executing, and controlling on demand (13)C MFA workflows. (13)C MFA-specific tools and libraries, such as the high-performance simulation toolbox 13CFLUX2, are wrapped as web services and thereby integrated into a service-oriented architecture. Besides workflow steering, the SWF features transparent provenance collection and enables full flexibility for ad hoc scripting solutions. To handle compute-intensive tasks, cloud computing is supported. We demonstrate how the challenges posed by (13)C MFA workflows can be solved with our approach on the basis of two proof-of-concept use cases. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Multifractal Analysis of Seismically Induced Soft-Sediment Deformation Structures Imaged by X-Ray Computed Tomography

    NASA Astrophysics Data System (ADS)

    Nakashima, Yoshito; Komatsubara, Junko

    Unconsolidated soft sediments deform and mix complexly by seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains as strong X-ray absorbers in the deformed strata. Multifractal analysis was applied to the two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e. almost homogenized) strata. Computer simulations of deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from the multifractal spectra into almost monofractal spectra (i.e. almost convergence on a single point) with an increase in deformation/mixing intensity. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful to quantify the complexity of seismically induced SSDSs, standing as a novel method for the evaluation of cores for seismic risk assessment.

  17. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.

  18. Computer-based image analysis of one-dimensional electrophoretic gels used for the separation of DNA restriction fragments.

    PubMed Central

    Gray, A J; Beecher, D E; Olson, M V

    1984-01-01

    A stand-alone, interactive computer system has been developed that automates the analysis of ethidium bromide-stained agarose and acrylamide gels on which DNA restriction fragments have been separated by size. High-resolution digital images of the gels are obtained using a camera that contains a one-dimensional, 2048-pixel photodiode array that is mechanically translated through 2048 discrete steps in a direction perpendicular to the gel lanes. An automatic band-detection algorithm is used to establish the positions of the gel bands. A color-video graphics system, on which both the gel image and a variety of operator-controlled overlays are displayed, allows the operator to visualize and interact with critical stages of the analysis. The principal interactive steps involve defining the regions of the image that are to be analyzed and editing the results of the band-detection process. The system produces a machine-readable output file that contains the positions, intensities, and descriptive classifications of all the bands, as well as documentary information about the experiment. This file is normally further processed on a larger computer to obtain fragment-size assignments. Images PMID:6320097

  19. A Metric for Reducing False Positives in the Computer-Aided Detection of Breast Cancer from Dynamic Contrast-Enhanced Magnetic Resonance Imaging Based Screening Examinations of High-Risk Women.

    PubMed

    Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L

    2016-02-01

    Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.
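    For orientation, the sketch below contrasts the two voxelwise features compared in the study, under two assumptions: that SER takes its common DCE-MRI form (S_early - S_pre)/(S_late - S_pre), and that the second-order spatial derivative of relative signal intensity can be approximated by a Laplacian; the synthetic volumes and function names are illustrative only.

```python
# Hedged illustration of the two voxelwise features compared in the study.
# SER is taken in its common DCE-MRI form, (S_early - S_pre)/(S_late - S_pre);
# the "second-order spatial derivative of relative signal intensity" is
# approximated here with a Laplacian, which is only one possible choice.
import numpy as np
from scipy.ndimage import laplace

def ser_map(pre, early, late, eps=1e-6):
    return (early - pre) / (late - pre + eps)

def second_derivative_map(pre, post, eps=1e-6):
    relative = (post - pre) / (pre + eps)     # relative signal intensity
    return laplace(relative)                  # second-order spatial derivative

rng = np.random.default_rng(0)
pre = rng.uniform(100, 200, size=(64, 64, 16))      # synthetic pre-contrast volume
early = pre * rng.uniform(1.0, 2.0, size=pre.shape)  # synthetic early post-contrast
late = pre * rng.uniform(1.0, 2.0, size=pre.shape)   # synthetic late post-contrast
print(ser_map(pre, early, late).mean(), second_derivative_map(pre, early).std())
```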

  20. Automated grading system for evaluation of ocular redness associated with dry eye.

    PubMed

    Rodriguez, John D; Johnston, Patrick R; Ousler, George W; Smith, Lisa M; Abelson, Mark B

    2013-01-01

    We have observed that dry eye redness is characterized by a prominence of fine horizontal conjunctival vessels in the exposed ocular surface of the interpalpebral fissure, and have incorporated this feature into the grading of redness in clinical studies of dry eye. To develop an automated method of grading dry eye-associated ocular redness in order to expand on the clinical grading system currently used. Ninety nine images from 26 dry eye subjects were evaluated by five graders using a 0-4 (in 0.5 increments) dry eye redness (Ora Calibra™ Dry Eye Redness Scale [OCDER]) scale. For the automated method, the Opencv computer vision library was used to develop software for calculating redness and horizontal conjunctival vessels (noted as "horizontality"). From original photograph, the region of interest (ROI) was selected manually using the open source ImageJ software. Total average redness intensity (Com-Red) was calculated as a single channel 8-bit image as R - 0.83G - 0.17B, where R, G and B were the respective intensities of the red, green and blue channels. The location of vessels was detected by normalizing the blue channel and selecting pixels with an intensity of less than 97% of the mean. The horizontal component (Com-Hor) was calculated by the first order Sobel derivative in the vertical direction and the score was calculated as the average blue channel image intensity of this vertical derivative. Pearson correlation coefficients, accuracy and concordance correlation coefficients (CCC) were calculated after regression and standardized regression of the dataset. The agreement (both Pearson's and CCC) among investigators using the OCDER scale was 0.67, while the agreement of investigator to computer was 0.76. A multiple regression using both redness and horizontality improved the agreement CCC from 0.66 and 0.69 to 0.76, demonstrating the contribution of vessel geometry to the overall grade. Computer analysis of a given image has 100% repeatability and zero variability from session to session. This objective means of grading ocular redness in a unified fashion has potential significance as a new clinical endpoint. In comparisons between computer and investigator, computer grading proved to be more reliable than another investigator using the OCDER scale. The best fitting model based on the present sample, and usable for future studies, was [Formula: see text] is the predicted investigator grade, and [Formula: see text] and [Formula: see text] are logarithmic transformations of the computer calculated parameters COM-Hor and COM-Red. Considering the superior repeatability, computer automated grading might be preferable to investigator grading in multicentered dry eye studies in which the subtle differences in redness incurred by treatment have been historically difficult to define.
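    The sketch below follows the formulas stated in the abstract: Com-Red as the mean of R - 0.83G - 0.17B over the ROI, and Com-Hor as the average first-order vertical Sobel derivative of the normalized blue channel over a vessel mask (pixels below 97% of the mean). The exact masking and averaging details are an interpretation of the abstract, and the function names are mine.

```python
# Sketch of the two image scores described above, using OpenCV and NumPy.
# Com-Red follows the stated formula R - 0.83*G - 0.17*B; Com-Hor applies a
# first-order vertical Sobel derivative to the normalized blue channel over a
# vessel mask (pixels below 97% of the mean).  The masking/averaging details
# are an interpretation of the abstract, not a verified implementation.
import cv2
import numpy as np

def com_red(bgr_roi):
    b, g, r = cv2.split(bgr_roi.astype(np.float32))
    redness = r - 0.83 * g - 0.17 * b
    return float(np.mean(np.clip(redness, 0, 255)))

def com_hor(bgr_roi):
    blue = bgr_roi[:, :, 0].astype(np.float32)
    blue_norm = blue / (blue.max() + 1e-6)
    vessel_mask = blue_norm < 0.97 * blue_norm.mean()
    dy = cv2.Sobel(blue_norm, cv2.CV_32F, dx=0, dy=1, ksize=3)
    return float(np.mean(np.abs(dy[vessel_mask]))) if vessel_mask.any() else 0.0

roi = np.random.default_rng(0).uniform(0, 255, (120, 200, 3)).astype(np.uint8)
print(com_red(roi), com_hor(roi))
```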

  1. Confined compressive strength analysis can improve PDC bit selection. [Polycrystalline Diamond Compact

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fabain, R.T.

    1994-05-16

    A rock strength analysis program, through intensive log analysis, can quantify rock hardness in terms of confined compressive strength to identify intervals suited for drilling with polycrystalline diamond compact (PDC) bits. Additionally, knowing the confined compressive strength helps determine the optimum PDC bit for the intervals. Computing rock strength as confined compressive strength can more accurately characterize a rock's actual hardness downhole than other methods. The information can be used to improve bit selections and to help adjust drilling parameters to reduce drilling costs. Empirical data compiled from numerous field strength analyses have provided a guide to selecting PDC drill bits. A computer analysis program has been developed to aid in PDC bit selection. The program more accurately defines rock hardness in terms of confined strength, which approximates the in situ rock hardness downhole. Unconfined compressive strength is rock hardness at atmospheric pressure. The program uses sonic and gamma ray logs as well as numerous input data from mud logs. Within the range of lithologies for which the program is valid, rock hardness can be determined with improved accuracy. The program's output is typically graphed in a log format displaying raw data traces from well logs, computer-interpreted lithology, the calculated values of confined compressive strength, and various optional rock mechanic outputs.

  2. Improving the clinical correlation of multiple sclerosis black hole volume change by paired-scan analysis.

    PubMed

    Tam, Roger C; Traboulsee, Anthony; Riddehough, Andrew; Li, David K B

    2012-01-01

    The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately, and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes.

  3. Computer-based psychological treatment for comorbid depression and problematic alcohol and/or cannabis use: a randomized controlled trial of clinical efficacy.

    PubMed

    Kay-Lambkin, Frances J; Baker, Amanda L; Lewin, Terry J; Carr, Vaughan J

    2009-03-01

    To evaluate computer- versus therapist-delivered psychological treatment for people with comorbid depression and alcohol/cannabis use problems. Randomized controlled trial. Community-based participants in the Hunter Region of New South Wales, Australia. Ninety-seven people with comorbid major depression and alcohol/cannabis misuse. All participants received a brief intervention (BI) for depressive symptoms and substance misuse, followed by random assignment to: no further treatment (BI alone); or nine sessions of motivational interviewing and cognitive behaviour therapy (intensive MI/CBT). Participants allocated to the intensive MI/CBT condition were selected at random to receive their treatment 'live' (i.e. delivered by a psychologist) or via a computer-based program (with brief weekly input from a psychologist). Depression, alcohol/cannabis use and hazardous substance use index scores measured at baseline, and 3, 6 and 12 months post-baseline assessment. (i) Depression responded better to intensive MI/CBT compared to BI alone, with 'live' treatment demonstrating a strong short-term beneficial effect which was matched by computer-based treatment at 12-month follow-up; (ii) problematic alcohol use responded well to BI alone and even better to the intensive MI/CBT intervention; (iii) intensive MI/CBT was significantly better than BI alone in reducing cannabis use and hazardous substance use, with computer-based therapy showing the largest treatment effect. Computer-based treatment, targeting both depression and substance use simultaneously, results in at least equivalent 12-month outcomes relative to a 'live' intervention. For clinicians treating people with comorbid depression and alcohol problems, BIs addressing both issues appear to be an appropriate and efficacious treatment option. Primary care of those with comorbid depression and cannabis use problems could involve computer-based integrated interventions for depression and cannabis use, with brief regular contact with the clinician to check on progress.

  4. Computer vision approach to morphometric feature analysis of basal cell nuclei for evaluating malignant potentiality of oral submucous fibrosis.

    PubMed

    Muthu Rama Krishnan, M; Pal, Mousumi; Paul, Ranjan Rashmi; Chakraborty, Chandan; Chatterjee, Jyotirmoy; Ray, Ajoy K

    2012-06-01

    This research work presents a quantitative approach for analyzing histomorphometric features of basal cell nuclei, with respect to their size, shape and intensity of staining, comparing the surface epithelium of Oral Submucous Fibrosis showing dysplasia (OSFD) with that of Normal Oral Mucosa (NOM). The basal cells of the surface epithelium form the proliferative compartment for all biological activity, and their morphometric changes therefore reflect the underlying biological behavior in normal cellular function as well as in premalignant and malignant states. In view of this, the changes in shape, size and staining intensity of the nuclei in the basal cell layer of NOM and OSFD have been studied. Geometric, Zernike moment, Fourier descriptor (FD) based, and intensity based features are extracted for histomorphometric pattern analysis of the nuclei. All these features are statistically analyzed, along with 3D visualization, in order to discriminate the groups. The results show an increase in nuclear dimensions (area and perimeter) and shape parameters and a decrease in mean nuclear staining intensity in OSFD relative to NOM. Further, the selected features are fed to a Bayesian classifier to discriminate normal tissue and OSFD. The morphometric and intensity features provide a sensitivity of 100%, a specificity of 98.53% and a positive predictive accuracy of 97.35%. This comparative quantitative characterization of basal cell nuclei will be of immense help to oral onco-pathologists, researchers and clinicians in assessing the biological behavior of OSFD, especially its premalignant and malignant potential. As a future direction, a more extensive study involving a larger number of subjects is envisaged.

  5. An Annotated Bibliography Of U.S. Army Natick Anthropology (1947-1991).

    DTIC Science & Technology

    1991-08-01

    designers of lasts and shoes for the Army. In order to provide greater detail and also more directly applicable information, an intensive analysis of the...individual simultaneous computations. These five measurements are: length of cranium 4 cm above Na-S, sinus breadth, total facial height, bigonial and...implications of using the Army’s personal equipment are examined in light of the present and projected demographic composition of the Army active duty

  6. Conceptual Design Oriented Wing Structural Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Lau, May Yuen

    1996-01-01

    Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal could be tradeoffs between maximum structural integrity, minimum aerodynamic drag, or maximum stability and control, many times achieved separately. Bringing all of these factors into an iterative preliminary design procedure was time consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary-analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite element based conceptual design oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.

  7. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  8. Analysis of positron lifetime spectra in polymers

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.

    1988-01-01

    A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and the intensities of various components, which are readily obtainable by a standard spectrum stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least squares analysis to compute the estimates that conform to a global error minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to the previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal ion containing Epon-828 samples. The results are described.
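    PAPLS itself is not reproduced here; the sketch below only illustrates the general approach the abstract describes, a sum of exponential lifetime components convolved with the measured resolution function and refined by nonlinear least squares, with an assumed Gaussian resolution function, component count and starting values.

```python
# Minimal sketch of the fitting approach the abstract describes: a sum of
# exponential lifetime components convolved with the instrument resolution
# function, refined by nonlinear least squares.  The resolution function,
# component count and starting values here are all illustrative.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 20.0, 0.05)                      # ns, channel centres
resolution = np.exp(-0.5 * ((t - 1.0) / 0.15) ** 2)
resolution /= resolution.sum()                    # toy Gaussian resolution function

def model(t, i1, tau1, i2, tau2):
    decay = i1 * np.exp(-t / tau1) + i2 * np.exp(-t / tau2)
    return np.convolve(decay, resolution, mode="full")[: t.size]

# Synthetic "measured" spectrum with Poisson-like counting noise.
rng = np.random.default_rng(2)
true = model(t, 800.0, 0.4, 200.0, 2.0)
data = rng.poisson(true).astype(float)

p0 = [600.0, 0.3, 300.0, 1.5]                     # stripping-style initial estimates
popt, _ = curve_fit(model, t, data, p0=p0)
print("fitted intensities and lifetimes:", popt)
```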

  9. Computer analysis of the leaf movements of pinto beans.

    PubMed

    Hoshizaki, T; Hamner, K C

    1969-07-01

    Computer analysis was used for the detection of rhythmic components and the estimation of period length in leaf movement records. The results of this study indicated that spectral analysis can be profitably used to determine rhythmic components in leaf movements.In Pinto bean plants (Phaseolus vulgaris L.) grown for 28 days under continuous light of 750 ft-c and at a constant temperature of 28 degrees , there was only 1 highly significant rhythmic component in the leaf movements. The period of this rhythm was 27.3 hr. In plants grown at 20 degrees , there were 2 highly significant rhythmic components: 1 of 13.8 hr and a much stronger 1 of 27.3 hr. At 15 degrees , the highly significant rhythmic components were also 27.3 and 13.8 hr in length but were of equal intensity. Random movements less than 9 hr in length became very pronounced at this temperature. At 10 degrees , no significant rhythm was found in the leaf movements. At 5 degrees , the leaf movements ceased within 1 day.
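    As a generic illustration of the spectral-analysis step (not the authors' code), the sketch below builds a synthetic leaf-angle series with a roughly 27 h rhythm and reads the dominant period off a periodogram peak; the sampling interval and rhythm parameters are illustrative.

```python
# Generic illustration of how spectral analysis picks out rhythmic components:
# a synthetic "leaf angle" series with a ~27 h rhythm is analysed with a
# periodogram and the dominant period is read off the peak.
import numpy as np
from scipy.signal import periodogram

dt_hr = 0.5                                        # one sample every 30 min
t = np.arange(0, 28 * 24, dt_hr)                   # 28 days of observations
rng = np.random.default_rng(3)
angle = 10 * np.sin(2 * np.pi * t / 27.3) + rng.normal(0, 3, t.size)

freqs, power = periodogram(angle, fs=1.0 / dt_hr)  # frequencies in cycles/hour
dominant = freqs[np.argmax(power[1:]) + 1]         # skip the zero-frequency term
print(f"dominant period: {1.0 / dominant:.1f} h")
```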

  10. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  11. Mathematical and computational aspects of nonuniform frictional slip modeling

    NASA Astrophysics Data System (ADS)

    Gorbatikh, Larissa

    2004-07-01

    A mechanics-based model of non-uniform frictional sliding is studied from the mathematical/computational analysis point of view. This problem is of key importance for a number of applications (particularly geomechanical ones) in which material interfaces undergo partial frictional sliding under compression and shear. We show that the problem reduces to Dirichlet's problem for monotonic loading and to Riemann's problem for cyclic loading. The problem may look like a traditional crack interaction problem; however, it is complicated by the fact that the locations of the n sliding intervals are not known. They are to be determined from the condition on the stress intensity factors, KII = 0, at the ends of the sliding zones. Computationally, the problem reduces to solving a system of 2n coupled non-linear algebraic equations involving singular integrals with unknown limits of integration.

  12. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    NASA Astrophysics Data System (ADS)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
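    A hedged modern analogue of the message-passing pattern described above is sketched below using mpi4py, which is only a convenient stand-in; the original AppleSeed codes were Fortran77 and C written against a subset of MPI.

```python
# Tiny message-passing illustration in the spirit of the cluster described
# above: each rank computes a partial sum and the results are combined with a
# reduction.  mpi4py is used here only as a convenient stand-in for the
# Fortran77/C MPI subset mentioned in the abstract.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank handles an interleaved slice of the "particles".
n_total = 1_000_000
local = np.arange(rank, n_total, size, dtype=np.float64)
local_sum = np.sum(np.sin(local))

total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum over {size} ranks: {total:.6f}")
# Run with e.g.:  mpiexec -n 4 python appleseed_demo.py
```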

  13. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
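    As a concrete example of one of the computer-intensive methods mentioned, the sketch below computes a percentile-bootstrap confidence interval for a scalar statistic; the data, statistic and replicate count are illustrative.

```python
# Minimal percentile-bootstrap sketch for a scalar statistic (here the mean):
# resample with replacement many times, recompute the statistic, and read the
# confidence interval off the empirical quantiles.
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

sample = np.random.default_rng(1).gamma(shape=2.0, scale=3.0, size=50)
low, high = bootstrap_ci(sample)
print(f"95% bootstrap CI for the mean: [{low:.2f}, {high:.2f}]")
```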

  14. The NASA Energy Conservation Program

    NASA Technical Reports Server (NTRS)

    Gaffney, G. P.

    1977-01-01

    Large energy-intensive research and test equipment at NASA installations is identified, and methods for reducing energy consumption outlined. However, some of the research facilities are involved in developing more efficient, fuel-conserving aircraft, and tradeoffs between immediate and long-term conservation may be necessary. Major programs for conservation include: computer-based systems to automatically monitor and control utility consumption; a steam-producing solid waste incinerator; and a computer-based cost analysis technique to engineer more efficient heating and cooling of buildings. Alternate energy sources in operation or under evaluation include: solar collectors; electric vehicles; and ultrasonically emulsified fuel to attain higher combustion efficiency. Management support, cooperative participation by employees, and effective reporting systems for conservation programs, are also discussed.

  15. The QuakeSim Project: Numerical Simulations for Active Tectonic Processes

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry

    2004-01-01

    In order to develop a solid earth science framework for understanding and studying active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the-art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined, accessible data formats and code protocols as inputs to the simulations. These are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear, resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.

  16. Tropical Cyclone Intensity Estimation Using Deep Convolutional Neural Networks

    NASA Technical Reports Server (NTRS)

    Maskey, Manil; Cecil, Dan; Ramachandran, Rahul; Miller, Jeffrey J.

    2018-01-01

    Estimating tropical cyclone intensity using only satellite imagery is a challenging problem. The Dvorak technique has been applied successfully for more than 30 years and, with some modifications and improvements, is still used worldwide for tropical cyclone intensity estimation. A number of semi-automated techniques have been derived from the original Dvorak technique. However, these techniques suffer from subjective bias, as is evident from the most recent estimations on October 10, 2017 at 1500 UTC for Tropical Storm Ophelia: the Dvorak intensity estimates ranged from T2.3/33 kt (Tropical Cyclone Number 2.3/33 knots) from UW-CIMSS (University of Wisconsin-Madison - Cooperative Institute for Meteorological Satellite Studies) to T3.0/45 kt from TAFB (the National Hurricane Center's Tropical Analysis and Forecast Branch) to T4.0/65 kt from SAB (NOAA/NESDIS Satellite Analysis Branch). In this particular case, two human experts at TAFB and SAB differed by 20 knots in their Dvorak analyses, and the automated version at the University of Wisconsin was 12 knots lower than either of them. The National Hurricane Center (NHC) estimates about 10-20 percent uncertainty in its post analysis when only satellite-based estimates are available. The success of the Dvorak technique proves that spatial patterns in infrared (IR) imagery strongly relate to tropical cyclone intensity. This study aims to utilize deep learning, the current state of the art in pattern recognition and image recognition, to address the need for an automated and objective tropical cyclone intensity estimation. Deep learning uses multi-layer neural networks consisting of several layers of simple computational units, and learns discriminative features without relying on a human expert to identify which features are important. Our study mainly focuses on convolutional neural networks (CNNs), a class of deep learning models, to develop an objective tropical cyclone intensity estimation. A CNN is a supervised learning model requiring a large amount of training data. Since archives of intensity data and tropical-cyclone-centric satellite images are openly available, the training data are easily created by combining the two. Results, case studies, prototypes, and advantages of this approach will be discussed.
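    The sketch below is a hedged illustration of the kind of model the study describes: a small convolutional network regressing storm intensity in knots from a single-channel IR image. The architecture, image size and toy data are illustrative and are not the network used in the study.

```python
# Hedged sketch of the kind of model the study describes: a small CNN that
# regresses storm intensity (knots) from a single-channel IR image.  The
# architecture, image size and toy data are illustrative, not the paper's.
import torch
import torch.nn as nn

class IntensityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)          # single regression output: knots

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = IntensityCNN()
images = torch.randn(8, 1, 128, 128)          # batch of IR crops (toy data)
targets = torch.rand(8, 1) * 120 + 20         # toy intensities, 20-140 kt
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()
print(f"toy MSE loss: {loss.item():.1f}")
```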

  17. User's manual for a computer program for simulating intensively managed allowable cut.

    Treesearch

    Robert W. Sassaman; Ed Holt; Karl Bergsvik

    1972-01-01

    Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....

  18. PNNL's Data Intensive Computing research battles Homeland Security threats

    ScienceCinema

    David Thurman; Joe Kielman; Katherine Wolf; David Atkinson

    2018-05-11

    The Pacific Northwest National Laboratory's (PNNL's) approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architecture, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  19. PNNL pushing scientific discovery through data intensive computing breakthroughs

    ScienceCinema

    Deborah Gracio; David Koppenaal; Ruby Leung

    2018-05-18

    The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  20. Self-Administered Cued Naming Therapy: A Single-Participant Investigation of a Computer-Based Therapy Program Replicated in Four Cases

    ERIC Educational Resources Information Center

    Ramsberger, Gail; Marie, Basem

    2007-01-01

    Purpose: This study examined the benefits of a self-administered, clinician-guided, computer-based, cued naming therapy. Results of intense and nonintense treatment schedules were compared. Method: A single-participant design with multiple baselines across behaviors and varied treatment intensity for 2 trained lists was replicated over 4…

  1. Accurate optimization of amino acid form factors for computing small-angle X-ray scattering intensity of atomistic protein structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Dudu; Yang, Sichun; Lu, Lanyuan

    2016-06-20

    Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.
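    Once per-residue form factors f_i(q) are available, a coarse-grained SAXS profile can be computed with the Debye formula I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij)/(q r_ij). The sketch below uses flat placeholder form factors and random bead coordinates; a real calculation would use the EDM-optimized factors and include the hydration-shell and excluded-volume corrections described in the abstract.

```python
# Debye-formula sketch for a coarse-grained bead model:
#     I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij) / (q r_ij).
# Placeholder flat form factors and random bead coordinates are used here.
import numpy as np

def debye_intensity(coords, form_factors, q_values):
    diff = coords[:, None, :] - coords[None, :, :]
    r_ij = np.linalg.norm(diff, axis=-1)                 # pairwise distances
    intensity = np.empty_like(q_values)
    for k, q in enumerate(q_values):
        ff = form_factors[:, k]                          # f_i at this q
        sinc = np.sinc(q * r_ij / np.pi)                 # sin(x)/x with x = q*r_ij
        intensity[k] = ff @ sinc @ ff
    return intensity

rng = np.random.default_rng(4)
coords = rng.normal(scale=15.0, size=(50, 3))            # 50 CG beads (angstroms)
q = np.linspace(0.01, 0.5, 40)                           # scattering vector, 1/angstrom
ff = np.full((50, q.size), 10.0)                         # flat placeholder f_i(q)
print(debye_intensity(coords, ff, q)[:5])
```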

  2. Computer-assisted analysis of the vascular endothelial cell motile response to injury.

    PubMed

    Askey, D B; Herman, I M

    1988-12-01

    We have developed an automated, user-friendly method to track vascular endothelial cell migration in vitro using an IBM PC/XT with MS DOS. Analog phase-contrast images of the bovine aortic endothelial cells are converted into digital images (8 bit, 250 x 240 pixel resolution) using a Tecmar Video VanGogh A/D board. Digitized images are stored at selected time points following mechanical injury in vitro. FORTRAN and assembly language subroutines have been implemented to automatically detect the wound edge and the edge of each cell nucleus in the phase-contrast, light-microscope field. Detection of the wound edge is accomplished by intensity thresholding following noise reduction in the image and subsequent sampling of the wound. After the range of wound intensities is determined, the entire image is sampled and a histogram of intensities is formed. The histogram peak corresponding to the wound intensities is subtracted, leaving a histogram peak that gives the range of intensities corresponding to the cell nuclei. Rates of cell migration, as well as cellular trajectories and cell surface areas, can be automatically quantitated and analyzed. This inexpensive, automated cell-tracking system should be widely applicable in a variety of cell biologic applications.

  3. Measurements of geomagnetically trapped alpha particles, 1968-1970. I - Quiet time distributions

    NASA Technical Reports Server (NTRS)

    Krimigis, S. M.; Verzariu, P.

    1973-01-01

    Results are presented of observations of geomagnetically trapped alpha particles over the energy range from 1.18 to 8 MeV, performed with the aid of the Injun 5 polar-orbiting satellite during the period from September 1968 to May 1970. Following a presentation of a time history covering this entire period, a detailed analysis is made of the magnetically quiet period from Feb. 11 to 28, 1970. During this period the alpha particle fluxes and the intensity ratio of alpha particles to protons attained their lowest values in approximately 20 months; the alpha particle intensity versus L profile was most similar to the proton profile at the same energy per nucleon interval; the intensity ratio was nearly constant as a function of L in the same energy per nucleon representation, but rose sharply with L when computed in the same total energy interval; the variation of alpha particle intensity with B suggested a steep angular distribution at small equatorial pitch angles, while the intensity ratio showed little dependence on B; and the alpha particle spectral parameter showed a markedly different dependence on L from the equivalent one for protons.

  4. Low-level processing for real-time image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
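
    The chain-code representation mentioned above is conventionally the Freeman 8-direction code; a minimal sketch (not the system's microprocessor implementation) is shown below, assuming an ordered list of 8-connected edge pixels.

    ```python
    # Freeman 8-direction chain code: 0 = east, directions counted counter-clockwise.
    DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
                  (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def chain_code(points):
        """Encode an ordered list of 8-connected (x, y) edge pixels."""
        code = []
        for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
            code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
        return code

    # Example: a short diagonal-then-horizontal edge segment
    print(chain_code([(0, 0), (1, 1), (2, 1), (3, 1)]))  # -> [1, 0, 0]
    ```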

  5. Combining Acceleration and Displacement Dependent Modal Frequency Responses Using an MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1996-01-01

    Solving for dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically-indeterminate interfaces, spacecraft contractors prefer to develop models which have response transformation matrices developed for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot be used to combine acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor and computer resource intensive. Taking advantage of the analytical and computer resource efficiencies inherent within MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for performing spacecraft mode acceleration data recovery. The Alter has been used successfully to efficiently solve a common aerospace buffeting wind analysis.

  6. [Computer-assisted image processing for quantifying histopathologic variables in the healing of colonic anastomosis in dogs].

    PubMed

    Novelli, M D; Barreto, E; Matos, D; Saad, S S; Borra, R C

    1997-01-01

    The authors present experimental results of the computerized quantification of tissue structures involved in the reparative process of colonic anastomoses performed by manual suture and by biofragmentable ring. The quantified variables were: oedema fluid, myofiber tissue, blood vessels and cellular nuclei. Image processing software developed at the Laboratório de Informática Dedicado à Odontologia (LIDO) was used to quantify the pathognomonic alterations of the inflammatory process in colonic anastomoses performed in 14 dogs. The results were compared with those obtained through traditional diagnosis by two pathologists, which served as a counterproof measure. The criteria for these diagnoses were defined in levels (absent, light, moderate and intense) that were compared with the analysis performed by the computer. There was a statistically significant difference between the two techniques: the biofragmentable ring technique exhibited less oedema fluid, more organized myofiber tissue and a higher number of elongated cellular nuclei than the manual suture technique. The analysis of histometric variables through computational image processing was considered an efficient and powerful way to quantify the main inflammatory and reparative tissue changes.

  7. Unsteady thermal blooming of intense laser beams

    NASA Astrophysics Data System (ADS)

    Ulrich, J. T.; Ulrich, P. B.

    1980-01-01

    A four-dimensional (three space dimensions plus time) computer program has been written to compute the nonlinear heating of a gas by an intense laser beam. Unsteady, transient cases can be solved, with no assumption of a steady state required. The transient results are shown to asymptotically approach the steady-state results calculated by the standard three-dimensional thermal blooming computer codes. The report discusses the physics of the laser-absorber interaction, the numerical approximations used, and comparisons with experimental data. A flowchart is supplied in the appendix to the report.

  8. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2017-08-01

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  9. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
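
    The two patent abstracts above do not spell out the normalization between pixel counts and irradiance. One plausible reading, sketched below with entirely illustrative names, uses the Sun image as a radiometric reference so that receiver-image counts can be scaled to W/m^2; this is a hedged illustration and not the patented procedure.

    ```python
    import numpy as np

    def irradiance_map(receiver_img, sun_img, dni_w_per_m2):
        """Rough irradiance map (W/m^2) from relative pixel intensities.

        Assumes both images were taken with the same camera, exposure, and filters,
        so that pixel counts are proportional to radiant flux, and that the mean
        Sun-disk count corresponds to the known direct normal irradiance (DNI).
        The normalization is illustrative, not the patented method.
        """
        counts_to_irradiance = dni_w_per_m2 / sun_img.mean()
        return receiver_img.astype(float) * counts_to_irradiance
    ```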

  10. Adaptive quantification and longitudinal analysis of pulmonary emphysema with a hidden Markov measure field model.

    PubMed

    Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F

    2014-07-01

    The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols, and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied on a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.
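
    For reference, the baseline threshold metric the abstract contrasts against (the proportional volume of lung voxels below a fixed attenuation, often -950 HU) is simple to compute; a minimal sketch follows. The paper's hidden Markov measure field model is considerably more involved and is not reproduced here.

    ```python
    import numpy as np

    def emphysema_index(hu_volume, lung_mask, threshold_hu=-950):
        """Percentage of lung voxels below an attenuation threshold (%LAA).

        hu_volume : 3-D array of CT attenuation values in Hounsfield units
        lung_mask : boolean array of the same shape selecting lung parenchyma
        """
        lung_voxels = hu_volume[lung_mask]
        return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size
    ```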

  11. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    NASA Astrophysics Data System (ADS)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long duration fluid-thermal-structural simulations.

  12. Online System for Faster Multipoint Linkage Analysis via Parallel Execution on Thousands of Personal Computers

    PubMed Central

    Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.

    2006-01-01

    Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644
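
    The multipoint pedigree likelihoods parallelized by SUPERLINK-ONLINE are far more elaborate than can be shown here; purely as background, a two-point, phase-known LOD score reduces to counting recombinant and non-recombinant meioses, as in the sketch below (illustrative, not the SUPERLINK algorithm).

    ```python
    import numpy as np

    def two_point_lod(recombinants, nonrecombinants,
                      thetas=np.linspace(0.001, 0.5, 500)):
        """Maximum two-point LOD score over a grid of recombination fractions.

        LOD(theta) = log10[ theta^R * (1-theta)^NR / 0.5^(R+NR) ]
        """
        r, nr = recombinants, nonrecombinants
        lod = (r * np.log10(thetas) + nr * np.log10(1 - thetas)
               - (r + nr) * np.log10(0.5))
        best = lod.argmax()
        return thetas[best], lod[best]

    print(two_point_lod(1, 9))  # roughly (0.1, 1.6): suggestive, not significant linkage
    ```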

  13. Opportunities and challenges for the life sciences community.

    PubMed

    Kolker, Eugene; Stewart, Elizabeth; Ozdemir, Vural

    2012-03-01

    Twenty-first century life sciences have transformed into data-enabled (also called data-intensive, data-driven, or big data) sciences. They principally depend on data-, computation-, and instrumentation-intensive approaches to seek comprehensive understanding of complex biological processes and systems (e.g., ecosystems, complex diseases, environmental, and health challenges). Federal agencies including the National Science Foundation (NSF) have played and continue to play an exceptional leadership role by innovatively addressing the challenges of data-enabled life sciences. Yet even more is required not only to keep up with the current developments, but also to pro-actively enable future research needs. Straightforward access to data, computing, and analysis resources will enable true democratization of research competitions; thus investigators will compete based on the merits and broader impact of their ideas and approaches rather than on the scale of their institutional resources. This is the Final Report for Data-Intensive Science Workshops DISW1 and DISW2. The first NSF-funded Data Intensive Science Workshop (DISW1, Seattle, WA, September 19-20, 2010) overviewed the status of the data-enabled life sciences and identified their challenges and opportunities. This served as a baseline for the second NSF-funded DIS workshop (DISW2, Washington, DC, May 16-17, 2011). Based on the findings of DISW2 the following overarching recommendation to the NSF was proposed: establish a community alliance to be the voice and framework of the data-enabled life sciences. After this Final Report was finished, Data-Enabled Life Sciences Alliance (DELSA, www.delsall.org ) was formed to become a Digital Commons for the life sciences community.

  14. Opportunities and Challenges for the Life Sciences Community

    PubMed Central

    Stewart, Elizabeth; Ozdemir, Vural

    2012-01-01

    Abstract Twenty-first century life sciences have transformed into data-enabled (also called data-intensive, data-driven, or big data) sciences. They principally depend on data-, computation-, and instrumentation-intensive approaches to seek comprehensive understanding of complex biological processes and systems (e.g., ecosystems, complex diseases, environmental, and health challenges). Federal agencies including the National Science Foundation (NSF) have played and continue to play an exceptional leadership role by innovatively addressing the challenges of data-enabled life sciences. Yet even more is required not only to keep up with the current developments, but also to pro-actively enable future research needs. Straightforward access to data, computing, and analysis resources will enable true democratization of research competitions; thus investigators will compete based on the merits and broader impact of their ideas and approaches rather than on the scale of their institutional resources. This is the Final Report for Data-Intensive Science Workshops DISW1 and DISW2. The first NSF-funded Data Intensive Science Workshop (DISW1, Seattle, WA, September 19–20, 2010) overviewed the status of the data-enabled life sciences and identified their challenges and opportunities. This served as a baseline for the second NSF-funded DIS workshop (DISW2, Washington, DC, May 16–17, 2011). Based on the findings of DISW2 the following overarching recommendation to the NSF was proposed: establish a community alliance to be the voice and framework of the data-enabled life sciences. After this Final Report was finished, Data-Enabled Life Sciences Alliance (DELSA, www.delsall.org) was formed to become a Digital Commons for the life sciences community. PMID:22401659

  15. An updated climatology of explosive cyclones using alternative measures of cyclone intensity

    NASA Astrophysics Data System (ADS)

    Hanley, J.; Caballero, R.

    2009-04-01

    Using a novel cyclone tracking and identification method, we compute a climatology of explosively intensifying cyclones or 'bombs' using the ERA-40 and ERA-Interim datasets. Traditionally, 'bombs' have been identified using a central pressure deepening rate criterion (Sanders and Gyakum, 1980). We investigate alternative methods of capturing such extreme cyclones. These methods include using the maximum wind contained within the cyclone, and using a potential vorticity column measure within such systems, as a measure of intensity. Using the different measures of cyclone intensity, we construct and intercompare maps of peak cyclone intensity. We also compute peak intensity probability distributions, and assess the evidence for the bi-modal distribution found by Roebber (1984). Finally, we address the question of the relationship between storm intensification rate and storm destructiveness: are 'bombs' the most destructive storms?
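
    The traditional Sanders and Gyakum (1980) criterion referenced above normalizes the 24-h central-pressure fall by latitude: a cyclone is a 'bomb' if the fall reaches at least 1 bergeron, i.e. 24 hPa per 24 h scaled by sin(lat)/sin(60 deg). A minimal sketch of that normalization (not the authors' tracking method) is shown below.

    ```python
    import numpy as np

    def deepening_rate_bergeron(p_start_hpa, p_end_hpa, hours, lat_deg):
        """Latitude-normalized deepening rate in bergerons (Sanders & Gyakum, 1980).

        1 bergeron corresponds to a 24 hPa / 24 h pressure fall at 60 deg latitude;
        a cyclone reaching >= 1 bergeron is classified as a 'bomb'.
        """
        fall_24h = (p_start_hpa - p_end_hpa) * 24.0 / hours          # hPa per 24 h
        return fall_24h * np.sin(np.radians(60.0)) / (24.0 * np.sin(np.radians(lat_deg)))

    # 976 -> 950 hPa in 24 h at 45 N gives about 1.33 bergerons: an explosive cyclone
    print(deepening_rate_bergeron(976, 950, 24, 45))
    ```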

  16. Long live the Data Scientist, but can he/she persist?

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.

    2011-12-01

    In recent years the fourth paradigm of data intensive science has slowly taken hold as the increased capacity of instruments and an increasing number of instruments (in particular sensor networks) have changed how fundamental research is undertaken. Most modern scientific research is about digital capture of data direct from instruments, processing it by computers, storing the results on computers and only publishing a small fraction of data in hard copy publications. At the same time, the rapid increase in capacity of supercomputers, particularly at petascale, means that far larger data sets can be analysed, and at greater resolution, than previously possible. The new cloud computing paradigm, which allows distributed data, software and compute resources to be linked by seamless workflows, is creating new opportunities for processing high volumes of data for an increasingly large number of researchers. However, to take full advantage of these compute resources, data sets for analysis have to be aggregated from multiple sources to create high performance data sets. These new technology developments require that scientists become more skilled in data management and/or have a higher degree of computer literacy. In almost every science discipline there is now an X-informatics branch and a computational X branch (e.g., Geoinformatics and Computational Geoscience): both require a new breed of researcher that has skills in the science fundamentals and also knowledge of some ICT aspects (computer programming, database design and development, data curation, software engineering). People that can operate in both science and ICT are increasingly known as 'data scientists'. Data scientists are a critical element of many large scale earth and space science informatics projects, particularly those that are tackling current grand challenges at an international level on issues such as climate change, hazard prediction and sustainable development of our natural resources. These projects by their very nature require the integration of multiple digital data sets from multiple sources. Often the preparation of the data for computational analysis can take months and requires painstaking attention to detail to ensure that anomalies identified are real and are not just artefacts of the data preparation and/or the computational analysis. Although data scientists are increasingly vital to successful data intensive earth and space science projects, unless they are recognised for their capabilities in both the science and the computational domains they are likely to migrate to either a science role or an ICT role as their career advances. Most reward and recognition systems do not recognise those with skills in both; hence, getting trained data scientists to persist beyond one or two projects can be a challenge. Those data scientists that persist in the profession are characteristically committed and enthusiastic people who have the support of their organisations to take on this role. They also tend to be people who share developments and are critical to the success of the open source software movement. However, the fact remains that survival of the data scientist as a species is threatened unless something is done to recognise their invaluable contributions to the new fourth paradigm of science.

  17. Influence of educational attainment on pain intensity and disability in patients with lumbar spinal stenosis: mediation effect of pain catastrophizing.

    PubMed

    Kim, Ho-Joong; Kim, Sung-Chan; Kang, Kyoung-Tak; Chang, Bong-Soon; Lee, Choon-Ki; Yeom, Jin S

    2014-05-01

    Level IV, prospective case series. To investigate the influence of educational attainment on the level of pain intensity and disability in patients with lumbar spinal stenosis (LSS) and determine how coping behavior, such as catastrophizing, may mediate the association between educational attainment and clinical impairments. Educational attainment has been thought to influence disability caused by chronic painful disease, mediated by pain behavior or a coping strategy such as catastrophizing. Nevertheless, little is known about the role of educational attainment in pain intensity or disability related to LSS. A total of 155 patients who were diagnosed with degenerative LSS participated in the study. Data on detailed medical history, physical examination, and a series of questionnaires were collected, including the pain catastrophizing scale, Oswestry Disability Index, and visual analogue pain scale for back and leg pain. For measures of socioeconomic status, educational attainment and occupation were assessed. Radiological analysis was performed using magnetic resonance images and computed tomographic scans. After adjustment for covariates, multivariate regression analysis was used to assess each component of the proposed mediation models among the visual analogue pain scale for back/leg pain, Oswestry Disability Index, the level of education, occupation and the pain catastrophizing scale. Mediation was also assessed by the bootstrapping technique. Educational attainment was negatively correlated with pain intensity, disability, and catastrophizing. Pain catastrophizing was also significantly correlated with disability and pain intensity for back/leg pain in the patients with LSS. In the relationship among variables, the mediation analysis with bootstrapping clearly showed the role of catastrophizing in the mediation between the visual analogue pain scale for back pain/leg pain, Oswestry Disability Index, and the level of education. This study demonstrated that lower educational attainment was associated with increased pain intensity and disability in patients with LSS, which was mediated by the coping mechanism, catastrophizing.

  18. The Magellan Final Report on Cloud Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coghlan, Susan; Yelick, Katherine

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid-range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  19. Neonatal Intensive Care Unit Nurses Working in an Open Ward: Stress and Work Satisfaction.

    PubMed

    Lavoie-Tremblay, Mélanie; Feeley, Nancy; Lavigne, Geneviève L; Genest, Christine; Robins, Stéphanie; Fréchette, Julie

    2016-01-01

    There is some research on the impact of open-ward unit design on the health of babies and the stress experienced by parents and nurses in neonatal intensive care units. However, few studies have explored the factors associated with nurse stress and work satisfaction among nurses practicing in open-ward neonatal intensive care units. The purpose of this study was to examine what factors are associated with nurse stress and work satisfaction among nurses practicing in an open-ward neonatal intensive care unit. A cross-sectional correlational design was used in this study. Participants were nurses employed in a 34-bed open-ward neonatal intensive care unit in a major university-affiliated hospital in Montréal, Quebec, Canada. A total of 94 nurses were eligible, and 86 completed questionnaires (91% response rate). Descriptive statistics were computed to describe the participants' characteristics. To identify factors associated with nurse stress and work satisfaction, correlational analysis and multiple regression analyses were performed with the Nurse Stress Scale and the Global Work Satisfaction scores as the dependent variables. Different factors predict neonatal intensive care unit nurses' stress and job satisfaction, including support, family-centered care, performance obstacles, work schedule, education, and employment status. In order to provide neonatal intensive care units nurses with a supportive environment, managers can provide direct social support to nurses and influence the culture around teamwork.

  20. "You are free to set your own hours": governing worker productivity and health through flexibility and resilience.

    PubMed

    MacEachen, Ellen; Polzer, Jessica; Clarke, Judy

    2008-03-01

    Flexible work is now endemic in modern economies. A growing literature both praises work flexibility for accommodating employees' needs and criticizes it for fueling contingency and job insecurity. Although studies have identified varied effects of flexible work, questions remain about the workplace dimensions of flexibility and how occupational workplace health is managed in these workplaces. This paper presents findings from a qualitative study of how managers in the computer software industry situate workplace flexibility and approach worker health. In-depth interviews were conducted with managers (and some workers) at 30 firms in Ontario, Canada. Using a critical discourse analysis approach, we examine managers' optimistic descriptions of flexibility which emphasize how flexible work contributes to workers' life balance. We then contrast this with managers' depictions of flexibility work practices as intense and inescapable. We suggest that the discourse of flexibility, and the work practices they foster, make possible and reinforce an increased intensity of work that is driven by the demands of technological pace and change that characterize the global information technology and computer software industries. Finally, we propose that flexible knowledge work has led to a re-framing of occupational health management involving a focus on what we call "strategies of resilience" that aim to buttress workers' capacities to withstand intensive and uncertain working conditions.

  1. surf3d: A 3-D finite-element program for the analysis of surface and corner cracks in solids subjected to mode-1 loadings

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1993-01-01

    A computer program, surf3d, that uses the 3D finite-element method to calculate the stress-intensity factors for surface, corner, and embedded cracks in finite-thickness plates with and without circular holes, was developed. The cracks are assumed to be either elliptic or part-elliptic in shape. The computer program uses eight-noded hexahedral elements to model the solid. The program uses a skyline storage and solver. The stress-intensity factors are evaluated using the force method, the crack-opening displacement method, and the 3-D virtual crack closure method. The manual describes the input to and output of the surf3d program, demonstrates the use of the program, and describes the calculation of the stress-intensity factors. Several examples with sample data files are included with the manual. To facilitate modeling of the user's crack configuration and loading, a companion preprocessor program, gensurf, that generates the data for surf3d was also developed. The gensurf program is a three-dimensional mesh generator that requires minimal input and builds a complete data file for surf3d. The program surf3d is operational on Unix machines such as CRAY Y-MP, CRAY-2, and Convex C-220.
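
    As an illustration of the crack-opening displacement method listed above (not necessarily surf3d's exact implementation), the mode-I stress-intensity factor can be back-extrapolated from near-tip crack-face half-openings under plane strain via K_I = E v / [4(1 - nu^2)] * sqrt(2*pi/r):

    ```python
    import numpy as np

    def k1_from_cod(r, v, E, nu):
        """Mode-I stress-intensity estimates from near-tip crack-face half-openings.

        r : distances behind the crack tip (m)
        v : corresponding half crack-opening displacements (m), plane strain
        Returns K_I estimates (Pa*sqrt(m)); extrapolating them to r -> 0 gives K_I.
        """
        r, v = np.asarray(r, float), np.asarray(v, float)
        return E * v / (4.0 * (1.0 - nu**2)) * np.sqrt(2.0 * np.pi / r)
    ```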

  2. Structural Weight Estimation for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd

    2002-01-01

    This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist on several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modifications of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance are necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power, and platform-independent programming languages such as Java provide new means to create greater depth of analysis tools which can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.

  3. Study of Submicron Particle Size Distributions by Laser Doppler Measurement of Brownian Motion.

    DTIC Science & Technology

    1984-10-29

    [Report front-matter fragments: "New Discoveries or Inventions"; "Appendix: Computer Simulation of the Brownian Motion Sensor Signals".] ...scattering regime by analysis of the scattered light intensity and particle mass (size) obtained using the Brownian motion sensor. Task V - By application of the Brownian motion sensor in a flat-flame burner, the contractor shall assess the application of this technique for in-situ sizing of submicron

  4. Ground-water appraisal of the Pineland Sands area, central Minnesota

    USGS Publications Warehouse

    Helgesen, J.O.

    1977-01-01

    Results of model analysis show that present development (withdrawals totaling 3.3 cubic feet per second) has no significant effect on the aquifer system. Simulations of hypothetical withdrawals of 60 to 120 cubic feet per second resulted in computed water-table declines as great as 12 feet in places. Most pumpage is derived from intercepted base flow to streams, thus reducing streamflow. Similarly, some lake levels can be expected to decline in response to nearby intensive development.

  5. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  6. Conformational, structural, vibrational and quantum chemical analysis on 4-aminobenzohydrazide and 4-hydroxybenzohydrazide--a comparative study.

    PubMed

    Arjunan, V; Jayaprakash, A; Carthigayan, K; Periandy, S; Mohan, S

    2013-05-01

    Experimental and theoretical quantum chemical studies were carried out on 4-hydroxybenzohydrazide (4HBH) and 4-aminobenzohydrazide (4ABH) using FTIR and FT-Raman spectral data. The structural characteristics and vibrational spectroscopic analysis were performed by quantum chemical methods with the hybrid exchange-correlation functional B3LYP using 6-31G(**), 6-311++G(**) and aug-cc-pVDZ basis sets. The most stable conformers of the title compounds have been determined from the analysis of the potential energy surface. The stable molecular geometries, electronic and thermodynamic parameters, IR intensities, harmonic vibrational frequencies, depolarisation ratios and Raman intensities have been computed. Molecular electrostatic potential and frontier molecular orbitals were constructed to understand the electronic properties. The potential energy distributions (PEDs) were calculated to explain the mixing of fundamental modes. The theoretical geometrical parameters and fundamental frequencies were compared with the experimental values. The influence of hydroxy and amino group substitutions on the characteristic vibrations of the ring and hydrazide group has been analysed. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Nonlinear Reduced-Order Analysis with Time-Varying Spatial Loading Distributions

    NASA Technical Reports Server (NTRS)

    Prezekop, Adam

    2008-01-01

    Oscillating shocks acting in combination with high-intensity acoustic loadings present a challenge to the design of resilient hypersonic flight vehicle structures. This paper addresses some features of this loading condition and certain aspects of a nonlinear reduced-order analysis with emphasis on system identification leading to formation of a robust modal basis. The nonlinear dynamic response of a composite structure subject to the simultaneous action of locally strong oscillating pressure gradients and high-intensity acoustic loadings is considered. The reduced-order analysis used in this work has been previously demonstrated to be both computationally efficient and accurate for time-invariant spatial loading distributions, provided that an appropriate modal basis is used. The challenge of the present study is to identify a suitable basis for loadings with time-varying spatial distributions. Using a proper orthogonal decomposition and modal expansion, it is shown that such a basis can be developed. The basis is made more robust by incrementally expanding it to account for changes in the location, frequency and span of the oscillating pressure gradient.
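
    A minimal sketch of the snapshot proper orthogonal decomposition used above to build the modal basis, implemented here with the SVD and illustrative variable names (the paper's incremental basis-expansion procedure is not reproduced):

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """Proper orthogonal decomposition of a snapshot matrix.

        snapshots : (n_dof, n_snapshots) array of response snapshots
        energy    : fraction of total variance the retained modes must capture
        Returns (modes, singular_values) with modes of shape (n_dof, k).
        """
        X = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove mean field
        U, s, _ = np.linalg.svd(X, full_matrices=False)
        cumulative = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(cumulative, energy)) + 1         # smallest basis meeting `energy`
        return U[:, :k], s[:k]
    ```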

  8. Social, Organizational, and Contextual Characteristics of Clinical Decision Support Systems for Intensive Insulin Therapy: A Literature Review and Case Study

    PubMed Central

    Campion, Thomas R.; Waitman, Lemuel R.; May, Addison K.; Ozdas, Asli; Lorenzi, Nancy M.; Gadd, Cynthia S.

    2009-01-01

    Introduction: Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. Results: This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Discussion: Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. Conclusion: This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. PMID:19815452

  9. Spectral analysis of sinus arrhythmia - A measure of mental effort

    NASA Technical Reports Server (NTRS)

    Vicente, Kim J.; Craig Thornton, D.; Moray, Neville

    1987-01-01

    The validity of the spectral analysis of sinus arrhythmia as a measure of mental effort was investigated using a computer simulation of a hovercraft piloted along a river as the experimental task. Strong correlation was observed between the subjective effort-ratings and the heart-rate variability (HRV) power spectrum between 0.06 and 0.14 Hz. Significant correlations were observed not only between subjects but, more importantly, within subjects as well, indicating that the spectral analysis of HRV is an accurate measure of the amount of effort being invested by a subject. Results also indicate that the intensity of effort invested by subjects cannot be inferred from the objective ratings of task difficulty or from performance.
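
    A minimal sketch of the 0.06-0.14 Hz band-power measure described above, assuming an evenly resampled instantaneous heart-rate series; SciPy's Welch periodogram is used here as a stand-in for the original spectral-analysis procedure.

    ```python
    import numpy as np
    from scipy.signal import welch

    def hrv_effort_band_power(heart_rate_series, fs=4.0, band=(0.06, 0.14)):
        """Power of heart-rate variability in the 0.06-0.14 Hz band.

        heart_rate_series : instantaneous heart rate resampled at `fs` Hz
        """
        f, pxx = welch(heart_rate_series, fs=fs,
                       nperseg=min(256, len(heart_rate_series)))
        mask = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[mask], f[mask])
    ```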

  10. Rapid scanning system for fuel drawers

    DOEpatents

    Caldwell, J.T.; Fehlau, P.E.; France, S.W.

    A nondestructive method for uniquely distinguishing among and quantifying the mass of individual fuel plates in situ in fuel drawers utilized in nuclear reactors is described. The method is both rapid and passive, eliminating the personnel hazard of the commonly used irradiation techniques which require that the analysis be performed in proximity to an intense neutron source such as a reactor. In the present technique, only normally decaying nuclei are observed. This allows the analysis to be performed anywhere. This feature, combined with rapid scanning of a given fuel drawer (in approximately 30 s), and the computer data analysis allows the processing of large numbers of fuel drawers efficiently in the event of a loss alert.

  11. Rapid scanning system for fuel drawers

    DOEpatents

    Caldwell, John T.; Fehlau, Paul E.; France, Stephen W.

    1981-01-01

    A nondestructive method for uniquely distinguishing among and quantifying the mass of individual fuel plates in situ in fuel drawers utilized in nuclear reactors is described. The method is both rapid and passive, eliminating the personnel hazard of the commonly used irradiation techniques which require that the analysis be performed in proximity to an intense neutron source such as a reactor. In the present technique, only normally decaying nuclei are observed. This allows the analysis to be performed anywhere. This feature, combined with rapid scanning of a given fuel drawer (in approximately 30 s), and the computer data analysis allows the processing of large numbers of fuel drawers efficiently in the event of a loss alert.

  12. A strategy for selecting data mining techniques in metabolomics.

    PubMed

    Banimustafa, Ahmed Hmaidan; Hardy, Nigel W

    2012-01-01

    There is a general agreement that the development of metabolomics depends not only on advances in chemical analysis techniques but also on advances in computing and data analysis methods. Metabolomics data usually requires intensive pre-processing, analysis, and mining procedures. Selecting and applying such procedures requires attention to issues including justification, traceability, and reproducibility. We describe a strategy for selecting data mining techniques which takes into consideration the goals of data mining techniques on the one hand, and the goals of metabolomics investigations and the nature of the data on the other. The strategy aims to ensure the validity and soundness of results and promote the achievement of the investigation goals.

  13. Intensity-based masking: A tool to improve functional connectivity results of resting-state fMRI.

    PubMed

    Peer, Michael; Abboud, Sami; Hertz, Uri; Amedi, Amir; Arzy, Shahar

    2016-07-01

    Seed-based functional connectivity (FC) of resting-state functional MRI data is a widely used methodology, enabling the identification of functional brain networks in health and disease. Based on signal correlations across the brain, FC measures are highly sensitive to noise. A somewhat neglected source of noise is the fMRI signal attenuation found in cortical regions in close vicinity to sinuses and air cavities, mainly in the orbitofrontal, anterior frontal and inferior temporal cortices. BOLD signal recorded at these regions suffers from dropout due to susceptibility artifacts, resulting in an attenuated signal with reduced signal-to-noise ratio in as many as 10% of cortical voxels. Nevertheless, signal attenuation is largely overlooked during FC analysis. Here we first demonstrate that signal attenuation can significantly influence FC measures by introducing false functional correlations and diminishing existing correlations between brain regions. We then propose a method for the detection and removal of the attenuated signal ("intensity-based masking") by fitting a Gaussian-based model to the signal intensity distribution and calculating an intensity threshold tailored per subject. Finally, we apply our method on real-world data, showing that it diminishes false correlations caused by signal dropout, and significantly improves the ability to detect functional networks in single subjects. Furthermore, we show that our method increases inter-subject similarity in FC, enabling reliable distinction of different functional networks. We propose to include the intensity-based masking method as a common practice in the pre-processing of seed-based functional connectivity analysis, and provide software tools for the computation of intensity-based masks on fMRI data. Hum Brain Mapp 37:2407-2418, 2016. © 2016 Wiley Periodicals, Inc.
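
    A minimal sketch of the per-subject intensity-based masking idea: characterize the bulk of the mean-image intensity distribution and discard voxels falling well below it. The robust median/MAD fit and the cutoff below are illustrative assumptions; the paper's Gaussian-based model and the authors' released tools may differ in detail.

    ```python
    import numpy as np

    def intensity_mask(mean_epi, brain_mask, n_sigma=2.0):
        """Boolean mask excluding signal-dropout voxels.

        mean_epi   : 3-D array, temporal mean of the fMRI series
        brain_mask : boolean array selecting in-brain voxels
        """
        vals = mean_epi[brain_mask]
        # Robust estimate of the bulk intensity distribution (median / MAD)
        mu = np.median(vals)
        sigma = 1.4826 * np.median(np.abs(vals - mu))
        threshold = mu - n_sigma * sigma
        return brain_mask & (mean_epi > threshold)
    ```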

  14. Numerical Analysis of Crack Tip Plasticity and History Effects under Mixed Mode Conditions

    NASA Astrophysics Data System (ADS)

    Lopez-Crespo, Pablo; Pommier, Sylvie

    The plastic behaviour in the crack tip region has a strong influence on the fatigue life of engineering components. In general, residual stresses developed as a consequence of the plasticity being constrained around the crack tip have a significant role on both the direction of crack propagation and the propagation rate. Finite element methods (FEM) are commonly employed in order to model plasticity. However, if millions of cycles need to be modelled to predict the fatigue behaviour of a component, the method becomes computationally too expensive. By employing a multiscale approach, very precise analyses computed by FEM can be brought to a global scale. The data generated using the FEM enables us to identify a global cyclic elastic-plastic model for the crack tip region. Once this model is identified, it can be employed directly, with no need of additional FEM computations, resulting in fast computations. This is done by partitioning local displacement fields computed by FEM into intensity factors (global data) and spatial fields. A Karhunen-Loeve algorithm developed for image processing was employed for this purpose. In addition, the partitioning is done such as to distinguish into elastic and plastic components. Each of them is further divided into opening mode and shear mode parts. The plastic flow direction was determined with the above approach on a centre cracked panel subjected to a wide range of mixed-mode loading conditions. It was found to agree well with the maximum tangential stress criterion developed by Erdogan and Sih, provided that the loading direction is corrected for residual stresses. In this approach, residual stresses are measured at the global scale through internal intensity factors.
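
    The maximum tangential stress criterion of Erdogan and Sih referenced above gives the kink angle in closed form from the mode-I and mode-II stress-intensity factors; a standard sketch follows (the residual-stress correction discussed in the paper is not included).

    ```python
    import numpy as np

    def mts_kink_angle(k1, k2):
        """Crack kink angle (radians) from the maximum tangential stress criterion.

        Solves K_I*sin(theta) + K_II*(3*cos(theta) - 1) = 0 for the angle that
        maximizes the tangential stress; returns 0 for pure mode I.
        """
        if k2 == 0.0:
            return 0.0
        return 2.0 * np.arctan((k1 - np.sqrt(k1**2 + 8.0 * k2**2)) / (4.0 * k2))

    print(np.degrees(mts_kink_angle(0.0, 1.0)))   # about -70.5 deg for pure mode II
    ```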

  15. Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; García, Sebastián Gimeno

    2013-05-01

    Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
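
    The last step of the workflow described above, integrating optical depth along the line of sight to obtain transmission, amounts to the Beer-Lambert law; the sketch below is a generic illustration and not the Py4CAtS API.

    ```python
    import numpy as np

    def transmission(layer_absorption, layer_thickness):
        """Spectral transmission along a layered line of sight (Beer-Lambert).

        layer_absorption : (n_layers, n_wavenumbers) absorption coefficients (1/cm)
        layer_thickness  : (n_layers,) geometric path through each layer (cm)
        """
        # Total optical depth is the thickness-weighted sum over layers
        optical_depth = np.einsum('lw,l->w', layer_absorption, layer_thickness)
        return np.exp(-optical_depth)
    ```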

  16. Fast Learning for Immersive Engagement in Energy Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M

    The fast computation which is critical for immersive engagement with and learning from energy simulations would be furthered by developing a general method for creating rapidly computed simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
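
    A minimal sketch of the reduced-form idea: run the expensive simulation over a modest set of inputs, fit a fast statistical emulator, and use held-out error to document accuracy and domain of validity. The regressor choice and function names below are illustrative assumptions, not NREL's implementation.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    def fit_reduced_form(inputs, outputs):
        """Fit a fast emulator of a slow simulation and report held-out error.

        inputs  : (n_runs, n_parameters) simulation input samples
        outputs : (n_runs,) corresponding simulation results
        """
        X_train, X_test, y_train, y_test = train_test_split(
            inputs, outputs, test_size=0.2, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        # Held-out error quantifies how well the surrogate stands in for the simulation
        error = mean_absolute_error(y_test, model.predict(X_test))
        return model, error
    ```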

  17. Computational Discovery of Materials Using the Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Avendaño-Franco, Guillermo; Romero, Aldo

    Our current ability to model physical phenomena accurately, increased computational power and better algorithms are the driving forces behind the computational discovery and design of novel materials, allowing for virtual characterization before their realization in the laboratory. We present the implementation of a novel firefly algorithm, a population-based global optimization algorithm, for searching the structure/composition space. This computation-intensive approach naturally takes advantage of concurrency and targeted exploration while still maintaining enough diversity. We apply the new method to both periodic and non-periodic structures and we present the implementation challenges and solutions to improve efficiency. The implementation makes use of computational materials databases and network analysis to optimize the search and get insights about the geometric structure of local minima on the energy landscape. The method has been implemented in our software PyChemia, an open-source package for materials discovery. We acknowledge the support of DMREF-NSF 1434897 and the Donors of the American Chemical Society Petroleum Research Fund for partial support of this research under Contract 54075-ND10.
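
    A minimal firefly-algorithm sketch for continuous minimization is shown below; the structure/composition search implemented in PyChemia operates on crystal structures and is considerably more elaborate, so all names and parameters here are illustrative.

    ```python
    import numpy as np

    def firefly_minimize(f, bounds, n_fireflies=25, n_iter=200,
                         beta0=1.0, gamma=1.0, alpha=0.2,
                         rng=np.random.default_rng(0)):
        """Minimize f over the box `bounds` (shape (dim, 2)) with the firefly algorithm."""
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
        light = np.array([f(xi) for xi in x])              # lower value = brighter
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if light[j] < light[i]:                # move i toward brighter j
                        r2 = np.sum((x[i] - x[j])**2)
                        beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                        x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=len(lo))
                        x[i] = np.clip(x[i], lo, hi)
                        light[i] = f(x[i])
        best = light.argmin()
        return x[best], light[best]

    # Example: minimize the sphere function in 3-D
    print(firefly_minimize(lambda v: np.sum(v**2), np.array([[-5.0, 5.0]] * 3)))
    ```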

  18. Automatic segmentation of solitary pulmonary nodules based on local intensity structure analysis and 3D neighborhood features in 3D chest CT images

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi; Mori, Kensaku

    2012-03-01

    This paper presents a solitary pulmonary nodule (SPN) segmentation method based on local intensity structure analysis and neighborhood feature analysis in chest CT images. Automated segmentation of SPNs is desirable for a chest computer-aided detection/diagnosis (CAD) system since an SPN may indicate an early stage of lung cancer. Due to the similar intensities of SPNs and other chest structures such as blood vessels, many false positives (FPs) are generated by nodule detection methods. To reduce such FPs, we introduce two features that analyze the relation between each segmented nodule candidate and its neighborhood region. The proposed method utilizes a blob-like structure enhancement (BSE) filter based on Hessian analysis to augment the blob-like structures as initial nodule candidates. Then a fine segmentation is performed to segment a much more accurate region of each nodule candidate. FP reduction is mainly addressed by investigating two neighborhood features based on volume ratio and eigenvector of Hessian that are calculated from the neighborhood region of each nodule candidate. We evaluated the proposed method using 40 chest CT images, including 20 standard-dose CT images randomly chosen from a local database and 20 low-dose CT images randomly chosen from a public database (LIDC). The experimental results revealed that the average TP rate of the proposed method was 93.6% with 12.3 FPs/case.
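
    A minimal sketch of a Hessian-eigenvalue blob-enhancement filter of the kind used above for initial nodule candidates (bright blobs have three strongly negative, similar-magnitude eigenvalues); the actual BSE filter and its parameters may differ.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blob_enhancement(volume, sigma=2.0):
        """Enhance bright blob-like structures via Hessian eigenvalue analysis.

        Assumes unit voxel spacing; `sigma` sets the structure scale of interest.
        """
        smoothed = gaussian_filter(volume.astype(float), sigma)
        grads = np.gradient(smoothed)
        hessian = np.empty(volume.shape + (3, 3))
        for i in range(3):
            second = np.gradient(grads[i])
            for j in range(3):
                hessian[..., i, j] = second[j]
        eigvals = np.linalg.eigvalsh(hessian)              # ascending: l1 <= l2 <= l3
        l1, l3 = eigvals[..., 0], eigvals[..., 2]
        # Bright blobs: all eigenvalues negative and of comparable magnitude
        blobness = np.where(l3 < 0,
                            (l3**2) / np.maximum(np.abs(l1), 1e-9),
                            0.0)
        return blobness
    ```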

  19. Classification of collective behavior: a comparison of tracking and machine learning methods to study the effect of ambient light on fish shoaling.

    PubMed

    Butail, Sachit; Salerno, Philip; Bollt, Erik M; Porfiri, Maurizio

    2015-12-01

    Traditional approaches for the analysis of collective behavior entail digitizing the position of each individual, followed by evaluation of pertinent group observables, such as cohesion and polarization. Machine learning may enable considerable advancements in this area by affording the classification of these observables directly from images. While such methods have been successfully implemented in the classification of individual behavior, their potential in the study of collective behavior is largely untested. In this paper, we compare three methods for the analysis of collective behavior: simple tracking (ST) without resolving occlusions, machine learning with real data (MLR), and machine learning with synthetic data (MLS). These methods are evaluated on videos recorded from an experiment studying the effect of ambient light on the shoaling tendency of Giant danios. In particular, we compute average nearest-neighbor distance (ANND) and polarization using the three methods and compare the values with manually-verified ground-truth data. To further assess possible dependence on sampling rate for computing ANND, the comparison is also performed at a low frame rate. Results show that while ST is the most accurate at the higher frame rate for both ANND and polarization, at the low frame rate there is no significant difference in ANND accuracy between the three methods. In terms of computational speed, MLR and MLS take significantly less time to process an image, with MLS better addressing constraints related to generation of training data. Finally, all methods are able to successfully detect a significant difference in ANND as the ambient light intensity is varied, irrespective of the direction of intensity change.
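
    A minimal sketch of the two group observables compared in the study, computed from digitized positions and headings; illustrative only, not the authors' tracking or machine-learning pipelines.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def annd(positions):
        """Average nearest-neighbor distance for an (n, 2) array of positions."""
        d = cdist(positions, positions)
        np.fill_diagonal(d, np.inf)          # ignore self-distances
        return d.min(axis=1).mean()

    def polarization(headings_rad):
        """Group polarization: 1 = perfectly aligned, 0 = fully disordered."""
        unit = np.column_stack([np.cos(headings_rad), np.sin(headings_rad)])
        return np.linalg.norm(unit.mean(axis=0))
    ```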

  20. Race, gender, and information technology use: the new digital divide.

    PubMed

    Jackson, Linda A; Zhao, Yong; Kolenic, Anthony; Fitzgerald, Hiram E; Harold, Rena; Von Eye, Alexander

    2008-08-01

    This research examined race and gender differences in the intensity and nature of IT use and whether IT use predicted academic performance. A sample of 515 children (172 African Americans and 343 Caucasian Americans), average age 12 years old, completed surveys as part of their participation in the Children and Technology Project. Findings indicated race and gender differences in the intensity of IT use; African American males were the least intense users of computers and the Internet, and African American females were the most intense users of the Internet. Males, regardless of race, were the most intense videogame players, and females, regardless of race, were the most intense cell phone users. IT use predicted children's academic performance. Length of time using computers and the Internet was a positive predictor of academic performance, whereas amount of time spent playing videogames was a negative predictor. Implications of the findings for bringing IT to African American males and bringing African American males to IT are discussed.

  1. On the analysis of para-ammonia observations

    NASA Technical Reports Server (NTRS)

    Kuiper, T. B. H.

    1994-01-01

    The intensities and optical depths of the (1, 1), (2, 2), and (2, 1) inversion transitions of ammonia can be calculated quite accurately without solving the equations of statistical equilibrium. A two-temperature partition function suffices. The excitation of the K-ladders can be approximated by using a temperature obtained from a two-level model with the (2, 1) and (1, 1) levels. Distribution of populations between the ladders is described with the kinetic temperature. This enables one to compute the (1, 1) and (2, 1) inversion transition excitation temperatures and optical depths. To compute the (2, 2) brightness temperatures, the fractional population of the (2, 2) doublet is computed from the population of the (1, 1) doublet using the 'true rotation temperature,' which is calculated using a three-level model with the (2, 1), (2, 2), and (1, 1) levels. In spite of some iterative steps, the calculation is quite fast.

  2. Technologies for Large Data Management in Scientific Computing

    NASA Astrophysics Data System (ADS)

    Pace, Alberto

    2014-01-01

    In recent years, intense usage of computing has been the main strategy of investigations in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long term preservation, and the worldwide distribution of large quantities of data that are necessary in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.

  3. Information Technology in Critical Care: Review of Monitoring and Data Acquisition Systems for Patient Care and Research

    PubMed Central

    De Georgia, Michael A.; Kaffashi, Farhad; Jacono, Frank J.; Loparo, Kenneth A.

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes. PMID:25734185

  4. Information technology in critical care: review of monitoring and data acquisition systems for patient care and research.

    PubMed

    De Georgia, Michael A; Kaffashi, Farhad; Jacono, Frank J; Loparo, Kenneth A

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes.

  5. Geometry Modeling and Grid Generation for Design and Optimization

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1998-01-01

    Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.

  6. Using spatial principles to optimize distributed computing for enabling the physical science discoveries

    PubMed Central

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-01-01

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century. PMID:21444779

  7. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    PubMed

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  8. Squid - a simple bioinformatics grid.

    PubMed

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-08-03

    BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need of high-end computers. Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.

  9. Ray tracing on the MPP

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.

  10. Deformable registration of CT and cone-beam CT with local intensity matching.

    PubMed

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-07

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes suffer varying degrees of information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow, that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and is also computationally efficient.
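
    The slice-wise intensity correction at the heart of the method can be illustrated with a small histogram-matching sketch (assuming NumPy arrays holding a CBCT slice and the spatially corresponding CT slice; the published method couples this correction to GPU-based deformable registration and alternates the two steps until convergence):

```python
import numpy as np

def match_histogram(cbct_slice, ct_slice):
    """Map CBCT intensities onto the CT intensity distribution of one slice.

    Classic histogram matching: each CBCT voxel is replaced by the CT value
    at the same intensity quantile, which removes slice-dependent CBCT
    intensity distortions before (or between) registration iterations.
    """
    cbct = cbct_slice.ravel()
    order = np.argsort(cbct)
    quantiles = np.linspace(0.0, 1.0, cbct.size)
    ct_sorted = np.sort(ct_slice.ravel())
    matched = np.empty_like(cbct, dtype=np.float64)
    matched[order] = np.interp(quantiles, np.linspace(0.0, 1.0, ct_sorted.size), ct_sorted)
    return matched.reshape(cbct_slice.shape)
```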

  11. Deformable registration of CT and cone-beam CT with local intensity matching

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-01

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes suffer varying degrees of information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow, that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and is also computationally efficient.

  12. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    ERIC Educational Resources Information Center

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…
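
    For context, the test of joint significance declares a mediated effect significant when both the a path (X→M) and the b path (M→Y, controlling for X) are individually significant. A minimal sketch with statsmodels (the simulated data and variable names are hypothetical and do not reproduce the study's simulation design):

```python
import numpy as np
import statsmodels.api as sm

def joint_significance(x, m, y, alpha=0.05):
    """Test of joint significance for the mediated effect of x on y through m."""
    a_model = sm.OLS(m, sm.add_constant(x)).fit()                          # X -> M
    b_model = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()    # M -> Y given X
    p_a = a_model.pvalues[1]   # p-value of the a path (slope of x)
    p_b = b_model.pvalues[2]   # p-value of the b path (slope of m)
    return (p_a < alpha) and (p_b < alpha)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
m = 0.5 * x + rng.normal(size=200)
y = 0.5 * m + rng.normal(size=200)
print(joint_significance(x, m, y))
```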

  13. GeoBrain Computational Cyber-laboratory for Earth Science Studies

    NASA Astrophysics Data System (ADS)

    Deng, M.; di, L.

    2009-12-01

    Computational approaches (e.g., computer-based data visualization, analysis and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, Earth scientists, educators, and students currently face two major barriers that prevent them from effectively using computational approaches in their learning, research and application activities. The two barriers are: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) lack of analytic functions and computing resources (e.g., analysis software, computing models, and high performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web service, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove the two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to and easily usable by the Earth science community through 1) enabling seamless discovery, access and retrieval of distributed data, 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service, 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services, 4) automating or semi-automating multi-source geospatial data integration, 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities, 6) enabling online geospatial process modeling and execution, and 7) building a user-friendly extensible web portal for users to access the cyber-laboratory resources. Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons learned, and technology readiness to build more capable computing infrastructure for ES studies, which can meet the wide-ranging needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.

  14. Assessment of Radiative Heating Uncertainty for Hyperbolic Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Mazaheri, Alireza; Gnoffo, Peter A.; Kleb, W. L.; Sutton, Kenneth; Prabhu, Dinesh K.; Brandis, Aaron M.; Bose, Deepak

    2011-01-01

    This paper investigates the shock-layer radiative heating uncertainty for hyperbolic Earth entry, with the main focus being a Mars return. In Part I of this work, a baseline simulation approach involving the LAURA Navier-Stokes code with coupled ablation and radiation is presented, with the HARA radiation code being used for the radiation predictions. Flight cases representative of peak-heating Mars or asteroid return are defined, and the strong influence of coupled ablation and radiation on their aerothermodynamic environments is shown. Structural uncertainties inherent in the baseline simulations are identified, with turbulence modeling, precursor absorption, grid convergence, and radiation transport uncertainties combining for a +34% and -24% structural uncertainty on the radiative heating. A parametric uncertainty analysis, which assumes interval uncertainties, is presented. This analysis accounts for uncertainties in the radiation models as well as heat of formation uncertainties in the flow field model. Discussions and references are provided to support the uncertainty range chosen for each parameter. A parametric uncertainty of +47.3% and -28.3% is computed for the stagnation-point radiative heating for the 15 km/s Mars-return case. A breakdown of the largest individual uncertainty contributors is presented, which includes C3 Swings cross-section, photoionization edge shift, and Opacity Project atomic lines. Combining the structural and parametric uncertainty components results in a total uncertainty of +81.3% and -52.3% for the Mars-return case. In Part II, the computational technique and uncertainty analysis presented in Part I are applied to 1960s-era shock-tube and constricted-arc experimental cases. It is shown that experiments contain shock layer temperatures and radiative flux values relevant to the Mars-return cases of present interest. Comparisons between the predictions and measurements, accounting for the uncertainty in both, are made for a range of experiments. A measure of comparison quality is defined, which consists of the percent overlap of the predicted uncertainty bar with the corresponding measurement uncertainty bar. For nearly all cases, this percent overlap is greater than zero, and for most of the higher temperature cases (T > 13,000 K) it is greater than 50%. These favorable comparisons provide evidence that the baseline computational technique and uncertainty analysis presented in Part I are adequate for Mars-return simulations. In Part III, the computational technique and uncertainty analysis presented in Part I are applied to EAST shock-tube cases. These experimental cases contain wavelength dependent intensity measurements in a wavelength range that covers 60% of the radiative intensity for the 11 km/s, 5 m radius flight case studied in Part I. Comparisons between the predictions and EAST measurements are made for a range of experiments. The uncertainty analysis presented in Part I is applied to each prediction, and comparisons are made using the metrics defined in Part II. The agreement between predictions and measurements is excellent for velocities greater than 10.5 km/s. Both the wavelength dependent and wavelength integrated intensities agree within 30% for nearly all cases considered. This agreement provides confidence in the computational technique and uncertainty analysis presented in Part I, and provides further evidence that this approach is adequate for Mars-return simulations. Part IV of this paper reviews existing experimental data that include the influence of massive ablation on radiative heating. It is concluded that this existing data is not sufficient for the present uncertainty analysis. Experiments to capture the influence of massive ablation on radiation are suggested as future work, along with further studies of the radiative precursor and improvements in the radiation properties of ablation products.

  15. Turbulence and Coherent Structure in the Atmospheric Boundary Layer near the Eyewall of Hurricane Hugo (1989)

    NASA Astrophysics Data System (ADS)

    Zhang, J. A.; Marks, F. D.; Montgomery, M. T.; Black, P. G.

    2008-12-01

    In this talk we present an analysis of observational data collected from NOAA's WP-3D research aircraft during the eyewall penetration of category-five Hurricane Hugo (1989). The 1 Hz flight-level data near 450 m above the sea surface, comprising wind velocity, temperature, pressure and relative humidity, are used to estimate the turbulence intensity and fluxes. In the turbulent flux calculation, the universal shape spectra and co-spectra derived using the 40 Hz data collected during the Coupled Boundary Layer Air-Sea Transfer (CBLAST) Hurricane experiment are applied to correct the high-frequency part of the data collected in Hurricane Hugo. Since the stationarity assumption required for standard eddy correlations is not always satisfied, different methods are summarized for computing the turbulence parameters. In addition, a wavelet analysis is conducted to investigate the time and spatial scales of roll vortices, or coherent structures, that are believed to be important elements of the eye/eyewall mixing processes that support intense storms.
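
    For reference, the basic eddy-covariance estimates that such a flux calculation starts from (before the CBLAST-derived spectral corrections mentioned above are applied) can be sketched as follows; the simple mean-removal detrending and the variable names are assumptions:

```python
import numpy as np

def eddy_covariance(w, temp, u, v):
    """Kinematic heat flux, momentum flux, and TKE from flight-level time series.

    w, temp, u, v : vertical wind, temperature, and horizontal wind components.
    Fluctuations are taken about the record mean, which presumes stationarity.
    """
    wp, tp = w - w.mean(), temp - temp.mean()
    up, vp = u - u.mean(), v - v.mean()
    heat_flux = np.mean(wp * tp)                                   # <w'T'>
    momentum_flux = np.hypot(np.mean(wp * up), np.mean(wp * vp))   # |(<w'u'>, <w'v'>)|
    tke = 0.5 * (np.mean(up**2) + np.mean(vp**2) + np.mean(wp**2))
    return heat_flux, momentum_flux, tke
```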

  16. Some uses of wavelets for imaging dynamic processes in live cochlear structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, J.

    2007-09-01

    A variety of image and signal processing algorithms based on wavelet filtering tools have been developed during the last few decades, that are well adapted to the experimental variability typically encountered in live biological microscopy. A number of processing tools are reviewed, that use wavelets for adaptive image restoration and for motion or brightness variation analysis by optical flow computation. The usefulness of these tools for biological imaging is illustrated in the context of the restoration of images of the inner ear and the analysis of cochlear motion patterns in two and three dimensions. I also report on recent work that aims at capturing fluorescence intensity changes associated with vesicle dynamics at synaptic zones of sensory hair cells. This latest application requires one to separate the intensity variations associated with the physiological process under study from the variations caused by motion of the observed structures. A wavelet optical flow algorithm for doing this is presented, and its effectiveness is demonstrated on artificial and experimental image sequences.

  17. Gear crack propagation investigations

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Ballarini, Roberto

    1996-01-01

    Analytical and experimental studies were performed to investigate the effect of gear rim thickness on crack propagation life. The FRANC (FRacture ANalysis Code) computer program was used to simulate crack propagation. The FRANC program used principles of linear elastic fracture mechanics, finite element modeling, and a unique re-meshing scheme to determine crack tip stress distributions, estimate stress intensity factors, and model crack propagation. Various fatigue crack growth models were used to estimate crack propagation life based on the calculated stress intensity factors. Experimental tests were performed in a gear fatigue rig to validate predicted crack propagation results. Test gears were installed with special crack propagation gages in the tooth fillet region to measure bending fatigue crack growth. Good correlation between predicted and measured crack growth was achieved when the fatigue crack closure concept was introduced into the analysis. As the gear rim thickness decreased, the compressive cyclic stress in the gear tooth fillet region increased. This retarded crack growth and increased the number of crack propagation cycles to failure.
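
    As an illustration of how a crack propagation life can be estimated once stress intensity factors are known, a minimal Paris-law cycle integration is sketched below (the material constants and the toy ΔK(a) curve are hypothetical; this is not the FRANC code or the fatigue crack growth models used in the study):

```python
import numpy as np

def cycles_to_failure(a0, af, a_grid, delta_k, c=6.9e-12, m=3.0):
    """Integrate the Paris law da/dN = c * (dK)^m from crack length a0 to af.

    a_grid  : crack lengths [m] at which the stress intensity range is known
    delta_k : corresponding stress intensity factor ranges [MPa*sqrt(m)]
    Returns the estimated number of load cycles.
    """
    a = np.linspace(a0, af, 500)
    dk = np.interp(a, a_grid, delta_k)
    cycles_per_metre = 1.0 / (c * dk**m)
    return np.trapz(cycles_per_metre, a)

# toy example: dK grows roughly with the square root of crack length
a_grid = np.linspace(0.2e-3, 3.0e-3, 20)
delta_k = 20.0 * np.sqrt(a_grid / a_grid[0])
print(f"{cycles_to_failure(0.2e-3, 3.0e-3, a_grid, delta_k):.3e} cycles")
```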

  18. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Supercomputing, or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high-intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  19. Accelerating next generation sequencing data analysis with system level optimizations.

    PubMed

    Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid

    2017-08-22

    Next generation sequencing (NGS) data analysis is highly compute-intensive. In-memory computing, vectorization, bulk data transfer, and CPU frequency scaling are some of the hardware features of modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune system-level parameters before running the application. We studied GATK HaplotypeCaller, a component of common NGS workflows that consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked, and the execution time of HaplotypeCaller was optimized through various system-level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing, (ii) architecture-specific tuning in the PairHMM library for vectorization, (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer, and (iv) switching the CPU frequency governor from the default 'on-demand' mode to 'performance' mode to accelerate the Java multi-threads. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7, respectively.

  20. FUNCTION GENERATOR FOR ANALOGUE COMPUTERS

    DOEpatents

    Skramstad, H.K.; Wright, J.H.; Taback, L.

    1961-12-12

    An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)

  1. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs growing by orders of magnitude, which will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture that integrates distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and provides access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bio-informatics.

  2. Local storage federation through XRootD architecture for interactive distributed analysis

    NASA Astrophysics Data System (ADS)

    Colamaria, F.; Colella, D.; Donvito, G.; Elia, D.; Franco, A.; Luparello, G.; Maggi, G.; Miniello, G.; Vallero, S.; Vino, G.

    2015-12-01

    A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been deployed in Bari. Similar facilities are currently running in other Italian sites with the aim to create a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The use of cloud technology, along with elastic provisioning of computing resources as an alternative to the grid for running data intensive analyses, is the main challenge of these facilities. One of the crucial aspects of the user-driven analysis execution is the data access. A local storage facility has the disadvantage that the stored data can be accessed only locally, i.e. from within the single VAF. To overcome such a limitation a federated infrastructure, which provides full access to all the data belonging to the federation independently from the site where they are stored, has been set up. The federation architecture exploits both cloud computing and XRootD technologies, in order to provide a dynamic, easy-to-use and well performing solution for data handling. It should allow the users to store the files and efficiently retrieve the data, since it implements a dynamic distributed cache among many datacenters in Italy connected to one another through the high-bandwidth national network. Details on the preliminary architecture implementation and performance studies are discussed.

  3. Differential network analysis reveals the genome-wide landscape of estrogen receptor modulation in hormonal cancers

    PubMed Central

    Hsiao, Tzu-Hung; Chiu, Yu-Chiao; Hsu, Pei-Yin; Lu, Tzu-Pin; Lai, Liang-Chuan; Tsai, Mong-Hsun; Huang, Tim H.-M.; Chuang, Eric Y.; Chen, Yidong

    2016-01-01

    Several mutual information (MI)-based algorithms have been developed to identify dynamic gene-gene and function-function interactions governed by key modulators (genes, proteins, etc.). Due to intensive computation, however, these methods rely heavily on prior knowledge and are limited in genome-wide analysis. We present the modulated gene/gene set interaction (MAGIC) analysis to systematically identify genome-wide modulation of interaction networks. Based on a novel statistical test employing conjugate Fisher transformations of correlation coefficients, MAGIC features fast computation and adaption to variations of clinical cohorts. In simulated datasets MAGIC achieved greatly improved computation efficiency and overall superior performance than the MI-based method. We applied MAGIC to construct the estrogen receptor (ER) modulated gene and gene set (representing biological function) interaction networks in breast cancer. Several novel interaction hubs and functional interactions were discovered. ER+ dependent interaction between TGFβ and NFκB was further shown to be associated with patient survival. The findings were verified in independent datasets. Using MAGIC, we also assessed the essential roles of ER modulation in another hormonal cancer, ovarian cancer. Overall, MAGIC is a systematic framework for comprehensively identifying and constructing the modulated interaction networks in a whole-genome landscape. MATLAB implementation of MAGIC is available for academic uses at https://github.com/chiuyc/MAGIC. PMID:26972162
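
    The statistical core of MAGIC, comparing a gene-gene correlation between samples with low and high expression of a putative modulator via Fisher's transformation of correlation coefficients, can be sketched as a simple two-group test (an illustrative simplification under assumed group definitions, not the published MAGIC algorithm):

```python
import numpy as np
from scipy.stats import norm, pearsonr

def modulation_test(g1, g2, modulator, q=0.3):
    """Test whether the g1-g2 correlation differs between low- and high-modulator samples."""
    lo = modulator <= np.quantile(modulator, q)
    hi = modulator >= np.quantile(modulator, 1.0 - q)
    r_lo, _ = pearsonr(g1[lo], g2[lo])
    r_hi, _ = pearsonr(g1[hi], g2[hi])
    z_lo, z_hi = np.arctanh(r_lo), np.arctanh(r_hi)           # Fisher transformation
    se = np.sqrt(1.0 / (lo.sum() - 3) + 1.0 / (hi.sum() - 3))
    z = (z_hi - z_lo) / se
    return z, 2.0 * norm.sf(abs(z))                            # z statistic, two-sided p-value
```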

  4. Post-processing of seismic parameter data based on valid seismic event determination

    DOEpatents

    McEvilly, Thomas V.

    1985-01-01

    An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFT's) for both P- and S- waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P- wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys allowing flexibility in experimental procedures, with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation, but rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.

  5. Automated grading system for evaluation of ocular redness associated with dry eye

    PubMed Central

    Rodriguez, John D; Johnston, Patrick R; Ousler, George W; Smith, Lisa M; Abelson, Mark B

    2013-01-01

    Background We have observed that dry eye redness is characterized by a prominence of fine horizontal conjunctival vessels in the exposed ocular surface of the interpalpebral fissure, and have incorporated this feature into the grading of redness in clinical studies of dry eye. Aim To develop an automated method of grading dry eye-associated ocular redness in order to expand on the clinical grading system currently used. Methods Ninety-nine images from 26 dry eye subjects were evaluated by five graders using a 0–4 (in 0.5 increments) dry eye redness (Ora Calibra™ Dry Eye Redness Scale [OCDER]) scale. For the automated method, the Opencv computer vision library was used to develop software for calculating redness and horizontal conjunctival vessels (noted as "horizontality"). From the original photograph, the region of interest (ROI) was selected manually using the open source ImageJ software. Total average redness intensity (Com-Red) was calculated from a single-channel 8-bit image computed as R – 0.83G – 0.17B, where R, G and B were the respective intensities of the red, green and blue channels. The location of vessels was detected by normalizing the blue channel and selecting pixels with an intensity of less than 97% of the mean. The horizontal component (Com-Hor) was calculated from the first-order Sobel derivative in the vertical direction; the score was the average blue-channel intensity of this vertical derivative image. Pearson correlation coefficients, accuracy and concordance correlation coefficients (CCC) were calculated after regression and standardized regression of the dataset. Results The agreement (both Pearson’s and CCC) among investigators using the OCDER scale was 0.67, while the agreement of investigator to computer was 0.76. A multiple regression using both redness and horizontality improved the agreement CCC from 0.66 and 0.69 to 0.76, demonstrating the contribution of vessel geometry to the overall grade. Computer analysis of a given image has 100% repeatability and zero variability from session to session. Conclusion This objective means of grading ocular redness in a unified fashion has potential significance as a new clinical endpoint. In comparisons between computer and investigator, computer grading proved to be more reliable than another investigator using the OCDER scale. The best-fitting model based on the present sample, and usable for future studies, was C4 = −12.24 + 2.12·C2HOR + 0.88·C2RED, where C4 is the predicted investigator grade and C2HOR and C2RED are logarithmic transformations of the computer-calculated parameters Com-Hor and Com-Red. Considering the superior repeatability, computer-automated grading might be preferable to investigator grading in multicentered dry eye studies in which the subtle differences in redness incurred by treatment have been historically difficult to define. PMID:23814457
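
    Based on the formulas quoted above, the two computed scores can be sketched as follows (assuming an RGB region of interest already cropped from the photograph; exactly which image the Sobel derivative is applied to is an assumption, and this is not the study's OpenCV code):

```python
import cv2
import numpy as np

def redness_scores(roi_bgr):
    """Illustrative Com-Red and Com-Hor scores for a conjunctival region of interest."""
    b, g, r = cv2.split(roi_bgr.astype(np.float64))

    # Com-Red: average of the single-channel image R - 0.83*G - 0.17*B
    com_red = np.mean(r - 0.83 * g - 0.17 * b)

    # vessel pixels: normalized blue channel below 97% of its mean
    vessels = (b / (b.mean() + 1e-12)) < 0.97

    # Com-Hor: first-order Sobel derivative in the vertical direction, averaged
    # over the image (horizontal vessels give the strongest responses)
    dy = cv2.Sobel(b, cv2.CV_64F, 0, 1, ksize=3)
    com_hor = np.abs(dy).mean()
    return com_red, com_hor, vessels
```

    Logarithmic transformations of the two scores could then be fed into the published regression C4 = −12.24 + 2.12·C2HOR + 0.88·C2RED to predict an investigator grade.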

  6. Optimized photonic gauge of extreme high vacuum with Petawatt lasers

    NASA Astrophysics Data System (ADS)

    Paredes, Ángel; Novoa, David; Tommasini, Daniele; Mas, Héctor

    2014-03-01

    One of the latest proposed applications of ultra-intense laser pulses is their possible use to gauge extreme high vacuum by measuring the photon radiation resulting from nonlinear Thomson scattering within a vacuum tube. Here, we provide a complete analysis of the process, computing the expected rates and spectra, both for linear and circular polarizations of the laser pulses, taking into account the effect of the time envelope in a slowly varying envelope approximation. We also design a realistic experimental configuration allowing for the implementation of the idea and compute the corresponding geometric efficiencies. Finally, we develop an optimization procedure for this photonic gauge of extreme high vacuum at high repetition rate Petawatt and multi-Petawatt laser facilities, such as VEGA, JuSPARC and ELI.

  7. An infrared scattering by evaporating droplets at the initial stage of a pool fire suppression by water sprays

    NASA Astrophysics Data System (ADS)

    Dombrovsky, Leonid A.; Dembele, Siaka; Wen, Jennifer X.

    2018-06-01

    The computational analysis of downward motion and evaporation of water droplets used to suppress a typical transient pool fire shows local regions of a high volume fraction of relatively small droplets. These droplets are comparable in size with the infrared wavelength in the range of intense flame radiation. The estimated scattering of the radiation by these droplets is considerable throughout the entire spectrum except for a narrow region in the vicinity of the main absorption peak of water where the anomalous refraction takes place. The calculations of infrared radiation field in the model pool fire indicate the strong effect of scattering which can be observed experimentally to validate the fire computational model.

  8. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable the research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of Data-intensive Science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections available, and to provide a new infrastructure for future interdisciplinary research.

  9. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
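
    The efficient discrimination statistics are straightforward to evaluate once each alternative model has been calibrated; a minimal sketch of the corrected Akaike and Bayesian information criteria computed from a least-squares calibration (the exact formula variant and the toy numbers are assumptions for illustration):

```python
import numpy as np

def information_criteria(sswr, n, k):
    """AICc and BIC for a calibrated model (lower values indicate the preferred model).

    sswr : sum of squared weighted residuals
    n    : number of observations
    k    : number of estimated parameters (the error variance adds one more term)
    """
    p = k + 1                                        # count the variance term
    aic = n * np.log(sswr / n) + 2.0 * p
    aicc = aic + (2.0 * p * (p + 1)) / (n - p - 1)
    bic = n * np.log(sswr / n) + p * np.log(n)
    return aicc, bic

# compare two hypothetical alternative models fitted to the same 120 observations
print(information_criteria(sswr=85.0, n=120, k=4))
print(information_criteria(sswr=78.0, n=120, k=7))
```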

  10. Understanding Coronal Heating through Time-Series Analysis and Nanoflare Modeling

    NASA Astrophysics Data System (ADS)

    Romich, Kristine; Viall, Nicholeen

    2018-01-01

    Periodic intensity fluctuations in coronal loops, a signature of temperature evolution, have been observed using the Atmospheric Imaging Assembly (AIA) aboard NASA’s Solar Dynamics Observatory (SDO) spacecraft. We examine the proposal that nanoflares, or impulsive bursts of energy release in the solar atmosphere, are responsible for the intensity fluctuations as well as the megakelvin-scale temperatures observed in the corona. Drawing on the work of Cargill (2014) and Bradshaw & Viall (2016), we develop a computer model of the energy released by a sequence of nanoflare events in a single magnetic flux tube. We then use EBTEL (Enthalpy-Based Thermal Evolution of Loops), a hydrodynamic model of plasma response to energy input, to simulate intensity as a function of time across the coronal AIA channels. We test the EBTEL output for periodicities using a spectral code based on Mann and Lees’ (1996) multitaper method and present preliminary results here. Our ultimate goal is to establish whether quasi-continuous or impulsive energy bursts better approximate the original SDO data.

  11. Raman spectroscopic and theoretical study of liquid and solid water within the spectral region 1600-2300 cm-1

    NASA Astrophysics Data System (ADS)

    Kozlovskaya, E. N.; Pitsevich, G. A.; Malevich, A. E.; Doroshenko, O. P.; Pogorelov, V. E.; Doroshenko, I. Yu.; Balevicius, V.; Sablinskas, V.; Kamnev, A. A.

    2018-05-01

    Raman spectra of liquid water and ice were measured at different temperatures. The intensity of the band assigned to bending vibrations of water molecules was observed to decrease at the liquid-to-solid transition, while the Raman line near 2200 cm-1 showed an anomalously high intensity in the solid phase. A tetrahedral model was used for computer analysis of the observed spectral changes. Quantum-chemical calculations of the structure, normal vibrations and Raman spectra in the harmonic approximation, as well as frequencies and intensities of some vibrations using 1D and 2D potential energy surfaces, were carried out using B3LYP with the cc-pVTZ basis set. The influence of the number of hydrogen bonds on the frequency and Raman activity of the bending vibrations was analyzed. The possibility of hydrogen bond weakening upon excitation of the combined bending-rocking vibration due to the large amplitude of this vibration is considered.

  12. Mass preserving registration for lung CT

    NASA Astrophysics Data System (ADS)

    Gorbunova, Vladlena; Lo, Pechin; Loeve, Martine; Tiddens, Harm A.; Sporring, Jon; Nielsen, Mads; de Bruijne, Marleen

    2009-02-01

    In this paper, we evaluate a novel image registration method on a set of expiratory-inspiratory pairs of computed tomography (CT) lung scans. A free-form multi-resolution image registration technique is used to match two scans of the same subject. To account for the differences in the lung intensities due to differences in inspiration level, we propose to adjust the intensity of lung tissue according to the local expansion or compression. An image registration method without intensity adjustment is compared to the proposed method. Both approaches are evaluated on a set of 10 pairs of expiration and inspiration CT scans of children with cystic fibrosis lung disease. The proposed method with mass-preserving adjustment results in significantly better alignment of the vessel trees. Analysis of local volume change for regions with trapped air compared to normally ventilated regions revealed larger differences between these regions in the case of mass-preserving image registration, indicating that mass-preserving registration is better at capturing localized differences in lung deformation.
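
    The mass-preserving adjustment amounts to rescaling the warped lung intensity by the local volume change of the deformation. A rough 3-D sketch of this idea follows (assuming a dense displacement field on the voxel grid and SciPy for interpolation; the sign convention for the Jacobian scaling depends on the direction of the field, and this is not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mass_preserving_warp(moving, displacement):
    """Warp a lung CT volume and rescale intensities by the local Jacobian determinant.

    moving       : 3-D array of tissue intensities (e.g. HU shifted so air is ~0)
    displacement : array of shape (3, *moving.shape) mapping fixed to moving coordinates
    """
    grid = np.indices(moving.shape).astype(np.float64)
    warped = map_coordinates(moving, grid + displacement, order=1)

    # Jacobian determinant of the mapping x -> x + u(x): local volume change
    jac = np.zeros(moving.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = np.gradient(displacement[i], axis=j)
        jac[..., i, i] += 1.0
    det = np.linalg.det(jac)

    # expansion dilutes tissue density, compression concentrates it
    return warped * det
```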

  13. A robust close-range photogrammetric target extraction algorithm for size and type variant targets

    NASA Astrophysics Data System (ADS)

    Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert

    2016-05-01

    The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
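
    As a rough stand-in for how a bimodal intensity distribution can seed target acquisition, a valley (Otsu) threshold followed by connected-component centroids is sketched below with scikit-image (this generic pipeline and its arbitrary area cutoff are assumptions, not the Photo-G bimodal distribution partitioning algorithm):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def acquisition_points(frame_gray, min_area=20):
    """Return centroid coordinates of candidate target regions in a grayscale frame."""
    thresh = threshold_otsu(frame_gray)      # valley between the two intensity modes
    labels = label(frame_gray > thresh)      # connected bright regions
    return np.array([p.centroid for p in regionprops(labels) if p.area >= min_area])
```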

  14. Ignition Prediction of Pressed HMX based on Hotspot Analysis Under Shock Pulse Loading

    NASA Astrophysics Data System (ADS)

    Kim, Seokpum; Miller, Christopher; Horie, Yasuyuki; Molek, Christopher; Welle, Eric; Zhou, Min

    The ignition behavior of pressed HMX under shock pulse loading with a flyer is analyzed using a cohesive finite element method (CFEM) which accounts for large deformation, microcracking, frictional heating, and thermal conduction. The simulations account for the controlled loading of thin-flyer shock experiments with flyer velocities between 1.7 and 4.0 km/s. The study focuses on the computational prediction of ignition threshold using James criterion which involves loading intensity and energy imparted to the material. The predicted thresholds are in good agreement with measurements from shock experiments. In particular, it is found that grain size significantly affects the ignition sensitivity of the materials, with smaller sizes leading to lower energy thresholds required for ignition. In addition, significant stress attenuation is observed in high intensity pulse loading as compared to low intensity pulse loading, which affects density of hotspot distribution. The microstructure-performance relations obtained can be used to design explosives with tailored attributes and safety envelopes.

  15. Rainfall variability over the tropical Pacific from July 1987 through December 1991 as inferred via monthly estimates from SSM/I

    NASA Technical Reports Server (NTRS)

    Berg, Wesley; Avery, Susan K.

    1994-01-01

    Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the Special Sensor Microwave/Imager (SSM/I) for the period from July 1987 through December 1991. The monthly estimates were calibrated using measurements from a network of Pacific atoll rain gauges and compared to other satellite-based rainfall estimation techniques. Based on these monthly estimates, an analysis of the variability of large-scale features over intraseasonal to interannual timescales has been performed. While the major precipitation features as well as the seasonal variability distributions show good agreement with expected values, the presence of a moderately intense El Nino during 1986-87 and an intense La Nina during 1988-89 highlights this time period.

  16. The Ã-X̃ absorption of vinoxy radical revisited: Normal and Herzberg-Teller bands observed via cavity ringdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Thomas, Phillip S.; Chhantyal-Pun, Rabi; Kline, Neal D.; Miller, Terry A.

    2010-03-01

    The Ã-X̃ electronic absorption spectrum of vinoxy radical has been investigated using room temperature cavity ringdown spectroscopy. Analysis of the observed bands on the basis of computed vibrational frequencies and rotational envelopes reveals that two distinct types of features are present with comparable intensities. The first type corresponds to "normal" allowed electronic transitions to the origin and symmetric vibrations in the Ã state. The second type is interpreted in terms of excitations to asymmetric Ã state vibrations, which are only vibronically allowed by Herzberg-Teller coupling to the B̃ state. Results of electronic structure calculations indicate that the magnitude of the Herzberg-Teller coupling is appropriate to produce vibronically induced transitions with intensities comparable to those of the normal bands.

  17. Characterization of amine-functionalized electrode for aqueous carbon dioxide (CO2) direct detection

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi

    2017-03-01

    In this study, the fabrication of an amino-group- and ferrocene-co-modified sensor electrode and the electrochemical detection of carbon dioxide (CO2) in saline solution are reported. Electrochemical detection of CO2 was carried out using cyclic voltammetry in saline solution containing sodium bicarbonate as the CO2 source. Oxidation and reduction peak current intensities computed from the cyclic voltammograms varied as a function of the concentration of CO2 molecules. The calibration curve was obtained by plotting oxidation peak current intensities as a function of CO2 concentration. The sensor electrode prepared in this study can resolve differences in CO2 concentration from that of normal seawater up to concentrations 10 times higher. Furthermore, surface analysis was performed to clarify the CO2 detection mechanism.
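
    A minimal sketch of the calibration step described above, assuming the oxidation peak currents have already been extracted from the voltammograms; the concentrations, currents, and linear form of the fit are illustrative assumptions, not the paper's data.

      import numpy as np

      # Hypothetical calibration data: CO2 concentration (mM) vs. oxidation peak current (uA).
      conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
      i_peak = np.array([1.2, 1.9, 3.5, 6.8, 13.1])

      slope, intercept = np.polyfit(conc, i_peak, 1)   # least-squares calibration line

      def estimate_co2(current_uA):
          """Invert the calibration curve to estimate CO2 concentration from a measured peak current."""
          return (current_uA - intercept) / slope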

  18. Big Data Ecosystems Enable Scientific Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Critchlow, Terence J.; Kleese van Dam, Kerstin

    Over the past 5 years, advances in experimental, sensor and computational technologies have driven the exponential growth in the volumes, acquisition rates, variety and complexity of scientific data. As noted by Hey et al. in their 2009 e-book The Fourth Paradigm, this availability of large quantities of scientifically meaningful data has given rise to a new scientific methodology - data intensive science. Data intensive science is the ability to formulate and evaluate hypotheses using data and analysis to extend, complement and, at times, replace experimentation, theory, or simulation. This new approach to science no longer requires scientists to interact directly with the objects of their research; instead they can utilize digitally captured, reduced, calibrated, analyzed, synthesized and visualized results - allowing them to carry out 'experiments' in data.

  19. Analysis of speckle and material properties in Laider Tracer

    NASA Astrophysics Data System (ADS)

    Ross, Jacob W.; Rigling, Brian D.; Watson, Edward A.

    2017-04-01

    The SAL simulation tool Laider Tracer models speckle: the random variation in intensity of a light beam scattered from a rough surface. Within Laider Tracer, the speckle field is modeled as a 2-D array of jointly Gaussian random variables projected via ray tracing onto the scene of interest. Originally, all materials in Laider Tracer were treated as ideal diffuse scatterers, for which the far-field return is computed using the Lambertian Bidirectional Reflectance Distribution Function (BRDF). As presented here, we implement material properties in Laider Tracer via the Non-conventional Exploitation Factors Data System: a database of properties for thousands of different materials sampled at various wavelengths and incident angles. We verify the intensity behavior as a function of incident angle after material properties are added to the simulation.
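
    A minimal sketch of the two ingredients named above - a jointly Gaussian speckle field and a Lambertian BRDF falloff with incident angle - assuming a simple fully developed speckle model; it is not the Laider Tracer implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      def speckle_field(n, mean_intensity=1.0):
          """Fully developed speckle: intensity = |circular complex Gaussian field|^2."""
          field = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
          i = np.abs(field) ** 2
          return mean_intensity * i / i.mean()          # exponential intensity statistics

      def lambertian_return(incident_angle_rad, rho=0.3):
          """Ideal diffuse (Lambertian) BRDF: reflectance rho/pi, return scales with cos(theta)."""
          return (rho / np.pi) * np.cos(incident_angle_rad)

      intensity = speckle_field(256) * lambertian_return(np.deg2rad(30.0))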

  20. A Simple Tool for the Design and Analysis of Multiple-Reflector Antennas in a Multi-Disciplinary Environment

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.; Borgioli, Andrea

    2000-01-01

    The process of designing and analyzing a multiple-reflector system has traditionally been time-intensive, requiring large amounts of both computational and human time. At many frequencies, a discrete approximation of the radiation integral may be used to model the system. The code which implements this physical optics (PO) algorithm was developed at the Jet Propulsion Laboratory. It analyzes systems of antennas in pairs, and for each pair, the analysis can be computationally time-consuming. Additionally, the antennas must be described using a local coordinate system for each antenna, which makes it difficult to integrate the design into a multi-disciplinary framework in which there is traditionally one global coordinate system, even before considering deforming the antenna as prescribed by external structural and/or thermal factors. Finally, setting up the code to correctly analyze all the antenna pairs in the system can take a fair amount of time, and introduces possible human error. The use of parallel computing to reduce the computational time required for the analysis of a given pair of antennas has been previously discussed. This paper focuses on the other problems mentioned above. It will present a methodology and examples of use of an automated tool that performs the analysis of a complete multiple-reflector system in an integrated multi-disciplinary environment (including CAD modeling, and structural and thermal analysis) at the click of a button. This tool, named MOD Tool (Millimeter-wave Optics Design Tool), has been designed and implemented as a distributed tool, with a client that runs almost identically on Unix, Mac, and Windows platforms, and a server that runs primarily on a Unix workstation and can interact with parallel supercomputers with simple instruction from the user interacting with the client.

  1. A new spherical model for computing the radiation field available for photolysis and heating at twilight

    NASA Technical Reports Server (NTRS)

    Dahlback, Arne; Stamnes, Knut

    1991-01-01

    Accurate computation of atmospheric photodissociation and heating rates is needed in photochemical models. These quantities are proportional to the mean intensity of the solar radiation penetrating to various levels in the atmosphere. For large solar zenith angles a solution of the radiative transfer equation valid for a spherical atmosphere is required in order to obtain accurate values of the mean intensity. Such a solution based on a perturbation technique combined with the discrete ordinate method is presented. Mean intensity calculations are carried out for various solar zenith angles. These results are compared with calculations from a plane parallel radiative transfer model in order to assess the importance of using correct geometry around sunrise and sunset. This comparison shows, in agreement with previous investigations, that for solar zenith angles less than 90 deg adequate solutions are obtained for plane parallel geometry as long as spherical geometry is used to compute the direct beam attenuation; but for solar zenith angles greater than 90 deg this pseudospherical plane parallel approximation overestimates the mean intensity.
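
    For reference, the proportionality asserted above can be written explicitly (a standard textbook form, not quoted from this paper): the photodissociation rate coefficient of a species is a wavelength integral over the actinic flux, which is 4π times the mean intensity,

      \[ J = \int \sigma(\lambda)\, \phi(\lambda)\, 4\pi \bar{I}(\lambda)\, d\lambda , \]

    where \(\sigma\) is the absorption cross section, \(\phi\) the photodissociation quantum yield, and \(\bar{I}\) the mean intensity at the altitude of interest, which is why errors in the computed mean intensity at twilight feed directly into photolysis and heating rates.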

  2. Analysis of Coordinated Observations in the Region of the Day Side Polar Cleft

    DTIC Science & Technology

    1988-04-01

    measurements in the invariant latitude (Reiff, 1982; Luhmann et al., 1984; Chiu et al., 1985) range between 70° and 80° with about 25-min time resolu... light line) AL index underneath (figure courtesy of R. L. McPherron). comparison between the measured AL index and the computed AL index. The top...This is also a region where irregular magnetic pulsations measured by ground-based magnetometers often occur. The intensity of these

  3. Usage of "Powergraph" software at laboratory lessons of "general physics" department of MEPhI

    NASA Astrophysics Data System (ADS)

    Klyachin, N. A.; Matronchik, A. Yu.; Khangulyan, E. V.

    2017-01-01

    The usage of the "PowerGraph" software in the laboratory exercise "Study of sodium spectrum" of the physical experiment lessons is considered. Together with the design of the experimental setup, the sodium spectra digitized with a computer audio chip are discussed. Usage of the "PowerGraph" software in the laboratory experiment "Study of sodium spectrum" allows an efficient visualization of the sodium spectrum and analysis of its fine structure. In particular, it allows quantitative measurements of the wavelengths and relative line intensities.

  4. Thermal acoustic oscillations, volume 2. [cryogenic fluid storage]

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Sims, W. H.; Fan, C.

    1975-01-01

    A number of thermal acoustic oscillation phenomena and their effects on cryogenic systems were studied. The conditions which cause or suppress oscillations, the frequency, amplitude and intensity of oscillations when they exist, and the heat loss they induce are discussed. Methods of numerical analysis utilizing the digital computer were developed for use in cryogenic systems design. In addition, an experimental verification program was conducted to study oscillation wave characteristics and boiloff rate. The data were then reduced and compared with the analytical predictions.

  5. Finite-element analysis of dynamic fracture

    NASA Technical Reports Server (NTRS)

    Aberson, J. A.; Anderson, J. M.; King, W. W.

    1976-01-01

    Applications of the finite element method to the two dimensional elastodynamics of cracked structures are presented. Stress intensity factors are computed for two problems involving stationary cracks. The first serves as a vehicle for discussing lumped-mass and consistent-mass characterizations of inertia. In the second problem, the behavior of a photoelastic dynamic tear test specimen is determined for the time prior to crack propagation. Some results of a finite element simulation of rapid crack propagation in an infinite body are discussed.

  6. Electromagnetic field scattering by a triangular aperture.

    PubMed

    Harrison, R E; Hyman, E

    1979-03-15

    The multiple Laplace transform has been applied to analysis and computation of scattering by a double triangular aperture. Results are obtained which match far-field intensity distributions observed in experiments. Arbitrary polarization components, as well as in-phase and quadrature-phase components, may be determined, in the transform domain, as a continuous function of distance from near to far-field for any orientation, aperture, and transformable waveform. Numerical results are obtained by application of numerical multiple inversions of the fully transformed solution.

  7. Laser Velocimeter Measurements and Analysis in Turbulent Flows with Combustion. Part 2.

    DTIC Science & Technology

    1983-07-01

    sampling error for this sample size. Mean velocities and turbulence intensities were found to be statistically accurate to ±1% and 13%, respectively...Although the statistical error was found to be rather small (±1% for mean velocities and 13% for turbulence intensities), there can be additional..."Computational and Experimental Study of a Captive Annular Eddy," Journal of Fluid Mechanics, Vol. 28, pt. 1, pp. 43-63, 12 April 1967.

  8. Further studies using matched filter theory and stochastic simulation for gust loads prediction

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd III

    1993-01-01

    This paper describes two analysis methods -- one deterministic, the other stochastic -- for computing maximized and time-correlated gust loads for aircraft with nonlinear control systems. The first method is based on matched filter theory; the second is based on stochastic simulation. The paper summarizes the methods, discusses the selection of gust intensity for each method and presents numerical results. A strong similarity between the results from the two methods is seen to exist for both linear and nonlinear configurations.

  9. The Montage architecture for grid-enabled science processing of large, distributed datasets

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui

    2004-01-01

    Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and Space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive, and are stored in distributed archives that are, in most cases, remote from the available computational resources. Therefore, state-of-the-art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and Space science communities.

  10. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  11. [Mobile computing in anaesthesiology and intensive care medicine. The practical relevance of portable digital assistants].

    PubMed

    Pazhur, R J; Kutter, B; Georgieff, M; Schraag, S

    2003-06-01

    Portable digital assistants (PDAs) may be of value to the anaesthesiologist as developments in medical care move towards "bedside computing". Many different portable computers are currently available, and it is now possible for the physician to carry a mobile computer at all times. It is database, reference book, patient tracking aid, date planner, computer, book, magazine, calculator and much more in one mobile device. With the help of a PDA, information that is required for our work may be available at all times and everywhere at the point of care within seconds. In this overview, the possibilities for the use of PDAs in anaesthesia and intensive care medicine are discussed. Developments in other countries and possibilities for use, but also problems such as data security and network technology, are evaluated.

  12. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC that requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of the MLMC in the assessment can be outstanding. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and consistent with the standard MC estimation. But compared to the standard MC, the MLMC greatly reduces the computational costs.
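
    A minimal sketch of the multilevel Monte Carlo idea underlying the analysis above: estimate an expectation as a coarse-level estimate plus a telescoping sum of level corrections, spending many samples on cheap levels and few on expensive ones. The forward model, parameter distribution, and sample counts below are illustrative assumptions, not the paper's subsurface simulator.

      import numpy as np

      rng = np.random.default_rng(1)

      def model(theta, level):
          """Hypothetical forward model evaluated at fidelity `level` (higher = finer, costlier)."""
          n_steps = 2 ** (level + 4)
          t = np.linspace(0.0, 1.0, n_steps)
          return np.trapz(np.exp(-theta * t), t)        # crude quadrature as a stand-in

      def mlmc_mean(n_samples_per_level):
          """Telescoping MLMC estimator of E[model(theta, L)] with theta ~ U(0, 1)."""
          estimate = 0.0
          for level, n in enumerate(n_samples_per_level):
              thetas = rng.uniform(0.0, 1.0, size=n)
              fine = np.array([model(th, level) for th in thetas])
              if level == 0:
                  estimate += fine.mean()
              else:
                  coarse = np.array([model(th, level - 1) for th in thetas])
                  estimate += (fine - coarse).mean()     # level correction, same random inputs
          return estimate

      print(mlmc_mean([4000, 400, 40]))                  # many cheap samples, few expensive ones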

  13. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for computation of three-dimensional magnetostatic fields is introduced. In this method, the curl-component of the magnetic field intensity is computed from a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. This method allows one to include iron portions sandwiched in between conductors within partitioned current-carrying subregions. The method is most suited for large-scale global-type 3-D magnetostatic field computations in electrical devices, and in particular rotating electric machinery.
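
    In the standard reduced-potential form of this idea (written here for reference, not quoted from the paper), the field intensity is split into a source part computed from the currents and a gradient part solved globally:

      \[ \mathbf{H} = \mathbf{H}_s - \nabla\Omega, \qquad \nabla \times \mathbf{H}_s = \mathbf{J}, \qquad \nabla \cdot \big[ \mu \, (\mathbf{H}_s - \nabla\Omega) \big] = 0 , \]

    where \(\mathbf{H}_s\) is the curl component (here obtained from a reduced magnetic vector potential) and \(\Omega\) is the global magnetic scalar potential solved by the finite-element method over the whole region, including the iron portions.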

  14. Key Lessons in Building "Data Commons": The Open Science Data Cloud Ecosystem

    NASA Astrophysics Data System (ADS)

    Patterson, M.; Grossman, R.; Heath, A.; Murphy, M.; Wells, W.

    2015-12-01

    Cloud computing technology has created a shift around data and data analysis by allowing researchers to push computation to data as opposed to having to pull data to an individual researcher's computer. Subsequently, cloud-based resources can provide unique opportunities to capture computing environments used both to access raw data in its original form and also to create analysis products which may be the source of data for tables and figures presented in research publications. Since 2008, the Open Cloud Consortium (OCC) has operated the Open Science Data Cloud (OSDC), which provides scientific researchers with computational resources for storing, sharing, and analyzing large (terabyte and petabyte-scale) scientific datasets. OSDC has provided compute and storage services to over 750 researchers in a wide variety of data intensive disciplines. Recently, internal users have logged about 2 million core hours each month. The OSDC also serves the research community by colocating these resources with access to nearly a petabyte of public scientific datasets in a variety of fields also accessible for download externally by the public. In our experience operating these resources, researchers are well served by "data commons," meaning cyberinfrastructure that colocates data archives, computing, and storage infrastructure and supports essential tools and services for working with scientific data. In addition to the OSDC public data commons, the OCC operates a data commons in collaboration with NASA and is developing a data commons for NOAA datasets. As cloud-based infrastructures for distributing and computing over data become more pervasive, we ask, "What does it mean to publish data in a data commons?" Here we present the OSDC perspective and discuss several services that are key in architecting data commons, including digital identifier services.

  15. Low Latency Workflow Scheduling and an Application of Hyperspectral Brightness Temperatures

    NASA Astrophysics Data System (ADS)

    Nguyen, P. T.; Chapman, D. R.; Halem, M.

    2012-12-01

    New system analytics for Big Data computing holds the promise of major scientific breakthroughs and discoveries from the exploration and mining of the massive data sets becoming available to the science community. However, such data intensive scientific applications face severe challenges in accessing, managing and analyzing petabytes of data. While the Hadoop MapReduce environment has been successfully applied to data intensive problems arising in business, there are still many scientific problem domains where limitations in the functionality of MapReduce systems prevent its wide adoption by those communities. This is mainly because MapReduce does not readily support the unique science discipline needs such as special science data formats, graphic and computational data analysis tools, maintaining high degrees of computational accuracy, and interfacing with an application's existing components across heterogeneous computing processors. We address some of these limitations by exploiting the MapReduce programming model for satellite data intensive scientific problems and address scalability, reliability, scheduling, and data management issues when dealing with climate data records and their complex observational challenges. In addition, we will present techniques to support the unique Earth science discipline needs such as dealing with special science data formats (HDF and NetCDF). We have developed a Hadoop task scheduling algorithm that improves latency by 2x for a scientific workflow including the gridding of the EOS AIRS hyperspectral Brightness Temperatures (BT). This workflow processing algorithm has been tested on the Multicore Computing Center's private Hadoop-based Intel Nehalem cluster, as well as in a virtual mode under the open-source Eucalyptus cloud. The 55 TB AIRS hyperspectral L1b Brightness Temperature record has been gridded at a resolution of 0.5x1.0 degrees, and we have computed a 0.9 annual anti-correlation to the El Nino Southern Oscillation in the Nino 4 region, as well as a 1.9 Kelvin decadal Arctic warming in the 4 μm and 12 μm spectral regions. Additionally, we will present the frequency of extreme global warming events by the use of a normalized maximum BT in a grid cell relative to its local standard deviation, as sketched below. A low-latency Hadoop scheduling environment maintains data integrity and fault tolerance in a MapReduce data intensive cloud environment while improving the "time to solution" metric by 35% when compared to a more traditional parallel processing system for the same dataset. Our next step will be to improve the usability of our Hadoop task scheduling system, to enable rapid prototyping of data intensive experiments by means of processing "kernels". We will report on the performance and experience of implementing these experiments on the NEX testbed, and propose the use of a graphical directed acyclic graph (DAG) interface to help us develop on-demand scientific experiments. Our workflow system works within the Hadoop infrastructure as a replacement for the FIFO or FairScheduler; thus the use of Apache Pig Latin or other Apache tools may also be worth investigating on the NEX system to improve the usability of our workflow scheduling infrastructure for rapid experimentation.
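
    A minimal sketch of the extreme-event screening mentioned above - flagging grid cells whose maximum brightness temperature is large relative to that cell's own variability - assuming gridded BT values in a NumPy array; the toy grid size and the threshold are illustrative assumptions, not the AIRS workflow itself.

      import numpy as np

      def extreme_event_mask(bt, threshold=3.0):
          """Flag grid cells where the max brightness temperature over time is unusually
          high relative to that cell's own variability (normalized maximum BT)."""
          # bt has shape (time, lat, lon); statistics are taken per grid cell.
          mean = bt.mean(axis=0)
          std = bt.std(axis=0)
          normalized_max = (bt.max(axis=0) - mean) / (std + 1e-9)
          return normalized_max > threshold

      # Toy usage: a small synthetic monthly grid in place of the gridded AIRS record.
      bt = 250.0 + 5.0 * np.random.default_rng(2).standard_normal((120, 90, 90))
      frequency = extreme_event_mask(bt).mean()          # fraction of flagged grid cells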

  16. Data Scientists ARE coming of age: but WHERE are they coming from?

    NASA Astrophysics Data System (ADS)

    Evans, N.; Bastrakova, I.; Connor, N.; Raymond, O.; Wyborn, L. A.

    2013-12-01

    The fourth paradigm of data intensive science is upon us: a new fundamental scientific methodology has emerged which is underpinned by the capability to analyse large volumes of data using advanced computational capacities. This combination is enabling earth and space scientists to respond to decadal challenges on issues such as the sustainable development of our natural resources, impacts of climate change and protection from natural hazards. Fundamental to the data intensive paradigm is data that are readily accessible and capable of being integrated and amalgamated with other data, often from multiple sources. For many years Earth and Space science practitioners have been drowning in a data deluge. In many cases, either lacking confidence in their capability and/or not having the time or capacity to manage these data assets, they have called in the data professionals. However, such people rarely had domain knowledge of the data they were dealing with, and before long it emerged that although the 'containers' of data were now much better managed and documented, in reality the content was locked up and difficult to access, particularly for HPC environments where national to global scale problems were being addressed. Geoscience Australia (GA) is the custodian of over 4 PB of geoscientific data and is a key provider of evidence-based, scientific advice to government on national issues. Since 2011, in collaboration with the CSIRO Minerals Down Under Program and the National Computational Infrastructure, GA has begun a series of data intensive scientific research pilots that focussed on applying advanced ICT tools and technologies to enhance scientific outcomes for the agency, in particular, national scale analysis of data sets that can be up to 500 TB in size. As in any change program, a small group of innovators and early adopters took up the challenge of data intensive science and quickly showed that GA was able to use new ICT technologies to exploit an information-rich world to undertake applied research and to deliver new business outcomes in ways that current technologies do not allow. The innovators clearly had the necessary skills to rapidly adapt to data intensive techniques. However, if we were to scale out to the rest of the organisation, we needed to quantify these skills. The Strategic People Development Section of GA agreed to: * conduct a capability analysis of the scientific staff that participated in the pilot projects, including a review of university training and post graduate training; and * conduct a capability analysis of the technical groups involved in the pilot projects. The analysis identified the need for multi-disciplinary teams across the spectrum from pure scientists to pure ICT staff, along with a key hybrid role - the Data Scientist, who has a greater capacity in mathematical, numerical modelling, statistics, computational skills, software engineering and spatial skills and the ability to integrate data across multiple domains. To fill the emerging gap, GA is asking the questions: how do we find or develop this capability, can we successfully transform the Scientist or the ICT Professional, and are our educational facilities modifying their training? It is certainly leading GA to acknowledge, formalise, and promote a continuum of skills and roles, changing our recruitment, re-assignment and Learning and Development strategic decisions.

  17. Conformational analysis and circular dichroism of bilirubin, the yellow pigment of jaundice

    NASA Astrophysics Data System (ADS)

    Lightner, David A.; Person, Richard; Peterson, Blake; Puzicha, Gisbert; Pu, Yu-Ming; Bojadziev, Stefan

    1991-06-01

    Conformational analysis of (4Z,15Z)-bilirubin-IXα by molecular mechanics computations reveals a global energy minimum folded conformation. Powerful added stabilization is achieved through intramolecular hydrogen bonding. Theoretical treatment of bilirubin as a molecular exciton predicts an intense bisignate circular dichroism spectrum for the folded conformation: Δε ≅ 270 L mol⁻¹ cm⁻¹ for the ~450 nm electronic transition(s). Synthesis of bilirubin analogs with propionic acid groups methylated at the α or β position introduces an allosteric effect that allows for an optical resolution of the pigments, with enantiomers exhibiting the theoretically predicted circular dichroism.

  18. Social, organizational, and contextual characteristics of clinical decision support systems for intensive insulin therapy: a literature review and case study.

    PubMed

    Campion, Thomas R; Waitman, Lemuel R; May, Addison K; Ozdas, Asli; Lorenzi, Nancy M; Gadd, Cynthia S

    2010-01-01

    Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. Copyright (c) 2009. Published by Elsevier Ireland Ltd.

  19. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, performing the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
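
    For orientation, a minimal sketch of a Gerchberg-Saxton-style (error-reduction) focal-plane phase-retrieval iteration, the family of algorithms that MGS extends; the FFT-heavy inner loop is the part that maps naturally onto GPU stream processing. The pupil support, synthetic PSF, and iteration count are illustrative assumptions, not the AAMGS flight code.

      import numpy as np

      def retrieve_phase(measured_intensity, pupil_support, n_iter=200):
          """Estimate the exit-pupil phase from a single focal-plane intensity image."""
          amp_focal = np.sqrt(measured_intensity)
          pupil = pupil_support.astype(complex)                     # initial guess: flat phase
          for _ in range(n_iter):
              focal = np.fft.fft2(pupil)
              focal = amp_focal * np.exp(1j * np.angle(focal))      # impose measured focal-plane amplitude
              pupil = np.fft.ifft2(focal)
              pupil = pupil_support * np.exp(1j * np.angle(pupil))  # impose known pupil support
          return np.angle(pupil) * pupil_support                    # retrieved wavefront phase estimate

      # Toy usage: circular pupil and a synthetic PSF from an aberrated pupil.
      n = 128
      y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
      support = (x ** 2 + y ** 2 < (n // 4) ** 2).astype(float)
      psf = np.abs(np.fft.fft2(support * np.exp(1j * 0.5 * np.sin(x / 10.0)))) ** 2
      phase_estimate = retrieve_phase(psf, support)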

  20. Adaptive control of turbulence intensity is accelerated by frugal flow sampling.

    PubMed

    Quinn, Daniel B; van Halder, Yous; Lentink, David

    2017-11-01

    The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed at the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in less than 12 iterations within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance. © 2017 The Author(s).
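
    A minimal sketch of an online Newton-Raphson update of the kind described above, assuming a single actuation parameter, a callable that returns anemometer velocity samples, and a finite-difference gradient; all names and values are illustrative, not the authors' wind-tunnel interface.

      import numpy as np

      def turbulence_intensity(u):
          """Turbulence intensity of a velocity record: std(u) / mean(u)."""
          return np.std(u) / np.mean(u)

      def tune_turbulence(read_velocity, set_actuator, target_ti, x0, n_iter=12, dx=0.02):
          """Online Newton-Raphson: drive the measured turbulence intensity toward target_ti."""
          x = x0
          for _ in range(n_iter):
              set_actuator(x)
              f = turbulence_intensity(read_velocity()) - target_ti
              set_actuator(x + dx)                              # probe step for the gradient
              f_plus = turbulence_intensity(read_velocity()) - target_ti
              dfdx = (f_plus - f) / dx
              if abs(dfdx) < 1e-9:
                  break
              x = x - f / dfdx                                  # Newton-Raphson update
          return x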

  1. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries as yet unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe both at very early and recent times. Modern data has allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions drawn about current cosmological theories. Also, it can be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. This work develops a novel technique for both avoiding the use of approximate computational codes and allowing the application of new, more precise analysis methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.
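
    A minimal sketch of the train-then-emulate idea described above: run the expensive code on a precomputed parameter grid in a parallel offline step, fit a fast surrogate, and call the surrogate inside the likelihood loop. The expensive function, parameter range, and choice of a polynomial fit are all illustrative assumptions, not the thesis' machine learning code.

      import numpy as np

      def expensive_observable(theta):
          """Stand-in for a costly Boltzmann/recombination-code evaluation at parameter theta."""
          return np.exp(-0.5 * theta) * np.cos(3.0 * theta)

      # Offline, parallelizable training step: sample the parameter space and tabulate the observable.
      train_theta = np.linspace(0.0, 2.0, 50)
      train_y = np.array([expensive_observable(t) for t in train_theta])

      # Cheap emulator: here a least-squares polynomial; a neural network or Gaussian process
      # could be swapped in without changing the surrounding parameter-estimation code.
      emulator = np.poly1d(np.polyfit(train_theta, train_y, deg=8))

      # Inside the parameter-estimation loop, the emulator replaces the expensive call.
      theta_test = 1.234
      print(emulator(theta_test), expensive_observable(theta_test))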

  2. Age-Related Differences in Muscle Fatigue Vary by Contraction Type: A Meta-analysis

    PubMed Central

    Avin, Keith G.

    2011-01-01

    Background During senescence, despite the loss of strength (force-generating capability) associated with sarcopenia, muscle endurance may improve for isometric contractions. Purpose The purpose of this study was to perform a systematic meta-analysis of young versus older adults, considering likely moderators (ie, contraction type, joint, sex, activity level, and task intensity). Data Sources A 2-stage systematic review identified potential studies from PubMed, CINAHL, PEDro, EBSCOhost: ERIC, EBSCOhost: Sportdiscus, and The Cochrane Library. Study Selection Studies reporting fatigue tasks (voluntary activation) performed at a relative intensity in both young (18–45 years of age) and old (≥55 years of age) adults who were healthy were considered. Data Extraction Sample size, mean and variance outcome data (ie, fatigue index or endurance time), joint, contraction type, task intensity (percentage of maximum), sex, and activity levels were extracted. Data Synthesis Effect sizes were (1) computed for all data points; (2) subgrouped by contraction type, sex, joint or muscle group, intensity, or activity level; and (3) further subgrouped between contraction type and the remaining moderators. Out of 3,457 potential studies, 46 publications (with 78 distinct effect size data points) met all inclusion criteria. Limitations A lack of available data limited subgroup analyses (ie, sex, intensity, joint), as did a disproportionate spread of data (most intensities ≥50% of maximum voluntary contraction). Conclusions Overall, older adults were able to sustain relative-intensity tasks significantly longer or with less force decay than younger adults (effect size=0.49). However, this age-related difference was present only for sustained and intermittent isometric contractions, whereas this age-related advantage was lost for dynamic tasks. When controlling for contraction type, the additional modifiers played minor roles. Identifying muscle endurance capabilities in the older adult may provide an avenue to improve functional capabilities, despite a clearly established decrement in peak torque. PMID:21616932

  3. Morphological cladistic analysis of eight popular Olive (Olea europaea L.) cultivars grown in Saudi Arabia using Numerical Taxonomic System for personal computer to detect phyletic relationship and their proximate fruit composition

    PubMed Central

    Al-Ruqaie, I.; Al-Khalifah, N.S.; Shanavaskhan, A.E.

    2015-01-01

    Varietal identification of olives is an intrinsic and empirical exercise owing to the large number of synonyms and homonyms, intensive exchange of genotypes, presence of varietal clones and lack of proper certification in nurseries. A comparative study of morphological characters of eight olive cultivars grown in Saudi Arabia was carried out and analyzed using the NTSYSpc (Numerical Taxonomy System for personal computer) system, which segregated smaller fruits into one clade and the rest into two clades. Koroneiki, a Greek cultivar with small-sized fruit, shared an arm with the Spanish variety Arbosana. Morphologic analysis using NTSYSpc revealed that biometrics of leaves, fruits and seeds are reliable morphologic characters to distinguish between varieties, except for a few morphologically very similar olive cultivars. The proximate analysis showed significant variations in the protein, fiber, crude fat, ash and moisture content of different cultivars. The study also showed that neither the size of fruit nor the fruit pulp thickness is a limiting factor determining crude fat content of olives. PMID:26858547

  4. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  5. Exergy analysis of helium liquefaction systems based on modified Claude cycle with two-expanders

    NASA Astrophysics Data System (ADS)

    Thomas, Rijo Jacob; Ghosh, Parthasarathi; Chowdhury, Kanchan

    2011-06-01

    Large-scale helium liquefaction systems, being energy-intensive, demand judicious selection of process parameters. An effective tool for the design and analysis of thermodynamic cycles for these systems is exergy analysis, which is used here to study the behavior of a helium liquefaction system based on a modified Claude cycle. Parametric evaluation using the process simulator Aspen HYSYS® helps to identify the effects of cycle pressure ratio and expander flow fraction on the exergetic efficiency of the liquefaction cycle. The study computes the distribution of losses at different refrigeration stages of the cycle and helps in selecting optimum cycle pressures, operating temperature levels of expanders and mass flow rates through them. Results from the analysis may help evolve guidelines for designing appropriate thermodynamic cycles for practical helium liquefaction systems.
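
    For reference (a standard definition, not quoted from the paper), the exergy bookkeeping behind such an analysis uses the specific flow exergy of each stream relative to the ambient dead state \((T_0, p_0)\),

      \[ e = (h - h_0) - T_0 \, (s - s_0) , \]

    so the loss at each refrigeration stage is the exergy entering minus the exergy leaving that stage, and the exergetic efficiency of the liquefier compares the minimum (reversible) work of liquefaction with the actual compressor work.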

  6. Parent-child attitude congruence on type and intensity of physical activity: testing multiple mediators of sedentary behavior in older children.

    PubMed

    Anderson, Cheryl B; Hughes, Sheryl O; Fuemmeler, Bernard F

    2009-07-01

    This study examined parent-child attitudes on the value of specific types and intensities of physical activity, which may explain gender differences in child activity, and evaluated physical activity as a mechanism to reduce time spent in sedentary behaviors. A community sample of 681 parents and 433 children (mean age 9.9 years) reported attitudes on the importance of vigorous and moderate intensity team and individually performed sports/activities, as well as household chores. Separate structural models (LISREL 8.7) for girls and boys tested whether parental attitudes were related to child TV and computer use via child attitudes, sport team participation, and physical activity, controlling for demographic factors. Outcome measures were child 7-day physical activity, sport team participation, weekly TV, and computer use. Parent-child attitude congruence was more prevalent among boys, and attitudes varied by ethnicity, parent education, and number of children. Positive parent-child attitudes for vigorous team sports were related to increased team participation and physical activity, as well as reduced TV and computer use in boys and girls. Value of moderate intensity household chores, such as cleaning house and doing laundry, was related to decreased team participation and increased TV in boys. Only organized team sports, not general physical activity, was related to reduced TV and computer use. Results support parents' role in socializing children's achievement task values, affecting child activity by transferring specific attitudes. Value of vigorous intensity sports provided the most benefits to activity and reduction of sedentary behavior, while valuing household chores had unexpected negative effects.

  7. Integrating Computing across the Curriculum: The Impact of Internal Barriers and Training Intensity on Computer Integration in the Elementary School Classroom

    ERIC Educational Resources Information Center

    Coleman, LaToya O.; Gibson, Philip; Cotten, Shelia R.; Howell-Moroney, Michael; Stringer, Kristi

    2016-01-01

    This study examines the relationship between internal barriers, professional development, and computer integration outcomes among a sample of fourth- and fifth-grade teachers in an urban, low-income school district in the Southeastern United States. Specifically, we examine the impact of teachers' computer attitudes, computer anxiety, and computer…

  8. Overview 1993: Computational applications

    NASA Technical Reports Server (NTRS)

    Benek, John A.

    1993-01-01

    Computational applications include projects that apply or develop computationally intensive computer programs. Such programs typically require supercomputers to obtain solutions in a timely fashion. This report describes two CSTAR projects involving Computational Fluid Dynamics (CFD) technology. The first, the Parallel Processing Initiative, is a joint development effort and the second, the Chimera Technology Development, is a transfer of government developed technology to American industry.

  9. Evaluating virtual hosted desktops for graphics-intensive astronomy

    NASA Astrophysics Data System (ADS)

    Meade, B. F.; Fluke, C. J.

    2018-04-01

    Visualisation of data is critical to understanding astronomical phenomena. Today, many instruments produce datasets that are too big to be downloaded to a local computer, yet many of the visualisation tools used by astronomers are deployed only on desktop computers. Cloud computing is increasingly used to provide a computation and simulation platform in astronomy, but it also offers great potential as a visualisation platform. Virtual hosted desktops, with graphics processing unit (GPU) acceleration, allow interactive, graphics-intensive desktop applications to operate co-located with astronomy datasets stored in remote data centres. By combining benchmarking and user experience testing, with a cohort of 20 astronomers, we investigate the viability of replacing physical desktop computers with virtual hosted desktops. In our work, we compare two Apple MacBook computers (one old and one new, representing hardware at opposite ends of the useful lifetime) with two virtual hosted desktops: one commercial (Amazon Web Services) and one in a private research cloud (the Australian NeCTAR Research Cloud). For two-dimensional image-based tasks and graphics-intensive three-dimensional operations - typical of astronomy visualisation workflows - we found that benchmarks do not necessarily provide the best indication of performance. When compared to typical laptop computers, virtual hosted desktops can provide a better user experience, even with lower performing graphics cards. We also found that virtual hosted desktops are equally simple to use, provide greater flexibility in choice of configuration, and may actually be a more cost-effective option for typical usage profiles.

  10. Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru; VanDalsem, William (Technical Monitor)

    1994-01-01

    Aeroelasticity, which involves strong coupling of fluids, structures and controls, is an important element in designing an aircraft. Computational aeroelasticity using low fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex induced aeroelastic oscillations, whereas AST can experience transonic buffet associated structural oscillations. Both aircraft may experience a dip in the flutter speed at the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high fidelity equations such as the Navier-Stokes for fluids and the finite-elements for structures are needed. Computations using these high fidelity equations require large computational resources both in memory and speed. Conventional supercomputers have reached their limitations both in memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers. The paper will address special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high resolution finite-element structural equations.

  11. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
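
    For context, the classical two-phase Chan-Vese functional (written here for reference rather than as the paper's 3D robust variant) segments an image I by evolving a level set function φ to minimize

      \[ E(c_1, c_2, \phi) = \mu \int |\nabla H(\phi)|\,dx + \lambda_1 \int |I - c_1|^2 H(\phi)\,dx + \lambda_2 \int |I - c_2|^2 \big(1 - H(\phi)\big)\,dx , \]

    where \(H\) is the Heaviside function and \(c_1, c_2\) are the mean intensities inside and outside the contour; the robust variant described above replaces the global means with means over a local neighborhood of controllable scale, which is what confers the robustness to noise.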

  12. Comparison of fatigue crack growth of riveted and bonded aircraft lap joints made of Aluminium alloy 2024-T3 substrates - A numerical study

    NASA Astrophysics Data System (ADS)

    Pitta, S.; Rojas, J. I.; Crespo, D.

    2017-05-01

    Aircraft lap joints play an important role in minimizing the operational cost of airlines. Hence, airlines pay close attention to these technologies to improve efficiency. Namely, a major time-consuming and costly process is maintenance of aircraft between flights, for instance, to detect early formation of cracks, monitor crack growth, and fix the corresponding parts with joints, if necessary. This work is focused on the study of repairs of cracked aluminium alloy (AA) 2024-T3 plates to regain their original strength; particularly, cracked AA 2024-T3 substrate plates repaired with doublers of AA 2024-T3 in two configurations (riveted and adhesively bonded) are analysed. The fatigue life of the substrate plates with cracks of 1, 2, 5, 10 and 12.7 mm is computed using the Fracture Analysis 3D (FRANC3D) tool. The stress intensity factors for the repaired AA 2024-T3 plates are computed for different crack lengths and compared using the commercial FEA tool ABAQUS. The results for the bonded repairs showed significantly lower stress intensity factors compared with the riveted repairs. This improves the overall fatigue life of the bonded joint.
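
    For reference, fatigue life estimates of this kind typically integrate a Paris-type crack growth law driven by the computed stress intensity factor range (a standard relation, not quoted from the paper):

      \[ \frac{da}{dN} = C\,(\Delta K)^m , \qquad N = \int_{a_0}^{a_f} \frac{da}{C\,[\Delta K(a)]^m} , \]

    where \(a\) is the crack length, \(C\) and \(m\) are material constants, and \(\Delta K(a)\) is the stress intensity factor range; lower stress intensity factors for the bonded doubler therefore translate directly into slower crack growth and longer fatigue life.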

  13. Application of ERTS-1 data to the protection and management of New Jersey's coastal environment

    NASA Technical Reports Server (NTRS)

    Yunghans, R. S.; Feinberg, E. B.; Wobber, F. J.; Mairs, R. L. (Principal Investigator); Macomber, R. T.; Stanczuk, D.; Stitt, J. A.

    1974-01-01

    The author has identified the following significant results. Rapid access to ERTS data was provided by NASA GSFC for the February 26, 1974 overpass of the New Jersey test site. Forty-seven hours following the overpass, computer-compatible tapes were ready for processing at EarthSat. The finished product was ready just 60 hours following the overpass and delivered to the New Jersey Department of Environmental Protection. This operational demonstration has been successful in convincing NJDEP of the worth of ERTS as an operational monitoring and enforcement tool of significant value to the State. An erosion/accretion severity index has been developed for the New Jersey shore case study area. Computerized analysis techniques have been used for monitoring offshore waste disposal dumping locations, drift vectors, and dispersion rates in the New York Bight area. A computer shade print of the area was used to identify intensity levels of acid waste. A Litton intensity slice print was made to provide a graphic presentation of dispersion characteristics and the dump extent. Continued monitoring will lead to the recommendation and justification of permanent dumping sites which pose no threat to water quality in nearshore environments.

  14. Topological Vulnerability Analysis

    NASA Astrophysics Data System (ADS)

    Jajodia, Sushil; Noel, Steven

    Traditionally, network administrators rely on labor-intensive processes for tracking network configurations and vulnerabilities. This requires a great deal of expertise, and is error prone because of the complexity of networks and associated security data. The interdependencies of network vulnerabilities make traditional point-wise vulnerability analysis inadequate. We describe a Topological Vulnerability Analysis (TVA) approach that analyzes vulnerability dependencies and shows all possible attack paths into a network. From models of the network vulnerabilities and potential attacker exploits, we compute attack graphs that convey the impact of individual and combined vulnerabilities on overall security. TVA finds potential paths of vulnerability through a network, showing exactly how attackers may penetrate a network. From this, we identify key vulnerabilities and provide strategies for protection of critical network assets.
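
    A minimal sketch of the attack-graph idea described above, assuming a toy model in which nodes are (host, privilege) states and directed edges are exploits whose preconditions are met; the networkx-based path enumeration and the exploit labels are illustrative stand-ins, not the TVA tool itself.

      import networkx as nx

      # Toy attack graph: nodes are (host, privilege) states, edges are exploit steps.
      g = nx.DiGraph()
      g.add_edge(("attacker", "none"), ("web01", "user"), exploit="hypothetical web app RCE")
      g.add_edge(("web01", "user"), ("web01", "root"), exploit="local privilege escalation")
      g.add_edge(("web01", "root"), ("db01", "root"), exploit="reused admin credentials")

      # Enumerate every attack path from the attacker's starting state to the critical asset.
      target = ("db01", "root")
      for path in nx.all_simple_paths(g, ("attacker", "none"), target):
          steps = [g.edges[u, v]["exploit"] for u, v in zip(path, path[1:])]
          print(" -> ".join(host for host, _ in path), "|", "; ".join(steps))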

  15. Vibrational analysis and quantum chemical calculations of 2,2‧-bipyridine Zinc(II) halide complexes

    NASA Astrophysics Data System (ADS)

    Ozel, Aysen E.; Kecel, Serda; Akyuz, Sevim

    2007-05-01

    In this study the molecular structure and vibrational spectra of Zn(2,2'-bipyridine)X2 (X = Cl and Br) complexes were studied in their ground states by computational vibrational study and scaled quantum mechanical (SQM) analysis. The geometry optimization, vibrational wavenumber and intensity calculations of free and coordinated 2,2'-bipyridine were carried out with the Gaussian03 program package by using Hartree-Fock (HF) and Density Functional Theory (DFT) with the B3LYP functional and the 6-31G(d,p) basis set. The total energy distributions (TED) of the vibrational modes were calculated by using Scaled Quantum Mechanical (SQM) analysis. Fundamentals were characterised by their total energy distributions. Coordination-sensitive modes of 2,2'-bipyridine were determined.

  16. Laboratory studies, analysis, and interpretation of the spectra of hydrocarbons present in planetary atmospheres including cyanoacetylene, acetylene, propane, and ethane

    NASA Technical Reports Server (NTRS)

    Blass, William E.; Daunt, Stephen J.; Peters, Antoni V.; Weber, Mark C.

    1990-01-01

    Combining broadband Fourier transform spectrometer (FTS) data from the McMath facility at NSO and from NRC in Ottawa, narrow-band TDL data from the laboratories, and computational physics techniques has produced a broad range of results for the study of planetary atmospheres. Motivation for the effort flows from the Voyager/IRIS observations and the needs of Voyager analysis for laboratory results. In addition, anticipation of the Cassini mission adds incentive to pursue studies of observed and potentially observable constituents of planetary atmospheres. Current studies include cyanoacetylene, acetylene, propane, and ethane. Particular attention is devoted to cyanoacetylene (HC3N), which is observed in the atmosphere of Titan. The results of a high-resolution infrared laboratory study of the line positions of the 663, 449, and 22.5/cm fundamental bands are presented. Line positions, reproducible to better than 5 MHz for the first two bands, are available for infrared astrophysical searches. Intensity and broadening studies are in progress. Acetylene is a nearly ubiquitous atmospheric constituent of the outer planets and Titan due to the nature of methane photochemistry. Results of ambient-temperature absolute intensity measurements are presented for the fundamental and two two-quantum hotbands in the 730/cm region. Low-temperature hotband intensity and linewidth measurements are planned.

  17. Towards Coupling of Macroseismic Intensity with Structural Damage Indicators

    NASA Astrophysics Data System (ADS)

    Kouteva, Mihaela; Boshnakov, Krasimir

    2016-04-01

    Knowledge of ground motion acceleration time histories during earthquakes is essential to understanding the earthquake-resistant behaviour of structures. Peak and integral ground motion parameters such as peak ground motion values (acceleration, velocity and displacement), measures of the frequency content of ground motion, duration of strong shaking and various intensity measures play important roles in the seismic evaluation of existing facilities and the design of new systems. Macroseismic intensity is an earthquake measure related to seismic hazard and seismic risk description. A detailed picture of the correlations between earthquake damage potential and macroseismic intensity is therefore an important issue in engineering seismology and earthquake engineering, and reliable earthquake hazard estimation is the major prerequisite for successful disaster risk management. The use of advanced earthquake engineering approaches for structural response modelling is essential for reliable evaluation of the damage accumulated in existing buildings and structures due to the history of seismic actions that occurred during their lifetime. Full nonlinear analysis, taking into account a single event or a series of earthquakes, together with the large set of elaborated damage indices, is a suitable contemporary tool for this task. This paper presents some results on the correlation between observational damage states, ground motion parameters and selected analytical damage indices. The damage indices are computed on the basis of nonlinear time history analysis of a test reinforced structure characterising the building stock of the Mediterranean region, designed according to the earthquake-resistant requirements of the mid-20th century.
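
    One widely used analytical damage index of the kind referred to above is the Park-Ang index, which combines peak displacement demand with hysteretic energy dissipation. The sketch below is a minimal illustration with made-up demand values; the capacity parameters and the beta coefficient are placeholders, not values from the study.

```python
def park_ang_damage_index(max_disp, ult_disp, hysteretic_energy, yield_force, beta=0.15):
    """Park-Ang damage index: DI = d_max/d_ult + beta * E_h / (F_y * d_ult).
    DI < 0.4 is often read as repairable damage, DI >= 1.0 as collapse."""
    return max_disp / ult_disp + beta * hysteretic_energy / (yield_force * ult_disp)

# Hypothetical demands from a nonlinear time-history analysis of one storey
di = park_ang_damage_index(max_disp=0.08,          # m, peak interstorey displacement
                           ult_disp=0.15,          # m, ultimate displacement capacity
                           hysteretic_energy=45.0, # kN*m dissipated over the record
                           yield_force=600.0)      # kN
print(f"Park-Ang damage index: {di:.2f}")
```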

  18. Nationwide Buildings Energy Research enabled through an integrated Data Intensive Scientific Workflow and Advanced Analysis Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleese van Dam, Kerstin; Lansing, Carina S.; Elsethagen, Todd O.

    2014-01-28

    Modern workflow systems enable scientists to run ensemble simulations at unprecedented scales and levels of complexity, allowing them to study system sizes previously impossible to achieve, due to the inherent resource requirements needed for the modeling work. However, as a result of these new capabilities the science teams suddenly also face unprecedented data volumes that they are unable to analyze with their existing tools and methodologies in a timely fashion. In this paper we describe the ongoing development work to create an integrated data intensive scientific workflow and analysis environment that offers researchers the ability to easily create and execute complex simulation studies and provides them with different scalable methods to analyze the resulting data volumes. The integration of simulation and analysis environments is hereby not only a question of ease of use, but supports fundamental functions in the correlated analysis of simulation input, execution details and derived results for multi-variant, complex studies. To this end the team extended and integrated the existing capabilities of the Velo data management and analysis infrastructure, the MeDICi data intensive workflow system and RHIPE, the R-for-Hadoop version of the well-known statistics package, as well as developing a new visual analytics interface for result exploitation by multi-domain users. The capabilities of the new environment are demonstrated on a use case that focuses on the Pacific Northwest National Laboratory (PNNL) building energy team, showing how they were able to take their previously local-scale simulations to a nationwide level by utilizing data intensive computing techniques not only for their modeling work, but also for the subsequent analysis of their modeling results. As part of the PNNL research initiative PRIMA (Platform for Regional Integrated Modeling and Analysis) the team performed an initial 3-year study of building energy demands for the US Eastern Interconnect domain, which they are now planning to extend to predict demand for the complete century. The initial study raised their data demands from a few GB to 400 GB for the 3-year study, with tens of TB expected for the full century.

  19. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
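
    For reference, the iterative update at the heart of Richardson-Lucy restoration can be written in a few lines. The sketch below is a generic NumPy/SciPy illustration, not the i860 vector-processor implementation discussed in the record; the Gaussian PSF and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Basic Richardson-Lucy deconvolution for a non-negative image."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)           # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: blur a random "truth" image with a Gaussian PSF, then restore it
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
x = np.arange(-3, 4)
psf = np.exp(-(x[:, None]**2 + x[None, :]**2) / 2.0)
psf /= psf.sum()
observed = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```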

  20. nMoldyn: a program package for a neutron scattering oriented analysis of molecular dynamics simulations.

    PubMed

    Róg, T; Murzyn, K; Hinsen, K; Kneller, G R

    2003-04-15

    We present a new implementation of the program nMoldyn, which has been developed for the computation and decomposition of neutron scattering intensities from Molecular Dynamics trajectories (Comp. Phys. Commun. 1995, 91, 191-214). The new implementation extends the functionality of the original version, provides a much more convenient user interface (both graphical/interactive and batch), and can be used as a tool set for implementing new analysis modules. This was made possible by the use of a high-level language, Python, and of modern object-oriented programming techniques. The quantities that can be calculated by nMoldyn are the mean-square displacement, the velocity autocorrelation function as well as its Fourier transform (the density of states) and its memory function, the angular velocity autocorrelation function and its Fourier transform, the reorientational correlation function, and several functions specific to neutron scattering: the coherent and incoherent intermediate scattering functions with their Fourier transforms, the memory function of the coherent scattering function, and the elastic incoherent structure factor. The possibility of computing memory functions is a new and powerful feature that allows simulation results to be related to theoretical studies. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 657-667, 2003
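
    Two of the quantities listed above, the velocity autocorrelation function and its Fourier transform (the density of states), are straightforward to compute from a trajectory. The sketch below is a schematic NumPy illustration on a synthetic velocity array, not nMoldyn's actual implementation or normalisation conventions.

```python
import numpy as np

def velocity_autocorrelation(velocities):
    """velocities: array of shape (n_frames, n_atoms, 3).
    Returns the normalised VACF averaged over atoms and Cartesian components."""
    n_frames = velocities.shape[0]
    v = velocities.reshape(n_frames, -1)
    # FFT-based autocorrelation (Wiener-Khinchin), zero-padded to avoid wrap-around
    spec = np.fft.rfft(v, n=2 * n_frames, axis=0)
    acf = np.fft.irfft(spec * np.conj(spec), axis=0)[:n_frames].real
    acf /= np.arange(n_frames, 0, -1)[:, None]   # unbiased normalisation per lag
    vacf = acf.sum(axis=1)
    return vacf / vacf[0]

def density_of_states(vacf, dt):
    """Density of states as the real Fourier transform of the VACF (arbitrary units)."""
    freqs = np.fft.rfftfreq(len(vacf), d=dt)
    dos = np.abs(np.fft.rfft(vacf))
    return freqs, dos

# Synthetic trajectory: 1000 frames, 10 atoms, 0.5 fs timestep
rng = np.random.default_rng(1)
vel = rng.normal(size=(1000, 10, 3))
vacf = velocity_autocorrelation(vel)
freqs, dos = density_of_states(vacf, dt=0.5e-15)
```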

  1. Computational Prediction of Shock Ignition Thresholds and Ignition Probability of Polymer-Bonded Explosives

    NASA Astrophysics Data System (ADS)

    Wei, Yaochi; Kim, Seokpum; Horie, Yasuyuki; Zhou, Min

    2017-06-01

    A computational approach is developed to predict the probabilistic ignition thresholds of polymer-bonded explosives (PBXs). The simulations explicitly account for microstructure, constituent properties, and interfacial responses and capture processes responsible for the development of hotspots and damage. The specific damage mechanisms considered include viscoelasticity, viscoplasticity, fracture, post-fracture contact, frictional heating, and heat conduction. The probabilistic analysis uses sets of statistically similar microstructure samples to mimic relevant experiments for statistical variations of material behavior due to inherent material heterogeneities. The ignition thresholds and corresponding ignition probability maps are predicted for PBX 9404 and PBX 9501 for the impact loading regime of Up = 200-1200 m/s. James and Walker-Wasley relations are utilized to establish explicit analytical expressions for the ignition probability as a function of load intensities. The predicted results are in good agreement with available experimental measurements. The capability to computationally predict the macroscopic response out of material microstructures and basic constituent properties lends itself to the design of new materials and the analysis of existing materials. The authors gratefully acknowledge the support from Air Force Office of Scientific Research (AFOSR) and the Defense Threat Reduction Agency (DTRA).
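
    The mapping from load intensity to ignition probability can be illustrated with a simple probabilistic threshold fit. The sketch below performs a least-squares sigmoid fit to hypothetical go/no-go data with scipy; it is only an illustration of the idea and does not reproduce the James or Walker-Wasley functional forms used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def ignition_probability(intensity, threshold, width):
    """Generic sigmoidal ignition probability versus load intensity."""
    return 1.0 / (1.0 + np.exp(-(intensity - threshold) / width))

# Hypothetical go/no-go observations: impact velocity (m/s) and ignition outcome (0/1)
up = np.array([200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200], dtype=float)
ignited = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1], dtype=float)

params, _ = curve_fit(ignition_probability, up, ignited, p0=[650.0, 100.0])
threshold, width = params
print(f"Fitted 50% ignition threshold: {threshold:.0f} m/s (width {width:.0f} m/s)")
```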

  2. An R package for state-trace analysis.

    PubMed

    Prince, Melissa; Hawkins, Guy; Love, Jonathon; Heathcote, Andrew

    2012-09-01

    State-trace analysis (Bamber, Journal of Mathematical Psychology, 19, 137-181, 1979) is a graphical analysis that can determine whether one or more than one latent variable mediates an apparent dissociation between the effects of two experimental manipulations. State-trace analysis makes only ordinal assumptions and so is not confounded by range effects that plague alternative methods, especially when performance is measured on a bounded scale (such as accuracy). We describe and illustrate the application of a freely available GUI-driven package, StateTrace, for the R language. StateTrace automates many aspects of a state-trace analysis of accuracy and other binary response data, including customizable graphics and the efficient management of computationally intensive Bayesian methods for quantifying evidence about the outcomes of a state-trace experiment, developed by Prince, Brown, and Heathcote (Psychological Methods, 17, 78-99, 2012).

  3. Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.

    PubMed

    Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar

    2017-11-03

    Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post-surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) measure as a similarity metric and the Particle Swarm Optimization (PSO) method. The computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is carried out using the versatile Particle Swarm Optimization method, which is easy to implement and requires few parameters to be tuned. The developed approach has been tested and verified successfully on a number of medical image datasets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image data in the developed approach.
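
    To make the similarity metric concrete, the sketch below computes plain (unmodified) mutual information between two images from their joint intensity histogram. It is a generic NumPy illustration; the paper's weighted combination with gradient vector flow intensity and the PSO search are not reproduced here.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two equally sized images, in nats."""
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint_hist / joint_hist.sum()          # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)         # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)         # marginal of image B
    nonzero = p_ab > 0
    return float(np.sum(p_ab[nonzero] * np.log(p_ab[nonzero] / (p_a @ p_b)[nonzero])))

# Toy check: an image has high MI with itself and lower MI with an unrelated image
rng = np.random.default_rng(2)
fixed = rng.random((128, 128))
print(mutual_information(fixed, fixed))                    # high
print(mutual_information(fixed, rng.random((128, 128))))   # near zero
```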

  4. Potential energy surface, dipole moment surface and the intensity calculations for the 10 μm, 5 μm and 3 μm bands of ozone

    NASA Astrophysics Data System (ADS)

    Polyansky, Oleg L.; Zobov, Nikolai F.; Mizus, Irina I.; Kyuberis, Aleksandra A.; Lodi, Lorenzo; Tennyson, Jonathan

    2018-05-01

    Monitoring ozone concentrations in the Earth's atmosphere using spectroscopic methods is a major activity which is undertaken both from the ground and from space. However, there are long-running issues of consistency between measurements made at infrared (IR) and ultraviolet (UV) wavelengths. In addition, key O3 IR bands at 10 μm, 5 μm and 3 μm also yield results which differ by a few percent when used for retrievals. These problems stem from the underlying laboratory measurements of the line intensities. Here we use quantum chemical techniques, first principles electronic structure and variational nuclear-motion calculations, to address this problem. A new high-accuracy ab initio dipole moment surface (DMS) is computed. Several spectroscopically-determined potential energy surfaces (PESs) are constructed by fitting to empirical energy levels in the region below 7000 cm-1 starting from an ab initio PES. Nuclear motion calculations using these new surfaces allow the unambiguous determination of the intensities of 10 μm band transitions, and the computation of the intensities of the 10 μm and 5 μm bands within their experimental error. A decrease in intensities within the 3 μm band is predicted, which appears consistent with atmospheric retrievals. The PES and DMS form a suitable starting point both for the computation of comprehensive ozone line lists and for future calculations of electronic transition intensities.

  5. Solar wind speed and He I (1083 nm) absorption line intensity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakamada, Kazuyuki; Kojima, Masayoshi; Kakinuma, Takakiyo

    1991-04-01

    Since the pattern of the solar wind was relatively steady during Carrington rotations 1,748 through 1,752 in 1984, an average distribution of the solar wind speed on a so-called source surface can be constructed by superposed epoch analysis of the wind values estimated from interplanetary scintillation observations. The average distribution of the solar wind speed is then projected onto the photosphere along magnetic field lines computed by a so-called potential model with the line-of-sight components of the photospheric magnetic fields. The solar wind speeds projected onto the photosphere are compared with the intensities of the He I (1,083 nm) absorption line at the corresponding locations in the chromosphere. The authors found that there is a linear relation between the speeds and the intensities. Since the intensity of the He I (1,083 nm) absorption line is coupled with the temperature of the corona, this relation suggests that some physical mechanism in or above the photosphere accelerates coronal plasmas to the solar wind speed in regions where the temperature is low. Further, it is suggested that the efficiency of the solar wind acceleration decreases as the coronal temperature increases.

  6. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  7. Measuring sperm movement within the female reproductive tract using Fourier analysis.

    PubMed

    Nicovich, Philip R; Macartney, Erin L; Whan, Renee M; Crean, Angela J

    2015-02-01

    The adaptive significance of variation in sperm phenotype is still largely unknown, in part due to the difficulties of observing and measuring sperm movement in its natural, selective environment (i.e., within the female reproductive tract). Computer-assisted sperm analysis systems allow objective and accurate measurement of sperm velocity, but rely on being able to track individual sperm, and are therefore unable to measure sperm movement in species where sperm move in trains or bundles. Here we describe a newly developed computational method for measuring sperm movement using Fourier analysis to estimate sperm tail beat frequency. High-speed time-lapse videos of sperm movement within the female tract of the neriid fly Telostylinus angusticollis were recorded, and a map of beat frequencies was generated by converting the periodic signal of the intensity-versus-time trace at each pixel to the frequency domain using the Fourier transform. We were able to detect small decreases in sperm tail beat frequency over time, indicating that the method is sensitive enough to identify consistent differences in sperm movement. Fourier analysis can be applied to a wide range of species and contexts, and should therefore facilitate novel exploration of the causes and consequences of variation in sperm movement.
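
    The per-pixel frequency mapping described above is easy to sketch: for each pixel, take the intensity-versus-time trace from the video stack and locate the dominant peak in its Fourier spectrum. The code below is a generic NumPy illustration on a synthetic video array; the frame rate and array shape are placeholders, not details of the original recordings.

```python
import numpy as np

def beat_frequency_map(video, frame_rate):
    """video: array of shape (n_frames, height, width) of pixel intensities.
    Returns a (height, width) map of the dominant temporal frequency in Hz."""
    n_frames = video.shape[0]
    detrended = video - video.mean(axis=0)               # remove the DC component
    spectrum = np.abs(np.fft.rfft(detrended, axis=0))    # per-pixel amplitude spectrum
    freqs = np.fft.rfftfreq(n_frames, d=1.0 / frame_rate)
    peak_idx = spectrum[1:].argmax(axis=0) + 1            # skip the zero-frequency bin
    return freqs[peak_idx]

# Synthetic example: a 10 Hz oscillation buried in noise at every pixel
t = np.arange(200) / 100.0                                # 200 frames at 100 fps
rng = np.random.default_rng(3)
video = np.sin(2 * np.pi * 10 * t)[:, None, None] + 0.3 * rng.normal(size=(200, 32, 32))
print(beat_frequency_map(video, frame_rate=100.0).mean())  # ~10 Hz
```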

  8. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.
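
    The reduced-basis step can be illustrated with a plain proper-orthogonal-decomposition (POD) sketch: build a basis from a few high-fidelity intensity solutions via the SVD, then estimate a full intensity field from a handful of sampled values by least-squares projection onto that basis (a gappy-POD-style reconstruction used here as a stand-in). The snapshot matrix and tolerance below are made up for illustration; this is not the paper's specific reduced-order model.

```python
import numpy as np

def build_pod_basis(snapshots, tol=1e-6):
    """snapshots: (n_cells, n_snapshots) matrix of radiative-intensity solutions
    computed at a small set of ordinate directions. Returns an orthonormal basis."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    rank = int(np.sum(s / s[0] > tol))           # keep modes above a relative tolerance
    return u[:, :rank]

def approximate_intensity(basis, sparse_samples, sample_rows):
    """Estimate a full intensity field from a few sampled cell values by
    least-squares fitting the POD coefficients."""
    coeffs, *_ = np.linalg.lstsq(basis[sample_rows, :], sparse_samples, rcond=None)
    return basis @ coeffs

# Toy usage with random stand-in data
rng = np.random.default_rng(4)
snapshots = rng.random((500, 8))                 # 8 "high-fidelity" ordinate solutions
basis = build_pod_basis(snapshots)
rows = rng.choice(500, size=50, replace=False)   # a few sampled locations
field = approximate_intensity(basis, snapshots[rows, 0], rows)
```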

  9. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin; Larsen, Marvin

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  10. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    PubMed Central

    Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum from both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation. PMID:26681933
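
    The data-parallel idea behind the MapReduce formulation can be sketched without a Hadoop cluster: the map step computes a gradient on each data partition, and the reduce step averages the partial gradients to update the shared weights. The example below uses Python's multiprocessing pool on a toy linear model; it is a schematic stand-in, not the authors' MapReduce neural-network implementation.

```python
import numpy as np
from multiprocessing import Pool

def partition_gradient(args):
    """Map step: mean squared-error gradient of a linear model on one data partition."""
    x, y, w = args
    return 2.0 * x.T @ (x @ w - y) / len(y)

def train(x, y, n_workers=4, lr=0.1, n_epochs=50):
    w = np.zeros(x.shape[1])
    x_parts = np.array_split(x, n_workers)
    y_parts = np.array_split(y, n_workers)
    with Pool(n_workers) as pool:
        for _ in range(n_epochs):
            grads = pool.map(partition_gradient,
                             [(xp, yp, w) for xp, yp in zip(x_parts, y_parts)])
            w -= lr * np.mean(grads, axis=0)   # reduce step: average partial gradients
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    x = rng.normal(size=(10000, 5))
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    y = x @ true_w + 0.01 * rng.normal(size=10000)
    print(train(x, y))                         # approaches true_w
```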

  11. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    PubMed

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum from both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  12. Comparative phyloinformatics of virus genes at micro and macro levels in a distributed computing environment.

    PubMed

    Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo

    2008-01-01

    Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many algorithms involved leads to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphical-oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase on different levels of complexity provides valuable insights of this virus's tendency for geographical based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems providing a biologist friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown. Secondly, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host range based clustering.

  13. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    PubMed

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  14. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  15. David Price--Pioneer of digital ICP monitoring, neurosurgeon and teacher.

    PubMed

    Czosnyka, Marek; Kirollos, Ramez; van Hille, Philip

    2015-06-01

    In the early 1970s the first personal desktop computers started to become available in hospitals. Mr Price was one of the pioneers, introducing his own software to identify Marmarou's model of the CSF space during infusion studies to diagnose patients suffering from hydrocephalus. His closed-loop control system for infusion of mannitol to manage patients at risk of intracranial hypertension was designed in 1977. The system worked successfully for 10 years in Pinderfields Hospital in Wakefield, UK. In the mid-1980s he initiated international cooperation with the Children's Health Centre in Poland in long-term computer-assisted monitoring and analysis of ICP. Software designed in the course of this cooperation paved the way for the contemporary ICM+ package (Intensive Care Monitor, University of Cambridge, UK). Our scientific portfolio from these years (1985-1995) contains hundreds of head-injured patients with waveform ICP analysis, the introduction of the compensatory reserve index RAP, and a few highly cited papers. Now we understand ICP much better thanks to David's personal passion and extremely friendly support.

  16. Enabling smart personalized healthcare: a hybrid mobile-cloud approach for ECG telemonitoring.

    PubMed

    Wang, Xiaoliang; Gui, Qiong; Liu, Bingwei; Jin, Zhanpeng; Chen, Yu

    2014-05-01

    The severe challenges of the skyrocketing healthcare expenditure and the fast aging population highlight the needs for innovative solutions supporting more accurate, affordable, flexible, and personalized medical diagnosis and treatment. Recent advances of mobile technologies have made mobile devices a promising tool to manage patients' own health status through services like telemedicine. However, the inherent limitations of mobile devices make them less effective in computation- or data-intensive tasks such as medical monitoring. In this study, we propose a new hybrid mobile-cloud computational solution to enable more effective personalized medical monitoring. To demonstrate the efficacy and efficiency of the proposed approach, we present a case study of mobile-cloud based electrocardiograph monitoring and analysis and develop a mobile-cloud prototype. The experimental results show that the proposed approach can significantly enhance the conventional mobile-based medical monitoring in terms of diagnostic accuracy, execution efficiency, and energy efficiency, and holds the potential in addressing future large-scale data analysis in personalized healthcare.

  17. Vector spherical quasi-Gaussian vortex beams

    NASA Astrophysics Data System (ADS)

    Mitri, F. G.

    2014-02-01

    Model equations for describing and efficiently computing the radiation profiles of tightly spherically focused higher-order electromagnetic beams of vortex nature are derived stemming from a vectorial analysis with the complex-source-point method. This solution, termed a high-order quasi-Gaussian (qG) vortex beam, exactly satisfies the vector Helmholtz and Maxwell's equations. It is characterized by a nonzero integer degree and order (n,m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and an azimuthal phase dependency in the form of a complex exponential corresponding to a vortex beam. An attractive feature of the high-order solution is the rigorous description of strongly focused (or strongly divergent) vortex wave fields without the need for either higher-order corrections or numerically intensive methods. Closed-form expressions and computational results illustrate the analysis and some properties of the high-order qG vortex beams based on the axial and transverse polarization schemes of the vector potentials, with emphasis on the beam waist.

  18. Computer-Delivered Interventions to Reduce College Student Drinking: A Meta-Analysis

    PubMed Central

    Carey, Kate B.; Scott-Sheldon, Lori A. J.; Elliott, Jennifer C.; Bolles, Jamie R.; Carey, Michael P.

    2009-01-01

    Aims This meta-analysis evaluates the efficacy and moderators of computer-delivered interventions (CDIs) to reduce alcohol use among college students. Methods We included 35 manuscripts with 43 separate interventions, and calculated both between-group and within-group effect sizes for alcohol consumption and alcohol-related problems. Effect sizes were calculated for short-term (≤ 5 weeks) and longer-term (≥ 6 weeks) intervals. All studies were coded for study descriptors, participant characteristics, and intervention components. Results The effects of CDIs depended on the nature of the comparison condition: CDIs reduced quantity and frequency measures relative to assessment-only controls, but rarely differed from comparison conditions that included alcohol content. Small-to-medium within-group effect sizes can be expected for CDIs at short- and longer-term follow-ups; these changes are less than or equivalent to the within-group effect sizes observed for more intensive interventions. Conclusions CDIs reduce the quantity and frequency of drinking among college students. CDIs are generally equivalent to alternative alcohol-related comparison interventions. PMID:19744139

  19. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
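
    The scalability argument sketched above rests on Amdahl's law, which bounds the speedup of a partially parallelizable workload. The snippet below is a minimal illustration; the 95% parallel fraction is a hypothetical figure, not one measured for the HPC 3D-MIP platform.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / N)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# Hypothetical 95% parallelizable workload on increasing core counts
for cores in (1, 12, 48, 192):
    print(f"{cores:4d} cores -> {amdahl_speedup(0.95, cores):5.2f}x speedup")
```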

  20. Validation of Heart Rate Monitor Polar RS800 for Heart Rate Variability Analysis During Exercise.

    PubMed

    Hernando, David; Garatachea, Nuria; Almeida, Rute; Casajús, Jose A; Bailón, Raquel

    2018-03-01

    Hernando, D, Garatachea, N, Almeida, R, Casajús, JA, and Bailón, R. Validation of heart rate monitor Polar RS800 for heart rate variability analysis during exercise. J Strength Cond Res 32(3): 716-725, 2018-Heart rate variability (HRV) analysis during exercise is an interesting noninvasive tool to measure the cardiovascular response to the stress of exercise. Wearable heart rate monitors are a comfortable option to measure interbeat (RR) intervals while doing physical activities. It is necessary to evaluate the agreement between HRV parameters derived from the RR series recorded by wearable devices and those derived from an electrocardiogram (ECG) during dynamic exercise of low to high intensity. Twenty-three male volunteers performed an exercise stress test on a cycle ergometer. Subjects wore a Polar RS800 device, while ECG was also recorded simultaneously to extract the reference RR intervals. A time-frequency spectral analysis was performed to extract the instantaneous mean heart rate (HRM) and the power of the low-frequency (PLF) and high-frequency (PHF) components, the latter centered on the respiratory frequency. The analysis was done in intervals of different exercise intensity based on oxygen consumption. Linear correlation, reliability, and agreement were computed in each interval. The agreement between the RR series obtained from the Polar device and from the ECG is high throughout the whole test, although the shorter the RR intervals are, the larger the differences become. Both methods are interchangeable when analyzing HRV at rest. At high exercise intensity, HRM and PLF still presented high correlation (ρ > 0.8) and excellent reliability and agreement indices (above 0.9). However, the PHF measurements from the Polar showed reliability and agreement coefficients around 0.5 or lower when the level of exercise increases (for oxygen consumption levels above 60%).
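
    As a simplified stand-in for the spectral analysis described above, the sketch below resamples an RR-interval series onto a uniform time grid and estimates LF and HF band powers with a Welch periodogram. The band limits are the conventional resting-state definitions (0.04-0.15 Hz and 0.15-0.4 Hz); the paper's method instead uses a time-frequency approach with the HF band tracked around the instantaneous respiratory frequency, which this sketch does not do.

```python
import numpy as np
from scipy.signal import welch

def hrv_band_powers(rr_ms, fs=4.0, lf_band=(0.04, 0.15), hf_band=(0.15, 0.40)):
    """rr_ms: successive RR intervals in milliseconds.
    Returns (LF power, HF power) in ms^2 from a Welch spectrum of the resampled series."""
    beat_times = np.cumsum(rr_ms) / 1000.0                 # seconds at each beat
    uniform_t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rr_uniform = np.interp(uniform_t, beat_times, rr_ms)   # resample to a uniform grid
    freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)

    def band_power(band):
        mask = (freqs >= band[0]) & (freqs < band[1])
        return np.trapz(psd[mask], freqs[mask])

    return band_power(lf_band), band_power(hf_band)

# Toy RR series: 800 ms mean with a 0.25 Hz (respiratory) modulation
beat_times_s = np.cumsum(np.full(300, 0.8))
rr = 800 + 30 * np.sin(2 * np.pi * 0.25 * beat_times_s)
print(hrv_band_powers(rr))
```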

  1. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
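
    The three-stage performance model described above (transfer, queue wait, compute) can be expressed as a simple estimator that, given candidate resources, picks the one with the lowest predicted end-to-end time. The numbers and resource names below are invented placeholders, not measurements from the Advanced Photon Source study.

```python
def workflow_time(data_gb, bandwidth_gbps, queue_wait_s, compute_rate_gb_per_s):
    """End-to-end estimate: transfer + queue wait + reconstruction compute time (seconds)."""
    transfer = data_gb * 8.0 / bandwidth_gbps
    compute = data_gb / compute_rate_gb_per_s
    return transfer + queue_wait_s + compute

# Hypothetical candidate resources for a 200 GB reconstruction workload
resources = {
    "cluster_A": dict(bandwidth_gbps=10.0, queue_wait_s=600.0, compute_rate_gb_per_s=0.5),
    "cluster_B": dict(bandwidth_gbps=1.0, queue_wait_s=60.0, compute_rate_gb_per_s=1.0),
}
estimates = {name: workflow_time(200.0, **cfg) for name, cfg in resources.items()}
best = min(estimates, key=estimates.get)
print(estimates, "-> choose", best)
```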

  2. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  3. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  4. Computing Models for FPGA-Based Accelerators

    PubMed Central

    Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt

    2011-01-01

    Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152

  5. The Effects of Computer Assisted English Instruction on High School Preparatory Students' Attitudes towards Computers and English

    ERIC Educational Resources Information Center

    Ates, Alev; Altunay, Ugur; Altun, Eralp

    2006-01-01

    The aim of this research was to discern the effects of computer assisted English instruction on English language preparatory students' attitudes towards computers and English in a Turkish-medium high school with an intensive English program. A quasi-experimental time series research design, also called "before-after" or "repeated…

  6. Impact of counselling on exclusive breast-feeding practices in a poor urban setting in Kenya: a randomized controlled trial.

    PubMed

    Ochola, Sophie A; Labadarios, Demetre; Nduati, Ruth W

    2013-10-01

    To determine the impact of facility-based semi-intensive and home-based intensive counselling in improving exclusive breast-feeding (EBF) in a low-resource urban setting in Kenya. A cluster randomized controlled trial in which nine villages were assigned on a 1:1:1 ratio, by computer, to two intervention groups and a control group. The home-based intensive counselling group (HBICG) received seven counselling sessions at home by trained peers, one prenatally and six postnatally. The facility-based semi-intensive counselling group (FBSICG) received only one counselling session prenatally. The control group (CG) received no counselling from the research team. Information on infant feeding practices was collected monthly for 6 months after delivery. The data-gathering team was blinded to the intervention allocation. The outcome was EBF prevalence at 6 months. Kibera slum, Nairobi. A total of 360 HIV-negative women, 34-36 weeks pregnant, were selected from an antenatal clinic in Kibera; 120 per study group. Of the 360 women enrolled, 265 completed the study and were included in the analysis (CG n 89; FBSICG n 87; HBICG n 89). Analysis was by intention to treat. The prevalence of EBF at 6 months was 23.6% in HBICG, 9.2% in FBSICG and 5.6% in CG. HBICG mothers had four times increased likelihood to practise EBF compared with those in the CG (adjusted relative risk = 4.01; 95% CI 2.30, 7.01; P=0.001). There was no significant difference between EBF rates in FBSICG and CG. EBF can be promoted in low socio-economic conditions using home-based intensive counselling. One session of facility-based counselling is not sufficient to sustain EBF.

  7. Development of advanced structural analysis methodologies for predicting widespread fatigue damage in aircraft structures

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.

    1995-01-01

    NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.

  8. Issues in International Energy Consumption Analysis: Canadian Energy Demand

    EIA Publications

    2015-01-01

    The residential sector is one of the main end-use sectors in Canada, accounting for 16.7% of total end-use site energy consumption in 2009 (computed from NRCan 2012, pp. 4-5). In this year, the residential sector accounted for 54.5% of buildings' total site energy consumption. Between 1990 and 2009, Canadian household energy consumption grew by less than 11%. Nonetheless, households contributed 14.6% of total energy-related greenhouse gas emissions in Canada in 2009 (computed from NRCan 2012). This is the U.S. Energy Information Administration’s second study to help provide a better understanding of the factors impacting residential energy consumption and intensity in North America (mainly the United States and Canada) by using similar methodology for analyses in both countries.

  9. Integration of scheduling and discrete event simulation systems to improve production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2016-08-01

    The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with model complexity and with the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. This approach is illustrated through examples of practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.

  10. Compiling and editing agricultural strata boundaries with remotely sensed imagery and map attribute data using graphics workstations

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1991-01-01

    The USDA presently uses labor-intensive photographic interpretation procedures to delineate large geographical areas into manageable size sampling units for the estimation of domestic crop and livestock production. Computer software to automate the boundary delineation procedure, called the computer-assisted stratification and sampling (CASS) system, was developed using a Hewlett Packard color-graphics workstation. The CASS procedures display Thematic Mapper (TM) satellite digital imagery on a graphics display workstation as the backdrop for the onscreen delineation of sampling units. USGS Digital Line Graph (DLG) data for roads and waterways are displayed over the TM imagery to aid in identifying potential sample unit boundaries. Initial analysis conducted with three Missouri counties indicated that CASS was six times faster than the manual techniques in delineating sampling units.

  11. Flow visualization of CFD using graphics workstations

    NASA Technical Reports Server (NTRS)

    Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon

    1987-01-01

    High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.

  12. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form that supplies computational power to any node within the grid that needs it has become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  13. Development and Validation of 2D Difference Intensity Analysis for Chemical Library Screening by Protein-Detected NMR Spectroscopy.

    PubMed

    Egner, John M; Jensen, Davin R; Olp, Michael D; Kennedy, Nolan W; Volkman, Brian F; Peterson, Francis C; Smith, Brian C; Hill, R Blake

    2018-03-02

    An academic chemical screening approach was developed by using 2D protein-detected NMR, and a 352-chemical fragment library was screened against three different protein targets. The approach was optimized against two protein targets with known ligands: CXCL12 and BRD4. Principal component analysis reliably identified compounds that induced nonspecific NMR crosspeak broadening but did not unambiguously identify ligands with specific affinity (hits). For improved hit detection, a novel scoring metric, difference intensity analysis (DIA), was devised that sums all positive and negative intensities from 2D difference spectra. Applying DIA quickly discriminated potential ligands from compounds inducing nonspecific NMR crosspeak broadening and other nonspecific effects. Subsequent NMR titrations validated chemotypes important for binding to CXCL12 and BRD4. A novel target, mitochondrial fission protein Fis1, was screened, and six hits were identified by using DIA. Screening these diverse protein targets identified quinones and catechols that induced nonspecific NMR crosspeak broadening, hampering NMR analyses, but are currently not computationally identified as pan-assay interference compounds. The results established a streamlined screening workflow that can easily be scaled and adapted as part of a larger screening pipeline to identify fragment hits and assess relative binding affinities in the range of 0.3-1.6 mM. DIA could prove useful in library screening and other applications in which NMR chemical shift perturbations are measured. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
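
    A minimal numerical sketch of a DIA-style score, assuming the metric sums the magnitudes of all positive and negative intensity changes in a reference-minus-ligand 2D difference spectrum; the array sizes, noise handling and synthetic example are illustrative, not the published implementation.

    ```python
    # Hedged sketch: score a 2D difference spectrum by summing |intensity changes|.
    import numpy as np

    def dia_score(reference, with_ligand, noise_floor=0.0):
        """Sum |intensity| over the 2D difference spectrum, ignoring sub-noise points."""
        diff = reference - with_ligand          # 2D difference spectrum
        diff[np.abs(diff) < noise_floor] = 0.0  # optional noise suppression
        return float(np.abs(diff).sum())        # positive and negative changes both count

    # Toy example: a specific binder attenuates a few crosspeaks
    ref = np.random.default_rng(0).random((128, 256))
    lig = ref.copy()
    lig[40:44, 100:104] *= 0.2                  # a small cluster of perturbed crosspeaks
    print(f"DIA score: {dia_score(ref, lig):.2f}")
    ```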

  14. Regional scale landslide risk assessment with a dynamic physical model - development, application and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Luna, Byron Quan; Vidar Vangelsten, Bjørn; Liu, Zhongqiang; Eidsvig, Unni; Nadim, Farrokh

    2013-04-01

    Landslide risk must be assessed at the appropriate scale in order to allow effective risk management. At the moment, few deterministic models exist that can do all the computations required for a complete landslide risk assessment at a regional scale. This arises from the difficulty of precisely defining the location and volume of the released mass and from the inability of the models to compute the displacement for a large number of individual initiation areas (computationally exhaustive). This paper presents a medium-scale, dynamic physical model for rapid mass movements in mountainous and volcanic areas. The deterministic nature of the approach makes it possible to apply it to other sites since it considers the frictional equilibrium conditions for the initiation process, the rheological resistance of the displaced flow for the run-out process, and a fragility curve that links intensity to economic loss for each building. The model takes into account the triggering effect of an earthquake, intense rainfall and a combination of both (spatial and temporal). The run-out module of the model treats the flow as a 2-D continuum medium, solving the equations of mass balance and momentum conservation. The model is embedded in an open-source geographical information system (GIS) environment; it is computationally efficient and it is transparent (understandable and comprehensible) for the end-user. The model was applied to a virtual region, assessing landslide hazard, vulnerability and risk. A Monte Carlo simulation scheme was applied to quantify, propagate and communicate the effects of uncertainty in input parameters on the final results. In this technique, the input distributions are recreated through sampling and the failure criteria are calculated for each stochastic realisation of the site properties. The model is able to identify the released volumes of the critical slopes and the areas threatened by the run-out intensity. The final outcome is an estimate of individual building damage and total economic risk. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement No 265138, New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX).
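
    A minimal sketch of the Monte Carlo uncertainty-propagation idea, not the paper's run-out model: input parameters are drawn from assumed distributions and a textbook infinite-slope factor of safety serves as a stand-in failure criterion evaluated for each stochastic realisation.

    ```python
    # Hedged Monte Carlo sketch: propagate input uncertainty through a simple failure criterion.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Assumed input distributions (illustrative values only)
    phi   = np.radians(rng.normal(30.0, 3.0, n))   # friction angle
    c     = rng.lognormal(np.log(5.0), 0.3, n)     # cohesion [kPa]
    depth = rng.uniform(1.0, 3.0, n)               # failure-plane depth [m]
    m     = rng.uniform(0.2, 1.0, n)               # saturation ratio (rainfall effect)
    gamma = 19.0                                   # soil unit weight [kN/m^3]
    gw    = 9.81                                   # unit weight of water [kN/m^3]
    slope = np.radians(35.0)                       # slope angle

    # Infinite-slope factor of safety for each stochastic realisation
    fos = ((c + (gamma - m * gw) * depth * np.cos(slope)**2 * np.tan(phi))
           / (gamma * depth * np.sin(slope) * np.cos(slope)))

    print(f"P(failure) = P(FoS < 1) ~ {np.mean(fos < 1.0):.3f}")
    ```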

  15. Analysis of crack propagation in roller bearings using the boundary integral equation method - A mixed-mode loading problem

    NASA Technical Reports Server (NTRS)

    Ghosn, L. J.

    1988-01-01

    Crack propagation in a rotating inner raceway of a high-speed roller bearing is analyzed using the boundary integral method. The model consists of an edge plate under plane strain condition upon which varying Hertzian stress fields are superimposed. A multidomain boundary integral equation using quadratic elements was written to determine the stress intensity factors KI and KII at the crack tip for various roller positions. The multidomain formulation allows the two faces of the crack to be modeled in two different subregions, making it possible to analyze crack closure when the roller is positioned on or close to the crack line. KI and KII stress intensity factors along any direction were computed. These calculations permit determination of crack growth direction along which the average KI times the alternating KI is maximum.
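
    A hedged sketch of the selection criterion described in the last sentence: for each candidate growth direction, the (KI, KII) history over roller positions is projected onto that direction with a standard kinked-crack approximation, and the direction maximizing the mean KI times the alternating KI is selected. The kink formula and the synthetic KI/KII histories are illustrative assumptions, not the paper's boundary-integral results.

    ```python
    # Hedged sketch: pick the kink angle that maximizes mean(k1) * alternating(k1).
    import numpy as np

    def k1_along(theta, KI, KII):
        """Approximate mode-I SIF along a kink at angle theta (radians)."""
        c = np.cos(theta / 2.0)
        return KI * c**3 - 3.0 * KII * np.sin(theta / 2.0) * c**2

    # Synthetic KI/KII histories over one roller pass (illustrative only)
    pos = np.linspace(0.0, 2.0 * np.pi, 200)
    KI  = 10.0 + 6.0 * np.cos(pos)                 # MPa*sqrt(m)
    KII = 3.0 * np.sin(pos)

    thetas = np.radians(np.linspace(-80.0, 80.0, 161))
    scores = []
    for th in thetas:
        k1 = k1_along(th, KI, KII)
        mean_k = k1.mean()
        alt_k = 0.5 * (k1.max() - k1.min())        # alternating component
        scores.append(mean_k * alt_k)

    best = thetas[int(np.argmax(scores))]
    print(f"Predicted growth direction: {np.degrees(best):.1f} deg")
    ```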

  16. Understanding light scattering by a coated sphere part 2: time domain analysis.

    PubMed

    Laven, Philip; Lock, James A

    2012-08-01

    Numerical computations were made of scattering of an incident electromagnetic pulse by a coated sphere that is large compared to the dominant wavelength of the incident light. The scattered intensity was plotted as a function of the scattering angle and delay time of the scattered pulse. For fixed core and coating radii, the Debye series terms that most strongly contribute to the scattered intensity in different regions of scattering angle-delay time space were identified and analyzed. For a fixed overall radius and an increasing core radius, the first-order rainbow was observed to evolve into three separate components. The original component faded away, while the two new components eventually merged together. The behavior of surface waves generated by grazing incidence at the core/coating and coating/exterior interfaces was also examined and discussed.

  17. An Improved Wake Vortex Tracking Algorithm for Multiple Aircraft

    NASA Technical Reports Server (NTRS)

    Switzer, George F.; Proctor, Fred H.; Ahmad, Nashat N.; LimonDuparcmeur, Fanny M.

    2010-01-01

    The accurate tracking of vortex evolution from Large Eddy Simulation (LES) data is a complex and computationally intensive problem. The vortex tracking requires the analysis of very large three-dimensional and time-varying datasets. The complexity of the problem is further compounded by the fact that these vortices are embedded in a background turbulence field, and they may interact with the ground surface. Another level of complication arises if vortices from multiple aircraft are simulated. This paper presents a new technique for post-processing LES data to obtain wake vortex tracks and wake intensities. The new approach isolates vortices by defining "regions of interest" (ROI) around each vortex and has the ability to identify vortex pairs from multiple aircraft. The paper describes the new methodology for tracking wake vortices and presents applications of the technique for single and multiple aircraft.
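
    A minimal sketch of the region-of-interest idea on a 2-D slice, assuming one vortex per ROI, a fixed ROI half-width, and that the vortex centre can be taken as the local vorticity extremum; the actual algorithm works on large 3-D, time-varying LES fields and handles vortex pairs from multiple aircraft.

    ```python
    # Hedged ROI-tracking sketch: follow a vorticity extremum by re-centring a window.
    import numpy as np

    def track_vortex(vorticity_frames, start_xy, half_width=8):
        """vorticity_frames: iterable of 2-D arrays; start_xy: initial (row, col) guess."""
        x, y = start_xy
        track = []
        for field in vorticity_frames:
            x0, x1 = max(0, x - half_width), min(field.shape[0], x + half_width + 1)
            y0, y1 = max(0, y - half_width), min(field.shape[1], y + half_width + 1)
            roi = np.abs(field[x0:x1, y0:y1])        # search only inside the ROI
            dx, dy = np.unravel_index(np.argmax(roi), roi.shape)
            x, y = x0 + dx, y0 + dy                  # re-centre on the new vortex position
            track.append((x, y, field[x, y]))        # position and signed strength
        return track
    ```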

  18. Numerical calibration of the stable poisson loaded specimen

    NASA Technical Reports Server (NTRS)

    Ghosn, Louis J.; Calomino, Anthony M.; Brewer, Dave N.

    1992-01-01

    An analytical calibration of the Stable Poisson Loaded (SPL) specimen is presented. The specimen configuration is similar to the ASTM E-561 compact-tension specimen with displacement controlled wedge loading used for R-Curve determination. The crack mouth opening displacements (CMOD's) are produced by the diametral expansion of an axially compressed cylindrical pin located in the wake of a machined notch. Due to the unusual loading configuration, a three-dimensional finite element analysis was performed with gap elements simulating the contact between the pin and specimen. In this report, stress intensity factors, CMOD's, and crack displacement profiles are reported for different crack lengths and different contacting conditions. It was concluded that the computed stress intensity factor decreases sharply with increasing crack length, thus making the SPL specimen configuration attractive for fracture testing of brittle, high modulus materials.

  19. Monitoring minimization of grade B environments based on risk assessment using three-dimensional airflow measurements and computer simulation.

    PubMed

    Katayama, Hirohito; Higo, Takashi; Tokunaga, Yuji; Katoh, Shigeo; Hiyama, Yukio; Morikawa, Kaoru

    2008-01-01

    A practical, risk-based monitoring approach using the combined data collected from actual experiments and computer simulations was developed for the qualification of an EU GMP Annex 1 Grade B, ISO Class 7 area. This approach can locate and minimize the representative number of sampling points used for microbial contamination risk assessment. We conducted a case study on an aseptic clean room, newly constructed and specifically designed for the use of a restricted access barrier system (RABS). Hotspots were located using a previously published empirical measurement method, three-dimensional airflow analysis. Local mean age of air (LMAA) values were calculated based on computer simulations. Comparable results were found using actual measurements and simulations, demonstrating the potential usefulness of such tools in estimating contamination risks based on the airflow characteristics of a clean room. Intensive microbial monitoring and particle monitoring at the Grade B environmental qualification stage, as well as three-dimensional airflow analysis, were also conducted to reveal contamination hotspots. We found that representative hotspots were located at perforated panels covering the air exhausts where the major piston airflows collect in the Grade B room, as well as at any locations within the room that were identified as having stagnant air. However, we also found that the floor surface air around the exit airway of the RABS EU GMP Annex 1 Grade A, ISO Class 5 area was always remarkably clean, possibly due to the immediate sweep of the piston airflow, which prevents dispersed human microbes from falling in a Stokes-type manner on settling plates placed on the floor around the Grade A exit airway. In addition, this airflow is expected to be clean with a significantly low LMAA. Based on these observed results, we propose a simplified daily monitoring program to monitor microbial contamination in Grade B environments. To locate hotspots we propose using a combination of computer simulation, actual airflow measurements, and intensive environmental monitoring at the qualification stage. Thereafter, instead of particle or microbial air monitoring, we recommend the use of microbial surface monitoring at the main air exhaust. These measures would be sufficient to assure the efficiency of the monitoring program, as well as to minimize the number of surface sampling points used in environments surrounding a RABS.

  20. Fermilab | Science at Fermilab | Experiments & Projects | Intensity

    Science.gov Websites

  1. High-intensity exercise interventions in cancer survivors: a systematic review exploring the impact on health outcomes.

    PubMed

    Toohey, Kellie; Pumpa, Kate; McKune, Andrew; Cooke, Julie; Semple, Stuart

    2018-01-01

    There is an increasing body of evidence underpinning high-intensity exercise as an effective and time-efficient intervention for improving health in cancer survivors. The aim of this study was to evaluate (1) the efficacy and (2) the safety of high-intensity exercise interventions in improving selected health outcomes in cancer survivors. Design: systematic review. Data sources: Google Scholar and EBSCO, CINAHL Plus, Computers and Applied Sciences Complete, Health Source-Consumer Edition, Health Source: Nursing/Academic Edition, MEDLINE, Web of Science and SPORTDiscus, from inception up until August 2017. Eligibility criteria: randomized controlled trials of high-intensity exercise interventions in cancer survivors (all cancer types) with health-related outcome measures. The guidelines adopted for this review were the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The search returned 447 articles, of which nine articles (n = 531 participants; mean age 58 ± 9.5 years) met the eligibility criteria. Exercise interventions of between 4 and 18 weeks, consisting of high-intensity interval bouts of up to 4 min, were compared with a continuous moderate-intensity (CMIT) intervention or a control group. High-intensity exercise interventions elicited significant improvements in VO2max, strength, body mass, body fat and hip and waist circumference compared with CMIT and/or control groups. The studies reviewed showed low risk in participating in supervised high-intensity exercise interventions. Mixed-mode high-intensity interventions that included both aerobic and resistance exercises were most effective, improving the aerobic fitness levels of cancer survivors by 12.45-21.35% from baseline to post-intervention. High-intensity exercise interventions improved physical and physiological health-related outcome measures, such as cardiovascular fitness and strength, in cancer survivors. Given that high-intensity exercise sessions require a shorter time commitment, they may be a useful modality to improve health outcomes in those who are time poor. The risk of adverse events associated with high-intensity exercise was low.

  2. Immunohistochemical quantification of expression of a tight junction protein, claudin-7, in human lung cancer samples using digital image analysis method.

    PubMed

    Lu, Zhe; Liu, Yi; Xu, Junfeng; Yin, Hongping; Yuan, Haiying; Gu, Jinjing; Chen, Yan-Hua; Shi, Liyun; Chen, Dan; Xie, Bin

    2018-03-01

    Tight junction proteins are correlated with cancer development. As the pivotal proteins in epithelial cells, altered expression and distribution of different claudins have been reported in a wide variety of human malignancies. We have previously reported that claudin-7 was strongly expressed in benign bronchial epithelial cells at the cell-cell junction, while expression of claudin-7 was either altered with discontinued weak expression or completely absent in lung cancers. Based on these results, we continued working on the expression pattern of claudin-7 and its relationship with lung cancer development. We herein propose a new Digital Image Classification, Fragmentation index, Morphological analysis (DICFM) method for differentiating normal lung tissues and lung cancer tissues based on claudin-7 immunohistochemical staining. Seventy-seven lung cancer samples were obtained from the Second Affiliated Hospital of Zhejiang University and claudin-7 immunohistochemical staining was performed. Based on C++ and the Open Source Computer Vision Library (OpenCV, version 2.4.4), the DICFM processing module was developed. Intensity and fragmentation of claudin-7 expression, as well as the morphological parameters of nuclei, were calculated. Evaluation of results was performed using Receiver Operator Characteristic (ROC) analysis. Agreement between these computational results and the results obtained by two pathologists was demonstrated. The intensity of claudin-7 expression was significantly decreased while the fragmentation was significantly increased in the lung cancer tissues compared to the normal lung tissues, and the intensity was strongly positively associated with the differentiation of lung cancer cells. Moreover, the perimeters of the nuclei of lung cancer cells were significantly greater than those of the normal lung cells, while the parameters of area and circularity revealed no statistical significance. Taken together, our DICFM approach may be applied as an appropriate approach to quantify the immunohistochemical staining of claudin-7 on the cell membrane, and claudin-7 may serve as a marker for identification of lung cancer. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
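
    A hedged sketch of the two measures named above, staining intensity and fragmentation, computed here with NumPy/SciPy rather than the authors' C++/OpenCV module; the threshold and the fragmentation definition (connected stained components per stained pixel) are illustrative assumptions.

    ```python
    # Hedged sketch: mean staining intensity and a simple fragmentation index.
    import numpy as np
    from scipy import ndimage

    def intensity_and_fragmentation(dab_channel, stain_threshold=0.3):
        """dab_channel: 2-D array in [0, 1]; higher values = stronger claudin-7 staining."""
        stained = dab_channel > stain_threshold              # binary membrane-staining mask
        mean_intensity = float(dab_channel[stained].mean()) if stained.any() else 0.0
        _labels, n_fragments = ndimage.label(stained)         # count disconnected fragments
        frag_index = n_fragments / max(int(stained.sum()), 1) # fragments per stained pixel
        return mean_intensity, frag_index
    ```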

  3. Three-dimensional modeling and quantitative analysis of gap junction distributions in cardiac tissue.

    PubMed

    Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W

    2011-11-01

    Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron-microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can be found also in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
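
    A minimal sketch of the profile statistics mentioned above, with assumed definitions: polarization as the fraction of total signal within the two cell-end segments, and skewness and kurtosis computed by treating the intensity profile as a weighted distribution of positions along the principal axis. The end-fraction value is illustrative.

    ```python
    # Hedged sketch: polarization and higher-order moments of a 1-D intensity profile.
    import numpy as np

    def profile_stats(profile, end_fraction=0.1):
        """profile: non-negative gap-junction signal along a myocyte's long axis."""
        n = len(profile)
        k = max(1, int(round(end_fraction * n)))
        polarization = (profile[:k].sum() + profile[-k:].sum()) / profile.sum()
        positions = np.arange(n)                                   # axis coordinate
        mean = np.average(positions, weights=profile)
        var = np.average((positions - mean) ** 2, weights=profile)
        skew = np.average((positions - mean) ** 3, weights=profile) / var ** 1.5
        kurt = np.average((positions - mean) ** 4, weights=profile) / var ** 2 - 3.0
        return float(polarization), float(skew), float(kurt)
    ```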

  4. Discrimination between smiling faces: Human observers vs. automated face analysis.

    PubMed

    Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo

    2018-05-11

    This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Data communication network at the ASRM facility

    NASA Astrophysics Data System (ADS)

    Moorhead, Robert J., II; Smith, Wayne D.

    1993-08-01

    This report describes the simulation of the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built, as of this writing, at Yellow Creek near Iuka, Mississippi. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories, viz. manufacturing intensive and manufacturing non-intensive. The manufacturing intensive buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B_1000. The manufacturing non-intensive buildings will be connected by 10BASE-FL to the OIS through the Business Information System (BIS) hub in the main computing center. All the devices inside B_1000 will communicate with the BIS. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing intensive hub and one of the OIS hubs. Comdisco's Block Oriented Network Simulator (BONeS) has been used to simulate the performance of the network. BONeS models a network topology, traffic, data structures, and protocol functions using a graphical interface. The main aim of the simulations was to evaluate the loading of the OIS, the BIS, and the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.

  6. Data communication network at the ASRM facility

    NASA Technical Reports Server (NTRS)

    Moorhead, Robert J., II; Smith, Wayne D.

    1993-01-01

    This report describes the simulation of the overall communication network structure for the Advanced Solid Rocket Motor (ASRM) facility being built, as of this writing, at Yellow Creek near Iuka, Mississippi. The report is compiled using information received from NASA/MSFC, LMSC, AAD, and RUST Inc. As per the information gathered, the overall network structure will have one logical FDDI ring acting as a backbone for the whole complex. The buildings will be grouped into two categories, viz. manufacturing intensive and manufacturing non-intensive. The manufacturing intensive buildings will be connected via FDDI to the Operational Information System (OIS) in the main computing center in B_1000. The manufacturing non-intensive buildings will be connected by 10BASE-FL to the OIS through the Business Information System (BIS) hub in the main computing center. All the devices inside B_1000 will communicate with the BIS. The workcells will be connected to the Area Supervisory Computers (ASCs) through the nearest manufacturing intensive hub and one of the OIS hubs. Comdisco's Block Oriented Network Simulator (BONeS) has been used to simulate the performance of the network. BONeS models a network topology, traffic, data structures, and protocol functions using a graphical interface. The main aim of the simulations was to evaluate the loading of the OIS, the BIS, and the ASCs, and the network links by the traffic generated by the workstations and workcells throughout the site.

  7. Automating approximate Bayesian computation by local linear regression.

    PubMed

    Thornton, Kevin R

    2009-07-07

    In several biological contexts, parameter inference often relies on computationally-intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC based on using a linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone, and fully documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source, and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local-linear regression.
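
    A hedged sketch of the general rejection-plus-local-linear-regression ABC scheme that ABCreg automates, not ABCreg itself; the toy model, prior, summary statistic and acceptance tolerance are illustrative assumptions.

    ```python
    # Hedged sketch: rejection ABC followed by a local linear-regression adjustment.
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy problem: infer the mean mu of a normal with known sd, from the sample mean
    observed = rng.normal(2.0, 1.0, size=50)
    s_obs = observed.mean()                        # observed summary statistic

    # 1. Simulate from the prior and record (parameter, summary) pairs
    n_sim = 50_000
    mu = rng.uniform(-5.0, 5.0, n_sim)             # prior draws
    s_sim = rng.normal(mu, 1.0 / np.sqrt(50))      # simulated sample means

    # 2. Rejection step: keep simulations whose summaries are closest to s_obs
    dist = np.abs(s_sim - s_obs)
    keep = dist <= np.quantile(dist, 0.01)         # 1% acceptance tolerance
    mu_acc, s_acc = mu[keep], s_sim[keep]

    # 3. Local linear regression of parameter on summary, then adjust accepted draws
    X = np.column_stack([np.ones(s_acc.size), s_acc - s_obs])
    beta, *_ = np.linalg.lstsq(X, mu_acc, rcond=None)
    mu_adj = mu_acc - beta[1] * (s_acc - s_obs)    # project draws onto s_sim == s_obs

    print(f"Approximate posterior mean ~ {mu_adj.mean():.3f}, sample mean {s_obs:.3f}")
    ```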

  8. Stormwater quality processes for three land-use areas in Broward County, Florida

    USGS Publications Warehouse

    Mattraw, H.C.; Miller, Robert A.

    1981-01-01

    Systematic collection and chemical analysis of stormwater runoff samples from three small urban areas in Broward County, Florida, were obtained between 1974 and 1977. Thirty or more runoff-constituent loads were computed for each of the homogeneous land-use areas. The areas sampled were single family residential, highway, and a commercial shopping center. Rainfall, runoff, and nutrient and metal analyses were stored in a data-management system. The data-management system permitted computation of loads, publication of basic-data reports and the interface of environmental and load information with a comprehensive statistical analysis system. Seven regression models relating water quality loads to characteristics of peak discharge, antecedent conditions, season, storm duration and rainfall intensity were constructed for each of the three sites. Total water-quality loads were computed for the collection period by summing loads for individual storms. Loads for unsampled storms were estimated by using regression models and records of storm precipitation. Loadings, pounds per day per acre of hydraulically effective impervious area, were computed for the three land-use types. Total nitrogen, total phosphorus, and total residue loadings were highest in the residential area. Chemical oxygen demand and total lead loadings were highest in the commercial area. Loadings of atmospheric fallout on each watershed were estimated by bulk precipitation samples collected at the highway and commercial site. (USGS)

  9. A Statistician's View of Upcoming Grand Challenges

    NASA Astrophysics Data System (ADS)

    Meng, Xiao Li

    2010-01-01

    In this session we have seen some snapshots of the broad spectrum of challenges in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates: what would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is the 'black-box' problem, where one has a complicated, e.g. fundamental physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in the complexity of data and in computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis with real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.

  10. Early postnatal myelin content estimate of white matter via T1w/T2w ratio

    NASA Astrophysics Data System (ADS)

    Lee, Kevin; Cherel, Marie; Budin, Francois; Gilmore, John; Zaldarriaga Consing, Kirsten; Rasmussen, Jerod; Wadhwa, Pathik D.; Entringer, Sonja; Glasser, Matthew F.; Van Essen, David C.; Buss, Claudia; Styner, Martin

    2015-03-01

    To develop and evaluate a novel processing framework for the relative quantification of myelin content in cerebral white matter (WM) regions from brain MRI data via a computed ratio of T1- to T2-weighted intensity values. We employed high-resolution (1 mm3 isotropic) T1- and T2-weighted MRI from 46 (28 male, 18 female) neonate subjects (typically developing controls) scanned on a Siemens Tim Trio 3T at UC Irvine. We developed a novel, yet relatively straightforward, image processing framework for WM myelin content estimation based on earlier work by Glasser et al. We first co-register the structural MRI data to correct for motion. Then, background areas are masked out via a jointly computed T1w and T2w foreground mask. Raw T1w/T2w-ratio images are computed next. For calibration across subjects, we first coarsely segment the fat-rich facial regions via an atlas co-registration. Linear intensity rescaling based on median T1w/T2w-ratio values in those facial regions yields calibrated T1w/T2w-ratio images. Mean values in lobar regions are evaluated using standard statistical analysis to investigate their interaction with age at scan. Several lobes show strongly positive, significant interactions of age at scan with the computed T1w/T2w ratio. Most regions do not show sex effects. A few regions, such as cingulate and CC areas, show no measurable change in myelin content within the first few weeks of postnatal development, which we attribute to sample size and measurement variability. We developed and evaluated a novel way to estimate white matter myelin content for use in studies of brain white matter development.
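
    A minimal sketch of the ratio-and-calibration step, assuming the images are already co-registered NumPy arrays and that the foreground and facial calibration masks are given; registration, segmentation and file I/O are omitted, and the median-based rescaling to an assumed target value is a simplification of the calibration described above.

    ```python
    # Hedged sketch: raw T1w/T2w ratio plus a facial-region-based linear calibration.
    import numpy as np

    def t1w_t2w_ratio(t1w, t2w, foreground, facial_mask, target_median=1.0, eps=1e-6):
        """Return a calibrated T1w/T2w ratio image (all inputs co-registered arrays)."""
        ratio = np.zeros_like(t1w, dtype=float)
        ratio[foreground] = t1w[foreground] / (t2w[foreground] + eps)   # raw ratio
        scale = target_median / np.median(ratio[facial_mask])           # calibration factor
        return ratio * scale                                            # calibrated ratio
    ```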

  11. Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT

    PubMed Central

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2014-01-01

    Rationale and Objectives: Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Materials and Methods: Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results: We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions: Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354

  12. Computer-aided assessment of regional abdominal fat with food residue removal in CT.

    PubMed

    Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi

    2013-11-01

    Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. Published by Elsevier Inc.
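
    For reference, the overlap metric quoted in both versions of this abstract is the Dice similarity coefficient; a small illustrative helper follows (the masks and values are toy examples, not the study data).

    ```python
    # Dice similarity coefficient between an automated mask and a reference delineation.
    import numpy as np

    def dice(seg, ref):
        """Dice = 2|A intersect B| / (|A| + |B|), with boolean masks A and B."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        denom = seg.sum() + ref.sum()
        return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

    # Toy example with two overlapping square masks
    a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
    b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
    print(f"Dice = {dice(a, b):.3f}")
    ```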

  13. ASSOCIATION OF DRUSEN VOLUME WITH CHOROIDAL PARAMETERS IN NONNEOVASCULAR AGE-RELATED MACULAR DEGENERATION.

    PubMed

    Balasubramanian, Siva; Lei, Jianqin; Nittala, Muneeswar G; Velaga, Swetha B; Haines, Jonathan; Pericak-Vance, Margaret A; Stambolian, Dwight; Sadda, SriniVas R

    2017-10-01

    The choroid is thought to be relevant to the pathogenesis of nonneovascular age-related macular degeneration, but its role has not yet been fully defined. In this study, we evaluate the relationship between the extent of macular drusen and specific choroidal parameters, including thickness and intensity. Spectral domain optical coherence tomography images were collected from two distinct, independent cohorts with nonneovascular age-related macular degeneration: Amish (53 eyes of 34 subjects) and non-Amish (40 eyes from 26 subjects). All spectral domain optical coherence tomography scans were obtained using the Cirrus HD-OCT with a 512 × 128 macular cube (6 × 6 mm) protocol. The Cirrus advanced retinal pigment epithelium analysis tool was used to automatically compute drusen volume within 3 mm (DV3) and 5 mm (DV5) circles centered on the fovea. The inner and outer borders of the choroid were manually segmented, and the mean choroidal thickness and choroidal intensity (i.e., brightness) were calculated. The choroidal intensity was normalized against the vitreous and nerve fiber layer reflectivity. The correlation between DV and these choroidal parameters was assessed using Pearson and linear regression analysis. A significant positive correlation was observed between normalized choroidal intensity and DV5 in the Amish (r = 0.42, P = 0.002) and non-Amish (r = 0.33, P = 0.03) cohorts. Also, DV3 showed a significant positive correlation with normalized choroidal intensity in both the groups (Amish: r = 0.30, P = 0.02; non-Amish: r = 0.32, P = 0.04). Choroidal thickness was negatively correlated with normalized choroidal intensity in both Amish (r = -0.71, P = 0.001) and non-Amish (r = -0.43, P = 0.01) groups. Normalized choroidal intensity was the most significant constant predictor of DV in both the Amish and non-Amish groups. Choroidal intensity, but not choroidal thickness, seems to be associated with drusen volume in Amish and non-Amish populations. These observations suggest that choroidal parameters beyond thickness warrant further study in the setting of age-related macular degeneration.

  14. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    PubMed

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. The manipulation of this 'visibility' improves volume rendering processes, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of the VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of the VH to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphical processing units (GPUs), and this enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus improved the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH or numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying Ks (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also had an improved performance when compared to the conventional method of down-sampling of the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
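
    A hedged sketch of the adaptive binning idea, using scikit-learn's KMeans as a CPU stand-in for the GPU implementation: voxel intensities are clustered so that a small number of bins follows the intensity distribution rather than being equally spaced. The sample size, number of bins and synthetic volume are illustrative assumptions.

    ```python
    # Hedged sketch: derive adaptive histogram bin edges from k-means cluster centres.
    import numpy as np
    from sklearn.cluster import KMeans

    def adaptive_bin_edges(intensities, n_bins=32):
        """Return bin edges placed at midpoints between sorted k-means centres."""
        sample = np.random.default_rng(0).choice(intensities.ravel(), 50_000, replace=True)
        km = KMeans(n_clusters=n_bins, n_init=4).fit(sample.reshape(-1, 1))
        centres = np.sort(km.cluster_centers_.ravel())
        edges = (centres[:-1] + centres[1:]) / 2.0       # midpoints between centres
        return np.concatenate(([intensities.min()], edges, [intensities.max()]))

    # Example: bin a synthetic CT-like intensity volume into 16 adaptive bins
    volume = np.random.default_rng(1).normal(40.0, 300.0, size=(64, 64, 64))
    edges = adaptive_bin_edges(volume, n_bins=16)
    bin_index = np.digitize(volume, edges[1:-1])          # per-voxel adaptive bin id
    ```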

  15. Energy expenditure in adolescents playing new generation computer games.

    PubMed

    Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T

    2008-07-01

    To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Cross sectional comparison of four computer games. Setting: research laboratories. Six boys and five girls aged 13-15 years. Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Predicted energy expenditure, compared using repeated measures analysis of variance. Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.

  16. A micro-hydrology computation ordering algorithm

    NASA Astrophysics Data System (ADS)

    Croley, Thomas E.

    1980-11-01

    Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires fore-thought, and is tedious, error prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing microhydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
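
    A minimal sketch of the automatic ordering idea, assuming each sub-area drains to a single downstream sub-area so that the watershed forms a tree: a sub-area's micro-hydrology model can be run once every upstream sub-area has been processed, which is exactly a topological order of the node graph. The node names and example network are illustrative, not the paper's node coding scheme.

    ```python
    # Hedged sketch: order sub-area computations so upstream areas are processed first.
    from collections import defaultdict, deque

    def computation_order(downstream):
        """downstream maps each sub-area name to its downstream sub-area (None = outlet)."""
        indegree = {node: 0 for node in downstream}
        children = defaultdict(list)
        for node, down in downstream.items():
            if down is not None:
                indegree[down] += 1               # one more upstream contributor
                children[node].append(down)
        ready = deque(n for n, d in indegree.items() if d == 0)   # headwater sub-areas
        order = []
        while ready:
            node = ready.popleft()
            order.append(node)                    # safe to compute: all inputs available
            for down in children[node]:
                indegree[down] -= 1
                if indegree[down] == 0:
                    ready.append(down)
        return order

    # Example: two headwater sub-areas draining through B into outlet C
    print(computation_order({"A1": "B", "A2": "B", "B": "C", "C": None}))
    ```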

  17. Numerical characteristics of quantum computer simulation

    NASA Astrophysics Data System (ADS)

    Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.

    2016-12-01

    The simulation of quantum circuits is of significant importance for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality, which makes the use of modern high-performance parallel computing essential. As is well known, arbitrary quantum computation in the circuit model can be carried out using only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate how the unique properties of quantum systems shape the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while quantum entanglement leads to problems of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual information graph) and experimental (locality and memory access, scalability and more specific dynamic characteristics) parts. The experimental part was carried out using the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for researching and testing development methods for data-intensive parallel software, and that the considered analysis methodology can be successfully used to improve algorithms in quantum information science.
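
    A minimal sketch of the kernel under discussion: applying a single-qubit gate to an n-qubit state vector of 2**n complex amplitudes with NumPy. The qubit-ordering convention and the Hadamard example are illustrative assumptions; a production simulator would distribute the amplitudes across many nodes, which is where the locality issues mentioned above arise.

    ```python
    # Hedged sketch: apply a 2x2 gate to one qubit of a dense state vector.
    import numpy as np

    def apply_single_qubit_gate(state, gate, target):
        """state: complex vector of length 2**n; gate: 2x2 unitary; target: qubit index."""
        n = int(np.log2(state.size))
        # Reshape so the target qubit becomes its own axis, apply the 2x2 matrix there
        psi = state.reshape([2] * n)
        psi = np.tensordot(gate, psi, axes=([1], [n - 1 - target]))
        psi = np.moveaxis(psi, 0, n - 1 - target)
        return psi.reshape(-1)

    # Example: Hadamard on qubit 0 of a 3-qubit |000> state
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi0 = np.zeros(8, dtype=complex); psi0[0] = 1.0
    print(apply_single_qubit_gate(psi0, H, target=0))
    ```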

  18. Data Intensive Computing on Amazon Web Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magana-Zook, S. A.

    The Geophysical Monitoring Program (GMP) has spent the past few years building up the capability to perform data intensive computing using what have been referred to as “big data” tools. These big data tools would be used against massive archives of seismic signals (>300 TB) to conduct research not previously possible. Examples of such tools include Hadoop (HDFS, MapReduce), HBase, Hive, Storm, Spark, Solr, and many more by the day. These tools are useful for performing data analytics on datasets that exceed the resources of traditional analytic approaches. To this end, a research big data cluster (“Cluster A”) was set up as a collaboration between GMP and Livermore Computing (LC).

  19. Free space optical ultra-wideband communications over atmospheric turbulence channels.

    PubMed

    Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu

    2010-08-02

    A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.

  20. Theoretical and Experimental Studies on the Nonlinear Optical Chromophore para Bromoacetanilide

    NASA Astrophysics Data System (ADS)

    Jothy, V. Bena; Vijayakumar, T.; Jayakumar, V. S.; Udayalekshmi, K.; Ramamurthy, K.; Joe, I. Hubert

    2008-11-01

    Vibrational spectral analysis of the hydrogen-bonded non-linear optical (NLO) material para Bromo Acetanilide (PBA) is carried out using NIR FT-Raman and FT-IR spectroscopy. Ab initio molecular orbital computations have been performed at the HF/6-31G(d) level to derive the equilibrium geometry, vibrational wavenumbers, intensities and first hyperpolarizability. The lowering of the imino stretching wavenumbers suggests the existence of strong intermolecular N-H⋯O hydrogen bonding, substantiated by natural bond orbital (NBO) analysis. Blue-shifted C-H stretching wavenumbers, simultaneous activation of the carbonyl stretching mode, and the strong activation of low-wavenumber H-bond stretching vibrations show the presence of intramolecular charge transfer in the molecule.
