Sample records for require computationally intensive

  1. Fermilab computing at the Intensity Frontier

    DOE PAGES

    Group, Craig; Fuess, S.; Gutsche, O.; ...

    2015-12-23

    The Intensity Frontier refers to a diverse set of particle physics experiments using high-intensity beams. In this paper I focus the discussion on the computing requirements and solutions of a set of neutrino and muon experiments in progress or planned at the Fermi National Accelerator Laboratory near Chicago, Illinois. These experiments face unique challenges but also have overlapping computational needs. In principle, by exploiting the commonality and utilizing centralized computing tools and resources, requirements can be satisfied efficiently, and the scientists of individual experiments can focus more on the science and less on the development of tools and infrastructure.

  2. Data intensive computing at Sandia.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew T.

    2010-09-01

    Data-Intensive Computing is parallel computing where you design your algorithms and your software around efficient access and traversal of a data set, and where hardware requirements are dictated by data size as much as by desired run times, usually distilling compact results from massive data.

  3. Applications in Data-Intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.

    2010-04-01

    This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data-intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. The work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content generated on a regular basis. Data-intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

  4. Task Assignment Heuristics for Distributed CFD Applications

    NASA Technical Reports Server (NTRS)

    Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    CFD applications require high-performance computational platforms: 1. Complex physics and domain configuration demand strongly coupled solutions; 2. Applications are CPU and memory intensive; and 3. Huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.

  5. A Comparison of Student Perceptions of Their Computer Skills to Their Actual Abilities

    ERIC Educational Resources Information Center

    Grant, Donna M.; Malloy, Alisha D.; Murphy, Marianne C.

    2009-01-01

    In this technology intensive society, most students are required to be proficient in computer skills to compete in today's global job market. These computer skills usually consist of basic to advanced knowledge in word processing, presentation, and spreadsheet applications. In many U.S. states, students are required to demonstrate computer…

  6. Eigen Spreading

    DTIC Science & Technology

    2008-02-27

    between the PHY layer and, for example, a host PC computer. The PC wants to generate and receive a sequence of data packets. The PC may also want to send ... the testbed is quite similar. Given the intense computational requirements of SVD and other matrix mode operations needed to support eigen spreading a ... platform for real time operation. This task is probably the major challenge in the development of the testbed. All compute intensive tasks will be

  7. Application verification research of cloud computing technology in the field of real time aerospace experiment

    NASA Astrophysics Data System (ADS)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    To meet the real-time, reliability, and safety requirements of aerospace experiments, a single-center cloud computing technology application verification platform is constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on an analysis of the test results, a preliminary conclusion is reached: a cloud computing platform can be applied to compute-intensive aerospace experiment workloads, whereas for I/O-intensive workloads the traditional physical machine is recommended.

  8. User's manual for a computer program for simulating intensively managed allowable cut.

    Treesearch

    Robert W. Sassaman; Ed Holt; Karl Bergsvik

    1972-01-01

    Detailed operating instructions are described for SIMAC, a computerized forest simulation model which calculates the allowable cut assuming volume regulation for forests with intensively managed stands. A sample problem illustrates the required inputs and expected output. SIMAC is written in FORTRAN IV and runs on a CDC 6400 computer with a SCOPE 3.3 operating system....

  9. Multimodal computational microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2016-12-01

    The transport of intensity equation (TIE) is a powerful tool for phase retrieval and quantitative phase imaging; it requires intensity measurements only at axially closely spaced planes, without a separate reference beam. It does not require coherent illumination and works well on conventional bright-field microscopes. The quantitative phase reconstructed by TIE gives valuable information that has been encoded in the complex wave field by passage through the sample of interest. Such information may provide tremendous flexibility to emulate various microscopy modalities computationally without requiring specialized hardware components. We develop the requisite theory to describe such a hybrid computational multimodal imaging system, which yields quantitative phase, Zernike phase contrast, differential interference contrast, and light-field moment imaging simultaneously, making various observations of biomedical samples easy. We then demonstrate these ideas experimentally by time-lapse imaging of live HeLa cell mitosis. The experimental results verify that a tunable-lens-based TIE system, combined with the appropriate postprocessing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
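
    As a rough illustration of the computation behind TIE phase retrieval, the sketch below solves the transport of intensity equation with an FFT-based Poisson solver under the simplifying assumptions of uniform intensity and periodic boundaries; the function name and arguments are placeholders, not the authors' implementation.

      import numpy as np

      def tie_phase(I_minus, I_plus, I0, dz, wavelength, pixel):
          """Minimal TIE phase retrieval assuming uniform intensity I0 and periodic boundaries.
          I_minus, I_plus are intensity images recorded at -dz/2 and +dz/2 defocus."""
          k = 2.0 * np.pi / wavelength
          dIdz = (I_plus - I_minus) / dz                 # axial intensity derivative
          rhs = -k * dIdz / I0                           # Poisson equation: laplacian(phi) = rhs
          ny, nx = rhs.shape
          fx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel)
          fy = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel)
          KX, KY = np.meshgrid(fx, fy)
          k2 = KX**2 + KY**2
          k2[0, 0] = 1.0                                 # avoid division by zero at DC
          phi_hat = -np.fft.fft2(rhs) / k2
          phi_hat[0, 0] = 0.0                            # phase is defined only up to a constant
          return np.real(np.fft.ifft2(phi_hat))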

  10. From cosmos to connectomes: the evolution of data-intensive science.

    PubMed

    Burns, Randal; Vogelstein, Joshua T; Szalay, Alexander S

    2014-09-17

    The analysis of data requires computation: originally by hand and more recently by computers. Different models of computing are designed and optimized for different kinds of data. In data-intensive science, the scale and complexity of data exceeds the comfort zone of local data stores on scientific workstations. Thus, cloud computing emerges as the preeminent model, utilizing data centers and high-performance clusters, enabling remote users to access and query subsets of the data efficiently. We examine how data-intensive computational systems originally built for cosmology, the Sloan Digital Sky Survey (SDSS), are now being used in connectomics, at the Open Connectome Project. We list lessons learned and outline the top challenges we expect to face. Success in computational connectomics would drastically reduce the time between idea and discovery, as SDSS did in cosmology. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. From sequencer to supercomputer: an automatic pipeline for managing and processing next generation sequencing data.

    PubMed

    Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun

    2012-01-01

    Next-generation sequencing (NGS) is highly resource intensive. NGS tasks related to data processing, management, and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network-attached storage device expandable up to 40 TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.

  12. Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.

    ERIC Educational Resources Information Center

    Scheeline, Alexander; Mork, Brian J.

    1988-01-01

    Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)

  13. Overview 1993: Computational applications

    NASA Technical Reports Server (NTRS)

    Benek, John A.

    1993-01-01

    Computational applications include projects that apply or develop computationally intensive computer programs. Such programs typically require supercomputers to obtain solutions in a timely fashion. This report describes two CSTAR projects involving Computational Fluid Dynamics (CFD) technology. The first, the Parallel Processing Initiative, is a joint development effort and the second, the Chimera Technology Development, is a transfer of government developed technology to American industry.

  14. Implementing direct, spatially isolated problems on transputer networks

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    Parametric studies were performed on transputer networks of up to 40 processors to determine how to implement and maximize the performance of the solution of problems where no processor-to-processor data transfer is required for the problem solution (spatially isolated). Two types of problems are investigated: a computationally intensive problem whose solution required the transmission of 160 bytes of data through the parallel network, and a communication-intensive example that required the transmission of 3 Mbytes of data through the network. These data consist of solutions being sent back to the host processor, not intermediate results for another processor to work on. Studies were performed on both integer and floating-point transputers. The latter features an on-chip floating-point math unit and offers approximately an order-of-magnitude performance increase over the integer transputer on real-valued computations. The results indicate that a minimum amount of work is required on each node per communication to achieve high network speedups (efficiencies). The floating-point processor requires approximately an order of magnitude more work per communication than the integer processor because of the floating-point unit's increased computing capacity.
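
    The finding that a minimum amount of work per communication is needed can be made concrete with a simple speedup and efficiency estimate; the per-unit-work and per-message costs below are invented for illustration and are not measured transputer timings.

      def speedup(work_per_comm, t_work, t_comm, nodes):
          """Spatially isolated problem: each node does work_per_comm units of work
          (t_work each) and returns one result costing t_comm; no node-to-node transfer."""
          t_serial = nodes * work_per_comm * t_work      # one processor doing everything
          t_parallel = work_per_comm * t_work + t_comm   # one node's share plus its result transfer
          s = t_serial / t_parallel
          return s, s / nodes

      # Efficiency approaches 1 only when compute time dominates communication time.
      for w in (1, 10, 100, 1000):
          s, e = speedup(w, t_work=1.0, t_comm=50.0, nodes=40)
          print(f"work/comm={w:5d}  speedup={s:6.1f}  efficiency={e:.2f}")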

  15. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    NASA Astrophysics Data System (ADS)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
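
    A subset of MPI is all that such a cluster needs; as a stand-in, the sketch below uses the Python mpi4py binding rather than the Fortran77/C subset described above, and simply reduces a partial sum from every rank, the pattern a particle-in-cell diagnostic would use.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      # Each rank works on its own (hypothetical) slice of particle data.
      local_particles = np.random.rand(100_000)
      local_sum = local_particles.sum()

      # Combine partial sums on rank 0, as a particle-in-cell code might for a diagnostic.
      total = comm.reduce(local_sum, op=MPI.SUM, root=0)
      if rank == 0:
          print(f"global sum over {size} ranks: {total:.3f}")

    Run under an MPI launcher, for example: mpirun -np 4 python this_script.py.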

  16. Social, Organizational, and Contextual Characteristics of Clinical Decision Support Systems for Intensive Insulin Therapy: A Literature Review and Case Study

    PubMed Central

    Campion, Thomas R.; Waitman, Lemuel R.; May, Addison K.; Ozdas, Asli; Lorenzi, Nancy M.; Gadd, Cynthia S.

    2009-01-01

    Introduction: Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. Results: This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Discussion: Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. Conclusion: This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. PMID:19815452

  17. Social, organizational, and contextual characteristics of clinical decision support systems for intensive insulin therapy: a literature review and case study.

    PubMed

    Campion, Thomas R; Waitman, Lemuel R; May, Addison K; Ozdas, Asli; Lorenzi, Nancy M; Gadd, Cynthia S

    2010-01-01

    Evaluations of computerized clinical decision support systems (CDSS) typically focus on clinical performance changes and do not include social, organizational, and contextual characteristics explaining use and effectiveness. Studies of CDSS for intensive insulin therapy (IIT) are no exception, and the literature lacks an understanding of effective computer-based IIT implementation and operation. This paper presents (1) a literature review of computer-based IIT evaluations through the lens of institutional theory, a discipline from sociology and organization studies, to demonstrate the inconsistent reporting of workflow and care process execution and (2) a single-site case study to illustrate how computer-based IIT requires substantial organizational change and creates additional complexity with unintended consequences including error. Computer-based IIT requires organizational commitment and attention to site-specific technology, workflow, and care processes to achieve intensive insulin therapy goals. The complex interaction between clinicians, blood glucose testing devices, and CDSS may contribute to workflow inefficiency and error. Evaluations rarely focus on the perspective of nurses, the primary users of computer-based IIT whose knowledge can potentially lead to process and care improvements. This paper addresses a gap in the literature concerning the social, organizational, and contextual characteristics of CDSS in general and for intensive insulin therapy specifically. Additionally, this paper identifies areas for future research to define optimal computer-based IIT process execution: the frequency and effect of manual data entry error of blood glucose values, the frequency and effect of nurse overrides of CDSS insulin dosing recommendations, and comprehensive ethnographic study of CDSS for IIT. Copyright (c) 2009. Published by Elsevier Ireland Ltd.

  18. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form that supplies computational power to any node within the grid that needs it has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  19. Autonomic Closure for Turbulent Flows Using Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Doronina, Olga; Christopher, Jason; Hamlington, Peter; Dahm, Werner

    2017-11-01

    Autonomic closure is a new technique for achieving fully adaptive and physically accurate closure of coarse-grained turbulent flow governing equations, such as those solved in large eddy simulations (LES). Although autonomic closure has been shown in recent a priori tests to more accurately represent unclosed terms than do dynamic versions of traditional LES models, the computational cost of the approach makes it challenging to implement for simulations of practical turbulent flows at realistically high Reynolds numbers. The optimization step used in the approach introduces large matrices that must be inverted and is highly memory intensive. In order to reduce memory requirements, here we propose to use approximate Bayesian computation (ABC) in place of the optimization step, thereby yielding a computationally-efficient implementation of autonomic closure that trades memory-intensive for processor-intensive computations. The latter challenge can be overcome as co-processors such as general purpose graphical processing units become increasingly available on current generation petascale and exascale supercomputers. In this work, we outline the formulation of ABC-enabled autonomic closure and present initial results demonstrating the accuracy and computational cost of the approach.
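
    To make the ABC substitution concrete, here is a generic rejection-ABC sketch: draw candidate parameters from a prior, run a cheap forward model, and keep candidates whose summary-statistic distance from the reference data falls below a tolerance. The linear forward model, uniform prior, and tolerance are placeholders, not the autonomic closure formulation.

      import numpy as np

      rng = np.random.default_rng(0)

      def forward_model(theta, x):
          # Placeholder surrogate for the unclosed-term model: a simple linear map.
          return theta[0] * x + theta[1]

      def rejection_abc(data, x, n_samples=20_000, tol=0.3):
          accepted = []
          for _ in range(n_samples):
              theta = rng.uniform(-5.0, 5.0, size=2)          # draw from a uniform prior
              sim = forward_model(theta, x)
              distance = np.sqrt(np.mean((sim - data) ** 2))  # summary-statistic distance
              if distance < tol:
                  accepted.append(theta)
          return np.array(accepted)

      x = np.linspace(0.0, 1.0, 50)
      data = forward_model(np.array([2.0, -1.0]), x) + rng.normal(0.0, 0.1, size=x.size)
      posterior = rejection_abc(data, x)
      print(posterior.shape[0], "accepted samples; posterior mean:", posterior.mean(axis=0))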

  20. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper does not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  1. Challenges in reusing transactional data for daily documentation in neonatal intensive care.

    PubMed

    Kim, G R; Lawson, E E; Lehmann, C U

    2008-11-06

    The reuse of transactional data for clinical documentation requires navigation of computational, institutional and adaptive barriers. We describe organizational and technical issues in developing and deploying a daily progress note tool in a tertiary neonatal intensive care unit that reuses and aggregates data from a commercial integrated clinical information system.

  2. Bayesian Methods and Confidence Intervals for Automatic Target Recognition of SAR Canonical Shapes

    DTIC Science & Technology

    2014-03-27

    and DirectX [22]. The CUDA platform was developed by the NVIDIA Corporation to allow programmers access to the computational capabilities of the...were used for the intense repetitive computations. Developing CUDA software requires writing code for specialized compilers provided by NVIDIA and

  3. Information Technology in Critical Care: Review of Monitoring and Data Acquisition Systems for Patient Care and Research

    PubMed Central

    De Georgia, Michael A.; Kaffashi, Farhad; Jacono, Frank J.; Loparo, Kenneth A.

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes. PMID:25734185

  4. Information technology in critical care: review of monitoring and data acquisition systems for patient care and research.

    PubMed

    De Georgia, Michael A; Kaffashi, Farhad; Jacono, Frank J; Loparo, Kenneth A

    2015-01-01

    There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes.

  5. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second-generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet the real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  6. [Mobile computing in anaesthesiology and intensive care medicine. The practical relevance of portable digital assistants].

    PubMed

    Pazhur, R J; Kutter, B; Georgieff, M; Schraag, S

    2003-06-01

    Portable digital assistants (PDAs) may be of value to the anaesthesiologist as medical care moves towards "bedside computing". Many different portable computers are currently available, and it is now possible for the physician to carry a mobile computer at all times. It is a database, reference book, patient-tracking aid, date planner, computer, book, magazine, calculator, and much more in one mobile device. With the help of a PDA, the information required for our work can be available at all times and everywhere at the point of care within seconds. In this overview, the possibilities for the use of PDAs in anaesthesia and intensive care medicine are discussed. Developments in other countries and possibilities in use, but also problems such as data security and network technology, are evaluated.

  7. Pair Programming and LSs in Computing Education: Its Impact on Students' Performances

    ERIC Educational Resources Information Center

    Hui, Tie Hui; Umar, Irfan Naufal

    2011-01-01

    Learning to programme requires complex cognitive skills that computing students find arduous to grasp. PP (pair programming) is an intensive style of programming cooperation in which two people work together to resolve programming scenarios. It has begun to draw the interest of educators as a teaching approach to facilitate learning and…

  8. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
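
    As a minimal illustration of the resampling ideas reviewed above, the sketch below computes a percentile bootstrap confidence interval for a scalar statistic; the synthetic data and the number of resamples are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)
      data = rng.exponential(scale=2.0, size=80)     # stand-in for replicated measurements

      def bootstrap_ci(sample, stat=np.mean, n_boot=10_000, alpha=0.05):
          """Percentile bootstrap: resample with replacement and recompute the statistic."""
          stats = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                            for _ in range(n_boot)])
          lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
          return stat(sample), (lo, hi)

      estimate, (lo, hi) = bootstrap_ci(data)
      print(f"mean = {estimate:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")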

  9. Achieving supercomputer performance for neural net simulation with an array of digital signal processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, U.A.; Baumle, B.; Kohler, P.

    1992-10-01

    Music, a DSP-based system with a parallel distributed-memory architecture, provides enormous computing power yet retains the flexibility of a general-purpose computer. Reaching a peak performance of 2.7 Gflops at a significantly lower cost, power consumption, and space requirement than conventional supercomputers, Music is well suited to computationally intensive applications such as neural network simulation. 12 refs., 9 figs., 2 tabs.

  10. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    NASA Technical Reports Server (NTRS)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  11. Accurate optimization of amino acid form factors for computing small-angle X-ray scattering intensity of atomistic protein structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Dudu; Yang, Sichun; Lu, Lanyuan

    2016-06-20

    Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.
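
    For readers unfamiliar with why SAXS profile evaluation is costly, the following sketch computes a scattering curve from coarse-grained beads with the generic Debye formula, I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij)/(q r_ij); the constant form factors and random bead coordinates are placeholders, not the EDM-optimized amino-acid form factors derived in the paper.

      import numpy as np

      def debye_intensity(coords, form_factors, q_values):
          """Debye sum over all bead pairs: O(N^2) work per q value, hence the push for efficient CG models."""
          r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # pair distances
          ff = np.outer(form_factors, form_factors)
          return np.array([np.sum(ff * np.sinc(q * r / np.pi)) for q in q_values])

      rng = np.random.default_rng(2)
      beads = rng.normal(scale=15.0, size=(200, 3))   # toy structure, one bead per residue (angstrom)
      f = np.full(200, 10.0)                          # placeholder constant form factors
      q = np.linspace(0.01, 0.5, 50)                  # scattering vector magnitudes (1/angstrom)
      print(debye_intensity(beads, f, q)[:5])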

  12. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.
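
    The trade-off such offloading frameworks manage can be summarized with a simple cost model: outsource a component only when remote execution plus data transfer beats local execution. The cycle counts, clock speeds, bandwidth, and round-trip time below are invented parameters for illustration, not measurements from the paper.

      def should_offload(workload_cycles, data_bytes,
                         local_speed_hz, cloud_speed_hz,
                         bandwidth_bytes_per_s, rtt_s=0.05):
          """Return True when offloading is expected to reduce turnaround time."""
          t_local = workload_cycles / local_speed_hz
          t_remote = (workload_cycles / cloud_speed_hz
                      + data_bytes / bandwidth_bytes_per_s + rtt_s)
          return t_remote < t_local, t_local, t_remote

      offload, t_local, t_remote = should_offload(
          workload_cycles=5e9, data_bytes=2e6,
          local_speed_hz=1.5e9, cloud_speed_hz=2.4e10,
          bandwidth_bytes_per_s=1e6)
      print(f"offload={offload}  local={t_local:.2f} s  remote={t_remote:.2f} s")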

  13. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity, and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. Currently, a number of computational offloading frameworks have been proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability, and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading with the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81%, and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  14. A new spherical model for computing the radiation field available for photolysis and heating at twilight

    NASA Technical Reports Server (NTRS)

    Dahlback, Arne; Stamnes, Knut

    1991-01-01

    Accurate computation of atmospheric photodissociation and heating rates is needed in photochemical models. These quantities are proportional to the mean intensity of the solar radiation penetrating to various levels in the atmosphere. For large solar zenith angles, a solution of the radiative transfer equation valid for a spherical atmosphere is required in order to obtain accurate values of the mean intensity. Such a solution, based on a perturbation technique combined with the discrete ordinate method, is presented. Mean intensity calculations are carried out for various solar zenith angles. These results are compared with calculations from a plane parallel radiative transfer model in order to assess the importance of using correct geometry around sunrise and sunset. This comparison shows, in agreement with previous investigations, that for solar zenith angles less than 90 deg adequate solutions are obtained for plane parallel geometry as long as spherical geometry is used to compute the direct beam attenuation; but for solar zenith angles greater than 90 deg this pseudospherical plane parallel approximation overestimates the mean intensity.
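
    The advantage of spherical geometry for the direct beam can be seen from the slant-path length through a single homogeneous shell: in spherical geometry it stays finite as the solar zenith angle approaches 90 deg, while the plane-parallel sec(theta) path diverges. The shell thickness and Earth radius below are round illustrative numbers, not the model atmosphere of the paper.

      import numpy as np

      R_EARTH_KM = 6371.0
      SHELL_KM = 50.0          # a single homogeneous shell, for illustration only

      def slant_path_spherical(zenith_deg, h_obs_km=0.0):
          """Path (km) from an observer at h_obs_km to the shell top, spherical geometry."""
          z = np.radians(zenith_deg)
          r0 = R_EARTH_KM + h_obs_km
          rt = R_EARTH_KM + SHELL_KM
          return np.sqrt(rt**2 - (r0 * np.sin(z))**2) - r0 * np.cos(z)

      def slant_path_plane_parallel(zenith_deg):
          return SHELL_KM / np.cos(np.radians(zenith_deg))

      for z in (60.0, 85.0, 89.0, 90.0):
          pp = slant_path_plane_parallel(z) if z < 90.0 else float("inf")
          print(f"zenith={z:5.1f}  spherical={slant_path_spherical(z):7.1f} km  plane-parallel={pp:10.1f} km")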

  15. Paranoia.Ada: A diagnostic program to evaluate Ada floating-point arithmetic

    NASA Technical Reports Server (NTRS)

    Hjermstad, Chris

    1986-01-01

    Many essential software functions in the mission-critical computer resource application domain depend on floating point arithmetic. Numerically intensive functions associated with the Space Station project, such as ephemeris generation or the implementation of Kalman filters, are likely to employ the floating point facilities of Ada. Paranoia.Ada appears to be a valuable program to ensure that Ada environments and their underlying hardware exhibit the precision and correctness required to satisfy mission computational requirements. As a diagnostic tool, Paranoia.Ada reveals many essential characteristics of an Ada floating point implementation. Equipped with such knowledge, programmers need not tremble before the complex task of floating point computation.

  16. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity.

    PubMed

    Madeleine, Pascal; Vangsgaard, Steffen; Hviid Andersen, Johan; Ge, Hong-You; Arendt-Nielsen, Lars

    2013-08-01

    Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short- and long-term pain complaints and work-related variables in a cohort of Danish computer users. A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users.

  17. [Features of control of electromagnetic radiation emitted by personal computers].

    PubMed

    Pal'tsev, Iu P; Buzov, A L; Kol'chugin, Iu I

    1996-01-01

    Measurements of PC electromagnetic radiation show that the main sources are PC blocks emitting waves at certain frequencies. Using wide-range detectors that measure field intensity to assess PC electromagnetic radiation gives unreliable results; more precise measurements with selective devices are required. Thus, it is expedient to introduce the term "spectral density of field intensity" and its maximum allowable level. In this case the frequency spectrum of PC electromagnetic radiation is divided into 4 ranges, one of which is subject to calculation of field intensity for each harmonic frequency, while the others undergo assessment of spectral density of field intensity.

  18. The diagnostic value of single-photon emission computed tomography/computed tomography for severe sacroiliac joint dysfunction.

    PubMed

    Tofuku, Katsuhiro; Koga, Hiroaki; Komiya, Setsuro

    2015-04-01

    We aimed to evaluate the value of single-photon emission computed tomography (SPECT)/computed tomography (CT) for the diagnosis of sacroiliac joint (SIJ) dysfunction. SPECT/CT was performed in 32 patients with severe SIJ dysfunction, who did not respond to 1-year conservative treatment and had a score of >4 points on a 10-cm visual analog scale. We investigated the relationship between the presence of severe SIJ dysfunction and tracer accumulation, as confirmed by SPECT/CT. In cases of bilateral SIJ dysfunction, we also compared the intensity of tracer accumulation on each side. Moreover, we examined the relationship between the intensity of tracer accumulation and the different treatments the patients subsequently received. All 32 patients with severe SIJ dysfunction had tracer accumulation with a standardized uptake value (SUV) of >2.2 (mean SUV 4.7). In the 19 patients with lateralized symptom intensity, mean SUVs of the dominant side were significantly higher than those of the nondominant side. In 10 patients with no lateralization, the difference in the SUVs between sides was <0.6. Patients exhibiting higher levels of tracer accumulation required more advanced treatment. Patients with higher levels of tracer accumulation had greater symptom severity and also required more advanced treatment. Thus, we believe that SPECT/CT may be a suitable supplementary diagnostic modality for SIJ dysfunction as well as a useful technique for predicting the prognosis of this condition.
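
    For context, the standardized uptake value quoted above is conventionally the body-weight-normalized SUV; the sketch below applies that standard definition to made-up numbers and is not necessarily the exact computation used in the study.

      def suv_body_weight(activity_conc_kbq_per_ml, injected_dose_mbq, body_weight_kg):
          """SUV = tissue activity concentration / (injected dose / body weight),
          taking 1 mL of tissue as approximately 1 g so the units cancel."""
          injected_kbq = injected_dose_mbq * 1000.0
          body_weight_g = body_weight_kg * 1000.0
          return activity_conc_kbq_per_ml / (injected_kbq / body_weight_g)

      # Example: 50 kBq/mL uptake, 740 MBq injected, 65 kg patient -> SUV of about 4.4
      print(round(suv_body_weight(50.0, 740.0, 65.0), 1))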

  19. Controlling Light Transmission Through Highly Scattering Media Using Semi-Definite Programming as a Phase Retrieval Computation Method.

    PubMed

    N'Gom, Moussa; Lien, Miao-Bin; Estakhri, Nooshin M; Norris, Theodore B; Michielssen, Eric; Nadakuditi, Raj Rao

    2017-05-31

    Complex Semi-Definite Programming (SDP) is introduced as a novel approach to phase retrieval enabled control of monochromatic light transmission through highly scattering media. In a simple optical setup, a spatial light modulator is used to generate a random sequence of phase-modulated wavefronts, and the resulting intensity speckle patterns in the transmitted light are acquired on a camera. The SDP algorithm allows computation of the complex transmission matrix of the system from this sequence of intensity-only measurements, without need for a reference beam. Once the transmission matrix is determined, optimal wavefronts are computed that focus the incident beam to any position or sequence of positions on the far side of the scattering medium, without the need for any subsequent measurements or wavefront shaping iterations. The number of measurements required and the degree of enhancement of the intensity at focus is determined by the number of pixels controlled by the spatial light modulator.
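
    Once a complex transmission matrix is known, focusing amounts to phase-conjugating one of its rows; the sketch below simulates that step with a random Gaussian matrix standing in for the measured medium, so the matrix size and statistics are assumptions and the SDP retrieval itself is not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)
      n_in, n_out = 256, 1024                  # SLM pixels and camera pixels (illustrative)
      T = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

      target = 137                                        # camera pixel chosen as the focus
      phase_mask = np.exp(-1j * np.angle(T[target]))      # phase-conjugate the corresponding row
      focused = T @ phase_mask
      random_input = T @ np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_in))

      enhancement = np.abs(focused[target])**2 / np.mean(np.abs(random_input)**2)
      print(f"intensity enhancement at the target pixel: {enhancement:.0f} (about pi/4 * {n_in} expected)")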

  20. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilerto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.

  1. Digital image processing for information extraction.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.

  2. Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.

    PubMed

    Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar

    2012-01-01

    Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow certain interaction in the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer-pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.

  3. Structural optimization with approximate sensitivities

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.

    1994-01-01

    Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation associated with traditional structural optimization.
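
    The abstract does not spell out the form of the approximation, so the sketch below stands in with the most common cheap substitute, a one-sided finite-difference gradient of a toy behavior constraint, compared against its analytic gradient; the constraint function and step size are invented for illustration.

      import numpy as np

      def constraint(x):
          # Toy behavior constraint g(x) <= 0.
          return x[0]**2 + 2.0 * x[0] * x[1] - 4.0

      def exact_gradient(x):
          return np.array([2.0 * x[0] + 2.0 * x[1], 2.0 * x[0]])

      def approx_gradient(x, h=1e-6):
          """Forward finite differences: n extra constraint evaluations, no analytic derivation."""
          g0 = constraint(x)
          grad = np.zeros_like(x)
          for i in range(x.size):
              xp = x.copy()
              xp[i] += h
              grad[i] = (constraint(xp) - g0) / h
          return grad

      x = np.array([1.5, 0.5])
      print(exact_gradient(x), approx_gradient(x))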

  4. COMBAT: mobile-Cloud-based cOmpute/coMmunications infrastructure for BATtlefield applications

    NASA Astrophysics Data System (ADS)

    Soyata, Tolga; Muraleedharan, Rajani; Langdon, Jonathan; Funai, Colin; Ames, Scott; Kwon, Minseok; Heinzelman, Wendi

    2012-05-01

    The amount of data processed annually over the Internet has crossed the zettabyte boundary, yet this Big Data cannot be efficiently processed or stored using today's mobile devices. Parallel to this explosive growth in data, a substantial increase in mobile compute capability and the advances in cloud computing have brought the state-of-the-art in mobile-cloud computing to an inflection point, where the right architecture may allow mobile devices to run applications utilizing Big Data and intensive computing. In this paper, we propose the MObile Cloud-based Hybrid Architecture (MOCHA), which formulates a solution to permit mobile-cloud computing applications such as object recognition in the battlefield by introducing a mid-stage compute and storage layer, called the cloudlet. MOCHA is built on the key observation that many mobile-cloud applications have the following characteristics: 1) they are compute-intensive, requiring the compute power of a supercomputer, and 2) they use Big Data, requiring a communications link to cloud-based database sources in near-real time. In this paper, we describe the operation of MOCHA in battlefield applications, by formulating the aforementioned mobile and cloudlet to be housed within a soldier's vest and inside a military vehicle, respectively, and enabling access to the cloud through high-latency satellite links. We provide simulations using the traditional mobile-cloud approach as well as utilizing MOCHA with a mid-stage cloudlet to quantify the utility of this architecture. We show that the MOCHA platform for mobile-cloud computing promises a future for critical battlefield applications that access Big Data, which is currently not possible using existing technology.

  5. Why not make a PC cluster of your own? 5. AppleSeed: A Parallel Macintosh Cluster for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Dauger, Dean E.

    We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both the Classic Mac OS and the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  6. Computational laser intensity stabilisation for organic molecule concentration estimation in low-resource settings

    NASA Astrophysics Data System (ADS)

    Haider, Shahid A.; Kazemzadeh, Farnoud; Wong, Alexander

    2017-03-01

    An ideal laser is a useful tool for the analysis of biological systems. In particular, the polarization property of lasers can allow for the concentration of important organic molecules in the human body, such as proteins, amino acids, lipids, and carbohydrates, to be estimated. However, lasers do not always work as intended and there can be effects such as mode hopping and thermal drift that can cause time-varying intensity fluctuations. The causes of these effects can be from the surrounding environment, where either an unstable current source is used or the temperature of the surrounding environment is not temporally stable. This intensity fluctuation can cause bias and error in typical organic molecule concentration estimation techniques. In a low-resource setting where cost must be limited and where environmental factors, like unregulated power supplies and temperature, cannot be controlled, the hardware required to correct for these intensity fluctuations can be prohibitive. We propose a method for computational laser intensity stabilisation that uses Bayesian state estimation to correct for the time-varying intensity fluctuations from electrical and thermal instabilities without the use of additional hardware. This method will allow for consistent intensities across all polarization measurements for accurate estimates of organic molecule concentrations.
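
    A minimal version of the Bayesian state estimation idea is a scalar Kalman filter that tracks a slowly drifting intensity through noisy readings; the drift and noise variances below are invented for the sketch and are not calibrated to any particular laser.

      import numpy as np

      rng = np.random.default_rng(4)

      # Simulate a drifting laser intensity observed through a noisy photodiode (arbitrary units).
      n = 500
      true_intensity = 1.0 + np.cumsum(rng.normal(0.0, 0.002, n))   # slow random-walk drift
      readings = true_intensity + rng.normal(0.0, 0.05, n)

      # Scalar Kalman filter with a random-walk state model.
      q, r = 0.002**2, 0.05**2          # assumed process and measurement variances
      x_hat, p = readings[0], 1.0
      estimates = []
      for z in readings:
          p += q                         # predict: drift inflates the uncertainty
          k = p / (p + r)                # Kalman gain
          x_hat += k * (z - x_hat)       # update with the new reading
          p *= 1.0 - k
          estimates.append(x_hat)

      err_raw = np.std(readings - true_intensity)
      err_filt = np.std(np.array(estimates) - true_intensity)
      print(f"raw error std: {err_raw:.4f}   filtered error std: {err_filt:.4f}")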

  7. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  8. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and the Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
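
    The all-to-all communication arises because a distributed two-dimensional FFT is typically computed as row FFTs, a global transpose, then column FFTs; the single-process numpy sketch below only verifies that row-column decomposition, with the transpose standing in for the all-to-all exchange.

      import numpy as np

      rng = np.random.default_rng(5)
      image = rng.normal(size=(256, 256))

      step1 = np.fft.fft(image, axis=1)     # 1-D FFTs along rows (local to each node)
      step2 = step1.T                       # transpose = the all-to-all data exchange
      step3 = np.fft.fft(step2, axis=1)     # 1-D FFTs along the former columns
      decomposed = step3.T

      print("max abs difference from fft2:", np.max(np.abs(decomposed - np.fft.fft2(image))))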

  9. A novel iris localization algorithm using correlation filtering

    NASA Astrophysics Data System (ADS)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. Pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less intensive than existing algorithms such as the Hough transform.
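
    A rough sketch of the correlation-filtering idea for the pupil-iris boundary: cross-correlate the inverted grey-level eye image with zero-mean dark-disk templates over a range of radii and take the strongest response as the pupil centre and radius. The radius range and template shape are illustrative assumptions, not the authors' filter design.

```python
import numpy as np
from scipy.signal import fftconvolve

def locate_pupil(gray, radii=range(20, 61, 5)):
    """Correlate an inverted grey-level eye image with disk templates of
    several radii; the strongest response gives the pupil centre and radius."""
    inv = gray.max() - gray.astype(float)        # pupil is dark -> make it bright
    best = (-np.inf, None, None)
    for r in radii:
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        disk = (x**2 + y**2 <= r**2).astype(float)
        disk -= disk.mean()                      # zero-mean template
        resp = fftconvolve(inv, disk[::-1, ::-1], mode="same")  # cross-correlation
        ij = np.unravel_index(np.argmax(resp), resp.shape)
        if resp[ij] > best[0]:
            best = (resp[ij], ij, r)
    _, (cy, cx), radius = best
    return (cy, cx), radius
```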

  10. Structural system reliability calculation using a probabilistic fault tree analysis method

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
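
    The adaptive importance sampling step is not spelled out in the abstract, but the underlying variance-reduction idea can be illustrated with a plain (non-adaptive) importance-sampling estimate of a small failure probability for a single limit-state function. The limit state and the shifted sampling density below are invented purely for illustration.

```python
import numpy as np
from scipy import stats

def failure_prob_is(g, shift, n=100_000, dim=2, seed=0):
    """Estimate P(g(X) < 0) for standard-normal X by sampling from a normal
    density shifted toward the failure region and reweighting the samples."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim)) + shift       # samples from the shifted density
    w = np.exp(stats.multivariate_normal(np.zeros(dim)).logpdf(x)
               - stats.multivariate_normal(shift).logpdf(x))   # likelihood ratio
    return np.mean((g(x) < 0) * w)

# Illustrative limit state: failure when x0 + x1 exceeds 6 (a rare event).
g = lambda x: 6.0 - x[:, 0] - x[:, 1]
print(failure_prob_is(g, shift=np.array([3.0, 3.0])))   # ~ P(N(0,2) > 6) ≈ 1.1e-5
```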

  11. Exact analytical formulae for linearly distributed vortex and source sheets influence computation in 2D vortex methods

    NASA Astrophysics Data System (ADS)

    Kuzmina, K. S.; Marchevsky, I. K.; Ryatina, E. P.

    2017-11-01

    We consider the methodology of numerical scheme development for the two-dimensional vortex method. We describe two different approaches to deriving the integral equation for the unknown vortex sheet intensity. We simulate the velocity of the surface line of an airfoil as the influence of attached vortex and source sheets. We consider a polygonal approximation of the airfoil and assume the intensity distributions of the free and attached vortex sheets and the attached source sheet to be approximated with piecewise constant or piecewise linear (continuous or discontinuous) functions. We describe several specific numerical schemes that provide different accuracy and have different computational costs. The study shows that a Galerkin-type approach to solving the boundary integral equation requires computing several integrals and double integrals over the panels. We obtain exact analytical formulae for all the necessary integrals, which makes it possible to significantly raise the accuracy of vortex sheet intensity computation and improve the quality of velocity and vorticity field representation, especially in proximity to the surface line of the airfoil. All the formulae are written in invariant form and depend only on the geometric relationship between the positions of the beginnings and ends of the panels.

  12. AIP1OGREN: Aerosol Observing Station Intensive Properties Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koontz, Annette; Flynn, Connor

    The aip1ogren value-added product (VAP) computes several aerosol intensive properties. It requires as input calibrated, corrected, aerosol extensive properties (scattering and absorption coefficients, primarily) from the Aerosol Observing Station (AOS). Aerosol extensive properties depend on both the nature of the aerosol and the amount of the aerosol. We compute several properties as relationships between the various extensive properties. These intensive properties are independent of aerosol amount and instead relate to intrinsic properties of the aerosol itself. Along with the original extensive properties we report aerosol single-scattering albedo, hemispheric backscatter fraction, asymmetry parameter, and Ångström exponent for scattering and absorption with one-minute averaging. An hourly averaged file is produced from the 1-minute files that includes all extensive and intensive properties as well as submicron scattering and submicron absorption fractions. Finally, in both the minutely and hourly files the aerosol radiative forcing efficiency is provided.
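
    The relationships between extensive and intensive properties mentioned above are standard definitions. A hedged sketch is shown below; the variable names and wavelengths are illustrative and this is not the VAP source code, and the asymmetry parameter (an empirical function of the backscatter fraction) is omitted.

```python
import numpy as np

def aerosol_intensive_props(scat_450, scat_550, scat_700, backscat_550, absorb_550):
    """Intensive properties from extensive scattering/absorption coefficients
    (e.g. in Mm^-1); textbook definitions, not necessarily the exact VAP code."""
    ssa = scat_550 / (scat_550 + absorb_550)     # single-scattering albedo
    bfr = backscat_550 / scat_550                # hemispheric backscatter fraction
    ang = -np.log(scat_450 / scat_700) / np.log(450.0 / 700.0)  # scattering Angstrom exponent
    return ssa, bfr, ang
```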

  13. Big Data, Big Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Bill

    Data—lots of data—generated in seconds and piling up on the internet, streaming and stored in countless databases. Big data is important for commerce, society and our nation’s security. Yet the volume, velocity, variety and veracity of data are simply too great for any single analyst to make sense of alone. It requires advanced, data-intensive computing. Simply put, data-intensive computing is the use of sophisticated computers to sort through mounds of information and present analysts with solutions in the form of graphics, scenarios, formulas, new hypotheses and more. This scientific capability is foundational to PNNL’s energy, environment and security missions. Senior Scientist and Division Director Bill Pike and his team are developing analytic tools that are used to solve important national challenges, including cyber systems defense, power grid control systems, intelligence analysis, climate change and scientific exploration.

  14. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
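
    A minimal numerical sketch of that recipe, for Poisson counts with a known expected background: choose the detection threshold from the Type I error α, then raise the source intensity until the detection probability (the power) reaches 1 − β. The background level, α, and β below are illustrative.

```python
from scipy.stats import poisson

def upper_limit(background, alpha=0.05, beta=0.5, step=0.01):
    """Upper limit = weakest source intensity detected with probability
    >= 1 - beta at a threshold fixed by the Type I error alpha."""
    # Threshold: smallest n_star with P(N >= n_star | background) <= alpha.
    n_star = 0
    while poisson.sf(n_star - 1, background) > alpha:    # sf(n-1) = P(N >= n)
        n_star += 1
    # Raise the source intensity until the detection probability reaches 1 - beta.
    s = 0.0
    while poisson.sf(n_star - 1, background + s) < 1 - beta:
        s += step
    return n_star, s

print(upper_limit(background=3.0))   # roughly a threshold of 7 counts and a limit near 3.7
```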

  15. METCAN: The metal matrix composite analyzer

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Murthy, Pappu L. N.

    1988-01-01

    Metal matrix composites (MMC) are the subject of intensive study and are receiving serious consideration for critical structural applications in advanced aerospace systems. MMC structural analysis and design methodologies are studied. Predicting the mechanical and thermal behavior and the structural response of components fabricated from MMC requires the use of a variety of mathematical models. These models relate stresses to applied forces, stress intensities at the tips of cracks to nominal stresses, buckling resistance to applied force, or vibration response to excitation forces. The extensive research in computational mechanics methods for predicting the nonlinear behavior of MMC is described. This research has culminated in the development of the METCAN (METal Matrix Composite ANalyzer) computer code.

  16. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
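
    The key assumption above is sparsity: the vector of transition probabilities has only a few non-negligible entries, so it can be recovered from far fewer measurements than its length. The toy below demonstrates that compressed-sensing idea with random Gaussian measurements and a hand-rolled orthogonal matching pursuit; it is not the authors' generating-function scheme, and the problem sizes are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 1000, 200, 10                    # vector length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 0.5 + 0.5 * rng.random(k)  # sparse "probability mass"
A = rng.standard_normal((m, n)) / np.sqrt(m)                          # random measurement matrix
x_hat = omp(A, A @ x_true, k)
print(np.max(np.abs(x_hat - x_true)))      # recovery error is typically tiny
```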

  17. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  18. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  19. A site oriented supercomputer for theoretical physics: The Fermilab Advanced Computer Program Multi Array Processor System (ACPMAPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nash, T.; Atac, R.; Cook, A.

    1989-03-06

    The ACPMAPS multiprocessor is a highly cost-effective, local-memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating-point-intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single-board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.

  20. Regional Sustainability: The San Luis Basin Metrics Project

    EPA Science Inventory

    There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...

  1. Development of a Multidisciplinary Approach to Access Sustainability

    EPA Science Inventory

    There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute the metrics. Moreover, individual metrics do not capture all aspects of a system that are relevan...

  2. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity

    PubMed Central

    2013-01-01

    Background Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis on short and long-term pain complaints and work-related variables in a cohort of Danish computer users. Methods A structured web-based questionnaire including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time was designed. Six hundred and ninety office workers completed the questionnaire responding to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Results Women reported higher pain intensity, longer pain duration as well as more locations with pain than men (P < 0.05). In parallel, women scored poorer work ability and ability to fulfil the requirements on productivity than men (P < 0.05). Strong positive correlations were found between pain intensity and pain duration for the forearm, elbow, neck and shoulder (P < 0.001). Moderate negative correlations were seen between pain intensity and work ability/productivity (P < 0.001). Conclusions The present results provide new key information on pain characteristics in office workers. The differences in pain characteristics, i.e., higher intensity, longer duration and more pain locations as well as poorer work ability reported by women workers relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users. PMID:23915209

  3. An S_N Algorithm for Modern Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Randal Scott

    2016-08-29

    LANL discrete ordinates transport packages are required to perform large, computationally intensive time-dependent calculations on massively parallel architectures, where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, we describe a modified KBA algorithm that allows realization of the reductions in solution time offered by both the current, and future, architectural changes within a compute node.

  4. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scaler, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  5. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    PubMed Central

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    Through reorganizing the execution order and optimizing the data structure, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we proposed serial optimization methods, including the multiresolution multiwindow for motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program with a speedup ratio of about 20 and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB when keeping the same bitrate. Through the analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is strongly related to the memory bandwidth, which gives an insight for new architecture design. PMID:24757432

  6. Variational Calculations of Ro-Vibrational Energy Levels and Transition Intensities for Tetratomic Molecules

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Langhoff, Stephen R. (Technical Monitor)

    1995-01-01

    A description is given of an algorithm for computing ro-vibrational energy levels for tetratomic molecules. The expressions required for evaluating transition intensities are also given. The variational principle is used to determine the energy levels and the kinetic energy operator is simple and evaluated exactly. The computational procedure is split up into the determination of one dimensional radial basis functions, the computation of a contracted rotational-bending basis, followed by a final variational step coupling all degrees of freedom. An angular basis is proposed whereby the rotational-bending contraction takes place in three steps. Angular matrix elements of the potential are evaluated by expansion in terms of a suitable basis and the angular integrals are given in a factorized form which simplifies their evaluation. The basis functions in the final variational step have the full permutation symmetries of the identical particles. Sample results are given for HCCH and BH3.

  7. Total reduction of distorted echelle spectrograms - An automatic procedure. [for computer controlled microdensitometer

    NASA Technical Reports Server (NTRS)

    Peterson, R. C.; Title, A. M.

    1975-01-01

    A total reduction procedure, notable for its use of a computer-controlled microdensitometer for semi-automatically tracing curved spectra, is applied to distorted high-dispersion echelle spectra recorded by an image tube. Microdensitometer specifications are presented and the FORTRAN, TRACEN and SPOTS programs are outlined. The intensity spectrum of the photographic or electrographic plate is plotted on a graphic display. The time requirements are discussed in detail.

  8. The Virtual Earthquake and Seismology Research Community e-science environment in Europe (VERCE) FP7-INFRA-2011-2 project

    NASA Astrophysics Data System (ADS)

    Vilotte, J.-P.; Atkinson, M.; Michelini, A.; Igel, H.; van Eck, T.

    2012-04-01

    Increasingly dense seismic and geodetic networks are continuously transmitting a growing wealth of data from around the world. The multi-use of these data led the seismological community to pioneer globally distributed open-access data infrastructures, standard services and formats, e.g., the Federation of Digital Seismic Networks (FDSN) and the European Integrated Data Archives (EIDA). Our ability to acquire observational data outpaces our ability to manage, analyze and model them. Research in seismology is today facing a fundamental paradigm shift. Enabling advanced data-intensive analysis and modeling applications challenges conventional storage, computation and communication models and requires a new holistic approach. It is instrumental to exploit the cornucopia of data, and to guarantee optimal operation and design of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of the seismological data-intensive applications in data analysis and modeling. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of those applications, integrating the data infrastructures with Grid, Cloud and HPC infrastructures. It will allow prototyping solutions for new use cases as they emerge within the European Plate Observatory Systems (EPOS), the ESFRI initiative of the solid Earth community. Computational seismology, and information management, is increasingly revolving around massive amounts of data that stem from: (1) the flood of data from the observational systems; (2) the flood of data from large-scale simulations and inversions; (3) the ability to economically store petabytes of data online; (4) the evolving Internet and data-aware computing capabilities. As data-intensive applications are rapidly increasing in scale and complexity, they require additional service-oriented architectures offering virtualization-based flexibility for complex and re-usable workflows. Scientific information management poses computer science challenges: acquisition, organization, query and visualization tasks scale almost linearly with the data volumes. The commonly used FTP-GREP metaphor allows gigabyte-sized datasets to be scanned today but will not work for scanning terabyte-sized continuous waveform datasets. New data analysis and modeling methods, exploiting the signal coherence within dense network arrays, are nonlinear. Pair-algorithms on N points scale as N². Waveform inversion and stochastic simulations raise computing and data handling challenges. These applications are unfeasible for tera-scale datasets without new parallel algorithms that use near-linear processing, storage and bandwidth, and that can exploit new computing paradigms enabled by the intersection of several technologies (HPC, parallel scalable database crawlers, data-aware HPC). These issues will be discussed based on a number of core pilot data-intensive applications and use cases retained in VERCE. These core applications are related to: (1) data processing and data analysis methods based on correlation techniques; (2) CPU-intensive applications such as large-scale simulation of synthetic waveforms in complex earth systems, and full waveform inversion and tomography. We shall analyze their workflow and data flow, and their requirements for a new service-oriented architecture and a data-aware platform with services and tools.
Finally, we will outline the importance of a new collaborative environment between seismology and computer science, together with the need for the emergence and the recognition of 'research technologists' mastering the evolving data-aware technologies and the data-intensive research goals in seismology.

  9. Regional sustainable environmental management: sustainability metrics research for decision makers

    EPA Science Inventory

    There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute. Moreover, individual metrics may not capture all aspects of a system that are relevant to sust...

  10. Development of a multidisciplinary approach to assess regional sustainability

    EPA Science Inventory

    There are a number of established, scientifically supported metrics of sustainability. Many of the metrics are data intensive and require extensive effort to collect data and compute the metrics. Moreover, individual metrics do not capture all aspects of a system that are relev...

  11. An associative capacitive network based on nanoscale complementary resistive switches for memory-intensive computing

    NASA Astrophysics Data System (ADS)

    Kavehei, Omid; Linn, Eike; Nielen, Lutz; Tappertzhofen, Stefan; Skafidas, Efstratios; Valov, Ilia; Waser, Rainer

    2013-05-01

    We report on the implementation of an Associative Capacitive Network (ACN) based on the nondestructive capacitive readout of two Complementary Resistive Switches (2-CRSs). ACNs are capable of performing a fully parallel search for Hamming distances (i.e. similarity) between input and stored templates. Unlike conventional associative memories, where charge retention is a key function and frequent refresh cycles are therefore required, in ACNs information is retained in a nonvolatile resistive state and normal tasks are carried out through capacitive coupling between input and output nodes. Each device consists of two CRS cells and no selective element is needed; therefore, CMOS circuitry is required only in the periphery, for addressing and read-out. Highly parallel processing, nonvolatility, wide interconnectivity and low-energy consumption are significant advantages of ACNs over conventional and emerging associative memories. These characteristics make ACNs one of the promising candidates for applications in memory-intensive and cognitive computing, switches and routers as binary and ternary Content Addressable Memories (CAMs), and intelligent data processing.

  12. A tesselation-based model for intensity estimation and laser plasma interactions calculations in three dimensions

    NASA Astrophysics Data System (ADS)

    Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.

    2018-03-01

    A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tesselation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well-suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.

  13. Accessing the public MIMIC-II intensive care relational database for clinical research.

    PubMed

    Scott, Daniel J; Lee, Joon; Silva, Ikaro; Park, Shinhyuk; Moody, George B; Celi, Leo A; Mark, Roger G

    2013-01-10

    The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database is a free, public resource for intensive care research. The database was officially released in 2006, and has attracted a growing number of researchers in academia and industry. We present the two major software tools that facilitate accessing the relational database: the web-based QueryBuilder and a downloadable virtual machine (VM) image. QueryBuilder and the MIMIC-II VM have been developed successfully and are freely available to MIMIC-II users. Simple example SQL queries and the resulting data are presented. Clinical studies pertaining to acute kidney injury and prediction of fluid requirements in the intensive care unit are shown as typical examples of research performed with MIMIC-II. In addition, MIMIC-II has also provided data for annual PhysioNet/Computing in Cardiology Challenges, including the 2012 Challenge "Predicting mortality of ICU Patients". QueryBuilder is a web-based tool that provides easy access to MIMIC-II. For more computationally intensive queries, one can locally install a complete copy of MIMIC-II in a VM. Both publicly available tools provide the MIMIC-II research community with convenient querying interfaces and complement the value of the MIMIC-II relational database.

  14. Hybrid supply chain model for material requirement planning under financial constraints: A case study

    NASA Astrophysics Data System (ADS)

    Curci, Vita; Dassisti, Michele; Josefa, Mula Bru; Manuel, Díaz Madroñero

    2014-10-01

    Supply chain models (SCM) are potentially capable of integrating different aspects in supporting decision making for enterprise management tasks. The aim of this paper is to propose a hybrid mathematical programming model for the optimization of production requirements resource planning. The preliminary model was conceived bottom-up from the analysis of a real industrial case and is oriented towards maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimisation produced good results for the objective function.

  15. Diversity in computing technologies and strategies for dynamic resource allocation

    DOE PAGES

    Garzoglio, G.; Gutsche, O.

    2015-12-23

    Here, High Energy Physics (HEP) is a very data intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer details requiring ever increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers like commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.

  16. GPU and APU computations of Finite Time Lyapunov Exponent fields

    NASA Astrophysics Data System (ADS)

    Conti, Christian; Rossinelli, Diego; Koumoutsakos, Petros

    2012-03-01

    We present GPU and APU accelerated computations of Finite-Time Lyapunov Exponent (FTLE) fields. The calculation of FTLEs is a computationally intensive process, as in order to obtain the sharp ridges associated with the Lagrangian Coherent Structures an extensive resampling of the flow field is required. The computational performance of this resampling is limited by the memory bandwidth of the underlying computer architecture. The present technique harnesses data-parallel execution of many-core architectures and relies on fast and accurate evaluations of moment conserving functions for the mesh to particle interpolations. We demonstrate how the computation of FTLEs can be efficiently performed on a GPU and on an APU through OpenCL and we report over one order of magnitude improvements over multi-threaded executions in FTLE computations of bluff body flows.
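
    As a point of reference for what is being accelerated, a plain NumPy version of the FTLE evaluation from an already-computed flow map might look like the sketch below. The serial per-point loop is exactly what a GPU parallelizes; the flow-map integration itself, which dominates the cost, is omitted, and the array layout and spacings are illustrative.

```python
import numpy as np

def ftle(phi_x, phi_y, dx, dy, T):
    """FTLE from a flow map (phi_x, phi_y) on a regular grid: finite-difference
    gradients -> Cauchy-Green tensor -> largest eigenvalue -> (1/|T|) ln sqrt(lambda_max)."""
    dxd0, dxd1 = np.gradient(phi_x, dx, dy)   # derivatives of phi_x along the two grid axes
    dyd0, dyd1 = np.gradient(phi_y, dx, dy)
    out = np.zeros_like(phi_x)
    for i in range(phi_x.shape[0]):
        for j in range(phi_x.shape[1]):
            F = np.array([[dxd0[i, j], dxd1[i, j]],
                          [dyd0[i, j], dyd1[i, j]]])   # deformation gradient
            C = F.T @ F                                # Cauchy-Green tensor
            lam_max = np.linalg.eigvalsh(C)[-1]        # largest eigenvalue
            out[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return out
```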

  17. Using digital inpainting to estimate incident light intensity for the calculation of red blood cell oxygen saturation from microscopy images.

    PubMed

    Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G

    2018-05-25

    Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required.
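
    A hedged sketch of the workflow, with OpenCV's built-in Telea inpainting standing in for whatever inpainting scheme the authors used: inpaint the cell pixels to estimate the incident intensity I0, then form the Beer-Lambert optical density that feeds the saturation calculation (the calibration to saturation itself, which needs hemoglobin extinction coefficients, is not shown, and the function and parameter names are illustrative).

```python
import cv2
import numpy as np

def saturation_inputs(frame8, cell_mask, radius=5):
    """frame8: 8-bit transmission image; cell_mask: nonzero where red blood cells sit.
    Inpainting the cell pixels estimates the incident intensity I0 there; the
    Beer-Lambert optical density OD = -log10(I / I0) then feeds the saturation fit."""
    i0 = cv2.inpaint(frame8, cell_mask.astype(np.uint8), radius, cv2.INPAINT_TELEA)
    i = frame8.astype(float)
    od = -np.log10(np.clip(i, 1.0, None) / np.clip(i0.astype(float), 1.0, None))
    return i0, od
```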

  18. The Domains for the Multi-Criteria Decisions about E-Learning Systems

    ERIC Educational Resources Information Center

    Uysal, Murat Pasa

    2012-01-01

    Developments in computer and information technologies continue to give opportunities for designing advanced E-learning systems while entailing objective and technical evaluation methodologies. Design and development of E-learning systems require time-consuming and labor-intensive processes; therefore any decision about these systems and their…

  19. In Situ Three-Dimensional Reciprocal-Space Mapping of Diffuse Scattering Intensity Distribution and Data Analysis for Precursor Phenomenon in Shape-Memory Alloy

    NASA Astrophysics Data System (ADS)

    Cheng, Tian-Le; Ma, Fengde D.; Zhou, Jie E.; Jennings, Guy; Ren, Yang; Jin, Yongmei M.; Wang, Yu U.

    2012-01-01

    Diffuse scattering contains rich information on various structural disorders, thus providing a useful means to study the nanoscale structural deviations from the average crystal structures determined by Bragg peak analysis. Extraction of maximal information from diffuse scattering requires concerted efforts in high-quality three-dimensional (3D) data measurement, quantitative data analysis and visualization, theoretical interpretation, and computer simulations. Such an endeavor is undertaken to study the correlated dynamic atomic position fluctuations caused by thermal vibrations (phonons) in precursor state of shape-memory alloys. High-quality 3D diffuse scattering intensity data around representative Bragg peaks are collected by using in situ high-energy synchrotron x-ray diffraction and two-dimensional digital x-ray detector (image plate). Computational algorithms and codes are developed to construct the 3D reciprocal-space map of diffuse scattering intensity distribution from the measured data, which are further visualized and quantitatively analyzed to reveal in situ physical behaviors. Diffuse scattering intensity distribution is explicitly formulated in terms of atomic position fluctuations to interpret the experimental observations and identify the most relevant physical mechanisms, which help set up reduced structural models with minimal parameters to be efficiently determined by computer simulations. Such combined procedures are demonstrated by a study of phonon softening phenomenon in precursor state and premartensitic transformation of Ni-Mn-Ga shape-memory alloy.

  20. Demonstrated Efficient Quasi-Monoenergetic Carbon-Ion Beams Approaching Fast Ignition (FI) Requirements

    NASA Astrophysics Data System (ADS)

    Fernández, Juan C.; Palaniyappan, S.; Huang, C.; Gautier, D. C.; Santiago, M.

    2015-11-01

    Using massive computer simulations of relativistic laser-plasma interactions, we have identified a self-organizing scheme that exploits persisting self-generated plasma electric (~TV/m) and magnetic (~10^4 Tesla) fields to reduce the ion energy spread of intense laser-driven ion beams after the laser exits the plasma. Consistent with the scheme, we have demonstrated on the LANL Trident laser carbon-ion beams with narrow spectral peaks at 220 MeV, with high conversion efficiency (~ 5%). These parameters are within a factor of 2 of FI requirements. The remaining gap may be bridged by increasing the laser intensity by a factor of 4, according to our data. We also discuss how this beam may be focused, to address the remaining requirement for FI, besides the total laser energy. This work is sponsored by the LANL LDRD Program.

  1. A Survey on the Feasibility of Sound Classification on Wireless Sensor Nodes

    PubMed Central

    Salomons, Etto L.; Havinga, Paul J. M.

    2015-01-01

    Wireless sensor networks are suitable to gain context awareness for indoor environments. As sound waves form a rich source of context information, equipping the nodes with microphones can be of great benefit. The algorithms to extract features from sound waves are often highly computationally intensive. This can be problematic as wireless nodes are usually restricted in resources. In order to be able to make a proper decision about which features to use, we survey how sound is used in the literature for global sound classification, age and gender classification, emotion recognition, person verification and identification and indoor and outdoor environmental sound classification. The results of the surveyed algorithms are compared with respect to accuracy and computational load. The accuracies are taken from the surveyed papers; the computational loads are determined by benchmarking the algorithms on an actual sensor node. We conclude that for indoor context awareness, the low-cost algorithms for feature extraction perform equally well as the more computationally-intensive variants. As the feature extraction still requires a large amount of processing time, we present four possible strategies to deal with this problem. PMID:25822142

  2. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  3. Acceleration of fluoro-CT reconstruction for a mobile C-Arm on GPU and FPGA hardware: a simulation study

    NASA Astrophysics Data System (ADS)

    Xue, Xinwei; Cheryauka, Arvi; Tubbs, David

    2006-03-01

    CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain in computational throughput with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms (a single CPU, a single GPU, and an FPGA-based solution) have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and feature clarity compared to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
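
    For scale, the CPU reference point for this kind of reconstruction can be written in a few lines with scikit-image: a 2-D parallel-beam filtered back-projection on a test phantom. This is only an illustrative baseline; a real C-arm uses cone-beam geometry (e.g. FDK reconstruction), and a recent scikit-image version is assumed for the filter_name argument.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# CPU baseline for the reconstruction step being accelerated on GPU/FPGA:
# simulate a parallel-beam sinogram, then apply filtered back-projection.
image = rescale(shepp_logan_phantom(), 0.5)             # 200x200 test phantom
angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # one projection per degree
sinogram = radon(image, theta=angles)                   # forward projection
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered back-projection
print("RMS error:", np.sqrt(np.mean((recon - image) ** 2)))
```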

  4. A study of meteor spectroscopy and physics from earth-orbit: A preliminary survey into ultraviolet meteor spectra

    NASA Technical Reports Server (NTRS)

    Meisel, D. D.

    1976-01-01

    Preliminary data required to extrapolate available meteor physics information (obtained in the photographic, visual and near ultraviolet spectral regions) into the middle and far ultraviolet are presented. Wavelength tables, telluric attenuation factors, meteor rates, and telluric airglow data are summarized in the context of near-earth observation vehicle parameters using moderate to low spectral resolution instrumentation. Considerable attention is given to the problem of meteor excitation temperatures since these are required to predict the strength of UV features. Relative line intensities are computed for an assumed chondritic composition. Features of greatest predicted intensities, the major problems in meteor physics, detectability of UV meteor events, complications of spacecraft motion, and UV instrumentation options are summarized.

  5. QoS support for end users of I/O-intensive applications using shared storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2011-01-19

    I/O-intensive applications are becoming increasingly common on today's high-performance computing systems. While performance of compute-bound applications can be effectively guaranteed with techniques such as space sharing or QoS-aware process scheduling, it remains a challenge to meet QoS requirements for end users of I/O-intensive applications using shared storage systems because it is difficult to differentiate I/O services for different applications with individual quality requirements. Furthermore, it is difficult for end users to accurately specify performance goals to the storage system using I/O-related metrics such as request latency or throughput. As access patterns, request rates, and the system workload change in time, a fixed I/O performance goal, such as bounds on throughput or latency, can be expensive to achieve and may not lead to meaningful performance guarantees such as bounded program execution time. We propose a scheme supporting end users' QoS goals, specified in terms of program execution time, in shared storage environments. We automatically translate the users' performance goals into instantaneous I/O throughput bounds using a machine learning technique, and use dynamically determined service time windows to efficiently meet the throughput bounds. We have implemented this scheme in the PVFS2 parallel file system and have conducted an extensive evaluation. Our results show that this scheme can satisfy realistic end-user QoS requirements by making highly efficient use of the I/O resources. The scheme seeks to balance programs' attainment of QoS requirements, and saves as much of the remaining I/O capacity as possible for best-effort programs.

  6. Pressure profiles of the BRing based on the simulation used in the CSRm

    NASA Astrophysics Data System (ADS)

    Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.

    2017-07-01

    HIAF-BRing, a new multipurpose accelerator facility of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum lower than 10-11 mbar to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required process pressure, the bench-marked codes of VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. In order to ensure the accuracy of the implementation of VAKTRAK, the computational results are verified by measured pressure data and compared with a new simulation code BOLIDE on the current synchrotron CSRm. Since the verification of VAKTRAK has been done, the pressure profiles of the BRing are calculated with different parameters such as conductance, out-gassing rates and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.

  7. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  8. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.

  9. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  10. System on a chip with MPEG-4 capability

    NASA Astrophysics Data System (ADS)

    Yassa, Fathy; Schonfeld, Dan

    2002-12-01

    Current products supporting video communication applications rely on existing computer architectures. RISC processors have been used successfully in numerous applications over several decades. DSP processors have become ubiquitous in signal processing and communication applications. Real-time applications such as speech processing in cellular telephony rely extensively on the computational power of these processors. Video processors designed to implement the computationally intensive codec operations have also been used to address the high demands of video communication applications (e.g., cable set-top boxes and DVDs). This paper presents an overview of a system-on-chip (SOC) architecture used for real-time video in wireless communication applications. The SOC specifications answer to the system requirements imposed by the application environment. A CAM-based video processor is used to accelerate data intensive video compression tasks such as motion estimations and filtering. Other components are dedicated to system level data processing and audio processing. A rich set of I/Os allows the SOC to communicate with other system components such as baseband and memory subsystems.

  11. Accessing the public MIMIC-II intensive care relational database for clinical research

    PubMed Central

    2013-01-01

    Background The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database is a free, public resource for intensive care research. The database was officially released in 2006, and has attracted a growing number of researchers in academia and industry. We present the two major software tools that facilitate accessing the relational database: the web-based QueryBuilder and a downloadable virtual machine (VM) image. Results QueryBuilder and the MIMIC-II VM have been developed successfully and are freely available to MIMIC-II users. Simple example SQL queries and the resulting data are presented. Clinical studies pertaining to acute kidney injury and prediction of fluid requirements in the intensive care unit are shown as typical examples of research performed with MIMIC-II. In addition, MIMIC-II has also provided data for annual PhysioNet/Computing in Cardiology Challenges, including the 2012 Challenge “Predicting mortality of ICU Patients”. Conclusions QueryBuilder is a web-based tool that provides easy access to MIMIC-II. For more computationally intensive queries, one can locally install a complete copy of MIMIC-II in a VM. Both publicly available tools provide the MIMIC-II research community with convenient querying interfaces and complement the value of the MIMIC-II relational database. PMID:23302652

  12. CAPER 3.0: A Scalable Cloud-Based System for Data-Intensive Analysis of Chromosome-Centric Human Proteome Project Data Sets.

    PubMed

    Yang, Shuai; Zhang, Xinlei; Diao, Lihong; Guo, Feifei; Wang, Dan; Liu, Zhongyang; Li, Honglei; Zheng, Junjie; Pan, Jingshan; Nice, Edouard C; Li, Dong; He, Fuchu

    2015-09-04

    The Chromosome-centric Human Proteome Project (C-HPP) aims to catalog genome-encoded proteins using a chromosome-by-chromosome strategy. As the C-HPP proceeds, the increasing requirement for data-intensive analysis of the MS/MS data poses a challenge to the proteomic community, especially small laboratories lacking computational infrastructure. To address this challenge, we have updated the previous CAPER browser into a higher version, CAPER 3.0, which is a scalable cloud-based system for data-intensive analysis of C-HPP data sets. CAPER 3.0 uses cloud computing technology to facilitate MS/MS-based peptide identification. In particular, it can use both public and private cloud, facilitating the analysis of C-HPP data sets. CAPER 3.0 provides a graphical user interface (GUI) to help users transfer data, configure jobs, track progress, and visualize the results comprehensively. These features enable users without programming expertise to easily conduct data-intensive analysis using CAPER 3.0. Here, we illustrate the usage of CAPER 3.0 with four specific mass spectral data-intensive problems: detecting novel peptides, identifying single amino acid variants (SAVs) derived from known missense mutations, identifying sample-specific SAVs, and identifying exon-skipping events. CAPER 3.0 is available at http://prodigy.bprc.ac.cn/caper3.

  13. A Collaborative Multimedia Annotation Tool for Enhancing Knowledge Sharing in CSCL

    ERIC Educational Resources Information Center

    Yang, Stephen J. H.; Zhang, Jia; Su, Addison Y. S.; Tsai, Jeffrey J. P.

    2011-01-01

    Knowledge sharing in computer supported collaborative learning (CSCL) requires intensive social interactions among participants, typically in the form of annotations. An annotation refers to an explicit expression of knowledge that is attached to a document to reveal the conceptual meanings of an annotator's implicit thoughts. In this research, we…

  14. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.
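
    The following is an illustrative NumPy sketch of a memory-bound kernel in the spirit of the benchmarks discussed (a STREAM-like triad); it is not one of the paper's benchmark problems, and the array size and byte accounting are assumptions made for demonstration.

      # Illustrative STREAM-like triad, a(i) = b(i) + s * c(i): a kernel whose run
      # time is dominated by memory traffic rather than arithmetic. Array size and
      # the rough byte count are demo assumptions, not the paper's benchmarks.
      import time
      import numpy as np

      n = 20_000_000                     # ~160 MB per double-precision array
      b = np.random.rand(n)
      c = np.random.rand(n)
      s = 3.0

      t0 = time.perf_counter()
      a = b + s * c                      # one read of b, one read of c, one write of a
      dt = time.perf_counter() - t0

      bytes_moved = 3 * n * 8            # rough count: 2 loads + 1 store of 8-byte doubles
      print(f"triad time {dt:.3f} s, effective bandwidth {bytes_moved / dt / 1e9:.1f} GB/s")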

  15. Investigation on experimental techniques to detect, locate and quantify gear noise in helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Flanagan, P. M.; Atherton, W. J.

    1985-01-01

    A robotic system to automate the detection, location, and quantification of gear noise using acoustic intensity measurement techniques has been successfully developed. Major system components fabricated under this grant include an instrumentation robot arm, a robot digital control unit, and system software. A commercial desktop computer, a spectrum analyzer, and a two-microphone probe complete the equipment required for the Robotic Acoustic Intensity Measurement System (RAIMS). Large-scale acoustic studies of gear noise in helicopter transmissions cannot be performed accurately and reliably using presently available instrumentation and techniques. Operator safety is a major concern in certain gear noise studies due to the operating environment. The man-hours needed to document a noise field in situ are another shortcoming of present techniques. RAIMS was designed to reduce the labor and hazard in collecting data and to improve the accuracy and repeatability of characterizing the acoustic field by automating the measurement process. Using RAIMS, a system operator can remotely control the instrumentation robot to scan surface areas and volumes, generating acoustic intensity information using the two-microphone technique. Acoustic intensity studies requiring hours of scan time can be performed automatically without operator assistance. During a scan sequence, the acoustic intensity probe is positioned by the robot and acoustic intensity data is collected, processed, and stored.
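
    As a brief sketch of the two-microphone (p-p) intensity estimate that this kind of system relies on, the snippet below computes acoustic intensity from the cross-spectral density of two synthetic microphone signals. The probe spacing, air properties, windowing choices, and sign convention (which depends on microphone ordering) are assumptions; real measurements also require probe calibration.

      # Two-microphone (p-p) acoustic intensity sketch:
      #   I(f) = -Im{G12(f)} / (2*pi*f * rho * dr)
      # where G12 is the cross-spectral density of the two microphone pressures,
      # rho is air density and dr the microphone spacing. Signals are synthetic;
      # the sign convention depends on which microphone is taken as number 1.
      import numpy as np
      from scipy.signal import csd

      fs, dr, rho, c = 48_000, 0.012, 1.21, 343.0     # sample rate, spacing (m), air density, sound speed
      t = np.arange(fs) / fs                          # one second of data
      f0 = 1000.0
      p1 = np.sin(2 * np.pi * f0 * t)                 # synthetic plane wave at mic 1
      p2 = np.sin(2 * np.pi * f0 * (t - dr / c))      # same wave arriving slightly later at mic 2

      f, G12 = csd(p1, p2, fs=fs, nperseg=4096)
      f, G12 = f[1:], G12[1:]                         # drop the DC bin to avoid dividing by zero
      intensity = -np.imag(G12) / (2 * np.pi * f * rho * dr)
      peak = np.argmax(np.abs(intensity))
      print(f"estimated intensity near {f[peak]:.0f} Hz: {intensity[peak]:.3e} W/m^2")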

  16. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
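
    The sketch below shows the general shape of an analytical scaling model of the kind described (compute time dropping as 1/N plus a communication term used to pick the workload split). The functional form and all coefficients here are placeholder assumptions; the paper's actual model and its calibration are not reproduced.

      # Generic analytical model of multi-GPU scaling, T(N) = T_compute / N + T_comm(N),
      # of the kind used to predict acceleration and choose a workload split.
      # The coefficients below are made-up placeholders, not the paper's values.
      def predicted_time(n_gpus, t_compute=120.0, t_fixed=1.5, t_per_gpu=0.4):
          """Predicted wall time (s) for a dose calculation on n_gpus GPUs."""
          return t_compute / n_gpus + t_fixed + t_per_gpu * n_gpus

      for n in (1, 2, 4, 8, 14):
          t = predicted_time(n)
          print(f"{n:2d} GPUs: predicted {t:6.1f} s, speedup {predicted_time(1) / t:4.1f}x")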

  17. Multiscale Simulations of Reactive Transport

    NASA Astrophysics Data System (ADS)

    Tartakovsky, D. M.; Bakarji, J.

    2014-12-01

    Discrete, particle-based simulations offer distinct advantages when modeling solute transport and chemical reactions. For example, Brownian motion is often used to model diffusion in complex pore networks, and Gillespie-type algorithms allow one to handle multicomponent chemical reactions with uncertain reaction pathways. Yet such models can be computationally more intensive than their continuum-scale counterparts, e.g., advection-dispersion-reaction equations. Combining the discrete and continuum models has the potential to resolve the quantity of interest with a required degree of physicochemical granularity at acceptable computational cost. We present computational examples of such "hybrid models" and discuss the challenges associated with coupling these two levels of description.
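
    To make the Gillespie-type machinery concrete, the following is a minimal stochastic simulation of a single bimolecular reaction A + B -> C. The rate constant, initial molecule counts, and the single-reaction setting are arbitrary demo assumptions, not an example from the abstract.

      # Minimal Gillespie (SSA) example for one reaction, A + B -> C.
      # Rate constant and initial molecule counts are arbitrary demo values.
      import random

      def gillespie_ab_to_c(n_a=1000, n_b=800, k=1e-3, t_end=10.0):
          t, n_c = 0.0, 0
          history = [(0.0, n_a, n_b, n_c)]
          while t < t_end and n_a > 0 and n_b > 0:
              propensity = k * n_a * n_b
              t += random.expovariate(propensity)   # exponentially distributed waiting time
              n_a, n_b, n_c = n_a - 1, n_b - 1, n_c + 1
              history.append((t, n_a, n_b, n_c))
          return history

      traj = gillespie_ab_to_c()
      t, a, b, c = traj[-1]
      print(f"final state at t={t:.2f}: A={a}, B={b}, C={c}")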

  18. Calculation of Stress Intensity Factors for Interfacial Cracks in Fiber Metal Laminates

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    2009-01-01

    Stress intensity factors for interfacial cracks in Fiber Metal Laminates (FML) are computed by using the displacement ratio method recently developed by Sun and Qian (1997, Int. J. Solids. Struct. 34, 2595-2609). Various FML configurations with single and multiple delaminations subjected to different loading conditions are investigated. The displacement ratio method requires the total energy release rate, bimaterial parameters, and relative crack surface displacements as input. Details of generating the energy release rates, defining bimaterial parameters with anisotropic elasticity, and selecting proper crack surface locations for obtaining relative crack surface displacements are discussed in the paper. Even though the individual energy release rates are nonconvergent, mesh-size-independent stress intensity factors can be obtained. This study also finds that the selection of reference length can affect the magnitudes and the mode mixity angles of the stress intensity factors; thus, it is important to report the reference length used with the calculated stress intensity factors.

  19. Application of computer assisted moire to the study of a crack tip

    NASA Astrophysics Data System (ADS)

    Sciammarella, C. A.; Albertazzi, A., Jr.; Mourikes, J.

    The basic principles of computer assisted moire are discussed. The influence of the image sensor and its finite dimensions on the sampling theorem requirements is discussed. Criteria for the selection of grating pitch on the basis of the spatial bandwidth of the pattern to be observed and the requirements arising from sensitivity considerations are given. The method is used to analyze the strain field in the neighborhood of the crack tip of a standard ASTM compact tension specimen. From the displacements the strains are computed, and from the strains the stresses are obtained using the generalized Ramberg-Osgood stress strain relationship. The stresses are used to compute the values for the J-integral in several circuits surrounding the crack. Good agreement is obtained between the values of the stress intensity factors obtained by different methods. The plastic region surrounding the crack does not show a HRR field and thus the usual rationale to justify the J-integral methods must be re-evaluated.

  20. Hierarchical Fuzzy Control Applied to Parallel Connected UPS Inverters Using Average Current Sharing Scheme

    NASA Astrophysics Data System (ADS)

    Singh, Santosh Kumar; Ghatak Choudhuri, Sumit

    2018-05-01

    Parallel connection of UPS inverters to enhance power rating is a widely accepted practice. Inter-modular circulating currents appear when multiple inverter modules are connected in parallel to supply a variable critical load. Interfacing the modules therefore requires careful design using a proper control strategy. The potential of human-intuitive Fuzzy Logic (FL) control with an imprecise system model is well known, and thus it can be utilised in parallel-connected UPS systems. A conventional FL controller is computationally intensive, especially with a higher number of input variables. This paper proposes the application of Hierarchical Fuzzy Logic control to a parallel-connected multi-modular inverter system to reduce the computational burden on the processor for a given switching frequency. Simulated results in the MATLAB environment and experimental verification using a Texas TMS320F2812 DSP are included to demonstrate the feasibility of the proposed control scheme.
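
    The computational argument can be quantified with a simple rule-count comparison: a flat fuzzy controller with n inputs and m membership functions per input evaluates m^n rules, while a hierarchy of two-input sub-controllers evaluates roughly (n - 1) * m^2. The two-inputs-per-layer structure assumed below is a common hierarchical arrangement, not necessarily the one used in the paper.

      # Rule-count comparison behind the reduced computational burden of a
      # hierarchical fuzzy controller. Assumes two inputs per hierarchical layer.
      def flat_rules(n_inputs, m_mf):
          return m_mf ** n_inputs

      def hierarchical_rules(n_inputs, m_mf):
          return (n_inputs - 1) * m_mf ** 2

      for n in (2, 3, 4, 5):
          print(f"{n} inputs: flat={flat_rules(n, 5):6d}  hierarchical={hierarchical_rules(n, 5):4d}")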

  1. Sensor network based vehicle classification and license plate identification system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frigo, Janette Rose; Brennan, Sean M; Rosten, Edward J

    Typically, for energy efficiency and scalability purposes, sensor networks have been used in the context of environmental and traffic monitoring applications in which operations at the sensor level are not computationally intensive. But increasingly, sensor network applications require data- and compute-intensive sensors such as video cameras and microphones. In this paper, we describe the design and implementation of two such systems: a vehicle classifier based on acoustic signals and a license plate identification system using a camera. The systems are implemented in an energy-efficient manner to the extent possible using commercially available hardware, the Mica motes and the Stargate platform. Our experience in designing these systems leads us to consider an alternative, more flexible, modular, low-power mote architecture that uses a combination of FPGAs, specialized embedded processing units and sensor data acquisition systems.

  2. Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    2008-05-01

    The paper reviews the imminent architectural aspects of Grid Computing for the e-Science community for scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high-speed internet, safe authentication, and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for those scientific and engineering research projects and business/commercial applications which are heavily compute intensive and/or require enormous amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems and the treasure of knowledge available around the globe under the umbrella of knowledge sharing. Thus it makes possible one of the dreams of the global village for the benefit of the e-Science community across the globe.

  3. Online learning in optical tomography: a stochastic approach

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Li, Qin; Liu, Jian-Guo

    2018-07-01

    In this paper we study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming–outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, where the mismatch of computed and measured outgoing data is minimized with the same initial data and the RTE constraint. The memory and computation cost this requires, however, is typically prohibitive, especially in high-dimensional spaces. Smart iterative solvers that use only partial information in each step are therefore called for. The stochastic gradient descent method is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances quickly, and therefore serves the purpose perfectly. We formulate the problem in both its nonlinear and linearized settings, apply the SGD algorithm, and analyze its convergence performance.
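
    The following is a generic SGD loop of the kind described, in which each iteration uses the mismatch of one randomly selected measurement. The forward model here is a toy linear map standing in for the RTE solve, and the step size and problem sizes are arbitrary assumptions.

      # Generic SGD loop that randomly selects one measurement per step, as in the
      # online-learning formulation. The linear map A is a stand-in for the RTE
      # forward model; step size and problem size are arbitrary demo values.
      import numpy as np

      rng = np.random.default_rng(0)
      n_params, n_meas = 50, 400
      A = rng.normal(size=(n_meas, n_params))       # stand-in forward operator
      x_true = rng.normal(size=n_params)
      y = A @ x_true                                # synthetic outgoing measurements

      x = np.zeros(n_params)
      step = 1e-3
      for it in range(20001):
          i = rng.integers(n_meas)                  # pick one incoming-outgoing pair
          residual = A[i] @ x - y[i]                # mismatch for that measurement
          x -= step * residual * A[i]               # gradient of 0.5 * residual**2
          if it % 5000 == 0:
              err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
              print(f"iter {it:5d}: relative error {err:.3f}")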

  4. TU-H-206-04: An Effective Homomorphic Unsharp Mask Filtering Method to Correct Intensity Inhomogeneity in Daily Treatment MR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, D; Gach, H; Li, H

    Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, automatic and robust body segmentation, and imaging field-of-view (FOV) detection methods were developed to compute the multiplicative, slowly varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images could be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required; otherwise, the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) Robust body segmentation on the normalized image gradient map, 2) Robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) An effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and clinical applications in radiotherapy. Authors have received research grants from ViewRay and Varian.
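
    A compact sketch of the HUM idea follows: estimate a slowly varying bias field by Gaussian smoothing restricted to the body mask (normalized/masked smoothing), divide it out, and rescale to preserve the mean intensity. The synthetic image, the circular mask, and the kernel width are assumptions for illustration; this is not the authors' exact implementation.

      # Sketch of homomorphic unsharp mask (HUM) correction: divide the image by a
      # smoothed bias estimate computed only inside the body mask, then restore the
      # mean intensity. Synthetic image, mask, and sigma are illustrative choices.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def hum_correct(image, mask, sigma=30.0):
          m = mask.astype(float)
          # Masked (normalized) smoothing so voxels outside the body do not bleed in.
          smoothed = gaussian_filter(image * m, sigma) / np.maximum(gaussian_filter(m, sigma), 1e-6)
          bias = np.where(mask, smoothed, 1.0)
          corrected = np.where(mask, image / np.maximum(bias, 1e-6), image)
          # Rescale so the mean intensity inside the body is preserved.
          return corrected * (image[mask].mean() / corrected[mask].mean())

      # Toy example: a uniform body modulated by a smooth multiplicative bias field.
      yy, xx = np.mgrid[0:128, 0:128]
      body = (xx - 64) ** 2 + (yy - 64) ** 2 < 50 ** 2
      bias_field = 1.0 + 0.4 * (xx / 128.0)
      img = np.where(body, 100.0 * bias_field, 0.0)
      out = hum_correct(img, body)
      print("in-body intensity spread before/after:", img[body].std(), out[body].std())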

  5. A parallel-processing approach to computing for the geographic sciences

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Haga, Jim; Maddox, Brian; Feller, Mark

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting research into various areas, such as advanced computer architecture, algorithms to meet the processing needs for real-time image and data processing, the creation of custom datasets from seamless source data, rapid turn-around of products for emergency response, and support for computationally intense spatial and temporal modeling.

  6. Fan noise prediction assessment

    NASA Technical Reports Server (NTRS)

    Bent, Paul H.

    1995-01-01

    This report is an evaluation of two techniques for predicting the fan noise radiation from engine nacelles. The first is a relatively computationally intensive finite element technique. The code is named ARC, an abbreviation of Acoustic Radiation Code, and was developed by Eversman. This is actually a suite of software that first generates a grid around the nacelle, then solves for the potential flowfield, and finally solves the acoustic radiation problem. The second approach is an analytical technique requiring minimal computational effort. This is termed the cutoff ratio technique and was developed by Rice. Details of the duct geometry, such as the hub-to-tip ratio and the Mach number of the flow in the duct, as well as the modal content of the duct noise, are required for proper prediction.

  7. 27ps DFT Molecular Dynamics Simulation of a-maltose: A Reduced Basis Set Study.

    USDA-ARS?s Scientific Manuscript database

    DFT molecular dynamics simulations are time intensive when carried out on carbohydrates such as alpha-maltose, requiring up to three or more weeks on a fast 16-processor computer to obtain just 5ps of constant energy dynamics. In a recent publication [1] forces for dynamics were generated from B3LY...

  8. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  9. The Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Littman, M. G.

    1986-01-01

    The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.

  10. Efficient Memory Access with NumPy Global Arrays using Local Memory Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.; Berghofer, Dan C.

    This paper discusses work on Global Arrays of data on distributed multi-computer systems and on improving their performance. The tasks were completed at Pacific Northwest National Laboratory in the Science Undergrad Laboratory Internship program in the summer of 2013 for the Data Intensive Computing Group in the Fundamental and Computational Sciences Directorate. This work was done on the Global Arrays Toolkit developed by this group. This toolkit is an interface for programmers to more easily create arrays of data on networks of computers. This is useful because scientific computation is often done on large amounts of data, sometimes so large that individual computers cannot hold all of it. This data is held in array form and can best be processed on supercomputers, which often consist of a network of individual computers doing their computation in parallel. One major challenge for this sort of programming is that operations on arrays spread across multiple computers are very complex, and an interface is needed so that these arrays seem like they are on a single computer. This is what Global Arrays does. The work done here is to use more efficient operations on that data that require less copying. This saves a lot of time because copying data across many different computers is time intensive. The way this challenge was solved is that when the operands of a binary operation reside on the same computer, they are not copied when accessed. When they are on separate computers, only one set is copied when accessed. This saves time because less copying is done, although more data access operations were performed.

  11. Measurement of the effect of physical exercise on the concentration of individuals with ADHD.

    PubMed

    Silva, Alessandro P; Prado, Sueli O S; Scardovelli, Terigi A; Boschi, Silvia R M S; Campos, Luiz C; Frère, Annie F

    2015-01-01

    Attention Deficit Hyperactivity Disorder (ADHD) mainly affects the academic performance of children and adolescents. In addition to bringing physical and mental health benefits, physical activity has been used to prevent and improve ADHD comorbidities; however, its effectiveness has not been quantified. In this study, the effect of physical activity on children's attention was measured using a computer game. Intense physical activity was promoted by a relay race, which requires a 5-min run without a rest interval. The proposed physical stimulus was performed with 28 volunteers: 14 with ADHD (GE-EF) and 14 without ADHD symptoms (GC-EF). After 5 min of rest, these volunteers accessed the computer game to accomplish the tasks in the shortest time possible. The computer game was also accessed by another 28 volunteers: 14 with ADHD (GE) and 14 without these symptoms (GC). The response time to solve the tasks that require attention was recorded. The results of the four groups were analyzed using D'Agostino statistical tests of normality, Kruskal-Wallis analyses of variance and post-hoc Dunn tests. The group of volunteers with ADHD who performed exercise (GE-EF) showed improved performance on the tasks that require attention, with a difference of 30.52% compared with the volunteers with ADHD who did not perform the exercise (GE). The GE-EF group showed performance similar (2.5% difference) to that of the volunteers in the GC group, who have no ADHD symptoms and did not exercise. This study shows that intense exercise can improve the attention of children with ADHD and may help their school performance.

  12. TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Robinson, A; McNutt, T

    2015-06-15

    Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules including DIR and dose computation have been implemented on a graphics processing unit (GPU). All required patient-specific data including the planning CT (pCT) with contours, daily cone-beam CTs, and treatment plan are automatically queried and retrieved from their own databases. To improve the accuracy of DIR between the pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of daily CBCT is then obtained by propagating contours from the pCT. Daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures, and our method produced an overall NMI of 0.663 and an NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction took around one minute (30–50 seconds for registration and 15–25 seconds for dose computation). Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
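
    For reference, the two similarity measures quoted above can be computed for aligned volumes as sketched below. The histogram bin count and the particular NMI normalization used here, (H(A)+H(B))/H(A,B), are assumptions; the authors' exact definitions may differ.

      # NCC and NMI for two aligned image volumes. NMI is taken here as
      # (H(A)+H(B)) / H(A,B); bin count and normalization are assumptions.
      import numpy as np

      def ncc(a, b):
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return float(np.mean(a * b))

      def nmi(a, b, bins=64):
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          def entropy(p):
              p = p[p > 0]
              return -np.sum(p * np.log(p))
          return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

      rng = np.random.default_rng(1)
      fixed = rng.random((64, 64, 32))
      moving = fixed + 0.05 * rng.normal(size=fixed.shape)   # nearly identical volume
      print(f"NCC={ncc(fixed, moving):.3f}  NMI={nmi(fixed, moving):.3f}")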

  13. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351

  14. Lateral mode coupling to reduce the electrical impedance of small elements required for high power ultrasound therapy phased arrays.

    PubMed

    Hynynen, Kullervo; Yin, Jianhua

    2009-03-01

    A method that uses lateral coupling to reduce the electrical impedance of small transducer elements in generating ultrasound waves was tested. Cylindrical, radially poled transducer elements were driven at their length resonance frequency. Computer simulation and experimental studies showed that the electrical impedance of the transducer element could be controlled by the cylinder wall thickness, while the operation frequency was determined by the cylinder length. Acoustic intensity (averaged over the cylinder diameter) over 10 W/cm^2 (a therapeutically relevant intensity) was measured from these elements.

  15. Analysis of deformable image registration accuracy using computational modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the result that the DIR algorithms produce much lower errors in heterogeneous lung regions relative to homogeneous (low intensity gradient) regions suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously.

  16. Protein requirements and muscle mass/strength changes during intensive training in novice bodybuilders.

    PubMed

    Lemon, P W; Tarnopolsky, M A; MacDougall, J D; Atkinson, S A

    1992-08-01

    This randomized double-blind cross-over study assessed protein (PRO) requirements during the early stages of intensive bodybuilding training and determined whether supplemental PRO intake (PROIN) enhanced muscle mass/strength gains. Twelve men [22.4 +/- 2.4 (SD) yr] received an isoenergetic PRO (total PROIN 2.62 g.kg-1.day-1) or carbohydrate (CHO; total PROIN 1.35 g.kg-1.day-1) supplement for 1 mo each during intensive (1.5 h/day, 6 days/wk) weight training. On the basis of 3-day nitrogen balance (NBAL) measurements after 3.5 wk on each treatment (8.9 +/- 4.2 and -3.4 +/- 1.9 g N/day, respectively), the PROIN necessary for zero NBAL (requirement) was 1.4-1.5 g.kg-1.day-1. The recommended intake (requirement + 2 SD) was 1.6-1.7 g.kg-1.day-1. However, strength (voluntary and electrically evoked) and muscle mass [density, creatinine excretion, muscle area (computer axial tomography scan), and biceps N content] gains were not different between diet treatments. These data indicate that, during the early stages of intensive bodybuilding training, PRO needs are approximately 100% greater than current recommendations but that PROIN increases from 1.35 to 2.62 g.kg-1.day-1 do not enhance muscle mass/strength gains, at least during the 1st mo of training. Whether differential gains would occur with longer training remains to be determined.

  17. SALUTE Grid Application using Message-Oriented Middleware

    NASA Astrophysics Data System (ADS)

    Atanassov, E.; Dimitrov, D. Sl.; Gurov, T.

    2009-10-01

    Stochastic ALgorithms for Ultra-fast Transport in sEmiconductors (SALUTE) is a grid application developed for solving various computationally intensive problems which describe ultra-fast carrier transport in semiconductors. SALUTE studies memory and quantum effects during the relaxation process due to electron-phonon interaction in one-band semiconductors or quantum wires. Formally, SALUTE integrates a set of novel Monte Carlo, quasi-Monte Carlo and hybrid algorithms for solving various computationally intensive problems which describe the femtosecond relaxation process of optically excited carriers in one-band semiconductors or quantum wires. In this paper we present an application-specific job submission and reservation management tool named the Job Track Server (JTS). It is developed using Message-Oriented middleware to implement a robust, versatile job submission and tracing mechanism, which can be tailored to application-specific failover and quality of service requirements. Experience from using the JTS for submission of SALUTE jobs is presented.

  18. A world-wide databridge supported by a commercial cloud provider

    NASA Astrophysics Data System (ADS)

    Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio

    2017-10-01

    Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU-intensive applications that are typically used by other volunteer computing projects. While the so-called databridge has already been successfully proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.

  19. Automated Boundary Conditions for Wind Tunnel Simulations

    NASA Technical Reports Server (NTRS)

    Carlson, Jan-Renee

    2018-01-01

    Computational fluid dynamic (CFD) simulations of models tested in wind tunnels require a high level of fidelity and accuracy, particularly for the purposes of CFD validation efforts. Considerable effort is required to ensure both the proper characterization of the physical geometry of the wind tunnel and the recreation of the correct flow conditions inside the wind tunnel. The typical trial-and-error effort used for determining the boundary condition values for a particular tunnel configuration is time and computer-resource intensive. This paper describes a method for calculating and updating the back pressure boundary condition in wind tunnel simulations by using a proportional-integral-derivative controller. The controller methodology and equations are discussed, and simulations using the controller to set a tunnel Mach number in the NASA Langley 14- by 22-Foot Subsonic Tunnel are demonstrated.
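
    The snippet below sketches the idea of a PID loop driving the back-pressure boundary condition toward a target Mach number. The "plant" here is a crude monotone stand-in for the CFD solver response, and the gains, relaxation factor, and pressure-Mach relation are arbitrary assumptions, not the paper's controller settings.

      # Sketch of a PID update for the back-pressure boundary condition driving the
      # simulated tunnel toward a target Mach number. The tunnel_mach() "plant" is a
      # crude stand-in for the CFD response; gains are arbitrary demo values.
      def pid_step(error, state, kp=0.8, ki=0.3, kd=0.1, dt=1.0):
          integral = state["integral"] + error * dt
          derivative = (error - state["prev_error"]) / dt
          state.update(integral=integral, prev_error=error)
          return kp * error + ki * integral + kd * derivative

      def tunnel_mach(back_pressure_ratio):
          # Stand-in monotone response: lower back pressure -> higher Mach number.
          return max(0.0, 1.2 * (1.0 - back_pressure_ratio))

      target_mach = 0.20
      bp = 0.95                                   # back pressure / stagnation pressure
      state = {"integral": 0.0, "prev_error": 0.0}
      for it in range(30):
          mach = tunnel_mach(bp)
          error = target_mach - mach
          bp -= 0.1 * pid_step(error, state)      # lower back pressure to raise Mach
          print(f"iter {it:2d}: back-pressure ratio {bp:.3f}, Mach {mach:.3f}")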

  20. Far field and wavefront characterization of a high-power semiconductor laser for free space optical communications

    NASA Technical Reports Server (NTRS)

    Cornwell, Donald M., Jr.; Saif, Babak N.

    1991-01-01

    The spatial pointing angle and far field beamwidth of a high-power semiconductor laser are characterized as a function of CW power and also as a function of temperature. The time-averaged spatial pointing angle and spatial lobe width were measured under intensity-modulated conditions. The measured pointing deviations are determined to be well within the pointing requirements of the NASA Laser Communications Transceiver (LCT) program. A computer-controlled Mach-Zehnder phase-shifter interferometer is used to characterize the wavefront quality of the laser. The rms phase error over the entire pupil was measured as a function of CW output power. Time-averaged measurements of the wavefront quality are also made under intensity-modulated conditions. The measured rms phase errors are determined to be well within the wavefront quality requirements of the LCT program.

  1. Advancing Cyberinfrastructure to support high resolution water resources modeling

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Ogden, F. L.; Jones, N.; Horsburgh, J. S.

    2012-12-01

    Addressing the problem of how the availability and quality of water resources at large scales are sensitive to climate variability, watershed alterations and management activities requires computational resources that combine data from multiple sources and support integrated modeling. Related cyberinfrastructure challenges include: 1) how can we best structure data and computer models to address this scientific problem through the use of high-performance and data-intensive computing, and 2) how can we do this in a way that discipline scientists without extensive computational and algorithmic knowledge and experience can take advantage of advances in cyberinfrastructure? This presentation will describe a new system called CI-WATER that is being developed to address these challenges and advance high resolution water resources modeling in the Western U.S. We are building on existing tools that enable collaboration to develop model and data interfaces that link integrated system models running within an HPC environment to multiple data sources. Our goal is to enhance the use of computational simulation and data-intensive modeling to better understand water resources. Addressing water resource problems in the Western U.S. requires simulation of natural and engineered systems, as well as representation of legal (water rights) and institutional constraints alongside the representation of physical processes. We are establishing data services to represent the engineered infrastructure and legal and institutional systems in a way that they can be used with high resolution multi-physics watershed modeling at high spatial resolution. These services will enable incorporation of location-specific information on water management infrastructure and systems into the assessment of regional water availability in the face of growing demands, uncertain future meteorological forcings, and existing prior-appropriations water rights. This presentation will discuss the informatics challenges involved with data management and easy-to-use access to high performance computing being tackled in this project.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael; Lethin, Richard

    Programming models and environments play the essential role in high-performance computing of enabling the conception, design, implementation and execution of science and engineering application codes. Programmer productivity is strongly influenced by the effectiveness of our programming models and environments, as is software sustainability, since our codes have lifespans measured in decades. The advent of new computing architectures, increased concurrency, concerns for resilience, and the increasing demands for high-fidelity, multi-physics, multi-scale and data-intensive computations mean that we have new challenges to address as part of our fundamental R&D requirements. Fortunately, we also have new tools and environments that make the design, prototyping and delivery of new programming models easier than ever. The combination of new and challenging requirements and new, powerful toolsets enables significant synergies for the next generation of programming models and environments R&D. This report presents the topics discussed and results from the 2014 DOE Office of Science Advanced Scientific Computing Research (ASCR) Programming Models & Environments Summit, and subsequent discussions among the summit participants and contributors to topics in this report.

  3. A computationally efficient method for incorporating spike waveform information into decoding algorithms.

    PubMed

    Ventura, Valérie; Todorova, Sonia

    2015-05-01

    Spike-based brain-computer interfaces (BCIs) have the potential to restore motor ability to people with paralysis and amputation, and have shown impressive performance in the lab. To transition BCI devices from the lab to the clinic, decoding must proceed automatically and in real time, which prohibits the use of algorithms that are computationally intensive or require manual tweaking. A common choice is to avoid spike sorting and treat the signal on each electrode as if it came from a single neuron, which is fast, easy, and therefore desirable for clinical use. But this approach ignores the kinematic information provided by individual neurons recorded on the same electrode. The contribution of this letter is a linear decoding model that extracts kinematic information from individual neurons without spike-sorting the electrode signals. The method relies on modeling sample averages of waveform features as functions of kinematics, which is automatic and requires minimal data storage and computation. In offline reconstruction of arm trajectories of a nonhuman primate performing reaching tasks, the proposed method performs as well as decoders based on expertly sorted spikes, whether sorted manually or automatically.
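
    A minimal sketch of the underlying idea, fitting a linear map from binned waveform-feature averages to kinematics by least squares, is shown below. The synthetic data, the choice of features, and the bin structure are assumptions; this is not the letter's exact model or training procedure.

      # Minimal sketch: regress kinematics on binned averages of waveform features
      # without spike sorting. Data are synthetic; feature and bin choices are
      # illustrative assumptions, not the letter's model.
      import numpy as np

      rng = np.random.default_rng(2)
      n_bins, n_electrodes, n_features = 2000, 32, 3              # e.g. amplitude, width, energy
      X = rng.normal(size=(n_bins, n_electrodes * n_features))    # feature averages per time bin
      W_true = rng.normal(size=(X.shape[1], 2))                   # maps features -> (vx, vy)
      velocity = X @ W_true + 0.1 * rng.normal(size=(n_bins, 2))

      # Fit the linear decoder on the first half, evaluate on the second half.
      split = n_bins // 2
      W, *_ = np.linalg.lstsq(X[:split], velocity[:split], rcond=None)
      resid = velocity[split:] - X[split:] @ W
      r2 = 1.0 - resid.var() / velocity[split:].var()
      print(f"held-out R^2 of the unsorted-feature decoder: {r2:.3f}")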

  4. Flow visualization of CFD using graphics workstations

    NASA Technical Reports Server (NTRS)

    Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon

    1987-01-01

    High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.

  5. Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance of processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as the decision maker, Jove, while others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove at a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine IBM SP2.

  7. The Galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2016 update

    PubMed Central

    Afgan, Enis; Baker, Dannon; van den Beek, Marius; Blankenberg, Daniel; Bouvier, Dave; Čech, Martin; Chilton, John; Clements, Dave; Coraor, Nate; Eberhard, Carl; Grüning, Björn; Guerler, Aysam; Hillman-Jackson, Jennifer; Von Kuster, Greg; Rasche, Eric; Soranzo, Nicola; Turaga, Nitesh; Taylor, James; Nekrutenko, Anton; Goecks, Jeremy

    2016-01-01

    High-throughput data production technologies, particularly ‘next-generation’ DNA sequencing, have ushered in widespread and disruptive changes to biomedical research. Making sense of the large datasets produced by these technologies requires sophisticated statistical and computational methods, as well as substantial computational power. This has led to an acute crisis in life sciences, as researchers without informatics training attempt to perform computation-dependent analyses. Since 2005, the Galaxy project has worked to address this problem by providing a framework that makes advanced computational tools usable by non-experts. Galaxy seeks to make data-intensive research more accessible, transparent and reproducible by providing a Web-based environment in which users can perform computational analyses and have all of the details automatically tracked for later inspection, publication, or reuse. In this report we highlight recently added features enabling biomedical analyses on a large scale. PMID:27137889

  8. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
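
    For orientation, the following is a tiny, single-machine sketch of the de Bruijn construction at the heart of such assemblers: nodes are (k-1)-mers and each k-mer in a read contributes one edge. It is purely conceptual, with arbitrary toy reads and k, and does not reflect GiGA's Hadoop/Giraph implementation.

      # Conceptual, single-machine de Bruijn graph construction: each k-mer in a
      # read adds an edge between its (k-1)-mer prefix and suffix. Toy data only;
      # this does not reflect GiGA's distributed MapReduce/Giraph implementation.
      from collections import defaultdict

      def de_bruijn(reads, k=5):
          graph = defaultdict(list)          # (k-1)-mer -> list of successor (k-1)-mers
          for read in reads:
              for i in range(len(read) - k + 1):
                  kmer = read[i:i + k]
                  graph[kmer[:-1]].append(kmer[1:])
          return graph

      reads = ["ACGTACGTGACG", "CGTGACGTTACG"]   # toy reads
      g = de_bruijn(reads, k=5)
      for node, succs in sorted(g.items()):
          print(node, "->", ",".join(succs))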

  9. Quantum Interference Effects in Resonant Raman Spectroscopy of Single- and Triple-Layer MoTe2 from First-Principles

    NASA Astrophysics Data System (ADS)

    Miranda, Henrique P. C.; Reichardt, Sven; Froehlicher, Guillaume; Molina-Sánchez, Alejandro; Berciaud, Stéphane; Wirtz, Ludger

    2017-04-01

    We present a combined experimental and theoretical study of resonant Raman spectroscopy in single- and triple-layer MoTe2. Raman intensities are computed entirely from first principles by calculating finite differences of the dielectric susceptibility. In our analysis, we investigate the role of quantum interference effects and the electron-phonon coupling. With this method, we explain the experimentally observed intensity inversion of the A'1 vibrational modes in triple-layer MoTe2 with increasing laser photon energy. Finally, we show that a quantitative comparison with experimental data requires the proper inclusion of excitonic effects.

  10. Factors influencing hand/eye synchronicity in the computer age.

    PubMed

    Grant, A H

    1992-09-01

    In using a computer, the relation of vision to hand/finger actuated keyboard usage in performing fine motor-coordinated functions is influenced by the physical location, size, and collective placement of the keys. Traditional nonprehensile flat/rectangular keyboard applications usually require a high and nearly constant level of visual attention. Biometrically shaped keyboards would allow for prehensile hand-posturing, thus affording better tactile familiarity with the keys, requiring less intense and less constant level of visual attention to the task, and providing a greater measure of freedom from having to visualize the key(s). Workpace and related physiological changes, aging, onset of monocularization (intermittent lapsing of binocularity for near vision) that accompanies presbyopia, tool colors, and background contrast are factors affecting constancy of visual attention to task performance. Capitas extension, excessive excyclotorsion, and repetitive strain injuries (such as carpal tunnel syndrome) are common and debilitating concomitants to computer usage. These problems can be remedied by improved keyboard design. The salutary role of mnemonics in minimizing visual dependency is discussed.

  11. Power and Performance Trade-offs for Space Time Adaptive Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gawande, Nitin A.; Manzano Franco, Joseph B.; Tumeo, Antonino

    Computational efficiency – performance relative to power or energy – is one of the most important concerns when designing RADAR processing systems. This paper analyzes power and performance trade-offs for a typical Space Time Adaptive Processing (STAP) application. We study STAP implementations for CUDA and OpenMP on two computationally efficient architectures, Intel Haswell Core I7-4770TE and NVIDIA Kayla with a GK208 GPU. We analyze the power and performance of STAP's computationally intensive kernels across the two hardware testbeds. We also show the impact and trade-offs of GPU optimization techniques. We show that data parallelism can be exploited for efficient implementation on the Haswell CPU architecture. The GPU architecture is able to process large data sets without an increase in power requirement. The use of shared memory has a significant impact on the power requirement for the GPU. A balance between the use of shared memory and main memory access leads to an improved performance in a typical STAP application.

  12. The ReaxFF reactive force-field: Development, applications, and future directions

    DOE PAGES

    Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...

    2016-03-04

    The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.

  13. A flexible tool for diagnosing water, energy, and entropy budgets in climate models

    NASA Astrophysics Data System (ADS)

    Lembo, Valerio; Lucarini, Valerio

    2017-04-01

    We have developed a new, flexible software tool for studying the global energy budget, the hydrological cycle, and the material entropy production of global climate models. The program receives as input radiative, latent and sensible energy fluxes, with the requirement that the variable names are in agreement with the Climate and Forecast (CF) conventions for the production of NetCDF datasets. Annual mean maps, meridional sections and time series are computed by means of the Climate Data Operators (CDO) collection of command-line operators developed at the Max Planck Institute for Meteorology (MPI-M). If a land-sea mask is provided, the program also computes the required quantities separately on the continents and oceans. Depending on the user's choice, the program also calls the MATLAB software to compute meridional heat transports and the locations and intensities of their peaks in the two hemispheres. We are currently planning to adapt the program in order to be included in the Earth System Model eValuation Tool (ESMValTool) community diagnostics.

  14. Computationally efficient optimization of radiation drives

    NASA Astrophysics Data System (ADS)

    Zimmerman, George; Swift, Damian

    2017-06-01

    For many applications of pulsed radiation, the temporal pulse shape is designed to induce a desired time-history of conditions. This optimization is normally performed using multi-physics simulations of the system, adjusting the shape until the desired response is induced. These simulations may be computationally intensive, and iterative forward optimization is then expensive and slow. In principle, a simulation program could be modified to adjust the radiation drive automatically until the desired instantaneous response is achieved, but this may be impracticable in a complicated multi-physics program. However, the computational time increment is typically much shorter than the time scale of changes in the desired response, so the radiation intensity can be adjusted so that the response tends toward the desired value. This relaxed in-situ optimization method can give an adequate design for a pulse shape in a single forward simulation, giving a typical gain in computational efficiency of tens to thousands. This approach was demonstrated for the design of laser pulse shapes to induce ramp loading to high pressure in target assemblies where different components had significantly different mechanical impedance, requiring careful pulse shaping. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
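
    The relaxation idea can be illustrated with a toy in-situ loop: at every (short) time step the drive is nudged so that the simulated response tends toward the desired history. The first-order lag "simulation" and the gain below are stand-ins for a real multi-physics code, chosen only for demonstration.

      # Toy illustration of relaxed in-situ drive optimization: the drive intensity
      # is adjusted each step so the response tends toward the desired history.
      # The first-order lag model and the gain are placeholder surrogate physics.
      def desired(t):
          return 10.0 * t / 5.0 if t < 5.0 else 10.0   # ramp for 5 s, then hold

      dt, tau, gain = 0.01, 0.5, 2.0
      response, drive, t = 0.0, 0.0, 0.0
      history = []
      while t < 8.0:
          drive += gain * (desired(t) - response) * dt  # relax the drive toward the target
          response += (drive - response) * dt / tau     # cheap surrogate for the simulation
          history.append((t, drive, response))
          t += dt

      t_end, d_end, r_end = history[-1]
      print(f"t={t_end:.2f}: drive={d_end:.2f}, response={r_end:.2f}, desired={desired(t_end):.2f}")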

  15. The Poynting-Stokes Tensor And Radiative Transfer In Turbid Media: The Microphysical Paradigm

    NASA Astrophysics Data System (ADS)

    Mishchenko, M. I.

    2010-12-01

    This paper solves the long-standing problem of establishing the fundamental physical link between the radiative transfer theory and macroscopic electromagnetics in the case of elastic scattering by a sparse discrete random medium. The radiative transfer equation (RTE) is derived directly from the macroscopic Maxwell equations by computing theoretically the appropriately defined so-called Poynting-Stokes tensor carrying information on the direction, magnitude, and polarization characteristics of local electromagnetic energy flow. Our derivation from first principles shows that to compute the local Poynting vector averaged over a sufficiently long period of time, one can solve the RTE for the direction-dependent specific intensity column vector and then integrate the direction-weighted specific intensity over all directions. Furthermore, we demonstrate that the specific intensity (or specific intensity column vector) can be measured with a well-collimated radiometer (photopolarimeter), which provides the ultimate physical justification for the use of such instruments in radiation-budget and particle-characterization applications. However, the specific intensity cannot be interpreted in phenomenological terms as signifying the amount of electromagnetic energy transported in a given direction per unit area normal to this direction per unit time per unit solid angle. Also, in the case of a densely packed scattering medium the relation of the measurement with a well-collimated radiometer to the time-averaged local Poynting vector remains uncertain, and the theoretical modeling of this measurement is likely to require a much more complicated approach than solving an RTE.
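
    In a schematic notation of ours (not the paper's), the statement that the time-averaged local Poynting vector follows from integrating the direction-weighted specific intensity over all directions can be written as

    ```latex
    \langle \mathbf{S}(\mathbf{r}) \rangle_t \;=\; \int_{4\pi} \tilde{I}(\mathbf{r},\hat{\mathbf{n}})\, \hat{\mathbf{n}}\, \mathrm{d}\Omega ,
    ```

    where \tilde{I} denotes the (first component of the) specific intensity column vector obtained by solving the RTE.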

  16. Cloud Computing Boosts Business Intelligence of Telecommunication Industry

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Gao, Dan; Deng, Chao; Luo, Zhiguo; Sun, Shaoling

    Business Intelligence has become an attractive topic in today's data-intensive applications, especially in the telecommunication industry. Meanwhile, Cloud Computing, which provides IT supporting infrastructure with excellent scalability, large-scale storage, and high performance, has become an effective way to implement parallel data processing and data mining algorithms. BC-PDM (Big Cloud based Parallel Data Miner) is a new MapReduce-based parallel data mining platform developed by CMRI (China Mobile Research Institute) to meet the urgent requirements of business intelligence in the telecommunication industry. In this paper, the architecture, functionality and performance of BC-PDM are presented, together with the experimental evaluation and case studies of its applications. The evaluation result demonstrates both the usability and the cost-effectiveness of Cloud Computing based Business Intelligence systems in applications of the telecommunication industry.

  17. Data privacy considerations in Intensive Care Grids.

    PubMed

    Luna, Jesus; Dikaiakos, Marios D; Kyprianou, Theodoros; Bilas, Angelos; Marazakis, Manolis

    2008-01-01

    Novel eHealth systems are being designed to provide a citizen-centered health system; however, the ever-growing demand for computing and data resources has required the adoption of Grid technologies. In most cases, such a Health Grid requires not only conveying patients' personal data over public networks, but also storing it on shared resources outside the hospital premises. These features introduce new security concerns, in particular related to privacy. In this paper we survey current legal and technological approaches that have been taken to protect a patient's personal data in eHealth systems, with a particular focus on Intensive Care Grids. A security analysis of the Intensive Care Grid system (ICGrid) shows, however, that these security mechanisms are not enough to provide a comprehensive solution, mainly because data at rest are still vulnerable to attacks from untrusted Storage Elements, where an attacker may access them directly. To cope with these issues, we propose a new privacy-oriented protocol which uses a combination of encryption and fragmentation to improve data assurance while keeping compatibility with current legislation and Health Grid security mechanisms.

  18. Multithreaded transactions in scientific computing: New versions of a computer program for kinematical calculations of RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Brzuszek, Marcin; Daniluk, Andrzej

    2006-11-01

    Writing a concurrent program can be more difficult than writing a sequential program. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which calculate the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable program data to be displayed at run-time.
    New version program summary
    Titles of programs: GROWTHGr, GROWTH06
    Catalogue identifier: ADVL_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Catalogue identifier of previous version: ADVL
    Does the new version supersede the original program: No
    Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
    Programming language used: Object Pascal
    Memory required to execute with typical data: More than 1 MB
    Number of bits in a word: 64 bits
    Number of processors used: 1
    No. of lines in distributed program, including test data, etc.: 20 931
    Number of bytes in distributed program, including test data, etc.: 1 311 268
    Distribution format: tar.gz
    Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1
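
    As a rough illustration of the kinematical picture referred to at the start of this abstract (not the GROWTH code itself), the RHEED intensity can be approximated by a coherent sum over exposed layer fractions. The simple layer model and the out-of-phase scattering condition below are assumptions for illustration only.

    ```python
    import numpy as np

    def kinematical_rheed_intensity(coverages, phase=np.pi):
        """Approximate kinematical RHEED intensity from layer coverages theta_n.
        The exposed fraction of layer n is theta_n - theta_{n+1}; each exposed terrace
        contributes with a phase factor exp(i*n*phase) (phase = pi: out-of-phase condition)."""
        theta = np.concatenate(([1.0], np.asarray(coverages, float), [0.0]))  # substrate fully covered
        exposed = theta[:-1] - theta[1:]                # exposed fraction of each layer
        n = np.arange(len(exposed))
        amplitude = np.sum(exposed * np.exp(1j * n * phase))
        return np.abs(amplitude) ** 2
    ```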

  19. Computational physics of the mind

    NASA Astrophysics Data System (ADS)

    Duch, Włodzisław

    1996-08-01

    In the XIX century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics allows us to simulate complex neural processes, giving a chance to answer not only the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited by the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental obstacles to understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

  20. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event on a fine spatial grid. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase the simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required by direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.
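
    The Weighted Ensemble resampling step referred to above can be sketched generically: simulations ("walkers") carry statistical weights and are binned along a progress coordinate (for a TC, for example, maximum wind speed); walkers in sparsely sampled bins are split, with their weights divided among the copies, so intense but rare states are explored without biasing the weighted distribution. The sketch below shows splitting only (merging of low-weight walkers is omitted) and is an illustration under stated assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def weighted_ensemble_resample(coords, weights, bin_edges, walkers_per_bin=4):
        """Split walkers so each occupied bin holds roughly `walkers_per_bin` copies,
        dividing weights on splitting so the weighted distribution is unchanged."""
        coords = np.asarray(coords, float)
        weights = np.asarray(weights, float)
        new_coords, new_weights = [], []
        bins = np.digitize(coords, bin_edges)
        for b in np.unique(bins):
            idx = np.where(bins == b)[0]
            # replicate walkers (roughly proportional to weight) up to the per-bin target
            reps = np.maximum(1, np.round(walkers_per_bin * weights[idx] / weights[idx].sum()).astype(int))
            for i, r in zip(idx, reps):
                new_coords.extend([coords[i]] * r)
                new_weights.extend([weights[i] / r] * r)   # total weight of walker i is preserved
        return np.array(new_coords), np.array(new_weights)
    ```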

  1. Long live the Data Scientist, but can he/she persist?

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.

    2011-12-01

    In recent years the fourth paradigm of data-intensive science has slowly taken hold as the increased capacity of instruments and an increasing number of instruments (in particular sensor networks) have changed how fundamental research is undertaken. Most modern scientific research is about digital capture of data directly from instruments, processing it by computers, storing the results on computers and only publishing a small fraction of data in hard copy publications. At the same time, the rapid increase in capacity of supercomputers, particularly at petascale, means that far larger data sets can be analysed, and to greater resolution, than previously possible. The new cloud computing paradigm, which allows distributed data, software and compute resources to be linked by seamless workflows, is creating new opportunities in the processing of high volumes of data for an increasingly large number of researchers. However, to take full advantage of these compute resources, data sets for analysis have to be aggregated from multiple sources to create high performance data sets. These new technology developments require that scientists become more skilled in data management and/or have a higher degree of computer literacy. In almost every science discipline there is now an X-informatics branch and a computational X branch (e.g., Geoinformatics and Computational Geoscience): both require a new breed of researcher that has skills in the science fundamentals and also knowledge of some ICT aspects (computer programming, database design and development, data curation, software engineering). People who can operate in both science and ICT are increasingly known as 'data scientists'. Data scientists are a critical element of many large-scale earth and space science informatics projects, particularly those that are tackling current grand challenges at an international level on issues such as climate change, hazard prediction and sustainable development of our natural resources. These projects by their very nature require the integration of multiple digital data sets from multiple sources. Often the preparation of the data for computational analysis can take months and requires painstaking attention to detail to ensure that anomalies identified are real and are not just artefacts of the data preparation and/or the computational analysis. Although data scientists are increasingly vital to successful data-intensive earth and space science projects, unless they are recognised for their capabilities in both the science and the computational domains they are likely to migrate to either a science role or an ICT role as their career advances. Most reward and recognition systems do not recognise those with skills in both; hence, getting trained data scientists to persist beyond one or two projects can be a challenge. Those data scientists who persist in the profession are characteristically committed and enthusiastic people who have the support of their organisations to take on this role. They also tend to be people who share developments and are critical to the success of the open source software movement. However, the fact remains that survival of the data scientist as a species is being threatened unless something is done to recognise their invaluable contributions to the new fourth paradigm of science.

  2. Terascale direct numerical simulations of turbulent combustion using S3D

    NASA Astrophysics Data System (ADS)

    Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.

    2009-01-01

    Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution, and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence, terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating their role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scalable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data, and automation of the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. In particular, performance monitoring is used to identify the performance of key kernels in the DNS code S3D, and especially of memory-intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited, thereby reducing memory bandwidth needs and hence improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histograms. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics.

  3. A Longitudinal Investigation of the Effects of Computer Anxiety on Performance in a Computing-Intensive Environment

    ERIC Educational Resources Information Center

    Buche, Mari W.; Davis, Larry R.; Vician, Chelley

    2007-01-01

    Computers are pervasive in business and education, and it would be easy to assume that all individuals embrace technology. However, evidence shows that roughly 30 to 40 percent of individuals experience some level of computer anxiety. Many academic programs involve computing-intensive courses, but the actual effects of this exposure on computer…

  4. A Rapid Segmentation-Insensitive "Digital Biopsy" Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer.

    PubMed

    Echegaray, Sebastian; Nair, Viswam; Kadoch, Michael; Leung, Ann; Rubin, Daniel; Gevaert, Olivier; Napel, Sandy

    2016-12-01

    Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using the intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.
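
    For reference, the robustness comparison described above relies on an intraclass correlation coefficient computed per feature across the two segmentation methods. The single-measure, two-way Shrout-Fleiss ICC(2,1) sketched below is one plausible choice; the abstract does not specify which ICC variant was used, and the feature arrays in the usage comment are hypothetical.

    ```python
    import numpy as np

    def icc_2_1(x):
        """Two-way random, single-measure ICC(2,1) for an (n_subjects, n_raters) array,
        shown for illustration only (Shrout & Fleiss formulation)."""
        x = np.asarray(x, float)
        n, k = x.shape
        grand = x.mean()
        row_means, col_means = x.mean(axis=1), x.mean(axis=0)
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)                  # between-subject mean square
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)                  # between-rater mean square
        mse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2) / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # e.g. icc_2_1(np.column_stack([feature_reference, feature_digital_biopsy])) > 0.7
    ```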

  5. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    NASA Technical Reports Server (NTRS)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numeric intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithms techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and at one percent of the power required by conventional semiconductor logic. Wave Division Multiplexing optical communications can approach a peak per fiber bandwidth of 1 Tbps and the new Data Vortex network topology employing this technology can connect tens of thousands of ports providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.

  6. Web Service Model for Plasma Simulations with Automatic Post Processing and Generation of Visual Diagnostics*

    NASA Astrophysics Data System (ADS)

    Exby, J.; Busby, R.; Dimitrov, D. A.; Bruhwiler, D.; Cary, J. R.

    2003-10-01

    We present our design and initial implementation of a web service model for running particle-in-cell (PIC) codes remotely from a web browser interface. PIC codes have grown significantly in complexity and now often require parallel execution on multiprocessor computers, which in turn requires sophisticated post-processing and data analysis. A significant amount of time and effort is required for a physicist to develop all the necessary skills, at the expense of actually doing research. Moreover, parameter studies with a computationally intensive code justify the systematic management of results with an efficient way to communicate them among a group of remotely located collaborators. Our initial implementation uses the OOPIC Pro code [1], Linux, Apache, MySQL, Python, and PHP. The Interactive Data Language is used for visualization. [1] D.L. Bruhwiler et al., Phys. Rev. ST-AB 4, 101302 (2001). * This work is supported by DOE grant # DE-FG02-03ER83857 and by Tech-X Corp. ** Also University of Colorado.

  7. Development of a HIPAA-compliant environment for translational research data and analytics.

    PubMed

    Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C

    2014-01-01

    High-performance computing (HPC) centers traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC center can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC center and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, the researcher count has increased from 6 to 58.

  8. Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.

    PubMed

    Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K

    1995-07-01

    The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
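
    Because MIP simply keeps the brightest voxel along each ray, an axis-aligned projection reduces to a single reduction over the volume, which is why it is computationally fast. The sketch below contrasts it with a crude weighted-sum stand-in for volume rendering; this is a generic numpy illustration, not the rendering software discussed above.

    ```python
    import numpy as np

    def axis_aligned_mip(volume, axis=0):
        """Maximum-intensity projection of a 3-D volume along one axis:
        each output pixel is the brightest voxel along its ray."""
        return np.asarray(volume).max(axis=axis)

    def axis_aligned_weighted_projection(volume, weights=None, axis=0):
        """Crude stand-in for volume rendering: a weighted average of all voxels
        along the ray (weights, if given, are per-depth along `axis`)."""
        return np.average(np.asarray(volume, float), axis=axis, weights=weights)
    ```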

  9. Prospective comparison of the usage of conventional film and PACS based computed radiography for portable chest x-ray imaging in a medical intensive care unit

    NASA Astrophysics Data System (ADS)

    Kundel, Harold L.; Seshadri, Sridhar B.; Langlotz, Curtis P.; Lanken, Paul N.; Horii, Steven C.; Polansky, Marcia; Kishore, Sheel; Finegold, Eric; Brikman, Inna; Bozzo, Mary T.; Redfern, Regina O.

    1995-05-01

    The purpose of this study was to compare the efficiency of image delivery, the effectiveness of image information transfer, and the timeliness of clinical actions in a medical intensive care unit (MICU) using conventional screen-film imaging (SF-HC), computed radiography (CR-HC), or a CR-based PACS. When the CR-based PACS was in use, images could be viewed in the MICU on a digital workstation (CR-WS) or in the radiology department as laser-printed hard copy (CR-HC). Data were collected by daily interviews with the house staff, by monitoring computer log-ons and other time-stamped activities, and by observing film viewing times in the radiology department with surveillance cameras. The time at which image information was made available to the MICU physicians was decreased during the CR-PACS period as compared with either the SF-HC or the CR-HC periods, but the image information was not accessed more quickly by the clinical staff. However, the time required to perform image-related clinical actions for pulmonary and pleural problems was decreased when images were viewed on the workstation.

  10. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  11. Shipping Science Worldwide with Open Source Containers

    NASA Astrophysics Data System (ADS)

    Molineaux, J. P.; McLaughlin, B. D.; Pilone, D.; Plofchan, P. G.; Murphy, K. J.

    2014-12-01

    Scientific applications often present difficult web-hosting needs. Their compute- and data-intensive nature, as well as an increasing need for high availability and distribution, combine to create a challenging set of hosting requirements. In the past year, advancements in container-based virtualization and related tooling have offered new lightweight and flexible ways to accommodate diverse applications with all the isolation and portability benefits of traditional virtualization. This session will introduce and demonstrate an open-source, single-interface Platform-as-a-Service (PaaS) that empowers application developers to seamlessly leverage geographically distributed, public and private compute resources to achieve highly available, performant hosting for scientific applications.

  12. SAVLOC, computer program for automatic control and analysis of X-ray fluorescence experiments

    NASA Technical Reports Server (NTRS)

    Leonard, R. F.

    1977-01-01

    A program for a PDP-15 computer is presented which provides for control and analysis of trace element determinations by using X-ray fluorescence. The program simultaneously handles data accumulation for one sample and analysis of data from previous samples. Data accumulation consists of sample changing, timing, and data storage. Analysis requires locating the peaks in X-ray spectra, determining the intensities of the peaks, identifying the origins of the peaks, and determining the areal density of the element responsible for each peak. The program may be run in either a manual (supervised) mode or an automatic (unsupervised) mode.

  13. CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction

    PubMed Central

    Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.

    2012-01-01

    Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensure intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data sets. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
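
    The intensity-correction step described above can be sketched in simplified form: estimate the local (patch) mean and variance of both images and rescale the CBCT intensities so their first two moments match the CT's. The patch size and the box-filter implementation below are assumptions for illustration; the actual DISC algorithm runs this step inside the demons iteration on the GPU.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter  # local (patch) means via box filtering

    def moment_match_cbct(cbct, ct, patch=7, eps=1e-6):
        """Adjust CBCT intensities so local first/second moments match the CT image."""
        cbct = cbct.astype(float)
        ct = ct.astype(float)
        mu_c = uniform_filter(cbct, patch)                       # local CBCT mean
        mu_t = uniform_filter(ct, patch)                         # local CT mean
        var_c = uniform_filter(cbct ** 2, patch) - mu_c ** 2     # local CBCT variance
        var_t = uniform_filter(ct ** 2, patch) - mu_t ** 2       # local CT variance
        scale = np.sqrt(np.maximum(var_t, 0.0) / (np.maximum(var_c, 0.0) + eps))
        return (cbct - mu_c) * scale + mu_t
    ```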

  14. Configuration interaction singles natural orbitals: An orbital basis for an efficient and size intensive multireference description of electronic excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shu, Yinan; Levine, Benjamin G., E-mail: levine@chemistry.msu.edu; Hohenstein, Edward G.

    2015-01-14

    Multireference quantum chemical methods, such as the complete active space self-consistent field (CASSCF) method, have long been the state of the art for computing regions of potential energy surfaces (PESs) where complex, multiconfigurational wavefunctions are required, such as near conical intersections. Herein, we present a computationally efficient alternative to the widely used CASSCF method based on a complete active space configuration interaction (CASCI) expansion built from the state-averaged natural orbitals of configuration interaction singles calculations (CISNOs). This CISNO-CASCI approach is shown to predict vertical excitation energies of molecules with closed-shell ground states similar to those predicted by state-averaged (SA)-CASSCF in many cases and to provide an excellent reference for a perturbative treatment of dynamic electron correlation. Absolute energies computed at the CISNO-CASCI level are found to be variationally superior, on average, to other CASCI methods. Unlike SA-CASSCF, CISNO-CASCI provides vertical excitation energies which are both size intensive and size consistent, thus suggesting that CISNO-CASCI would be preferable to SA-CASSCF for the study of systems with multiple excitable centers. The fact that SA-CASSCF and some other CASCI methods do not provide a size intensive/consistent description of excited states is attributed to changes in the orbitals that occur upon introduction of non-interacting subsystems. Finally, CISNO-CASCI is found to provide a suitable description of the PES surrounding a biradicaloid conical intersection in ethylene.

  15. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    DOE PAGES

    Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...

    2015-02-05

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When nonlinearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model $\eta(\cdot)$. This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. Lastly, we also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
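
    A minimal sketch of the emulator idea, under stated assumptions: a generic Gaussian-process regressor stands in for the statistical response surface, and a random-walk Metropolis sampler with a flat prior stands in for the full Bayesian formulation described in the paper. The function and parameter names are illustrative only.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def calibrate_with_emulator(theta_ensemble, eta_ensemble, y_obs, sigma, n_steps=5000, step=0.05):
        """Fit an emulator to ensemble runs (theta -> eta(theta)), then sample an
        approximate posterior for y = eta(theta) + eps via random-walk Metropolis."""
        gp = GaussianProcessRegressor(normalize_y=True).fit(theta_ensemble, eta_ensemble)

        def log_post(theta):
            pred = gp.predict(theta.reshape(1, -1))[0]          # cheap emulator call, not the physics model
            return -0.5 * np.sum((y_obs - pred) ** 2) / sigma ** 2   # Gaussian likelihood, flat prior assumed

        theta = np.asarray(theta_ensemble, float).mean(axis=0)
        lp, chain = log_post(theta), []
        for _ in range(n_steps):
            prop = theta + step * np.random.randn(theta.size)
            lp_prop = log_post(prop)
            if np.log(np.random.rand()) < lp_prop - lp:          # Metropolis accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        return np.array(chain)
    ```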

  16. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed the high-performance computing code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure and implemented the data initialization and the data exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing and sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.

  17. Evidence-based ergonomics. A comparison of Japanese and American office layouts.

    PubMed

    Noro, Kageyu; Fujimaki, Goroh; Kishi, Shinsuke

    2003-01-01

    There is a variety of alternatives in office layouts. Yet the theoretical basis and criteria for predicting how well these layouts accommodate employees are poorly understood. The objective of this study was to evaluate criteria for selecting office layouts. Intensive computer workers worked in simulated office layouts in a controlled experimental laboratory. Eye movement measures indicate that knowledge work requires both concentration and interaction. Findings pointed to one layout as providing optimum balance between these 2 requirements. Recommendations for establishing a theoretical basis and design criteria for selecting office layouts based on work style are suggested.

  18. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  19. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is Computed Tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  20. The theory and measurement of noncoherent microwave scattering parameters. [for remote sensing of scenes via radar scatterometry

    NASA Technical Reports Server (NTRS)

    Claassen, J. P.; Fung, A. K.

    1977-01-01

    The radar equation for incoherent scenes is derived and scattering coefficients are introduced in a systematic way to account for the complete interaction between the incident wave and the random scene. Intensity (power) and correlation techniques similar to those for coherent targets are proposed to measure all the scattering parameters. The sensitivity of the intensity technique to various practical realizations of the antenna polarization requirements is evaluated by means of computer-simulated measurements, conducted with a scattering characteristic similar to that of the sea. It was shown that for scenes satisfying reciprocity one must admit three new cross-correlation scattering coefficients in addition to the commonly measured autocorrelation coefficients.

  1. A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.

    PubMed

    Wang, Lujia; Liu, Ming; Meng, Max Q-H

    2017-02-01

    Cloud computing enables users to share computing resources on-demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks since cloud robotic systems have additional constraints such as limited bandwidth and dynamic structure. However, most multirobotic applications with cooperative control adopt this decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed algorithm produces a fast and robust method that is accurate and scalable. It reduces both global communication and unnecessary repeated computation. The proposed method is designed for firm real-time resource retrieval for physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.

  2. Use of declarative statements in creating and maintaining computer-interpretable knowledge bases for guideline-based care.

    PubMed

    Tu, Samson W; Hrabak, Karen M; Campbell, James R; Glasgow, Julie; Nyman, Mark A; McClure, Robert; McClay, James; Abarbanel, Robert; Mansfield, James G; Martins, Susana M; Goldstein, Mary K; Musen, Mark A

    2006-01-01

    Developing computer-interpretable clinical practice guidelines (CPGs) to provide decision support for guideline-based care is an extremely labor-intensive task. In the EON/ATHENA and SAGE projects, we formulated substantial portions of CPGs as computable statements that express declarative relationships between patient conditions and possible interventions. We developed query and expression languages that allow a decision-support system (DSS) to evaluate these statements in specific patient situations. A DSS can use these guideline statements in multiple ways, including: (1) as inputs for determining preferred alternatives in decision-making, and (2) as a way to provide targeted commentaries in the clinical information system. The use of these declarative statements significantly reduces the modeling expertise and effort required to create and maintain computer-interpretable knowledge bases for decision-support purpose. We discuss possible implications for sharing of such knowledge bases.

  3. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., the IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems; and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with Infiniband), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  4. On the Fast Evaluation Method of Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Planetary Atmospheres in Thermal IR and Microwave

    NASA Technical Reports Server (NTRS)

    Ustinov, E. A.

    1999-01-01

    Evaluation of weighting functions in atmospheric remote sensing is usually the most computer-intensive part of the inversion algorithms. We present an analytic approach to computing temperature and mixing ratio weighting functions that is based on our previous results, but in which the resulting expressions use intermediate variables that are generated in the computation of the observable radiances themselves. Upwelling radiances at a given level in the atmosphere and atmospheric transmittances from space to the given level are combined with local values of the total absorption coefficient and its components due to absorption by the atmospheric constituents under study. This makes it possible to evaluate the temperature and mixing ratio weighting functions in parallel with the evaluation of radiances, which substantially decreases the computer time required for evaluating the weighting functions. Implications for the nadir and limb viewing geometries are discussed.

  5. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images show consistency and demonstrate that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
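
    The basis construction can be illustrated with a generic PCA/SVD sketch: flatten each depth's PSF into a row, subtract the mean, and keep the leading singular vectors so that any depth-variant PSF is represented by a handful of coefficients. This is a schematic of the general PCA step under stated assumptions, not the authors' reduced-PSF selection scheme.

    ```python
    import numpy as np

    def psf_pca_basis(psf_stack, n_components=4):
        """psf_stack: (n_depths, n_voxels) array of depth-variant PSFs, one flattened PSF per row.
        Returns the mean PSF, the leading principal-component PSFs, and per-depth coefficients."""
        mean_psf = psf_stack.mean(axis=0)
        centered = psf_stack - mean_psf
        # economy SVD: rows of vt are the orthonormal principal-component PSFs
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        components = vt[:n_components]
        coeffs = centered @ components.T          # coefficients reconstructing each depth's PSF
        return mean_psf, components, coeffs
    ```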

  6. A system for the delivery of programmable, adaptive stimulation intensity envelopes for drop foot correction applications.

    PubMed

    Breen, P P; O'Keeffe, D T; Conway, R; Lyons, G M

    2006-03-01

    We describe the design of an intelligent drop foot stimulator unit for use in conjunction with a commercial neuromuscular electrical nerve stimulation (NMES) unit, the NT2000. The developed micro-controller unit interfaces to a personal computer (PC) and a graphical user interface (GUI) allows the clinician to graphically specify the shape of the stimulation intensity envelope required for a subject undergoing drop foot correction. The developed unit is based on the ADuC812S micro-controller evaluation board from Analog Devices and uses two force sensitive resistor (FSR) based foot-switches to control application of stimulus. The unit has the ability to display to the clinician how the stimulus intensity envelope is being delivered during walking using a data capture capability. The developed system has a built-in algorithm to dynamically adjust the delivery of stimulus to reflect changes both within the gait cycle and from cycle to cycle. Thus, adaptive control of stimulus intensity is achieved.

  7. Gaussian representation of high-intensity focused ultrasound beams.

    PubMed

    Soneson, Joshua E; Myers, Matthew R

    2007-11-01

    A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
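
    In a schematic notation of ours (not the paper's), the structural assumption reads: each harmonic n carries a Gaussian transverse profile whose width scales as the inverse square root of the harmonic number,

    ```latex
    p_n(r,z) \;=\; A_n(z)\,\exp\!\left[-\frac{r^2}{w_n^2(z)}\right],
    \qquad
    w_n(z) \;=\; \frac{w_1(z)}{\sqrt{n}},
    ```

    so that, presumably, only the axial amplitudes A_n(z) remain to be advanced numerically, which is consistent with the speed and the absence of transverse-resolution limits noted above.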

  8. Optimizing physician access to surgical intensive care unit laboratory information through mobile computing.

    PubMed

    Strain, J J; Felciano, R M; Seiver, A; Acuff, R; Fagan, L

    1996-01-01

    Approximately 30 minutes of computer access time are required by surgical residents at Stanford University Medical Center (SUMC) to examine the lab values of all patients on a surgical intensive care unit (ICU) service, a task that must be performed several times a day. To reduce the time accessing this information and simultaneously increase the readability and currency of the data, we have created a mobile, pen-based user interface and software system that delivers lab results to surgeons in the ICU. The ScroungeMaster system, loaded on a portable tablet computer, retrieves lab results for a subset of patients from the central laboratory computer and stores them in a local database cache. The cache can be updated on command; this update takes approximately 2.7 minutes for all ICU patients being followed by the surgeon, and can be performed as a background task while the user continues to access selected lab results. The user interface presents lab results according to physiologic system. Which labs are displayed first is governed by a layout selection algorithm based on previous accesses to the patient's lab information, physician preferences, and the nature of the patient's medical condition. Initial evaluation of the system has shown that physicians prefer the ScroungeMaster interface to that of existing systems at SUMC and are satisfied with the system's performance. We discuss the evolution of ScroungeMaster and make observations on changes to physician work flow with the presence of mobile, pen-based computing in the ICU.

  9. Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.

    PubMed

    Das, Biman; Drake, Eli; Jack, John

    2004-02-01

    Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where, under heavy gain saturation, the total output approaches a constant intensity, although the intensity of any mode fluctuates rapidly about the average intensity. The relations between trivariate cumulants and central moments that were needed for the computation of the trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.

  10. Power monitoring and control for large scale projects: SKA, a case study

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis

    2016-07-01

    Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing very demanding requirements for computation, storage, management and, above all, power. The geographically wide distribution of the SKA and its associated processing requirements in the form of tailored High Performance Computing (HPC) facilities require a greener approach to the Information and Communications Technologies (ICT) adopted for the data processing, to enable operational compliance with potentially strict power budgets. Reducing electricity costs, improving system power monitoring, and managing the generation and use of electricity at the system level are paramount to avoiding future inefficiencies and higher costs and to enabling fulfillment of the Key Science Cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on Green ICT for science and smart power monitoring and control.

  11. Laminography using resonant neutron attenuation for detection of drugs and explosives

    NASA Astrophysics Data System (ADS)

    Loveman, R. A.; Feinstein, R. L.; Bendahan, J.; Gozani, T.; Shea, P.

    1997-02-01

    Resonant neutron attenuation has been shown to be usable for assaying the elements that constitute explosives, cocaine, and heroin. Careful analysis of attenuation measurements allows the presence or absence of explosives to be determined. Simple two-dimensional radiographic techniques only give results for areal density and consequently are limited in their effectiveness. Classical tomographic techniques are both computationally very intensive and place strict requirements on the quality and amount of data acquired. These requirements and computations take time and are likely to be very difficult to perform in real time. Simulation studies described in this article have shown that laminographic image reconstruction can be used effectively with resonant neutron attenuation measurements to interrogate luggage for explosives or drugs. The system design described in this article is capable of pseudo-three-dimensional image reconstruction of all of the elemental densities pertinent to explosive and drug detection.

  12. [The use of learning computers in the general educational system: hygienic aspects].

    PubMed

    Teksheva, L M; El'ksnina, E V; Perminov, M A

    2007-01-01

    The use of learning computers (LCs) in the present-day academic process poses new problems for hygienists and physiologists, such as evaluating the influence of LCs on schoolchildren's health and substantiating and developing ways of composing and presenting material in terms of readability and the regulation of learning regimens. Analysis of the currently available LCs has established the factors contributing to the accelerated development and accumulation of visual and overall fatigue: the brightness characteristics of electronic pages, including both violation of the allowable brightness levels and irregularity of intensity distribution; significantly inadequate type sizes; and a great variety of lettering and coloring. The recent LCs for general education are a visually aggressive medium for schoolchildren, which certainly requires their hygienic evaluation on the basis of specially developed hygienic requirements for LCs.

  13. Development of a probabilistic analysis methodology for structural reliability estimation

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.

    1991-01-01

    A novel probabilistic analysis method for the assessment of structural reliability is presented, which combines fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, the method establishes a quadratic performance function, transforms the quadratic function into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.
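
    As a rough illustration of the final step only (a minimal sketch, not the authors' implementation; the linearized performance function, its coefficients, and the standard-normal inputs below are hypothetical), the failure probability of a linearized limit state g = a0 + sum_i a_i X_i with independent inputs can be obtained by discretely convolving the densities of the terms a_i X_i and integrating the resulting density of g below zero:

      import numpy as np
      from scipy.stats import norm

      # Hypothetical linearized performance function g = a0 + sum_i a_i * X_i,
      # with independent standard-normal inputs X_i standing in for the real ones.
      a0, a = 3.0, np.array([1.0, 0.8, 0.5])

      dx = 0.01
      x = np.arange(-10.0, 10.0 + dx, dx)          # common grid for each term a_i * X_i

      # Density of g is the convolution of the densities of the individual terms.
      pdf, start = np.array([1.0 / dx]), 0.0       # unit point mass at 0 on the grid
      for ai in a:
          term_pdf = norm.pdf(x, scale=abs(ai))    # density of a_i * X_i
          pdf = np.convolve(pdf, term_pdf) * dx    # discrete convolution, scaled by dx
          start += x[0]                            # grid origin shifts by each term's origin

      g_grid = a0 + start + dx * np.arange(pdf.size)
      p_fail = pdf[g_grid <= 0.0].sum() * dx       # P(g <= 0)

      # Cross-check: for normal inputs the exact answer is Phi(-a0 / ||a||).
      print(p_fail, norm.cdf(-a0 / np.linalg.norm(a)))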

  14. Reverse time migration by Krylov subspace reduced order modeling

    NASA Astrophysics Data System (ADS)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main performance costs of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we proposed an algorithm that addresses all of the aforementioned factors. Our proposed method benefits from the Krylov subspace method to compute certain mode shapes of the velocity model, which are used as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.
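
    A minimal sketch of the general idea (the 1-D operator, grid, and basis size below are hypothetical stand-ins, far simpler than a real velocity-model operator): a Krylov-subspace (Lanczos) eigensolver extracts a few low-frequency mode shapes, which then serve as an orthogonal basis onto which the full operator is projected:

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      n = 400                                        # hypothetical 1-D grid size
      h = 10.0                                       # grid spacing (m)
      c = 2000.0 + 500.0 * np.random.default_rng(0).random(n)   # stand-in velocity (m/s)

      # Discrete operator L ~ c * d^2/dx^2 * c (symmetrized), a toy stand-in for the
      # full wave-equation operator built from the velocity model.
      lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
      L = sp.diags(c) @ lap @ sp.diags(c)            # symmetric, negative definite

      # Krylov (Lanczos) solver in shift-invert mode: modes closest to zero frequency.
      k = 30
      A = (-L).tocsc()                               # positive definite operator
      vals, V = eigsh(A, k=k, sigma=0.0, which="LM") # V: n x k orthonormal mode basis

      # Reduced-order operator: project the full operator onto the mode basis.
      L_r = V.T @ (L @ V)                            # k x k instead of n x n
      print(L_r.shape)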

  15. High Resolution Nature Runs and the Big Data Challenge

    NASA Technical Reports Server (NTRS)

    Webster, W. Phillip; Duffy, Daniel Q.

    2015-01-01

    NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The nature runs use the GEOS-5 as an Atmospheric General Circulation Model (AGCM), while the reanalysis uses the GEOS-5 in data assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM runs and one is a downscaled reanalysis using the full DAS. The nature runs will be completed at two surface grid resolutions, 7 and 3 kilometers, with 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data, while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-II, Modern-Era Reanalysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high-speed InfiniBand network, high-performance file systems and object storage, and virtual system environments specific to data-intensive science applications. These technologies are providing a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs.

  16. Stress-intensity factors for small surface and corner cracks in plates

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Atluri, S. N.; Newman, J. C., Jr.

    1988-01-01

    Three-dimensional finite-element and finite-element alternating methods were used to obtain the stress-intensity factors for small surface and corner cracks in plates subjected to remote tension and bending loads. The crack-depth-to-crack-length ratios (a/c) ranged from 0.2 to 1 and the crack-depth-to-plate-thickness ratios (a/t) ranged from 0.05 to 0.2. The performance of the finite-element alternating method was studied on these crack configurations. A study of the computational effort involved in the finite-element alternating method showed that several crack configurations could be analyzed with a single rectangular mesh idealization, whereas the conventional finite-element method requires a different mesh for each configuration. The stress-intensity factors obtained with the finite-element alternating method agreed well (within 5 percent) with those calculated from the finite-element method with singularity elements.

  17. A probabilistic seismic risk assessment procedure for nuclear power plants: (II) Application

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.

    2011-01-01

    This paper presents the procedures and results of intensity- and time-based seismic risk assessments of a sample nuclear power plant (NPP) to demonstrate the risk-assessment methodology proposed in its companion paper. The intensity-based assessments include three sets of sensitivity studies to identify the impact of the following factors on the seismic vulnerability of the sample NPP, namely: (1) the description of fragility curves for primary and secondary components of NPPs, (2) the number of simulations of NPP response required for risk assessment, and (3) the correlation in responses between NPP components. The time-based assessment is performed as a series of intensity-based assessments. The studies illustrate the utility of the response-based fragility curves and the inclusion of the correlation in the responses of NPP components directly in the risk computation. © 2011 Published by Elsevier B.V.
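
    A minimal sketch of how a time-based assessment can be assembled from a series of intensity-based assessments (the fragility parameters and hazard curve below are hypothetical, not values from the paper): the annual frequency of unacceptable performance is the conditional probability at each intensity level weighted by the annual frequency of occurrence of that level:

      import numpy as np
      from scipy.stats import lognorm

      # Hypothetical intensity measure (IM) levels, e.g. spectral acceleration in g.
      im = np.linspace(0.05, 2.0, 40)

      # Hypothetical intensity-based results: P(unacceptable performance | IM),
      # summarized here by a lognormal fragility curve (median 0.9 g, beta 0.4).
      p_up_given_im = lognorm.cdf(im, s=0.4, scale=0.9)

      # Hypothetical seismic hazard curve: mean annual frequency of exceeding IM.
      lam_im = 1e-2 * (im / 0.05) ** -2.2

      # Time-based result: lambda_UP = sum_i P(UP | im_i) * |d lambda / d im|_i * d im.
      d_lam = -np.gradient(lam_im, im)                  # occurrence rate density of IM
      lam_up = np.sum(p_up_given_im * d_lam) * (im[1] - im[0])
      print(f"annual frequency of unacceptable performance ~ {lam_up:.2e}")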

  18. Robust feature extraction for rapid classification of damage in composites

    NASA Astrophysics Data System (ADS)

    Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi

    2009-03-01

    The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require a lot of training examples or are very computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) along with Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage state determination. LDA was used for feature extraction of damage signals from piezoelectric sensors on a composite plate and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that needs to be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
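
    A minimal sketch of the LDA-then-SVM pipeline described above, using generic library components and synthetic data in place of the piezoelectric sensor signals (the feature dimensions, class count, and train/test split are hypothetical, and the paper's binary-tree organization of SVM classifiers is not reproduced):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      # Synthetic stand-in for sensor-signal features: 600 signals, 200 raw features,
      # 4 hypothetical damage states (e.g. impact locations / severities).
      rng = np.random.default_rng(0)
      y = rng.integers(0, 4, size=600)
      X = rng.normal(size=(600, 200)) + y[:, None] * 0.15

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # LDA projects the raw signals onto at most (n_classes - 1) discriminant axes,
      # drastically reducing the dimensionality seen by the SVM.
      lda = LinearDiscriminantAnalysis(n_components=3)
      Z_tr = lda.fit_transform(X_tr, y_tr)
      Z_te = lda.transform(X_te)

      # SVM classifier trained on the compact LDA features.
      clf = SVC(kernel="rbf", C=1.0, gamma="scale")
      clf.fit(Z_tr, y_tr)
      print("test accuracy:", clf.score(Z_te, y_te))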

  19. Computation of the tip vortex flowfield for advanced aircraft propellers

    NASA Technical Reports Server (NTRS)

    Tsai, Tommy M.; Dejong, Frederick J.; Levy, Ralph

    1988-01-01

    The tip vortex flowfield plays a significant role in the performance of advanced aircraft propellers. The flowfield in the tip region is complex, three-dimensional and viscous with large secondary velocities. An analysis is presented using an approximate set of equations which contains the physics required by the tip vortex flowfield, but which does not require the resources of the full Navier-Stokes equations. A computer code was developed to predict the tip vortex flowfield of advanced aircraft propellers. A grid generation package was developed to allow specification of a variety of advanced aircraft propeller shapes. Calculations of the tip vortex generation on an SR3 type blade at high Reynolds numbers were made using this code and a parametric study was performed to show the effect of tip thickness on tip vortex intensity. In addition, calculations of the tip vortex generation on a NACA 0012 type blade were made, including the flowfield downstream of the blade trailing edge. Comparison of flowfield calculations with experimental data from an F4 blade was made. A user's manual was also prepared for the computer code (NASA CR-182178).

  20. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.

  1. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  2. REPORT OF THE SNOWMASS M6 WORKING GROUP ON HIGH INTENSITY PROTON SOURCES.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHOU,W.; WEI,J.

    The M6 working group had more than 40 active participants (listed in Section 4). During the three weeks at Snowmass, there were about 50 presentations, covering a wide range of topics associated with high intensity proton sources. The talks are listed in Section 5. This group also had joint sessions with a number of other working groups, including E1 (Neutrino Factories and Muon Colliders), E5 (Fixed-Target Experiments), M1 (Muon Based Systems), T4 (Particle Sources), T5 (Beam Dynamics), T7 (High Performance Computing) and T9 (Diagnostics). The M6 group performed a survey of the beam parameters of existing and proposed high intensity proton sources, in particular, of the proton drivers. The results are listed in Table 1. These parameters are compared with the requirements of high-energy physics users of secondary beams in Working Groups E1 and E5. According to the consensus reached in the E1 and E5 groups, the U.S. HEP program requires an intense proton source, a 1-4 MW Proton Driver, by the end of this decade.

  3. The intensity dependence of lesion position shift during focused ultrasound surgery.

    PubMed

    Meaney, P M; Cahill, M D; ter Haar, G R

    2000-03-01

    Knowledge of the spatial distribution of intensity loss from an ultrasonic beam is critical for predicting lesion formation in focused ultrasound (US) surgery (FUS). To date, most models have used linear propagation models to predict intensity profiles required to compute the temporally varying temperature distributions used to compute thermal dose contours. These are used to predict the extent of thermal damage. However, these simulations fail to describe adequately the abnormal lesion formation behaviour observed during ex vivo experiments in cases for which the transducer drive levels are varied over a wide range. In such experiments, the extent of thermal damage has been observed to move significantly closer to the transducer with increased transducer drive levels than would be predicted using linear-propagation models. The first set of simulations described herein use the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear propagation model with the parabolic approximation for highly focused US waves to demonstrate that both the peak intensity and the lesion positions do, indeed, move closer to the transducer. This illustrates that, for accurate modelling of heating during FUS, nonlinear effects should be considered. Additionally, a first order approximation has been employed that attempts to account for the abnormal heat deposition distributions that accompany high transducer drive level FUS exposures where cavitation and boiling may be present. The results of these simulations are presented. It is suggested that this type of approach may be a useful tool in understanding thermal damage mechanisms.

  4. Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2016-12-01

    Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm with an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
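
    A minimal, generic illustration of why an L1-norm fit is more robust to outliers (such as unresolved ambiguity jumps) than an L2-norm fit; the observation model, the injected outliers, and the iteratively reweighted least-squares approximation below are hypothetical stand-ins, not the c5++ implementation:

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical linear observation model: delay residuals vs. a clock offset
      # and rate, contaminated by a few large jumps (stand-ins for ambiguities).
      t = np.linspace(0.0, 1.0, 60)
      A = np.column_stack([np.ones_like(t), t])
      x_true = np.array([0.5, -2.0])
      y = A @ x_true + 0.05 * rng.normal(size=t.size)
      y[::12] += 8.0                                       # inject outliers

      # L2-norm solution: ordinary least squares (sensitive to the outliers).
      x_l2, *_ = np.linalg.lstsq(A, y, rcond=None)

      # L1-norm solution approximated by iteratively reweighted least squares.
      x_l1 = x_l2.copy()
      for _ in range(50):
          w = 1.0 / np.maximum(np.abs(y - A @ x_l1), 1e-6) # IRLS weights ~ 1/|residual|
          W = np.diag(w)
          x_l1 = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

      print("true:", x_true, " L2:", x_l2.round(3), " L1:", x_l1.round(3))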

  5. Design Requirements for Communication-Intensive Interactive Applications

    NASA Astrophysics Data System (ADS)

    Bolchini, Davide; Garzotto, Franca; Paolini, Paolo

    Online interactive applications call for new requirements paradigms to capture the growing complexity of computer-mediated communication. Crafting successful interactive applications (such as websites and multimedia) involves modeling the requirements for the user experience, including those leading to content design, usable information architecture and interaction, in profound coordination with the communication goals of all stakeholders involved, ranging from persuasion to social engagement, to call for action. To face this grand challenge, we propose a methodology for modeling communication requirements and provide a set of operational conceptual tools to be used in complex projects with multiple stakeholders. Through examples from real-life projects and lessons-learned from direct experience, we draw on the concepts of brand, value, communication goals, information and persuasion requirements to systematically guide analysts to master the multifaceted connections of these elements as drivers to inform successful communication designs.

  6. Diffraction Correlation to Reconstruct Highly Strained Particles

    NASA Astrophysics Data System (ADS)

    Brown, Douglas; Harder, Ross; Clark, Jesse; Kim, J. W.; Kiefer, Boris; Fullerton, Eric; Shpyrko, Oleg; Fohtung, Edwin

    2015-03-01

    Through the use of coherent x-ray diffraction a three-dimensional diffraction pattern of a highly strained nano-crystal can be recorded in reciprocal space by a detector. Only the intensities are recorded, resulting in a loss of the complex phase. The recorded diffraction pattern therefore requires computational processing to reconstruct the density and complex distribution of the diffracted nano-crystal. For highly strained crystals, standard methods using HIO and ER algorithms are no longer sufficient to reconstruct the diffraction pattern. Our solution is to correlate the symmetry in reciprocal space to generate an a priori shape constraint to guide the computational reconstruction of the diffraction pattern. This approach has improved the ability to accurately reconstruct highly strained nano-crystals.
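
    For context, a minimal sketch of the standard ER/HIO iterations mentioned above on a synthetic 2-D object (the object, support, feedback parameter, and iteration schedule are hypothetical; the paper's symmetry-correlation shape constraint is not reproduced):

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic complex object inside a known support (stand-in for a nano-crystal).
      n = 64
      support = np.zeros((n, n), dtype=bool)
      support[24:40, 20:44] = True
      obj = (rng.random((n, n)) * np.exp(1j * rng.random((n, n)))) * support

      measured_mag = np.abs(np.fft.fft2(obj))        # detector records intensities only

      def project_fourier(g):
          """Replace the Fourier magnitude with the measured one, keep the phase."""
          G = np.fft.fft2(g)
          return np.fft.ifft2(measured_mag * np.exp(1j * np.angle(G)))

      # Start from the measured magnitudes with random phases.
      g = np.fft.ifft2(measured_mag * np.exp(1j * 2 * np.pi * rng.random((n, n))))
      beta = 0.9
      for it in range(300):
          gp = project_fourier(g)
          if it % 20 < 15:                           # HIO steps: feedback outside support
              g = np.where(support, gp, g - beta * gp)
          else:                                      # ER steps: zero outside support
              g = np.where(support, gp, 0.0)

      err = np.linalg.norm(np.abs(np.fft.fft2(g)) - measured_mag) / np.linalg.norm(measured_mag)
      print(f"relative Fourier-magnitude error after 300 iterations: {err:.3f}")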

  7. Window-based method for approximating the Hausdorff in three-dimensional range imagery

    DOEpatents

    Koch, Mark W [Albuquerque, NM

    2009-06-02

    One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
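
    For context, a minimal sketch of the exact (non-windowed) Hausdorff distance between two 3-D point sets using a standard library routine; the patented window-based approximation for range imagery is not reproduced here, and the point clouds are hypothetical:

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      rng = np.random.default_rng(3)

      # Hypothetical 3-D point clouds: a template and a probe (template + noise + shift).
      template = rng.normal(size=(500, 3))
      probe = template + 0.01 * rng.normal(size=template.shape) + np.array([0.05, 0.0, 0.0])

      # Symmetric Hausdorff distance: maximum of the two directed distances.
      d_ab = directed_hausdorff(template, probe)[0]
      d_ba = directed_hausdorff(probe, template)[0]
      hausdorff = max(d_ab, d_ba)
      print(f"Hausdorff distance between template and probe: {hausdorff:.4f}")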

  8. Semi-empirical quantum evaluation of peptide - MHC class II binding

    NASA Astrophysics Data System (ADS)

    González, Ronald; Suárez, Carlos F.; Bohórquez, Hugo J.; Patarroyo, Manuel A.; Patarroyo, Manuel E.

    2017-01-01

    Peptide presentation by the major histocompatibility complex (MHC) is a key process for triggering a specific immune response. Studying peptide-MHC (pMHC) binding from a structural-based approach has potential for reducing the costs of investigation into vaccine development. This study involved using two semi-empirical quantum chemistry methods (PM7 and FMO-DFTB) for computing the binding energies of peptides bonded to HLA-DR1 and HLA-DR2. We found that key stabilising water molecules involved in the peptide binding mechanism were required for finding high correlation with IC50 experimental values. Our proposal is computationally non-intensive, and is a reliable alternative for studying pMHC binding interactions.

  9. Center for Technology for Advanced Scientific Componet Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Govindaraju, Madhusudhan

    Advanced Scientific Computing Research, Computer Science, FY 2010 Report. Center for Technology for Advanced Scientific Component Software: Distributed CCA. State University of New York, Binghamton, NY, 13902.

    Summary: The overall objective of Binghamton's involvement is to work on enhancements of the CCA environment, motivated by the applications and research initiatives discussed in the proposal. This year we are re-focusing our design and development efforts on proof-of-concept implementations that have the potential to significantly impact scientific components. We worked on developing parallel implementations for non-hydrostatic code and on a model coupling interface for biogeochemical computations coded in MATLAB. We also worked on the design and implementation of modules that will be required for the emerging MapReduce model to be effective for scientific applications. Finally, we focused on optimizing the processing of scientific datasets on multi-core processors.

    Research Details: We worked on the following research projects, which we are applying to CCA-based scientific applications.

    1. Non-Hydrostatic Hydrodynamics: Non-hydrostatic hydrodynamics are significantly more accurate at modeling internal waves that may be important in lake ecosystems. Non-hydrostatic codes, however, are significantly more computationally expensive, often prohibitively so. We have worked with Chin Wu at the University of Wisconsin to parallelize non-hydrostatic code and have obtained a maximum speedup of about 26 times. Although this is significant progress, we hope to improve the performance further, such that it becomes a practical alternative to hydrostatic codes.

    2. Model Coupling for Water-Based Ecosystems: Answering pressing questions about water resources requires that physical models (hydrodynamics) be coupled with biological and chemical models. Most hydrodynamics codes are written in Fortran, however, while most ecologists work in MATLAB. This disconnect creates a great barrier. To address this, we are working on a model coupling interface that will allow biogeochemical computations written in MATLAB to couple with Fortran codes. This will greatly improve the productivity of ecosystem scientists.

    3. Low-Overhead and Elastic MapReduce Implementation Optimized for Memory- and CPU-Intensive Applications: Since its inception, MapReduce has frequently been associated with Hadoop and large-scale datasets. Its deployment at Amazon in the cloud, and its applications at Yahoo! for large-scale distributed document indexing and database building, among other tasks, have thrust MapReduce to the forefront of the data processing application domain. The applicability of the paradigm, however, extends far beyond its use with data-intensive applications and disk-based systems, and can also be brought to bear in processing small but CPU-intensive distributed applications. MapReduce, however, carries its own burdens. Through experiments using Hadoop in the context of diverse applications, we uncovered latencies and delay conditions potentially inhibiting the expected performance of a parallel execution in CPU-intensive applications. Furthermore, as it currently stands, MapReduce is favored for data-centric applications and as such tends to be applied solely to disk-based applications. The paradigm falls short in bringing its novelty to diskless systems dedicated to in-memory applications and to compute-intensive programs processing much smaller data but requiring intensive computations. In this project, we focused both on the performance of processing large-scale hierarchical data in distributed scientific applications and on the processing of smaller but demanding input sizes primarily used in diskless, memory-resident I/O systems. We designed LEMO-MR [1], a low-overhead, elastic, on-demand fault-tolerant and optimized implementation of MapReduce, configurable for in-memory applications and suitable for both on-disk and in-memory workloads. We conducted experiments to identify not only the necessary components of this model, but also the trade-offs and factors to be considered. We have initial results showing the efficacy of our implementation in terms of the potential speedup that can be achieved for representative data sets used by cloud applications, and we have quantified the performance gains exhibited by our MapReduce implementation over Apache Hadoop in a compute-intensive environment.

    4. Cache Performance Optimization for Processing XML- and HDF-based Application Data on Multi-core Processors: It is important to design and develop scientific middleware libraries to harness the opportunities presented by emerging multi-core processors. Implementations of scientific middleware and applications that do not adapt to the programming paradigm when executing on emerging processors can severely impact overall performance. In this project, we focused on the utilization of the L2 cache, which is a critical shared resource on chip multiprocessors (CMP). The access pattern of the shared L2 cache, which depends on how the application schedules and assigns processing work to each thread, can either enhance or hurt the ability to hide memory latency on a multi-core processor. Therefore, while processing scientific datasets such as HDF5, it is essential to conduct fine-grained analysis of cache utilization to inform scheduling decisions in multi-threaded programming. Using the TAU toolkit for performance feedback from dual- and quad-core machines, we conducted performance analysis and made recommendations on how processing threads can be scheduled on multi-core nodes to enhance the performance of a class of scientific applications that requires processing of HDF5 data. In particular, we quantified the gains associated with the adaptations we have made to the Cache-Affinity and Balanced-Set scheduling algorithms to improve L2 cache performance, and hence the overall application execution time [2].

    References: 1. Zacharia Fadika, Madhusudhan Govindaraju, "MapReduce Implementation for Memory-Based and Processing Intensive Applications", 2nd IEEE International Conference on Cloud Computing Technology and Science, Indianapolis, USA, Nov 30 - Dec 3, 2010. 2. Rajdeep Bhowmik, Madhusudhan Govindaraju, "Cache Performance Optimization for Processing XML-based Application Data on Multi-core Processors", Proceedings of the 10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 17-20, 2010, Melbourne, Victoria, Australia.

    Contact Information: Madhusudhan Govindaraju, Binghamton University, State University of New York (SUNY), mgovinda@cs.binghamton.edu, Phone: 607-777-4904.

  10. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. With optical-network-based distributed computing systems extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied with respect to application failure probability. In an optical grid, when an application described by a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability while improving network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
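
    A minimal sketch of the kind of task-based failure-probability bookkeeping described above (the DAG, the per-node and per-lightpath failure probabilities, and the independence assumption are hypothetical illustrations, not the paper's model):

      # Hypothetical DAG application: each task runs on a compute node and receives
      # its inputs over lightpaths; assuming independent failures, the application
      # fails if any task's node or any of its incoming lightpaths fails.
      tasks = {
          "t1": {"node_fail": 0.010, "link_fail": []},
          "t2": {"node_fail": 0.008, "link_fail": [0.005]},          # depends on t1
          "t3": {"node_fail": 0.012, "link_fail": [0.005]},          # depends on t1
          "t4": {"node_fail": 0.009, "link_fail": [0.004, 0.006]},   # depends on t2, t3
      }

      def application_failure_probability(tasks):
          p_success = 1.0
          for t in tasks.values():
              p_task_ok = 1.0 - t["node_fail"]
              for p_link in t["link_fail"]:
                  p_task_ok *= 1.0 - p_link
              p_success *= p_task_ok
          return 1.0 - p_success

      print(f"application failure probability: {application_failure_probability(tasks):.4f}")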

  11. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are certain not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.
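
    A minimal sketch of the underlying idea on a single-machine infinite-bus system (the parameters, the explicit-Euler integrator, and the energy function are textbook stand-ins, not the authors' 17-generator setup): for a lossless swing equation the transient energy is conserved along the exact trajectory, so its drift along a numerical trajectory can serve as a measure of numerical error:

      import numpy as np

      # Hypothetical single-machine infinite-bus system (per-unit).
      M, Pm, Pe_max = 0.1, 0.8, 1.5
      delta_s = np.arcsin(Pm / Pe_max)          # stable equilibrium angle

      def energy(delta, omega):
          """Transient energy: kinetic + potential (conserved for the lossless system)."""
          return 0.5 * M * omega**2 - Pm * (delta - delta_s) - Pe_max * (np.cos(delta) - np.cos(delta_s))

      def simulate(dt, steps, delta0, omega0):
          """Explicit Euler integration of the swing equation; returns the energy drift."""
          delta, omega = delta0, omega0
          e0 = energy(delta, omega)
          for _ in range(steps):
              d_delta = omega
              d_omega = (Pm - Pe_max * np.sin(delta)) / M
              delta += dt * d_delta
              omega += dt * d_omega
          return abs(energy(delta, omega) - e0)

      # Larger energy drift indicates a less accurate screening integration.
      for dt in (0.01, 0.005, 0.001):
          print(f"dt = {dt:5.3f}  energy drift = {simulate(dt, int(1.0 / dt), delta_s + 0.5, 0.0):.2e}")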

  12. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  13. Linear Array Ambient Noise Adjoint Tomography Reveals Intense Crust-Mantle Interactions in North China Craton

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Yao, Huajian; Liu, Qinya; Zhang, Ping; Yuan, Yanhua O.; Feng, Jikun; Fang, Lihua

    2018-01-01

    We present a 2-D ambient noise adjoint tomography technique for a linear array with a significant reduction in computational cost and show its application to an array in North China. We first convert the observed data for 3-D media, i.e., surface-wave empirical Green's functions (EGFs), to reconstructed EGFs (REGFs) for 2-D media using a 3-D/2-D transformation scheme. Different from the conventional steps of measuring phase dispersion, this technique refines 2-D shear wave speeds along the profile directly from the REGFs. With an initial model based on traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime delays between the REGFs and synthetic Green's functions calculated by the spectral-element method. The multitaper traveltime difference measurement is applied in four period bands: 20-35 s, 15-30 s, 10-20 s, and 6-15 s. The recovered model shows detailed crustal structures including pronounced low-velocity anomalies in the lower crust and a gradual crust-mantle transition zone beneath the northern Trans-North China Orogen, which suggest possible intense thermo-chemical interactions between mantle-derived upwelling melts and the lower crust, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of this region. To our knowledge, it is the first time that ambient noise adjoint tomography has been implemented for a 2-D medium. Compared with the intensive computational cost and storage requirements of 3-D adjoint tomography, this method offers a computationally efficient and inexpensive alternative for imaging fine-scale crustal structures beneath linear arrays.

  14. Low-Light Image Enhancement Using Adaptive Digital Pixel Binning

    PubMed Central

    Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki

    2015-01-01

    This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
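
    A heavily simplified sketch of the brightness-adaptive aspect only (the blending rule, window size, and gain are hypothetical; the paper's context, noise-level, and anti-saturation terms and its two-line-memory hardware implementation are not reproduced):

      import numpy as np
      from scipy import ndimage

      def adaptive_binning(image, max_gain=4.0):
          """Blend each pixel with its local 2x2 average: more binning in darker areas."""
          img = image.astype(np.float64)
          binned = ndimage.uniform_filter(img, size=2)     # local average (digital binning)
          # Darker regions get a weight closer to 1 (use the binned, less noisy value);
          # brighter regions keep the original pixel to preserve resolution.
          weight = np.clip(1.0 - img / 255.0, 0.0, 1.0)
          enhanced = max_gain * ((1.0 - weight) * img + weight * binned)
          return np.clip(enhanced, 0.0, 255.0)             # simple saturation clip

      # Synthetic low-light frame: dark scene with Poisson-like noise.
      rng = np.random.default_rng(5)
      frame = rng.poisson(lam=12.0, size=(120, 160)).astype(np.float64)
      print(adaptive_binning(frame).mean())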

  15. AGARD Flight Test Instrumentation Series. Volume 19. Digital Signal Conditioning for Flight Test. (Le Traitement du Signal Numerique pour les Essais n Vol)

    DTIC Science & Technology

    1991-06-01

    intensive systems, including the use of onboard digital computers. Topics include: measurements that are digital in origin, sampling, encoding, transmitting... Individuals charged with designing aircraft measuring systems to become better acquainted with new solutions to their requirements. This volume is... concerned with aircraft measuring systems as related to flight test and flight research. Measurements that are digital in origin or that must be

  16. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  17. Combining Image Processing with Signal Processing to Improve Transmitter Geolocation Estimation

    DTIC Science & Technology

    2014-03-27

    transmitter by searching a grid of possible transmitter locations within the image region. At each evaluated grid point, theoretical TDOA values are computed... requires converting the image to a grayscale intensity image. This allows efficient manipulation of data and ease of comparison among pixel values. The... cluster of redundant y values along the top edge of an ideal rectangle. The same is true for the bottom edge, as well as for the x values along the
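
    A minimal sketch of the grid-search step mentioned in the excerpt above (the sensor geometry, grid extent, and noiseless TDOA model are hypothetical; the image-processing side of the method is not reproduced):

      import numpy as np

      C = 3.0e8                                          # propagation speed (m/s)
      sensors = np.array([[0.0, 0.0], [10e3, 0.0], [0.0, 10e3], [10e3, 10e3]])
      tx_true = np.array([6.2e3, 3.7e3])                 # hypothetical transmitter

      def tdoas(p):
          """Theoretical TDOAs of sensors 1..N-1 relative to sensor 0."""
          ranges = np.linalg.norm(sensors - p, axis=1)
          return (ranges[1:] - ranges[0]) / C

      measured = tdoas(tx_true)                          # stand-in for measured TDOAs

      # Evaluate a grid of candidate locations and keep the best-fitting point.
      xs = np.linspace(0.0, 10e3, 201)
      ys = np.linspace(0.0, 10e3, 201)
      best, best_err = None, np.inf
      for x in xs:
          for y in ys:
              err = np.sum((tdoas(np.array([x, y])) - measured) ** 2)
              if err < best_err:
                  best, best_err = (x, y), err

      print("estimated transmitter location:", best)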

  18. Computation of Material Demand in the Risk Assessment and Mitigation Framework for Strategic Materials (RAMF-SM) Process

    DTIC Science & Technology

    2015-08-01

    Congress concerning requirements for the National Defense Stockpile (NDS) of strategic and critical non-fuel materials. RAMF-SM, which was... critical non-fuel materials. The NDS was established in the World War II era and has been managed by the Department of Defense (DOD) since 1988. By... Department of the Interior. An alternative algorithm is used for materials with intensive defense demands.

  19. Modeling of shock wave propagation in large amplitude ultrasound.

    PubMed

    Pinton, Gianmarco F; Trahey, Gregg E

    2008-01-01

    The Rankine-Hugoniot relation for shock wave propagation describes the shock speed of a nonlinear wave. This paper investigates time-domain numerical methods that solve the nonlinear parabolic wave equation, or the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and the conditions they require to satisfy the Rankine-Hugoniot relation. Two numerical methods commonly used in hyperbolic conservation laws are adapted to solve the KZK equation: Godunov's method and the monotonic upwind scheme for conservation laws (MUSCL). It is shown that they satisfy the Rankine-Hugoniot relation regardless of attenuation. These two methods are compared with the current implicit solution based method. When the attenuation is small, such as in water, the current method requires a degree of grid refinement that is computationally impractical. All three numerical methods are compared in simulations for lithotripters and high intensity focused ultrasound (HIFU) where the attenuation is small compared to the nonlinearity because much of the propagation occurs in water. The simulations are performed on grid sizes that are consistent with present-day computational resources but are not sufficiently refined for the current method to satisfy the Rankine-Hugoniot condition. It is shown that satisfying the Rankine-Hugoniot conditions has a significant impact on metrics relevant to lithotripsy (such as peak pressures) and HIFU (intensity). Because the Godunov and MUSCL schemes satisfy the Rankine-Hugoniot conditions on coarse grids, they are particularly advantageous for three-dimensional simulations.
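
    As a self-contained illustration of the shock-capturing idea discussed above, here is a minimal sketch of a first-order Godunov scheme for the inviscid Burgers equation, a standard model problem for the nonlinear term (this is not the authors' KZK solver, and the grid and initial condition are hypothetical). The computed shock travels at the speed dictated by the Rankine-Hugoniot relation even on a coarse grid:

      import numpy as np

      # Inviscid Burgers equation u_t + (u^2/2)_x = 0 with a Riemann initial condition:
      # u_L = 1, u_R = 0, whose exact shock speed is s = (u_L + u_R) / 2 = 0.5.
      nx, L, T = 400, 2.0, 1.0
      dx = L / nx
      x = np.linspace(-L / 2 + dx / 2, L / 2 - dx / 2, nx)
      u = np.where(x < 0.0, 1.0, 0.0)

      def godunov_flux(ul, ur):
          """Exact Godunov flux for the convex flux f(u) = u^2 / 2."""
          fl, fr = 0.5 * ul**2, 0.5 * ur**2
          flux = np.where(ul <= ur,
                          np.minimum(fl, fr),                      # rarefaction side
                          np.maximum(fl, fr))                      # shock side
          flux = np.where((ul <= 0.0) & (ur >= 0.0), 0.0, flux)    # sonic point
          return flux

      t = 0.0
      while t < T:
          dt = 0.4 * dx / max(np.max(np.abs(u)), 1e-12)            # CFL condition
          dt = min(dt, T - t)
          F = godunov_flux(u[:-1], u[1:])                          # interface fluxes
          u[1:-1] -= dt / dx * (F[1:] - F[:-1])                    # conservative update
          t += dt

      # The computed shock front should sit near x = s * T = 0.5.
      shock_pos = x[np.argmin(np.abs(u - 0.5))]
      print(f"computed shock position ~ {shock_pos:.3f} (Rankine-Hugoniot predicts 0.500)")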

  20. Feasibility of an EEG-based brain-computer interface in the intensive care unit.

    PubMed

    Chatelle, Camille; Spencer, Camille A; Cash, Sydney S; Hochberg, Leigh R; Edlow, Brian L

    2018-05-09

    We tested the feasibility of deploying a commercially available EEG-based brain-computer interface (BCI) in the intensive care unit (ICU) to detect consciousness in patients with acute disorders of consciousness (DoC) or locked-in syndrome (LIS). Ten patients (9 DoC, 1 LIS) and 10 healthy subjects (HS) were enrolled. The BCI utilized oddball auditory evoked potentials, vibrotactile evoked potentials (VTP) and motor imagery (MoI) to assess consciousness. We recorded the assessment completion rate and the time required for assessment, and we calculated the sensitivity and specificity of each paradigm for detecting behavioral signs of consciousness. All 10 patients completed the assessment, 9 of whom required less than 1 h. The LIS patient reported fatigue before the end of the session. The HS and LIS patient showed more consistent BCI responses than DoC patients, but overall there was no association between BCI responses and behavioral signs of consciousness. The system is feasible to deploy in the ICU and may confirm consciousness in acute LIS, but it was unreliable in acute DoC. The accuracy of the paradigms for detecting consciousness must be improved and the duration of the protocol should be shortened before this commercially available BCI is ready for clinical implementation in the ICU in patients with acute DoC. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  1. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    NASA Astrophysics Data System (ADS)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke, in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who had suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost, computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web based virtual environment for facilitating repetitive movement training, with state-of-the art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.

  2. Robotic neurorehabilitation: a computational motor learning perspective

    PubMed Central

    Huang, Vincent S; Krakauer, John W

    2009-01-01

    Conventional neurorehabilitation appears to have little impact on impairment over and above that of spontaneous biological recovery. Robotic neurorehabilitation has the potential for a greater impact on impairment due to easy deployment, its applicability across of a wide range of motor impairment, its high measurement reliability, and the capacity to deliver high dosage and high intensity training protocols. We first describe current knowledge of the natural history of arm recovery after stroke and of outcome prediction in individual patients. Rehabilitation strategies and outcome measures for impairment versus function are compared. The topics of dosage, intensity, and time of rehabilitation are then discussed. Robots are particularly suitable for both rigorous testing and application of motor learning principles to neurorehabilitation. Computational motor control and learning principles derived from studies in healthy subjects are introduced in the context of robotic neurorehabilitation. Particular attention is paid to the idea of context, task generalization and training schedule. The assumptions that underlie the choice of both movement trajectory programmed into the robot and the degree of active participation required by subjects are examined. We consider rehabilitation as a general learning problem, and examine it from the perspective of theoretical learning frameworks such as supervised and unsupervised learning. We discuss the limitations of current robotic neurorehabilitation paradigms and suggest new research directions from the perspective of computational motor learning. PMID:19243614

  3. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud

    PubMed Central

    Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew

    2015-01-01

    Background: Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. Results: We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. Conclusions: This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation. PMID:26501966

  4. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud.

    PubMed

    Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew

    2015-01-01

    Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation.

  5. Serial multiplier arrays for parallel computation

    NASA Technical Reports Server (NTRS)

    Winters, Kel

    1990-01-01

    Arrays of systolic serial-parallel multiplier elements are proposed as an alternative to conventional SIMD mesh serial adder arrays for applications that are multiplication intensive and require few stored operands. The design and operation of a number of multiplier and array configurations featuring locality of connection, modularity, and regularity of structure are discussed. A design methodology combining top-down and bottom-up techniques is described to facilitate development of custom high-performance CMOS multiplier element arrays as well as rapid synthesis of simulation models and semicustom prototype CMOS components. Finally, a differential version of NORA dynamic circuits requiring a single-phase uncomplemented clock signal is introduced for this application.

  6. The MST radar technique: Requirements for operational weather forecasting

    NASA Technical Reports Server (NTRS)

    Larsen, M. F.

    1983-01-01

    There is a feeling that the accuracy of mesoscale forecasts for spatial scales of less than 1000 km and time scales of less than 12 hours can be improved significantly if resources are applied to the problem in an intensive effort over the next decade. Since the most dangerous and damaging types of weather occur at these scales, there are major advantages to be gained if such a program is successful. The interest in improving short-term forecasting is evident. The technology at the present time is sufficiently developed, both in terms of new observing systems and the computing power to handle the observations, to warrant an intensive effort to improve storm-scale forecasting. An assessment of the extent to which the so-called MST radar technique fulfills the requirements for an operational mesoscale observing network is presented, and the extent to which improvements in various types of forecasting could be expected if such a network were put into operation is delineated.

  7. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  8. A finite element method to correct deformable image registration errors in low-contrast regions

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Kim, Jinkoo; Li, Haisen; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.

    2012-06-01

    Image-guided adaptive radiotherapy requires deformable image registration to map radiation dose back and forth between images. The purpose of this study is to develop a novel method to improve the accuracy of an intensity-based image registration algorithm in low-contrast regions. A computational framework has been developed in this study to improve the quality of the ‘demons’ registration. For each voxel in the registration's target image, the standard deviation of image intensity in a neighborhood of this voxel was calculated. A mask for high-contrast regions was generated based on their standard deviations. In the masked regions, a tetrahedral mesh was refined recursively so that a sufficient number of tetrahedral nodes in these regions can be selected as driving nodes. An elastic system driven by the displacements of the selected nodes was formulated using a finite element method (FEM) and implemented on the refined mesh. The displacements of these driving nodes were generated with the ‘demons’ algorithm. The solution of the system was derived using a conjugate gradient method and interpolated to generate a displacement vector field for the registered images. The FEM correction method was compared with the ‘demons’ algorithm on the computed tomography (CT) images of lung and prostate patients. The performance of the FEM correction relative to the ‘demons’ registration was analyzed based on the physical properties of their deformation maps, and quantitatively evaluated through a benchmark model developed specifically for this study. Compared to the benchmark model, the ‘demons’ registration has a maximum error of 1.2 cm, which the FEM correction reduces to 0.4 cm, and the average error of the ‘demons’ registration is reduced from 0.17 to 0.11 cm. For the CT images of lung and prostate patients, the deformation maps generated by the ‘demons’ algorithm were found to be unrealistic in several places. In these places, the displacement differences between the ‘demons’ registrations and their FEM corrections were in the range of 0.4 to 1.1 cm. The mesh refinement and FEM simulation were implemented in a single-threaded application, which requires about 45 min of computation time on a 2.6 GHz computer. This study has demonstrated that the FEM can be integrated with intensity-based image registration algorithms to improve their registration accuracy, especially in low-contrast regions.
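
    The masking step described above lends itself to a compact illustration. The following Python/NumPy sketch computes the local standard deviation of intensity around each voxel and flags high-contrast regions; the window size and threshold are illustrative assumptions rather than the authors' settings, and the mesh refinement, driving-node selection and FEM solve are not reproduced here.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def high_contrast_mask(image, window=5, threshold=20.0):
          # Local standard deviation via the identity var = E[x^2] - E[x]^2,
          # computed with box filters over a cubic neighborhood of each voxel.
          img = image.astype(np.float64)
          mean = uniform_filter(img, size=window)
          mean_sq = uniform_filter(img * img, size=window)
          local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
          # Voxels whose neighborhood varies strongly are treated as high contrast.
          return local_std > threshold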

  9. A Bayesian and Physics-Based Ground Motion Parameters Map Generation System

    NASA Astrophysics Data System (ADS)

    Ramirez-Guzman, L.; Quiroz, A.; Sandoval, H.; Perez-Yanez, C.; Ruiz, A. L.; Delgado, R.; Macias, M. A.; Alcántara, L.

    2014-12-01

    We present the Ground Motion Parameters Map Generation (GMPMG) system developed by the Institute of Engineering at the National Autonomous University of Mexico (UNAM). The system delivers estimates of information associated with the social impact of earthquakes, engineering ground motion parameters (gmp), and macroseismic intensity maps. The gmp calculated are peak ground acceleration and velocity (pga and pgv) and response spectral acceleration (SA). The GMPMG relies on real-time data received from strong ground motion stations belonging to UNAM's networks throughout Mexico. Data are gathered via satellite and internet service providers, and managed with the data acquisition software Earthworm. The system is self-contained and can perform all calculations required for estimating gmp and intensity maps due to earthquakes, automatically or manually. Initial data processing is performed by baseline-correcting the records and removing those containing glitches or a low signal-to-noise ratio. The system then assigns a hypocentral location using first arrivals and a simplified 3D model, followed by a moment tensor inversion, which is performed using a pre-calculated Receiver Green's Tensors (RGT) database for a realistic 3D model of Mexico. A backup system to compute epicentral location and magnitude is in place. Bayesian kriging is employed to combine recorded values with grids of computed gmp. The latter are obtained by using appropriate ground motion prediction equations (for pgv, pga and SA with T=0.3, 0.5, 1 and 1.5 s) and numerical simulations performed in real time, using the aforementioned RGT database (for SA with T=2, 2.5 and 3 s). Estimated intensity maps are then computed using SA(T=2 s) to Modified Mercalli Intensity correlations derived for central Mexico. The maps are made available to the institutions in charge of the disaster prevention systems. In order to analyze the accuracy of the maps, we compare them against observations not considered in the computations, and present some examples of recent earthquakes. We conclude that the system provides information with a fair goodness-of-fit against observations. This project is partially supported by DGAPA-PAPIIT (UNAM) project TB100313-RR170313.

  10. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% of the ground truth in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with, on average, 1.2 × 10^5 particles per beamlet. Correspondingly, the computation time, including both MC dose calculations and plan optimizations, was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
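
    The adaptive particle-allocation idea can be sketched with toy stand-ins for the dose engine and the optimizer. In the Python sketch below, mc_beamlet_dose and optimize_fluence are hypothetical placeholders (a noisy dose model and a non-negative least-squares fit), not the GPU Monte Carlo engine or the clinical optimizer used in the paper; only the re-allocation loop reflects the described scheme.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)

      def mc_beamlet_dose(n_voxels, n_particles):
          # Toy stand-in for the MC dose engine: statistical noise shrinks as more
          # particles are simulated, the only property the adaptive scheme relies on.
          true_dose = np.linspace(0.5, 1.5, n_voxels)
          return true_dose + rng.normal(0.0, 1.0 / np.sqrt(n_particles), n_voxels)

      def optimize_fluence(dose_matrix, prescription):
          # Toy stand-in for the plan optimizer: non-negative fit of beamlet weights.
          weights, _ = nnls(dose_matrix.T, prescription)
          return weights

      def adaptive_mc_plan(n_beamlets=20, n_voxels=50, total_particles=10**6, iters=4):
          prescription = np.ones(n_voxels)
          weights = np.full(n_beamlets, 1.0 / n_beamlets)   # start with uniform budgets
          for _ in range(iters):
              budgets = np.maximum((weights / weights.sum() * total_particles).astype(int), 1000)
              dose_matrix = np.array([mc_beamlet_dose(n_voxels, b) for b in budgets])
              weights = optimize_fluence(dose_matrix, prescription)   # re-allocate particles
          return weights

      print(adaptive_mc_plan())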

  11. Simultaneous mapping of the unsteady flow fields by Particle Displacement Velocimetry (PDV)

    NASA Technical Reports Server (NTRS)

    Huang, Thomas T.; Fry, David J.; Liu, Han-Lieh; Katz, Joseph; Fu, Thomas C.

    1992-01-01

    Current experimental and computational techniques must be improved in order to advance the prediction capability for the longitudinal vortical flows shed by underwater vehicles. The generation, development, and breakdown mechanisms of the shed vortices at high Reynolds numbers are not fully understood. The ability to measure hull-separated vortices associated with vehicle maneuvering does not exist at present. The existing point-by-point measurement techniques can only approximately capture the large 'mean' eddies and fail to resolve the dynamics of small vortices during the initial stage of generation. A new technique, which offers a previously unavailable capability to measure the unsteady cross-flow distribution in the plane of the laser light sheet, is called Particle Displacement Velocimetry (PDV). PDV consists of illuminating a thin section of the flowfield with a pulsed laser. The water is seeded with microscopic, neutrally buoyant particles containing embedded fluorescent dye, which responds with intense spontaneous fluorescence within the illuminated section. The seeded particles in the vortical flow structure shed by the underwater vehicle are illuminated by the pulsed laser, and the corresponding particle traces are recorded in a single photographic frame. Two distinct approaches were utilized for determining the velocity distribution from the particle traces. The first method is based on matching the traces of the same particle and measuring the distance between them. The direction of the flow can be identified by keeping one of the pulses longer than the other. The second method is based on selecting a small window within the image and finding the mean shift of all the particles within that region. The auto-correlation of the intensity distribution within the selected sample window is computed to determine the mean displacement of particles. The direction of the flow is identified by varying the intensity of the laser light between pulses. Considerable computational resources are required to compute the auto-correlation of the intensity distribution; parallel processing will be employed to speed up the data reduction. A few examples of measured unsteady vortical flow structures shed by underwater vehicles will be presented.
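
    The second, window-based approach can be illustrated with a short autocorrelation sketch. Assuming a double-exposure interrogation window stored as a 2-D NumPy array, the code below computes the spatial autocorrelation by FFT, suppresses the zero-lag peak, and returns the offset of the strongest remaining peak as the mean particle displacement; the window size and peak-exclusion radius are illustrative choices, and the directional ambiguity resolved experimentally by varying the pulse intensity is not handled.

      import numpy as np

      def mean_displacement(window):
          # Zero-mean the interrogation window, then form its autocorrelation via FFT.
          win = window - window.mean()
          spec = np.fft.fft2(win)
          autocorr = np.fft.fftshift(np.fft.ifft2(spec * np.conj(spec)).real)
          cy, cx = np.array(autocorr.shape) // 2
          # Mask the zero-lag (self-correlation) peak before searching for the shift peak.
          autocorr[cy - 2:cy + 3, cx - 2:cx + 3] = -np.inf
          py, px = np.unravel_index(np.argmax(autocorr), autocorr.shape)
          return py - cy, px - cx          # mean displacement in pixels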

  12. Developing eThread pipeline using SAGA-pilot abstraction for large-scale structural bioinformatics.

    PubMed

    Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of the predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences in even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data- and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on these results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.
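
    The task-level parallelism at the heart of the pipeline can be pictured with a much simpler local stand-in. The sketch below fans per-sequence threading jobs out over a process pool; it is not the SAGA-Pilot layer (which handles pilot jobs, data movement and EC2 resource selection), and the "ethread" command line shown is a hypothetical placeholder for the real eThread invocation.

      from concurrent.futures import ProcessPoolExecutor, as_completed
      import subprocess

      def run_ethread(fasta_path):
          # Hypothetical command line; substitute the actual eThread invocation here.
          subprocess.run(["ethread", fasta_path], check=True)
          return fasta_path

      def thread_genome(fasta_files, max_workers=16):
          # Distribute independent per-sequence threading tasks across workers.
          results = []
          with ProcessPoolExecutor(max_workers=max_workers) as pool:
              futures = {pool.submit(run_ethread, f): f for f in fasta_files}
              for fut in as_completed(futures):
                  results.append(fut.result())
          return results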

  13. Developing eThread Pipeline Using SAGA-Pilot Abstraction for Large-Scale Structural Bioinformatics

    PubMed Central

    Ragothaman, Anjani; Feinstein, Wei; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of the predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences in even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data- and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on these results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure. PMID:24995285

  14. Preparing the Next Generation of Environmental Scientists to Work at the Frontier of Data-Intensive Research

    NASA Astrophysics Data System (ADS)

    Hampton, S. E.

    2015-12-01

    The science necessary to unravel complex environmental problems confronts severe computational challenges - coping with huge volumes of heterogeneous data, spanning vast spatial scales at high resolution, and requiring integration of disparate measurements from multiple disciplines. But as cyberinfrastructure advances to support such work, scientists in many fields lack sufficient computational skills to participate in interdisciplinary, data-intensive research. In response, we developed innovative training workshops for early-career scientists, in order to explore both the needs and solutions for training next-generation scientists in skills for data-intensive environmental research. In 2013 and 2014 we ran intensive 3-week training workshops for early-career researchers. One of the workshops was run concurrently in California and North Carolina, connected by virtual technologies and coordinated schedules. We attracted applicants to the workshop with the opportunity to pursue data-intensive small-group research projects that they proposed. This approach presented a realistic possibility that publishable products could result from 3 weeks of focused hands-on classroom instruction combined with self-directed group research in which instructors were present to assist trainees. Instruction addressed 1) collaboration modes and technologies, 2) data management, preservation, and sharing, 3) preparing data for analysis using scripting, 4) reproducible research, 5) sustainable software practices, 6) data analysis and modeling, and 7) communicating results to broad communities. The most dramatic improvements in technical skills were in data management, version control, and working with spatial data outside of proprietary software. In addition, participants built strong networks and collaborative skills that later resulted in a successful student-led grant proposal and published manuscripts, and they reported that the training was a highly influential experience.

  15. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

    denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote ... structure analysis often involve computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive ... 1) First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion ...

  16. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
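
    As a rough illustration of the kind of input the method consumes, the sketch below uses OpenCV's Farneback algorithm to compute dense optical flow fields for consecutive frames of one camera's sequence; the parameter values are generic choices, and the paper's actual contribution, aligning the flow fields of the two cameras to obtain correspondences, is not reproduced here.

      import cv2

      def dense_flow(prev_gray, next_gray):
          # Dense optical flow (Farneback); positional arguments are:
          # prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
          return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)

      def flow_fields(frames):
          # Flow fields between consecutive frames of a single camera's sequence.
          gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
          return [dense_flow(a, b) for a, b in zip(gray[:-1], gray[1:])]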

  17. Illumination Modulation for Improved Propagation-Based Phase Imaging

    NASA Astrophysics Data System (ADS)

    Chakraborty, Tonmoy

    Propagation-based phase imaging enables the quantitative reconstruction of a light beam's phase from measurements of its intensity. Because the intensity depends on the time-averaged square of the field the relationship between intensity and phase is, in general, nonlinear. The transport of intensity equation (TIE), is a linear equation relating phase and propagated intensity that arises from restricting the propagation distance to be small. However, the TIE limits the spatial frequencies that can be reliably reconstructed to those below some cutoff, which limits the accuracy of reconstruction of fine features in phase. On the other hand, the low frequency components suffer from poor signal to noise ratio (SNR) unless the propagation distance is sufficiently large, which leads to low frequency artifacts that obscure the reconstruction. In this research, I will consider the use of incoherent primary sources of illumination, in a Kohler illumination setup, to enhance the low-frequency performance of the TIE. The necessary steps required to design and build a table-top imaging setup which is capable of capturing intensity at any defocused position while modulating the source will be explained. In addition, it will be shown how by employing such illumination, the steps required for computationally recovering the phase, i.e. Fourier transforms and frequency-domain filtering, may be performed in the optical system. While these methods can address the low-frequency performance of the TIE, they do not extend its high-frequency cutoff. To avoid this cutoff, for objects with slowly varying phase, the contrast transfer function (CTF) model, an alternative to the TIE, can be used to recover phase. By allowing the combination of longer propagation distances and incoherent sources, it will be shown how CTF can improve performance at both high and low frequencies.
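
    For reference, the TIE referred to above is commonly written, for paraxial propagation along z with wavenumber k = 2*pi/lambda and transverse coordinates r_perp, as

      \nabla_{\perp} \cdot \left[ I(\mathbf{r}_{\perp}, z)\, \nabla_{\perp}\phi(\mathbf{r}_{\perp}, z) \right] = -k\,\frac{\partial I(\mathbf{r}_{\perp}, z)}{\partial z}

    which is the linear relation between the transverse phase gradient and the longitudinal intensity derivative that the measurement scheme above exploits.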

  18. Computational Aspects of Data Assimilation and the ESMF

    NASA Technical Reports Server (NTRS)

    daSilva, A.

    2003-01-01

    The scientific challenge of developing advanced data assimilation applications is a daunting task. Independently developed components may have incompatible interfaces or may be written in different computer languages. The high-performance computing (HPC) platforms required by numerically intensive Earth system applications are complex, varied, rapidly evolving and multi-part systems themselves. Since the market for high-end platforms is relatively small, there is little robust middleware available to buffer the modeler from the difficulties of HPC programming. To complicate matters further, the collaborations required to develop large Earth system applications often span initiatives, institutions and agencies, involve geoscience, software engineering, and computer science communities, and cross national borders. The Earth System Modeling Framework (ESMF) project is a concerted response to these challenges. Its goal is to increase software reuse, interoperability, ease of use and performance in Earth system models through the use of a common software framework, developed in an open manner by leaders in the modeling community. The ESMF addresses the technical, and to some extent the cultural, aspects of Earth system modeling, laying the groundwork for addressing the more difficult scientific aspects, such as the physical compatibility of components, in the future. In this talk we will discuss the general philosophy and architecture of the ESMF, focusing on those capabilities useful for developing advanced data assimilation applications.

  19. The M-Integral for Computing Stress Intensity Factors in Generally Anisotropic Materials

    NASA Technical Reports Server (NTRS)

    Warzynek, P. A.; Carter, B. J.; Banks-Sills, L.

    2005-01-01

    The objective of this project is to develop and demonstrate a capability for computing stress intensity factors in generally anisotropic materials. These objectives have been met. The primary deliverable of this project is this report and the information it contains. In addition, we have delivered the source code for a subroutine that will compute stress intensity factors for anisotropic materials encoded in both the C and Python programming languages and made available a version of the FRANC3D program that incorporates this subroutine. Single crystal super alloys are commonly used for components in the hot sections of contemporary jet and rocket engines. Because these components have a uniform atomic lattice orientation throughout, they exhibit anisotropic material behavior. This means that stress intensity solutions developed for isotropic materials are not appropriate for the analysis of crack growth in these materials. Until now, a general numerical technique did not exist for computing stress intensity factors of cracks in anisotropic materials and cubic materials in particular. Such a capability was developed during the project and is described and demonstrated herein.

  20. Effects of Computer-Based Practice on the Acquisition and Maintenance of Basic Academic Skills for Children with Moderate to Intensive Educational Needs

    ERIC Educational Resources Information Center

    Everhart, Julie M.; Alber-Morgan, Sheila R.; Park, Ju Hee

    2011-01-01

    This study investigated the effects of computer-based practice on the acquisition and maintenance of basic academic skills for two children with moderate to intensive disabilities. The special education teacher created individualized computer games that enabled the participants to independently practice academic skills that corresponded with their…

  1. Absolute versus relative intensity of physical activity in a dose-response context.

    PubMed

    Shephard, R J

    2001-06-01

    To examine the importance of relative versus absolute intensities of physical activity in the context of population health. A standard computer-search of the literature was supplemented by review of extensive personal files. Consensus reports (Category D Evidence) have commonly recommended moderate rather than hard physical activity in the context of population health. Much of the available literature provides Category C Evidence. It has often confounded issues of relative intensity with absolute intensity or total weekly dose of exercise. In terms of cardiovascular health, there is some evidence for a threshold intensity of effort, perhaps as high as 6 METs, in addition to a minimum volume of physical activity. Decreases in blood pressure and prevention of stroke seem best achieved by moderate rather than high relative intensities of physical activity. Many aspects of metabolic health depend on the total volume of activity; moderate relative intensities of effort are more effective in mobilizing body fat, but harder relative intensities may help to increase energy expenditures postexercise. Hard relative intensities seem needed to augment bone density, but this may reflect an associated increase in volume of activity. Hard relative intensities of exercise induce a transient immunosuppression. The optimal intensity of effort, relative or absolute, for protection against various types of cancer remains unresolved. Acute effects of exercise on mood state also require further study; long-term benefits seem associated with a moderate rather than a hard relative intensity of effort. The importance of relative versus absolute intensity of effort depends on the desired health outcome, and many issues remain to be resolved. Progress will depend on more precise epidemiological methods of assessing energy expenditures and studies that equate total energy expenditures between differing relative intensities. There is a need to focus on gains in quality-adjusted life expectancy.

  2. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    PubMed

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed to handle the irregular parallel accumulation of raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
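
    The "irregular parallel accumulation" the MapReduce design targets can be pictured with a toy keyed accumulation. The pure-Python sketch below is only a conceptual stand-in for the Hadoop implementation: the map step emits (range bin, contribution) pairs from a placeholder echo model, and the reduce step sums the contributions that land in the same bin.

      from collections import defaultdict

      def map_target(target):
          # Map phase (conceptual): emit (range_bin, echo_contribution) pairs for one
          # point target; the echo model here is a placeholder, not a SAR simulator.
          for rng_bin in range(target["first_bin"], target["last_bin"]):
              yield rng_bin, target["amplitude"]

      def reduce_bins(pairs):
          # Reduce phase (conceptual): accumulate contributions sharing a range bin,
          # the irregular accumulation mentioned above.
          acc = defaultdict(complex)
          for rng_bin, contrib in pairs:
              acc[rng_bin] += contrib
          return acc

      targets = [{"first_bin": 0, "last_bin": 5, "amplitude": 1 + 0j},
                 {"first_bin": 3, "last_bin": 8, "amplitude": 0.5 + 0.5j}]
      raw = reduce_bins(pair for t in targets for pair in map_target(t))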

  3. Bringing MapReduce Closer To Data With Active Drives

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.

    2017-12-01

    Moving computation closer to the data location has been a much theorized improvement to computation for decades. The increase in processor performance, the decrease in processor size and power requirement combined with the increase in data intensive computing has created a push to move computation as close to data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, have been proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk allowing for computations to be performed on them without the need to transfer the data to the computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general purpose computing with comparable results to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive based approach significantly reduces the amount of data leaving the drive when performing several common algorithms: subsetting and gridding. We will show that an Active Drive based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS while at the same time keeping comparable compute speeds as Hadoop.

  4. WPS mediation: An approach to process geospatial data on different computing backends

    NASA Astrophysics Data System (ADS)

    Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas

    2012-10-01

    The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities, and various problems emerge when trying to use them in data- and computing-intensive domains such as environmental sciences. These problems are usually not, or only partially, solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by methods and technologies related to distributed computing (e.g., Grids, Clouds). Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept by mediating different geospatial and Grid software packages and by proposing an extension of the WPS specification through two optional parameters. The applicability of this approach will be demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits and issues that need to be further investigated to improve performance.

  5. Analysis of intensity variability in multislice and cone beam computed tomography.

    PubMed

    Nackaerts, Olivia; Maes, Frederik; Yan, Hua; Couto Souza, Paulo; Pauwels, Ruben; Jacobs, Reinhilde

    2011-08-01

    The aim of this study was to evaluate the variability of intensity values in cone beam computed tomography (CBCT) imaging compared with multislice computed tomography Hounsfield units (MSCT HU) in order to assess the reliability of density assessments using CBCT images. A quality control phantom was scanned with an MSCT scanner and five CBCT scanners. In one CBCT scanner, the phantom was scanned repeatedly in the same and in different positions. Images were analyzed using registration to a mathematical model. MSCT images were used as a reference. Density profiles of MSCT showed stable HU values, whereas in CBCT imaging the intensity values were variable over the profile. Repositioning of the phantom resulted in large fluctuations in intensity values. The use of intensity values in CBCT images is not reliable, because the values are influenced by device, imaging parameters and positioning. © 2011 John Wiley & Sons A/S.

  6. Meshfree and efficient modeling of swimming cells

    NASA Astrophysics Data System (ADS)

    Gallagher, Meurig T.; Smith, David J.

    2018-05-01

    Locomotion in Stokes flow is an intensively studied problem because it describes important biological phenomena such as the motility of many species' sperm, bacteria, algae, and protozoa. Numerical computations can be challenging, particularly in three dimensions, due to the presence of moving boundaries and complex geometries; methods which combine ease of implementation and computational efficiency are therefore needed. A recently proposed method to discretize the regularized Stokeslet boundary integral equation without the need for a connected mesh is applied to the inertialess locomotion problem in Stokes flow. The mathematical formulation and key aspects of the computational implementation in matlab® or GNU Octave are described, followed by numerical experiments with biflagellate algae and multiple uniflagellate sperm swimming between no-slip surfaces, for which both swimming trajectories and flow fields are calculated. These computational experiments required minutes of time on modest hardware; an extensible implementation is provided in a GitHub repository. The nearest-neighbor discretization dramatically improves convergence and robustness, a key challenge in extending the regularized Stokeslet method to complicated three-dimensional biological fluid problems.

  7. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
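
    A minimal example of the anytime contract, an answer available whenever it is requested and improving with additional time, is sketched below; the task (a series estimate of pi) is arbitrary and stands in for the far more involved planning computations discussed above.

      import time

      def anytime_pi(budget_seconds):
          # Leibniz-series estimate of pi that returns the best answer available
          # when the time budget expires; the point is the anytime contract, not the task.
          estimate, k, sign = 0.0, 0, 1.0
          deadline = time.monotonic() + budget_seconds
          while time.monotonic() < deadline:
              estimate += sign * 4.0 / (2 * k + 1)
              sign, k = -sign, k + 1
          return estimate

      # A scheduler could allot more or less time depending on the expected value of
      # further refinement, e.g. anytime_pi(0.01) versus anytime_pi(0.5).
      print(anytime_pi(0.05))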

  8. A simple method for the generation of organ and vessel contours from roentgenographic or fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Newell, J. D.; Keller, R. A.; Baily, N. A.

    1974-01-01

    A simple method for outlining or contouring any area defined by a change in film density or fluoroscopic screen intensity is described. The entire process, except for the positioning of an electronic window, is accomplished using a small computer with appropriate software. The electronic window is positioned by the operator over the area to be processed. The only requirement is that the window be large enough to encompass the total area to be considered.

  9. A Chamber Investigation of the IR and Visible Wavelength Obscuration Properties of Pyrotechnically Generated Smokes,

    DTIC Science & Technology

    1985-05-01

    Extinction Spectra (Visible to 14 µm Wavelength) of the NWC Smokes at High and Low Humidity ... Relative Size Distributions ... percent over the wavelength interval from 2.5-14 µm. Data acquisition and reduction are computer controlled. Intensity measurements are obtained at approximately 0.02 µm wavelength intervals, with a complete 2.5-14 µm scan requiring 2 minutes. Figure 1: Interior view of Calspan's 590m chamber.

  10. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered by their computationally intensive nature. Solution of PSM problems requires repeated analyses of structures that are often large and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make the solution of large-scale PSM problems practical.
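
    The "inherently parallel" character of PSM methods comes from the independence of Monte Carlo samples. The sketch below, with an arbitrary toy limit state in place of the repeated structural analyses described above, distributes samples across worker processes to estimate a failure probability; it illustrates only the parallel structure, not any particular PSM solution method.

      import numpy as np
      from multiprocessing import Pool

      def one_sample(seed):
          # One reliability sample: draw a random load and strength and report failure.
          # The distributions below are illustrative assumptions, not real design data.
          rng = np.random.default_rng(seed)
          load = rng.normal(100.0, 15.0)
          strength = rng.normal(150.0, 20.0)
          return 1 if load > strength else 0

      def failure_probability(n_samples=100_000, workers=4):
          # Embarrassingly parallel Monte Carlo estimate of the failure probability.
          with Pool(workers) as pool:
              failures = pool.map(one_sample, range(n_samples))
          return sum(failures) / n_samples

      if __name__ == "__main__":
          print(failure_probability())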

  11. The Pixhawk Open-Source Computer Vision Framework for Mavs

    NASA Astrophysics Data System (ADS)

    Meier, L.; Tanskanen, P.; Fraundorfer, F.; Pollefeys, M.

    2011-09-01

    Unmanned aerial vehicles (UAV) and micro air vehicles (MAV) are already intensively used in geodetic applications. State-of-the-art autonomous systems are, however, geared towards applications at safe and obstacle-free altitudes greater than 30 meters. Applications at lower altitudes still require a human pilot. A new application field will be the reconstruction of structures and buildings, including the facades and roofs, with semi-autonomous MAVs. Ongoing research in the MAV robotics field is focusing on enabling this system class to operate at lower altitudes in proximity to nearby obstacles and humans. PIXHAWK is an open source and open hardware toolkit for this purpose. The quadrotor design is optimized for onboard computer vision and can connect up to four cameras to its onboard computer. The validity of the system design is shown with a fully autonomous capture flight along a building.

  12. Real-time fuzzy inference based robot path planning

    NASA Technical Reports Server (NTRS)

    Pacini, Peter J.; Teichrow, Jon S.

    1990-01-01

    This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.
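
    A tiny Mamdani-style controller illustrates what "computationally light" fuzzy evaluation looks like at run time. The membership functions and the two rules below are invented for illustration and bear no relation to the project's actual rulebase.

      import numpy as np

      def tri(x, a, b, c):
          # Triangular membership function with feet at a and c and peak at b.
          return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                       (c - x) / (c - b + 1e-9)), 0.0)

      def joint_speed(distance_to_goal, obstacle_proximity):
          # Two-rule fuzzy controller with centroid defuzzification (all shapes invented).
          speeds = np.linspace(0.0, 1.0, 101)              # candidate output values
          # Rule 1: IF goal is FAR AND obstacles are CLEAR THEN speed is HIGH
          w1 = min(tri(distance_to_goal, 0.5, 1.0, 1.5),
                   tri(obstacle_proximity, -0.5, 0.0, 0.5))
          # Rule 2: IF obstacles are NEAR THEN speed is LOW
          w2 = tri(obstacle_proximity, 0.5, 1.0, 1.5)
          high = np.minimum(tri(speeds, 0.5, 1.0, 1.5), w1)   # clipped consequents
          low = np.minimum(tri(speeds, -0.5, 0.0, 0.5), w2)
          agg = np.maximum(high, low)
          return float((speeds * agg).sum() / (agg.sum() + 1e-9))

      print(joint_speed(distance_to_goal=0.9, obstacle_proximity=0.2))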

  13. Reducing software mass through behavior control. [of planetary roving robots

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1992-01-01

    Attention is given to the tradeoff between communication and computation for a planetary rover; both of these subsystems are very power-intensive, and either can be the major driver of the rover's power subsystem and therefore of the minimum mass and size of the rover. Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, called behavior control, employ an entirely different approach and, for many tasks, will yield a similar or superior level of autonomy to traditional control techniques while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.

  14. A Real Time Controller For Applications In Smart Structures

    NASA Astrophysics Data System (ADS)

    Ahrens, Christian P.; Claus, Richard O.

    1990-02-01

    Research in smart structures, especially in the area of vibration suppression, has warranted the investigation of advanced computing environments. Limited real-time PC computing power has constrained the development of high-order control algorithms. This paper presents a simple Real Time Embedded Control System (RTECS) in an application of Intelligent Structure Monitoring by way of modal domain sensing for vibration control. It is compared to a PC AT based system for overall functionality and speed. The system employs a novel Reduced Instruction Set Computer (RISC) microcontroller capable of 15 million instructions per second (MIPS) continuous performance and burst rates of 40 MIPS. Advanced Complementary Metal Oxide Semiconductor (CMOS) circuits are integrated on a single 100 mm by 160 mm printed circuit board requiring only 1 Watt of power. An operating system written in Forth provides high speed operation and short development cycles. The system allows for implementation of Input/Output (I/O) intensive algorithms and provides capability for advanced system development.

  15. A grid-enabled web service for low-resolution crystal structure refinement.

    PubMed

    O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr

    2012-03-01

    Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.

  16. Data and results of a laboratory investigation of microprocessor upset caused by simulated lightning-induced analog transients

    NASA Technical Reports Server (NTRS)

    Belcastro, C. M.

    1984-01-01

    Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with high reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems may be subjected to field-induced transient voltages and currents resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests which involved the random input of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed and a stochastic model is developed.

  17. Queuing theory models for computer networks

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. The impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed using them, even without fine detail about the network traffic rates, traffic patterns, and the hardware used to implement the networks. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
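
    As a flavor of the closed-form relations such spreadsheet models are built from, the sketch below evaluates the average response time of a single channel modeled as an M/M/1 queue, T = 1/(mu - lambda); real links would also fold in message lengths and protocol overhead.

      def mm1_response_time(arrival_rate, service_rate):
          # Average response time of an M/M/1 queue: T = 1 / (mu - lambda).
          if arrival_rate >= service_rate:
              raise ValueError("queue is unstable: arrival rate >= service rate")
          return 1.0 / (service_rate - arrival_rate)

      # e.g. a channel serving 1000 messages/s offered 600 messages/s:
      print(mm1_response_time(600.0, 1000.0))   # ~0.0025 s average response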

  18. Building a Data Science capability for USGS water research and communication

    NASA Astrophysics Data System (ADS)

    Appling, A.; Read, E. K.

    2015-12-01

    Interpreting and communicating water issues in an era of exponentially increasing information requires a blend of domain expertise, computational proficiency, and communication skills. The USGS Office of Water Information has established a Data Science team to meet these needs, providing challenging careers for diverse domain scientists and innovators in the fields of information technology and data visualization. Here, we detail the experience of building a Data Science capability as a bridging element between traditional water resources analyses and modern computing tools and data management techniques. This approach includes four major components: 1) building reusable research tools, 2) documenting data-intensive research approaches in peer reviewed journals, 3) communicating complex water resources issues with interactive web visualizations, and 4) offering training programs for our peers in scientific computing. These components collectively improve the efficiency, transparency, and reproducibility of USGS data analyses and scientific workflows.

  19. An acceleration framework for synthetic aperture radar algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.

    2017-04-01

    Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in the FPGA, as opposed to using a software implementation running on a typical general purpose processor.
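
    A software analogue of the homomorphic filtering step is sketched below: the natural logarithm (the kernel the authors moved into FPGA logic) converts multiplicative illumination or speckle effects into additive ones, a Gaussian filter separates low- and high-frequency content, and the reweighted result is exponentiated. The gain and sigma values are illustrative assumptions only.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def homomorphic_filter(image, sigma=15.0, low_gain=0.5, high_gain=1.5):
          # Log transform turns multiplicative components into additive ones.
          log_img = np.log1p(image.astype(np.float64))
          low = gaussian_filter(log_img, sigma)    # slowly varying (illumination-like) part
          high = log_img - low                     # detail (reflectance-like) part
          # Reweight the two components and return to the intensity domain.
          return np.expm1(low_gain * low + high_gain * high)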

  20. Introducing Ergonomics in Two US Elementary Schools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, C L; Tien, D

    2003-06-25

    The increasing presence of computers and other forms of information and communications technology (ICT) in schools has raised concerns in the United States (US) and elsewhere. Children are using computers more than any other age group in the US. It is not known whether early intensive use of ICT predisposes children to future injury. Ergonomics is not included in state curriculum standards or requirements but can be supported by some of the existing standards. Some who believe that children are better off being educated early about ergonomics are taking action to bring ergonomics into elementary and secondary schools. This paper describes the process used to introduce ergonomics into two elementary schools in two different states by initiators with two different roles.

  1. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    PubMed

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
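
    For context, the conventional complex-acoustic-intensity idea the paper builds on can be written in a few lines: with co-located pressure and particle-velocity channels, the time-averaged active intensity components give the bearing. The sketch below assumes band-filtered time series as input and omits the matched-filtering stage that constitutes the paper's contribution.

      import numpy as np

      def azimuth_estimate(p, vx, vy):
          # Time-averaged active acoustic intensity components from a single AVS.
          ix = np.mean(p * vx)                   # x component of active intensity
          iy = np.mean(p * vy)                   # y component of active intensity
          # Bearing of the source relative to the sensor's x axis, in degrees.
          return np.degrees(np.arctan2(iy, ix))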

  2. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, knowledge of a regularization parameter, which can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it is of particular interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.

  3. An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore

    USGS Publications Warehouse

    Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.
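
    The calculation the program performs can be outlined in a few lines. In the sketch below, each mineral's diagnostic-peak intensity is divided by a per-mineral calibration factor (obtained from standards) and the results are normalized to 100 percent; the mineral names and factor values are made up for illustration, and peak selection and overlap corrections are not shown.

      def relative_abundances(peak_intensities, calibration_factors):
          # Scale each diagnostic-peak intensity by its calibration factor,
          # then normalize to a semiquantitative composition in percent.
          scaled = {m: peak_intensities[m] / calibration_factors[m] for m in peak_intensities}
          total = sum(scaled.values())
          return {m: 100.0 * v / total for m, v in scaled.items()}

      # e.g. two diagnostic peaks with assumed calibration factors:
      print(relative_abundances({"halite": 850.0, "nitratine": 420.0},
                                {"halite": 1.2, "nitratine": 0.9}))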

  4. An x-ray diffraction method for semiquantitative mineralogical analysis of chilean nitrate ore

    USGS Publications Warehouse

    Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.

  5. Solid particle dynamic behavior through twisted blade rows

    NASA Technical Reports Server (NTRS)

    Hamed, A.

    1982-01-01

    The particle trajectory calculations provide the essential information required for predicting the pattern and intensity of turbomachinery erosion. Consequently, the evaluation of machine performance deterioration due to erosion is extremely sensitive to the accuracy of the flow field and blade geometry representation in the trajectory computational model. A model is presented that is simple and efficient, yet versatile and general enough to be applicable to axial, radial and mixed flow machines, and to inlets, nozzles, return passages and separators. The results of the computations are presented for the particle trajectories through a row of twisted vanes in the inlet flow field. The effect of particle size on the trajectories, blade impacts, and on particle redistribution and separation is discussed.

  6. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper addresses the insufficient performance of existing computing systems for large-scale image processing, which does not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation allowed models of high-performance parallel-hierarchical processes to be produced, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images shows that real-time processing of dynamic images of various sizes can be achieved.

  7. Opportunities and challenges for the life sciences community.

    PubMed

    Kolker, Eugene; Stewart, Elizabeth; Ozdemir, Vural

    2012-03-01

    Twenty-first century life sciences have transformed into data-enabled (also called data-intensive, data-driven, or big data) sciences. They principally depend on data-, computation-, and instrumentation-intensive approaches to seek comprehensive understanding of complex biological processes and systems (e.g., ecosystems, complex diseases, environmental, and health challenges). Federal agencies including the National Science Foundation (NSF) have played and continue to play an exceptional leadership role by innovatively addressing the challenges of data-enabled life sciences. Yet even more is required not only to keep up with the current developments, but also to pro-actively enable future research needs. Straightforward access to data, computing, and analysis resources will enable true democratization of research competitions; thus investigators will compete based on the merits and broader impact of their ideas and approaches rather than on the scale of their institutional resources. This is the Final Report for Data-Intensive Science Workshops DISW1 and DISW2. The first NSF-funded Data Intensive Science Workshop (DISW1, Seattle, WA, September 19-20, 2010) overviewed the status of the data-enabled life sciences and identified their challenges and opportunities. This served as a baseline for the second NSF-funded DIS workshop (DISW2, Washington, DC, May 16-17, 2011). Based on the findings of DISW2 the following overarching recommendation to the NSF was proposed: establish a community alliance to be the voice and framework of the data-enabled life sciences. After this Final Report was finished, Data-Enabled Life Sciences Alliance (DELSA, www.delsall.org ) was formed to become a Digital Commons for the life sciences community.

  8. Opportunities and Challenges for the Life Sciences Community

    PubMed Central

    Stewart, Elizabeth; Ozdemir, Vural

    2012-01-01

    Abstract Twenty-first century life sciences have transformed into data-enabled (also called data-intensive, data-driven, or big data) sciences. They principally depend on data-, computation-, and instrumentation-intensive approaches to seek comprehensive understanding of complex biological processes and systems (e.g., ecosystems, complex diseases, environmental, and health challenges). Federal agencies including the National Science Foundation (NSF) have played and continue to play an exceptional leadership role by innovatively addressing the challenges of data-enabled life sciences. Yet even more is required not only to keep up with the current developments, but also to pro-actively enable future research needs. Straightforward access to data, computing, and analysis resources will enable true democratization of research competitions; thus investigators will compete based on the merits and broader impact of their ideas and approaches rather than on the scale of their institutional resources. This is the Final Report for Data-Intensive Science Workshops DISW1 and DISW2. The first NSF-funded Data Intensive Science Workshop (DISW1, Seattle, WA, September 19–20, 2010) overviewed the status of the data-enabled life sciences and identified their challenges and opportunities. This served as a baseline for the second NSF-funded DIS workshop (DISW2, Washington, DC, May 16–17, 2011). Based on the findings of DISW2 the following overarching recommendation to the NSF was proposed: establish a community alliance to be the voice and framework of the data-enabled life sciences. After this Final Report was finished, Data-Enabled Life Sciences Alliance (DELSA, www.delsall.org) was formed to become a Digital Commons for the life sciences community. PMID:22401659

  9. Graphics processing units in bioinformatics, computational biology and systems biology.

    PubMed

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and drawbacks of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  10. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, of which the intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining low computational cost and ease of implementation compared to other conductance-based and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.
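
    As a rough illustration of the idea described above, the sketch below injects a Gaussian noise current whose standard deviation grows exponentially with depolarization into a toy leaky-membrane update. All constants, the exponential form, and the stimulus are assumptions chosen for illustration; this is not the authors' published algorithm or their Hodgkin-Huxley compartmental model.

        import numpy as np

        def noise_current(v_m, rng, sigma0=0.05e-9, v_rest=-78e-3, k=25e-3):
            """Gaussian noise current (A) whose intensity grows exponentially
            with depolarization above rest, mimicking voltage-dependent channel noise."""
            sigma = sigma0 * np.exp((v_m - v_rest) / k)   # illustrative relationship
            return rng.normal(0.0, sigma)

        # Toy leaky-membrane integration with the stochastic term added each step.
        rng = np.random.default_rng(0)
        dt, c_m, g_leak, v_rest = 1e-6, 2e-12, 30e-9, -78e-3   # s, F, S, V (illustrative)
        v = v_rest
        for step in range(1000):
            i_stim = 0.1e-9 if step > 200 else 0.0             # simple stimulus pulse
            i_noise = noise_current(v, rng)
            dv = (-(g_leak * (v - v_rest)) + i_stim + i_noise) * dt / c_m
            v += dv
        print(f"membrane potential after 1 ms: {v*1e3:.2f} mV")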

  11. surf3d: A 3-D finite-element program for the analysis of surface and corner cracks in solids subjected to mode-1 loadings

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1993-01-01

    A computer program, surf3d, that uses the 3-D finite-element method to calculate stress-intensity factors for surface, corner, and embedded cracks in finite-thickness plates with and without circular holes was developed. The cracks are assumed to be either elliptic or part-elliptic in shape. The program models the solid with eight-noded hexahedral elements and uses a skyline storage scheme and solver. The stress-intensity factors are evaluated using the force method, the crack-opening-displacement method, and the 3-D virtual crack closure method. The manual describes the input to and output of the surf3d program, demonstrates the use of the program, and describes the calculation of the stress-intensity factors. Several examples with sample data files are included with the manual. To facilitate modeling of the user's crack configuration and loading, a companion preprocessor program called gensurf was also developed; it is a three-dimensional mesh generator that requires minimal input and builds a complete data file for surf3d. The program surf3d is operational on Unix machines such as the CRAY Y-MP, CRAY-2, and Convex C-220.

  12. Fluorescence-based enhanced reality (FLER) for real-time estimation of bowel perfusion in minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Diana, Michele

    2016-03-01

    Pre-anastomotic bowel perfusion is a key factor for a successful healing process. Clinical judgment has limited accuracy in evaluating intestinal microperfusion. Fluorescence videography is a promising tool for image-guided intraoperative assessment of bowel perfusion at the future anastomotic site in minimally invasive procedures. The standard configuration for fluorescence videography includes a near-infrared endoscope able to detect the signal emitted by a fluorescent dye, most frequently Indocyanine Green (ICG), which is administered by intravenous injection. Fluorescence intensity is proportional to the amount of fluorescent dye diffusing into the tissue and is consequently a surrogate marker of tissue perfusion. However, fluorescence intensity alone remains a subjective measure, and an integrated computer-based analysis of the evolution of the fluorescence signal over time is required to obtain quantitative data. We have developed a solution integrating computer-based analysis for intraoperative evaluation of the optimal resection site, based on bowel perfusion as determined by the dynamic fluorescence intensity. The software can generate a "virtual perfusion cartography" based on the fluorescence time-to-peak. The virtual perfusion cartography can be overlaid onto real-time laparoscopic images to obtain the enhanced-reality effect. We have called this approach FLuorescence-based Enhanced Reality (FLER). This manuscript describes the stepwise development of the FLER concept.
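
    A minimal sketch of the time-to-peak idea, assuming the fluorescence video is available as a (frames, height, width) NumPy array; the synthetic bolus model and frame rate are placeholders, and the actual FLER software is not reproduced here.

        import numpy as np

        def time_to_peak_map(video, frame_rate_hz):
            """video: array of shape (n_frames, h, w) with fluorescence intensity.
            Returns the per-pixel time (s) at which intensity reaches its maximum."""
            peak_frame = np.argmax(video, axis=0)          # frame index of peak per pixel
            return peak_frame / float(frame_rate_hz)       # convert to seconds

        # Synthetic example: well-perfused pixels peak early, poorly perfused ones peak late.
        rng = np.random.default_rng(1)
        t = np.arange(120)[:, None, None]                  # 120 frames
        delay = rng.integers(10, 80, size=(1, 64, 64))     # per-pixel arrival delay
        video = np.exp(-0.5 * ((t - delay) / 5.0) ** 2)    # Gaussian bolus per pixel
        ttp = time_to_peak_map(video, frame_rate_hz=30.0)
        print("median time-to-peak:", float(np.median(ttp)), "s")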

  13. A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.

    PubMed

    Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi

    2015-12-01

    Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by pairs. Further, in view of the dynamic characteristics of the changing data, we give the concept of influence degree to measure the coincidence of the current BN with new data, and then propose the corresponding two-pass MapReduce-based algorithms for BNs incremental learning. Experimental results show the efficiency, scalability, and effectiveness of our methods.
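
    The sketch below illustrates, on a single machine, two of the ingredients named above: counting the marginal frequencies needed for scoring in a map/reduce-style pass, and a minimum-description-length family score built from those counts. The data, variable arities, and bookkeeping are illustrative assumptions, not the paper's MapReduce implementation.

        import math
        from collections import Counter

        def mdl_family_score(samples, child, parents, arities):
            """MDL score contribution of one node given a candidate parent set.
            samples: list of dicts {variable: value}. Lower is better."""
            # "map" step: emit one key per sample for the (parent config, child value);
            # "reduce" step: Counter sums the emitted counts.
            nij = Counter(tuple(s[p] for p in parents) for s in samples)
            nijk = Counter((tuple(s[p] for p in parents), s[child]) for s in samples)
            loglik = sum(n * math.log(n / nij[cfg]) for (cfg, _), n in nijk.items())
            n_params = (arities[child] - 1) * math.prod(arities[p] for p in parents)
            penalty = 0.5 * math.log(len(samples)) * n_params
            return -loglik + penalty

        # Tiny example: does A look like a parent of B?
        data = [{"A": a, "B": (a + n) % 2} for a in (0, 1) for n in (0, 0, 0, 1) for _ in range(25)]
        arities = {"A": 2, "B": 2}
        print("MDL(B | {})  =", round(mdl_family_score(data, "B", (), arities), 2))
        print("MDL(B | {A}) =", round(mdl_family_score(data, "B", ("A",), arities), 2))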

  14. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high-speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next-generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  15. An Innovative Approach to Bridge a Skill Gap and Grow a Workforce Pipeline: The Computer System, Cluster, and Networking Summer Institute

    DOE PAGES

    Connor, Carolyn Marie; Jacobson, Andree Lars; Bonnie, Amanda Marie; ...

    2016-11-01

    Sustainable and effective computing infrastructure depends critically on the skills and expertise of domain scientists and of committed and well-trained advanced computing professionals. But, in its ongoing High Performance Computing (HPC) work, Los Alamos National Laboratory noted a persistent shortage of well-prepared applicants, particularly for entry-level cluster administration, file systems administration, and high-speed networking positions. Further, based upon recruiting efforts and interactions with universities graduating students in related majors of interest (e.g., computer science (CS)), there has been a long-standing skillset gap, as focused training in HPC topics is typically lacking or absent in undergraduate and even in many graduate programs. Given that the effective operation and use of HPC systems requires specialized and often advanced training, that there is a recognized HPC skillset gap, and that there is intense global competition for computing and computational science talent, there is a long-standing and critical need for innovative approaches to help bridge the gap and create a well-prepared, next-generation HPC workforce. Our paper places this need in the context of the HPC work and workforce requirements at Los Alamos National Laboratory (LANL) and presents one such innovative program conceived to address the need, bridge the gap, and grow an HPC workforce pipeline at LANL. The Computer System, Cluster, and Networking Summer Institute (CSCNSI) completed its 10th year in 2016. The story of the CSCNSI and its evolution is detailed below with a description of the design of its Boot Camp, and a summary of its success and some key factors that have enabled that success.

  16. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
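
    A minimal sketch of the surrogate idea, assuming scikit-learn is available: a Gaussian-process surrogate is trained on a handful of "expensive" runs and then used for cheap uncertainty propagation. The test function, kernel, and input distribution are placeholders, not the challenge-problem model.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def expensive_simulation(x):
            # Stand-in for a long-running physics code: one scalar output per input.
            return np.sin(3.0 * x) + 0.3 * x ** 2

        # Fit the surrogate to a small design of "simulation" runs.
        x_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
        y_train = expensive_simulation(x_train).ravel()
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(x_train, y_train)

        # Cheap uncertainty propagation: push many samples of an uncertain input
        # through the surrogate instead of the expensive code.
        rng = np.random.default_rng(2)
        x_samples = rng.normal(1.0, 0.2, size=(5000, 1))
        y_mean, y_std = gp.predict(x_samples, return_std=True)
        print(f"output mean ~ {y_mean.mean():.3f}, output spread ~ {y_mean.std():.3f}")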

  17. Distributed Two-Dimensional Fourier Transforms on DSPs with an Application for Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey Scott

    2006-01-01

    Many applications of two-dimensional Fourier transforms require fixed timing as defined by system specifications. One example is image-based wavefront sensing. The image-based approach has many benefits, yet it is a computationally intensive solution for adaptive optic correction, where optical adjustments are made in real time to correct for external (atmospheric turbulence) and internal (stability) aberrations that cause image degradation. Phase retrieval, a type of image-based wavefront sensing, uses numerous two-dimensional Fast Fourier Transforms (FFTs). To meet the required real-time specifications, a distributed system is needed, and the 2-D FFT thus necessitates all-to-all communication among the computational nodes. The 1-D floating-point FFT is very efficient on a digital signal processor (DSP). This study presents several architectures, and analyses of them, that address the all-to-all communication with DSPs. Emphasis of this research is on a 64-node cluster of Analog Devices TigerSharc TS-101 DSPs.
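
    A minimal single-process sketch of the decomposition that distributed 2-D FFTs rely on: 1-D FFTs along the locally held rows, an all-to-all exchange (modeled here by a transpose), then 1-D FFTs along the other dimension. On a DSP cluster the transpose is realized as the all-to-all communication step; the NumPy version below only checks the math.

        import numpy as np

        def fft2_by_rows_then_columns(image):
            """2-D FFT built from 1-D FFTs plus a transpose.
            In a distributed setting each node holds a block of rows and the
            transpose becomes the all-to-all exchange among nodes."""
            stage1 = np.fft.fft(image, axis=1)        # per-node row FFTs
            exchanged = stage1.T                      # all-to-all communication step
            stage2 = np.fft.fft(exchanged, axis=1)    # row FFTs again after exchange
            return stage2.T                           # back to the original orientation

        rng = np.random.default_rng(3)
        img = rng.standard_normal((256, 256))
        assert np.allclose(fft2_by_rows_then_columns(img), np.fft.fft2(img))
        print("row/transpose/row decomposition matches np.fft.fft2")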

  18. FPGA design of correlation-based pattern recognition

    NASA Astrophysics Data System (ADS)

    Jridi, Maher; Alfalou, Ayman

    2017-05-01

    Optical/digital pattern recognition and tracking based on optical/digital correlation are well-known techniques to detect, identify, and localize a target object in a scene. Despite the limited number of operations required by the correlation scheme, its computational time and resource demands are relatively high. The most computationally intensive operations required by correlation are the transformations from the spatial to the spectral domain and back from the spectral to the spatial domain. Furthermore, these transformations are used in optical/digital encryption schemes such as the double random phase encryption (DRPE). In this paper, we present a VLSI architecture for the correlation scheme based on the fast Fourier transform (FFT). One interesting feature of the proposed scheme is its ability to stream image processing in order to perform correlation for video sequences. A trade-off between hardware consumption and the robustness of the correlation can be made in order to understand the limitations of implementing correlation on reconfigurable and portable platforms. Experimental results obtained from HDL simulations and an FPGA prototype demonstrate the advantages of the proposed scheme.
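
    A minimal sketch of the spatial-to-spectral-and-back pipeline that dominates the cost: correlation of a scene with a reference template via forward FFTs, a conjugate multiplication, and an inverse FFT. The synthetic scene and template sizes are assumptions; the FPGA streaming design itself is not modeled.

        import numpy as np

        def correlate_fft(scene, template):
            """Cross-correlation of a scene with a zero-padded template via the FFT.
            The correlation peak location indicates where the target sits in the scene."""
            padded = np.zeros_like(scene)
            padded[:template.shape[0], :template.shape[1]] = template
            spectrum = np.fft.fft2(scene) * np.conj(np.fft.fft2(padded))
            return np.abs(np.fft.ifft2(spectrum))

        # Synthetic test: embed the template in a noisy scene and locate it.
        rng = np.random.default_rng(4)
        template = rng.standard_normal((16, 16))
        scene = 0.5 * rng.standard_normal((128, 128))
        scene[40:56, 70:86] += template
        corr = correlate_fft(scene, template)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        print("correlation peak at", peak, "(target placed at (40, 70))")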

  19. Ventral-stream-like shape representation: from pixel intensity values to trainable object-selective COSFIRE models

    PubMed Central

    Azzopardi, George; Petkov, Nicolai

    2014-01-01

    The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses) and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective at recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms. PMID:25126068
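
    A minimal sketch of the response-combination step described above, assuming SciPy is available: part-detector response maps are blurred, shifted onto a common reference point, and combined with a weighted geometric mean. The maps, offsets, and weights below are synthetic placeholders, not a trained S-COSFIRE filter.

        import numpy as np
        from scipy.ndimage import gaussian_filter, shift

        def cosfire_like_response(part_maps, offsets, weights, blur_sigma=2.0):
            """Combine part-detector response maps: blur, shift each map so its
            expected part location moves onto the filter centre, then take a
            weighted geometric mean across parts."""
            combined = np.ones_like(part_maps[0])
            for resp, (dy, dx), w in zip(part_maps, offsets, weights):
                blurred = gaussian_filter(resp, blur_sigma)
                aligned = shift(blurred, (dy, dx), order=1, mode="constant")
                combined *= np.maximum(aligned, 1e-9) ** w   # geometric-mean factor
            return combined ** (1.0 / sum(weights))

        # Two synthetic "vertex detector" maps whose parts sit around a common centre.
        h = w = 64
        m1 = np.zeros((h, w)); m1[30, 26] = 1.0            # part located left of centre
        m2 = np.zeros((h, w)); m2[30, 38] = 1.0            # part located right of centre
        resp = cosfire_like_response([m1, m2], offsets=[(0, 6), (0, -6)], weights=[1.0, 1.0])
        print("combined response peaks at", np.unravel_index(np.argmax(resp), resp.shape))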

  20. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    PubMed

    OʼHara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merging of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach and a standard for integrating the sciences with real client data to offer solutions for improving patient care.

  1. Effect of crack curvature on stress intensity factors for ASTM standard compact tension specimens

    NASA Technical Reports Server (NTRS)

    Alam, J.; Mendelson, A.

    1983-01-01

    The stress intensity factors (SIF) are calculated using the method of lines for the compact tension specimen in tensile and shear loading for curved crack fronts. For the purely elastic case, it was found that as the crack front curvature increases, the SIF value at the center of the specimen decreases while increasing at the surface. For the higher values of crack front curvatures, the maximum value of the SIF occurs at an interior point located adjacent to the surface. A thickness average SIF was computed for parabolically applied shear loading. These results were used to assess the requirements of ASTM standards E399-71 and E399-81 on the shape of crack fronts. The SIF is assumed to reflect the average stress environment near the crack edge.

  2. Big Data Ecosystems Enable Scientific Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Critchlow, Terence J.; Kleese van Dam, Kerstin

    Over the past 5 years, advances in experimental, sensor and computational technologies have driven the exponential growth in the volumes, acquisition rates, variety and complexity of scientific data. As noted by Hey et al. in their 2009 e-book The Fourth Paradigm, this availability of large quantities of scientifically meaningful data has given rise to a new scientific methodology - data intensive science. Data intensive science is the ability to formulate and evaluate hypotheses using data and analysis to extend, complement and, at times, replace experimentation, theory, or simulation. This new approach to science no longer requires scientists to interact directly with the objects of their research; instead they can utilize digitally captured, reduced, calibrated, analyzed, synthesized and visualized results - allowing them to carry out 'experiments' in data.

  3. Mobile computing device configured to compute irradiance, glint, and glare of the sun

    DOEpatents

    Gupta, Vipin P; Ho, Clifford K; Khalsa, Siri Sahib

    2014-03-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. A mobile computing device includes at least one camera that captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed by the mobile computing device.

  4. Capturing Requirements for Autonomous Spacecraft with Autonomy Requirements Engineering

    NASA Astrophysics Data System (ADS)

    Vassev, Emil; Hinchey, Mike

    2014-08-01

    The Autonomy Requirements Engineering (ARE) approach has been developed by Lero - the Irish Software Engineering Research Center within the mandate of a joint project with ESA, the European Space Agency. The approach is intended to help engineers develop missions for unmanned exploration, often with limited or no human control. Such robotics space missions rely on the most recent advances in automation and robotic technologies where autonomy and autonomic computing principles drive the design and implementation of unmanned spacecraft [1]. To tackle the integration and promotion of autonomy in software-intensive systems, ARE combines generic autonomy requirements (GAR) with goal-oriented requirements engineering (GORE). Using this approach, software engineers can determine what autonomic features to develop for a particular system (e.g., a space mission) as well as what artifacts that process might generate (e.g., goals models, requirements specification, etc.). The inputs required by this approach are the mission goals and the domain-specific GAR reflecting specifics of the mission class (e.g., interplanetary missions).

  5. Energy intensity of computer manufacturing: hybrid assessment combining process and economic input-output methods.

    PubMed

    Williams, Eric

    2004-11-15

    The total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively. This indicates that computer manufacturing is energy intensive: the ratio of fossil fuel use to product weight is 11, an order of magnitude larger than the factor of 1-2 for many other manufactured goods. This high energy intensity of manufacturing, combined with rapid turnover in computers, results in an annual life cycle energy burden that is surprisingly high: about 2600 MJ per year, 1.3 times that of a refrigerator. In contrast with many home appliances, life cycle energy use of a computer is dominated by production (81%) as opposed to operation (19%). Extension of usable lifespan (e.g. by reselling or upgrading) is thus a promising approach to mitigating energy impacts as well as other environmental burdens associated with manufacturing and disposal.
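
    A small back-of-the-envelope check of the ratios quoted above. The roughly 24 kg product weight and 3-year service life are assumptions used only to reproduce the order of magnitude; they are not figures reported by the study.

        # Figures quoted in the abstract.
        production_energy_mj = 6400.0       # total energy to produce desktop + CRT
        fossil_fuel_kg = 260.0              # fossil fuels used in production
        annual_life_cycle_mj = 2600.0       # quoted annual life-cycle energy burden

        # Assumed values for illustration only (not from the study).
        product_weight_kg = 24.0            # rough desktop + 17-in. CRT weight
        service_life_years = 3.0            # assumed turnover period

        print("fossil fuel / product weight ratio:",
              round(fossil_fuel_kg / product_weight_kg, 1))            # ~11
        print("production energy per year of use (MJ):",
              round(production_energy_mj / service_life_years, 0))
        print("production share of quoted annual burden:",
              f"{production_energy_mj / service_life_years / annual_life_cycle_mj:.0%}")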

  6. Severe Traumatic Brain Injury at a Tertiary Referral Center in Tanzania: Epidemiology and Adherence to Brain Trauma Foundation Guidelines.

    PubMed

    Smart, Luke R; Mangat, Halinder S; Issarow, Benson; McClelland, Paul; Mayaya, Gerald; Kanumba, Emmanuel; Gerber, Linda M; Wu, Xian; Peck, Robert N; Ngayomela, Isidore; Fakhar, Malik; Stieg, Philip E; Härtl, Roger

    2017-09-01

    Severe traumatic brain injury (TBI) is a major cause of death and disability worldwide. Prospective TBI data from sub-Saharan Africa are sparse. This study examines epidemiology and explores management of patients with severe TBI and adherence to Brain Trauma Foundation Guidelines at a tertiary care referral hospital in Tanzania. Patients with severe TBI hospitalized at Bugando Medical Centre were recorded in a prospective registry including epidemiologic, clinical, treatment, and outcome data. Between September 2013 and October 2015, 371 patients with TBI were admitted; 33% (115/371) had severe TBI. Mean age was 32.0 years ± 20.1, and most patients were male (80.0%). Vehicular injuries were the most common cause of injury (65.2%). Approximately half of the patients (47.8%) were hospitalized on the day of injury. Computed tomography of the brain was performed in 49.6% of patients, and 58.3% were admitted to the intensive care unit. Continuous arterial blood pressure monitoring and intracranial pressure monitoring were not performed in any patient. Of patients with severe TBI, 38.3% received hyperosmolar therapy, and 35.7% underwent craniotomy. The 2-week mortality was 34.8%. Mortality of patients with severe TBI at Bugando Medical Centre, Tanzania, is approximately twice that in high-income countries. Intensive care unit care, computed tomography imaging, and continuous arterial blood pressure and intracranial pressure monitoring are underused or unavailable in the tertiary referral hospital setting. Improving outcomes after severe TBI will require concerted investment in prehospital care and improvement in availability of intensive care unit resources, computed tomography, and expertise in multidisciplinary care. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Quantum key distribution with an efficient countermeasure against correlated intensity fluctuations in optical pulses

    NASA Astrophysics Data System (ADS)

    Yoshino, Ken-ichiro; Fujiwara, Mikio; Nakata, Kensuke; Sumiya, Tatsuya; Sasaki, Toshihiko; Takeoka, Masahiro; Sasaki, Masahide; Tajima, Akio; Koashi, Masato; Tomita, Akihisa

    2018-03-01

    Quantum key distribution (QKD) allows two distant parties to share secret keys with proven security even in the presence of an eavesdropper with unbounded computational power. Recently, GHz-clock decoy QKD systems have been realized by employing ultrafast optical communication devices. However, security loopholes of high-speed systems have not yet been fully explored. Here we point out a security loophole at the transmitter of GHz-clock QKD, which is a common problem in high-speed QKD systems using practical bandwidth-limited devices. We experimentally observe the inter-pulse intensity correlation and modulation-pattern-dependent intensity deviation in a practical high-speed QKD system. Such correlation violates the assumption of most security theories. We also provide a countermeasure that does not require significant hardware changes and can generate keys secure over 100 km of fiber transmission. Our countermeasure is simple, effective and applicable to a wide range of high-speed QKD systems, and thus paves the way to realizing ultrafast and security-certified commercial QKD systems.

  8. Three-dimensional digital mapping of the optic nerve head cupping in glaucoma

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Ramirez, Manuel; Morales, Jose

    1992-08-01

    Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks, including feature extraction, registration by cepstral analysis, and correction for intensity variations, are performed. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching technique used to obtain the translational differences at every step employs cepstral analysis, with a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography produced by this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the criteria for the diagnosis of glaucoma.

  9. Toward nonlinear magnonics: Intensity-dependent spin-wave switching in insulating side-coupled magnetic stripes

    NASA Astrophysics Data System (ADS)

    Sadovnikov, A. V.; Odintsov, S. A.; Beginin, E. N.; Sheshukova, S. E.; Sharaevskii, Yu. P.; Nikitov, S. A.

    2017-10-01

    We demonstrate that nonlinear spin-wave transport in two laterally parallel magnetic stripes exhibits intensity-dependent power exchange between the adjacent spin-wave channels. By means of the Brillouin light scattering technique, we investigate collective nonlinear spin-wave dynamics in the presence of magnetodipolar coupling. The nonlinear intensity-dependent effect reveals itself in the spin-wave mode transformation and a differential nonlinear spin-wave phase shift in each adjacent magnetic stripe. The proposed analytical theory, based on coupled Ginzburg-Landau equations, predicts a geometry design that reduces the power required for all-magnonic switching. Very good agreement between calculation and experiment was found. In addition, micromagnetic and finite-element approaches have been used independently to study the nonlinear behavior of spin waves in adjacent stripes and the nonlinear transformation of the spatial profiles of spin-wave modes. Our results show that the proposed spin-wave coupling mechanism provides a basis for nonlinear magnonic circuits and opens perspectives for an all-magnonic computing architecture.

  10. Ignition Prediction of Pressed HMX based on Hotspot Analysis Under Shock Pulse Loading

    NASA Astrophysics Data System (ADS)

    Kim, Seokpum; Miller, Christopher; Horie, Yasuyuki; Molek, Christopher; Welle, Eric; Zhou, Min

    The ignition behavior of pressed HMX under shock pulse loading with a flyer is analyzed using a cohesive finite element method (CFEM), which accounts for large deformation, microcracking, frictional heating, and thermal conduction. The simulations account for the controlled loading of thin-flyer shock experiments with flyer velocities between 1.7 and 4.0 km/s. The study focuses on the computational prediction of the ignition threshold using the James criterion, which involves the loading intensity and the energy imparted to the material. The predicted thresholds are in good agreement with measurements from shock experiments. In particular, it is found that grain size significantly affects the ignition sensitivity of the materials, with smaller sizes leading to lower energy thresholds required for ignition. In addition, significant stress attenuation is observed under high-intensity pulse loading as compared to low-intensity pulse loading, which affects the density of the hotspot distribution. The microstructure-performance relations obtained can be used to design explosives with tailored attributes and safety envelopes.

  11. A strategy for selecting data mining techniques in metabolomics.

    PubMed

    Banimustafa, Ahmed Hmaidan; Hardy, Nigel W

    2012-01-01

    There is a general agreement that the development of metabolomics depends not only on advances in chemical analysis techniques but also on advances in computing and data analysis methods. Metabolomics data usually requires intensive pre-processing, analysis, and mining procedures. Selecting and applying such procedures requires attention to issues including justification, traceability, and reproducibility. We describe a strategy for selecting data mining techniques which takes into consideration the goals of data mining techniques on the one hand, and the goals of metabolomics investigations and the nature of the data on the other. The strategy aims to ensure the validity and soundness of results and promote the achievement of the investigation goals.

  12. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    PubMed Central

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-01

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration. PMID:28075343

  13. Ubiquitous computing in the military environment

    NASA Astrophysics Data System (ADS)

    Scholtz, Jean

    2001-08-01

    Increasingly people work and live on the move. To support this mobile lifestyle, especially as our work becomes more intensely information-based, companies are producing various portable and embedded information devices. The late Mark Weiser coined the term, 'ubiquitous computing' to describe an environment where computers have disappeared and are integrated into physical objects. Much industry research today is concerned with ubiquitous computing in the work and home environments. A ubiquitous computing environment would facilitate mobility by allowing information users to easily access and use information anytime, anywhere. As war fighters are inherently mobile, the question is what effect a ubiquitous computing environment would have on current military operations and doctrine. And, if ubiquitous computing is viewed as beneficial for the military, what research would be necessary to achieve a military ubiquitous computing environment? What is a vision for the use of mobile information access in a battle space? Are there different requirements for civilian and military users of this technology? What are those differences? Are there opportunities for research that will support both worlds? What type of research has been supported by the military and what areas need to be investigated? Although we don't yet have all the answers to these questions, this paper discusses the issues and presents the work we are doing to address these issues.

  14. Squid - a simple bioinformatics grid.

    PubMed

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-08-03

    BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than when working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
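
    A minimal sketch of the job-splitting idea, not Squid's actual code or API: a multi-sequence FASTA query is divided into chunks so that each grid node can run its own BLAST job, which is what yields the near-N-fold speedup reported above.

        def split_fasta(fasta_text, n_chunks):
            """Split a multi-sequence FASTA string into roughly equal chunks,
            one per worker node, without breaking individual records."""
            records = [">" + rec for rec in fasta_text.split(">") if rec.strip()]
            chunks = [[] for _ in range(n_chunks)]
            for i, rec in enumerate(records):           # round-robin assignment
                chunks[i % n_chunks].append(rec)
            return ["".join(chunk) for chunk in chunks if chunk]

        query = ">seq1\nATGCATGC\n>seq2\nGGGTTTAA\n>seq3\nCCCCAAAA\n"
        for node_id, chunk in enumerate(split_fasta(query, n_chunks=2)):
            # In a grid setting each chunk would be shipped to a worker that runs
            # its own BLAST process and returns results to the master.
            print(f"node {node_id} receives:\n{chunk}")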

  15. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    PubMed Central

    Hu, Yu-Chen

    2018-01-01

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in the downstream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers implementing DR, maintaining a balance between energy consumption cost and users' comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing, an emerging trend in engineering technology that optimizes cloud computing by driving computing capabilities to the IoT edge of the Internet, addresses bandwidth-intensive content and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence method, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed residential consumer-centric load-scheduling method can re-shape loads of home appliances in response to DR signals. Moreover, a phenomenal reduction of 13.97% in peak power consumption is achieved. PMID:29702607
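
    A heavily simplified sketch of the swarm idea: each particle encodes start hours for a few shiftable appliances, fitness sums a time-of-use energy cost and a comfort penalty for deviating from preferred start times, and positions are clipped to the allowed window. The tariff, appliance data, and weights below are assumptions, not the paper's formulation.

        import numpy as np

        rng = np.random.default_rng(5)
        price = np.array([0.08]*7 + [0.15]*9 + [0.25]*5 + [0.10]*3)   # $/kWh per hour of day
        power = np.array([1.5, 0.8, 2.0])        # kW of three shiftable appliances
        duration = np.array([2, 3, 1])           # run time in hours
        preferred = np.array([18.0, 8.0, 20.0])  # users' preferred start hours
        comfort_weight = 0.05

        def fitness(start):
            start = np.clip(np.round(start).astype(int), 0, 23 - duration)   # constraint
            cost = sum(power[i] * price[start[i]:start[i] + duration[i]].sum()
                       for i in range(len(power)))
            discomfort = comfort_weight * np.abs(start - preferred).sum()
            return cost + discomfort

        # Plain global-best PSO over 3 dimensions (one start hour per appliance).
        n_particles, n_iter = 30, 200
        x = rng.uniform(0, 23, (n_particles, 3)); v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7*v + 1.5*r1*(pbest - x) + 1.5*r2*(gbest - x)
            x = np.clip(x + v, 0, 23)
            vals = np.array([fitness(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)]
        print("best start hours:", np.round(gbest).astype(int), "cost:", round(fitness(gbest), 3))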

  16. Nonlinear Model Predictive Control for Cooperative Control and Estimation

    NASA Astrophysics Data System (ADS)

    Ru, Pengkai

    Recent advances in computational power have made it possible to perform expensive online computations for control systems. It is becoming more realistic to perform computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. As one of the most important optimization-based control approaches, model predictive control (MPC) has attracted considerable interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched, and its stability can be guaranteed in the majority of its application scenarios. However, one issue that remains with linear MPC is that it completely ignores the system's inherent nonlinearities, thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. In this dissertation we try to strike a balance by employing a state representation technique, namely the state-dependent coefficient (SDC) representation. This technique renders improved performance in terms of optimality compared to linear MPC while still keeping the problem tractable. In fact, the computational power required is bounded by a constant factor of that needed for completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.

  17. Advanced processing for high-bandwidth sensor systems

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.

    2000-11-01

    Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.

  18. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

    Recent advances in cloud computing have made it possible to access large-scale computational resources completely on demand in a rapid and efficient manner. When combined with high fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
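
    A minimal sketch of the stability screen applied to each candidate: the formation energy per atom is the DFT total energy of the compound minus the composition-weighted elemental reference energies, and negative values indicate stability against decomposition into the elements. The numbers below are placeholders, not computed results.

        def formation_energy_per_atom(e_total, composition, elemental_energy):
            """e_total: total energy (eV) of the compound cell.
            composition: {element: atom count in the cell}.
            elemental_energy: {element: reference energy per atom (eV)}."""
            n_atoms = sum(composition.values())
            e_ref = sum(n * elemental_energy[el] for el, n in composition.items())
            return (e_total - e_ref) / n_atoms

        # Placeholder numbers for a hypothetical Li-Mg-Al special quasirandom structure.
        elemental = {"Li": -1.90, "Mg": -1.51, "Al": -3.74}     # eV/atom, illustrative
        e_sqs = -89.0                                           # eV for a 32-atom cell, illustrative
        comp = {"Li": 8, "Mg": 8, "Al": 16}
        dHf = formation_energy_per_atom(e_sqs, comp, elemental)
        print(f"formation energy: {dHf:+.3f} eV/atom ({'stable' if dHf < 0 else 'unstable'} vs elements)")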

  19. Scaling effects in spiral capsule robots.

    PubMed

    Liang, Liang; Hu, Rong; Chen, Bai; Tang, Yong; Xu, Yan

    2017-04-01

    Spiral capsule robots can be applied to human gastrointestinal tracts and blood vessels. Because of significant variations in the inner diameters of the intestines as well as blood vessels, research to date has been unable to meet the requirements for medical applications. By applying the fluid dynamic equations, using the computational fluid dynamics method, to robot axial lengths ranging from 10^-5 to 10^-2 m, the operational performance indicators (axial driving force, load torque, and maximum fluid pressure on the pipe wall) of the spiral capsule robot and the fluid turbulent intensity around the robot spiral surfaces were numerically calculated in a straight rigid pipe filled with fluid. The reasonableness and validity of the calculation method adopted in this study were verified by the consistency of the values calculated by the computational fluid dynamics method with experimental values from the relevant literature. The results show that the greater the fluid turbulent intensity, the greater the impact of the fluid turbulence on the driving performance of the spiral capsule robot and the higher the energy consumption of the robot. For robots of the same size, the axial driving force, the load torque, and the maximum fluid pressure on the pipe wall of the outer spiral robot were larger than those of the inner spiral robot. For different requirements of the operating environment, a suitable kind of spiral capsule robot can be chosen. This study provides a theoretical foundation for spiral capsule robots.

  20. Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data

    NASA Technical Reports Server (NTRS)

    Kanekal, S. G.; Li, X.; Baker, D. N.; Selesnick, R. S.; Hoxie, V. C.

    2018-01-01

    An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 megaelectronvolts, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.
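
    A deliberately simplified sketch of the fitting idea: a two-parameter power-law spectrum is fit to synthetic counts by steepest descent on a least-squares loss (here in log space for conditioning). The actual model fits energy, pitch-angle, and L dependencies simultaneously with a customized algorithm, which is not reproduced here.

        import numpy as np

        # Toy "observed" proton spectrum: a power law in energy with 5% noise.
        rng = np.random.default_rng(6)
        energy = np.linspace(20.0, 600.0, 40)                        # MeV
        obs = 1.0e4 * energy**-2.0 * rng.normal(1.0, 0.05, energy.size)

        # Fit log(flux) = c - gamma*log(E) by steepest descent on the squared error.
        x, y = np.log(energy), np.log(obs)
        c, gamma, lr = 0.0, 0.0, 0.02
        for _ in range(20000):
            r = (c - gamma * x) - y                                  # residuals in log space
            grad_c, grad_gamma = r.mean(), -(r * x).mean()           # gradient of 0.5*mean(r^2)
            c, gamma = c - lr * grad_c, gamma - lr * grad_gamma
        print(f"fitted amplitude ~ {np.exp(c):.3g}, spectral index ~ {gamma:.3f} (true: 1e4, 2.0)")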

  1. Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data

    NASA Astrophysics Data System (ADS)

    Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.; Hoxie, V. C.; Li, X.

    2018-01-01

    An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 MeV, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.

  2. Federated data storage and management infrastructure

    NASA Astrophysics Data System (ADS)

    Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.

    2016-10-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs of at least orders of magnitude; this will require new approaches in data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian / CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and Grid for the ALICE and ATLAS experiments. We will present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within National Academic facilities for High Energy and Nuclear Physics as well as for other data-intensive science applications, such as bio-informatics.

  3. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
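
    The scalability analysis mentioned above uses Amdahl's law, S(n) = 1 / ((1 - p) + p/n) for parallel fraction p on n cores; the sketch below evaluates it for a few illustrative values of p (a 12-fold speedup on 12 cores implies p close to 1). These fractions are illustrative, not the paper's measured values.

        def amdahl_speedup(parallel_fraction, n_cores):
            """Predicted speedup for a code whose parallelizable share is parallel_fraction."""
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

        # Illustrative parallel fractions only; not measured values from the study.
        for p in (0.90, 0.99, 0.999):
            print(f"p = {p:.3f}:",
                  " ".join(f"{n:>3} cores -> {amdahl_speedup(p, n):5.1f}x" for n in (12, 48, 192)))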

  4. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.

  5. DIaaS: Data-Intensive workflows as a service - Enabling easy composition and deployment of data-intensive workflows on Virtual Research Environments

    NASA Astrophysics Data System (ADS)

    Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.

    2016-12-01

    We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The backbone of the DIaaS model is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems, which combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based executions of an Asterism workflow are managed by dispel4py, while the data movement between different e-Infrastructures and the coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to run data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for providing continuous integration (automated image builds) and image storing and sharing. In this model, all required software (workflow systems and execution engines) for running scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as a backend of VREs or Scientific Gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient noise cross-correlation application (Figure 1). The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The application is submitted via Pegasus (Container1), and Phase1 and Phase2 are executed in the MPI (Container2) and Storm (Container3) clusters, respectively. Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner, reducing the engineering time and computational cost.

  6. SAR processing in the cloud for oil detection in the Arctic

    NASA Astrophysics Data System (ADS)

    Garron, J.; Stoner, C.; Meyer, F. J.

    2016-12-01

    A new world of opportunity is being thawed from the ice of the Arctic, driven by decreasing persistent Arctic sea-ice cover and increases in shipping, tourism, and natural resource development. Tools that can automatically monitor key sea ice characteristics and potential oil spills are essential for safe passage in these changing waters. Synthetic aperture radar (SAR) data can be used to discriminate sea ice types and oil on the ocean surface and also for feature tracking. Additionally, SAR can image the earth through the night and most weather conditions. SAR data is volumetrically large and requires significant computing power to manipulate. Algorithms designed to identify key environmental features, like oil spills, in SAR imagery require secondary processing and are computationally intensive, which can functionally limit their application in a real-time setting. Cloud processing is designed to manage big data and big data processing jobs by means of small cycles of off-site computations, eliminating up-front hardware costs. Pairing SAR data with cloud processing has allowed us to create and solidify a processing pipeline for SAR data products in the cloud to compare operational algorithms' efficiency and effectiveness when run using an Alaska Satellite Facility (ASF) defined Amazon Machine Image (AMI). The products created from this secondary processing were compared to determine which algorithm was most accurate in Arctic feature identification, and what operational conditions were required to produce the results on the ASF defined AMI. Results will be used to inform a series of recommendations to oil-spill response data managers and SAR users interested in expanding their analytical computing power.

  7. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
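
    Because each resampling or permutation iteration is computationally independent, the basic analysis parallelizes trivially; the generic Python sketch below illustrates the pattern with a simple permutation test, using local multiprocessing as a stand-in for cluster or cloud workers. It is not the authors' microarray code.

      # Generic sketch: permutation iterations are independent, so they map directly
      # onto cluster or cloud workers; multiprocessing stands in for either here.
      import numpy as np
      from multiprocessing import Pool

      def one_permutation(args):
          seed, x, y = args
          rng = np.random.default_rng(seed)
          perm = rng.permutation(len(y))
          # test statistic: difference of group means under permuted labels
          return x[y[perm] == 1].mean() - x[y[perm] == 0].mean()

      def permutation_pvalue(x, y, n_perm=10000, workers=8):
          observed = x[y == 1].mean() - x[y == 0].mean()
          with Pool(workers) as pool:
              null = pool.map(one_permutation, [(s, x, y) for s in range(n_perm)])
          return float(np.mean(np.abs(null) >= abs(observed)))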

  8. Mechanistic experimental pain assessment in computer users with and without chronic musculoskeletal pain.

    PubMed

    Ge, Hong-You; Vangsgaard, Steffen; Omland, Øyvind; Madeleine, Pascal; Arendt-Nielsen, Lars

    2014-12-06

    Musculoskeletal pain from the upper extremity and shoulder region is commonly reported by computer users. However, the functional status of central pain mechanisms, i.e., central sensitization and conditioned pain modulation (CPM), has not been investigated in this population. The aim was to evaluate sensitization and CPM in computer users with and without chronic musculoskeletal pain. Pressure pain threshold (PPT) mapping in the neck-shoulder (15 points) and the elbow (12 points) was assessed together with PPT measurement at mid-point in the tibialis anterior (TA) muscle among 47 computer users with chronic pain in the upper extremity and/or neck-shoulder pain (pain group) and 17 pain-free computer users (control group). Induced pain intensities and profiles over time were recorded using a 0-10 cm electronic visual analogue scale (VAS) in response to different levels of pressure stimuli on the forearm with a new technique of dynamic pressure algometry. The efficiency of CPM was assessed using cuff-induced pain as conditioning pain stimulus and PPT at TA as test stimulus. The demographics, job seniority and number of working hours/week using a computer were similar between groups. The PPTs measured at all 15 points in the neck-shoulder region were not significantly different between groups. There were no significant differences between groups in either PPTs or pain intensity induced by dynamic pressure algometry. No significant difference in PPT was observed in TA between groups. During CPM, a significant increase in PPT at TA was observed in both groups (P < 0.05) without significant differences between groups. For the chronic pain group, higher clinical pain intensity, lower PPT values from the neck-shoulder and higher pain intensity evoked by the roller were all correlated with less efficient descending pain modulation (P < 0.05). This suggests that the excitability of the central pain system is normal in a large group of computer users with low pain intensity chronic upper extremity and/or neck-shoulder pain and that increased excitability of the pain system cannot explain the reported pain. However, computer users with higher pain intensity and lower PPTs were found to have decreased efficiency in descending pain modulation.

  9. Polycyclic Aromatic Hydrocarbons with Aliphatic Sidegroups: Intensity Scaling for the C-H Stretching Modes and Astrophysical Implications

    NASA Astrophysics Data System (ADS)

    Yang, X. J.; Li, Aigen; Glaser, R.; Zhong, J. X.

    2017-03-01

    The so-called unidentified infrared emission (UIE) features at 3.3, 6.2, 7.7, 8.6, and 11.3 μm ubiquitously seen in a wide variety of astrophysical regions are generally attributed to polycyclic aromatic hydrocarbon (PAH) molecules. Astronomical PAHs may have an aliphatic component, as revealed by the detection in many UIE sources of the aliphatic C-H stretching feature at 3.4 μm. The ratio of the observed intensity of the 3.4 μm feature to that of the 3.3 μm aromatic C-H feature allows one to estimate the aliphatic fraction of the UIE carriers. This requires knowledge of the intrinsic oscillator strengths of the 3.3 μm aromatic C-H stretch (A_3.3) and the 3.4 μm aliphatic C-H stretch (A_3.4). Lacking experimental data on A_3.3 and A_3.4 for the UIE candidate materials, one often has to rely on quantum-chemical computations. Although the second-order Møller-Plesset (MP2) perturbation theory with a large basis set is more accurate than the B3LYP density functional theory, MP2 is computationally very demanding and impractical for large molecules. Based on methylated PAHs, we show here that, by scaling the band strengths computed at an inexpensive level (e.g., B3LYP/6-31G*), we are able to obtain band strengths as accurate as those computed at far more expensive levels (e.g., MP2/6-311+G(3df,3pd)). We calculate the model spectra of methylated PAHs and their cations excited by starlight of different spectral shapes and intensities. We find that (I_3.4/I_3.3)_mod, the ratio of the model intensity of the 3.4 μm feature to that of the 3.3 μm feature, is insensitive to the spectral shape and intensity of the exciting starlight. We derive a straightforward relation for determining the aliphatic fraction of the UIE carriers (i.e., the ratio of the number of C atoms in aliphatic units N_C,ali to that in aromatic rings N_C,aro) from the observed band ratios (I_3.4/I_3.3)_obs: N_C,ali/N_C,aro ≈ 0.57 × (I_3.4/I_3.3)_obs for neutrals and N_C,ali/N_C,aro ≈ 0.26 × (I_3.4/I_3.3)_obs for cations.
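
    As a worked example of the relations quoted above, the short Python snippet below converts an observed 3.4 μm / 3.3 μm band ratio into an aliphatic carbon fraction; the band ratio value is illustrative only.

      # Worked example of the relations quoted in the abstract; the observed band
      # ratio used here is illustrative, not a measurement.
      def aliphatic_fraction(ratio_34_33, charge="neutral"):
          """N_C,ali / N_C,aro from the observed 3.4/3.3 micron band ratio."""
          scale = 0.57 if charge == "neutral" else 0.26   # 0.26 applies to cations
          return scale * ratio_34_33

      print(aliphatic_fraction(0.25))             # neutral PAHs: ~0.14
      print(aliphatic_fraction(0.25, "cation"))   # PAH cations:  ~0.065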

  10. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range between a few milliseconds up to approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built and tested by NASA Goddard Space Flight Center (GSFC) in order to meet these challenging requirements. The IPE is a small-size, low-power, high-performance computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance that was measured during end-to-end system testing.

  11. Rapid extraction of image texture by co-occurrence using a hybrid data structure

    NASA Astrophysics Data System (ADS)

    Clausi, David A.; Zhao, Yongping

    2002-07-01

    Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when the statistics are applied. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers because, to achieve preferred computational speeds, the list must be kept sorted. An improvement on the GLCLL is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
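
    A minimal Python sketch of the underlying idea follows: accumulate only the non-zero co-occurring grey-level pairs in a hash table and compute the statistics over those entries alone. It illustrates the hash-based storage concept, not the paper's C implementation of the hybrid hash table/linked list.

      # Minimal sketch: store only non-zero co-occurring grey-level pairs in a hash
      # table (dict) and compute texture statistics over those entries alone.
      from collections import defaultdict
      import numpy as np

      def cooccurrence_features(window, dx=1, dy=0):
          counts = defaultdict(int)
          h, w = window.shape
          for i in range(h - dy):
              for j in range(w - dx):
                  counts[(window[i, j], window[i + dy, j + dx])] += 1
          total = sum(counts.values())
          contrast = sum(((a - b) ** 2) * c / total for (a, b), c in counts.items())
          energy = sum((c / total) ** 2 for c in counts.values())
          return contrast, energy

      window = np.random.randint(0, 16, size=(32, 32))   # 16 grey levels, illustrative
      print(cooccurrence_features(window))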

  12. Adaptive control of turbulence intensity is accelerated by frugal flow sampling.

    PubMed

    Quinn, Daniel B; van Halder, Yous; Lentink, David

    2017-11-01

    The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed at the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in less than 12 iterations within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance. © 2017 The Author(s).
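
    A schematic Python sketch of an online Newton-Raphson update toward a target turbulence intensity is shown below; read_ti() and set_amplitude() are hypothetical placeholders for the anemometer reading and the turbulence-generating actuator, and the step sizes are illustrative rather than the authors' settings.

      # Schematic sketch of an online Newton-Raphson update toward a target
      # turbulence intensity; read_ti and set_amplitude are placeholders for the
      # anemometer and the tunnel's turbulence-generating actuator.
      def tune_turbulence(target, read_ti, set_amplitude, u0=0.5,
                          max_iter=12, tol=0.01, du=0.02):
          u = u0
          for _ in range(max_iter):
              set_amplitude(u)
              ti = read_ti()
              if abs(ti - target) < tol:
                  break
              set_amplitude(u + du)            # small probe for a finite-difference slope
              slope = (read_ti() - ti) / du
              u -= (ti - target) / slope       # Newton-Raphson step
          return u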

  13. X-ray beam-shaping via deformable mirrors: surface profile and point spread function computation for Gaussian beams using physical optics.

    PubMed

    Spiga, D

    2018-01-01

    X-ray mirrors with high focusing performances are commonly used in different sectors of science, such as X-ray astronomy, medical imaging and synchrotron/free-electron laser beamlines. While deformations of the mirror profile may cause degradation of the focus sharpness, a deliberate deformation of the mirror can be made to endow the focus with a desired size and distribution, via piezo actuators. The resulting profile can be characterized with suitable metrology tools and correlated with the expected optical quality via a wavefront propagation code or, sometimes, predicted using geometric optics. In the latter case and for the special class of profile deformations with monotonically increasing derivative, i.e. concave upwards, the point spread function (PSF) can even be predicted analytically. Moreover, under these assumptions, the relation can also be reversed: from the desired PSF the required profile deformation can be computed analytically, avoiding the use of trial-and-error search codes. However, the computation has been so far limited to geometric optics, which entailed some limitations: for example, mirror diffraction effects and the size of the coherent X-ray source were not considered. In this paper, the beam-shaping formalism in the framework of physical optics is reviewed, in the limit of small light wavelengths and in the case of Gaussian intensity wavefronts. Some examples of shaped profiles are also shown, aiming at turning a Gaussian intensity distribution into a top-hat one, and checks of the shaping performances computing the at-wavelength PSF by means of the WISE code are made.

  14. Ultrasonic power measurement system based on acousto-optic interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Liping; Zhu, Fulong, E-mail: zhufulong@hust.edu.cn; Duan, Ke

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.

  15. Ultrasonic power measurement system based on acousto-optic interaction.

    PubMed

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
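
    The image-processing chain described above (filtering, binarization, spot extraction, and intensity summation) can be sketched roughly as below using numpy/scipy only; the smoothing scale, threshold, and any calibration from summed light intensity to acoustic power are illustrative assumptions, not the authors' values.

      # Rough sketch of the processing chain described above: smooth, binarize,
      # label the diffraction spots, and sum the light intensity inside them.
      # Threshold and smoothing scale are illustrative only.
      import numpy as np
      from scipy.ndimage import gaussian_filter, label

      def diffraction_spot_intensity(image, rel_threshold=0.3):
          smoothed = gaussian_filter(image.astype(float), sigma=2.0)   # noise filtering
          mask = smoothed > rel_threshold * smoothed.max()             # binarization
          labels, n_spots = label(mask)                                # spot extraction
          return n_spots, smoothed[mask].sum()                         # total diffracted light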

  16. The performance analysis of three-dimensional track-before-detect algorithm based on Fisher-Tippett-Gnedenko theorem

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan

    2016-09-01

    Tracking of dim moving targets in infrared image sequences in the presence of high clutter and noise has recently been under intensive investigation. The track-before-detect (TBD) algorithm, which processes the image sequence over a number of frames before decisions on the target track and existence, is known to be especially attractive in very low SNR environments (⩽ 3 dB). In this paper, we briefly present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect targets along the viewing direction, we use 3-D TBD with multiple sensors and also rigorously analyze the detection performance (false alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. Therefore, we also establish the relationship between the pixel coordinates of each image frame and the reference coordinates.
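
    A simplified sketch of the dynamic-programming accumulation used in TBD is given below in Python for a single 2-D image sequence; the full 3-D multi-sensor merit function and the Fisher-Tippett-Gnedenko performance analysis of the paper are not reproduced.

      # Simplified TBD-DP sketch: accumulate pixel intensity along admissible
      # frame-to-frame transitions (edge wrap-around from np.roll is ignored here).
      import numpy as np

      def tbd_dp(frames, max_shift=1):
          """frames: list of 2-D intensity arrays; returns the final merit map."""
          merit = frames[0].astype(float)
          h, w = merit.shape
          for frame in frames[1:]:
              best_prev = np.full((h, w), -np.inf)
              for di in range(-max_shift, max_shift + 1):
                  for dj in range(-max_shift, max_shift + 1):
                      shifted = np.roll(np.roll(merit, di, axis=0), dj, axis=1)
                      best_prev = np.maximum(best_prev, shifted)
              merit = best_prev + frame      # DP recursion: best predecessor + new intensity
          return merit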

  17. Ultrasonic power measurement system based on acousto-optic interaction

    NASA Astrophysics Data System (ADS)

    He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan

    2016-05-01

    Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.

  18. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
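
    A toy version of the three-stage execution-time model (data transfer + queue wait + computation) used to rank candidate resources is sketched below in Python; all coefficients are placeholders, not values fitted in the paper.

      # Toy version of the three-stage model (transfer + queue + compute) used to
      # rank candidate resources; every coefficient below is a placeholder.
      def estimated_workflow_time(dataset_gb, bandwidth_gbps, queue_wait_s,
                                  iterations, secs_per_iter_per_gb):
          transfer = dataset_gb * 8.0 / bandwidth_gbps            # data movement (s)
          compute = iterations * secs_per_iter_per_gb * dataset_gb
          return transfer + queue_wait_s + compute

      sites = {
          "cluster_A": dict(bandwidth_gbps=10, queue_wait_s=600, secs_per_iter_per_gb=2.0),
          "cluster_B": dict(bandwidth_gbps=40, queue_wait_s=3600, secs_per_iter_per_gb=0.8),
      }
      best = min(sites, key=lambda s: estimated_workflow_time(500, iterations=100, **sites[s]))
      print(best)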

  19. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  20. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing as in tomographic reconstruction methods require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  1. Differences in muscle load between computer and non-computer work among office workers.

    PubMed

    Richter, J M; Mathiassen, S E; Slijper, H P; Over, E A B; Frens, M A

    2009-12-01

    Introduction of more non-computer tasks has been suggested to increase exposure variation and thus reduce musculoskeletal complaints (MSC) in computer-intensive office work. This study investigated whether muscle activity did, indeed, differ between computer and non-computer activities. Whole-day logs of input device use in 30 office workers were used to identify computer and non-computer work, using a range of classification thresholds (non-computer thresholds (NCTs)). Exposure during these activities was assessed by bilateral electromyography recordings from the upper trapezius and lower arm. Contrasts in muscle activity between computer and non-computer work were distinct but small, even at the individualised, optimal NCT. Using an average group-based NCT resulted in less contrast, even in smaller subgroups defined by job function or MSC. Thus, computer activity logs should be used cautiously as proxies of biomechanical exposure. Conventional non-computer tasks may have a limited potential to increase variation in muscle activity during computer-intensive office work.
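
    One plausible way such a non-computer threshold (NCT) can be applied to an input-device log is sketched below in Python: gaps in keyboard/mouse activity longer than the threshold are labelled as non-computer work. This illustrates the thresholding idea only and is not the study's exact classification rule.

      # Illustrative NCT sketch: activity gaps longer than the threshold are
      # labelled non-computer work (one plausible reading, not the study's rule).
      import numpy as np

      def label_noncomputer(event_times_s, nct_s=30.0):
          """Return (gap_start, gap_end) intervals labelled as non-computer work."""
          t = np.sort(np.asarray(event_times_s, dtype=float))
          gaps = np.diff(t)
          idx = np.nonzero(gaps > nct_s)[0]
          return [(t[i], t[i + 1]) for i in idx]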

  2. Optimizing R with SparkR on a commodity cluster for biomedical research.

    PubMed

    Sedlmayr, Martin; Würfl, Tobias; Maier, Christian; Häberle, Lothar; Fasching, Peter; Prokosch, Hans-Ulrich; Christoph, Jan

    2016-12-01

    Medical researchers are challenged today by the enormous amount of data collected in healthcare. Analysis methods such as genome-wide association studies (GWAS) are often computationally intensive and thus require enormous resources to be performed in a reasonable amount of time. While dedicated clusters and public clouds may deliver the desired performance, their use requires upfront financial efforts or anonymized data, which is often not possible for preliminary or occasional tasks. We explored the possibilities to build a private, flexible cluster for processing scripts in R based on commodity, non-dedicated hardware of our department. For this, a GWAS-calculation in R on a single desktop computer, a Message Passing Interface (MPI)-cluster, and a SparkR-cluster were compared with regard to performance, scalability, quality, and simplicity. The original script had a projected runtime of three years on a single desktop computer. Optimizing the script in R already yielded a significant reduction in computing time (2 weeks). By using R-MPI and SparkR, we were able to parallelize the computation and reduce the time to less than three hours (2.6 h) on already available, standard office computers. While MPI is a proven approach in high-performance clusters, it requires rather static, dedicated nodes. SparkR and its Hadoop siblings allow for a dynamic, elastic environment with automated failure handling. SparkR also scales better with the number of nodes in the cluster than MPI due to optimized data communication. R is a popular environment for clinical data analysis. The new SparkR solution offers elastic resources and allows supporting big data analysis using R even on non-dedicated resources with minimal change to the original code. To unleash the full potential, additional efforts should be invested to customize and improve the algorithms, especially with regard to data distribution. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
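
    The study itself worked in R with MPI and SparkR; purely as an illustration of the same chunk-and-distribute pattern, the hedged PySpark sketch below farms independent association-test chunks out to cluster workers. run_gwas_chunk and the chunk layout are hypothetical placeholders, not the authors' code.

      # Illustration of the chunk-and-distribute pattern in PySpark (the study used
      # SparkR); run_gwas_chunk and the chunking scheme are hypothetical.
      from pyspark.sql import SparkSession

      def run_gwas_chunk(chunk_id):
          # placeholder: load this chunk's SNPs, run the association tests, and
          # return (snp_id, p_value) pairs
          return [(f"chunk{chunk_id}_snp{i}", 1.0) for i in range(3)]

      spark = SparkSession.builder.appName("gwas-chunks").getOrCreate()
      n_chunks = 1000
      results = (spark.sparkContext
                 .parallelize(range(n_chunks), numSlices=n_chunks)
                 .flatMap(run_gwas_chunk)
                 .collect())
      spark.stop()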

  3. Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse.

    PubMed

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy

    2012-11-01

    Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%) and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible to automatically generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that was intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Effect of low-intensity pulsed ultrasound stimulation on gap healing in a rabbit osteotomy model evaluated by quantitative micro-computed tomography-based cross-sectional moment of inertia.

    PubMed

    Tobita, Kenji; Matsumoto, Takuya; Ohashi, Satoru; Bessho, Masahiko; Kaneko, Masako; Ohnishi, Isao

    2012-07-01

    It has been previously demonstrated that low-intensity pulsed ultrasound stimulation (LIPUS) enhances formation of the medullary canal and cortex in a gap-healing model of the tibia in rabbits, shortens the time required for remodeling, and enhances mineralization of the callus. In the current study, the mechanical integrity of these models was confirmed. In order to do this, the cross-sectional moment of inertia (CSMI) obtained from quantitative micro-computed tomography scans was calculated, and a comparison was made with a four-point bending test. This parameter can be analyzed in any direction, and three directions were selected in order to adopt an XYZ coordinate (X and Y for bending; Z for torsion). The present results demonstrated that LIPUS improved earlier restoration of bending stiffness at the healing site. In addition, LIPUS was effective not only in the ultrasound-irradiated plane, but also in the other two planes. CSMI may provide the structural as well as compositional determinants to assess fracture healing and would be very useful to replace the mechanical testing.
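
    A minimal Python sketch of the kind of CSMI computation involved is given below: segment bone in one micro-CT slice and sum squared distances from the centroidal bending axis. The segmentation threshold is illustrative, and the paper's exact processing is not reproduced.

      # Minimal sketch: cross-sectional moment of inertia about a bending axis
      # from one segmented micro-CT slice; the threshold is illustrative only.
      import numpy as np

      def csmi_about_x(slice_values, pixel_mm, threshold=500):
          bone = slice_values > threshold                 # crude bone segmentation
          ys, xs = np.nonzero(bone)
          y_c = ys.mean()                                 # centroid row
          area_per_pixel = pixel_mm ** 2
          # I_x = sum over bone pixels of (distance from centroidal axis)^2 * area
          return float(np.sum(((ys - y_c) * pixel_mm) ** 2) * area_per_pixel)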

  5. A Structured Grid Based Solution-Adaptive Technique for Complex Separated Flows

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh; Soni, Bharat K.; Kishore, Boyalakuntla; Yu, Robert

    1996-01-01

    The objective of this work was to enhance the predictive capability of widely used computational fluid dynamic (CFD) codes through the use of solution adaptive gridding. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different types as well as differing intensities, and adequately address scaling and normalization across blocks. In order to study the accuracy and efficiency improvements due to the grid adaptation, it is necessary to quantify grid size and distribution requirements as well as computational times of non-adapted solutions. Flow fields about launch vehicles of practical interest often involve supersonic freestream conditions at angle of attack exhibiting large-scale separated vortical flow, vortex-vortex and vortex-surface interactions, separated shear layers and multiple shocks of different intensity. In this work, a weight function and an associated mesh redistribution procedure are presented which detect and resolve these features without user intervention. Particular emphasis has been placed upon accurate resolution of expansion regions and boundary layers. Flow past a wedge at Mach=2.0 is used to illustrate the enhanced detection capabilities of this newly developed weight function.

  6. Modeling Studies of the MODIS Solar Diffuser Attenuation Screen and Comparison with On-Orbit Measurements

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Xiong, Xiao-Xiong; Guenther, Bruce; Barnes, William; VanSalomonson, Vincent V.

    2004-01-01

    The MODIS instrument relies on solar calibration to achieve the required radiometric accuracy. This solar calibration occurs as the TERRA spacecraft comes up over the North Pole. The earth underneath the spacecraft is still dark for approximately one minute and the sun is just rising over the earth's north polar regions. During this time the sun moves through about 4 degrees, the scan mirror rotates about 19 times and about 50 frames (exposures) are made of the white solar diffuser. For some of MODIS's bands the brightness of the screen is reduced, to prevent detector saturation, by means of a pinhole screen, which produces approximately 600 pinhole images of the sun within the field of view of any one detector. Previous attempts at creating a detailed radiometric model of this calibration scenario produced intensity variations on the focal planes with insufficient detail to be useful. The current computational approach takes into account the motion of the sun and the scan mirror and produces variations that strongly resemble the observed focal plane intensity variations. The computational approach and results and a comparison with actual observational data are presented.

  7. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    PubMed Central

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-01-01

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision by using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and reduces computational complexity. PMID:28230763
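
    The conventional complex-acoustic-intensity estimate that the proposed method builds on can be sketched in a few lines of Python: average the pressure-velocity products and take the arctangent. The matched-filtering stage that gives the paper its improvement is not reproduced here.

      # Sketch of the conventional acoustic-intensity azimuth estimate from a single
      # acoustic vector sensor; the paper's matched-filtering stage is omitted.
      import numpy as np

      def azimuth_from_avs(p, vx, vy):
          """p, vx, vy: co-located pressure and particle-velocity time series."""
          ix = np.mean(p * vx)           # time-averaged intensity component, x
          iy = np.mean(p * vy)           # time-averaged intensity component, y
          return np.degrees(np.arctan2(iy, ix))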

  8. Surrogate-Assisted Genetic Programming With Simplified Models for Automated Design of Dispatching Rules.

    PubMed

    Nguyen, Su; Zhang, Mengjie; Tan, Kay Chen

    2017-09-01

    Automated design of dispatching rules for production systems has been an interesting research topic over the last several years. Machine learning, especially genetic programming (GP), has been a powerful approach to dealing with this design problem. However, intensive computational requirements, accuracy, and interpretability are still its limitations. This paper aims at developing a new surrogate-assisted GP to help improve the quality of the evolved rules without significant computational costs. The experiments have verified the effectiveness and efficiency of the proposed algorithms as compared to those in the literature. Furthermore, new simplification and visualisation approaches have also been developed to improve the interpretability of the evolved rules. These approaches have shown great potential and proved to be a critical part of the automated design system.

  9. Theoretical framework for quantitatively estimating ultrasound beam intensities using infrared thermography.

    PubMed

    Myers, Matthew R; Giridhar, Dushyanth

    2011-06-01

    In the characterization of high-intensity focused ultrasound (HIFU) systems, it is desirable to know the intensity field within a tissue phantom. Infrared (IR) thermography is a potentially useful method for inferring this intensity field from the heating pattern within the phantom. However, IR measurements require an air layer between the phantom and the camera, making inferences about the thermal field in the absence of the air complicated. For example, convection currents can arise in the air layer and distort the measurements relative to the phantom-only situation. Quantitative predictions of intensity fields based upon IR temperature data are also complicated by axial and radial diffusion of heat. In this paper, mathematical expressions are derived for use with IR temperature data acquired at times long enough that noise is a relatively small fraction of the temperature trace, but small enough that convection currents have not yet developed. The relations were applied to simulated IR data sets derived from computed pressure and temperature fields. The simulation was performed in a finite-element geometry involving a HIFU transducer sonicating upward in a phantom toward an air interface, with an IR camera mounted atop an air layer, looking down at the heated interface. It was found that, when compared to the intensity field determined directly from acoustic propagation simulations, intensity profiles could be obtained from the simulated IR temperature data with an accuracy of better than 10%, at pre-focal, focal, and post-focal locations. © 2011 Acoustical Society of America
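
    A relation commonly used in this context (stated here as background, not as the paper's exact expressions) links the early-time heating rate to the local intensity: before diffusion and convection matter, rho*c*dT/dt ≈ 2*alpha*I, so I ≈ rho*c*(dT/dt)/(2*alpha). A small Python illustration with assumed water-like parameters:

      # Early-time estimate of local intensity from the initial heating rate; the
      # material parameters are assumed, water-like values, not the paper's.
      def intensity_from_heating_rate(dT_dt, rho=1000.0, c=4180.0, alpha=5.0):
          """dT_dt in K/s; rho in kg/m^3; c in J/(kg K); alpha in Np/m -> W/m^2."""
          return rho * c * dT_dt / (2.0 * alpha)

      print(intensity_from_heating_rate(2.0) / 1e4, "W/cm^2")   # ~83.6 W/cm^2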

  10. Noncontact methods for optical testing of convex aspheric mirrors for future large telescopes

    NASA Astrophysics Data System (ADS)

    Goncharov, Alexander V.; Druzhin, Vladislav V.; Batshev, Vladislav I.

    2009-06-01

    Non-contact methods for testing of large rotationally symmetric convex aspheric mirrors are proposed. These methods are based on non-null testing with side illumination schemes, in which a narrow collimated beam is reflected from the meridional aspheric profile of a mirror. The figure error of the mirror is deduced from the intensity pattern from the reflected beam obtained on a screen, which is positioned in the tangential plane (containing the optical axis) and perpendicular to the incoming beam. Testing of the entire surface is carried out by rotating the mirror about its optical axis and registering the characteristics of the intensity pattern on the screen. The intensity pattern can be formed using three different techniques: modified Hartman test, interference and boundary curve test. All these techniques are well known but have not been used in the proposed side illumination scheme. Analytical expressions characterizing the shape and location of the intensity pattern on the screen or a CCD have been developed for all types of conic surfaces. The main advantage of these testing methods compared with existing methods (Hindle sphere, null lens, computer generated hologram) is that the reference system does not require large optical components.

  11. Geocoronal imaging with Dynamics Explorer

    NASA Technical Reports Server (NTRS)

    Rairden, R. L.; Frank, L. A.; Craven, J. D.

    1986-01-01

    The ultraviolet photometer of the University of Iowa spin-scan auroral imaging instrumentation on board Dynamics Explorer-1 has returned numerous hydrogen Lyman alpha images of the geocorona from altitudes of 570 km to 23,300 km (1.09 R sub E to 4.66 R sub E geocentric radial distance). The hydrogen density gradient is shown by a plot of the zenith intensities throughout this range, which decrease to near celestial background values as the spacecraft approaches apogee. Characterizing the upper geocorona as optically thin (single-scattering), the zenith intensity is converted directly to vertical column density. This approximation loses its validity deeper in the geocorona, where the hydrogen is demonstrated to be optically thick in that there is no Lyman alpha limb brightening. Further study of the geocoronal hydrogen distribution will require computer modeling of the radiative transfer.

  12. An Improved Wake Vortex Tracking Algorithm for Multiple Aircraft

    NASA Technical Reports Server (NTRS)

    Switzer, George F.; Proctor, Fred H.; Ahmad, Nashat N.; LimonDuparcmeur, Fanny M.

    2010-01-01

    The accurate tracking of vortex evolution from Large Eddy Simulation (LES) data is a complex and computationally intensive problem. The vortex tracking requires the analysis of very large three-dimensional and time-varying datasets. The complexity of the problem is further compounded by the fact that these vortices are embedded in a background turbulence field, and they may interact with the ground surface. Another level of complication can arise if vortices from multiple aircraft are simulated. This paper presents a new technique for post-processing LES data to obtain wake vortex tracks and wake intensities. The new approach isolates vortices by defining "regions of interest" (ROI) around each vortex and has the ability to identify vortex pairs from multiple aircraft. The paper describes the new methodology for tracking wake vortices and presents applications of the technique for single and multiple aircraft.

  13. Guidelines for composite materials research related to general aviation aircraft

    NASA Technical Reports Server (NTRS)

    Dow, N. F.; Humphreys, E. A.; Rosen, B. W.

    1983-01-01

    Guidelines for research on composite materials directed toward the improvement of all aspects of their applicability for general aviation aircraft were developed from extensive studies of their performance, manufacturability, and cost effectiveness. Specific areas for research and for manufacturing development were identified and evaluated. Inputs developed from visits to manufacturers were used in part to guide these evaluations, particularly in the area of cost effectiveness. Throughout, the emphasis was to direct the research toward the requirements of general aviation aircraft, for which relatively low load intensities are encountered, economy of production is a prime requirement, and yet performance still commands a premium. A number of implications regarding further directions for developments in composites to meet these requirements also emerged from the studies. Chief among these is the need for an integrated (computer program) aerodynamic/structures approach to aircraft design.

  14. The large-scale structure of software-intensive systems

    PubMed Central

    Booch, Grady

    2012-01-01

    The computer metaphor is dominant in most discussions of neuroscience, but the semantics attached to that metaphor are often quite naive. Herein, we examine the ontology of software-intensive systems, the nature of their structure and the application of the computer metaphor to the metaphysical questions of self and causation. PMID:23386964

  15. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop Supercomputer (Raijin), ~ 20 PB data storage using Lustre filesystems and a 3000 core high performance cloud have created a hybrid platform for high-performance computing and data-intensive science to enable large scale earth and climate systems modelling and analysis. There are > 3000 users actively logging in and > 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large scale data collections. NCI makes available major and long tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allows complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small scale, 'stove-piped' science efforts to large scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability and publication. There are also human resource challenges as highly skilled HPC/HPD specialists, specialist programmers, and data scientists are required whose skills can support scaling to the new paradigm of effective and efficient data-intensive earth science analytics on petascale, and soon to be exascale, systems.

  16. New Approaches For Asteroid Spin State and Shape Modeling From Delay-Doppler Radar Images

    NASA Astrophysics Data System (ADS)

    Raissi, Chedy; Lamee, Mehdi; Mosiane, Olorato; Vassallo, Corinne; Busch, Michael W.; Greenberg, Adam; Benner, Lance A. M.; Naidu, Shantanu P.; Duong, Nicholas

    2016-10-01

    Delay-Doppler radar imaging is a powerful technique to characterize the trajectories, shapes, and spin states of near-Earth asteroids, and it has yielded detailed models of dozens of objects. Reconstructing objects' shapes and spins from delay-Doppler data is a computationally intensive inversion problem. Since the 1990s, delay-Doppler data has been analyzed using the SHAPE software. SHAPE performs sequential single-parameter fitting, and requires considerable computer runtime and human intervention (Hudson 1993, Magri et al. 2007). Recently, multiple-parameter fitting algorithms have been shown to more efficiently invert delay-Doppler datasets (Greenberg & Margot 2015) - decreasing runtime while improving accuracy. However, extensive human oversight of the shape modeling process is still required. We have explored two new techniques to better automate delay-Doppler shape modeling: Bayesian optimization and a machine-learning neural network. One of the most time-intensive steps of the shape modeling process is to perform a grid search to constrain the target's spin state. We have implemented a Bayesian optimization routine that uses SHAPE to autonomously search the space of spin-state parameters. To test the efficacy of this technique, we compared it to results with human-guided SHAPE for asteroids 1992 UY4, 2000 RS11, and 2008 EV5. Bayesian optimization yielded similar spin state constraints with a factor of 3 less computer runtime. The shape modeling process could be further accelerated using a deep neural network to replace iterative fitting. We have implemented a neural network with a variational autoencoder (VAE), using a subset of known asteroid shapes and a large set of synthetic radar images as inputs to train the network. Conditioning the VAE in this manner allows the user to give the network a set of radar images and get a 3D shape model as an output. Additional development will be required to train a network to reliably render shapes from delay-Doppler images. This work was supported by NASA Ames, NVIDIA, Autodesk and the SETI Institute as part of the NASA Frontier Development Lab program.
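
    A schematic Python sketch of the spin-state search is given below: instead of exhaustively gridding spin period and pole, a Bayesian optimiser proposes the next trial. fit_chi2() is a placeholder for a SHAPE fit at a trial spin state, and scikit-optimize is just one possible off-the-shelf optimiser, not necessarily what the authors used.

      # Schematic sketch: let a Bayesian optimiser propose spin-state trials instead
      # of an exhaustive grid search; fit_chi2 is a placeholder for a SHAPE run.
      from skopt import gp_minimize

      def fit_chi2(params):
          period_h, pole_lon_deg, pole_lat_deg = params
          # placeholder objective: run the shape/spin fit here and return chi^2
          return (period_h - 3.725) ** 2 + 0.01 * (pole_lon_deg - 180.0) ** 2

      result = gp_minimize(
          fit_chi2,
          dimensions=[(2.0, 12.0), (0.0, 360.0), (-90.0, 90.0)],  # period (h), pole (deg)
          n_calls=30,
          random_state=0,
      )
      print(result.x, result.fun)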

  17. Comment on the paper "NDSD-1000: High-resolution, high-temperature nitrogen dioxide spectroscopic Databank" by A.A. Lukashevskaya, N.N. Lavrentieva, A.C. Dudaryonok, V.I. Perevalov, J Quant Spectrosc Radiat Transfer 2016;184:205-17

    NASA Astrophysics Data System (ADS)

    Perrin, A.; Ndao, M.; Manceron, L.

    2017-10-01

    A recent paper [1] presents a high-resolution, high-temperature version of the Nitrogen Dioxide Spectroscopic Databank called NDSD-1000. The NDSD-1000 database contains line parameters (positions, intensities, self- and air-broadening coefficients, exponents of the temperature dependence of self- and air-broadening coefficients) for numerous cold and hot bands of the 14N16O2 isotopomer of nitrogen dioxide. The parameters used for the calculation of line positions and intensities were generated through a global modeling of experimental data collected in the literature within the framework of the method of effective operators. However, the form of the effective dipole moment operator used to compute the NO2 line intensities in the NDSD-1000 database differs from the classical one used for line intensity calculations in the NO2 infrared literature [12]. Using Fourier transform spectra recorded at high resolution in the 6.3 μm region, it is shown here that the NDSD-1000 formulation is incorrect, since the computed intensities do not properly account for the (Int(+)/Int(-)) intensity ratio between the (+) (J = N + 1/2) and (-) (J = N - 1/2) electron spin-rotation subcomponents of the computed vibration-rotation transitions. On the other hand, in the HITRAN or GEISA spectroscopic databases, the NO2 line intensities were computed using the classical theoretical approach, and it is shown here that these data lead to significantly better agreement between the observed and calculated spectra.

  18. Decision making and preferences for acoustic signals in choice situations by female crickets.

    PubMed

    Gabel, Eileen; Kuntze, Janine; Hennig, R Matthias

    2015-08-01

    Multiple attributes usually have to be assessed when choosing a mate. Efficient choice of the best mate is complicated if the available cues are not positively correlated, as is often the case during acoustic communication. Because of varying distances of signalers, a female may be confronted with signals of diverse quality at different intensities. Here, we examined how available cues are weighted for a decision by female crickets. Two songs with different temporal patterns and/or sound intensities were presented in a choice paradigm and compared with female responses from a no-choice test. When both patterns were presented at equal intensity, preference functions became wider in choice situations compared with a no-choice paradigm. When the stimuli in two-choice tests were presented at different intensities, this effect was counteracted as preference functions became narrower compared with choice tests using stimuli of equal intensity. The weighting of intensity differences depended on pattern quality and was therefore non-linear. A simple computational model based on pattern and intensity cues reliably predicted female decisions. A comparison of processing schemes suggested that the computations for pattern recognition and directionality are performed in a network with parallel topology. However, the computational flow of information corresponded to serial processing. © 2015. Published by The Company of Biologists Ltd.

  19. The medical science DMZ: a network design pattern for data-intensive medical science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean; Dart, Eli; Barnett, William

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet-filter firewalls, network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

  20. The medical science DMZ: a network design pattern for data-intensive medical science.

    PubMed

    Peisert, Sean; Dart, Eli; Barnett, William; Balas, Edward; Cuff, James; Grossman, Robert L; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2017-10-06

    We describe a detailed solution for maintaining high-capacity, data-intensive network flows (eg, 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. High-end networking, packet-filter firewalls, network intrusion-detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  1. Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru; VanDalsem, William (Technical Monitor)

    1994-01-01

    Aeroelasticity, which involves strong coupling of fluids, structures and controls, is an important element in designing an aircraft. Computational aeroelasticity using low fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and the Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex-induced aeroelastic oscillations, whereas AST can experience transonic buffet-associated structural oscillations. Both aircraft may experience a dip in the flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high fidelity equations such as the Navier-Stokes equations for fluids and finite elements for structures are needed. Computations using these high fidelity equations require large computational resources in both memory and speed. Current conventional supercomputers have reached their limitations in both memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers. The paper will address special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.

  2. Seismic signal processing on heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational and input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high-throughput networks, each node containing a mix of processing elements of different architectures, such as several sequential processor cores and one or a few graphics processing units (GPUs) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers offer the opportunity for manifold increases in application performance and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library, suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that require dedicated HPC solutions. The chosen application uses a wide range of common signal processing methods, which include various IIR filter designs, amplitude and phase correlation, computing the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific to seismology, such as rotation of seismic traces, are used. Efficient implementation of all these methods on GPU-accelerated systems presents several challenges. In particular, it requires a careful distribution of work between the sequential processors and accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware resources in order to reduce the intensity of data input and output. In our contribution we will explain the software architecture as well as the principal engineering decisions used to address these challenges. We will also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we will demonstrate performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
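
    One of the building blocks named in this abstract, the cross-correlation of preprocessed ambient-noise traces, can be written compactly with FFTs. The sketch below is a CPU-only NumPy illustration under assumed preprocessing (demeaned, equal-length traces); the library described in the abstract runs this kind of kernel on GPUs via CUDA.

        # Minimal NumPy sketch of frequency-domain cross-correlation of two
        # ambient-noise traces; illustrative only, not the CSCS GPU library.
        import numpy as np

        def noise_cross_correlation(trace_a, trace_b, max_lag):
            """Cross-correlation of two equal-length traces for lags in
            [-max_lag, +max_lag] samples, computed via the FFT."""
            n = len(trace_a)
            nfft = 2 * n  # zero-pad to avoid circular wrap-around
            spec_a = np.fft.rfft(trace_a, nfft)
            spec_b = np.fft.rfft(trace_b, nfft)
            cc = np.fft.irfft(spec_a * np.conj(spec_b), nfft)
            # Reorder so index i corresponds to lag i - max_lag.
            return np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))

        rng = np.random.default_rng(0)
        a = rng.standard_normal(86400)                          # one day of 1 Hz noise (toy data)
        b = np.roll(a, 25) + 0.5 * rng.standard_normal(86400)   # b lags a by 25 samples
        cc = noise_cross_correlation(a, b, max_lag=100)
        # Under the sum_t a[t+lag]*b[t] convention the peak appears at lag -25.
        print("peak lag (samples):", cc.argmax() - 100)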

  3. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing.

    PubMed

    Lin, Yu-Hsiu; Hu, Yu-Chen

    2018-04-27

    The emergence of smart Internet of Things (IoT) devices has strongly favored the realization of smart homes in the downstream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers in modifying their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers implementing DR, maintaining a balance between energy consumption cost and user comfort is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed; the method can further be deployed with edge computing. In contrast with cloud computing, edge computing, an emerging trend in engineering technology that optimizes cloud computing by pushing computing capability to the IoT edge of the Internet, addresses bandwidth-intensive content and latency-sensitive applications between sensors and central data centers through data analytics at or near the source of the data. A non-intrusive load-monitoring technique proposed previously is utilized for the automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence method, constrained PSO, is used to minimize energy consumption cost while accounting for user comfort in the DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed residential consumer-centric load-scheduling method can reshape home-appliance loads in response to DR signals. Moreover, a reduction in peak power consumption of 13.97% is achieved.
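
    The scheduling idea can be illustrated with a small, self-contained PSO. The sketch below is not the paper's implementation: the hourly price curve, appliance list, and comfort-penalty weight are invented for illustration. Each particle encodes candidate start hours for a few shiftable appliances, and a penalty term stands in for the comfort constraint.

        # Hedged sketch: penalty-based PSO scheduling of shiftable appliances
        # against an assumed hourly price profile (not the paper's code).
        import numpy as np

        rng = np.random.default_rng(1)
        price = 0.10 + 0.15 * np.sin(np.linspace(0, np.pi, 24)) ** 2   # assumed $/kWh per hour
        appliances = [  # (power_kW, duration_h, preferred_start, preferred_end) - illustrative
            (2.0, 2, 18, 22),   # e.g. washing machine
            (1.5, 3, 8, 20),    # e.g. dishwasher
            (3.0, 1, 6, 9),     # e.g. water heater boost
        ]
        comfort_weight = 5.0

        def decode(starts):
            # Map continuous particle positions to feasible integer start hours.
            return [int(np.clip(round(s), 0, 24 - dur)) for (_, dur, _, _), s in zip(appliances, starts)]

        def cost(starts):
            total = 0.0
            for (power, dur, lo, hi), s in zip(appliances, decode(starts)):
                total += power * price[s:s + dur].sum()            # energy cost at chosen hours
                total += comfort_weight * max(0, lo - s, s - hi)   # penalty outside preferred window
            return total

        # Standard global-best PSO with inertia and cognitive/social terms.
        n_particles, n_iter, dim = 30, 100, len(appliances)
        pos = rng.uniform(0, 23, size=(n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, 23)
            vals = np.array([cost(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best start hours:", decode(gbest), "cost:", round(cost(gbest), 3))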

  4. Multi-GPU configuration of 4D intensity modulated radiation therapy inverse planning using global optimization

    NASA Astrophysics Data System (ADS)

    Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo

    2018-01-01

    We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.

  5. Multi-GPU configuration of 4D intensity modulated radiation therapy inverse planning using global optimization.

    PubMed

    Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo

    2018-01-16

    We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.

  6. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery.

    PubMed

    Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan

    2006-05-21

    At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems for IMRT generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions, but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
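
    To make the segmentation task concrete, the sketch below applies a simple greedy heuristic (not the authors' metaheuristic) that peels an integer intensity map into apertures whose open bixels form one contiguous interval per row, a rough stand-in for the MLC leaf-pair constraint, together with a weight per aperture so the weighted apertures sum back to the map.

        # Hedged sketch: greedy decomposition of an intensity map into
        # row-contiguous apertures plus weights (illustrative only).
        import numpy as np

        def segment(intensity_map):
            remaining = intensity_map.astype(int)
            apertures = []          # list of (weight, binary aperture matrix)
            while remaining.max() > 0:
                weight = remaining[remaining > 0].min()   # deliver the smallest positive level
                aperture = np.zeros_like(remaining)
                for r, row in enumerate(remaining):
                    open_cols = np.flatnonzero(row >= weight)
                    if open_cols.size:                    # open only the first contiguous run
                        start = end = open_cols[0]
                        while end + 1 < row.size and row[end + 1] >= weight:
                            end += 1
                        aperture[r, start:end + 1] = 1
                remaining -= weight * aperture
                apertures.append((weight, aperture))
            return apertures

        fluence = np.array([[0, 2, 3, 3, 1],
                            [1, 4, 4, 2, 0],
                            [0, 1, 2, 2, 2]])
        segments = segment(fluence)
        print("number of apertures:", len(segments))
        assert (sum(w * a for w, a in segments) == fluence).all()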

  7. HALOS: fast, autonomous, holographic adaptive optics

    NASA Astrophysics Data System (ADS)

    Andersen, Geoff P.; Gelsinger-Austin, Paul; Gaddipati, Ravi; Gaddipati, Phani; Ghebremichael, Fassil

    2014-08-01

    We present progress on our holographic adaptive laser optics system (HALOS): a compact, closed-loop aberration correction system that uses a multiplexed hologram to deconvolve the phase aberrations in an input beam. The wavefront characterization is based on simple, parallel measurements of the intensity of fixed focal spots and does not require any complex calculations. As such, the system does not require a computer and is thus much cheaper and less complex than conventional approaches. We present details of a fully functional, closed-loop prototype incorporating a 32-element MEMS mirror, operating at a bandwidth of over 10 kHz. Additionally, since the all-optical sensing is performed in parallel, the speed is independent of actuator number, running at the same bandwidth for one actuator as for a million.

  8. Direct Numerical Simulation of Automobile Cavity Tones

    NASA Technical Reports Server (NTRS)

    Kurbatskii, Konstantin; Tam, Christopher K. W.

    2000-01-01

    The Navier-Stokes equations are solved computationally with the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R_δ* < 3400, the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity, and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal-mode-type acoustic oscillations in the entire computation domain, leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be independent of the computation domain size.

  9. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analyses show a significant performance improvement. PMID:26305223
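
    The multi-key idea can be shown with a toy, in-memory map/shuffle/reduce written in plain Python. It is only an illustration of prefixing intermediate keys with an algorithm identifier so that one pass carries several algorithms; it is not the MRPack or Hadoop code.

        # Toy illustration of the multi-key scheme (not MRPack/Hadoop).
        from collections import defaultdict

        documents = ["big data needs compute", "map reduce maps data", "data data data"]

        def mapper(doc):
            words = doc.split()
            for w in words:                               # algorithm 1: word count
                yield ("wordcount", w), 1
            yield ("maxdoclen", "all"), len(words)        # algorithm 2: longest document

        def reducer(key, values):
            algorithm, _ = key
            if algorithm == "wordcount":
                return sum(values)
            if algorithm == "maxdoclen":
                return max(values)

        # "Shuffle": group intermediate values by their composite (algorithm, key).
        groups = defaultdict(list)
        for doc in documents:
            for key, value in mapper(doc):
                groups[key].append(value)

        results = {key: reducer(key, values) for key, values in groups.items()}
        print(results[("wordcount", "data")])   # -> 5
        print(results[("maxdoclen", "all")])    # -> 4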

  10. Quantitative ROESY analysis of computational models: structural studies of citalopram and β-cyclodextrin complexes by 1H-NMR and computational methods.

    PubMed

    Ali, Syed Mashhood; Shamim, Shazia

    2015-07-01

    Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine the atom-accurate structure of the inclusion complexes. 1H-NMR chemical shift change data for the β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but it was not clear whether one or both rings were included. Molecular mechanics and molecular dynamics calculations showed entry of the fluoro-ring from the wider side of the β-CD cavity to be the most favored mode of inclusion. Minimum-energy computational models were analyzed for the accuracy of their atomic coordinates by comparing calculated and experimental intermolecular ROESY peak intensities, which were not in agreement. Several lowest-energy computational models were refined and analyzed until the calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom accuracy, and that quantitative ROESY analysis is a promising method for doing so. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios instead of absolute intensities are used. Copyright © 2015 John Wiley & Sons, Ltd.

  11. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analyses show a significant performance improvement.

  12. Simulating Quantile Models with Applications to Economics and Management

    NASA Astrophysics Data System (ADS)

    Machado, José A. F.

    2010-05-01

    The massive increase in the speed of computers over the past forty years has changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they can feasibly tackle. The new methods that make intensive use of computer power go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
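
    The kind of computer-intensive inference described here can be sketched as follows: fit a conditional quantile process on a grid of quantiles, then simulate the implied marginal distribution by drawing covariates and random quantiles, in the spirit of the cited 2005 paper. The synthetic data, the use of statsmodels' QuantReg, and the 19-quantile grid are assumptions made only to give a runnable example.

        # Hedged sketch of simulating a marginal from an estimated conditional
        # quantile process (illustrative, not the lecture's own code).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 2000
        educ = rng.integers(8, 21, n).astype(float)      # synthetic covariate (years of schooling)
        wage = 0.5 + 0.08 * educ + (0.2 + 0.02 * educ) * rng.standard_normal(n)  # heteroskedastic "log wage"
        X = sm.add_constant(educ)

        # Estimate the conditional quantile process on a grid of quantiles.
        quantiles = np.linspace(0.05, 0.95, 19)
        coefs = np.array([sm.QuantReg(wage, X).fit(q=q).params for q in quantiles])

        # Simulate the marginal: draw (covariate, quantile) pairs and evaluate the
        # fitted conditional quantile function. Swapping in a counterfactual
        # covariate distribution here yields a counterfactual marginal distribution.
        draws = 20000
        xs = sm.add_constant(rng.choice(educ, draws))
        qs = rng.integers(0, len(quantiles), draws)
        simulated = np.einsum("ij,ij->i", xs, coefs[qs])

        print("observed median:", round(float(np.median(wage)), 3),
              "simulated median:", round(float(np.median(simulated)), 3))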

  13. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly computationally demanding, with optimization on a high-performance Linux cluster typically lasting several days. To remove this computational bottleneck we converted our optimization algorithm to run on a graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphics computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (ordinary differential equations) in a way that massively reduces memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. User Guide to RockJock - A Program for Determining Quantitative Mineralogy from X-Ray Diffraction Data

    USGS Publications Warehouse

    Eberl, D.D.

    2003-01-01

    RockJock is a computer program that determines quantitative mineralogy in powdered samples by comparing the integrated X-ray diffraction (XRD) intensities of individual minerals in complex mixtures to the intensities of an internal standard. Analysis without an internal standard (standardless analysis) also is an option. This manual discusses how to prepare and X-ray samples and mineral standards for these types of analyses and describes the operation of the program. Carefully weighed samples containing an internal standard (zincite) are ground in a McCrone mill. Randomly oriented preparations then are X-rayed, and the X-ray data are entered into the RockJock program. Minerals likely to be present in the sample are chosen from a list of standards, and the calculation is begun. The program then automatically fits the sum of stored XRD patterns of pure standard minerals (the calculated pattern) to the measured pattern by varying the fraction of each mineral standard pattern, using the Solver function in Microsoft Excel to minimize a degree of fit parameter between the calculated and measured pattern. The calculation analyzes the pattern (usually 20 to 65 degrees two-theta) to find integrated intensities for the minerals. Integrated intensities for each mineral then are determined from the proportion of each mineral standard pattern required to give the best fit. These integrated intensities then are compared to the integrated intensity of the internal standard, and the weight percentages of the minerals are calculated. The results are presented as a list of minerals with their corresponding weight percent. To some extent, the quality of the analysis can be checked because each mineral is analyzed independently, and, therefore, the sum of the analysis should approach 100 percent. Also, the method has been shown to give good results with artificial mixtures. The program is easy to use, but does require an understanding of mineralogy, of X-ray diffraction practice, and an elementary knowledge of the Excel program.
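
    The fitting step described above can be approximated in a few lines: express the measured pattern as a non-negative combination of stored standard patterns and convert the fitted coefficients to weight percent via the internal standard. The sketch below is not the RockJock/Excel Solver code; it assumes standard patterns recorded on a comparable unit-concentration basis, a 10% zincite spike, and synthetic Gaussian peaks in place of real diffractograms.

        # Hedged sketch of full-pattern fitting with an internal standard
        # (illustrative only, not RockJock itself).
        import numpy as np
        from scipy.optimize import nnls

        def quantify(measured, standards, names, zincite_index, zincite_wt_frac=0.10):
            """measured: (n_points,) counts; standards: (n_points, n_minerals) pure
            patterns on a unit-concentration basis; returns weight percent per mineral."""
            coeffs, residual = nnls(standards, measured)   # least squares with coeffs >= 0
            # Ratio each coefficient to the zincite coefficient and scale by the
            # known zincite weight fraction to obtain weight fractions.
            wt = zincite_wt_frac * coeffs / coeffs[zincite_index]
            return dict(zip(names, 100.0 * wt)), residual

        # Toy demonstration with synthetic two-theta patterns.
        two_theta = np.linspace(20, 65, 900)
        def peak(center, width=0.3):
            return np.exp(-0.5 * ((two_theta - center) / width) ** 2)

        standards = np.column_stack([peak(26.6) + 0.5 * peak(50.1),   # "quartz"
                                     peak(29.4) + 0.3 * peak(39.4),   # "calcite"
                                     peak(36.3) + 0.6 * peak(31.8)])  # "zincite" internal standard
        true_mix = np.array([0.6, 0.3, 0.10])
        measured = standards @ true_mix + 0.01 * np.random.default_rng(0).standard_normal(two_theta.size)

        wt_pct, _ = quantify(measured, standards, ["quartz", "calcite", "zincite"], zincite_index=2)
        print({k: round(v, 1) for k, v in wt_pct.items()})   # roughly 60 / 30 / 10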

  15. An infrastructure for data-intensive seismology using ADMIRE: laying the bricks for a new data highway

    NASA Astrophysics Data System (ADS)

    Trani, L.; Spinuso, A.; Galea, M.; Atkinson, M.; Van Eck, T.; Vilotte, J.

    2011-12-01

    The data bonanza generated by today's digital revolution is forcing scientists to rethink their methodologies and working practices. Traditional approaches to knowledge discovery are pushed to their limit and struggle to keep pace with the data flows produced by modern systems. This work shows how the ADMIRE data-intensive architecture supports seismologists by enabling them to focus on their scientific goals and questions, abstracting away the underlying technology platform that enacts their data integration and analysis tasks. ADMIRE accomplishes this partly by recognizing three different types of expert whose interactions require clearly defined interfaces: the domain expert, who is the application specialist; the data-analysis expert, who is a specialist in extracting information from data; and the data-intensive engineer, who develops the infrastructure for data-intensive computation. In order to provide a context in which each category of expert may flourish, ADMIRE uses a three-level architecture: the upper (tool) level supports the work of both domain and data-analysis experts, housing an extensive and evolving set of portals, tools and development environments; the lower (enactment) level houses a large and dynamic community of providers delivering data and data-intensive enactment environments as an evolving infrastructure that supports all of the work underway in the upper layer, and it is where most data-intensive engineers work. The crucial innovation lies in the middle level, a gateway that is a tightly defined and stable interface through which the two diverse and dynamic upper and lower layers communicate. This is a minimal and simple protocol and language (DISPEL), ultimately to be controlled by standards, so that the upper and lower communities may invest, secure in the knowledge that changes in this interface will be carefully managed. We implemented a well-established procedure for processing seismic ambient noise on the prototype architecture. The primary goal was to evaluate its capabilities for large-scale integration and analysis of distributed data. A secondary goal was to gauge its potential and the added value that it might bring to the seismological community. Though still in its infancy, the architecture met the demands of our use case and promises to cater for our future requirements. We shall continue to develop its capabilities as part of the EU-funded project VERCE (Virtual Earthquake and Seismology Research Community for Europe). VERCE aims to significantly advance our understanding of the Earth in order to aid society in its management of natural resources and hazards. Its strategy is to enable seismologists to fully exploit the under-utilized wealth of seismic data, and key to this is a data-intensive computation framework adapted to the scale and diversity of the community. This is a first step in building a data-intensive highway for geoscientists, smoothing their travel from the primary sources of data to new insights and rapid delivery of actionable information.

  16. Computation of the intensities of parametric holographic scattering patterns in photorefractive crystals.

    PubMed

    Schwalenberg, Simon

    2005-06-01

    The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.

  17. GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.

    PubMed

    Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun

    2018-01-19

    Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the large data size, WGS data analysis is usually compute-intensive and I/O-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost through elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing Amazon Web Services (AWS). Our system won first prize in the Wind and Cloud challenge held by the Genomics and Cloud Technology Alliance (GCTA) conference committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluate the performance of GT-WGS with a 55× WGS dataset (400 GB fastq) provided by the GCTA 2017 competition. In the best case, it only took 18.4 min to finish the analysis, and the AWS cost of the whole process was only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated the performance of GT-WGS on a real-world dataset provided by the XiangYa hospital, which consists of 5× whole-genome data for 500 samples; on average, GT-WGS finished one 5× WGS analysis task in 2.4 min at a cost of $3.6. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by the time cost and computing cost. GT-WGS excels as an efficient and affordable WGS analysis tool that addresses this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo .

  18. Exploring the Earth Using Deep Learning Techniques

    NASA Astrophysics Data System (ADS)

    Larraondo, P. R.; Evans, B. J. K.; Antony, J.

    2016-12-01

    Research using deep neural networks has matured significantly in recent times, and there is now a surge of interest in applying such methods to Earth system science and the geosciences. We believe that, when combined with Big Data, there are opportunities for significantly transforming a number of areas relevant to researchers and policy makers. In particular, by using a combination of data from a range of satellite Earth observations as well as computer simulations from climate models and reanalyses, we can gain new insights into the information locked within the data. Global geospatial datasets describe a wide range of physical and chemical parameters and are mostly available on regular grids covering large spatial and temporal extents, which makes them ideal candidates for deep learning methods. So far, these techniques have been successfully applied to image analysis through the use of convolutional neural networks. However, this is only one field of interest, and there is potential for many more use cases to be explored. Deep learning algorithms require fast access to large amounts of data in the form of tensors and make intensive use of CPU resources in order to train their models. The Australian National Computational Infrastructure (NCI) has recently augmented its Raijin 1.2 PFlop supercomputer with hardware accelerators. Together with NCI's 3000-core high-performance OpenStack cloud, these computational systems have direct access to NCI's 10+ PBytes of datasets and associated Big Data software technologies (see http://geonetwork.nci.org.au/ and http://nci.org.au/systems-services/national-facility/nerdip/). To effectively use these computing infrastructures requires that both the data and software are organised in a way that readily supports the deep learning software ecosystem. Deep learning software, such as the open source TensorFlow library, has allowed us to demonstrate the possibility of generating geospatial models by combining information from our different data sources. This opens the door to an exciting new way of generating products and extracting features that were previously labour-intensive to produce. In this paper, we will explore some of these geospatial use cases and share some of the lessons learned from this experience.

  19. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    PubMed

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, each with only a few spare resources, cannot support resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a cooperative resource provision mode referred to as "Crowd-funding", which is designed to aggregate distributed devices as the resource provider for mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we allocate the revenues based on the providers' contributions according to the concept of the "Shapley value", enabling a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
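
    The revenue-sharing step can be illustrated with an exact Shapley-value computation for a toy coalition; the device capacities, task requirement, and revenue below are invented for illustration and are not taken from the paper.

        # Hedged sketch: exact Shapley values for a tiny resource coalition.
        from itertools import permutations

        capacity = {"phone": 2, "tablet": 3, "laptop": 6}   # assumed spare resource units
        REQUIRED, REVENUE = 8, 90.0                         # assumed task size and payment

        def value(coalition):
            # The coalition earns revenue only if its pooled capacity meets the requirement.
            return REVENUE if sum(capacity[d] for d in coalition) >= REQUIRED else 0.0

        def shapley(players):
            shares = dict.fromkeys(players, 0.0)
            orders = list(permutations(players))
            for order in orders:                 # average marginal contribution over all join orders
                coalition = []
                for player in order:
                    before = value(coalition)
                    coalition.append(player)
                    shares[player] += value(coalition) - before
            return {p: s / len(orders) for p, s in shares.items()}

        print(shapley(list(capacity)))   # the laptop earns the largest (but not the whole) share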

  20. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing

    PubMed Central

    Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, each with only a few spare resources, cannot support resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a cooperative resource provision mode referred to as "Crowd-funding", which is designed to aggregate distributed devices as the resource provider for mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we allocate the revenues based on the providers' contributions according to the concept of the "Shapley value", enabling a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network. PMID:28030553

  1. Automated Creation of Labeled Pointcloud Datasets in Support of Machine-Learning Based Perception

    DTIC Science & Technology

    2017-12-01

    computationally intensive 3D vector math and took more than ten seconds to segment a single LIDAR frame from the HDL-32e with the Dell XPS15 9650’s Intel...Core i7 CPU. Depth Clustering avoids the computationally intensive 3D vector math of Euclidean Clustering-based DON segmentation and, instead

  2. PNNL Data-Intensive Computing for a Smarter Energy Grid

    ScienceCinema

    Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria

    2017-12-09

    The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform for data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling the development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.

  3. Light-weight Parallel Python Tools for Earth System Modeling Workflows

    NASA Astrophysics Data System (ADS)

    Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.

    2015-12-01

    With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, it is expected that the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) will increase by more than 10x, to an expected 25 PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) perform the transformation of the data from time-slice to time-series format and to (2) compute climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these two challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second of these challenges, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight and easy to install, to have very few dependencies, and to be easily inserted into the Earth system modeling workflow with negligible disruption. In this work, we present the motivation, approach, and testing results of these two light-weight parallel Python tools, as well as our plans for future research and development.

  4. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires the collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a very large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost savings of MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and are consistent with the standard MC estimation. Compared to standard MC, however, MLMC greatly reduces the computational costs.
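
    The core MLMC idea, a telescoping sum in which cheap coarse-level samples carry most of the burden and a few paired coarse/fine samples correct the bias, can be sketched generically as below. The stand-in model, the level count, and the samples-per-level schedule are assumptions for illustration, not the paper's subsurface-flow setup.

        # Hedged sketch of a multilevel Monte Carlo mean estimator (generic MLMC,
        # not the paper's implementation).
        import numpy as np

        rng = np.random.default_rng(0)

        def model(theta, level):
            """Stand-in for a simulator: higher level = finer grid. The
            discretization bias shrinks with level; the cost would grow."""
            bias = 0.5 ** (level + 1)
            return np.sin(theta) + bias * np.cos(3 * theta)

        def mlmc_mean(sample_theta, samples_per_level=(4000, 400, 40)):
            """Telescoping estimator: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]."""
            estimate = 0.0
            for level, n in enumerate(samples_per_level):
                thetas = sample_theta(n)
                fine = model(thetas, level)
                coarse = model(thetas, level - 1) if level > 0 else 0.0  # same inputs on both levels
                estimate += np.mean(fine - coarse)
            return estimate

        sample_theta = lambda n: rng.normal(0.3, 0.2, n)
        print("MLMC estimate of E[Q]:", round(mlmc_mean(sample_theta), 4))
        print("plain MC on finest level:", round(float(np.mean(model(sample_theta(4000), 2))), 4))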

  5. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, the resource requirements of the algorithm grow with the data size. This challenges its practical implementation even on current-generation high performance computing systems. Therefore, a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements in terms of memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements for storage, I/O and inter-node communication, making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a supercomputer of the PARAM series. Optimization, performance and scalability experiment results, along with the migration outcome, show the effectiveness of the parallel algorithm.
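
    A serial toy version of the flexi-depth idea is sketched below: the image volume is migrated chunk-by-chunk in depth, with the traveltime table computed only for the chunk currently in memory. The constant-velocity straight-ray traveltimes, random stand-in traces, and the memory budget are assumptions for illustration; the paper's algorithm is a hybrid MPI+OpenMP implementation.

        # Hedged sketch: zero-offset Kirchhoff summation in depth chunks
        # ("flexi-depth" iterations), illustrative only.
        import numpy as np

        velocity = 2000.0                      # m/s, assumed constant for the sketch
        dt, n_t = 0.002, 1500                  # trace sampling
        receivers_x = np.arange(0.0, 2000.0, 20.0)
        traces = np.random.default_rng(0).standard_normal((receivers_x.size, n_t))  # stand-in data

        image_x = np.arange(0.0, 2000.0, 20.0)
        image_z = np.arange(10.0, 1510.0, 10.0)
        memory_budget_points = 40 * image_x.size          # pretend only 40 depth rows fit at once
        rows_per_chunk = max(1, memory_budget_points // image_x.size)

        image = np.zeros((image_z.size, image_x.size))
        for z0 in range(0, image_z.size, rows_per_chunk):          # one "flexi-depth" iteration
            z_chunk = image_z[z0:z0 + rows_per_chunk]
            # Two-way traveltimes for this chunk only: shape (nz_chunk, nx_image, n_recv).
            dist = np.sqrt((image_x[None, :, None] - receivers_x[None, None, :]) ** 2
                           + z_chunk[:, None, None] ** 2)
            t_idx = np.clip((2.0 * dist / velocity / dt).astype(int), 0, n_t - 1)
            # Kirchhoff summation: each image point accumulates the trace samples
            # found at its traveltime on every receiver.
            image[z0:z0 + rows_per_chunk] = traces[
                np.arange(receivers_x.size)[None, None, :], t_idx].sum(axis=-1)

        print("migrated image shape:", image.shape)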

  6. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Methods of computational physics in the problem of mathematical interpretation of laser investigations

    NASA Astrophysics Data System (ADS)

    Brodyn, M. S.; Starkov, V. N.

    2007-07-01

    It is shown that in laser experiments performed with an 'imperfect' setup, when instrumental distortions are considerable, sufficiently accurate results can be obtained by the modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function, a 'sister' of the Gaussian curve, proves to be in demand precisely in laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that the light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured distribution.

  7. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP; Soares, Thereza A.

    2007-12-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much-needed rigorous computer-science-based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data-intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
    We are currently developing tools specifically intended for use on large-scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its entry within the trajectory, which typically is spread across multiple files, and independently of all other processors reads the appropriate frames.

  8. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP

    2008-03-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources is extending molecular simulations to biological more relevant system size and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just asmore » in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines. 
Our current approach is to develop tools specifically intended for large-scale machines with sufficient main memory to hold entire trajectories in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its entry within the trajectory, which is typically spread across multiple files, and reads the appropriate frames independently of all other processors.
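    The in-core, frame-parallel pattern described above can be illustrated with a short sketch. The following is a minimal illustration, not the DIANA code: it assumes a hypothetical trajectory stored as a NumPy .npy array of frames and an arbitrary per-frame metric, and shows each worker reading only its own frames in a single pass.

```python
# Minimal sketch of in-core, frame-parallel trajectory analysis (illustrative,
# not the DIANA code). Assumes a toy trajectory stored as a .npy array of
# shape (n_frames, n_atoms, 3) and an arbitrary per-frame metric.
import numpy as np
from multiprocessing import Pool

def radius_of_gyration(frame):
    """Per-frame metric: RMS distance of the atoms from their centroid."""
    centered = frame - frame.mean(axis=0)
    return np.sqrt((centered ** 2).sum(axis=1).mean())

def analyse_chunk(args):
    """Each worker handles its own frame range, independently of the others."""
    path, start, stop = args
    traj = np.load(path, mmap_mode="r")          # frames are read exactly once
    return [radius_of_gyration(traj[i]) for i in range(start, stop)]

def parallel_analysis(path, n_frames, n_workers=4):
    bounds = np.linspace(0, n_frames, n_workers + 1, dtype=int)
    tasks = [(path, bounds[i], bounds[i + 1]) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        chunks = pool.map(analyse_chunk, tasks)
    return np.concatenate(chunks)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    np.save("traj.npy", rng.normal(size=(100, 50, 3)))   # toy trajectory
    print(parallel_analysis("traj.npy", 100)[:5])
```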

  9. A Big Data Approach to Analyzing Market Volatility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Bethel, E. Wes; Gu, Ming

    2013-06-05

    Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, publicly funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that HPC resources and the techniques of data-intensive science can be used to greatly accelerate the computation of an early warning indicator called Volume-synchronized Probability of Informed Trading (VPIN). The test data used in this study contain five and a half years' worth of trading data for about 100 of the most liquid futures contracts, include about 3 billion trades, and occupy 140 GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelizing the computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real time – an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7% averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93% of the cases.
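    As a rough illustration of the kind of calculation being accelerated, the sketch below computes a VPIN-like series from a toy trade stream: trades are aggregated into equal-volume buckets, buy and sell volume are split by bulk classification, and the order-flow imbalance is averaged over a rolling window of buckets. The bucket size, window length, and classification details are assumptions for illustration, not the parameter choices explored in the paper.

```python
# Toy VPIN-style calculation (illustrative; parameters and classification are
# assumptions, not the paper's configuration). Trades are grouped into
# equal-volume buckets; buy/sell volume is split by bulk classification using
# the normal CDF of standardized price changes; the indicator is the rolling
# mean of |V_buy - V_sell| / (V_buy + V_sell) over a window of buckets.
import numpy as np
from scipy.stats import norm

def vpin(prices, volumes, bucket_volume, window=50):
    dp = np.diff(prices, prepend=prices[0])
    sigma = dp.std() or 1.0
    buy = volumes * norm.cdf(dp / sigma)          # bulk volume classification
    sell = volumes - buy

    imbalances = []
    v_buy = v_sell = 0.0
    for b, s in zip(buy, sell):
        v_buy, v_sell = v_buy + b, v_sell + s
        if v_buy + v_sell >= bucket_volume:       # close the current bucket
            imbalances.append(abs(v_buy - v_sell) / (v_buy + v_sell))
            v_buy = v_sell = 0.0

    imb = np.array(imbalances)
    return np.convolve(imb, np.ones(window) / window, mode="valid")

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0.0, 0.05, 20000))
volumes = rng.integers(1, 100, 20000).astype(float)
series = vpin(prices, volumes, bucket_volume=volumes.sum() / 500)
print(series.min(), series.max())
```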

  10. Addressing the computational cost of large EIT solutions.

    PubMed

    Boyle, Alistair; Borsic, Andrea; Adler, Andy

    2012-05-01

    Electrical impedance tomography (EIT) is a soft field tomography modality based on the application of electric current to a body and measurement of voltages through electrodes at the boundary. The interior conductivity is reconstructed on a discrete representation of the domain using a finite-element method (FEM) mesh and a parametrization of that domain. The reconstruction requires a sequence of numerically intensive calculations. There is strong interest in reducing the cost of these calculations. An improvement in the compute time for current problems would encourage further exploration of computationally challenging problems such as the incorporation of time series data, wide-spread adoption of three-dimensional simulations and correlation of other modalities such as CT and ultrasound. Multicore processors offer an opportunity to reduce EIT computation times but may require some restructuring of the underlying algorithms to maximize the use of available resources. This work profiles two EIT software packages (EIDORS and NDRM) to experimentally determine where the computational costs arise in EIT as problems scale. Sparse matrix solvers, a key component for the FEM forward problem and sensitivity estimates in the inverse problem, are shown to take a considerable portion of the total compute time in these packages. A sparse matrix solver performance measurement tool, Meagre-Crowd, is developed to interface with a variety of solvers and compare their performance over a range of two- and three-dimensional problems of increasing node density. Results show that distributed sparse matrix solvers that operate on multiple cores are advantageous up to a limit that increases as the node density increases. We recommend a selection procedure to find a solver and hardware arrangement matched to the problem and provide guidance and tools to perform that selection.
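    The kind of measurement described above can be mimicked on a small scale: time a sparse direct solve on FEM-like systems of increasing node density. The snippet below is an illustration using SciPy's SuperLU-based solver on a finite-difference Laplacian; it is not the Meagre-Crowd tool, which drives several distributed solvers on real EIT matrices.

```python
# Small-scale illustration of profiling a sparse direct solve as node density
# grows (SciPy's SuperLU stands in for the solvers driven by Meagre-Crowd; the
# matrix is a finite-difference Laplacian, not a real EIT FEM system).
import time
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def laplacian_2d(n):
    """Sparse 5-point Laplacian on an n x n grid."""
    t = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    return kron(identity(n), t) + kron(t, identity(n))

for n in (50, 100, 200):
    A = laplacian_2d(n).tocsc()
    b = np.ones(A.shape[0])
    t0 = time.perf_counter()
    spsolve(A, b)
    print(f"{n * n:6d} nodes: {time.perf_counter() - t0:.3f} s")
```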

  11. BT-Nurse: computer generation of natural language shift summaries from complex heterogeneous medical data.

    PubMed

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy; Westwater, Dave

    2011-01-01

    The BT-Nurse system uses data-to-text technology to automatically generate a natural language nursing shift summary in a neonatal intensive care unit (NICU). The summary is solely based on data held in an electronic patient record system; no additional data entry is required. BT-Nurse was tested for two months in the Royal Infirmary of Edinburgh NICU. Nurses were asked to rate the understandability, accuracy, and helpfulness of the computer-generated summaries; they were also asked for free-text comments about the summaries. The nurses found the majority of the summaries to be understandable, accurate, and helpful (p<0.001 for all measures). However, nurses also pointed out many deficiencies, especially with regard to extra content they wanted to see in the computer-generated summaries. In conclusion, natural language NICU shift summaries can be automatically generated from an electronic patient record, but our proof-of-concept software needs considerable additional development work before it can be deployed.

  12. Optimized temporal pattern of brain stimulation designed by computational evolution

    PubMed Central

    Brocker, David T.; Swan, Brandon D.; So, Rosa Q.; Turner, Dennis A.; Gross, Robert E.; Grill, Warren M.

    2017-01-01

    Brain stimulation is a promising therapy for several neurological disorders, including Parkinson’s disease. Stimulation parameters are selected empirically and are limited to the frequency and intensity of stimulation. We used the temporal pattern of stimulation as a novel parameter of deep brain stimulation to ameliorate symptoms in a parkinsonian animal model and in humans with Parkinson’s disease. We used model-based computational evolution to optimize the stimulation pattern. The optimized pattern produced symptom relief comparable to that from standard high-frequency stimulation (a constant rate of 130 or 185 Hz) and outperformed frequency-matched standard stimulation in the parkinsonian rat and in patients. Both optimized and standard stimulation suppressed abnormal oscillatory activity in the basal ganglia of rats and humans. The results illustrate the utility of model-based computational evolution for designing temporal patterns of stimulation that increase the efficiency of brain stimulation in Parkinson’s disease, thereby requiring substantially less energy than traditional brain stimulation. PMID:28053151

  13. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which advocates for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source Python codes.

  14. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
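    For readers who want a concrete point of reference, the snippet below computes the classical bias-corrected Chao1 lower bound from 'frequency count' data. Chao1 is one member of the coverage-based non-parametric family mentioned above, but the code is only an illustration and is not CatchAll's implementation.

```python
# Illustrative non-parametric richness estimate from 'frequency count' data:
# the bias-corrected Chao1 lower bound. Not CatchAll's implementation.
from collections import Counter

def chao1(frequency_counts):
    """frequency_counts: {j: number of species observed exactly j times}."""
    s_obs = sum(frequency_counts.values())
    f1 = frequency_counts.get(1, 0)               # singletons
    f2 = frequency_counts.get(2, 0)               # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

abundances = [1, 1, 1, 2, 2, 3, 5, 8, 13, 40]     # observed count per species
freq_counts = Counter(abundances)                 # -> {1: 3, 2: 2, 3: 1, ...}
print(f"observed {len(abundances)}, Chao1 estimate {chao1(freq_counts):.1f}")
```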

  15. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low-precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low-precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided the inexactness is restricted to act only on the smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
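    A toy version of the scale-selective idea, assuming a simple spectral split of a 1D periodic field: the large scales of a linear advection tendency are computed in double precision and the small scales in half precision, then recombined. The cutoff, field, and tendency are illustrative choices, not the configuration of the dynamical core used in the paper.

```python
# Toy scale-selective precision experiment (illustrative; not the model code).
# A linear advection tendency is split spectrally: large scales in float64,
# small scales in float16, then the two contributions are recombined.
import numpy as np

def tendency(u, dx, c=1.0):
    """Linear advection tendency -c * du/dx with centred periodic differences."""
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

n = 256
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.1 * np.sin(40 * np.pi * x)

spec = np.fft.rfft(u)
cutoff = 10                                       # wavenumber separating scales
large = np.fft.irfft(np.where(np.arange(spec.size) <= cutoff, spec, 0.0), n)
small = u - large

t_exact = tendency(u, dx)
t_mixed = tendency(large, dx) + tendency(small.astype(np.float16), dx).astype(np.float64)
rel_err = np.abs(t_mixed - t_exact).max() / np.abs(t_exact).max()
print(f"relative error from half-precision small scales: {rel_err:.2e}")
```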

  16. BT-Nurse: computer generation of natural language shift summaries from complex heterogeneous medical data

    PubMed Central

    Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy; Westwater, Dave

    2011-01-01

    The BT-Nurse system uses data-to-text technology to automatically generate a natural language nursing shift summary in a neonatal intensive care unit (NICU). The summary is solely based on data held in an electronic patient record system; no additional data entry is required. BT-Nurse was tested for two months in the Royal Infirmary of Edinburgh NICU. Nurses were asked to rate the understandability, accuracy, and helpfulness of the computer-generated summaries; they were also asked for free-text comments about the summaries. The nurses found the majority of the summaries to be understandable, accurate, and helpful (p<0.001 for all measures). However, nurses also pointed out many deficiencies, especially with regard to extra content they wanted to see in the computer-generated summaries. In conclusion, natural language NICU shift summaries can be automatically generated from an electronic patient record, but our proof-of-concept software needs considerable additional development work before it can be deployed. PMID:21724739

  17. Floating-Point Modules Targeted for Use with RC Compilation Tools

    NASA Technical Reports Server (NTRS)

    Sahin, Ibrahin; Gloster, Clay S.

    2000-01-01

    Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems and in most cases they provided the smallest published execution time. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time than general-purpose processors. This increased development time of RC systems provides the motivation to develop an optimized module library with an assembly language instruction format interface for use with future RC systems that will reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experiences show that, with these modules, an 8-10X speedup over general-purpose processors can be achieved.

  18. X-38 Experimental Controls Laws

    NASA Technical Reports Server (NTRS)

    Munday, Steve; Estes, Jay; Bordano, Aldo J.

    2000-01-01

    X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
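    A schematic of the control structure described above, with assumed single-axis dynamics and illustrative gains (not the X-38 aerodynamic database or MACH gains): an outer proportional-plus-integral attitude loop commands a body rate, and the inner loop inverts the onboard model so the achieved rate derivative tracks the command.

```python
# Schematic single-axis dynamic-inversion sketch (illustrative model and gains;
# not the X-38/MACH implementation). Outer P&I attitude loop -> rate command;
# inner loop inverts the onboard model w_dot = f(w) + g*u.
import numpy as np

f = lambda w: -0.8 * w * abs(w)          # assumed aerodynamic damping
g = 2.0                                  # assumed control effectiveness

def controller(phi_cmd, phi, w, ei, dt, kp=3.0, ki=2.0, kr=5.0):
    e = phi_cmd - phi
    ei += e * dt                         # integral of attitude error
    w_cmd = kp * e + ki * ei             # outer P&I loop: commanded body rate
    nu = kr * (w_cmd - w)                # desired rate derivative
    u = (nu - f(w)) / g                  # dynamic inversion of the model
    return u, ei

phi = w = ei = 0.0
dt = 0.01
for _ in range(500):                     # 5 s of simulated flight
    u, ei = controller(np.deg2rad(30.0), phi, w, ei, dt)
    w += (f(w) + g * u) * dt             # integrate the plant
    phi += w * dt
print(f"bank angle after 5 s: {np.degrees(phi):.1f} deg")
```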

  19. Simulation of DKIST solar adaptive optics system

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Carlisle, Elizabeth; Schmidt, Dirk

    2016-07-01

    Solar adaptive optics (AO) simulations are a valuable tool to guide the design and optimization process of current and future solar AO and multi-conjugate AO (MCAO) systems. Solar AO and MCAO systems rely on extended object cross-correlating Shack-Hartmann wavefront sensors to measure the wavefront. Accurate solar AO simulations require computationally intensive operations, which have until recently presented a prohibitive computational cost. We present an update on the status of a solar AO and MCAO simulation tool being developed at the National Solar Observatory. The simulation tool is a multi-threaded application written in the C++ language that takes advantage of current large multi-core CPU computer systems and fast ethernet connections to provide accurate full simulation of solar AO and MCAO systems. It interfaces with KAOS, a state of the art solar AO control software developed by the Kiepenheuer-Institut fuer Sonnenphysik, that provides reliable AO control. We report on the latest results produced by the solar AO simulation tool.

  20. Non-exchangeability of running vs. other exercise in their association with adiposity, and its implications for public health recommendations.

    PubMed

    Williams, Paul T

    2012-01-01

    Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10⁻¹⁵) for running than non-running exercise for BMI (slopes ± SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m² per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m² per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.
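    A small worked example of the two accounting methods compared above; only the 1.02 MET·h/km factor comes from the text, while the daily distance, pace, and the MET value assigned to that pace are assumptions chosen to fall in the reported 38-43% range.

```python
# Worked example of the two ways of computing the running dose. Only the
# 1.02 MET·h/km factor is from the text; distance, pace, and the MET value
# for that pace are illustrative assumptions.
km_per_day = 8.0
pace_km_per_h = 10.0
met_at_pace = 14.5                                  # assumed value for this pace

methr_from_distance = 1.02 * km_per_day             # 8.16 MET·h/d
methr_from_time = (km_per_day / pace_km_per_h) * met_at_pace   # 0.8 h * 14.5 = 11.6

print(methr_from_distance, methr_from_time,
      f"time-and-intensity exceeds distance by {methr_from_time / methr_from_distance - 1:.0%}")
```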

  1. Cooperative and competitive concurrency in scientific computing. A full open-source upgrade of the program for dynamical calculations of RHEED intensity oscillations

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2011-06-01

    A computational model is a computer program that attempts to simulate an abstract model of a particular system. Computational models involve enormous numbers of calculations and often require supercomputer speed. As personal computers become more and more powerful, more laboratory experiments can be converted into computer models that can be interactively examined by scientists and students without the risk and cost of the actual experiments. The future of programming is concurrent programming. The threaded programming model provides application programmers with a useful abstraction of concurrent execution of multiple tasks. The objective of this release is to address the design of an architecture for scientific applications that execute as multiple threads, as well as implementations of the related shared data structures.
    New version program summary:
    Program title: GrowthCP
    Catalogue identifier: ADVL_v4_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v4_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 32 269
    No. of bytes in distributed program, including test data, etc.: 8 234 229
    Distribution format: tar.gz
    Programming language: Free Object Pascal
    Computer: multi-core x64-based PC
    Operating system: Windows XP, Vista, 7
    Has the code been vectorised or parallelized?: No
    RAM: More than 1 GB. The program requires a 32-bit or 64-bit processor to run the generated code. Memory is addressed using 32-bit (on 32-bit processors) or 64-bit (on 64-bit processors with 64-bit addressing) pointers. The amount of addressed memory is limited only by the available amount of virtual memory.
    Supplementary material: The figures mentioned in the "Summary of revisions" section can be obtained here.
    Classification: 4.3, 7.2, 6.2, 8, 14
    External routines: Lazarus [1]
    Catalogue identifier of previous version: ADVL_v3_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 709
    Does the new version supersede the previous version?: Yes
    Nature of problem: Reflection high-energy electron diffraction (RHEED) is an important in-situ analysis technique, which is capable of giving quantitative information about the growth process of thin layers and its control. It can be used to calibrate growth rate, analyze surface morphology, calibrate surface temperature, monitor the arrangement of the surface atoms, and provide information about growth kinetics. Such control allows the development of structures where the electrons can be confined in space, giving quantum wells or even quantum dots. In order to determine the positions of atoms in the first few layers, the RHEED intensity must be measured as a function of the scattering angles and then compared with dynamic calculations. The objective of this release is to address the design of an architecture for an application that simulates the rocking-curve RHEED intensities during the hetero-epitaxial growth of thin films.
    Solution method: GrowthCP is a complex numerical model that uses multiple threads for the simulation of epitaxial growth of thin layers. The model consists of two transactional parts. The first part is a mathematical model based on the Runge-Kutta method with adaptive step-size control. The second part represents first principles of the one-dimensional RHEED computational model. This model is based on solving a one-dimensional Schrödinger equation. Several problems can arise when applications contain a mixture of data-access code, numerical code, and presentation code. Such applications are difficult to maintain, because interdependencies between all the components cause strong ripple effects whenever a change is made anywhere. Adding new data views often requires reimplementing the numerical code, which then requires maintenance in multiple places. In order to solve problems of this type, the computational and threading layers of the project have been implemented in the form of one design pattern as part of a Model-View-Controller architecture.
    Reasons for new version: Responding to users' feedback, the Growth09 project has been upgraded to a standard that allows sample computations of the RHEED intensities for a disordered surface to be carried out for a wide range of single- and epitaxial hetero-structures. The design pattern on which the project is based has also been improved. It is shown that this model can be effectively used for multithreaded growth simulations of thin epitaxial layers and the corresponding RHEED intensities for a wide range of single- and hetero-structures. Responding to users' feedback, the present release has been implemented using a well-documented free compiler [1] that requires neither special configuration nor the installation of additional libraries.
    Summary of revisions: The logical structure of the Growth09 program has been modified according to the scheme shown in Fig. 1. The class diagram in Fig. 1 is a static view of the main platform-specific elements of the GrowthCP architecture. Fig. 2 provides a dynamic view by showing a simplified creation-and-destruction sequence diagram for the process. The program requires the user to provide the appropriate parameters in the form of a knowledge base for the crystal structures under investigation. These parameters are loaded from the parameters.ini files at run-time. Instructions to prepare the .ini files can be found in the new distribution. The program enables carrying out different growth models and one-dimensional dynamical RHEED calculations for the fcc lattice with a three-atom basis, the fcc lattice with a two-atom basis, the fcc lattice with a single-atom basis, and the Zinc-Blende, Sodium Chloride, and Wurtzite crystalline structures and hetero-structures; in addition, the Fourier component of the scattering potential in the TRHEEDCalculations.crystPotUgXXX() procedure can be modified and implemented according to users' specific application requirements. The Fourier component of the scattering potential of the whole crystalline hetero-structure can be determined as a sum of contributions coming from all thin slices of individual atomic layers. To carry out one-dimensional calculations of the scattering potentials, the program uses properly constructed self-consistent procedures. Each component of the system shown in Figs. 1 and 2 is fully extendable and can easily be adapted to new or changing requirements. The two essential logical elements of the system, the TGrowthTransaction and TRHEEDCalculations classes, were designed and implemented so that they exchange information directly, without the need for data-exchange files; in consequence, each of them can be independently modified and/or extended. Implementing other types of differential equations, and different algorithms for solving them, in the TGrowthTransaction class does not require another implementation of the TRHEEDCalculations class. Similarly, implementing other forms of the scattering potential and different algorithms for the RHEED calculation has no influence on the construction of the TGrowthTransaction class.
    Unusual features: The program is distributed in the form of the main project GrowthCP.lpr, with associated files, and should be compiled using the Lazarus IDE. The program should be compiled with English/USA regional and language options.
    Running time: The typical running time is machine and user-parameter dependent.
    References: [1] http://sourceforge.net/projects/lazarus/files/.

  2. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for the error were recorded. Thirty-eight nurses completed the study. Summing the completion times of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading/interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
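    A minimal sketch of the kind of computer assistance evaluated in this study: a weight-based dose lookup that also converts the dose to a volume to draw up, removing the manual concentration-reading and arithmetic steps that accounted for most errors. The drug table and numbers are placeholders for illustration only, not clinical values.

```python
# Minimal sketch of computer-assisted weight-based dosing. The drug entries and
# numbers are placeholders for illustration only, NOT clinical values.
DRUGS = {
    # name: (dose in mg per kg, vial concentration in mg per mL)
    "drug_a": (0.01, 0.1),
    "drug_b": (1.5, 10.0),
}

def dose_and_volume(drug, weight_kg):
    """Return (dose in mg, volume to draw up in mL) for the given weight."""
    mg_per_kg, mg_per_ml = DRUGS[drug]
    dose_mg = mg_per_kg * weight_kg
    return dose_mg, dose_mg / mg_per_ml

dose_mg, volume_ml = dose_and_volume("drug_b", weight_kg=12.0)
print(f"dose: {dose_mg:.2f} mg, draw up {volume_ml:.2f} mL")
```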

  3. Astropulse: A Search for Microsecond Transient Radio Signals Using Distributed Computing. I. Methodology

    NASA Astrophysics Data System (ADS)

    Von Korff, J.; Demorest, P.; Heien, E.; Korpela, E.; Werthimer, D.; Cobb, J.; Lebofsky, M.; Anderson, D.; Bankay, B.; Siemion, A.

    2013-04-01

    We are performing a transient, microsecond timescale radio sky survey, called "Astropulse," using the Arecibo telescope. Astropulse searches for brief (0.4 μs to 204.8 μs ), wideband (relative to its 2.5 MHz bandwidth) radio pulses centered at 1420 MHz. Astropulse is a commensal (piggyback) survey, and scans the sky between declinations of -1.°33 and 38.°03. We obtained 1540 hr of data in each of seven beams of the ALFA receiver, with two polarizations per beam. The data are one-bit complex sampled at the Nyquist limit of 0.4 μs per sample. Examination of timescales on the order of microseconds is possible because we used coherent dedispersion, a technique that has frequently been used for targeted observations, but has never been associated with a radio sky survey. The more usual technique, incoherent dedispersion, cannot resolve signals below a minimum timescale which depends on the signal's dispersion measure (DM) and frequency. However, coherent dedispersion requires more intensive computation than incoherent dedispersion. The required processing power was provided by BOINC, the Berkeley Open Infrastructure for Network Computing. BOINC is a distributed computing system, allowing us to utilize hundreds of thousands of volunteers' computers to perform the necessary calculations for coherent dedispersion. Astrophysical events that might produce brief radio pulses include giant pulses from pulsars, rotating radio transients, exploding primordial black holes, or new sources yet to be imagined. Radio frequency interference and noise contaminate the data; these are mitigated by a number of techniques including multi-polarization correlation, DM repetition detection, and frequency profiling.

  4. QMachine: commodity supercomputing in web browsers

    PubMed Central

    2014-01-01

    Background Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics’ “Big Data” from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. Results QM is an open-sourced, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running “download and install” software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. Conclusions QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments. PMID:24913605

  5. Computational Modeling of Space Physiology for Informing Spaceflight Countermeasure Design and Predictions of Efficacy

    NASA Technical Reports Server (NTRS)

    Lewandowski, B. E.; DeWitt, J. K.; Gallo, C. A.; Gilkey, K. M.; Godfrey, A. P.; Humphreys, B. T.; Jagodnik, K. M.; Kassemi, M.; Myers, J. G.; Nelson, E. S.

    2017-01-01

    MOTIVATION: Spaceflight countermeasures mitigate the harmful effects of the space environment on astronaut health and performance. Exercise has historically been used as a countermeasure to physical deconditioning, and additional countermeasures including lower body negative pressure, blood flow occlusion and artificial gravity are being researched as countermeasures to spaceflight-induced fluid shifts. The NASA Digital Astronaut Project uses computational models of physiological systems to inform countermeasure design and to predict countermeasure efficacy. OVERVIEW: Computational modeling supports the development of the exercise devices that will be flown on NASA's new exploration crew vehicles. Biomechanical modeling is used to inform design requirements to ensure that exercises can be properly performed within the volume allocated for exercise and to determine whether the limited mass, volume and power requirements of the devices will affect biomechanical outcomes. Models of muscle atrophy and bone remodeling can predict device efficacy for protecting musculoskeletal health during long-duration missions. A lumped-parameter whole-body model of the fluids within the body, which includes the blood within the cardiovascular system, the cerebral spinal fluid, interstitial fluid and lymphatic system fluid, estimates compartmental changes in pressure and volume due to gravitational changes. These models simulate fluid shift countermeasure effects and predict the associated changes in tissue strain in areas of physiological interest to aid in predicting countermeasure effectiveness. SIGNIFICANCE: Development and testing of spaceflight countermeasure prototypes are resource-intensive efforts. Computational modeling can supplement this process by performing simulations that reduce the amount of necessary experimental testing. Outcomes of the simulations are often important for the definition of design requirements and the identification of factors essential in ensuring countermeasure efficacy.

  6. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  8. Computer-based psychological treatment for comorbid depression and problematic alcohol and/or cannabis use: a randomized controlled trial of clinical efficacy.

    PubMed

    Kay-Lambkin, Frances J; Baker, Amanda L; Lewin, Terry J; Carr, Vaughan J

    2009-03-01

    To evaluate computer- versus therapist-delivered psychological treatment for people with comorbid depression and alcohol/cannabis use problems. Randomized controlled trial. Community-based participants in the Hunter Region of New South Wales, Australia. Ninety-seven people with comorbid major depression and alcohol/cannabis misuse. All participants received a brief intervention (BI) for depressive symptoms and substance misuse, followed by random assignment to: no further treatment (BI alone); or nine sessions of motivational interviewing and cognitive behaviour therapy (intensive MI/CBT). Participants allocated to the intensive MI/CBT condition were selected at random to receive their treatment 'live' (i.e. delivered by a psychologist) or via a computer-based program (with brief weekly input from a psychologist). Depression, alcohol/cannabis use and hazardous substance use index scores measured at baseline, and 3, 6 and 12 months post-baseline assessment. (i) Depression responded better to intensive MI/CBT compared to BI alone, with 'live' treatment demonstrating a strong short-term beneficial effect which was matched by computer-based treatment at 12-month follow-up; (ii) problematic alcohol use responded well to BI alone and even better to the intensive MI/CBT intervention; (iii) intensive MI/CBT was significantly better than BI alone in reducing cannabis use and hazardous substance use, with computer-based therapy showing the largest treatment effect. Computer-based treatment, targeting both depression and substance use simultaneously, results in at least equivalent 12-month outcomes relative to a 'live' intervention. For clinicians treating people with comorbid depression and alcohol problems, BIs addressing both issues appear to be an appropriate and efficacious treatment option. Primary care of those with comorbid depression and cannabis use problems could involve computer-based integrated interventions for depression and cannabis use, with brief regular contact with the clinician to check on progress.

  9. A parallel computing engine for a class of time critical processes.

    PubMed

    Nabhan, T M; Zomaya, A Y

    1997-01-01

    This paper focuses on the efficient parallel implementation of systems of a numerically intensive nature over loosely coupled multiprocessor architectures. These analytical models are of significant importance to many real-time systems that have to meet severe time constraints. A parallel computing engine (PCE) has been developed in this work for the efficient simplification and the near-optimal scheduling of numerical models over the different cooperating processors of the parallel computer. First, the analytical system is efficiently coded in its general form. The model is then simplified by using any available information (e.g., constant parameters). A task graph representing the interconnections among the different components (or equations) is generated. The graph can then be compressed to control the computation/communication requirements. The task scheduler employs a graph-based iterative scheme, based on the simulated annealing algorithm, to map the vertices of the task graph onto a Multiple-Instruction-stream Multiple-Data-stream (MIMD) type of architecture. The algorithm uses a nonanalytical cost function that properly considers the computation capability of the processors, the network topology, the communication time, and congestion possibilities. Moreover, the proposed technique is simple, flexible, and computationally viable. The efficiency of the algorithm is demonstrated by two case studies with good results.
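    The scheduling step can be sketched compactly. The code below is a generic simulated-annealing mapping of a task graph onto processors with an assumed cost function (load imbalance plus the weight of communication edges cut across processors); the PCE's actual nonanalytical cost function and graph-compression steps are only described qualitatively above, so this is an illustration rather than a reimplementation.

```python
# Generic simulated-annealing task-to-processor mapping with an assumed cost
# (load imbalance + weight of communication edges cut across processors).
# Illustrative only; not the PCE's cost function.
import math
import random

def anneal(task_cost, edges, n_procs, t0=10.0, cooling=0.995, steps=20000):
    n = len(task_cost)
    assign = [random.randrange(n_procs) for _ in range(n)]

    def cost(a):
        load = [0.0] * n_procs
        for task, proc in enumerate(a):
            load[proc] += task_cost[task]
        comm = sum(w for i, j, w in edges if a[i] != a[j])
        return max(load) + comm

    current, temp = cost(assign), t0
    for _ in range(steps):
        t = random.randrange(n)
        old = assign[t]
        assign[t] = random.randrange(n_procs)     # propose moving one task
        proposed = cost(assign)
        if proposed <= current or random.random() < math.exp((current - proposed) / temp):
            current = proposed                    # accept the move
        else:
            assign[t] = old                       # reject and restore
        temp *= cooling
    return assign, current

random.seed(0)
tasks = [random.uniform(1.0, 5.0) for _ in range(12)]
edges = [(i, i + 1, 0.5) for i in range(11)]      # a chain of communicating tasks
print(anneal(tasks, edges, n_procs=3))
```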

  10. A micro-hydrology computation ordering algorithm

    NASA Astrophysics Data System (ADS)

    Croley, Thomas E.

    1980-11-01

    Discrete-distributed-parameter models are essential for watershed modelling where practical consideration of spatial variations in watershed properties and inputs is desired. Such modelling is necessary for analysis of detailed hydrologic impacts from management strategies and land-use effects. Trade-offs between model validity and model complexity exist in resolution of the watershed. Once these are determined, the watershed is then broken into sub-areas which each have essentially spatially-uniform properties. Lumped-parameter (micro-hydrology) models are applied to these sub-areas and their outputs are combined through the use of a computation ordering technique, as illustrated by many discrete-distributed-parameter hydrology models. Manual ordering of these computations requires forethought, and is tedious, error prone, sometimes storage intensive and least adaptable to changes in watershed resolution. A programmable algorithm for ordering micro-hydrology computations is presented that enables automatic ordering of computations within the computer via an easily understood and easily implemented "node" definition, numbering and coding scheme. This scheme and the algorithm are detailed in logic flow-charts and an example application is presented. Extensions and modifications of the algorithm are easily made for complex geometries or differing micro-hydrology models. The algorithm is shown to be superior to manual ordering techniques and has potential use in high-resolution studies.
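    In the same spirit as the node scheme described above, a computation order can be produced automatically with a topological sort of the sub-area graph: each node lists the upstream nodes whose outflow it needs, and Kahn's algorithm emits a valid processing order. The node names below are made up for illustration and do not follow the paper's numbering and coding scheme.

```python
# Automatic computation ordering via topological sort (Kahn's algorithm).
# Each sub-area "node" lists the upstream nodes whose outflow it needs; the
# node names are illustrative, not the paper's numbering and coding scheme.
from collections import deque

def computation_order(upstream):
    """upstream: dict node -> list of nodes that must be computed first."""
    indegree = {node: len(ups) for node, ups in upstream.items()}
    downstream = {node: [] for node in upstream}
    for node, ups in upstream.items():
        for u in ups:
            downstream[u].append(node)

    ready = deque(node for node, deg in indegree.items() if deg == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for d in downstream[node]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(upstream):
        raise ValueError("cycle detected in the node graph")
    return order

watershed = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"], "E": ["C"], "outlet": ["D", "E"]}
print(computation_order(watershed))   # e.g. ['A', 'B', 'C', 'D', 'E', 'outlet']
```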

  11. LBM-EP: Lattice-Boltzmann method for fast cardiac electrophysiology simulation from 3D images.

    PubMed

    Rapaka, S; Mansi, T; Georgescu, B; Pop, M; Wright, G A; Kamen, A; Comaniciu, Dorin

    2012-01-01

    Current treatments of heart rhythm disorders require careful planning and guidance for optimal outcomes. Computational models of cardiac electrophysiology are being proposed for therapy planning, but current approaches are either too simplified or too computationally intensive for patient-specific simulations in clinical practice. This paper presents a novel approach, LBM-EP, to solve any type of mono-domain cardiac electrophysiology model in near real time that is especially tailored for patient-specific simulations. The domain is discretized on a Cartesian grid with a level-set representation of the patient's heart geometry, previously estimated from images automatically. The cell model is calculated node-wise, while the transmembrane potential is diffused using the Lattice-Boltzmann method within the domain defined by the level-set. Experiments on synthetic cases, on a data set from CESC'10 and on one patient with myocardium scar showed that LBM-EP provides results comparable to an FEM implementation, while being 10-45 times faster. Fast, accurate, scalable and requiring no specific meshing, LBM-EP paves the way to efficient and detailed models of cardiac electrophysiology for therapy planning.

  12. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.

  13. Local sharpening and subspace wavefront correction with predictive dynamic digital holography

    NASA Astrophysics Data System (ADS)

    Sulaiman, Sennan; Gibson, Steve

    2017-09-01

    Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and the retrieval of the complex field. Consequently, many imaging and sensing applications, including microscopy and optical tweezing, have turned to using digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high speed imaging for target tracking, is the fact that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criterion. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes the sharpness of local regions in a detector plane by parallel independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.

  14. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    DOE PAGES

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...

    2017-02-28

    In this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors, with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations was conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For the simulation cases, it was found that for accurate RTD prediction, the Johnson and Jackson partial-slip solids boundary condition was required for all models, and a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.

  15. Orthodontic decompensation in skeletal Class III malocclusion: redefining the amount of movement assessed by Cone-Beam Computed Tomography

    PubMed Central

    Cappellozza, José Antonio Zuega; Guedes, Fabio Pinto; Nary, Hugo; Capelozza, Leopoldino; Cardoso, Mauricio de Almeida

    2015-01-01

    Introduction: Cone-Beam Computed Tomography (CBCT) is essential for tridimensional planning of orthognathic surgery, as it allows visualization and evaluation of bone structures and mineralized tissues. Tomographic slices allow evaluation of tooth inclination and individualization of movement performed during preoperative decompensation. The aim of this paper was to assess maxillary and mandibular incisors inclination pre and post orthodontic decompensation in skeletal Class III malocclusion. Methods: The study was conducted on six individuals with skeletal Class III malocclusion, surgically treated, who had Cone-Beam Computed Tomographic scans obtained before and after orthodontic decompensation. On multiplanar reconstruction view, tomographic slices (axial, coronal and sagittal) were obtained on the long axis of each incisor. The sagittal slice was used for measurement taking, whereas the references used to assess tooth inclination were the long axis of maxillary teeth in relation to the palatal plane and the long axis of mandibular teeth in relation to the mandibular plane. Results: There was significant variation in the inclination of incisors before and after orthodontic decompensation. This change was of greater magnitude in the mandibular arch, evidencing that natural compensation is more effective in this arch, thereby requiring more intensive decompensation. Conclusion: When routinely performed, the protocols of decompensation treatment in surgical individuals often result in intensive movements, which should be reevaluated, since the extent of movement predisposes to reduction in bone attachment levels and root length. PMID:26560818

  16. Toward a web-based real-time radiation treatment planning system in a cloud computing environment.

    PubMed

    Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei

    2013-09-21

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam(TM) STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.

  17. Toward a web-based real-time radiation treatment planning system in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Hum Na, Yong; Suh, Tae-Suk; Kapp, Daniel S.; Xing, Lei

    2013-09-01

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an ‘on-demand’ basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer’s constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm²) from the Varian TrueBeam™ STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.

  18. Scaling and efficiency of PRISM in adaptive simulations of turbulent premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonse, Shaheen R.; Bell, J.B.; Brown, N.J.

    1999-12-01

    The dominant computational cost in modeling turbulent combustion phenomena numerically with high fidelity chemical mechanisms is the time required to solve the ordinary differential equations associated with chemical kinetics. One approach to reducing that computational cost is to develop an inexpensive surrogate model that accurately represents the evolution of the chemical kinetics. One such approach, PRISM, develops a polynomial representation of the chemistry evolution in a local region of chemical composition space. This representation is then stored for later use. As the computation proceeds, the chemistry evolution for other points within the same region is computed by evaluating these polynomials instead of calling an ordinary differential equation solver. If initial data for advancing the chemistry is encountered that is not in any region for which a polynomial is defined, the methodology dynamically samples that region and constructs a new representation for that region. The utility of this approach is determined by the size of the regions over which the representation provides a good approximation to the kinetics and the number of these regions that are necessary to model the subset of composition space that is active during a simulation. In this paper, we assess the PRISM methodology in the context of a turbulent premixed flame in two dimensions. We consider a range of turbulent intensities, ranging from weak turbulence that has little effect on the flame to strong turbulence that tears pockets of burning fluid from the main flame. For each case, we explore a range of sizes for the local regions and determine the scaling behavior as a function of region size and turbulent intensity.
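    The bookkeeping behind this strategy can be sketched generically: hash the current point to a composition-space region, fit and store a local polynomial from sampled ODE solutions on the first visit, and evaluate the stored polynomial on later visits. The toy one-variable "kinetics", the hypercube regions, and the quadratic basis below are stand-ins for illustration; PRISM's actual regioning and regression differ.

```python
# Generic sketch of region-based polynomial reuse (illustrative; PRISM's real
# regioning and regression differ). A toy one-variable "kinetics" ODE stands in
# for the chemical mechanism.
import numpy as np
from scipy.integrate import solve_ivp

def advance_exact(y0, dt=1e-3):
    """Expensive path: integrate the toy kinetics ODE over one time step."""
    sol = solve_ivp(lambda t, y: -50.0 * y * (1.0 - y), (0.0, dt), [y0], rtol=1e-8)
    return sol.y[0, -1]

def features(y):
    return np.array([1.0, y, y * y])              # quadratic basis in composition

class RegionSurrogate:
    def __init__(self, region_width=0.05, n_samples=16):
        self.width, self.n, self.models = region_width, n_samples, {}

    def advance(self, y):
        key = int(y / self.width)                 # which composition-space region?
        if key not in self.models:                # first visit: sample and fit
            ys = np.linspace(key * self.width, (key + 1) * self.width, self.n)
            targets = np.array([advance_exact(v) for v in ys])
            X = np.array([features(v) for v in ys])
            self.models[key], *_ = np.linalg.lstsq(X, targets, rcond=None)
        return float(features(y) @ self.models[key])   # cheap path: reuse the fit

surrogate = RegionSurrogate()
y_surr = y_ode = 0.9
for _ in range(5):
    y_surr, y_ode = surrogate.advance(y_surr), advance_exact(y_ode)
print(f"surrogate: {y_surr:.6f}   direct ODE: {y_ode:.6f}")
```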

  19. PNNL's Data Intensive Computing research battles Homeland Security threats

    ScienceCinema

    David Thurman; Joe Kielman; Katherine Wolf; David Atkinson

    2018-05-11

    The Pacific Northwest National Laboratory's (PNNL's) approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  20. PNNL pushing scientific discovery through data intensive computing breakthroughs

    ScienceCinema

    Deborah Gracio; David Koppenaal; Ruby Leung

    2018-05-18

    The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  1. Self-Administered Cued Naming Therapy: A Single-Participant Investigation of a Computer-Based Therapy Program Replicated in Four Cases

    ERIC Educational Resources Information Center

    Ramsberger, Gail; Marie, Basem

    2007-01-01

    Purpose: This study examined the benefits of a self-administered, clinician-guided, computer-based, cued naming therapy. Results of intense and nonintense treatment schedules were compared. Method: A single-participant design with multiple baselines across behaviors and varied treatment intensity for 2 trained lists was replicated over 4…

  2. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivated the adaptation from a serial version to a parallelized one. Although an initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before the parallel optimization process could begin, a partial overhaul was required to streamline the code for user convenience and to revise the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains depend on the size of the data, the number of processors requested, and the computer used.

  3. Using a multifrontal sparse solver in a high performance, finite element code

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Lucas, Robert; Raefsky, Arthur

    1990-01-01

    We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high-performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the multiple-minimum-degree reordering heuristic to reduce the number of operations required to factor a sparse matrix, and full-matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
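
    The effect of a minimum-degree reordering on sparse factorization can be illustrated with the scipy sparse machinery; this is only an analogy to the paper's multifrontal solver (the toy matrix, its size, and the reordering option are illustrative, and the BLAS3 frontal kernels are hidden inside the library):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 2000
# Toy "stiffness matrix": a banded, diagonally dominant operator
A = (sp.eye(n) * 4 - sp.eye(n, k=1) - sp.eye(n, k=-1)
     - sp.eye(n, k=50) - sp.eye(n, k=-50)).tocsc()
b = np.ones(n)

# Minimum-degree style reordering (of A^T + A) to limit fill-in during factorization
lu_mmd = splu(A, permc_spec="MMD_AT_PLUS_A")
# Natural ordering for comparison: no fill-reducing permutation
lu_nat = splu(A, permc_spec="NATURAL")

print("nonzeros in factors (MMD):    ", lu_mmd.L.nnz + lu_mmd.U.nnz)
print("nonzeros in factors (natural):", lu_nat.L.nnz + lu_nat.U.nnz)
print("residual:", np.linalg.norm(A @ lu_mmd.solve(b) - b))
```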

  4. The Application of High Energy Resolution Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.

    2012-04-01

    Radiation detectors installed at key interdiction points provide defense against nuclear smuggling attempts by scanning vehicles and traffic for illicit nuclear material. These hypothetical threat scenarios may be modeled using radiation transport simulations. However, high-fidelity models are computationally intensive. Furthermore, the range of smuggler attributes and detector technologies creates a large problem space not easily overcome by brute-force methods. Previous research has demonstrated that decomposing the scenario into independently simulated components using Green's functions can simulate photon detector signals with coarse energy resolution. This paper extends that methodology by presenting physics enhancements and numerical treatments which allow an arbitrary level of energy resolution for photon transport. As a result, spectroscopic detector signals produced from full forward transport simulations can be replicated while requiring multiple orders of magnitude less computation time.
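
    The decomposition idea can be caricatured as chaining precomputed response matrices (Green's functions) rather than re-running a full transport calculation; the matrices and the energy grid below are random placeholders, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_e = 64                                  # number of photon energy groups (assumed)

source = rng.random(n_e)                  # emission spectrum of the threat source
G_cargo = rng.random((n_e, n_e)) * 0.02   # Green's function: transport through cargo
G_vehicle = rng.random((n_e, n_e)) * 0.05 # Green's function: transport through vehicle wall
R_detector = rng.random((n_e, n_e)) * 0.1 # detector response (energy deposition) matrix

# Full scenario response = composition of independently simulated components.
# Swapping any one matrix (different cargo, different detector) avoids re-simulating the rest.
signal = R_detector @ (G_vehicle @ (G_cargo @ source))
print(signal.shape, signal.sum())
```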

  5. High-intensity exercise interventions in cancer survivors: a systematic review exploring the impact on health outcomes.

    PubMed

    Toohey, Kellie; Pumpa, Kate; McKune, Andrew; Cooke, Julie; Semple, Stuart

    2018-01-01

    There is an increasing body of evidence supporting high-intensity exercise as an effective and time-efficient intervention for improving health in cancer survivors. The aim of this study was to evaluate (1) the efficacy and (2) the safety of high-intensity exercise interventions in improving selected health outcomes in cancer survivors. Design: systematic review. Data sources: Google Scholar, EBSCO, CINAHL Plus, Computers and Applied Sciences Complete, Health Source-Consumer Edition, Health Source: Nursing/Academic Edition, MEDLINE, Web of Science, and SPORTDiscus, from inception up until August 2017. Eligibility criteria: randomized controlled trials of high-intensity exercise interventions in cancer survivors (all cancer types) with health-related outcome measures. The guidelines adopted for this review were the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The search returned 447 articles, of which nine (n = 531 participants; mean age 58 ± 9.5 years) met the eligibility criteria. Exercise interventions of between 4 and 18 weeks, consisting of high-intensity interval bouts of up to 4 min, were compared with a continuous moderate-intensity (CMIT) intervention or a control group. High-intensity exercise interventions elicited significant improvements in VO2max, strength, body mass, body fat, and hip and waist circumference compared with CMIT and/or control groups. The studies reviewed showed low risk in participating in supervised high-intensity exercise interventions. Mixed-mode high-intensity interventions that included both aerobic and resistance exercises were most effective, improving the aerobic fitness of cancer survivors by 12.45-21.35% from baseline to post-intervention. High-intensity exercise interventions improved physical and physiological health-related outcome measures such as cardiovascular fitness and strength in cancer survivors. Given that high-intensity exercise sessions require a shorter time commitment, they may be a useful modality to improve health outcomes in those who are time-poor. The risk of adverse events associated with high-intensity exercise was low.

  6. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computation from scratch after a resource failure has occurred, in order to satisfy the user's Quality of Service (QoS) requirement. Job Scheduling with Fault Tolerance in Grid Computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate, as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show that there is an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated into it. The performance evaluation of the two algorithms was measured in terms of three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
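
    A minimal sketch of the scheduling idea, with fault tolerance included: resource selection follows the usual ACO probability rule, biased here by an assumed (1 - failure_rate) factor, and a checkpoint records completed work so that a failed job resumes instead of restarting. The parameter names, the failure model, and the checkpoint granularity are illustrative, not taken from the paper:

```python
import random

resources = {                      # hypothetical grid resources
    "R1": {"speed": 4.0, "fail_rate": 0.10, "pheromone": 1.0},
    "R2": {"speed": 2.0, "fail_rate": 0.02, "pheromone": 1.0},
    "R3": {"speed": 6.0, "fail_rate": 0.30, "pheromone": 1.0},
}
ALPHA, BETA = 1.0, 2.0             # usual ACO weighting of pheromone vs. heuristic

def choose_resource():
    # desirability = pheromone^alpha * heuristic^beta, discounted by failure rate
    weights = {name: (r["pheromone"] ** ALPHA) * (r["speed"] ** BETA) * (1.0 - r["fail_rate"])
               for name, r in resources.items()}
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

def run_job(units_of_work):
    checkpoint = 0                                  # completed units survive failures
    while checkpoint < units_of_work:
        name = choose_resource()
        r = resources[name]
        for _ in range(checkpoint, units_of_work):
            if random.random() < r["fail_rate"]:    # failure: roll back to last checkpoint
                r["pheromone"] *= 0.5               # evaporate pheromone on the failed resource
                break
            checkpoint += 1                         # checkpoint after each completed unit
        else:
            r["pheromone"] += 1.0 / r["speed"]      # reinforce the resource that finished the job
    return checkpoint

random.seed(3)
print("completed units:", run_job(20))
```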

  7. A multi-emitter fitting algorithm for potential live cell super-resolution imaging over a wide range of molecular densities.

    PubMed

    Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S

    2018-05-25

    Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density, from sparse to ultrahigh density, with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm⁻² with high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy) at densities up to 20 molecules μm⁻², and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less, through a significant reduction in the number of camera images required for a high-density reconstruction. The WTM algorithm is a computationally fast, multi-emitter fitting algorithm that can analyse a wide range of molecular densities. The algorithm is available through the website https://doi.org/10.17632/bf3z6xpn5j.1. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.
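
    WTM itself uses wedge-shaped templates, which are not reproduced here; the sketch below only shows the generic template-matching step (normalized cross-correlation followed by peak picking) that such localization algorithms build on, with a Gaussian spot standing in for the PSF template:

```python
import numpy as np
from skimage.feature import match_template, peak_local_max

# Synthetic frame: two overlapping Gaussian "emitters" plus noise (illustrative only)
yy, xx = np.mgrid[0:64, 0:64]
def spot(x0, y0, sigma=1.6):
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

frame = spot(30, 30) + spot(33, 31) + 0.05 * np.random.default_rng(0).standard_normal((64, 64))

# Template: the expected single-molecule PSF (here an ideal 7x7 Gaussian patch)
t = 7
template = spot(32, 32)[32 - t // 2:32 + t // 2 + 1, 32 - t // 2:32 + t // 2 + 1]

score = match_template(frame, template, pad_input=True)   # normalized cross-correlation map
peaks = peak_local_max(score, min_distance=1, threshold_abs=0.3)
print("candidate emitter positions (row, col):", peaks)
```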

  8. Regional scale landslide risk assessment with a dynamic physical model - development, application and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Luna, Byron Quan; Vidar Vangelsten, Bjørn; Liu, Zhongqiang; Eidsvig, Unni; Nadim, Farrokh

    2013-04-01

    Landslide risk must be assessed at the appropriate scale in order to allow effective risk management. At the moment, few deterministic models exist that can perform all the computations required for a complete landslide risk assessment at a regional scale. This arises from the difficulty of precisely defining the location and volume of the released mass, and from the inability of the models to compute the displacement for a large number of individual initiation areas (which is computationally exhausting). This paper presents a medium-scale, dynamic physical model for rapid mass movements in mountainous and volcanic areas. The deterministic nature of the approach makes it possible to apply it to other sites, since it considers the frictional equilibrium conditions for the initiation process, the rheological resistance of the displaced flow for the run-out process, and a fragility curve that links intensity to economic loss for each building. The model takes into account the triggering effect of an earthquake, of intense rainfall, and of a combination of both (spatial and temporal). The run-out module of the model treats the flow as a 2-D continuum medium, solving the equations of mass balance and momentum conservation. The model is embedded in an open-source geographical information system (GIS) environment, it is computationally efficient, and it is transparent (understandable and comprehensible) for the end user. The model was applied to a virtual region, assessing landslide hazard, vulnerability and risk. A Monte Carlo simulation scheme was applied to quantify, propagate and communicate the effects of uncertainty in input parameters on the final results. In this technique, the input distributions are recreated through sampling and the failure criteria are calculated for each stochastic realisation of the site properties. The model is able to identify the released volumes of the critical slopes and the areas threatened by the run-out intensity. The final outcome is an estimate of individual building damage and total economic risk. The research leading to these results has received funding from the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement No 265138, New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe (MATRIX).
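
    The Monte Carlo step can be sketched with a textbook infinite-slope factor of safety standing in for the paper's full model; the parameter distributions, the failure criterion FS < 1, and the slope geometry are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                   # number of stochastic realisations

# Sampled site properties (all distributions are illustrative)
phi = np.radians(rng.normal(32.0, 3.0, N))    # friction angle [rad]
c = rng.lognormal(np.log(5.0), 0.4, N)        # cohesion [kPa]
gamma = rng.normal(19.0, 1.0, N)              # unit weight [kN/m^3]
m = rng.uniform(0.0, 1.0, N)                  # saturated fraction of the failure depth
beta = np.radians(35.0)                       # slope angle
z, gamma_w = 2.0, 9.81                        # failure depth [m], unit weight of water

# Infinite-slope factor of safety (standard textbook form)
fs = (c + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi)) / (
     gamma * z * np.sin(beta) * np.cos(beta))

print("P(failure) = P(FS < 1) =", np.mean(fs < 1.0))
```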

  9. Case 223: Arytenoid Dislocation.

    PubMed

    Oppenheimer, Avi G; Gulati, Vishal; Kirsch, Jacobo; Alemar, Gilberto O

    2015-11-01

    A 71-year-old man with a history of atrial fibrillation refractory to medical therapy and lung cancer status post left upper lobectomy presented to our hospital for elective cardioversion and rate control with Tikosyn. Overnight, the patient became unresponsive and was found to be in a state of cardiogenic shock. A code was called, and he was stabilized after cardioversion and bedside intubation. His stay in the intensive care unit was complicated by ventilator-associated pneumonia. The patient subsequently underwent multiple failed extubation attempts, requiring two additional reintubations. He was finally extubated 18 days after his initial admission to the intensive care unit. After he was discharged, he reported a hoarse voice and was only able to whisper. His voice varied in timbre and volume, and it became hoarser with use. Otolaryngology evaluation, including laryngoscopy and video stroboscopy, showed immobility of the right vocal cord. He was referred for speech therapy, and a computed tomographic (CT) examination of the neck was ordered.

  10. GEM System: automatic prototyping of cell-wide metabolic pathway models from genomes.

    PubMed

    Arakawa, Kazuharu; Yamada, Yohei; Shinoda, Kosaku; Nakayama, Yoichi; Tomita, Masaru

    2006-03-23

    Successful realization of a "systems biology" approach to analyzing cells is a grand challenge for our understanding of life. However, current modeling approaches to cell simulation are labor-intensive, manual affairs, and therefore constitute a major bottleneck in the evolution of computational cell biology. We developed the Genome-based Modeling (GEM) System for the purpose of automatically prototyping simulation models of cell-wide metabolic pathways from genome sequences and other public biological information. Models generated by the GEM System include an entire Escherichia coli metabolism model comprising 968 reactions involving 1195 metabolites, achieving 100% coverage when compared with the KEGG database, 92.38% with the EcoCyc database, and 95.06% with the iJR904 genome-scale model. The GEM System prototypes qualitative models to reduce the labor-intensive tasks required for systems biology research. Models of over 90 bacterial genomes are available at our web site.

  11. Geocoronal imaging with Dynamics Explorer - A first look

    NASA Technical Reports Server (NTRS)

    Rairden, R. L.; Frank, L. A.; Craven, J. D.

    1983-01-01

    The ultraviolet photometer of the University of Iowa spin-scan auroral imaging instrumentation on board Dynamics Explorer-1 has returned numerous hydrogen Lyman alpha images of the geocorona from altitudes of 570 km to 23,300 km (1.09 R_E to 4.66 R_E geocentric radial distance). The hydrogen density gradient is shown by a plot of the zenith intensities throughout this range, which decrease to near celestial background values as the spacecraft approaches apogee. Characterizing the upper geocorona as optically thin (single scattering), the zenith intensity is converted directly to vertical column density. This approximation loses its validity deeper in the geocorona, where the hydrogen is demonstrated to be optically thick in that there is no Lyman alpha limb brightening. Further study of the geocoronal hydrogen distribution will require computer modeling of the radiative transfer. Previously announced in STAR as N83-20889
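
    For the optically thin (single-scattering) regime mentioned here, the standard aeronomy relation between the observed column emission rate and the column density is as follows (stated from general knowledge, not taken from the paper):

```latex
% Column emission rate 4\pi I of an optically thin resonance-scattering layer,
% with g the scattering rate per atom (photons s^{-1}) and N_H the hydrogen
% column density along the line of sight:
4\pi I = g \, N_H
\quad\Longrightarrow\quad
N_H = \frac{4\pi I}{g}
```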

  12. Calculation of stress intensity factors in an isotropic multicracked plate: Part 2: Symbolic/numeric implementation

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Binienda, W. K.; Tan, H. Q.; Xu, M. H.

    1992-01-01

    Analytical derivations of stress intensity factors (SIF's) of a multicracked plate can be complex and tedious. Recent advances, however, in intelligent application of symbolic computation can overcome these difficulties and provide the means to rigorously and efficiently analyze this class of problems. Here, the symbolic algorithm required to implement the methodology described in Part 1 is presented. The special problem-oriented symbolic functions to derive the fundamental kernels are described, and the associated automatically generated FORTRAN subroutines are given. As a result, a symbolic/FORTRAN package named SYMFRAC, capable of providing accurate SIF's at each crack tip, was developed and validated. Simple illustrative examples using SYMFRAC show the potential of the present approach for predicting the macrocrack propagation path due to existing microcracks in the vicinity of a macrocrack tip, when the influence of the microcrack's location, orientation, size, and interaction are taken into account.

  13. Phase transformations in driven alloys: a useful concept for wear studies [Transformations de phases dans les alliages forcés: un concept utile pour les études d'usure]

    NASA Astrophysics Data System (ADS)

    Chaffron, Laurent; Le Bouar, Yann; Martin, Georges

    2001-07-01

    The concept of driven alloys is introduced, and examples are given for alloys under irradiation or under high-energy ball milling. Both real and computer experiments show that the stationary configuration of alloys under external forcing depends on the overall temperature, on the ratio of the ballistic to the thermally activated atomic jump frequency, and on the space and time correlation of the ballistic jumps. In addition to temperature, the description of driven phase transformations therefore requires a new control parameter: the intensity of forcing. The latter is shown to be the irradiation flux for alloys under irradiation and, under milling, the momentum transferred per unit time to an elementary volume of matter. We show how to use these concepts to address the wear rate of high-speed (TGV) train wheels: the wear rate is found to be proportional to the intensity of forcing.

  14. Improvement in the accuracy of flux measurement of radio sources by exploiting an arithmetic pattern in photon bunching noise

    NASA Astrophysics Data System (ADS)

    Lieu, Richard

    2018-01-01

    A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware; rather, it operates at the software level, with the help of high-precision computers, to reprocess the intensity time series of the incident light into a new series with a smaller bunching-noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number, the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.

  15. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  16. Interactive 3D display simulator for autostereoscopic smart pad

    NASA Astrophysics Data System (ADS)

    Choe, Yeong-Seon; Lee, Ho-Dong; Park, Min-Chul; Son, Jung-Young; Park, Gwi-Tae

    2012-06-01

    There is growing interest in displaying 3D images on smart pads for entertainment and information services. Designing and realizing various types of 3D displays on a smart pad is not easy within the available cost and time. Software simulation can be an alternative method to save costs and shorten development. In this paper, we propose a 3D display simulator for an autostereoscopic smart pad. It simulates the light intensity of each view and the crosstalk for smart pad display panels. Designers of 3D displays for smart pads can interactively simulate many kinds of autostereoscopic displays by changing the parameters required for panel design. Crosstalk, which quantifies the leakage of one eye's image into the image of the other eye, and light intensity, which is used to compute the visual comfort zone, are important factors in designing an autostereoscopic display for a smart pad. Interaction enables intuitive designs. This paper describes an interactive 3D display simulator for an autostereoscopic smart pad.

  17. clearScience: Infrastructure for Communicating Data-Intensive Science.

    PubMed

    Bot, Brian M; Burdick, David; Kellen, Michael; Huang, Erich S

    2013-01-01

    Progress in biomedical research requires effective scientific communication to one's peers and to the public. Current research routinely encompasses large datasets and complex analytic processes, and the constraints of traditional journal formats limit useful transmission of these elements. We are constructing a framework through which authors can provide not only the narrative of what was done, but also the primary and derivative data, the source code, the compute environment, and web-accessible virtual machines. This infrastructure allows authors to "hand their machine," prepopulated with libraries, data, and code, to those interested in reviewing or building on their work. This project, "clearScience," seeks to provide an integrated system that accommodates the ad hoc nature of discovery in the data-intensive sciences and seamless transitions from working to reporting. We demonstrate that rather than merely describing the science being reported, one can deliver the science itself.

  18. Storage Intensive Supercomputing (LDRD 07-ERD-063)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order-of-magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, a text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB Fusion-io parallel NAND Flash disk array. The Fusion system specifications are as follows: SuperMicro X7DBE Xeon dual-socket Blackford server motherboard; 2 Intel Xeon dual-core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80 GB hard drive (Seagate SATA II Barracuda). The Fusion board presently operates at 4X in a PCIe slot. The image resampling benchmark was run on a dual-Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA system was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that spent more than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling and language classification benchmarks showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit in boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  19. Dose calculations using artificial neural networks: A feasibility study for photon beams

    NASA Astrophysics Data System (ADS)

    Vasseur, Aurélien; Makovicka, Libor; Martin, Éric; Sauget, Marc; Contassot-Vivier, Sylvain; Bahi, Jacques

    2008-04-01

    Direct dose calculations are a crucial requirement for treatment planning systems. Some methods, such as Monte Carlo, explicitly model particle transport; others depend upon tabulated data or analytic formulae. However, their computation time is too lengthy for clinical use, or their accuracy is insufficient, especially for recent techniques such as Intensity-Modulated Radiotherapy. Based on artificial neural networks (ANNs), a new solution, called NeuRad, is proposed, and this work extends the study of the properties of such an algorithm. Prior to any calculation, a first phase known as the learning process is necessary. Monte Carlo dose distributions in homogeneous media are used, and the ANN is then trained on them. Depending on the training base, it can be used as a dose engine either for heterogeneous media or for an unknown material. In this report, two networks were created in order to compute the dose distribution within a homogeneous phantom made of an unknown material and within an inhomogeneous phantom made of water and TA6V4 (a titanium alloy corresponding to hip prostheses). All NeuRad results were compared to Monte Carlo distributions. The latter required about 7 h on a dedicated cluster (10 nodes). NeuRad learning requires between 8 and 18 h (depending upon the size of the training base) on a single low-end computer. However, the results of dose computation with the ANN are available in less than 2 s, again using a low-end computer, for a 150 × 1 × 150 voxel phantom. In the case of the homogeneous medium, the mean deviation in the high-dose region was less than 1.7%. With a TA6V4 hip prosthesis bathed in water, the mean deviation in the high-dose region was less than 4.1%. Further improvements to NeuRad will have to include full 3D calculations, inhomogeneity management, and input definitions.
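
    The learn-once, evaluate-fast pattern can be sketched with a generic regressor standing in for NeuRad; the analytic "dose" function used as training data, the network size, and the input features are all invented for the example:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for the Monte Carlo training base: (depth, off-axis distance) -> dose.
# The analytic form is a toy (exponential depth attenuation, Gaussian profile).
X = rng.uniform([0.0, -5.0], [30.0, 5.0], size=(5000, 2))      # depth [cm], off-axis [cm]
y = np.exp(-0.05 * X[:, 0]) * np.exp(-0.5 * (X[:, 1] / 2.0) ** 2)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)                                   # slow "learning phase", done once

# Fast evaluation phase: dose on a 150 x 150 plane in a fraction of a second
depths, offsets = np.meshgrid(np.linspace(0, 30, 150), np.linspace(-5, 5, 150))
grid = np.column_stack([depths.ravel(), offsets.ravel()])
dose_plane = net.predict(grid).reshape(150, 150)
print("max relative error on a training subset:",
      np.max(np.abs(net.predict(X[:100]) - y[:100]) / y.max()))
```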

  20. Impedance computations and beam-based measurements: A problem of discrepancy

    NASA Astrophysics Data System (ADS)

    Smaluk, Victor

    2018-04-01

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  1. Unsteady thermal blooming of intense laser beams

    NASA Astrophysics Data System (ADS)

    Ulrich, J. T.; Ulrich, P. B.

    1980-01-01

    A four dimensional (three space plus time) computer program has been written to compute the nonlinear heating of a gas by an intense laser beam. Unsteady, transient cases are capable of solution and no assumption of a steady state need be made. The transient results are shown to asymptotically approach the steady-state results calculated by the standard three dimensional thermal blooming computer codes. The report discusses the physics of the laser-absorber interaction, the numerical approximation used, and comparisons with experimental data. A flowchart is supplied in the appendix to the report.

  2. MSFC crack growth analysis computer program, version 2 (users manual)

    NASA Technical Reports Server (NTRS)

    Creager, M.

    1976-01-01

    An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.

  3. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2017-08-01

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  4. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  5. A bioinformatics knowledge discovery in text application for grid computing

    PubMed Central

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-01-01

    Background: A fundamental activity in biomedical research is Knowledge Discovery, which has the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of information and communication resources in the life sciences. The goal of this work was to develop a software middleware solution in order to exploit the many knowledge discovery applications on scalable and distributed computing systems and achieve intensive use of ICT resources. Methods: The development of a grid application for Knowledge Discovery in Text using a methodology based on a middleware solution is presented. The system must be able to accept a user application model and process jobs so as to create many parallel jobs for distribution across the computational nodes. Finally, the system must be aware of the available computational resources and their status, and must be able to monitor the execution of the parallel jobs. These operative requirements led to the design of a middleware that is specialized using user application modules. It includes a graphical user interface to access a node search system, a load balancing system, and a transfer optimizer to reduce communication costs. Results: A middleware solution prototype and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out and the results are shown for the named entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion: In this paper we discuss the development of a grid application based on a middleware solution. It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Database computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities. PMID:19534749

  6. A bioinformatics knowledge discovery in text application for grid computing.

    PubMed

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-06-16

    A fundamental activity in biomedical research is Knowledge Discovery, which has the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of information and communication resources in the life sciences. The goal of this work was to develop a software middleware solution in order to exploit the many knowledge discovery applications on scalable and distributed computing systems and achieve intensive use of ICT resources. The development of a grid application for Knowledge Discovery in Text using a methodology based on a middleware solution is presented. The system must be able to accept a user application model and process jobs so as to create many parallel jobs for distribution across the computational nodes. Finally, the system must be aware of the available computational resources and their status, and must be able to monitor the execution of the parallel jobs. These operative requirements led to the design of a middleware that is specialized using user application modules. It includes a graphical user interface to access a node search system, a load balancing system, and a transfer optimizer to reduce communication costs. A middleware solution prototype and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out and the results are shown for the named entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. In this paper we discuss the development of a grid application based on a middleware solution. It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Database computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities.

  7. An updated climatology of explosive cyclones using alternative measures of cyclone intensity

    NASA Astrophysics Data System (ADS)

    Hanley, J.; Caballero, R.

    2009-04-01

    Using a novel cyclone tracking and identification method, we compute a climatology of explosively intensifying cyclones or 'bombs' using the ERA-40 and ERA-Interim datasets. Traditionally, 'bombs' have been identified using a central pressure deepening rate criterion (Sanders and Gyakum, 1980). We investigate alternative methods of capturing such extreme cyclones. These methods include using the maximum wind contained within the cyclone, and using a potential vorticity column measure within such systems, as a measure of intensity. Using the different measures of cyclone intensity, we construct and intercompare maps of peak cyclone intensity. We also compute peak intensity probability distributions, and assess the evidence for the bi-modal distribution found by Roebber (1984). Finally, we address the question of the relationship between storm intensification rate and storm destructiveness: are 'bombs' the most destructive storms?

  8. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via keyboard. Some of the functions performed by IMAGEP include digitization, storage, and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey-level histogram of an image; and display of the variation of grey-level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1 MB of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  9. A model for the rapid assessment of the impact of aviation noise near airports.

    PubMed

    Torija, Antonio J; Self, Rod H; Flindell, Ian H

    2017-02-01

    This paper introduces a simplified model [Rapid Aviation Noise Evaluator (RANE)] for the calculation of aviation noise within the context of multi-disciplinary strategic environmental assessment, where input data are both limited and constrained by compatibility requirements with other disciplines. RANE relies upon the concept of noise cylinders around defined flight tracks, with the noise radius determined from publicly available Noise-Power-Distance curves, rather than the computationally intensive multiple point-to-point grid calculation, with subsequent iso-contour interpolation, adopted in the FAA's Integrated Noise Model (INM) and similar models. Preliminary results indicate that for simple single-runway scenarios, changes in airport noise contour areas can be estimated with minimal uncertainty compared against grid-point calculation methods such as INM. In situations where such outputs are all that is required for preliminary strategic environmental assessment, there are considerable benefits in reduced input data and computation requirements. Further development of the noise-cylinder-based model (such as the incorporation of lateral attenuation, engine installation effects, or horizontal track dispersion via the assumption of more complex noise surfaces formed around the flight track) will allow more complex assessments to be carried out. RANE is intended to be incorporated into technology evaluators for the noise impact assessment of novel aircraft concepts.
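
    The noise-cylinder idea essentially reduces to inverting a Noise-Power-Distance (NPD) curve to find the radius at which a chosen level is reached; the NPD table, the threshold, and the track length below are made-up numbers, not the NPD data RANE actually uses:

```python
import numpy as np

# Hypothetical NPD curve: sound exposure level [dB] at reference slant distances [m]
npd_distance = np.array([200.0, 400.0, 800.0, 1600.0, 3200.0, 6400.0])
npd_level = np.array([94.0, 88.0, 82.0, 75.0, 68.0, 60.0])

def noise_radius(threshold_db):
    """Slant distance at which the NPD level falls to the contour threshold."""
    # Interpolate level -> distance (levels decrease with distance, so reverse the arrays)
    return np.interp(threshold_db, npd_level[::-1], npd_distance[::-1])

threshold = 70.0                     # contour level of interest [dB], assumed
r = noise_radius(threshold)

# Cylinder around a straight flight track: rectangle along the track plus end caps
track_length = 20_000.0              # [m], assumed
contour_area = 2.0 * r * track_length + np.pi * r ** 2
print(f"noise radius = {r:.0f} m, contour area = {contour_area / 1e6:.1f} km^2")
```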

  10. Peculiarities of high-overtone transition probabilities in carbon monoxide revealed by high-precision calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medvedev, Emile S., E-mail: esmedved@orc.ru; Meshkov, Vladimir V.; Stolyarov, Andrey V.

    In the recent work devoted to the calculation of the rovibrational line list of the CO molecule [G. Li et al., Astrophys. J., Suppl. Ser. 216, 15 (2015)], rigorous validation of the calculated parameters, including intensities, was carried out. In particular, the Normal Intensity Distribution Law (NIDL) [E. S. Medvedev, J. Chem. Phys. 137, 174307 (2012)] was employed for the validation purposes, and it was found that, in the original CO line list calculated for large changes of the vibrational quantum number up to Δn = 41, intensities with Δn > 11 were unphysical. Therefore, very high overtone transitions were removed from the published list in Li et al. Here, we show how this type of validation is carried out and prove that quadruple precision is indispensably required to predict reliable intensities using conventional 32-bit computers. Based on these calculations, the NIDL is shown to hold for the 0 → n transitions up to the dissociation limit around n = 83, covering 45 orders of magnitude in intensity. The low-intensity 0 → n transition predicted in the work of Medvedev [Determination of a new molecular constant for diatomic systems. Normal intensity distribution law for overtone spectra of diatomic and polyatomic molecules and anomalies in overtone absorption spectra of diatomic molecules, Institute of Chemical Physics, Russian Academy of Sciences, Chernogolovka, 1984] at n = 5 is confirmed, and two additional "abnormal" intensities are found at n = 14 and 23. Criteria for the appearance of such "anomalies" are formulated. The results could be useful for revising the high-overtone molecular transition probabilities provided in spectroscopic databases.

  11. What is Data-Intensive Science?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Critchlow, Terence J.; Kleese van Dam, Kerstin

    2013-06-03

    What is data-intensive science? Today we are living in a digital world, where scientists often no longer interact directly with the physical object of their research but do so via digitally captured, reduced, calibrated, analyzed, synthesized and, at times, visualized data. Advances in experimental and computational technologies have led to an exponential growth in the volume, variety, and complexity of this data, and while the deluge is not happening everywhere in an absolute sense, it is in a relative one. Science today is data intensive. Data-intensive science has the potential to transform not only how we do science, but how quickly we can translate scientific progress into complete solutions, policies, decisions, and ultimately economic success. Critically, data-intensive science touches some of the most important challenges we are facing. Consider a few of the grand challenges outlined by the U.S. National Academy of Engineering: make solar energy economical, provide energy from fusion, develop carbon sequestration methods, advance health informatics, engineer better medicines, secure cyberspace, and engineer the tools of scientific discovery. Arguably, meeting any of these challenges requires the collaborative effort of trans-disciplinary teams, but also significant contributions from enabling data-intensive technologies. Indeed, for many of them, advances in data-intensive research will be the single most important factor in developing successful and timely solutions. Simple extrapolations of how we currently interact with and utilize data and knowledge are not sufficient to meet this need. Given the importance of these challenges, a new, bold vision for the role of data in science, and indeed for how research will be conducted in a data-intensive environment, is evolving.

  12. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive, they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers, and even trained engineers, from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open-source computing-resource and job management software HTCondor to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
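
    CondorPy's own API is not shown in the abstract, so the sketch below drops down to plain HTCondor: it writes a submit description and hands it to condor_submit via subprocess. The file names, the executable, and the queue count are placeholders:

```python
import subprocess
from pathlib import Path

# Minimal HTCondor submit description for a batch of model runs (paths are placeholders)
submit_description = """\
executable = run_model.sh
arguments  = $(Process)
output     = logs/run_$(Process).out
error      = logs/run_$(Process).err
log        = logs/run.log
transfer_input_files = model_input_$(Process).nc
queue 32
"""

Path("logs").mkdir(exist_ok=True)
Path("model.submit").write_text(submit_description)

# Queue the 32 jobs; HTCondor then dispatches them to whatever resources are available
subprocess.run(["condor_submit", "model.submit"], check=True)
```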

  13. Crack Turning and Arrest Mechanisms for Integral Structure

    NASA Technical Reports Server (NTRS)

    Pettit, Richard; Ingraffea, Anthony

    1999-01-01

    In the course of several years of research aimed at predicting crack turning and flapping in aircraft fuselage structures and other problems related to crack turning, the second-order maximum tangential stress theory has been identified as the theory most capable of predicting the observed test results. This theory requires knowledge of a material-specific characteristic length, and also a computation of the stress intensity factors and the T-stress, the second-order term in the asymptotic stress field in the vicinity of the crack tip. A characteristic length, r_c, is proposed for ductile materials based on the onset of plastic instability, as opposed to the void-spacing theories espoused by previous investigators. For the plane stress case, an approximate estimate of r_c is obtained from the asymptotic field for strain-hardening materials given by Hutchinson, Rice, and Rosengren (HRR). A previous study using high-order finite element methods to calculate T-stresses by contour integrals resulted in extremely high accuracy for selected test specimen geometries, and a theoretical error estimation parameter was defined. In the present study, it is shown that a large portion of the error in finite element computations of both K and T is systematic and can be corrected after the initial solution if the finite element implementation utilizes a similar crack tip discretization scheme for all problems. This scheme is applied, for two-dimensional problems, to a p-version finite element code, showing that sufficiently accurate values of both K_I and T can be obtained with fairly low-order elements if correction is used. T-stress correction coefficients are also developed for the singular crack tip rosette utilized in the adaptive-mesh finite element code FRANC2D and are shown to reduce the error in the computed T-stress significantly. Stress intensity factor correction was not attempted for FRANC2D because it employs a highly accurate quarter-point scheme to obtain stress intensity factors.
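
    For reference, the "second-order term in the asymptotic stress field" referred to here is the T-stress in the two-term Williams expansion of the near-tip field (standard fracture mechanics notation, not a result of this report):

```latex
% Two-term asymptotic expansion of the stress field near a mode-I crack tip:
% the singular K_I term plus the constant T-stress acting parallel to the crack.
\sigma_{ij}(r,\theta) \;=\; \frac{K_I}{\sqrt{2\pi r}}\, f_{ij}(\theta)
\;+\; T\,\delta_{1i}\,\delta_{1j} \;+\; O\!\left(\sqrt{r}\right)
```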

  14. Accelerating Pseudo-Random Number Generator for MCNP on GPU

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu

    2010-09-01

    Pseudo-random number generators (PRNGs) are used intensively in many stochastic algorithms in particle simulations, artificial neural networks, and other scientific computations. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) requires a long period, high quality, flexible jump-ahead, and high speed. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 Graphics Processing Units (GPUs) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 times are achieved compared with 4- to 6-core CPUs, and more than 679.18 million double-precision random numbers can be generated per second on the GPU.
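
    The "flexible jump" requirement refers to skipping a linear congruential generator ahead by an arbitrary number of steps in logarithmic time, so that each particle history gets its own stride of the sequence. The sketch below shows the standard affine-composition trick; the modulus, multiplier, and increment are illustrative 63-bit LCG constants chosen for the example, not necessarily MCNP's documented values:

```python
# Skip an LCG x -> (a*x + c) mod m ahead by k steps in O(log k) time.
# One step is the affine map (a, c); composing maps by exponentiation-by-squaring
# yields the map (A, C) equivalent to k steps, so x_{n+k} = (A*x_n + C) mod m.

M = 1 << 63                 # modulus (power of two, typical for 63-bit LCGs)
A0 = 3512401965023503517    # illustrative 63-bit multiplier (assumption)
C0 = 1                      # increment

def lcg_jump(x, k, a=A0, c=C0, m=M):
    A, C = 1, 0                                   # identity map
    while k:
        if k & 1:                                 # fold one copy of the base map into the result
            A, C = (a * A) % m, (a * C + c) % m
        a, c = (a * a) % m, (a * c + c) % m       # square the base map
        k >>= 1
    return (A * x + C) % m

def lcg_step(x, a=A0, c=C0, m=M):
    return (a * x + c) % m

# Check: jumping 1000 steps equals stepping 1000 times
x = 1
y = x
for _ in range(1000):
    y = lcg_step(y)
assert lcg_jump(x, 1000) == y
print("jump-ahead verified; per-history strides can be assigned without serial stepping")
```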

  15. Automation based on knowledge modeling theory and its applications in engine diagnostic systems using Space Shuttle Main Engine vibrational data. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, Jonnathan H.

    1995-01-01

    Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).

  16. Resolution enhancement using simultaneous couple illumination

    NASA Astrophysics Data System (ADS)

    Hussain, Anwar; Martínez Fuentes, José Luis

    2016-10-01

    A super-resolution technique based on structured illumination created by a liquid crystal on silicon spatial light modulator (LCOS-SLM) is presented. Single tilted beams and simultaneous pairs of tilted beams are generated to illuminate a target object. Resolution enhancement of an optical 4f system is demonstrated by using numerical simulations. The resulting intensity images are recorded by a charge-coupled device (CCD) and stored in computer memory for further processing. One-dimensional enhancement can be performed with only 15 images; complete two-dimensional improvement requires 153 different images. The resolution of the optical system is extended three times compared to the band-limited system.

  17. An analog retina model for detecting dim moving objects against a bright moving background

    NASA Technical Reports Server (NTRS)

    Searfus, R. M.; Colvin, M. E.; Eeckman, F. H.; Teeters, J. L.; Axelrod, T. S.

    1991-01-01

    We are interested in applications that require the ability to track a dim target against a bright, moving background. Since the target signal will be less than or comparable to the variations in the background signal intensity, sophisticated techniques must be employed to detect the target. We present an analog retina model that adapts to the motion of the background in order to enhance targets that have a velocity difference with respect to the background. Computer simulation results and our preliminary concept of an analog 'Z' focal plane implementation are also presented.

  18. Exploring the Use of Isolated Expressions and Film Clips to Evaluate Emotion Recognition by People with Traumatic Brain Injury

    PubMed Central

    Zupan, Barbra; Neumann, Dawn

    2016-01-01

    The current study presented 60 people with traumatic brain injury (TBI) and 60 controls with isolated facial emotion expressions, isolated vocal emotion expressions, and multimodal (i.e., film clips) stimuli that included contextual cues. All stimuli were presented via computer. Participants were required to indicate how the person in each stimulus was feeling using a forced-choice format. Additionally, for the film clips, participants had to indicate how they felt in response to the stimulus, and the level of intensity with which they experienced that emotion. PMID:27213280

  19. Study of a programmable high speed processor for use on-board satellites

    NASA Astrophysics Data System (ADS)

    Degavre, J. Cl.; Okkes, R.; Gaillat, G.

    The availability of VLSI programmable devices will significantly enhance satellite on-board data processing capabilities. A case study is presented which indicates that computation-intensive processing applications requiring the execution of 100 megainstructions/sec are within the DC power constraints of satellites. It is noted that the current progress in semicustom design techniques and in achievable gate-array densities, together with the recent announcement of improved monochip processors, is encouraging the development of an on-board programmable processor architecture able to incorporate the devices that will appear in the communication and military markets.

  20. Computation of Feedback Aeroacoustic System by the CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Wang, Xiao Y.; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    It is well known that due to vortex shedding in high speed flow over cutouts, cavities, and gaps, intense noise may be generated. Strong tonal oscillations occur in a feedback cycle in which the vortices shed from the upstream edge of the cavity convect downstream and impinge on the cavity lip, generating acoustic waves that propagate upstream to excite new vortices. Numerical simulation of such a complicated process requires a scheme that can: (1) resolve acoustic waves with low dispersion and numerical dissipation, (2) handle nonlinear and discontinuous waves (e.g. shocks), and (3) have an effective (near field) nonreflecting boundary condition (NRBC). The new space time conservation element and solution element method, or CE/SE for short, is a numerical method that meets the above requirements.

  1. Imaging simulation of active EO-camera

    NASA Astrophysics Data System (ADS)

    Pérez, José; Repasi, Endre

    2018-04-01

    A modeling scheme for active imaging through atmospheric turbulence is presented. The model consists of two parts: In the first part, the illumination laser beam is propagated to a target that is described by its reflectance properties, using the well-known split-step Fourier method for wave propagation. In the second part, the reflected intensity distribution imaged on a camera is computed using an empirical model developed for passive imaging through atmospheric turbulence. The split-step Fourier method requires carefully chosen simulation parameters. These simulation requirements together with the need to produce dynamic scenes with a large number of frames led us to implement the model on GPU. Validation of this implementation is shown for two different metrics. This model is well suited for Gated-Viewing applications. Examples of imaging simulation results are presented here.
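
    As a hedged sketch of the kind of split-step Fourier propagation the first part of such a model relies on (not the authors' implementation), the snippet below alternates paraxial free-space diffraction steps with thin random phase screens standing in for turbulence; the screen statistics, sampling parameters, and beam shape are placeholders.

      import numpy as np

      def split_step(field, wavelength, dx, dz, phase_screens):
          # field: complex 2-D aperture field sampled at pitch dx [m]
          # phase_screens: iterable of 2-D phase arrays [rad], one per slab of thickness dz
          n = field.shape[0]
          fx = np.fft.fftfreq(n, d=dx)
          f2 = fx[:, None]**2 + fx[None, :]**2
          H = np.exp(-1j * np.pi * wavelength * dz * f2)    # paraxial (Fresnel) transfer function
          for screen in phase_screens:
              field = np.fft.ifft2(np.fft.fft2(field) * H)  # diffract over one slab
              field = field * np.exp(1j * screen)           # then apply the thin phase screen
          return field

      # Toy usage: Gaussian beam through 20 weak white-noise screens (placeholder turbulence)
      n, dx = 256, 2e-3
      r2 = ((np.arange(n) - n/2)[:, None]**2 + (np.arange(n) - n/2)[None, :]**2) * dx**2
      beam = np.exp(-r2 / (2 * 0.05**2)).astype(complex)
      screens = 0.3 * np.random.default_rng(0).standard_normal((20, n, n))
      out = split_step(beam, 1.55e-6, dx, 50.0, screens)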

  2. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    PubMed

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    'Visibility' is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. The manipulation of this 'visibility' improves the volume rendering process, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VH given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g. the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VH to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins are used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphics processing units (GPUs), which enables efficient computation of the histogram. We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH and with only minor numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying Ks (the number of clusters) and found that higher values of K resulted in better performance at a lower computational gain. The AB-VH also performed better than the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
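
    The adaptive binning step can be pictured with a short CPU-side sketch (ours, not the authors' GPU code): cluster a sample of voxel intensities with K-means so each voxel maps to one of K bins, then accumulate visibility per bin. The function names, bin count, and sampling size are illustrative choices.

      import numpy as np
      from sklearn.cluster import KMeans

      def adaptive_bins(volume, k=32, sample=200_000, seed=0):
          # Cluster 1-D voxel intensities into k adaptive bins; returns a per-voxel bin id.
          rng = np.random.default_rng(seed)
          vals = volume.reshape(-1, 1).astype(np.float32)
          idx = rng.choice(vals.shape[0], size=min(sample, vals.shape[0]), replace=False)
          km = KMeans(n_clusters=k, n_init=4, random_state=seed).fit(vals[idx])
          return km.predict(vals).reshape(volume.shape), k

      # Accumulate a visibility histogram over the adaptive bins (placeholder data)
      volume = np.random.default_rng(1).normal(size=(64, 64, 64))    # stand-in for a CT volume
      visibility = np.random.default_rng(2).random(volume.shape)     # stand-in for per-voxel visibility
      bins, k = adaptive_bins(volume)
      vh = np.bincount(bins.ravel(), weights=visibility.ravel(), minlength=k)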

  3. Intense cavitation at extreme static pressure.

    PubMed

    Pishchalnikov, Yuri A; Gutierrez, Joel; Dunbar, Wylene W; Philpott, Richard W

    2016-02-01

    Cavitation is usually performed at hydrostatic pressures at or near 0.1 MPa. Higher static pressure produces more intense cavitation, but requires an apparatus that can build high amplitude acoustic waves with rarefactions exceeding the cavitation threshold. The absence of such an apparatus has prevented the achievement of intense acoustic cavitation, hindering research and the development of new applications. Here we describe a new high-pressure spherical resonator system, as well as experimental and modeling results in water and liquid metal (gallium), for cavitation at hydrostatic pressures between 10 and 150 MPa. Our computational data, using HYADES plasma hydrodynamics code, show the formation of dense plasma that, under these conditions, reaches peak pressures of about three to four orders of magnitude greater than the hydrostatic pressure in the bulk liquid and temperatures in the range of 100,000 K. Passive cavitation detection (PCD) data validate both a linear increase in shock wave amplitude and the production of highly intense concentrations of mechanical energy in the collapsing bubbles. High-speed camera observations show the formation of bubble clusters from single bubbles. The increased shock wave amplitude produced by bubble clusters, measured using PCD and fiber optic probe hydrophone, was consistent with current understanding that bubble clusters enable amplification of energy produced. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Comparison of high intensity focused ultrasound (HIFU) exposures using empirical and backscatter attenuation estimation methods

    NASA Astrophysics Data System (ADS)

    Civale, John; Ter Haar, Gail; Rivens, Ian; Bamber, Jeff

    2005-09-01

    Currently, the intensity to be used in our clinical HIFU treatments is calculated from the acoustic path lengths in different tissues measured on diagnostic ultrasound images of the patient in the treatment position, and published values of ultrasound attenuation coefficients. This yields an approximate value for the acoustic power at the transducer required to give a stipulated focal intensity in situ. Methods have been investigated for estimating the actual acoustic attenuation over large parts of the tissue path overlying the target volume from each patient's backscattered ultrasound signal (backscatter attenuation estimation: BAE). The backscattered echo information acquired from an Acuson scanner has been used to compute the diffraction-corrected attenuation coefficient at each frequency using two methods: a substitution method and an inverse diffraction filtering process. A homogeneous sponge phantom was used to validate the techniques. The use of BAE to determine the correct HIFU exposure parameters for lesioning has been tested in ex vivo liver. HIFU lesions created with a 1.7-MHz therapy transducer have been studied using a semiautomated image processing technique. The reproducibility of lesion size for given in situ intensities determined using BAE and empirical techniques has been compared.

  5. A compact control system to achieve stable voltage and low jitter trigger for repetitive intense electron-beam accelerator based on resonant charging

    NASA Astrophysics Data System (ADS)

    Qiu, Yongfeng; Liu, Jinliang; Yang, Jianhua; Cheng, Xinbing; Yang, Xiao

    2017-08-01

    A compact control system based on Delphi and a Field Programmable Gate Array (FPGA) is developed for a repetitive intense electron-beam accelerator (IEBA) whose output power is 10 GW and pulse duration is 160 ns. The system uses both hardware and software solutions. It comprises a host computer, a communication module and a main control unit. A device-independent application programming interface, devised using Delphi, is installed on the host computer. The theory of voltage stability in repetitive mode is analyzed and a detailed overview of the hardware and software configuration is presented. High-voltage experiments showed that the control system fulfilled the requirements of remote operation and data acquisition. The control system, based on a time-sequence control method, keeps the voltage of the primary capacitor constant from shot to shot, which ensured stable and reliable operation of the electron-beam accelerator in repetitive mode during the experiment. Compared with the former control system based on LabVIEW and a PIC microcontroller developed in our laboratory, the present one is more compact and offers higher timing precision. It is particularly useful for the automatic control of the IEBA in high-power microwave effects research experiments where pulse-to-pulse reproducibility is required.

  6. Photonic band gap materials: towards an all-optical transistor

    NASA Astrophysics Data System (ADS)

    Florescu, Marian

    2002-05-01

    The transmission of information as optical signals encoded on light waves traveling through optical fibers and optical networks is increasingly moving to shorter and shorter distance scales. In the near future, optical networking is poised to supersede conventional transmission over electric wires and electronic networks for computer-to-computer communications, chip-to-chip communications, and even on-chip communications. The ever-increasing demand for faster and more reliable devices to process the optical signals offers new opportunities in developing all-optical signal processing systems (systems in which one optical signal controls another, thereby adding "intelligence" to the optical networks). All-optical switches, two-state and many-state all-optical memories, all-optical limiters, all-optical discriminators and all-optical transistors are only a few of the many devices proposed during the last two decades. The "all-optical" label is commonly used to distinguish the devices that do not involve dissipative electronic transport and require essentially no electrical communication of information. The all-optical transistor action was first observed in the context of optical bistability [1] and consists in a strong differential gain regime, in which, for small variations in the input intensity, the output intensity has a very strong variation. This analog operation is for all-optical input what transistor action is for electrical inputs.

  7. Nonlinear derating of high-intensity focused ultrasound beams using Gaussian modal sums.

    PubMed

    Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Soneson, Joshua E; Myers, Matthew R

    2013-11-01

    A method is introduced for using measurements made in water of the nonlinear acoustic pressure field produced by a high-intensity focused ultrasound transducer to compute the acoustic pressure and temperature rise in a tissue medium. The acoustic pressure harmonics generated by nonlinear propagation are represented as a sum of modes having a Gaussian functional dependence in the radial direction. While the method is derived in the context of Gaussian beams, final results are applicable to general transducer profiles. The focal acoustic pressure is obtained by solving an evolution equation in the axial variable. The nonlinear term in the evolution equation for tissue is modeled using modal amplitudes measured in water and suitably reduced using a combination of "source derating" (experiments in water performed at a lower source acoustic pressure than in tissue) and "endpoint derating" (amplitudes reduced at the target location). Numerical experiments showed that, with proper combinations of source derating and endpoint derating, direct simulations of acoustic pressure and temperature in tissue could be reproduced by derating within 5% error. Advantages of the derating approach presented include applicability over a wide range of gains, ease of computation (a single numerical quadrature is required), and readily obtained temperature estimates from the water measurements.

  8. Temperature, Velocity, and Mean Turbulence Structure in Strongly-Heated Internal Gas Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEligot, Donald Marinus; Mikielewicz, D. P.; Shehata, A. M.

    2002-10-01

    The main objective of the present study is to examine whether "simple" turbulence models (i.e., models requiring two partial differential equations or less for turbulent transport) are suitable for use under conditions of forced flow of gas at low Reynolds numbers in tubes with intense heating, leading to large variations of fluid properties and considerable modification of turbulence. Eleven representative models are considered. The ability of such models to handle such flows was assessed by means of computational simulations of the carefully designed experiments of Shehata and McEligot (IJHMT 41 (1998) 4297) at heating rates of q+in ≈ 0.0018, 0.0035 and 0.0045, yielding flows ranging from essentially turbulent to laminarized. The resulting comparisons of computational results with experiments showed that the model by Launder and Sharma (Lett. Heat Transfer 1 (1974) 131) performed best in predicting axial wall temperature profiles. Overall, agreement between the measured velocity and temperature distributions and those calculated using the Launder–Sharma model is good, which gives confidence in the values forecast for the turbulence quantities produced. These have been used to assist in arriving at a better understanding of the influences of intense heating, and hence strong variation of fluid properties, on turbulent flow in tubes.

  9. The relationship between patient data and pooled clinical management decisions.

    PubMed

    Ludbrook, G I; O'Loughlin, E J; Corcoran, T B; Grant, C

    2013-01-01

    A strong relationship between patient data and preoperative clinical decisions could potentially be used to support clinical decisions in preoperative management. The aim of this exploratory study was to determine the relationship between key patient data and pooled clinical opinions on management. In a previous study, panels of anaesthetists compared the quality of computer-assisted patient health assessments with outpatient consultations and made decisions on the need for preoperative tests, no preoperative outpatient assessment, possible postoperative intensive care unit/high dependency unit requirements and aspiration prophylaxis. In the current study, the relationship between patient data and these decisions was examined using binomial logistic regression analysis. Backward stepwise regression was used to identify independent predictors of each decision (P > 0.15), which were then incorporated into a predictive model. The number of factors related to each decision varied: blood picture (four factors), biochemistry (six factors), coagulation studies (three factors), electrocardiography (eight factors), chest X-ray (seven factors), preoperative outpatient assessment (17 factors), intensive care unit requirement (eight factors) and aspiration prophylaxis (one factor). The factor types also varied, but included surgical complexity, age, gender, number of medications or comorbidities, body mass index, hypertension, central nervous system condition, heart disease, sleep apnoea, smoking, persistent pain and stroke. Models based on these relationships usually demonstrated good sensitivity and specificity, with the following receiver operating characteristic areas under the curve: blood picture (0.75), biochemistry (0.86), coagulation studies (0.71), electrocardiography (0.90), chest X-ray (0.85), outpatient assessment (0.85), postoperative intensive care unit requirement (0.88) and aspiration prophylaxis (0.85). These initial results suggest modelling of patient data may have utility in supporting clinicians' preoperative decisions.
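
    As a minimal sketch of the modelling approach described (binomial logistic regression from patient factors to a pooled decision, evaluated by ROC area under the curve), the snippet below fits a plain logistic regression on synthetic data. The stepwise variable selection used in the study is not reproduced, and all variable names and data here are hypothetical.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Hypothetical predictors: e.g. surgical complexity, age, BMI, number of comorbidities, ...
      X = rng.normal(size=(400, 8))
      # Hypothetical pooled panel decision (1 = order the test), driven by two predictors plus noise
      y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=400) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
      print(f"Held-out ROC AUC: {auc:.2f}")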

  10. Double-blind, randomized controlled clinical trial on analgesic efficacy of local anesthetics articaine and bupivacaine after impacted third molar extraction.

    PubMed

    Olmedo-Gaya, Maria Victoria; Manzano-Moreno, Francisco Javier; Muñoz-López, Jose Luis; Vallecillo-Capilla, Manuel Francisco; Reyes-Botella, Candela

    2018-02-15

    The objective of this randomized controlled clinical trial (RCT) was to compare the effect of bupivacaine and articaine at habitual doses on pain intensity and the need for analgesics after lower third molar extraction. The final study sample comprised 50 Caucasian volunteers (26 males and 24 females; age range, 18-30 years) undergoing scheduled surgical extraction of impacted lower third molar. A computer-generated random sequence was used to allocate participants to the articaine (4%) or bupivacaine (0.5%) group. Surgeons and patients were blinded by labeling the articaine and bupivacaine carpules with numbers (1 and 2, respectively). Postoperative pain intensity (primary outcome) was evaluated with a visual analogue scale (VAS), while the requirement for and timing of rescue medication and the quality of intraoperative anesthesia were also measured (secondary outcomes). VAS-measured pain intensity was significantly higher (p < 0.05) in the articaine group than in the bupivacaine group at all time points except for 8 h post-surgery (p = 0.052). Rescue medication was required by 13 (52%) patients in the articaine group and 8 (32%) patients in the bupivacaine group, although the difference did not reach statistical significance (p = 0.252). The groups did not significantly differ (p = 0.391) in the quality of the intraoperative anesthesia. Bupivacaine is a valid alternative to articaine in third molar surgery and may offer residual anesthesia as a means of reducing postoperative pain. However, further well-designed RCTs are required in larger study populations to verify the effectiveness of bupivacaine to achieve residual analgesia after oral surgery. These findings suggest that bupivacaine may be useful as a coadjuvant to control acute postoperative pain. ACTRN12617001138370.

  11. Computed tomography synthesis from magnetic resonance images in the pelvis using multiple random forests and auto-context features

    NASA Astrophysics Data System (ADS)

    Andreasen, Daniel; Edmund, Jens M.; Zografos, Vasileios; Menze, Bjoern H.; Van Leemput, Koen

    2016-03-01

    In radiotherapy treatment planning that is only based on magnetic resonance imaging (MRI), the electron density information usually obtained from computed tomography (CT) must be derived from the MRI by synthesizing a so-called pseudo CT (pCT). This is a non-trivial task since MRI intensities are neither uniquely nor quantitatively related to electron density. Typical approaches involve either a classification or regression model requiring specialized MRI sequences to solve intensity ambiguities, or an atlas-based model necessitating multiple registrations between atlases and subject scans. In this work, we explore a machine learning approach for creating a pCT of the pelvic region from conventional MRI sequences without using atlases. We use a random forest provided with information about local texture, edges and spatial features derived from the MRI. This helps to solve intensity ambiguities. Furthermore, we use the concept of auto-context by sequentially training a number of classification forests to create and improve context features, which are finally used to train a regression forest for pCT prediction. We evaluate the pCT quality in terms of the voxel-wise error and the radiologic accuracy as measured by water-equivalent path lengths. We compare the performance of our method against two baseline pCT strategies, which either set all MRI voxels in the subject equal to the CT value of water, or in addition transfer the bone volume from the real CT. We show an improved performance compared to both baseline pCTs suggesting that our method may be useful for MRI-only radiotherapy.
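
    The auto-context idea can be sketched roughly as follows (our simplification, not the authors' pipeline): each classification forest's class probabilities are appended to the original per-voxel MRI features as context features for the next stage, and a final regression forest predicts the pseudo-CT number. In practice the context probabilities would be produced out-of-fold; here they are in-sample for brevity, and all array names are illustrative.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

      def fit_autocontext(X, tissue_labels, ct_values, n_stages=2, seed=0):
          # X: (n_voxels, n_features) MRI texture/edge/spatial features (illustrative)
          clfs, feats = [], X
          for s in range(n_stages):
              clf = RandomForestClassifier(n_estimators=50, random_state=seed + s)
              clf.fit(feats, tissue_labels)
              clfs.append(clf)
              feats = np.hstack([X, clf.predict_proba(feats)])   # append context features
          reg = RandomForestRegressor(n_estimators=100, random_state=seed).fit(feats, ct_values)
          return clfs, reg

      def predict_pct(clfs, reg, X):
          feats = X
          for clf in clfs:
              feats = np.hstack([X, clf.predict_proba(feats)])
          return reg.predict(feats)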

  12. Palaeomagnetic field intensity variations suggest Mesoproterozoic inner-core nucleation

    NASA Astrophysics Data System (ADS)

    Biggin, A. J.; Piispa, E. J.; Pesonen, L. J.; Holme, R.; Paterson, G. A.; Veikkolainen, T.; Tauxe, L.

    2015-10-01

    The Earth's inner core grows by the freezing of liquid iron at its surface. The point in history at which this process initiated marks a step-change in the thermal evolution of the planet. Recent computational and experimental studies have presented radically differing estimates of the thermal conductivity of the Earth's core, resulting in estimates of the timing of inner-core nucleation ranging from less than half a billion to nearly two billion years ago. Recent inner-core nucleation (high thermal conductivity) requires high outer-core temperatures in the early Earth that complicate models of thermal evolution. The nucleation of the core leads to a different convective regime and potentially different magnetic field structures that produce an observable signal in the palaeomagnetic record and allow the date of inner-core nucleation to be estimated directly. Previous studies searching for this signature have been hampered by the paucity of palaeomagnetic intensity measurements, by the lack of an effective means of assessing their reliability, and by shorter-timescale geomagnetic variations. Here we examine results from an expanded Precambrian database of palaeomagnetic intensity measurements selected using a new set of reliability criteria. Our analysis provides intensity-based support for the dominant dipolarity of the time-averaged Precambrian field, a crucial requirement for palaeomagnetic reconstructions of continents. We also present firm evidence for the existence of very long-term variations in geomagnetic strength. The most prominent and robust transition in the record is an increase in both average field strength and variability that is observed to occur between a billion and 1.5 billion years ago. This observation is most readily explained by the nucleation of the inner core occurring during this interval; the timing would tend to favour a modest value of core thermal conductivity and supports a simple thermal evolution model for the Earth.

  13. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
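
    The "poor man's parallelization" described here, running whole programs as separate processes, can be sketched in a few lines of Python (a generic illustration, not part of BioNode; the MAFFT command line and file names are hypothetical, and a cluster job scheduler plays the same role at larger scale):

      import shlex
      import subprocess
      from concurrent.futures import ProcessPoolExecutor

      def run_job(args):
          # Run one whole program as an independent process and write its stdout to a file.
          cmd, out_path = args
          with open(out_path, "w") as out:
              return subprocess.run(shlex.split(cmd), stdout=out).returncode

      if __name__ == "__main__":
          # Hypothetical per-gene alignment jobs run unchanged, side by side.
          jobs = [(f"mafft --auto gene_{i}.fasta", f"gene_{i}.aln") for i in range(100)]
          with ProcessPoolExecutor(max_workers=8) as pool:
              codes = list(pool.map(run_job, jobs))
          print(sum(c == 0 for c in codes), "of", len(codes), "jobs finished successfully")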

  14. Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; García, Sebastián Gimeno

    2013-05-01

    Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
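
    The chain of steps described (cross sections, then optical depth, then transmission along the line of sight) can be illustrated with a toy Beer-Lambert calculation; the Gaussian "lines", column densities and spectral grid below are made up and are not Py4CAtS calls.

      import numpy as np

      # Spectral grid in wavenumbers [cm^-1]
      nu = np.linspace(2100.0, 2150.0, 2000)

      # Toy absorption cross sections [cm^2/molecule]: one Gaussian "line" per species
      xs_a = 1e-19 * np.exp(-((nu - 2120.0) / 0.5) ** 2)
      xs_b = 2e-21 * np.exp(-((nu - 2135.0) / 1.0) ** 2)

      # Assumed column densities [molecules/cm^2]
      N_a, N_b = 2.0e12, 5.0e17

      tau = xs_a * N_a + xs_b * N_b        # combine species into an optical depth
      transmission = np.exp(-tau)          # Beer-Lambert law along the line of sight
      print("minimum transmission:", transmission.min())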

  15. VERCE: a productive e-Infrastructure and e-Science environment for data-intensive seismology research

    NASA Astrophysics Data System (ADS)

    Vilotte, Jean-Pierre; Atkinson, Malcolm; Carpené, Michele; Casarotti, Emanuele; Frank, Anton; Igel, Heiner; Rietbrock, Andreas; Schwichtenberg, Horst; Spinuso, Alessandro

    2016-04-01

    Seismology pioneers global and open-data access -- with internationally approved data, metadata and exchange standards facilitated worldwide by the Federation of Digital Seismic Networks (FDSN) and in Europe the European Integrated Data Archives (EIDA). The growing wealth of data generated by dense observation and monitoring systems and recent advances in seismic wave simulation capabilities induces a change in paradigm. Data-intensive seismology research requires a new holistic approach combining scalable high-performance wave simulation codes and statistical data analysis methods, and integrating distributed data and computing resources. The European E-Infrastructure project "Virtual Earthquake and seismology Research Community e-science environment in Europe" (VERCE) pioneers the federation of autonomous organisations providing data and computing resources, together with a comprehensive, integrated and operational virtual research environment (VRE) and E-infrastructure devoted to the full path of data use in a research-driven context. VERCE delivers to a broad base of seismology researchers in Europe easily used high-performance full waveform simulations and misfit calculations, together with a data-intensive framework for the collaborative development of innovative statistical data analysis methods, all of which were previously only accessible to a small number of well-resourced groups. It balances flexibility with new integrated capabilities to provide a fluent path from research innovation to production. As such, VERCE is a major contribution to the implementation phase of the ``European Plate Observatory System'' (EPOS), the ESFRI initiative of the solid-Earth community. The VRE meets a range of seismic research needs by eliminating chores and technical difficulties to allow users to focus on their research questions. It empowers researchers to harvest the new opportunities provided by well-established and mature high-performance wave simulation codes of the community. It enables active researchers to invent and refine scalable methods for innovative statistical analysis of seismic waveforms in a wide range of application contexts. The VRE paves the way towards a flexible shared framework for seismic waveform inversion, lowering the barriers to uptake for the next generation of researchers. The VRE can be accessed through the science gateway that puts together computational and data-intensive research into the same framework, integrating multiple data sources and services. It provides a context for task-oriented and data-streaming workflows, and maps user actions to the full gamut of the federated platform resources and procurement policies, activating the necessary behind-the-scene automation and transformation. The platform manages and produces domain metadata, coupling them with the provenance information describing the relationships and the dependencies, which characterise the whole workflow process. This dynamic knowledge base, can be explored for validation purposes via a graphical interface and a web API. Moreover, it fosters the assisted selection and re-use of the data within each phase of the scientific analysis. These phases can be identified as Simulation, Data Access, Preprocessing, Misfit and data processing, and are presented to the users of the gateway as dedicated and interactive workspaces. 
    By enabling researchers to share results and provenance information, VERCE steers open-science behaviour, allowing researchers to discover and build on prior work and thereby to progress faster. A key asset is the agile strategy that VERCE deployed in a multi-organisational context, engaging seismologists, data scientists, ICT researchers, HPC and data resource providers, and system administrators in short-lived tasks, each with a goal that is a seismology priority, intimately coupling research thinking with technical innovation. This changes the focus from HPC production environments and community data services to user-focused scenarios, avoiding wasteful bouts of technology centricity where technologists collect requirements and develop a system that is not used because the ideas of the planned users have moved on. As such, the technologies and concepts developed in VERCE are relevant to many other disciplines in computational and data-driven Earth Sciences and can provide the key technologies for a Europe-wide computational and data-intensive framework in Earth Sciences.

  16. CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research

    PubMed Central

    Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C.

    2014-01-01

    The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, Multiple Sclerosis as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions. PMID:24904400

  17. CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research.

    PubMed

    Sherif, Tarek; Rioux, Pierre; Rousseau, Marc-Etienne; Kassis, Nicolas; Beck, Natacha; Adalat, Reza; Das, Samir; Glatard, Tristan; Evans, Alan C

    2014-01-01

    The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As of October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, Multiple Sclerosis as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage, and future directions.

  18. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.

  19. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. Data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to data storage, computing and analysis technologies. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark - a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed based on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user cloud computing hosting infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph and MongoDB are combined for spatiotemporal data storage and management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark in conjunction with the scientific computing environment, exploratory spatial data analysis tools, temporal data management and analysis systems make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that address other domains involving spatial properties. We tested the performance of the platform based on taxi trajectory analysis. Results suggested that GISpark achieves excellent run-time performance in spatiotemporal big data applications.

  20. Time-domain seismic modeling in viscoelastic media for full waveform inversion on heterogeneous computing platforms with OpenCL

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Giroux, Bernard

    2017-03-01

    Full Waveform Inversion (FWI) aims at recovering the elastic parameters of the Earth by matching recordings of the ground motion with the direct solution of the wave equation. Modeling the wave propagation for realistic scenarios is computationally intensive, which limits the applicability of FWI. The current hardware evolution brings increasing parallel computing power that can speed up the computations in FWI. However, to take advantage of the diversity of parallel architectures presently available, new programming approaches are required. In this work, we explore the use of OpenCL to develop a portable code that can take advantage of the many parallel processor architectures now available. We present a program called SeisCL for 2D and 3D viscoelastic FWI in the time domain. The code computes the forward and adjoint wavefields using finite differences and outputs the gradient of the misfit function given by the adjoint state method. To demonstrate the code portability on different architectures, the performance of SeisCL is tested on three different devices: Intel CPUs, NVidia GPUs and Intel Xeon Phi. Results show that the use of GPUs with OpenCL can speed up the computations by nearly two orders of magnitude over a single-threaded application on the CPU. Although OpenCL allows code portability, we show that some device-specific optimization is still required to get the best performance out of a specific architecture. Using OpenCL in conjunction with MPI allows the domain decomposition of large models on several devices located on different nodes of a cluster. For large enough models, the speedup of the domain decomposition varies quasi-linearly with the number of devices. Finally, we investigate two different approaches to compute the gradient by the adjoint state method and show the significant advantages of using OpenCL for FWI.
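
    To illustrate the programming model only (this is not SeisCL code), the sketch below runs one building block of such forward modeling, a second-order finite-difference update of the 1-D acoustic wave equation, as an OpenCL kernel driven through the PyOpenCL wrapper; the grid size, time step and velocity are arbitrary but CFL-stable.

      import numpy as np
      import pyopencl as cl

      KERNEL = """
      __kernel void step(__global const float *pold, __global const float *pcur,
                         __global float *pnew, __global const float *vel,
                         const float dt, const float dx, const int n)
      {
          int i = get_global_id(0);
          if (i > 0 && i < n - 1) {
              float lap = (pcur[i-1] - 2.0f*pcur[i] + pcur[i+1]) / (dx*dx);
              pnew[i] = 2.0f*pcur[i] - pold[i] + vel[i]*vel[i]*dt*dt*lap;  // leapfrog update
          }
      }
      """

      n, dx, dt = 1024, 5.0, 1.0e-3                        # c*dt/dx = 0.4, within the CFL limit
      pcur = np.zeros(n, np.float32); pcur[n // 2] = 1.0   # impulsive source at the centre
      vel = np.full(n, 2000.0, np.float32)                 # homogeneous velocity model [m/s]

      ctx = cl.create_some_context()
      queue = cl.CommandQueue(ctx)
      prg = cl.Program(ctx, KERNEL).build()
      mf = cl.mem_flags
      bufs = [cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=h)
              for h in (np.zeros(n, np.float32), pcur, np.zeros(n, np.float32))]
      d_vel = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=vel)

      for _ in range(500):
          d_old, d_cur, d_new = bufs
          prg.step(queue, (n,), None, d_old, d_cur, d_new, d_vel,
                   np.float32(dt), np.float32(dx), np.int32(n))
          bufs = [d_cur, d_new, d_old]                     # rotate time levels

      cl.enqueue_copy(queue, pcur, bufs[1])
      print("max |p| after 500 steps:", float(np.abs(pcur).max()))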

  1. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    NASA Astrophysics Data System (ADS)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two-dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
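
    The Alternating Direction Implicit idea singled out above can be pictured with a short sketch (ours, not the Kiva framework's code): each time step is split into two half steps that are implicit in one direction at a time, so only tridiagonal systems need to be solved. This Peaceman-Rachford form assumes a square grid, uniform spacing and homogeneous Dirichlet boundaries.

      import numpy as np
      from scipy.linalg import solve_banded

      def adi_step(u, r):
          # One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy), with
          # r = alpha*dt/dx**2, zero Dirichlet boundaries, dx == dy.
          m = u.shape[0] - 2                      # interior points per direction
          ab = np.zeros((3, m))                   # banded form of (I - r/2 * second difference)
          ab[0, 1:] = -r / 2                      # super-diagonal
          ab[1, :] = 1 + r                        # main diagonal
          ab[2, :-1] = -r / 2                     # sub-diagonal

          # Half step 1: implicit in x, explicit in y
          rhs = u[1:-1, 1:-1] + (r / 2) * (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2])
          ustar = np.zeros_like(u)
          ustar[1:-1, 1:-1] = solve_banded((1, 1), ab, rhs)      # tridiagonal solves along x

          # Half step 2: implicit in y, explicit in x
          rhs = ustar[1:-1, 1:-1] + (r / 2) * (ustar[2:, 1:-1] - 2 * ustar[1:-1, 1:-1] + ustar[:-2, 1:-1])
          unew = np.zeros_like(u)
          unew[1:-1, 1:-1] = solve_banded((1, 1), ab, rhs.T).T   # tridiagonal solves along y
          return unew

      # Toy usage: diffuse a hot spot on a 65 x 65 grid (unconditionally stable in r)
      u = np.zeros((65, 65)); u[32, 32] = 1.0
      for _ in range(100):
          u = adi_step(u, r=0.5)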

  2. Race, gender, and information technology use: the new digital divide.

    PubMed

    Jackson, Linda A; Zhao, Yong; Kolenic, Anthony; Fitzgerald, Hiram E; Harold, Rena; Von Eye, Alexander

    2008-08-01

    This research examined race and gender differences in the intensity and nature of IT use and whether IT use predicted academic performance. A sample of 515 children (172 African Americans and 343 Caucasian Americans), average age 12 years old, completed surveys as part of their participation in the Children and Technology Project. Findings indicated race and gender differences in the intensity of IT use; African American males were the least intense users of computers and the Internet, and African American females were the most intense users of the Internet. Males, regardless of race, were the most intense videogame players, and females, regardless of race, were the most intense cell phone users. IT use predicted children's academic performance. Length of time using computers and the Internet was a positive predictor of academic performance, whereas amount of time spent playing videogames was a negative predictor. Implications of the findings for bringing IT to African American males and bringing African American males to IT are discussed.

  3. Integrated light-guide plates that can control the illumination angle for liquid crystal display backlight system

    NASA Astrophysics Data System (ADS)

    Feng, Di; Yang, Xingpeng; Jin, Guofan; Yan, Yingbai; Fan, Shoushan

    2006-01-01

    Liquid crystal displays (LCDs) with edge-lit backlight systems offer several advantages over traditional cathode-ray tube displays, such as low energy consumption, low weight, and high uniformity of intensity, making them ideal for many applications including monitors in notebook personal computers, TV screens, and portable information terminals such as mobile phones and personal digital assistants. To satisfy market requirements for mobile and personal display panels, it is increasingly necessary to modify the backlight system to make it thinner, lighter, and brighter all at once. In this paper, we propose a new integrated light-guide plate (LGP) based on periodic and aperiodic microprism structures in polymethyl methacrylate, which can be designed to control the illumination angle and to achieve high uniformity of intensity. The backlight system is thus simplified to light sources and one LGP, without other optical sheets such as a reflection sheet, a diffusion sheet, or prism sheets. Using an optimization program and the ray-tracing method, the designed LGPs can achieve a uniformity of intensity better than 86% and a peak illumination angle from +40° to -20°, without requiring other optical sheets. We have designed a backlight system with only one LED light source and one LGP; other LGP design examples with different sizes (1.8 inches and 14.1 inches) and different light sources (LED or CCFL) are also presented.

  4. Automatic segmentation for detecting uterine fibroid regions treated with MR-guided high intensity focused ultrasound (MR-HIFU).

    PubMed

    Antila, Kari; Nieminen, Heikki J; Sequeiros, Roberto Blanco; Ehnholm, Gösta

    2014-07-01

    Up to 25% of women suffer from uterine fibroids (UF) that cause infertility, pain, and discomfort. MR-guided high intensity focused ultrasound (MR-HIFU) is an emerging technique for noninvasive, computer-guided thermal ablation of UFs. The volume of induced necrosis is a predictor of the success of the treatment. However, accurate volume assessment by hand can be time consuming, and quick tools produce biased results. Therefore, fast and reliable tools are required in order to estimate the technical treatment outcome during the therapy event so as to predict symptom relief. A novel technique has been developed for the segmentation and volume assessment of the treated region. Conventional algorithms typically require user interaction or a priori knowledge of the target. The developed algorithm exploits the treatment plan, the coordinates of the intended ablation, for fully automatic segmentation with no user input. A good similarity to an expert-segmented manual reference was achieved (Dice similarity coefficient = 0.880 ± 0.074). The average automatic segmentation time was 1.6 ± 0.7 min per patient, compared with tens of minutes when done manually. The results suggest that the segmentation algorithm developed, requiring no user input, provides a feasible and practical approach for the automatic evaluation of the boundary and volume of the HIFU-treated region.

  5. Using NERSC High-Performance Computing (HPC) systems for high-energy nuclear physics applications with ALICE

    NASA Astrophysics Data System (ADS)

    Fasel, Markus

    2016-10-01

    High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.

  6. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    To date, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e. triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
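
    A stripped-down version of the idea, computing an approximate inverse column by column through small independent least-squares problems, might look like the following sketch. The actual method restricts each column to a sparsity pattern and solves the small problems in parallel; here the pattern is just a band around the diagonal and everything is kept dense for clarity.

      import numpy as np

      def approximate_inverse(A, bandwidth=2):
          # Minimise ||A*M - I||_F column by column: each column m_j solves the small
          # independent least-squares problem min ||A[:, idx] @ m - e_j|| over a chosen
          # sparsity pattern idx (here, a simple band around j).
          n = A.shape[0]
          M = np.zeros((n, n))
          for j in range(n):
              idx = np.arange(max(0, j - bandwidth), min(n, j + bandwidth + 1))
              e_j = np.zeros(n); e_j[j] = 1.0
              coeffs, *_ = np.linalg.lstsq(A[:, idx], e_j, rcond=None)
              M[idx, j] = coeffs
          return M

      # Toy usage: M is then applied inside an iterative solver as a plain matrix-vector product
      rng = np.random.default_rng(0)
      A = np.eye(50) + 0.1 * rng.normal(size=(50, 50))
      M = approximate_inverse(A)
      print("||A M - I||_F =", np.linalg.norm(A @ M - np.eye(50)))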

  7. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
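
    Antithetic variates, the variance-reduction technique evaluated here, can be shown in a few lines (a generic illustration, not the diabetes model itself): each random draw is paired with its mirror image, and because the two outputs are negatively correlated for monotone responses, fewer replications are needed for the same precision.

      import numpy as np

      rng = np.random.default_rng(0)
      f = np.exp                      # stand-in for "simulate an outcome from a random input"

      def crude_mc(n):
          z = rng.standard_normal(n)
          y = f(z)
          return y.mean(), y.std(ddof=1) / np.sqrt(n)

      def antithetic_mc(n):
          z = rng.standard_normal(n // 2)
          y = 0.5 * (f(z) + f(-z))    # average each draw with its antithetic partner
          return y.mean(), y.std(ddof=1) / np.sqrt(n // 2)

      print("crude     :", crude_mc(100_000))      # estimate of E[f(Z)] and its standard error
      print("antithetic:", antithetic_mc(100_000)) # same sampling budget, smaller standard error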

  8. ASTROPULSE: A SEARCH FOR MICROSECOND TRANSIENT RADIO SIGNALS USING DISTRIBUTED COMPUTING. I. METHODOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Von Korff, J.; Heien, E.; Korpela, E.

    We are performing a transient, microsecond timescale radio sky survey, called 'Astropulse', using the Arecibo telescope. Astropulse searches for brief (0.4 μs to 204.8 μs), wideband (relative to its 2.5 MHz bandwidth) radio pulses centered at 1420 MHz. Astropulse is a commensal (piggyback) survey, and scans the sky between declinations of -1.33° and +38.03°. We obtained 1540 hr of data in each of seven beams of the ALFA receiver, with two polarizations per beam. The data are one-bit complex sampled at the Nyquist limit of 0.4 μs per sample. Examination of timescales on the order of microseconds is possible because we used coherent dedispersion, a technique that has frequently been used for targeted observations, but has never been associated with a radio sky survey. The more usual technique, incoherent dedispersion, cannot resolve signals below a minimum timescale which depends on the signal's dispersion measure (DM) and frequency. However, coherent dedispersion requires more intensive computation than incoherent dedispersion. The required processing power was provided by BOINC, the Berkeley Open Infrastructure for Network Computing. BOINC is a distributed computing system, allowing us to utilize hundreds of thousands of volunteers' computers to perform the necessary calculations for coherent dedispersion. Astrophysical events that might produce brief radio pulses include giant pulses from pulsars, rotating radio transients, exploding primordial black holes, or new sources yet to be imagined. Radio frequency interference and noise contaminate the data; these are mitigated by a number of techniques including multi-polarization correlation, DM repetition detection, and frequency profiling.
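
    Coherent dedispersion itself reduces to multiplying the Fourier transform of the raw complex voltages by the inverse of the interstellar dispersion transfer function. The sketch below follows the standard cold-plasma chirp; the dispersion constant, unit bookkeeping and sign convention are assumptions that would need checking against a given baseband format, and this is not the Astropulse/BOINC code.

      import numpy as np

      K_DM = 4.148808e15        # assumed dispersion constant [Hz^2 s per (pc cm^-3)]

      def coherent_dedisperse(voltages, dm, f0_hz, bw_hz):
          # voltages: complex baseband samples; f0_hz: band centre; bw_hz: sampled bandwidth.
          n = voltages.size
          f = np.fft.fftfreq(n, d=1.0 / bw_hz)             # offset from band centre [Hz]
          phase = 2 * np.pi * K_DM * dm * f**2 / (f0_hz**2 * (f0_hz + f))
          chirp = np.exp(-1j * phase)                      # inverse of the ISM transfer function
          return np.fft.ifft(np.fft.fft(voltages) * chirp)

      # Toy usage at Astropulse-like parameters (data and DM value illustrative only)
      rng = np.random.default_rng(0)
      data = rng.standard_normal(1 << 15) + 1j * rng.standard_normal(1 << 15)
      clean = coherent_dedisperse(data, dm=50.0, f0_hz=1420e6, bw_hz=2.5e6)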

  9. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE PAGES

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...

    2018-03-01

    Dennard scaling has ended. Lowering the supply voltage (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further downscaling unacceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, the simulation results show that this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
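
    The closure property that makes RRNS attractive is easy to demonstrate: arithmetic performed independently on the residues, modulo a set of pairwise coprime moduli, agrees with ordinary integer arithmetic after Chinese-remainder reconstruction. The Python sketch below shows plain (non-redundant) residue arithmetic with illustrative moduli; the redundant residues and error-correction logic of RRNS are omitted.

    ```python
    from math import prod

    MODULI = (13, 17, 19, 23)           # pairwise coprime, illustrative only
    M = prod(MODULI)                    # dynamic range of the representation

    def to_rns(x):
        """Represent an integer by its residues modulo each modulus."""
        return tuple(x % m for m in MODULI)

    def rns_add(a, b):
        return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

    def rns_mul(a, b):
        return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

    def from_rns(residues):
        """Chinese Remainder Theorem reconstruction back to an integer."""
        total = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            total += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
        return total % M

    x, y = 1234, 5678
    assert from_rns(rns_add(to_rns(x), to_rns(y))) == (x + y) % M
    assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % M
    print("residue arithmetic matches integer arithmetic modulo", M)
    ```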

  10. A scientific workflow framework for (13)C metabolic flux analysis.

    PubMed

    Dalman, Tolga; Wiechert, Wolfgang; Nöh, Katharina

    2016-08-20

    Metabolic flux analysis (MFA) with (13)C labeling data is a high-precision technique to quantify intracellular reaction rates (fluxes). One of the major challenges of (13)C MFA is the interactivity of the computational workflow according to which the fluxes are determined from the input data (metabolic network model, labeling data, and physiological rates). Here, the workflow assembly is inevitably determined by the scientist who has to consider interacting biological, experimental, and computational aspects. Decision-making is context dependent and requires expertise, rendering an automated evaluation process hardly possible. Here, we present a scientific workflow framework (SWF) for creating, executing, and controlling on demand (13)C MFA workflows. (13)C MFA-specific tools and libraries, such as the high-performance simulation toolbox 13CFLUX2, are wrapped as web services and thereby integrated into a service-oriented architecture. Besides workflow steering, the SWF features transparent provenance collection and enables full flexibility for ad hoc scripting solutions. To handle compute-intensive tasks, cloud computing is supported. We demonstrate how the challenges posed by (13)C MFA workflows can be solved with our approach on the basis of two proof-of-concept use cases. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.

    Dennard scaling has ended. Lowering the supply voltage (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further downscaling unacceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, the simulation results show that this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.

  12. GPU-based acceleration of computations in nonlinear finite element deformation analysis.

    PubMed

    Mafi, Ramin; Sirouspour, Shahin

    2014-03-01

    The physics of deformation for biological soft-tissue is best described by nonlinear continuum mechanics-based models, which can then be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit (GPU)-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis. It is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make them particularly suitable for implementation on a parallel computing platform such as the GPU. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
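
    The matrix-free preconditioned conjugate gradient variant mentioned above can be sketched independently of any FEM assembly: the solver only needs a routine that returns A·v, so the global tangent matrix never has to be stored. The example below is a generic Jacobi-preconditioned, matrix-free CG in Python, not the authors' GPU code; the callable apply_A stands in for an element-by-element FEM operator.

    ```python
    import numpy as np

    def cg_matrix_free(apply_A, b, diag_A, tol=1e-8, max_iter=500):
        """Jacobi-preconditioned conjugate gradients using only mat-vec products.

        apply_A : callable v -> A @ v (e.g. summed element-level products on a GPU)
        diag_A  : diagonal of A, used as a cheap preconditioner
        """
        x = np.zeros_like(b)
        r = b - apply_A(x)
        z = r / diag_A
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = r / diag_A
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Demonstration on a small SPD system standing in for a FEM tangent matrix.
    rng = np.random.default_rng(0)
    B = rng.standard_normal((200, 200))
    A = B @ B.T + 200 * np.eye(200)          # symmetric positive definite
    b = rng.standard_normal(200)
    x = cg_matrix_free(lambda v: A @ v, b, np.diag(A))
    print("residual norm:", np.linalg.norm(A @ x - b))
    ```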

  13. Ray tracing on the MPP

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Generating graphics to faithfully represent information can be a computationally intensive task. A way of using the Massively Parallel Processor to generate images by ray tracing is presented. This technique uses sort computation, a method of performing generalized routing interspersed with computation on a single-instruction-multiple-data (SIMD) computer.

  14. Deformable registration of CT and cone-beam CT with local intensity matching.

    PubMed

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-07

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods (B-spline, demons, and optical flow) that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.
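
    The local intensity matching step can be illustrated with a generic slice-by-slice histogram matching routine: for each slice, the CBCT intensities are remapped so that their empirical distribution matches that of the corresponding CT slice. The sketch below is a simplified stand-in for the authors' correction, written with NumPy quantile mapping and synthetic data.

    ```python
    import numpy as np

    def match_slice_histogram(source, reference):
        """Remap 'source' intensities so their empirical CDF matches 'reference'."""
        src_flat = source.ravel()
        order = np.argsort(src_flat)
        ref_sorted = np.sort(reference.ravel())
        ref_quantiles = np.linspace(0.0, 1.0, ref_sorted.size)
        src_quantiles = (np.arange(src_flat.size) + 0.5) / src_flat.size
        matched = np.empty_like(src_flat, dtype=float)
        # Map each source quantile onto the reference intensity at the same quantile.
        matched[order] = np.interp(src_quantiles, ref_quantiles, ref_sorted)
        return matched.reshape(source.shape)

    def match_volume_slicewise(cbct, ct):
        """Apply histogram matching slice by slice along the first (axial) axis."""
        return np.stack([match_slice_histogram(cb, c) for cb, c in zip(cbct, ct)])

    # Synthetic example: a biased, rescaled copy of a 'CT' volume plays the 'CBCT'.
    rng = np.random.default_rng(0)
    ct = rng.normal(0.0, 1.0, size=(8, 64, 64))
    cbct = 0.7 * ct + 40.0 + rng.normal(0.0, 0.2, size=ct.shape)
    corrected = match_volume_slicewise(cbct, ct)
    print("mean offset before:", abs((cbct - ct).mean()))
    print("mean offset after :", abs((corrected - ct).mean()))
    ```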

  15. Deformable registration of CT and cone-beam CT with local intensity matching

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2017-02-01

    Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods (B-spline, demons, and optical flow) that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.

  16. A Note on Testing Mediated Effects in Structural Equation Models: Reconciling Past and Current Research on the Performance of the Test of Joint Significance

    ERIC Educational Resources Information Center

    Valente, Matthew J.; Gonzalez, Oscar; Miocevic, Milica; MacKinnon, David P.

    2016-01-01

    Methods to assess the significance of mediated effects in education and the social sciences are well studied and fall into two categories: single sample methods and computer-intensive methods. A popular single sample method to detect the significance of the mediated effect is the test of joint significance, and a popular computer-intensive method…

  17. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated sections series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
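
    The non-robust baseline that this paper improves upon is simple to sketch: fit a decreasing parametric curve (here an exponential) to the per-section mean intensities and rescale each section by the inverse of the fitted decay. The Python sketch below shows that baseline only, on synthetic data; the robust, motion-estimation-style reweighting that ignores deviating pixels is not reproduced.

    ```python
    import numpy as np

    def fit_exponential_decay(section_means):
        """Fit I(z) = I0 * exp(-k*z) to per-section mean intensities (log-linear LSQ)."""
        z = np.arange(section_means.size)
        slope, log_i0 = np.polyfit(z, np.log(section_means), 1)
        return np.exp(log_i0), -slope                     # I0 and decay rate k

    def correct_stack(stack):
        """Divide each optical section by the fitted decay to brighten deep sections."""
        means = stack.reshape(stack.shape[0], -1).mean(axis=1)
        i0, k = fit_exponential_decay(means)
        gain = np.exp(k * np.arange(stack.shape[0]))      # inverse of exp(-k*z)
        return stack * gain[:, None, None], k

    # Synthetic CLSM-like stack: a roughly uniform specimen attenuated with depth.
    rng = np.random.default_rng(0)
    true = 100.0 + rng.normal(0.0, 5.0, size=(30, 32, 32))
    attenuated = true * np.exp(-0.05 * np.arange(30))[:, None, None]
    corrected, k = correct_stack(attenuated)
    print("fitted decay rate:", round(float(k), 3))       # close to the simulated 0.05
    ```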

  18. Computer-automated evolution of an X-band antenna for NASA's Space Technology 5 mission.

    PubMed

    Hornby, Gregory S; Lohn, Jason D; Linden, Derek S

    2011-01-01

    Whereas the current practice of designing antennas by hand is severely limited because it is both time and labor intensive and requires a significant amount of domain knowledge, evolutionary algorithms can be used to search the design space and automatically find novel antenna designs that are more effective than would otherwise be developed. Here we present our work in using evolutionary algorithms to automatically design an X-band antenna for NASA's Space Technology 5 (ST5) spacecraft. Two evolutionary algorithms were used: the first uses a vector of real-valued parameters and the second uses a tree-structured generative representation for constructing the antenna. The highest-performance antennas from both algorithms were fabricated and tested and both outperformed a hand-designed antenna produced by the antenna contractor for the mission. Subsequent changes to the spacecraft orbit resulted in a change in requirements for the spacecraft antenna. By adjusting our fitness function we were able to rapidly evolve a new set of antennas for this mission in less than a month. One of these new antenna designs was built, tested, and approved for deployment on the three ST5 spacecraft, which were successfully launched into space on March 22, 2006. This evolved antenna design is the first computer-evolved antenna to be deployed for any application and is the first computer-evolved hardware in space.

  19. FUNCTION GENERATOR FOR ANALOGUE COMPUTERS

    DOEpatents

    Skramstad, H.K.; Wright, J.H.; Taback, L.

    1961-12-12

    An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)

  20. Combination of the discontinuous Galerkin method with finite differences for simulation of seismic wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lisitsa, Vadim, E-mail: lisitsavv@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk; Tcheverda, Vladimir

    We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near surface part and free surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general, they require rectangular grids, leading to the stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.

  1. Faster methods for estimating arc centre position during VAR and results from Ti-6Al-4V and INCONEL 718 alloys

    NASA Astrophysics Data System (ADS)

    Nair, B. G.; Winter, N.; Daniel, B.; Ward, R. M.

    2016-07-01

    Direct measurement of the flow of electric current during VAR is extremely difficult due to the aggressive environment, as the arc process itself controls the distribution of current. In previous studies the technique of “magnetic source tomography” was presented; this was shown to be effective but it used a computationally intensive iterative method to analyse the distribution of arc centre position. In this paper we present faster computational methods requiring less numerical optimisation to determine the centre position of a single distributed arc, both numerically and experimentally. Numerical validation of the algorithms was done on models, and experimental validation on measurements from titanium and nickel alloys (Ti-6Al-4V and INCONEL 718). The results are used to comment on the effects of process parameters on arc behaviour during VAR.

  2. Dynamic file-access characteristics of a production parallel scientific workload

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1994-01-01

    Multiprocessors have permitted astounding increases in computational performance, but many cannot meet the intense I/O requirements of some scientific applications. An important component of any solution to this I/O bottleneck is a parallel file system that can provide high-bandwidth access to tremendous amounts of data in parallel to hundreds or thousands of processors. Most successful systems are based on a solid understanding of the expected workload, but thus far there have been no comprehensive workload characterizations of multiprocessor file systems. This paper presents the results of a three week tracing study in which all file-related activity on a massively parallel computer was recorded. Our instrumentation differs from previous efforts in that it collects information about every I/O request and about the mix of jobs running in a production environment. We also present the results of a trace-driven caching simulation and recommendations for designers of multiprocessor file systems.

  3. Stress studies in EFG

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Experimental work in support of stress studies in high speed silicon sheet growth has been emphasized in this quarter. Creep experiments utilizing four-point bending have been made in the temperature range from 1000 C to 1360 C in CZ silicon as well as on EFG ribbon. A method to measure residual stress over large areas using laser interferometry to map strain distributions under load is under development. A fiber optics sensor to measure ribbon temperature profiles has been constructed and is being tested in a ribbon growth furnace environment. Stress and temperature field modeling work has been directed toward improving various aspects of the finite element computing schemes. Difficulties in computing stress distributions with a very high creep intensity and with non-zero interface stress have been encountered and additional development of the numerical schemes to cope with these problems is required. Temperature field modeling has been extended to include the study of heat transfer effects in the die and meniscus regions.

  4. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.

  5. Challenges of developing an electro-optical system for measuring man's operational envelope

    NASA Technical Reports Server (NTRS)

    Woolford, B.

    1985-01-01

    In designing work stations and restraint systems, and in planning tasks to be performed in space, a knowledge of the capabilities of the operator is essential. Answers to such questions as whether a specific control or work surface can be reached from a given restraint and how much force can be applied are of particular interest. A computer-aided design system has been developed for designing and evaluating work stations, etc., and the Anthropometric Measurement Laboratory (AML) has been charged with obtaining the data to be used in design and modeling. Traditional methods of measuring reach and force are very labor intensive and require bulky equipment. The AML has developed a series of electro-optical devices for collecting reach data easily, in computer readable form, with portable systems. The systems developed, their use, and data collected with them are described.

  6. Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error

    NASA Technical Reports Server (NTRS)

    Byrne, M. D.; Kirlik, Alex

    2003-01-01

    We present a computational model of a closed-loop, pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively-intensive but precise, to fast, frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given time available. We found a signature in the simulation data of the use of globally robust heuristics to cope with short decision horizons as revealed by errors occurring most frequently at atypical taxiway geometries or clearance routes. These data provided empirical support for the model.

  7. A systematic and feasible method for computing nuclear contributions to electrical properties of polyatomic molecules

    NASA Astrophysics Data System (ADS)

    Luis, Josep M.; Duran, Miquel; Andrés, José L.

    1997-08-01

    An analytic method to evaluate nuclear contributions to electrical properties of polyatomic molecules is presented. Such contributions control the changes induced by an electric field in the equilibrium geometry (nuclear relaxation contribution) and vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, electric field intensity, or both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To demonstrate the efficiency of the analytical evaluation of electrical properties (the so-called AEEP method), results for calculations on water and pyridine at the SCF/TZ2P and the MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.

  8. Uses of megavoltage digital tomosynthesis in radiotherapy

    NASA Astrophysics Data System (ADS)

    Sarkar, Vikren

    With the advent of intensity modulated radiotherapy, radiation treatment plans are becoming more conformal to the tumor, with decreasing margins. It is therefore of prime importance that the patient be positioned correctly prior to treatment, and image-guided treatment is necessary for intensity modulated radiotherapy plans to be implemented successfully. Current advanced imaging devices require costly hardware and software upgrades, and radiation imaging solutions, such as cone beam computed tomography, may introduce extra radiation dose to the patient in order to acquire better quality images. Thus, there is a need to extend the capabilities of existing imaging devices while reducing cost and radiation dose. Existing electronic portal imaging devices can be used to generate computed tomography-like tomograms from projection images acquired over a small angle using the technique of cone-beam digital tomosynthesis. Since it uses a fraction of the images required for computed tomography reconstruction, this technique correspondingly delivers only a fraction of the imaging dose to the patient. Furthermore, cone-beam digital tomosynthesis can be offered as a software-only solution as long as a portal imaging device is available. In this study, the feasibility of performing digital tomosynthesis using individually acquired megavoltage images from a charge coupled device-based electronic portal imaging device was investigated. Three digital tomosynthesis reconstruction algorithms, shift-and-add, filtered back-projection, and the simultaneous algebraic reconstruction technique, were compared in terms of final image quality and imaging dose. A software platform, DART, was created using a combination of the Matlab and C++ languages. The platform allows a reference Cone Beam Digital Tomosynthesis (CBDT) image to be registered against a daily acquired set to determine how to shift the patient prior to treatment. Finally, the software was extended to investigate whether the digital tomosynthesis dataset could be used in an adaptive radiotherapy regimen, using the Pinnacle treatment planning software to recalculate the dose delivered. The feasibility study showed that the megavoltage CBDT visually agreed with corresponding megavoltage computed tomography images. The comparative study showed that the best compromise between image quality and imaging dose is obtained when 11 projection images, acquired over an imaging angle of 40°, are used with the filtered back-projection algorithm. DART was successfully used to register reference and daily image sets to within 1 mm in-plane and 2.5 mm out of plane. The DART platform was also effectively used to generate updated files that the Pinnacle treatment planning system used to calculate updated dose in a rigidly shifted patient. These doses were then used to calculate a cumulative dose distribution that a physician could use as a reference to decide when the treatment plan should be updated. In conclusion, this study showed that a software solution can extend existing electronic portal imaging devices to function as cone-beam digital tomosynthesis devices and meet the daily imaging requirements of image-guided intensity modulated radiotherapy treatments. The DART platform also has the potential to be used as part of an adaptive radiotherapy solution.
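
    Of the three reconstruction algorithms compared here, shift-and-add is the simplest to illustrate: each projection is translated by an amount proportional to the height of the plane being reconstructed and the shifted projections are averaged, so structures in that plane reinforce while out-of-plane structures blur. The sketch below is a schematic parallel-beam version in Python/NumPy with made-up geometry, not the cone-beam implementation described in this work.

    ```python
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def shift_and_add(projections, angles_deg, plane_height, pixel_size=1.0):
        """Reconstruct one plane by shifting each projection and averaging.

        projections  : (n_views, ny, nx) array, one 2-D image per gantry angle
        angles_deg   : projection angles spanning the limited tomosynthesis arc
        plane_height : height of the plane of interest above the reference plane
        """
        recon = np.zeros(projections.shape[1:], dtype=float)
        for proj, theta in zip(projections, np.deg2rad(angles_deg)):
            # Parallel-beam approximation: structures in the chosen plane move across
            # the detector by about height*tan(theta); shifting by the opposite amount
            # realigns them so they reinforce while other planes blur out.
            dx = -plane_height * np.tan(theta) / pixel_size
            recon += nd_shift(proj, shift=(0.0, dx), order=1, mode="nearest")
        return recon / len(angles_deg)

    # Toy data: a small bright square at height 50, seen from 11 views over 40 degrees.
    angles = np.linspace(-20.0, 20.0, 11)
    projections = np.zeros((11, 64, 64))
    for i, theta in enumerate(np.deg2rad(angles)):
        dx = int(round(50.0 * np.tan(theta)))
        projections[i, 28:36, 28 + dx:36 + dx] = 1.0

    plane = shift_and_add(projections, angles, plane_height=50.0)
    print("peak of the reconstructed plane:", round(float(plane.max()), 2))
    ```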

  9. Effectiveness of speech language therapy either alone or with add-on computer-based language therapy software (Malayalam version) for early post stroke aphasia: A feasibility study.

    PubMed

    Kesav, Praveen; Vrinda, S L; Sukumaran, Sajith; Sarma, P S; Sylaja, P N

    2017-09-15

    This study aimed to assess the feasibility of professional-based conventional speech language therapy (SLT) either alone (Group A/less intensive) or assisted by novel computer-based local language software (Group B/more intensive) for rehabilitation in early post stroke aphasia. The setting was the Comprehensive Stroke Care Center of a tertiary health care institute in South India, and the study design was a prospective open randomised controlled trial with blinded endpoint evaluation. This study recruited 24 right-handed patients above 15 years of age with a first-ever acute ischemic stroke affecting the middle cerebral artery territory, within 90 days of stroke onset and with a baseline Western Aphasia Battery (WAB) Aphasia Quotient (AQ) score of <93.8, between September 2013 and January 2016. The recruited subjects were block randomised into either the Group A/less intensive or the Group B/more intensive therapy arm, receiving 12 sessions of conventional professional-based SLT of 1 h each in both groups, with an additional 12 h of computer-based language therapy in Group B over 4 weeks on a thrice weekly basis, and with a follow-up WAB performed at four and twelve weeks after baseline assessment. The trial was registered with the Clinical Trials Registry India [2016/08/0120121]. All statistical analysis was carried out with IBM SPSS Statistics for Windows version 21. 20 subjects [14 (70%) males; mean age: 52.8 years, SD 12.04] completed the study (9 in the less intensive and 11 in the more intensive arm). The mean four-week follow-up AQ showed a significant improvement from baseline in the total group (p value: 0.01). The rate of rise of AQ from baseline to the four-week follow-up (ΔAQ %) showed a significantly greater value for the less intensive treatment group than for the more intensive treatment group [155% (SD: 150; 95% CI: 34-275) versus 52% (SD: 42%; 95% CI: 24-80), respectively; p value: 0.053]. Even though the more intensive treatment arm, which combined professional-based SLT and computer software-based training, fared poorer than the less intensive therapy group, this study nevertheless reinforces the feasibility of SLT in augmenting recovery of early post stroke aphasia. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Laboratory manual: mineral X-ray diffraction data retrieval/plot computer program

    USGS Publications Warehouse

    Hauff, Phoebe L.; VanTrump, George

    1976-01-01

    The Mineral X-Ray Diffraction Data Retrieval/Plot Computer Program--XRDPLT (VanTrump and Hauff, 1976a) is used to retrieve and plot mineral X-ray diffraction data. The program operates on a file of mineral powder diffraction data (VanTrump and Hauff, 1976b) which contains two-theta or 'd' values, and intensities, chemical formula, mineral name, identification number, and mineral group code. XRDPLT is a machine-independent Fortran program which operates in time-sharing mode on a DEC System 10 computer and the Gerber plotter (Evenden, 1974). The program prompts the user to respond from a time-sharing terminal in a conversational format with the required input information. The program offers two major options: retrieval only; retrieval and plot. The first option retrieves mineral names, formulas, and groups from the file by identification number, by the mineral group code (a classification by chemistry or structure), or by searches based on the formula components. For example, it enables the user to search for minerals by major groups (i.e., feldspars, micas, amphiboles, oxides, phosphates, carbonates), by elemental composition (i.e., Fe, Cu, Al, Zn), or by a combination of these (i.e., all copper-bearing arsenates). The second option retrieves as the first, but also plots the retrieved 2-theta and intensity values as diagrammatic X-ray powder patterns on mylar sheets or overlays. These plots can be made using scale combinations compatible with chart recorder diffractograms and 114.59 mm powder camera films. The overlays are then used to separate or sieve out unrelated minerals until unknowns are matched and identified.

  11. Electron transport theory in magnetic nanostructures

    NASA Astrophysics Data System (ADS)

    Choy, Tat-Sang

    Magnetic nanostructures have become a new trend because of their applications in magnetic sensors, magnetic memories, and the magnetic read heads of hard disk drives. Although a variety of nanostructures have been realized in experiments in recent years by innovative sample growth techniques, the theoretical study of these devices remains a challenge. On one hand, atomic-scale modeling is often required for studying magnetic nanostructures; on the other, these structures often have dimensions on the order of one micrometer, which makes the calculation numerically intensive. In this work, we have studied the electron transport theory in magnetic nanostructures, with special attention to the giant magnetoresistance (GMR) structure. We have developed a model that includes the details of the band structure and disorder, both of which are important in obtaining the conductivity. We have also developed an efficient algorithm to compute the conductivity in magnetic nanostructures. The model and the algorithm are general and can be applied to complicated structures. We have applied the theory to current-perpendicular-to-plane GMR structures and the results agree with experiments. Finally, we have searched for the atomic configuration with the highest GMR using the simulated annealing algorithm. This method is computationally intensive because we have to compute the GMR for 10^3 to 10^4 configurations. However, it is still very efficient because the number of steps it takes to find the maximum is much smaller than the number of all possible GMR structures. We found that ultra-thin NiCu superlattices have surprisingly large GMR even at the moderate disorder in experiments. This finding may be useful in improving the GMR technology.
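
    The configuration search described above follows the standard simulated-annealing recipe: propose a small change to the layer stacking, evaluate the objective, and accept or reject it with a temperature-dependent Metropolis rule. The Python sketch below substitutes a cheap made-up objective for the expensive GMR transport calculation (whose cost is exactly why only 10^3 to 10^4 evaluations are affordable); the layer encoding and cooling schedule are illustrative.

    ```python
    import math
    import random

    def toy_objective(config):
        """Stand-in for the expensive GMR calculation: rewards alternating thin layers."""
        alternations = sum(a != b for a, b in zip(config, config[1:]))
        return alternations - 0.1 * sum(config)

    def simulated_annealing(n_layers=32, steps=5000, t_start=2.0, t_end=0.01, seed=1):
        rng = random.Random(seed)
        # A configuration is a stacking sequence of two materials (0 and 1, say Ni/Cu).
        config = [rng.randint(0, 1) for _ in range(n_layers)]
        best, score = list(config), toy_objective(config)
        best_score = score
        for step in range(steps):
            t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
            trial = list(config)
            trial[rng.randrange(n_layers)] ^= 1                  # flip one layer
            trial_score = toy_objective(trial)
            # Metropolis rule: always accept improvements, sometimes accept worse moves.
            if trial_score >= score or rng.random() < math.exp((trial_score - score) / t):
                config, score = trial, trial_score
                if score > best_score:
                    best, best_score = list(config), score
        return best, best_score

    best, best_score = simulated_annealing()
    print("best objective found:", round(best_score, 2))
    ```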

  12. Computer aided detection of brain micro-bleeds in traumatic brain injury

    NASA Astrophysics Data System (ADS)

    van den Heuvel, T. L. A.; Ghafoorian, M.; van der Eerden, A. W.; Goraj, B. M.; Andriessen, T. M. J. C.; ter Haar Romeny, B. M.; Platel, B.

    2015-03-01

    Brain micro-bleeds (BMBs) are used as surrogate markers for detecting diffuse axonal injury in traumatic brain injury (TBI) patients. The location and number of BMBs have been shown to influence the long-term outcome of TBI. To further study the importance of BMBs for prognosis, accurate localization and quantification are required. The task of annotating BMBs is laborious, complex and prone to error, resulting in a high inter- and intra-reader variability. In this paper we propose a computer-aided detection (CAD) system to automatically detect BMBs in MRI scans of moderate to severe neuro-trauma patients. Our method consists of four steps. Step one: preprocessing of the data. Both susceptibility (SWI) and T1 weighted MRI scans are used. The images are co-registered, a brain-mask is generated, the bias field is corrected, and the image intensities are normalized. Step two: initial candidates for BMBs are selected as local minima in the processed SWI scans. Step three: feature extraction. BMBs appear as round or ovoid signal hypo-intensities on SWI. Twelve features are computed to capture these properties of a BMB. Step four: Classification. To identify BMBs from the set of local minima using their features, different classifiers are trained on a database of 33 expert annotated scans and 18 healthy subjects with no BMBs. Our system uses a leave-one-out strategy to analyze its performance. With a sensitivity of 90% and 1.3 false positives per BMB, our CAD system shows superior results compared to state-of-the-art BMB detection algorithms (developed for non-trauma patients).

  13. Impedance computations and beam-based measurements: A problem of discrepancy

    DOE PAGES

    Smaluk, Victor

    2018-04-21

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  14. Impedance computations and beam-based measurements: A problem of discrepancy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smaluk, Victor

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. For this article, three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.

  15. The Montage architecture for grid-enabled science processing of large, distributed datasets

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S .; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui

    2004-01-01

    Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and Space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive, and are stored in distributed archives that are, in most cases, remote from the available computational resources. Therefore, state-of-the-art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and Space science communities.

  16. Receive Mode Analysis and Design of Microstrip Reflectarrays

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam

    2011-01-01

    Traditionally microstrip or printed reflectarrays are designed using the transmit mode technique. In this method, the size of each printed element is chosen so as to provide the required value of the reflection phase such that a collimated beam results along a given direction. The reflection phase of each printed element is approximated using an infinite array model. The infinite array model is an excellent engineering approximation for a large microstrip array since the size or orientation of elements exhibits a slow spatial variation. In this model, the reflection phase from a given printed element is approximated by that of an infinite array of elements of the same size and orientation when illuminated by a local plane wave. Thus the reflection phase is a function of the size (or orientation) of the element, the elevation and azimuth angles of incidence of a local plane wave, and polarization. Typically, one computes the reflection phase of the infinite array as a function of several parameters such as size/orientation, elevation and azimuth angles of incidence, and in some cases for vertical and horizontal polarization. The design requires the selection of the size/orientation of the printed element to realize the required phase by interpolating or curve fitting all the computed data. This is a substantially complicated problem, especially in applications requiring a computationally intensive commercial code to determine the reflection phase. In dual polarization applications requiring rectangular patches, one needs to determine the reflection phase as a function of five parameters (dimensions of the rectangular patch, elevation and azimuth angles of incidence, and polarization). This is an extremely complex problem. The new method employs the reciprocity principle and reaction concept, two well-known concepts in electromagnetics to derive the receive mode analysis and design techniques. In the "receive mode design" technique, the reflection phase is computed for a plane wave incident on the reflectarray from the direction of the beam peak. In antenna applications with a single collimated beam, this method is extremely simple since all printed elements see the same angles of incidence. Thus the number of parameters is reduced by two when compared to the transmit mode design. The reflection phase computation as a function of five parameters in the rectangular patch array discussed previously is reduced to a computational problem with three parameters in the receive mode. Furthermore, if the beam peak is in the broadside direction, the receive mode design is polarization independent and the reflection phase computation is a function of two parameters only. For a square patch array, it is a function of the size, one parameter only, thus making it extremely simple.

  17. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for computation of three-dimensional magnetostatic fields is introduced. In this method, the curl-component of the magnetic field intensity is computed by a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. This method allows one to include iron portions sandwiched in between conductors within partitioned current-carrying subregions. The method is most suited for large-scale global-type 3-D magnetostatic field computations in electrical devices, and in particular rotating electric machinery.

  18. Efficient Data-Worth Analysis Using a Multilevel Monte Carlo Method Applied in Oil Reservoir Simulations

    NASA Astrophysics Data System (ADS)

    Lu, D.; Ricciuto, D. M.; Evans, K. J.

    2017-12-01

    Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive, as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As the data-worth analysis involves a great deal of expectation estimations, the cost savings from MLMC in the assessment can be very substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimation obtained from the standard MC. Compared to the standard MC, however, the MLMC greatly reduces the computational cost of the uncertainty reduction estimation, with up to 600 days of computing time saved when one processor is used.
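
    A minimal two-level version of the MLMC estimator conveys the cost-saving idea: many cheap low-fidelity samples estimate the bulk of the expectation, while a few paired samples estimate only the correction between fidelities. The sketch below uses toy stand-in "models" in place of reservoir simulations; the level definitions and sample counts are purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model_fine(theta):
        """Stand-in for the expensive high-fidelity simulator."""
        return np.sin(theta) + 0.05 * theta**2

    def model_coarse(theta):
        """Cheap low-fidelity approximation of the same quantity of interest."""
        return np.sin(theta)

    def mlmc_two_level(n_coarse=100_000, n_corr=500):
        # Level 0: plain Monte Carlo with the cheap model, using many samples.
        theta0 = rng.normal(size=n_coarse)
        level0 = model_coarse(theta0).mean()
        # Level 1: correction term E[fine - coarse], estimated on the SAME inputs so
        # the difference has small variance and needs only a few expensive runs.
        theta1 = rng.normal(size=n_corr)
        correction = (model_fine(theta1) - model_coarse(theta1)).mean()
        return level0 + correction

    def plain_mc(n=100_000):
        theta = rng.normal(size=n)
        return model_fine(theta).mean()

    print("MLMC estimate    :", mlmc_two_level())
    print("Plain MC estimate:", plain_mc())
    ```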

  19. Advancing Capabilities for Understanding the Earth System Through Intelligent Systems, the NSF Perspective

    NASA Astrophysics Data System (ADS)

    Gil, Y.; Zanzerkia, E. E.; Munoz-Avila, H.

    2015-12-01

    The National Science Foundation (NSF) Directorate for Geosciences (GEO) and Directorate for Computer and Information Science (CISE) acknowledge the significant scientific challenges required to understand the fundamental processes of the Earth system, within the atmospheric and geospace, Earth, ocean and polar sciences, and across those boundaries. A broad view of the opportunities and directions for GEO are described in the report "Dynamic Earth: GEO imperative and Frontiers 2015-2020." Many of the aspects of geosciences research, highlighted both in this document and other community grand challenges, pose novel problems for researchers in intelligent systems. Geosciences research will require solutions for data-intensive science, advanced computational capabilities, and transformative concepts for visualizing, using, analyzing and understanding geo phenomena and data. Opportunities for the scientific community to engage in addressing these challenges are available and being developed through NSF's portfolio of investments and activities. The NSF-wide initiative, Cyberinfrastructure Framework for 21st Century Science and Engineering (CIF21), looks to accelerate research and education through new capabilities in data, computation, software and other aspects of cyberinfrastructure. EarthCube, a joint program between GEO and the Advanced Cyberinfrastructure Division, aims to create a well-connected and facile environment to share data and knowledge in an open, transparent, and inclusive manner, thus accelerating our ability to understand and predict the Earth system. EarthCube's mission opens an opportunity for collaborative research on novel information systems enhancing and supporting geosciences research efforts. NSF encourages true, collaborative partnerships between scientists in computer sciences and the geosciences to meet these challenges.

  20. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. The approach performs the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphic cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
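
    The classic (unmodified) Gerchberg-Saxton iteration underlying MGS alternates between pupil and focal planes, enforcing the known amplitude in each plane via forward and inverse FFTs while carrying the evolving phase estimate; this FFT-heavy inner loop is what maps so well onto GPUs. The NumPy sketch below is the textbook algorithm only, not JPL's MGS or its CUDA implementation, and convergence to the true phase from a single intensity image is not guaranteed.

    ```python
    import numpy as np

    def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200, seed=0):
        """Estimate a pupil-plane phase consistent with a measured focal-plane intensity.

        pupil_amp : known pupil amplitude (aperture mask)
        focal_amp : measured focal-plane amplitude (square root of the intensity image)
        """
        rng = np.random.default_rng(seed)
        field = pupil_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, pupil_amp.shape))
        for _ in range(n_iter):
            focal = np.fft.fft2(field)
            # Impose the measured focal-plane amplitude, keep the current phase.
            focal = focal_amp * np.exp(1j * np.angle(focal))
            pupil = np.fft.ifft2(focal)
            # Impose the known pupil amplitude, keep the updated phase.
            field = pupil_amp * np.exp(1j * np.angle(pupil))
        return np.angle(field)

    # Synthetic test: circular aperture with a smooth true phase aberration.
    n = 128
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    aperture = (x**2 + y**2 <= 0.5).astype(float)
    true_phase = 1.5 * (x**2 - y**2) * aperture
    focal_amp = np.abs(np.fft.fft2(aperture * np.exp(1j * true_phase)))

    recovered = gerchberg_saxton(aperture, focal_amp)
    print("recovered phase array shape:", recovered.shape)
    ```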

  1. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of available computing power and color display hardware that is required to manipulate true color images (24-bit). Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it doesn't closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image. Their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any gray scale segmentation algorithm, since these spaces are linear. The modulo 2π nature of the hue color component makes its segmentation difficult. For example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component because of the importance that the chromatic information plays in the segmentation of color images. This paper presents a method of using the gray scale K-means algorithm to segment 24-bit color images. Additionally, this paper shows the importance the hue component plays in the segmentation of color images.
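
    The wrap-around problem described above can be handled by embedding each hue angle as the point (cos h, sin h) on the unit circle before clustering, so that hues of 0 and 2π coincide. The sketch below applies a plain NumPy K-means to a synthetic hue image using that embedding; it is a generic illustration rather than the authors' full HSI pipeline.

    ```python
    import numpy as np

    def kmeans(points, k, n_iter=50, seed=0):
        """Plain Lloyd's algorithm on an (n_samples, n_features) array."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(n_iter):
            d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = points[labels == j].mean(axis=0)
        return labels, centers

    # Synthetic hue image: one tint straddling the 0 / 2*pi wrap-around, one near pi.
    rng = np.random.default_rng(1)
    reddish = np.where(rng.random((64, 64)) < 0.5,
                       rng.normal(0.1, 0.05, (64, 64)) % (2 * np.pi),
                       rng.normal(2 * np.pi - 0.1, 0.05, (64, 64)) % (2 * np.pi))
    cyanish = rng.normal(np.pi, 0.05, (64, 64))
    image_hue = np.concatenate([reddish, cyanish], axis=1)

    # Embed hue on the unit circle so 0 and 2*pi map to the same feature point.
    features = np.stack([np.cos(image_hue), np.sin(image_hue)], axis=-1).reshape(-1, 2)
    labels, _ = kmeans(features, k=2)
    print("cluster sizes:", np.bincount(labels))   # roughly 4096 pixels per tint
    ```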

  2. A parallel implementation of the network identification by multiple regression (NIR) algorithm to reverse-engineer regulatory gene networks.

    PubMed

    Gregoretti, Francesco; Belcastro, Vincenzo; di Bernardo, Diego; Oliva, Gennaro

    2010-04-21

    The reverse engineering of gene regulatory networks using gene expression profile data has become crucial to gain novel biological knowledge. Large amounts of data that need to be analyzed are currently being produced due to advances in microarray technologies. Using current reverse engineering algorithms to analyze large data sets can be very computationally intensive. These emerging computational requirements can be met using parallel computing techniques. It has been shown that the Network Identification by multiple Regression (NIR) algorithm performs better than other ready-to-use reverse engineering software. However, it cannot be used with large networks with thousands of nodes, as is the case in biological networks, due to its high time and space complexity. In this work we overcome this limitation by designing and developing a parallel version of the NIR algorithm. The new implementation of the algorithm reaches a very good accuracy even for large gene networks, improving our understanding of the gene regulatory networks that is crucial for a wide range of biomedical applications.

  3. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed that leverage cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  4. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real time or near real time performance if applied to critical clinical applications like image assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, optimizing memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speedup over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014

  5. Combining Acceleration and Displacement Dependent Modal Frequency Responses Using an MSC/NASTRAN DMAP Alter

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.

    1996-01-01

    Solving for the dynamic responses of free-free launch vehicle/spacecraft systems acted upon by buffeting winds is commonly performed throughout the aerospace industry. Due to the unpredictable nature of this wind loading event, these problems are typically solved using frequency response random analysis techniques. To generate dynamic responses for spacecraft with statically indeterminate interfaces, spacecraft contractors prefer to develop models with response transformation matrices for mode acceleration data recovery. This method transforms spacecraft boundary accelerations and displacements into internal responses. Unfortunately, standard MSC/NASTRAN modal frequency response solution sequences cannot combine the acceleration- and displacement-dependent responses required for spacecraft mode acceleration data recovery. External user-written computer codes can be used with MSC/NASTRAN output to perform such combinations, but these methods can be labor- and computer-resource-intensive. Taking advantage of the analytical and computer resource efficiencies inherent in MSC/NASTRAN, a DMAP Alter has been developed to combine acceleration- and displacement-dependent modal frequency responses for spacecraft mode acceleration data recovery. The Alter has been used successfully and efficiently to solve a common aerospace buffeting wind analysis.
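
    Schematically, the data recovery that the Alter performs combines an acceleration-dependent and a displacement-dependent term through two transformation matrices supplied with the spacecraft model. The symbol names below (ATM, DTM) are assumed for illustration and are not taken from the paper.

```latex
% Schematic mode-acceleration data recovery (symbol names assumed):
% internal responses R(omega) are recovered from boundary accelerations and
% displacements through the spacecraft's response transformation matrices.
\{ R(\omega) \} \;=\; [\mathrm{ATM}]\,\{ \ddot{u}_b(\omega) \} \;+\; [\mathrm{DTM}]\,\{ u_b(\omega) \}
```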

  6. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rates. Vector quantization is known to be an effective compression scheme when a low bit rate is desired, but the intensive computation required by a vector quantization encoder has been a handicap to using it for low-rate image coding. The present work shows that, without substantially increasing coder complexity, it is possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis and are shown to offer better subjective quality than the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics into the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
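
    The computational bottleneck the thesis addresses is the encoder's nearest-codeword search: every image block must be compared against the whole codebook. The sketch below shows only that baseline step with a squared-error distortion; the random codebook, block size, and code rate are assumptions, and the thesis's distributed blocks, weighted distortion measure, and neural-network designs are not reproduced.

```python
# Illustrative sketch of baseline vector-quantization encoding: each image block is
# replaced by the index of its nearest codeword, so only indices need be transmitted.
import numpy as np

def encode(blocks, codebook):
    """Return the index of the nearest codeword (squared-error distortion) per block."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)   # (n_blocks, n_codewords)
    return d.argmin(axis=1)

def decode(indices, codebook):
    """Reconstruct blocks by codebook lookup."""
    return codebook[indices]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = rng.integers(0, 256, size=(1024, 16)).astype(float)   # 4x4 blocks, flattened
    codebook = rng.integers(0, 256, size=(64, 16)).astype(float)   # 64 codewords -> 6 bits/block
    recon = decode(encode(blocks, codebook), codebook)
    print("rate =", np.log2(len(codebook)) / 16, "bits/pixel")
```

    The full search costs one distortion evaluation per block per codeword, which is exactly the term the thesis's simplifications aim to keep small at low bit rates.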

  7. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to meet the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of existing HEP detector simulation software and the ideal achievable performance by exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading), and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
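
    An automated consistency check of this kind typically compares distributions of simulated quantities from the new and the reference model with a statistical test. The sketch below uses a two-sample Kolmogorov-Smirnov test on a made-up observable; the choice of test, the sampled quantity, and the threshold are assumptions, and this is not the GeantV/Geant4 test suite.

```python
# Illustrative sketch of an automated consistency check between two simulation outputs.
import numpy as np
from scipy.stats import ks_2samp

def consistent(sample_ref, sample_new, alpha=0.01):
    """Flag the new model as inconsistent if the KS p-value falls below alpha."""
    statistic, p_value = ks_2samp(sample_ref, sample_new)
    return p_value >= alpha, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.exponential(scale=1.0, size=100_000)   # e.g. secondary-particle energies
    vectorized = rng.exponential(scale=1.0, size=100_000)
    ok, p = consistent(reference, vectorized)
    print(f"consistent={ok}, p-value={p:.3f}")
```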

  8. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A 27-fold speed-up was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to become more pronounced in the future.
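
    What makes photon transport a natural fit for GPUs is that each photon history is independent, so millions can be simulated concurrently. The sketch below shows only the simplest per-photon step, sampling an exponential free path through a homogeneous slab; the attenuation coefficient, the geometry, and the omission of scattering are simplifications, and this is not the PENELOPE/CUDA code described in the paper.

```python
# Illustrative sketch of the embarrassingly parallel core of photon Monte Carlo:
# sample a free path per photon and count uncollided transmission through a slab.
import numpy as np

def transmitted_fraction(n_photons, mu=0.02, slab_thickness=100.0, seed=0):
    """Fraction of photons crossing a homogeneous slab without interacting (mu in 1/mm)."""
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(scale=1.0 / mu, size=n_photons)   # mean free path = 1/mu
    return np.mean(free_paths > slab_thickness)

if __name__ == "__main__":
    estimate = transmitted_fraction(1_000_000)
    analytic = np.exp(-0.02 * 100.0)                               # Beer-Lambert check
    print(f"Monte Carlo: {estimate:.4f}  analytic: {analytic:.4f}")
```

    On a GPU, each photon history would map to a thread; the independence of histories is what yields the large speed-up factors reported.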

  9. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently, the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each individual identification, creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold-standard approach for confirming the accuracy of database identifications, but it is extremely time-intensive. To reduce the time required to manually validate large proteomic datasets, we provide computer-aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Is Model-Based Development a Favorable Approach for Complex and Safety-Critical Computer Systems on Commercial Aircraft?

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.

  11. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the task is challenging for geoscientists because processing the massive volumes of data is both computing- and data-intensive, and the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics that leverages cloud computing, MapReduce, and Service-Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers, a MapReduce-based algorithm framework is developed to support parallel processing of geoscience data, and a service-oriented workflow architecture is built to support on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this framework significantly improves the efficiency of big geoscience data analytics by reducing data processing time and simplifying the analytical procedures for geoscientists.

  12. Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William

    1986-01-01

    The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.

  13. High-resolution stochastic generation of extreme rainfall intensity for urban drainage modelling applications

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2016-04-01

    Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at high spatial and temporal resolution is a fundamental step in fully assessing urban drainage system reliability and the related uncertainties. This is even more relevant when considering extreme rainfall events. However, current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, a stochastic rainfall generator that simulates high-resolution rainfall fields while preserving the spatio-temporal structure of rainfall and its statistical characteristics. The model generates rain fields at 10^2 m and one-minute scales in a fast and computationally efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model has previously been applied successfully to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in the flow response; and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.

  14. Neural Network and Regression Approximations in High Speed Civil Transport Aircraft Design Optimization

    NASA Technical Reports Server (NTRS)

    Patniak, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    1998-01-01

    Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are needed frequently, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers, derived from the original analyzer with neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solving the problem, previously measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers: the CPU time required to generate the input-output pairs and to train them was seven times that required to solve the problem itself.
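
    The surrogate-modeling workflow described here can be sketched in a few lines: sample the expensive analysis, fit a cheap regression model to the samples, then run the optimizer on the surrogate. The quadratic merit function, feature set, and optimizer below are assumptions for illustration; this is not the FLOPS/CometBoards workflow used in the paper.

```python
# Illustrative sketch of replacing an expensive analysis with a regression surrogate.
import numpy as np
from scipy.optimize import minimize

def expensive_analysis(x):
    """Stand-in for a costly disciplinary analysis (e.g. a full performance run)."""
    return (x[0] - 1.2) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.1 * x[0] * x[1]

def features(x):
    """Quadratic feature expansion used by the regression surrogate."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.uniform(-3, 3, size=(200, 2))                    # training designs
    targets = np.array([expensive_analysis(x) for x in samples])   # expensive evaluations
    coeffs, *_ = np.linalg.lstsq(np.array([features(x) for x in samples]),
                                 targets, rcond=None)

    surrogate = lambda x: features(x) @ coeffs                     # cheap approximation
    result = minimize(surrogate, x0=np.zeros(2))                   # optimize the surrogate
    print("surrogate optimum:", result.x)                          # near (1.2, -0.5)
```

    The trade-off the paper reports follows directly from this structure: optimization over the surrogate is almost free, but generating the training samples and fitting the model carries its own up-front cost.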

  15. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas

    The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive for fulfilling this requirement. However, the Cell's characteristics, 256 KB of local memory per SPE and a new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity, i.e., ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDized four independent columns and included several unrolling optimizations. Our results show that, compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (approximately 25 percent of the total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

  16. Conceptual Design Oriented Wing Structural Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Lau, May Yuen

    1996-01-01

    Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goal could be a tradeoff among maximum structural integrity, minimum aerodynamic drag, and maximum stability and control, often pursued separately. Bringing all of these factors into an iterative preliminary design procedure was time consuming, tedious, and not always accurate. For example, the final weight estimate would often be based on statistical data from past airplanes: the new design would be classified by gross characteristics, such as number of engines or wingspan, to see which past airplanes most closely resembled it. This procedure works well for conventional airplane designs, but not very well for new, innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary analyses of an airplane design. These tools are computationally intensive, and when used repeatedly they consume a great deal of computing time. In order to reduce the time required to analyze a design while still bringing all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite-element-based conceptual design oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.

  17. Validation of coupled atmosphere-fire behavior models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossert, J.E.; Reisner, J.M.; Linn, R.R.

    1998-12-31

    Recent advances in numerical modeling and computer power have made it feasible to simulate the dynamical interaction and feedback between the heat and turbulence induced by wildfires and the local atmospheric wind and temperature fields. At Los Alamos National Laboratory, the authors have developed a modeling system that includes this interaction by coupling a high-resolution atmospheric dynamics model, HIGRAD, with a fire behavior model, BEHAVE, to predict the spread of wildfires. The HIGRAD/BEHAVE model is run at very high resolution to properly resolve the fire/atmosphere interaction. At present, these coupled wildfire model simulations are computationally intensive, and the additional complexity of these models requires sophisticated methods for assuring their reliability in real-world applications. With this in mind, a substantial part of the research effort is directed at model validation. Several instrumented prescribed fires have been conducted, with multi-agency support and participation, in chaparral, marsh, and scrub environments in coastal areas of Florida and inland California. In this paper, the authors first describe the data required to initialize the components of the wildfire modeling system. They then present results from one of the Florida fires and discuss a strategy for further testing and improvement of coupled weather/wildfire models.

  18. Real-time interactive virtual tour on the World Wide Web (WWW)

    NASA Astrophysics Data System (ADS)

    Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi

    2003-12-01

    Web-based virtual tours have become a desirable and in-demand application, yet they are challenging to deliver because of the nature of the web environment: limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over traditional 3D rendering in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process as well as high bandwidth and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos; virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to capture panoramic views, and they provide only fixed-point look-around and zooming rather than 'walk-around', which is a very important feature for giving virtual tourists an immersive experience. The web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach, providing viewers with the immersive experience of walking around the virtual space from several snapshots taken with conventional photos.

  19. A Distance Measure for Genome Phylogenetic Analysis

    NASA Astrophysics Data System (ADS)

    Cao, Minh Duc; Allison, Lloyd; Dix, Trevor

    Phylogenetic analyses of species based on single genes or parts of genomes are often inconsistent because of factors such as variable rates of evolution and horizontal gene transfer. The availability of more and more sequenced genomes allows phylogeny construction from complete genomes, which is less sensitive to such inconsistency. For such long sequences, construction methods like maximum parsimony and maximum likelihood are often infeasible due to their intensive computational requirements. Another class of tree construction methods, distance-based methods, requires a measure of the distance between any two genomes. Some measures, such as the evolutionary edit distance of gene order and gene content, are computationally expensive or do not perform well when the gene content of the organisms is similar. This study presents an information-theoretic measure of genetic distance between genomes based on the expert model biological compression algorithm. We demonstrate that our distance measure can be applied to reconstruct the consensus phylogenetic tree of a number of Plasmodium parasites from their genomes, whose statistical bias would mislead conventional analysis methods. Our approach is also used to successfully construct a plausible evolutionary tree for the γ-Proteobacteria group, whose genomes are known to contain many horizontally transferred genes.
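
    The general idea behind compression-based genome distances is that two related sequences compress better together than apart. The sketch below uses the normalized compression distance with zlib as a stand-in compressor; the expert model (XM) compressor used in the paper is not reproduced, so the numbers are only indicative.

```python
# Illustrative sketch of a compression-based distance between two sequences.
import random
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when x helps compress y and vice versa."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    random.seed(0)
    bases = "ACGT"
    genome_a = "".join(random.choice(bases) for _ in range(10_000))
    # a close relative: copy genome_a and mutate about 2% of its positions
    genome_b = "".join(ch if random.random() > 0.02 else random.choice(bases)
                       for ch in genome_a)
    genome_c = "".join(random.choice(bases) for _ in range(10_000))   # unrelated
    print("d(a, b) =", round(ncd(genome_a.encode(), genome_b.encode()), 3))
    print("d(a, c) =", round(ncd(genome_a.encode(), genome_c.encode()), 3))
```

    Because the distance needs only compressed sizes, it avoids alignment and gene-content comparisons, which is what makes it practical for whole-genome phylogeny.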

  20. An Example of Economic Value in Rapid Prototyping

    NASA Technical Reports Server (NTRS)

    Hauer, R. L.; Braunscheidel, E. P.

    2001-01-01

    Today's machining projects increasingly involve complicated and intricate structures, in part because complex surfaces and forms can now be modeled on a computer. The cost of producing these forms can be extremely high, not only in dollars but also in time to complete, and changes are even more difficult to incorporate. The subject blade shown is an excellent example: its complex form would have required hundreds of hours of fabrication for just a simple prototype. The procurement alone would have taken roughly six weeks to complete, and the actual fabrication an equal amount of time. An alternative would have been a wood model; although cheaper than a metal fabrication, it would still be extremely time-intensive, requiring roughly a month to produce in-house.
