Science.gov

Sample records for accelerated processing map

  1. 77 FR 21991 - Federal Housing Administration (FHA): Multifamily Accelerated Processing (MAP)-Lender and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HOUSING AND URBAN DEVELOPMENT Federal Housing Administration (FHA): Multifamily Accelerated Processing (MAP)--Lender and Underwriter Eligibility Criteria and Credit Watch for MAP Lenders AGENCY: Office of the...

  2. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU, with up to a 22x speedup depending on the computational task. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
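
    As a rough illustration of the GPU/CPU co-processing pattern described above, the sketch below times the same array workload on both processors and checks numerical agreement. The paper uses MATLAB's GPU-enabled functions; Python with CuPy is an assumed substitute here, and only the pattern (offload, synchronize, compare against the CPU result) mirrors the abstract.

    ```python
    import time

    import numpy as np
    import cupy as cp  # requires an NVIDIA GPU with CUDA

    x_cpu = np.random.standard_normal((4096, 4096)).astype(np.float32)
    x_gpu = cp.asarray(x_cpu)  # host-to-device transfer

    def fft_roundtrip(xp, x):
        """Time a filtering-style workload (2-D FFT round trip)."""
        t0 = time.perf_counter()
        y = xp.fft.ifft2(xp.fft.fft2(x)).real
        if xp is cp:
            cp.cuda.Stream.null.synchronize()  # wait for GPU kernels to finish
        return time.perf_counter() - t0, y

    cpu_t, y_cpu = fft_roundtrip(np, x_cpu)
    gpu_t, y_gpu = fft_roundtrip(cp, x_gpu)
    print(f"CPU {cpu_t:.3f} s, GPU {gpu_t:.3f} s, speedup ~{cpu_t / gpu_t:.1f}x")
    # Numerical-accuracy check, analogous to the paper's GPU-vs-CPU comparison:
    print("max |CPU - GPU| =", float(np.abs(y_cpu - cp.asnumpy(y_gpu)).max()))
    ```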

  3. Asymmetric neighborhood functions accelerate ordering process of self-organizing maps

    SciTech Connect

    Ota, Kaiichiro; Aoki, Takaaki; Kurata, Koji; Aoyagi, Toshio

    2011-02-15

    A self-organizing map (SOM) algorithm can generate a topographic map from a high-dimensional stimulus space to a low-dimensional array of units. Because a topographic map preserves neighborhood relationships between the stimuli, the SOM can be applied to certain types of information processing such as data visualization. During the learning process, however, topological defects frequently emerge in the map. The presence of defects tends to drastically slow down the formation of a globally ordered topographic map. To remove such topological defects, it has been reported that an asymmetric neighborhood function is effective, but only in the simple case of mapping one-dimensional stimuli to a chain of units. In this paper, we demonstrate that the asymmetric neighborhood function is effective even when high-dimensional stimuli are used, for both artificial and real-world data. Our results suggest that applying the asymmetric neighborhood function to the SOM algorithm improves the reliability of the algorithm and enables it to process complicated, high-dimensional data.
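
    The minimal sketch below shows what an "asymmetric neighborhood function" means operationally for a 1-D chain of units: the Gaussian neighborhood is centered a fixed shift beta away from the winner, and beta = 0 recovers the standard symmetric SOM. The shift parameter, stimulus distribution, and schedules are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_units, n_steps, sigma, eta, beta = 50, 20000, 3.0, 0.1, 1.0
    w = rng.random((n_units, 2))             # weight vectors for 2-D stimuli
    pos = np.arange(n_units, dtype=float)    # unit positions along the chain

    for t in range(n_steps):
        x = rng.random(2)                                # uniform 2-D stimulus
        win = np.argmin(((w - x) ** 2).sum(axis=1))      # best-matching unit
        # Asymmetric neighborhood: Gaussian centered beta units away from
        # the winner; beta = 0 gives the usual symmetric neighborhood.
        center = pos[win] + beta
        h = np.exp(-((pos - center) ** 2) / (2 * sigma ** 2))
        w += eta * h[:, None] * (x - w)

    # After training, topological defects show up as kinks/folds: places
    # where neighboring units in the chain have distant weight vectors.
    ```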

  4. A hybrid short read mapping accelerator

    PubMed Central

    2013-01-01

    Background: The rapid growth of short read datasets poses a new challenge to the short read mapping problem in terms of sensitivity and execution speed. Existing methods often use a restrictive error model for computing the alignments to improve speed, whereas more flexible error models are generally too slow for large-scale applications. A number of short read mapping software tools have been proposed. However, designs based on hardware are relatively rare. Field programmable gate arrays (FPGAs) have been successfully used in a number of specific application areas, such as the DSP and communications domains, due to their outstanding parallel data processing capabilities, making them a competitive platform to solve problems that are "inherently parallel". Results: We present a hybrid system for short read mapping utilizing both FPGA-based hardware and CPU-based software. The computation intensive alignment and the seed generation operations are mapped onto an FPGA. We present a computationally efficient, parallel block-wise alignment structure (Align Core) to approximate the conventional dynamic programming algorithm. The performance is compared to the multi-threaded CPU-based GASSST and BWA software implementations. For single-end alignment, our hybrid system achieves faster processing speed than GASSST (with a similar sensitivity) and BWA (with a higher sensitivity); for pair-end alignment, our design achieves a slightly worse sensitivity than that of BWA but has a higher processing speed. Conclusions: This paper shows that our hybrid system can effectively accelerate the mapping of short reads to a reference genome based on the seed-and-extend approach. The performance comparison to the GASSST and BWA software implementations under different conditions shows that our hybrid design achieves a high degree of sensitivity and requires less overall execution time with only modest FPGA resource utilization. Our hybrid system design also shows that the performance
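
    The hardware Align Core itself is not reproduced here, but the Python sketch below shows the banded dynamic-programming extension step that such seed-and-extend mappers accelerate: alignment scores are only computed within a diagonal band around the seed hit. The scoring values and band width are illustrative assumptions.

    ```python
    def banded_extend(read, ref, band=4, match=1, mismatch=-1, gap=-2):
        """Score a banded global alignment of `read` against `ref`."""
        n, m = len(read), len(ref)
        NEG = float("-inf")
        # D[i][j] = best score aligning read[:i] with ref[:j], |i - j| <= band
        D = [[NEG] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for j in range(1, min(m, band) + 1):
            D[0][j] = j * gap
        for i in range(1, n + 1):
            if i <= band:
                D[i][0] = i * gap
            for j in range(max(1, i - band), min(m, i + band) + 1):
                sub = match if read[i - 1] == ref[j - 1] else mismatch
                D[i][j] = max(D[i - 1][j - 1] + sub,   # match/mismatch
                              D[i - 1][j] + gap,       # deletion from read
                              D[i][j - 1] + gap)       # insertion into read
        return D[n][m]

    # One-insertion alignment: 8 matches (+8) and one gap (-2) score 6.
    print(banded_extend("ACGTACGT", "ACGTTACGT"))
    ```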

  5. Interstellar Mapping and Acceleration Probe (IMAP)

    NASA Astrophysics Data System (ADS)

    Schwadron, Nathan

    2016-04-01

    Our piece of cosmic real-estate, the heliosphere, is the domain of all human existence - an astrophysical case-history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and, in concert with Voyager 1 and Voyager 2, is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (~5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX and INCA, will discover the substructure of the IBEX ribbon and will reveal global maps of our heliosphere in unprecedented resolution. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. The "A" in IMAP refers to acceleration of energetic particles. With its combination of highly sensitive pickup and suprathermal ion sensors, IMAP will provide the species and spectral coverage as well as unprecedented temporal resolution to associate emerging suprathermal tails with interplanetary structures and discover underlying physical acceleration processes. These key measurements will provide what has been a critical missing piece of suprathermal seed particles in our understanding of particle acceleration to high

  6. Accelerator simulation of astrophysical processes

    NASA Technical Reports Server (NTRS)

    Tombrello, T. A.

    1983-01-01

    Phenomena that involve accelerated ions in stellar processes that can be simulated with laboratory accelerators are described. Stellar evolutionary phases, such as the CNO cycle, have been partially explored with accelerators, up to the consumption of He by alpha particle radiative capture reactions. Further experimentation is indicated on reactions featuring N-13(p,gamma)O-14, O-15(alpha, gamma)Ne-19, and O-14(alpha,p)F-17. Accelerated beams interacting with thin foils produce reaction products that permit a determination of possible elemental abundances in stellar objects. Additionally, isotopic ratios observed in chondrites can be duplicated with accelerator beam interactions and thus constraints can be set on the conditions producing the meteorites. Data from isotopic fractionation from sputtering, i.e., blasting surface atoms from a material using a low energy ion beam, leads to possible models for processes occurring in supernova explosions. Finally, molecules can be synthesized with accelerators and compared with spectroscopic observations of stellar winds.

  7. Diffusive Shock Acceleration and Reconnection Acceleration Processes

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; Hunana, P.; Mostafavi, P.; Le Roux, J. A.; Li, Gang; Webb, G. M.; Khabarova, O.; Cummings, A.; Stone, E.; Decker, R.

    2015-12-01

    Shock waves, as shown by simulations and observations, can generate high levels of downstream vortical turbulence, including magnetic islands. We consider a combination of diffusive shock acceleration (DSA) and downstream magnetic-island-reconnection-related processes as an energization mechanism for charged particles. Observations of electron and ion distributions downstream of interplanetary shocks and the heliospheric termination shock (HTS) are frequently inconsistent with the predictions of classical DSA. We utilize a recently developed transport theory for charged particles propagating diffusively in a turbulent region filled with contracting and reconnecting plasmoids and small-scale current sheets. Particle energization associated with the anti-reconnection electric field, a consequence of magnetic island merging, and magnetic island contraction, are considered. For the former only, we find that (i) the spectrum is a hard power law in particle speed, and (ii) the downstream solution is constant. For downstream plasmoid contraction only, (i) the accelerated spectrum is a hard power law in particle speed; (ii) the particle intensity for a given energy peaks downstream of the shock, and the distance to the peak location increases with increasing particle energy, and (iii) the particle intensity amplification for a particular particle energy, f(x, c/c_0)/f(0, c/c_0), is not 1, as predicted by DSA, but increases with increasing particle energy. The general solution combines both the reconnection-induced electric field and plasmoid contraction. The observed energetic particle intensity profile observed by Voyager 2 downstream of the HTS appears to support a particle acceleration mechanism that combines both DSA and magnetic-island-reconnection-related processes.

  8. The US Muon Accelerator Program (MAP)

    SciTech Connect

    Bross, Alan D.; /Fermilab

    2010-12-01

    The US Department of Energy Office of High Energy Physics has recently approved a Muon Accelerator Program (MAP). The primary goal of this effort is to deliver a Design Feasibility Study for a Muon Collider after a 7 year R&D program. This paper presents a brief physics motivation for, and the description of, a Muon Collider facility and then gives an overview of the program. I will then describe in some detail the primary components of the effort.

  9. Accelerated stochastic diffusion processes

    NASA Astrophysics Data System (ADS)

    Garbaczewski, Piotr

    1990-07-01

    We give a purely probabilistic demonstration that all effects of non-random (external, conservative) forces on the diffusion process can be encoded in the Nelson ansatz for the second Newton law. Each random path of the process, together with its probabilistic weight, carries a complex-valued phase-accumulation weight. Summation (integration) of these weights over random paths leads to the transition probability density and the transition amplitude, respectively, between two spatial points in a given time interval. The Bohm-Vigier, Fenyes-Nelson-Guerra and Feynman descriptions of the quantum particle behaviours are in fact equivalent.

  10. ESS Accelerator Cryoplant Process Design

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Arnold, P.; Hees, W.; Hildenbeutel, J.; Weisend, J. G., II

    2015-12-01

    The European Spallation Source (ESS) is a neutron-scattering facility being built with extensive international collaboration in Lund, Sweden. The ESS accelerator will deliver protons with 5 MW of power to the target at 2.0 GeV, with a nominal current of 62.5 mA. The superconducting part of the accelerator is about 300 meters long and contains 43 cryomodules. The ESS accelerator cryoplant (ACCP) will provide the cooling for the cryomodules and the cryogenic distribution system that delivers the helium to the cryomodules. The ACCP will cover three cryogenic circuits: bath cooling for the cavities at 2 K, the thermal shields at around 40 K, and the power coupler thermalisation with 4.5 K forced helium cooling. The open competitive bid for the ACCP took place in 2014, with Linde Kryotechnik AG being selected as the vendor. This paper summarizes the progress in the ACCP development and engineering. The current status is presented, including final cooling requirements, preliminary process design, system configuration, machine concept and layout, main parameters and features, the solution for the acceptance tests, and an exergy and efficiency analysis.

  11. cudaMap: a GPU accelerated program for gene expression connectivity mapping

    PubMed Central

    2013-01-01

    Background: Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2 h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. Results: cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Conclusion: Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap
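
    As a toy illustration of the scoring kernel that such GPU ports parallelize across thousands of reference profiles, the sketch below computes a simple signed-rank connection score between a query signature and one reference profile, loosely in the spirit of sscMap's connection strength. The normalization and the permutation-based significance testing of sscMap/cudaMap are more involved; everything named here is a simplified assumption.

    ```python
    import numpy as np

    def connection_score(reference_ranks, signature):
        """reference_ranks: signed ranks of all genes in one drug profile
        (most up-regulated ~ +N/2 ... most down-regulated ~ -N/2).
        signature: {gene_index: +1 or -1} for an up/down query signature."""
        genes = np.fromiter(signature.keys(), dtype=int)
        signs = np.fromiter(signature.values(), dtype=float)
        raw = float(np.sum(reference_ranks[genes] * signs))
        # Normalize by the largest score any |signature|-gene set could get.
        top = np.sort(np.abs(reference_ranks))[::-1][: len(genes)]
        return raw / float(top.sum())  # in [-1, 1]: +1 mimics, -1 reverses

    rng = np.random.default_rng(1)
    n_genes = 1000
    ranks = rng.permutation(np.arange(-(n_genes // 2), n_genes // 2))
    sig = {10: +1, 42: +1, 99: -1}
    print(connection_score(ranks, sig))
    ```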

  12. Maximal acceleration and radiative processes

    NASA Astrophysics Data System (ADS)

    Papini, Giorgio

    2015-08-01

    We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜_m = 2mc³/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜_m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with that for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.

  13. One map policy (OMP) implementation strategy to accelerate mapping of regional spatial planing (RTRW) in Indonesia

    NASA Astrophysics Data System (ADS)

    Hasyim, Fuad; Subagio, Habib; Darmawan, Mulyanto

    2016-06-01

    The preparation of spatial planning documents requires basic geospatial information and accurate thematic data. These issues have recently become important because spatial planning maps are an integral attachment of the draft regional regulation on spatial planning (PERDA). The geospatial information needed for the preparation of spatial planning maps can be divided into two major groups: (i) basic geospatial information (IGD), consisting of Indonesian topographic maps (RBI), coastal and marine environmental maps (LPI), and the geodetic control network, and (ii) thematic geospatial information (IGT). Currently, most local governments in Indonesia have not finished their draft regulations on spatial planning due to several constraints, including technical ones. Constraints in mapping for spatial planning include the availability of large-scale basic geospatial information, the availability of mapping guidelines, and human resources. The ideal conditions to be achieved for spatial planning maps are: (i) the availability of up-to-date geospatial information at the scales needed for spatial planning maps, (ii) mapping guidelines for spatial planning to support local governments in completing their PERDA, and (iii) the capacity of local government human resources to complete spatial planning maps. The OMP strategies formulated to achieve these conditions are: (i) accelerating IGD at scales of 1:50,000, 1:25,000 and 1:5,000, (ii) accelerating the mapping and integration of thematic geospatial information (IGT) through stocktaking of availability and mapping guidelines, (iii) developing mapping guidelines and disseminating spatial utilization, and (iv) training human resources in mapping technology.

  14. Experiment specific processing of residual acceleration data

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. I. D.

    1992-01-01

    To date, most Spacelab residual acceleration data collection projects have resulted in databases that are overwhelming to the investigator of low-gravity experiments. This paper introduces a simple passive accelerometer system to measure low-frequency accelerations. Model responses for experiments using actual acceleration data are produced, and correlations are made between experiment response and the accelerometer time history in order to test the idea that recorded acceleration data and experimental responses can be usefully correlated. Spacelab 3 accelerometer data are used as input to a variety of experiment models, and sensitivity limits are obtained for particular experiment classes. The modeling results are being used to create experiment-specific residual acceleration data processing schemes for interested investigators.

  15. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Bjorkman, Gerry; Kooney, Alex; Russell, Carolyn

    2003-01-01

    The weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule, in conjunction with process control, ensures a consistent and predictable weld performance.

  16. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Kooney, Alex; Bjorkman, Gerry; Russell, Carolyn; Smelser, Jerry (Technical Monitor)

    2002-01-01

    In FSW (friction stir welding), the weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule in conjunction with process control ensures a consistent and predictable weld performance.

  17. Symplectic maps and chromatic optics in particle accelerators

    NASA Astrophysics Data System (ADS)

    Cai, Yunhai

    2015-10-01

    We have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. As a result, the cell becomes nearly perfect up to the third-order of the momentum deviation.
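
    For readers unfamiliar with chromatic optics, here is a textbook-style illustration (not the paper's derivation) of how the momentum deviation delta enters a symplectic element map and why sextupoles at dispersion permit local chromatic compensation:

    ```latex
    % Textbook illustration (not the paper's formalism): a thin quadrupole of
    % integrated strength k_1 L kicks the slopes chromatically,
    \[
      x' \;\mapsto\; x' - \frac{k_1 L}{1+\delta}\,x, \qquad
      y' \;\mapsto\; y' + \frac{k_1 L}{1+\delta}\,y,
    \]
    % so focusing weakens for momentum deviation \delta > 0. A thin sextupole
    % (strength k_2 L) placed at dispersion D_x, where x = x_\beta + D_x\delta,
    % adds the kick
    \[
      x' \;\mapsto\; x' - \tfrac{1}{2} k_2 L \left( x^2 - y^2 \right)
      \;=\; x' - \tfrac{1}{2} k_2 L \left( x_\beta^2 - y_\beta^2 \right)
            - k_2 L\, D_x \delta\, x_\beta
            - \tfrac{1}{2} k_2 L\, D_x^2 \delta^2 ,
    \]
    % whose \delta-dependent quadrupole-like term, -k_2 L D_x \delta\, x_\beta,
    % can be set against the quadrupole's chromatic defocusing; this is the
    % local chromatic compensation that the paper extends to higher orders.
    ```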

  18. Symplectic maps and chromatic optics in particle accelerators

    DOE PAGES

    Cai, Yunhai

    2015-07-06

    Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to the third-order of the momentum deviation.

  19. Symplectic maps and chromatic optics in particle accelerators

    SciTech Connect

    Cai, Yunhai

    2015-07-06

    Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to the third-order of the momentum deviation.

  20. Process mapping in screening mammography.

    PubMed

    Whitman, G J; Venable, S L; Downs, R L; Garza, D; Levy, S; Ophir, K J; Spears, K F; Sprinkle-Vincent, S K; Stelling, C B

    1999-05-01

    Successful screening mammography programs aim to screen large numbers of women efficiently and inexpensively. Development of an effective screening mammography program requires skilled personnel, solid infrastructure, and a robust computer system. A group of physicians, technologists, computer support personnel, and administrators carefully analyzed a growing screening mammography program as a series of steps, starting with the request for the examination and ending with the receipt of a hard-copy consultation. The analysis involved a detailed examination of every step and every possible outcome in the screening process. The information gained through process mapping may be used for identification of systemic and personnel problems, allocation of resources, modification of workplace architecture, and design of computer networks. Process mapping is helpful for those involved in designing and improving screening mammography programs. Viewing a process (i.e., obtaining a screening mammogram) as a series of steps may allow for the identification of inefficient components that may limit growth. PMID:10342216

  1. Process in high energy heavy ion acceleration

    NASA Astrophysics Data System (ADS)

    Dinev, D.

    2009-03-01

    A review is presented of processes that occur during high energy heavy ion acceleration in synchrotrons and colliders and that are essential for accelerator performance. Interactions of ions with residual gas molecules/atoms and with stripping foils that deliberately intercept the ion trajectories are described in detail. These interactions limit both the beam intensity and the beam quality. The processes of electron loss and capture lie at the root of heavy ion charge exchange injection. The review pays special attention to the ion-induced vacuum pressure instability, which is one of the main factors limiting the beam intensity. The intrabeam scattering phenomenon, which restricts the average luminosity of ion colliders, is discussed. Some processes in nuclear interactions of ultra-relativistic heavy ions that could be dangerous for the performance of ion colliders are presented in the last section.

  2. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

    This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is directly generated according to the intensity image. After that, a sequence of gamma map processing operations is performed to generate a channel-wise gamma map. By mapping through the estimated gamma, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the image can be virtually expanded.
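
    A minimal sketch of the per-pixel gamma-mapping idea follows. The rule used to build the base gamma map here (a linear function of intensity) is an assumption for illustration; the paper's channel-wise map generation is more elaborate.

    ```python
    import numpy as np

    def enhance(img):
        """img: float RGB in [0, 1], shape (H, W, 3)."""
        intensity = img.mean(axis=2, keepdims=True)
        # Base gamma map: dark regions get gamma < 1 (brightened),
        # bright regions gamma > 1 (compressed).
        gamma = 0.5 + 1.5 * intensity             # per-pixel, in [0.5, 2.0]
        return np.clip(img, 1e-6, 1.0) ** gamma   # broadcast per channel

    img = np.random.rand(4, 4, 3)
    out = enhance(img)
    ```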

  3. Mapping of acceleration field in FSA configuration of a LIS

    NASA Astrophysics Data System (ADS)

    Nassisi, V.; Delle Side, D.; Monteduro, L.; Giuffreda, E.

    2016-05-01

    The Front Surface Acceleration (FSA) obtained in Laser Ion Source (LIS) systems is one of the most interesting methods to produce accelerated protons and ions. We implemented a LIS to study the ion acceleration mechanisms. In this device, the plasma is generated by a KrF excimer laser operating at 248 nm, focused on an aluminum target mounted inside a vacuum chamber. The laser energy was varied from 28 to 56 mJ/pulse and focused onto the target by a 15 cm focal lens, forming a spot of 0.05 cm in diameter. A high impedance resistive probe was used to map the electric potential inside the chamber, near the target. To avoid the effect of plasma particles striking the probe, a PVC shield was constructed. Particles inevitably struck the shield, but their influence on the probe was negligible. We detected the time-resolved profiles of the electric potential, moving the probe from 4.7 cm to 6.2 cm with respect to the main target axis, while the height of the shield above the surface normal at the target symmetry center was about 3 cm. The corresponding electric field can be very important for elucidating the phenomenon responsible for the formation of the accelerating field. The field depends on the distance x as 1/x^1.85 at 28 mJ laser energy, 1/x^1.77 at 49 mJ, and 1/x^1.74 at 56 mJ. The dependence of the field changes only slightly across the three cases; the power-law exponent decreases with increasing laser energy. It is possible to hypothesize that the electric field strength stems from the contributions of an electrostatic and an induced field. Considering exclusively the induced field at the center of the created plasma, a strength of some tens of kV/m could be reached, which could deliver ions of up to 1 keV in energy. These values were supported by measurements performed with an electrostatic barrier.
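
    The quoted 1/x^p dependences are power-law fits to the probe data. A sketch of how such an exponent can be extracted is below; the sample values are synthetic, chosen only to demonstrate the fit.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(x, A, p):
        return A / x ** p

    rng = np.random.default_rng(0)
    x = np.linspace(4.7, 6.2, 7)       # probe positions (cm), as in the paper
    E = power_law(x, 12.0, 1.85) * (1 + 0.02 * rng.standard_normal(x.size))
    (A, p), _ = curve_fit(power_law, x, E, p0=(10.0, 2.0))
    print(f"fitted exponent p = {p:.2f}")   # ~1.85 for this synthetic set
    ```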

  4. Accelerating an iterative process by explicit annihilation

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.; Buning, P. G.

    1985-01-01

    A slowly convergent stationary iterative process can be accelerated by explicitly annihilating (i.e., eliminating) the dominant eigenvector component of the error. The dominant eigenvalue or complex pair of eigenvalues can be estimated from the solution during the iteration. The corresponding eigenvector or complex pair of eigenvectors can then be annihilated by applying an explicit Richardson process over the basic iterative method. This can be done entirely in real arithmetic by analytically combining the complex conjugate annihilation steps. The technique is applied to an implicit algorithm for the calculation of two dimensional steady transonic flow over a circular cylinder using the equations of compressible inviscid gas dynamics. This demonstrates the use of explicit annihilation on a nonlinear problem.
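
    A minimal numpy sketch of the annihilation step for a real dominant eigenvalue follows (the paper also handles complex-conjugate pairs in real arithmetic, which is not shown here): estimate the dominant eigenvalue lambda from successive increments, then apply one Richardson-style extrapolation to remove that error mode.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2                                   # real spectrum
    G = 0.95 * A / np.abs(np.linalg.eigvalsh(A)).max()  # spectral radius 0.95
    b = rng.standard_normal(n)
    x_true = np.linalg.solve(np.eye(n) - G, b)          # fixed point of x = Gx + b

    x = np.zeros(n)
    d_prev = None
    for k in range(1, 121):
        x_new = G @ x + b
        d = x_new - x                  # increment; d_k ~ lambda * d_{k-1}
        if d_prev is not None and k % 15 == 0:
            lam = (d @ d_prev) / (d_prev @ d_prev)   # dominant-eigenvalue estimate
            x_new = x_new + lam / (1.0 - lam) * d    # annihilate that error mode
            d = None                                  # restart the estimation
        x, d_prev = x_new, d
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```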

  5. Accelerating an iterative process by explicit annihilation

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.; Buning, P. G.

    1983-01-01

    A slowly convergent stationary iterative process can be accelerated by explicitly annihilating (i.e., eliminating) the dominant eigenvector component of the error. The dominant eigenvalue or complex pair of eigenvalues can be estimated from the solution during the iteration. The corresponding eigenvector or complex pair of eigenvectors can then be annihilated by applying an explicit Richardson process over the basic iterative method. This can be done entirely in real arithmetic by analytically combining the complex conjugate annihilation steps. The technique is applied to an implicit algorithm for the calculation of two dimensional steady transonic flow over a circular cylinder using the equations of compressible inviscid gas dynamics. This demonstrates the use of explicit annihilation on a nonlinear problem.

  6. Process Mapping: Tools, Techniques, & Critical Success Factors.

    ERIC Educational Resources Information Center

    Kalman, Howard K.

    2002-01-01

    Explains process mapping as an analytical tool and a process intervention that performance technologists can use to improve human performance by reducing error variance. Highlights include benefits of process mapping; and critical success factors, including organizational readiness, time commitment by participants, and the availability of a…

  7. Details and justifications for the MAP concept specification for acceleration above 63 GeV

    SciTech Connect

    Berg, J. Scott

    2014-02-28

    The Muon Accelerator Program (MAP) requires a concept specification for each of the accelerator systems. The Muon accelerators will bring the beam energy from a total energy of 63 GeV to the maximum energy that will fit on the Fermilab site. Justifications and supporting references are included, providing more detail than will appear in the concept specification itself.

  8. Interstellar Mapping and Acceleration Probe (IMAP) - Its Time Has Come!

    NASA Astrophysics Data System (ADS)

    Schwadron, N.; Kasper, J. C.; Mewaldt, R. A.; Moebius, E.; Opher, M.; Spence, H. E.; Zurbuchen, T.

    2014-12-01

    Our piece of cosmic real-estate, the heliosphere, is the domain of all human existence -- an astrophysical case-history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and in concert with Voyager 1 and Voyager 2 is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. The enigmatic IBEX ribbon is an unanticipated discovery demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP with 100 times the combined resolution and sensitivity of IBEX will discover the substructure of the IBEX ribbon and will reveal in unprecedented resolution global maps of our heliosphere. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. Voyager 2 moves outward in the vicinity of the IBEX ribbon and its plasma measurements will create singular opportunities for discovery in the context of IMAP's global measurements. IMAP, like ACE before it, will be a keystone of the Heliophysics System Observatory by providing comprehensive cosmic ray, energetic particle, pickup ion, suprathermal ion, neutral atom, solar wind, solar wind heavy ion, and magnetic field observations to diagnose the changing space environment and understand the fundamental origins of particle acceleration. Thus, IMAP is a mission whose time has come. IMAP is the highest ranked next Solar Terrestrial Probe in the Decadal

  9. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  10. Speech processing using maximum likelihood continuity mapping

    SciTech Connect

    Hogden, J.E.

    2000-04-18

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  11. Accelerating sparse linear algebra using graphics processing units

    NASA Astrophysics Data System (ADS)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput, at a cost similar to a high-end CPU and with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is on a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
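
    The CPU-only sketch below illustrates the division of labor the abstract describes on a small model problem; the SciPy calls are real, while the GPU offload points are indicated only in comments (an assumption about where the split would fall, e.g. via CUDA libraries).

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 2000
    # 1-D Poisson matrix: tridiagonal, symmetric positive definite.
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Direct solver: the symbolic (graph-theory) analysis is natural CPU work,
    # while the numeric factorization and triangular solves are GPU candidates.
    x_direct = spla.spsolve(A, b)

    # Iterative solver: each CG iteration is dominated by the sparse mat-vec,
    # which maps very well onto the GPU's fine-grained parallelism.
    x_cg, info = spla.cg(A, b)
    print(info, np.linalg.norm(x_direct - x_cg))
    ```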

  12. Topographic mapping for stereo and motion processing

    NASA Astrophysics Data System (ADS)

    Mallot, Hanspeter A.; Zielke, Thomas; Storjohann, Kai; von Seelen, Werner

    1991-02-01

    Topographic mappings are neighbourhood-preserving transformations between two-dimensional data structures. Mappings of this type are a general means of information processing in the vertebrate visual system. In this paper we present an application of a special topographic mapping, termed the inverse perspective mapping, to the computation of stereo and motion. More specifically, we study a class of algorithms for the detection of deviations from an expected "normal" situation. These expectations concern the global space-variance of certain image parameters (e.g., disparity or speed of feature motion) and can thus be implemented in the mapping rule. The resulting algorithms are minimal in the sense that no irrelevant information is extracted from the scene. In a technical application, we use topographic mappings for a stereo obstacle detection system. The implementation has been tested on an automatically guided vehicle (AGV) in an industrial environment.
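
    In modern terms, an inverse perspective mapping is a homography warp to a bird's-eye ground-plane view; a sketch with OpenCV follows. The four point correspondences are placeholders: in practice they come from camera calibration and mounting geometry.

    ```python
    import numpy as np
    import cv2  # OpenCV; assumed available

    frame = np.zeros((480, 640, 3), dtype=np.uint8)     # stand-in camera image
    # Four image-to-ground correspondences (placeholder values).
    src = np.float32([[220, 300], [420, 300], [640, 480], [0, 480]])  # image px
    dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])        # map px
    H = cv2.getPerspectiveTransform(src, dst)
    birds_eye = cv2.warpPerspective(frame, H, (400, 600))
    # On the remapped image a flat ground plane has uniform statistics, so
    # obstacles (which violate the ground-plane assumption) show up as
    # residual deviations, e.g. disparity between similarly remapped stereo views.
    ```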

  13. Mapping the Collaborative Research Process

    ERIC Educational Resources Information Center

    Kochanek, Julie Reed; Scholz, Carrie; Garcia, Alicia N.

    2015-01-01

    Despite significant federal investments in the production of high-quality education research, the direct use of that research in policy and practice is not evident. Some education researchers are increasingly employing collaborative research models that use structures and processes to integrate practitioners into the research process in an effort…

  14. From electron maps to acceleration models in the physics of flare

    NASA Astrophysics Data System (ADS)

    Massone, Anna Maria

    Electron maps reconstructed from RHESSI visibilities represent a powerful source of information for constraining models of electron acceleration in solar plasma physics during flaring events. In this talk I will describe how and to what extent electron maps can be utilized to estimate local electron spectral indices, the evolution of the centroid position at different energies in electron space, and the compatibility of RHESSI observations with different theoretical models of the acceleration mechanisms.

  15. Standard map in magnetized relativistic systems: fixed points and regular acceleration.

    PubMed

    de Sousa, M C; Steffens, F M; Pakter, R; Rizzato, F B

    2010-08-01

    We investigate the concept of a standard map for the interaction of relativistic particles and electrostatic waves of arbitrary amplitudes, under the action of external magnetic fields. The map is adequate for physical settings where waves and particles interact impulsively, and allows a series of analytical results to be obtained exactly. Unlike the traditional form of the standard map, the present map is nonlinear in the wave amplitude and displays a series of peculiar properties. Among these properties, we discuss the relation between the fixed points of the map and accelerator regimes.
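
    For orientation, the classic (non-relativistic, unmagnetized) Chirikov standard map and its period-1 fixed points are sketched below; the paper's map generalizes this construction and, unlike the classic form, is nonlinear in the wave amplitude.

    ```python
    import numpy as np

    K = 1.2  # stochasticity parameter

    def standard_map(theta, p):
        p_new = p + K * np.sin(theta)
        return (theta + p_new) % (2 * np.pi), p_new

    # Period-1 fixed points satisfy sin(theta) = 0 and p = 0 (mod 2*pi):
    # (0, 0) is hyperbolic (unstable); (pi, 0) is elliptic for 0 < K < 4.
    theta, p = np.pi + 0.01, 0.0
    for _ in range(1000):
        theta, p = standard_map(theta, p)
    print(theta, p)  # orbit remains trapped near the elliptic point (pi, 0)
    ```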

  16. Granger-causality maps of diffusion processes.

    PubMed

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes. PMID:26986337
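
    A minimal sketch of the global, linear Granger-causality computation that the paper localizes in state space: compare the residual variances of a restricted predictor (target's own past) and a full predictor (target's and driver's past).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 20000
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.8 * y[t - 1] + rng.standard_normal()
        x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()  # y drives x

    def resid_var(target, regressors):
        beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
        return (target - regressors @ beta).var()

    X1 = np.column_stack([x[:-1]])            # restricted model: x's own past
    X2 = np.column_stack([x[:-1], y[:-1]])    # full model: x's and y's past
    gc_y_to_x = np.log(resid_var(x[1:], X1) / resid_var(x[1:], X2))
    print(f"Granger causality y -> x: {gc_y_to_x:.3f}")  # > 0: y helps predict x
    ```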

  17. Ultrasonic acceleration of enzymatic processing of cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Enzymatic bio-processing of cotton generates significantly less hazardous wastewater effluents, which are readily biodegradable, but it also has several critical shortcomings that impede its acceptance by industries: expensive processing costs and slow reaction rates. It has been found that the intr...

  18. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    NASA Astrophysics Data System (ADS)

    Clayton, C. E.; Adli, E.; Allen, J.; An, W.; Clarke, C. I.; Corde, S.; Frederico, J.; Gessner, S.; Green, S. Z.; Hogan, M. J.; Joshi, C.; Litos, M.; Lu, W.; Marsh, K. A.; Mori, W. B.; Vafaei-Najafabadi, N.; Xu, X.; Yakimenko, V.

    2016-08-01

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not yet been measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within +/-3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV/m to a similar degree of accuracy. These results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity.

  19. Computational Tools for Accelerating Carbon Capture Process Development

    SciTech Connect

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. These tools are used to develop an optimized process from rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

  1. A CLASSIFICATION SCHEME FOR TURBULENT ACCELERATION PROCESSES IN SOLAR FLARES

    SciTech Connect

    Bian, Nicolas; Kontar, Eduard P.; Emslie, A. Gordon

    2012-08-01

    We establish a classification scheme for stochastic acceleration models involving low-frequency plasma turbulence in a strongly magnetized plasma. This classification takes into account both the properties of the accelerating electromagnetic field and the nature of the transport of charged particles in the acceleration region. We group the acceleration processes as either resonant, non-resonant, or resonant-broadened, depending on whether the particle motion is free-streaming along the magnetic field, diffusive, or a combination of the two. Stochastic acceleration by moving magnetic mirrors and adiabatic compressions are addressed as illustrative examples. We obtain expressions for the momentum-dependent diffusion coefficient D(p), both for general forms of the accelerating force and for the situation when the electromagnetic force is wave-like, with a specified dispersion relation ω = ω(k). Finally, for the models considered, we calculate the energy-dependent acceleration time, a quantity that can be directly compared with observations of the time profile of the radiation field produced by the accelerated particles, such as those occurring during solar flares.

  2. Electron accelerators for industrial processing--a review

    SciTech Connect

    Scharf, Waldemar; Wieszczycka, Wioletta

    1999-06-10

    The applications of over 1000 electron beam (EB) accelerator processors used recently worldwide span technological fields from material modification to medical sterilization and food processing. The performance level achieved by the main manufacturers is demonstrated by some selected parameters of processors in the energy range from 0.1 MeV to 10 MeV. The design of the new generation of low cost compact in-line and stand-alone accelerators is discussed.

  3. Field size dependent mapping of medical linear accelerator radiation leakage.

    PubMed

    Bezin, Jérémi Vũ; Veres, Attila; Lefkopoulos, Dimitri; Chavaudra, Jean; Deutsch, Eric; de Vathaire, Florent; Diallo, Ibrahima

    2015-03-01

    The purpose of this study was to investigate the suitability of a graphics-library-based model for the assessment of linear accelerator radiation leakage. Transmission through the shielding elements was evaluated using the build-up-factor-corrected exponential attenuation law, and the contribution from the electron guide was estimated using the approximation of a linear isotropic radioactive source. Model parameters were estimated by fitting a series of thermoluminescent dosimeter leakage measurements, performed up to 100 cm from the beam central axis along three directions. The distribution of leakage data at the patient plane reflected the architecture of the shielding elements. Thus, the maximum leakage dose was found under the collimator when only one jaw shielded the primary beam, and was about 0.08% of the dose at the isocentre. Overall, we observed that the main contributor to the leakage dose according to our model was the electron beam guide. Concerning the discrepancies between the measurements used to calibrate the model and the calculations from the model, the average difference was about 7%. Finally, graphics-library modelling is a ready and suitable way to estimate the leakage dose distribution on a personal computer. Such data could be useful for dosimetric evaluations in late-effect studies.
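
    In compact form, the two model ingredients described above can be written as follows (the notation is assumed for illustration, not taken from the paper):

    ```latex
    % Shield transmission with build-up correction, and the electron guide
    % treated as an isotropic line source of length L:
    \[
      T(t) = B(\mu t)\, e^{-\mu t},
      \qquad
      \dot{D}_{\mathrm{guide}}(P) \propto \int_{0}^{L}
        \frac{T\big(t(\ell)\big)}{4\pi\, r(\ell, P)^{2}}\, \mathrm{d}\ell ,
    \]
    % where \mu is the attenuation coefficient of the shield material, t its
    % thickness along the ray, B the build-up factor, and r(\ell, P) the
    % distance from the source element d\ell to the calculation point P.
    ```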

  4. Recent Advances in Understanding Particle Acceleration Processes in Solar Flares

    NASA Astrophysics Data System (ADS)

    Zharkova, V. V.; Arzner, K.; Benz, A. O.; Browning, P.; Dauphin, C.; Emslie, A. G.; Fletcher, L.; Kontar, E. P.; Mann, G.; Onofri, M.; Petrosian, V.; Turkmani, R.; Vilmer, N.; Vlahos, L.

    2011-09-01

    We review basic theoretical concepts in particle acceleration, with particular emphasis on processes likely to occur in regions of magnetic reconnection. Several new developments are discussed, including detailed studies of reconnection in three-dimensional magnetic field configurations (e.g., current sheets, collapsing traps, separatrix regions) and stochastic acceleration in a turbulent environment. Fluid, test-particle, and particle-in-cell approaches are used and results compared. While these studies show considerable promise in accounting for the various observational manifestations of solar flares, they are limited by a number of factors, mostly relating to available computational power. Not the least of these issues is the need to explicitly incorporate the electrodynamic feedback of the accelerated particles themselves on the environment in which they are accelerated. A brief prognosis for future advancement is offered.

  5. Plasma acceleration processes in an ablative pulsed plasma thruster

    SciTech Connect

    Koizumi, Hiroyuki; Noji, Ryosuke; Komurasaki, Kimiya; Arakawa, Yoshihiro

    2007-03-15

    Plasma acceleration processes in an ablative pulsed plasma thruster (APPT) were investigated. APPTs are space propulsion options suitable for microspacecraft, and have recently attracted much attention because of their low electric power requirements and simple, compact propellant system. The plasma acceleration mechanism, however, has not been well understood. In the present work, emission spectroscopy, high speed photography, and magnetic field measurements are conducted inside the electrode channel of an APPT with rectangular geometry. The successive images of neutral particles and ions give us a comprehensive understanding of their behavior under electromagnetic acceleration. The magnetic field profile clarifies the location where the electromagnetic force takes effect. As a result, it is shown that high density, ablated neutral gas stays near the propellant surface, and only a fraction of the neutrals is converted into plasma and electromagnetically accelerated, leaving the residual neutrals behind.

  6. Accelerated MR Parameter Mapping with Low-Rank and Sparsity Constraints

    PubMed Central

    Zhao, Bo; Lu, Wenmiao; Hitchens, T. Kevin; Lam, Fan; Ho, Chien; Liang, Zhi-Pei

    2014-01-01

    Purpose: To enable accurate MR parameter mapping with accelerated data acquisition, utilizing recent advances in constrained imaging with sparse sampling. Theory and Methods: A new constrained reconstruction method based on low-rank and sparsity constraints is proposed to accelerate MR parameter mapping. More specifically, the proposed method simultaneously imposes low-rank and joint sparse structures on contrast-weighted image sequences within a unified mathematical formulation. With a pre-estimated subspace, this formulation results in a convex optimization problem, which is solved using an efficient numerical algorithm based on the alternating direction method of multipliers. Results: To evaluate the performance of the proposed method, two application examples were considered: i) T2 mapping of the human brain, and ii) T1 mapping of the rat brain. For each application, the proposed method was evaluated at both moderate and high acceleration levels. Additionally, the proposed method was compared with two state-of-the-art methods that only use a single low-rank or joint sparsity constraint. The results demonstrate that the proposed method can achieve accurate parameter estimation with both moderately and highly undersampled data. Although all methods performed fairly well with moderately undersampled data, the proposed method achieved much better performance (e.g., more accurate parameter values) than the other two methods with highly undersampled data. Conclusions: Simultaneously imposing low-rank and sparsity constraints can effectively improve the accuracy of fast MR parameter mapping with sparse sampling. PMID:25163720
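
    A generic way to write the kind of optimization problem described above (the symbols are illustrative assumptions, not the paper's notation): with a pre-estimated temporal subspace held fixed, the reconstruction is convex in the remaining spatial factor.

    ```latex
    % With a pre-estimated temporal subspace U held fixed, the Casorati matrix
    % of the contrast-weighted image sequence is factored as C = U V and
    \[
      \hat{V} = \arg\min_{V}\;
        \tfrac{1}{2} \left\| d - \Omega F (U V) \right\|_2^2
        + \lambda \left\| W (U V) \right\|_1 ,
    \]
    % where d are the undersampled k-space data, \Omega the sampling operator,
    % F the Fourier encoding, and W a sparsifying transform. The factorization
    % enforces the low-rank structure, the l1 term the joint sparsity, and the
    % problem is convex in V, matching the ADMM solution strategy above.
    ```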

  7. AIRS Maps from Space Processing Software

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
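
    A sketch of the bin-and-average re-projection step onto a 1/4-degree global grid follows; quality screening, multi-day compositing, and colorization are omitted.

    ```python
    import numpy as np

    def grid_quarter_degree(lat, lon, values):
        """Bin-and-average swath samples onto a 720 x 1440 (1/4-degree) grid.
        lat in [-90, 90), lon in [-180, 180), both in degrees."""
        rows = np.clip(((lat + 90.0) * 4).astype(int), 0, 719)
        cols = np.clip(((lon + 180.0) * 4).astype(int), 0, 1439)
        total = np.zeros((720, 1440))
        count = np.zeros((720, 1440))
        np.add.at(total, (rows, cols), values)  # accumulate duplicate cells
        np.add.at(count, (rows, cols), 1)
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(count > 0, total / count, np.nan)

    rng = np.random.default_rng(0)
    grid = grid_quarter_degree(rng.uniform(-90, 90, 1000),
                               rng.uniform(-180, 180, 1000),
                               rng.uniform(200, 320, 1000))  # e.g. temps (K)
    ```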

  8. Probabilistic earthquake acceleration and velocity maps for the United States and Puerto Rico

    USGS Publications Warehouse

    Algermissen, S.T.; Perkins, D.M.; Thenhaus, P.C.; Hanson, S.L.; Bender, B.L.

    1990-01-01

    The ground-motion maps presented here (maps A-D) show the expected earthquake-induced maximum horizontal acceleration and velocity in rock in the contiguous United States, Alaska, Hawaii, and Puerto Rico. There is a 90 percent probability that the maximum horizontal acceleration and velocity shown on the maps will not be exceeded in time periods of 50 and 250 years (average return periods for the expected ground motion of 474 and 2,372 years). Rock is taken here to mean material having a shear-wave velocity of between 0.75 and 0.90 kilometers per second (Algermissen and Perkins, 1976).
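
    The quoted return periods follow from the Poisson exceedance model; as a worked check:

    ```latex
    % Non-exceedance probability P over exposure time t implies an average
    % return period
    \[
      T = \frac{-t}{\ln P}, \qquad
      \frac{-50}{\ln 0.9} \approx 474 \ \mathrm{yr}, \qquad
      \frac{-250}{\ln 0.9} \approx 2372 \ \mathrm{yr}.
    \]
    ```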

  9. Value Stream Mapping: Foam Collection and Processing.

    SciTech Connect

    Sorensen, Christian

    2015-07-01

    The effort to collect and process foam for the purpose of recycling performed by the Material Sustainability and Pollution Prevention (MSP2) team at Sandia National Laboratories is an incredible one, but in order to make it run more efficiently it needed some tweaking. This project started in June of 2015. We used the Value Stream Mapping process to examine the current state of the foam collection and processing operation. We then considered all the possible ways the process could be improved, and discussed which of the "dreams" were feasible. Finally, we assigned action items to members of the team to ensure that the improvements actually occur. These improvements will then, due to varying factors, continue to occur over the next couple of years.

  10. Observation of laser multiple filamentation process and multiple electron beams acceleration in a laser wakefield accelerator

    SciTech Connect

    Li, Wentao; Liu, Jiansheng; Wang, Wentao; Chen, Qiang; Zhang, Hui; Tian, Ye; Zhang, Zhijun; Qi, Rong; Wang, Cheng; Leng, Yuxin; Li, Ruxin; Xu, Zhizhan

    2013-11-15

    The multiple-filament formation process in a laser wakefield accelerator (LWFA) was observed by imaging the transmitted laser beam after propagation in plasmas of different densities. During propagation, the laser first self-focused into a single filament. After that, it began to defocus, with its energy spreading in the transverse direction. Two filaments then formed from it and began to propagate independently, moving away from each other. We have also demonstrated that laser multiple filamentation leads to the acceleration of multiple electron beams in the LWFA via the ionization-induced injection scheme. In addition, its influence on the accelerated electron beams was analyzed both in a single-stage LWFA and in a cascaded LWFA.

  11. New Image Reconstruction Methods for Accelerated Quantitative Parameter Mapping and Magnetic Resonance Angiography

    NASA Astrophysics Data System (ADS)

    Velikina, J. V.; Samsonov, A. A.

    2016-02-01

    Advanced MRI techniques often require sampling in additional (non-spatial) dimensions such as time or parametric dimensions, which significantly lengthens scan time. Our purpose was to develop novel iterative image reconstruction methods that reduce the amount of acquired data in such applications by using prior knowledge about the signal in the extra dimensions. Efforts were made to accelerate two applications, namely, time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27x) may be achieved using our proposed iterative reconstruction techniques.

  12. Secondary electron emission from plasma processed accelerating cavity grade niobium

    NASA Astrophysics Data System (ADS)

    Basovic, Milos

    Advances in particle accelerator technology have enabled numerous fundamental discoveries in 20th century physics. Extensive interdisciplinary research has always supported further development of accelerator technology in the effort to reach each new energy frontier. Accelerating cavities, which are used to transfer energy to accelerated charged particles, have been one of the main focuses of research and development in the particle accelerator field. Over the last fifty years, in the race to break energy barriers, there has been constant improvement in the maximum stable accelerating field achieved in accelerating cavities. Every increase in the maximum attainable accelerating field allowed higher-energy upgrades of existing accelerators and more compact designs of new accelerators. Each new and improved technology was faced with ever-emerging limiting factors. At the now-standard high accelerating gradients of more than 25 MV/m, free electrons inside the cavities are accelerated by the field, gaining enough energy to produce more electrons in their interactions with the walls of the cavity. This electron production is exponential, and the energy the electrons transfer to the cavity walls can trigger detrimental processes that limit the performance of the cavity. The root cause of this growth in the number of free electrons is a phenomenon called Secondary Electron Emission (SEE). Even though the phenomenon has been known and studied for over a century, there are still no effective means of controlling it. The ratio between the electrons emitted from the surface and the impacting electrons is defined as the Secondary Electron Yield (SEY); an SEY larger than 1 indicates an increase in the total number of electrons. In the design of accelerator cavities, the goal is to reduce the SEY as much as possible using any form of surface manipulation. In this dissertation, an experimental setup was developed and used to study the SEY of various sample surfaces that were treated

  13. Processing map for hot working of powder

    NASA Astrophysics Data System (ADS)

    Radhakrishna Bhat, B. V.; Mahajan, Y. R.; Roshan, H. Md.; Prasad, Y. V. R. K.

    1992-08-01

    The constitutive flow behavior of a metal matrix composite (MMC) with 2124 aluminum containing 20 vol pct silicon carbide particulates under hot-working conditions in the temperature range of 300 °C to 550 °C and strain-rate range of 0.001 to 1 s^-1 has been studied using hot compression testing. Processing maps depicting the variation of the efficiency of power dissipation, given by [2m/(m + 1)] (where m is the strain-rate sensitivity of flow stress), with temperature and strain rate have been established for the MMC as well as for the matrix material. The maps have been interpreted on the basis of the Dynamic Materials Model (DMM) [3]. The MMC exhibited a domain of superplasticity in the temperature range of 450 °C to 550 °C and at strain rates less than 0.1 s^-1. At 500 °C and a strain rate of 1 s^-1, the MMC undergoes dynamic recrystallization (DRX), resulting in a reconstitution of microstructure. In comparison with the map for the matrix material, the DRX domain occurred at a strain rate higher by three orders of magnitude. At temperatures lower than 400 °C, the MMC exhibited dynamic recovery, while at 550 °C and 1 s^-1, cracking occurred at the prior particle boundaries (representing surfaces of the initial powder particles). The optimum temperature and strain-rate combination for billet conditioning of the MMC is 500 °C and 1 s^-1, while secondary metalworking may be done in the superplasticity domain. The MMC undergoes microstructural instability at temperatures lower than 400 °C and strain rates higher than 0.1 s^-1.
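
    A minimal sketch of how such a processing map is tabulated from flow-stress data, using the efficiency definition quoted above (the stress and strain-rate values below are illustrative, not the paper's measurements):

      # Efficiency of power dissipation eta = 2m/(m + 1), with
      # m = d(ln sigma)/d(ln strain_rate) at fixed temperature.
      import numpy as np

      strain_rate = np.array([0.001, 0.01, 0.1, 1.0])    # s^-1 (illustrative)
      flow_stress = np.array([40.0, 55.0, 80.0, 120.0])  # MPa (illustrative)

      m = np.gradient(np.log(flow_stress), np.log(strain_rate))
      eta = 2 * m / (m + 1)
      print(np.round(eta, 2))
      # Repeating this at each test temperature and contouring eta over the
      # (temperature, strain-rate) plane yields the processing map.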

  14. Detecting chaos in particle accelerators through the frequency map analysis method.

    PubMed

    Papaphilippou, Yannis

    2014-06-01

    The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role not only during the accelerator design phase but also during operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. Recent developments in the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations as well as in experimental data.
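
    A minimal illustration of the frequency-map idea (not the refined NAFF implementation used in practice): estimate the tune in two successive windows of turn-by-turn tracking data; a tune that drifts between windows flags chaotic motion.

      import numpy as np

      def tune(x):
          # Dominant fractional frequency of a turn-by-turn signal (FFT peak).
          spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          return np.argmax(spec[1:]) + 1, len(x)   # (bin index, window length)

      def tune_diffusion(x):
          half = len(x) // 2
          (k1, n1), (k2, n2) = tune(x[:half]), tune(x[half:])
          return abs(k1 / n1 - k2 / n2)   # |nu_1 - nu_2|; ~0 for regular motion

      turns = np.arange(4096)
      x = np.cos(2 * np.pi * 0.31 * turns)   # regular orbit at tune 0.31
      print(tune_diffusion(x))               # ~0; chaotic orbits give larger values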

  15. Detecting chaos in particle accelerators through the frequency map analysis method

    SciTech Connect

    Papaphilippou, Yannis

    2014-06-01

    The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role not only during the accelerator design phase but also during operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. Recent developments in the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations as well as in experimental data.

  16. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity.

    PubMed

    Clayton, C E; Adli, E; Allen, J; An, W; Clarke, C I; Corde, S; Frederico, J; Gessner, S; Green, S Z; Hogan, M J; Joshi, C; Litos, M; Lu, W; Marsh, K A; Mori, W B; Vafaei-Najafabadi, N; Xu, X; Yakimenko, V

    2016-01-01

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not yet been measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m^-1 to a similar degree of accuracy. These results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity. PMID:27527569

  17. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    DOE PAGES

    Clayton, C. E.; Adli, E.; Allen, J.; An, W.; Clarke, C. I.; Corde, S.; Frederico, J.; Gessner, S.; Green, S. Z.; Hogan, M. J.; et al

    2016-08-16

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not yet been measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m^-1 to a similar degree of accuracy. Lastly, these results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity.

  18. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    PubMed Central

    Clayton, C. E.; Adli, E.; Allen, J.; An, W.; Clarke, C. I.; Corde, S.; Frederico, J.; Gessner, S.; Green, S. Z.; Hogan, M. J.; Joshi, C.; Litos, M.; Lu, W.; Marsh, K. A.; Mori, W. B.; Vafaei-Najafabadi, N.; Xu, X.; Yakimenko, V.

    2016-01-01

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not yet been measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m^-1 to a similar degree of accuracy. These results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity. PMID:27527569

  19. Mapping stochastic processes onto complex networks

    NASA Astrophysics Data System (ADS)

    Shirazi, A. H.; Reza Jafari, G.; Davoudi, J.; Peinke, J.; Reza Rahimi Tabar, M.; Sahimi, Muhammad

    2009-07-01

    We introduce a method by which stochastic processes are mapped onto complex networks. As examples, we construct the networks for such time series as those for free-jet and low-temperature helium turbulence, the German stock market index (the DAX), and white noise. The networks are further studied by contrasting their geometrical properties, such as the mean length, diameter, clustering, and average number of connections per node. By comparing the network properties of the original time series investigated with those for the shuffled and surrogate series, we are able to quantify the effect of the long-range correlations and the fatness of the probability distribution functions of the series on the networks constructed. Most importantly, we demonstrate that the time series can be reconstructed with high precision by means of a simple random walk on their corresponding networks.
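
    The paper's construction is its own; purely as a hedged illustration of mapping a time series onto a network, one simple recipe is to treat discretized values as nodes and consecutive occurrences as edges, then compare graph statistics against a shuffled series:

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      series = np.cumsum(rng.standard_normal(10_000))    # toy stochastic process
      states = np.digitize(series, np.histogram_bin_edges(series, bins=50))

      G = nx.Graph()
      G.add_edges_from(zip(states[:-1], states[1:]))     # consecutive-state links
      G.remove_edges_from(list(nx.selfloop_edges(G)))
      print(G.number_of_nodes(), nx.average_clustering(G))
      # Repeat with rng.permutation(series) to see how destroying the
      # correlations changes the network's geometry.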

  20. Induction linear accelerators for commercial photon irradiation processing

    SciTech Connect

    Matthews, S.M.

    1989-01-13

    A number of proposed irradiation processes require bulk rather than surface exposure to intense ionizing radiation. Typical examples are irradiation of food packaged in pallet-size containers, processing of sewage sludge for recycling as landfill and fertilizer, sterilization of prepackaged medical disposables, treatment of municipal water supplies for pathogen reduction, etc. Volumetric processing of dense, bulky products with ionizing radiation requires high-energy photon sources because electrons are not penetrating enough to provide uniform bulk dose deposition in thick, dense samples. Induction Linear Accelerator (ILA) technology developed at the Lawrence Livermore National Laboratory promises to play a key role in providing solutions to this problem, as discussed in this paper.

  1. Enzyme clustering accelerates processing of intermediates through metabolic channeling

    PubMed Central

    Castellana, Michele; Wilson, Maxwell Z.; Xu, Yifan; Joshi, Preeti; Cristea, Ileana M.; Rabinowitz, Joshua D.; Gitai, Zemer; Wingreen, Ned S.

    2015-01-01

    We present a quantitative model to demonstrate that coclustering multiple enzymes into compact agglomerates accelerates the processing of intermediates, yielding the same efficiency benefits as direct channeling, a well-known mechanism in which enzymes are funneled between enzyme active sites through a physical tunnel. The model predicts the separation and size of coclusters that maximize metabolic efficiency, and this prediction is in agreement with previously reported spacings between coclusters in mammalian cells. For direct validation, we study a metabolic branch point in Escherichia coli and experimentally confirm the model prediction that enzyme agglomerates can accelerate the processing of a shared intermediate by one branch, and thus regulate steady-state flux division. Our studies establish a quantitative framework to understand coclustering-mediated metabolic channeling and its application to both efficiency improvement and metabolic regulation. PMID:25262299

  2. Magnetohydrodynamic Particle Acceleration Processes: SSX Experiments, Theory, and Astrophysical Applications

    SciTech Connect

    Brown, Michael R.

    2006-11-16

    Project Title: Magnetohydrodynamic Particle Acceleration Processes: SSX Experiments, Theory, and Astrophysical Applications. PI: Michael R. Brown, Swarthmore College. The purpose of the project was to provide theoretical and modeling support to the Swarthmore Spheromak Experiment (SSX). Accordingly, the theoretical effort was tightly integrated with the SSX experimental effort. During the grant period, Michael Brown and his experimental collaborators at Swarthmore, with assistance from W. Matthaeus as appropriate, made substantial progress in understanding the physics of SSX plasmas.

  3. Modeling the Acceleration Process of Dust in the Solar Wind

    NASA Astrophysics Data System (ADS)

    Jia, Y. D.; Lai, H.; Russell, C. T.; Wei, H.

    2015-12-01

    In previous studies we identified structures created by nano-dust in the solar wind, and we observed the expected draping and diverting signatures of such structures using well-spaced multi-spacecraft observations. In this study, we reproduce such an interaction event with our multi-fluid MHD model, treating the dust particles as a fluid. When the number density of dust particles is comparable to that of the solar wind ions, significant draping of the IMF is created, with an amplitude larger than the ambient fluctuations. We note that such a density is well above several nano-dust particles per Debye sphere, so a dusty-fluid treatment is appropriate for modeling the dust-solar wind interaction. We assume a spherical cloud of dust traveling at 90% of the solar wind speed. In addition to reproducing the IMF response to the nano-dust at the end stage of dust acceleration, we model the entire acceleration process in the gravity field of the inner heliosphere. It takes hours for the smallest dust, with 3000 amu per proton charge, to reach the solar wind speed. We find the dust cloud stretched along the solar wind flow. Such stretching enhances the draping of the IMF compared to the spherical cloud used in an earlier stage of this study. This model will be further used to examine magnetic perturbations at an earlier stage of dust cloud acceleration, and then to determine the size, density, and total mass of the dust cloud, as well as its creation and acceleration.

  4. Optical signal acquisition and processing in future accelerator diagnostics

    SciTech Connect

    Jackson, G.P.; Elliott, A.

    1992-01-01

    Beam detectors such as striplines and wall current monitors rely on matched electrical networks to transmit and process beam information. Frequency bandwidth, noise immunity, reflections, and signal to noise ratio are considerations that require compromises limiting the quality of the measurement. Recent advances in fiber optics related technologies have made it possible to acquire and process beam signals in the optical domain. This paper describes recent developments in the application of these technologies to accelerator beam diagnostics. The design and construction of an optical notch filter used for a stochastic cooling system is used as an example. Conceptual ideas for future beam detectors are also presented.
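
    One concrete element mentioned above is the notch filter for stochastic cooling; a hedged sketch of the classic two-path (delay-line) notch response, with the delay value assumed for illustration:

      # H(f) = (1 - exp(-i*2*pi*f*T)) / 2 has notches at every harmonic of 1/T,
      # where T is chosen equal to one beam revolution period.
      import numpy as np

      T = 1.0e-6                                   # delay, s (assumed)
      f = np.linspace(0.0, 5.0e6, 1001)            # frequency axis, Hz
      H = 0.5 * (1.0 - np.exp(-2j * np.pi * f * T))
      print(f[np.abs(H) < 1e-3] / 1e6)             # notch frequencies in MHz: 0, 1, 2, ...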

  5. Optical signal acquisition and processing in future accelerator diagnostics

    SciTech Connect

    Jackson, G.P.; Elliott, A.

    1992-12-31

    Beam detectors such as striplines and wall current monitors rely on matched electrical networks to transmit and process beam information. Frequency bandwidth, noise immunity, reflections, and signal to noise ratio are considerations that require compromises limiting the quality of the measurement. Recent advances in fiber optics related technologies have made it possible to acquire and process beam signals in the optical domain. This paper describes recent developments in the application of these technologies to accelerator beam diagnostics. The design and construction of an optical notch filter used for a stochastic cooling system is used as an example. Conceptual ideas for future beam detectors are also presented.

  6. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach using Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations. PMID:26406070
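
    A hedged, toy sketch of the operator-splitting idea that exposes the parallelism (the real work uses detailed SANC/atrial cell models; the relaxation ODE below is only a placeholder):

      import numpy as np

      def step(v, dt, d=0.1, tau=10.0, v_rest=-60.0):
          # 1) Reaction substep: per-cell ODE, independent across cells,
          #    so it maps naturally onto one GPU thread per cell.
          v = v + dt * (-(v - v_rest) / tau)
          # 2) Diffusion substep: nearest-neighbour coupling along the tissue.
          lap = np.roll(v, 1) + np.roll(v, -1) - 2 * v
          return v + dt * d * lap

      v = np.full(530, -60.0)   # 500 SANC-like + 30 atrial-like cells (illustrative)
      v[:10] = 20.0             # a depolarized patch
      for _ in range(1000):
          v = step(v, dt=0.01)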

  7. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach using Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.

  8. Effects of Map Processing upon Text Comprehension.

    ERIC Educational Resources Information Center

    Kirby, John R.; And Others

    A study investigated the effects of a spatial adjunct aid--maps--upon probed comprehension and free recall with respect to a text in which map-related information (macropropositions) could be clearly distinguished from more abstract information (micropropositions). Forty-eight tenth grade students were randomly assigned to either a control group…

  9. Particle Acceleration via Reconnection Processes in the Supersonic Solar Wind

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; le Roux, J. A.; Webb, G. M.; Dosch, A.; Khabarova, O.

    2014-12-01

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We derive a gyrophase-averaged transport equation for particles experiencing pitch-angle scattering and energization in a super-Alfvénic flowing plasma experiencing multiple small-scale reconnection events. A simpler advection-diffusion transport equation for a nearly isotropic particle distribution is derived. The dominant charged particle energization processes are (1) the electric field induced by quasi-2D magnetic island merging and (2) magnetic island contraction. The magnetic island topology ensures that charged particles are trapped in regions where they experience repeated interactions with the induced electric field or contracting magnetic islands. Steady-state solutions of the isotropic transport equation with only the induced electric field and a fixed source yield a power-law spectrum for the accelerated particles with index α = -(3 + M_A)/2, where M_A is the Alfvén Mach number. Considering only magnetic island contraction yields power-law-like solutions with index -3(1 + τ_c/(8τ_diff)), where τ_c/τ_diff is the ratio of timescales between magnetic island contraction and charged particle diffusion. The general solution is a power-law-like solution with an index that depends on the Alfvén Mach number and the timescale ratio τ_diff/τ_c. Observed power-law distributions of energetic particles observed in the quiet supersonic solar wind at 1 AU may be a consequence of particle acceleration associated with dissipative small-scale reconnection processes in a turbulent plasma, including the widely reported c^-5 (c particle speed) spectra observed by Fisk & Gloeckler and Mewaldt et
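
    A quick check of the quoted index, α = -(3 + M_A)/2, for a few illustrative Alfvén Mach numbers (values chosen only for the arithmetic):

      # alpha = -(3 + M_A)/2; M_A = 7 reproduces the widely reported c^-5 spectra.
      for M_A in (2.0, 5.0, 7.0):
          print(f"M_A = {M_A}: alpha = {-(3 + M_A) / 2}")
      # -> -2.5, -4.0, -5.0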

  10. Acceleration of Topographic Map Production Using Semi-Automatic DTM from Dsm Radar Data

    NASA Astrophysics Data System (ADS)

    Rizaldy, Aldino; Mayasari, Ratna

    2016-06-01

    Badan Informasi Geospasial (BIG) is the government institution in Indonesia responsible for providing topographic maps at several map scales. For medium map scales, e.g. 1:25,000 or 1:50,000, DSMs from radar data are a very good solution, since radar is able to penetrate the clouds that usually cover tropical areas in Indonesia. Radar DSMs are produced using radargrammetry and interferometry techniques. The conventional method of DTM production uses a "stereo-mate", a stereo image created from the radar DSM and ORRI (Ortho-Rectified Radar Image), on which a human operator manually digitizes mass points and breaklines using a digital stereoplotter workstation. This technique is accurate but very costly and time consuming, and it requires a large number of human operators. Since the DSMs are already generated, it is possible to filter a DSM to a DTM using several techniques. This paper studies the possibility of DSM-to-DTM filtering using techniques commonly applied to LIDAR point cloud filtering. The accuracy of this method is also assessed using a sufficient number of check points. If the accuracy meets the requirements, this method has great potential to accelerate the production of topographic maps in Indonesia.
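
    A hedged sketch of one LIDAR-style ground filter applied to a DSM grid, here a simple morphological opening (window size and height threshold are illustrative and would need tuning against check points):

      import numpy as np
      from scipy.ndimage import grey_opening

      def dsm_to_dtm(dsm, window=15, max_object_height=5.0):
          # The opening removes objects narrower than the window, leaving an
          # approximate terrain surface; cells far above it are non-ground.
          ground = grey_opening(dsm, size=(window, window))
          is_ground = (dsm - ground) < max_object_height
          return np.where(is_ground, dsm, np.nan)  # interpolate the gaps afterwards

      # dtm = dsm_to_dtm(dsm_array); assess against surveyed check points.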

  11. Graphics processing unit accelerated computation of digital holograms.

    PubMed

    Kang, Hoonjong; Yaraş, Fahri; Onural, Levent

    2009-12-01

    An approximation for fast digital hologram generation is implemented on a central processing unit (CPU), a graphics processing unit (GPU), and a multi-GPU computational platform. The computational performance of the method on each platform is measured and compared. The computational speed on the GPU platform is much faster than on a CPU, and the algorithm could be further accelerated on a multi-GPU platform. In addition, the accuracy of the algorithm for single- and double-precision arithmetic is evaluated. The quality of the reconstruction from the algorithm using single-precision arithmetic is comparable with the quality from the double-precision arithmetic, and thus the implementation using single-precision arithmetic on a multi-GPU platform can be used for holographic video displays.
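
    A hedged sketch of point-source hologram accumulation, the per-pixel summation that parallelizes so well, run in both precisions to mimic the paper's single- versus double-precision comparison (all parameters illustrative):

      import numpy as np

      def hologram(points, n=512, pitch=8e-6, wavelength=633e-9, dtype=np.float32):
          k = dtype(2 * np.pi / wavelength)
          x = (np.arange(n, dtype=dtype) - n / 2) * dtype(pitch)
          xx, yy = np.meshgrid(x, x)
          field = np.zeros((n, n), dtype=np.complex64 if dtype == np.float32 else np.complex128)
          for px, py, pz, amp in points:   # one spherical wave per object point
              r = np.sqrt((xx - dtype(px))**2 + (yy - dtype(py))**2 + dtype(pz)**2)
              field += dtype(amp) * np.exp(1j * k * r) / r
          return field

      pts = [(0.0, 0.0, 0.1, 1.0), (1e-4, -1e-4, 0.12, 0.8)]
      h32 = hologram(pts, dtype=np.float32)
      h64 = hologram(pts, dtype=np.float64)
      print(np.max(np.abs(h32 - h64)))     # single- vs double-precision gap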

  12. The spinning disc: studying radial acceleration and its damping process with smartphone acceleration sensors

    NASA Astrophysics Data System (ADS)

    Hochberg, K.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-03-01

    Here, we show the possibility of analysing circular motion and acceleration using the acceleration sensors of smartphones. For instance, the known linear dependence of the radial acceleration on the distance to the centre (at constant angular frequency) can be shown using multiple smartphones attached to a revolving disc. As a second example, the decrease of the radial acceleration and the rotation frequency due to friction can be measured and fitted with a quadratic function, in accordance with theory. Finally, because the disc is not set up exactly horizontally, each smartphone measures a component of the gravitational acceleration that adds to the radial acceleration during one half of the period and subtracts from it during the other half. Hence, every graph shows a small modulation, which can be used to determine the rotation frequency, thus converting a ‘nuisance effect’ into a source of useful information and making additional measurements with stopwatches or the like unnecessary.
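
    A short simulation of the effect described, with illustrative numbers: the tilt adds a once-per-revolution gravity modulation whose frequency is the rotation frequency.

      import numpy as np

      g, r, omega, tilt = 9.81, 0.10, 2 * np.pi * 2.0, np.radians(1.0)  # assumed values
      t = np.linspace(0, 5, 2000)
      a_measured = omega**2 * r + g * np.sin(tilt) * np.sin(omega * t)  # radial + tilt term

      # Recover the rotation frequency from the modulation alone:
      spec = np.abs(np.fft.rfft(a_measured - a_measured.mean()))
      freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
      print(freqs[np.argmax(spec)])   # ~2.0 Hz, no stopwatch needed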

  13. Uav Data Processing for Rapid Mapping Activities

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2015-08-01

    During disaster and emergency situations, geospatial data plays an important role as a framework for decision support systems. As one component of basic geospatial data, large-scale topographic maps are mandatory for enabling geospatial analysis across quite a number of societal challenges. The increasing role of geo-information in disaster management consequently requires geospatial aspects to be included in the analysis, so that different geospatial datasets can be combined to produce reliable geospatial analyses, especially in the context of disaster preparedness and emergency response. A very well-known issue in this context is the fast delivery of relevant geospatial data, expressed by the term "Rapid Mapping". The Unmanned Aerial Vehicle (UAV) is a rising geospatial data platform that is attractive for modelling and monitoring a disaster area, offering low-cost and timely acquisition in such a critical period of time. Disaster-related object extraction is of special interest for many applications. In this paper, UAV-borne data has been used to support rapid mapping activities in combination with high-resolution airborne Interferometric Synthetic Aperture Radar (IFSAR) data. A real disaster instance from 2013, the eruption of Mount Sinabung in Northern Sumatra, Indonesia, is used as the benchmark test for the rapid mapping activities presented in this paper. In this context, a reliable IFSAR dataset from an airborne data acquisition in 2011 is used as a comparison dataset for accuracy investigation and assessment of the 3D reconstructions. Finally, this paper presents a proper geo-referencing and feature extraction method for UAV data to support rapid mapping activities.

  14. Engineering functionality gradients by dip coating process in acceleration mode.

    PubMed

    Faustini, Marco; Ceratti, Davide R; Louis, Benjamin; Boudot, Mickael; Albouy, Pierre-Antoine; Boissière, Cédric; Grosso, David

    2014-10-01

    In this work, unique functional devices exhibiting controlled gradients of properties are fabricated by a dip-coating process in acceleration mode. Through this new approach, thin films with "on-demand" thickness-graded profiles at the submillimeter scale are prepared in an easy and versatile way, compatible with large-scale production. The technique is adapted to several relevant materials, including sol-gel dense and mesoporous metal oxides, block copolymers, metal-organic framework colloids, and commercial photoresists. In the first part of the article, the effect of varying the dip-coating speed on the thickness profiles is investigated, together with the critical roles played by the evaporation rate and the viscosity in fluid-draining-induced film formation. In the second part, dip-coating in acceleration mode is used to induce controlled variation of functionalities by playing on structural, chemical, or dimensional variations in nano- and microsystems. In order to demonstrate the full potential and versatility of the technique, original graded functional devices are made, including optical interferometry mirrors with bidirectional gradients, one-dimensional photonic crystals with a stop-band gradient, graded microfluidic channels, and wetting gradients to induce droplet motion.
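
    A hedged sketch of why an accelerating withdrawal yields a thickness gradient, assuming a Landau-Levich-type draining law h = c·v^(2/3) (the true exponent and prefactor depend on the fluid and on the evaporation regime, as the article stresses; all numbers assumed):

      import numpy as np

      c = 1.0e-7     # prefactor lumping viscosity and surface tension (assumed)
      a = 0.5e-3     # withdrawal acceleration, m/s^2 (assumed)
      v0 = 0.1e-3    # initial withdrawal speed, m/s (assumed)

      z = np.linspace(0, 0.02, 200)          # position along the substrate, m
      v_at_z = np.sqrt(v0**2 + 2 * a * z)    # speed when each position left the bath
      h = c * v_at_z**(2.0 / 3.0)            # graded film-thickness profile
      print(h[0], h[-1])                     # thinner end vs thicker end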

  15. Radiation mapping inside the bunkers of medium energy accelerators using a robotic carrier.

    PubMed

    Ravishankar, R; Bhaumik, T K; Bandyopadhyay, T; Purkait, M; Jena, S C; Mishra, S K; Sharma, S; Agashe, V; Datta, K; Sarkar, B; Datta, C; Sarkar, D; Pal, P K

    2013-10-01

    Knowledge of the ambient and peak radiation levels prevailing inside the bunkers of accelerator facilities is essential for assessing accidental human exposure inside the bunkers and for protecting sensitive electronic equipment by minimizing its exposure to high-intensity mixed radiation fields. Dynamic radiation field mapping inside bunkers is rare, although dose-rate data are generally available at specific locations in every particle accelerator facility. Taking into account the fact that the neutron fields present during cyclotron operation inside the bunkers span energies from thermal up to the energy of the accelerated charged projectiles, together with prompt photons and other particles, neutron and gamma survey meters with extended energy ranges were attached to a robotic carrier. The movement of the robotic carrier was controlled remotely from the control room with the help of multiple visible-range optical cameras installed inside the bunkers, and wireless and wired communication protocols supported its movement and the acquisition of data from the survey meters. The Variable Energy Cyclotron Centre, Kolkata, has positive-ion accelerating facilities such as the K-130 Room Temperature Cyclotron, the K-500 Superconducting Cyclotron and a forthcoming 30 MeV high-beam-current proton medical cyclotron. Dose-rate data for the K-130 Room Temperature Cyclotron were collected for various energies of alpha and proton beams losing their total energy at different stages on different materials, at various strategic locations of radiological importance inside the bunkers. The measurements established that radiation levels inside the machine bunker change dynamically depending on the beam type, beam energy, machine operation parameters, deflector condition, slit placement and central-region beam tuning. The association of dose rates with parameters such as beam intensity and the type and energy of the projectiles helped in

  16. Beyond data collection in digital mapping: interpretation, sketching and thought process elements in geological map making

    NASA Astrophysics Data System (ADS)

    Watkins, Hannah; Bond, Clare; Butler, Rob

    2016-04-01

    Geological mapping techniques have advanced significantly in recent years, from paper fieldslips to Toughbook, smartphone and tablet mapping; but how do the methods used to create a geological map affect the thought processes that result in the final map interpretation? Geological maps have many key roles in the geosciences, including understanding geological processes and geometries in 3D, interpreting geological histories, and understanding stratigraphic relationships in 2D and 3D. Here we consider the impact of the methods used to create a map on the thought processes that result in the final geological map interpretation. As mapping technology has advanced, the way in which we produce geological maps has also changed. Traditional geological mapping is undertaken using paper fieldslips, pencils and compass clinometers, and the map interpretation evolves through time as data is collected. This interpretive process is often supported by a field notebook in which observations, ideas and alternative geological models are explored with the use of sketches and evolutionary diagrams. In combination, the field map and notebook can be used to challenge the map interpretation and consider its uncertainties. These uncertainties, and the balance of data to interpretation, are often lost in the creation of published 'fair copy' geological maps. The advent of Toughbooks, smartphones and tablets in the production of geological maps has changed the process of map creation. Digital data collection, particularly through the use of built-in gyrometers in phones and tablets, has turned smartphones into geological mapping tools that can be used to collect large amounts of geological data quickly. With GPS functionality this data is also geospatially located, assuming good GPS connectivity, and can be linked to georeferenced in-field photography. In contrast, line drawing, for example for lithological boundary interpretation and sketching

  17. Particle acceleration via reconnection processes in the supersonic solar wind

    SciTech Connect

    Zank, G. P.; Le Roux, J. A.; Webb, G. M.; Dosch, A.; Khabarova, O.

    2014-12-10

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We derive a gyrophase-averaged transport equation for particles experiencing pitch-angle scattering and energization in a super-Alfvénic flowing plasma experiencing multiple small-scale reconnection events. A simpler advection-diffusion transport equation for a nearly isotropic particle distribution is derived. The dominant charged particle energization processes are (1) the electric field induced by quasi-2D magnetic island merging and (2) magnetic island contraction. The magnetic island topology ensures that charged particles are trapped in regions where they experience repeated interactions with the induced electric field or contracting magnetic islands. Steady-state solutions of the isotropic transport equation with only the induced electric field and a fixed source yield a power-law spectrum for the accelerated particles with index α = -(3 + M_A)/2, where M_A is the Alfvén Mach number. Considering only magnetic island contraction yields power-law-like solutions with index -3(1 + τ_c/(8τ_diff)), where τ_c/τ_diff is the ratio of timescales between magnetic island contraction and charged particle diffusion. The general solution is a power-law-like solution with an index that depends on the Alfvén Mach number and the timescale ratio τ_diff/τ_c. Observed power-law distributions of energetic particles observed in the quiet supersonic solar wind at 1 AU may be a consequence of particle acceleration associated with dissipative small-scale reconnection processes in a turbulent plasma, including the widely reported c^-5 (c particle

  18. GPU accelerated processing of astronomical high frame-rate videosequences

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Švihlík, Jan; Krasula, Lukáš; Fliegel, Karel; Páta, Petr

    2015-09-01

    Astronomical instruments located around the world produce an incredibly large amount of potentially interesting scientific data, and astronomical research is expanding into large and highly sensitive telescopes. The total volume of data per night of operations also increases with the quality and resolution of state-of-the-art CCD/CMOS detectors. Since many ground-based astronomical experiments are placed in remote locations with limited access to the Internet, the problem of data storage must be solved. This largely means that current data acquisition, processing and analysis algorithms require review, since decisions about the importance of the data have to be made in a very short time. This work deals with GPU-accelerated processing of high frame-rate astronomical video sequences, mostly originating from the experiment MAIA (Meteor Automatic Imager and Analyser), an instrument primarily focused on observing faint meteoric events with high time resolution. The instrument, with a price below 2000 euro, consists of an image intensifier and a gigabit Ethernet camera running at 61 fps. With resolution better than VGA, the system produces up to 2 TB of scientifically valuable video data per night. The main goal of the paper is not to optimize any single GPU algorithm, but to propose and evaluate parallel GPU algorithms able to process this huge volume of video sequences in order to discard all uninteresting data.
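
    A hedged sketch of the triage idea (not MAIA's actual pipeline): keep only frames that differ markedly from their predecessor, a candidate event, so most of the ~2 TB/night never needs storing.

      import numpy as np

      def interesting_frames(frames, threshold=5.0):
          # Yield indices of frames that differ markedly from their predecessor.
          prev = frames[0]
          for i, cur in enumerate(frames[1:], start=1):
              if np.mean(np.abs(cur - prev)) > threshold:
                  yield i
              prev = cur

      rng = np.random.default_rng(1)
      video = rng.normal(10.0, 1.0, size=(100, 60, 80))   # quiet sky + sensor noise
      video[50, 20:40, 30:50] += 200.0                     # one bright streak to keep
      print(list(interesting_frames(video)))               # -> [50, 51]: streak appears, then vanishes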

  19. Mapping individual logical processes in information searching

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.

    1974-01-01

    An interactive dialog with a computerized information collection was recorded and plotted in the form of a flow chart. The process permits one to identify the logical processes employed in considerable detail and is therefore suggested as a tool for measuring individual thought processes in a variety of situations. A sample of an actual test case is given.

  20. Scanning probe acceleration microscopy (SPAM) in fluids: mapping mechanical properties of surfaces at the nanoscale.

    PubMed

    Legleiter, Justin; Park, Matthew; Cusick, Brian; Kowalewski, Tomasz

    2006-03-28

    One of the major thrusts in proximal probe techniques is the combination of imaging capabilities with simultaneous measurements of physical properties. In tapping mode atomic force microscopy (TMAFM), the most straightforward way to accomplish this goal is to reconstruct the time-resolved force interaction between the tip and surface. These tip-sample forces can be used to detect interactions (e.g., binding sites) and map material properties with nanoscale spatial resolution. Here, we describe a previously unreported approach, which we refer to as scanning probe acceleration microscopy (SPAM), in which the TMAFM cantilever acts as an accelerometer to extract tip-sample forces during imaging. This method utilizes the second derivative of the deflection signal to recover the tip acceleration trajectory. The challenge in such an approach is that with real, noisy data, the second derivative of the signal is strongly dominated by the noise. This problem is solved by taking advantage of the fact that most of the information about the deflection trajectory is contained in the higher harmonics, making it possible to filter the signal by "comb" filtering, i.e., by taking its Fourier transform and inverting it while selectively retaining only the intensities at integer harmonic frequencies. Such a comb filtering method works particularly well in fluid TMAFM because of the highly distorted character of the deflection signal. Numerical simulations and in situ TMAFM experiments on supported lipid bilayer patches on mica are reported to demonstrate the validity of this approach. PMID:16551751
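
    A minimal sketch of the reconstruction chain described above, with illustrative parameters: comb-filter the deflection by keeping only integer harmonics of the drive, then differentiate twice to recover the (scaled) tip acceleration.

      import numpy as np

      def spam_acceleration(deflection, cycles, n_harmonics=30):
          # deflection: samples spanning an integer number `cycles` of drive periods.
          spec = np.fft.rfft(deflection)
          comb = np.zeros_like(spec)
          idx = cycles * np.arange(1, n_harmonics + 1)   # integer-harmonic bins
          idx = idx[idx < spec.size]
          comb[idx] = spec[idx]                          # "comb" filtering
          filtered = np.fft.irfft(comb, n=deflection.size)
          # Second derivative ~ acceleration (up to the sampling-rate scale factor).
          return np.gradient(np.gradient(filtered))

      t = np.linspace(0, 1, 8192, endpoint=False)
      signal = np.sin(2*np.pi*64*t) + 0.2*np.sin(2*np.pi*128*t) + 0.05*np.random.randn(t.size)
      accel = spam_acceleration(signal, cycles=64)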

  1. Scanning probe acceleration microscopy (SPAM) in fluids: Mapping mechanical properties of surfaces at the nanoscale

    NASA Astrophysics Data System (ADS)

    Legleiter, Justin; Park, Matthew; Cusick, Brian; Kowalewski, Tomasz

    2006-03-01

    One of the major thrusts in proximal probe techniques is the combination of imaging capabilities with simultaneous measurements of physical properties. In tapping mode atomic force microscopy (TMAFM), the most straightforward way to accomplish this goal is to reconstruct the time-resolved force interaction between the tip and surface. These tip-sample forces can be used to detect interactions (e.g., binding sites) and map material properties with nanoscale spatial resolution. Here, we describe a previously unreported approach, which we refer to as scanning probe acceleration microscopy (SPAM), in which the TMAFM cantilever acts as an accelerometer to extract tip-sample forces during imaging. This method utilizes the second derivative of the deflection signal to recover the tip acceleration trajectory. The challenge in such an approach is that with real, noisy data, the second derivative of the signal is strongly dominated by the noise. This problem is solved by taking advantage of the fact that most of the information about the deflection trajectory is contained in the higher harmonics, making it possible to filter the signal by “comb” filtering, i.e., by taking its Fourier transform and inverting it while selectively retaining only the intensities at integer harmonic frequencies. Such a comb filtering method works particularly well in fluid TMAFM because of the highly distorted character of the deflection signal. Numerical simulations and in situ TMAFM experiments on supported lipid bilayer patches on mica are reported to demonstrate the validity of this approach.

  2. A practical protocol to accelerate the breeding process of rice in semitropical and tropical regions

    PubMed Central

    Li, Jun; Hou, Xianhui; Liu, Jindi; Qian, Changgen; Gao, Rongcun; Li, Linchuan; Li, Jinjun

    2015-01-01

    Breeding of excellent rice varieties is essential for modern rice production. Typical breeding procedures to introduce and maintain valuable agricultural traits require at least 8 generations from crossing to stabilization, usually taking more than 4–5 years of work. This long and tedious process is the rate-limiting step in the development of new varieties, and therefore fast culturing methods are urgently needed. Taking advantage of the early-flowering characteristics of light-sensitive rice under short-day conditions, we have developed a practical protocol to accelerate the breeding cycle of rice, which we have termed the “1 + 2”, “2 + 2”, “1 + 3”, and “0 + 5” methods according to the different rice varieties and different breeding purposes. We have also incorporated several techniques, including glume cutting, seed desiccation at 50°C in a drier, seed dormancy breakage with a low concentration of HNO3, and direct seeding. Using the above strategy, we have shortened the life cycle of light-sensitive rice varieties to about 70 days, making it possible for several rice cultivars to pass through 4–5 generations in a single calendar year. This protocol greatly accelerates the process of breeding new varieties, and can be used in rice research to shorten genetic analysis and the construction of mapping populations. PMID:26175620

  3. Image and geometry processing with Oriented and Scalable Map.

    PubMed

    Hua, Hao

    2016-05-01

    We turn the Self-organizing Map (SOM) into an Oriented and Scalable Map (OS-Map) by generalizing the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential. Thus we can specify the orientation either in the map space or in the data space. Moreover, we associate the map's global scale with the locality of winner selection. Our model is suited for a number of graphical applications such as texture/image synthesis, surface parameterization, and solid texture synthesis. OS-Map is more generic and versatile than the task-specific algorithms for these applications. Our work reveals the overlooked strength of SOMs in processing images and geometries.
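
    A hedged illustration of an oriented SOM neighborhood (the OS-Map's exact matrix-exponential formulation is the paper's own; here a rotated anisotropic Gaussian stands in to show how an orientation can be specified in map space):

      import numpy as np

      def oriented_neighborhood(grid, winner, sigmas=(4.0, 1.0), theta=np.pi/6):
          c, s = np.cos(theta), np.sin(theta)
          R = np.array([[c, -s], [s, c]])                  # orientation in map space
          cov_inv = R @ np.diag(1.0 / np.square(sigmas)) @ R.T
          d = grid - winner                                # (n_units, 2) offsets
          return np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, cov_inv, d))

      ys, xs = np.mgrid[0:20, 0:20]
      grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
      h = oriented_neighborhood(grid, winner=np.array([10.0, 10.0]))
      # h weights the standard SOM update: w += lr * h[:, None] * (x - w)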

  4. Accelerating the Next Generation Long Read Mapping with the FPGA-Based System.

    PubMed

    Chen, Peng; Wang, Chao; Li, Xi; Zhou, Xuehai

    2014-01-01

    Comparing newly determined sequences against the subject sequences stored in databases is a critical job in bioinformatics. Fortunately, a recent survey reports that the state-of-the-art aligners are already fast enough to handle the enormous volume of short sequence reads in a reasonable time. However, for aligning the long sequence reads (>400 bp) generated by next generation sequencing (NGS) technology, present aligners are still quite inefficient. Furthermore, the challenge becomes more and more serious as the lengths and the amounts of sequence reads both keep increasing with the improvement of sequencing technology. Thus, it is extremely urgent for researchers to enhance the performance of long read alignment. In this paper, we propose a novel FPGA-based system to improve the efficiency of long read mapping. Compared to the state-of-the-art long read aligner BWA-SW, our accelerating platform achieves high performance with almost the same sensitivity. Experiments demonstrate that, for reads with lengths ranging from 512 up to 4,096 base pairs, the described system obtains a 10x-48x speedup for the bottleneck of the software. As for the whole mapping procedure, the FPGA-based platform achieves a 1.8x-3.3x speedup versus the BWA-SW aligner, reducing the alignment time from weeks to days.

  5. Accelerating the Next Generation Long Read Mapping with the FPGA-Based System.

    PubMed

    Chen, Peng; Wang, Chao; Li, Xi; Zhou, Xuehai

    2014-01-01

    Comparing newly determined sequences against the subject sequences stored in databases is a critical job in bioinformatics. Fortunately, a recent survey reports that the state-of-the-art aligners are already fast enough to handle the enormous volume of short sequence reads in a reasonable time. However, for aligning the long sequence reads (>400 bp) generated by next generation sequencing (NGS) technology, present aligners are still quite inefficient. Furthermore, the challenge becomes more and more serious as the lengths and the amounts of sequence reads both keep increasing with the improvement of sequencing technology. Thus, it is extremely urgent for researchers to enhance the performance of long read alignment. In this paper, we propose a novel FPGA-based system to improve the efficiency of long read mapping. Compared to the state-of-the-art long read aligner BWA-SW, our accelerating platform achieves high performance with almost the same sensitivity. Experiments demonstrate that, for reads with lengths ranging from 512 up to 4,096 base pairs, the described system obtains a 10x-48x speedup for the bottleneck of the software. As for the whole mapping procedure, the FPGA-based platform achieves a 1.8x-3.3x speedup versus the BWA-SW aligner, reducing the alignment time from weeks to days. PMID:26356857

  6. Preliminary map of peak horizontal ground acceleration for the Hanshin-Awaji earthquake of January 17, 1995, Japan - Description of Mapped Data Sets

    USGS Publications Warehouse

    Borcherdt, R.D.; Mark, R.K.

    1995-01-01

    The Hanshin-Awaji earthquake (also known as the Hyogo-ken Nanbu and the Great Hanshin earthquake) provided an unprecedented set of measurements of strong ground shaking. The measurements constitute the most comprehensive set of strong-motion recordings yet obtained for sites underlain by soft soil deposits of Holocene age within a few kilometers of the crustal rupture zone. The recordings, obtained on or near many important structures, provide an important new empirical data set for evaluating input ground motion levels and site amplification factors for codes and site-specific design procedures worldwide. This report describes the data used to prepare a preliminary map summarizing the strong-motion data in relation to seismicity and underlying geology (Wentworth, Borcherdt, and Mark, 1995; Figure 1, hereafter referred to as Figure 1/I). The map shows station locations, peak acceleration values, and generalized acceleration contours superimposed on pertinent seismicity and the geologic map of Japan. The map (Figure 1/I) indicates a zone of high acceleration, with ground motions throughout the zone greater than 400 gal and locally greater than 800 gal. This zone encompasses the area of most intense damage, mapped as JMA intensity level 7, which extends through Kobe City. The zone of most intense damage is parallel to, but displaced slightly from, the surface projection of the crustal rupture zone implied by aftershock locations. The zone is underlain by soft-soil deposits of Holocene age.

  7. A hybrid CPU-GPU accelerated framework for fast mapping of high-resolution human brain connectome.

    PubMed

    Wang, Yu; Du, Haixiao; Xia, Mingrui; Ren, Ling; Xu, Mo; Xie, Teng; Gong, Gaolang; Xu, Ningyi; Yang, Huazhong; He, Yong

    2013-01-01

    Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, there is a very large amount of brain imaging data that have been collected, and there are very high requirements for the computational capabilities that are used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's Correlation coefficient between any pairs of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graph properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network cost 80∼150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and have the potential to accelerate the mapping of the human brain connectome in normal and disease states.
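
    A CPU-side sketch of the network-construction step (the O(n^2) correlation is the hot spot the hybrid framework offloads to the GPU; sizes here are toy, versus 58 k voxels in the real data):

      import numpy as np

      def binary_network(timeseries, sparsity=0.001):
          # timeseries: (n_voxels, n_timepoints). Returns a 0/1 adjacency matrix.
          r = np.corrcoef(timeseries)              # the O(n^2) hot spot
          np.fill_diagonal(r, -np.inf)
          n = r.shape[0]
          n_edges = int(sparsity * n * (n - 1) / 2)
          thresh = np.sort(r[np.triu_indices(n, k=1)])[-n_edges]
          return (r >= thresh).astype(np.uint8)

      ts = np.random.randn(500, 200)               # toy voxel time series
      adj = binary_network(ts, sparsity=0.01)
      print(adj.sum() // 2, "edges")               # feed into graph metrics next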

  8. A Hybrid CPU-GPU Accelerated Framework for Fast Mapping of High-Resolution Human Brain Connectome

    PubMed Central

    Ren, Ling; Xu, Mo; Xie, Teng; Gong, Gaolang; Xu, Ningyi; Yang, Huazhong; He, Yong

    2013-01-01

    Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, there is a very large amount of brain imaging data that have been collected, and there are very high requirements for the computational capabilities that are used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson’s Correlation coefficient between any pairs of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graph properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network cost 80∼150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and have the potential to accelerate the mapping of the human brain connectome in normal and disease states. PMID:23675425

  9. Analyzing Collision Processes with the Smartphone Acceleration Sensor

    ERIC Educational Resources Information Center

    Vogt, Patrik; Kuhn, Jochen

    2014-01-01

    It has been illustrated several times how the built-in acceleration sensors of smartphones can be used gainfully for quantitative experiments in school and university settings (see the overview in Ref. 1 ). The physical issues in that case are manifold and apply, for example, to free fall, radial acceleration, several pendula, or the exploitation…

  10. An Unusual Process of Accelerated Weathering of a Marly Limestone

    NASA Astrophysics Data System (ADS)

    Ercoli, L.; Rizzo, G.; Algozzini, G.

    2003-04-01

    This work deals with a singular case of stone deterioration that occurred during the restoration of the Cathedral of Cefalù. In particular, a significant process of stone decohesion started after a consolidation treatment on ashlars of the external face of the cloister portico. A study was carried out to characterize the stone and to investigate the deterioration process. Petrographic, chemical and physical analyses were performed on samples taken from the wall. The results indicate that the medieval monument was built using a Pliocene marly limestone, called "trubo", quarried from outcrops in the environs of Cefalù. The rock is soft and uniformly cemented. The carbonate fraction of the rock is due to foraminifera shells; the rock also contains detrital quartz, feldspar and glauconite. The clay minerals, mainly illite and montmorillonite, are widespread in the rock in the form of thin layers. The use of such a stone in a building of notable artistic value is definitely unusual. In fact, "trubo" is a rock subject to natural decay because of its mineralogical composition and fabric; as an effect of natural weathering, the rock in the outcrops disaggregates uniformly, producing silt. In the cloister this effect was magnified by extreme environmental conditions (marine spray, severe excursions of both relative humidity and temperature). Furthermore, after removal of soluble salts and subsequent consolidation with ethyl silicate, a significant acceleration of the decay process was observed, producing friable scales that detach to a depth of about 3 cm into the ashlars. The stone appeared corroded and uneven. Experimental tests were performed in the laboratory in order to reveal any incompatibility between the stone's composition and the treatments carried out, which are otherwise among the most generally adopted in restoration interventions.

  11. Monitoring oil displacement processes with k-t accelerated spin echo SPI.

    PubMed

    Li, Ming; Xiao, Dan; Romero-Zerón, Laura; Balcom, Bruce J

    2016-03-01

    Magnetic resonance imaging (MRI) is a robust tool to monitor oil displacement processes in porous media. Conventional MRI measurement times can be lengthy, which hinders monitoring time-dependent displacements. Knowledge of the oil and water microscopic distribution is important because their pore scale behavior reflects the oil trapping mechanisms. The oil and water pore scale distribution is reflected in the magnetic resonance T2 signal lifetime distribution. In this work, a pure phase-encoding MRI technique, spin echo SPI (SE-SPI), was employed to monitor oil displacement during water flooding and polymer flooding. A k-t acceleration method, with low-rank matrix completion, was employed to improve the temporal resolution of the SE-SPI MRI measurements. Comparison to conventional SE-SPI T2 mapping measurements revealed that the k-t accelerated measurement was more sensitive and provided higher-quality results. It was demonstrated that the k-t acceleration decreased the average measurement time from 66.7 to 20.3 min in this work. A perfluorinated oil, containing no ¹H, and H2O brine were employed to distinguish oil and water phases in model flooding experiments. High-quality 1D water saturation profiles were acquired from the k-t accelerated SE-SPI measurements. Spatially and temporally resolved T2 distributions were extracted from the profile data. The shift in the ¹H T2 distribution of water in the pore space to longer lifetimes during water flooding and polymer flooding is consistent with increased water content in the pore space. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26626141
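
    The abstract does not spell out the completion algorithm, so the following is a generic singular-value-thresholding sketch of low-rank completion of undersampled k-t data in NumPy; the function name, the threshold tau, and the iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def svt_complete(y, mask, tau=5.0, n_iter=100):
    """Generic singular-value-thresholding matrix completion.

    y:    (k, t) measured k-t matrix, zero-filled where unsampled.
    mask: boolean array, True where y was actually sampled.
    """
    x = np.zeros_like(y)
    for _ in range(n_iter):
        # Enforce data consistency at the sampled k-t locations.
        x[mask] = y[mask]
        # Shrink singular values to promote a low-rank solution.
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        x = (u * np.maximum(s - tau, 0.0)) @ vt
    return x
```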

  13. Anomalous/Fractional Diffusion in Particle Acceleration Processes.

    NASA Astrophysics Data System (ADS)

    Bian, Nicolas

    2016-07-01

    This talk reviews a number of theoretical aspects of the relation between stochastic acceleration and anomalous/fractional transport of particles. Anomalous velocity-space diffusion is required within any stochastic acceleration scenario to explain the formation of the ubiquitous power-law tail of non-thermal particles, as observed e.g. in the accelerated distribution of electrons during solar flares. I will establish a classification scheme for stochastic acceleration models involving turbulence in magnetized plasmas. This classification takes into account both the properties of the accelerating electromagnetic field and the nature of the spatial transport (possibly fractional) of charged particles in the acceleration region. I will also discuss recent attempts to obtain spatially non-local and fractional diffusion equations directly from first principles, starting either from the Fokker-Planck equation in the large mean-free-path regime or from the Boltzmann equation involving velocity-space relaxation toward the kappa distribution instead of the standard Maxwellian distribution.

  14. Signal processing for imaging and mapping ladar

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Tolt, Gustav

    2011-11-01

    The new generation of laser-based FLASH 3D imaging sensors enables data collection at video rate. This opens up real-time data analysis but also places demands on the signal processing. In this paper the possibilities and challenges of this new data type are discussed. The commonly used focal-plane-array detectors produce range estimates that vary with the target's surface reflectance and target range, and our experience is that the built-in signal processing may not compensate fully for that. We propose a simple adjustment that can be used even if some sensor parameters are not known. The cost of the instantaneous image collection is, compared to scanning laser radar systems, lower range accuracy. By gathering range information from several frames, the geometrical information of the target can be obtained. We also present an approach for using range data to remove foreground clutter in front of a target. Further, we illustrate how range data enable target classification in near real-time and that the results can be improved if several frames are co-registered. Examples using data from forest and maritime scenes are shown.
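
    As a toy illustration of the clutter-removal idea, the sketch below simply gates out returns closer than an assumed minimum range; the paper does not specify its method at this level of detail, so treat the names and the gating criterion as assumptions.

```python
import numpy as np

def remove_foreground(points, min_range):
    """Drop 3D points closer than min_range (foreground clutter).

    points:    (n, 3) array of ladar returns in sensor coordinates.
    min_range: range gate, in the same units as the coordinates.
    """
    r = np.linalg.norm(points, axis=1)
    return points[r >= min_range]

cloud = np.random.default_rng(0).uniform(0.0, 100.0, size=(1000, 3))
target_only = remove_foreground(cloud, min_range=40.0)
```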

  15. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally very demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are required to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations, where large sparse linear systems have to be solved in parallel with advanced numerical techniques, are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867
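
    The performance-critical kernel in such simulations is the repeated solution of large sparse linear systems. A minimal SciPy sketch of that inner step follows; the matrix is a toy stand-in, not a CARP-assembled bidomain operator, and the library would be swapped for GPU-resident solvers in the setting the paper describes.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Toy stand-in for an FEM-assembled bidomain operator: a symmetric
# positive-definite tridiagonal matrix (Laplacian plus mass term).
n = 10_000
A = diags([-1.0, 2.1, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Conjugate gradients: each iteration is dominated by a sparse
# matrix-vector product, which is exactly what GPU ports accelerate.
x, info = cg(A, b)
assert info == 0  # info == 0 signals convergence
```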

  16. Accelerating molecular docking calculations using graphics processing units.

    PubMed

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2011-04-25

    The generation of molecular conformations and the evaluation of interaction potentials are common tasks in molecular modeling applications, particularly in protein-ligand or protein-protein docking programs. In this work, we present a GPU-accelerated approach capable of speeding up these tasks considerably. For the evaluation of interaction potentials in the context of rigid protein-protein docking, the GPU-accelerated approach reached speedup factors of more than 50 compared to an optimized CPU-based implementation. Treating the ligand and donor groups in the protein binding site as flexible, speedup factors of up to 16 can be observed in the evaluation of protein-ligand interaction potentials. Additionally, we introduce a parallel version of our protein-ligand docking algorithm PLANTS that can take advantage of this GPU-accelerated scoring function evaluation. We compared the GPU-accelerated parallel version to the same algorithm running on the CPU and also to the highly optimized sequential CPU-based version. Depending on ligand size and the number of rotatable bonds, speedup factors of up to 10 and 7, respectively, can be observed. Finally, a fitness landscape analysis in the context of rigid protein-protein docking was performed. Using a systematic grid-based search methodology, the GPU-accelerated version outperformed the CPU-based version with speedup factors of up to 60. PMID:21434638
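
    The inner loop being offloaded here is a pairwise interaction-potential evaluation. The sketch below uses a generic Lennard-Jones form as a stand-in; it is not PLANTS' actual empirical scoring function, and all names and parameter values are illustrative.

```python
import numpy as np

def pairwise_energy(lig_xyz, prot_xyz, eps=0.1, sigma=3.5):
    """Vectorized Lennard-Jones-style protein-ligand energy.

    lig_xyz:  (n_lig, 3) ligand atom coordinates (angstroms).
    prot_xyz: (n_prot, 3) binding-site atom coordinates.
    """
    # All pairwise distances at once via broadcasting: (n_lig, n_prot).
    d = np.linalg.norm(lig_xyz[:, None, :] - prot_xyz[None, :, :], axis=-1)
    sr6 = (sigma / d) ** 6
    return float(np.sum(4.0 * eps * (sr6**2 - sr6)))

rng = np.random.default_rng(0)
e = pairwise_energy(rng.uniform(0, 10, (30, 3)), rng.uniform(0, 10, (400, 3)))
```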

  17. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    SciTech Connect

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Serianni, G.; Nakano, H.; Takeiri, Y.; Tsumori, K.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  18. Intelligent process mapping through systematic improvement of heuristics

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.

    1992-01-01

    The present system for automatic learning/evaluation of novel heuristic methods applicable to the mapping of communication-process sets on a computer network has its basis in the testing of a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new postgame-analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.

  19. Learning process mapping heuristics under stochastic sampling overheads

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Wah, Benjamin W.

    1991-01-01

    A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to find the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. The statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.

  20. Heralded processes on continuous-variable spaces as quantum maps

    SciTech Connect

    Ferreyrol, Franck; Spagnolo, Nicolò; Blandino, Rémi; Barbieri, Marco; Tualle-Brouri, Rosa

    2014-12-04

    Heralding processes, which only work when a measurement on part of the system gives the desired result, are particularly interesting for continuous variables. They permit non-Gaussian transformations that are necessary for several continuous-variable quantum information tasks. However, while maps and quantum process tomography are commonly used to describe quantum transformations in discrete-variable spaces, they are much rarer in the continuous-variable domain. Moreover, no convenient tool for representing maps in a way better adapted to the particularities of continuous variables has yet been explored. In this paper we try to fill this gap by presenting such a tool.

  1. Quantum stochastic processes for maps on Hilbert C*-modules

    SciTech Connect

    Heo, Jaeseong; Ji, Un Cig

    2011-05-15

    We discuss pairs (φ, Φ) of maps, where φ is a map between C*-algebras and Φ is a φ-module map between Hilbert C*-modules, which are a generalization of representations of Hilbert C*-modules. A covariant version of Stinespring's theorem for such a pair (φ, Φ) is established, and quantum stochastic processes constructed from pairs ({φ_t}, {Φ_t}) of families of such maps are studied. We prove that the quantum stochastic process J = {J_t} constructed from a φ-quantum dynamical semigroup Φ = {Φ_t} is a j-map for the quantum stochastic process j = {j_t} constructed from the given quantum dynamical semigroup φ = {φ_t}, and that J is covariant if the φ-quantum dynamical semigroup Φ is covariant.

  2. Accelerators for E-beam and X-ray processing

    NASA Astrophysics Data System (ADS)

    Auslender, V. L.; Bryazgin, A. A.; Faktorovich, B. L.; Gorbunov, V. A.; Kokin, E. N.; Korobeinikov, M. V.; Krainov, G. S.; Lukin, A. N.; Maximov, S. A.; Nekhaev, V. E.; Panfilov, A. D.; Radchenko, V. N.; Tkachenko, V. O.; Tuvik, A. A.; Voronin, L. A.

    2002-03-01

    In recent years the demand for pasteurization and disinsection of various food products (meat, chicken, seafood, vegetables, fruits, etc.) has increased. Treating these products on an industrial scale requires powerful electron accelerators with energies of 5-10 MeV and beam power of at least 50 kW. The report describes the ILU accelerators with energies up to 10 MeV and beam power up to 150 kW. The different irradiation schemes in electron beam and X-ray modes for various products are described. The designs of the X-ray converter and the 90° beam bending system are also given.

  3. New high-current Dynamitron accelerators for electron beam processing

    NASA Astrophysics Data System (ADS)

    Cleland, M. R.; Thompson, C. C.; Saito, H.; Lisanti, T. F.; Burgess, R. G.; Malone, H. F.; Loby, R. J.; Galloway, R. A.

    1993-06-01

    The material throughput capabilities of RDI's new 550 keV and 800 keV Dynamitron® accelerators have been enhanced by increasing their beam current ratings from 100 mA to 160 mA. Future requirements up to 200 mA have been anticipated in the designs. The high-voltage power supply, beam scanner and beam window have all been modified to accommodate the higher current ratings. A new programmable control system has also been developed. The basic design concepts are described and performance data are presented in this paper.

  4. Next generation tools to accelerate the synthetic biology process.

    PubMed

    Shih, Steve C C; Moraes, Christopher

    2016-05-16

    Synthetic biology follows the traditional engineering paradigm of designing, building, testing and learning to create new biological systems. While such approaches have enormous potential, major challenges still exist in this field, including increasing the speed at which this workflow can be performed. Here, we present recently developed microfluidic tools that can be used to automate the synthetic biology workflow with the goal of increasing the likelihood of producing desired functionalities. With programmability, automation, and robustness, the integration of microfluidics and synthetic biology has the potential to accelerate advances in areas such as bioenergy, health, and biomaterials. PMID:27146265

  5. Mapping Perinatal Nursing Process Measurement Concepts to Standard Terminologies.

    PubMed

    Ivory, Catherine H

    2016-07-01

    The use of standard terminologies is an essential component for using data to inform practice and conduct research; perinatal nursing data standardization is needed. This study explored whether 76 distinct process elements important for perinatal nursing were present in four American Nurses Association-recognized standard terminologies. The 76 process elements were taken from a valid paper-based perinatal nursing process measurement tool. Using terminology-supported browsers, the elements were manually mapped to the selected terminologies by the researcher. A five-member expert panel validated 100% of the mapping findings. The majority of the process elements (n = 63, 83%) were present in SNOMED-CT, 28% (n = 21) in LOINC, 34% (n = 26) in ICNP, and 15% (n = 11) in CCC. SNOMED-CT and LOINC are terminologies currently recommended for use to facilitate interoperability in the capture of assessment and problem data in certified electronic medical records. Study results suggest that SNOMED-CT and LOINC contain perinatal nursing process elements and are useful standard terminologies to support perinatal nursing practice in electronic health records. Terminology mapping is the first step toward incorporating traditional paper-based tools into electronic systems. PMID:27081756

  7. Subcortical mapping of calculation processing in the right parietal lobe.

    PubMed

    Della Puppa, Alessandro; De Pellegrin, Serena; Lazzarini, Anna; Gioffrè, Giorgio; Rustemi, Oriela; Cagnin, Annachiara; Scienza, Renato; Semenza, Carlo

    2015-05-01

    Preservation of calculation processing in brain surgery is crucial for patients' quality of life. Over the last decade, surgical electrostimulation was used to identify and preserve the cortical areas involved in such processing. Conversely, subcortical connectivity among different areas implicated in this function remains unclear, and the role of surgery in this domain has not been explored so far. The authors present the first 2 cases in which the subcortical functional sites involved in calculation were identified during right parietal lobe surgery. Two patients affected by a glioma located in the right parietal lobe underwent surgery with the aid of MRI neuronavigation. No calculation deficits were detected during preoperative assessment. Cortical and subcortical mapping were performed using a bipolar stimulator. The current intensity was determined by progressively increasing the amplitude by 0.5-mA increments (from a baseline of 1 mA) until a sensorimotor response was elicited. Then, addition and multiplication calculation tasks were administered. Corticectomy was performed according to both the MRI neuronavigation data and the functional findings obtained through cortical mapping. Direct subcortical electrostimulation was repeatedly performed during tumor resection. Subcortical functional sites for multiplication and addition were detected in both patients. Electrostimulation interfered with calculation processing during cortical mapping as well. Functional sites were spared during tumor removal. The postoperative course was uneventful, and calculation processing was preserved. Postoperative MRI showed complete resection of the tumor. The present preliminary study shows for the first time how functional mapping can be a promising method to intraoperatively identify the subcortical functional sites involved in calculation processing. This report therefore supports direct electrical stimulation as a promising tool to improve the current knowledge on…

  8. Accelerate!

    PubMed

    Kotter, John P

    2012-11-01

    The old ways of setting and implementing strategy are failing us, writes the author of Leading Change, in part because we can no longer keep up with the pace of change. Organizational leaders are torn between trying to stay ahead of increasingly fierce competition and needing to deliver this year's results. Although traditional hierarchies and managerial processes--the components of a company's "operating system"--can meet the daily demands of running an enterprise, they are rarely equipped to identify important hazards quickly, formulate creative strategic initiatives nimbly, and implement them speedily. The solution Kotter offers is a second system--an agile, networklike structure--that operates in concert with the first to create a dual operating system. In such a system the hierarchy can hand off the pursuit of big strategic initiatives to the strategy network, freeing itself to focus on incremental changes to improve efficiency. The network is populated by employees from all levels of the organization, giving it organizational knowledge, relationships, credibility, and influence. It can liberate information from silos with ease. It has a dynamic structure free of bureaucratic layers, permitting a level of individualism, creativity, and innovation beyond the reach of any hierarchy. The network's core is a guiding coalition that represents each level and department in the hierarchy, with a broad range of skills. Its drivers are members of a "volunteer army" who are energized by and committed to the coalition's vividly formulated, high-stakes vision and strategy. Kotter has helped eight organizations, public and private, build dual operating systems over the past three years. He predicts that such systems will lead to long-term success in the 21st century--for shareholders, customers, employees, and companies themselves. PMID:23155997

  10. Novel process windows for enabling, accelerating, and uplifting flow chemistry.

    PubMed

    Hessel, Volker; Kralisch, Dana; Kockmann, Norbert; Noël, Timothy; Wang, Qi

    2013-05-01

    Novel Process Windows make use of process conditions that are far from conventional practices. This involves the use of high temperatures, high pressures, high concentrations (solvent-free), new chemical transformations, explosive conditions, and process simplification and integration to boost synthetic chemistry on both the laboratory and production scale. Such harsh reaction conditions can be safely reached in microstructured reactors due to their excellent transport intensification properties. This Review discusses the different routes towards Novel Process Windows and provides several examples for each route grouped into different classes of chemical and process-design intensification.

  11. Process mapping as a tool for home health network analysis.

    PubMed

    Pluto, Delores M; Hirshorn, Barbara A

    2003-01-01

    Process mapping is a qualitative tool that allows service providers, policy makers, researchers, and other concerned stakeholders to get a "bird's eye view" of a home health care organizational network or a very focused, in-depth view of a component of such a network. It can be used to share knowledge about community resources directed at the older population, identify gaps in resource availability and access, and promote on-going collaborative interactions that encourage systemic policy reassessment and programmatic refinement. This article is a methodological description of process mapping, which explores its utility as a practice and research tool, illustrates its use in describing service-providing networks, and discusses some of the issues that are key to successfully using this methodology.

  13. Conceptual framework for the mapping of management process with information technology in a business process.

    PubMed

    Rajarathinam, Vetrickarthick; Chellappa, Swarnalatha; Nagarajan, Asha

    2015-01-01

    This study on a component framework reveals the importance of mapping management processes to technology in a business environment. We define ERP as a software tool that has to provide a business solution, but not necessarily an integration of all departments. Any business process can be classified as a management, operational, or supportive process. We went through the entire management process and were able to identify the influencing components to be mapped to a technology for a business solution. Governance, strategic management, and decision making are thoroughly discussed, and the need to map these components to the ERP is clearly explained. We also suggest that implementing this framework might reduce ERP failures, in particular by rectifying ERP misfit. PMID:25861688

  14. Computational Tools for Accelerating Carbon Capture Process Development

    SciTech Connect

    Miller, David

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  15. Mapping Pixel Windows To Vectors For Parallel Processing

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    1996-01-01

    Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from square subarray (window) of n x n pixels within electronic imaging device containing np x np array of pixels to linear array of n² input terminals of electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.
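
    The wiring implements a fixed index map from window position to vector slot: window pixel (i, j) lands at vector index i * n + j. A NumPy sketch of the same mapping in software, with illustrative names:

```python
import numpy as np

def window_to_vector(image, row, col, n):
    """Map the n x n window at (row, col) to a length-n**2 vector.

    Pixel (i, j) of the window lands at vector index i * n + j,
    the wiring the transistor-switch arrays realize in hardware
    (all n**2 connections made simultaneously rather than serially).
    """
    return image[row:row + n, col:col + n].reshape(n * n)

image = np.arange(64).reshape(8, 8)   # toy 8 x 8 stand-in for np x np
v = window_to_vector(image, 2, 3, 3)  # 3 x 3 window -> 9-element vector
```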

  16. Biotherapeutics in orthopaedic medicine: accelerating the healing process?

    PubMed

    Puleo, David

    2003-01-01

    Musculoskeletal injuries have a significant human and financial impact on society. In particular, fractures that lead to delayed union or even nonunion represent a serious clinical challenge for which few treatment options are available. The multiple surgical procedures often needed are associated with patient morbidity and reduced quality of life. Biotechnological advances have made possible a host of potential treatments for enhancing and accelerating the repair of bone. By stimulating the body's own healing mechanisms, clinical outcomes may be improved while also containing procedural costs. Biotherapeutics may take the form of proteins, genes or cells that can be used to treat the injury. Protein biotherapeutics have received the greatest attention. Using recombinant DNA techniques, growth factors that play important roles in bone development and repair are being produced. By delivering exogenous growth factors to the site of injury in an appropriate manner, bone formation can be stimulated. Although individual proteins have been the primary focus of investigation, combinations of biomolecules can have additive, and perhaps synergistic, effects. Alternatively, genes coding for osteotropic growth factors can be delivered to the site of injury. Expression of the gene effectively results in localised delivery of the growth factor. Delivery of cells having osteogenic potential can also result in bone formation. Furthermore, it may be possible to obtain additional benefits by combining biotherapeutic approaches, such as by introducing cells genetically modified to overexpress therapeutic proteins of interest. Although biotherapeutics have great potential for stimulating bone repair, only a limited number of treatments have been approved by governmental regulatory agencies for clinical use. Bone morphogenetic activity was initially described in 1965, but not until 2001 and 2002 did two protein biotherapeutics, utilising bone morphogenetic proteins 2 and 7, receive regulatory approval.

  17. Estimating and mapping ecological processes influencing microbial community assembly

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  20. Acceleration of the GAMESS-UK electronic structure package on graphical processing units.

    PubMed

    Wilkinson, Karl A; Sherwood, Paul; Guest, Martyn F; Naidoo, Kevin J

    2011-07-30

    The approach used to calculate the two-electron integrals by many electronic structure packages, including the Generalized Atomic and Molecular Electronic Structure System-UK (GAMESS-UK), was designed for CPU-based compute units. We redesigned the two-electron compute algorithm for acceleration on a graphical processing unit (GPU). We report the acceleration strategy and illustrate it on the (ss|ss) type integrals. This strategy is general for Fortran-based codes and uses the Accelerator compiler from Portland Group International and GPU-based accelerators from Nvidia. The evaluation of (ss|ss) type integrals within calculations using Hartree-Fock ab initio methods and density functional theory is accelerated by single and quad GPU hardware systems by factors of 43 and 153, respectively. The overall speedup for a single self-consistent field cycle is at least a factor of eight on a single GPU compared with a single CPU. PMID:21541963

  1. Using pattern enumeration to accelerate process development and ramp yield

    NASA Astrophysics Data System (ADS)

    Zhuang, Linda; Pang, Jenny; Xu, Jessy; Tsai, Mengfeng; Wang, Amy; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua

    2016-03-01

    During the process setup phase of a new technology node, foundries do not initially have enough product chip designs to conduct exhaustive process development. Different operational teams use manually designed simple test keys to set up their process flows and recipes. When the very first version of the design rule manual (DRM) is ready, foundries enter the process development phase, where new experimental design data is manually created based on these design rules. However, these IP/test keys contain very uniform or simple design structures. Such designs normally do not contain the critical structures or process-unfriendly patterns that pass design rule checks but are found to be less manufacturable. A method is therefore desired to generate, at the development stage, exhaustive test patterns allowed by the design rules, in order to verify the gap between design rules and process. This paper presents a novel method for generating test key patterns that contain known problematic patterns as well as any constructs that designers could possibly draw based on current design rules. The enumerated test key patterns will contain the most critical design structures allowed by any particular design rule. A layout profiling method is used to analyze design chips in order to find potential weak points on new incoming products so that the fab can take preemptive action to avoid yield loss. This is achieved by comparing different products and leveraging the knowledge learned from previously manufactured chips to find possible yield detractors.

  2. Refining each process step to accelerate the development of biorefineries

    DOE PAGES

    Chandra, Richard P.; Ragauskas, Art J.

    2016-06-21

    Research over the past decade has been mainly focused on overcoming hurdles in the pretreatment, enzymatic hydrolysis, and fermentation steps of biochemical processing. Pretreatments have improved significantly in their ability to fractionate and recover the cellulose, hemicellulose, and lignin components of biomass while producing substrates containing carbohydrates that can be easily broken down by hydrolytic enzymes. There is a rapid movement towards pretreatment processes that incorporate mechanical treatments that make use of existing infrastructure in the pulp and paper industry, which has experienced a downturn in its traditional markets. Enzyme performance has also made great strides with breakthrough developments in nonhydrolytic protein components, such as lytic polysaccharide monooxygenases, as well as the improvement of enzyme cocktails. The fermentability of pretreated and hydrolyzed sugar streams has been improved through strategies such as the use of reducing agents for detoxification, strain selection, and strain improvements. Although significant progress has been made, tremendous challenges still remain to advance each step of biochemical conversion, especially when processing woody biomass. In addition to technical and scale-up issues within each step of the bioconversion process, biomass feedstock supply and logistics challenges still remain at the forefront of biorefinery research.

  3. Accelerated space object tracking via graphic processing unit

    NASA Astrophysics Data System (ADS)

    Jia, Bin; Liu, Kui; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    In this paper, a hybrid Monte Carlo Gauss mixture Kalman filter is proposed for the continuous orbit estimation problem. Specifically, the graphics processing unit (GPU) aided Monte Carlo method is used to propagate the uncertainty of the estimate when observations are not available, and the Gauss mixture Kalman filter is used to update the estimate when observation sequences are available. A typical space object tracking problem using ground radar is used to test the performance of the proposed algorithm, which is compared with that of the popular cubature Kalman filter (CKF). The simulation results show that the ordinary CKF diverges in 5 observation periods. In contrast, the proposed hybrid Monte Carlo Gauss mixture Kalman filter achieves satisfactory performance in all observation periods. In addition, by using the GPU, the computational time is reduced by a factor of more than 100 compared with a conventional central processing unit (CPU).
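
    A skeleton of the hybrid scheme, with a 1D toy dynamics standing in for orbital mechanics and a single Gaussian standing in for the Gauss mixture, might look as follows; all names and numbers are illustrative, and the ensemble propagation loop is the part the GPU would execute.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(x):
    # Stand-in nonlinear dynamics (real case: orbital mechanics).
    return x + 0.1 * np.sin(x) + rng.normal(0.0, 0.01, x.shape)

# Monte Carlo phase: push an ensemble through the dynamics while
# no radar observation is available.
particles = rng.normal(1.0, 0.1, 100_000)
for _ in range(50):
    particles = propagate(particles)

# Observation arrives: moment-match the ensemble to a Gaussian and
# apply a scalar Kalman update (single-component stand-in for the
# Gauss mixture update of the paper).
m, p = particles.mean(), particles.var()
z, r = 1.8, 0.05**2          # measurement and its noise variance
k = p / (p + r)              # Kalman gain (measurement model H = 1)
m, p = m + k * (z - m), (1 - k) * p
```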

  4. Optimization of accelerator parameters using normal form methods on high-order transfer maps

    SciTech Connect

    Snopok, Pavel

    2007-05-01

    Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of systematic skew quadrupole errors in dipoles; (b) calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (the general restrictions for this are not much stronger than the typical restrictions imposed on the behavior of particles in an accelerator), then the motion in the new coordinates has a very clean representation, allowing one to extract more information about the dynamics of particles, and they are very convenient for purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. The algorithms used to solve the problems are specific to collider rings and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems, as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented…
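
    For orientation, the central object here can be written compactly; the notation below is standard accelerator-physics usage and is an assumption, not a quotation from the thesis.

```latex
% One-turn map M factored through its normal form N:
% in normal form coordinates the motion is a pure rotation in the
% action-angle pair (J, phi), with an amplitude-dependent tune.
\mathcal{M} = \mathcal{A} \circ \mathcal{N} \circ \mathcal{A}^{-1},
\qquad
\mathcal{N} : (J, \phi) \mapsto \bigl(J,\ \phi + 2\pi \nu(J)\bigr),
\qquad
\nu(J) = \nu_0 + \nu_1 J + \nu_2 J^2 + \dots
```

    The nonlinear tune shift with amplitude mentioned above corresponds to the coefficients ν₁, ν₂, …, which objective functions of the kind described can target directly.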

  5. Accelerating radio astronomy cross-correlation with graphics processing units

    NASA Astrophysics Data System (ADS)

    Clark, M. A.; LaPlante, P. C.; Greenhill, L. J.

    2013-05-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from 'large-N' arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on NVIDIA's Fermi architecture, sustaining up to 79% of the peak single-precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared with application-specific integrated circuit (ASIC) and field programmable gate array (FPGA) implementations have the potential to greatly shorten the cycle of correlator development and deployment, for cases where some power-consumption penalty can be tolerated.
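
    The X-engine's arithmetic is a complex multiply-accumulate over all antenna pairs. A NumPy sketch of one integration follows; dimensions are illustrative, and the paper implements this as a tiled CUDA kernel rather than a single dense product.

```python
import numpy as np

def x_engine(v):
    """Correlate all antenna pairs over one integration.

    v: (n_antennas, n_samples) complex voltage time series.
    Returns the (n_antennas, n_antennas) visibility matrix with
    X[i, j] = sum_t v[i, t] * conj(v[j, t]).
    """
    return v @ v.conj().T

rng = np.random.default_rng(1)
v = rng.normal(size=(64, 4096)) + 1j * rng.normal(size=(64, 4096))
vis = x_engine(v)  # Hermitian: only the upper triangle is unique
```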

  6. In-situ diagnostics and degradation mapping of a mixed-mode accelerated stress test for proton exchange membranes

    NASA Astrophysics Data System (ADS)

    Lai, Yeh-Hung; Fly, Gerald W.

    2015-01-01

    With increasing availability of more durable membrane materials for proton exchange membrane fuel cells, there is a need for a more stressful test that combines chemical and mechanical stressors to enable accelerated screening of promising membrane candidates. Equally important is the need for in-situ diagnostic methods with sufficient spatial resolution that can provide insights into how membranes degrade to facilitate the development of durable fuel cell systems. In this article, we report an accelerated membrane stress test and a degradation diagnostic method that satisfy both needs. By applying high-amplitude cycles of electrical load to a fuel cell fed with low-RH reactant gases, a wide range of mechanical and chemical stressful conditions can be created within the cell which leads to rapid degradation of a mechanically robust Ion Power™ N111-IP membrane. Using an in-situ shorting/crossover diagnostic method on a segmented fuel cell fixture that provides 100 local current measurements, we are able to monitor the progression and map the degradation modes of shorting, thinning, and crossover leak over the entire membrane. Results from this test method have been validated by conventional metrics of fluoride release rates, physical crossover leak rates, pinhole mapping, and cross-sectional measurements.

  7. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    SciTech Connect

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  10. Accelerating the scientific exploration process with scientific workflows

    NASA Astrophysics Data System (ADS)

    Altintas, Ilkay; Barney, Oscar; Cheng, Zhengang; Critchlow, Terence; Ludaescher, Bertram; Parker, Steve; Shoshani, Arie; Vouk, Mladen

    2006-09-01

    Although an increasing amount of middleware has emerged in the last few years to achieve remote data access, distributed job execution, and data management, orchestrating these technologies with minimal overhead still remains a difficult task for scientists. Scientific workflow systems improve this situation by creating interfaces to a variety of technologies and automating the execution and monitoring of the workflows. Workflow systems provide domain-independent customizable interfaces and tools that combine different tools and technologies along with efficient methods for using them. As simulations and experiments move into the petascale regime, the orchestration of long running data and compute intensive tasks is becoming a major requirement for the successful steering and completion of scientific investigations. A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions of a scientific problem. Kepler is a cross-project collaboration, co-founded by the SciDAC Scientific Data Management (SDM) Center, whose purpose is to develop a domain-independent scientific workflow system. It provides a workflow environment in which scientists design and execute scientific workflows by specifying the desired sequence of computational actions and the appropriate data flow, including required data transformations, between these steps. Currently deployed workflows range from local analytical pipelines to distributed, high-performance and high-throughput applications, which can be both data- and compute-intensive. The scientific workflow approach offers a number of advantages over traditional scripting-based approaches, including ease of configuration, improved reusability and maintenance of workflows and components (called actors), automated provenance management, "smart" re-running of different versions of workflow instances, on-the-fly updateable parameters…

  11. Using Qualitative Observation To Document Group Processes in Accelerated Schools Training: Techniques and Results.

    ERIC Educational Resources Information Center

    McFarland, Katherine; Batten, Constance

    This paper describes the use of qualitative observation techniques for gathering and analyzing data related to group processes during an Accelerated Schools Model training session. The purposes of this research were to observe the training process in order to better facilitate present continuation and future training, to develop questions for…

  12. Accelerator Production of Tritium project process waste assessment

    SciTech Connect

    Carson, S.D.; Peterson, P.K.

    1995-09-01

    DOE has made a commitment to compliance with all applicable environmental regulatory requirements. It is therefore important to consider and design all tritium supply alternatives so that they can comply with these requirements. The management of waste is an integral part of this activity, and it is necessary to estimate the quantities and specific types of waste that will be generated by all tritium supply alternatives. A thorough assessment of waste streams includes waste characterization, quantification, and the identification of treatment and disposal options. The waste assessment for APT has been covered in two reports. The first was a process waste assessment (PWA) that identified and quantified the waste streams associated with both target designs and fulfilled the requirements of APT Work Breakdown Structure (WBS) Item 5.5.2.1. This second report is an expanded version that includes all of the data of the first, plus an assessment of treatment and disposal options for each waste stream identified there. The latter information was initially planned to be issued as a separate Waste Treatment and Disposal Options Assessment Report (WBS Item 5.5.2.2).

  13. Alginate-hyaluronan composite hydrogels accelerate wound healing process.

    PubMed

    Catanzano, O; D'Esposito, V; Acierno, S; Ambrosio, M R; De Caro, C; Avagliano, C; Russo, P; Russo, R; Miro, A; Ungaro, F; Calignano, A; Formisano, P; Quaglia, F

    2015-10-20

    In this paper we propose polysaccharide hydrogels combining alginate (ALG) and hyaluronan (HA) as a biofunctional platform for dermal wound repair. Hydrogels produced by internal gelation were homogeneous and easy to handle. Rheological evaluation of the gelation kinetics of ALG/HA mixtures at different ratios clarified the effect of HA on the ALG cross-linking process. Disk-shaped hydrogels at different ALG/HA ratios were characterized for morphology, homogeneity, and mechanical properties. Results suggest that, although the presence of HA significantly slows down gelation kinetics, the concentration of cross-links reached at the end of gelation is scarcely affected. The in vitro activity of ALG/HA dressings was tested on adipose-derived multipotent adult stem cells (Ad-MSC) and an immortalized keratinocyte cell line (HaCaT). Hydrogels did not interfere with cell viability in either cell line, but significantly promoted gap closure in a scratch assay at early (1 day) and late (5 days) stages as compared to hydrogels made of ALG alone (p<0.01 and 0.001 for Ad-MSC and HaCaT, respectively). In vivo wound healing studies, conducted on a rat excisional wound model, indicated that after 5 days ALG/HA hydrogels significantly promoted wound closure as compared to ALG ones (p<0.001). Overall, the results demonstrate that the integration of HA into a physically cross-linked ALG hydrogel can be a versatile strategy to promote wound healing that can be easily translated into a clinical setting.

  14. Gaussian process style transfer mapping for historical Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Feng, Jixiong; Peng, Liangrui; Lebourgeois, Franck

    2015-01-01

    Historical Chinese character recognition is important for large-scale historical document digitization, but is a very challenging problem due to the lack of labeled training samples. This paper proposes a novel non-linear transfer learning method, namely Gaussian Process Style Transfer Mapping (GP-STM). GP-STM extends traditional linear Style Transfer Mapping (STM) by using Gaussian processes and kernel methods. With GP-STM, existing printed Chinese character samples are used to help the recognition of historical Chinese characters. To demonstrate this framework, we compare feature extraction methods, train a modified quadratic discriminant function (MQDF) classifier on printed Chinese character samples, and implement the GP-STM model on Dunhuang historical documents. Various kernels and parameters are explored, and the impact of the number of training samples is evaluated. Experimental results show that accuracy increases by nearly 15 percentage points (from 42.8% to 57.5%) using GP-STM, an improvement of more than 8 percentage points (from 49.2% to 57.5%) over the STM approach.
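
    The GP-STM details are not reproduced in the abstract; the sketch below shows only the general idea of learning a nonlinear feature-space mapping with Gaussian process regression (scikit-learn), so that features of historical glyphs can be mapped toward the printed-style feature space on which a classifier was trained. The feature dimensionality, synthetic data, and kernel settings are all illustrative assumptions:

        # Sketch of the style-transfer-mapping idea with GP regression: learn
        # a nonlinear map from target-style (historical) features to
        # source-style (printed) features, then feed mapped features to a
        # classifier trained on printed samples. Data here are synthetic.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        d = 8                                  # assumed feature dimensionality
        X_hist = rng.normal(size=(200, d))     # historical-style features
        X_print = X_hist @ np.diag(np.linspace(0.5, 1.5, d)) \
                  + 0.1 * rng.normal(size=(200, d))   # paired printed-style features (toy "style" distortion)

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
        gp.fit(X_hist, X_print)                # multi-output GP regression

        X_new = rng.normal(size=(5, d))        # unseen historical samples
        X_mapped = gp.predict(X_new)           # classify X_mapped with the printed-data classifier
        print(X_mapped.shape)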

  15. Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference

    NASA Astrophysics Data System (ADS)

    Marzouk, Y.; Parno, M.

    2014-12-01

    We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—i.e., a Knothe-Rosenblatt re-arrangement—using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
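
    As a minimal sketch of the mechanism, the adaptively constructed polynomial Knothe-Rosenblatt maps of the paper are replaced below with a fixed linear lower-triangular map (a Cholesky factor of an assumed covariance). The random walk is performed in the reference space and proposals are mapped back to the target space; for a linear map the constant Jacobian cancels in the acceptance ratio:

        # Sketch of transport-map-preconditioned Metropolis-Hastings. The
        # paper's adaptive polynomial Knothe-Rosenblatt maps are simplified
        # to a fixed linear lower-triangular map L; the random walk happens
        # in the "reference" space z, where the target looks roughly
        # standard-normal, and proposals are mapped back via x = mu + L z.
        import numpy as np

        rng = np.random.default_rng(1)

        def log_target(x):                    # example: correlated Gaussian target
            C = np.array([[1.0, 0.9], [0.9, 1.0]])
            return -0.5 * x @ np.linalg.solve(C, x)

        mu = np.zeros(2)                      # assumed map parameters; in the
        L = np.linalg.cholesky(               # paper these adapt to past states
            np.array([[1.0, 0.9], [0.9, 1.0]]))

        x = np.zeros(2)
        z = np.linalg.solve(L, x - mu)        # current state in reference space
        samples = []
        for _ in range(5000):
            z_prop = z + 0.5 * rng.normal(size=2)   # random walk in reference space
            x_prop = mu + L @ z_prop                # map back to target space
            # Constant Jacobian |det L| cancels in the ratio for a linear map:
            if np.log(rng.uniform()) < log_target(x_prop) - log_target(x):
                x, z = x_prop, z_prop
            samples.append(x)
        print(np.cov(np.array(samples).T))    # should approach the target covariance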

  16. Mapping

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1978-01-01

    Geologic mapping in the United States increased by about one-quarter in the past year. Mapping trends were examined in the following categories: (1) Mapping at scales of 1:100,000; (2) Metric-scale base maps; (3) International mapping; and (4) Planetary mapping. (MA)

  17. Swarm accelerometer data processing from raw accelerations to thermospheric neutral densities

    NASA Astrophysics Data System (ADS)

    Siemes, Christian; de Teixeira da Encarnação, João; Doornbos, Eelco; van den IJssel, Jose; Kraus, Jiří; Pereštý, Radek; Grunwaldt, Ludwig; Apelbaum, Guy; Flury, Jakob; Holmdahl Olsen, Poul Erik

    2016-05-01

    The Swarm satellites were launched on November 22, 2013, and carry accelerometers and GPS receivers as part of their scientific payload. The GPS receivers not only provide the position and time for the magnetic field measurements, but are also used for determining non-gravitational forces like drag and radiation pressure acting on the spacecraft. The accelerometers measure these forces directly, and at much finer resolution than the GPS receivers; thermospheric neutral densities can be derived from these measurements. Unfortunately, the acceleration measurements suffer from a variety of disturbances, the most prominent being slow temperature-induced bias variations and sudden bias changes. In this paper, we describe the new, improved four-stage processing that is applied to transform the disturbed acceleration measurements into scientifically valuable thermospheric neutral densities. In the first stage, the sudden bias changes in the acceleration measurements are manually removed using a dedicated software tool. The second stage is the calibration of the accelerometer measurements against the non-gravitational accelerations derived from the GPS receiver, which includes the correction for the slow temperature-induced bias variations; the identification of validity periods for calibration and correction parameters is part of this stage. In the third stage, the calibrated and corrected accelerations are merged with the non-gravitational accelerations derived from the observations of the GPS receiver by a weighted average in the spectral domain, where the weights depend on the frequency. The fourth stage consists of transforming the corrected and calibrated accelerations into thermospheric neutral densities. We present the first results of the processing of Swarm C acceleration measurements from June 2014 to May 2015. We started with Swarm C because its acceleration measurements contain far fewer disturbances than those of Swarm A and have a higher signal-to-noise ratio.
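
    The third stage, the frequency-dependent merge, can be illustrated with a toy numpy sketch; the crossover frequency, noise models, and weighting function below are assumptions, not the mission's actual processing parameters:

        # Toy sketch of a frequency-dependent merge of two acceleration
        # series: trust the GPS-derived series at low frequencies (where the
        # accelerometer bias drifts) and the accelerometer at high
        # frequencies (where GPS-derived accelerations are noisy).
        import numpy as np

        fs = 1.0                               # sample rate [Hz], assumed
        t = np.arange(4096) / fs
        truth = np.sin(2 * np.pi * 0.001 * t) + 0.2 * np.sin(2 * np.pi * 0.05 * t)
        acc_meas = truth + 5e-4 * np.cumsum(
            np.random.default_rng(2).normal(size=t.size))   # drifting bias (low-f error)
        gps_meas = truth + 0.1 * np.random.default_rng(3).normal(size=t.size)
                                               # white noise (high-f error)

        f = np.fft.rfftfreq(t.size, d=1 / fs)
        w_gps = 1.0 / (1.0 + (f / 0.01) ** 2)  # low-pass weight, 0.01 Hz crossover (assumed)
        merged = np.fft.irfft(w_gps * np.fft.rfft(gps_meas)
                              + (1 - w_gps) * np.fft.rfft(acc_meas), n=t.size)
        print(np.mean((merged - truth) ** 2) < np.mean((gps_meas - truth) ** 2))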

  18. FAST Observations of Acceleration Processes in the Cusp--Evidence for Parallel Electric Fields

    NASA Technical Reports Server (NTRS)

    Pfaff, R. F., Jr.; Carlson, C.; McFadden, J.; Ergun, R.; Clemmons, J.; Klumpar, D.; Strangeway, R.

    1999-01-01

    The existence of precipitating keV ions in the Earth's cusp originating in the magnetosheath provides a unique means to test our understanding of particle acceleration and parallel electric fields in the lower-altitude acceleration region. On numerous occasions, the FAST (Fast Auroral Snapshot) spacecraft has encountered the Earth's cusp regions near its apogee of 4175 km; these regions are characterized by their signatures of dispersed keV ion injections. The FAST instruments also reveal a complex microphysics inherent to many, but not all, of the cusp regions encountered by the spacecraft, including upgoing ion beams and conics, inverted-V electrons, upgoing electron beams, and spiky DC-coupled electric fields and plasma waves. Detailed inspection of the FAST data often shows clear modulation of the precipitating magnetosheath ions, indicating that they are affected by local electric potentials. For example, the magnetosheath ion precipitation is sometimes abruptly shut off precisely in regions where downgoing localized inverted-V electrons are observed. Such observations support the existence of a localized process, such as parallel electric fields, above the spacecraft that accelerates the electrons downward and consequently impedes the ion precipitation. Other acceleration events in the cusp are sometimes organized with an apparent cellular structure, suggesting that Alfvén waves or other large-scale phenomena control the localized potentials. We examine several cusp encounters by the FAST satellite in which the modulation of energetic particle populations reveals evidence of localized acceleration, most likely by parallel electric fields.

  19. Quasi-steady stages in the process of premixed flame acceleration in narrow channels

    NASA Astrophysics Data System (ADS)

    Valiev, D. M.; Bychkov, V.; Akkerman, V.; Eriksson, L.-E.; Law, C. K.

    2013-09-01

    The present paper addresses the phenomenon of spontaneous acceleration of a premixed flame front propagating in micro-channels, with subsequent deflagration-to-detonation transition. It has recently been shown experimentally [M. Wu, M. Burke, S. Son, and R. Yetter, Proc. Combust. Inst. 31, 2429 (2007)], 10.1016/j.proci.2006.08.098, computationally [D. Valiev, V. Bychkov, V. Akkerman, and L.-E. Eriksson, Phys. Rev. E 80, 036317 (2009)], 10.1103/PhysRevE.80.036317, and analytically [V. Bychkov, V. Akkerman, D. Valiev, and C. K. Law, Phys. Rev. E 81, 026309 (2010)], 10.1103/PhysRevE.81.026309 that the flame acceleration undergoes different stages, from an initial exponential regime to quasi-steady fast deflagration with saturated velocity. The present work focuses on the final saturation stages in the process of flame acceleration, when the flame propagates with supersonic velocity with respect to the channel walls. It is shown that an intermediate stage may occur during acceleration, in which the flame propagates with a quasi-steady velocity noticeably below the Chapman-Jouguet deflagration speed. The intermediate stage is followed by additional flame acceleration and subsequent saturation to the Chapman-Jouguet deflagration regime. We attribute the intermediate stage to the joint effect of gas pre-compression ahead of the flame front and hydraulic resistance. The additional acceleration is related to viscous heating at the channel walls, which is of key importance in the final stages. The possibility of explosion triggering is also demonstrated.

  20. Acceleration of Early-Photon Fluorescence Molecular Tomography with Graphics Processing Units

    PubMed Central

    Wang, Xin; Zhang, Bin; Cao, Xu; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-01-01

    Fluorescence molecular tomography (FMT) with early photons can improve the spatial resolution and fidelity of the reconstructed results. However, its computational scale is large, which limits its applications. In this paper, we introduce an acceleration strategy for early-photon FMT with graphics processing units (GPUs). The whole FMT solution was divided into several modules, and the time consumption of each module was studied. In this strategy, the two most time-consuming modules (the Gd and W modules) were accelerated with the GPU, while the other modules remained coded in Matlab. Several simulation studies with a heterogeneous digital mouse atlas were performed to confirm the performance of the acceleration strategy. The results confirmed the feasibility of the strategy and showed that the processing speed was improved significantly. PMID:23606899

  1. Electron studies of acceleration processes in the corona. [solar probe mission planning

    NASA Technical Reports Server (NTRS)

    Lin, R. P.

    1978-01-01

    The solar probe mission can obtain unique and crucially important measurements of electron acceleration, storage, and propagation processes in the corona and can probe the magnetic field structure of the corona below the spacecraft. The various energetic electron phenomena which will be sampled by the Solar Probe are described and some new techniques to probe coronal structures are suggested.

  2. Nanomanufacturing Portfolio: Manufacturing Processes and Applications to Accelerate Commercial Use of Nanomaterials

    SciTech Connect

    Industrial Technologies Program

    2011-01-05

    This brochure describes the 31 R&D projects that AMO supports to accelerate the commercial manufacture and use of nanomaterials for enhanced energy efficiency. These cost-shared projects seek to exploit the unique properties of nanomaterials to improve the functionality of industrial processes and products.

  3. Sampling frequency affects the processing of Actigraph raw acceleration data to activity counts.

    PubMed

    Brønd, Jan Christian; Arvidsson, Daniel

    2016-02-01

    ActiGraph acceleration data are processed through several steps (including band-pass filtering to attenuate unwanted signal frequencies) to generate the activity counts commonly used in physical activity research. We performed three experiments to investigate the effect of sampling frequency on the generation of activity counts. Ideal acceleration signals were produced in MATLAB; ActiGraph GT3X+ monitors were then spun in a mechanical setup; and finally, 20 subjects performed walking and running wearing GT3X+ monitors. Acceleration data from all experiments were collected at different sampling frequencies, and activity counts were generated with the ActiLife software. With the default 30-Hz (or 60-Hz, 90-Hz) sampling frequency, the generation of activity counts performed as intended: the band-pass filter attenuated acceleration signals at 2.5 Hz by 50% and eliminated frequencies above 5 Hz entirely. However, with other sampling frequencies, acceleration signals above 5 Hz escaped the band-pass filter to a varying degree and contributed additional activity counts. Similar results were found for the spinning of the GT3X+ monitors, although fewer activity counts were generated, indicating that the raw data stored in the GT3X+ monitor are preprocessed. Between 600 and 1,600 more counts per minute were generated with sampling frequencies of 40 and 100 Hz compared with 30 Hz during running. Sampling frequency thus affects the processing of ActiGraph acceleration data to activity counts, and researchers need to be aware of this error when selecting sampling frequencies other than the default 30 Hz.
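
    ActiGraph's exact filter is proprietary, so the sketch below only illustrates the cited behavior with a generic scipy band-pass of similar shape: roughly -3 dB (i.e., 50% power) at the 2.5 Hz edge and strong attenuation above 5 Hz. The filter order and lower passband edge are assumptions:

        # Illustrative only: ActiGraph's exact filter is proprietary. This
        # builds a generic band-pass filter with a similar passband and
        # measures its attenuation at a few signal frequencies, showing why
        # content above ~5 Hz should not contribute to counts.
        import numpy as np
        from scipy.signal import butter, sosfilt

        fs = 30.0                                  # default sampling frequency [Hz]
        sos = butter(4, [0.25, 2.5], btype="bandpass", fs=fs, output="sos")

        t = np.arange(0, 60, 1 / fs)
        for f_sig in [1.0, 2.5, 5.0, 7.0]:
            x = np.sin(2 * np.pi * f_sig * t)
            y = sosfilt(sos, x)
            gain = np.std(y[int(10 * fs):]) / np.std(x[int(10 * fs):])  # skip transient
            print(f"{f_sig:4.1f} Hz -> gain {gain:.2f}")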

  4. Graphics processing unit-accelerated double random phase encoding for fast image encryption

    NASA Astrophysics Data System (ADS)

    Lee, Jieun; Yi, Faliu; Saifullah, Rao; Moon, Inkyu

    2014-11-01

    We propose a fast double random phase encoding (DRPE) algorithm using a graphics processing unit (GPU)-based stream-processing model. A performance analysis of the accelerated DRPE implementation, which employs the Compute Unified Device Architecture (CUDA) programming environment, is presented. We show that the proposed methodology executed on a GPU can dramatically increase encryption speed compared with sequential computing on a central processing unit. Our experimental results demonstrate that when encrypting an image of 1000×1000 pixels with a 32-bit depth, our GPU version of the DRPE scheme can be approximately two times faster than the advanced encryption standard (AES) algorithm implemented on a GPU. In addition, the parallel performance of the presented DRPE acceleration method is evaluated with metrics such as speedup, efficiency, and redundancy.
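
    The DRPE transform itself is compact; a CPU-side numpy sketch follows (the paper's contribution is the CUDA stream-processing implementation of these same elementwise and FFT operations, which is not reproduced here):

        # CPU sketch of double random phase encoding (DRPE): multiply the
        # image by a random phase mask, Fourier transform, apply a second
        # mask in the Fourier plane, and inverse transform. A GPU version
        # parallelizes exactly these elementwise and FFT operations.
        import numpy as np

        rng = np.random.default_rng(4)
        img = rng.random((1000, 1000))                     # stand-in for the input image

        phase1 = np.exp(2j * np.pi * rng.random(img.shape))    # input-plane mask
        phase2 = np.exp(2j * np.pi * rng.random(img.shape))    # Fourier-plane mask

        encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

        # Decryption with the conjugate masks recovers the image:
        decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2)) * np.conj(phase1)
        print(np.allclose(np.abs(decrypted), img))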

  5. Mapping seafloor volcanism and its record of tectonic processes

    NASA Astrophysics Data System (ADS)

    Kalnins, L. M.; Valentine, A. P.; Trampert, J.

    2013-12-01

    One relatively obvious surface reflection of certain types of tectonic and mantle processes is volcanic activity. Oceans cover two thirds of our planet, so much of this evidence is marine; yet volcanic activity in the oceans remains very incompletely mapped. Many seamounts, the products of 'excess' volcanism, have been identified (10,000-20,000 over 1 km in height, depending on the study), but it is estimated that up to 60% of seamounts in this height range remain unmapped. Given the scale of the task, identification of probable seamounts clearly needs to be automated, but identifying naturally occurring features such as these is difficult because of their inherent variation. A very promising avenue lies in the use of learning algorithms, such as neural networks, designed to have complex pattern-recognition capabilities. Building on the work of Valentine et al. (2013), we present preliminary results of a new global seamount study based on neural network methods. Advantages of this approach include an intrinsic measure of confidence in each seamount identification and full automation, allowing easy re-picking to suit the requirements of different types of studies. Here, we examine the resulting spatial and temporal distribution of marine volcanism and consider what insights this offers into the shifting patterns of plate tectonics and mantle activity. We also consider the size distribution of the seamounts and explore possible classes based on shape and their distributions, potentially reflecting both differing formational processes and later erosional processes. Valentine, A. P., L. M. Kalnins, and J. Trampert (2013), Discovery and analysis of topographic features using learning algorithms: A seamount case study, Geophysical Research Letters, 40(12), p. 3048-3054.

  6. Risk-Based Decision Process for Accelerated Closure of a Nuclear Weapons Facility

    SciTech Connect

    Butler, L.; Norland, R. L.; DiSalvo, R.; Anderson, M.

    2003-02-25

    Nearly 40 years of nuclear weapons production at the Rocky Flats Environmental Technology Site (RFETS or Site) resulted in contamination of soil and underground systems and structures with hazardous substances, including plutonium, uranium, and hazardous waste constituents. The Site was placed on the National Priorities List in 1989. There are more than 370 Individual Hazardous Substance Sites (IHSSs) at RFETS. Accelerated cleanup and closure of RFETS is being achieved through implementation and refinement of a regulatory framework that fosters programmatic and technical innovations: (1) extensive use of "accelerated actions" to remediate IHSSs; (2) development of a risk-based screening process that triggers and helps define the scope of accelerated actions consistent with the final remedial action objectives for the Site; (3) use of field instrumentation for real-time data collection; (4) a data management system that renders near-real-time field data assessment; and (5) a regulatory agency consultative process to facilitate timely decisions. This paper presents the process and interim results for these aspects of the accelerated closure program applied to Environmental Restoration activities at the Site.

  7. Recent developments in the application of electron accelerators for polymer processing

    NASA Astrophysics Data System (ADS)

    Chmielewski, A. G.; Al-Sheikhly, M.; Berejka, A. J.; Cleland, M. R.; Antoniak, M.

    2014-01-01

    There are now over 1700 high-current electron beam (EB) accelerators being used worldwide in industrial applications, most of which involve polymer processing. In contrast to the use of heat, which transfers only about 5-10% of input energy into energy useful for materials modification, radiation processing is very energy efficient, with 60% or more of the input energy to an accelerator being available for affecting materials. Historic markets, such as the crosslinking of wire and cable jacketing, of heat-shrinkable tubing and films, the partial crosslinking of tire components, and low-energy EB curing or drying of inks and coatings, remain strong. Accelerator manufacturers have made equipment more affordable by downsizing units while maintaining high beam currents. Very powerful accelerators with 700 kW output have made X-ray conversion a practical alternative to the historic use of radioisotopes, mainly cobalt-60, for applications such as medical device sterilization. New EB end uses are emerging, such as the development of nano-composites and nano-gels and the use of EB processing to facilitate biofuel production. These present opportunities for future research and development.

  8. Acceleration processes in the quasi-steady magnetoplasmadynamic discharge. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Boyle, M. J.

    1974-01-01

    The flow field characteristics within the discharge chamber and exhaust of a quasi-steady magnetoplasmadynamic (MPD) arcjet were examined to clarify the nature of the plasma acceleration process. Observation of discharge characteristics unperturbed by insulator ablation and terminal voltage fluctuations first requires the satisfaction of three criteria: the use of refractory insulator materials; a mass injection geometry tailored to provide propellant to both electrode regions of the discharge; and a cathode of sufficient surface area to permit nominal MPD arcjet operation for given combinations of arc current and total mass flow. The axial velocity profile and electromagnetic discharge structure were measured for an arcjet configuration which functions nominally at 15.3 kA and 6 g/sec argon mass flow. An empirical two-flow plasma acceleration model is advanced which delineates inner and outer flow regions and accounts for the observed velocity profile and calculated thrust of the accelerator.

  9. Accelerating the cosmic microwave background map-making procedure through preconditioning

    NASA Astrophysics Data System (ADS)

    Szydlarski, M.; Grigori, L.; Stompor, R.

    2014-12-01

    Estimation of the sky signal from sequences of time ordered data is one of the key steps in cosmic microwave background (CMB) data analysis, commonly referred to as the map-making problem. Some of the most popular and general methods proposed for this problem involve solving generalised least-squares (GLS) equations with non-diagonal noise weights given by a block-diagonal matrix with Toeplitz blocks. In this work, we study new map-making solvers potentially suitable for applications to the largest anticipated data sets. They are based on iterative conjugate gradient (CG) approaches enhanced with novel, parallel, two-level preconditioners. We apply the proposed solvers to examples of simulated non-polarised and polarised CMB observations and a set of idealised scanning strategies with sky coverage ranging from a nearly full sky down to small sky patches. We discuss their implementation for massively parallel computational platforms and their performance for a broad range of parameters that characterise the simulated data sets in detail. We find that our best new solver can outperform carefully optimised standard solvers used today by a factor of as much as five in terms of the convergence rate and a factor of up to four in terms of the time to solution, without significantly increasing the memory consumption and the volume of inter-processor communication. The performance of the new algorithms is also found to be more stable and robust and less dependent on specific characteristics of the analysed data set. We therefore conclude that the proposed approaches are well suited to address successfully challenges posed by new and forthcoming CMB data sets.
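
    Schematically, map-making solves the GLS system (P^T N^{-1} P) m = P^T N^{-1} d for the map m, given time-ordered data d and pointing matrix P. The sketch below runs conjugate gradients on a toy version of this system, with a plain Jacobi (diagonal) preconditioner standing in for the paper's two-level preconditioners and a white-noise stand-in for N^{-1}:

        # Minimal sketch of preconditioned CG for the map-making GLS system
        # (P^T N^-1 P) m = P^T N^-1 d, with a pointing matrix P that assigns
        # each time sample to one sky pixel. The paper's two-level
        # preconditioners are replaced by plain Jacobi here.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, LinearOperator

        rng = np.random.default_rng(5)
        n_t, n_pix = 20000, 500
        pix = rng.integers(0, n_pix, size=n_t)            # pointing: sample -> pixel
        P = sp.csr_matrix((np.ones(n_t), (np.arange(n_t), pix)), shape=(n_t, n_pix))
        sky = rng.normal(size=n_pix)
        d = P @ sky + 0.1 * rng.normal(size=n_t)          # time-ordered data
        Ninv = sp.diags(np.full(n_t, 1.0))                # white-noise stand-in for N^-1

        A = LinearOperator((n_pix, n_pix), matvec=lambda m: P.T @ (Ninv @ (P @ m)))
        b = P.T @ (Ninv @ d)
        diag = np.asarray(P.power(2).T @ Ninv.diagonal()) # diag(P^T N^-1 P)
        M = LinearOperator((n_pix, n_pix), matvec=lambda r: r / diag)

        m_hat, info = cg(A, b, M=M)
        print(info, np.abs(m_hat - sky).max())

    With a realistic Toeplitz N^{-1}, the matrix-vector products are applied via FFTs and the preconditioner is where most of the engineering effort goes, which is the subject of the paper.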

  10. General description of electromagnetic radiation processes based on instantaneous charge acceleration in ''endpoints''

    SciTech Connect

    James, Clancy W.; Falcke, Heino; Huege, Tim; Ludwig, Marianne

    2011-11-15

    We present a methodology for calculating the electromagnetic radiation from accelerated charged particles. Our formulation, the 'endpoint formulation', combines numerous results developed in the literature on radiation arising from particle acceleration into a complete, and completely general, treatment. We do this by describing particle motion via a series of discrete, instantaneous acceleration events, or 'endpoints', with each such event treated as a source of emission. This method implicitly allows for particle creation and destruction, and is suited to direct numerical implementation in either the time or frequency domain. In this paper we demonstrate the complete generality of our method for calculating the radiated field from charged particle acceleration, and show how it reduces to the classical named radiation processes, such as synchrotron radiation, Tamm's description of Vavilov-Cherenkov radiation, and transition radiation, under appropriate limits. Using this formulation, we are immediately able to answer outstanding questions regarding the phenomenology of radio emission from ultra-high-energy particle interactions in both the Earth's atmosphere and the Moon. In particular, our formulation makes it apparent that the dominant emission component of the Askaryan effect (coherent radio-wave radiation from high-energy particle cascades in dense media) comes from coherent 'bremsstrahlung' from particle acceleration, rather than coherent Vavilov-Cherenkov radiation.

  11. Speech processing using conditional observable maximum likelihood continuity mapping

    DOEpatents

    Hogden, John; Nix, David

    2004-01-13

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  12. Process maps for plasma spray. Part II: Deposition and properties

    SciTech Connect

    Jiang, Xiangyang; Matejicek, Jiri; Kulkarni, Anand; Herman, Herbert; Sampath, Sanjay; Gilmore, Delwyn L.; Neiser, Richard A., Jr.

    2000-03-28

    This is the second paper of a two-part series based on an integrated study carried out at the State University of New York at Stony Brook and Sandia National Laboratories. The goal of the study is the fundamental understanding of the plasma-particle interaction, droplet/substrate interaction, deposit formation dynamics, and microstructure development, as well as deposit properties. The outcome is science-based relationships, which can be used to link processing to performance. Molybdenum splats and coatings produced at three plasma conditions and three substrate temperatures were characterized. It was found that there is a strong mechanical/thermal interaction between droplet and substrate, which builds up the coating/substrate adhesion. Hardness, thermal conductivity, and modulus increase, while oxygen content and porosity decrease, with increasing particle velocity. Increasing deposition temperature resulted in dramatic improvement in coating thermal conductivity and hardness, as well as an increase in coating oxygen content. Indentation reveals improved fracture resistance for the coatings prepared at higher deposition temperature. Residual stress was significantly affected by deposition temperature, although not significantly by particle energy within the investigated parameter range. Coatings prepared at high deposition temperature with high-energy particles suffered considerably less damage in wear tests. Possible mechanisms behind these changes are discussed within the context of relational maps which are under development.

  13. Challenges Encountered during the Processing of the BNL ERL 5 Cell Accelerating Cavity

    SciTech Connect

    A. Burrill; I. Ben-Zvi; R. Calaga; H. Hahn; V. Litvinenko; G. T. McIntyre; P. Kneisel; J. Mammosser; J. P. Preble; C. E. Reece; R. A. Rimmer; J. Saunders

    2007-08-01

    One of the key components of the Energy Recovery Linac being built by the electron cooling group in the Collider Accelerator Department is the 5-cell accelerating cavity, which is designed to accelerate 2 MeV electrons from the gun up to 15-20 MeV, allow them to make one pass through the ring, and then decelerate them back down to 2 MeV prior to sending them to the dump. This cavity was designed by BNL and fabricated by AES in Medford, NY. Following fabrication it was sent to Thomas Jefferson Lab in Virginia for chemical processing, testing, and assembly into a string assembly suitable for shipment back to BNL and integration into the ERL. The steps involved in this processing sequence are reviewed, and the deviations from the processing of similar SRF cavities are discussed. The lessons learned from this process are documented to help future projects whose scope differs from that normally encountered.

  14. Distinguishing Between Quasi-static and Alfvénic Auroral Acceleration Processes

    NASA Astrophysics Data System (ADS)

    Lysak, R. L.; Song, Y.

    2013-12-01

    Models for the acceleration of auroral particles fall into two general classes. Quasi-static processes, such as double layers or magnetic mirror supported potential drops, produce a nearly monoenergetic beam of precipitating electrons and upward flowing ion beams. Time-dependent acceleration processes, often associated with kinetic Alfvén waves, can produce a broader range of energies and often have a strongly field-aligned pitch angle distribution. Both processes are associated with strong perpendicular electric fields as well as the parallel electric fields that are largely responsible for the particle acceleration. These electric fields and the related magnetic perturbations can be characterized by the ratio of the electric field to a perpendicular magnetic perturbation, which is related to the Pedersen conductivity in the static case and the Alfvén velocity in the time-dependent case. However, these considerations can be complicated by the interaction between upward and downward propagating waves. The relevant time and space scales of these processes will be assessed and the consequences for observation by orbiting spacecraft and ground-based instrumentation will be determined. These features will be illustrated by numerical simulations of the magnetosphere-ionosphere coupling with emphasis on what a virtual spacecraft passing through the simulation would be expected to observe.
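
    The diagnostic ratio mentioned here is commonly summarized as follows (SI units, neglecting wave reflection from the ionosphere); the exact form used in the study may differ:

        \frac{E_\perp}{\delta B_\perp} = \frac{1}{\mu_0 \Sigma_P} \quad \text{(quasi-static)}, \qquad
        \frac{E_\perp}{\delta B_\perp} = v_A = \frac{B}{\sqrt{\mu_0 \rho}} \quad \text{(Alfv\'enic)}

    where Sigma_P is the height-integrated Pedersen conductivity and rho the plasma mass density, so a measured E/deltaB ratio helps classify an event as quasi-static or Alfvénic.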

  15. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-01

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral imaging based method enables exact hologram capture of real three-dimensional objects under ordinary incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  16. A whole body vibration perception map and associated acceleration loads at the lower leg, hip and head.

    PubMed

    Sonza, Anelise; Völkel, Nina; Zaro, Milton A; Achaval, Matilde; Hennig, Ewald M

    2015-07-01

    Whole-body vibration (WBV) training has become popular in recent years. However, WBV may be harmful to the human body. The goal of this study was to determine the acceleration magnitudes at different body segments for different frequencies of WBV. Additionally, vibration sensation ratings by subjects served to create perception vibration magnitude and discomfort maps of the human body. In the first of two experiments, 65 young adults with a mean (± SD) age of 23 (± 3.0) years participated in WBV severity perception ratings based on a Borg scale. Measurements were performed at 12 different frequencies and two intensities (3 and 5 mm amplitude) of rotational-mode WBV. On a separate day, a second experiment (n = 40) included vertical accelerometry of the head, hip, and lower leg with the same WBV settings. The highest lower-limb vibration magnitude perception on the Borg scale was 'extremely intense' for frequencies between 21 and 25 Hz; 'somewhat hard' for the trunk region (11-25 Hz); and 'fairly light' for the head (13-25 Hz). The highest vertical accelerations were found at a frequency of 23 Hz at the tibia, 9 Hz at the hip, and 13 Hz at the head. At the 5 mm amplitude, 61.5% of the subjects reported discomfort in the foot region (21-25 Hz), 46.2% for the lower back (17, 19, and 21 Hz), and 23% for the abdominal region (9-13 Hz). The range of 3-7 Hz represents the safest frequency range, with magnitudes of less than 1 g·s for all studied regions. PMID:25962379

  18. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g., depth-dependent image blur, image enhancement, and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
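
    The underlying trick is that convolution with a decaying exponential admits an exact recursive (IIR) form costing one multiply-add per sample per pass. A 1-D Python sketch of the causal-plus-anticausal scheme such hardware typically implements is shown below; the accelerator's actual fixed-point details are not given in the abstract:

        # 1-D sketch of recursive (O(1) per sample) convolution with the
        # two-sided exponential kernel h[k] = exp(-alpha*|k|): one causal
        # and one anticausal first-order pass, summed. A separable 2-D
        # filter applies this along rows, then columns.
        import numpy as np

        def exp_conv_recursive(x, alpha):
            a = np.exp(-alpha)
            y1 = np.empty_like(x)             # causal pass
            y2 = np.empty_like(x)             # anticausal pass
            acc = 0.0
            for n in range(len(x)):
                acc = x[n] + a * acc
                y1[n] = acc
            acc = 0.0
            for n in reversed(range(len(x))):
                acc = x[n] + a * acc
                y2[n] = acc
            return y1 + y2 - x                # avoid double-counting x[n]

        x = np.zeros(11); x[5] = 1.0          # impulse
        print(np.round(exp_conv_recursive(x, 0.5), 3))  # symmetric exponential response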

  19. Accelerating frequency-domain diffuse optical tomographic image reconstruction using graphics processing units.

    PubMed

    Prakash, Jaya; Chandrasekharan, Venkittarayan; Upendra, Vishwajith; Yalavarthy, Phaneendra K

    2010-01-01

    Diffuse optical tomographic image reconstruction uses advanced numerical models that are too computationally costly to run in real time. Graphics processing units (GPUs) offer desktop massive parallelization that can accelerate these computations. An open-source GPU-accelerated linear algebra library is used to compute the most intensive matrix-matrix calculations and matrix decompositions involved in solving the system of linear equations. These open-source functions were integrated into existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPU (an NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to 40-fold using GPUs compared with traditional CPUs in the case of three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making GPUs more attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts reconstruction to sets of no more than 13,377 optical parameters.

  20. Real-time process monitoring and temperature mapping of a 3D polymer printing process

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Love, Lonnie J.; Rowe, John C.

    2013-05-01

    An extended-range IR camera was used to make temperature measurements of samples as they are being manufactured. The objective is to quantify the temperature variation of the parts as they are being fabricated. The IR camera was also used to map the temperature within the build volume of the oven. The development of the temperature map of the oven provides insight into the global temperature variation within the oven that may lead to understanding variations in the properties of parts as a function of build location within the oven. The observation of the temperature variation of a part during construction provides insight into how the deposition process itself creates temperature distributions, which can lead to failure.

  1. Real-time Process Monitoring and Temperature Mapping of the 3D Polymer Printing Process

    SciTech Connect

    Dinwiddie, Ralph Barton; Love, Lonnie J; Rowe, John C

    2013-01-01

    An extended range IR camera was used to make temperature measurements of samples as they are being manufactured. The objective is to quantify the temperature variation inside the system as parts are being fabricated, as well as quantify the temperature of a part during fabrication. The IR camera was used to map the temperature within the build volume of the oven and surface temperature measurement of a part as it was being manufactured. The development of the temperature map of the oven provides insight into the global temperature variation within the oven that may lead to understanding variations in the properties of parts as a function of location. The observation of the temperature variation of a part that fails during construction provides insight into how the deposition process itself impacts temperature distribution within a single part leading to failure.

  2. Mapping.

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1979-01-01

    The area of geological mapping in the United States in 1978 increased greatly over that reported in 1977; state geological maps were added for California, Idaho, Nevada, and Alaska last year. (Author/BB)

  3. Plastic Deformation Behavior and Processing Maps of 35CrMo Steel

    NASA Astrophysics Data System (ADS)

    Xiao, Zheng-bing; Huang, Yuan-chun; Liu, Yu

    2016-03-01

    Hot deformation behavior of 35CrMo steel was investigated by compression tests in the temperature range of 850 to 1150 °C and the strain rate range of 0.01 to 20 s-1 on a Gleeble-3810 thermal simulator. According to processing maps constructed from the experimental data using the principles of the dynamic materials model (DMM), at a strain of 0.8 three safe regions with comparatively high efficiency of power dissipation were identified: (850 to 920) °C/(0.01 to 0.02) s-1, (850 to 900) °C/(10 to 20) s-1, and (1050 to 1150) °C/(0.01 to 1) s-1. The domain of (920 to 1150) °C/(2.7 to 20) s-1 lies within the instability range, where the efficiency of power dissipation is around 0.05. Optical microstructures of the deformed samples indicated that the combination of a low deformation temperature (850 °C) and a relatively high strain rate (20 s-1) resulted in the smallest dynamically recrystallized grains, but coarser grains were obtained when a much higher strain rate was employed (50 s-1). A lower strain rate or a higher temperature will accelerate grain growth, and both high temperature and high strain rate can cause microcracks in the deformed steel. Integrating the processing map with the optical microstructures identified the region of (850 to 900) °C/(10 to 20) s-1 as the ideal condition for hot deformation of 35CrMo steel.
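
    For reference, the DMM quantities behind such processing maps (here and in record 9 below on maraging steel) are conventionally defined as follows, with Prasad's instability criterion marking the unstable domains; notation varies slightly between authors:

        m = \frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}, \qquad
        \eta = \frac{2m}{m+1}, \qquad
        \xi(\dot{\varepsilon}) = \frac{\partial \ln\left(\frac{m}{m+1}\right)}{\partial \ln \dot{\varepsilon}} + m < 0

    where sigma is the flow stress, m the strain-rate sensitivity, eta the efficiency of power dissipation plotted on the map, and xi < 0 flags flow instability.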

  4. Draw-in Map - A Road Map for Simulation-Guided Die Tryout and Stamping Process Control

    SciTech Connect

    Wang Chuantao; Zhang, Jimmy J.; Goan, Norman

    2005-08-05

    Sheet metal forming is a displacement- or draw-in-controlled manufacturing process in which a flat blank is drawn into a die cavity to form an automotive body panel. Draw-in amount is the single most important stamping manufacturing index: it controls all forming characteristics (strains, stresses, thinning, etc.), stamping failures (splits, wrinkles, surface distortion, etc.), and line-die operations and automation. The Draw-in Map is engineered during math-based die development via advanced stamping simulation, and is then provided to die makers in plants as a road map for math-guided die tryout, in which tryout workers follow the engineered tryout conditions and match the engineered draw-in amounts, greatly reducing tryout time and cost while ensuring quality. The Map can also be used as a math-based troubleshooting tool to identify the causes of formability problems in stamping production. The engineered Draw-in Map has been applied to all draw die tryout for all GM vehicle programs since 1998. A minimum 50% reduction in both lead time and cost, and significant improvement in panel quality in tryout, have been reported. This paper presents the concept and the process of applying the engineered Draw-in Map in die tryout.

  5. Accelerating Correlated Quantum Chemistry Calculations Using Graphical Processing Units and a Mixed Precision Matrix Multiplication Library.

    PubMed

    Olivares-Amaya, Roberto; Watson, Mark A; Edgar, Richard G; Vogt, Leslie; Shao, Yihan; Aspuru-Guzik, Alán

    2010-01-12

    Two new tools for the acceleration of computational chemistry codes using graphical processing units (GPUs) are presented. First, we propose a general black-box approach for the efficient GPU acceleration of matrix-matrix multiplications where the matrix size is too large for the whole computation to be held in the GPU's onboard memory. Second, we show how to improve the accuracy of matrix multiplications when using only single-precision GPU devices by proposing a heterogeneous computing model, whereby single- and double-precision operations are evaluated in a mixed fashion on the GPU and central processing unit, respectively. The utility of the library is illustrated for quantum chemistry with application to the acceleration of resolution-of-the-identity second-order Møller-Plesset perturbation theory calculations for molecules that we were previously unable to treat. In particular, for the 168-atom valinomycin molecule in a cc-pVDZ basis set, we observed speedups of 13.8, 7.8, and 10.1 times for single-, double-, and mixed-precision general matrix multiplication (SGEMM, DGEMM, and MGEMM), respectively. The corresponding errors in the correlation energy were reduced from -10.0 kcal mol(-1) for SGEMM to -1.2 kcal mol(-1) for MGEMM, while higher accuracy can easily be achieved with a different choice of cutoff parameter.
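
    The MGEMM idea can be sketched in a few lines: split the multiplication by element magnitude, evaluate the numerically benign bulk in single precision (the GPU's fast path in the paper) and the few large elements in double precision (the CPU's path). In this illustrative numpy stand-in both parts run on the CPU, only matrix A is split, and the cutoff value is arbitrary:

        # Sketch of mixed-precision GEMM (MGEMM): elements below a cutoff are
        # multiplied in single precision, while the few large elements get a
        # double-precision correction. numpy stands in for both devices.
        import numpy as np

        def mgemm(A, B, cutoff=1.0):
            big = np.abs(A) > cutoff               # usually a small fraction
            A_small = np.where(big, 0.0, A)
            C = (A_small.astype(np.float32) @ B.astype(np.float32)).astype(np.float64)
            rows, cols = np.nonzero(big)           # double-precision corrections
            for r, c in zip(rows, cols):
                C[r, :] += A[r, c] * B[c, :]
            return C

        rng = np.random.default_rng(6)
        A = rng.normal(size=(200, 200)); A[0, 0] = 1e4   # one large element
        B = rng.normal(size=(200, 200))
        err_sgemm = np.abs(A.astype(np.float32) @ B.astype(np.float32) - A @ B).max()
        err_mgemm = np.abs(mgemm(A, B) - A @ B).max()
        print(err_sgemm, err_mgemm)            # MGEMM error should be much smaller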

  6. In-situ plasma processing to increase the accelerating gradients of SRF cavities

    DOE PAGES

    Doleans, Marc; Afanador, Ralph; Barnhart, Debra L.; Degraff, Brian D.; Gold, Steven W.; Hannah, Brian S.; Howell, Matthew P.; Kim, Sang-Ho; Mammosser, John; McMahan, Christopher J.; et al

    2015-12-31

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top-surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular, it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. This article discusses the main developments and results from the plasma processing R&D and presents experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus.

  7. In-situ plasma processing to increase the accelerating gradients of superconducting radio-frequency cavities

    NASA Astrophysics Data System (ADS)

    Doleans, M.; Tyagi, P. V.; Afanador, R.; McMahan, C. J.; Ball, J. A.; Barnhart, D. L.; Blokland, W.; Crofford, M. T.; Degraff, B. D.; Gold, S. W.; Hannah, B. S.; Howell, M. P.; Kim, S.-H.; Lee, S.-W.; Mammosser, J.; Neustadt, T. S.; Saunders, J. W.; Stewart, S.; Strong, W. H.; Vandygriff, D. J.; Vandygriff, D. M.

    2016-03-01

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top-surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular, it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. In this article, the main developments and results from the plasma processing R&D are presented, and experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus are discussed.

  8. In-situ plasma processing to increase the accelerating gradients of SRF cavities

    SciTech Connect

    Doleans, Marc; Afanador, Ralph; Barnhart, Debra L.; Degraff, Brian D.; Gold, Steven W.; Hannah, Brian S.; Howell, Matthew P.; Kim, Sang-Ho; Mammosser, John; McMahan, Christopher J.; Neustadt, Thomas S.; Saunders, Jeffrey W.; Tyagi, Puneet V.; Vandygriff, Daniel J.; Vandygriff, David M.; Ball, Jeffrey Allen; Blokland, Willem; Crofford, Mark T.; Lee, Sung-Woo; Stewart, Stephen; Strong, William Herb

    2015-12-31

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top-surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular, it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. This article discusses the main developments and results from the plasma processing R&D and presents experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus.

  9. Optimization of process parameters for the manufacturing of rocket casings: A study using processing maps

    NASA Astrophysics Data System (ADS)

    Avadhani, G. S.

    2003-12-01

    Maraging steels possess ultrahigh strength combined with ductility and toughness, and can be easily fabricated and heat-treated. Bulk metalworking of maraging steels is an important step in component manufacture. To optimize the hot-working parameters (temperature and strain rate) for the ring rolling process of the maraging steel used in the manufacture of rocket casings, a systematic study was conducted to characterize hot-working behavior by developing processing maps for γ-iron and an indigenous 250-grade maraging steel. The hot deformation behavior of binary alloys of iron with Ni, Co, and Mo, the major constituents of maraging steel, was also studied. Results from the investigation suggest that all the materials tested exhibit a domain of dynamic recrystallization (DRX). The instability maps revealed that strain rates above 10 s-1 are not suitable for hot working of these materials. An important result from the stress-strain behavior is that while Co strengthens γ-iron, Ni and Mo cause flow softening. Temperatures around 1125 °C and strain rates between 0.001 and 0.1 s-1 are suitable for hot working of maraging steel in the DRX domain. Higher strain rates may also be used in the meta-dynamic recrystallization domain above 1075 °C for high-strain-rate applications such as ring rolling. The microstructural mechanisms identified from the processing maps, along with grain size analyses and hot ductility measurements, could be used to design hot-working schedules for maraging steel.

  10. Plasma Processing of SRF Cavities for the next Generation Of Particle Accelerators

    SciTech Connect

    Vuskovic, Leposava

    2015-11-23

    The cost-effective production of high-frequency accelerating fields is the foundation of the next generation of particle accelerators. The Ar/Cl2 plasma etching technology holds the promise of a major reduction in cavity preparation costs. Plasma-based dry niobium surface treatment provides an excellent opportunity to remove bulk niobium, eliminate surface imperfections, increase the cavity quality factor, and bring accelerating fields to higher levels. At the same time, the developed technology is more environmentally friendly than hydrogen fluoride-based wet etching. The main goal of this research is plasma etching of the inner surfaces of standard multi-cell SRF cavities to eliminate contaminants, including niobium oxides, in the penetration-depth region. Successful plasma processing of multi-cell cavities will establish this method as a viable technique in the quest for more efficient components for next-generation particle accelerators. In this project, a single-cell pillbox cavity plasma etching system was developed and etching conditions were determined. An actual single-cell SRF cavity (1497 MHz) was plasma etched based on the pillbox cavity results, and the first RF test of this plasma-etched cavity at cryogenic temperature was obtained. The system can also be used for other surface modifications, including tailoring niobium surface properties and surface passivation or nitriding for better performance of SRF cavities. The results of this plasma processing technology may be applied to most current SRF cavity fabrication projects. In the course of this project, it was demonstrated that a capacitively coupled radio-frequency discharge can be successfully used for etching curved niobium surfaces, in particular the inner walls of SRF cavities. The results are also applicable to the inner or concave surfaces of 3D structures other than SRF cavities.

  11. ACCELERATED PROCESSING OF SB4 AND PREPARATION FOR SB5 PROCESSING AT DWPF

    SciTech Connect

    Herman, C

    2008-12-01

    The Defense Waste Processing Facility (DWPF) initiated processing of Sludge Batch 4 (SB4) in May 2007. SB4 was the first DWPF sludge batch to contain significant quantities of HM or high Al sludge. Initial testing with SB4 simulants showed potential negative impacts to DWPF processing; therefore, Savannah River National Laboratory (SRNL) performed extensive testing in an attempt to optimize processing. SRNL's testing has resulted in the highest DWPF production rates since start-up. During SB4 processing, DWPF also began incorporating waste streams from the interim salt processing facilities to initiate coupled operations. While DWPF has been processing SB4, the Liquid Waste Organization (LWO) and the SRNL have been preparing Sludge Batch 5 (SB5). SB5 has undergone low-temperature aluminum dissolution to reduce the mass of sludge for vitrification and will contain a small fraction of Purex sludge. A high-level review of SB4 processing and the SB5 preparation studies will be provided.

  12. Effects of accelerated reading rate on syntactic processing of Hebrew sentences: electrophysiological evidence.

    PubMed

    Leikin, M; Breznitz, Z

    2001-05-01

    The present study was designed to investigate whether an accelerated reading rate influences the way adult readers process sentence components with different grammatical functions. Participants were 20 male native Hebrew-speaking college students aged 18-27 years. The processing of normal word strings was examined during word-by-word reading of sentences with subject-verb-object (SVO) syntactic structure in self-paced and fast-paced conditions. In both reading conditions, the N100 and P300 event-related potential (ERP) components were sensitive to internal processes such as the recognition of words' syntactic functions. However, an accelerated reading rate influenced the way in which readers processed these sentence elements. In the self-paced condition, a predicate-centered (morphologically based) strategy was used, whereas in the fast-paced condition an approach closer to a word-order strategy was used. This new pattern was accompanied by shortened latencies and increased amplitudes of both the N100 and P300 components for most sentence elements. These changes appear to be related to improved working memory functioning and maximized attention.

  13. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine intensity level, which can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing separately and jointly from both social and physical sensors in order to produce near-real-time intensity maps, and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 2014 South Napa, CA earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms on the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computation of combined intensity levels and production of combined intensity maps in near real time. The results compare three types of intensity maps created from physical, social, and combined data sources. Here we correlate
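
    As an illustration of the kind of fusion described above (a minimal sketch, not the authors' InfoSphere Streams pipeline), an instrumental intensity can be derived from peak ground acceleration with a Wald et al. (1999)-style regression and then averaged with social-sensor reports per grid cell; the coefficients, weighting, and function names below are illustrative assumptions.

        import numpy as np

        def mmi_from_pga(pga_cm_s2):
            # PGA-to-intensity regression of the Wald et al. (1999) form
            # for California; coefficients assumed here for illustration
            # and intended for roughly the MMI V-VIII range.
            return 3.66 * np.log10(pga_cm_s2) - 1.66

        def combined_intensity(social_mmi, pga_cm_s2, w_social=0.3):
            # Weighted fusion of social and physical estimates for one
            # grid cell; the weight w_social is a tunable assumption.
            instrumental = mmi_from_pga(np.asarray(pga_cm_s2, dtype=float))
            return (w_social * np.mean(social_mmi)
                    + (1.0 - w_social) * np.mean(instrumental))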

  14. Open-source graphics processing unit-accelerated ray tracer for optical simulation

    NASA Astrophysics Data System (ADS)

    Mauch, Florian; Gronle, Marc; Lyda, Wolfram; Osten, Wolfgang

    2013-05-01

    Ray tracing still is the workhorse in optical design and simulation. Its basic principle, propagating light as a set of mutually independent rays, implies a linear dependence of the computational effort on the number of rays involved in the problem. At the same time, the mutual independence of the light rays bears a huge potential for parallelization of the computational load. This potential has recently been recognized in the visualization community, where graphics processing unit (GPU)-accelerated ray tracing is used to render photorealistic images. However, precision requirements in optical simulation are substantially higher than in visualization, and therefore performance results known from visualization cannot be expected to transfer to optical simulation one-to-one. In this contribution, we present an open-source implementation of a GPU-accelerated ray tracer, based on nVidia's acceleration engine OptiX, that traces in double precision and exploits the massively parallel architecture of modern graphics cards. We compare its performance to a CPU-based tracer that has been developed in parallel.
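
    The OptiX tracer itself is not reproduced here, but the mutual independence of rays is easy to illustrate: in the NumPy sketch below (vectorization standing in for GPU threads; ray directions are assumed normalized), a whole bundle of ray-sphere intersections is computed at once with no interaction between rays.

        import numpy as np

        def intersect_sphere(origins, dirs, center, radius):
            # Solve |o + t*d - c|^2 = r^2 for each ray independently;
            # origins and dirs have shape (n, 3), dirs are unit vectors.
            oc = origins - center
            b = np.einsum('ij,ij->i', oc, dirs)
            c = np.einsum('ij,ij->i', oc, oc) - radius ** 2
            disc = b ** 2 - c
            t = -b - np.sqrt(np.where(disc >= 0.0, disc, 0.0))
            hit = (disc >= 0.0) & (t > 0.0)
            return np.where(hit, t, np.inf)  # hit distance, or inf for a miss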

  15. Physical processes at work in sub-30 fs, PW laser pulse-driven plasma accelerators: Towards GeV electron acceleration experiments at CILEX facility

    NASA Astrophysics Data System (ADS)

    Beck, A.; Kalmykov, S. Y.; Davoine, X.; Lifschitz, A.; Shadwick, B. A.; Malka, V.; Specka, A.

    2014-03-01

    Optimal regimes and physical processes at work are identified for the first round of laser wakefield acceleration experiments proposed at a future CILEX facility. The Apollon-10P CILEX laser, delivering fully compressed, near-PW-power pulses of sub-25 fs duration, is well suited for driving electron density wakes in the blowout regime in cm-length gas targets. Early destruction of the pulse (partly due to energy depletion) prevents electrons from reaching dephasing, limiting the energy gain to about 3 GeV. However, the optimal operating regimes, found with reduced and full three-dimensional particle-in-cell simulations, show high energy efficiency, with about 10% of incident pulse energy transferred to 3 GeV electron bunches with sub-5% energy spread, half-nC charge, and absolutely no low-energy background. This optimal acceleration occurs in 2 cm length plasmas of electron density below 1018 cm-3. Due to their high charge and low phase space volume, these multi-GeV bunches are tailor-made for staged acceleration planned in the framework of the CILEX project. The hallmarks of the optimal regime are electron self-injection at the early stage of laser pulse propagation, stable self-guiding of the pulse through the entire acceleration process, and no need for an external plasma channel. With the initial focal spot closely matched for the nonlinear self-guiding, the laser pulse stabilizes transversely within two Rayleigh lengths, preventing subsequent evolution of the accelerating bucket. This dynamics prevents continuous self-injection of background electrons, preserving low phase space volume of the bunch through the plasma. Near the end of propagation, an optical shock builds up in the pulse tail. This neither disrupts pulse propagation nor produces any noticeable low-energy background in the electron spectra, which is in striking contrast with most of existing GeV-scale acceleration experiments.

  16. Ground Test of the Urine Processing Assembly for Accelerations and Transfer Functions

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Almond, Deborah F. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of the ground test of the urine processing assembly for accelerations and transfer functions. Details are given on the test setup, test data, data analysis, analytical results, and microgravity assessment. The conclusions of the tests include the following: (1) the single input/multiple output method is useful if the data is acquired by tri-axial accelerometers and inputs can be considered uncorrelated; (2) tying coherence with the matrix yields higher confidence in results; (3) the WRS#2 rack ORUs need to be isolated; and (4) future work includes a plan for characterizing performance of isolation materials.

  17. Study of the near-electrode processes in quasi-steady plasma accelerators with impenetrable electrodes

    SciTech Connect

    Kozlov, A. N.

    2012-01-15

    Near-electrode processes in a coaxial plasma accelerator with equipotential impenetrable electrodes are simulated using a two-dimensional (generally, time-dependent) two-fluid MHD model with allowance for the Hall effect and the plasma conductivity tensor. The simulations confirm the theoretically predicted mechanism of the so-called 'crisis of current' caused by the Hall effect. The simulation results are compared with available experimental data. The influence of both the method of plasma supply to the channel and an additional longitudinal magnetic field on the development of near-electrode instabilities preceding the crisis of current is studied.

  18. Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models

    NASA Astrophysics Data System (ADS)

    Curtis, J. H.; Michelotti, M. D.; Riemer, N.; Heath, M. T.; West, M.

    2016-10-01

    Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate an approximately 50-fold increase in algorithm efficiency.
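
    The paper's exact algorithm is not given in the abstract; the sketch below shows one common way to realize binned stochastic removal, sampling removals per size bin at the bin's maximum rate and then thinning by rejection (bin count and names are assumptions, and the per-step removal probability is first-order in the time step).

        import numpy as np

        rng = np.random.default_rng(0)

        def binned_removal(diam, rate, dt, n_bins=32):
            # diam, rate: per-particle diameters and removal rates.
            # Returns a boolean mask of the particles that survive.
            edges = np.geomspace(diam.min(), diam.max() * 1.001, n_bins + 1)
            keep = np.ones(diam.size, dtype=bool)
            for b in range(n_bins):
                idx = np.where((diam >= edges[b]) & (diam < edges[b + 1]))[0]
                if idx.size == 0:
                    continue
                r_max = rate[idx].max()
                p = 1.0 - np.exp(-r_max * dt)         # removal prob. at bin max rate
                n_events = rng.binomial(idx.size, p)  # candidate removals in bin
                cand = rng.choice(idx, size=n_events, replace=False)
                accept = rng.random(n_events) < rate[cand] / r_max  # thinning
                keep[cand[accept]] = False
            return keep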

  19. Stochastic Modeling and Analysis of Multiple Nonlinear Accelerated Degradation Processes through Information Fusion.

    PubMed

    Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product's performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner's ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters. PMID:27509499
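
    A minimal simulation sketch of such an s-dependent model is given below, assuming (purely for illustration) a Gaussian copula between the two performance parameters and a power-law time transformation Lambda(t) = t^b for the general Wiener process; all parameter values and names are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_paths(n_steps=200, dt=1.0, b=(0.8, 1.2),
                           drift=(0.05, 0.03), sigma=(0.02, 0.01), rho=0.6):
            # Two dependent degradation paths X_k(t) = a_k*Lambda_k(t)
            # + sigma_k*B(Lambda_k(t)); increments are coupled through
            # a Gaussian copula with correlation rho.
            z = rng.multivariate_normal([0.0, 0.0],
                                        [[1.0, rho], [rho, 1.0]], size=n_steps)
            t = np.arange(1, n_steps + 1) * dt
            paths = []
            for k in range(2):
                lam = t ** b[k]                    # nonlinear time scale Lambda(t)
                dlam = np.diff(np.concatenate(([0.0], lam)))
                incr = drift[k] * dlam + sigma[k] * np.sqrt(dlam) * z[:, k]
                paths.append(np.cumsum(incr))
            return t, np.column_stack(paths)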

  20. Novel mapping in non-equilibrium stochastic processes

    NASA Astrophysics Data System (ADS)

    Heseltine, James; Kim, Eun-jin

    2016-04-01

    We investigate the time-evolution of a non-equilibrium system in view of the change in information and provide a novel mapping relation which quantifies the change in information far from equilibrium and the proximity of a non-equilibrium state to the attractor. Specifically, we utilize a nonlinear stochastic model where the stochastic noise plays the role of incoherent regulation of the dynamical variable x and analytically compute the rate of change in information (information velocity) from the time-dependent probability distribution function. From this, we quantify the total change in information in terms of the information length L and the associated action J, where L represents the distance that the system travels in the fluctuation-based, statistical metric space parameterized by time. As the initial probability density function's mean position μ is decreased from the final equilibrium value μ* (the carrying capacity), L and J increase monotonically with interesting power-law mapping relations. In comparison, as μ is increased from μ*, L and J increase slowly until they level off to a constant value. This manifests the proximity of the state to the attractor, caused by a strong correlation for large μ through large fluctuations. Our proposed mapping relation provides a new way of understanding the progression of complexity in a non-equilibrium system in view of information change and the structure of the underlying attractor.
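
    For reference, in this literature the information velocity and information length are commonly defined from the time-dependent probability density function as follows (a standard form; the paper's exact normalization of the action may differ):

        \Gamma^2(t) = \int \frac{1}{p(x,t)} \left[ \frac{\partial p(x,t)}{\partial t} \right]^2 dx,
        \qquad
        \mathcal{L}(\tau) = \int_0^{\tau} \Gamma(t)\, dt,

    with the associated action built from the squared velocity, for example \mathcal{J} \propto \int_0^{\tau} \Gamma^2(t)\, dt.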

  1. Spatiotemporal processing of linear acceleration: primary afferent and central vestibular neuron responses

    NASA Technical Reports Server (NTRS)

    Angelaki, D. E.; Dickman, J. D.

    2000-01-01

    Spatiotemporal convergence and two-dimensional (2-D) neural tuning have been proposed as a major neural mechanism in the signal processing of linear acceleration. To examine this hypothesis, we studied the firing properties of primary otolith afferents and central otolith neurons that respond exclusively to horizontal linear accelerations of the head (0.16-10 Hz) in alert rhesus monkeys. Unlike primary afferents, the majority of central otolith neurons exhibited 2-D spatial tuning to linear acceleration. As a result, central otolith dynamics vary as a function of movement direction. During movement along the maximum sensitivity direction, the dynamics of all central otolith neurons differed significantly from those observed for the primary afferent population, with central neurons exhibiting phase lags of approximately 90 degrees relative to linear acceleration at low frequencies. At least three different groups of central response dynamics were described according to the properties observed for motion along the maximum sensitivity direction. "High-pass" neurons exhibited increasing gains and phase values as a function of frequency. "Flat" neurons were characterized by relatively flat gains and constant phase lags (approximately 20-55 degrees). A few neurons ("low-pass") were characterized by decreasing gain and phase as a function of frequency. The response dynamics of central otolith neurons suggest that the approximately 90 degree phase lags observed at low frequencies are not the result of a neural integration but rather the effect of nonminimum phase behavior, which could arise at least partly through spatiotemporal convergence. Neither afferent nor central otolith neurons discriminated between gravitational and inertial components of linear acceleration. Thus response sensitivity was indistinguishable during 0.5-Hz pitch oscillations and fore-aft movements.

  2. UV Irradiation Accelerates Amyloid Precursor Protein (APP) Processing and Disrupts APP Axonal Transport

    PubMed Central

    Almenar-Queralt, Angels; Falzone, Tomas L.; Shen, Zhouxin; Lillo, Concepcion; Killian, Rhiannon L.; Arreola, Angela S.; Niederst, Emily D.; Ng, Kheng S.; Kim, Sonia N.; Briggs, Steven P.; Williams, David S.

    2014-01-01

    Overexpression and/or abnormal cleavage of amyloid precursor protein (APP) are linked to Alzheimer's disease (AD) development and progression. However, the molecular mechanisms regulating cellular levels of APP or its processing, and the physiological and pathological consequences of altered processing, are not well understood. Here, using mouse and human cells, we found that neuronal damage induced by UV irradiation leads to specific APP, APLP1, and APLP2 decline by accelerating their secretase-dependent processing. Pharmacological inhibition of endosomal/lysosomal activity partially protects against UV-induced APP processing, implying a contribution of the endosomal and/or lysosomal compartments to this process. We found that a biological consequence of UV-induced γ-secretase processing of APP is impairment of APP axonal transport. To probe the functional consequences of impaired APP axonal transport, we isolated and analyzed presumptive APP-containing axonal transport vesicles from mouse cortical synaptosomes using electron microscopy, biochemical, and mass spectrometry analyses. We identified a population of morphologically heterogeneous organelles that contains APP, the secretase machinery, molecular motors, and previously proposed and new residents of APP vesicles. These possible cargoes are enriched in proteins whose dysfunction could contribute to neuronal malfunction and diseases of the nervous system including AD. Together, these results suggest that damage-induced APP processing might impair APP axonal transport, which could result in failure of synaptic maintenance and neuronal dysfunction. PMID:24573290

  3. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches. PMID:24298424
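
    The package's layered MATLAB/CUDA implementation is not reproduced here; the core rescaling idea, reweighting the exit-photon records of a single baseline ("white") Monte Carlo run for a new absorption coefficient, can be sketched as a Beer-Lambert reweighting (a minimal sketch; names are illustrative and scattering properties are assumed unchanged).

        import numpy as np

        def rescale_reflectance(exit_weights, path_lengths, mua_base, mua_new):
            # Reweight each detected photon by the extra absorption along
            # its stored path length; valid only when absorption changes
            # and the scattering properties stay fixed.
            w = exit_weights * np.exp(-(mua_new - mua_base) * path_lengths)
            return w.mean()  # diffuse reflectance estimate at mua_new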

  4. Use of Networked Collaborative Concept Mapping To Measure Team Processes and Team Outcomes.

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; O'Neil, Harold F., Jr.; Herl, Howard E.; Dennis, Robert A.

    The feasibility of using a computer-based networked collaborative concept mapping system to measure teamwork skills was studied. A concept map is a node-link-node representation of content, where the nodes represent concepts and links represent relationships between connected concepts. Teamwork processes were examined for a group concept mapping…

  5. Modeling the Communication Process: The Map Is not the Territory.

    ERIC Educational Resources Information Center

    Bowman, Joel P.; Targowski, Andrew S.

    1987-01-01

    Presents a brief overview of the most significant models of the communication process, evaluates the communication models of the greatest relevance to business communication, and establishes a foundation for a new conception of that process. (JC)

  6. Venus and the Earth's Archean: Geological mapping and process comparisons

    NASA Astrophysics Data System (ADS)

    Head, J. W.; Hurwitz, D. M.; Ivanov, M. A.; Basilevsky, A. T.; Senthil Kumar, P.

    2008-09-01

    Introduction. The geological features, structures, thermal conditions, interpreted processes, and outstanding questions related to both the Earth's Archean and Venus share many similarities [1-3] and we are using a problem-oriented approach to Venus mapping, guided by insight from the Archean record of the Earth, to gain new perspectives on the evolution of Venus and Earth's Archean. The Earth's preserved and well-documented Archean record [4] provides important insight into high heat-flux tectonic and magmatic environments and structures [5] and the surface of Venus reveals the current configuration and recent geological record of analogous high-temperature environments unmodified by subsequent several billion years of segmentation and overprinting, as on Earth. Here we address the nature of the Earth's Archean, the similarities to and differences from Venus, and the specific Venus and Earth-Archean problems on which progress might be made through comparison. The Earth's Archean and its Relation to Venus. The Archean period of Earth's history extends from accretion/initial crust formation (sometimes called the Hadean) to 2.5 Ga and is thought of by most workers as being a transitional period between the earliest Earth and later periods largely dominated by plate tectonics (Proterozoic and Phanerozoic) [2, 4]. Thus the Archean is viewed as recording a critical period in Earth's history in which a transition took place from the types of primary and early secondary crusts seen on the Moon, Mars and Mercury [6] (and largely missing in the record of the Earth), to the style of crustal accretion and plate tectonics characterizing later Earth history. The Archean is also characterized by enhanced crustal and mantle temperatures leading to differences in deformation style and volcanism (e.g., komatiites) [2]. The preserved Archean crust is exposed in ~36 different cratons [4], forming the cores of most continental regions, and is composed of gneisses, plutons and

  7. Surface damage correction, and atomic level smoothing of optics by Accelerated Neutral Atom Beam (ANAB) Processing

    NASA Astrophysics Data System (ADS)

    Walsh, M.; Chau, K.; Kirkpatrick, S.; Svrluga, R.

    2014-10-01

    Surface damage and surface contamination of optics have long been a source of problems for the laser, lithography, and other industries. Nano-sized surface defects may present significant performance issues in optical materials for deep UV and EUV applications. The effects of nanometer-sized surface damage (scratches, pits, and organics) on the surface of optics made of traditional materials and new, more exotic materials are a limiting factor for high-end performance. Angstrom-level smoothing of materials such as calcium fluoride, spinel, zinc sulfide, BK7, and others presents a unique set of challenges. Exogenesis Corporation, using its proprietary Accelerated Neutral Atom Beam (ANAB) technology, is able to remove nano-scale surface damage and contamination and leaves many material surfaces with roughness typically around one angstrom. This process technology has been demonstrated on nonlinear crystals and various other high-end optical materials. This paper describes the ANAB technology and summarizes smoothing results for various materials that have been processed with ANAB. All surface measurement data for the paper were produced via AFM analysis. Exogenesis Corporation's ANAB processing technology is a new and unique surface modification technique that has been demonstrated to be highly effective at correcting nano-scale surface defects. ANAB is a non-contact vacuum process comprised of an intense beam of accelerated, electrically neutral gas atoms with average energies of a few tens of electron volts. The ANAB process does not apply the normal forces associated with traditional polishing techniques. ANAB efficiently removes surface contaminants, nano-scale scratches, bumps, and other asperities under low-energy physical sputtering conditions as the removal action proceeds. ANAB may be used to remove a precisely controlled, uniform thickness of material without any increase of surface roughness, regardless of the total amount of material removed. The ANAB process does not

  8. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units.

    PubMed

    Lindert, Steffen; Bucher, Denis; Eastman, Peter; Pande, Vijay; McCammon, J Andrew

    2013-11-12

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling.
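
    For reference, the boost potential of the standard aMD parametrization raises basins of the potential energy surface that lie below a threshold energy E:

        V^*(\mathbf{r}) = V(\mathbf{r}) + \Delta V(\mathbf{r}),
        \qquad
        \Delta V(\mathbf{r}) =
        \begin{cases}
            \dfrac{\bigl(E - V(\mathbf{r})\bigr)^2}{\alpha + E - V(\mathbf{r})}, & V(\mathbf{r}) < E, \\
            0, & V(\mathbf{r}) \ge E,
        \end{cases}

    where alpha controls how aggressively the landscape is flattened; canonical averages are recovered by reweighting each frame with exp(beta * Delta V).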

  9. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units

    PubMed Central

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778

  10. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  11. On-site installation and shielding of a mobile electron accelerator for radiation processing

    NASA Astrophysics Data System (ADS)

    Catana, Dumitru; Panaitescu, Julian; Axinescu, Silviu; Manolache, Dumitru; Matei, Constantin; Corcodel, Calin; Ulmeanu, Magdalena; Bestea, Virgil

    1995-05-01

    The development of radiation processing of bulk products, e.g. grains or potatoes, would be fostered if the irradiation could be carried out at the place of storage, i.e., the silo. A promising solution is proposed, consisting of a mobile electron accelerator installed on a couple of trucks and traveling from one customer to another. The energy of the accelerated electrons was chosen as 5 MeV, with 10 to 50 kW beam power. Irradiation is possible either with electrons or with bremsstrahlung. A major problem of the above solution is the provision of adequate shielding at the customer's site, with a minimum investment cost. Plans are presented for a bunker that houses the truck carrying the radiation head. The beam points vertically downwards, through the truck floor, a transport pipe, and a scanning horn. The irradiation takes place in a pit, through which the products are transported on a belt. The belt path is chosen so as to minimize openings in the shielding. Shielding calculations are presented assuming a working regime with 5 MeV bremsstrahlung. Leakage and scattered radiation are taken into account.

  12. Revealing the flux: Using processed Husimi maps to visualize dynamics of bound systems and mesoscopic transport

    NASA Astrophysics Data System (ADS)

    Mason, Douglas J.; Borunda, Mario F.; Heller, Eric J.

    2015-04-01

    We elaborate upon the "processed Husimi map" representation for visualizing quantum wave functions using coherent states as a measurement of the local phase space to produce a vector field related to the probability flux. Adapted from the Husimi projection, the processed Husimi map is mathematically related to the flux operator under certain limits but offers a robust and flexible alternative since it can operate away from these limits and in systems that exhibit zero flux. The processed Husimi map is further capable of revealing the full classical dynamics underlying a quantum wave function since it reverse engineers the wave function to yield the underlying classical ray structure. We demonstrate the capabilities of processed Husimi maps on bound systems with and without electromagnetic fields, as well as on open systems on and off resonance, to examine the relationship between closed system eigenstates and mesoscopic transport.
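
    The projection underlying the map is the overlap of the wave function with minimum-uncertainty coherent states; in one standard form (the smoothing width s is a free parameter, and the "processing" into a flux-like vector field is specific to the paper):

        h_\psi(q, p) = \bigl| \langle g_{q,p} \mid \psi \rangle \bigr|^2,
        \qquad
        g_{q,p}(x) \propto \exp\!\left( -\frac{(x - q)^2}{2 s^2} + \frac{i p x}{\hbar} \right).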

  13. DSN microwave antenna holography. Part 2: Data processing and display of high-resolution effective maps

    NASA Technical Reports Server (NTRS)

    Rochblatt, D. J.; Rahmat-Samii, Y.; Mumford, J. H.

    1986-01-01

    The results of a recently completed computer graphic package for the process and display of holographically recorded data into effective aperture maps are presented. The term effective maps (labelled provisional on the holograms) signifies that the maps include contributions of surface mechanical errors as well as other electromagnetic factors (phase error due to feed/subreflector misalignment, linear phase error contribution due to pointing errors, subreflector flange diffraction effects, and strut diffraction shadows). While these maps do not show the true mechanical surface errors, they nevertheless show the equivalent errors, which are effective in determining overall antenna performance. Final steps to remove electromagnetic pointing and misalignment factors are now in progress. The processing and display of high-resolution effective maps of a 64m antenna (DSS 63) are presented.

  14. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-07-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and simulation results compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue not to use

  15. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, M.; Buss, R.; Scherrer, S.; Margreth, M.; Zappa, M.

    2015-12-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexity of automatic DRP mapping approaches affects hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison and a deviation map were derived. The automatically derived DRP-maps were used in synthetic runoff simulations with an adapted version of the hydrological model PREVAH, and simulation results compared with those from simulations using the reference maps. The DRP-maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP-maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. We therefore recommend not only using expert

  16. High-Speed Digital Signal Processing Method for Detection of Repeating Earthquakes Using GPGPU-Acceleration

    NASA Astrophysics Data System (ADS)

    Kawakami, Taiki; Okubo, Kan; Uchida, Naoki; Takeuchi, Nobunao; Matsuzawa, Toru

    2013-04-01

    Repeating earthquakes occur on similar asperities at the plate boundary. These earthquakes have an important property: the seismic waveforms observed at the same observation site are very similar regardless of their occurrence time. The slip histories of repeating earthquakes can reveal the existence of asperities: analysis of repeating earthquakes can detect the characteristics of the asperities and enables temporal and spatial monitoring of slip at the plate boundary. Moreover, analysis of repeating earthquakes may eventually support medium-term prediction of earthquakes at the plate boundary. Although previous works mostly clarified the existence of asperities and repeating earthquakes, and the relationship between asperities and quasi-static slip areas, a stable and robust method for automatic detection of repeating earthquakes has not been established yet. Furthermore, in order to process the enormous data volumes involved (so-called big data), speeding up the signal processing is an important issue. Recently, the GPU (Graphic Processing Unit) has been used as an acceleration tool for signal processing in various study fields. This movement is called GPGPU (General Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly; that is, a PC (personal computer) with GPUs might be a personal supercomputer. GPU computing gives us a high-performance computing environment at a lower cost than before. Therefore, the use of GPUs contributes to a significant reduction of the execution time in signal processing of huge seismic data sets. In this study, we first applied band-limited Fourier phase correlation as a fast method of detecting repeating earthquakes. This method utilizes only band-limited phase information and yields the correlation values between two seismic signals. Secondly, we employ a coherence function using three orthogonal components (East-West, North-South, and Up-Down) of seismic data as a
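
    The paper's GPU kernels are not shown in the abstract; the band-limited Fourier phase correlation at the heart of the detector can be sketched as follows (band edges and normalization are assumptions; the DC and Nyquist bins are assumed to lie outside the band).

        import numpy as np

        def phase_correlation(x, y, fs, band=(1.0, 8.0)):
            # Phase-only cross-spectrum restricted to the pass band;
            # identical waveforms give a normalized peak of 1 at zero lag.
            n = len(x)
            X, Y = np.fft.rfft(x), np.fft.rfft(y)
            f = np.fft.rfftfreq(n, d=1.0 / fs)
            m = (f >= band[0]) & (f <= band[1])
            cross = np.zeros_like(X)
            cross[m] = (X[m] / np.abs(X[m])) * np.conj(Y[m] / np.abs(Y[m]))
            cc = np.fft.irfft(cross, n) * n / (2.0 * m.sum())
            lag = int(np.argmax(cc))
            return cc[lag], lag if lag <= n // 2 else lag - n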

  17. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  18. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  19. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  20. Acceleration of Electron Repulsion Integral Evaluation on Graphics Processing Units via Use of Recurrence Relations.

    PubMed

    Miao, Yipu; Merz, Kenneth M

    2013-02-12

    Electron repulsion integral (ERI) calculation on graphical processing units (GPUs) can significantly accelerate quantum chemical calculations. Herein, the ab initio self-consistent-field (SCF) calculation is implemented on GPUs using recurrence relations, which is one of the fastest ERI evaluation algorithms currently available. A direct-SCF scheme to assemble the Fock matrix efficiently is presented, wherein ERIs are evaluated on-the-fly to avoid CPU-GPU data transfer, a well-known architectural bottleneck in GPU-specific computation. Realized speedups on GPUs reach 10-100 times relative to traditional CPU nodes, with accuracies of better than 1 × 10⁻⁷ for systems with more than 4000 basis functions. PMID:26588740

  1. Business process mapping techniques for ISO 9001 and 14001 certifications

    SciTech Connect

    Klement, R.E.; Richardson, G.D.

    1997-11-01

    AlliedSignal Federal Manufacturing and Technologies/Kansas City (FM and T/KC) produces nonnuclear components for nuclear weapons. The company has operated the plant for the US Department of Energy (DOE) since 1949. Throughout the history of the plant, procedures have been written to reflect the nuclear weapons industry best practices, and the facility has built a reputation for producing high quality products. The purpose of this presentation is to demonstrate how Total Quality principles were used at FM and T/KC to document processes for ISO 9001 and 14001 certifications. The information presented to the reader will lead to a better understanding of business administration by aligning procedures to key business processes within a business model; converting functional-based procedures to process-based procedures for total integrated resource management; and assigning ownership, validation, and metrics to procedures/processes, adding value to a company's profitability.

  2. Mapping the rupture process of moderate earthquakes by inverting accelerograms

    USGS Publications Warehouse

    Hellweg, M.; Boatwright, J.

    1999-01-01

    We present a waveform inversion method that uses recordings of small events as Green's functions to map the rupture growth of moderate earthquakes. The method fits P and S waveforms from many stations simultaneously in an iterative procedure to estimate the subevent rupture time and amplitude relative to the Green's function event. We invert the accelerograms written by two moderate Parkfield earthquakes using smaller events as Green's functions. The first earthquake (M = 4.6) occurred on November 14, 1993, at a depth of 11 km under Middle Mountain, in the assumed preparation zone for the next Parkfield main shock. The second earthquake (M = 4.7) occurred on December 20, 1994, some 6 km to the southeast, at a depth of 9 km on a section of the San Andreas fault with no previous microseismicity and little inferred coseismic slip in the 1966 Parkfield earthquake. The inversion results are strikingly different for the two events. The average stress release in the 1993 event was 50 bars, distributed over a geometrically complex area of 0.9 km2. The average stress release in the 1994 event was only 6 bars, distributed over a roughly elliptical area of 20 km2. The ruptures of both events appear to grow spasmodically into relatively complex shapes: the inversion only constrains the ruptures to grow more slowly than the S wave velocity but does not use smoothness constraints. Copyright 1999 by the American Geophysical Union.

  3. Terahertz digital holography image processing based on MAP algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guang-Hao; Li, Qi

    2015-04-01

    Terahertz digital holography combines terahertz technology with digital holography and fully exploits the advantages of both. Unfortunately, the quality of terahertz digital holography reconstruction images is severely degraded by speckle noise, which hinders the popularization of this technology. In this paper, a maximum a posteriori (MAP) estimation filter is harnessed for the restoration of the digital reconstruction images. The filtering results are compared with images filtered by the Wiener filter and conventional frequency-domain filters from both subjective and objective perspectives. For objective assessment, we adopted the speckle index (SPKI) and the edge preserving index (EPI) to quantify image quality. A Canny edge detector is also used to outline the target in the original and reconstructed images, which then plays an important role in the evaluation of filter performance. All the analysis indicates that the MAP estimation filtering algorithm outperforms the other two competitors in this paper and enhances the terahertz digital holography reconstruction images to a certain degree, allowing for more accurate boundary identification.
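
    The objective metrics are straightforward to reproduce; one common definition of the speckle index, the mean ratio of local standard deviation to local mean (lower is smoother), is sketched below (the window size is an assumption).

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_index(img, win=7):
            # Mean of local std/mean over the image: a global measure of
            # speckle strength; lower values indicate smoother images.
            img = img.astype(float)
            mu = uniform_filter(img, win)
            mu2 = uniform_filter(img * img, win)
            sd = np.sqrt(np.maximum(mu2 - mu * mu, 0.0))
            return float(np.mean(sd / (mu + 1e-12)))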

  4. Extracting Process and Mapping Management for Heterogeneous Systems

    NASA Astrophysics Data System (ADS)

    Hagara, Igor; Tanuška, Pavol; Duchovičová, Soňa

    2013-12-01

    Many papers describe three common methods of data extraction from primary systems. This paper defines how to select the correct method, or combination of methods, to minimize the impact on the production system and its normal operation. Before using any method, it is necessary to know the primary system and its database structures, so that the actual data structure setup can be used optimally and the ETL process designed well. Database structures are usually categorized into groups that characterize their quality. The classification helps to find the ideal method for each group and thus to design an ETL process with minimal impact on the data warehouse and production system.

  5. Supporting the Learning Process with Collaborative Concept Mapping Using Computer-based Communication Tools and Processes.

    ERIC Educational Resources Information Center

    De Simone, Christina; Schmid, Richard F.; McEwen, Laura A.

    2001-01-01

    Studied the effects of a combination of student collaboration, concept mapping, and electronic technologies with 26 students in a graduate level learning theories class. Findings suggest that concept mapping and collaborative learning techniques complement each other, and that students found the combined approach useful. (SLD)

  6. Analysis of Organization of Production Process on the Basis of Value Stream Mapping

    NASA Astrophysics Data System (ADS)

    Sattarova, K. T.; Kokareva, V. V.; Pronichev, N. D.

    2016-08-01

    This article discusses the process of identifying problem areas of the product cycle through value stream mapping. Value stream mapping allowed the development of a number of management solutions to increase productivity, optimize the process, and improve the competitiveness of products. For the study, a product manufactured by one of the industrial enterprises of the city of Samara was selected. The production process had been repeatedly optimized by the plant's services, but its cycle remained unstable. To solve these problems, a pull scheme of production was proposed. The proposed method for improving the production process on the basis of value stream mapping makes it possible to optimize the production process, shorten the production cycle, and improve the quality and efficiency of production. The final results were expressed in value terms: the use of this method reduced the duration of the production cycle by 42.28% and the cost of products by 57.71%.

  7. Variability of Mass Dependence of Auroral Acceleration Processes with Solar Activity

    NASA Technical Reports Server (NTRS)

    Ghielmetti, Arthur G.

    1997-01-01

    The objectives of this investigation are to improve understanding of the mass dependent variability of the auroral acceleration processes and so to clarify apparent discrepancies regarding the altitude and local time variations with solar cycle by investigating: (1) the global morphological relationships between auroral electric field structures and the related particle signatures under varying conditions of solar activity, and (2) the relationships between the electric field structures and particle signatures in selected events that are representative of the different conditions occurring during a solar cycle. The investigation is based in part on the Lockheed UFI data base of UpFlowing Ion (UFI) events in the 500 eV to 16 keV energy range and associated electrons in the energy range 70 eV to 24 keV. This data base was constructed from data acquired by the ion mass spectrometer on the S3-3 satellite in the altitude range of 1 to 1.3 Re. The launch of the POLAR spacecraft in early 1996 and successful operation of its TIMAS ion mass spectrometer has provided us with data from within the auroral acceleration regions during the current solar minimum. The perigee of POLAR is at about 1 Re, comparable to that of S3-3. The higher sensitivity and time resolution of TIMAS compared to the ion mass spectrometer on S3-3, together with its wider energy range, 15 eV to 33 keV, facilitate more detailed studies of upflowing ions.

  8. Demystifying process mapping: a key step in neurosurgical quality improvement initiatives.

    PubMed

    McLaughlin, Nancy; Rodstein, Jennifer; Burke, Michael A; Martin, Neil A

    2014-08-01

    Reliable delivery of optimal care can be challenging for care providers. Health care leaders have integrated various business tools to assist them and their teams in ensuring consistent delivery of safe and top-quality care. The cornerstone of all quality improvement strategies is a detailed understanding of the current state of a process, captured by process mapping. Process mapping empowers caregivers to audit how they currently deliver care and then to plan improvement initiatives strategically. As a community, neurosurgery has clearly shown dedication to enhancing patient safety and delivering quality care. A care redesign strategy named NERVS (Neurosurgery Enhanced Recovery after surgery, Value, and Safety) is currently being developed and piloted within our department. Through this initiative, a multidisciplinary team led by a clinician neurosurgeon has process mapped the way care is currently delivered throughout the entire episode of care. Neurosurgeons are becoming leaders in quality programs, and their education on quality improvement strategies and tools is essential. The authors present a comprehensive review of process mapping, demystifying its planning, its building, and its analysis. The particularities of using process maps, initially a business tool, in the health care arena are discussed, and their specific use in an academic neurosurgical department is presented.

  9. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    SciTech Connect

    Wang, J; Templeton, D C; Harris, D B

    2011-01-21

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
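
    The abstract does not give the processing details; a generic empirical matched field detector correlates array spectra against steering vectors calibrated from a master event, for example with a frequency-averaged Bartlett processor (a minimal sketch; array shapes and normalization are assumptions).

        import numpy as np

        def bartlett_detector(steering, data):
            # steering, data: complex spectra of shape (nfreq, nchan);
            # the steering vectors are measured from a master event.
            # Returns a statistic in [0, 1]; 1 means a perfect match.
            num = np.abs(np.einsum('fc,fc->f', np.conj(steering), data)) ** 2
            den = (np.sum(np.abs(steering) ** 2, axis=1)
                   * np.sum(np.abs(data) ** 2, axis=1))
            return float(np.mean(num / den))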

  10. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  11. Mapping social processes at work in nursing knowledge development.

    PubMed

    Hamilton, Patti; Willis, Eileen; Henderson, Julie; Harvey, Clare; Toffoli, Luisa; Abery, Elizabeth; Verrall, Claire

    2014-09-01

    In this paper, we suggest a blueprint for combining bibliometrics and critical analysis as a way to review published scientific works in nursing. This new approach is neither a systematic review nor meta-analysis. Instead, it is a way for researchers and clinicians to understand how and why current nursing knowledge developed as it did. Our approach will enable consumers and producers of nursing knowledge to recognize and take into account the social processes involved in the development, evaluation, and utilization of new nursing knowledge. We offer a rationale and a strategy for examining the socially-sanctioned actions by which nurse scientists signal to readers the boundaries of their thinking about a problem, the roots of their ideas, and the significance of their work. These actions - based on social processes of authority, credibility, and prestige - have bearing on the careers of nurse scientists and on the ways the knowledge they create enters into the everyday world of nurse clinicians and determines their actions at the bedside, as well as their opportunities for advancement.

  12. Acceleration of iterative Navier-Stokes solvers on graphics processing units

    NASA Astrophysics Data System (ADS)

    Tomczak, Tadeusz; Zadarnowska, Katarzyna; Koza, Zbigniew; Matyka, Maciej; Mirosław, Łukasz

    2013-04-01

    While new power-efficient computer architectures exhibit spectacular theoretical peak performance, they require specific conditions to operate efficiently, which makes porting complex algorithms a challenge. Here, we report results of the semi-implicit method for pressure linked equations (SIMPLE) and the pressure implicit with operator splitting (PISO) methods implemented on the graphics processing unit (GPU). We examine the advantages and disadvantages of the full porting over a partial acceleration of these algorithms run on unstructured meshes. We found that the full-port strategy requires adjusting the internal data structures to the new hardware and proposed a convenient format for storing internal data structures on GPUs. Our implementation is validated on standard steady and unsteady problems and its computational efficiency is checked by comparing its results and run times with those of some standard software (OpenFOAM) run on central processing unit (CPU). The results show that a server-class GPU outperforms a server-class dual-socket multi-core CPU system running essentially the same algorithm by up to a factor of 4.

  13. In Vivo Hypobaric Hypoxia Performed During the Remodeling Process Accelerates Bone Healing in Mice

    PubMed Central

    Durand, Marjorie; Collombet, Jean-Marc; Frasca, Sophie; Begot, Laurent; Lataillade, Jean-Jacques; Le Bousse-Kerdilès, Marie-Caroline

    2014-01-01

    We investigated the effects of respiratory hypobaric hypoxia on femoral bone-defect repair in mice because hypoxia is believed to influence both mesenchymal stromal cell (MSC) and hematopoietic stem cell mobilization, a process involved in the bone-healing mechanism. To mimic conditions of non-weight-bearing limb immobilization in patients suffering from bone trauma, our hypoxic mouse model was further subjected to hind-limb unloading. A hole was drilled in the right femur of adult male C57/BL6J mice. Four days after surgery, mice were subjected to hind-limb unloading for 1 week. Seven days after surgery, mice were either housed for 4 days in a hypobaric room (FiO2 at 10%) or kept under normoxic conditions. Unsuspended control mice were housed in either hypobaric or normoxic conditions. Animals were sacrificed on postsurgery day 11 to allow for collection of both contralateral and lesioned femurs, blood, and spleen. As assessed by microtomography, delayed hypoxia enhanced bone-healing efficiency by increasing the closing of the cortical defect and the newly synthesized bone volume in the cavity by +55% and +35%, respectively. Proteome analysis and histomorphometric data suggested that bone-repair improvement likely results from the acceleration of the natural bone-healing process rather than from extended mobilization of MSC-derived osteoprogenitors. Hind-limb unloading had hardly any effect beyond delayed hypoxia-enhanced bone-healing efficiency. PMID:24944208

  15. Vibrotactile masking experiments reveal accelerated somatosensory processing in congenitally blind Braille readers

    PubMed Central

    Bhattacharjee, Arindam; Ye, Amanda J.; Lisak, Joy A.; Vargas, Maria G.; Goldreich, Daniel

    2010-01-01

    Braille reading is a demanding task that requires the identification of rapidly varying tactile patterns. During proficient reading, neighboring characters impact the fingertip at about 100-ms intervals, and adjacent raised dots within a character at 50-ms intervals. Because the brain requires time to interpret afferent sensorineural activity, among other reasons, tactile stimuli separated by such short temporal intervals pose a challenge to perception. How, then, do proficient Braille readers successfully interpret inputs arising from their fingertips at such rapid rates? We hypothesized that somatosensory perceptual consolidation occurs more rapidly in proficient Braille readers. If so, Braille readers should outperform sighted participants on masking tasks, which demand rapid perceptual processing, but would not necessarily outperform the sighted on tests of simple vibrotactile sensitivity. To investigate, we conducted two-interval forced-choice vibrotactile detection, amplitude discrimination, and masking tasks on the index fingertips of 89 sighted and 57 profoundly blind humans. Sighted and blind participants had similar unmasked detection (25-ms target tap) and amplitude discrimination (compared to 100-micron reference tap) thresholds, but congenitally blind Braille readers, the fastest readers among the blind participants, exhibited significantly less masking than the sighted (masker: 50-Hz, 50-micron; target-masker delays ±50 and ±100 ms). Indeed, Braille reading speed correlated significantly and specifically with masking task performance, and in particular with the backward masking decay time constant. We conclude that vibrotactile sensitivity is unchanged, but that perceptual processing is accelerated in congenitally blind Braille readers. PMID:20980584

  16. Vibrotactile masking experiments reveal accelerated somatosensory processing in congenitally blind braille readers.

    PubMed

    Bhattacharjee, Arindam; Ye, Amanda J; Lisak, Joy A; Vargas, Maria G; Goldreich, Daniel

    2010-10-27

    Braille reading is a demanding task that requires the identification of rapidly varying tactile patterns. During proficient reading, neighboring characters impact the fingertip at ∼100 ms intervals, and adjacent raised dots within a character at 50 ms intervals. Because the brain requires time to interpret afferent sensorineural activity, among other reasons, tactile stimuli separated by such short temporal intervals pose a challenge to perception. How, then, do proficient Braille readers successfully interpret inputs arising from their fingertips at such rapid rates? We hypothesized that somatosensory perceptual consolidation occurs more rapidly in proficient Braille readers. If so, Braille readers should outperform sighted participants on masking tasks, which demand rapid perceptual processing, but would not necessarily outperform the sighted on tests of simple vibrotactile sensitivity. To investigate, we conducted two-interval forced-choice vibrotactile detection, amplitude discrimination, and masking tasks on the index fingertips of 89 sighted and 57 profoundly blind humans. Sighted and blind participants had similar unmasked detection (25 ms target tap) and amplitude discrimination (compared with 100 μm reference tap) thresholds, but congenitally blind Braille readers, the fastest readers among the blind participants, exhibited significantly less masking than the sighted (masker, 50 Hz, 50 μm; target-masker delays, ±50 and ±100 ms). Indeed, Braille reading speed correlated significantly and specifically with masking task performance, and in particular with the backward masking decay time constant. We conclude that vibrotactile sensitivity is unchanged but that perceptual processing is accelerated in congenitally blind Braille readers.

  17. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, William

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  18. Retrospective analysis of linear accelerator output constancy checks using process control techniques.

    PubMed

    Sanghangthum, Taweap; Suriyapee, Sivalee; Srisatit, Somyot; Pawlicki, Todd

    2013-01-01

    Shewhart control charts have previously been suggested as a process control tool for use in routine linear accelerator (linac) output verifications. However, a comprehensive approach to process control has not been investigated for linac output verifications. The purpose of this work is to investigate a comprehensive process control approach to linac output constancy quality assurance (QA). The RBA-3 dose constancy check was used to verify outputs of photon beams and electron beams delivered by a Varian Clinac 21EX linac. The data were collected during 2009 to 2010. Shewhart-type control charts, exponentially weighted moving average (EWMA) charts, and capability indices were applied to these processes. The Shewhart-type individuals chart (X-chart) was used and the number of data points used to calculate the control limits was varied. The parameters tested for the EWMA charts (smoothing parameter (λ) and control limit width (L)) were λ = 0.05, L = 2.492; λ = 0.10, L = 2.703; and λ = 0.20, L = 2.860, as well as the number of points used to estimate the initial process mean and variation. Lastly, the number of in-control data points used to determine process capability (Cp) and acceptability (Cpk) was investigated, comparing the first in-control run to the longest in-control run of the process data. Cp and Cpk values greater than 1.0 were considered acceptable. The 95% confidence intervals were reported. The X-charts detected systematic errors (e.g., device setup errors). In-control run lengths on the X-charts varied from 5 to 30 output measurements (about one to seven months). EWMA charts showed in-control runs ranging from 9 to 33 output measurements (about two to eight months). The Cp and Cpk ratios are higher than 1.0 for all energies, except 12 and 20 MeV. However, 10 MV and 6, 9, and 16 MeV were in question when considering the 95% confidence limits. The X-chart should be calculated using 8-12 data points. For the EWMA chart, using 4 data points
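
    A minimal sketch of the EWMA statistic and its time-varying control limits described above, using λ = 0.10 and L = 2.703 from the abstract; the baseline size follows the 8-12-point recommendation, and the simulated output data and variable names are our own illustrative assumptions:

```python
import numpy as np

def ewma_chart(x, lam=0.10, L=2.703, n_baseline=12):
    """EWMA control chart: z_i = lam*x_i + (1 - lam)*z_{i-1}, z_0 = mu0.

    The first n_baseline points estimate the in-control mean mu0 and
    standard deviation sigma. The limits widen with index i:
    mu0 +/- L*sigma*sqrt(lam/(2-lam) * (1 - (1-lam)**(2i))).
    """
    x = np.asarray(x, dtype=float)
    mu0 = x[:n_baseline].mean()
    sigma = x[:n_baseline].std(ddof=1)
    z = np.empty_like(x)
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    idx = np.arange(1, len(x) + 1)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * idx)))
    return z, mu0 - half, mu0 + half

# Simulated daily linac output ratios: nominal 1.000 with 0.5% noise
rng = np.random.default_rng(0)
outputs = 1.0 + 0.005 * rng.standard_normal(40)
z, lcl, ucl = ewma_chart(outputs)
print("out-of-control points:", np.flatnonzero((z < lcl) | (z > ucl)))
```

    The smoothing parameter λ trades sensitivity to small drifts against false alarms, which is presumably why the study evaluates several (λ, L) pairs.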

  19. Project Zero Delay: a process for accelerating the activation of cancer clinical trials.

    PubMed

    Kurzrock, Razelle; Pilat, Susan; Bartolazzi, Marcel; Sanders, Dwana; Van Wart Hood, Jill; Tucker, Stanley D; Webster, Kevin; Mallamaci, Michael A; Strand, Steven; Babcock, Eileen; Bast, Robert C

    2009-09-10

    Drug development in cancer research is lengthy and expensive. One of the rate-limiting steps is the initiation of first-in-human (phase I) trials. Three to six months can elapse between investigational new drug (IND) approval by the US Food and Drug Administration and the entry of a first patient. Issues related to patient participation have been well analyzed, but the administrative processes relevant to implementing clinical trials have received less attention. While industry and academia often partner for the performance of phase I studies, their administrative processes are generally performed independently, and their timelines are driven by different priorities: safety reviews, clinical operations, regulatory submissions, and contracting of clinical delivery vendors for industry; contracts, budgets, and institutional review board approval for academia. Both processes converge on US Food and Drug Administration approval of an IND. In the context of a strategic alliance between M. D. Anderson Cancer Center and AstraZeneca Pharmaceuticals LP, a concerted effort has been made to eliminate delays in implementing clinical trials. These efforts focused on close communication, identifying and matching key timelines, alignment of priorities, and tackling administrative processes in parallel rather than sequentially. In a recent first-in-human trial, the study was activated and the first patient identified 46 days from completion of the final study protocol and about 48 hours after final US Food and Drug Administration IND approval, reducing the overall timeline by about 3 months while meeting all clinical good practice guidelines. Eliminating administrative delays can accelerate the evaluation of new drugs without compromising patient safety or the quality of clinical research. PMID:19652061

  20. Accelerated and Navigator-Gated Look-Locker Imaging for Cardiac T1 Estimation (ANGIE): Development and Application to T1 Mapping of the Right Ventricle

    PubMed Central

    Mehta, Bhairav B.; Chen, Xiao; Bilchick, Kenneth C.; Salerno, Michael; Epstein, Frederick H.

    2014-01-01

    Purpose: To develop a method for high-resolution cardiac T1 mapping. Methods: A new method, accelerated and navigator-gated look-locker imaging for cardiac T1 estimation (ANGIE), was developed. An adaptive acquisition algorithm that accounts for the interplay between navigator gating and undersampling patterns well-suited for compressed sensing was used to minimize scan time. Computer simulations, phantom experiments, and imaging of the left ventricle (LV) were used to optimize and evaluate ANGIE. ANGIE’s high spatial resolution was demonstrated by T1 mapping of the right ventricle (RV). Comparisons were made to modified Look-Locker imaging (MOLLI). Results: Retrospective reconstruction of fully sampled datasets demonstrated the advantages of the adaptive algorithm. For the LV, ANGIE measurements of T1 were in good agreement with MOLLI. For the RV, ANGIE achieved a spatial resolution of 1.2 × 1.2 mm2 with a scan time of 157±53 s per slice, and measured RV T1 values of 980±96 ms versus 1076±157 ms for lower-resolution MOLLI. ANGIE provided lower intrascan variation in the RV T1 estimate compared with MOLLI (P<0.05). Conclusion: ANGIE enables high-resolution cardiac T1 mapping in clinically reasonable scan times. ANGIE opens the prospect of quantitative T1 mapping of thin cardiovascular structures such as the RV wall. PMID:24515952
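
    The abstract does not give ANGIE's fitting equations; the sketch below shows the standard three-parameter Look-Locker fit with the usual T1 = T1*(B/A − 1) correction that MOLLI-family methods rely on, applied to one synthetic phase-sensitive pixel. All parameter names and values are illustrative, not the authors' exact pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def ll_model(t, A, B, T1_star):
    # Apparent (Look-Locker) recovery of a phase-sensitive (signed) signal
    return A - B * np.exp(-t / T1_star)

def fit_t1(inversion_times, signal):
    """Fit A, B, T1* per pixel, then apply the Look-Locker correction
    T1 = T1* * (B/A - 1) to recover the longitudinal relaxation time."""
    p0 = (abs(signal).max(), 2 * abs(signal).max(), 1000.0)
    (A, B, T1_star), _ = curve_fit(ll_model, inversion_times, signal,
                                   p0=p0, maxfev=5000)
    return T1_star * (B / A - 1.0)

# Synthetic myocardium-like pixel with true T1 = 1000 ms
TI = np.array([100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])  # inversion times, ms
A_true, B_true, T1_true = 1.0, 2.0, 1000.0
signal = ll_model(TI, A_true, B_true, T1_true / (B_true / A_true - 1.0))
print(round(fit_t1(TI, signal)))  # -> 1000
```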

  1. Bisphenol A exposure accelerated the aging process in the nematode Caenorhabditis elegans.

    PubMed

    Tan, Ling; Wang, Shunchang; Wang, Yun; He, Mei; Liu, Dahai

    2015-06-01

    Bisphenol A (BPA) is a well-known environmental estrogenic disruptor that causes adverse effects. Recent studies have found that chronic exposure to BPA is associated with a high incidence of several age-related diseases. Aging is characterized by progressive functional decline, which affects quality of life. However, the effects of BPA on the aging process are largely unknown. In the present study, using the nematode Caenorhabditis elegans as a model, we investigated the influence of BPA exposure on the aging process. The decrease in body length, fecundity, and population size, together with the increase in egg-laying defects, suggested that BPA exposure resulted in fitness loss and reproductive aging in this animal. Lifetime exposure of worms to BPA shortened the lifespan in a dose-dependent manner. Moreover, prolonged BPA exposure resulted in age-related behavior degeneration and the accumulation of lipofuscin and lipid peroxide products. The expression of mitochondria-specific HSP-6 and endoplasmic reticulum (ER)-related HSP-70 exhibited a hormetic decrease. The expression of ER-related HSP-4 decreased significantly, while HSP-16.2 showed a dose-dependent increase. The decreased expression of GCS-1 and GST-4 implicated reduced antioxidant ability under BPA exposure, and the increase in SOD-3 expression might be caused by elevated levels of reactive oxygen species (ROS) production. Finally, BPA exposure increased the generation of hydrogen peroxide-related ROS and superoxide anions. Our results suggest that BPA exposure resulted in an accelerated aging process in C. elegans mediated by the induction of oxidative stress.

  2. Accelerated Cardiac T2 Mapping using Breath-hold Multi-Echo Fast Spin-Echo Pulse Sequence with Compressed sensing and Parallel Imaging

    PubMed Central

    Feng, Li; Otazo, Ricardo; Jung, Hong; Jensen, Jens H.; Ye, Jong C.; Sodickson, Daniel K.; Kim, Daniel

    2010-01-01

    Cardiac T2 mapping is a promising method for quantitative assessment of myocardial edema and iron overload. We have developed a new multi-echo fast spin echo (ME-FSE) pulse sequence for breath-hold T2 mapping with acceptable spatial resolution. We propose to further accelerate this new ME-FSE pulse sequence using k-t FOCal Underdetermined System Solver (FOCUSS) adapted with a framework that utilizes both compressed sensing and parallel imaging (e.g., GRAPPA) to achieve higher spatial resolution. We imaged twelve control subjects in mid-ventricular short-axis planes and compared the accuracy of T2 measurements obtained using ME-FSE with GRAPPA and ME-FSE with k-t FOCUSS. For image reconstruction, we used a bootstrapping two-step approach, where in the first step the fast Fourier transform was used as the sparsifying transform and in the final step principal component analysis was used as the sparsifying transform. Compared with T2 measurements obtained using GRAPPA, T2 measurements obtained using k-t FOCUSS were in excellent agreement (mean difference = 0.04 ms; upper/lower 95% limits of agreement were 2.26/−2.19 ms). The proposed accelerated ME-FSE pulse sequence with k-t FOCUSS is a promising investigational method for rapid T2 measurement of the heart with relatively high spatial resolution (1.7 mm × 1.7 mm). PMID:21360737
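
    Once the ME-FSE images are reconstructed, the per-pixel T2 map comes from a mono-exponential fit across echo times; a minimal log-linear version is sketched below. The k-t FOCUSS reconstruction itself is beyond a short example, and the echo times and values here are illustrative rather than the paper's protocol:

```python
import numpy as np

def fit_t2(te, signal):
    """Fit S(TE) = S0 * exp(-TE / T2) by linear regression on log(S)."""
    slope, _ = np.polyfit(te, np.log(signal), 1)
    return -1.0 / slope  # T2 in the same units as te

te = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # echo times (ms)
s = 1000.0 * np.exp(-te / 45.0)                 # synthetic pixel, T2 = 45 ms
print(fit_t2(te, s))                            # -> ~45.0
```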

  3. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

    Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields, continuous occupancy maps, using marching cubes. By doing that, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments. PMID:23893758
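
    A compact sketch of the local-GP idea, assuming scikit-learn in place of the authors' implementation: cluster the training points, train one GP classifier per cluster, and route each query to its nearest cluster, so each fit costs roughly O(n_k³) instead of O(n³). Plain k-means here is a stand-in for the paper's coarse-to-fine clustering, and all data are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def local_gp_occupancy(X_train, y_train, X_test, n_clusters=4):
    """Fit one GP occupancy classifier per spatial cluster and query each
    test point against the GP of its nearest cluster center."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_train)
    gps = [GaussianProcessClassifier(kernel=1.0 * RBF(0.5)).fit(
               X_train[km.labels_ == k], y_train[km.labels_ == k])
           for k in range(n_clusters)]
    test_labels = km.predict(X_test)
    p = np.empty(len(X_test))
    for k in range(n_clusters):
        mask = test_labels == k
        if mask.any():
            p[mask] = gps[k].predict_proba(X_test[mask])[:, 1]
    return p  # continuous occupancy probability in [0, 1]

# Synthetic 2-D world: occupancy follows a fine-grained sinusoidal pattern
rng = np.random.default_rng(1)
X = rng.uniform(0, 2, size=(400, 2))
y = (np.sin(4 * X[:, 0]) * np.sin(4 * X[:, 1]) > 0).astype(int)
Xq = rng.uniform(0, 2, size=(50, 2))
print(local_gp_occupancy(X, y, Xq)[:5])
```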

  4. Operational SAR Data Processing in GIS Environments for Rapid Disaster Mapping

    NASA Astrophysics Data System (ADS)

    Meroni, A.; Bahr, T.

    2013-05-01

    Having access to SAR data can be highly important and critical, especially for disaster mapping. Updating a GIS with contemporary information from SAR data makes it possible to deliver a reliable set of geospatial information to advance civilian operations, e.g., search and rescue missions. We therefore present in this paper the operational processing of SAR data within a GIS environment for rapid disaster mapping. This is exemplified by the November 2010 flash flood in the Veneto region, Italy. A series of COSMO-SkyMed acquisitions was processed in ArcGIS® using a single-sensor, multi-mode, multi-temporal approach. The relevant processing steps were combined using the ArcGIS ModelBuilder to create a new model for rapid disaster mapping in ArcGIS, which can be accessed both via a desktop and a server environment.

  5. A remediation contractor's view of accelerating the cleanup process

    SciTech Connect

    Librizzi, W.J.; Phelps, G.S.

    1994-12-31

    Superfund, since its passage in December 1980, has been under continual evaluation and change. Progress has been made over the past 13 years. To date, EPA under Superfund has completed 220 long-term cleanups, with 1,100 in various stages of completion. In addition, Superfund has been a catalyst for the development of new innovative cleanup technologies. In this regard, EPA has identified more than 150 innovative technologies now being used to treat contaminated soil, groundwater, sludge and sediments. Despite these noted accomplishments, continued criticisms of the program focus on Superfund weaknesses. They include: inconsistent cleanups; high transactional costs; perceived unfairness in liability; overlapping federal/state relationships; inadequate community involvement; and impediments to economic development. Techniques that can accelerate the hazardous waste cleanup process are discussed further in this paper. They include: strengthened interrelationships between the design and remediation contractors; the role of the remediation contractor in the implementation of presumptive remedies; a proactive community relations program; partnering; and early and frequent interface with regulatory agencies.

  6. Graphics processing unit accelerated one-dimensional blood flow computation in the human arterial tree.

    PubMed

    Itu, Lucian; Sharma, Puneet; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2013-12-01

    One-dimensional blood flow models have been used extensively for computing pressure and flow waveforms in the human arterial circulation. We propose an improved numerical implementation based on a graphics processing unit (GPU) for accelerating the execution of the one-dimensional model. A novel parallel hybrid CPU-GPU algorithm with compact copy operations (PHCGCC) and a parallel GPU-only (PGO) algorithm are developed, which are compared against previously introduced PHCG versions, a single-threaded CPU-only algorithm, and a multi-threaded CPU-only algorithm. Different second-order numerical schemes (Lax-Wendroff and Taylor series) are evaluated for the numerical solution of the one-dimensional model, and the computational setups include physiologically motivated non-periodic (Windkessel) and periodic (structured tree) boundary conditions (BC) and elastic and viscoelastic wall laws. Both the PHCGCC and the PGO implementations improved the execution time significantly. The speed-up values over the single-threaded CPU-only implementation range from 5.26 to 8.10×, whereas the speed-up values over the multi-threaded CPU-only implementation range from 1.84 to 4.02×. The PHCGCC algorithm performs best for an elastic wall law with non-periodic BC and for viscoelastic wall laws, whereas the PGO algorithm performs best for an elastic wall law with periodic BC.
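
    For a sense of what each GPU thread computes, here is the Lax-Wendroff stencil the abstract mentions, reduced to scalar linear advection. The actual solver applies the same second-order update to the coupled cross-sectional-area and flow-rate variables with Windkessel or structured-tree boundaries, which this sketch does not attempt:

```python
import numpy as np

def lax_wendroff_step(u, c, dt, dx):
    """One Lax-Wendroff step for the linear advection equation u_t + c*u_x = 0.
    Each node's update depends only on its two neighbors, which is why the
    scheme maps naturally onto one GPU thread per grid node."""
    nu = c * dt / dx          # Courant number; stability requires |nu| <= 1
    up = np.roll(u, -1)       # u_{i+1} (periodic boundary for brevity)
    um = np.roll(u, 1)        # u_{i-1}
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)

x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)   # initial pressure-like pulse
for _ in range(100):
    u = lax_wendroff_step(u, c=1.0, dt=0.0025, dx=x[1] - x[0])
```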

  7. Closing the gap: accelerating the translational process in nanomedicine by proposing standardized characterization techniques

    PubMed Central

    Khorasani, Ali A; Weaver, James L; Salvador-Morales, Carolina

    2014-01-01

    On the cusp of widespread permeation of nanomedicine, academia, industry, and government have invested substantial financial resources in developing new ways to better treat diseases. Materials have unique physical and chemical properties at the nanoscale compared with their bulk or small-molecule analogs. These unique properties have been greatly advantageous in providing innovative solutions for medical treatments at the bench level. However, nanomedicine research has not yet fully permeated the clinical setting because of several limitations. Among these limitations are the lack of universal standards for characterizing nanomaterials and the limited knowledge that we possess regarding the interactions between nanomaterials and biological entities such as proteins. In this review, we report on recent developments in the characterization of nanomaterials as well as the newest information about the interactions between nanomaterials and proteins in the human body. We propose a standard set of techniques for universal characterization of nanomaterials. We also address relevant regulatory issues involved in the translational process for the development of drug molecules and drug delivery systems. Adherence and refinement of a universal standard in nanomaterial characterization as well as the acquisition of a deeper understanding of nanomaterials and proteins will likely accelerate the use of nanomedicine in common practice to a great extent. PMID:25525356

  8. Acceleration of High Angular Momentum Electron Repulsion Integrals and Integral Derivatives on Graphics Processing Units.

    PubMed

    Miao, Yipu; Merz, Kenneth M

    2015-04-14

    We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA) enabled graphical processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivative of the ERIs. However, only s, p, and d ERIs and s and p derivatives could be executed simultaneously on GPUs using the current version of CUDA and generation of NVidia GPUs with a previously described algorithm [Miao and Merz J. Chem. Theory Comput. 2013, 9, 965-976.]. Hence, we developed an algorithm to compute f type ERIs and d type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI derivative computation yielded speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications. PMID:26574356

  9. Graphics processing unit accelerated one-dimensional blood flow computation in the human arterial tree.

    PubMed

    Itu, Lucian; Sharma, Puneet; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2013-12-01

    One-dimensional blood flow models have been used extensively for computing pressure and flow waveforms in the human arterial circulation. We propose an improved numerical implementation based on a graphics processing unit (GPU) for accelerating the execution of the one-dimensional model. A novel parallel hybrid CPU-GPU algorithm with compact copy operations (PHCGCC) and a parallel GPU-only (PGO) algorithm are developed, which are compared against previously introduced PHCG versions, a single-threaded CPU-only algorithm, and a multi-threaded CPU-only algorithm. Different second-order numerical schemes (Lax-Wendroff and Taylor series) are evaluated for the numerical solution of the one-dimensional model, and the computational setups include physiologically motivated non-periodic (Windkessel) and periodic (structured tree) boundary conditions (BC) and elastic and viscoelastic wall laws. Both the PHCGCC and the PGO implementations improved the execution time significantly. The speed-up values over the single-threaded CPU-only implementation range from 5.26 to 8.10×, whereas the speed-up values over the multi-threaded CPU-only implementation range from 1.84 to 4.02×. The PHCGCC algorithm performs best for an elastic wall law with non-periodic BC and for viscoelastic wall laws, whereas the PGO algorithm performs best for an elastic wall law with periodic BC. PMID:24009129

  10. ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors.

    PubMed

    Lai, Vicky Tzuyin; Curran, Tim

    2013-12-01

    Cognitive linguists suggest that understanding metaphors requires activation of conceptual mappings between the involved concepts. We tested whether mappings are indeed in use during metaphor comprehension, and what mapping means as a cognitive process, using event-related potentials. Participants read literal, conventional metaphorical, novel metaphorical, and anomalous target sentences preceded by primes with related or unrelated mappings. Experiment 1 used sentence-primes to activate related mappings, and Experiment 2 used simile-primes to induce comparison thinking. In the unprimed conditions of both experiments, metaphors elicited N400s more negative than the literals. In Experiment 1, related sentence-primes reduced the metaphor-literal N400 difference in conventional, but not in novel metaphors. In Experiment 2, related simile-primes reduced the metaphor-literal N400 difference in novel, but not clearly in conventional metaphors. We suggest that mapping as a process occurs in metaphors, and the ways in which it can be facilitated by comparison differ between conventional and novel metaphors.

  11. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway.

    PubMed

    Tooker, Brian C; Brindley, Stephen M; Chiarappa-Zucca, Marina L; Turteltaub, Kenneth W; Newman, Lee S

    2015-01-01

    Exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, it was determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.

  12. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway

    SciTech Connect

    Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.; Turteltaub, Kenneth W.; Newman, Lee S.

    2014-06-16

    We report that exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, we determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.

  13. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway

    DOE PAGES

    Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.; Turteltaub, Kenneth W.; Newman, Lee S.

    2014-06-16

    We report that exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, we determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.

  14. Using concept maps to explore preservice teachers' perceptions of science content knowledge, teaching practices, and reflective processes

    NASA Astrophysics Data System (ADS)

    Somers, Judy L.

    This qualitative study examined seven preservice teachers' perceptions of their science content knowledge, teaching practices, and reflective processes through the use of the metacognitive strategy of concept maps. Included in the paper is a review of literature in the areas of preservice teachers' perceptions of teaching, concept development, concept mapping, science content understanding, and reflective process as a part of metacognition. The key questions addressed include the use of concept maps to indicate organization and understanding of science content, mapping strategies to indicate perceptions of teaching practice, and the influence of concept maps on reflective process. There is also a comparison of preservice teachers' perceptions of concept map usage with the purposes and practices of maps as described by experienced teachers. Data were collected primarily through interviews, observations, a pre- and post-concept-mapping activity, and an analysis of those concept maps using a rubric developed for this study. Findings showed that concept map usage clarified students' understanding of the organization and relationships within a content area and that the process of creating the concept maps increased participants' understanding of the selected content. The participants felt that the visual element of concept mapping was an important factor in improving content understanding. These participants saw benefit in using concept maps as planning tools and as instructional tools. They did not recognize the use of concept maps as assessment tools. When the participants were able to find personal relevance in and through their concept maps, they were better able to reflect on the process. The experienced teachers discussed student understanding and skill development as the primary purpose of concept map usage, while they were able to use concept maps to accomplish multiple purposes in practice.

  15. Impaired letter-string processing in developmental dyslexia: what visual-to-phonology code mapping disorder?

    PubMed

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-05-01

    Poor parallel letter-string processing in developmental dyslexia was taken as evidence of poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In particular, report of poor letter/digit-string processing but preserved symbol-string processing was viewed as evidence of poor visual-to-phonology code mapping, in line with the phonological theory of developmental dyslexia. We assessed here the visual-to-phonological-code mapping disorder hypothesis. In Experiment 1, letter-string, digit-string and colour-string processing was assessed to disentangle a phonological versus visual familiarity account of the letter/digit versus symbol dissociation. Against a visual-to-phonological-code mapping disorder but in support of a familiarity account, results showed poor letter/digit-string processing but preserved colour-string processing in dyslexic children. In Experiment 2, two tasks of letter-string report were used, one of which was performed simultaneously to a high-taxing phonological task. Results show that dyslexic children are similarly impaired in letter-string report whether a concurrent phonological task is simultaneously performed or not. Taken together, these results provide strong evidence against a phonological account of poor letter-string processing in developmental dyslexia.

  16. Graphic processing unit accelerated real-time partially coherent beam generator

    NASA Astrophysics Data System (ADS)

    Ni, Xiaolong; Liu, Zhi; Chen, Chunyi; Jiang, Huilin; Fang, Hanhan; Song, Lujun; Zhang, Su

    2016-07-01

    A method of using liquid crystals (LCs) to generate a partially coherent beam in real time is described. An expression for generating a partially coherent beam is given and calculated using a graphic processing unit (GPU), i.e., the GeForce GTX 680. A liquid-crystal on silicon (LCOS) device with 256 × 256 pixels is used as the partially coherent beam generator (PCBG). An optimization method with partition convolution is used to improve the generating speed of our LC PCBG. The total time needed to generate a random phase map with a coherence width range from 0.015 mm to 1.5 mm is less than 2.4 ms for calculation and readout with the GPU; adding the time needed for the CPU to read and send to the LCOS, together with the response time of the LC PCBG, the real-time partially coherent beam (PCB) generation frequency of our LC PCBG is up to 312 Hz. To our knowledge, it is the first real-time partially coherent beam generator. A series of experiments based on double-pinhole interference were performed. The results show that, to generate laser beams with coherence widths of 0.9 mm and 1.5 mm with a mean error of approximately 1%, the required RMS values were 0.021306 and 0.020883 and the required PV values were 0.073576 and 0.072998, respectively.
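
    The abstract does not spell out how the random phase maps are computed. One common recipe for a Gaussian Schell-model source, and a plausible reading of an LC PCBG pipeline, is to low-pass filter white noise in the Fourier domain, as sketched below; the exact mapping from filter width to coherence width would need the kind of calibration the paper performs, and all parameter names and values are illustrative:

```python
import numpy as np

def random_phase_screen(n=256, dx=8e-6, phase_rms=2.0, corr_len=0.1e-3):
    """One realization of a correlated random phase screen for an LCOS.

    White Gaussian noise is filtered with a Gaussian kernel in the Fourier
    domain; averaging the resulting fields over many screens yields a
    partially coherent beam whose coherence width grows with corr_len."""
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    filt = np.exp(-(np.pi * corr_len) ** 2 * (FX**2 + FY**2))
    noise = np.random.standard_normal((n, n))
    screen = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    return screen * (phase_rms / screen.std())        # radians, sent to the LCOS

phase = random_phase_screen()  # 256 x 256, matching the LCOS pixel count above
```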

  17. A method to evaluate dose errors introduced by dose mapping processes for mass conserving deformations

    PubMed Central

    Yan, C.; Hugo, G.; Salguero, F. J.; Saleh-Sayah, N.; Weiss, E.; Sleeman, W. C.; Siebers, J. V.

    2012-01-01

    Purpose: To present a method to evaluate the dose mapping error introduced by the dose mapping process, and to apply the method to evaluate the error introduced by the 4D dose calculation process implemented in a research version of a commercial treatment planning system for a patient case. Methods: The average dose accumulated in a finite volume should be unchanged when the dose delivered to one anatomic instance of that volume is mapped to a different anatomic instance, provided that the tissue deformation between the anatomic instances is mass conserving. The average dose to a finite volume on image $S$ is defined as $\bar{d}_S = e_S / m_S$, where $e_S$ is the energy deposited in the mass $m_S$ contained in the volume. Since mass and energy should be conserved when $\bar{d}_S$ is mapped to an image $R$ ($\bar{d}_{S \to R} = \bar{d}_R$), the mean dose mapping error is defined as $\overline{\Delta d_m} = |\bar{d}_R - \bar{d}_S| = |e_R/m_R - e_S/m_S|$, where $e_R$ and $e_S$ are the integral doses (energy deposited), and $m_R$ and $m_S$ are the masses within the region of interest (ROI) on image $R$ and the corresponding ROI on image $S$, with $R$ and $S$ being two anatomic instances from the same patient. Alternatively, application of simple differential propagation yields the differential dose mapping error $\overline{\Delta d_d} = \alpha\,|\bar{d}_R - \bar{d}_S|$ with $\alpha = m_S/m_R$ (see the worked derivation below). A 4D treatment plan on a ten-phase 4D-CT lung patient is used to demonstrate the dose mapping error evaluations for a patient case, in which the accumulated dose $\bar{D}_R = \sum_{S=0}^{9} \bar{d}_{S \to R}$ and associated error values ($\overline{\Delta D_m}$ and $\overline{\Delta D_d}$) are calculated for a uniformly spaced set of ROIs. Results: For the single sample patient dose distribution, the average accumulated differential dose mapping error is 4.3%, the average absolute differential dose mapping error is 10.8%, and the average accumulated mean dose mapping error is 5.0%. Accumulated differential dose mapping errors within the gross tumor volume (GTV) and planning target volume (PTV) are lower, 0
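
    The differential error quoted above follows in one line from $\bar{d} = e/m$ with $\Delta e = e_S - e_R$ and $\Delta m = m_S - m_R$; the algebra below reproduces the abstract's $\alpha = m_S/m_R$ factor:

```latex
\overline{\Delta d_d}
  = \left|\frac{\partial \bar{d}}{\partial e}\,\Delta e
        + \frac{\partial \bar{d}}{\partial m}\,\Delta m\right|
  = \left|\frac{e_S - e_R}{m_R} - \frac{(m_S - m_R)\,e_R}{m_R^{2}}\right|
  = \frac{\left|e_S m_R - m_S e_R\right|}{m_R^{2}}
  = \frac{m_S}{m_R}\left|\frac{e_S}{m_S} - \frac{e_R}{m_R}\right|
  = \alpha\left|\bar{d}_R - \bar{d}_S\right|.
```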

  18. Learning from Nature - Mapping of Complex Hydrological and Geomorphological Process Systems for More Realistic Modelling of Hazard-related Maps

    NASA Astrophysics Data System (ADS)

    Chifflard, Peter; Tilch, Nils

    2010-05-01

    Introduction Hydrological or geomorphological processes in nature are often very diverse and complex. This is partly due to the regional characteristics, which vary over time and space, as well as to changeable process-initiating and -controlling factors. Despite awareness of this complexity, such aspects are usually neglected in the modelling of hazard-related maps for several reasons. But particularly when it comes to creating more realistic maps, this would be an essential component to consider. The first important step towards solving this problem would be to collect data relating to regional conditions which vary over time and geographical location, along with indicators of complex processes. Data should be acquired promptly during and after events, and subsequently digitally combined and analysed. Study area In June 2009, considerable damage occurred in the residential area of Klingfurth (Lower Austria) as a result of great pre-event wetness and repeatedly heavy rainfall, leading to flooding, debris flow deposits and gravitational mass movement. One of the causes is the fact that the meso-scale watershed (16 km²) of the Klingfurth stream is characterised by adverse geological and hydrological conditions. Additionally, the river system network with its discharge concentration within the residential zone contributes considerably to flooding, particularly during excessive rainfall across the entire region, as the flood peaks from different parts of the catchment area are superposed. First results of mapping Hydro(geo)logical surveys across the entire catchment area have shown that over 600 gravitational mass movements of various types and stages have occurred; 516 of those have acted as a bed load source, while 325 mass movements had not yet reached the final stage and could thus supply bed load in the future. It should be noted that large mass movements in the initial or intermediate stage were predominantly found in clayey-silty areas and weathered material

  19. Accelerating POCS interpolation of 3D irregular seismic data with Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Wang, Shu-Qin; Gao, Xing; Yao, Zhen-Xing

    2010-10-01

    Seismic trace interpolation is necessary for high-resolution imaging when the acquired data are not adequate or when some traces are missing. Projection-onto-convex-sets (POCS) interpolation can gradually recover missing traces with an iterative algorithm, but its computational cost in a 3D CPU-based implementation is too high for practical applications. We present a computing scheme to speed up 3D POCS interpolation with graphics processing units (GPUs). We accelerate the most time-consuming part of the 3D POCS algorithm (i.e., the Fourier transforms) by taking advantage of a GPU-based Fourier transform library. Other parts are fine-tuned to maximize the utilization of GPU computing resources. We upload the whole input data set to the global memory of the GPUs and reuse it until the final result is obtained. This avoids low-bandwidth data transfer between the CPU and GPUs. We minimize the number of intermediate 3D arrays to save GPU global memory by optimizing the algorithm implementation. This allows us to handle a much larger input data set. In reducing the runtime of our GPU implementation, the coalescing of global memory access and the 3D CUFFT library provide the greatest performance improvements. Numerical results show that our scheme is 3-29 times faster than the optimized CPU-based implementation, depending on the size of the 3D data set. Our GPU computing scheme allows a significant reduction of computational cost and facilitates 3D POCS interpolation for practical applications.
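
    The POCS iteration itself is compact; below is a 2D NumPy sketch (a GPU port replaces the FFT pair with CUFFT calls and keeps the arrays resident in device memory, as the abstract describes). The linearly decaying threshold schedule is one common variant, not necessarily the authors' choice:

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=50, p_max=0.99, p_min=0.01):
    """POCS trace interpolation sketch (2D slice for brevity).

    data: observed section with zeros at missing traces; mask: 1 where observed.
    Each iteration thresholds the Fourier spectrum (sparsity projection) and
    re-inserts the observed traces (data-consistency projection)."""
    x = data.copy()
    for k in range(n_iter):
        spec = np.fft.fft2(x)
        thresh = np.abs(spec).max() * (p_max - (p_max - p_min) * k / (n_iter - 1))
        spec[np.abs(spec) < thresh] = 0.0        # keep the strongest coefficients
        x = np.fft.ifft2(spec).real
        x = data * mask + x * (1 - mask)         # restore the observed samples
    return x

# Synthetic dipping event with ~30% of the traces (columns) removed
x0, y0 = np.meshgrid(np.arange(64), np.arange(64))
section = np.sin(0.3 * x0 + 0.2 * y0)
mask = (np.random.default_rng(0).random(64) > 0.3).astype(float)[None, :] * np.ones((64, 1))
recovered = pocs_interpolate(section * mask, mask)
```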

  20. Characteristics of four SPE groups with different origins and acceleration processes

    NASA Astrophysics Data System (ADS)

    Kim, R.-S.; Cho, K.-S.; Lee, J.; Bong, S.-C.; Joshi, A. D.; Park, Y.-D.

    2015-09-01

    Solar proton events (SPEs) can be categorized into four groups based on their associations with flares or CMEs, inferred from onset timings as well as acceleration patterns using multienergy observations. In this study, we have investigated whether there are any typical characteristics of associated events and acceleration sites in each group, using 42 SPEs from 1997 to 2012. We find the following: (i) if the proton acceleration starts from a lower energy, a SPE has a higher chance to be a strong event (> 5000 particle flux units (pfu)) even if its associated flare and/or CME are not so strong. The only difference between the SPEs associated with flares and CMEs is the location of the acceleration site. (ii) For the former (Group A), the sites are very low (~1 Rs) and close to the western limb, while the latter (Group C) have relatively higher (mean = 6.05 Rs) and wider acceleration sites. (iii) When the proton acceleration starts from the higher energy (Group B), a SPE tends to be a relatively weak event (< 1000 pfu), although its associated CME is relatively stronger than in the previous groups. (iv) The SPEs characterized by simultaneous acceleration across the whole energy range within 10 min (Group D) tend to show the weakest proton flux (mean = 327 pfu) in spite of strong associated eruptions. Based on these results, we suggest that the different characteristics of SPEs are mainly due to the different conditions of magnetic connectivity and particle density, which change with longitude and height as well as with their origin.

  1. The Maneuver Planning Process for the Microwave Anisotropy Probe (MAP) Mission

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Andrews, Stephen; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Microwave Anisotropy Probe (MAP) was successfully launched from Kennedy Space Center's Eastern Range on June 30, 2001. MAP will measure the cosmic microwave background as a follow-up to NASA's Cosmic Background Explorer (COBE) mission from the early 1990's. MAP will take advantage of its mission orbit about the Sun-Earth/Moon L2 Lagrangian point to produce results with higher resolution, sensitivity, and accuracy than COBE. A strategy comprising highly eccentric phasing loops with a lunar gravity assist was utilized to provide a zero-cost insertion into a Lissajous orbit about L2. Maneuvers were executed at the phasing-loop perigees to correct for launch vehicle errors and to target the lunar gravity assist so that a suitable orbit at L2 was achieved. This paper discusses the maneuver planning process for designing, verifying, and executing MAP's maneuvers, including the tools used and how they interacted. The maneuver planning process was iterative and crossed several disciplines, including trajectory design, attitude control, propulsion, power, thermal, communications, and ground planning. Several commercial, off-the-shelf (COTS) packages were used to design the maneuvers. STK/Astrogator was used as the trajectory design tool. All maneuvers were designed in Astrogator to ensure that the Moon was met at the correct time and orientation to provide the energy needed to achieve an orbit about L2. The MathWorks MATLAB product was used to develop a tool for generating command quaternions. The command quaternion table (CQT) was used to drive the attitude during the perigee maneuvers. The MatrixX toolset, originally written by Integrated Systems, Inc. and now distributed by MathWorks, was used to create HiFi, a high-fidelity simulator of the MAP attitude control system. HiFi was used to test the CQT and to make sure that all attitude requirements were met during the maneuver. In addition, all ACS data plotting and output were generated in

  2. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster system that uses parallel processing to implement the MeanShift segmentation algorithm on the MapReduce model. This not only preserves the quality of remote sensing image segmentation but also improves segmentation speed and better meets real-time requirements. The MapReduce-based parallel MeanShift segmentation algorithm is thus of practical significance and realizable value.
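
    To make the map/reduce decomposition concrete, here is a runnable miniature using Python's multiprocessing in place of Hadoop: the mapper shifts one chunk of points toward the local mean under a flat kernel, and the reducer concatenates the chunks for the next iteration. The bandwidth, data, and worker count are illustrative assumptions:

```python
import numpy as np
from functools import reduce
from multiprocessing import Pool

BANDWIDTH = 0.5

def mean_shift_map(args):
    """Mapper: shift each point in a chunk toward the mean of all reference
    points within BANDWIDTH (flat kernel)."""
    chunk, data = args
    shifted = np.empty_like(chunk)
    for i, p in enumerate(chunk):
        d = np.linalg.norm(data - p, axis=1)
        shifted[i] = data[d < BANDWIDTH].mean(axis=0)
    return shifted

def mean_shift_iteration(points, n_workers=4):
    """One MapReduce-style iteration: map over chunks, reduce by stacking.
    A Hadoop implementation distributes the same computation over nodes."""
    chunks = np.array_split(points, n_workers)
    with Pool(n_workers) as pool:
        parts = pool.map(mean_shift_map, [(c, points) for c in chunks])
    return reduce(lambda a, b: np.vstack([a, b]), parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(2, 0.2, (50, 2))])
    for _ in range(5):
        pts = mean_shift_iteration(pts)
    # points collapse toward the two cluster modes
```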

  3. Digital mapping of side-scan sonar data with the Woods Hole Image Processing System software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution sidescan sonar data. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for pre-processing sidescan sonar data. To extend the capabilities of the UNIX-based programs, digital mapping techniques have been developed. This report describes the initial development of an automated digital mapping procedure. Included is a description of the programs and steps required to complete the digital mosaicking on a UNIX-based computer system, and a comparison of techniques that the user may wish to select.

  4. Mapping the nursing process: a new approach for understanding the work of nursing.

    PubMed

    Potter, Patricia; Boxerman, Stuart; Wolf, Laurie; Marshall, Jessica; Grayson, Deborah; Sledge, Jennifer; Evanoff, Bradley

    2004-02-01

    The work of nursing is nonlinear and involves complex reasoning and clinical decision making. The use of human factors engineering (HFE) as a sole means for analyzing the work of nursing is problematic. Combining HFE analysis with qualitative observation has created a new methodology for mapping the nursing process. A cognitive pathway offers a new perspective for understanding the work of nursing and analyzing how disruptions to the nursing process may contribute to errors in the acute care environment.

  5. Processing techniques for the production of an experimental computer-generated shaded-relief map

    USGS Publications Warehouse

    Judd, Damon D.

    1986-01-01

    The data consisted of forty-eight 1° by 1° blocks of resampled digital elevation model (DEM) data. These data were digitally mosaicked and assigned colors based on intervals of elevation values. The color-coded data set was then used to create a shaded-relief image that was photographically composited with cartographic line information to produce a shaded-relief map. The majority of the processing was completed at the National Mapping Division EROS Data Center in Sioux Falls, South Dakota.

  6. The Use of Multiple Data Sources in the Process of Topographic Maps Updating

    NASA Astrophysics Data System (ADS)

    Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.

    2016-06-01

    The methods used in the process of updating maps have evolved and become more complex, especially upon the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agencies' portals. Images stored in the archives of satellite missions such as Sentinel, Landsat and others can be downloaded free of charge. The main advantages are the large coverage area and rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research of these images on the 1:50,000 scale map. DEMs that are globally available could represent an appropriate input for watershed delineation and stream network generation, which can be used as support for updating the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and Digital Terrain Models are the main products that can be used for feature extraction and updating. On the other side, the use of georeferenced analogue basemaps represents a significant addition to the process. Concerning the thematic maps, the classic representation of the terrain by contour lines derived from the DTM remains the best method of depicting the surface of the earth on a map; nevertheless, correlation with other layers such as hydrography is mandatory. In the context of the current national coverage of the Digital Terrain Model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as TOPRO5 and at the

  7. Flow Behavior and Processing Maps of a Low-Carbon Steel During Hot Deformation

    NASA Astrophysics Data System (ADS)

    Yang, Xiawei; Li, Wenya

    2015-12-01

    The hot isothermal compression tests of a low-carbon steel containing 0.20 pct C were performed in the temperature range of 973 K to 1273 K (700 °C to 1000 °C) and in the strain rate range of 0.001 to 1 s⁻¹. The results show that the flow stress depends on deformation temperature and strain rate, decreasing with increasing temperature and increasing with increasing strain rate. The flow stresses predicted by Arrhenius-type and artificial neural network models were both in good agreement with the experimental data, while the prediction accuracy of the latter is better than that of the former. A processing map can be obtained by superimposing an instability map on a power dissipation map. Finally, an FEM model was successfully established to simulate the compression test process of this steel. The processing map combined with the FEM model can be very beneficial for solving the problems of residual stress, distortion, and flow instability of components.
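
    Processing maps of this kind rest on the dynamic materials model: the strain-rate sensitivity m = ∂ln σ/∂ln ε̇ gives the power-dissipation efficiency η = 2m/(m+1), and the Prasad instability parameter ξ = ∂ln(m/(m+1))/∂ln ε̇ + m < 0 marks unstable flow. The sketch below computes both on a small stress grid; the formulas are the standard ones for such maps, and the data values are illustrative rather than the paper's measurements:

```python
import numpy as np

def processing_map(log10_strain_rates, stress):
    """Compute efficiency eta and instability xi on a grid of flow stresses
    (rows: temperatures, cols: log10 strain rates), using finite differences
    for m = d(ln sigma)/d(ln eps_dot)."""
    ln_rate = np.log(10.0) * np.asarray(log10_strain_rates, dtype=float)
    ln_sigma = np.log(np.asarray(stress, dtype=float))
    m = np.gradient(ln_sigma, ln_rate, axis=1)          # strain-rate sensitivity
    eta = 2.0 * m / (m + 1.0)                           # power-dissipation efficiency
    xi = np.gradient(np.log(m / (m + 1.0)), ln_rate, axis=1) + m
    return eta, xi                                      # xi < 0 -> flow instability

# Illustrative flow stresses (MPa) at three temperatures x three strain rates
stress = np.array([[45.0, 70.0, 110.0],
                   [30.0, 48.0, 75.0],
                   [20.0, 33.0, 52.0]])
eta, xi = processing_map([-3.0, -1.5, 0.0], stress)
```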

  8. Lightweight Hyperspectral Mapping System and a Novel Photogrammetric Processing Chain for UAV-based Sensing

    NASA Astrophysics Data System (ADS)

    Suomalainen, Juha; Franke, Jappe; Anders, Niels; Iqbal, Shahzad; Wenting, Philip; Becker, Rolf; Kooistra, Lammert

    2014-05-01

    We have developed a lightweight Hyperspectral Mapping System (HYMSY) and a novel processing chain for UAV-based mapping. The HYMSY consists of a custom pushbroom spectrometer (range 450-950 nm, FWHM 9 nm, ~20 lines/s, 328 pixels/line), a consumer camera (collecting a 16 MPix raw image every 2 seconds), a GPS-Inertial Navigation System (GPS-INS), and synchronization and data storage units. The weight of the system at take-off is 2.0 kg, allowing us to mount it on a relatively small octocopter. The novel processing chain exploits photogrammetry in the georectification process of the hyperspectral data. In the first stage, the photos are processed in photogrammetric software, producing a high-resolution RGB orthomosaic, a Digital Surface Model (DSM), and photogrammetric UAV/camera position and attitude at the moment of each photo. These photogrammetric camera positions are then used to enhance the internal accuracy of the GPS-INS data. The enhanced GPS-INS data are then used to project the hyperspectral data over the photogrammetric DSM, producing a georectified end product. The presented photogrammetric processing chain allows fully automated georectification of hyperspectral data using a compact GPS-INS unit while still producing, in UAV use, higher georeferencing accuracy than would be possible with the traditional processing method. During 2013, we operated HYMSY on 150+ octocopter flights at 60+ sites or days. On a typical flight we produced, for a 2-10 ha area, an RGB orthoimage mosaic at 1-5 cm resolution, a DSM at 5-10 cm resolution, and a hyperspectral datacube at 10-50 cm resolution. The targets have mostly been vegetated, including potatoes, wheat, sugar beets, onions, tulips, coral reefs, and heathlands. In this poster we present the Hyperspectral Mapping System and the photogrammetric processing chain with some of our first mapping results.

  9. Topological data analysis of contagion maps for examining spreading processes on networks.

    PubMed

    Taylor, Dane; Klimm, Florian; Harrington, Heather A; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A; Mucha, Peter J

    2015-01-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges-for example, due to airline transportation or communication media-allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct 'contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.
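
    The construction sketched above — seeding a contagion at each node and using activation times as coordinates — can be illustrated in a few lines of numpy. This is a generic sketch of the idea from the abstract, not the authors' code: the Watts-style threshold dynamics, the seeding rule and the parameter values are all assumptions. Embedding the returned point cloud with a manifold-learning method would then expose the low-dimensional structure the paper analyses.

        import numpy as np

        def contagion_map(A, threshold=0.3, t_max=50):
            # One threshold contagion per seed node: a node activates once
            # the active fraction of its neighbours reaches the threshold.
            # times[i, j] = step at which node j activated in contagion i
            # (t_max if never). Sketch only.
            n = A.shape[0]
            deg = np.maximum(A.sum(axis=1), 1)
            times = np.full((n, n), float(t_max))
            for seed in range(n):
                active = A[seed].astype(bool)
                active[seed] = True
                times[seed, active] = 0.0
                for t in range(1, t_max):
                    frac = A[:, active].sum(axis=1) / deg
                    newly = ~active & (frac >= threshold)
                    if not newly.any():
                        break
                    active |= newly
                    times[seed, newly] = float(t)
            return times.T   # row j = point-cloud coordinates of node j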

  10. Topological data analysis of contagion maps for examining spreading processes on networks

    NASA Astrophysics Data System (ADS)

    Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.

    2015-07-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges--for example, due to airline transportation or communication media--allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct `contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.

  11. Evaluating different mapping approaches of dominant runoff processes with similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2015-04-01

    The identification of landscape units with similar hydrologic response behaviour is crucial for runoff prediction in ungauged basins. An established method for catchment classification is based on the dominant runoff process (DRP) concept (Grayson & Blöschl, 2000). Different DRP-mapping approaches exist and differ in several aspects, such as the time and data required for mapping. On the one hand, manual approaches based on intensive field investigations and expert knowledge are reliable but time-consuming. On the other hand, GIS-based approaches are easier to realize but rely on simplifications that restrict their application range. It is therefore important to investigate to what extent these assumptions are transferable to other catchments. In this study, different GIS-based mapping approaches (Schmocker-Fackel et al., 2007; Müller et al., 2009; Gharari et al., 2011) were used to classify the DRPs of two catchments on the Swiss Plateau and were compared to manually derived DRP maps elaborated using the rule-based approach by Scherrer & Naef (2003). Similarity measures such as Mapcurves (Hargrove et al., 2006) and fuzzy kappa statistics (Hagen-Zanker, 2009), as well as a categorical comparison, were applied. Furthermore, the different DRP-mapping approaches were evaluated through synthetic runoff simulations with an adapted version of the well-established hydrological model PREVAH (Viviroli et al., 2009). The results show that the mapping approaches cannot be applied unconditionally to catchments with arbitrary characteristics. Generally, all approaches represent well the areas where subsurface flow dominates, whereas they have difficulties mapping very fast responding and non-contributing areas.

  12. Insights into siloxane removal from biogas in biotrickling filters via process mapping-based analysis.

    PubMed

    Soreanu, Gabriela

    2016-03-01

    Data process mapping using response surface methodology (RSM)-based computational techniques is performed in this study for the diagnosis of a laboratory-scale biotrickling filter applied to siloxane (i.e. octamethylcyclotetrasiloxane (D4) and decamethylcyclopentasiloxane (D5)) removal from biogas. A mathematical model describing the process performance (i.e. Si removal efficiency, %) was obtained as a function of key operating parameters (e.g. biogas flow rate, D4 and D5 concentrations). The contour plots and the response surfaces generated for the obtained objective function indicate a minimization trend in siloxane removal performance; however, a maximum performance of approximately 60% Si removal efficiency was recorded. Analysis of the process mapping results provides indicators for improving the biological system performance.

  13. Mapping knowledge translation and innovation processes in Cancer Drug Development: the case of liposomal doxorubicin.

    PubMed

    Fajardo-Ortiz, David; Duran, Luis; Moreno, Laura; Ochoa, Hector; Castaño, Victor M

    2014-09-03

    We explored how the knowledge translation and innovation processes are structured when they result in innovations, as in the case of liposomal doxorubicin research. In order to map the processes, a literature network analysis was made through Cytoscape, and semantic analysis was performed by GOPubmed, which is based on the controlled vocabularies MeSH (Medical Subject Headings) and GO (Gene Ontology). We found clusters related to different stages of technological development (invention, innovation and imitation) and of the knowledge translation process (preclinical, translational and clinical research), and we were able to map the historic emergence of Doxil as a paradigmatic nanodrug. This research could provide a powerful methodological tool for decision-making and innovation management in drug delivery research.

  14. Insights into siloxane removal from biogas in biotrickling filters via process mapping-based analysis.

    PubMed

    Soreanu, Gabriela

    2016-03-01

    Data process mapping using response surface methodology (RSM)-based computational techniques is performed in this study for the diagnosis of a laboratory-scale biotrickling filter applied to siloxane (i.e. octamethylcyclotetrasiloxane (D4) and decamethylcyclopentasiloxane (D5)) removal from biogas. A mathematical model describing the process performance (i.e. Si removal efficiency, %) was obtained as a function of key operating parameters (e.g. biogas flow rate, D4 and D5 concentrations). The contour plots and the response surfaces generated for the obtained objective function indicate a minimization trend in siloxane removal performance; however, a maximum performance of approximately 60% Si removal efficiency was recorded. Analysis of the process mapping results provides indicators for improving the biological system performance. PMID:26745382

  15. A working environment for digital planetary data processing and mapping using ISIS and GRASS GIS

    NASA Astrophysics Data System (ADS)

    Frigeri, Alessandro; Hare, Trent; Neteler, Markus; Coradini, Angioletta; Federico, Costanzo; Orosei, Roberto

    2011-09-01

    Since the beginning of planetary exploration, mapping has been fundamental to summarizing the observations returned by scientific missions. Sensor-based mapping has been used to highlight specific features of planetary surfaces by means of processing. Interpretative mapping makes use of instrumental observations to produce thematic maps that summarize observations of actual data into a specific theme. Geologic maps, for example, are thematic interpretative maps that focus on the representation of materials and processes and their relative timing. The advancements in technology of the last 30 years have allowed us to develop specialized systems where the mapping process can be carried out entirely in the digital domain. The spread of networked computers on a global scale has allowed the rapid propagation of software and digital data, such that every researcher can now access digital mapping facilities on their desktop. The efforts to keep planetary mission data accessible to the scientific community have led to the creation of standardized digital archives that facilitate access to different datasets by software capable of processing these data from the raw level to the map-projected one. Geographic Information Systems (GIS) were developed to optimize the storage, analysis, and retrieval of spatially referenced Earth-based environmental geodata; over the last decade these computer programs have become popular among the planetary science community, and recent mission data are starting to be distributed in formats compatible with these systems. Among all the systems developed for the analysis of planetary and spatially referenced data, we have created a working environment combining two software suites that have similar characteristics in their modular design, their development history, their policy of distribution and their support system. The first, the Integrated Software for Imagers and Spectrometers (ISIS) developed by the United States Geological Survey

  16. A working environment for digital planetary data processing and mapping using ISIS and GRASS GIS

    USGS Publications Warehouse

    Frigeri, A.; Hare, T.; Neteler, M.; Coradini, A.; Federico, C.; Orosei, R.

    2011-01-01

    Since the beginning of planetary exploration, mapping has been fundamental to summarizing the observations returned by scientific missions. Sensor-based mapping has been used to highlight specific features of planetary surfaces by means of processing. Interpretative mapping makes use of instrumental observations to produce thematic maps that summarize observations of actual data into a specific theme. Geologic maps, for example, are thematic interpretative maps that focus on the representation of materials and processes and their relative timing. The advancements in technology of the last 30 years have allowed us to develop specialized systems where the mapping process can be carried out entirely in the digital domain. The spread of networked computers on a global scale has allowed the rapid propagation of software and digital data, such that every researcher can now access digital mapping facilities on their desktop. The efforts to keep planetary mission data accessible to the scientific community have led to the creation of standardized digital archives that facilitate access to different datasets by software capable of processing these data from the raw level to the map-projected one. Geographic Information Systems (GIS) were developed to optimize the storage, analysis, and retrieval of spatially referenced Earth-based environmental geodata; over the last decade these computer programs have become popular among the planetary science community, and recent mission data are starting to be distributed in formats compatible with these systems. Among all the systems developed for the analysis of planetary and spatially referenced data, we have created a working environment combining two software suites that have similar characteristics in their modular design, their development history, their policy of distribution and their support system. The first, the Integrated Software for Imagers and Spectrometers (ISIS) developed by the United States Geological Survey

  17. [Effect of pilot UASB-SFSBR-MAP process for the large scale swine wastewater treatment].

    PubMed

    Wang, Liang; Chen, Chong-Jun; Chen, Ying-Xu; Wu, Wei-Xiang

    2013-03-01

    In this paper, a treatment process consisting of an upflow anaerobic sludge blanket (UASB), a step-fed sequencing batch reactor (SFSBR) and a magnesium ammonium phosphate (MAP) precipitation reactor was built to treat large-scale swine wastewater, aiming to overcome drawbacks of the conventional anaerobic-aerobic treatment process and the SBR treatment process, such as low denitrification efficiency, high operating costs and high nutrient losses. Based on this treatment process, a pilot plant was constructed. The experimental results showed that the removal efficiencies of COD, NH4(+)-N and TP reached 95.1%, 92.7% and 88.8%, respectively; the recovery rates of NH4(+)-N and TP by the MAP process reached 23.9% and 83.8%; and the effluent quality was superior to the discharge standard of pollutants for livestock and poultry breeding (GB 18596-2001), with mass concentrations of COD, TN, NH4(+)-N, TP and SS not higher than 135, 116, 43, 7.3 and 50 mg x L(-1), respectively. The process developed was reliable, maintained a self-balance of carbon source and alkalinity, and achieved high nutrient recovery efficiency, while its operating cost was equal to that of the traditional anaerobic-aerobic treatment process. The treatment process therefore has high application and dissemination value and is fit for the treatment of large-scale swine wastewater in China.

  18. An Approach to Optimize Size Parameters of Forging by Combining Hot-Processing Map and FEM

    NASA Astrophysics Data System (ADS)

    Hu, H. E.; Wang, X. Y.; Deng, L.

    2014-11-01

    The size parameters of a 6061 aluminum alloy rib-web forging were optimized by using a hot-processing map and the finite element method (FEM) based on high-temperature compression data. The results show that the stress level of the alloy can be represented by a Zener-Hollomon parameter in a hyperbolic sine-type equation with a hot deformation activation energy of 343.7 kJ/mol. Dynamic recovery and dynamic recrystallization proceeded concurrently during high-temperature deformation of the alloy. The optimal hot-processing parameters for the alloy, corresponding to the peak efficiency value of 0.42, are 753 K and 0.001 s-1. The instability domain occurs at deformation temperatures lower than 653 K. FEM is a viable method for validating the hot-processing map in actual manufacturing by analyzing the effect of corner radius, rib width, and web thickness on the workability of the rib-web forging of the alloy. Size parameters of die forgings can be optimized conveniently by combining the hot-processing map and FEM.
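
    For reference, the hyperbolic sine-type constitutive relation invoked above is conventionally written as follows (standard form from the hot-working literature; A, alpha and n are fitted material constants, R the gas constant, and Q = 343.7 kJ/mol for this alloy):

        Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right) = A\,\left[\sinh(\alpha\sigma)\right]^{n}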

  19. Suitability of aero-geophysical methods for generating conceptual soil maps and their use in the modeling of process-related susceptibility maps

    NASA Astrophysics Data System (ADS)

    Tilch, Nils; Römer, Alexander; Jochum, Birgit; Schattauer, Ingrid

    2014-05-01

    In past years, large-scale disasters occurred several times in Austria, characterized not only by flooding but also by numerous shallow landslides and debris flows. For the purpose of risk prevention, national and regional authorities therefore require more objective and realistic maps with information about the spatially variable susceptibility of the geosphere to hazard-relevant gravitational mass movements. Many proven methods and models (e.g. neural networks, logistic regression, heuristic methods) are available to create such process-related susceptibility maps (e.g. for shallow gravitational mass movements in soil). However, numerous national and international studies show that the suitability of a method depends on the quality of the process data and parameter maps (e.g. Tilch & Schwarz 2011, Schwarz & Tilch 2011). In this case, it is important that maps with detailed and process-oriented information on the process-relevant geosphere also be considered. One major disadvantage is that area-wide process-relevant information exists only occasionally. Similarly, in Austria, soil maps are often available only for treeless areas. However, in almost all previous studies, whatever geological and geotechnical maps happened to exist were used, often specially adapted to the issues and objectives at hand. This is one reason why conceptual soil maps must very often be derived from geological maps containing only hard-rock information, which are often of rather low quality. Based on these maps, for example, adjacent areas of different geological composition and process-relevant physical properties are delineated razor-sharp, which appears quite rarely in nature. In order to obtain more realistic information about the spatial variability of the process-relevant geosphere (soil cover) and its physical properties, aero-geophysical measurements (electromagnetic, radiometric) carried out by helicopter in different regions of Austria were interpreted

  20. Mapping the particle acceleration in the cool core of the galaxy cluster RX J1720.1+2638

    SciTech Connect

    Giacintucci, S.; Markevitch, M.; Brunetti, G.; Venturi, T.; ZuHone, J. A.

    2014-11-01

    We present new deep, high-resolution radio images of the diffuse minihalo in the cool core of the galaxy cluster RX J1720.1+2638. The images were obtained with the Giant Metrewave Radio Telescope at 317, 617, and 1280 MHz and with the Very Large Array at 1.5, 4.9, and 8.4 GHz, with angular resolutions ranging from 1'' to 10''. This represents the best radio spectral and imaging data set for any minihalo. Most of the radio flux of the minihalo arises from a bright central component with a maximum radius of ∼80 kpc. A fainter tail of emission extends out from the central component to form a spiral-shaped structure with a length of ∼230 kpc, seen at frequencies of 1.5 GHz and below. We find an indication of a possible steepening of the total radio spectrum of the minihalo at high frequencies. Furthermore, a spectral index image shows that the spectrum of the diffuse emission steepens with increasing distance along the tail. A striking spatial correlation is observed between the minihalo emission and two cold fronts visible in the Chandra X-ray image of this cool core. These cold fronts confine the minihalo, as also seen in numerical simulations of minihalo formation by sloshing-induced turbulence. All these observations favor the hypothesis that the radio-emitting electrons in cluster cool cores are produced by turbulent re-acceleration.

  1. Journey to the Edges: Social Structures and Neural Maps of Intergroup Processes

    PubMed Central

    Fiske, Susan T.

    2013-01-01

    This article explores the boundaries of the intellectual map of intergroup processes, going to the macro (social structure) boundary and the micro (neural systems) boundary. Both are illustrated with my own and others' work on social structures and on neural structures related to intergroup processes. Analyzing the impact of social structures on intergroup processes led to insights about distinct forms of sexism and underlies current work on forms of ageism. The stereotype content model also starts with the social structure of intergroup relations (interdependence and status) and predicts images, emotions, and behaviors. Social structure has much to offer the social psychology of intergroup processes. At the other, less explored boundary, social neuroscience addresses the effects of social contexts on neural systems relevant to intergroup processes. Both social structural and neural analyses circle back to traditional social psychology as converging indicators of intergroup processes. PMID:22435843

  2. Molecular dynamics-based virtual screening: accelerating the drug discovery process by high-performance computing.

    PubMed

    Ge, Hu; Wang, Yu; Li, Chanjuan; Chen, Nanhao; Xie, Yufang; Xu, Mengyan; He, Yingyan; Gu, Xinchun; Wu, Ruibo; Gu, Qiong; Zeng, Liang; Xu, Jun

    2013-10-28

    High-performance computing (HPC) has become a strategic technology in a number of countries. One hypothesis is that HPC can accelerate biopharmaceutical innovation. Our experimental data demonstrate that HPC can significantly accelerate biopharmaceutical innovation by employing molecular dynamics-based virtual screening (MDVS). Without HPC, MDVS for a 10K-compound library with tens of nanoseconds of MD simulations requires years of computer time. In contrast, a state-of-the-art HPC system can be 600 times faster than an eight-core PC server in screening a typical drug target (which contains about 40K atoms). Also, careful design of the GPU/CPU architecture can reduce HPC costs. However, the communication cost of parallel computing is a bottleneck that remains the main limit on further virtual screening improvements for drug innovation.

  3. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two-dimensional obstacle map space, using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation of the algorithm utilizes the Intel Performance Primitives and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an "intelligent robot" to "see" for path planning and obstacle avoidance.
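
    The histogram-reduction idea can be sketched generically: collapse each column of the disparity image into counts over disparity bins, and keep the bins with enough support as obstacle cells. The numpy sketch below illustrates that idea only; the bin count and threshold are assumptions, and this is not the published XH-map implementation.

        import numpy as np

        def xh_obstacle_map(disparity, d_bins=64, min_count=12):
            # Collapse each image column of a disparity map into a histogram
            # over disparity bins; bins with enough support become obstacle
            # cells at that (disparity, column) location. Disparity is
            # inversely proportional to range, so rows map to distance.
            h, w = disparity.shape
            edges = np.linspace(disparity.min(), disparity.max(), d_bins + 1)
            grid = np.zeros((d_bins, w), dtype=bool)
            for col in range(w):
                counts, _ = np.histogram(disparity[:, col], bins=edges)
                grid[:, col] = counts >= min_count
            return grid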

  4. Development of a new flux map processing code for moveable detector system in PWR

    SciTech Connect

    Li, W.; Lu, H.; Li, J.; Dang, Z.; Zhang, X.

    2013-07-01

    This paper presents an introduction to the development of the flux map processing code MAPLE, developed by the China Nuclear Power Technology Research Institute (CNPPJ), China Guangdong Nuclear Power Group (CGN). The method used to obtain the three-dimensional "measured" power distribution from the measurement signal is also described. Three methods, namely the Weight Coefficient Method (WCM), the Polynomial Expansion Method (PEM) and the Thin Plate Spline (TPS) method, have been applied to fit the deviation between measured and predicted results on the two-dimensional radial plane. The measured flux map data of the LINGAO nuclear power plant (NPP) are processed using MAPLE as a test case to compare the effectiveness of the three methods, combined with the 3D neutronics code COCO. Assembly power distribution results show that the MAPLE results are reasonable and satisfactory. More verification and validation of the MAPLE code will be carried out in the future. (authors)
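
    As a generic illustration of the thin plate spline option, scipy's RBF interpolator can fit a smooth deviation surface at the instrumented positions and evaluate it core-wide. The positions and deviation values below are synthetic placeholders; MAPLE's actual fitting procedure is not described here, so this is only a sketch of the technique.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)
        xy_meas = rng.uniform(0.0, 1.0, (38, 2))  # instrumented positions (synthetic)
        dev = rng.normal(0.0, 0.02, 38)           # measured-minus-predicted deviation (synthetic)

        # Thin plate spline fit of the deviation surface.
        tps = RBFInterpolator(xy_meas, dev, kernel='thin_plate_spline')

        xy_all = rng.uniform(0.0, 1.0, (157, 2))  # all assembly positions (synthetic)
        correction = tps(xy_all)                  # deviation interpolated over the whole core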

  5. Spotlight-Mode Synthetic Aperture Radar Processing for High-Resolution Lunar Mapping

    NASA Technical Reports Server (NTRS)

    Harcke, Leif; Weintraub, Lawrence; Yun, Sang-Ho; Dickinson, Richard; Gurrola, Eric; Hensley, Scott; Marechal, Nicholas

    2010-01-01

    During the 2008-2009 year, the Goldstone Solar System Radar was upgraded to support radar mapping of the lunar poles at 4 m resolution. The finer resolution of the new system and the accompanying migration through resolution cells called for spotlight, rather than delay-Doppler, imaging techniques. A new pre-processing system supports fast-time Doppler removal and motion compensation to a point. Two spotlight imaging techniques, which compensate for phase errors due to (i) out-of-focus-plane motion of the radar and (ii) local topography, have been implemented and tested. One is based on the polar format algorithm followed by a unique autofocus technique; the other is a full bistatic time-domain backprojection technique. The processing system yields imagery of the specified resolution. Products enabled by this new system include topographic mapping through radar interferometry and change detection techniques (amplitude and coherent change) for geolocation of the NASA LCROSS mission impact site.

  6. Ab initio nonadiabatic dynamics of multichromophore complexes: a scalable graphical-processing-unit-accelerated exciton framework.

    PubMed

    Sisto, Aaron; Glowacki, David R; Martinez, Todd J

    2014-09-16

    ("fragmenting") a molecular system and then stitching it back together. In this Account, we address both of these problems, the first by using graphical processing units (GPUs) and electronic structure algorithms tuned for these architectures and the second by using an exciton model as a framework in which to stitch together the solutions of the smaller problems. The multitiered parallel framework outlined here is aimed at nonadiabatic dynamics simulations on large supramolecular multichromophoric complexes in full atomistic detail. In this framework, the lowest tier of parallelism involves GPU-accelerated electronic structure theory calculations, for which we summarize recent progress in parallelizing the computation and use of electron repulsion integrals (ERIs), which are the major computational bottleneck in both density functional theory (DFT) and time-dependent density functional theory (TDDFT). The topmost tier of parallelism relies on a distributed memory framework, in which we build an exciton model that couples chromophoric units. Combining these multiple levels of parallelism allows access to ground and excited state dynamics for large multichromophoric assemblies. The parallel excitonic framework is in good agreement with much more computationally demanding TDDFT calculations of the full assembly. PMID:25186064

  7. Characterization of Hot Deformation Behavior of Hastelloy C-276 Using Constitutive Equation and Processing Map

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Zhang, Liwen; Shen, Wenfei; Li, Mengfei; Gu, Sendong

    2015-01-01

    In order to clarify the microstructural evolution and workability of Hastelloy C-276 during hot forming to obtain excellent mechanical properties, the hot deformation behavior of this superalloy was characterized. Cylindrical specimens were isothermally compressed in the temperature range of 1000-1200 °C and the strain rate range of 0.001-5 s-1 on a Gleeble 1500 thermal-mechanical simulator. The flow curves and microstructural investigation indicate that dynamic recrystallization is the prime softening mechanism under the evaluated deformation conditions. The constitutive equation was presented as a function of the deformation temperature, strain rate, and strain, and the deformation activation energy was about 450 kJ/mol. Processing maps based on the dynamic materials model were established at strains of 0.2, 0.4, 0.6, 0.8, and 1.0, and the processing map at a strain of 1.0 shows good correspondence with the microstructural observations. The domains of the processing map in which the efficiency of power dissipation (η) is higher than 0.25 correspond to pronounced dynamic recrystallization and are suggested to be the optimum working areas for Hastelloy C-276.

  8. Mapping of submerged aquatic vegetation with a physically based process chain

    NASA Astrophysics Data System (ADS)

    Heege, Thomas; Bogner, Anke; Pinnel, Nicole

    2004-02-01

    Mapping submerged vegetation is of prime importance for the ecological evaluation of an entire lake. Remote sensing techniques are efficient for such mapping tasks if the retrieval algorithms and processing methods are robust and largely independent of additional ground-truth measurements. The Modular Inversion Program (MIP) follows this concept. It is a processing tool designed for the recovery of hydro-biological parameters from multi- and hyperspectral remote sensing data. The architecture of the program consists of physical inversion schemes that derive bio-physical parameters from the radiance signal measured at the sensor. Program modules exist for the retrieval of aerosols, sun glitter correction, atmospheric correction, and the retrieval of water constituents, among others. For the purpose of mapping the bottom coverage in optically shallow waters, two modules have been added to MIP. The first module calculates the bottom reflectance using the subsurface reflectance, the depth and an approximation of the water constituent concentrations as input. The second module fractionates the bottom reflectance into three endmembers by linear unmixing. The endmembers are specific reflectance spectra of bottom sediments, low-growing macrophytes (Characeae) and tall macrophytes such as Potamogeton perfoliatus & P. pectinatus. The processing system has been tested with data collected by the multi-spectral airborne scanner Daedalus AADS1268 at Lake Constance, Germany, for multi-temporal analysis.
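
    The linear-unmixing step described above amounts to a small constrained least-squares problem. The sketch below is a generic illustration, not the MIP implementation; the sum-to-one constraint and the clipping of fractions are assumptions of the sketch.

        import numpy as np

        def unmix(bottom_refl, endmembers):
            # Solve bottom_refl ~= endmembers @ fractions in the least-squares
            # sense, with a sum-to-one constraint appended as an extra row.
            # endmembers: (n_bands, 3) spectra for sediment, Characeae and
            # tall macrophytes; bottom_refl: (n_bands,) retrieved spectrum.
            n = endmembers.shape[1]
            A = np.vstack([endmembers, np.ones((1, n))])
            b = np.append(bottom_refl, 1.0)
            fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
            return np.clip(fractions, 0.0, 1.0)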

  9. Optimization of the accelerated curing process of concrete using a fibre Bragg grating-based control system and microwave technology

    NASA Astrophysics Data System (ADS)

    Fabian, Matthias; Jia, Yaodong; Shi, Shi; McCague, Colum; Bai, Yun; Sun, Tong; Grattan, Kenneth T. V.

    2016-05-01

    In this paper, an investigation into the suitability of using fibre Bragg gratings (FBGs) for monitoring the accelerated curing process of concrete in a microwave heating environment is presented. In this approach, the temperature data provided by the FBGs are used to regulate automatically the microwave power so that a pre-defined temperature profile is maintained to optimize the curing process, achieving early strength values comparable to those of conventional heat-curing techniques but with significantly reduced energy consumption. The immunity of the FBGs to interference from the microwave radiation used ensures stable readings in the targeted environment, unlike conventional electronic sensor probes.

  10. PREFACE: 3rd International Workshop on Materials Analysis and Processing in Magnetic Fields (MAP3)

    NASA Astrophysics Data System (ADS)

    Sakka, Yoshio; Hirota, Noriyuki; Horii, Shigeru; Ando, Tsutomu

    2009-07-01

    The 3rd International Workshop on Materials Analysis and Processing in Magnetic Fields (MAP3) was held on 14-16 May 2008 at the University of Tokyo, Japan. The first was held in March 2004 at the National High Magnetic Field Laboratory in Tallahassee, USA; two years later the second took place in Grenoble, France. MAP3 was held as a University of Tokyo International Symposium, jointly with the MANA Workshop on Materials Processing by External Stimulation and the JSPS CORE Program of Construction of the World Center on Electromagnetic Processing of Materials. At the end of MAP3 it was decided that MAP4 will be held in Atlanta, USA in 2010. Processing in magnetic fields is a rapidly expanding research area with a wide range of promising applications in materials science. MAP3 focused on the magnetic field interactions involved in the study and processing of materials in all disciplines ranging from physics to chemistry and biology: magnetic field effects on chemical, physical, biological, electrochemical, thermodynamic and hydrodynamic phenomena; magnetic field effects on crystal growth; magnetic processing of materials; diamagnetic levitation; the magneto-Archimedes effect; spin chemistry; application of magnetic fields to analytical chemistry; magnetic orientation; control of structure by magnetic fields; magnetic separation and purification; magnetic field-induced phase transitions; materials properties in high magnetic fields; development of NMR and MRI; medical applications of magnetic fields; novel magnetic phenomena; physical property measurement in magnetic fields; and high magnetic field generation. MAP3 consisted of 84 presentations, including 16 invited talks. This volume of Journal of Physics: Conference Series contains the proceedings of MAP3, with 34 papers that provide a scientific record of the topics covered by the conference, with the special topics (13 papers) in

  11. Molecular Mechanisms and Evolutionary Processes Contributing to Accelerated Divergence of Gene Expression on the Drosophila X Chromosome.

    PubMed

    Coolon, Joseph D; Stevenson, Kraig R; McManus, C Joel; Yang, Bing; Graveley, Brenton R; Wittkopp, Patricia J

    2015-10-01

    In species with a heterogametic sex, population genetics theory predicts that DNA sequences on the X chromosome can evolve faster than comparable sequences on autosomes. Both neutral and nonneutral evolutionary processes can generate this pattern. Complex traits like gene expression are not predicted to have accelerated evolution by these theories, yet a "faster-X" pattern of gene expression divergence has recently been reported for both Drosophila and mammals. Here, we test the hypothesis that accelerated adaptive evolution of cis-regulatory sequences on the X chromosome is responsible for this pattern by comparing the relative contributions of cis- and trans-regulatory changes to patterns of faster-X expression divergence observed between strains and species of Drosophila with a range of divergence times. We find support for this hypothesis, especially among male-biased genes, when comparing different species. However, we also find evidence that trans-regulatory differences contribute to a faster-X pattern of expression divergence both within and between species. This contribution is surprising because trans-acting regulators of X-linked genes are generally assumed to be randomly distributed throughout the genome. We found, however, that X-linked transcription factors appear to preferentially regulate expression of X-linked genes, providing a potential mechanistic explanation for this result. The contribution of trans-regulatory variation to faster-X expression divergence was larger within than between species, suggesting that it is more likely to result from neutral processes than positive selection. These data show how accelerated evolution of both coding and noncoding sequences on the X chromosome can lead to accelerated expression divergence on the X chromosome relative to autosomes.

  12. Molecular Mechanisms and Evolutionary Processes Contributing to Accelerated Divergence of Gene Expression on the Drosophila X Chromosome

    PubMed Central

    Coolon, Joseph D.; Stevenson, Kraig R.; McManus, C. Joel; Yang, Bing; Graveley, Brenton R.; Wittkopp, Patricia J.

    2015-01-01

    In species with a heterogametic sex, population genetics theory predicts that DNA sequences on the X chromosome can evolve faster than comparable sequences on autosomes. Both neutral and nonneutral evolutionary processes can generate this pattern. Complex traits like gene expression are not predicted to have accelerated evolution by these theories, yet a “faster-X” pattern of gene expression divergence has recently been reported for both Drosophila and mammals. Here, we test the hypothesis that accelerated adaptive evolution of cis-regulatory sequences on the X chromosome is responsible for this pattern by comparing the relative contributions of cis- and trans-regulatory changes to patterns of faster-X expression divergence observed between strains and species of Drosophila with a range of divergence times. We find support for this hypothesis, especially among male-biased genes, when comparing different species. However, we also find evidence that trans-regulatory differences contribute to a faster-X pattern of expression divergence both within and between species. This contribution is surprising because trans-acting regulators of X-linked genes are generally assumed to be randomly distributed throughout the genome. We found, however, that X-linked transcription factors appear to preferentially regulate expression of X-linked genes, providing a potential mechanistic explanation for this result. The contribution of trans-regulatory variation to faster-X expression divergence was larger within than between species, suggesting that it is more likely to result from neutral processes than positive selection. These data show how accelerated evolution of both coding and noncoding sequences on the X chromosome can lead to accelerated expression divergence on the X chromosome relative to autosomes. PMID:26041937

  13. Accelerating the commercialization of university technologies for military healthcare applications: the role of the proof of concept process

    NASA Astrophysics Data System (ADS)

    Ochoa, Rosibel; DeLong, Hal; Kenyon, Jessica; Wilson, Eli

    2011-06-01

    The von Liebig Center for Entrepreneurism and Technology Advancement at UC San Diego (vonliebig.ucsd.edu) is focused on accelerating technology transfer and commercialization through programs and education on entrepreneurism. Technology Acceleration Projects (TAPs), which offer pre-venture grants and extensive mentoring on technology commercialization, are a key component of its model, which has been developed over the past ten years with the support of a grant from the von Liebig Foundation. In 2010, the von Liebig Entrepreneurism Center partnered with the U.S. Army Telemedicine and Advanced Technology Research Center (TATRC) to develop a regional model of the Technology Acceleration Program, initially focused on military research, to be deployed across the nation to increase awareness of military medical needs and to accelerate the commercialization of novel technologies to treat the patient. Participants in these challenges are multi-disciplinary teams of graduate students and faculty in engineering, medicine and business representing universities and research institutes in a region, selected via a competitive process, who receive commercialization assistance and funding grants to support translation of their research discoveries into products or services. To validate this model, a pilot program focused on the commercialization of wireless healthcare technologies targeting campuses in Southern California has been conducted with the additional support of Qualcomm, Inc. Three projects representing three different universities in Southern California were selected out of forty-five applications from ten different universities and research institutes. Over the next twelve months, these teams will conduct proof-of-concept studies, technology development and preliminary market research to determine the commercial feasibility of their technologies. This first regional program will help build the tools and processes needed to adapt and replicate this model across other regions in the

  14. HiCUP: pipeline for mapping and processing Hi-C data

    PubMed Central

    Wingett, Steven; Ewels, Philip; Furlan-Magaril, Mayra; Nagano, Takashi; Schoenfelder, Stefan; Fraser, Peter; Andrews, Simon

    2015-01-01

    HiCUP is a pipeline for processing sequence data generated by Hi-C and Capture Hi-C (CHi-C) experiments, which are techniques used to investigate three-dimensional genomic organisation. The pipeline maps data to a specified reference genome and removes artefacts that would otherwise hinder subsequent analysis. HiCUP also produces an easy-to-interpret yet detailed quality control (QC) report that assists in refining experimental protocols for future studies. The software is freely available and has already been used for processing Hi-C and CHi-C data in several recently published peer-reviewed studies. PMID:26835000

  15. HiCUP: pipeline for mapping and processing Hi-C data.

    PubMed

    Wingett, Steven; Ewels, Philip; Furlan-Magaril, Mayra; Nagano, Takashi; Schoenfelder, Stefan; Fraser, Peter; Andrews, Simon

    2015-01-01

    HiCUP is a pipeline for processing sequence data generated by Hi-C and Capture Hi-C (CHi-C) experiments, which are techniques used to investigate three-dimensional genomic organisation. The pipeline maps data to a specified reference genome and removes artefacts that would otherwise hinder subsequent analysis. HiCUP also produces an easy-to-interpret yet detailed quality control (QC) report that assists in refining experimental protocols for future studies. The software is freely available and has already been used for processing Hi-C and CHi-C data in several recently published peer-reviewed studies.

  16. Neutron activation processes simulation in an Elekta medical linear accelerator head.

    PubMed

    Juste, B; Miró, R; Verdú, G; Díez, S; Campayo, J M

    2014-01-01

    Monte Carlo estimates of the giant dipole resonance (GDR) photoneutrons inside the Elekta Precise LINAC head (emitting a 15 MV photon beam) were performed using MCNP6 (the general-purpose Monte Carlo N-Particle code, version 6). Each component of the LINAC head geometry and its materials was modelled in detail using the manufacturer-provided information. Primary photons generate photoneutrons, and their transport across the treatment head was simulated, including the (n, γ) reactions that produce activation products. MCNP6 was used to develop a method for quantifying the activation of accelerator components. The approach described in this paper is useful for quantifying the origin and the amount of nuclear activation.

  17. Laser Processing on the Surface of Niobium Superconducting Radio-Frequency Accelerator Cavities

    NASA Astrophysics Data System (ADS)

    Singaravelu, Senthilraja; Klopf, Michael; Krafft, Geoffrey; Kelley, Michael

    2011-03-01

    Superconducting Radio Frequency (SRF) niobium cavities are at the heart of an increasing number of particle accelerators. Their performance is dominated by a several-nm-thick layer at the interior surface. Maximizing its smoothness is found to be critical, and aggressive chemical treatments are employed to this end. We describe laser-induced surface melting as an alternative "greener" approach. Modeling guided the selection of parameters for irradiation with a Q-switched Nd:YAG laser. The resulting topography was examined by SEM, AFM and stylus profilometry.

  18. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    USGS Publications Warehouse

    Parsons, D.R.; Jackson, P.R.; Czuba, J.A.; Engel, F.L.; Rhoads, B.L.; Oberg, K.A.; Best, J.L.; Mueller, D.S.; Johnson, K.K.; Riley, J.D.

    2013-01-01

    The use of acoustic Doppler current profilers (ADCP) for discharge measurements and three-dimensional flow mapping has increased rapidly in recent years and has been primarily driven by advances in acoustic technology and signal processing. Recent research has developed a variety of methods for processing data obtained from a range of ADCP deployments and this paper builds on this progress by describing new software for processing and visualizing ADCP data collected along transects in rivers or other bodies of water. The new utility, the Velocity Mapping Toolbox (VMT), allows rapid processing (vector rotation, projection, averaging and smoothing), visualization (planform and cross-section vector and contouring), and analysis of a range of ADCP-derived datasets. The paper documents the data processing routines in the toolbox and presents a set of diverse examples that demonstrate its capabilities. The toolbox is applicable to the analysis of ADCP data collected in a wide range of aquatic environments and is made available as open-source code along with this publication.
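
    As a generic illustration of the vector-rotation step such a toolbox performs, earth-coordinate ADCP velocities can be projected onto a mean cross-section line. The function below is a sketch under the assumption of an azimuth measured clockwise from north; it is not VMT's own code.

        import numpy as np

        def project_to_section(east_vel, north_vel, section_azimuth_deg):
            # Rotate earth-coordinate (east, north) velocities into
            # streamwise/transverse components relative to a mean
            # cross-section line whose azimuth is measured clockwise
            # from north. Sketch only, not the VMT implementation.
            theta = np.radians(section_azimuth_deg)
            streamwise = east_vel * np.sin(theta) + north_vel * np.cos(theta)
            transverse = east_vel * np.cos(theta) - north_vel * np.sin(theta)
            return streamwise, transverse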

  19. Mapping of Inner and Outer Celestial Bodies Using New Global and Local Topographic Data Derived from Photogrammetric Image Processing

    NASA Astrophysics Data System (ADS)

    Karachevtseva, I. P.; Kokhanov, A. A.; Rodionova, J. F.; Zharkova, A. Yu.; Lazareva, M. S.

    2016-06-01

    New estimates of fundamental geodetic parameters and of the global and local topography of planets and satellites provide basic coordinate systems for mapping, as well as opportunities for studies of processes on their surfaces. The main targets of our study are Europa, Ganymede, Callisto and Io (satellites of Jupiter), Enceladus (a satellite of Saturn), and terrestrial planetary bodies, including Mercury, the Moon and Phobos, one of the Martian satellites. In particular, based on new global shape models derived from three-dimensional control point networks and on the processing of high-resolution stereo images, we have carried out studies of topography and morphology. As a visual representation of the results, various planetary maps of different scales and thematic directions were created. For example, for Phobos we have produced a new atlas with 43 maps, as well as various wall maps (differing from the maps in the atlas in format and design): a basemap, and topographic and geomorphological maps. In addition, we compiled geomorphological maps of Ganymede at the local level and a global hypsometric map of Enceladus. Mercury's topography was represented as a hypsometric globe for the first time. Mapping of the Moon was carried out using new super-resolution images (0.5-1 m/pixel) of the activity regions of the first Soviet planetary rovers (Lunokhod-1 and -2). The new planetary mapping results have been demonstrated to the scientific community at planetary map exhibitions (Planetary Maps Exhibitions, 2015) organized by the MExLab team in the frame of the International Map Year, celebrated in 2015-2016. The cartographic products have multipurpose applications: for example, the Mercury globe is popular for teaching and public outreach, while maps like those of the Moon and Phobos provide cartographic support for Solar System exploration.

  20. Processing of airborne lidar bathymetry data for detailed sea floor mapping

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. Michael

    2014-10-01

    Airborne bathymetric lidar has proven to be a valuable sensor for rapid and accurate sounding of shallow water areas. With advanced processing of the lidar data, detailed mapping of the sea floor, with its various objects and vegetation, is possible. This mapping capability has a wide range of applications, including detection of mine-like objects, mapping of marine natural resources and fish spawning areas, and support for the fulfillment of national and international environmental monitoring directives. Although data sets collected by subsea systems give a high degree of credibility, they can benefit from a combination with lidar for surveying and monitoring larger areas. With lidar-based sea floor maps containing information on substrate and attached vegetation, field investigations become more efficient: field data collection can be directed to selected areas and even focused on the identification of specific targets detected in the lidar map. The purpose of this work is to describe the performance achievable for detection and classification of sea floor objects and vegetation with a lidar seeing through the water column. With both experimental and simulated data, we examine the lidar signal characteristics as a function of bottom depth, substrate type, and vegetation. The experimental evaluation is based on lidar data from field-documented sites, where field data were taken from underwater video recordings. To accurately extract the information from the received lidar signal, it is necessary to account for the air-water interface and the water medium. The information content is hidden in the lidar depth data, also referred to as point data, and in the shape of the received lidar waveform. The returned lidar signal is affected by environmental factors such as bottom depth and water turbidity, as well as by lidar system factors such as laser beam footprint size and sounding density.

  1. The development of a growth regime map for a novel reverse-phase wet granulation process.

    PubMed

    Wade, Jonathan B; Martin, Gary P; Long, David F

    2016-10-15

    The feasibility of a novel reverse-phase wet granulation process has been established and its potential advantages identified. Granule growth in the reverse-phase process proceeds via a steady-state growth mechanism controlled by capillary forces, whereas granule growth in the conventional process proceeds via an induction growth regime controlled by viscous forces. The resultant reverse-phase granules generally have a greater mass mean diameter and lower intragranular porosity than conventional granules prepared under the same liquid saturation and impeller speed conditions, indicating that the two processes may operate under different growth regimes. Given the observed differences in the growth mechanism and consolidation behaviour of reverse-phase and conventional granules, the applicability of the current conventional granulation regime map is unclear. The aim of the present study was therefore to construct and evaluate a growth regime map, depicting the regime as a function of liquid saturation and Stokes deformation number, for the reverse-phase granulation process. The Stokes deformation number was shown to be a good predictor of both granule mass mean diameter and intragranular porosity over a wide range of process conditions. The data presented support the hypothesis that reverse-phase granules have a greater amount of surface liquid, which can dissipate collision energy and resist granule rebound, resulting in the greater granule growth observed. As a result, the reverse-phase granulation process produces a greater degree of granule consolidation than the conventional granulation process, and the Stokes deformation number was capable of capturing these differences.
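
    For context, the Stokes deformation number used as a regime-map axis above is commonly defined in the granule-growth regime-map literature as shown below, where \rho_g is the granule density, U_c a characteristic collision velocity and Y_g the granule dynamic yield stress; this is the standard textbook form, quoted as a reference point rather than the paper's exact parameterization:

        \mathrm{St_{def}} = \frac{\rho_g U_c^{2}}{2 Y_g}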

  2. A review of advanced small-scale parallel bioreactor technology for accelerated process development: current state and future need.

    PubMed

    Bareither, Rachel; Pollard, David

    2011-01-01

    The pharmaceutical and biotech industries face continued pressure to reduce development costs and accelerate process development. This challenge occurs alongside the need for increased upstream experimentation to support quality-by-design initiatives and the pursuit of predictive models from systems biology. A small-scale system enabling multiple reactions in parallel (n ≥ 20), with automated sampling and integration with purification, would provide a significant (four- to fivefold) improvement in development timelines. State-of-the-art attempts to pursue high-throughput process development include shake flasks, microfluidic reactors, microtiter plates and small-scale stirred reactors. The limitations of these systems are compared against the criteria desired to mimic large-scale commercial processes. The comparison shows that significant technological improvement is still required to provide automated solutions that can speed up upstream process development.

  3. The Maneuver Planning Process for the Microwave Anisotropy Probe (MAP) Mission

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Andrews, Stephen F.; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Microwave Anisotropy Probe (MAP) mission utilized a strategy combining highly eccentric phasing loops with a lunar gravity assist to provide a zero-cost insertion into a Lissajous orbit about the Sun-Earth/Moon L2 point. Maneuvers were executed at the phasing loop perigees to correct for launch vehicle errors and to target the lunar gravity assist so that a suitable orbit at L2 was achieved. This paper will discuss the maneuver planning process for designing, verifying, and executing MAP's maneuvers. This paper will also describe how commercial off-the-shelf (COTS) tools were used to execute these tasks and produce a command sequence ready for upload to the spacecraft. These COTS tools included Satellite Tool Kit, MATLAB, and Matrix-X.

  4. Topological data analysis of contagion maps for examining spreading processes on networks

    PubMed Central

    Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.

    2015-01-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth’s surface; however, in modern contagions long-range edges—for example, due to airline transportation or communication media—allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct “contagion maps” that use multiple contagions on a network to map the nodes as a point cloud. By analyzing the topology, geometry, and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modeling, forecast, and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks. PMID:26194875

  5. Using saliency maps to separate competing processes in infant visual cognition.

    PubMed

    Althaus, Nadja; Mareschal, Denis

    2012-01-01

    This article presents an eye-tracking study using a novel combination of visual saliency maps and "area-of-interest" analyses to explore online feature extraction during category learning in infants. Category learning in 12-month-olds (N = 22) involved a transition from looking at high-saliency image regions to looking at more informative, highly variable object parts. In contrast, 4-month-olds (N = 27) exhibited a different pattern, displaying a similarly decreasing impact of saliency accompanied by a steady focus on the object's center, indicating that targeted feature extraction during category learning develops across the first year of life. These results illustrate how the effects of lower- and higher-level processes may be disentangled using a combined saliency map and area-of-interest analysis. PMID:22533474

  6. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest-mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000-acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground-truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially, picture prints resembling low-resolution photographs were generated in each of the four ERTS-1 channels. Data within rectangular training fields were then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data were classified with a high degree of accuracy (greater than 95 percent), and progress is being made toward identifying the mapped spectral classes.
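
    The maximum likelihood step described above is, in its textbook Gaussian form, a per-pixel discriminant over the class statistics. The numpy sketch below shows that generic form under an equal-priors assumption; it is an illustration of the technique, not the original LARS software.

        import numpy as np

        def max_likelihood_classify(pixels, means, covs):
            # Gaussian maximum-likelihood discriminant with equal priors:
            # assign each pixel to the class minimizing the Mahalanobis
            # distance plus the log-determinant penalty.
            # pixels: (n, 4) MSS band values; means: (k, 4); covs: (k, 4, 4).
            n, k = pixels.shape[0], means.shape[0]
            scores = np.empty((n, k))
            for c in range(k):
                d = pixels - means[c]
                inv = np.linalg.inv(covs[c])
                maha = np.einsum('ni,ij,nj->n', d, inv, d)
                scores[:, c] = -0.5 * (maha + np.log(np.linalg.det(covs[c])))
            return scores.argmax(axis=1)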

  7. Improved laser damage threshold performance of calcium fluoride optical surfaces via Accelerated Neutral Atom Beam (ANAB) processing

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, S.; Walsh, M.; Svrluga, R.; Thomas, M.

    2015-11-01

    Optics are not keeping up with the pace of laser advancements. The laser industry is rapidly increasing its power capabilities and reducing wavelengths, which has exposed optics as a weak link in lifetime failures for these advanced systems. Nanometer-sized surface defects (scratches, pits, bumps and residual particles) on the surface of optics are a significant limiting factor to high-end performance. Angstrom-level smoothing of materials such as calcium fluoride, spinel, magnesium fluoride, zinc sulfide, LBO and others presents a unique challenge for traditional polishing techniques. Exogenesis Corporation, using its new and proprietary Accelerated Neutral Atom Beam (ANAB) technology, is able to remove nano-scale surface damage and particle contamination, leaving many material surfaces with roughness typically around one Angstrom. This surface defect mitigation via ANAB processing can be shown to increase performance properties of high-intensity optical materials. This paper describes the ANAB technology and summarizes smoothing results for calcium fluoride laser windows. It further correlates laser damage threshold improvements with the smoothing produced by ANAB surface treatment. All ANAB processing was performed at Exogenesis Corporation using an nAccel100™ Accelerated Particle Beam processing tool. All surface measurement data for the paper were produced via AFM analysis on a Park Model XE70 AFM, and all laser damage testing was performed at Spica Technologies, Inc. Exogenesis Corporation's ANAB processing technology is a new and unique surface modification technique that has been demonstrated to be highly effective at correcting nano-scale surface defects. ANAB is a non-contact vacuum process comprised of an intense beam of accelerated, electrically neutral gas atoms with average energies of a few tens of electron volts. The ANAB process does not apply the mechanical forces associated with traditional polishing techniques. ANAB efficiently removes surface

  8. Dynamic recrystallization behavior and processing map of the Cu-Cr-Zr-Nd alloy.

    PubMed

    Zhang, Yi; Sun, Huili; Volinsky, Alex A; Tian, Baohong; Song, Kexing; Chai, Zhe; Liu, Ping; Liu, Yong

    2016-01-01

    Hot deformation behavior of the Cu-Cr-Zr-Nd alloy was studied by hot compressive tests in the temperature range of 650-950 °C and the strain rate range of 0.001-10 s⁻¹ using a Gleeble-1500D thermo-mechanical simulator. The results showed that the flow stress is strongly dependent on the deformation temperature and the strain rate: with an increase of temperature or a decrease of strain rate, the flow stress decreases significantly. The hot activation energy of the alloy is about 404.84 kJ/mol, and the constitutive equation of the alloy based on the hyperbolic-sine equation was established. Based on the dynamic material model, the processing map was established to optimize the deformation parameters. The optimal processing parameters for hot working of the Cu-Cr-Zr-Nd alloy are a temperature range of 900-950 °C and a strain rate range of 0.1-1 s⁻¹. A fully dynamically recrystallized structure with fine and homogeneous grain size can be obtained under the optimal processing conditions. The microstructure of specimens deformed at different conditions was analyzed and connected with the processing map. The fracture surface was observed to identify instability conditions. PMID:27347462
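
    For reference, the hyperbolic-sine constitutive form referred to above is conventionally written in the Sellars-Tegart (Arrhenius-type) form below, with Q the activation energy (reported here as 404.84 kJ/mol); the constants A, α and n are alloy-specific fitting parameters not quoted in this record.

    ```latex
    % Sellars-Tegart hyperbolic-sine Arrhenius form; Q = 404.84 kJ/mol here,
    % while A, alpha and n are alloy-specific constants not given in the record.
    \dot{\varepsilon} \;=\; A\,\bigl[\sinh(\alpha\sigma)\bigr]^{\,n}\exp\!\left(-\frac{Q}{RT}\right)
    ```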

  9. Locality-Aware Parallel Process Mapping for Multi-Core HPC Systems

    SciTech Connect

    Hursey, Joshua J; Squyres, Jeffrey M.; Dontje, Terry

    2011-01-01

    High Performance Computing (HPC) systems are composed of servers containing an ever-increasing number of cores. With such high processor core counts, non-uniform memory access (NUMA) architectures are almost universally used to reduce inter-processor and memory communication bottlenecks by distributing processors and memory throughout a server-internal networking topology. Application studies have shown that tuning the placement of processes within a server's NUMA networking topology to the application can have a dramatic impact on performance. The performance implications are magnified when running a parallel job across multiple server nodes, especially with large-scale HPC applications. This paper presents the Locality-Aware Mapping Algorithm (LAMA) for distributing the individual processes of a parallel application across processing resources in an HPC system, paying particular attention to the internal server NUMA topologies. The algorithm is able to support both homogeneous and heterogeneous hardware systems, and dynamically adapts to the available hardware and user-specified process layout at run-time. As implemented in Open MPI, LAMA provides 362,880 mapping permutations and is able to naturally scale out to additional hardware resources as they become available in future architectures.
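
    The 362,880 figure equals 9!, i.e. the number of orderings of nine mapping levels; a quick check, with level names that are illustrative stand-ins rather than Open MPI's exact terminology:

    ```python
    # 362,880 = 9!, the number of orderings of nine mapping levels.
    # Level names below are illustrative, not Open MPI's exact terminology.
    from itertools import permutations
    from math import factorial

    levels = ["node", "board", "socket", "numa", "L3",
              "L2", "L1", "core", "hwthread"]       # nine hypothetical levels
    assert factorial(len(levels)) == 362_880
    print(next(permutations(levels)))               # one of the 362,880 orderings
    ```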

  11. Comparison of ArcToolbox and Terrain Tiles processing procedures for inundation mapping in mountainous terrain.

    PubMed

    Darnell, Andrew; Wise, Richard; Quaranta, John

    2013-01-01

    Floodplain management consists of efforts to reduce flood damage to critical infrastructure and to protect the life and health of individuals from flooding. A major component of this effort is the monitoring of flood control structures such as dams because the potential failure of these structures may have catastrophic consequences. To prepare for these threats, engineers use inundation maps that illustrate the flood resulting from high river stages. To create the maps, the structure and river systems are modeled using engineering software programs, and hydrologic events are used to simulate the conditions leading to the failure of the structure. The output data are then exported to other software programs for the creation of inundation maps. Although the computer programs for this process have been established, the processing procedures vary and yield inconsistent results. Thus, these processing methods need to be examined to determine the functionality of each in floodplain management practices. The main goal of this article is to present the development of a more integrated, accurate, and precise graphical interface tool for interpretation by emergency managers and floodplain engineers. To accomplish this purpose, a potential dam failure was simulated and analyzed for a candidate river system using two processing methods: ArcToolbox and Terrain Tiles. The research involved performing a comparison of the outputs, which revealed that both procedures yielded similar inundations for single river reaches. However, the results indicated key differences when examining outputs for large river systems. On the basis of criteria involving the hydrologic accuracy and effects on infrastructure, the Terrain Tiles inundation surpassed the ArcToolbox inundation in terms of following topography and depicting flow rates and flood extents at confluences, bends, and tributary streams. Thus, the Terrain Tiles procedure is a more accurate representation of flood extents for use by

  12. IntenCD: an application for CD uniformity mapping of photomask and process control at maskshops

    NASA Astrophysics Data System (ADS)

    Kim, Heebom; Lee, MyoungSoo; Lee, Sukho; Sung, Young-Su; Kim, Byunggook; Woo, Sang-Gyun; Cho, HanKu; Yishai, Michael Ben; Shoval, Lior; Couderc, Christophe

    2008-05-01

    Lithographic process steps used in today's integrated circuit production require tight control of critical dimensions (CD). With new design rules dropping to 32 nm and emerging double patterning processes, parameters that were of secondary importance in previous technology generations have now become determining for the overall CD budget in the wafer fab. One of these key parameters is the intra-field mask CD uniformity (CDU) error, which is considered to consume an increasing portion of the overall CD budget for the IC fabrication process. Consequently, it has become necessary to monitor and characterize CDU in both the maskshop and the wafer fab. Here, we describe the introduction of a new application for CDU monitoring into the mask making process at Samsung. The IntenCD™ application, developed by Applied Materials, is implemented on an aerial mask inspection tool. It uses transmission inspection data, which contains information about CD variation over the mask, to create a dense yet accurate CDU map of the whole mask. This CDU map is generated in parallel to the normal defect inspection run, thus adding minimal overhead to the regular inspection time. We present experimental data showing examples of mask induced CD variations from various sources such as geometry, transmission and phase variations. We show how these small variations were captured by IntenCD™ and demonstrate a high level of correlation between CD SEM analysis and IntenCD™ mapping of mask CDU. Finally, we suggest a scheme for integrating the IntenCD™ application as part of the mask qualification procedure at maskshops.

  13. Electromagnetic oil field mapping for improved process monitoring and reservoir characterization: A poster presentation

    SciTech Connect

    Waggoner, J.R.; Mansure, A.J.

    1992-02-01

    This report is a permanent record of a poster paper presented by the authors at the Third International Reservoir Characterization Technical Conference in Tulsa, Oklahoma on November 3-5, 1991. The subject is electromagnetic (EM) techniques that are being developed to monitor oil recovery processes to improve overall process performance. The potential impact of EM surveys is very significant, primarily in the areas of locating oil, identifying oil inside and outside the pattern, characterizing flow units, and pseudo-real time process control to optimize process performance and efficiency. Since a map of resistivity alone has little direct application to these areas, an essential part of the EM technique is understanding the relationship between the process and the formation resistivity at all scales, and integrating this understanding into reservoir characterization and simulation. First is a discussion of work completed on the core scale petrophysics of resistivity changes in an oil recovery process; a steamflood is used as an example. A system has been developed for coupling the petrophysics of resistivity with reservoir simulation to simulate the formation resistivity structure arising from a recovery process. Preliminary results are given for an investigation into the effect of heterogeneity and anisotropy on the EM technique, as well as the use of the resistivity simulator to interpret EM data in terms of reservoir and process parameters. Examples illustrate the application of the EM technique to improve process monitoring and reservoir characterization.

  14. Acceleration-deceleration process of thin foils confined in water and submitted to laser driven shocks

    SciTech Connect

    Romain, J.P.; Auroux, E.

    1997-08-01

    An experimental, numerical, and analytical study of the acceleration and deceleration process of thin metallic foils immersed in water and submitted to laser driven shocks is presented. Aluminum and copper foils of 20 to 120 μm thickness, confined on both sides by water, have been irradiated at 1.06 μm wavelength by laser pulses of ~20 ns duration, ~17 J energy, and ~4 GW/cm² incident intensity. Time-resolved velocity measurements have been made using an electromagnetic velocity gauge. The recorded velocity profiles reveal an acceleration-deceleration process, with a peak velocity up to 650 m/s. Predicted profiles from numerical simulations reproduce all experimental features, such as wave reverberations, rates of increase and decrease of velocity, peak velocity, and the effects of the nature and thickness of the foils. A shock pressure of about 2.5 GPa is inferred from the velocity measurements. Experimental points on the evolution of plasma pressure are derived from the measurements of peak velocities. An analytical description of the acceleration-deceleration process, involving multiple shock and release waves reflecting on both sides of the foils, is presented. The space-time diagrams of wave propagation and the successive pressure-particle velocity states are determined, from which theoretical velocity profiles are constructed. All characteristics of the experimental records and numerical simulations are well reproduced. The role of foil nature and thickness, in relation to the shock impedance of the materials, appears explicitly. © 1997 American Institute of Physics.

  15. Concept Maps for the Modelling of Controlled Flexibility in Software Processes

    NASA Astrophysics Data System (ADS)

    Martinho, Ricardo; Domingos, Dulce; Varajão, João

    Software processes and corresponding models are dynamic entities that are often changed and evolved by skillful knowledge workers such as the members of a software development team. Consequently, process flexibility has been identified as one of the most important features that should be supported by both Process Modelling Languages (PMLs) and software tools that manage the processes. However, in the everyday practice, most software team members do not want total flexibility. They rather prefer to have controlled flexibility, i.e., to learn and follow advices previously modelled by a process engineer on which and how they can change the elements that compose a software process. Since process models constitute a preferred vehicle for sharing and communicating knowledge on software processes, the process engineer needs a PML that can express this controlled flexibility, along with other process perspectives. To achieve this enhanced PML, we first need a sound core set of concepts and relationships that defines the knowledge domain associated with the modelling of controlled flexibility. In this paper we capture and represent this domain by using Concept Maps (Cmaps). These include diagrams and descriptions that elicit the relationships between the concepts involved. The proposed Cmaps can then be used as input to extend a PML with modelling constructs to express controlled flexibility within software processes. Process engineers can use these constructs to define, in a process model, advices on changes that can be made to the model itself or to related instances. Software team members can then consult this controlled flexibility information within the process models and perform changes accordingly.

  16. Impact absorption of four processed soft denture liners as influenced by accelerated aging.

    PubMed

    Kawano, F; Koran, A; Nuryanti, A; Inoue, S

    1997-01-01

    The cushioning effect of soft denture liners was evaluated by using a free drop test with an accelerometer. Materials tested included SuperSoft (Coe Laboratories, Chicago, IL), Kurepeet-Dough (Kreha Chemical, Tokyo), Molteno Soft (Molten, Hiroshima, Japan), and Molloplast-B (Molloplast Regneri, Karlsruhe, Germany). All materials were found to reduce the impact force when compared to acrylic denture base resin. A 2.4-mm layer of soft denture material demonstrated good impact absorption, and Molloplast-B and Molteno had excellent impact absorption. When the soft denture liner was kept in an accelerated aging chamber for 900 hours, the recorded damping effect increased for all materials tested. Aging of all materials also affected the cushioning effect.

  17. The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners

    ERIC Educational Resources Information Center

    Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

    2012-01-01

    The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

  18. Studies of Nb3Sn Strands Based on the Restacked-Rod Process for High Field Accelerator Magnets

    DOE PAGES

    Barzi, E.; Bossert, M.; Gallo, G.; Lombardo, V.; Turrioni, D.; Yamada, R.; Zlobin, A. V.

    2011-12-21

    A major thrust in Fermilab's accelerator magnet R&D program is the development of Nb3Sn wires which meet target requirements for high field magnets, such as high critical current density, low effective filament size, and the capability to withstand the cabling process. The performance of a number of strands with 150/169 restack design produced by Oxford Superconducting Technology was studied for round and deformed wires. To optimize the maximum plastic strain, finite element modeling was also used as an aid in the design. Results of mechanical, transport and metallographic analyses are presented for round and deformed wires.

  19. Young coconut juice can accelerate the healing process of cutaneous wounds

    PubMed Central

    2012-01-01

    Background Estrogen has been reported to accelerate cutaneous wound healing. This research studies the effect of young coconut juice (YCJ), presumably containing estrogen-like substances, on cutaneous wound healing in ovariectomized rats. Methods Four groups of female rats (6 in each group) were included in this study. These included sham-operated, ovariectomized (ovx), ovx receiving estradiol benzoate (EB) injections intraperitoneally, and ovx receiving YCJ orally. Two equidistant 1-cm full-thickness skin incisional wounds were made two weeks after ovariectomy. The rats were sacrificed at the end of the third and the fourth week of the study, and their serum estradiol (E2) level was measured by chemiluminescent immunoassay. The skin was excised and examined in histological sections stained with H&E, and immunostained using anti-estrogen receptor (ER-α and ER-β) antibodies. Results Wound healing was accelerated in ovx rats receiving YCJ, as compared to controls. This was associated with significantly higher density of immunostaining for ER-α and ER-β in keratinocytes, fibroblasts, white blood cells, fat cells, sebaceous glands, skeletal muscle, and hair shafts and follicles. This was also associated with thicker epidermis and dermis, but thinner hypodermis. In addition, the number and size of hair follicles immunoreactive for both ER-α and ER-β were highest in the ovx+YCJ group, as compared to the ovx+EB group. Conclusions This study demonstrates that YCJ has estrogen-like characteristics, which in turn seem to have beneficial effects on cutaneous wound healing. PMID:23234369

  20. Real-time dual-mode standard/complex Fourier-domain OCT system using graphics processing unit accelerated 4D signal processing and visualization

    NASA Astrophysics Data System (ADS)

    Zhang, Kang; Kang, Jin U.

    2011-03-01

    We realized a real-time dual-mode standard/complex Fourier-domain optical coherence tomography (FD-OCT) system using graphics processing unit (GPU) accelerated 4D (3D+time) signal processing and visualization. For both standard and complex FD-OCT modes, the signal processing tasks were implemented on a dual-GPU architecture that included λ-to-k spectral re-sampling, fast Fourier transform (FFT), modified Hilbert transform, logarithmic scaling, and volume rendering. The maximum A-scan processing speeds achieved are >3,000,000 lines/s for the standard 1024-pixel FD-OCT, and >500,000 lines/s for the complex 1024-pixel FD-OCT. Multiple volume renderings of the same 3D data set were performed and displayed with different view angles. The GPU-acceleration technique is highly cost-effective and can be easily integrated into most ultrahigh-speed FD-OCT systems to overcome the 3D data processing and visualization bottlenecks.
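
    A minimal NumPy sketch of the standard FD-OCT A-scan chain named above (λ-to-k re-sampling, FFT, logarithmic scaling); the GPU implementation runs the same steps as CUDA kernels, and the spectrometer band and array sizes here are illustrative assumptions.

    ```python
    # Standard FD-OCT A-scan chain: lambda-to-k re-sampling, FFT, log scaling.
    # Spectrometer band and sizes are illustrative assumptions.
    import numpy as np

    def ascan(spectrum, wavelengths):
        """spectrum: (1024,) fringe sampled uniformly in wavelength."""
        k = 2 * np.pi / wavelengths                  # non-uniform wavenumber axis
        k_uniform = np.linspace(k.min(), k.max(), k.size)
        resampled = np.interp(k_uniform, k[::-1], spectrum[::-1])  # lambda-to-k
        depth = np.fft.fft(resampled)                # transform to depth profile
        half = depth[: depth.size // 2]              # keep positive-depth half
        return 20 * np.log10(np.abs(half) + 1e-12)   # logarithmic scaling (dB)

    wl = np.linspace(800e-9, 880e-9, 1024)           # hypothetical source band
    print(ascan(np.random.rand(1024), wl).shape)     # -> (512,)
    ```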

  1. Remote sensing of the energy of Jovian auroral electrons with STIS: a clue to unveil plasma acceleration processes

    NASA Astrophysics Data System (ADS)

    Gerard, Jean-Claude

    2013-10-01

    The polar aurora, an important energy source for the Earth's upper atmosphere, is about two orders of magnitude more intense at Jupiter where it releases approximately 10 GW in Jupiter's thermosphere. So far, HST observations of Jupiter's aurora have concentrated on the morphology and the relationship between the solar wind and the brightness distribution. While STIS-MAMA is still operational, time is now critical to move into a new era where FUV long-slit spectroscopy and the spatial scanning capabilities of HST are combined. We propose to use this powerful tool to remotely sense the characteristics of the precipitated electrons by slewing the spectral slit over the different auroral components. It will then be possible to associate electron energies with spatial auroral components and constrain acceleration mechanisms (field-aligned acceleration, magnetic field reconnection, pitch angle electron scattering) associated with specific emission regions. For this, a combination of FUV imaging with STIS long slit spectroscopy will map the spatial variations of the auroral depth and thus the energy of the precipitated electrons. These results will be compared with current models of the Jovian magnetosphere-ionosphere interactions and will provide key inputs to a 3-D model of Jupiter's atmospheric global heat budget and dynamics currently under development. This compact, timely program is designed to provide a major step forward for a better understanding of the physical interactions taking place in Jupiter's magnetosphere and their effects on giant planets' atmospheres, a likely paradigm for many giant, fast-spinning planets with massive magnetic fields in the universe.

  2. The effect of sleep fragmentation on cognitive processing using computerized topographic brain mapping.

    PubMed

    Kingshott, R N; Cosway, R J; Deary, I J; Douglas, N J

    2000-12-01

    Topographic brain mapping of evoked potentials can be used to localize abnormalities of cortical function. We evaluated the effect of sleep fragmentation on brain function by measuring the visual P300 waveform using brain mapping. Eight normal subjects (Epworth score, mean ± SD: 5 ± 3) underwent tone-induced sleep fragmentation and undisturbed study nights in a randomized cross-over design. Study nights were followed by topographic brain mapping using a visual information processing test and concurrent event-related potentials. Experimental sleep fragmentation did not significantly increase objective daytime sleepiness or lower cognitive performance on a battery of cognitive function tests (all P ≥ 0.1). There were no significant topographical delays in P300 latencies with sleep fragmentation (all P > 0.15). However, at sites Fz, F4, T3, C3, Cz and C4 the P300 amplitudes were reduced significantly after sleep fragmentation (all P < 0.05). A reduction in P300 amplitude has previously been interpreted as a decrease in attention. These reductions in P300 amplitudes with sleep fragmentation in frontal, central and temporal brain areas suggest that sleep fragmentation may cause a broad decrease in attention. Sleep fragmentation did not delay P300 latencies in any brain area, and so does not explain the delay in P300 latencies reported in sleep apnoeics.

  3. Planck 2015 results. VIII. High Frequency Instrument data processing: Calibration and maps

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bock, J. J.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Lellouch, E.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Moss, A.; Mottet, S.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vibert, L.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-08-01

    This paper describes the processing applied to the cleaned, time-ordered information obtained from the Planck High Frequency Instrument (HFI) with the aim of producing photometrically calibrated maps in temperature and (for the first time) in polarization. The data from the entire 2.5-year HFI mission include almost five full-sky surveys. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To obtain the best accuracy on the calibration over such a large range, two different photometric calibration schemes have been used. The 545 and 857 GHz data are calibrated using models of planetary atmospheric emission. The lower frequencies (from 100 to 353 GHz) are calibrated using the time-variable cosmic microwave background (CMB) dipole, which we call the orbital dipole. This source of calibration depends only on the satellite velocity with respect to the solar system. Using a CMB temperature of TCMB = 2.7255 ± 0.0006 K, it permits an independent measurement of the amplitude of the CMB solar dipole (3364.3 ± 1.5 μK), which is approximately 1σ higher than the WMAP measurement, with a direction that is consistent between the two experiments. We describe the pipeline used to produce the maps of intensity and linear polarization from the HFI timelines, and the scheme used to set the zero level of the maps a posteriori. We also summarize the noise characteristics of the HFI maps in the 2015 Planck data release and present some null tests to assess their quality. Finally, we discuss the major systematic effects, in particular the leakage induced by flux mismatch between the detectors that leads to spurious polarization signal.
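
    For reference, the orbital-dipole calibration signal is, to first order in v/c, the Doppler modulation of the CMB monopole by the satellite's known velocity; a standard form (not quoted from the paper) is:

    ```latex
    % First-order Doppler (orbital) dipole: the satellite's known velocity v
    % relative to the solar-system barycentre modulates the CMB monopole.
    \frac{\Delta T(\theta)}{T_{\mathrm{CMB}}} \;\simeq\; \frac{v}{c}\,\cos\theta ,
    \qquad T_{\mathrm{CMB}} = 2.7255\ \mathrm{K}
    ```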

  4. Operational SAR Data Processing in GIS Environments for Rapid Disaster Mapping

    NASA Astrophysics Data System (ADS)

    Bahr, Thomas

    2014-05-01

    The use of SAR data has become increasingly popular in recent years and in a wide array of industries. Access to SAR can be highly important, and even critical, for public safety. Updating a GIS with contemporary information from SAR data allows delivery of a reliable set of geospatial information to advance civilian operations, e.g. search and rescue missions. SAR imaging offers the great advantage, over its optical counterparts, of not being affected by darkness, meteorological conditions such as clouds or fog, or the smoke and dust frequently associated with disaster zones. In this paper we present the operational processing of SAR data within a GIS environment for rapid disaster mapping. For this technique we integrated the SARscape modules for ENVI with ArcGIS®, eliminating the need to switch between software packages. Thereby the premier algorithms for SAR image analysis can be directly accessed from ArcGIS desktop and server environments. They allow processing and analyzing SAR data in almost real time and with minimum user interaction. This is exemplified by the November 2010 flash flood in the Veneto region, Italy. The Bacchiglione River burst its banks on Nov. 2nd after two days of heavy rainfall throughout the northern Italian region. The community of Bovolenta, 22 km SSE of Padova, was covered by several meters of water. People were requested to stay in their homes; several roads, highway sections and railroads had to be closed. The extent of this flooding is documented by a series of Cosmo-SkyMed acquisitions with a GSD of 2.5 m (StripMap mode). Cosmo-SkyMed is a constellation of four Earth observation satellites, allowing very frequent coverage, which enables monitoring with very high temporal resolution. This data is processed in ArcGIS using a single-sensor, multi-mode, multi-temporal approach consisting of 3 steps: (1) The single images are filtered with a Gamma DE-MAP filter. (2) The filtered images are geocoded using a reference

  5. Analytical control of process impurities in Pazopanib hydrochloride by impurity fate mapping.

    PubMed

    Li, Yan; Liu, David Q; Yang, Shawn; Sudini, Ravinder; McGuire, Michael A; Bhanushali, Dharmesh S; Kord, Alireza S

    2010-08-01

    Understanding the origin and fate of organic impurities within the manufacturing process, along with a good control strategy, is an integral part of the quality control of drug substance. Following the underlying principles of quality by design (QbD), a systematic approach to analytical control of process impurities by impurity fate mapping (IFM) has been developed and applied to the investigation and control of impurities in the manufacturing process of Pazopanib hydrochloride, an anticancer drug approved recently by the U.S. FDA. This approach requires an aggressive chemical and analytical search for potential impurities in the starting materials, intermediates and drug substance, and experimental studies to track their fate through the manufacturing process in order to understand the process capability for rejecting such impurities. Comprehensive IFM can provide elements of control strategies for impurities. This paper highlights the critical roles that analytical sciences play in the IFM process and impurity control. The application of various analytical techniques (HPLC, LC-MS, NMR, etc.) and development of sensitive and selective methods for impurity detection, identification, separation and quantification are highlighted with illustrative examples. As an essential part of the entire control strategy for Pazopanib hydrochloride, analytical control of impurities with 'meaningful' specifications and the 'right' analytical methods is addressed. In particular, IFM provides scientific justification that can allow for control of process impurities upstream at the starting materials or intermediates whenever possible.

  6. Application of Low Level, Uniform Ultrasound Field for Acceleration of Enzymatic Bio-processing of Cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Enzymatic bio-processing of cotton generates significantly less hazardous wastewater effluents, which are readily biodegradable, but it also has several critical shortcomings that impede its acceptance by industries: expensive processing costs and slow reaction rates. Our research has found that th...

  7. Implications of acceleration environments on scaling materials processing in space to production

    NASA Technical Reports Server (NTRS)

    Demel, Ken

    1990-01-01

    Some considerations regarding materials processing in space are covered from a commercial perspective. Key areas include power, proprietary data, operational requirements (including logistics), and the location of the center of gravity and its control with respect to materials processing payloads.

  8. An algorithm for automated layout of process description maps drawn in SBGN

    PubMed Central

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Motivation: Evolving technology has increased the focus on genomics. The combination of today’s advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specialize on process description (PD) maps as defined by SBGN. Results: We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes and extensively making use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. Availability and implementation: An implementation of our algorithm in Java is available within ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26363029
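
    A toy force-directed step in the spirit of the CoSE base algorithm mentioned above: spring forces on edges plus pairwise repulsion. The SBGN-specific heuristics (opposite-side substrate/product placement, compact complex tiling) would enter as additional force terms; this sketch is illustrative only and does not reproduce the ChiLay implementation.

    ```python
    # Toy force-directed iteration: edge springs plus all-pairs repulsion.
    # SBGN-specific forces would be added as extra terms; illustrative only.
    import numpy as np

    def layout_step(pos, edges, rest=50.0, k_spring=0.01, k_rep=5e4, dt=0.1):
        """pos: (N, 2) node coordinates; edges: list of (i, j) index pairs."""
        force = np.zeros_like(pos)
        for i, j in edges:                           # springs pull toward rest length
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-9
            f = k_spring * (dist - rest) * d / dist
            force[i] += f
            force[j] -= f
        for i in range(len(pos)):                    # nodes repel one another
            d = pos - pos[i]
            dist2 = (d ** 2).sum(axis=1) + 1e-9
            force[i] -= (k_rep * d / dist2[:, None]).sum(axis=0)
        return pos + dt * force

    pts = np.random.rand(6, 2) * 100                 # six nodes on a chain
    for _ in range(200):
        pts = layout_step(pts, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)])
    ```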

  9. Topographic power spectral density study of the effect of surface treatment processes on niobium for superconducting radio frequency accelerator cavities

    SciTech Connect

    Charles Reece, Hui Tian, Michael Kelley, Chen Xu

    2012-04-01

    Microroughness is viewed as a critical issue for attaining optimum performance of superconducting radio frequency accelerator cavities. The principal surface smoothing methods are buffered chemical polish (BCP) and electropolish (EP). The resulting topography is characterized by atomic force microscopy (AFM). The power spectral density (PSD) of AFM data provides a more thorough description of the topography than a single-value roughness measurement. In this work, one dimensional average PSD functions derived from topography of BCP and EP with different controlled starting conditions and durations have been fitted with a combination of power law, K correlation, and shifted Gaussian models to extract characteristic parameters at different spatial harmonic scales. While the simplest characterizations of these data are not new, the systematic tracking of scale-specific roughness as a function of processing is new and offers feedback for tighter process prescriptions more knowledgably targeted at beneficial niobium topography for superconducting radio frequency applications.
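
    A common composite form for such PSD fits, combining a low-frequency power law with a K-correlation ("ABC") term, is shown below; exponent conventions vary in the literature, and the shifted-Gaussian term used for periodic polishing signatures is omitted here.

    ```latex
    % Low-frequency power law plus K-correlation ("ABC") term; exponent
    % conventions vary, and the shifted-Gaussian term is omitted here.
    \mathrm{PSD}(f) \;=\; \frac{K}{f^{\gamma}}
    \;+\; \frac{A}{\bigl[\,1 + (Bf)^{2}\,\bigr]^{C/2}}
    ```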

  10. An Indexed, Mapped Mutant Library Enables Reverse Genetics Studies of Biological Processes in Chlamydomonas reinhardtii.

    PubMed

    Li, Xiaobo; Zhang, Ru; Patena, Weronika; Gang, Spencer S; Blum, Sean R; Ivanova, Nina; Yue, Rebecca; Robertson, Jacob M; Lefebvre, Paul A; Fitz-Gibbon, Sorel T; Grossman, Arthur R; Jonikas, Martin C

    2016-02-01

    The green alga Chlamydomonas reinhardtii is a leading unicellular model for dissecting biological processes in photosynthetic eukaryotes. However, its usefulness has been limited by difficulties in obtaining mutants in specific genes of interest. To allow generation of large numbers of mapped mutants, we developed high-throughput methods that (1) enable easy maintenance of tens of thousands of Chlamydomonas strains by propagation on agar media and by cryogenic storage, (2) identify mutagenic insertion sites and physical coordinates in these collections, and (3) validate the insertion sites in pools of mutants by obtaining >500 bp of flanking genomic sequences. We used these approaches to construct a stably maintained library of 1935 mapped mutants, representing disruptions in 1562 genes. We further characterized randomly selected mutants and found that 33 out of 44 insertion sites (75%) could be confirmed by PCR, and 17 out of 23 mutants (74%) contained a single insertion. To demonstrate the power of this library for elucidating biological processes, we analyzed the lipid content of mutants disrupted in genes encoding proteins of the algal lipid droplet proteome. This study revealed a central role of the long-chain acyl-CoA synthetase LCS2 in the production of triacylglycerol from de novo-synthesized fatty acids. PMID:26764374

  11. Geological Mapping of Fortuna Tessera (V-2): Venus and Earth's Archean Process Comparisons

    NASA Technical Reports Server (NTRS)

    Head, James W.; Hurwitz,D. M.; Ivanov, M. A.; Basilevsky, A. T.; Kumar, P. Senthil

    2008-01-01

    The geological features, structures, thermal conditions, interpreted processes, and outstanding questions related to both the Earth's Archean and Venus share many similarities and we are using a problem-oriented approach to Venus mapping, guided by insight from the Archean record of the Earth, to gain new insight into the evolution of Venus and Earth's Archean. The Earth's preserved and well-documented Archean record provides important insight into high heat-flux tectonic and magmatic environments and structures and the surface of Venus reveals the current configuration and recent geological record of analogous high-temperature environments unmodified by subsequent several billion years of segmentation and overprinting, as on Earth. Elsewhere we have addressed the nature of the Earth's Archean, the similarities to and differences from Venus, and the specific Venus and Earth-Archean problems on which progress might be made through comparison. Here we present the major goals of the Venus-Archean comparison and show how preliminary mapping of the geology of the V-2 Fortuna Tessera quadrangle is providing insight on these problems. We have identified five key themes and questions common to both the Archean and Venus, the assessment of which could provide important new insights into the history and processes of both planets.

  12. Hot Deformation Processing Map and Microstructural Evaluation of the Ni-Based Superalloy IN-738LC

    NASA Astrophysics Data System (ADS)

    Sajjadi, S. A.; Chaichi, A.; Ezatpour, H. R.; Maghsoudlou, A.; Kalaie, M. A.

    2016-04-01

    Hot deformation behavior of the Ni-based superalloy IN-738LC was investigated by means of hot compression tests over the temperature range of 1000-1200 °C and the strain rate range of 0.01-1 s⁻¹. The obtained peak flow stresses were related to strain rate and temperature through the hyperbolic sine equation with an activation energy of 950 kJ/mol. The dynamic material model was used to obtain the processing map of IN-738LC. Analysis of the microstructure was carried out in order to study the characteristics of each domain represented in the processing map. The results showed that dynamic recrystallization occurs in the temperature range of 1150-1200 °C at a strain rate of 0.1 s⁻¹, with a maximum power dissipation efficiency of 35%. The instability domain appeared in the temperature range of 1000-1200 °C at a strain rate of 1 s⁻¹, where severe deformation bands and grain boundary cracking occurred.
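
    For reference, the dynamic material model quantities behind such processing maps are the strain-rate sensitivity m, the power dissipation efficiency η, and the Prasad instability parameter ξ (flow instability where ξ < 0); the standard definitions are:

    ```latex
    % Dynamic material model: strain-rate sensitivity m, power dissipation
    % efficiency eta, and the Prasad instability parameter xi (unstable where xi < 0).
    m = \frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}, \qquad
    \eta = \frac{2m}{m+1}, \qquad
    \xi(\dot{\varepsilon}) = \frac{\partial \ln\!\left(\frac{m}{m+1}\right)}{\partial \ln \dot{\varepsilon}} + m
    ```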

  13. Mapping the connectivity underlying multimodal (verbal and non-verbal) semantic processing: a brain electrostimulation study.

    PubMed

    Moritz-Gasser, Sylvie; Herbet, Guillaume; Duffau, Hugues

    2013-08-01

    Accessing the meaning of words, objects, people and facts is a human ability, made possible thanks to semantic processing. Although its cortical organization has been studied extensively, the subcortical connectivity underlying this semantic network has received less attention. We used intraoperative direct electrostimulation, which mimics a transient virtual lesion, during brain surgery for glioma in eight awake patients to map the anatomical white matter substrate subserving the semantic system. Patients performed a picture naming task and a non-verbal semantic association test during the electrical mapping. Direct electrostimulation of the inferior fronto-occipital fascicle, a poorly known ventral association pathway which runs throughout the brain, induced semantic disturbances in all cases. These transient disorders were highly reproducible, and concerned verbal as well as non-verbal output. Our results highlight for the first time the essential role of the left inferior fronto-occipital fascicle in multimodal (and not only verbal) semantic processing. On the basis of these original findings, and in the light of phylogenetic considerations regarding this fascicle, we suggest its possible implication in the monitoring of the human level of consciousness related to semantic memory, namely noetic consciousness. PMID:23778263

  16. Effects of accelerated reading rate on processing words' syntactic functions by normal and dyslexic readers: event related potentials evidence.

    PubMed

    Breznitz, Z; Leikin, M

    2001-09-01

    In the present study, the authors examined differences in brain activity, as measured by the amplitudes and latencies of event-related potential (ERP) components, in Hebrew-speaking adult dyslexic and normal readers when processing sentence components with different grammatical functions. Participants were 20 dyslexic and 20 normally reading male college students aged 18-27 years. The authors examined the processing of normal word strings in word-by-word reading of sentences having subject-verb-object (SVO) syntactic structure in self-paced and fast-paced conditions. Data revealed that in both reading conditions, the N100 and P300 ERP components were sensitive to internal processes such as recognition of words' grammatical functions. However, the results revealed that a fast-paced reading rate may affect this process, as reflected in systematic changes in the amplitudes and latencies of both ERP components. In accelerated reading, dyslexics showed a significant decrease in latencies and increase in amplitudes. The fast-paced reading rate also led to full use of the word-order strategy in sentence processing, which in turn supports the hypothesis of a syntactic processing "weakness" in dyslexia.

  17. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    NASA Astrophysics Data System (ADS)

    Jie, Liang; Li, KenLi; Shi, Lin; Liu, RangSu; Mei, Jing

    2014-01-01

    Molecular dynamics simulation is a powerful tool for simulating and analyzing complex physical processes and phenomena at the atomic level, predicting the natural time-evolution of a system of atoms. Precise simulation of physical processes imposes strong requirements on both simulation size and computing timescale, so finding available computing resources is crucial to accelerating computation. General-purpose graphics processing units (GPGPUs) have recently been utilized for general-purpose computing due to their high floating-point performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming component of MD simulations of liquid metal solidification processes, this paper presents a fine-grained spatial decomposition method that accelerates the updating of neighbor lists and the calculation of interaction forces by taking advantage of modern graphics processing units (GPUs), enlarging the simulation to a system involving 10 000 000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions on different precision-enabled CUDA versions and various types of GPU (NVIDIA 480GTX, 580GTX and M2050) to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9-11 times faster than the corresponding sequential execution and approximately 1.5-2 times faster than 16-core CPU cluster implementations. On the basis of the simulated results, comparisons between the theoretical and experimental results were carried out and show good agreement, with more complete and larger cluster structures observed, as in actual macroscopic materials. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed during metal solidification are observed in the large-sized system.
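
    A serial sketch of the cell-based spatial decomposition that such neighbor-list builds rely on (the paper maps a finer-grained variant onto GPU threads); box geometry, cutoff handling and names are illustrative, and the code assumes a cubic periodic box at least three cells wide.

    ```python
    # Serial cell-list sketch for neighbor search in a cubic periodic box.
    # Assumes box >= 3 cells per side; parameters and names are illustrative.
    import numpy as np
    from collections import defaultdict

    def neighbor_pairs(pos, box, rcut):
        """pos: (N, 3) positions in [0, box); returns i<j pairs within rcut."""
        ncell = max(3, int(box / rcut))              # cells at least ~rcut wide
        cell = (pos / (box / ncell)).astype(int) % ncell
        cells = defaultdict(list)
        for idx, c in enumerate(map(tuple, cell)):
            cells[c].append(idx)
        pairs = set()
        for (cx, cy, cz), members in cells.items():  # scan 27 surrounding cells
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        key = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                        for i in members:
                            for j in cells.get(key, ()):
                                if i < j:
                                    d = pos[i] - pos[j]
                                    d -= box * np.round(d / box)   # periodic wrap
                                    if d @ d < rcut * rcut:
                                        pairs.add((i, j))
        return sorted(pairs)

    print(len(neighbor_pairs(np.random.rand(500, 3) * 20.0, 20.0, 2.5)))
    ```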

  19. Web mapping system for complex processing and visualization of environmental geospatial datasets

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons, such as the inherent heterogeneity of environmental datasets, big dataset volumes, the complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of unified terminology, the development of environmental geodata access, processing and visualization services, as well as client applications, turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, and the display of map legends and corresponding metadata information. It should be noted that modern web mapping systems, as integrated geoportal applications, are developed based on the SOA and might be considered complexes of interconnected software tools for working with geospatial data. In this report a comprehensive web mapping system, including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive, is presented. The GIS web client consists of three basic tiers: (1) a tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format; (2) a tier of JavaScript objects implementing methods for handling NetCDF metadata, Task XML objects for configuring user calculations and input and output formats, and OGC WMS/WFS cartographical services; and (3) a graphical user interface (GUI) tier of JavaScript objects realizing the web application business logic. The metadata tier consists of a number of JSON objects containing technical information describing the geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc.). The middleware tier of JavaScript objects implementing methods for handling geospatial
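
    A hypothetical example of what one JSON metadata record in the first tier might look like; every field name here is invented for illustration and is not the project's actual schema.

    ```python
    # Invented example of a first-tier JSON metadata record; the field names
    # are illustrative only, not the project's actual schema.
    import json

    dataset_meta = {
        "id": "reanalysis_t2m",                 # hypothetical dataset identifier
        "data_model": "NetCDF",
        "spatial_resolution_deg": 0.75,
        "temporal_resolution": "6h",
        "parameters": ["air_temperature_2m"],
        "valid_methods": ["time_mean", "anomaly", "trend"],
        "ogc_services": {"wms": "/wms", "wfs": "/wfs"},
    }
    print(json.dumps(dataset_meta, indent=2))
    ```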

  20. VLSI architectures for geometrical mapping problems in high-definition image processing

    NASA Technical Reports Server (NTRS)

    Kim, K.; Lee, J.

    1991-01-01

    This paper explores a VLSI architecture for geometrical mapping address computation. The geometric transformation is discussed in the context of plane projective geometry, which invokes a set of basic transformations to be implemented for general image processing. Homogeneous and 2-dimensional Cartesian coordinates are employed to represent the transformations, each of which is implemented via an augmented CORDIC as a processing element. A specific scheme for a processor, which utilizes full pipelining at the macro level and parallel constant-factor-redundant arithmetic with full pipelining at the micro level, is assessed to produce a single VLSI chip for HDTV applications using state-of-the-art MOS technology.
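
    For reference, the general plane projective mapping in homogeneous coordinates, whose rotation and scaling components are what the CORDIC processing elements evaluate, takes the standard form:

    ```latex
    % General plane projective map in homogeneous coordinates; the final
    % division recovers 2-D Cartesian output coordinates.
    \begin{pmatrix} x' \\ y' \\ w' \end{pmatrix} =
    \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{pmatrix}
    \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
    \qquad (x_{\text{out}}, y_{\text{out}}) = \left(\frac{x'}{w'}, \frac{y'}{w'}\right)
    ```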

  1. Brightest Fermi-LAT flares of PKS 1222+216: implications on emission and acceleration processes

    SciTech Connect

    Kushwaha, Pankaj; Singh, K. P.; Sahayanathan, Sunder

    2014-11-20

    We present a high time resolution study of the two brightest γ-ray outbursts from the blazar PKS 1222+216 observed by the Fermi Large Area Telescope (LAT) in 2010. The γ-ray light curves obtained in four different energy bands, 0.1-3, 0.1-0.3, 0.3-1, and 1-3 GeV, with time bins of six hours, show asymmetric profiles with similar rise times in all the bands but a rapid decline during the April flare and a gradual one during the June flare. The light curves during the April flare show an ∼2 day long plateau in the 0.1-0.3 GeV emission, erratic variations in the 0.3-1 GeV emission, and a daily recurring feature in the 1-3 GeV emission until the rapid rise and decline within a day. The June flare shows a monotonic rise until the peak, followed by a gradual decline powered mainly by the multi-peak 0.1-0.3 GeV emission. The peak fluxes during the two flares are similar except in the 1-3 GeV band, where the April flux is twice that of the June flare. Hardness ratios during the April flare indicate spectral hardening in the rising phase followed by softening during the decay. We attribute this behavior to the development of a shock associated with an increase in acceleration efficiency, followed by its decay leading to spectral softening. The June flare suggests hardening during the rise followed by a complicated energy-dependent behavior during the decay. The observed features during the June flare favor multiple emission regions, while the overall flaring episode can be related to jet dynamics.

  2. Hardware acceleration of lucky-region fusion (LRF) algorithm for image acquisition and processing

    NASA Astrophysics Data System (ADS)

    Maignan, William; Koeplinger, David; Carhart, Gary W.; Aubailly, Mathieu; Kiamilev, Fouad; Liu, J. Jiang

    2013-05-01

    "Lucky-region fusion" (LRF) is an image processing technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames, and "fuses" them into a final image with improved quality. In previous research, the LRF algorithm had been implemented on a PC using a compiled programming language. However, the PC usually does not have sufficient processing power to handle real-time extraction, processing and reduction required when the LRF algorithm is applied not to single picture images but rather to real-time video from fast, high-resolution image sensors. This paper describes a hardware implementation of the LRF algorithm on a Virtex 6 field programmable gate array (FPGA) to achieve real-time video processing. The novelty in our approach is the creation of a "black box" LRF video processing system with a standard camera link input, a user controller interface, and a standard camera link output.

  3. The Potential for Signal Integration and Processing in Interacting Map Kinase Cascades

    PubMed Central

    Schwacke, John H.; Voit, Eberhard O.

    2009-01-01

    The cellular response to environmental stimuli requires biochemical information processing through which sensory inputs and cellular status are integrated and translated into appropriate responses by way of interacting networks of enzymes. One such network, the mitogen-activated protein (MAP) kinase cascade, is a highly conserved signal transduction module that propagates signals from cell surface receptors to various cytosolic and nuclear targets by way of a phosphorylation cascade. We have investigated the potential for signal processing within a network of interacting feed-forward kinase cascades typified by the MAP kinase cascade. A genetic algorithm was used to search for sets of kinetic parameters demonstrating representative key input-output patterns of interest. We discuss two of the networks identified in our study, one implementing the exclusive-or function (XOR) and another implementing what we refer to as an in-band detector (IBD), or two-sided threshold. These examples confirm the potential for logic and amplitude-dependent signal processing in interacting MAP kinase cascades demonstrating limited cross-talk. Specifically, the XOR function allows the network to respond to either one, but not both, signals simultaneously, while the IBD permits the network to respond exclusively to signals within a given range of strength and to suppress signals below as well as above this range. The solution to the XOR problem is interesting in that it requires only two interacting pathways, crosstalk at only one layer, and no feedback or explicit inhibition. These types of responses are not only biologically relevant but constitute signal processing modules that can be combined to create other logical functions and that, in contrast to amplification, cannot be achieved with a single cascade or with two non-interacting cascades. Our computational results revealed surprising similarities between experimental data describing the JNK/MKK4/MKK7 pathway and the solution for

  4. Making clinical case-based learning in veterinary medicine visible: analysis of collaborative concept-mapping processes and reflections.

    PubMed

    Khosa, Deep K; Volet, Simone E; Bolton, John R

    2014-01-01

    The value of collaborative concept mapping in assisting students to develop an understanding of complex concepts across a broad range of basic and applied science subjects is well documented. Less is known about students' learning processes that occur during the construction of a concept map, especially in the context of clinical cases in veterinary medicine. This study investigated the unfolding collaborative learning processes that took place in real-time concept mapping of a clinical case by veterinary medical students and explored students' and their teacher's reflections on the value of this activity. This study had two parts. The first part investigated the cognitive and metacognitive learning processes of two groups of students who displayed divergent learning outcomes in a concept mapping task. Meaningful group differences were found in their level of learning engagement in terms of the extent to which they spent time understanding and co-constructing knowledge along with completing the task at hand. The second part explored students' and their teacher's views on the value of concept mapping as a learning and teaching tool. The students' and their teacher's perceptions revealed congruent and contrasting notions about the usefulness of concept mapping. The relevance of concept mapping to clinical case-based learning in veterinary medicine is discussed, along with directions for future research.

  5. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffuse optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and the different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the MC simulation's advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
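
    Schematically (a standard adjoint formulation, with notation assumed here rather than taken from the paper), each Jacobian entry couples a source-detector pair (s_i, d_i) to a voxel j through the two MC-computed Green's functions,

        J_{ij} \propto G(\mathbf{r}_{s_i}, \mathbf{r}_j)\, G(\mathbf{r}_j, \mathbf{r}_{d_i}),
        \qquad \boldsymbol{\phi} \approx J\,\mathbf{x},

    and the fluorochrome distribution \mathbf{x} is recovered by solving this linear system for the measured fluorescence signals \boldsymbol{\phi}.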

  6. NREL Develops Accelerated Sample Activation Process for Hydrogen Storage Materials (Fact Sheet)

    SciTech Connect

    Not Available

    2010-12-01

    This fact sheet describes NREL's accomplishments in developing a new sample activation process that reduces the time to prepare samples for measurement of hydrogen storage from several days to five minutes and provides more uniform samples. Work was performed by NREL's Chemical and Materials Science Center.

  7. Single-step affinity purification of enzyme biotherapeutics: a platform methodology for accelerated process development.

    PubMed

    Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean

    2014-01-01

    Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography process train purifications are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT).

  8. Heat Capacity Mapping Radiometer (HCMR) data processing algorithm, calibration, and flight performance evaluation

    NASA Technical Reports Server (NTRS)

    Bohse, J. R.; Bewtra, M.; Barnes, W. L.

    1979-01-01

    The rationale and procedures used in the radiometric calibration and correction of Heat Capacity Mapping Mission (HCMM) data are presented. Instrument-level testing and calibration of the Heat Capacity Mapping Radiometer (HCMR) were performed by the sensor contractor ITT Aerospace/Optical Division. The principal results are included. From the instrumental characteristics and calibration data obtained during ITT acceptance tests, an algorithm for post-launch processing was developed. Integrated spacecraft-level sensor calibration was performed at Goddard Space Flight Center (GSFC) approximately two months before launch. This calibration provided an opportunity to validate the data calibration algorithm. Instrumental parameters and results of the validation are presented and the performances of the instrument and the data system after launch are examined with respect to the radiometric results. Anomalies and their consequences are discussed. Flight data indicates a loss in sensor sensitivity with time. The loss was shown to be recoverable by an outgassing procedure performed approximately 65 days after the infrared channel was turned on. It is planned to repeat this procedure periodically.

  9. Hot Deformation Characteristics of 13Cr-4Ni Stainless Steel Using Constitutive Equation and Processing Map

    NASA Astrophysics Data System (ADS)

    Kishor, Brij; Chaudhari, G. P.; Nath, S. K.

    2016-07-01

    Hot compression tests were performed to study the hot deformation characteristics of 13Cr-4Ni stainless steel. The tests were performed in the strain rate range of 0.001-10 s⁻¹ and the temperature range of 900-1100 °C using a Gleeble® 3800 simulator. A constitutive equation of Arrhenius type was established based on the experimental data to calculate the different material constants, and the average value of the apparent activation energy was found to be 444 kJ/mol. The Zener-Hollomon parameter, Z, was estimated in order to characterize the flow stress behavior. Power dissipation and instability maps developed on the basis of the dynamic materials model for a true strain of 0.5 show optimum hot working conditions corresponding to a peak efficiency range of about 28-32%. These lie in the temperature range of 950-1025 °C with a corresponding strain rate range of 0.001-0.01 s⁻¹, and in the temperature range of 1050-1100 °C with a corresponding strain rate range of 0.01-0.1 s⁻¹. The flow characteristics under these conditions show dynamic recrystallization behavior. The microstructures are correlated to the different stability domains indicated in the processing map.
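
    For reference, the Zener-Hollomon parameter and the Arrhenius-type hyperbolic-sine law mentioned above take the standard forms

        Z = \dot{\varepsilon} \exp\!\left(\frac{Q}{RT}\right) = A \left[ \sinh(\alpha\sigma) \right]^{n},

    where \dot{\varepsilon} is the strain rate, Q the apparent activation energy (about 444 kJ/mol here), R the gas constant, T the absolute temperature, \sigma the flow stress, and A, \alpha and n are fitted material constants.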

  10. Processing real-time stereo video for an autonomous robot using disparity maps and sensor fusion

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald W.; Hall, Ernest L.

    2004-10-01

    The Bearcat "Cub" Robot is an interactive, intelligent, Autonomous Guided Vehicle (AGV) designed to serve in unstructured environments. Recent advances in computer stereo vision algorithms that produce quality disparity maps, together with the availability of low-cost, high-speed camera systems, have simplified many of the tasks associated with robot navigation and obstacle avoidance using stereo vision. Leveraging these benefits, this paper describes a novel method for autonomous navigation and obstacle avoidance currently being implemented on the UC Bearcat Robot. The core of this approach is the synthesis of multiple sources of real-time data, including stereo image disparity maps, tilt sensor data, and LADAR data, with standard contour, edge, color, and line detection methods to provide robust and intelligent obstacle avoidance. An algorithm is presented, with Matlab code, to process the disparity maps to rapidly produce obstacle size and location information in a simple format; it features cancellation of noise and correction for pitch and roll. The vision and control computers are clustered with the Parallel Virtual Machine (PVM) software. The significance of this work is in presenting the methods needed for real-time navigation and obstacle avoidance for intelligent autonomous robots.
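
    The paper's Matlab listing is not reproduced here; as a hedged Python sketch of the core step, depth follows from disparity via Z = f·B/d, after which near pixels can be grouped into a simple obstacle report (all parameter names below are illustrative):

        # Depth from a disparity map and a crude obstacle report.
        import numpy as np

        def obstacles_from_disparity(disp, f_px, baseline_m, z_max=3.0):
            """disp: (H, W) disparity in pixels; f_px: focal length in pixels."""
            valid = disp > 1e-3                     # ignore unmatched pixels
            depth = np.full(disp.shape, np.inf)
            depth[valid] = f_px * baseline_m / disp[valid]
            mask = depth < z_max                    # pixels closer than z_max
            ys, xs = np.nonzero(mask)
            if xs.size == 0:
                return None                         # no obstacle in range
            return {"rows": (int(ys.min()), int(ys.max())),
                    "cols": (int(xs.min()), int(xs.max())),
                    "nearest_m": float(depth[mask].min())}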

  11. Data processing for fabrication of GMT primary segments: raw data to final surface maps

    NASA Astrophysics Data System (ADS)

    Tuell, Michael T.; Hubler, William; Martin, Hubert M.; West, Steven C.; Zhou, Ping

    2014-07-01

    The Giant Magellan Telescope (GMT) primary mirror is a 25 meter f/0.7 surface composed of seven 8.4 meter circular segments, six of which are identical off-axis segments. The fabrication and testing challenges with these severely aspheric segments (about 14 mm of aspheric departure, mostly astigmatism) are well documented. Converting the raw phase data to useful surface maps involves many steps and compensations. They include large corrections for: image distortion from the off-axis null test; misalignment of the null test; departure from the ideal support forces; and temperature gradients in the mirror. The final correction simulates the active-optics correction that will be made at the telescope. Data are collected and phase maps are computed in 4D Technology's 4Sight™ software. The data are saved to an .h5 (HDF5) file and imported into MATLAB® for further analysis. A semi-automated data pipeline has been developed to reduce the analysis time as well as the potential for error. As each operation is performed, results and analysis parameters are appended to a data file, so in the end the history of data processing is embedded in the file. A report and a spreadsheet are automatically generated to display the final statistics as well as how each compensation term varied during the data acquisition. This gives us valuable statistics and provides a quick starting point for investigating atypical results.

  12. Linear Accelerators

    SciTech Connect

    Sidorin, Anatoly

    2010-01-05

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly the linear accelerators are divided in three large groups: electrostatic, induction and RF accelerators. Overview of the different types of accelerators is given. Stability of longitudinal and transverse motion in the RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  13. In-Database Raster Analytics: Map Algebra and Parallel Processing in Oracle Spatial Georaster

    NASA Astrophysics Data System (ADS)

    Xie, Q. J.; Zhang, Z. Z.; Ravada, S.

    2012-07-01

    Over the past decade several products have been using enterprise database technology to store and manage geospatial imagery and raster data inside RDBMS, which in turn provides the best manageability and security. With the data volume growing exponentially, real-time or near real-time processing and analysis of such big data becomes more challenging. Oracle Spatial GeoRaster, different from most other products, takes the enterprise database-centric approach for both data management and data processing. This paper describes one of the central components of this database-centric approach: the processing engine built completely inside the database. Part of this processing engine is raster algebra, which we call the In-database Raster Analytics. This paper discusses the three key characteristics of this in-database analytics engine and the benefits. First, it moves the data processing closer to the data instead of moving the data to the processing, which helps achieve greater performance by overcoming the bottleneck of computer networks. Second, we designed and implemented a new raster algebra expression language. This language is based on PL/SQL and is currently focused on the "local" function type of map algebra. This language includes general arithmetic, logical and relational operators and any combination of them, which dramatically improves the analytical capability of the GeoRaster database. The third feature is the implementation of parallel processing of such operations to further improve performance. This paper also presents some sample use cases. The testing results demonstrate that this in-database approach for raster analytics can effectively help solve the biggest performance challenges we are facing today with big raster and image data.
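
    As an analogy to the "local" function type described above (the actual GeoRaster expression language is PL/SQL-based; this NumPy fragment only mirrors the semantics), a local operation computes each output cell purely from the co-located cells of the inputs:

        # "Local" map algebra: output cell (i, j) depends only on input
        # cells (i, j) -- here a normalized difference of two raster bands.
        import numpy as np

        def normalized_difference(band_a, band_b):
            a = band_a.astype(float)
            b = band_b.astype(float)
            with np.errstate(divide="ignore", invalid="ignore"):
                out = (a - b) / (a + b)
            return np.where(np.isfinite(out), out, 0.0)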

  14. [Acceleration of osmotic dehydration process through ohmic heating of foods: raspberries (Rubus idaeus)].

    PubMed

    Simpson, Ricardo R; Jiménez, Maite P; Carevic, Erica G; Grancelli, Romina M

    2007-06-01

    Raspberries (Rubus idaeus) were osmotically dehydrated by applying a conventional method, under the assumption of a homogeneous solution, in a 62% glucose solution at 50 °C. Raspberries were also osmotically dehydrated using ohmic heating in a 57% glucose solution at a variable voltage (to maintain the temperature between 40 and 50 °C) and an electric field intensity <100 V/cm. Comparing the results from both experiments made it evident that processing time is reduced when the ohmic heating technique is used; in some cases this reduction reached 50%. This is explained by an effect additional to the thermal damage generated in an ohmic process, called electroporation.

  15. Denoising NMR time-domain signal by singular-value decomposition accelerated by graphics processing units.

    PubMed

    Man, Pascal P; Bonhomme, Christian; Babonneau, Florence

    2014-01-01

    We present a post-processing method that decreases NMR spectrum noise without line shape distortion, thereby increasing the signal-to-noise (S/N) ratio of a spectrum. This method, called the Cadzow enhancement procedure, is based on the singular-value decomposition (SVD) of the time-domain signal. We also provide software whose execution takes only a few seconds for typical data when run on a modern graphics processing unit. We tested this procedure not only on the low-sensitivity nucleus ²⁹Si in hybrid materials but also on the low-gyromagnetic-ratio quadrupolar nucleus ⁸⁷Sr in the reference sample Sr(NO₃)₂. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The method is also applicable to simulated spectra, resulting in shorter simulation times for powder averaging. An estimate of the number of singular values needed for denoising is also provided.
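
    A minimal CPU sketch of the Cadzow step follows (the paper's GPU implementation and parameter choices are not reproduced; k, the number of retained singular values, must be chosen as discussed above):

        # Cadzow/SVD denoising of a 1-D time-domain signal.
        import numpy as np

        def cadzow_denoise(fid, k, n_iter=1):
            n = len(fid)
            rows = n // 2 + 1
            cols = n - rows + 1
            for _ in range(n_iter):
                # Hankel matrix built from the signal
                H = np.array([fid[i:i + cols] for i in range(rows)])
                U, s, Vh = np.linalg.svd(H, full_matrices=False)
                s[k:] = 0.0                      # drop the noise subspace
                Hk = (U * s) @ Vh
                # average anti-diagonals to restore the Hankel structure
                out = np.zeros(n, dtype=Hk.dtype)
                cnt = np.zeros(n)
                for i in range(rows):
                    out[i:i + cols] += Hk[i]
                    cnt[i:i + cols] += 1
                fid = out / cnt
            return fid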

  16. High-resolution mapping of combustion processes and implications for CO2 emissions

    NASA Astrophysics Data System (ADS)

    Wang, R.; Tao, S.; Ciais, P.; Shen, H. Z.; Huang, Y.; Chen, H.; Shen, G. F.; Wang, B.; Li, W.; Zhang, Y. Y.; Lu, Y.; Zhu, D.; Chen, Y. C.; Liu, X. P.; Wang, W. T.; Wang, X. L.; Liu, W. X.; Li, B. G.; Piao, S. L.

    2013-05-01

    High-resolution mapping of fuel combustion and CO2 emission provides valuable information for modeling pollutant transport, developing mitigation policy, and inverse modeling of CO2 fluxes. Previous global emission maps included only a few fuel types, and emissions were estimated on a grid by distributing national fuel data on an equal per capita basis, using population density maps. This process distorts the geographical distribution of emissions within countries. In this study, a sub-national disaggregation method (SDM) of fuel data is applied to establish a global 0.1° × 0.1° geo-referenced inventory of fuel combustion (PKU-FUEL) and corresponding CO2 emissions (PKU-CO2) based upon 64 fuel sub-types for the year 2007. Uncertainties of the emission maps are evaluated using a Monte Carlo method. It is estimated that CO2 emission from combustion sources, including fossil fuel, biomass, and solid wastes, was 11.2 Pg C yr⁻¹ in 2007 (9.1 and 13.3 Pg C yr⁻¹ as the 5th and 95th percentiles). Of this, the emission from fossil fuel combustion is 7.83 Pg C yr⁻¹, which is very close to the estimate of the International Energy Agency (7.87 Pg C yr⁻¹). By replacing national data disaggregation with sub-national data in this study, the average 95th-minus-5th percentile range of CO2 emission over all grid points is reduced from 417 to 68.2 Mg km⁻² yr⁻¹. The spread is reduced because the uneven distribution of per capita fuel consumption within countries is better taken into account by using sub-national fuel consumption data directly. A significant difference in per capita CO2 emissions between urban and rural areas was found in developing countries (2.08 vs. 0.598 Mg C/(cap. × yr)), but not in developed countries (3.55 vs. 3.41 Mg C/(cap. × yr)). This implies that the rapid urbanization of developing countries is very likely to drive up their emissions in the future.

  17. The acceleration of spoken-word processing in children's native-language acquisition: an ERP cohort study.

    PubMed

    Ojima, Shiro; Matsuba-Kurita, Hiroko; Nakamura, Naoko; Hagiwara, Hiroko

    2011-04-01

    Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children's native-language acquisition. In a picture-word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n = 80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70 ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children's onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7.

  18. Modulation of the phenolic composition and colour of red wines subjected to accelerated ageing by controlling process variables.

    PubMed

    González-Sáiz, J M; Esteban-Díez, I; Rodríguez-Tecedor, S; Pérez-Del-Notario, N; Arenzana-Rámila, I; Pizarro, C

    2014-12-15

    The aim of the present work was to evaluate the effect of the main factors conditioning accelerated ageing processes (oxygen dose, chip dose, wood origin, toasting degree and maceration time) on the phenolic and chromatic profiles of red wines by using a multivariate strategy based on experimental design methodology. The results obtained revealed that the concentrations of monomeric anthocyanins and flavan-3-ols could be modified through the application of particular experimental conditions. This fact was particularly remarkable since changes in phenolic profile were closely linked to changes observed in chromatic parameters. The main strength of this study lies in the possibility of using its conclusions as a basis to make wines with specific colour properties based on quality criteria. To our knowledge, the influence of such a large number of alternative ageing parameters on wine phenolic composition and chromatic attributes has not been studied previously using a comprehensive experimental design methodology.

  19. DIGITAL PROCESSING TECHNIQUES FOR IMAGE MAPPING WITH LANDSAT TM AND SPOT SIMULATOR DATA.

    USGS Publications Warehouse

    Chavez, Pat S., Jr.

    1984-01-01

    To overcome certain problems associated with the visual selection of Landsat TM bands for image mapping, the author used a quantitative technique that ranks the 20 possible three-band combinations based upon their information content. Standard deviations and correlation coefficients can be used to compute a value called the Optimum Index Factor (OIF) for each of the 20 possible combinations. SPOT simulator images were digitally processed and compared with Landsat-4 Thematic Mapper (TM) images covering a semi-arid region in northern Arizona and a highly vegetated urban area near Washington, D.C. Statistical comparisons indicate that more radiometric or color information exists in certain TM three-band combinations than in the three SPOT bands.
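
    The OIF follows Chavez's standard definition,

        \mathrm{OIF} = \frac{\sum_{i=1}^{3} s_i}{\sum_{j=1}^{3} |r_j|},

    where s_i are the standard deviations of the three candidate bands and r_j are the pairwise correlation coefficients between them; the combination with the highest OIF carries the most information with the least redundancy.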

  20. Demonstration of wetland vegetation mapping in Florida from computer-processed satellite and aircraft multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Butera, M. K.

    1979-01-01

    The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost-effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.

  1. Hardware acceleration of PIC codes: tapping into the power of state of the art processing units

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Abreu, P.; Martins, S. F.; Silva, L. O.

    2008-11-01

    There are many astrophysical and laboratory scenarios where kinetic effects play an important role. Further understanding of these scenarios requires detailed numerical modeling using fully relativistic three-dimensional kinetic codes such as OSIRIS [1]. However, these codes are computationally heavy. Explicitly using available hardware resources such as SIMD units (Altivec/SSE3) [2], Cell processors or graphics processing units (GPUs) may allow us to significantly boost the performance of these codes. In most cases, the processing units are limited to single-precision arithmetic and require specific C/C++ code. We present a comparison between double-precision and single-precision results, focusing both on performance and on the effects on the simulation in terms of algorithm properties. Details are given on a framework allowing the integration of hardware-optimized routines with existing high-performance codes in languages other than C. Finally, initial results of high-performance modules of the PIC algorithm using SIMD units and GPUs will also be presented. [1] R. A. Fonseca et al., LNCS 2331, 342, (2002) [2] K. J. Bowers et al., Phys. Plasmas 15 (5), 055703 (2008)

  2. Mapping Glacial Weathering Processes with Thermal Infrared Remote Sensing: A Case Study at Robertson Glacier, Canada

    NASA Astrophysics Data System (ADS)

    Rutledge, A. M.; Christensen, P. R.; Shock, E.; Canovas, P. A., III

    2014-12-01

    Geologic weathering processes in cold environments, especially subglacial chemical processes acting on rock and sediment, are not well characterized due to the difficulty of accessing these environments. Glacial weathering of geologic materials contributes to the solute flux in meltwater and provides a potential source of energy to chemotrophic microbes, and is thus an important component to understand. In this study, we use Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data to map the extent of glacial weathering in the front range of the Canadian Rockies using remotely detected infrared spectra. We ground-truth our observations using laboratory infrared spectroscopy, X-ray diffraction, and geochemical analyses of field samples. The major goals of the project are to quantify weathering inputs to the glacial energy budget and to link in situ sampling with remote sensing capabilities. Robertson Glacier, Alberta, Canada, is an excellent field site for this technique, as it is easily accessible and its retreating stage allows sampling of fresh subglacial and englacial sediments. Infrared imagery of the region was collected with the ASTER satellite instrument. At the same time, samples of glacially altered rock and sediments were collected along a downstream transect of the glacier and outwash plain. Infrared laboratory spectroscopy and X-ray diffraction were used to determine the composition and abundance of the minerals present. Geochemical data were also collected at each location, and ice and water samples were analyzed for major and minor elements. Our initial conclusion is that most of the weathering occurs at the glacier-rock interface rather than in the outwash stream. Results from both laboratory and ASTER data indicate the presence of leached weathering rinds. A general trend of decreasing carbonate abundance with elevation (i.e., residence time in ice) is observed, which is consistent with increasing calcium ion

  3. Audit Report on "Waste Processing and Recovery Act Acceleration Efforts for Contact-Handled Transuranic Waste at the Hanford Site"

    SciTech Connect

    2010-05-01

    The Department of Energy's Office of Environmental Management's (EM), Richland Operations Office (Richland), is responsible for disposing of the Hanford Site's (Hanford) transuranic (TRU) waste, including nearly 12,000 cubic meters of radioactive contact-handled TRU wastes. Prior to disposing of this waste at the Department's Waste Isolation Pilot Plant (WIPP), Richland must certify that it meets WIPP's waste acceptance criteria. To be certified, the waste must be characterized, screened for prohibited items, treated (if necessary) and placed into a satisfactory disposal container. In a February 2008 amendment to an existing Record of Decision (Decision), the Department announced its plan to ship up to 8,764 cubic meters of contact-handled TRU waste from Hanford and other waste generator sites to the Advanced Mixed Waste Treatment Project (AMWTP) at Idaho's National Laboratory (INL) for processing and certification prior to disposal at WIPP. The Department decided to maximize the use of the AMWTP's automated waste processing capabilities to compact and, thereby, reduce the volume of contact-handled TRU waste. Compaction reduces the number of shipments and permits WIPP to more efficiently use its limited TRU waste disposal capacity. The Decision noted that the use of AMWTP would avoid the time and expense of establishing a processing capability at other sites. In May 2009, EM allocated $229 million of American Recovery and Reinvestment Act of 2009 (Recovery Act) funds to support Hanford's Solid Waste Program, including Hanford's contact-handled TRU waste. Besides providing jobs, these funds were intended to accelerate cleanup in the short term. We initiated this audit to determine whether the Department was effectively using Recovery Act funds to accelerate processing of Hanford's contact-handled TRU waste. Relying on the availability of Recovery Act funds, the Department changed course and approved an alternative plan that could increase costs by about $25 million

  4. Comparing Two Forms of Concept Map Critique Activities to Facilitate Knowledge Integration Processes in Evolution Education

    ERIC Educational Resources Information Center

    Schwendimann, Beat A.; Linn, Marcia C.

    2016-01-01

    Concept map activities often lack a subsequent revision step that facilitates knowledge integration. This study compares two collaborative critique activities using a Knowledge Integration Map (KIM), a form of concept map. Four classes of high school biology students (n = 81) using an online inquiry-based learning unit on evolution were assigned…

  5. Influence of processing procedure on the quality of Radix Scrophulariae: a quantitative evaluation of the main compounds obtained by accelerated solvent extraction and high-performance liquid chromatography.

    PubMed

    Cao, Gang; Wu, Xin; Li, Qinglin; Cai, Hao; Cai, Baochang; Zhu, Xuemei

    2015-02-01

    An improved high-performance liquid chromatography with diode array detection combined with accelerated solvent extraction method was used to simultaneously determine six compounds in crude and processed Radix Scrophulariae samples. Accelerated solvent extraction parameters such as extraction solvent, temperature, number of cycles, and analysis procedure were systematically optimized. The results indicated that compared with crude Radix Scrophulariae samples, the processed samples had lower contents of harpagide and harpagoside but higher contents of catalpol, acteoside, angoroside C, and cinnamic acid. The established method was sufficiently rapid and reliable for the global quality evaluation of crude and processed herbal medicines.

  6. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... promptly reinstated on the MAP-Approved Lender list posted on HUD's Web site. ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  7. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... promptly reinstated on the MAP-Approved Lender list posted on HUD's Web site. ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  8. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... promptly reinstated on the MAP-Approved Lender list posted on HUD's Web site. ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  9. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... promptly reinstated on the MAP-Approved Lender list posted on HUD's Web site. ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  10. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... promptly reinstated on the MAP-Approved Lender list posted on HUD's Web site. ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  11. Using general-purpose computing on graphics processing units (GPGPU) to accelerate the ordinary kriging algorithm

    NASA Astrophysics Data System (ADS)

    Gutiérrez de Ravé, E.; Jiménez-Hornero, F. J.; Ariza-Villaverde, A. B.; Gómez-López, J. M.

    2014-03-01

    Spatial interpolation methods have been applied in many disciplines, ordinary kriging being one of the most frequently used. However, kriging has a computational cost that scales as the cube of the number of data points. Therefore, one of the most pressing problems in geostatistical simulations is developing methods that reduce the computational time. Calculating the weights and then the estimate for each unknown point is the most time-consuming step in ordinary kriging. This work investigates the potential reduction in execution time obtained by selecting the suitable operations involved in this step to be parallelized using general-purpose computing on graphics processing units (GPGPU) and the Compute Unified Device Architecture (CUDA). The study was performed through comparisons between graphics and central processing units on two different machines, a personal computer (GPU: GeForce 9500; CPU: AMD Athlon X2 4600) and a server (GPU: Tesla C1060; CPU: Xeon 5600). In addition, two data types (float and double) were considered in the executions. The experimental results indicate that a parallel implementation of the matrix inverse using GPGPU and CUDA is enough to reduce the execution time of the weights calculation and estimation for each unknown point and, as a result, the overall runtime of ordinary kriging. In addition, suitable array dimensions for using the parallelized code were determined for each case, making it possible to obtain significant time savings compared with parallelizing wider portions of the code. This demonstrates the value of carrying out this kind of study for other matrix-based interpolation methods.
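
    A hedged sketch of the parallelized step, using CuPy for brevity (the study programmed CUDA directly; the augmented ordinary-kriging system shown here is the standard formulation):

        # Solve the ordinary-kriging system on the GPU. K: (n, n) covariances
        # among sample points; k0: (n,) covariances to the unknown point.
        import cupy as cp

        def ok_weights(K, k0):
            n = K.shape[0]
            A = cp.ones((n + 1, n + 1), dtype=K.dtype)  # unbiasedness row/col
            A[:n, :n] = K
            A[n, n] = 0.0
            b = cp.ones(n + 1, dtype=K.dtype)
            b[:n] = k0
            w = cp.linalg.solve(A, b)                   # runs on the GPU
            return w[:n], w[n]          # kriging weights, Lagrange multiplier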

  12. Adaptive Classification of Landscape Process and Function: An Integration of Geoinformatics and Self-Organizing Maps

    SciTech Connect

    Coleman, Andre M.

    2009-07-17

    The advanced geospatial information extraction and analysis capabilities of Geographic Information Systems (GIS) and Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), provide a topology-preserving means of reducing and understanding complex data relationships in the landscape. The Adaptive Landscape Classification Procedure (ALCP) is presented as an adaptive and evolutionary capability in which varying types of data can be assimilated to address different management needs such as hydrologic response, erosion potential, habitat structure, instrumentation placement, and various forecast or what-if scenarios. This paper describes how the evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Establishing relationships among high-dimensional datasets through neurocomputing-based pattern recognition methods can help (1) resolve large volumes of data into a structured and meaningful form; (2) provide an approach for inferring landscape processes in areas that have limited data available but exhibit similar landscape characteristics; and (3) discover the value of individual variables or groups of variables that contribute to specific processes in the landscape. Classification of hydrologic patterns in the landscape is demonstrated.
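
    For readers unfamiliar with the underlying machinery, a minimal SOM training loop is sketched below (the standard algorithm only; the ALCP's actual inputs and configuration are not reproduced):

        # Minimal self-organizing map: best-matching-unit search followed
        # by a Gaussian neighborhood update with decaying rate and radius.
        import numpy as np

        def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0):
            rng = np.random.default_rng(0)
            gy, gx = grid
            w = rng.random((gy, gx, data.shape[1]))
            coords = np.dstack(np.mgrid[0:gy, 0:gx]).astype(float)
            for t in range(iters):
                x = data[rng.integers(len(data))]
                bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (gy, gx))
                frac = 1.0 - t / iters                   # decay schedule
                lr, sigma = lr0 * frac, max(sigma0 * frac, 0.5)
                d2 = ((coords - np.array(bmu, float)) ** 2).sum(-1)
                h = np.exp(-d2 / (2 * sigma ** 2))       # neighborhood function
                w += lr * h[..., None] * (x - w)
            return w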

  13. Hot deformation behavior and processing map of a 9Cr ferritic/martensitic ODS steel

    NASA Astrophysics Data System (ADS)

    Zhang, Guangming; Zhou, Zhangjian; Sun, Hongying; Zou, Lei; Wang, Man; Li, Shaofu

    2014-12-01

    The hot deformation behavior of 9Cr oxide-dispersion-strengthened (ODS) steel fabricated by mechanical alloying and hot isostatic pressing (HIP) was investigated through hot compression tests on a Gleeble-1500D simulator in the temperature range of 1050-1200 °C and the strain rate range of 0.001-1 s⁻¹. The relationship between the rheological stress and the strain rate was also studied. The activation energy and the stress and material parameters of the hyperbolic-sine equation were resolved from the data obtained, and the processing map was also constructed. The results show that the flow stress decreases as the temperature increases and as the strain rate decreases, i.e., the 9Cr ODS steel exhibits positive strain-rate sensitivity. It is clear that dynamic recrystallization is influenced by both temperature and strain rate. The results of this study may provide a good reference for the selection of hot working parameters for 9Cr ODS steel. The optimum processing domains are at 1200 °C with a strain rate of 1 s⁻¹ and in the range of 1080-1100 °C with strain rates between 0.018 s⁻¹ and 0.05 s⁻¹.

  14. Accelerated evaluation of the robustness of treatment plans against geometric uncertainties by Gaussian processes.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2012-12-01

    In order to provide a consistently high-quality treatment, it is of great interest to assess the robustness of a treatment plan under the influence of geometric uncertainties. One possible method is to run treatment simulations for all scenarios that may arise from these uncertainties. These simulations may be evaluated in terms of the statistical distribution of the outcomes (as given by various dosimetric quality metrics) or statistical moments thereof, e.g. mean and/or variance. This paper introduces a method to compute the outcome distribution and all associated values of interest in a very efficient manner. This is accomplished by substituting the original patient model with a surrogate provided by a machine learning algorithm. This Gaussian process (GP) is trained to mimic the behavior of the patient model based on only very few samples. Once trained, the GP surrogate takes the place of the patient model in all subsequent calculations. The approach is demonstrated on two examples. The achieved computational speedup is more than one order of magnitude.
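
    The idea can be sketched with scikit-learn's GaussianProcessRegressor (the paper's own GP machinery and dose metrics are not reproduced; plan_quality below is a hypothetical stand-in for the expensive treatment simulation):

        # Train a GP surrogate on a few simulated scenarios, then screen many.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def plan_quality(shifts):     # placeholder for a dosimetric metric
            return np.exp(-np.sum(shifts ** 2, axis=-1))

        rng = np.random.default_rng(1)
        X_train = rng.normal(0.0, 3.0, size=(20, 3))   # few geometric shifts (mm)
        y_train = plan_quality(X_train)

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0), normalize_y=True)
        gp.fit(X_train, y_train)

        # The cheap surrogate now evaluates thousands of uncertainty scenarios.
        X_scen = rng.normal(0.0, 3.0, size=(10000, 3))
        quality = gp.predict(X_scen)
        print("mean:", quality.mean(), "5th percentile:", np.quantile(quality, 0.05))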

  15. Rock varnish in New York: An accelerated snapshot of accretionary processes

    NASA Astrophysics Data System (ADS)

    Krinsley, David H.; Dorn, Ronald I.; DiGregorio, Barry E.; Langworthy, Kurt A.; Ditto, Jeffrey

    2012-02-01

    Samples of manganiferous rock varnish collected from fluvial, bedrock outcrop and Erie Barge Canal settings in New York state host a variety of diatom, fungal and bacterial microbial forms that are enhanced in manganese and iron. Use of a Dual-Beam Focused Ion Beam Scanning Electron Microscope to manipulate the varnish in situ reveals microbial forms that would not have otherwise been identified. The relative abundance of Mn-Fe-enriched biotic forms in New York samples is far greater than varnishes collected from warm deserts. Moisture availability has long been noted as a possible control on varnish growth rates, a hypothesis consistent with the greater abundance of Mn-enhancing bioforms. Sub-micron images of incipient varnish formation reveal that varnishing in New York probably starts with the mortality of microorganisms that enhanced Mn on bare mineral surfaces; microbial death results in the adsorption of the Mn-rich sheath onto the rock in the form of filamentous networks. Clay minerals are then cemented by remobilization of the Mn-rich material. Thus, the previously unanswered question of what comes first - clay mineral deposition or enhancement of Mn - can be answered in New York because of the faster rate of varnish growth. In contrast, very slow rates of varnishing seen in warm deserts, of microns per thousand years, make it less likely that collected samples will reveal varnish accretionary processes than samples collected from fast-accreting moist settings.

  16. Quantification of Geologic Lineaments by Manual and Machine Processing Techniques. [Landsat satellites - mapping/geological faults

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Moik, J. G.; Shoup, W. C.

    1975-01-01

    The effect of operator variability and subjectivity in lineament mapping was studied, along with methods to minimize or eliminate these problems through several machine preprocessing methods. Mapped lineaments of a test landmass were used, and the results were compared statistically. The total number of fractures mapped by the operators and their average lengths varied considerably, although comparison of lineament directions revealed some consensus. A summary map (785 linears) produced by overlaying the maps generated by the four operators shows that only 0.4 percent were recognized by all four operators, 4.7 percent by three, 17.8 percent by two, and 77 percent by one operator. Similar results were obtained when comparing these results with those of another independent group. This large amount of variability suggests a need for the standardization of mapping techniques, which might be accomplished by a machine-aided procedure. Two methods of machine-aided mapping were tested, both simulating directional filters.

  17. Accelerators and the Accelerator Community

    SciTech Connect

    Malamud, Ernest; Sessler, Andrew

    2008-06-01

    In this paper, standing back and looking from afar while adopting a historical perspective, the field of accelerator science is examined. How it grew, what forces made it what it is, where it is now, and what it is likely to be in the future are the subjects explored. Clearly, a great deal of personal opinion is invoked in this process.

  18. Updated mapping and seismic reflection data processing along the Queen Charlotte fault system, southeast Alaska

    NASA Astrophysics Data System (ADS)

    Walton, M. A. L.; Gulick, S. P. S.; Haeussler, P. J.; Rohr, K.; Roland, E. C.; Trehu, A. M.

    2014-12-01

    The Queen Charlotte Fault (QCF) is an obliquely convergent strike-slip system that accommodates offset between the Pacific and North America plates in southeast Alaska and western Canada. Two recent earthquakes, including a M7.8 thrust event near Haida Gwaii on 28 October 2012, have sparked renewed interest in the margin and led to further study of how convergent stress is accommodated along the fault. Recent studies have looked in detail at offshore structure, concluding that a change in strike of the QCF at ~53.2 degrees north has led to significant differences in stress and the style of strain accommodation along-strike. We provide updated fault mapping and seismic images to supplement and support these results. One of the highest-quality seismic reflection surveys along the Queen Charlotte system to date, EW9412, was shot aboard the R/V Maurice Ewing in 1994. The survey was last processed to post-stack time migration for a 1999 publication. Due to heightened interest in high-quality imaging along the fault, we have completed updated processing of the EW9412 seismic reflection data and provide prestack migrations with water-bottom multiple reduction. Our new imaging better resolves fault and basement surfaces at depth, as well as the highly deformed sediments within the Queen Charlotte Terrace. In addition to re-processing the EW9412 seismic reflection data, we have compiled and re-analyzed a series of publicly available USGS seismic reflection data that obliquely cross the QCF. Using these data, we are able to provide updated maps of the Queen Charlotte fault system, adding considerable detail along the northernmost QCF where it links up with the Chatham Strait and Transition fault systems. Our results support conclusions that the changing geometry of the QCF leads to fundamentally different convergent stress accommodation north and south of ~53.2 degrees; namely, reactivated splay faults to the north vs. thickening of sediments and the upper crust to the south

  19. Insights on Arctic Sea Ice Processes from New Seafloor and Coastline Mapping

    NASA Astrophysics Data System (ADS)

    Nghiem, S. V.; Hall, D. K.; Rigor, I. G.; Clemente-Colon, P.; Li, P.; Neumann, G.

    2014-12-01

    The seafloor can exert a significant control on Arctic sea ice patterns by guiding the distribution of ocean water masses and river discharge in the Arctic Ocean. Satellite observations of sea ice and surface temperature are used together with bathymetry data to understand dynamic and thermodynamic processes of sea ice. In particular, data from satellite radars, including scatterometer and synthetic aperture radar (SAR) instruments, are used to identify and map sea ice with different spatial and temporal resolutions across the Arctic. Data from a satellite spectroradiometer, such as MODIS, are used to accurately measure surface temperature under clear sky conditions. For seafloor measurements, advances have been made with new observations surveyed to modern standards in different regions of the Arctic, enabling the production of an improved bathymetry dataset, such as the International Bathymetric Chart of the Arctic Ocean Version 3.0 (IBCAO 3.0) released in 2012. The joint analyses of these datasets reveal that the seafloor can govern warm- and cold-water distribution and thereby dictate sea ice patterns on the sea surface from small local scales to a large regional scale extending over thousands of km. Satellite results show that warm river waters can intrude into the Arctic Ocean and affect sea ice melt hundreds of km away from the river mouths. The Arctic rivers bring significant heat as their waters come from sources across vast watersheds influenced by warm continental climate effects in summertime. In the case of the Mackenzie River, results from the analysis with the new IBCAO 3.0 indicated that the formation and break-up of landfast sea ice is related to the depth and not the slope of the seafloor. In turn, such ice processes can impact the discharge and distribution of warm river waters and influence the melting of sea ice. Animations of satellite observations of sea ice overlaid on both the old and new versions of IBCAO will be presented to illustrate

  1. Searching for optimal setting conditions in technological processes using parametric estimation models and neural network mapping approach: a tutorial.

    PubMed

    Fjodorova, Natalja; Novič, Marjana

    2015-09-01

    Engineering optimization is a topical goal in manufacturing and service industries. In this tutorial we present the concept of traditional parametric estimation models (Factorial Design (FD) and Central Composite Design (CCD)) for finding the optimal setting parameters of technological processes. The 2D mapping method based on auto-associative neural networks (ANNs), particularly the feed-forward bottleneck neural network (FFBN NN), is then described in comparison with the traditional methods. The FFBN NN mapping technique enables visualization of all optimal solutions of the considered processes, owing to the projection of the input as well as the output parameters into the same coordinates of the 2D map. This supports a more efficient way of improving the performance of existing systems. The two methods were compared on the basis of the optimization of solder paste printing processes as well as the optimization of cheese properties. Applying both methods enables a double check, which increases the reliability of the selected optima or specification limits.

  2. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and getting the students close to modern laboratory techniques.

  3. Can Accelerators Accelerate Learning?

    SciTech Connect

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-10

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and getting the students close to modern laboratory techniques.

  4. Web Based Rapid Mapping of Disaster Areas using Satellite Images, Web Processing Service, Web Mapping Service, Frequency Based Change Detection Algorithm and J-iView

    NASA Astrophysics Data System (ADS)

    Bandibas, J. C.; Takarada, S.

    2013-12-01

    Timely identification of areas affected by natural disasters is very important for successful rescue and effective emergency relief efforts. This research focuses on the development of a cost-effective and efficient system for identifying areas affected by natural disasters and for the efficient distribution of the information. The developed system is composed of three modules: the Web Processing Service (WPS), the Web Map Service (WMS) and the user interface provided by J-iView (fig. 1). WPS is an online system that provides computation, storage and data access services. In this study, the WPS module provides online access to the software implementing the developed frequency-based change detection algorithm for the identification of areas affected by natural disasters. It also sends requests to WMS servers to get the remotely sensed data used in the computation. WMS is a standard protocol that provides a simple HTTP interface for requesting geo-registered map images from one or more geospatial databases. In this research, the WMS component provides remote access to the satellite images which are used as inputs for land cover change detection. The user interface is provided by J-iView, an online mapping system developed at the Geological Survey of Japan (GSJ). The three modules are seamlessly integrated into a single package using J-iView, which can rapidly generate a map of disaster areas that is instantaneously viewable online. The developed system was tested using ASTER images covering the areas damaged by the March 11, 2011 tsunami in northeastern Japan, and it efficiently generated a map showing the areas devastated by the tsunami. Based on the initial results of the study, the developed system proved to be a useful tool for emergency workers to quickly identify areas affected by natural disasters.
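
    For context, the WMS side of such a system is queried with a standard GetMap request; the endpoint URL and layer name below are hypothetical, but the parameters are those of the WMS 1.3.0 protocol:

        # Fetch a rendered map image from a (hypothetical) WMS endpoint.
        import requests

        params = {
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": "tsunami_change",          # hypothetical layer name
            "STYLES": "",
            "CRS": "EPSG:4326",
            "BBOX": "37.5,140.5,39.5,142.5",     # lat/lon axis order in 1.3.0
            "WIDTH": "512", "HEIGHT": "512",
            "FORMAT": "image/png",
        }
        resp = requests.get("https://example.org/wms", params=params, timeout=30)
        with open("disaster_map.png", "wb") as fh:
            fh.write(resp.content)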

  5. Constitutive Modeling for Flow Behavior of Medium-Carbon Bainitic Steel and Its Processing Maps

    NASA Astrophysics Data System (ADS)

    Yang, Zhinan; Li, Yingnan; Li, Yanguo; Zhang, Fucheng; Zhang, Ming

    2016-09-01

    The hot deformation behavior of a medium-carbon bainitic steel was studied in a temperature range of 900-1100 °C and a strain rate range of 0.01-10 s-1. With increasing strain, the flow stress displays three tendencies: a continuous increase under most conditions, and a peak stress either with or without a subsequent steady-state region. An accurate constitutive model was proposed, exhibiting a correlation coefficient of 0.984 and an average absolute relative error of 0.063 between the experimental and predicted stress values. The activation energy of the steel increased from 393 to 447 kJ/mol when the strain increased from 0.1 to 0.4, followed by a slight fluctuation at higher strain. Finally, processing maps under different strains were constructed and exhibit an instability region that varies with increasing strain. Microstructural observations show that a mischcrystal (mixed-grain) structure formed in the specimens deformed within the instability regions, which resulted from the occurrence of flow localization. Some deformation twins were also observed in certain specimens and were responsible for negative m-values. The optimum hot working parameters for the studied steel were 989-1012 °C at 0.01-0.02 s-1 and 1034-1066 °C at 0.07-0.22 s-1, where a full dynamic recrystallization structure with fine homogeneous grains could be obtained.
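
    The abstract does not write out the constitutive model itself; hot-deformation studies of this kind conventionally use the hyperbolic-sine Arrhenius relation together with the Zener-Hollomon parameter. As background (our rendering, not a quotation from the paper):

      \dot{\varepsilon} = A\,[\sinh(\alpha\sigma)]^{n} \exp\left(-\frac{Q}{RT}\right), \qquad Z = \dot{\varepsilon}\,\exp\left(\frac{Q}{RT}\right)

    where \dot{\varepsilon} is the strain rate, \sigma the flow stress, T the absolute temperature, R the gas constant, Q the activation energy (here 393-447 kJ/mol), and A, \alpha, n fitted material constants.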

  6. Process maps for plasma spray: Part 1: Plasma-particle interactions

    SciTech Connect

    GILMORE,DELWYN L.; NEISER JR.,RICHARD A.; WAN,YUEPENG; SAMPATH,SANJAY

    2000-01-26

    This is the first paper of a two part series based on an integrated study carried out at Sandia National Laboratories and the State University of New York at Stony Brook. The aim of the study is to develop a more fundamental understanding of plasma-particle interactions, droplet-substrate interactions, deposit formation dynamics and microstructural development as well as final deposit properties. The purpose is to create models that can be used to link processing to performance. Process maps have been developed for air plasma spray of molybdenum. Experimental work was done to investigate the importance of such spray parameters as gun current, auxiliary gas flow, and powder carrier gas flow. In-flight particle diameters, temperatures, and velocities were measured in various areas of the spray plume. Samples were produced for analysis of microstructures and properties. An empirical model was developed, relating the input parameters to the in-flight particle characteristics. Multi-dimensional numerical simulations of the plasma gas flow field and in-flight particles under different operating conditions were also performed. In addition to the parameters which were experimentally investigated, the effect of particle injection velocity was also considered. The simulation results were found to be in good general agreement with the experimental data.

  7. Hot Deformation Characteristics and Processing Maps of the Cu-Cr-Zr-Ag Alloy

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chai, Zhe; Volinsky, Alex A.; Sun, Huili; Tian, Baohong; Liu, Ping; Liu, Yong

    2016-03-01

    The hot deformation behavior of the Cu-Cr-Zr-Ag alloy has been investigated by hot compression tests in the 650-950 °C temperature and 0.001-10 s-1 strain rate ranges using a Gleeble-1500D thermo-mechanical simulator. The microstructure evolution of the alloy during deformation was characterized using optical and transmission electron microscopy. The flow stress decreases with the deformation temperature and increases with the strain rate. The apparent activation energy for hot deformation of the alloy was 343.23 kJ/mol. A constitutive equation for the alloy, based on the hyperbolic-sine equation, was established to characterize the flow stress as a function of the strain rate and the deformation temperature. Processing maps were established based on the dynamic material model. The optimal processing parameters for hot deformation of the Cu-Cr-Zr-Ag alloy are 900-950 °C and a 0.001-0.1 s-1 strain rate. The evolution of the dynamically recrystallized (DRX) microstructure strongly depends on the deformation temperature and the strain rate.
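
    The dynamic material model mentioned here (and in several neighbouring records) scores each combination of temperature and strain rate by the fraction of input power dissipated through microstructural change. As background, the standard DMM definitions (not quoted from this paper) are:

      m = \frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}, \qquad \eta = \frac{2m}{m+1}

    where m is the strain-rate sensitivity of the flow stress \sigma and \eta is the efficiency of power dissipation contoured in a processing map; domains of high \eta are read as favourable hot-working windows.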

  8. Effect of Grain Size Distribution on Processing Maps for Isothermal Compression of Inconel 718 Superalloy

    NASA Astrophysics Data System (ADS)

    Wang, Jianguo; Liu, Dong; Hu, Yang; Yang, Yanhui; Zhu, Xinglin

    2016-02-01

    Cylindrical specimens of Inconel 718 alloy with three types of grain size distribution were used in compression tests, and processing maps were developed over 940-1040 °C and 0.001-10 s-1. Equiaxed fine grains are the most effective in promoting dynamic softening. For the partially recrystallized microstructure, the peak efficiency of power dissipation occurs at a strain rate of 0.001 s-1 in the temperature range of 1000-1020 °C. In order to obtain a homogeneous microstructure with fine grains, the partially recrystallized microstructure should be deformed at low temperatures and slow strain rates. The area fraction of the instability domains decreases with increasing strain. The peak efficiency of power dissipation increases with decreasing average grain size. The efficiency of power dissipation is stimulated by the precipitation of the δ phase at slow strain rates of 0.001-0.01 s-1, and by the initial deformed substructure at strain rates of 0.1-1 s-1. Equiaxed fine grain is the optimum state for the forging process and dynamic recrystallization. The grain size distribution has only a slight influence on microstructure evolution at high temperatures.

  9. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-01

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference between the times elapsed by the two implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other in HPC systems, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, with any of the tested GPUs and CPUs. We found that a GPU designed for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, in particular when single precision is used. PMID:25345784
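
    As a rough illustration of the grid-based idea (our sketch, not the authors' code): the density is sampled on a regular grid, the gradient norm is evaluated everywhere, and cells where it falls below a tolerance are flagged as critical-point candidates. Written in NumPy array style, the same kernel can be offloaded to a GPU by swapping numpy for the drop-in cupy module.

      import numpy as np  # replace with `import cupy as np` to run on a GPU

      def critical_point_candidates(rho, spacing, tol=1e-4):
          """Flag grid cells where |grad rho| is below tol.

          rho     : 3D array of electron density sampled on a regular grid
          spacing : grid step along each axis
          """
          gx, gy, gz = np.gradient(rho, spacing)
          grad_norm = np.sqrt(gx**2 + gy**2 + gz**2)
          # Candidate (i, j, k) indices; flat density tails also pass this test,
          # so a real implementation would refine candidates with Newton steps.
          return np.argwhere(grad_norm < tol)

      # Toy density: a single Gaussian blob with one maximum at the centre.
      x = np.linspace(-2, 2, 81)
      X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
      rho = np.exp(-(X**2 + Y**2 + Z**2))
      print(critical_point_candidates(rho, x[1] - x[0], tol=1e-3))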

  10. Nanoscale mapping of excitonic processes in single-layer MoS2 using tip-enhanced photoluminescence microscopy.

    PubMed

    Su, Weitao; Kumar, Naresh; Mignuzzi, Sandro; Crain, Jason; Roy, Debdulal

    2016-05-19

    In two-dimensional (2D) semiconductors, photoluminescence originating from recombination processes involving neutral electron-hole pairs (excitons) and charged complexes (trions) is strongly affected by the localized charge transfer due to inhomogeneous interactions with the local environment and surface defects. Herein, we demonstrate the first nanoscale mapping of excitons and trions in single-layer MoS2 using the full spectral information obtained via tip-enhanced photoluminescence (TEPL) microscopy along with tip-enhanced Raman spectroscopy (TERS) imaging of a 2D flake. Finally, we show the mapping of the PL quenching centre in single-layer MoS2 with an unprecedented spatial resolution of 20 nm. In addition, our research shows that unlike in aperture-scanning near field microscopy, preferential exciton emission mapping at the nanoscale using TEPL and Raman mapping using TERS can be obtained simultaneously using this method that can be used to correlate the structural and excitonic properties. PMID:27152366

  11. IGIS (Interactive Geologic Interpretation System) computer-aided photogeologic mapping with image processing, graphics and CAD/CAM capabilities

    SciTech Connect

    McGuffie, B.A.; Johnson, L.F.; Alley, R.E.; Lang, H.R. )

    1989-10-01

    Advances in computer technology are changing the way geologists integrate and use data. Although many geoscience disciplines are absolutely dependent upon computer processing, photogeological and map interpretation computer procedures are just now being developed. Historically, geologists collected data in the field and mapped manually on a topographic map or aerial photographic base. New software called the Interactive Geologic Interpretation System (IGIS) is being developed at the Jet Propulsion Laboratory (JPL) within the National Aeronautics and Space Administration (NASA)-funded Multispectral Analysis of Sedimentary Basins Project. To complement conventional geological mapping techniques, Landsat Thematic Mapper (TM) or other digital remote sensing image data and co-registered digital elevation data are combined using computer imaging, graphics, and CAD/CAM techniques to provide tools for photogeologic interpretation, strike/dip determination, cross section construction, stratigraphic section measurement, topographic slope measurement, terrain profile generation, rotatable 3-D block diagram generation, and seismic analysis.

  12. Nanoscale mapping of excitonic processes in single-layer MoS2 using tip-enhanced photoluminescence microscopy.

    PubMed

    Su, Weitao; Kumar, Naresh; Mignuzzi, Sandro; Crain, Jason; Roy, Debdulal

    2016-05-19

    In two-dimensional (2D) semiconductors, photoluminescence originating from recombination processes involving neutral electron-hole pairs (excitons) and charged complexes (trions) is strongly affected by the localized charge transfer due to inhomogeneous interactions with the local environment and surface defects. Herein, we demonstrate the first nanoscale mapping of excitons and trions in single-layer MoS2 using the full spectral information obtained via tip-enhanced photoluminescence (TEPL) microscopy along with tip-enhanced Raman spectroscopy (TERS) imaging of a 2D flake. Finally, we show the mapping of the PL quenching centre in single-layer MoS2 with an unprecedented spatial resolution of 20 nm. In addition, our research shows that unlike in aperture-scanning near field microscopy, preferential exciton emission mapping at the nanoscale using TEPL and Raman mapping using TERS can be obtained simultaneously using this method that can be used to correlate the structural and excitonic properties.

  13. Hot deformation characterization of duplex low-density steel through 3D processing map development

    SciTech Connect

    Mohamadizadeh, A.; Zarei-Hanzaki, A.; Abedi, H.R.; Mehtonen, S.; Porter, D.

    2015-09-15

    The high temperature deformation behavior of duplex low-density Fe–18Mn–8Al–0.8C steel was investigated at temperatures in the range of 600–1000 °C. The primary constitutive analysis indicated that the Zener–Hollomon parameter, which represents the coupled effects of temperature and strain rate, varies significantly with the amount of deformation. Accordingly, 3D processing maps were developed considering the effect of strain and were used to determine the safe and unsafe deformation conditions in association with the microstructural evolution. Deformation in efficiency domain I (900–1100 °C, 10^-2–10^-3 s^-1) was found to be safe at different strains due to the occurrence of dynamic recrystallization in austenite. The safe efficiency domain II (700–900 °C, 1–10^-1 s^-1), which appeared at a logarithmic strain of 0.4, was characterized by deformation-induced ferrite formation. Scanning electron microscopy revealed that microband formation and crack initiation at ferrite/austenite interphases were the main causes of deformation instability at 600–800 °C, 10^-2–10^-3 s^-1. The degree of instability was found to decrease with increasing strain due to the uniformity of the microbanded structure obtained at higher strains. Shear band formation at 900–1100 °C, 1–10^-1 s^-1 was verified by electron backscattered diffraction. Local dynamic recrystallization of austenite and deformation-induced ferrite formation were observed within shear-banded regions as the result of flow localization. - Highlights: • The 3D processing map is developed for duplex low-density Fe–Mn–Al–C steel. • The efficiency domains shrink, expand or appear with increasing strain. • The occurrence of DRX and DIFF increases the power efficiency. • Crack initiation
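
    The instability regions referred to above are conventionally located with Prasad's continuum instability criterion from the dynamic material model. As background (the standard form, not quoted from this paper), flow instability is predicted wherever

      \xi(\dot{\varepsilon}) = \frac{\partial \ln\left(\frac{m}{m+1}\right)}{\partial \ln \dot{\varepsilon}} + m < 0

    with m = \partial\ln\sigma / \partial\ln\dot{\varepsilon} the strain-rate sensitivity; evaluating \xi over the temperature-strain-rate grid at each strain level yields the 3D map.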

  14. Effect of accelerated electron beam on mechanical properties of human cortical bone: influence of different processing methods.

    PubMed

    Kaminski, Artur; Grazka, Ewelina; Jastrzebska, Anna; Marowska, Joanna; Gut, Grzegorz; Wojciechowski, Artur; Uhrynowska-Tyszkiewicz, Izabela

    2012-08-01

    Accelerated electron beam (EB) irradiation has been used successfully for the sterilisation of human tissue grafts for many years in a number of tissue banks. Accelerated EB, in contrast to the more often used gamma photons, is a form of ionizing radiation that is characterized by lower penetration; however, it is more effective in producing ionisation, and the exposure time needed to reach the same level of sterility is shorter. There are several factors, including the dose and temperature of irradiation, processing conditions, as well as the source of irradiation, that may influence the mechanical properties of a bone graft. The purpose of this study was to evaluate the effect of e-beam irradiation with doses of 25 or 35 kGy, performed on dry ice or at ambient temperature, on the mechanical properties of non-defatted or defatted compact bone grafts. Left and right femurs from six male cadaveric donors, aged from 46 to 54 years, were transversely cut into slices of 10 mm height, parallel to the longitudinal axis of the bone. Compact bone rings were assigned to eight experimental groups according to the processing method (defatted or non-defatted), the e-beam irradiation dose (25 or 35 kGy), and the temperature conditions of irradiation (ambient temperature or dry ice). Axial compression testing was performed with a material testing machine. Results obtained for the elastic and plastic regions of the stress-strain curves, examined by univariate analysis, are described. Based on multivariate analysis including all groups, it was found that the temperature of e-beam irradiation and defatting had no consistent significant effect on the evaluated mechanical parameters of compact bone rings. In contrast, irradiation with both doses significantly decreased the ultimate strain and its derivative, toughness, while not affecting the ultimate stress (bone strength). As no deterioration of mechanical properties was observed in the elastic region, the reduction of the energy

  15. Ultra-high density intra-specific genetic linkage maps accelerate identification of functionally relevant molecular tags governing important agronomic traits in chickpea.

    PubMed

    Kujur, Alice; Upadhyaya, Hari D; Shree, Tanima; Bajaj, Deepak; Das, Shouvik; Saxena, Maneesha S; Badoni, Saurabh; Kumar, Vinod; Tripathi, Shailesh; Gowda, C L L; Sharma, Shivali; Singh, Sube; Tyagi, Akhilesh K; Parida, Swarup K

    2015-05-05

    We discovered 26785 and 16573 high-quality SNPs differentiating two parental genotypes of a RIL mapping population using a reference desi and kabuli genome-based GBS assay. Of these, 3625 and 2177 SNPs were integrated into eight desi and kabuli chromosomes, respectively, in order to construct ultra-high density (0.20-0.37 cM) intra-specific chickpea genetic linkage maps. One of these constructed high-resolution genetic maps had the potential to identify 33 major genomic regions harbouring 35 robust QTLs (PVE: 17.9-39.7%) associated with three agronomic traits, which were mapped within <1 cM mean marker intervals on desi chromosomes. The extended LD (linkage disequilibrium) decay (~15 cM) in the chromosomes of the genetic maps encouraged us to use a rapid integrated approach (comparative QTL mapping, QTL-region-specific haplotype/LD-based trait association analysis, expression profiling and gene haplotype-based association mapping), rather than a traditional QTL map-based cloning method, to narrow down one major seed weight (SW) robust QTL region. This delineated favourable natural allelic variants and a superior haplotype in one seed-specific candidate embryo defective gene regulating SW in chickpea. The ultra-high-resolution genetic maps, the QTL/gene- and allele/haplotype-related genomic information generated, and the integrated strategy developed for rapid QTL/gene identification have the potential to expedite genomics-assisted breeding applications in crop plants, including chickpea, for their genetic enhancement.

  16. Monitoring of pigmented and wooden surfaces in accelerated ageing processes by FT-Raman spectroscopy and multivariate control charts.

    PubMed

    Marengo, Emilio; Robotti, Elisa; Liparota, Maria Cristina; Gennaro, Maria Carla

    2004-07-01

    Two of the most suitable analytical techniques used in the field of cultural heritage are NIR (near-infrared) and Raman spectroscopy. FT-Raman spectroscopy coupled with multivariate control charts is applied here to develop a new method for monitoring the conservation state of pigmented and wooden surfaces. These materials were exposed to different accelerated ageing processes in order to evaluate the effect of the applied treatments on the surfaces of the goods. In this work, a new approach based on the principles of statistical process control (SPC) has been developed for the monitoring of cultural heritage: the conservation state of samples simulating works-of-art has been treated like an industrial process and monitored with multivariate control charts, owing to the complexity of the spectroscopic data collected. The Raman spectra were analysed by principal component analysis (PCA), and the relevant principal components (PCs) were used for constructing multivariate Shewhart and cumulative sum (CUSUM) control charts. These tools were successfully applied for the identification of relevant modifications occurring on the surfaces. CUSUM charts, however, proved to be more effective in identifying the exact beginning of the applied treatment. In the case of the wooden boards, where a sufficient number of PCs were available, simultaneous scores monitoring and residuals tracking (SMART) charts were also investigated. The exposure to a basic attack and to high temperatures produced deep changes in the wooden samples, clearly identified by the multivariate Shewhart, CUSUM and SMART charts. A change on the pigment surface was detected after exposure to an acidic solution and to UV light, while no effect was identified on the painted surface after exposure to natural atmospheric events. PMID:18969526
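
    To make the chart-building recipe concrete, the sketch below follows the procedure the abstract describes: project the spectra onto a few principal components, then track the scores with Shewhart limits and a one-sided CUSUM. It is a minimal illustration on simulated data, not the authors' processing chain; the shift size and chart constants are invented for the example.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      spectra = rng.normal(size=(60, 500))   # 60 spectra x 500 wavenumbers
      spectra[40:] += 0.5                    # simulated surface change after sample 40

      scores = PCA(n_components=2).fit_transform(spectra)
      pc1 = scores[:, 0]

      # Shewhart chart: flag points outside mean +/- 3 sigma of an in-control phase.
      mu, sigma = pc1[:30].mean(), pc1[:30].std()
      shewhart_alarms = np.where(np.abs(pc1 - mu) > 3 * sigma)[0]

      # One-sided CUSUM with reference value k; alarm when the sum drifts past h.
      k, h, s = 0.5 * sigma, 5 * sigma, 0.0
      cusum_alarm = None
      for i, x in enumerate(pc1):
          s = max(0.0, s + (x - mu) - k)
          if s > h:
              cusum_alarm = i
              break

      print("Shewhart alarms at:", shewhart_alarms, "| first CUSUM alarm:", cusum_alarm)

    The CUSUM accumulates small, persistent shifts, which is why it pinpoints the onset of a gradual treatment earlier than the Shewhart limits, consistent with the abstract's observation.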

  17. EMITTING ELECTRONS SPECTRA AND ACCELERATION PROCESSES IN THE JET OF Mrk 421: FROM THE LOW STATE TO THE GIANT FLARE STATE

    SciTech Connect

    Yan Dahai; Zhang Li; Fan Zhonghui; Zeng Houdun; Yuan Qiang

    2013-03-10

    We investigate the electron energy distributions (EEDs) and the acceleration processes in the jet of Mrk 421 through fitting the spectral energy distributions (SEDs) in different active states in the frame of a one-zone synchrotron self-Compton model. After assuming two possible EEDs formed in different acceleration models: the shock-accelerated power law with exponential cut-off (PLC) EED and the stochastic-turbulence-accelerated log-parabolic (LP) EED, we fit the observed SEDs of Mrk 421 in both low and giant flare states using the Markov Chain Monte Carlo method which constrains the model parameters in a more efficient way. The results from our calculations indicate that (1) the PLC and LP models give comparably good fits for the SED in the low state, but the variations of model parameters from low state to flaring can be reasonably explained only in the case of the PLC in the low state; and (2) the LP model gives better fits compared to the PLC model for the SED in the flare state, and the intra-day/night variability observed at GeV-TeV bands can be accommodated only in the LP model. The giant flare may be attributed to the stochastic turbulence re-acceleration of the shock-accelerated electrons in the low state. Therefore, we may conclude that shock acceleration is dominant in the low state, while stochastic turbulence acceleration is dominant in the flare state. Moreover, our result shows that the extrapolated TeV spectra from the best-fit SEDs from optical through GeV with the two EEDs are different. It should be considered with caution when such extrapolated TeV spectra are used to constrain extragalactic background light models.
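
    For reference, the two electron energy distributions compared above have the following standard forms (our rendering, with the normalization and the parameters s, \gamma_c, a, b, \gamma_0 treated as fit quantities):

      N(\gamma) \propto \gamma^{-s} \exp(-\gamma/\gamma_c)   (PLC: shock acceleration)
      N(\gamma) \propto (\gamma/\gamma_0)^{-a - b\log(\gamma/\gamma_0)}   (LP: stochastic turbulence acceleration)

    The curvature parameter b of the LP form is what allows it to reproduce the narrow, curved spectral energy distribution of the flare state.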

  18. Hot Compression of TC8M-1: Constitutive Equations, Processing Map, and Microstructure Evolution

    NASA Astrophysics Data System (ADS)

    Yue, Ke; Chen, Zhiyong; Liu, Jianrong; Wang, Qingjiang; Fang, Bo; Dou, Lijun

    2016-06-01

    Hot compression of TC8M-1 was carried out under isothermal working conditions with temperatures from 1173 K to 1323 K (900 °C to 1050 °C), strain rates from 0.001 to 10/s, and height reductions from 20 to 80 pct (corresponding true strain from 0.22 to 1.61). Constitutive equations were constructed, and apparent activation energies of 149.5 and 617.4 kJ/mol were obtained for deformation in the β and upper α/β phase regions, respectively. Microstructure examination confirmed the dominant role of dynamic recrystallization in the α/β phase region and that of dynamic recovery in the β phase region, with the occurrence of grain boundary sliding at a very low strain rate (0.001/s) in both regions. Based on the dynamic materials model, processing maps were constructed, providing optimal domains for hot working at a temperature of 1253 K (980 °C) and strain rates of 0.01 to 0.1/s, or at 1193 K to 1213 K (920 °C to 940 °C) and 0.001/s. Moreover, our results indicated that an initial temperature non-uniformity existed along the specimen axis before compression and influenced the strain distribution, which contributed to the abnormal oscillations and/or abrupt rise of the true stress and to inhomogeneous deformation.

  19. Processing multi temporal Thematic Mapper data for mapping the submarine shelf of the Island Kerkennah

    NASA Astrophysics Data System (ADS)

    Katlane, Rim; Berges, Jean-Claude; Beltrando, Gérard; Zargouni, Fouad

    2014-05-01

    The Gulf of Gabes in Tunisia is unique among Mediterranean coastal environments for its shallow-water extension and tide amplitude. The Kerkennah islands, located in this gulf, are characterized by a -10 m isobath a few kilometers away from the shoreline and by a lithology dominated by soft rocks (sandstone and Mio-Pliocene clay). These features, combined with sea level rise and active subsidence, constitute major risk factors. The islands' vulnerability is increased by sebkha (salted lowland) extension, which now accounts for 45% of the total area. Assessing littoral sea-depth change is thus a key issue for risk monitoring. Our study relies on the 30-year archive of the Landsat 5 TM sensor managed by GSFC/NASA. The depth assessment was carried out by an empirical method based on the TM1 channel, which has the best water penetration properties (up to 25 m). We focused on the summer period and selected images from July 1986, August 1987, June 2003 and July 2009. After a first step of data preprocessing to ensure data homogeneity, we produced sub-aquatic morphology change maps. The observed features (submarine channel enlargement, cell sinking) are consistent with the hypothesis of the ebb tide as the leading process behind the phenomenon.
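
    The abstract does not spell out the empirical depth model. A common single-band approach of this kind (a Lyzenga-style log-linear retrieval, shown here only as an assumed form, not the authors' method) regresses known depths on the logarithm of the sensor radiance after subtracting a deep-water signal:

      import numpy as np

      def fit_depth_model(tm1_radiance, known_depths, deep_water_radiance):
          """Fit z = a0 + a1 * ln(L - L_deep) by least squares (assumed model form)."""
          x = np.log(tm1_radiance - deep_water_radiance)
          a1, a0 = np.polyfit(x, known_depths, 1)   # slope first, then intercept
          return a0, a1

      # Toy calibration: radiance decays with depth over a uniform bottom.
      depths = np.array([2.0, 5.0, 10.0, 15.0, 20.0])
      radiance = 80.0 + 60.0 * np.exp(-0.15 * depths)   # simulated TM1 values
      a0, a1 = fit_depth_model(radiance, depths, deep_water_radiance=79.0)
      print(f"z ~ {a0:.1f} + {a1:.1f} ln(L - L_deep)")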

  20. Water quality mapping and assessment, and weathering processes of selected aflaj in Oman.

    PubMed

    Ghrefat, Habes Ahmad; Jamarh, Ahmad; Al-Futaisi, Ahmed; Al-Abri, Badr

    2011-10-01

    There are more than 4,000 aflaj (plural of falaj, a traditional dug water channel) distributed in different regions of Oman. The chemical characteristics of the water in 42 aflaj were studied to evaluate the major ion chemistry; the geochemical processes controlling water composition; and the suitability of the water for drinking, domestic, and irrigation uses. GIS-based maps indicate that the spatial distributions of chemical properties and concentrations vary within the same region as well as between different regions. The molar ratios of (Ca + Mg)/Total cations, (Na + K)/Total cations, (Ca + Mg)/(Na + K), (Ca + Mg)/(HCO₃ + SO₄), and Na/Cl reveal that the water chemistry of the majority of aflaj is dominated by carbonate weathering and evaporite dissolution, with a minor contribution of silicate weathering. The concentrations of most of the elements were below the permissible limits of Omani standards and WHO guidelines for drinking water and domestic use and do not generally pose any health or environmental problems. Some aflaj in the Ash Sharqiyah and Muscat regions can be used for irrigation only with slight to severe restriction because of the high levels of electrical conductivity, total dissolved solids, chloride, and sodium adsorption ratio.

  1. Evaluation of acceleration and deceleration cardiac processes using phase-rectified signal averaging in healthy and idiopathic dilated cardiomyopathy subjects.

    PubMed

    Bas, Rosana; Vallverdú, Montserrat; Valencia, Jose F; Voss, Andreas; de Luna, Antonio Bayés; Caminal, Pere

    2015-02-01

    The aim of the present study was to investigate the suitability of the Phase-Rectified Signal Averaging (PRSA) method for improved risk prediction in cardiac patients. Moreover, this technique, which separately evaluates acceleration and deceleration processes of cardiac rhythm, allows the effect of sympathetic and vagal modulations of beat-to-beat intervals to be characterized. Holter recordings of idiopathic dilated cardiomyopathy (IDC) patients were analyzed: high-risk (HR), who suffered sudden cardiac death (SCD) during the follow-up; and low-risk (LR), without any kind of cardiac-related death. Moreover, a control group of healthy subjects was analyzed. PRSA indexes were analyzed, for different time scales T and wavelet scales s, from RR series of 24 h-ECG recordings, awake periods and sleep periods. Also, the behavior of these indexes from simulated data was analyzed and compared with real data results. Outcomes demonstrated the PRSA capacity to significantly discriminate healthy subjects from IDC patients and HR from LR patients on a higher level than traditional temporal and spectral measures. The behavior of PRSA indexes agrees with experimental evidences related to cardiac autonomic modulations. Also, these parameters reflect more regularity of the autonomic nervous system (ANS) in HR patients. PMID:25585858
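
    For readers unfamiliar with PRSA, the core of the method is short: pick anchor points where the RR interval lengthens (deceleration) or shortens (acceleration), align a window around every anchor, and average. Below is a minimal sketch following the published PRSA definition (our illustration, not this paper's code); the capacity formula is the standard four-point estimate around the anchor.

      import numpy as np

      def prsa(rr, L=30, decel=True):
          """Phase-rectified signal average of an RR-interval series.

          Anchors are beats where rr[i] > rr[i-1] (deceleration) or the reverse;
          windows of 2L beats around each anchor are aligned and averaged.
          """
          rr = np.asarray(rr, dtype=float)
          if decel:
              anchors = [i for i in range(L, len(rr) - L) if rr[i] > rr[i - 1]]
          else:
              anchors = [i for i in range(L, len(rr) - L) if rr[i] < rr[i - 1]]
          windows = np.array([rr[i - L:i + L] for i in anchors])
          return windows.mean(axis=0)   # averaged signal, anchor at index L

      def capacity(x, L=30):
          """Deceleration/acceleration capacity from the averaged signal."""
          return (x[L] + x[L + 1] - x[L - 1] - x[L - 2]) / 4.0

      rng = np.random.default_rng(1)
      rr = 800 + 50 * np.sin(np.linspace(0, 20, 600)) + rng.normal(0, 10, 600)
      print("DC ~", capacity(prsa(rr, decel=True)))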

  2. Tools for Developing a Quality Management Program: Proactive Tools (Process Mapping, Value Stream Mapping, Fault Tree Analysis, and Failure Mode and Effects Analysis)

    SciTech Connect

    Rath, Frank

    2008-05-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.

  3. BESIII Physics Data Storing and Processing on HBase and MapReduce

    NASA Astrophysics Data System (ADS)

    LEI, Xiaofeng; Li, Qiang; Kan, Bowen; Sun, Gongxing; Sun, Zhenyu

    2015-12-01

    In recent years, we have successfully applied Hadoop to high-energy physics analysis. Although it has improved the efficiency of data analysis and reduced the cost of cluster building, there is still room for optimization: inflexible pre-selection, inefficient random data reading, and the I/O bottleneck caused by the Fuse layer used to access HDFS. In order to change this situation, this paper presents a new platform for storing and analysing high-energy physics data. The data structure is changed from DST tree-like files to HBase according to the features of the data and the analysis processes, since HBase is more suitable for random data reading than DST files and enables HDFS to be accessed directly. A few optimization measures were taken for the purpose of achieving good performance. A customized protocol is defined for data serialization and deserialization in order to decrease the storage space in HBase. To make full use of the locality of data storage in HBase, a new MapReduce model and a new split policy for HBase regions are proposed in the paper. In addition, a dynamic, pluggable, easy-to-use TAG (event metadata) based pre-selection subsystem is established. It can help physicists filter out up to 999‰ (99.9%) of uninteresting data, if the conditions are set properly. This means that a lot of I/O resources can be saved, CPU usage can be improved, and the time consumed by data analysis can be reduced. Finally, several use cases were designed; the test results show that the new platform performs excellently, being 3.4 times faster with pre-selection and 20% faster without pre-selection, and that the new platform is stable and scalable as well.
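
    The TAG-based pre-selection amounts to filtering on lightweight event metadata before any full event record is deserialized. A schematic of the idea in plain Python (the tag fields and cut values here are invented for illustration; the actual BESIII subsystem runs inside HBase/MapReduce):

      # Each tag is a small metadata record kept alongside the full event payload.
      tags = [
          {"event_id": 1, "n_tracks": 2, "total_energy": 3.1},
          {"event_id": 2, "n_tracks": 7, "total_energy": 1.2},
          {"event_id": 3, "n_tracks": 4, "total_energy": 3.7},
      ]

      def preselect(tag):
          # Hypothetical physics cuts applied to the metadata only.
          return tag["n_tracks"] >= 4 and tag["total_energy"] > 2.0

      selected_ids = [t["event_id"] for t in tags if preselect(t)]
      # Only the surviving events are fetched and deserialized from storage,
      # saving the I/O and CPU that full-event reads would have cost.
      print(selected_ids)   # -> [3]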

  4. Side-scan sonar mapping: Pseudo-real-time processing and mosaicking techniques

    SciTech Connect

    Danforth, W.W.; Schwab, W.C.; O'Brien, T.F. ); Karl, H. )

    1990-05-01

    The US Geological Survey (USGS) surveyed 1,000 km² of the continental shelf off San Francisco during a 17-day cruise, using a 120-kHz side-scan sonar system, and produced a digitally processed sonar mosaic of the survey area. The data were processed and mosaicked in real time using software developed at the Lamont-Doherty Geological Observatory and modified by the USGS, a substantial task due to the enormous amount of data produced by high-resolution side-scan systems. Approximately 33 megabytes of data were acquired every 1.5 hr. The real-time sonar images were displayed on a PC-based workstation and the data were transferred to a UNIX minicomputer where the sonar images were slant-range corrected, enhanced using an averaging method of desampling and a linear-contrast stretch, merged with navigation, geographically oriented at a user-selected scale, and finally output to a thermal printer. The hard-copy output was then used to construct a mosaic of the survey area. The final product of this technique is a UTM-projected map-mosaic of sea-floor backscatter variations, which could be used, for example, to locate appropriate sites for sediment sampling to ground truth the sonar imagery while still at sea. More importantly, reconnaissance surveys of this type allow for the analysis and interpretation of the mosaic during a cruise, thus greatly reducing the preparation time needed for planning follow-up studies of a particular area.

  5. Compact Plasma Accelerator

    NASA Technical Reports Server (NTRS)

    Foster, John E.

    2004-01-01

    A plasma accelerator has been conceived for both material-processing and spacecraft-propulsion applications. This accelerator generates and accelerates ions within a very small volume. Because of its compactness, this accelerator could be nearly ideal for primary or station-keeping propulsion for spacecraft having masses between 1 and 20 kg. Because this accelerator is designed to generate beams of ions having energies between 50 and 200 eV, it could also be used for surface modification or activation of thin films.

  6. Mapping mass movement processes using terrestrial LIDAR: a swift mechanism for hazard and disaster risk assessment

    NASA Astrophysics Data System (ADS)

    Garnica-Peña, Ricardo; Murillo-García, Franny; Alcántara-Ayala, Irasema

    2014-05-01

    The impact of disasters associated with mass movement processes has increased in the past decades. Whether triggered by earthquakes, volcanic activity or rainfall, mass movement processes have affected people, infrastructure, economic activities and the environment in different parts of the world. Extensive damage is particularly linked to rainfall-induced landslides due to the occurrence of tropical storms, hurricanes, and the combination of different meteorological phenomena over exposed vulnerable communities. Therefore, landslide susceptibility analysis and hazard and risk assessments are considered significant mechanisms to lessen the impact of disasters. Ideally, these procedures ought to be carried out before disasters take place. However, under intense or persistent periods of rainfall, the evaluation of potentially unstable slopes becomes a critical issue. Such evaluations are constrained by the availability of resources, capabilities, and scientific and technological tools. Among them, remote sensing has proved to be a valuable tool to evaluate areas affected by mass movement processes during the post-disaster stage. Nonetheless, the high cost of imagery acquisition inhibits its wide use. High-resolution topographic field surveys consequently turn out to be an essential approach to address landslide evaluation needs. In this work, we present the evaluation and mapping of a series of mass movement processes induced by hurricane Ingrid in September 2013 in Teziutlán, Puebla, México, a municipality situated 265 km northeast of Mexico City. Geologically, Teziutlán is characterised by the presence, in the North, of siltstones and conglomerates of the Middle Jurassic, whereas the central and southern sectors consist of volcanic deposits of various types: andesitic tuffs of Tertiary age, and basalts, rhyolitic tuffs and ignimbrites from the Quaternary. Major relief structures are formed by the accumulation of volcanic material; lava domes, partially buried

  7. Urban land use mapping by machine processing of ERTS-1 multispectral data: A San Francisco Bay area example

    NASA Technical Reports Server (NTRS)

    Ellefsen, R.; Swain, P. H.; Wray, J. R.

    1973-01-01

    A study to develop computer-produced urban land use maps using multispectral scanner data from a satellite is reported. Data processing is discussed along with the results for the San Francisco Bay area, which was chosen as the test area.

  8. Higher Education Planning for a Strategic Goal with a Concept Mapping Process at a Small Private College

    ERIC Educational Resources Information Center

    Driscoll, Deborah P.

    2010-01-01

    Faculty, staff, and administrators at a small independent college determined that planning with a Concept Mapping process efficiently produced strategic thinking and action plans for the accomplishment of a strategic goal to expand experiential learning within the curriculum. One year into a new strategic plan, the college enjoyed enrollment…

  9. Social comparison processes, narrative mapping and their shaping of the cancer experience: a case study of an elite athlete.

    PubMed

    Sparkes, Andrew C; Pérez-Samaniego, Víctor; Smith, Brett

    2012-09-01

    Drawing on data generated by life history interviews and fieldwork observations, we illuminate the ways in which a young elite athlete named David (a pseudonym) gave meaning to his experiences of cancer that eventually led to his death. Central to this process were the ways in which David utilized both social comparisons and a narrative map provided by the published autobiography of Lance Armstrong (2000). Our analysis reveals the selective manner in which social comparison processes operated around the following key dimensions: mental attitude to treatment; the sporting body; the ageing body; and physical appearance. The manner in which different comparison targets were chosen, the ways in which these were framed by Armstrong's autobiography, and the work that the restitution narrative as an actor did in this process are also examined. Some reflections are offered regarding the experiential consequences of the social comparison processes utilized by David when these are shaped by specific forms of embodiment and selective narrative maps of cancer survival.

  10. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
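
    A sketch of the two comparison statistics used above, applied to a pair of coded raster maps (our illustration, not IBIS code):

      import numpy as np

      rng = np.random.default_rng(0)
      dma_map = rng.integers(0, 4, size=(100, 100))   # manually produced CCM codes
      ibis_map = dma_map.copy()
      # Perturb roughly 10% of the pixels to a single random code.
      ibis_map[rng.random((100, 100)) < 0.1] = rng.integers(0, 4)

      # Per-region percent agreement between the two maps.
      for code in np.unique(dma_map):
          n_dma = (dma_map == code).sum()
          n_both = ((dma_map == code) & (ibis_map == code)).sum()
          print(f"region {code}: {100 * n_both / n_dma:.1f}% agreement")

      # Pearson correlation between the regional pixel counts of the two maps.
      counts_dma = [(dma_map == c).sum() for c in range(4)]
      counts_ibis = [(ibis_map == c).sum() for c in range(4)]
      r = np.corrcoef(counts_dma, counts_ibis)[0, 1]
      print(f"Pearson r = {r:+.3f}")   # bounded by 1 in magnitude, by definition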

  11. Graphics processing unit-accelerated non-rigid registration of MR images to CT images during CT-guided percutaneous liver tumor ablations

    PubMed Central

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G.; Shekhar, Raj; Hata, Nobuhiko

    2015-01-01

    Rationale and Objectives Accuracy and speed are essential for the intraprocedural nonrigid MR-to-CT image registration in the assessment of tumor margins during CT-guided liver tumor ablations. While both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique based on volume subdivision with hardware acceleration using a graphical processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Materials and Methods Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD)), and total processing time including contouring of ROIs and computation were compared using a paired Student’s t-test. Results Accuracy of the GPU-accelerated registrations and B-spline registrations, respectively were 88.3 ± 3.7% vs 89.3 ± 4.9% (p = 0.41) for DSC and 13.1 ± 5.2 mm vs 11.4 ± 6.3 mm (p = 0.15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 s vs 557 ± 116 s (p < 0.000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (p = 0.71). Conclusion The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated
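
    For reference, the Dice similarity coefficient reported above compares two binary segmentations A and B as DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy version (our illustration, with toy masks standing in for the MR and CT segmentations):

      import numpy as np

      def dice(a, b):
          """Dice similarity coefficient of two boolean masks (1.0 = perfect overlap)."""
          a, b = np.asarray(a, bool), np.asarray(b, bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      mask_mr = np.zeros((64, 64), bool)
      mask_mr[20:40, 20:40] = True
      mask_ct = np.zeros((64, 64), bool)
      mask_ct[22:42, 22:42] = True
      print(f"DSC = {dice(mask_mr, mask_ct):.3f}")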

  12. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing, Sky Maps, and Basic Results

    NASA Technical Reports Server (NTRS)

    Weiland, J.L.; Hill, R.S.; Odegard, N.; Larson, D.; Bennett, C.L.; Dunkley, J.; Jarosik, N.; Page, L.; Spergel, D.N.; Halpern, M.; Meyer, S.S.; Tucker, G.S.; Wright, E.L.

    2008-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) is a Medium-Class Explorer (MIDEX) satellite aimed at elucidating cosmology through full-sky observations of the cosmic microwave background (CMB). The WMAP full-sky maps of the temperature and polarization anisotropy in five frequency bands provide our most accurate view to date of conditions in the early universe. The multi-frequency data facilitate the separation of the CMB signal from foreground emission arising both from our Galaxy and from extragalactic sources. The CMB angular power spectrum derived from these maps exhibits a highly coherent acoustic peak structure which makes it possible to extract a wealth of information about the composition and history of the universe, as well as the processes that seeded the fluctuations. WMAP data have played a key role in establishing ΛCDM as the new standard model of cosmology (Bennett et al. 2003; Spergel et al. 2003; Hinshaw et al. 2007; Spergel et al. 2007): a flat universe dominated by dark energy, supplemented by dark matter and atoms, with density fluctuations seeded by a Gaussian, adiabatic, nearly scale-invariant process. The basic properties of this universe are determined by five numbers: the density of matter, the density of atoms, the age of the universe (or equivalently, the Hubble constant today), the amplitude of the initial fluctuations, and their scale dependence. By accurately measuring the first few peaks in the angular power spectrum, WMAP data have enabled the following accomplishments: showing that the dark matter must be non-baryonic and interact only weakly with atoms and radiation. The WMAP measurement of the dark matter density puts important constraints on supersymmetric dark matter models and on the properties of other dark matter candidates. With five years of data and a better determination of our beam response, this measurement has been significantly improved. Precise determination of the density of atoms in the universe. The agreement between

  13. Fluid expulsion sites on the Cascadia accretionary prism: mapping diagenetic deposits with processed GLORIA imagery

    USGS Publications Warehouse

    Carson, Bobb; Seke, Erol; Paskevich, Valerie F.; Holmes, Mark L.

    1994-01-01

    Point-discharge fluid expulsion on accretionary prisms is commonly indicated by diagenetic deposition of calcium carbonate cements and gas hydrates in near-surface (<10 m below seafloor; mbsf) hemipelagic sediment. The contrasting clastic and diagenetic lithologies should be apparent in side-scan images. However, sonar also responds to variations in bottom slope, so unprocessed images mix topographic and lithologic information. We have processed GLORIA imagery from the Oregon continental margin to remove topographic effects. A synthetic side-scan image was created initially from Sea Beam bathymetric data and then was subtracted iteratively from the original GLORIA data until topographic features disappeared. The residual image contains high-amplitude backscattering that we attribute to diagenetic deposits associated with fluid discharge, based on submersible mapping, Ocean Drilling Program drilling, and collected samples. Diagenetic deposits are concentrated (1) near an out-of-sequence thrust fault on the second ridge landward of the base of the continental slope, (2) along zones characterized by deep-seated strike-slip faults that cut transversely across the margin, and (3) in undeformed Cascadia Basin deposits which overlie incipient thrust faults seaward of the toe of the prism. There is no evidence of diagenetic deposition associated with the frontal thrust that rises from the décollement. If the décollement is an important aquifer, apparently the fluids are passed either to the strike-slip faults which intersect the décollement or to the incipient faults in Cascadia Basin for expulsion. Diagenetic deposits seaward of the prism toe probably consist dominantly of gas hydrates

  14. Regional assessment of boreal forest productivity using an ecological process model and remote sensing parameter maps.

    PubMed

    Kimball, J. S.; Keyser, A. R.; Running, S. W.; Saatchi, S. S.

    2000-06-01

    An ecological process model (BIOME-BGC) was used to assess boreal forest regional net primary production (NPP) and response to short-term, year-to-year weather fluctuations based on spatially explicit land cover and biomass maps derived by radar remote sensing, as well as soil, terrain and daily weather information. Simulations were conducted at a 30-m spatial resolution, over a 1205 km² portion of the BOREAS Southern Study Area of central Saskatchewan, Canada, over a 3-year period (1994-1996). Simulations of NPP for the study region were spatially and temporally complex, averaging 2.2 (± 0.6), 1.8 (± 0.5) and 1.7 (± 0.5) Mg C ha⁻¹ year⁻¹ for 1994, 1995 and 1996, respectively. Spatial variability of NPP was strongly controlled by the amount of aboveground biomass, particularly photosynthetic leaf area, whereas biophysical differences between broadleaf deciduous and evergreen coniferous vegetation were of secondary importance. Simulations of NPP were strongly sensitive to year-to-year variations in seasonal weather patterns, which influenced the timing of spring thaw and deciduous bud-burst. Reductions in annual NPP of approximately 17 and 22% for 1995 and 1996, respectively, were attributed to 3- and 5-week delays in spring thaw relative to 1994. Boreal forest stands with greater proportions of deciduous vegetation were more sensitive to the timing of spring thaw than evergreen coniferous stands. Similar relationships were found by comparing simulated snow depth records with 10-year records of aboveground NPP measurements obtained from biomass harvest plots within the BOREAS region. These results highlight the importance of sub-grid scale land cover complexity in controlling boreal forest regional productivity, the dynamic response of the biome to short-term interannual climate variations, and the potential implications of climate change and other large-scale disturbances.

  15. Five-Year Wilkinson Microwave Anisotropy Probe Observations: Data Processing, Sky Maps, and Basic Results

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Weiland, J. L.; Hill, R. S.; Odegard, N.; Larson, D.; Bennett, C. L.; Dunkley, J.; Gold, B.; Greason, M. R.; Jarosik, N.; Komatsu, E.; Nolta, M. R.; Page, L.; Spergel, D. N.; Wollack, E.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wright, E. L.

    2010-01-01

    We present new full-sky temperature and polarization maps in five frequency bands from 23 to 94 GHz, based on data from the first five years of the Wilkinson Microwave Anisotropy Probe (WMAP) sky survey. The new maps are consistent with previous maps and are more sensitive. The five-year maps incorporate several improvements in data processing made possible by the additional years of data and by a more complete analysis of the instrument calibration and in-flight beam response. We present several new tests for systematic errors in the polarization data and conclude that W-band polarization data are not yet suitable for cosmological studies, but we suggest directions for further study. We do find that Ka-band data are suitable for use; in conjunction with the additional years of data, the addition of Ka band to the previously used Q- and V-band channels significantly reduces the uncertainty in the optical depth parameter, τ. Further scientific results from the five-year data analysis are presented in six companion papers and are summarized in Section 7 of this paper. With the five-year WMAP data, we detect no convincing deviations from the minimal six-parameter ΛCDM model: a flat universe dominated by a cosmological constant, with adiabatic and nearly scale-invariant Gaussian fluctuations. Using WMAP data combined with measurements of Type Ia supernovae and baryon acoustic oscillations in the galaxy distribution, we find (68% CL uncertainties): Ω_b h² = 0.02267 (+0.00058/−0.00059), Ω_c h² = 0.1131 ± 0.0034, Ω_Λ = 0.726 ± 0.015, n_s = 0.960 ± 0.013, τ = 0.084 ± 0.016, and Δ²_R = (2.445 ± 0.096) × 10⁻⁹ at k = 0.002 Mpc⁻¹. From these we derive σ_8 = 0.812 ± 0.026, H_0 = 70.5 ± 1.3 km s⁻¹ Mpc⁻¹, Ω_b = 0.0456 ± 0.0015, and Ω_c = 0.228 ±

  16. Effect of the drying process on the intensification of phenolic compounds recovery from grape pomace using accelerated solvent extraction.

    PubMed

    Rajha, Hiba N; Ziegler, Walter; Louka, Nicolas; Hobaika, Zeina; Vorobiev, Eugene; Boechzelt, Herbert G; Maroun, Richard G

    2014-01-01

    In light of their environmental and economic interest, food byproducts have been increasingly exploited and valorized for their richness in dietary fibers and antioxidants. Phenolic compounds are antioxidant bioactive molecules highly present in grape byproducts. Herein, the accelerated solvent extraction (ASE) of phenolic compounds from wet grape pomace and from grape pomace dried at 45 °C was conducted, and the highest phenolic compound yields (PCY) for the wet (16.2 g GAE/100 g DM) and dry (7.28 g GAE/100 g DM) grape pomace extracts were obtained with a 70% ethanol/water solvent at 140 °C. The PCY obtained from wet pomace was up to two times better than that of the dry byproduct and up to 15 times better than that of the same food matrices treated with conventional methods. With regard to resveratrol, the corresponding dry pomace extract had a better free radical scavenging activity (49.12%) than the wet extract (39.8%). The drying pretreatment seems to improve the antiradical activity, especially when the extraction by ASE is performed at temperatures above 100 °C. HPLC-DAD analysis showed that the diversity of the flavonoid and non-flavonoid compounds found in the extracts was strongly affected by the extraction temperature and the pretreatment of the raw material. This diversity seems to play a key role in the scavenging activity demonstrated by the extracts. Our results emphasize ASE usage as a promising method for the preparation of highly concentrated and bioactive phenolic extracts that could be used in several industrial applications.

  17. Effect of the Drying Process on the Intensification of Phenolic Compounds Recovery from Grape Pomace Using Accelerated Solvent Extraction

    PubMed Central

    Rajha, Hiba N.; Ziegler, Walter; Louka, Nicolas; Hobaika, Zeina; Vorobiev, Eugene; Boechzelt, Herbert G.; Maroun, Richard G.

    2014-01-01

    In light of their environmental and economic interest, food byproducts have been increasingly exploited and valorized for their richness in dietary fibers and antioxidants. Phenolic compounds are antioxidant bioactive molecules highly present in grape byproducts. Herein, the accelerated solvent extraction (ASE) of phenolic compounds from wet grape pomace and from grape pomace dried at 45 °C was conducted, and the highest phenolic compound yields (PCY) for the wet (16.2 g GAE/100 g DM) and dry (7.28 g GAE/100 g DM) grape pomace extracts were obtained with a 70% ethanol/water solvent at 140 °C. The PCY obtained from wet pomace was up to two times better than that of the dry byproduct and up to 15 times better than that of the same food matrices treated with conventional methods. With regard to resveratrol, the corresponding dry pomace extract had a better free radical scavenging activity (49.12%) than the wet extract (39.8%). The drying pretreatment seems to improve the antiradical activity, especially when the extraction by ASE is performed at temperatures above 100 °C. HPLC-DAD analysis showed that the diversity of the flavonoid and non-flavonoid compounds found in the extracts was strongly affected by the extraction temperature and the pretreatment of the raw material. This diversity seems to play a key role in the scavenging activity demonstrated by the extracts. Our results emphasize ASE usage as a promising method for the preparation of highly concentrated and bioactive phenolic extracts that could be used in several industrial applications. PMID:25322155

  18. Accelerated aging studies of UHMWPE. I. Effect of resin, processing, and radiation environment on resistance to mechanical degradation.

    PubMed

    Edidin, A A; Herr, M P; Villarraga, M L; Muth, J; Yau, S S; Kurtz, S M

    2002-08-01

    The resin and processing route have been identified as potential variables influencing the mechanical behavior, and hence the clinical performance, of ultra-high molecular weight polyethylene (UHMWPE) orthopedic components. Researchers have reported that components fabricated from 1900 resin may oxidize to a lesser extent than components fabricated from GUR resin during shelf aging after gamma sterilization in air. Conflicting reports on the oxidation resistance of 1900 raise the question of whether resin or manufacturing method, or an interaction between resin and manufacturing method, influences the mechanical behavior of UHMWPE. We conducted a series of accelerated aging studies (no aging, aging in oxygen or in nitrogen) to systematically examine the influence of resin (GUR or 1900), manufacturing method (bulk compression molding or extrusion), and sterilization method (none, in air, or in nitrogen) on the mechanical behavior of UHMWPE. The small punch testing technique was used to evaluate the mechanical behavior of the materials, and Fourier transform infrared spectroscopy was used to characterize the oxidation in selected samples. Our study showed that the sterilization environment, aging condition, and specimen location (surface or subsurface) significantly affected the mechanical behavior of UHMWPE. Each of the three polyethylenes evaluated seems to degrade along a similar pathway after artificial aging in oxygen and gamma irradiation in air. The initial ability of the materials to exhibit post-yield strain hardening was significantly compromised by degradation. In general, there were only minor differences in the aging behavior of molded and extruded GUR 1050, whereas the molded 1900 material seemed to degrade slightly faster than either of the 1050 materials.

  19. Accelerating patient-care improvement in the ED.

    PubMed

    Forrester, Nancy E

    2003-08-01

    Quality improvement is always in the best interest of healthcare providers. One hospital examined the patient-care delivery process used in its emergency department to determine ways to improve patient satisfaction while increasing the effectiveness and efficiency of healthcare delivery. The hospital used activity-based costing (ABC) plus additional data related to rework, information opportunity costs, and other effectiveness measures to create a process map that helped it accelerate diagnosis and improve redesign of the care process. PMID:12938618

  20. Unmanned aircraft systems image collection and computer vision image processing for surveying and mapping that meets professional needs

    NASA Astrophysics Data System (ADS)

    Peterson, James Preston, II

    Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. This research exploits the relationship between the AGL, ground sample distance, target spacing, and the root mean square error of the targets to develop guidelines that use the ASPRS and NSSDA map standards as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL are required to produce it. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
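
    A minimal sketch of the accuracy check implied above, assuming surveyed and mapped coordinates of the check targets as inputs: horizontal RMSE is computed from the coordinate residuals, and the NSSDA accuracy at the 95% confidence level follows as 1.7308 x RMSE_r (valid when the x and y errors are comparable). The coordinate arrays are placeholders.

        import numpy as np

        surveyed = np.random.rand(97, 2) * 100.0            # placeholder surveyed target coordinates (m)
        mapped = surveyed + 0.05 * np.random.randn(97, 2)   # placeholder UAS/CV-derived coordinates (m)

        rmse_x = np.sqrt(np.mean((mapped[:, 0] - surveyed[:, 0]) ** 2))
        rmse_y = np.sqrt(np.mean((mapped[:, 1] - surveyed[:, 1]) ** 2))
        rmse_r = np.sqrt(rmse_x ** 2 + rmse_y ** 2)         # radial (horizontal) RMSE

        accuracy_95 = 1.7308 * rmse_r                       # NSSDA horizontal accuracy at 95% confidence
        print("RMSE_r = %.3f m, NSSDA accuracy = %.3f m" % (rmse_r, accuracy_95))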

  1. Multiscale Processes of Hurricane Sandy (2012) as Revealed by the CAMVis-MAP

    NASA Astrophysics Data System (ADS)

    Shen, B.; Li, J. F.; Cheung, S.

    2013-12-01

    In late October 2012, Storm Sandy made landfall near Brigantine, New Jersey, devastating surrounding areas and causing tremendous economic loss and hundreds of fatalities (Blake et al., 2013). An estimated damage of $50 billion made Sandy the second-costliest tropical cyclone (TC) in US history, surpassed only by Hurricane Katrina (2005). Central questions to be addressed include (1) to what extent the lead time of severe storm prediction, as for Sandy, can be extended (e.g., Emanuel 2012); and (2) whether and how advanced global models, supercomputing technology, and numerical algorithms can help effectively illustrate the complicated physical processes associated with the evolution of such storms. In this study, the predictability of Sandy is addressed with a focus on short-term (or extended-range) genesis prediction, as the first step toward the goal of understanding the relationship between extreme events such as Sandy and the current climate. The newly deployed Coupled Advanced global mesoscale Modeling (GMM) and concurrent Visualization (CAMVis) system is used for this study. We will show remarkable simulations of Hurricane Sandy with the GMM, including a realistic 7-day track and intensity forecast and genesis predictions with a lead time of up to 6 days (e.g., Shen et al., 2013, GRL, submitted). We then discuss the enabling role of high-resolution 4-D (time-X-Y-Z) visualizations in illustrating the TC's transient dynamics and its interaction with tropical waves. In addition, we have finished the parallel implementation of the ensemble empirical mode decomposition (PEEMD, Cheung et al., 2013, AGU13, submitted) method, which will soon be integrated into the multiscale analysis package (MAP) for the analysis of tropical weather systems such as TCs and tropical waves. While the original EEMD has previously shown superior performance in decomposition of nonlinear (local) and non-stationary data into different intrinsic modes which stay within the natural
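
    A minimal sketch of the ensemble empirical mode decomposition idea behind PEEMD, assuming some EMD routine emd(signal, n_imfs) that returns an (n_imfs x N) array of intrinsic mode functions; independent white-noise copies of the signal are decomposed and the IMFs averaged so that the added noise cancels.

        import numpy as np

        def eemd(signal, emd, n_ensembles=100, noise_std=0.2, n_imfs=6):
            # emd is an assumed stand-in for any EMD implementation
            acc = np.zeros((n_imfs, signal.size))
            for _ in range(n_ensembles):
                noise = noise_std * np.std(signal) * np.random.randn(signal.size)
                acc += emd(signal + noise, n_imfs)   # decompose each noisy copy
            return acc / n_ensembles                 # ensemble mean of the IMFs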

  2. Movable RF probe eliminates need for calibration in plasma accelerators

    NASA Technical Reports Server (NTRS)

    Miller, D. B.

    1967-01-01

    Movable RF antenna probe in plasma accelerators continuously maps the RF field both within and beyond the accelerator. It eliminates the need for installing probes in the accelerator walls. The moving RF probe can be used to map the RF electrical field under various accelerator conditions.

  3. World Stress Map Release 2003 - A key to tectonic processes and industrial applications

    NASA Astrophysics Data System (ADS)

    Müller, B.; Reinecker, J.; Heidbach, O.; Fuchs, K.

    2003-04-01

    Geoscientists explore and penetrate the interior of the Earth's crust to recover from it, and to store into it, solids, fluids, and gases. Management of subsurface structures such as boreholes or reservoirs has to take the existing tectonic stress into account, either to use it to advantage or at least to minimize the effects of man-made stress concentrations and destructive effects such as borehole breakouts. The World Map of Tectonic Stress (in short: World Stress Map or WSM) is a fundamental geophysical database. The impact of the WSM on various aspects of modern civilization is pointed out, ranging from seismic hazard quantification to the increasing interest of industry in the WSM. The WSM has become a valuable tool applied to a wide range of technological problems within the stressed crust, such as oil reservoir management and the stability of underground openings (tunnels, boreholes, and waste disposal sites). The new release 2003 of the WSM contains more than 13,500 stress data sets. All data were classified according to a unified quality ranking, which ensures the comparability of data originating from a wide range of measurement methods. The database is freely available at http://www.world-stress-map.org. With the new version 1.1 of the interactive tool CASMO (Create A Stress Map Online), users can request their own stress maps.

  4. User alternatives in post-processing for raster-to-vector conversion. [Landsat-based forest mapping

    NASA Technical Reports Server (NTRS)

    Logan, T. L.; Woodcock, C. E.

    1983-01-01

    A number of Landsat-based coniferous forest stratum maps have been created for the Eldorado National Forest in California. These maps were produced in raster image format, which is not directly usable by the U.S. Forest Service's vector-based Wildland Resource Information System (WRIS). As a solution, raster-to-vector conversion software has been developed for processing classified images into polygonal data structures. Before conversion, however, the digital classification images must be simplified to remove high spatial variance ('noise', 'speckle') and meet a USFS ten-acre minimum requirement. A post-processing (simplification) strategy different from those commonly used in raster image processing may be desired for preparing maps for conversion to vector format, because simplification routines typically permit diagonal connections in the process of reclassifying pixels and forming new polygons. Diagonal connections are often undesirable when converting to vector format because they permit polygons to effectively cross over each other and occupy a common location. Three alternative methodologies are discussed for simplifying raster data for conversion to vector format.
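
    A minimal sketch of speckle removal without diagonal connections, assuming the classified map is a 2-D integer array and the ten-acre rule has already been converted to a minimum pixel count; regions are labeled with 4-connectivity (orthogonal neighbors only), and sub-minimum regions are reassigned to the background class (a production workflow would instead merge them into the dominant neighboring class).

        import numpy as np
        from scipy import ndimage

        FOUR_CONNECTED = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]   # no diagonal links

        def simplify(classified, min_pixels):
            out = classified.copy()
            for cls in np.unique(classified):
                mask = classified == cls
                labels, n = ndimage.label(mask, structure=FOUR_CONNECTED)
                sizes = ndimage.sum(mask, labels, range(1, n + 1))
                for i, size in enumerate(sizes, start=1):
                    if size < min_pixels:
                        out[labels == i] = 0         # drop sub-minimum speckle
            return out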

  5. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    Existing research on acceleration control mainly focuses on optimizing the velocity trajectory with respect to a criterion that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter, and the battery) is modeled to obtain a detailed consumption map. An analytical algorithm is proposed to derive the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP); this method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in the simulations and experiments is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control solved by the analytical algorithm and that solved by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
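
    A minimal dynamic-programming sketch of minimum-energy acceleration under a fixed time constraint, in the spirit of the DP formulation above; the consumption map, driving-resistance law, grids, and all vehicle parameters are illustrative assumptions rather than the paper's model.

        import numpy as np

        M, DT, V_TARGET, T_TOTAL = 1500.0, 0.1, 20.0, 10.0   # mass (kg), step (s), target (m/s), time (s)
        speeds = np.linspace(0.0, V_TARGET, 201)             # speed state grid
        forces = np.linspace(0.0, 4000.0, 41)                # traction force grid (N)

        def electric_power(v, f):
            # assumed consumption map: traction power over drivetrain
            # efficiency plus a simple load-dependent loss term
            return f * v / 0.85 + 0.02 * f ** 2 / 1000.0

        cost = np.full(speeds.size, np.inf)                  # cost-to-go (J)
        cost[-1] = 0.0                                       # must end at V_TARGET
        for _ in range(int(T_TOTAL / DT)):                   # backward value recursion
            new = np.full(speeds.size, np.inf)
            for i, v in enumerate(speeds):
                drag = 0.4 * v ** 2 + 150.0                  # assumed driving resistance (N)
                for f in forces:
                    v_next = v + (f - drag) / M * DT
                    j = int(round(v_next / V_TARGET * (speeds.size - 1)))
                    if 0 <= j < speeds.size and np.isfinite(cost[j]):
                        new[i] = min(new[i], electric_power(v, f) * DT + cost[j])
            cost = new

        print("minimum battery energy, 0 to %g m/s: %.1f kJ" % (V_TARGET, cost[0] / 1e3))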

  6. Tuning maps for setpoint changes and load disturbance upsets in a three capacity process under multivariable control

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Smith, Ira C.

    1991-01-01

    Tuning maps are an aid in the controller tuning process because they provide a convenient way for the plant operator to determine the consequences of adjusting different controller parameters. In this application the maps provide a graphical representation of the effect of varying the gains in the state feedback matrix on startup and load disturbance transients for a three capacity process. Nominally, the three tank system, represented in diagonal form, has a Proportional-Integral control on each loop. Cross coupling is then introduced between the loops by using non-zero off-diagonal proportional parameters. Changes in transient behavior due to setpoint and load changes are examined by varying the gains of the cross coupling terms.
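
    The idea can be illustrated with a toy tuning map: simulate a single first-order capacity under PI control over a grid of gains and record the setpoint-step overshoot at each grid point; plotting the resulting array as an image gives the operator's map. The plant time constant and gain ranges are illustrative assumptions, not the three-capacity model of the paper.

        import numpy as np

        def overshoot(kp, ki, tau=5.0, dt=0.01, t_end=60.0):
            x, integ, peak = 0.0, 0.0, 0.0
            for _ in range(int(t_end / dt)):
                err = 1.0 - x                      # unit setpoint step
                integ += err * dt
                u = kp * err + ki * integ          # PI control law
                x += (-x / tau + u) * dt           # first-order capacity
                peak = max(peak, x)
            return max(0.0, peak - 1.0)

        kps = np.linspace(0.5, 10.0, 20)
        kis = np.linspace(0.05, 2.0, 20)
        tuning_map = np.array([[overshoot(kp, ki) for ki in kis] for kp in kps])
        # tuning_map[i, j] shows the consequence of the gain pair (kps[i], kis[j])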

  7. Future accelerators (?)

    SciTech Connect

    John Womersley

    2003-08-21

    I describe the future accelerator facilities that are currently foreseen for electroweak scale physics, neutrino physics, and nuclear structure. I will explore the physics justification for these machines, and suggest how the case for future accelerators can be made.

  8. Exploring Students' Mapping Behaviors and Interactive Discourses in a Case Diagnosis Problem: Sequential Analysis of Collaborative Causal Map Drawing Processes

    ERIC Educational Resources Information Center

    Lee, Woon Jee

    2012-01-01

    The purpose of this study was to explore the nature of students' mapping and discourse behaviors while constructing causal maps to articulate their understanding of a complex, ill-structured problem. In this study, six graduate-level students were assigned to one of three pair groups, and each pair used the causal mapping software program,…

  9. Landslide susceptibility mapping by combining the three methods Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process in Dozain basin

    NASA Astrophysics Data System (ADS)

    Tazik, E.; Jahantab, Z.; Bakhtiari, M.; Rezaei, A.; Kazem Alavipanah, S.

    2014-10-01

    Landslides are among the most important natural hazards that lead to modification of the environment, so the study of this phenomenon is important in many areas. Given the climatic, geologic, and geomorphologic characteristics of the region, the purpose of this study was landslide hazard assessment using fuzzy logic, frequency ratio, and the Analytical Hierarchy Process (AHP) in the Dozein basin, Iran. First, landslides that occurred in the Dozein basin were identified using aerial photos and field studies. The landslide-influencing parameters used in this study, including slope, aspect, elevation, lithology, precipitation, land cover, distance from faults, distance from roads, and distance from rivers, were obtained from different sources and maps. Using these factors and the identified landslides, fuzzy membership values were calculated by frequency ratio. Then, to account for the importance of each factor in landslide susceptibility, factor weights were determined from questionnaires and the AHP method. Finally, the fuzzy map of each factor was multiplied by its weight obtained from the AHP method. Lastly, to compute prediction accuracy, the produced map was verified by comparison with existing landslide locations. The results indicate that combining the three methods (fuzzy logic, frequency ratio, and AHP) gives a relatively good estimate of landslide susceptibility in the study area: about 51% of the observed landslides fall into the high and very high susceptibility zones of the map, although approximately 26% of them are located in the low and very low susceptibility zones.
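
    A minimal sketch of the weighted fuzzy overlay at the core of the method, assuming each factor has already been converted to a fuzzy membership raster (for example, frequency ratios rescaled to [0, 1]) and that the AHP questionnaire produced the factor weights; the arrays and weights below are placeholders.

        import numpy as np

        slope_f = np.random.rand(100, 100)     # placeholder fuzzy membership rasters
        litho_f = np.random.rand(100, 100)
        rain_f = np.random.rand(100, 100)

        weights = np.array([0.5, 0.3, 0.2])    # placeholder AHP weights
        weights = weights / weights.sum()      # normalize so the weights sum to 1

        layers = np.stack([slope_f, litho_f, rain_f])
        susceptibility = np.tensordot(weights, layers, axes=1)   # weighted overlay

        # classify into five susceptibility zones (very low ... very high)
        zones = np.digitize(susceptibility,
                            np.quantile(susceptibility, [0.2, 0.4, 0.6, 0.8]))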

  10. A Soft OR Approach to Fostering Systems Thinking: SODA Maps plus Joint Analytical Process

    ERIC Educational Resources Information Center

    Wang, Shouhong; Wang, Hai

    2016-01-01

    Higher order thinking skills are important for managers. Systems thinking is an important type of higher order thinking in business education. This article investigates a soft Operations Research approach to teaching and learning systems thinking. It outlines the integrative use of Strategic Options Development and Analysis maps for visualizing…

  11. Processing Flexible Form-to-Meaning Mappings: Evidence for Enriched Composition as Opposed to Indeterminacy

    ERIC Educational Resources Information Center

    Roehm, Dietmar; Sorace, Antonella; Bornkessel-Schlesewsky, Ina

    2013-01-01

    Sometimes, the relationship between form and meaning in language is not one-to-one. Here, we used event-related brain potentials (ERPs) to illuminate the neural correlates of such flexible syntax-semantics mappings during sentence comprehension by examining split-intransitivity. While some ("rigid") verbs consistently select one…

  12. USING IMAGE PROCESSING METHODS WITH RASTER EDITING TOOLS FOR MAPPING EELGRASS DISTRIBUTIONS IN PACIFIC NORHWEST ESTUARIES

    EPA Science Inventory

    False-color near-infrared (CIR) aerial photography of seven Oregon estuaries was acquired at extreme low tides and digitally orthorectified with a ground pixel resolution of 25 cm to provide data for intertidal vegetation mapping. Exposed, semi-exposed and some submerged eelgras...

  13. Using Saliency Maps to Separate Competing Processes in Infant Visual Cognition

    ERIC Educational Resources Information Center

    Althaus, Nadja; Mareschal, Denis

    2012-01-01

    This article presents an eye-tracking study using a novel combination of visual saliency maps and "area-of-interest" analyses to explore online feature extraction during category learning in infants. Category learning in 12-month-olds (N = 22) involved a transition from looking at high-saliency image regions to looking at more informative, highly…

  14. ISSUES IN DIGITAL IMAGE PROCESSING OF AERIAL PHOTOGRAPHY FOR MAPPING SUBMERSED AQUATIC VEGETATION

    EPA Science Inventory

    The paper discusses the numerous issues that needed to be addressed when developing a methodology for mapping Submersed Aquatic Vegetation (SAV) from digital aerial photography. Specifically, we discuss 1) choice of film; 2) consideration of tide and weather constraints; 3) in-s...

  15. Processing the CONSOL Energy, Inc. Mine Maps and Records Collection at the University of Pittsburgh

    ERIC Educational Resources Information Center

    Rougeux, Debora A.

    2011-01-01

    This article describes the efforts of archivists and student assistants at the University of Pittsburgh's Archives Service Center to organize, describe, store, and provide timely and efficient access to over 8,000 maps of underground coal mines in southwestern Pennsylvania, as well as the records that accompanied them, donated by CONSOL Energy, Inc.…

  16. [Clinical study of prejudicing autochthonous speech act (thought)--acceleration of the activity in the remission process of schizophrenia].

    PubMed

    Kato, S

    1997-01-01

    phenomenon is considered to have a self-healing effect. Prejudicing autochthonous speech act (thought) presents not only a certain disappearance process of auditory hallucinations, but also a kind of acceleration of the patient's activity in the pre-remission period which leads to a remission. Prejudicing autochthonous speech act (thought) should therefore be considered in the therapy of schizophrenics.

  17. Toward an understanding of the acceleration of Diels-Alder reactions by a pseudo-intramolecular process achieved by molecular recognition. A DFT study.

    PubMed

    Domingo, Luis R; Aurell, M José; Arnó, Manuel; Saez, José A

    2007-05-25

    The pseudo-intramolecular Diels-Alder (DA) reaction between a 2-substituted furan (1) and an N-maleimide derivative (2) has been analyzed using DFT methods. Formation of two hydrogen bonds between the appendages on the furan and maleimide derivatives thermodynamically favors the formation of a molecular complex (MC1) through an efficient molecular recognition process. The large enthalpy stabilization associated with the molecular recognition overcomes the unfavorable activation entropy associated with the bimolecular process. As a consequence, the subsequent DA reaction is clearly accelerated through a pseudo-intramolecular process.

  18. Nonlinear processes in cosmic-ray precursor of strong supernova shock: Maximum energy and average energy spectrum of accelerated particles

    NASA Astrophysics Data System (ADS)

    Ptuskin, V. S.; Zirakashvili, V. N.

    The instability in the cosmic-ray precursor of a supernova shock is studied. The level of turbulence in this region determines the maximum energy of accelerated particles. The consideration is not limited to the case of weak turbulence. It is assumed that Kolmogorov-type nonlinear wave interactions, together with ion-neutral collisions, restrict the amplitude of the random magnetic field. As a result, the maximum energy of accelerated particles strongly depends on the age of a SNR. The average spectrum of cosmic rays injected into the interstellar medium in the course of adiabatic SNR evolution takes the approximate form E^-2 at energies larger than 10-30 GeV/nucleon, with a maximum energy close to the position of the knee in the cosmic-ray spectrum at 4 × 10^15 eV. At an earlier, ejecta-dominated stage of SNR evolution, the particles are accelerated to higher energies and have a rather steep power-law distribution. These results suggest that the knee may mark the transition from the ejecta-dominated to the adiabatic evolution of SNR shocks which accelerate cosmic rays.

  19. Added value products for imaging remote sensing by processing actual GNSS reflectometry delay doppler maps

    NASA Astrophysics Data System (ADS)

    Schiavulli, Domenico; Frappart, Frédéric; Ramilien, Guillaume; Darrozes, José; Nunziata, Ferdinando; Migliaccio, Maurizio

    2016-04-01

    Global Navigation Satellite System Reflectometry (GNSS-R) is an innovative and promising tool for remote sensing. It is based on the exploitation of GNSS signals reflected off the Earth's surface as signals of opportunity to infer geophysical information about the reflecting surface. The main advantages of GNSS-R with respect to dedicated sensors are its unprecedented spatial-temporal coverage, due to the large number of transmitting satellites (e.g., GPS, Galileo, Glonass), long GNSS mission lifetimes, and cost effectiveness, since only a simple receiver is needed. In recent years several works have demonstrated the usefulness of this technique in a number of Earth Observation applications, with results obtained using receivers mounted on aircraft or on fixed platforms. Moreover, spaceborne missions have been launched or are planned: UK-DMC, TechDemoSat-1 (TDS-1), NASA CYGNSS, and GEROS-ISS. Practically, GNSS-R can be seen as a bistatic radar system in which the GNSS satellites continuously transmit L-band all-weather, night-and-day signals that are reflected off a surface, called the Glistening Zone (GZ), while a receiver measures the scattered microwave signals in terms of Delay-Doppler Maps (DDMs) or delay waveforms. These two products have been widely studied in the literature to extract compact parameters for different remote sensing applications. However, products measured in the Delay-Doppler (DD) domain are not able to provide any spatial information about the scattering scene. This can be a drawback for applications related to imaging remote sensing, e.g., target detection, sea/land and sea/ice transitions, and oil spill detection. To overcome these limitations, deconvolution techniques have been proposed in the state of the art, aiming at the reconstruction of a radar image of the observed scene by processing the measured DDMs. These techniques have been tested on DDMs related to simulated marine scenarios.
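
    A minimal sketch of one such reconstruction, under the assumption that the measured DDM is approximately the scene scattering map convolved with the system's delay-Doppler spread function; Richardson-Lucy deconvolution from scikit-image then iteratively estimates the scene. The file names and arrays are placeholders.

        import numpy as np
        from skimage import restoration

        ddm = np.load("measured_ddm.npy")       # placeholder measured DDM (delay x Doppler)
        psf = np.load("spread_function.npy")    # placeholder delay-Doppler spread function
        psf = psf / psf.sum()                   # normalize the spread function

        # iteratively estimate the scene whose convolution with psf explains the DDM
        scene = restoration.richardson_lucy(ddm, psf, num_iter=30, clip=False)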

  20. Susceptibility mapping of shallow landslides using kernel-based Gaussian process, support vector machines and logistic regression

    NASA Astrophysics Data System (ADS)

    Colkesen, Ismail; Sahin, Emrehan Kutlug; Kavzoglu, Taskin

    2016-06-01

    Identification of landslide prone areas and production of accurate landslide susceptibility zonation maps have been crucial topics for hazard management studies. Since the prediction of susceptibility is one of the main processing steps in landslide susceptibility analysis, selection of a suitable prediction method plays an important role in the success of the susceptibility zonation process. Although simple statistical algorithms (e.g. logistic regression) have been widely used in the literature, the use of advanced non-parametric algorithms in landslide susceptibility zonation has recently become an active research topic. The main purpose of this study is to investigate the possible application of kernel-based Gaussian process regression (GPR) and support vector regression (SVR) for producing landslide susceptibility map of Tonya district of Trabzon, Turkey. Results of these two regression methods were compared with logistic regression (LR) method that is regarded as a benchmark method. Results showed that while kernel-based GPR and SVR methods generally produced similar results (90.46% and 90.37%, respectively), they outperformed the conventional LR method by about 18%. While confirming the superiority of the GPR method, statistical tests based on ROC statistics, success rate and prediction rate curves revealed the significant improvement in susceptibility map accuracy by applying kernel-based GPR and SVR methods.
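
    A minimal sketch of such a comparison with scikit-learn, using placeholder conditioning-factor features X and binary landslide labels y; the kernels and hyperparameters are illustrative, and the continuous GPR/SVR outputs are scored with ROC AUC in the same spirit as the ROC statistics cited above.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF
        from sklearn.svm import SVR
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        X = np.random.rand(500, 8)                     # placeholder factor matrix
        y = np.random.randint(0, 2, 500)               # placeholder landslide labels
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        models = {
            "GPR": GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2),
            "SVR": SVR(kernel="rbf", C=10.0, gamma="scale"),
            "LR": LogisticRegression(max_iter=1000),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            score = model.predict_proba(X_te)[:, 1] if name == "LR" else model.predict(X_te)
            print(name, "AUC = %.3f" % roc_auc_score(y_te, score))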

  1. Detecting Buried Archaeological Remains by the Use of Geophysical Data Processing with 'Diffusion Maps' Methodology

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev

    2015-04-01

    observe that as a result of the above operations we embedded the original data into 3-dimensional space where data related to the AT subsurface are well separated from the N data. This 3D set of data representatives can be used as a reference set for the classification of newly arriving data. Geophysically, this means a reliable division of the studied areas into those containing archaeological targets (AT) and those not containing them (N). Testing this methodology for the delineation of archaeological cavities by magnetic and gravity data analysis demonstrated the effectiveness of the approach. References: Alperovich, L., Eppelbaum, L., Zheludev, V., Dumoulin, J., Soldovieri, F., Proto, M., Bavusi, M. and Loperte, A., 2013. A new combined wavelet methodology applied to GPR and ERT data in the Montagnole experiment (French Alps). Journal of Geophysics and Engineering, 10, No. 2, 025017, 1-17. Averbuch, A., Hochman, K., Rabin, N., Schclar, A. and Zheludev, V., 2010. A diffusion framework for detection of moving vehicles. Digital Signal Processing, 20, No. 1, 111-122. Averbuch, A.Z., Neittaanmäki, P., and Zheludev, V.A., 2014. Spline and Spline Wavelet Methods with Applications to Signal and Image Processing. Volume I: Periodic Splines. Springer. Coifman, R.R. and Lafon, S., 2006. Diffusion maps. Applied and Computational Harmonic Analysis, Special issue on Diffusion Maps and Wavelets, 21, No. 7, 5-30. Eppelbaum, L.V., 2011. Study of magnetic anomalies over archaeological targets in urban conditions. Physics and Chemistry of the Earth, 36, No. 16, 1318-1330. Eppelbaum, L.V., 2014a. Geophysical observations at archaeological sites: Estimating informational content. Archaeological Prospection, 21, No. 2, 25-38. Eppelbaum, L.V., 2014b. Four Color Theorem and Applied Geophysics. Applied Mathematics, 5, 358-366. Eppelbaum, L.V., Alperovich, L., Zheludev, V. and Pechersky, A., 2011. Application of informational and wavelet approaches for integrated processing of geophysical data in complex environments. Proceed
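
    A minimal numpy sketch of the diffusion-maps embedding itself (Coifman and Lafon, 2006): a Gaussian affinity matrix over the feature vectors is row-normalized into a Markov matrix, and each point is embedded with the leading non-trivial eigenvectors; keeping three coordinates reproduces the kind of 3-dimensional reference set described above. The input features are placeholders.

        import numpy as np

        def diffusion_map(X, eps, dim=3, t=1):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
            K = np.exp(-d2 / eps)                                 # Gaussian affinities
            P = K / K.sum(axis=1, keepdims=True)                  # row-stochastic diffusion operator
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)
            vals, vecs = vals.real[order], vecs.real[:, order]
            # skip the trivial constant eigenvector (eigenvalue 1)
            return vecs[:, 1:dim + 1] * vals[1:dim + 1] ** t

        X = np.random.rand(200, 10)              # placeholder geophysical feature vectors
        embedding = diffusion_map(X, eps=0.5)    # 200 x 3 diffusion coordinates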

  2. Machine processing of S-192 and supporting aircraft data: Studies of atmospheric effects, agricultural classifications, and land resource mapping

    NASA Technical Reports Server (NTRS)

    Thomson, F.

    1975-01-01

    Two tasks of machine processing of S-192 multispectral scanner data are reviewed. In the first task, the effects of changing atmospheric and base altitude on the ability to machine-classify agricultural crops were investigated. A classifier and atmospheric effects simulation model was devised and its accuracy verified by comparison of its predicted results with S-192 processed results. In the second task, land resource maps of a mountainous area near Cripple Creek, Colorado were prepared from S-192 data collected on 4 August 1973.

  3. Hydrogeological Mapping and Hydrological Process Modelling for understanding the interaction of surface runoff and infiltration in a karstic catchment

    NASA Astrophysics Data System (ADS)

    Stadler, Hermann; Reszler, Christian; Komma, Jürgen; Poltnig, Walter; Strobl, Elmar; Blöschl, Günter

    2013-04-01

    This paper presents a study at the interface of hydrogeology and hydrology, concerning the mapping of surface runoff generation areas in a karstic catchment. The governing processes range from surface runoff with subsequent infiltration to direct infiltration and further deep percolation into different karst conduits. The aim is to identify areas with a potential for surface erosion and thus identify the hazard of solute/contaminant input into the karst system during aestival thundershowers, which can affect water quality at springs draining the karst massif. In keeping with hydrogeological methods, the emphasis of the study is on field investigations based on hydrogeological mapping and field measurements, in order to gain extensive knowledge about the processes and their spatial distribution in the catchment and to establish a site-specific Dominant Process Concept (DPC). Based on the hydrogeological map, which describes the lithological units according to their hydrogeological classification, mapping focuses on the following attributes of the overlying loose material/debris and soils: (i) infiltration capability, (ii) soil depth (as a measure of storage capacity), and (iii) potential surface flow length. Detailed mapping is performed in the reference area, where a variety of data are acquired, such as soil grain size distribution and soil moisture through TDR measurements at characteristic points. The reference area covers both end-members of the dominant surface runoff processes described above. Geomorphologic analyses based on a 1 m resolution laser scan assist in locating sinks and flow accumulation paths in the catchment. Using a regionalisation model, developed and calibrated on the results from the reference areas, the process disposition is transposed onto the whole study area. In a further step, a hydrological model will be set up, with model structure and parameters identified from the working steps described above and following the DPC. The model will be

  4. World Stress Map of the Earth: a key to tectonic processes and technological applications.

    PubMed

    Fuchs, K; Müller, B

    2001-09-01

    Modern civilisation explores and penetrates the interior of the Earth's crust, recovers from it and stores into it solids, fluids and gases to a hitherto unprecedented degree. Management of underground structures such as boreholes or reservoirs takes into account the existing stress either to take advantage of it or at least to minimise the effects of man-made stress. This paper presents the World Map of Tectonic Stresses (in short: World Stress Map or WSM) as a fundamental geophysical data-base. The impact of the WSM is pointed out: in the context of global tectonics, in seismic hazard quantification and in a wide range of technological problems in industrial applications such as oil reservoir management and stability of underground openings (tunnels, boreholes and waste disposal sites).

  5. Evaluation of Waveform Mapping as a Signal Processing Tool for Quantitative Ultrasonic NDE

    NASA Technical Reports Server (NTRS)

    Johnston, Patrick H.; Kishoni, Doron

    1993-01-01

    The mapping of one pulsed waveform into another, more desirable waveform by the application of a time-domain filter has been employed in a number of NDE situations. The primary goal of these applications has been to improve the range resolution of an ultrasonic signal for detection of echoes arising from particular interfaces masked by the response of the transducer. The work presented here addresses the use of this technique for resolution enhancement in imaging situations and in mapping signals from different transducers to a common target waveform, allowing maintenance of quantitative calibration of ultrasonic systems. We also describe the use of this technique in terms of the frequency analysis of the resulting waveforms.
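
    A minimal frequency-domain sketch of waveform mapping: a filter carrying a measured reference echo onto a desired target waveform is designed by regularized spectral division and can then be applied to subsequently acquired signals; the waveforms below are placeholders.

        import numpy as np

        def design_mapping_filter(measured, target, reg=1e-3):
            M, T = np.fft.rfft(measured), np.fft.rfft(target)
            # regularized division avoids blow-up where the measured spectrum is weak
            return T * np.conj(M) / (np.abs(M) ** 2 + reg * np.abs(M).max() ** 2)

        def apply_filter(signal, H):
            return np.fft.irfft(np.fft.rfft(signal) * H, n=signal.size)

        measured = np.random.randn(1024)         # placeholder reference echo
        target = np.zeros(1024)
        target[480:544] = np.hanning(64)         # placeholder desired compact pulse
        H = design_mapping_filter(measured, target)
        mapped = apply_filter(measured, H)       # approximately reproduces the target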

  6. Direct Current Accelerators for Industrial Applications

    NASA Astrophysics Data System (ADS)

    Hellborg, Ragnar; Whitlow, Harry J.

    2011-02-01

    Direct current accelerators form the basis of many front-line industrial processes. They have many advantages that have kept them at the forefront of technology for many decades, such as a small and easily managed environmental footprint. In this article, the basic principles of the different subsystems (ion and electron sources, high voltage generation, control, etc.) are overviewed. Some well-known (ion implantation and polymer processing) and lesser-known (electron beam lithography and particle-induced X-ray aerosol mapping) applications are reviewed.

  7. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three-level parallel middleware that can be interfaced to a large-scale global design environment for code-independent, multidisciplinary analysis using high-fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computation tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  8. Production of a water quality map of Saginaw Bay by computer processing of LANDSAT-2 data

    NASA Technical Reports Server (NTRS)

    Mckeon, J. B.; Rogers, R. H.; Smith, V. E.

    1977-01-01

    Surface truth and LANDSAT measurements collected July 31, 1975, for Saginaw Bay were used to demonstrate a technique for producing a color-coded water quality map. On this map, color was used as a code to quantify five discrete ranges in the following water quality parameters: (1) temperature, (2) Secchi depth, (3) chloride, (4) conductivity, (5) total Kjeldahl nitrogen, (6) total phosphorus, (7) chlorophyll a, (8) total solids, and (9) suspended solids. The LANDSAT and water quality relationship was established through a set of linear regression equations in which the water quality parameters are the dependent variables and the LANDSAT measurements are the independent variables. Although the procedure is scene and surface-truth dependent, it provides both a basis for extrapolating water quality parameters from point samples to unsampled areas and a synoptic view of water mass boundaries over the 3000 sq. km bay area made from one day's ship data that is superior, in many ways, to traditional machine-contoured maps made from three days' ship data.
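
    A minimal sketch of the regression step, assuming band measurements at the ship stations and one surface-truth parameter as inputs: a linear model fitted at the stations is applied to every pixel, and binning the predictions into five ranges yields the color-coded map. All arrays are placeholders.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        bands_at_stations = np.random.rand(40, 4)    # placeholder band values at stations
        secchi_depth = np.random.rand(40)            # placeholder surface-truth parameter

        model = LinearRegression().fit(bands_at_stations, secchi_depth)

        scene = np.random.rand(500, 500, 4)          # placeholder full-scene band values
        pred = model.predict(scene.reshape(-1, 4)).reshape(500, 500)
        # five discrete ranges correspond to the five color codes on the map
        classes = np.digitize(pred, np.quantile(pred, [0.2, 0.4, 0.6, 0.8]))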

  9. Genome-Wide QTL Mapping for Wheat Processing Quality Parameters in a Gaocheng 8901/Zhoumai 16 Recombinant Inbred Line Population

    PubMed Central

    Jin, Hui; Wen, Weie; Liu, Jindong; Zhai, Shengnan; Zhang, Yan; Yan, Jun; Liu, Zhiyong; Xia, Xianchun; He, Zhonghu

    2016-01-01

    Dough rheological and starch pasting properties play an important role in determining processing quality in bread wheat (Triticum aestivum L.). In the present study, a recombinant inbred line (RIL) population derived from a Gaocheng 8901/Zhoumai 16 cross grown in three environments was used to identify quantitative trait loci (QTLs) for dough rheological and starch pasting properties evaluated by Mixograph, Rapid Visco-Analyzer (RVA), and Mixolab parameters using the wheat 90 and 660 K single nucleotide polymorphism (SNP) chip assays. A high-density linkage map constructed with 46,961 polymorphic SNP markers from the wheat 90 and 660 K SNP assays spanned a total length of 4121 cM, with an average chromosome length of 196.2 cM and marker density of 0.09 cM/marker; 6596 new SNP markers were anchored to the bread wheat linkage map, with 1046 and 5550 markers from the 90 and 660 K SNP assays, respectively. Composite interval mapping identified 119 additive QTLs on 20 chromosomes except 4D; among them, 15 accounted for more than 10% of the phenotypic variation across two or three environments. Twelve QTLs for Mixograph parameters, 17 for RVA parameters and 55 for Mixolab parameters were new. Eleven QTL clusters were identified. The closely linked SNP markers can be used in marker-assisted wheat breeding in combination with the Kompetitive Allele Specific PCR (KASP) technique for improvement of processing quality in bread wheat. PMID:27486464

  10. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... to overturn will be posted on HUD's MAP Web site. ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  11. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... to overturn will be posted on HUD's MAP Web site. ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  12. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... to overturn will be posted on HUD's MAP Web site. ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  13. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... to overturn will be posted on HUD's MAP Web site. ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  14. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... to overturn will be posted on HUD's MAP Web site. ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing...

  15. Application of ERTS images and image processing to regional geologic problems and geologic mapping in northern Arizona

    NASA Technical Reports Server (NTRS)

    Goetz, A. F. H. (Principal Investigator); Billingsley, F. C.; Gillespie, A. R.; Abrams, M. J.; Squires, R. L.; Shoemaker, E. M.; Lucchitta, I.; Elston, D. P.

    1975-01-01

    The author has identified the following significant results. Computer image processing was shown to be both valuable and necessary in the extraction of the proper subset of the 200 million bits of information in an ERTS image to be applied to a specific problem. Spectral reflectivity information obtained from the four MSS bands can be correlated with in situ spectral reflectance measurements after path radiance effects have been removed and a proper normalization has been made. A detailed map of the major fault systems in a 90,000 sq km area in northern Arizona was compiled from high altitude photographs and pre-existing published and unpublished map data. With the use of ERTS images, three major fault systems, the Sinyala, Bright Angel, and Mesa Butte, were identified and their full extent measured. A byproduct of the regional studies was the identification of possible sources of shallow ground water, a scarce commodity in these regions.

  16. THE SPECIFIC ACCELERATION RATE IN LOOP-STRUCTURED SOLAR FLARES-IMPLICATIONS FOR ELECTRON ACCELERATION MODELS

    SciTech Connect

    Guo, Jingnan; Emslie, A. Gordon; Piana, Michele E-mail: piana@dima.unige.it

    2013-03-20

    We analyze electron flux maps based on RHESSI hard X-ray imaging spectroscopy data for a number of extended coronal-loop flare events. For each event, we determine the variation of the characteristic loop length L with electron energy E, and we fit this observed behavior with models that incorporate an extended acceleration region and an exterior 'propagation' region, and which may include collisional modification of the accelerated electron spectrum inside the acceleration region. The models are characterized by two parameters: the plasma density n in, and the longitudinal extent L_0 of, the acceleration region. Determination of the best-fit values of these parameters permits inference of the volume that encompasses the acceleration region and of the total number of particles within it. It is then straightforward to compute values for the emission filling factor and for the specific acceleration rate (electrons s^-1 per ambient electron above a chosen reference energy). For the 24 events studied, the range of inferred filling factors is consistent with a value of unity. The inferred mean value of the specific acceleration rate above E_0 = 20 keV is ~10^-2 s^-1, with a 1σ spread of about a half order of magnitude above and below this value. We compare these values with the predictions of several models, including acceleration by large-scale, weak (sub-Dreicer) fields, by strong (super-Dreicer) electric fields in a reconnecting current sheet, and by stochastic acceleration processes.

  17. Recombinant growth factor mixtures induce cell cycle progression and the upregulation of type I collagen in human skin fibroblasts, resulting in the acceleration of wound healing processes.

    PubMed

    Lee, Do Hyun; Choi, Kyung-Ha; Cho, Jae-We; Kim, So Young; Kwon, Tae Rin; Choi, Sun Young; Choi, Yoo Mi; Lee, Jay; Yoon, Ho Sang; Kim, Beom Joon

    2014-05-01

    Application of growth factor mixtures has been used for wound healing and as an anti-wrinkle agent. The aim of this study was to evaluate the effect of recombinant growth factor mixtures (RGFM) on the expression of cell cycle regulatory proteins, type I collagen, and the wound healing processes of acute animal wound models. The results showed that RGFM induced increased rates of cell proliferation and cell migration of human skin fibroblasts (HSF). In addition, expression of cyclin D1, cyclin E, cyclin-dependent kinase (Cdk)4, and Cdk2 proteins was markedly increased with growth factor mixtures treatment in fibroblasts. Expression of type I collagen was also increased in growth factor mixtures-treated HSF. Moreover, the growth factor mixtures-induced upregulation of type I collagen was associated with the activation of Smad2/3. In the animal model, RGFM-treated mice showed accelerated wound closure, with the closure rate increasing as early as day 7, as well as re-epithelization and reduced inflammatory cell infiltration compared with phosphate-buffered saline (PBS)-treated mice. In conclusion, the results indicated that RGFM has the potential to accelerate wound healing through the upregulation of type I collagen, which is partly mediated by activation of the Smad2/3-dependent signaling pathway as well as cell cycle progression in HSF. The topical application of growth factor mixtures to acute and chronic skin wounds may accelerate the epithelization process through these molecular mechanisms.

  18. Concept Mapping

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    Concept maps are graphical ways of working with ideas and presenting information. They reveal patterns and relationships and help students to clarify their thinking, and to process, organize and prioritize. Displaying information visually--in concept maps, word webs, or diagrams--stimulates creativity. Being able to think logically teaches…

  19. Discrimination of basic silicate rocks by recognition maps processed from aerial infrared data.

    NASA Technical Reports Server (NTRS)

    Vincent, R. K.; Thomson, F. J.

    1971-01-01

    A method is presented which can be used to map silicate rock-type from aerial infrared data. The method has been partially tested over a sand quarry at Mill Creek, Oklahoma, in which highly siliceous targets were discriminated from nonsilicates in the scene. The technique is currently being tested experimentally on basic silicates. On the basis of the Mill Creek results and theoretical considerations, percent SiO2 differences as small as 14% should be detectable with the University of Michigan's currently available detectors.

  1. Plasma acceleration above martian magnetic anomalies.

    PubMed

    Lundin, R; Winningham, D; Barabash, S; Frahm, R; Holmström, M; Sauvaud, J-A; Fedorov, A; Asamura, K; Coates, A J; Soobiah, Y; Hsieh, K C; Grande, M; Koskinen, H; Kallio, E; Kozyra, J; Woch, J; Fraenz, M; Brain, D; Luhmann, J; McKenna-Lawler, S; Orsini, R S; Brandt, P; Wurz, P

    2006-02-17

    Auroras are caused by accelerated charged particles precipitating along magnetic field lines into a planetary atmosphere, the auroral brightness being roughly proportional to the precipitating particle energy flux. The Analyzer of Space Plasma and Energetic Atoms experiment on the Mars Express spacecraft has made a detailed study of acceleration processes on the nightside of Mars. We observed accelerated electrons and ions in the deep nightside high-altitude region of Mars that map geographically to interface/cleft regions associated with martian crustal magnetization regions. By integrating electron and ion acceleration energy down to the upper atmosphere, we saw energy fluxes in the range of 1 to 50 milliwatts per square meter per second. These conditions are similar to those producing bright discrete auroras above Earth. Discrete auroras at Mars are therefore expected to be associated with plasma acceleration in diverging magnetic flux tubes above crustal magnetization regions, the auroras being distributed geographically in a complex pattern by the many multipole magnetic field lines extending into space. PMID:16484488

  2. Exploiting comparative mapping among Brassica species to accelerate the physical delimitation of a genic male-sterile locus (BnRf) in Brassica napus.

    PubMed

    Xie, Yanzhou; Dong, Faming; Hong, Dengfeng; Wan, Lili; Liu, Pingwu; Yang, Guangsheng

    2012-07-01

    The recessive genic male sterility (RGMS) line 9012AB has been used as an important pollination control system for rapeseed hybrid production in China. Here, we report our study on physical mapping of one male-sterile locus (BnRf) in 9012AB by exploiting the comparative genomics among Brassica species. The genetic maps around BnRf from previous reports were integrated and enriched with markers from the Brassica A7 chromosome. Subsequent collinearity analysis of these markers contributed to the identification of a novel ancestral karyotype block F that possibly encompasses BnRf. Fourteen insertion/deletion markers were further developed from this conserved block and genotyped in three large backcross populations, leading to the construction of high-resolution local genetic maps where the BnRf locus was restricted to a less than 0.1-cM region. Moreover, it was observed that the target region in Brassica napus shares a high collinearity relationship with a region from the Brassica rapa A7 chromosome. A BnRf-cosegregated marker (AT3G23870) was then used to screen a B. napus bacterial artificial chromosome (BAC) library. From the resulting 16 positive BAC clones, one (JBnB089D05) was identified to most possibly contain the BnRf (c) allele. With the assistance of the genome sequence from the Brassica rapa homolog, the 13.8-kb DNA fragment covering both closest flanking markers from the BAC clone was isolated. Gene annotation based on the comparison of microcollinear regions among Brassica napus, B. rapa and Arabidopsis showed that five potential open reading frames reside in this fragment. These results provide a foundation for the characterization of the BnRf locus and allow a better understanding of the chromosome evolution around BnRf. PMID:22382487

  3. Geomorphology, acoustic backscatter, and processes in Santa Monica Bay from multibeam mapping.

    PubMed

    Gardner, James V; Dartnell, Peter; Mayer, Larry A; Hughes Clarke, John E

    2003-01-01

    Santa Monica Bay was mapped in 1996 using a high-resolution multibeam system, providing the first substantial update of the submarine geomorphology since the initial compilation by Shepard and Emery [(1941) Geol. Soc. Amer. Spec. Paper 31]. The multibeam mapping generated not only high-resolution bathymetry, but also coregistered, calibrated acoustic backscatter at 95 kHz. The geomorphology has been subdivided into six provinces: shelf, marginal plateau, submarine canyon, basin slope, apron, and basin. The dimensions, gradients, and backscatter characteristics of each province are described and related to a combination of tectonics, climate, sea level, and sediment supply. Fluctuations of eustatic sea level have had a profound effect on the area: periodically eroding the surface of the Santa Monica plateau, extending the mouth of the Los Angeles River to various locations along the shelf break, and connecting submarine canyons to rivers. A wetter glacial climate undoubtedly supplied more sediment to the rivers, which then transported the increased sediment load to the low-stand coastline and canyon heads. The trends of Santa Monica Canyon and several bathymetric highs suggest a complex tectonic stress field that has controlled the various segments. There is no geomorphic evidence to suggest Redondo Canyon is fault controlled. The San Pedro fault can be extended more than 30 km to the northwest by the alignment of a series of bathymetric highs and abrupt changes in direction of channel thalwegs. PMID:12648948

  4. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2015-09-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps which are constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting may not be adequate. Multivariate geostatistical methods can be more significant, especially when studying secondary variables, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, the inverse square distance and ordinary co-Kriging (OCK) have been used and compared to each other. Also elevation, slope, and aspect data for each station have been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
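
    A minimal sketch of the inverse-square-distance baseline named above, predicting each grid node as the distance-weighted mean of the station values with power p = 2; the station locations and values are placeholders (co-kriging with elevation, slope, and aspect as secondary variables would call for a geostatistics package).

        import numpy as np

        def idw(xy_stations, values, xy_grid, p=2.0, eps=1e-12):
            d = np.linalg.norm(xy_grid[:, None, :] - xy_stations[None, :, :], axis=-1)
            w = 1.0 / (d ** p + eps)             # inverse-square-distance weights
            return (w * values).sum(axis=1) / w.sum(axis=1)

        stations = np.random.rand(268, 2) * 100.0    # placeholder station coordinates (km)
        precip = np.random.rand(268) * 80.0          # placeholder mean annual precipitation (cm)
        gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
        grid = np.stack([gx.ravel(), gy.ravel()], axis=-1)
        pred = idw(stations, precip, grid).reshape(50, 50)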

  8. Using Medical Text Extraction, Reasoning and Mapping System (MTERMS) to Process Medication Information in Outpatient Clinical Notes

    PubMed Central

    Zhou, Li; Plasek, Joseph M; Mahoney, Lisa M; Karipineni, Neelima; Chang, Frank; Yan, Xuemin; Chang, Fenny; Dimaggio, Dana; Goldman, Debora S.; Rocha, Roberto A.

    2011-01-01

    Clinical information is often coded using different terminologies, and therefore is not interoperable. Our goal is to develop a general natural language processing (NLP) system, called Medical Text Extraction, Reasoning and Mapping System (MTERMS), which encodes clinical text using different terminologies and simultaneously establishes dynamic mappings between them. MTERMS applies a modular pipeline approach flowing from a preprocessor, semantic tagger, terminology mapper, context analyzer, and parser to structure input clinical notes. Evaluators manually reviewed MTERMS output for 30 free-text and 10 structured outpatient clinical notes. MTERMS achieved an overall F-measure of 90.6 for free-text notes and 94.0 for structured notes for medication and temporal information. The local medication terminology had 83.0% coverage, compared with RxNorm's 98.0% coverage, for free-text notes; 61.6% of mappings between the terminologies were exact matches. Capture of duration was significantly improved (91.7% vs. 52.5%) over systems in the third i2b2 challenge. PMID:22195230
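
    The modular pipeline described above can be pictured as a chain of stages that successively enrich a shared document object. The sketch below is a toy stand-in, not MTERMS's actual API; the stage names, lexicon, and mapping entries are illustrative assumptions (Python):

        from typing import Callable, List

        # Hypothetical stand-ins for MTERMS-style stages; the real system's
        # interfaces are not published in the abstract, so each stage is just
        # a dict -> dict transformation chained in order.
        def preprocess(doc: dict) -> dict:
            doc["tokens"] = doc["text"].split()
            return doc

        def semantic_tag(doc: dict) -> dict:
            # Toy lexicon lookup standing in for the semantic tagger
            lexicon = {"aspirin": "MEDICATION", "daily": "FREQUENCY"}
            doc["tags"] = [(t, lexicon.get(t.lower(), "O")) for t in doc["tokens"]]
            return doc

        def map_terminology(doc: dict) -> dict:
            # Toy local-term -> RxNorm-style mapping (code is illustrative)
            mapping = {"aspirin": "RxNorm:1191"}
            doc["codes"] = {t: mapping[t.lower()] for t, tag in doc["tags"]
                            if tag == "MEDICATION" and t.lower() in mapping}
            return doc

        def run_pipeline(text: str, stages: List[Callable[[dict], dict]]) -> dict:
            doc = {"text": text}
            for stage in stages:   # each stage enriches the shared document
                doc = stage(doc)
            return doc

        print(run_pipeline("Aspirin 81 mg daily", [preprocess, semantic_tag, map_terminology]))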

  9. Hot deformation behavior of uniform fine-grained GH4720Li alloy based on its processing map

    NASA Astrophysics Data System (ADS)

    Yu, Qiu-ying; Yao, Zhi-hao; Dong, Jian-xin

    2016-01-01

    The hot deformation behavior of uniform fine-grained GH4720Li alloy was studied in the temperature range from 1040 to 1130°C and the strain-rate range from 0.005 to 0.5 s⁻¹ using hot compression testing. Processing maps were constructed on the basis of compression data and a dynamic materials model. Considerable flow softening associated with superplasticity was observed at strain rates of 0.01 s⁻¹ or lower. According to the processing map and observations of the microstructure, the uniform fine-grained microstructure remains intact at 1100°C or lower because of easily activated dynamic recrystallization (DRX), whereas obvious grain growth is observed at 1130°C. Metallurgical instabilities in the form of non-uniform microstructures at higher and lower Zener-Hollomon parameters are induced by local plastic flow and by faster local dissolution of primary γ', respectively. The optimum processing conditions at all of the investigated strains are proposed as 1090-1130°C at 0.08-0.5 s⁻¹ or 0.005-0.008 s⁻¹, and 1040-1085°C at 0.005-0.06 s⁻¹.
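
    The Zener-Hollomon parameter referenced above is conventionally Z = ε̇·exp(Q/(RT)). A minimal numeric sketch follows; the activation energy Q is an assumed placeholder, since the abstract does not report the fitted value for GH4720Li (Python):

        import numpy as np

        R = 8.314  # J/(mol*K), gas constant

        def zener_hollomon(strain_rate, temp_c, Q):
            """Z = strain_rate * exp(Q / (R*T)), with T in kelvin."""
            T = temp_c + 273.15
            return strain_rate * np.exp(Q / (R * T))

        # Q below is an assumed value for illustration only; the abstract does
        # not report the fitted activation energy for this alloy.
        Q = 900e3  # J/mol
        for eps_dot in (0.005, 0.5):
            for T_c in (1040, 1130):
                z = zener_hollomon(eps_dot, T_c, Q)
                print(f"eps_dot={eps_dot} 1/s, T={T_c} C -> Z={z:.3e}")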

  10. Recent processed results from the Skylab S-192 multispectral scanner. [rock mapping and mineral exploration of White Sands area

    NASA Technical Reports Server (NTRS)

    Thomson, F. J.; Nalepta, R. F.; Vincent, R. K.; Salmon, B. C.

    1975-01-01

    Results of mapping of rock types from the White Sands, New Mexico area using digital tape data from the Skylab S-192 multispectral scanner are presented. Spectral recognition techniques were used to process the geological data, and signatures were extracted from the training sets using a set of promising ratio features defined by analysis of ERSIS (Earth Resources Spectral Information System). An analysis of ERSIS spectra of rock types yielded 24 promising spectral channel ratio features for separating the rock types into Precambrian, calcareous, and clay materials and those containing ferric iron.
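
    Spectral channel ratio features of the kind selected from ERSIS are simple band-to-band quotients. A minimal sketch, with hypothetical band indices and a synthetic reflectance cube rather than S-192 data (Python):

        import numpy as np

        # Toy multispectral cube (rows, cols, bands); values are reflectances.
        rng = np.random.default_rng(2)
        cube = rng.uniform(0.05, 0.6, size=(4, 4, 6))

        def band_ratio(cube, num, den, eps=1e-6):
            """Ratio of two spectral bands, the kind of feature used to
            separate iron-bearing from carbonate/clay materials."""
            return cube[..., num] / (cube[..., den] + eps)

        # Hypothetical band indices; the 24 ratio features in the abstract
        # were selected from ERSIS spectra, not from these toy bands.
        ferric_iron_index = band_ratio(cube, num=3, den=1)
        print(ferric_iron_index)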

  11. Processing Optical and SAR Data for Burned Forests Mapping: An Integrated Framework

    NASA Astrophysics Data System (ADS)

    Stroppiana, Daniela; Azar, Ramin; Calo, Fabiana; Pepe, Antonio; Imperatore, Pasquale; Boschetti, Mirco; Silva, Joao M. N.; Brivio, Pietro A.; Lanari, Riccardo

    2015-05-01

    The application of an integrated monitoring tool to assess and understand the effects of annually occurring forest fires is presented, with special emphasis on the Mediterranean and Temperate Continental zones of Europe. The distinctive features of the information conveyed by optical and microwave remote sensing data were first investigated, and the pertinent information was subsequently combined to identify burned areas at the regional scale. We therefore propose a fuzzy-based multisource framework for burned area mapping, in order to overcome the limitations inherent in the use of optical data alone (which can be severely affected by cloud cover or include low-albedo surface targets). Experimental validation was carried out over an extensive area, quantitatively demonstrating how the approach succeeds in identifying areas affected by fires. Furthermore, the proposed methodological framework can also be profitably applied to ESA Sentinel (optical and SAR) data.
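
    One way to realize a fuzzy multisource fusion of the sort described is to map each sensor's evidence to a membership score in [0, 1] and combine the scores with a soft OR. The scores and the algebraic-sum rule below are illustrative assumptions, not the paper's actual membership functions (Python):

        import numpy as np

        # Hypothetical per-pixel "burned" evidence in [0, 1] from each source;
        # the actual membership functions and combination rule of the
        # framework are not given in the abstract.
        optical_score = np.array([0.9, 0.2, 0.6])
        sar_score = np.array([0.7, 0.1, 0.4])

        # Soft-OR (algebraic sum) fusion: high if either source says burned
        fused = optical_score + sar_score - optical_score * sar_score
        burned_mask = fused > 0.5
        print(fused, burned_mask)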

  12. Mapping the anode surface-electrolyte interphase: investigating a life limiting process of lithium primary batteries.

    PubMed

    Bock, David C; Tappero, Ryan V; Takeuchi, Kenneth J; Marschilok, Amy C; Takeuchi, Esther S

    2015-03-11

    Cathode solubility in batteries can lead to degraded and unpredictable long-term battery behavior due to transition metal deposition on the negative electrode, such that the electrode no longer supports high current. Analysis of negative electrodes retrieved after long-term testing from cells containing vanadium oxide or phosphorus oxide based cathode systems was conducted. This report demonstrates the use of synchrotron-based X-ray microfluorescence (XRμF) to map negative battery electrodes, in conjunction with microbeam X-ray absorption spectroscopy (μXAS) to determine the oxidation states of the metal centers resident in the solid electrolyte interphase (SEI) and at the electrode surface. Based on the empirical findings, a conceptual model for the location of metal ions in the SEI and their role in impacting lithium ion mobility at the electrode surfaces is proposed.

  13. A data processing method for determining instantaneous angular speed and acceleration of crankshaft in an aircraft engine-propeller system using a magnetic encoder

    NASA Astrophysics Data System (ADS)

    Yu, S. D.; Zhang, X.

    2010-05-01

    This paper presents a method for determining the instantaneous angular speed and instantaneous angular acceleration of the crankshaft in a reciprocating engine and propeller dynamical system from electrical pulse signals generated by a magnetic encoder. The method is based on accurate determination of the measured global mean angular speed and precise values of the times at which the leading edges of individual magnetic teeth pass the magnetic sensor. Under a steady-state operating condition, a discrete series of timing deviation versus shaft rotation angle, at uniform angular intervals, is obtained and used for accurate determination of the crankshaft speed and acceleration. The proposed method for identifying sub- and super-harmonic oscillations in the instantaneous angular speeds and accelerations is new and efficient. Experiments were carried out on a three-cylinder four-stroke Saito 450R model aircraft engine and a Solo propeller in connection with a 64-tooth Admotec KL2202 magnetic encoder and an HS-4 data acquisition system. Comparisons with an independent data processing scheme indicate that the proposed method yields noise-free instantaneous angular speeds and is superior to the finite-difference-based methods commonly used in the literature.
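
    The core of such a method can be sketched as follows: each tooth's leading edge yields a timestamp, the instantaneous angular speed over one tooth pitch is the pitch angle divided by the passage interval, and acceleration follows by differentiating. The timestamps below are synthetic, with a small imposed timing deviation (Python):

        import numpy as np

        N_TEETH = 64
        PITCH = 2 * np.pi / N_TEETH             # rad between leading edges

        # Synthetic tooth-edge timestamps for a shaft with a small speed
        # oscillation (stand-ins for the encoder pulse times in the paper).
        mean_omega = 2 * np.pi * 30              # 30 rev/s nominal
        theta = PITCH * np.arange(N_TEETH * 5)   # five revolutions of edges
        t = theta / mean_omega + 1e-4 * np.sin(theta)   # timing deviation

        # Instantaneous angular speed over each tooth pitch
        omega = PITCH / np.diff(t)
        # Instantaneous angular acceleration by differencing omega in time
        t_mid = 0.5 * (t[1:] + t[:-1])
        alpha = np.gradient(omega, t_mid)

        print(omega[:4], alpha[:4])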

  14. Lung Master Protocol (Lung-MAP)—A Biomarker-Driven Protocol for Accelerating Development of Therapies for Squamous Cell Lung Cancer: SWOG S1400

    PubMed Central

    Herbst, Roy S.; Gandara, David R.; Hirsch, Fred R.; Redman, Mary W.; LeBlanc, Michael; Mack, Philip C.; Schwartz, Lawrence H.; Vokes, Everett; Ramalingam, Suresh S.; Bradley, Jeffrey D.; Sparks, Dana; Zhou, Yang; Miwa, Crystal; Miller, Vincent A.; Yelensky, Roman; Li, Yali; Allen, Jeff D.; Sigal, Ellen V.; Wholley, David; Sigman, Caroline C.; Blumenthal, Gideon M.; Malik, Shakun; Kelloff, Gary J.; Abrams, Jeffrey S.; Blanke, Charles D.; Papadimitrakopoulou, Vassiliki A.

    2015-01-01

    The Lung Master Protocol (Lung-MAP, S1400) is a groundbreaking clinical trial designed to advance the efficient development of targeted therapies for squamous cell cancer (SCCA) of the lung. There are no approved targeted therapies specific to advanced lung SCCA, although The Cancer Genome Atlas (TCGA) project and similar studies have detected a significant number of somatic gene mutations/amplifications in lung SCCA, some of which are targetable by investigational agents. However, the frequency of these changes is low (5–20%), making recruitment and study conduct challenging in the traditional clinical trial setting. Here we describe our approach to development of a biomarker-driven phase 2/3 multi-substudy “Master Protocol,” employing a common platform (Next Generation DNA Sequencing) to identify actionable molecular abnormalities, followed by randomization to the relevant targeted therapy versus standard of care. PMID:25680375

  15. Lung Master Protocol (Lung-MAP)-A Biomarker-Driven Protocol for Accelerating Development of Therapies for Squamous Cell Lung Cancer: SWOG S1400.

    PubMed

    Herbst, Roy S; Gandara, David R; Hirsch, Fred R; Redman, Mary W; LeBlanc, Michael; Mack, Philip C; Schwartz, Lawrence H; Vokes, Everett; Ramalingam, Suresh S; Bradley, Jeffrey D; Sparks, Dana; Zhou, Yang; Miwa, Crystal; Miller, Vincent A; Yelensky, Roman; Li, Yali; Allen, Jeff D; Sigal, Ellen V; Wholley, David; Sigman, Caroline C; Blumenthal, Gideon M; Malik, Shakun; Kelloff, Gary J; Abrams, Jeffrey S; Blanke, Charles D; Papadimitrakopoulou, Vassiliki A

    2015-04-01

    The Lung Master Protocol (Lung-MAP, S1400) is a groundbreaking clinical trial designed to advance the efficient development of targeted therapies for squamous cell carcinoma (SCC) of the lung. There are no approved targeted therapies specific to advanced lung SCC, although The Cancer Genome Atlas project and similar studies have detected a significant number of somatic gene mutations/amplifications in lung SCC, some of which are targetable by investigational agents. However, the frequency of these changes is low (5%-20%), making recruitment and study conduct challenging in the traditional clinical trial setting. Here, we describe our approach to development of a biomarker-driven phase II/III multisubstudy "Master Protocol," using a common platform (next-generation DNA sequencing) to identify actionable molecular abnormalities, followed by randomization to the relevant targeted therapy versus standard of care.

  16. Mapping racism.

    PubMed

    Moss, Donald B

    2006-01-01

    The author uses the metaphor of mapping to illuminate a structural feature of racist thought, locating the degraded object along vertical and horizontal axes. These axes establish coordinates of hierarchy and of distance. With the coordinates in place, racist thought begins to seem grounded in natural processes. The other's identity becomes consolidated, and parochialism results. The use of this kind of mapping is illustrated via two patient vignettes. The author presents Freud's (1905, 1927) views in relation to such a "mapping" process, as well as Adorno's (1951) and Baldwin's (1965). Finally, the author conceptualizes the crucial status of primitivity in the workings of racist thought.

  17. Science-Grade Observing Systems as Process Observatories: Mapping and Understanding Nonlinearity and Multiscale Memory with Models and Observations

    NASA Astrophysics Data System (ADS)

    Barros, A. P.; Wilson, A. M.; Miller, D. K.; Tao, J.; Genereux, D. P.; Prat, O.; Petersen, W. A.; Brunsell, N. A.; Petters, M. D.; Duan, Y.

    2015-12-01

    Using the planet as a study domain and collecting observations over unprecedented ranges of spatial and temporal scales, NASA's EOS (Earth Observing System) program was an agent of transformational change in Earth Sciences over the last thirty years. The remarkable space-time organization and variability of atmospheric and terrestrial moist processes that emerged from the analysis of comprehensive satellite observations provided much impetus to expand the scope of land-atmosphere interaction studies in Hydrology and Hydrometeorology. Consequently, input and output terms in the mass and energy balance equations evolved from being treated as fluxes that can be used as boundary conditions, or forcing, to being viewed as dynamic processes of a coupled system interacting at multiple scales. Measurements of states or fluxes are most useful if together they map, reveal and/or constrain the underlying physical processes and their interactions. This can only be accomplished through an integrated observing system designed to capture the coupled physics, including nonlinear feedbacks and tipping points. Here, we first review and synthesize lessons learned from hydrometeorology studies in the Southern Appalachians and in the Southern Great Plains using both ground-based and satellite observations, physical models and data-assimilation systems. We will specifically focus on mapping and understanding nonlinearity and multiscale memory of rainfall-runoff processes in mountainous regions. It will be shown that beyond technical rigor, variety, quantity and duration of measurements, the utility of observing systems is determined by their interpretive value in the context of physical models to describe the linkages among different observations. Second, we propose a framework for designing science-grade and science-minded process-oriented integrated observing and modeling platforms for hydrometeorological studies.

  18. Impaired Letter-String Processing in Developmental Dyslexia: What Visual-to-Phonology Code Mapping Disorder?

    ERIC Educational Resources Information Center

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-01-01

    Poor parallel letter-string processing in developmental dyslexia was taken as evidence of poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In…

  19. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades, mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective, the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with

  20. Proton acceleration in the electrostatic sheaths of hot electrons governed by strongly relativistic laser-absorption processes.

    PubMed

    Ter-Avetisyan, S; Schnürer, M; Sokollik, T; Nickles, P V; Sandner, W; Reiss, H R; Stein, J; Habs, D; Nakamura, T; Mima, K

    2008-01-01

    Two different laser energy absorption mechanisms at the front side of a laser-irradiated foil have been found to occur, such that two distinct relativistic electron beams with different properties are produced. One beam arises from the ponderomotively driven electrons propagating in the laser propagation direction, and the other is the result of electrons driven by resonance absorption normal to the target surface. These properties become evident at the rear surface of the target, where they give rise to two spatially separated sources of ions with distinguishable characteristics when ultrashort (40 fs) high-intensity laser pulses irradiate a foil at 45 degrees incidence. The laser pulse intensity and the contrast ratio are crucial. One can establish conditions such that one or the other of the laser energy absorption mechanisms is dominant, and thereby one can control the ion acceleration scenarios. The observations are confirmed by particle-in-cell (PIC) simulations.

  1. Hyperspectral Data Processing and Mapping of Soil Parameters: Preliminary Data from Tuscany (Italy)

    NASA Astrophysics Data System (ADS)

    Garfagnoli, F.; Moretti, S.; Catani, F.; Innocenti, L.; Chiarantini, L.

    2010-12-01

    At-sensor radiance values are first derived, where calibration coefficients and parameters from laboratory measurements are applied to non-georeferenced VNIR/SWIR DN values. Then, geocoded products are retrieved for each flight line by using a procedure developed in IDL Language and PARGE (PARametric Geocoding) software. When all compensation parameters are applied to the hyperspectral data or to the final thematic map, orthorectified, georeferenced and coregistered VNIR to SWIR images or maps are available for GIS application and 3D viewing. Airborne imagery has to be corrected for the influence of the atmosphere, solar illumination, sensor viewing geometry and terrain geometry for the retrieval of inherent surface reflectance properties. Then, different geophysical parameters can be investigated and retrieved by means of inversion algorithms. The experimental fitting of laboratory data on mineral content is used for airborne data inversion, whose results are in agreement with laboratory records, demonstrating the possibility of using this methodology for digital mapping of soil properties.
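
    The first radiometric step of such a chain is linear: at-sensor radiance is obtained from raw digital numbers via per-band gain and offset from laboratory calibration. The gain and offset below are illustrative assumptions (Python):

        import numpy as np

        # Minimal radiometric step of the chain: convert raw DN values to
        # at-sensor radiance with a per-band gain/offset from laboratory
        # calibration. Gain and offset here are illustrative assumptions.
        dn = np.array([[120, 240], [80, 200]], dtype=float)   # raw digital numbers
        gain, offset = 0.05, 1.2    # W m^-2 sr^-1 um^-1 per DN (assumed)
        radiance = gain * dn + offset
        print(radiance)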

  2. The importance of magnetic methods for soil mapping and process modelling. Case study in Ukraine

    NASA Astrophysics Data System (ADS)

    Menshov, Oleksandr; Pereira, Paulo; Kruglov, Oleksandr; Sukhorada, Anatoliy

    2016-04-01

    The correct planning of agricultural areas is fundamental for a sustainable future in Ukraine. After the recent political problems in Ukraine, new challenges have emerged regarding sustainability. At the same time, soil mapping and modelling are developing intensively all over the world (Pereira et al., 2015; Brevik et al., in press). Magnetic susceptibility (MS) methods are low-cost and accurate tools for developing maps of agricultural areas, which are fundamental for Ukraine's economy. They allow the collection of a great amount of soil data, useful for a better understanding of the spatial distribution of soil properties. Recently, this method has been applied in other works in Ukraine and elsewhere (Jordanova et al., 2011; Menshov et al., 2015). The objective of this work is to study the spatial distribution of MS and humus content in the topsoils (0-5 cm) of two different areas, the first located in Poltava region and the second in Kharkiv region. The results showed that MS depends on soil type, topography and anthropogenic influence. For the interpretation of the MS spatial distribution in topsoil, we consider the frequency of and time since the last tillage, tilth depth, fertilizing, and the puddling associated with the vehicle model used. On average, the topsoil MS in these two cases is about 30-70×10⁻⁸ m³/kg. In Poltava region, undisturbed soil has average MS values of 40-50×10⁻⁸ m³/kg; in Kharkiv region, 50-60×10⁻⁸ m³/kg. The tilled soil of Poltava region has an average MS of 60×10⁻⁸ m³/kg, and that of Kharkiv region 70×10⁻⁸ m³/kg; MS is thus higher in tilled soils than in undisturbed ones. The correlation between MS and soil humus content is very high (up to 0.90) in both cases. Brevik, E., Baumgarten, A., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Jordán, A. Soil mapping, classification, and modelling: history and future directions. Geoderma (in press), doi:10.1016/j.geoderma.2015.05.017. Jordanova, D., Jordanova, N., Atanasova, A., Tsacheva, T., Petrov, P.

  3. Process-based image analysis for agricultural mapping: A case study in Turkgeldi region, Turkey

    NASA Astrophysics Data System (ADS)

    Damla Uca Avci, Z.; Sunar, Filiz

    2015-10-01

    The need for timely, accurate, and interoperable geospatial information is steadily increasing. In this context, process-based image processing systems will form the initial segment of future automatic systems. A process-based system is considered a good approach for agricultural purposes because agricultural activities follow a periodic (annual) cycle. Therefore, a process-based image analysis procedure was designed for routine crop classification in an agricultural region in Kırklareli, Turkey. The process tree developed uses a multi-temporal image data set as input and gives the final crop classification as output by applying an incremental rule set. The test data set was composed of five images of Satellite Pour l'Observation de la Terre 4 (SPOT 4) data acquired in 2007. Image objects were first extracted and then classified. A rule set was structured depending on class definitions. A decision-based process was executed, forming a multilevel image classification system. The final classification was obtained by merging classes from the appropriate levels where they were extracted. To evaluate the success of the application, the accuracy of the classification was assessed; the overall accuracy and kappa index of agreement were found to be 80% and 0.78, respectively. At the end of the study, problems with the segmentation and classification operations were discussed and solution approaches were outlined. To assess the process in terms of its scope for automation, the efficiency and success of the rule set were also discussed.
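
    The two reported accuracy measures can be computed directly from a confusion matrix: overall accuracy is the trace over the total, and Cohen's kappa corrects observed agreement for chance agreement. The matrix below is hypothetical, not the study's (Python):

        import numpy as np

        def overall_accuracy_and_kappa(cm):
            """Overall accuracy and Cohen's kappa from a confusion matrix
            (rows = reference classes, columns = classified classes)."""
            cm = np.asarray(cm, dtype=float)
            n = cm.sum()
            po = np.trace(cm) / n                                 # observed agreement
            pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
            return po, (po - pe) / (1 - pe)

        # Hypothetical 3-class confusion matrix for illustration
        cm = [[50, 5, 2], [4, 45, 6], [3, 5, 40]]
        print(overall_accuracy_and_kappa(cm))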

  4. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.

  5. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
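
    The key idea behind the FFT acceleration is that, on a regular grid, applying a stationary covariance model is a convolution, which FFTs perform in O(n log n). The sketch below shows that core operation in NumPy with an assumed exponential kernel; it is a sketch of the principle, not the authors' implementation. On a CUDA-capable GPU, swapping numpy for the cupy package runs the same FFTs through the cuFFT library mentioned above (Python):

        import numpy as np

        def fft_convolve2d(field, kernel):
            """Circular 2-D convolution via FFT: the operation that dominates
            FFT-based Kriging on a regular grid (a stationary covariance
            applied as a convolution). A sketch, not the paper's code."""
            F = np.fft.rfft2(field)
            K = np.fft.rfft2(np.fft.ifftshift(kernel))
            return np.fft.irfft2(F * K, s=field.shape)

        # Hypothetical 1-km LST residual grid and exponential covariance kernel
        n = 512
        rng = np.random.default_rng(0)
        residuals = rng.standard_normal((n, n))
        y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
        kernel = np.exp(-np.hypot(x, y) / 25.0)   # range parameter assumed
        kernel /= kernel.sum()

        smoothed = fft_convolve2d(residuals, kernel)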

  6. An Indexed, Mapped Mutant Library Enables Reverse Genetics Studies of Biological Processes in Chlamydomonas reinhardtii

    PubMed Central

    Gang, Spencer S.; Blum, Sean R.; Ivanova, Nina; Yue, Rebecca; Grossman, Arthur R.

    2016-01-01

    The green alga Chlamydomonas reinhardtii is a leading unicellular model for dissecting biological processes in photosynthetic eukaryotes. However, its usefulness has been limited by difficulties in obtaining mutants in specific genes of interest. To allow generation of large numbers of mapped mutants, we developed high-throughput methods that (1) enable easy maintenance of tens of thousands of Chlamydomonas strains by propagation on agar media and by cryogenic storage, (2) identify mutagenic insertion sites and physical coordinates in these collections, and (3) validate the insertion sites in pools of mutants by obtaining >500 bp of flanking genomic sequences. We used these approaches to construct a stably maintained library of 1935 mapped mutants, representing disruptions in 1562 genes. We further characterized randomly selected mutants and found that 33 out of 44 insertion sites (75%) could be confirmed by PCR, and 17 out of 23 mutants (74%) contained a single insertion. To demonstrate the power of this library for elucidating biological processes, we analyzed the lipid content of mutants disrupted in genes encoding proteins of the algal lipid droplet proteome. This study revealed a central role of the long-chain acyl-CoA synthetase LCS2 in the production of triacylglycerol from de novo-synthesized fatty acids. PMID:26764374

  7. A Web-based tool for processing and visualizing body surface potential maps.

    PubMed

    Bond, Raymond R; Finlay, Dewar D; Nugent, Chris D; Moore, George

    2010-01-01

    The body surface potential map (BSPM) is potentially more accurate for diagnosing cardiac pathologies than the standard 12-lead electrocardiogram (ECG). However, a contributing factor to the lack of widespread adoption of the BSPM is the shortage of standard methods for its storage and visualization. Based on these observations, a BSPM storage format based on the eXtensible Markup Language has been developed within this study, alongside a Web-based BSPM viewer. This viewer was created using a lossless vector graphics tool (Adobe Flash) to maintain the quality of the ECG waveforms when they are enlarged. The viewer also runs inside the Web browser to facilitate BSPM visualization independent of the clinician's geographical location. This online nature enabled the creation of a comments system that can be used to assist in a collaborative diagnosis. This is useful because BSPM diagnostic criteria are not well established. Moreover, using the viewer's innovative tools (i.e., calipers, isopotential maps), the clinician can explore BSPM datasets. Algorithms have also been integrated within the system to extract and display the 12-lead ECG and the vectorcardiogram from the BSPM. This viewer has been available online for 10 months alongside a Weblog, which has been used to record user feedback. During this period, 12 experts from both the clinical and visualization domains evaluated the viewer and contributed to its design. It has been the general consensus of all experts that the application is an effective solution for visualizing BSPMs. This viewer has been tested to visualize 2 different BSPMs using a PC (3 GHz CPU, 3 GB RAM, 6 MB broadband). The Lux-192 BSPM and the Kornreich-117 BSPM were both uploaded and visualized within 3.8 seconds (mean time from 10 trials). This BSPM storage format and its associated viewer provide a framework for a BSPM management system. If this system is made widely available, it has the potential to provide BSPM
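
    The abstract does not publish the XML schema, but a record along the following lines illustrates what an XML container for a BSPM might hold; every element and attribute name here is hypothetical (Python, standard library):

        import xml.etree.ElementTree as ET

        # Element and attribute names below are hypothetical; the abstract
        # does not publish the actual schema. They merely illustrate an
        # XML container for a BSPM recording.
        bspm = ET.Element("bspm", attrib={"leadSystem": "Lux-192",
                                          "samplingRateHz": "500"})
        ET.SubElement(bspm, "patient", attrib={"id": "anon-001"})
        leads = ET.SubElement(bspm, "leads")
        for i, samples in enumerate([[0.11, 0.14, 0.09], [0.02, 0.05, 0.04]]):
            lead = ET.SubElement(leads, "lead", attrib={"index": str(i), "unit": "mV"})
            lead.text = " ".join(str(s) for s in samples)

        print(ET.tostring(bspm, encoding="unicode"))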

  8. Modeling and hazard mapping of complex cascading mass movement processes: the case of glacier lake 513, Carhuaz, Peru

    NASA Astrophysics Data System (ADS)

    Schneider, Demian; Huggel, Christian; García, Javier; Ludeña, Sebastian; Cochachin, Alejo

    2013-04-01

    The results demonstrate that complex cascades of mass movement processes can realistically be modeled using different models and model parameters. The method to semi-automatically produce hazard maps is promising and should be applied in other case studies. Verification of model-based results in the field remains an important requirement. Results from this study are important for the GLOF early warning system that is currently in an implementation phase, and for risk reduction efforts in general.

  9. Digital and photographic processing study for shallow seas mapping from landsat

    USGS Publications Warehouse

    Bauer, Brian P.; Perry, Lincoln

    1978-01-01

    The application of contrast stretch and haze removal techniques to Landsat/MSS imagery for shallow seas bathymetry is discussed. The application of these techniques is based upon procedures inherent in the EDIPS system processing. Applications of both MSS band 4 and band 5 data are discussed in 1x and 3x gain modes. Both quantitative and qualitative (imagery) data are used to demonstrate the existence of bathymetric information after EDIPS processing.
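
    A percentile-based linear contrast stretch, one common form of the enhancement discussed, can be written in a few lines; the percentile cutoffs and synthetic band are illustrative assumptions (Python):

        import numpy as np

        def percent_stretch(band, lo_pct=2, hi_pct=98):
            """Linear contrast stretch between two percentiles, the kind of
            enhancement applied to MSS bands for bathymetric interpretation."""
            lo, hi = np.percentile(band, [lo_pct, hi_pct])
            return np.clip((band - lo) / (hi - lo), 0, 1)

        band4 = np.random.default_rng(3).integers(10, 60, size=(5, 5)).astype(float)
        print(percent_stretch(band4))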

  10. Language representation and processing in fluent bilinguals: electrophysiological evidence for asymmetric mapping in bilingual memory.

    PubMed

    Palmer, Shekeila D; van Hooff, Johanna C; Havelka, Jelena

    2010-04-01

    The purpose of this investigation was to test the assumption of asymmetric mapping between words and concepts in bilingual memory as proposed by the Revised Hierarchical Model (RHM, Kroll & Stewart, 1994). Twenty-four Spanish-English bilinguals (experiment 1) and twenty English-Spanish bilinguals (experiment 2) were presented with pairs of words, one in English and one in Spanish, and asked to indicate whether or not the words had the same meaning. In half the trials the Spanish word preceded the English, and in the other half the English word preceded the Spanish. In each condition half of the words had the same meaning, and the experiment included both concrete and abstract word trials. Event-related potentials (ERPs) were used to examine lexical-semantic activation during word translation. As predicted, a direction-dependent translation asymmetry was observed in the magnitude of the N400 repetition effect. Specifically, the N400 effect was larger during backward translation (L2-L1) than during forward translation (L1-L2) in both groups of bilinguals. Results are considered in the context of different models of bilingual memory.

  11. Wakefield accelerators

    SciTech Connect

    Simpson, J.D.

    1990-01-01

    The search for new methods to accelerate particle beams to high energy using high gradients has resulted in a number of candidate schemes. One of these, wakefield acceleration, has been the subject of considerable R&D in recent years. This effort has resulted in successful proof-of-principle experiments and in increased understanding of many of the practical aspects of the technique. Some wakefield basics plus the status of existing and proposed experimental work are discussed, along with speculations on the future of wakefield acceleration. 10 refs., 6 figs.

  12. LINEAR ACCELERATOR

    DOEpatents

    Colgate, S.A.

    1958-05-27

    An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and introducing the beam of particles into the field at an angle. The result of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperates to provide a stable and focused particle beam.

  13. Design of the processing chain for a high-altitude, airborne, single-photon lidar mapping instrument

    NASA Astrophysics Data System (ADS)

    Gluckman, Joshua

    2016-05-01

    Processing data from high-altitude, airborne lidar instruments that employ single-photon sensitive, arrayed detectors poses several challenges. Arrayed detectors produce large volumes of data; single-photon sensitive detectors produce high levels of noise; and high-altitude operation makes accurate geolocation difficult to achieve. To address these challenges, a unique and highly automated processing chain for high-altitude, single-photon, airborne lidar mapping instruments has been developed. The processing chain includes algorithms for coincidence processing, noise reduction, self-calibration, data registration, and geolocation accuracy enhancement. Common to all single-photon sensitive systems is a high level of background photon noise. A key step in the processing chain is a fast and accurate algorithm for density estimation, which is used to separate the lidar signal from the background photon noise, permitting the use of a wide range gate and daytime operation. Additional filtering algorithms are used to remove or reduce other sources of system and detector noise. An optimization algorithm that leverages the conical scan pattern of the instrument is used to improve geolocation and to self-calibrate the system.
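
    One simple realization of the density-estimation step, separating signal photons from uniform background noise by local neighbor counts, is sketched below; the radius and threshold are illustrative assumptions, not the instrument's tuned values (Python):

        import numpy as np
        from scipy.spatial import cKDTree

        def density_filter(points, radius=1.0, min_neighbors=5):
            """Keep photons whose neighborhood is denser than the background.

            One simple realization of the density-estimation idea in the
            abstract; radius and threshold are illustrative assumptions.
            """
            tree = cKDTree(points)
            counts = np.array([len(ix) - 1
                               for ix in tree.query_ball_point(points, r=radius)])
            return points[counts >= min_neighbors]

        rng = np.random.default_rng(1)
        noise = rng.uniform(0, 100, size=(2000, 3))   # uniform background photons
        surface = np.column_stack([rng.uniform(0, 20, 500),
                                   rng.uniform(0, 20, 500),
                                   50 + 0.2 * rng.standard_normal(500)])  # dense return
        photons = np.vstack([noise, surface])
        signal = density_filter(photons, radius=2.0, min_neighbors=8)
        print(len(photons), "->", len(signal))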

  14. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40 A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate -1 MV, 1-h long-pulse ion beams to be neutralised for plasma heating. Because vacuum sparking occurs frequently in the accelerators, snubbers are used to limit the fault arc current and so improve ITER safety. However, recent analyses of the reference design have raised concerns. A general nonlinear transformer theory is developed for the snubber, unifying the different design models of earlier snubbers with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  15. The connectome mapper: an open-source processing pipeline to map connectomes with MRI.

    PubMed

    Daducci, Alessandro; Gerhard, Stephan; Griffa, Alessandra; Lemkaddem, Alia; Cammoun, Leila; Gigandet, Xavier; Meuli, Reto; Hagmann, Patric; Thiran, Jean-Philippe

    2012-01-01

    Researchers working in the field of global connectivity analysis using diffusion magnetic resonance imaging (MRI) can count on a wide selection of software packages for processing their data, with methods ranging from the reconstruction of the local intra-voxel axonal structure to the estimation of the trajectories of the underlying fibre tracts. However, each package is generally task-specific and uses its own conventions and file formats. In this article we present the Connectome Mapper, a software pipeline aimed at helping researchers through the tedious process of organising, processing and analysing diffusion MRI data to perform global brain connectivity analyses. Our pipeline is written in Python and is freely available as open-source at www.cmtk.org.

  16. Breakthrough: Fermilab Accelerator Technology

    ScienceCinema

    None

    2016-07-12

    There are more than 30,000 particle accelerators in operation around the world. At Fermilab, scientists are collaborating with other laboratories and industry to optimize the manufacturing processes for a new type of powerful accelerator that uses superconducting niobium cavities. Experimenting with unique polishing materials, a Fermilab team has now developed an efficient and environmentally friendly way of creating cavities that can propel particles with more than 30 million volts per meter.

  17. Improvement in Visual Search with Practice: Mapping Learning-Related Changes in Neurocognitive Stages of Processing

    PubMed Central

    Clark, Kait; Appelbaum, L. Gregory; van den Berg, Berry; Mitroff, Stephen R.

    2015-01-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus–response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior–contralateral component (N2pc, 170–250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300–400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. PMID:25834059

  18. Improvement in visual search with practice: mapping learning-related changes in neurocognitive stages of processing.

    PubMed

    Clark, Kait; Appelbaum, L Gregory; van den Berg, Berry; Mitroff, Stephen R; Woldorff, Marty G

    2015-04-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance.

  19. Neural information processing and self-organizing maps as a tool in safeguarding storage facilities

    SciTech Connect

    Howell, J.A.; Fuyat, C.

    1993-08-01

    Storage facilities for nuclear materials and weapons dismantlement facilities could have a large number of sensors with the potential for generating large amounts of data. Because of the anticipated complexity and diversity of the data, efficient automatic algorithms are necessary to make interpretations and ensure secure and safe operation. New, advanced safeguards systems are needed to process the information gathered from monitors and make interpretations that are in the best interests of the facility or agency. In this paper we present a conceptual design for software to assist with processing these large quantities of data from storage facilities.

  20. ION ACCELERATOR

    DOEpatents

    Bell, J.S.

    1959-09-15

    An arrangement for the drift tubes in a linear accelerator is described whereby each drift tube acts to shield the particles from the influence of the accelerating field and focuses the particles passing through the tube. In one embodiment the drift tube is split longitudinally into quadrants supported along the axis of the accelerator by webs from a yoke, the quadrants, webs, and yoke being of magnetic material. A magnetic focusing action is produced by energizing a winding on each web to set up a magnetic field between adjacent quadrants. In the other embodiment the quadrants are electrically insulated from each other and have opposite-polarity voltages on adjacent quadrants to provide an electric focusing field for the particles, with the quadrants spaced sufficiently close to shield the particles within the tube from the accelerating electric field.

  1. Acceleration switch

    DOEpatents

    Abbin, J.P. Jr.; Devaney, H.F.; Hake, L.W.

    1979-08-29

    The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.

  2. Acceleration switch

    DOEpatents

    Abbin, Jr., Joseph P.; Devaney, Howard F.; Hake, Lewis W.

    1982-08-17

    The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.

  3. Establishing New Mappings between Familiar Phones: Neural and Behavioral Evidence for Early Automatic Processing of Nonnative Contrasts

    PubMed Central

    Barrios, Shannon L.; Namyst, Anna M.; Lau, Ellen F.; Feldman, Naomi H.; Idsardi, William J.

    2016-01-01

    To attain native-like competence, second language (L2) learners must establish mappings between familiar speech sounds and new phoneme categories. For example, Spanish learners of English must learn that [d] and [ð], which are allophones of the same phoneme in Spanish, can distinguish meaning in English (i.e., /deɪ/ “day” and /ðeɪ/ “they”). Because adult listeners are less sensitive to allophonic than phonemic contrasts in their native language (L1), novel target language contrasts between L1 allophones may pose special difficulty for L2 learners. We investigate whether advanced Spanish late-learners of English overcome native language mappings to establish new phonological relations between familiar phones. We report behavioral and magnetoencephalographic (MEG) evidence from two experiments that measured the sensitivity and pre-attentive processing of three listener groups (L1 English, L1 Spanish, and advanced Spanish late-learners of English) to differences between three nonword stimulus pairs ([idi]-[iði], [idi]-[iɾi], and [iði]-[iɾi]) which differ in phones that play a different functional role in Spanish and English. Spanish and English listeners demonstrated greater sensitivity (larger d' scores) for nonword pairs distinguished by phonemic than by allophonic contrasts, mirroring previous findings. Spanish late-learners demonstrated sensitivity (large d' scores and MMN responses) to all three contrasts, suggesting that these L2 learners may have established a novel [d]-[ð] contrast despite the phonological relatedness of these sounds in the L1. Our results suggest that phonological relatedness influences perceived similarity, as evidenced by the results of the native speaker groups, but may not cause persistent difficulty for advanced L2 learners. Instead, L2 learners are able to use cues that are present in their input to establish new mappings between familiar phones. PMID:27445949

  4. Establishing New Mappings between Familiar Phones: Neural and Behavioral Evidence for Early Automatic Processing of Nonnative Contrasts.

    PubMed

    Barrios, Shannon L; Namyst, Anna M; Lau, Ellen F; Feldman, Naomi H; Idsardi, William J

    2016-01-01

    To attain native-like competence, second language (L2) learners must establish mappings between familiar speech sounds and new phoneme categories. For example, Spanish learners of English must learn that [d] and [ð], which are allophones of the same phoneme in Spanish, can distinguish meaning in English (i.e., /deɪ/ "day" and /ðeɪ/ "they"). Because adult listeners are less sensitive to allophonic than phonemic contrasts in their native language (L1), novel target language contrasts between L1 allophones may pose special difficulty for L2 learners. We investigate whether advanced Spanish late-learners of English overcome native language mappings to establish new phonological relations between familiar phones. We report behavioral and magnetoencephalographic (MEG) evidence from two experiments that measured the sensitivity and pre-attentive processing of three listener groups (L1 English, L1 Spanish, and advanced Spanish late-learners of English) to differences between three nonword stimulus pairs ([idi]-[iði], [idi]-[iɾi], and [iði]-[iɾi]) which differ in phones that play a different functional role in Spanish and English. Spanish and English listeners demonstrated greater sensitivity (larger d' scores) for nonword pairs distinguished by phonemic than by allophonic contrasts, mirroring previous findings. Spanish late-learners demonstrated sensitivity (large d' scores and MMN responses) to all three contrasts, suggesting that these L2 learners may have established a novel [d]-[ð] contrast despite the phonological relatedness of these sounds in the L1. Our results suggest that phonological relatedness influences perceived similarity, as evidenced by the results of the native speaker groups, but may not cause persistent difficulty for advanced L2 learners. Instead, L2 learners are able to use cues that are present in their input to establish new mappings between familiar phones. PMID:27445949

  5. Establishing New Mappings between Familiar Phones: Neural and Behavioral Evidence for Early Automatic Processing of Nonnative Contrasts.

    PubMed

    Barrios, Shannon L; Namyst, Anna M; Lau, Ellen F; Feldman, Naomi H; Idsardi, William J

    2016-01-01

    To attain native-like competence, second language (L2) learners must establish mappings between familiar speech sounds and new phoneme categories. For example, Spanish learners of English must learn that [d] and [ð], which are allophones of the same phoneme in Spanish, can distinguish meaning in English (i.e., /deɪ/ "day" and /ðeɪ/ "they"). Because adult listeners are less sensitive to allophonic than phonemic contrasts in their native language (L1), novel target language contrasts between L1 allophones may pose special difficulty for L2 learners. We investigate whether advanced Spanish late-learners of English overcome native language mappings to establish new phonological relations between familiar phones. We report behavioral and magnetoencephalographic (MEG) evidence from two experiments that measured the sensitivity and pre-attentive processing of three listener groups (L1 English, L1 Spanish, and advanced Spanish late-learners of English) to differences between three nonword stimulus pairs ([idi]-[iði], [idi]-[iɾi], and [iði]-[iɾi]) which differ in phones that play a different functional role in Spanish and English. Spanish and English listeners demonstrated greater sensitivity (larger d' scores) for nonword pairs distinguished by phonemic than by allophonic contrasts, mirroring previous findings. Spanish late-learners demonstrated sensitivity (large d' scores and MMN responses) to all three contrasts, suggesting that these L2 learners may have established a novel [d]-[ð] contrast despite the phonological relatedness of these sounds in the L1. Our results suggest that phonological relatedness influences perceived similarity, as evidenced by the results of the native speaker groups, but may not cause persistent difficulty for advanced L2 learners. Instead, L2 learners are able to use cues that are present in their input to establish new mappings between familiar phones.
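
    The d' scores reported for each listener group come from signal detection theory: d' = z(hit rate) - z(false-alarm rate). A minimal computation follows, using a log-linear correction to avoid infinite z-scores at rates of 0 or 1; the correction choice and the trial counts are illustrative assumptions (Python):

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """d' = z(H) - z(F), with a log-linear correction to avoid
            infinite z at rates of 0 or 1 (a common convention; the study's
            exact correction is not stated in the abstract)."""
            H = (hits + 0.5) / (hits + misses + 1.0)
            F = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(H) - norm.ppf(F)

        # Hypothetical counts for a same/different discrimination block
        print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 3))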

  6. The i-Map: A Process-Centered Response to Plagiarism

    ERIC Educational Resources Information Center

    Walden, Kim; Peacock, Alan

    2006-01-01

    In recent years there has been a marked change in our cultural relationship with information which has implications for our teaching and learning practices. Current concerns about the identification of, and responses to, plagiarism are grounded in that process of change. In this paper we take the position that it is better to address and respond…

  7. LINEAR ACCELERATOR

    DOEpatents

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external sunface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  8. Application of a low level, uniform ultrasound field for the acceleration of enzymatic bio-processing of cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Enzymatic bio-processing of cotton generates significantly less hazardous wastewater effluents, which are readily biodegradable, but it also has several critical shortcomings that impede its acceptance by industries: expensive processing costs and slow reaction rates. Our research has found that th...

  9. Functional connectivity mapping of regions associated with self- and other-processing.

    PubMed

    Murray, Ryan J; Debbané, Martin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Simon B

    2015-04-01

    Neuroscience literature increasingly suggests a conceptual self composed of interacting neural regions, rather than independent local activations, yet such claims have yet to be investigated. We, thus, combined task-dependent meta-analytic connectivity modeling (MACM) with task-independent resting-state (RS) connectivity analysis to delineate the neural network of the self, across both states. Given psychological evidence implicating the self's interdependence on social information, we also delineated the neural network underlying conceptual other-processing. To elucidate the relation between the self-/other-networks and their function, we mined the MACM metadata to generate a cognitive-behavioral profile for an empirically identified region specific to conceptual self, the pregenual anterior cingulate (pACC), and conceptual other, posterior cingulate/precuneus (PCC/PC). Mining of 7,200 published, task-dependent, neuroimaging studies, using healthy human subjects, yielded 193 studies activating the self-related seed and were conjoined with RS connectivity analysis to delineate a differentiated self-network composed of the pACC (seed) and anterior insula, relative to other functional connectivity. Additionally, 106 studies activating the other-related seed were conjoined with RS connectivity analysis to delineate a differentiated other-network of PCC/PC (seed) and angular gyrus/temporoparietal junction, relative to self-functional connectivity. The self-network seed related to emotional conflict resolution and motivational processing, whereas the other-network seed related to socially oriented processing and contextual information integration. Notably, our findings revealed shared RS connectivity between ensuing self-/other-networks within the ventromedial prefrontal cortex and medial orbitofrontal cortex, suggesting self-updating via integration of self-relevant social information. We, therefore, present initial neurobiological evidence corroborating the increasing

  10. Functional Connectivity Mapping of Regions Associated with Self- and Other-Processing

    PubMed Central

    Murray, Ryan J.; Debbané, Martin; Fox, Peter T.; Bzdok, Danilo; Eickhoff, Simon B.

    2016-01-01

    Neuroscience literature increasingly suggests a conceptual self composed of interacting neural regions, rather than independent local activations, yet such claims have yet to be investigated. We, thus, combined task-dependent meta-analytic connectivity modeling (MACM) with task-independent resting-state (RS) connectivity analysis to delineate the neural network of the self, across both states. Given psychological evidence implicating the self’s interdependence on social information, we also delineated the neural network underlying conceptual other-processing. To elucidate the relation between the self-/other-networks and their function, we mined the MACM metadata to generate a cognitive–behavioral profile for an empirically identified region specific to conceptual self, the pregenual anterior cingulate (pACC), and conceptual other, posterior cingulate/precuneus (PCC/PC). Mining of 7,200 published, task-dependent, neuroimaging studies, using healthy human subjects, yielded 193 studies activating the self-related seed and were conjoined with RS connectivity analysis to delineate a differentiated self-network composed of the pACC (seed) and anterior insula, relative to other functional connectivity. Additionally, 106 studies activating the other-related seed were conjoined with RS connectivity analysis to delineate a differentiated other-network of PCC/PC (seed) and angular gyrus/temporoparietal junction, relative to self-functional connectivity. The self-network seed related to emotional conflict resolution and motivational processing, whereas the other-network seed related to socially oriented processing and contextual information integration. Notably, our findings revealed shared RS connectivity between ensuing self-/other-networks within the ventromedial prefrontal cortex and medial orbitofrontal cortex, suggesting self-updating via integration of self-relevant social information. We, therefore, present initial neurobiological evidence corroborating the

  11. A scale-down mimic for mapping the process performance of centrifugation, depth and sterile filtration

    PubMed Central

    Joseph, Adrian; Kenty, Brian; Mollet, Michael; Hwang, Kenneth; Rose, Steven; Goldrick, Stephen; Bender, Jean; Farid, Suzanne S.

    2016-01-01

    In the production of biopharmaceuticals, disk-stack centrifugation is widely used as a harvest step for the removal of cells and cellular debris. Depth filters followed by sterile filters are often then employed to remove residual solids remaining in the centrate. Process development of centrifugation is usually conducted at pilot scale so as to mimic the commercial-scale equipment, but this method requires large quantities of cell culture and significant effort for successful characterization. A scale-down approach based upon the use of a shear device and a bench-top centrifuge has been extended in this work towards a preparative methodology that successfully predicts the performance of the continuous centrifuge and polishing filters. The use of this methodology allows the effects of cell culture conditions and large-scale centrifugal process parameters on subsequent filtration performance to be assessed at an early stage of process development, where material availability is limited. Biotechnol. Bioeng. 2016;113: 1934–1941. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:26927621

  12. A scale-down mimic for mapping the process performance of centrifugation, depth and sterile filtration.

    PubMed

    Joseph, Adrian; Kenty, Brian; Mollet, Michael; Hwang, Kenneth; Rose, Steven; Goldrick, Stephen; Bender, Jean; Farid, Suzanne S; Titchener-Hooker, Nigel

    2016-09-01

    In the production of biopharmaceuticals, disk-stack centrifugation is widely used as a harvest step for the removal of cells and cellular debris. Depth filters followed by sterile filters are often then employed to remove residual solids remaining in the centrate. Process development of centrifugation is usually conducted at pilot scale so as to mimic the commercial-scale equipment, but this method requires large quantities of cell culture and significant effort for successful characterization. A scale-down approach based upon the use of a shear device and a bench-top centrifuge has been extended in this work towards a preparative methodology that successfully predicts the performance of the continuous centrifuge and polishing filters. The use of this methodology allows the effects of cell culture conditions and large-scale centrifugal process parameters on subsequent filtration performance to be assessed at an early stage of process development, where material availability is limited. Biotechnol. Bioeng. 2016;113: 1934-1941. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:26927621
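    Centrifuge scale-down of this kind conventionally rests on matching the flow-rate-to-settling-area ratio Q/Σ between scales. The sketch below illustrates that translation using the classical Ambler equivalent-settling-area formulas; the laboratory-tube Σ expression is one commonly quoted approximation rather than the paper's exact procedure (which also couples a shear device upstream), and all parameter values are invented for illustration.

```python
# Q/Sigma matching between a disk-stack centrifuge and a bench-top tube.
# Sigma formulas follow classical Ambler equivalent-settling-area theory;
# the lab-tube form and every numeric value below are assumptions.
import math

G = 9.81  # gravitational acceleration, m/s^2

def sigma_disk_stack(n_disks, omega, r_outer, r_inner, half_angle_rad):
    """Equivalent settling area (m^2) of a disk-stack centrifuge:
    Sigma = 2*pi*n*omega^2*(ro^3 - ri^3) / (3*g*tan(theta))."""
    return (2 * math.pi * n_disks * omega ** 2
            * (r_outer ** 3 - r_inner ** 3)) / (3 * G * math.tan(half_angle_rad))

def sigma_lab_tube(volume, omega, r_outer, r_inner):
    """One commonly quoted approximation for a bench-top tube (assumed form):
    Sigma = V*omega^2 / (2*g*ln(2*ro / (ro + ri)))."""
    return volume * omega ** 2 / (2 * G * math.log(2 * r_outer / (r_outer + r_inner)))

# Match V_lab/(t*Sigma_lab) = Q/Sigma_disk to find the bench-top spin time
# presenting the same settling challenge as the large-scale run.
Q = 1000 / 3.6e6  # 1000 L/h in m^3/s
sig_big = sigma_disk_stack(n_disks=60, omega=2 * math.pi * 8000 / 60,
                           r_outer=0.08, r_inner=0.04,
                           half_angle_rad=math.radians(40))
V_lab = 15e-6     # 15 mL of cell culture per tube
sig_lab = sigma_lab_tube(V_lab, omega=2 * math.pi * 4000 / 60,
                         r_outer=0.10, r_inner=0.06)
t_lab = V_lab * sig_big / (Q * sig_lab)
print(f"equivalent bench-top spin time: {t_lab:.0f} s")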

  13. Mapping letters from the future: exploring narrative processes of imagining the future.

    PubMed

    Sools, Anneke M; Tromp, Thijs; Mooren, Jan H

    2015-03-01

    This article uses Letters from the Future (a health promotion instrument) to explore the human capacity to imagine the future. From a narrative perspective, letters from the future are considered indicative of the variety of forms through which human beings construct and understand their future selves and worlds. This is consistent with an interpretive approach to understanding the human mind, which offers an alternative to the currently dominant causal-explanatory approach in psychology. On the basis of a qualitative analysis of 480 letters from the future, collected online from a diverse group of Dutch and German persons, we first identified five narrative processes operating in the letters: imagining, evaluating, orienting, expressing emotions and engaging in dialogue. Second, using comparative analysis, we identified six types describing how these processes are organized in a letter as a whole. These types differ in functionality (which of the five processes was dominant); temporality (prospective, retrospective and present-oriented); the extent to which a path between present and future was described; and the vividness of the imagination. We suggest that these types can be used in narrative health practice as 'pathways' to locate where letter writers are on their path to imagining the future, rather than as a normative taxonomy. Future research should focus on how these pathways can be used to navigate to health and well-being. PMID:25762389

  14. A fully automatic processing chain to produce Burn Scar Mapping products, using the full Landsat archive over Greece

    NASA Astrophysics Data System (ADS)

    Kontoes, Charalampos; Papoutsis, Ioannis; Herekakis, Themistoklis; Michail, Dimitrios; Ieronymidi, Emmanuela

    2013-04-01

    Remote sensing tools for the accurate, robust and timely assessment of the damage inflicted by forest wildfires provide information that is of paramount importance to public environmental agencies and related stakeholders before, during and after the crisis. The Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens (IAASARS/NOA) has developed a fully automatic single- and/or multi-date processing chain that takes as input archived Landsat 4, 5 or 7 raw images and produces precise diachronic burnt-area polygons and damage assessments over the Greek territory. The methodology consists of three fully automatic stages: 1) the pre-processing stage, where the metadata of the raw images are extracted, followed by the application of the LEDAPS software platform for calibration and mask production and of NASA's Automated Precise Orthorectification Package for image geo-registration and orthorectification; 2) the core BSM (Burn Scar Mapping) processing stage, which incorporates a published classification algorithm based on a series of physical indexes, the application of two graph-based filters for noise removal, and the grouping of pixels classified as burnt into pixel clusters before conversion from raster to vector; and 3) the post-processing stage, where the products are thematically refined and enriched using auxiliary GIS layers (underlying land cover/use, administrative boundaries, etc.) and human logic/evidence to suppress false alarms and omission errors. The established processing chain has been successfully applied to the entire archive of Landsat imagery over Greece spanning 1984 to 2012, which is collected and managed at IAASARS/NOA; in total, 415 full Landsat frames were processed. These burn scar mapping products are generated for the first time to such a temporal and spatial
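    As an illustration of the index-based classification idea in the core BSM stage, here is a hedged sketch using the widely known Normalized Burn Ratio (NBR), which is not necessarily the published index set the chain employs; the band arrays, dNBR threshold and minimum cluster size are assumptions.

```python
# Spectral-index burn classification followed by grouping of burnt pixels
# into connected clusters (mirroring the chain's pixel-grouping step).
import numpy as np
from scipy import ndimage

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-9)

def burn_scar_mask(nir_pre, swir_pre, nir_post, swir_post,
                   dnbr_thresh=0.27, min_cluster_px=10):
    """Threshold the pre/post-fire NBR difference, then keep only
    connected clusters of at least min_cluster_px burnt pixels."""
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    mask = dnbr > dnbr_thresh
    labels, n = ndimage.label(mask)            # 4-connected clusters by default
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_labels = 1 + np.flatnonzero(sizes >= min_cluster_px)
    return np.isin(labels, keep_labels)
```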

  15. Digital Mapping of Coastal Erosion on the Baldwin Peninsula, NW Alaska: Past Rates, Present Processes and Future Implications

    NASA Astrophysics Data System (ADS)

    Olson, N. F.; Crosby, B. T.

    2007-12-01

    As Arctic temperatures warm and the period of ice-free seas lengthens, coasts are exposed to longer periods of wave-based erosion. In addition, warming accelerates permafrost degradation and thus decreases the mechanical stability of coastal bluffs. In this study we examine the Baldwin Peninsula, which extends 108 km northwest into Kotzebue Sound but narrows to a neck less than 700 meters wide at its midpoint. Currently, water discharge and fish runs from the Selawik and Kobuk Rivers are routed around the northern tip of the peninsula, adjacent to where the Noatak River enters the sea. The eventual breach of the narrowest part of the peninsula will result in drastic changes in fish passage, the morphology of the Noatak River delta, and local economies dependent on subsistence and commercial fishing. To constrain past and present rates of bluff erosion, we completed a high-resolution (~3 m spacing) topographic survey along ~5 km of the narrowest segment of the Baldwin Peninsula. The total station survey (georeferenced using GPS measurements at stations) defined the current bluff-edge position on both the seaward and estuary sides of the peninsula. In addition, the position of the bluff base was collected on the seaward side. Bluff retreat is accomplished by the failure of adjoining arcuate-shaped thermal slumps; seasonal wave erosion at the base of the bluff prevents the slumps from ever stabilizing. To determine historical rates of retreat, we used geo-referenced historical aerial photographs from 1953 to the present, processed in ArcGIS. Future surveys will determine whether retreat is accelerating or maintaining current rates, and will identify whether certain portions are retreating faster than others or retreat is uniform along the coast.
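    A minimal sketch of how bluff-edge positions digitized from two georeferenced photo epochs can be reduced to retreat rates along shore-normal transects; the coordinates and dates below are fabricated for illustration.

```python
# Per-transect retreat rate from two digitized bluff-edge positions.
import numpy as np

def retreat_rate(edge_old: np.ndarray, edge_new: np.ndarray,
                 year_old: float, year_new: float) -> np.ndarray:
    """Each row of edge_old/edge_new is the (x, y) point where one
    shore-normal transect intersects the bluff edge in that epoch.
    Returns the mean retreat rate in map units per year."""
    dist = np.linalg.norm(edge_new - edge_old, axis=1)
    return dist / (year_new - year_old)

edge_1953 = np.array([[0.0, 100.0], [50.0, 98.0], [100.0, 95.0]])
edge_2007 = np.array([[0.0, 62.0], [50.0, 55.0], [100.0, 60.0]])
rates = retreat_rate(edge_1953, edge_2007, 1953, 2007)
print("retreat rates (m/yr):", np.round(rates, 2))  # spot fast vs slow reaches
```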

  16. The Mars Express High Resolution Stereo Camera (HRSC): Mapping Mars and Implications for Geological Processes

    NASA Astrophysics Data System (ADS)

    Jaumann, Ralf; Tirsch, Daniela; Hauber, Ernst; Hoffmann, Harald; Neukum, Gerhard

    2015-04-01

    After 10 years of ESA's Mars Express orbiting the planet, its High Resolution Stereo Camera (HRSC) has covered about 90% of the surface in stereo and color at resolutions of up to 10 m/pixel. Digital elevation models of up to 50 m grid spacing [1], generated from all suitable datasets of the stereo coverage, currently cover about 40% of the surface [2]. Geomorphological analysis of the surface features observed by HRSC indicates major surface modification by endogenic and exogenic processes on all scales. Endogenic landforms (e.g., tectonic rifts, small basaltic shield volcanoes) were found to be very similar to their equivalents on Earth, suggesting that no unique processes are required to explain their formation. Volcanism may have been active up to the very recent past or even to the present, putting important constraints on thermal evolution models [e.g., 3]. The analysis of diverse landforms produced by aqueous processes revealed that surface water activity was likely episodic, but ranged in age from very ancient to very recent [e.g., 3]. Particularly important are the prominent glacial and periglacial features found at several latitudes, including mountain glaciers [e.g., 3]. The identification of aqueous alteration minerals and their geological context has enabled a better understanding of paleoenvironmental conditions and pedogenetic processes [e.g., 4]. Dark dunes contain volcanic material and are evidence for a significantly dynamic surface environment, characterized by widespread erosion, transport, and redeposition [e.g., 3, 5]. Since essentially all geologic interpretations of extraterrestrial features require profound knowledge of the Earth as a key reference, studies of terrestrial analogues are mandatory in planetary geology. Field work in Antarctica, Svalbard and Iceland [e.g., 6] provided a basis for the analysis of periglacial and volcanic processes. References: [1] Jaumann et al., 2007, PSS 55, 928-952; [2] Gwinner et al., 2010, EPSL 294

  17. Preparation of magnetic anomaly profile and contour maps from DOE-NURE aerial survey data. Volume I. Processing procedures

    SciTech Connect

    Tinnel, E.P.; Hinze, W.J.

    1981-09-01

    Total intensity magnetic anomaly data acquired as a supplement to radiometric data in the DOE National Uranium Resource Evaluation (NURE) Program are useful in preparing regional profile and contour maps. Survey-contractor-supplied magnetic anomaly data are subjected to a multiprocess, computer-based procedure which prepares these data for presentation. This procedure is used to produce the following machine-plotted maps of National Topographic Map Series quadrangle units at 1:250,000 scale: (1) a profile map of contractor-supplied magnetic anomaly data, (2) a profile map of high-cut filtered data with the contour levels of each profile marked and annotated on the associated flight track, (3) a profile map of critical-point data with contour levels indicated, and (4) a contour map of filtered and selected data. These quadrangle maps are supplemented with a range of statistical measures of the data which are useful in quality evaluation.
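    The "high-cut filtered data" product corresponds to low-pass filtering of each flight-line profile. Below is a sketch of one plausible implementation using a zero-phase Butterworth filter; the cutoff wavelength and sample spacing are assumptions, not values from the NURE processing specification.

```python
# High-cut (low-pass) filtering of one flight-line profile of total-intensity
# magnetic anomaly values; zero-phase so anomaly peaks are not shifted.
import numpy as np
from scipy.signal import butter, filtfilt

def high_cut(profile: np.ndarray, sample_spacing_km: float,
             cutoff_wavelength_km: float, order: int = 4) -> np.ndarray:
    """Attenuate anomaly wavelengths shorter than cutoff_wavelength_km."""
    fs = 1.0 / sample_spacing_km            # samples per km
    fc = 1.0 / cutoff_wavelength_km         # cutoff in cycles per km
    b, a = butter(order, fc / (fs / 2))     # normalized to Nyquist
    return filtfilt(b, a, profile)          # forward-backward: zero phase

x = np.arange(0, 100, 0.1)                  # 1000 samples, 0.1 km apart
raw = 50 * np.sin(2 * np.pi * x / 40) + 5 * np.random.randn(x.size)
smooth = high_cut(raw, sample_spacing_km=0.1, cutoff_wavelength_km=5.0)
```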

  18. FDG-PET mapping the brain substrates of visuo-constructive processing in Alzheimer's disease.

    PubMed

    Förster, Stefan; Teipel, Stefan; Zach, Christian; Rominger, Axel; Cumming, Paul; Fougere, Christian la; Yakushev, Igor; Haslbeck, Marianne; Hampel, Harald; Bartenstein, Peter; Bürger, Katharina

    2010-05-01

    The anatomical basis of visuo-constructive impairment in AD is largely unexplored. FDG-PET can be used to determine functional neuronal networks underlying specific cognitive performance in the human brain. In the present study, we determined the pattern of cortical metabolism that was associated with visuo-constructive performance in AD. We employed two widely used visuo-constructive tests that differ in their demand on visual perception and processing capacity. Resting state FDG-PET scans were obtained in 29 probable AD patients, and cognitive tests were administered. We performed a voxel-based regression analysis of FDG uptake against visual test scores, using the SPM5 software. Performance in the CERAD Drawing test correlated with FDG uptake in the bilateral inferior temporal gyri, bilateral precuneus, right cuneus, right supramarginal gyrus and right middle temporal gyrus, covering areas of the dorsal and ventral visual streams. In contrast, performance in the more complex RBANS Figure Copy test correlated with FDG uptake in the bilateral fusiform gyri, right inferior temporal gyrus, left anterior cingulate gyrus, left parahippocampal gyrus, right middle temporal gyrus and right insula, encompassing the ventral visual stream and areas of higher-level visual processing. The study revealed neuronal networks underlying impaired visual test performance in AD. The extent of involvement of visual and higher-order association cortex increased with greater test complexity. From a clinical point of view, both of these widely used visual tests evaluate the integrity of complementary cortical networks and may contribute complementary information on the integrity of visual processing in AD. PMID:19875130
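    Conceptually, the voxel-based regression reduces to correlating uptake with a behavioral score at every voxel; SPM additionally handles preprocessing, covariates and multiple-comparison correction, which this self-contained sketch with simulated data omits.

```python
# Voxel-wise association between FDG uptake and a behavioral score.
import numpy as np

def voxelwise_corr(scans: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """scans: (n_subjects, n_voxels) FDG uptake; scores: (n_subjects,).
    Returns the Pearson r between uptake and score at every voxel."""
    X = scans - scans.mean(axis=0)
    y = scores - scores.mean()
    num = X.T @ y
    den = np.sqrt((X ** 2).sum(axis=0) * (y ** 2).sum())
    return num / np.maximum(den, 1e-12)

rng = np.random.default_rng(1)
scans = rng.normal(size=(29, 5000))   # 29 patients, flattened voxels
scores = rng.normal(size=29)          # e.g. figure-copy test scores
r = voxelwise_corr(scans, scores)
print("most score-related voxel:", int(np.argmax(r)), "r =", round(float(r.max()), 2))
```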

  19. Basic forest cover mapping using digitized remote sensor data and automated data processing techniques

    NASA Technical Reports Server (NTRS)

    Coggeshall, M. E.; Hoffer, R. M.

    1973-01-01

    Remote sensing equipment and automatic data processing techniques were employed as aids in instituting improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS considered various groups of the four available spectral regions and selected a series of channel combinations; the automatic classification performance of these combinations (for six cover types, including both deciduous and coniferous forest) was tested, analyzed, and compared with automatic classification results obtained from digitized color infrared photography.
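    In the spirit of the LARSYS feature selection processor, the sketch below exhaustively scores every combination of four spectral channels by training-sample classification accuracy, using a simple nearest-class-mean rule as a stand-in for the original classifier; all data are simulated.

```python
# Exhaustive channel-combination selection for a six-class cover-type problem.
import itertools
import numpy as np

def nearest_mean_accuracy(X, y):
    """Resubstitution accuracy of a nearest-class-mean classifier."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    d = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return float((classes[d.argmin(axis=1)] == y).mean())

rng = np.random.default_rng(2)
n_per_class, n_classes = 100, 6
class_means = rng.normal(scale=2.0, size=(n_classes, 4))   # signatures in 4 bands
X = np.concatenate([rng.normal(loc=m, size=(n_per_class, 4)) for m in class_means])
y = np.repeat(np.arange(n_classes), n_per_class)

# Score every subset of the 4 channels, best first.
ranked = sorted(
    ((nearest_mean_accuracy(X[:, list(c)], y), c)
     for k in (1, 2, 3, 4)
     for c in itertools.combinations(range(4), k)),
    reverse=True)
print("best channel combination:", ranked[0][1], "accuracy:", round(ranked[0][0], 3))
```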

  20. Combination of techniques for mapping structural and functional connectivity of soil erosion processes: a case study in a small watershed

    NASA Astrophysics Data System (ADS)

    Seeger, Manuel; Taguas, Encarnación; Brings, Christine; Wirtz, Stefan; Rodrigo Comino, Jesus; Albert, Enrique; Ries, Johannes B.

    2016-04-01

    Sediment connectivity is understood as the interaction of sediment sources, sinks, and the pathways which connect them. During the last decade, research on connectivity has increased, as it is crucial to understand the relation between the sediments observed at a certain point and the processes that brought them to that location. Thus, knowledge of the biogeophysical features involved in sediment connectivity in an area of interest is essential to understand its functioning and to design treatments allowing its management, e.g. to reduce soil erosion. Structural connectivity is given by landscape elements which enable the production, transport and deposition of sediments, whereas functional connectivity is understood here as the variable processes that lead the sediments through a catchment. Therefore, two different levels of connectivity have been considered, which superpose each other according to the catchment's conditions. We studied the different connectivity features in a catchment almost completely covered by an olive grove. It is located south of Córdoba (Spain), close to the city of Puente Genil. The olive plantation is of a low-productivity type. The soil management has been no tillage for the last 9 years. The farmer allows weeds to grow in the lanes, although herbicide treatments and tractor passes are usually applied at the end of spring. Firstly, a detailed mapping of geomorphodynamic features was carried out. We identified spatially distributed areas of increased sheet-wash and crusting, but also areas where rill erosion has led to a high density of rills and small gullies. Within these areas in particular, rock outcrops of up to several m² were mapped, indicating (former) intense erosion processes. In addition, field measurements with different methodologies were applied on infiltration (single-ring infiltrometers, rainfall simulations), soil permeability (Guelph permeameter), interrill erosion (rainfall simulator) and concentrated flow (rill
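    Measurements such as the single-ring infiltrometer runs mentioned above are typically reduced to infiltration parameters; as one hedged example, the sketch below fits Philip's two-term model I(t) = S*sqrt(t) + A*t to simulated cumulative-infiltration data (the data points and units are invented, not from the study).

```python
# Fitting Philip's two-term infiltration model to cumulative infiltration data.
import numpy as np
from scipy.optimize import curve_fit

def philip(t, sorptivity, a_coef):
    """Cumulative infiltration (mm) after time t (min): S*sqrt(t) + A*t."""
    return sorptivity * np.sqrt(t) + a_coef * t

t = np.array([1, 2, 5, 10, 20, 30, 45, 60], dtype=float)           # minutes
I = philip(t, 8.0, 0.6) + np.random.default_rng(3).normal(0, 0.4, t.size)
(sorptivity, a_coef), _ = curve_fit(philip, t, I, p0=(1.0, 0.1))
print(f"S = {sorptivity:.2f} mm/min^0.5, A = {a_coef:.2f} mm/min")
```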