Science.gov

Sample records for accelerated processing map

  1. 77 FR 21991 - Federal Housing Administration (FHA): Multifamily Accelerated Processing (MAP)-Lender and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... URBAN DEVELOPMENT Federal Housing Administration (FHA): Multifamily Accelerated Processing (MAP)--Lender and Underwriter Eligibility Criteria and Credit Watch for MAP Lenders AGENCY: Office of the Assistant... processes for determining lender and underwriter eligibility and tier qualification for MAP...

  2. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU alone, with up to a 22x speedup depending on the computational task. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  3. Acceleration mapping on Consort 5

    NASA Astrophysics Data System (ADS)

    Naumann, Robert J.

    1994-09-01

    The Consort 5 rocket carrying a set of commercial low-gravity experiments experienced a significant side thrust from an apparent burn-through of the second-stage motor just prior to cut-off. The resulting angular momentum could not be removed by the attitude rate control system, thus the payload was left in an uncontrollable rocking/tumbling mode. Although the primary low-gravity emphasis mission requirements could not be met, it was hoped that some science could be salvaged by mapping the acceleration field over the vehicle so that each investigator could correlate his or her results with the acceleration environment at his or her experiment location. This required some detective work to obtain the body rates and moment of inertia ratios required to solve the full set of Euler equations for a tri-axial rigid body. The techniques for acceleration mapping described in this paper may be applicable to other low-gravity emphasis missions.
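The reconstruction described above hinges on solving the torque-free Euler equations for a tri-axial rigid body. The following is a minimal hedged sketch (the inertia ratios and initial body rates are hypothetical, not Consort 5 values):

```python
import numpy as np

def euler_rates(w, I):
    """Torque-free Euler equations for body rates w = (w1, w2, w3) about
    the principal axes, with principal moments of inertia I = (I1, I2, I3)."""
    w1, w2, w3 = w
    I1, I2, I3 = I
    return np.array([(I2 - I3) / I1 * w2 * w3,
                     (I3 - I1) / I2 * w3 * w1,
                     (I1 - I2) / I3 * w1 * w2])

def integrate_rates(w0, I, dt=1e-3, steps=10000):
    """Integrate the body rates with classical 4th-order Runge-Kutta."""
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        k1 = euler_rates(w, I)
        k2 = euler_rates(w + 0.5 * dt * k1, I)
        k3 = euler_rates(w + 0.5 * dt * k2, I)
        k4 = euler_rates(w + dt * k3, I)
        w += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return w

# Hypothetical inertia ratios and initial rates (rad/s) for a tumbling payload.
I = (1.0, 2.0, 3.0)
w0 = (0.1, 2.0, 0.1)   # spin mostly about the unstable intermediate axis
w = integrate_rates(w0, I)
# Sanity invariants of torque-free motion: angular momentum magnitude
# (and rotational kinetic energy) must be conserved by the integration.
L0 = np.linalg.norm(np.multiply(I, w0))
L1 = np.linalg.norm(np.multiply(I, w))
```

Once the body rates ω(t) are known, the acceleration at an experiment location r (ignoring residual translational acceleration) follows from a = ω̇ × r + ω × (ω × r), which is how an acceleration field can be mapped over the vehicle.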

  4. A hybrid short read mapping accelerator

    PubMed Central

    2013-01-01

Background: The rapid growth of short read datasets poses a new challenge to the short read mapping problem in terms of sensitivity and execution speed. Existing methods often use a restrictive error model for computing the alignments to improve speed, whereas more flexible error models are generally too slow for large-scale applications. A number of short read mapping software tools have been proposed. However, designs based on hardware are relatively rare. Field programmable gate arrays (FPGAs) have been successfully used in a number of specific application areas, such as the DSP and communications domains, due to their outstanding parallel data processing capabilities, making them a competitive platform to solve problems that are “inherently parallel”. Results: We present a hybrid system for short read mapping utilizing both FPGA-based hardware and CPU-based software. The computation-intensive alignment and seed generation operations are mapped onto an FPGA. We present a computationally efficient, parallel, block-wise alignment structure (Align Core) to approximate the conventional dynamic programming algorithm. The performance is compared to the multi-threaded CPU-based GASSST and BWA software implementations. For single-end alignment, our hybrid system achieves faster processing speed than GASSST (with similar sensitivity) and BWA (with higher sensitivity); for paired-end alignment, our design achieves slightly worse sensitivity than BWA but a higher processing speed. Conclusions: This paper shows that our hybrid system can effectively accelerate the mapping of short reads to a reference genome based on the seed-and-extend approach. The performance comparison to the GASSST and BWA software implementations under different conditions shows that our hybrid design achieves a high degree of sensitivity and requires less overall execution time with only modest FPGA resource utilization. Our hybrid system design also shows that the performance
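The seed-and-extend strategy mentioned in the conclusions can be sketched in a few lines of software. This is a hedged, minimal pure-Python illustration: exact k-mer seeds plus an ungapped, mismatch-counting extension standing in for the paper's dynamic-programming Align Core, with made-up sequences.

```python
def build_seed_index(reference, k):
    """Index the position of every k-mer (seed) in the reference."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def extend(reference, read, ref_pos, max_mismatches):
    """Ungapped extension: align the whole read at ref_pos and count
    mismatches (a simplified stand-in for the dynamic-programming step)."""
    if ref_pos + len(read) > len(reference):
        return None
    mismatches = sum(1 for a, b in zip(read, reference[ref_pos:ref_pos + len(read)])
                     if a != b)
    return ref_pos if mismatches <= max_mismatches else None

def map_read(reference, read, index, k, max_mismatches=2):
    """Seed-and-extend: look up each k-mer of the read in the index,
    then verify the full alignment the hit implies."""
    hits = set()
    for offset in range(len(read) - k + 1):
        for ref_pos in index.get(read[offset:offset + k], []):
            start = ref_pos - offset
            if start >= 0 and extend(reference, read, start, max_mismatches) is not None:
                hits.add(start)
    return sorted(hits)

reference = "ACGTACGTTAGCCGGATCGATCGTACGT"
index = build_seed_index(reference, k=4)
print(map_read(reference, "TAGCCGGA", index, k=4))  # -> [8]
```

The FPGA's role in the real system is to run many such seed lookups and extensions in parallel; the logic above only shows the sequential skeleton.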

  5. Interstellar Mapping and Acceleration Probe (IMAP)

    NASA Astrophysics Data System (ADS)

    Schwadron, Nathan

    2016-04-01

Our piece of cosmic real estate, the heliosphere, is the domain of all human existence - an astrophysical case history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and, in concert with Voyager 1 and Voyager 2, is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (~5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX and INCA, will discover the substructure of the IBEX ribbon and will reveal global maps of our heliosphere in unprecedented resolution. The remarkable synergy between IMAP, Voyager 1, and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. The "A" in IMAP refers to acceleration of energetic particles. With its combination of highly sensitive pickup and suprathermal ion sensors, IMAP will provide the species and spectral coverage, as well as unprecedented temporal resolution, to associate emerging suprathermal tails with interplanetary structures and discover the underlying physical acceleration processes. These key measurements will provide what has been a critical missing piece, the suprathermal seed particles, in our understanding of particle acceleration to high

  6. Accelerator simulation of astrophysical processes

    NASA Technical Reports Server (NTRS)

    Tombrello, T. A.

    1983-01-01

    Phenomena that involve accelerated ions in stellar processes that can be simulated with laboratory accelerators are described. Stellar evolutionary phases, such as the CNO cycle, have been partially explored with accelerators, up to the consumption of He by alpha particle radiative capture reactions. Further experimentation is indicated on reactions featuring N-13(p,gamma)O-14, O-15(alpha, gamma)Ne-19, and O-14(alpha,p)F-17. Accelerated beams interacting with thin foils produce reaction products that permit a determination of possible elemental abundances in stellar objects. Additionally, isotopic ratios observed in chondrites can be duplicated with accelerator beam interactions and thus constraints can be set on the conditions producing the meteorites. Data from isotopic fractionation from sputtering, i.e., blasting surface atoms from a material using a low energy ion beam, leads to possible models for processes occurring in supernova explosions. Finally, molecules can be synthesized with accelerators and compared with spectroscopic observations of stellar winds.

  7. Diffusive Shock Acceleration and Reconnection Acceleration Processes

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; Hunana, P.; Mostafavi, P.; Le Roux, J. A.; Li, Gang; Webb, G. M.; Khabarova, O.; Cummings, A.; Stone, E.; Decker, R.

    2015-12-01

Shock waves, as shown by simulations and observations, can generate high levels of downstream vortical turbulence, including magnetic islands. We consider a combination of diffusive shock acceleration (DSA) and downstream magnetic-island-reconnection-related processes as an energization mechanism for charged particles. Observations of electron and ion distributions downstream of interplanetary shocks and the heliospheric termination shock (HTS) are frequently inconsistent with the predictions of classical DSA. We utilize a recently developed transport theory for charged particles propagating diffusively in a turbulent region filled with contracting and reconnecting plasmoids and small-scale current sheets. Particle energization associated with the anti-reconnection electric field, a consequence of magnetic island merging, and magnetic island contraction, are considered. For the former only, we find that (i) the spectrum is a hard power law in particle speed, and (ii) the downstream solution is constant. For downstream plasmoid contraction only, (i) the accelerated spectrum is a hard power law in particle speed; (ii) the particle intensity for a given energy peaks downstream of the shock, and the distance to the peak location increases with increasing particle energy; and (iii) the particle intensity amplification for a particular particle energy, f(x, c/c₀)/f(0, c/c₀), is not 1, as predicted by DSA, but increases with increasing particle energy. The general solution combines both the reconnection-induced electric field and plasmoid contraction. The energetic particle intensity profile observed by Voyager 2 downstream of the HTS appears to support a particle acceleration mechanism that combines both DSA and magnetic-island-reconnection-related processes.
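For reference, the classical test-particle DSA prediction against which these results are compared ties the accelerated power-law index to the shock compression ratio alone: f(p) ∝ p^(-q) with q = 3r/(r - 1). A quick numerical reminder (a textbook relation, not a result of this paper):

```python
def dsa_spectral_index(r):
    """Classical test-particle DSA: the downstream distribution is a
    power law f(p) ~ p**(-q) with q = 3*r/(r - 1) for compression ratio r."""
    return 3.0 * r / (r - 1.0)

# A strong (high Mach number) gas shock has r = 4 -> the canonical q = 4.
print(dsa_spectral_index(4.0))   # -> 4.0
# Weaker shocks (smaller r) give steeper, softer spectra.
print(dsa_spectral_index(2.5))   # -> 5.0
```

Departures from this r-only dependence, such as the energy-dependent intensity amplification above, are precisely what motivate adding the reconnection-related terms.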

  8. The US Muon Accelerator Program (MAP)

    SciTech Connect

    Bross, Alan D.; /Fermilab

    2010-12-01

    The US Department of Energy Office of High Energy Physics has recently approved a Muon Accelerator Program (MAP). The primary goal of this effort is to deliver a Design Feasibility Study for a Muon Collider after a 7 year R&D program. This paper presents a brief physics motivation for, and the description of, a Muon Collider facility and then gives an overview of the program. I will then describe in some detail the primary components of the effort.

  9. Accelerated stochastic diffusion processes

    NASA Astrophysics Data System (ADS)

    Garbaczewski, Piotr

    1990-07-01

We give a purely probabilistic demonstration that all effects of non-random (external, conservative) forces on the diffusion process can be encoded in the Nelson ansatz for the second Newton law. Each random path of the process carries, together with its probabilistic weight, a complex-valued phase-accumulation weight. Summation (integration) of these weights over random paths leads to the transition probability density and the transition amplitude, respectively, between two spatial points in a given time interval. The Bohm-Vigier, Fenyes-Nelson-Guerra and Feynman descriptions of quantum particle behaviour are in fact equivalent.

  10. ESS Accelerator Cryoplant Process Design

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Arnold, P.; Hees, W.; Hildenbeutel, J.; Weisend, J. G., II

    2015-12-01

The European Spallation Source (ESS) is a neutron-scattering facility being built with extensive international collaboration in Lund, Sweden. The ESS accelerator will deliver protons with 5 MW of power to the target at 2.0 GeV, with a nominal current of 62.5 mA. The superconducting part of the accelerator is about 300 meters long and contains 43 cryomodules. The ESS accelerator cryoplant (ACCP) will provide the cooling for the cryomodules and for the cryogenic distribution system that delivers the helium to them. The ACCP will cover three cryogenic circuits: bath cooling for the cavities at 2 K, the thermal shields at around 40 K, and power coupler thermalisation with 4.5 K forced helium cooling. The open competitive bid for the ACCP took place in 2014, with Linde Kryotechnik AG selected as the vendor. This paper summarizes progress in ACCP development and engineering. The current status is presented, including final cooling requirements, preliminary process design, system configuration, machine concept and layout, main parameters and features, the solution for the acceptance tests, and an exergy and efficiency analysis.

  11. Maximal acceleration and radiative processes

    NASA Astrophysics Data System (ADS)

    Papini, Giorgio

    2015-08-01

We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜_m = 2mc³/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜_m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with those for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.

  12. One map policy (OMP) implementation strategy to accelerate mapping of regional spatial planning (RTRW) in Indonesia

    NASA Astrophysics Data System (ADS)

    Hasyim, Fuad; Subagio, Habib; Darmawan, Mulyanto

    2016-06-01

The preparation of spatial planning documents requires basic geospatial information and thematic accuracy. These issues have recently become important because spatial planning maps are an integral attachment of the regional act draft on spatial planning (PERDA). The geospatial information needed to prepare spatial planning maps falls into two major groups: (i) basic geospatial information (IGD), consisting of Indonesian topographic maps (RBI), coastal and marine environmental maps (LPI), and the geodetic control network; and (ii) thematic geospatial information (IGT). Currently, most local governments in Indonesia have not finished their regulation drafts on spatial planning due to several constraints, including technical aspects. Constraints on spatial planning mapping include the availability of large-scale basic geospatial information, the availability of mapping guidelines, and human resources. The ideal conditions to be achieved for spatial planning maps are: (i) the availability of updated geospatial information at the scales needed for spatial planning maps, (ii) a mapping guideline for spatial planning to support local governments in completing their PERDA, and (iii) capacity building of local government human resources to complete spatial planning maps. The OMP strategies formulated to achieve these conditions are: (i) accelerating IGD at scales of 1:50,000, 1:25,000 and 1:5,000; (ii) accelerating the mapping and integration of thematic geospatial information (IGT) through stocktaking of availability and mapping guidelines; (iii) developing mapping guidelines and disseminating spatial utilization; and (iv) training human resources in mapping technology.

  13. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Kooney, Alex; Bjorkman, Gerry; Russell, Carolyn; Smelser, Jerry (Technical Monitor)

    2002-01-01

    In FSW (friction stir welding), the weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule in conjunction with process control ensures a consistent and predictable weld performance.
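Once a process envelope has been drawn in the RPM-vs-IPM plane, checking whether a candidate weld schedule falls inside it is a point-in-polygon test. A minimal sketch (the envelope vertices and candidate schedules below are hypothetical, purely for illustration):

```python
def inside_envelope(point, envelope):
    """Ray-casting point-in-polygon test: is a candidate weld schedule
    (rpm, ipm) inside the acceptable process envelope?"""
    x, y = point
    inside = False
    n = len(envelope)
    for i in range(n):
        x1, y1 = envelope[i]
        x2, y2 = envelope[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical envelope in (RPM, IPM) space and candidate schedules.
envelope = [(200, 4), (400, 4), (450, 8), (350, 12), (220, 10)]
print(inside_envelope((320, 7), envelope))   # -> True  (near the sweet spot)
print(inside_envelope((500, 5), envelope))   # -> False (outside the envelope)
```

In practice the envelope boundary is established empirically from weld trials; the containment test is only the bookkeeping step.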

  14. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Bjorkman, Gerry; Kooney, Alex; Russell, Carolyn

    2003-01-01

The weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule in conjunction with process control ensures a consistent and predictable weld performance.

  15. Auroral plasma acceleration processes at Mars

    NASA Astrophysics Data System (ADS)

    Lundin, R.; Barabash, S.; Winningham, D.

    2012-09-01

Following the first Mars Express (MEX) findings of auroral plasma acceleration above Martian magnetic anomalies [1, 2], a more detailed analysis is carried out regarding the physical processes that lead to plasma acceleration, and how they connect to the dynamo and energy-source regions. The ultimate energy source for Martian plasma acceleration is the solar wind. The question is, by what mechanisms are solar wind energy and momentum transferred into the magnetic flux tubes that connect to Martian magnetic anomalies? What are the key plasma acceleration processes that lead to aurora and the associated ionospheric plasma outflow from Mars? The experimental setup on MEX limits our capability to carry out "auroral physics" at Mars. However, with knowledge acquired at Earth, we may draw some analogies with terrestrial auroral physics. Using the limited data set available, consisting primarily of ASPERA and MARSIS data, an interesting picture of aurora at Mars emerges. There are strong similarities between accelerated/heated electrons and ions in the nightside high-altitude region above Mars and the electron/ion acceleration above terrestrial discrete aurora. Nearly monoenergetic downgoing electrons are observed in conjunction with nearly monoenergetic upgoing ions. Monoenergetic counterstreaming ions and electrons are the signature of plasma acceleration in quasi-static electric fields. However, compared to the Earth's aurora, with auroral processes guided by a dipole field, aurora at Mars is expected to form complex patterns in the multipole environment governed by the Martian crustal magnetic field regions. Moreover, temporal/spatial scales are different at Mars. It is therefore of interest to mention another common characteristic that exists for Earth and Mars: plasma acceleration by waves. Low-frequency Alfvén waves are a very powerful means of plasma acceleration in the Earth's magnetosphere. Low-frequency waves associated with plasma acceleration

  16. Accelerated image processing on FPGAs.

    PubMed

    Draper, Bruce A; Beveridge, J Ross; Böhm, A P Willem; Ross, Charles; Chawathe, Monica

    2003-01-01

    The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III. PMID:18244709

  17. Experiment specific processing of residual acceleration data

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. I. D.

    1992-01-01

    To date, most Spacelab residual acceleration data collection projects have resulted in data bases that are overwhelming to the investigator of low-gravity experiments. This paper introduces a simple passive accelerometer system to measure low-frequency accelerations. Model responses for experiments using actual acceleration data are produced and correlations are made between experiment response and the accelerometer time history in order to test the idea that recorded acceleration data and experimental responses can be usefully correlated. Spacelab 3 accelerometer data are used as input to a variety of experiment models, and sensitivity limits are obtained for particular experiment classes. The modeling results are being used to create experiment-specific residual acceleration data processing schemes for interested investigators.

  18. Symplectic maps and chromatic optics in particle accelerators

    SciTech Connect

    Cai, Yunhai

    2015-07-06

We have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Lastly, the cell becomes nearly perfect up to third order in the momentum deviation.

  19. Symplectic maps and chromatic optics in particle accelerators

    DOE PAGESBeta

    Cai, Yunhai

    2015-07-06

Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to third order in the momentum deviation.


  1. Road Map for Studies to Produce Consistent and High Performance SRF Accelerator Structures

    SciTech Connect

    Ganapati Rao Myneni; John F. O’Hanlon

    2007-06-20

Superconducting Radio Frequency (SRF) accelerator structures made from high-purity niobium are becoming the technological choice for a large number of future accelerators and energy recovery linacs (ERLs). Most requirements of the presently planned accelerators and ERLs will be met, with some effort, by current SRF technology, where accelerating gradients of about 20 MV/m can be produced on a routine basis with an acceptable yield. However, the XFEL at DESY and the planned ILC require accelerating gradients of more than 28 MV/m and 35 MV/m, respectively. At the recent ILC meeting at Snowmass (2005), concern was expressed regarding the wide spread in the achieved accelerating gradients and the relatively low yields. To obtain accelerating gradients of 35 MV/m in SRF accelerator structures consistently, a deeper understanding of the causes of the spread has to be gained and advances have to be made in many scientific and high-technology fields, including materials, surface and vacuum sciences, the application of reliable processes and procedures that provide contamination-free surfaces and avoid recontamination, and cryogenics-related technologies. In this contribution, a road map for the studies needed to produce consistent, high-performance SRF accelerator structures, from the required materials development to clean, non-recontaminating processes and procedures, is presented.

  2. Process mapping: A user-friendly tool for process improvement

    SciTech Connect

    Carson, M.L.; Levine, L.O.

    1993-09-01

    Process maps aid administrative process improvement efforts by documenting processes in a rigorous yet understandable way. Icons, graphics, and text support process documentation, analysis, and improvement.

  3. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is generated directly from the intensity image. A sequence of gamma map processing steps is then performed to generate a channel-wise gamma map. By mapping through the estimated gamma, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the image can be virtually expanded.
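A per-pixel gamma map generalizes fixed-gamma correction: each pixel is raised to its own exponent derived from the local intensity. The sketch below is a simplified construction under assumed design choices; the base-map formula is ours, for illustration, not the authors' GMP pipeline.

```python
import numpy as np

def gamma_map_enhance(img):
    """Per-pixel gamma mapping on an RGB image with values in [0, 1].
    Dark regions get gamma < 1 (brightened) and bright regions gamma > 1
    (compressed), which stretches local contrast.
    The base-map formula is an illustrative choice, not the paper's."""
    intensity = img.mean(axis=2, keepdims=True)     # simple luminance proxy
    base_gamma = 2.0 ** (2.0 * intensity - 1.0)     # maps [0, 1] -> [0.5, 2.0]
    return np.clip(img, 0.0, 1.0) ** base_gamma

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
out = gamma_map_enhance(img)
```

A channel-wise variant would compute one gamma map per color channel instead of sharing the intensity-derived map across R, G, and B.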

  4. Radiative processes of uniformly accelerated entangled atoms

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.

    2016-05-01

    We study radiative processes of uniformly accelerated entangled atoms, interacting with an electromagnetic field prepared in the Minkowski vacuum state. We discuss the structure of the rate of variation of the atomic energy for two atoms traveling in different hyperbolic world lines. We identify the contributions of vacuum fluctuations and radiation reaction to the generation of entanglement as well as to the decay of entangled states. Our results resemble the situation in which two inertial atoms are coupled individually to two spatially separated cavities at different temperatures. In addition, for equal accelerations we obtain that one of the maximally entangled antisymmetric Bell state is a decoherence-free state.

  5. Accelerated erosion: Process, problems, and prognosis

    NASA Astrophysics Data System (ADS)

    Toy, Terrence J.

    1982-10-01

Soil erosion may well be the world's most serious environmental problem. A variety of human activities accelerates the rate of this geomorphic process by altering the natural characteristics of a site. The problems arising from accelerated erosion and subsequent deposition provide numerous research opportunities to both physical and social scientists, whose cooperation will be necessary in our search for solutions. Factors of demography, economics, geography, and historical inertia to change suggest that man-induced erosion is likely to continue into the future unless this trend is abated by timely and forceful action.

  6. Accelerating multidimensional NMR and MRI experiments using iterated maps

    NASA Astrophysics Data System (ADS)

    Barrett, Sean; Frey, Merideth; Sethna, Zachary; Manley, Gregory; Sengupta, Suvrajit; Zilm, Kurt; Loria, J. Patrick

    2014-03-01

    Techniques that accelerate data acquisition without sacrificing the advantages of fast Fourier transform (FFT) reconstruction could benefit a wide variety of magnetic resonance experiments. Here we discuss an approach for reconstructing multidimensional nuclear magnetic resonance (NMR) spectra and MR images from sparsely-sampled time domain data, by way of iterated maps. This method exploits the computational speed of the FFT algorithm and is done in a deterministic way, by reformulating any a priori knowledge or constraints into projections, and then iterating. In this paper we explain the motivation behind this approach, the formulation of the specific projections, the benefits of using a `QUasi-Even Sampling, plus jiTter' (QUEST) sampling schedule, and various methods for handling noise. Applying the iterated maps method to real 2D NMR and 3D MRI of solids data, we show that it is flexible and robust enough to handle large data sets with significant noise and artifacts.

  7. Accelerating multidimensional NMR and MRI experiments using iterated maps

    NASA Astrophysics Data System (ADS)

    Frey, Merideth A.; Sethna, Zachary M.; Manley, Gregory A.; Sengupta, Suvrajit; Zilm, Kurt W.; Loria, J. Patrick; Barrett, Sean E.

    2013-12-01

    Techniques that accelerate data acquisition without sacrificing the advantages of fast Fourier transform (FFT) reconstruction could benefit a wide variety of magnetic resonance experiments. Here we discuss an approach for reconstructing multidimensional nuclear magnetic resonance (NMR) spectra and MR images from sparsely-sampled time domain data, by way of iterated maps. This method exploits the computational speed of the FFT algorithm and is done in a deterministic way, by reformulating any a priori knowledge or constraints into projections, and then iterating. In this paper we explain the motivation behind this approach, the formulation of the specific projections, the benefits of using a ‘QUasi-Even Sampling, plus jiTter' (QUEST) sampling schedule, and various methods for handling noise. Applying the iterated maps method to real 2D NMR and 3D MRI of solids data, we show that it is flexible and robust enough to handle large data sets with significant noise and artifacts.
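The iterate-between-projections idea can be illustrated with a generic sketch: alternately project onto a sparsity constraint in the Fourier domain and onto consistency with the measured time-domain samples. This is a hedged, simplified stand-in for the method described above (random sampling rather than the QUEST schedule, and hard thresholding as the a priori constraint):

```python
import numpy as np

def iterated_map_reconstruct(samples, mask, n_keep, n_iter=200):
    """Alternate two projections: (1) sparsity - keep only the n_keep
    largest-magnitude Fourier coefficients; (2) data consistency -
    re-impose the measured time-domain samples."""
    x = samples.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        keep = np.argsort(np.abs(X))[-n_keep:]
        X_sparse = np.zeros_like(X)
        X_sparse[keep] = X[keep]
        x = np.fft.ifft(X_sparse).real
        x[mask] = samples[mask]          # enforce the measured points
    return x

rng = np.random.default_rng(1)
n = 128
t = np.arange(n)
# A frequency-sparse test signal (two tones), sampled at ~half the points.
truth = np.cos(2 * np.pi * 5 * t / n) + np.cos(2 * np.pi * 17 * t / n)
mask = rng.random(n) < 0.5
samples = np.where(mask, truth, 0.0)
recon = iterated_map_reconstruct(samples, mask, n_keep=4)
```

Because every step is an FFT plus an element-wise projection, the per-iteration cost stays at O(n log n), which is the computational appeal the abstract points to.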

  8. Processes in high energy heavy ion acceleration

    NASA Astrophysics Data System (ADS)

    Dinev, D.

    2009-03-01

A review is presented of processes that occur in high energy heavy ion acceleration by synchrotrons and colliders and that are essential for accelerator performance. Interactions of ions with residual gas molecules/atoms and with stripping foils that deliberately intercept the ion trajectories are described in detail. These interactions limit both the beam intensity and the beam quality. The processes of electron loss and capture lie at the root of heavy ion charge exchange injection. The review pays special attention to the ion-induced vacuum pressure instability, which is one of the main factors limiting the beam intensity. The intrabeam scattering phenomenon, which restricts the average luminosity of ion colliders, is discussed. Some processes in nuclear interactions of ultra-relativistic heavy ions that could be dangerous for the performance of ion colliders are presented in the last chapter.

  9. Process Mapping: Tools, Techniques, & Critical Success Factors.

    ERIC Educational Resources Information Center

    Kalman, Howard K.

    2002-01-01

    Explains process mapping as an analytical tool and a process intervention that performance technologists can use to improve human performance by reducing error variance. Highlights include the benefits of process mapping and critical success factors, including organizational readiness, time commitment by participants, and the availability of a…

  10. Mapping of acceleration field in FSA configuration of a LIS

    NASA Astrophysics Data System (ADS)

    Nassisi, V.; Delle Side, D.; Monteduro, L.; Giuffreda, E.

    2016-05-01

    The Front Surface Acceleration (FSA) obtained in Laser Ion Source (LIS) systems is one of the most interesting methods to produce accelerated protons and ions. We implemented a LIS to study the ion acceleration mechanisms. In this device, the plasma is generated by a KrF excimer laser operating at 248 nm, focused on an aluminum target mounted inside a vacuum chamber. The laser energy was varied from 28 to 56 mJ/pulse and focused onto the target by a 15 cm focal lens, forming a spot of 0.05 cm in diameter. A high-impedance resistive probe was used to map the electric potential inside the chamber, near the target. To avoid the effect of plasma particles striking the probe, a PVC shield was used. Particles inevitably struck the shield, but their influence on the probe was negligible. We detected the time-resolved profiles of the electric potential, moving the probe from 4.7 cm to 6.2 cm with respect to the main target axis, while the height of the shield above the surface normal at the target symmetry center was about 3 cm. The corresponding electric field can be very important to elucidate the phenomenon responsible for the accelerating field formation. The field depends on the distance x as 1/x^1.85 at 28 mJ laser energy, 1/x^1.77 at 49 mJ, and 1/x^1.74 at 56 mJ. The dependence of the field thus changes only slightly across the three cases; the power-law exponent decreases with increasing laser energy. It is possible to hypothesize that the electric field strength stems from the contribution of an electrostatic and an induced field. Considering exclusively the induced field at the center of the created plasma, a strength of some tens of kV/m could be reached, which could deliver ions up to 1 keV of energy. These values were supported by measurements performed with an electrostatic barrier.
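
    Power-law exponents of the kind quoted above are typically extracted by a straight-line fit in log-log space. A small sketch with synthetic data (the positions and amplitudes below are illustrative, not the measured values):

```python
import numpy as np

# Hypothetical probe positions (cm) and field strengths (arbitrary units),
# following the 28 mJ trend E(x) ~ 1/x^1.85 reported in the abstract.
x = np.array([4.7, 5.0, 5.3, 5.6, 5.9, 6.2])
E = 100.0 * x**-1.85

# Linear fit in log-log space: log E = log A - n log x, so the slope is -n.
slope, intercept = np.polyfit(np.log(x), np.log(E), 1)
print(f"fitted exponent: {-slope:.2f}")  # → 1.85
```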

  11. Accelerating an iterative process by explicit annihilation

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.; Buning, P. G.

    1983-01-01

    A slowly convergent stationary iterative process can be accelerated by explicitly annihilating (i.e., eliminating) the dominant eigenvector component of the error. The dominant eigenvalue or complex pair of eigenvalues can be estimated from the solution during the iteration. The corresponding eigenvector or complex pair of eigenvectors can then be annihilated by applying an explicit Richardson process over the basic iterative method. This can be done entirely in real arithmetic by analytically combining the complex conjugate annihilation steps. The technique is applied to an implicit algorithm for the calculation of two dimensional steady transonic flow over a circular cylinder using the equations of compressible inviscid gas dynamics. This demonstrates the use of explicit annihilation on a nonlinear problem.
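
    The annihilation step can be sketched for the simplest case of a single real dominant eigenvalue (a toy weighted-Jacobi example on a 1D Laplacian, not the authors' transonic-flow code):

```python
import numpy as np

# Weighted Jacobi on a small 1D Laplacian system, accelerated by explicitly
# annihilating the dominant (real) eigenvector component of the error.
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.linspace(1.0, 2.0, n)
x_exact = np.linalg.solve(A, b)

omega = 0.8
def jacobi(x):
    return x + omega * (b - A @ x) / 2.0   # D = 2I for this Laplacian

xs = [np.zeros(n)]
for _ in range(15):
    xs.append(jacobi(xs[-1]))

# Estimate the dominant eigenvalue of the iteration matrix from successive
# differences d_k = x_{k+1} - x_k, which satisfy d_{k+1} ≈ lam * d_k.
d1, d2 = xs[-2] - xs[-3], xs[-1] - xs[-2]
lam = (d2 @ d1) / (d1 @ d1)

# Annihilation (an explicit Richardson step): if e_{k+1} ≈ lam * e_k, then
# (x_{k+1} - lam * x_k) / (1 - lam) removes the dominant error component.
x_acc = (xs[-1] - lam * xs[-2]) / (1.0 - lam)

err_plain = np.linalg.norm(xs[-1] - x_exact)
err_acc = np.linalg.norm(x_acc - x_exact)
print(err_plain, err_acc)
```

For a complex-conjugate pair of dominant eigenvalues, the paper's approach combines the two conjugate annihilation steps analytically so the whole computation stays in real arithmetic.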

  12. Accelerating an iterative process by explicit annihilation

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.; Buning, P. G.

    1985-01-01

    A slowly convergent stationary iterative process can be accelerated by explicitly annihilating (i.e., eliminating) the dominant eigenvector component of the error. The dominant eigenvalue or complex pair of eigenvalues can be estimated from the solution during the iteration. The corresponding eigenvector or complex pair of eigenvectors can then be annihilated by applying an explicit Richardson process over the basic iterative method. This can be done entirely in real arithmetic by analytically combining the complex conjugate annihilation steps. The technique is applied to an implicit algorithm for the calculation of two dimensional steady transonic flow over a circular cylinder using the equations of compressible inviscid gas dynamics. This demonstrates the use of explicit annihilation on a nonlinear problem.

  13. Particle acceleration processes in the solar corona

    NASA Astrophysics Data System (ADS)

    Melrose, D. B.

    A review is presented of some theoretical ideas on particle acceleration associated with solar flares. Various acceleration mechanisms are discussed, including forms of stochastic acceleration, shock drift acceleration, resonant acceleration, diffusive acceleration at shock fronts, acceleration during magnetic reconnection, and acceleration by parallel electric fields in double layers or electrostatic shocks. Particular attention is given to first phase acceleration of electrons in solar flares, which is usually attributed to bulk energization of electrons. It is proposed that the dissipation cannot be due to classical resistivity and entails anomalous resistivity or hyperresistivity, such as in multiple double layers. A model is developed for bulk energization due to the continual formation and decay of weak double layers.

  14. Supporting Learning Process with Concept Map Scripts.

    ERIC Educational Resources Information Center

    Rautama, Erkki; Sutinen, Erkki; Tarhio, Jorma

    1997-01-01

    Describes a framework for computer-aided concept mapping that provides the means to easily trace the learning process. Presents the construction of a concept map as a script which consists of elementary operations. This approach can be applied in presentation tools, in evaluating the learning process, and in computer-aided learning. (Author/AEF)

  15. Speech processing using maximum likelihood continuity mapping

    SciTech Connect

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  16. Speech processing using maximum likelihood continuity mapping

    SciTech Connect

    Hogden, J.E.

    2000-04-18

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  17. Mapping iterative medical imaging algorithm on cell accelerator.

    PubMed

    Xu, Meilian; Thulasiraman, Parimala

    2011-01-01

    Algebraic reconstruction techniques require about half the number of projections needed by Fourier backprojection methods, which makes them safer in terms of required radiation dose. The algebraic reconstruction technique (ART) and its variant OS-SART (ordered subset simultaneous ART) provide faster convergence with comparatively good image quality. However, the prohibitively long processing time of these techniques prevents their adoption in commercial CT machines. Parallel computing is one solution to this problem. With the advent of heterogeneous multicore architectures that exploit data-parallel applications, medical imaging algorithms such as OS-SART can be studied to produce increased performance. In this paper, we map OS-SART onto the cell broadband engine (Cell BE). We effectively use the architectural features of the Cell BE to provide an efficient mapping. The Cell BE consists of one PowerPC processor element (PPE) and eight SIMD coprocessors known as synergistic processor elements (SPEs). The limited memory storage on each of the SPEs makes the mapping challenging. Therefore, we present optimization techniques to efficiently map the algorithm onto the Cell BE for improved performance over the CPU version. We compare the performance of our proposed algorithm on the Cell BE to that of a Sun Fire x4600, a shared memory machine. The Cell BE is five times faster than the AMD Opteron dual-core processor. The speedup of the algorithm on the Cell BE increases with the number of SPEs. We also experiment with various parameters, such as the number of subsets, number of processing elements, and number of DMA transfers between main memory and local memory, that impact the performance of the algorithm. PMID:21922018
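
    The flavor of the underlying update can be seen in a CPU sketch of a simultaneous SART-style iteration (an illustration only, not the paper's Cell BE implementation; the matrix, sizes, and relaxation factor are invented for the example):

```python
import numpy as np

# Simultaneous SART update: x += A^T [(b - A x) / row_sums] / col_sums.
rng = np.random.default_rng(2)
n_rays, n_pix = 24, 8
A = rng.random((n_rays, n_pix))       # hypothetical ray-weight (system) matrix
x_true = rng.random(n_pix)
b = A @ x_true                        # noiseless projection data

row_sums = A.sum(axis=1)              # per-ray normalization weights
col_sums = A.sum(axis=0)              # per-pixel normalization weights
x = np.zeros(n_pix)
for _ in range(5000):
    residual = (b - A @ x) / row_sums
    x += (A.T @ residual) / col_sums  # relaxation factor of 1.0

print(np.linalg.norm(A @ x - b))
```

OS-SART applies the same update to ordered subsets of the rays in turn, which is what makes the per-subset work small enough to distribute across coprocessors with limited local memory.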

  18. Mapping Iterative Medical Imaging Algorithm on Cell Accelerator

    PubMed Central

    Xu, Meilian; Thulasiraman, Parimala

    2011-01-01

    Algebraic reconstruction techniques require about half the number of projections needed by Fourier backprojection methods, which makes them safer in terms of required radiation dose. The algebraic reconstruction technique (ART) and its variant OS-SART (ordered subset simultaneous ART) provide faster convergence with comparatively good image quality. However, the prohibitively long processing time of these techniques prevents their adoption in commercial CT machines. Parallel computing is one solution to this problem. With the advent of heterogeneous multicore architectures that exploit data-parallel applications, medical imaging algorithms such as OS-SART can be studied to produce increased performance. In this paper, we map OS-SART onto the cell broadband engine (Cell BE). We effectively use the architectural features of the Cell BE to provide an efficient mapping. The Cell BE consists of one PowerPC processor element (PPE) and eight SIMD coprocessors known as synergistic processor elements (SPEs). The limited memory storage on each of the SPEs makes the mapping challenging. Therefore, we present optimization techniques to efficiently map the algorithm onto the Cell BE for improved performance over the CPU version. We compare the performance of our proposed algorithm on the Cell BE to that of a Sun Fire x4600, a shared memory machine. The Cell BE is five times faster than the AMD Opteron dual-core processor. The speedup of the algorithm on the Cell BE increases with the number of SPEs. We also experiment with various parameters, such as the number of subsets, number of processing elements, and number of DMA transfers between main memory and local memory, that impact the performance of the algorithm. PMID:21922018

  19. Details and justifications for the MAP concept specification for acceleration above 63 GeV

    SciTech Connect

    Berg, J. Scott

    2014-02-28

    The Muon Accelerator Program (MAP) requires a concept specification for each of the accelerator systems. The muon accelerators will bring the beam from a total energy of 63 GeV to the maximum energy that will fit on the Fermilab site. Justifications and supporting references are included, providing more detail than will appear in the concept specification itself.

  20. Interstellar Mapping and Acceleration Probe (IMAP) - Its Time Has Come!

    NASA Astrophysics Data System (ADS)

    Schwadron, N.; Kasper, J. C.; Mewaldt, R. A.; Moebius, E.; Opher, M.; Spence, H. E.; Zurbuchen, T.

    2014-12-01

    Our piece of cosmic real estate, the heliosphere, is the domain of all human existence -- an astrophysical case history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and, in concert with Voyager 1 and Voyager 2, is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. The enigmatic IBEX ribbon is an unanticipated discovery demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX, will discover the substructure of the IBEX ribbon and will reveal in unprecedented resolution global maps of our heliosphere. The remarkable synergy between IMAP, Voyager 1, and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. Voyager 2 moves outward in the vicinity of the IBEX ribbon, and its plasma measurements will create singular opportunities for discovery in the context of IMAP's global measurements. IMAP, like ACE before it, will be a keystone of the Heliophysics System Observatory by providing comprehensive cosmic ray, energetic particle, pickup ion, suprathermal ion, neutral atom, solar wind, solar wind heavy ion, and magnetic field observations to diagnose the changing space environment and understand the fundamental origins of particle acceleration. Thus, IMAP is a mission whose time has come. IMAP is the highest ranked next Solar Terrestrial Probe in the Decadal

  1. Accelerating sparse linear algebra using graphics processing units

    NASA Astrophysics Data System (ADS)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-per-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
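
    A concrete example of a construct that "maps well" to the GPU is the sparse matrix-vector product in compressed sparse row (CSR) form: each row's dot product is independent, so rows can be processed in parallel. A generic CPU sketch (not the authors' code):

```python
import numpy as np

# CSR sparse matrix-vector product. On a GPU, each row (or block of rows)
# would be assigned to its own thread or thread block.
def csr_matvec(data, indices, indptr, x):
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):                 # independent per-row work
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# 3x3 example: [[4, 0, 1], [0, 3, 0], [2, 0, 5]] in CSR form.
data    = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
print(csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0])))  # → [5. 3. 7.]
```

By contrast, the symbolic/graph-theoretic phases of a direct solver branch heavily and expose little uniform parallelism, which is why the hybrid model leaves them on the CPU.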

  2. Mapping the Collaborative Research Process

    ERIC Educational Resources Information Center

    Kochanek, Julie Reed; Scholz, Carrie; Garcia, Alicia N.

    2015-01-01

    Despite significant federal investments in the production of high-quality education research, the direct use of that research in policy and practice is not evident. Some education researchers are increasingly employing collaborative research models that use structures and processes to integrate practitioners into the research process in an effort…

  3. TAC Proton Accelerator Facility: The Status and Road Map

    SciTech Connect

    Algin, E.; Akkus, B.; Caliskan, A.; Yilmaz, M.; Sahin, L.

    2011-06-28

    The Proton Accelerator (PA) Project is at a stage of development, working towards a Technical Design Report within the framework of the larger-scale Turkish Accelerator Center (TAC) Project. The project is supported by the Turkish State Planning Organization. The PA facility will be constructed in a series of stages, including a 3 MeV test stand, a 55 MeV linac which can be extended to 100+ MeV, and then a full 1-3 GeV proton synchrotron or superconducting linac. In this article, the science applications, an overview, and the current status of the PA Project are given.

  4. Granger-causality maps of diffusion processes

    NASA Astrophysics Data System (ADS)

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A.

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
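
    In the linear framework mentioned above, Granger causality reduces to comparing residual variances of restricted and full autoregressive models. A minimal sketch (the coupled process below is invented for illustration, not from the paper):

```python
import numpy as np

# Two coupled processes: x drives y with a one-step lag, but not vice versa.
rng = np.random.default_rng(3)
T = 5000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, regressors):
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return (target - regressors @ beta).var()

ones = np.ones(T - 1)

# GC(x -> y): does adding past x improve the prediction of y?
restricted = residual_var(y[1:], np.column_stack([ones, y[:-1]]))
full = residual_var(y[1:], np.column_stack([ones, y[:-1], x[:-1]]))
gc_x_to_y = np.log(restricted / full)

# GC(y -> x): the reverse direction should be near zero here.
restricted_x = residual_var(x[1:], np.column_stack([ones, x[:-1]]))
full_x = residual_var(x[1:], np.column_stack([ones, x[:-1], y[:-1]]))
gc_y_to_x = np.log(restricted_x / full_x)

print(f"GC x->y: {gc_x_to_y:.2f}, GC y->x: {gc_y_to_x:.4f}")
```

The paper's extension fits such linear models locally in state space, so the log-variance ratio becomes a function of position rather than a single global number.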

  5. Granger-causality maps of diffusion processes.

    PubMed

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes. PMID:26986337

  6. Use of process mapping in service improvement.

    PubMed

    Phillips, Joanna; Simmonds, Lorraine

    This article, the last of our three-part series on change management tools, analyses how process mapping can be used to show how processes are currently carried out and identify any changes that may improve the patient experience. The tool takes into account patient opinions so staff are able to see the pathway from patients' perspectives. It offers advice on how to write up the results and how they can be analysed to identify where changes can be made. PMID:23741910

  7. Ultrasonic acceleration of enzymatic processing of cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Enzymatic bio-processing of cotton generates significantly less hazardous wastewater effluents, which are readily biodegradable, but it also has several critical shortcomings that impede its acceptance by industries: expensive processing costs and slow reaction rates. It has been found that the intr...

  8. Digital images in the map revision process

    NASA Astrophysics Data System (ADS)

    Newby, P. R. T.

    Progress towards the adoption of digital (or softcopy) photogrammetric techniques for database and map revision is reviewed. Particular attention is given to the Ordnance Survey of Great Britain, the author's former employer, where digital processes are under investigation but have not yet been introduced for routine production. Developments which may lead to increasing automation of database update processes appear promising, but because of the cost and practical problems associated with managing as well as updating large digital databases, caution is advised when considering the transition to softcopy photogrammetry for revision tasks.

  9. Computational Tools for Accelerating Carbon Capture Process Development

    SciTech Connect

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. These tools are used to develop an optimized process using rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

  10. A CLASSIFICATION SCHEME FOR TURBULENT ACCELERATION PROCESSES IN SOLAR FLARES

    SciTech Connect

    Bian, Nicolas; Kontar, Eduard P.; Emslie, A. Gordon E-mail: eduard@astro.gla.ac.uk

    2012-08-01

    We establish a classification scheme for stochastic acceleration models involving low-frequency plasma turbulence in a strongly magnetized plasma. This classification takes into account both the properties of the accelerating electromagnetic field and the nature of the transport of charged particles in the acceleration region. We group the acceleration processes as either resonant, non-resonant, or resonant-broadened, depending on whether the particle motion is free-streaming along the magnetic field, diffusive, or a combination of the two. Stochastic acceleration by moving magnetic mirrors and adiabatic compressions are addressed as illustrative examples. We obtain expressions for the momentum-dependent diffusion coefficient D(p), both for general forms of the accelerating force and for the situation when the electromagnetic force is wave-like, with a specified dispersion relation ω = ω(k). Finally, for the models considered, we calculate the energy-dependent acceleration time, a quantity that can be directly compared with observations of the time profile of the radiation field produced by the accelerated particles, such as those occurring during solar flares.

  11. AIRS Maps from Space Processing Software

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
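
    The core re-projection step described above amounts to binning swath samples onto a fixed 1/4-degree global grid and averaging per cell. A sketch of that step (the function name and sample values are hypothetical; this is not the AIRS production code):

```python
import numpy as np

# Bin (lat, lon, value) swath samples onto a 1/4-degree grid; cells with no
# samples are NaN. Grids produced this way can be summed and re-averaged to
# combine any number of days.
def grid_quarter_degree(lats, lons, values):
    nlat, nlon = 720, 1440                      # 180/0.25 x 360/0.25 cells
    acc = np.zeros((nlat, nlon))
    cnt = np.zeros((nlat, nlon))
    i = np.clip(((lats + 90.0) / 0.25).astype(int), 0, nlat - 1)
    j = np.clip(((lons + 180.0) / 0.25).astype(int), 0, nlon - 1)
    np.add.at(acc, (i, j), values)              # unbuffered: repeated cells accumulate
    np.add.at(cnt, (i, j), 1)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)

lats = np.array([10.1, 10.1, -45.3])
lons = np.array([20.2, 20.2, 170.9])
vals = np.array([280.0, 284.0, 260.0])
grid = grid_quarter_degree(lats, lons, vals)
print(grid[int((10.1 + 90) / 0.25), int((20.2 + 180) / 0.25)])  # → 282.0
```

`np.add.at` is used instead of fancy-indexed `+=` so that multiple samples landing in the same cell all accumulate.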

  12. Value Stream Mapping: Foam Collection and Processing.

    SciTech Connect

    Sorensen, Christian

    2015-07-01

    The effort to collect and process foam for the purpose of recycling performed by the Material Sustainability and Pollution Prevention (MSP2) team at Sandia National Laboratories is an incredible one, but in order to make it run more efficiently it needed some tweaking. This project started in June of 2015. We used the Value Stream Mapping process to allow us to look at the current state of the foam collection and processing operation. We then thought of all the possible ways the process could be improved. Soon after that we discussed which of the "dreams" were feasible. And finally, we assigned action items to members of the team so as to ensure that the improvements actually occur. These improvements will then, due to varying factors, continue to occur over the next couple years.

  13. Adaptive control technique for accelerators using digital signal processing

    SciTech Connect

    Eaton, L.; Jachim, S.; Natter, E.

    1987-01-01

    The use of present Digital Signal Processing (DSP) techniques can drastically reduce the residual rf amplitude and phase error in an accelerating rf cavity. Accelerator beam loading contributes greatly to this residual error, and the low-level rf field control loops cannot completely absorb the fast transient of the error. A feedforward technique using DSP is required to maintain the very stringent rf field amplitude and phase specifications. 7 refs.

  14. Field size dependent mapping of medical linear accelerator radiation leakage

    NASA Astrophysics Data System (ADS)

    Vũ Bezin, Jérémi; Veres, Attila; Lefkopoulos, Dimitri; Chavaudra, Jean; Deutsch, Eric; de Vathaire, Florent; Diallo, Ibrahima

    2015-03-01

    The purpose of this study was to investigate the suitability of a graphics-library-based model for the assessment of linear accelerator radiation leakage. Transmission through the shielding elements was evaluated using the build-up-factor-corrected exponential attenuation law, and the contribution from the electron guide was estimated using the approximation of a linear isotropic radioactive source. Model parameters were estimated by fitting a series of thermoluminescent dosimeter leakage measurements, acquired up to 100 cm from the beam central axis along three directions. The distribution of leakage data at the patient plane reflected the architecture of the shielding elements. Thus, the maximum leakage dose was found under the collimator when only one jaw shielded the primary beam and was about 0.08% of the dose at isocentre. Overall, we observe that the main contributor to leakage dose according to our model was the electron beam guide. Concerning the discrepancies between the measurements used to calibrate the model and the calculations from the model, the average difference was about 7%. Finally, graphics-library modelling is a readily available and suitable way to estimate leakage dose distribution on a personal computer. Such data could be useful for dosimetric evaluations in late-effect studies.
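
    The transmission model named above has a simple form: exponential attenuation through the shield, multiplied by a build-up factor. A hedged sketch (the linear build-up form B = 1 + a·μt, the attenuation coefficient, and the thickness are illustrative assumptions, not the paper's fitted parameters):

```python
import math

# Build-up-factor-corrected exponential attenuation through a shield of
# thickness t (cm) with linear attenuation coefficient mu (1/cm).
def transmission(mu, thickness_cm, a=0.9):
    mfp = mu * thickness_cm              # thickness in mean free paths
    buildup = 1.0 + a * mfp              # simple linear build-up model
    return buildup * math.exp(-mfp)

# e.g. a 7 cm jaw at mu ~ 0.6 1/cm (illustrative values only)
print(f"{transmission(0.6, 7.0):.4f}")
```

The build-up factor accounts for photons scattered back into the beam, which a bare exponential would miss; without it, thick-shield transmission is systematically underestimated.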

  15. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.

  16. Plasma acceleration processes in an ablative pulsed plasma thruster

    SciTech Connect

    Koizumi, Hiroyuki; Noji, Ryosuke; Komurasaki, Kimiya; Arakawa, Yoshihiro

    2007-03-15

    Plasma acceleration processes in an ablative pulsed plasma thruster (APPT) were investigated. APPTs are space propulsion options suitable for microspacecraft, and have recently attracted much attention because of their low electric power requirements and simple, compact propellant system. The plasma acceleration mechanism, however, has not been well understood. In the present work, emission spectroscopy, high speed photography, and magnetic field measurements are conducted inside the electrode channel of an APPT with rectangular geometry. The successive images of neutral particles and ions give us a comprehensive understanding of their behavior under electromagnetic acceleration. The magnetic field profile clarifies the location where the electromagnetic force takes effect. As a result, it is shown that high density, ablated neutral gas stays near the propellant surface, and only a fraction of the neutrals is converted into plasma and electromagnetically accelerated, leaving the residual neutrals behind.

  17. Probabilistic earthquake acceleration and velocity maps for the United States and Puerto Rico

    USGS Publications Warehouse

    Algermissen, S.T.; Perkins, D.M.; Thenhaus, P.C.; Hanson, S.L.; Bender, B.L.

    1990-01-01

    The ground-motion maps presented here (maps A-D) show the expected seismically induced (earthquake-caused) maximum horizontal acceleration and velocity in rock in the contiguous United States, Alaska, Hawaii, and Puerto Rico. There is a 90 percent probability that the maximum horizontal acceleration and velocity shown on the maps will not be exceeded in time periods of 50 and 250 years (average return periods for the expected ground motion of 474 and 2,372 years). Rock is taken here to mean material having a shear-wave velocity of between 0.75 and 0.90 kilometers per second (Algermissen and Perkins, 1976).
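
    The quoted return periods follow directly from the Poisson model relating non-exceedance probability to exposure time: for probability P over t years, the average return period is T = -t / ln(P). A quick check against the abstract's numbers:

```python
import math

# Average return period for non-exceedance probability P over an exposure time t:
# T = -t / ln(P). With P = 0.90 this reproduces the 474- and 2,372-year figures.
def return_period(exposure_years, p_nonexceedance=0.90):
    return -exposure_years / math.log(p_nonexceedance)

print(int(return_period(50)), int(return_period(250)))  # → 474 2372
```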

  18. Observation of laser multiple filamentation process and multiple electron beams acceleration in a laser wakefield accelerator

    SciTech Connect

    Li, Wentao; Liu, Jiansheng; Wang, Wentao; Chen, Qiang; Zhang, Hui; Tian, Ye; Zhang, Zhijun; Qi, Rong; Wang, Cheng; Leng, Yuxin; Li, Ruxin; Xu, Zhizhan

    2013-11-15

    The multiple-filament formation process in the laser wakefield accelerator (LWFA) was observed by imaging the transmitted laser beam after propagation in plasmas of different densities. During propagation, the laser first self-focused into a single filament. After that, it began to defocus, with energy spreading in the transverse direction. Two filaments then formed from it and began to propagate independently, moving away from each other. We have also demonstrated that laser multiple filamentation leads to the acceleration of multiple electron beams in the LWFA via the ionization-induced injection scheme. In addition, its influence on the accelerated electron beams was analyzed both in the single-stage LWFA and the cascaded LWFA.

  19. New Image Reconstruction Methods for Accelerated Quantitative Parameter Mapping and Magnetic Resonance Angiography

    NASA Astrophysics Data System (ADS)

    Velikina, J. V.; Samsonov, A. A.

    2016-02-01

    Advanced MRI techniques often require sampling in additional (non-spatial) dimensions, such as time or parametric dimensions, which significantly lengthens scan time. Our purpose was to develop novel iterative image reconstruction methods to reduce the amount of acquired data in such applications using prior knowledge about the signal in the extra dimensions. Efforts have been made to accelerate two applications, namely time-resolved contrast-enhanced MR angiography and T1 mapping. Our results demonstrate that significant acceleration (up to 27x) may be achieved using the proposed iterative reconstruction techniques.

  20. Secondary electron emission from plasma processed accelerating cavity grade niobium

    NASA Astrophysics Data System (ADS)

    Basovic, Milos

    Advances in particle accelerator technology have enabled numerous fundamental discoveries in 20th century physics. Extensive interdisciplinary research has always supported further development of accelerator technology in the effort to reach each new energy frontier. Accelerating cavities, which are used to transfer energy to the accelerated charged particles, have been one of the main focuses of research and development in the particle accelerator field. Over the last fifty years, in the race to break energy barriers, there has been constant improvement in the maximum stable accelerating field achieved in accelerating cavities. Every increase in the maximum attainable accelerating field has allowed higher-energy upgrades of existing accelerators and more compact designs of new ones, and each new and improved technology has faced ever-emerging limiting factors. At today's standard high accelerating gradients of more than 25 MV/m, free electrons inside the cavities are accelerated by the field, gaining enough energy to produce more electrons in their interactions with the cavity walls. The electron production is exponential, and the energy the electrons transfer to the cavity walls can trigger detrimental processes that limit the performance of the cavity. The root cause of this electron multiplication is a phenomenon called Secondary Electron Emission (SEE). Even though the phenomenon has been known and studied for over a century, there are still no effective means of controlling it. The ratio between the electrons emitted from a surface and the impacting electrons is defined as the Secondary Electron Yield (SEY); a SEY larger than 1 indicates an increase in the total number of electrons. In the design of accelerator cavities, the goal is to reduce the SEY as much as possible using some form of surface manipulation. 
In this dissertation, an experimental setup was developed and used to study the SEY of various sample surfaces that were treated
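
    The exponential electron multiplication described above is easy to see numerically; a toy model (the SEY values below are illustrative, not measured):

    ```python
    def electron_population(n0, sey, impacts):
        """Electron count after repeated wall impacts, assuming each impact
        multiplies the population by the secondary electron yield (SEY)."""
        return n0 * sey ** impacts

    # SEY > 1: the avalanche grows exponentially (multipacting).
    print(electron_population(1, 1.5, 20))
    # SEY < 1: the avalanche dies out, the design goal for cavity surfaces.
    print(electron_population(1, 0.8, 20))
    ```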

  1. Detecting chaos in particle accelerators through the frequency map analysis method

    SciTech Connect

    Papaphilippou, Yannis

    2014-06-01

    The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role during the accelerator design phase as well as during operation. After describing the nature of non-linear effects and their impact on the performance parameters of different categories of particle accelerator, the theory of non-linear particle motion is outlined. Recent developments in the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations as well as in experimental data.
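
    The core diffusion diagnostic of frequency map analysis can be sketched as follows, assuming turn-by-turn tracking data: the tune is estimated on the two halves of the data, and a large tune change flags chaotic motion. The FFT-peak tune estimate here is a crude stand-in for the refined NAFF determination used in practice:

    ```python
    import numpy as np

    def tune(x):
        """Estimate the tune of turn-by-turn data from the FFT peak."""
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        return (np.argmax(spec[1:]) + 1) / len(x)  # skip the DC bin

    def diffusion_indicator(x):
        """Tune change between the first and second half of the tracking
        data; regular motion gives ~0, chaotic motion a large value."""
        n = len(x) // 2
        return abs(tune(x[:n]) - tune(x[n:]))

    turns = np.arange(2048)
    regular = np.cos(2 * np.pi * 0.123 * turns)  # fixed-tune (regular) motion
    print(diffusion_indicator(regular))  # close to zero
    ```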

  2. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    PubMed Central

    Clayton, C. E.; Adli, E.; Allen, J.; An, W.; Clarke, C. I.; Corde, S.; Frederico, J.; Gessner, S.; Green, S. Z.; Hogan, M. J.; Joshi, C.; Litos, M.; Lu, W.; Marsh, K. A.; Mori, W. B.; Vafaei-Najafabadi, N.; Xu, X.; Yakimenko, V.

    2016-01-01

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not yet been measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m−1 to a similar degree of accuracy. These results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity. PMID:27527569

  4. Effects of Map Processing upon Text Comprehension.

    ERIC Educational Resources Information Center

    Kirby, John R.; And Others

    A study investigated the effects of a spatial adjunct aid--maps--upon probed comprehension and free recall with respect to a text in which map-related information (macropropositions) could be clearly distinguished from more abstract information (micropropositions). Forty-eight tenth grade students were randomly assigned to either a control group…

  5. Induction linear accelerators for commercial photon irradiation processing

    SciTech Connect

    Matthews, S.M.

    1989-01-13

    A number of proposed irradiation processes require bulk rather than surface exposure to intense ionizing radiation. Typical examples are irradiation of food packaged in pallet-size containers, processing of sewer sludge for recycling as landfill and fertilizer, sterilization of prepackaged medical disposables, and treatment of municipal water supplies for pathogen reduction. Volumetric processing of dense, bulky products with ionizing radiation requires high-energy photon sources, because electrons are not penetrating enough to provide uniform bulk dose deposition in thick, dense samples. Induction Linear Accelerator (ILA) technology developed at the Lawrence Livermore National Laboratory promises to play a key role in providing solutions to this problem, as discussed in this paper.

  6. Enzyme clustering accelerates processing of intermediates through metabolic channeling.

    PubMed

    Castellana, Michele; Wilson, Maxwell Z; Xu, Yifan; Joshi, Preeti; Cristea, Ileana M; Rabinowitz, Joshua D; Gitai, Zemer; Wingreen, Ned S

    2014-10-01

    We present a quantitative model to demonstrate that coclustering multiple enzymes into compact agglomerates accelerates the processing of intermediates, yielding the same efficiency benefits as direct channeling, a well-known mechanism in which enzymes are funneled between enzyme active sites through a physical tunnel. The model predicts the separation and size of coclusters that maximize metabolic efficiency, and this prediction is in agreement with previously reported spacings between coclusters in mammalian cells. For direct validation, we study a metabolic branch point in Escherichia coli and experimentally confirm the model prediction that enzyme agglomerates can accelerate the processing of a shared intermediate by one branch, and thus regulate steady-state flux division. Our studies establish a quantitative framework to understand coclustering-mediated metabolic channeling and its application to both efficiency improvement and metabolic regulation. PMID:25262299

  8. Magnetohydrodynamic Particle Acceleration Processes: SSX Experiments, Theory, and Astrophysical Applications

    SciTech Connect

    Brown, Michael R.

    2006-11-16

    Project Title: Magnetohydrodynamic Particle Acceleration Processes: SSX Experiments, Theory, and Astrophysical Applications. PI: Michael R. Brown, Swarthmore College. The purpose of the project was to provide theoretical and modeling support to the Swarthmore Spheromak Experiment (SSX). Accordingly, the theoretical effort was tightly integrated into the SSX experimental effort. During the grant period, Michael Brown and his experimental collaborators at Swarthmore, with assistance from W. Matthaeus as appropriate, made substantial progress in understanding the physics of SSX plasmas.

  9. Model-Based Acceleration of Look-Locker T1 Mapping

    PubMed Central

    Tran-Gia, Johannes; Wech, Tobias; Bley, Thorsten; Köstler, Herbert

    2015-01-01

    Mapping the longitudinal relaxation time T1 has widespread applications in clinical MRI, as it promises a quantitative comparison of tissue properties across subjects and scanners. Due to the long scan times of conventional methods, however, the use of quantitative MRI in clinical routine is still very limited. In this work, an acceleration of Inversion-Recovery Look-Locker (IR-LL) T1 mapping is presented. A model-based algorithm is used to iteratively enforce an exponential relaxation model on a highly undersampled, radially acquired IR-LL dataset obtained after the application of a single global inversion pulse. Using the proposed technique, a T1 map of a single slice with 1.6 mm in-plane resolution and 4 mm slice thickness can be reconstructed from data acquired in only 6 s. A time-consuming segmented IR experiment was used as the gold standard for T1 mapping in this work. In the subsequent validation study, the model-based reconstruction of a single-inversion IR-LL dataset exhibited a T1 difference of less than 2.6% compared to the segmented IR-LL reference in a phantom consisting of vials with T1 values between 200 ms and 3000 ms. In vivo, the T1 difference was smaller than 5.5% in white matter (WM) and gray matter (GM) of seven healthy volunteers. Additionally, the T1 values are comparable to standard literature values. Despite the high acceleration, all model-based reconstructions were of a visual quality comparable to fully sampled references. Finally, the reproducibility of the T1 mapping method was demonstrated in repeated acquisitions. In conclusion, the presented approach represents a promising way to perform fast and accurate T1 mapping using radial IR-LL acquisitions without the need for any segmentation. PMID:25860381
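
    The exponential relaxation model enforced by such reconstructions can be illustrated with a simple fit: the apparent Look-Locker recovery S(t) = A − B·exp(−t/T1*) is fitted, and the standard Look-Locker correction T1 = T1*·(B/A − 1) is applied. The parameter values below are synthetic, chosen only to illustrate the model, not taken from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ir_ll_signal(t, A, B, T1_star):
        # Apparent Look-Locker recovery: S(t) = A - B * exp(-t / T1*)
        return A - B * np.exp(-t / T1_star)

    # Synthetic tissue with T1 = 1000 ms (illustrative values).
    T1_true, A, B = 1000.0, 1.0, 2.0
    T1_star_true = T1_true / (B / A - 1.0)
    t = np.linspace(50, 4000, 40)           # sampling times in ms
    signal = ir_ll_signal(t, A, B, T1_star_true)

    (A_fit, B_fit, T1_star_fit), _ = curve_fit(
        ir_ll_signal, t, signal, p0=(0.8, 1.5, 800.0))
    T1_fit = T1_star_fit * (B_fit / A_fit - 1.0)  # Look-Locker correction
    print(round(T1_fit))  # recovers ~1000 ms
    ```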

  10. Modeling the Acceleration Process of Dust in the Solar Wind

    NASA Astrophysics Data System (ADS)

    Jia, Y. D.; Lai, H.; Russell, C. T.; Wei, H.

    2015-12-01

    In previous studies we identified structures created by nano-dust in the solar wind, and we observed the expected draping and diverting signatures of such structures using well-spaced multi-spacecraft observations. In this study, we reproduce such an interaction event with our multi-fluid MHD model, treating the dust particles as a fluid. When the number density of dust particles is comparable to that of the solar wind ions, significant draping of the IMF is created, with amplitude larger than the ambient fluctuations. We note that such a density is well above several nano-dust particles per Debye sphere, so a dusty fluid is appropriate for modeling the dust-solar wind interaction. We assume a spherical cloud of dust travelling at 90% of the solar wind speed. In addition to reproducing the IMF response to the nano-dust at the end stage of dust acceleration, we model the entire acceleration process in the gravity field of the inner heliosphere. It takes hours for the smallest dust, with 3000 amu per proton charge, to reach the solar wind speed. We find the dust cloud stretched along the solar wind flow; such stretching enhances the draping of the IMF compared to the spherical cloud used in an earlier stage of this study. This model will be further used to examine magnetic perturbations at an earlier stage of dust cloud acceleration, and then to determine the size, density, and total mass of the dust cloud, as well as its creation and acceleration.
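
    The hours-long pickup time quoted for the smallest grains is consistent in order of magnitude with the gyro-period set by their mass-to-charge ratio; a back-of-the-envelope check (the 5 nT field strength is an assumed typical IMF magnitude at 1 AU, not a value from the abstract):

    ```python
    import math

    AMU = 1.66053906660e-27      # kg
    E_CHARGE = 1.602176634e-19   # C

    def gyroperiod_hours(amu_per_charge, b_tesla=5e-9):
        """Gyro-period of a charged grain, the timescale on which it is
        picked up by the solar wind's motional electric field."""
        m_over_q = amu_per_charge * AMU / E_CHARGE
        return 2 * math.pi * m_over_q / b_tesla / 3600.0

    print(gyroperiod_hours(3000))  # ~11 hours for 3000 amu per proton charge
    ```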

  11. Optical signal acquisition and processing in future accelerator diagnostics

    SciTech Connect

    Jackson, G.P.; Elliott, A.

    1992-01-01

    Beam detectors such as striplines and wall current monitors rely on matched electrical networks to transmit and process beam information. Frequency bandwidth, noise immunity, reflections, and signal-to-noise ratio are considerations that require compromises limiting the quality of the measurement. Recent advances in fiber-optic technologies have made it possible to acquire and process beam signals in the optical domain. This paper describes recent developments in the application of these technologies to accelerator beam diagnostics. The design and construction of an optical notch filter used for a stochastic cooling system is used as an example. Conceptual ideas for future beam detectors are also presented.

  13. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward, and the strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU, and by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrate the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations. PMID:26406070
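
    The operator-splitting idea that makes the task parallel can be sketched on a 1D reaction-diffusion cable: the local reaction update is independent per cell (ideal for one GPU thread per cell), while the diffusion update couples neighbours. This is a toy CPU analogue, not the authors' GPU code:

    ```python
    import numpy as np

    def split_step(v, dt, dx, D, reaction):
        """One operator-split time step: local reaction first (embarrassingly
        parallel per cell), then explicit diffusion (neighbour coupling)."""
        v = v + dt * reaction(v)  # reaction sub-step
        lap = (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx**2
        return v + dt * D * lap   # diffusion sub-step (periodic boundaries)

    # Pure diffusion smooths an initial spike while conserving the total.
    v = np.zeros(100)
    v[50] = 1.0
    for _ in range(200):
        v = split_step(v, dt=0.1, dx=1.0, D=1.0, reaction=lambda u: 0.0)
    print(v.sum())  # conserved
    ```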

  14. Polymer processing by a low energy ion accelerator

    NASA Astrophysics Data System (ADS)

    Lorusso, A.; Velardi, L.; Nassisi, V.; Paladini, F.; Visco, A. M.; Campo, N.; Torrisi, L.; Margarone, D.; Giuffrida, L.; Rainò, A.

    2008-05-01

    Ion implantation is a process in which ions are accelerated toward a substrate at energies high enough to bury them just below the substrate surface in order to modify the surface characteristics. Laser-produced plasma is a very suitable and low-cost technique for producing ion sources. In this work, a laser ion source was developed using a UV pulsed laser with a power density of about 10⁸ W/cm², employing a C target and a post-acceleration voltage of 40 kV to increase the ion energy. We implanted C ions in ultra-high-molecular-weight polyethylene (UHMWPE) and low-density polyethylene (LDPE) and present preliminary results on the surface property modifications of both samples. In particular, we have studied the modification of the surface micro-hardness of the polymers using the "scratch test" method, as well as changes in hydrophilicity via contact angle measurements.

  15. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the OpenACC compiler directives and CUDA Fortran. A mixed implementation of both OpenACC and CUDA is demonstrated; CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10x or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed, and optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.

  16. UAV Data Processing for Rapid Mapping Activities

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2015-08-01

    During disaster and emergency situations, geospatial data plays an important role as a framework for decision support systems. As one component of basic geospatial data, large-scale topographic maps are mandatory in order to enable geospatial analysis within a number of societal challenges. The increasing role of geo-information in disaster management consequently requires that geospatial aspects be included in the analysis, so different geospatial datasets can be combined to produce reliable geospatial analysis, especially in the context of disaster preparedness and emergency response. A well-known issue in this context is the fast delivery of relevant geospatial data, expressed by the term "Rapid Mapping". The Unmanned Aerial Vehicle (UAV) is a rising geospatial data platform that is attractive for modelling and monitoring a disaster area, offering low-cost and timely acquisition in such a critical period of time. Disaster-related object extraction is of special interest for many applications. In this paper, UAV-borne data has been used to support rapid mapping activities in combination with high-resolution airborne Interferometric Synthetic Aperture Radar (IFSAR) data. A real disaster instance from 2013, the eruption of Mount Sinabung, Northern Sumatra, Indonesia, is used as the benchmark test for the rapid mapping activities presented in this paper. In this context, a reliable IFSAR dataset from an airborne data acquisition in 2011 is used as a comparison dataset for accuracy investigation and assessment in 3D reconstruction. Finally, this paper presents a proper geo-referencing and feature extraction method for UAV data to support rapid mapping activities.

  17. Beyond data collection in digital mapping: interpretation, sketching and thought process elements in geological map making

    NASA Astrophysics Data System (ADS)

    Watkins, Hannah; Bond, Clare; Butler, Rob

    2016-04-01

    Geological mapping techniques have advanced significantly in recent years, from paper fieldslips to Toughbook, smartphone and tablet mapping; but how do the methods used to create a geological map affect the thought processes that result in the final interpretation? Geological maps have many key roles in the geosciences, including understanding geological processes and geometries in 3D, interpreting geological histories and understanding stratigraphic relationships in 2D and 3D. As mapping technology has advanced, the way in which we produce geological maps has also changed. Traditional geological mapping is undertaken using paper fieldslips, pencils and compass clinometers, and the map interpretation evolves through time as data are collected. This interpretive process is often supported by a field notebook recording observations, ideas and alternative geological models, explored with the use of sketches and evolutionary diagrams. In combination, the field map and notebook can be used to challenge the map interpretation and consider its uncertainties. These uncertainties, and the balance of data to interpretation, are often lost in the creation of published 'fair copy' geological maps. The advent of Toughbooks, smartphones and tablets has changed the process of map creation. Digital data collection, particularly through the use of inbuilt gyrometers in phones and tablets, has turned smartphones into geological mapping tools that can be used to collect large amounts of geological data quickly. With GPS functionality this data is also geospatially located, assuming good GPS connectivity, and can be linked to georeferenced in-field photography. 
In contrast, line drawing, for example for lithological boundary interpretation and sketching

  18. Acceleration of Topographic Map Production Using Semi-Automatic DTM from DSM Radar Data

    NASA Astrophysics Data System (ADS)

    Rizaldy, Aldino; Mayasari, Ratna

    2016-06-01

    Badan Informasi Geospasial (BIG) is the government institution in Indonesia responsible for providing topographic maps at several map scales. For medium map scales, e.g. 1:25.000 or 1:50.000, DSM from Radar data is a very good solution, since Radar is able to penetrate the clouds that usually cover tropical areas in Indonesia. The DSM Radar is produced using Radargrammetry and Interferometry techniques. The conventional method of DTM production uses a "stereo-mate", the stereo image created from the DSM Radar and the ORRI (Ortho Rectified Radar Image), with a human operator digitizing masspoints and breaklines manually on a digital stereoplotter workstation. This technique is accurate but very costly and time consuming, and it requires a large pool of human operators. Since the DSMs are already generated, it is possible to filter the DSM to a DTM using several techniques. This paper studies the possibility of DSM-to-DTM filtering using techniques usually applied in point cloud LIDAR filtering. The accuracy of this method is also calculated using a sufficient number of check points. If the accuracy meets the requirement, this method has great potential to accelerate the production of topographic maps in Indonesia.
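
    One common point-cloud-style ground filter that could be applied to a raster DSM is a morphological opening: features narrower than the structuring window (buildings, vegetation) are removed, and pixels standing well above the opened surface are replaced by it. A minimal sketch, with illustrative window and threshold values rather than the paper's actual parameters:

    ```python
    import numpy as np
    from scipy import ndimage

    def dsm_to_dtm(dsm, window=15, height_tol=2.0):
        """Crude DSM-to-DTM filter: a grey-scale morphological opening
        removes objects narrower than `window`; pixels more than
        `height_tol` above the opened surface are treated as non-ground
        and replaced by it."""
        ground = ndimage.grey_opening(dsm, size=(window, window))
        return np.where(dsm - ground > height_tol, ground, dsm)

    # Flat terrain with one 10 m "building": the filter flattens it.
    dsm = np.zeros((60, 60))
    dsm[20:26, 20:26] = 10.0
    dtm = dsm_to_dtm(dsm)
    print(dtm.max())  # the building is removed
    ```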

  19. Mapping individual logical processes in information searching

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.

    1974-01-01

    An interactive dialog with a computerized information collection was recorded and plotted in the form of a flow chart. The process permits one to identify the logical processes employed in considerable detail and is therefore suggested as a tool for measuring individual thought processes in a variety of situations. A sample of an actual test case is given.

  20. Image and geometry processing with Oriented and Scalable Map.

    PubMed

    Hua, Hao

    2016-05-01

    We turn the Self-organizing Map (SOM) into an Oriented and Scalable Map (OS-Map) by generalizing the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential. Thus we can specify the orientation either in the map space or in the data space. Moreover, we associate the map's global scale with the locality of winner selection. Our model is suited for a number of graphical applications such as texture/image synthesis, surface parameterization, and solid texture synthesis. OS-Map is more generic and versatile than the task-specific algorithms for these applications. Our work reveals the overlooked strength of SOMs in processing images and geometries. PMID:26897100

  1. Particle acceleration via reconnection processes in the supersonic solar wind

    SciTech Connect

    Zank, G. P.; Le Roux, J. A.; Webb, G. M.; Dosch, A.; Khabarova, O.

    2014-12-10

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We derive a gyrophase-averaged transport equation for particles experiencing pitch-angle scattering and energization in a super-Alfvénic flowing plasma experiencing multiple small-scale reconnection events. A simpler advection-diffusion transport equation for a nearly isotropic particle distribution is derived. The dominant charged particle energization processes are (1) the electric field induced by quasi-2D magnetic island merging and (2) magnetic island contraction. The magnetic island topology ensures that charged particles are trapped in regions where they experience repeated interactions with the induced electric field or contracting magnetic islands. Steady-state solutions of the isotropic transport equation with only the induced electric field and a fixed source yield a power-law spectrum for the accelerated particles with index α = −(3 + M_A)/2, where M_A is the Alfvén Mach number. Considering only magnetic island contraction yields power-law-like solutions with index −3(1 + τ_c/(8τ_diff)), where τ_c/τ_diff is the ratio of timescales between magnetic island contraction and charged particle diffusion. The general solution is a power-law-like solution with an index that depends on the Alfvén Mach number and the timescale ratio τ_diff/τ_c. Observed power-law distributions of energetic particles in the quiet supersonic solar wind at 1 AU may be a consequence of particle acceleration associated with dissipative small-scale reconnection processes in a turbulent plasma, including the widely reported c^−5 (c particle
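
    The two quoted spectral indices are simple closed-form expressions and easy to evaluate; for example, an Alfvén Mach number of 7 in the merging-only case gives an index of −5, matching the widely reported c^−5 distributions (the parameter values chosen here are illustrative):

    ```python
    def merging_index(mach_alfven):
        """Spectral index from the island-merging electric field alone:
        alpha = -(3 + M_A) / 2."""
        return -(3.0 + mach_alfven) / 2.0

    def contraction_index(tau_c_over_tau_diff):
        """Spectral index from island contraction alone:
        -3 * (1 + tau_c / (8 * tau_diff))."""
        return -3.0 * (1.0 + tau_c_over_tau_diff / 8.0)

    print(merging_index(7.0))      # -5.0
    print(contraction_index(0.0))  # -3.0, the fast-diffusion limit
    ```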

  2. The application of a linear electron accelerator in radiation processing

    NASA Astrophysics Data System (ADS)

    Ruiying, Zhou; Binglin, Wang; Wenxiu, Chen; Yongbao, Gu; Yinfen, Zhang; Simin, Qian; Andong, Liu; Peide, Wang

    A 3-5 MeV electron beam generated by a BF-5 type linear electron accelerator has been used in several radiation processing applications: (1) Radiation cross-linking for polyethylene foam processing, in which the correlation between cross-linkage and absorbed dose, the relation between the elongation of the foamed polyethylene and dose, the relation between cavity size and gelation rate, and the optimum dose range for foaming were determined. (2) Electron-beam irradiation of fast-switching thyristors, in which the relation between absorbed dose and minority-carrier lifetime was studied and the optimum conditions for radiation processing were determined; this process is much better than conventional gold diffusion at raising the quality and yield of these devices. In addition, we have performed tests on the hereditary mutation of plant seeds and microorganism mutation induced by electron radiation, and on radiation sterilization of some medical instruments and foods.

  3. Selective sinoatrial node optical mapping to investigate the mechanism of sinus rate acceleration

    NASA Astrophysics Data System (ADS)

    Lin, Shien-Fong; Shinohara, Tetsuji; Joung, Boyoung; Chen, Peng-Sheng

    2011-03-01

    Studies using isolated sinoatrial node (SAN) cells indicate that rhythmic spontaneous sarcoplasmic reticulum Ca release (Ca clock) plays an important role in SAN automaticity. However, it is difficult to translate these findings into intact SAN because the SAN is embedded in the right atrium (RA). Cross contamination of the optical signals between SAN and RA prevented the definitive testing of the Ca clock hypothesis in intact SAN. We use a novel approach to selectively map intact SAN to examine the Ca clock function in intact RA. We simultaneously mapped intracellular Ca (Cai) and membrane potential (Vm) in 7 isolated, Langendorff-perfused normal canine RA. Electrical conduction from the SAN to RA was inhibited with high potassium (10 mmol/L) Tyrode's solution, allowing selective optical mapping of Vm and Cai of the SAN. Isoproterenol (ISO, 0.03 μmol/L) decreased cycle length of the sinus beats from 586±17 ms at baseline to 366±32 ms, and shifted the leading pacemaker site from the middle or inferior SAN to the superior SAN in all RAs. The Cai upstroke preceded the Vm in the leading pacemaker site by up to 18±2 ms. ISO-induced changes to SAN were inhibited by ryanodine (3 μmol/L), but not ZD7288 (3 μmol/L), a selective If blocker. We conclude that a high extracellular potassium concentration results in intermittent SAN-RA conduction block, allowing selective optical mapping of the intact SAN. Acceleration of Ca cycling in the superior SAN underlies the mechanism of sinus tachycardia during sympathetic stimulation.

  4. BioThreads: a novel VLIW-based chip multiprocessor for accelerating biomedical image processing applications.

    PubMed

    Stevens, David; Chouliaras, Vassilios; Azorin-Peris, Vicente; Zheng, Jia; Echiadis, Angelos; Hu, Sijung

    2012-06-01

    We discuss BioThreads, a novel, configurable, extensible system-on-chip multiprocessor and its use in accelerating biomedical signal processing applications such as imaging photoplethysmography (IPPG). BioThreads is derived from the LE1 open-source VLIW chip multiprocessor and efficiently handles instruction, data and thread-level parallelism. In addition, it supports a novel mechanism for the dynamic creation and allocation of software threads to uncommitted processor cores by implementing key POSIX Threads primitives directly in hardware, as custom instructions. In this study, the BioThreads core is used to accelerate the calculation of the oxygen saturation map of living tissue in an experimental setup consisting of a high speed image acquisition system, connected to an FPGA board and to a host system. Results demonstrate near-linear acceleration of the core kernels of the target blood perfusion assessment with an increasing number of hardware threads. The BioThreads processor was implemented on both standard-cell and FPGA technologies; in the first case, and for an issue width of two, full real-time performance is achieved with 4 cores, whereas on a mid-range Xilinx Virtex6 device this is achieved with 10 dual-issue cores. An 8-core LE1 VLIW FPGA prototype of the system achieved 240 times faster execution than the scalar Microblaze processor, demonstrating the scalability of the proposed solution to a state-of-the-art FPGA vendor provided soft CPU core. PMID:23853147

  5. GPU accelerated processing of astronomical high frame-rate videosequences

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Švihlík, Jan; Krasula, Lukáš; Fliegel, Karel; Páta, Petr

    2015-09-01

    Astronomical instruments located around the world are producing an extremely large amount of potentially interesting scientific data, and astronomical research is expanding into large and highly sensitive telescopes. The total volume of data per night of operations also increases with the quality and resolution of state-of-the-art CCD/CMOS detectors. Since many ground-based astronomical experiments are placed in remote locations with limited access to the Internet, it is necessary to solve the problem of data storage. This mostly means that current data acquisition, processing and analysis algorithms require review, and decisions about the importance of the data have to be made in a very short time. This work deals with GPU-accelerated processing of high frame-rate astronomical video sequences, mostly originating from the experiment MAIA (Meteor Automatic Imager and Analyser), an instrument primarily aimed at observing faint meteoric events with high time resolution. The instrument, with a price below 2000 euro, consists of an image intensifier and a gigabit Ethernet camera running at 61 fps. With resolution better than VGA, the system produces up to 2 TB of scientifically valuable video data per night. The main goal of the paper is not to optimize any particular GPU algorithm, but to propose and evaluate parallel GPU algorithms able to process huge amounts of video sequences in order to discard all uninteresting data.

  6. Self-organizing map (SOM) of space acceleration measurement system (SAMS) data.

    PubMed

    Sinha, A; Smith, A D

    1999-01-01

    In this paper, space acceleration measurement system (SAMS) data have been classified using self-organizing map (SOM) networks without any supervision; i.e., no a priori knowledge is assumed regarding input patterns belonging to a certain class. Input patterns are created on the basis of power spectral densities of SAMS data. Results for SAMS data from STS-50 and STS-57 missions are presented. The following issues are discussed in detail: the impact of the number of neurons, global ordering of SOM weight vectors, the effectiveness of a SOM in data classification, and the effects of shifting time windows in the generation of input patterns. The concept of a 'cascade of SOM networks' is also developed and tested. It has been found that a SOM network can successfully classify SAMS data obtained during STS-50 and STS-57 missions. PMID:11543426
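The unsupervised classification described in this record can be sketched with a minimal self-organizing map. The grid size, decay schedules, and toy input patterns below are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def train_som(patterns, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map on rows of `patterns` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    # Neuron coordinates on the 2-D grid, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(n_rows) for j in range(n_cols)], float)
    weights = rng.random((n_rows * n_cols, patterns.shape[1]))
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in patterns[rng.permutation(len(patterns))]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)
    return weights, coords

def classify(patterns, weights):
    """Assign each pattern to its best-matching neuron (unsupervised cluster label)."""
    return np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in patterns])
```

In the paper the input patterns are power spectral densities of SAMS records; here any feature vectors of a fixed length can be used, and global ordering of the weight vectors emerges from the shrinking Gaussian neighborhood.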

  7. Self-organizing map (SOM) of space acceleration measurement system (SAMS) data

    NASA Technical Reports Server (NTRS)

    Sinha, A.; Smith, A. D.

    1999-01-01

    In this paper, space acceleration measurement system (SAMS) data have been classified using self-organizing map (SOM) networks without any supervision; i.e., no a priori knowledge is assumed regarding input patterns belonging to a certain class. Input patterns are created on the basis of power spectral densities of SAMS data. Results for SAMS data from STS-50 and STS-57 missions are presented. The following issues are discussed in detail: the impact of the number of neurons, global ordering of SOM weight vectors, the effectiveness of a SOM in data classification, and the effects of shifting time windows in the generation of input patterns. The concept of a 'cascade of SOM networks' is also developed and tested. It has been found that a SOM network can successfully classify SAMS data obtained during STS-50 and STS-57 missions.

  8. Mapping the x-ray emission region in a laser-plasma accelerator.

    PubMed

    Corde, S; Thaury, C; Phuoc, K Ta; Lifschitz, A; Lambert, G; Faure, J; Lundh, O; Benveniste, E; Ben-Ismail, A; Arantchuk, L; Marciniak, A; Stordeur, A; Brijesh, P; Rousse, A; Specka, A; Malka, V

    2011-11-18

    The x-ray emission in laser-plasma accelerators can be a powerful tool to understand the physics of relativistic laser-plasma interaction. It is shown here that the mapping of betatron x-ray radiation can be obtained from the x-ray beam profile when an aperture mask is positioned just beyond the end of the emission region. The influence of the plasma density on the position and the longitudinal profile of the x-ray emission is investigated and compared to particle-in-cell simulations. The measurement of the x-ray emission position and length provides insight into the dynamics of the interaction, including the electron self-injection region, possible multiple injection, and the role of the electron beam driven wakefield. PMID:22181891

  9. Modular Automated Processing System (MAPS) for analysis of biological samples.

    SciTech Connect

    Gil, Geun-Cheol; Chirica, Gabriela S.; Fruetel, Julia A.; VanderNoot, Victoria A.; Branda, Steven S.; Schoeniger, Joseph S.; Throckmorton, Daniel J.; Brennan, James S.; Renzi, Ronald F.

    2010-10-01

    We have developed a novel modular automated processing system (MAPS) that enables reliable, high-throughput analysis as well as sample-customized processing. This system is comprised of a set of independent modules that carry out individual sample processing functions: cell lysis, protein concentration (based on hydrophobic, ion-exchange and affinity interactions), interferent depletion, buffer exchange, and enzymatic digestion of proteins of interest. Taking advantage of its unique capacity for enclosed processing of intact bioparticulates (viruses, spores) and complex serum samples, we have used MAPS for analysis of BSL1 and BSL2 samples to identify specific protein markers through integration with the portable microChemLab™ and MALDI.

  10. Preliminary map of peak horizontal ground acceleration for the Hanshin-Awaji earthquake of January 17, 1995, Japan - Description of Mapped Data Sets

    USGS Publications Warehouse

    Borcherdt, R.D.; Mark, R.K.

    1995-01-01

    The Hanshin-Awaji earthquake (also known as the Hyogo-ken Nanbu and the Great Hanshin earthquake) provided an unprecedented set of measurements of strong ground shaking. The measurements constitute the most comprehensive set of strong-motion recordings yet obtained for sites underlain by soft soil deposits of Holocene age within a few kilometers of the crustal rupture zone. The recordings, obtained on or near many important structures, provide an important new empirical data set for evaluating input ground motion levels and site amplification factors for codes and site-specific design procedures worldwide. This report describes the data used to prepare a preliminary map summarizing the strong motion data in relation to seismicity and underlying geology (Wentworth, Borcherdt, and Mark, 1995; Figure 1, hereafter referred to as Figure 1/I). The map shows station locations, peak acceleration values, and generalized acceleration contours superimposed on pertinent seismicity and the geologic map of Japan. The map (Figure 1/I) indicates a zone of high acceleration with ground motions throughout the zone greater than 400 gal and locally greater than 800 gal. This zone encompasses the area of most intense damage mapped as JMA intensity level 7, which extends through Kobe City. The zone of most intense damage is parallel to, but displaced slightly from, the surface projection of the crustal rupture zone implied by aftershock locations. The zone is underlain by soft-soil deposits of Holocene age.

  11. A new microwave EB accelerator for radiation processing

    NASA Astrophysics Data System (ADS)

    Cracknell, P. J.

    1995-02-01

    A new high beam power microwave electron linear accelerator, LINTEC 1020, has been built and installed for the AEA, EBIS (Harwell) Limited medical sterilisation irradiation facility. LINTEC microwave electron beam accelerator designs are based upon travelling wave RF structures working at 1300 MHz, with beam powers from 10 to 45 kW at 5 to 12 MeV. The accelerator design, installation and operating details are described together with the performance characteristics of alternative equipment.

  12. A nonlinear particle dynamics map of wakefield acceleration in a linear collider

    SciTech Connect

    Tajima, T.; Cheshkov, S.; Horton, W.; Yokoya, K.

    1998-08-01

    The performance of a wakefield accelerator in a high energy collider application is analyzed. In order to carry out this task, it is necessary to construct a strawman design system (no matter how preliminary) and build a code for the systems approach. A nonlinear dynamics map built on a simple theoretical model of the wakefield generated by the laser pulse (or whatever other method) is obtained, and they employ this as a base for building a system with multiple stages (and components) as a high energy collider. The crucial figures of merit for such a system, other than the final energy, include the emittance (which determines the luminosity). The more complex the system is, the more opportunities the system has to degrade the emittance (or increase the entropy of the beam). Thus the map guides one to identify where the crucial elements lie that affect the emittance. They find that the strong focusing force of the wakefield, coupled with a possible jitter of the axis (or laser aiming) of each stage and a spread in the betatron frequencies arising from different phase-space positions of individual particles, leads to phase-space mixing. This sensitively controls the emittance degradation. They show that in the case of a uniform plasma the emittance growth is large and may cause serious problems. They discuss possibilities to avoid it and control the situation.
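The phase-mixing mechanism this record describes can be illustrated with a toy tracking map: each stage is modeled as a linear phase-space rotation whose axis jitters randomly, and a per-particle betatron tune spread decoheres the resulting centroid kicks into rms emittance growth. The phase advance, jitter amplitude, and tune spread below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rms_emittance(x, xp):
    """Statistical rms emittance of a particle distribution in (x, x') phase space."""
    return np.sqrt(np.linalg.det(np.cov(x, xp)))

def track(n_stages=50, n_particles=2000, jitter=0.0, tune_spread=0.0, seed=0):
    """Push particles through identical focusing stages modeled as phase-space
    rotations. `jitter` randomly offsets each stage's axis; `tune_spread` gives
    each particle a slightly different betatron phase advance per stage, so the
    coherent kicks from the offsets filament into emittance growth."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1e-6, n_particles)   # transverse position (normalized units)
    xp = rng.normal(0.0, 1e-6, n_particles)  # transverse angle
    mu = 0.3 * 2 * np.pi                     # nominal phase advance per stage
    dmu = tune_spread * rng.normal(0.0, 1.0, n_particles)  # per-particle spread
    for _ in range(n_stages):
        dx = jitter * rng.normal()           # random axis offset of this stage
        c, s = np.cos(mu + dmu), np.sin(mu + dmu)
        x_new = c * (x - dx) + s * xp + dx   # rotate about the displaced axis
        xp = -s * (x - dx) + c * xp
        x = x_new
    return rms_emittance(x, xp)
```

With zero jitter and zero tune spread the rotation preserves the emittance exactly; switching both on grows it by roughly half the accumulated squared offsets, which is the sensitivity the abstract points to.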

  13. Accelerating the Next Generation Long Read Mapping with the FPGA-Based System.

    PubMed

    Chen, Peng; Wang, Chao; Li, Xi; Zhou, Xuehai

    2014-01-01

    Comparing newly determined sequences against the subject sequences stored in databases is a critical job in bioinformatics. Fortunately, recent surveys report that the state-of-the-art aligners are already fast enough to handle the huge number of short sequence reads in a reasonable time. However, for aligning the long sequence reads (>400 bp) generated by next generation sequencing (NGS) technology, present aligners are still quite inefficient. Furthermore, the challenge becomes more and more serious as the lengths and the amounts of the sequence reads both keep increasing with the improvement of sequencing technology. Thus, it is extremely urgent for researchers to enhance the performance of long read alignment. In this paper, we propose a novel FPGA-based system to improve the efficiency of long read mapping. Compared to the state-of-the-art long read aligner BWA-SW, our accelerating platform achieves high performance with almost the same sensitivity. Experiments demonstrate that, for reads with lengths ranging from 512 up to 4,096 base pairs, the described system obtains a 10x-48x speedup for the bottleneck of the software. As to the whole mapping procedure, the FPGA-based platform achieves a 1.8x-3.3x speedup versus the BWA-SW aligner, reducing the alignment time from weeks to days. PMID:26356857

  14. Development and validation of a processing map for zirconium alloys

    NASA Astrophysics Data System (ADS)

    Narayana Murty, S. V. S.; Nageswara Rao, B.; Kashyap, B. P.

    2002-09-01

    For the development of processing maps for zirconium alloys, a simple instability condition based on Ziegler's continuum principles, as applied to large plastic flow, is extended to delineate the regions of unstable metal flow and the occurrence of fracture or defects, utilizing the flow stress data of Zr-2.5Nb-0.5Cu. An attempt is made to fit the measured flow stress data to a constitutive equation useful in finite element process models. Instability maps at different strain levels were superimposed while delineating the unstable regions in the processing maps. This approach takes into account the dependence of plastic instability during hot deformation on the strain rate sensitivity and strain hardening coefficient of the material. The applicability of the developed processing map has been examined by comparison with the reported microstructural observations of deformed compression specimens of various zirconium alloys. It is found that the processing map is practically usable in the real fabrication process for zirconium alloys.

  15. Experimental quantum process tomography of non-trace-preserving maps

    SciTech Connect

    Bongioanni, Irene; Sansoni, Linda; Sciarrino, Fabio; Mataloni, Paolo; Vallone, Giuseppe

    2010-10-15

    The ability to fully reconstruct quantum maps is a fundamental task of quantum information, in particular when coupling with the environment and experimental imperfections of devices are taken into account. In this context, we carry out a quantum process tomography approach for a set of non-trace-preserving maps. We introduce an operator P to characterize the state-dependent probability of success for the process under investigation. We also evaluate the result of approximating the process with a trace-preserving one.

  16. Monitoring oil displacement processes with k-t accelerated spin echo SPI.

    PubMed

    Li, Ming; Xiao, Dan; Romero-Zerón, Laura; Balcom, Bruce J

    2016-03-01

    Magnetic resonance imaging (MRI) is a robust tool to monitor oil displacement processes in porous media. Conventional MRI measurement times can be lengthy, which hinders monitoring time-dependent displacements. Knowledge of the oil and water microscopic distribution is important because their pore scale behavior reflects the oil trapping mechanisms. The oil and water pore scale distribution is reflected in the magnetic resonance T2 signal lifetime distribution. In this work, a pure phase-encoding MRI technique, spin echo SPI (SE-SPI), was employed to monitor oil displacement during water flooding and polymer flooding. A k-t acceleration method, with low-rank matrix completion, was employed to improve the temporal resolution of the SE-SPI MRI measurements. Comparison to conventional SE-SPI T2 mapping measurements revealed that the k-t accelerated measurement was more sensitive and provided higher-quality results. It was demonstrated that the k-t acceleration decreased the average measurement time from 66.7 to 20.3 min in this work. A perfluorinated oil, containing no 1H, and H2O brine were employed to distinguish oil and water phases in model flooding experiments. High-quality 1D water saturation profiles were acquired from the k-t accelerated SE-SPI measurements. Spatially and temporally resolved T2 distributions were extracted from the profile data. The shift in the 1H T2 distribution of water in the pore space to longer lifetimes during water flooding and polymer flooding is consistent with increased water content in the pore space. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26626141
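The low-rank matrix completion step mentioned in this record can be sketched as an alternating projection between a fixed-rank approximation and consistency with the sampled entries. The synthetic space-time matrix, sampling fraction, and rank below are illustrative assumptions, not the acquisition parameters used in the paper:

```python
import numpy as np

def complete_lowrank(measured, mask, rank, n_iter=200):
    """Fill in unsampled entries of a space-time matrix by alternating between a
    truncated-SVD (fixed-rank) projection and re-imposing the measured data."""
    X = np.where(mask, measured, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # project onto rank-r matrices
        X = np.where(mask, measured, X)           # keep sampled entries exact
    return X

# Toy demonstration: a rank-2 "space x time" matrix, 60% randomly sampled.
rng = np.random.default_rng(0)
truth = rng.normal(size=(24, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(truth.shape) < 0.6
rec = complete_lowrank(truth, mask, rank=2)
err = np.linalg.norm(rec - truth) / np.linalg.norm(truth)
```

Because successive temporal frames of a slow displacement are highly correlated, the full k-t data matrix is approximately low rank, which is what lets an undersampled (hence faster) acquisition be completed this way.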

  17. An Unusual Process of Accelerated Weathering of a Marly Limestone

    NASA Astrophysics Data System (ADS)

    Ercoli, L.; Rizzo, G.; Algozzini, G.

    2003-04-01

    This work deals with a singular case of stone deterioration, which occurred during the restoration of the Cathedral of Cefalù. In particular, a significant process of stone decohesion started after a consolidation treatment on ashlars of the external face of the cloister portico. A study was carried out to characterize the stone and to investigate the deterioration process. Petrographical, chemical and physical analyses were performed on samples taken from the wall. The results indicate that the medieval monument was built using a Pliocene marly limestone, called "trubo", quarried from outcrops in the environs of Cefalù. The rock is soft and uniformly cemented. The carbonatic fraction of the rock is due to foraminifera shells; the rock also contains detritic quartz, feldspar and glauconite. The clay minerals, mainly illite and montmorillonite, are widespread in the rock in the form of thin layers. The use of such a stone in a building of relevant artistic value is definitely unusual. In fact, "trubo" is a rock subject to natural decay because of its mineralogical composition and fabric; as an effect of natural weathering, the rock in the outcrops disaggregates uniformly, producing silt. In the cloister this effect was magnified by extreme environmental conditions (marine spray, severe excursions of both relative humidity and temperature). Furthermore, after soluble salt removal and subsequent consolidation with ethyl silicate, a significant acceleration of the decay process was observed, producing detachment of friable scales to a depth of about 3 cm into the ashlars. The stone appeared corroded and uneven. Experimental tests were performed in the laboratory in order to identify any incompatibility between the stone composition and the treatments carried out, which are nevertheless among the most widely adopted in restoration interventions.

  18. Analyzing Collision Processes with the Smartphone Acceleration Sensor

    ERIC Educational Resources Information Center

    Vogt, Patrik; Kuhn, Jochen

    2014-01-01

    It has been illustrated several times how the built-in acceleration sensors of smartphones can be used gainfully for quantitative experiments in school and university settings (see the overview in Ref. 1 ). The physical issues in that case are manifold and apply, for example, to free fall, radial acceleration, several pendula, or the exploitation…

  19. Investigation of purification process stresses on erythropoietin peptide mapping profile

    PubMed Central

    Sepahi, Mina; Kaghazian, Hooman; Hadadian, Shahin; Norouzian, Dariush

    2015-01-01

    Background: Full compliance of a recombinant protein peptide mapping chromatogram with the standard reference material is one of the most basic quality control tests of biopharmaceuticals. A single amino acid substitution or side chain modification for a given peptide changes protein hydrophobicity and causes peak shape or retention time alteration in a peptide mapping assay. In this work, the effects of different stresses during the recombinant erythropoietin (EPO) purification process, including pH 4, pH 5, and room temperature, were checked on the product peptide mapping results. Materials and Methods: Cell culture harvest was purified under stress by different chromatographic techniques consisting of gel filtration, anion exchange, concentration by ultrafiltration, and high resolution size exclusion chromatography. To induce further pH stress, the purified EPO was exposed to pH 4 and pH 5 by exchanging buffer with a 10 kDa dialysis sac overnight. The effects of temperature and partial deglycosylation (acid hydrolysis) on purified EPO were also studied by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and peptide mapping analysis. Removal of sialic acid by mild hydrolysis was performed by exposure to two molar acetic acid at 80°C for 3 h. Results: No significant difference was observed between intact and stressed erythropoietin peptide mapping profiles and SDS-PAGE results. To validate the sensitivity of the technique, erythropoietin was partially acid hydrolyzed, and significant changes in the chromatographic peptide map of the intact form and a reduction in its molecular weight were detected, indicating partial deglycosylation. Conclusions: The purification process does not alter the peptide mapping profile, and purification process stresses are not the cause of peptide mapping noncompliance. PMID:26261816

  20. Analyzing collision processes with the smartphone acceleration sensor

    NASA Astrophysics Data System (ADS)

    Vogt, Patrik; Kuhn, Jochen

    2014-02-01

    It has been illustrated several times how the built-in acceleration sensors of smartphones can be used gainfully for quantitative experiments in school and university settings (see the overview in Ref. 1). The physical issues in that case are manifold and apply, for example, to free fall [2], radial acceleration [3], several pendula, or the exploitation of everyday contexts [6]. This paper supplements these applications and presents an experiment to study elastic and inelastic collisions. In addition to the masses of the two impact partners, their velocities before and after the collision are of importance, and these velocities can be determined by numerical integration of the measured acceleration profile.
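The numerical integration of the acceleration profile mentioned in this record can be sketched as follows; the sampling rate, contact time, and half-sine pulse shape are illustrative assumptions, not values from the paper:

```python
import numpy as np

def velocity_from_acceleration(t, a, v0=0.0):
    """Cumulative trapezoidal integration of a sampled acceleration profile a(t)."""
    dv = 0.5 * (a[1:] + a[:-1]) * np.diff(t)  # trapezoid on each sample interval
    return np.concatenate(([v0], v0 + np.cumsum(dv)))

# Toy smartphone record: a half-sine contact pulse reverses the phone's
# velocity in an elastic bounce (all parameters assumed, for illustration).
fs = 400.0                               # sampling rate, Hz
t = np.arange(0.0, 0.05, 1.0 / fs)       # 50 ms of contact time
v_before, v_after = -1.2, 1.2            # m/s: elastic velocity reversal
pulse = np.sin(np.pi * t / t[-1])        # unscaled pulse shape
area = velocity_from_acceleration(t, pulse)[-1]
a = pulse * (v_after - v_before) / area  # scale so the pulse integrates to dv
v = velocity_from_acceleration(t, a, v0=v_before)
```

Integrating the recorded a(t) over the contact interval yields the velocity change of each impact partner, from which (given the masses) momentum conservation and the elasticity of the collision can be checked.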

  1. Intelligent process mapping through systematic improvement of heuristics

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.

    1992-01-01

    The present system for the automatic learning and evaluation of novel heuristic methods, applicable to the mapping of sets of communicating processes onto a computer network, is based on testing a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new post-game analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.

  2. Learning process mapping heuristics under stochastic sampling overheads

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Wah, Benjamin W.

    1991-01-01

    A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to obtain the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. The statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.

  3. Heralded processes on continuous-variable spaces as quantum maps

    SciTech Connect

    Ferreyrol, Franck; Spagnolo, Nicolò; Blandino, Rémi; Barbieri, Marco; Tualle-Brouri, Rosa

    2014-12-04

    Heralding processes, which succeed only when a measurement on part of the system gives the desired result, are particularly interesting for continuous variables. They permit non-Gaussian transformations that are necessary for several continuous-variable quantum information tasks. However, while maps and quantum process tomography are commonly used to describe quantum transformations in discrete-variable spaces, they are much rarer in the continuous-variable domain. Also, no convenient tool for representing maps in a way better adapted to the particularities of continuous variables has yet been explored. In this paper we try to fill this gap by presenting such a tool.

  4. Quantum stochastic processes for maps on Hilbert C*-modules

    SciTech Connect

    Heo, Jaeseong; Ji, Un Cig

    2011-05-15

    We discuss pairs (φ, Φ) of maps, where φ is a map between C*-algebras and Φ is a φ-module map between Hilbert C*-modules, which are a generalization of representations of Hilbert C*-modules. A covariant version of Stinespring's theorem for such a pair (φ, Φ) is established, and quantum stochastic processes constructed from pairs ({φ_t}, {Φ_t}) of families of such maps are studied. We prove that the quantum stochastic process J = {J_t} constructed from a φ-quantum dynamical semigroup Φ = {Φ_t} is a j-map for the quantum stochastic process j = {j_t} constructed from the given quantum dynamical semigroup φ = {φ_t}, and that J is covariant if the φ-quantum dynamical semigroup Φ is covariant.

  5. Anomalous/Fractional Diffusion in Particle Acceleration Processes.

    NASA Astrophysics Data System (ADS)

    Bian, Nicolas

    2016-07-01

    This talk reviews a number of theoretical aspects of the relation between stochastic acceleration and anomalous/fractional transport of particles. As a matter of fact, anomalous velocity-space diffusion is required within any stochastic acceleration scenario to explain the formation of the ubiquitous power-law tail of non-thermal particles, as observed e.g. in the accelerated distribution of electrons during solar flares. I will establish a classification scheme for stochastic acceleration models involving turbulence in magnetized plasmas. This classification takes into account both the properties of the accelerating electromagnetic field and the nature of the spatial transport (possibly fractional) of charged particles in the acceleration region. I will also discuss recent attempts to obtain spatially non-local and fractional diffusion equations directly from first principles, starting either from the Fokker-Planck equation in the large mean-free-path regime or from the Boltzmann equation involving velocity-space relaxation toward the kappa distribution instead of the standard Maxwellian distribution.

  6. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  7. Use of transfer maps for modeling beam dynamics in a nonscaling fixed-field alternating-gradient accelerator

    NASA Astrophysics Data System (ADS)

    Giboudot, Y.; Wolski, A.

    2012-04-01

    Transfer maps for magnetic components are fundamental to studies of beam dynamics in accelerators. In the work presented here, transfer maps are computed in Taylor form for a particle moving through any specified magnetostatic field by applying an explicit symplectic integrator in a differential algebra code. The techniques developed are illustrated by their application to study the beam dynamics in the electron model for many applications (EMMA), the first nonscaling fixed-field alternating-gradient accelerator ever built. The EMMA lattice has 4 degrees of freedom (strength and transverse position of each of the two quadrupoles in each periodic cell). Transfer maps may be used to predict efficiently the dynamics in any lattice configuration. The transfer map is represented by a mixed variable generating function, obtained by interpolation between the maps for a set of reference configurations: use of mixed variable generating functions ensures the symplecticity of the map. An optimization routine uses the interpolation technique to look for a lattice defined by four constraints on the time of flight at different beam energies. This provides a way to determine the lattice configuration required to produce the desired dynamical characteristics. These tools are benchmarked against data from the recent EMMA commissioning.
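The symplecticity guarantee from a mixed-variable generating function mentioned in this record can be stated compactly. In standard accelerator-physics notation (not taken from the paper), an F2-type generating function of the old coordinates q and new momenta P defines the map implicitly:

```latex
% F2-type (mixed-variable) generating function F_2(q, P):
% old coordinates q, old momenta p, new coordinates Q, new momenta P.
p_i = \frac{\partial F_2(q, P)}{\partial q_i},
\qquad
Q_i = \frac{\partial F_2(q, P)}{\partial P_i}
```

Any sufficiently smooth F2 with an invertible mixed Hessian yields a symplectic map by construction, which is why interpolating the generating function between reference lattice configurations preserves symplecticity, whereas interpolating truncated Taylor-map coefficients directly generally does not.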

  8. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    SciTech Connect

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Serianni, G.; Nakano, H.; Takeiri, Y.; Tsumori, K.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  9. Accelerating molecular docking calculations using graphics processing units.

    PubMed

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2011-04-25

    The generation of molecular conformations and the evaluation of interaction potentials are common tasks in molecular modeling applications, particularly in protein-ligand or protein-protein docking programs. In this work, we present a GPU-accelerated approach capable of speeding up these tasks considerably. For the evaluation of interaction potentials in the context of rigid protein-protein docking, the GPU-accelerated approach reached speedup factors of over 50 compared to an optimized CPU-based implementation. Treating the ligand and donor groups in the protein binding site as flexible, speedup factors of up to 16 can be observed in the evaluation of protein-ligand interaction potentials. Additionally, we introduce a parallel version of our protein-ligand docking algorithm PLANTS that can take advantage of this GPU-accelerated scoring function evaluation. We compared the GPU-accelerated parallel version to the same algorithm running on the CPU and also to the highly optimized sequential CPU-based version. Depending on the ligand size and the number of rotatable bonds, speedup factors of up to 10 and 7, respectively, can be observed. Finally, a fitness landscape analysis in the context of rigid protein-protein docking was performed. Using a systematic grid-based search methodology, the GPU-accelerated version outperformed the CPU-based version with speedup factors of up to 60. PMID:21434638

  10. Understanding Metaphorical Expressions: Conventionality, Mappings, and Comparison Processes

    ERIC Educational Resources Information Center

    Lai, Vicky Tzuyin

    2009-01-01

    Metaphorical expressions appear once every twenty words in everyday language, and play a central role in communication. Some cognitive linguistic theories propose that understanding metaphorical expressions requires mappings from one conceptual domain to the other. My research uses Event-Related Potentials to examine the processing, the…

  11. Motion processing across multiple topographic maps in the electrosensory system

    PubMed Central

    Khosravi‐Hashemi, Navid; Chacron, Maurice J.

    2014-01-01

    Abstract Animals can efficiently process sensory stimuli whose attributes vary over orders of magnitude by devoting specific neural pathways to process specific features in parallel. Weakly electric fish offer an attractive model system as electrosensory pyramidal neurons responding to amplitude modulations of their self‐generated electric field are organized into three parallel maps of the body surface. While previous studies have shown that these fish use parallel pathways to process stationary stimuli, whether a similar strategy is used to process motion stimuli remains unknown to this day. We recorded from electrosensory pyramidal neurons in the weakly electric fish Apteronotus leptorhynchus across parallel maps of the body surface (centromedial, centrolateral, and lateral) in response to objects moving at velocities spanning the natural range. Contrary to previous observations made with stationary stimuli, we found that all cells responded in a similar fashion to moving objects. Indeed, all cells showed a stronger directionally nonselective response when the object moved at a larger velocity. In order to explain these results, we built a mathematical model incorporating the known antagonistic center–surround receptive field organization of these neurons. We found that this simple model could quantitatively account for our experimentally observed differences seen across E and I‐type cells across all three maps. Our results thus provide strong evidence against the hypothesis that weakly electric fish use parallel neural pathways to process motion stimuli and we discuss their implications for sensory processing in general. PMID:24760508

  12. Accelerated Schools: The Inquiry Process and the Prospects for School Change.

    ERIC Educational Resources Information Center

    Polkinghorn, Robert, Jr.; And Others

    An assessment of two pilot accelerated schools using the inquiry process model for the transformation of school culture and classroom practices in serving at-risk students is presented in this report. The inquiry process is a central feature of the accelerated school, a comprehensive school renewal initiative. The traditional approach to changing…

  13. UAV Data Processing for Large Scale Topographical Mapping

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2014-06-01

    Large scale topographical mapping in third world countries is a prominent challenge in the geospatial industry nowadays. On one side the demand is significantly increasing, while on the other hand it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr.4/yr.2011 on Geospatial Information in Indonesia, large scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Large scale topographical mapping usually relies on conventional aerial survey campaigns to provide high resolution 3D geospatial data sources. Having grown widely as a leisure hobby, aero models in the form of the so-called Unmanned Aerial Vehicle (UAV) offer an alternative, semi-photogrammetric means of aerial data acquisition suitable for a relatively small Area of Interest (AOI), i.e. <5,000 hectares. For detailed spatial planning purposes in Indonesia this area size can serve as a mapping unit, since planning usually concentrates at the sub-district (kecamatan) level. In this paper, different camera and processing software systems are analyzed to identify the optimal components of a UAV data acquisition campaign in combination with the data processing scheme. The selected AOI covers the cultural heritage site of Borobudur Temple, one of the Seven Wonders of the World. A detailed accuracy assessment concentrates in the first place on the object features of the temple. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) is integrated in order to provide Digital Elevation Models (DEM) as the main product of the topographic mapping activity. Incorporating an optimal number of GCPs in the UAV photo data processing increases the accuracy along with its high resolution of 5 cm Ground Sampling Distance (GSD). Finally this result will be used as the benchmark for alternative geospatial

  14. Mapping Perinatal Nursing Process Measurement Concepts to Standard Terminologies.

    PubMed

    Ivory, Catherine H

    2016-07-01

    The use of standard terminologies is an essential component for using data to inform practice and conduct research; perinatal nursing data standardization is needed. This study explored whether 76 distinct process elements important for perinatal nursing were present in four American Nurses Association-recognized standard terminologies. The 76 process elements were taken from a valid paper-based perinatal nursing process measurement tool. Using terminology-supported browsers, the elements were manually mapped to the selected terminologies by the researcher. A five-member expert panel validated 100% of the mapping findings. The majority of the process elements (n = 63, 83%) were present in SNOMED-CT, 28% (n = 21) in LOINC, 34% (n = 26) in ICNP, and 15% (n = 11) in CCC. SNOMED-CT and LOINC are terminologies currently recommended for use to facilitate interoperability in the capture of assessment and problem data in certified electronic medical records. Study results suggest that SNOMED-CT and LOINC contain perinatal nursing process elements and are useful standard terminologies to support perinatal nursing practice in electronic health records. Terminology mapping is the first step toward incorporating traditional paper-based tools into electronic systems. PMID:27081756

  15. Accelerating parallel transmit array B1 mapping in high field MRI with slice undersampling and interpolation by kriging.

    PubMed

    Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans

    2014-08-01

    Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, knowledge of the complex-valued B1 transmit-sensitivity of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving, and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows mapping a complete set of eight volumetric field maps of the human head in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on the Fourier transform, as well as against a B1-map interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster, with a loss of accuracy potentially limited to about 5%. PMID:24816550
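    The kriging step can be illustrated with a minimal ordinary-kriging sketch in one dimension (the exponential variogram, its parameters, and the function name are illustrative assumptions, not the paper's actual B1-map implementation):

    ```python
    import numpy as np

    def ordinary_kriging(xs, zs, x0, sill=1.0, rng=2.0):
        """Ordinary kriging estimate of z at x0 from samples (xs, zs),
        using an exponential variogram gamma(h) = sill*(1 - exp(-|h|/rng)).
        The variogram model and its parameters are illustrative choices."""
        xs = np.asarray(xs, dtype=float)
        zs = np.asarray(zs, dtype=float)
        n = len(xs)
        gamma = lambda h: sill * (1.0 - np.exp(-np.abs(h) / rng))
        # Kriging system: variograms among samples, augmented with a
        # row/column of ones for the Lagrange multiplier that enforces
        # unbiasedness (weights summing to 1).
        K = np.ones((n + 1, n + 1))
        K[:n, :n] = gamma(xs[:, None] - xs[None, :])
        K[n, n] = 0.0
        rhs = np.ones(n + 1)
        rhs[:n] = gamma(xs - x0)
        w = np.linalg.solve(K, rhs)[:n]
        return float(w @ zs)

    # Kriging is an exact interpolator: at a sample location it returns
    # the sample value itself.
    print(ordinary_kriging([0.0, 1.0, 3.0], [2.0, 4.0, 1.0], 1.0))
    ```

    The same linear-system structure extends directly to 2D or 3D field maps; only the distance computation inside the variogram changes.
    
    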

  16. Subcortical mapping of calculation processing in the right parietal lobe.

    PubMed

    Della Puppa, Alessandro; De Pellegrin, Serena; Lazzarini, Anna; Gioffrè, Giorgio; Rustemi, Oriela; Cagnin, Annachiara; Scienza, Renato; Semenza, Carlo

    2015-05-01

    Preservation of calculation processing in brain surgery is crucial for patients' quality of life. Over the last decade, surgical electrostimulation was used to identify and preserve the cortical areas involved in such processing. Conversely, subcortical connectivity among different areas implicated in this function remains unclear, and the role of surgery in this domain has not been explored so far. The authors present the first 2 cases in which the subcortical functional sites involved in calculation were identified during right parietal lobe surgery. Two patients affected by a glioma located in the right parietal lobe underwent surgery with the aid of MRI neuronavigation. No calculation deficits were detected during preoperative assessment. Cortical and subcortical mapping were performed using a bipolar stimulator. The current intensity was determined by progressively increasing the amplitude by 0.5-mA increments (from a baseline of 1 mA) until a sensorimotor response was elicited. Then, addition and multiplication calculation tasks were administered. Corticectomy was performed according to both the MRI neuronavigation data and the functional findings obtained through cortical mapping. Direct subcortical electrostimulation was repeatedly performed during tumor resection. Subcortical functional sites for multiplication and addition were detected in both patients. Electrostimulation interfered with calculation processing during cortical mapping as well. Functional sites were spared during tumor removal. The postoperative course was uneventful, and calculation processing was preserved. Postoperative MRI showed complete resection of the tumor. The present preliminary study shows for the first time how functional mapping can be a promising method to intraoperatively identify the subcortical functional sites involved in calculation processing. This report therefore supports direct electrical stimulation as a promising tool to improve the current knowledge on

  17. Accelerators for E-beam and X-ray processing

    NASA Astrophysics Data System (ADS)

    Auslender, V. L.; Bryazgin, A. A.; Faktorovich, B. L.; Gorbunov, V. A.; Kokin, E. N.; Korobeinikov, M. V.; Krainov, G. S.; Lukin, A. N.; Maximov, S. A.; Nekhaev, V. E.; Panfilov, A. D.; Radchenko, V. N.; Tkachenko, V. O.; Tuvik, A. A.; Voronin, L. A.

    2002-03-01

    In recent years the demand for pasteurization and disinsection of various food products (meat, chicken, seafood, vegetables, fruits, etc.) has increased. Treating these products on an industrial scale requires powerful electron accelerators with an energy of 5-10 MeV and a beam power of at least 50 kW. The report describes the ILU accelerators with energies up to 10 MeV and beam power up to 150 kW. The different irradiation schemes in electron beam and X-ray modes for various products are described. The design of the X-ray converter and the 90° beam bending system are also given.

  18. Microscopic Processes On Radiation from Accelerated Particles in Relativistic Jets

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Hardee, P. E.; Mizuno, Y.; Medvedev, M.; Zhang, B.; Sol, H.; Niemiec, J.; Pohl, M.; Nordlund, A.; Fredriksen, J.; Lyubarsky, Y.; Hartmann, D. H.; Fishman, G. J.

    2009-01-01

    Nonthermal radiation observed from astrophysical systems containing relativistic jets and shocks, e.g., gamma-ray bursts (GRBs), active galactic nuclei (AGNs), and Galactic microquasar systems, usually has power-law emission spectra. Recent PIC simulations of relativistic electron-ion (electron-positron) jets injected into a stationary medium show that particle acceleration occurs within the downstream jet. In collisionless relativistic shocks, particle (electron, positron, and ion) acceleration is due to plasma waves and their associated instabilities (e.g., the Buneman instability, other two-stream instabilities, and the Weibel (filamentation) instability) created in the shocks. The simulation results show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields. These magnetic fields contribute to the electrons' transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties than synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.

  19. A co-design method for parallel image processing accelerator based on DSP and FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Weng, Kaijian; Cheng, Zhao; Yan, Luxin; Guan, Jing

    2011-11-01

    In this paper, we present a co-design method for a parallel image processing accelerator based on a DSP and an FPGA. The DSP serves as the application and operation subsystem that executes complex operations and resolves algorithms into commands. The FPGA serves as the co-processing subsystem for regular data-parallel processing; operation commands and image data are transmitted to the FPGA for processing acceleration. A series of experiments has been carried out, saving up to one-half to three-quarters of the processing time, which shows that the proposed accelerator consumes less time and achieves better performance than traditional systems.

  20. Physical mapping in large genomes: accelerating anchoring of BAC contigs to genetic maps through in silico analysis.

    PubMed

    Paux, Etienne; Legeai, Fabrice; Guilhot, Nicolas; Adam-Blondon, Anne-Françoise; Alaux, Michaël; Salse, Jérôme; Sourdille, Pierre; Leroy, Philippe; Feuillet, Catherine

    2008-02-01

    Anchored physical maps represent essential frameworks for map-based cloning, comparative genomics studies, and genome sequencing projects. High throughput anchoring can be achieved by polymerase chain reaction (PCR) screening of bacterial artificial chromosome (BAC) library pools with molecular markers. However, for large genomes such as wheat, the development of high dimension pools and the number of reactions that need to be performed can be extremely large, making the screening laborious and costly. To improve the cost efficiency of anchoring in such large genomes, we have developed a new software tool named Elephant (electronic physical map anchoring tool) that combines BAC contig information generated by FingerPrinted Contig with results of BAC library pool screening to identify BAC addresses with a minimal number of PCR reactions. Elephant was evaluated during the construction of a physical map of chromosome 3B of hexaploid wheat. Results show that a one dimensional pool screening can be sufficient to anchor a BAC contig while reducing the number of PCR reactions by 384-fold, thereby demonstrating that Elephant is an efficient and cost-effective tool to support physical mapping in large genomes. PMID:18038165
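    The core deconvolution idea, intersecting one-dimensional pool screening results with contig membership, can be sketched as follows (all names and data are hypothetical; Elephant's actual logic also handles higher-dimension pools and ambiguous hits):

    ```python
    from collections import defaultdict

    def anchor_marker(positive_plates, plate_of_bac, contig_of_bac):
        """For one marker, return {contig: [candidate BACs]} keeping only
        BACs that lie on a PCR-positive plate pool. When a contig retains
        a single candidate, its BAC address is resolved from the 1D screen
        alone, with no higher-dimension pools required."""
        hits = defaultdict(list)
        for bac, plate in plate_of_bac.items():
            if plate in positive_plates:
                hits[contig_of_bac[bac]].append(bac)
        return dict(hits)

    # Hypothetical library: which plate each BAC sits on, and which
    # fingerprinted contig each BAC belongs to.
    plate_of_bac = {"BAC_A": 1, "BAC_B": 2, "BAC_C": 2, "BAC_D": 3}
    contig_of_bac = {"BAC_A": "ctg7", "BAC_B": "ctg7",
                     "BAC_C": "ctg9", "BAC_D": "ctg9"}

    # A marker whose PCR is positive only on plate pool 2:
    print(anchor_marker({2}, plate_of_bac, contig_of_bac))
    # -> {'ctg7': ['BAC_B'], 'ctg9': ['BAC_C']}
    ```
    
    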

  1. Conceptual framework for the mapping of management process with information technology in a business process.

    PubMed

    Rajarathinam, Vetrickarthick; Chellappa, Swarnalatha; Nagarajan, Asha

    2015-01-01

    This study of a component framework reveals the importance of mapping management processes to technology in a business environment. We define ERP as a software tool that should provide a business solution, but not necessarily an integration of all departments. Any business process can be classified as a management process, an operational process, or a supportive process. We examine the entire management process and identify the influencing components to be mapped to a technology for a business solution. Governance, strategic management, and decision making are thoroughly discussed, and the need to map these components to the ERP is clearly explained. We also suggest that implementation of this framework may reduce ERP failures, and in particular rectify ERP misfit. PMID:25861688

  2. Conceptual Framework for the Mapping of Management Process with Information Technology in a Business Process

    PubMed Central

    Chellappa, Swarnalatha; Nagarajan, Asha

    2015-01-01

    This study of a component framework reveals the importance of mapping management processes to technology in a business environment. We define ERP as a software tool that should provide a business solution, but not necessarily an integration of all departments. Any business process can be classified as a management process, an operational process, or a supportive process. We examine the entire management process and identify the influencing components to be mapped to a technology for a business solution. Governance, strategic management, and decision making are thoroughly discussed, and the need to map these components to the ERP is clearly explained. We also suggest that implementation of this framework may reduce ERP failures, and in particular rectify ERP misfit. PMID:25861688

  3. Next generation tools to accelerate the synthetic biology process.

    PubMed

    Shih, Steve C C; Moraes, Christopher

    2016-05-16

    Synthetic biology follows the traditional engineering paradigm of designing, building, testing and learning to create new biological systems. While such approaches have enormous potential, major challenges still exist in this field including increasing the speed at which this workflow can be performed. Here, we present recently developed microfluidic tools that can be used to automate the synthetic biology workflow with the goal of advancing the likelihood of producing desired functionalities. With the potential for programmability, automation, and robustness, the integration of microfluidics and synthetic biology has the potential to accelerate advances in areas such as bioenergy, health, and biomaterials. PMID:27146265

  4. Mapping Pixel Windows To Vectors For Parallel Processing

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    1996-01-01

    Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from a square subarray (window) of n x n pixels within an electronic imaging device containing an np x np array of pixels to a linear array of n² input terminals of an electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.
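    In software terms, the window-to-vector mapping can be sketched with NumPy (illustrative only; the article describes a hardware transistor-switch matrix performing all n² connections simultaneously):

    ```python
    import numpy as np

    def window_to_vector(image: np.ndarray, r: int, c: int, n: int) -> np.ndarray:
        """Flatten the n x n pixel window whose top-left corner is at
        (r, c) into a length-n^2 vector, mimicking the simultaneous
        window-to-terminal connections described above."""
        return image[r:r + n, c:c + n].reshape(n * n)

    image = np.arange(16).reshape(4, 4)      # a 4 x 4 "sensor"
    print(window_to_vector(image, 1, 1, 2))  # 2 x 2 window -> [5 6 9 10]
    ```

    A serial readout would visit these four pixels one at a time; the hardware scheme delivers them to the n² input terminals in a single step.
    
    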

  5. Accelerated conformational entropy calculations using graphic processing units.

    PubMed

    Zhang, Qian; Wang, Junmei; Guerrero, Ginés D; Cecilia, José M; García, José M; Li, Youyong; Pérez-Sánchez, Horacio; Hou, Tingjun

    2013-08-26

    Conformational entropy calculation, usually computed by normal-mode analysis (NMA) or quasi-harmonic analysis (QHA), is extremely time-consuming. Here, instead of NMA or QHA, a solvent accessible surface area (SASA) based model was employed to compute the conformational entropy, and a new fast GPU-based method called MURCIA (Molecular Unburied Rapid Calculation of Individual Areas) was implemented to accelerate the calculation of SASA for each atom. MURCIA employs two different kernels to determine the neighbors of each atom. The first kernel (K1) uses brute force to find the neighbors of each atom, while the second one (K2) uses an advanced algorithm involving hardware interpolation via the GPU texture memory unit. These two kernels yield very similar results, and each has its own advantages depending on the protein size: K1 performs better than K2 when the protein is small, and vice versa. The algorithm was extensively evaluated on four protein data sets and achieves good results for all of them. This GPU-accelerated version is ∼600 times faster than the former sequential algorithm when the number of atoms in a protein is up to 10⁵. PMID:23862733
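    The brute-force neighbor search performed by the K1 kernel can be sketched on the CPU with NumPy (function and parameter names, and the probe radius, are assumptions; the actual MURCIA kernels run on the GPU):

    ```python
    import numpy as np

    def neighbors_brute_force(coords, radii, probe=1.4):
        """For each atom, list the atoms whose solvent-augmented spheres
        (radius + probe) overlap its own -- the all-pairs distance test
        that a brute-force SASA neighbor kernel performs."""
        coords = np.asarray(coords, dtype=float)
        radii = np.asarray(radii, dtype=float) + probe
        # All-pairs distances via broadcasting: shape (n, n)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        cutoff = radii[:, None] + radii[None, :]
        mask = (d < cutoff) & ~np.eye(len(coords), dtype=bool)
        return [np.nonzero(row)[0].tolist() for row in mask]

    # Three atoms on a line, 3 A and 10 A apart, radius 1 A each:
    # with probe 1.4 the overlap cutoff is 4.8 A, so only the first
    # pair are neighbors.
    print(neighbors_brute_force([[0, 0, 0], [3, 0, 0], [10, 0, 0]],
                                [1.0, 1.0, 1.0]))
    # -> [[1], [0], []]
    ```

    On a GPU this same O(n²) test parallelizes naturally, one thread per atom pair, which is why K1 wins for small proteins before the O(n²) cost dominates.
    
    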

  6. Estimating and mapping ecological processes influencing microbial community assembly

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  7. Estimating and mapping ecological processes influencing microbial community assembly

    PubMed Central

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-01-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725

  8. Estimating and mapping ecological processes influencing microbial community assembly

    DOE PAGESBeta

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  9. Polymeric flocculants processing by accelerated electron beams and microwave heating

    NASA Astrophysics Data System (ADS)

    Martin, Diana I.; Mateescu, Elena; Craciun, Gabriela; Ighigeanu, Daniel; Ighigeanu, Adelina

    2002-08-01

    Results obtained by accelerated electron beam, microwave and simultaneous microwave and electron beam application in the chemistry of acrylamide and acrylic acid copolymers (polymeric flocculants used for wastewater treatment) are presented. Comparative results concerning the molecular weight and Huggins' constant for the acrylamide and acrylic acid copolymers obtained by classical heating, microwave heating, electron beam irradiation and simultaneous microwave and electron beam treatment are reported. Microwave heating produces high water solubility of the polymeric flocculants but median molecular weight values. Electron beam irradiation gives high molecular weight values but associated with a cross-linked structure (poor water solubility) while microwave energy addition to electron beam energy gives simultaneously high molecular weight values and high water solubility.

  10. Application of quantitative trait locus mapping and transcriptomics to studies of the senescence-accelerated phenotype in rats

    PubMed Central

    2014-01-01

    Background Etiology of complex disorders, such as cataract and neurodegenerative diseases including age-related macular degeneration (AMD), remains poorly understood due to the paucity of animal models, fully replicating the human disease. Previously, two quantitative trait loci (QTLs) associated with early cataract, AMD-like retinopathy, and some behavioral aberrations in senescence-accelerated OXYS rats were uncovered on chromosome 1 in a cross between OXYS and WAG rats. To confirm the findings, we generated interval-specific congenic strains, WAG/OXYS-1.1 and WAG/OXYS-1.2, carrying OXYS-derived loci of chromosome 1 in the WAG strain. Both congenic strains displayed early cataract and retinopathy but differed clinically from OXYS rats. Here we applied a high-throughput RNA sequencing (RNA-Seq) strategy to facilitate nomination of the candidate genes and functional pathways that may be responsible for these differences and can contribute to the development of the senescence-accelerated phenotype of OXYS rats. Results First, the size and map position of QTL-derived congenic segments were determined by comparative analysis of coding single-nucleotide polymorphisms (SNPs), which were identified for OXYS, WAG, and congenic retinal RNAs after sequencing. The transferred locus was not what we expected in WAG/OXYS-1.1 rats. In rat retina, 15442 genes were expressed. Coherent sets of differentially expressed genes were identified when we compared RNA-Seq retinal profiles of 20-day-old WAG/OXYS-1.1, WAG/OXYS-1.2, and OXYS rats. The genes most different in the average expression level between the congenic strains included those generally associated with the Wnt, integrin, and TGF-β signaling pathways, widely involved in neurodegenerative processes. Several candidate genes (including Arhgap33, Cebpg, Gtf3c1, Snurf, Tnfaip3, Yme1l1, Cbs, Car9 and Fn1) were found to be either polymorphic in the congenic loci or differentially expressed between the strains. These genes may

  11. Development of A Real-Time Shaking Map System Using Low Cost Acceleration Sensors and Its Application for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Kuo, K.; Wu, Y.; Lin, T.; Hsiao, N.; Chen, D.

    2013-12-01

    Real-time signals from the Palert (low-cost MEMS acceleration sensor) network are used to develop a real-time shaking map system based on the Earthworm environment in Taiwan. This system provides a real-time intensity map and estimates magnitude from the shaking covered area, without an earthquake location process. In order to derive an empirical strong motion attenuation relationship between the shaking covered areas and their corresponding earthquake magnitudes, we collected the strong motion records from 42 large crustal earthquakes recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) stations. The shaking covered area (A) at a given peak ground acceleration (PGA) is related to the magnitude MA as follows: MA = 0.002*(PGA)*log(A) + 0.279*log(A) + 4.236. Records of three earthquakes are used to test the system performance: the Hualien earthquake (ML 5.6, MW 5.5) of 7 March 2013, the Nantou earthquake (ML 6.1, MW 5.9) of 27 March 2013, and the Nantou earthquake (ML 6.3, MW 6.2) of 2 June 2013. Results show that the first report could be provided about 10 seconds after earthquake occurrence, with magnitudes reported as 5.7, 5.7 and 5.8 for the Hualien and two Nantou events, respectively. Stable reports could be obtained about 20, 19 and 17 seconds after earthquake occurrence, with magnitudes of 5.5, 5.9 and 6.0, respectively. Based on these results, the real-time shaking map system can rapidly provide a shaking map and estimate earthquake magnitude within one minute, or even tens of seconds. It will play an important role in seismic hazard mitigation.
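    The quoted attenuation relationship can be turned into a small calculator (the units of PGA and A and the use of the base-10 logarithm are assumptions, as the abstract does not state them):

    ```python
    import math

    def magnitude_from_covered_area(pga: float, area: float) -> float:
        """Estimate magnitude M_A from the area A covered by shaking at a
        given PGA threshold, using the relationship quoted in the abstract:
        M_A = 0.002*PGA*log(A) + 0.279*log(A) + 4.236.
        Units (PGA in gal, A in km^2) and the base-10 logarithm are
        assumptions not stated in the abstract."""
        log_a = math.log10(area)
        return 0.002 * pga * log_a + 0.279 * log_a + 4.236

    # Example: shaking above 80 gal covering 1000 km^2
    # log10(1000) = 3 -> 0.002*80*3 + 0.279*3 + 4.236 = 5.553
    print(round(magnitude_from_covered_area(80.0, 1000.0), 3))  # -> 5.553
    ```

    Because the estimate needs only the covered area at a PGA threshold, it can be updated continuously as stations report, which is how the system converges on a stable magnitude within tens of seconds.
    
    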

  12. Blocking the association of HDAC4 with MAP1S accelerates autophagy clearance of mutant Huntingtin

    PubMed Central

    Yue, Fei; Li, Wenjiao; Zou, Jing; Chen, Qi; Xu, Guibin; Huang, Hai; Xu, Zhen; Zhang, Sheng; Gallinari, Paola; Wang, Fen; McKeehan, Wallace L.; Liu, Leyuan

    2015-01-01

    Autophagy controls and executes the turnover of abnormally aggregated proteins. MAP1S interacts with the autophagy marker LC3 and positively regulates autophagy flux. HDAC4 associates with the aggregation-prone mutant huntingtin protein (mHTT) that causes Huntington's disease, and colocalizes with it in cytosolic inclusions. It was suggested HDAC4 interacts with MAP1S in a yeast two-hybrid screening. Here, we found that MAP1S interacts with HDAC4 via a HDAC4-binding domain (HBD). HDAC4 destabilizes MAP1S, suppresses autophagy flux and promotes the accumulation of mHTT aggregates. This occurs by an increase in the deacetylation of the acetylated MAP1S. Either suppression of HDAC4 with siRNA or overexpression of the MAP1S HBD leads to stabilization of MAP1S, activation of autophagy flux and clearance of mHTT aggregates. Therefore, specific interruption of the HDAC4-MAP1S interaction with short peptides or small molecules to enhance autophagy flux may relieve the toxicity of mHTT associated with Huntington's disease and improve symptoms of HD patients. PMID:26540094

  13. Computational Tools for Accelerating Carbon Capture Process Development

    SciTech Connect

    Miller, David

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  14. Accelerate!

    PubMed

    Kotter, John P

    2012-11-01

    The old ways of setting and implementing strategy are failing us, writes the author of Leading Change, in part because we can no longer keep up with the pace of change. Organizational leaders are torn between trying to stay ahead of increasingly fierce competition and needing to deliver this year's results. Although traditional hierarchies and managerial processes--the components of a company's "operating system"--can meet the daily demands of running an enterprise, they are rarely equipped to identify important hazards quickly, formulate creative strategic initiatives nimbly, and implement them speedily. The solution Kotter offers is a second system--an agile, networklike structure--that operates in concert with the first to create a dual operating system. In such a system the hierarchy can hand off the pursuit of big strategic initiatives to the strategy network, freeing itself to focus on incremental changes to improve efficiency. The network is populated by employees from all levels of the organization, giving it organizational knowledge, relationships, credibility, and influence. It can liberate information from silos with ease. It has a dynamic structure free of bureaucratic layers, permitting a level of individualism, creativity, and innovation beyond the reach of any hierarchy. The network's core is a guiding coalition that represents each level and department in the hierarchy, with a broad range of skills. Its drivers are members of a "volunteer army" who are energized by and committed to the coalition's vividly formulated, high-stakes vision and strategy. Kotter has helped eight organizations, public and private, build dual operating systems over the past three years. He predicts that such systems will lead to long-term success in the 21st century--for shareholders, customers, employees, and companies themselves. PMID:23155997

  15. Acceleration of the GAMESS-UK electronic structure package on graphical processing units.

    PubMed

    Wilkinson, Karl A; Sherwood, Paul; Guest, Martyn F; Naidoo, Kevin J

    2011-07-30

    The approach used to calculate two-electron integrals by many electronic structure packages, including the Generalized Atomic and Molecular Electronic Structure System-UK (GAMESS-UK), was designed for CPU-based compute units. We redesigned the two-electron compute algorithm for acceleration on a graphical processing unit (GPU). We report the acceleration strategy and illustrate it on the (ss|ss) type integrals. This strategy is general for Fortran-based codes and uses the Accelerator compiler from Portland Group International and GPU-based accelerators from Nvidia. The evaluation of (ss|ss) type integrals within calculations using Hartree-Fock ab initio methods and density functional theory is accelerated by single and quad GPU hardware systems by factors of 43 and 153, respectively. The overall speedup for a single self-consistent field cycle is at least a factor of eight on a single GPU compared with a single CPU. PMID:21541963

  16. Using pattern enumeration to accelerate process development and ramp yield

    NASA Astrophysics Data System (ADS)

    Zhuang, Linda; Pang, Jenny; Xu, Jessy; Tsai, Mengfeng; Wang, Amy; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua

    2016-03-01

    During the setup phase of a new technology node, foundries do not initially have enough product chip designs to conduct exhaustive process development, so different operational teams use manually designed simple test keys to set up their process flows and recipes. When the first version of the design rule manual (DRM) is ready, foundries enter the process development phase, where new experimental design data is created manually based on these design rules. However, these IP/test keys contain very uniform or simple design structures. Such designs normally do not contain critical design structures or process-unfriendly design patterns that pass design rule checks but prove less manufacturable. A method is therefore needed to generate, at the development stage, exhaustive test patterns allowed by the design rules, in order to verify the gap between design rules and process. This paper presents a novel method for generating test key patterns that contain known problematic patterns as well as any constructs designers could possibly draw under the current design rules. The enumerated test key patterns contain the most critical design structures allowed by any particular design rule. A layout profiling method is used for design chip analysis, finding potential weak points on new incoming products so the fab can take preemptive action to avoid yield loss. This is achieved by comparing different products and leveraging knowledge learned from previously manufactured chips to find possible yield detractors.

  17. Gaussian process style transfer mapping for historical Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Feng, Jixiong; Peng, Liangrui; Lebourgeois, Franck

    2015-01-01

    Historical Chinese character recognition is very important for large-scale historical document digitization, but it is a challenging problem due to the lack of labeled training samples. This paper proposes a novel non-linear transfer learning method, namely Gaussian Process Style Transfer Mapping (GP-STM). GP-STM extends traditional linear Style Transfer Mapping (STM) by using Gaussian processes and kernel methods. With GP-STM, existing printed Chinese character samples are used to help the recognition of historical Chinese characters. To demonstrate this framework, we compare feature extraction methods, train a modified quadratic discriminant function (MQDF) classifier on printed Chinese character samples, and implement the GP-STM model on Dunhuang historical documents. Various kernels and parameters are explored, and the impact of the number of training samples is evaluated. Experimental results show that accuracy increases by nearly 15 percentage points (from 42.8% to 57.5%) using GP-STM, an improvement of more than 8 percentage points (from 49.2% to 57.5%) over the STM approach.
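    The abstract gives no formulas, but the linear STM baseline that GP-STM kernelizes can be sketched as an affine feature-space map fit by regularized least squares; the fitting details below (ridge regularization, bias column) are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def fit_style_transfer_map(X_src, Y_tgt, reg=1e-3):
    """Fit an affine map y ~= A x + b by ridge-regularized least squares.
    This is only the linear STM baseline; GP-STM replaces it with a
    kernelized Gaussian-process regressor (details assumed, not in the
    abstract).
    """
    n, d = X_src.shape
    Xa = np.hstack([X_src, np.ones((n, 1))])            # append bias column
    W = np.linalg.solve(Xa.T @ Xa + reg * np.eye(d + 1), Xa.T @ Y_tgt)
    return W[:-1].T, W[-1]                              # A, b

# recover a known affine "style" map from paired source/target features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
A_true = rng.normal(size=(4, 4))
Y = X @ A_true.T + 0.5
A, b = fit_style_transfer_map(X, Y)
```

    In the recognition setting, X would hold features of historical characters and Y the corresponding printed-sample features, so mapped historical features can be classified by the printed-data MQDF classifier.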

  18. Standard fluctuation-dissipation process from a deterministic mapping

    NASA Astrophysics Data System (ADS)

    Bianucci, Marco; Mannella, Riccardo; Fan, Ximing; Grigolini, Paolo; West, Bruce J.

    1993-03-01

    We illustrate a derivation of a standard fluctuation-dissipation process from a discrete deterministic dynamical model. This model is a three-dimensional mapping, driving the motion of three variables, w, ξ, and π. We show that for suitable values of the parameters of this mapping, the motion of the variable w is indistinguishable from that of a stochastic variable described by a Fokker-Planck equation with well-defined friction γ and diffusion D. This result can be explained as follows. The bidimensional system of the two variables ξ and π is a nonlinear, deterministic, and chaotic system, with the key property of resulting in a finite correlation time for the variable ξ and in a linear response of ξ to an external perturbation. Both properties are traced back to the fully chaotic nature of this system. When this subsystem is coupled to the variable w, via a very weak coupling guaranteeing a large-time-scale separation between the two systems, the variable w is proven to be driven by a standard fluctuation-dissipation process. We call the subsystem a booster whose chaotic nature triggers the standard fluctuation-dissipation process exhibited by the variable w. The diffusion process is a trivial consequence of the central-limit theorem, whose validity is assured by the finite time scale of the correlation function of ξ. The dissipation affecting the variable w is traced back to the linear response of the booster, which is evaluated adopting a geometrical procedure based on the properties of chaos rather than the conventional perturbation approach.

  19. Accelerating COTS Middleware Acquisition: The i-Mate Process

    SciTech Connect

    Liu, Anna; Gorton, Ian

    2003-03-05

    Most major organizations now use some commercial-off-the-shelf middleware components to run their businesses. Key drivers behind this growth include ever-increasing Internet usage and the ongoing need to integrate heterogeneous legacy systems to streamline business processes. As organizations do more business online, they need scalable, high-performance software infrastructures to handle transactions and provide access to core systems.

  20. The STIRAP-based unitary decelerating and accelerating processes of a single free atom

    NASA Astrophysics Data System (ADS)

    Miao, Xijia

    2008-05-01

    The STIRAP-based unitary decelerating and accelerating processes have been constructed for the physical system of a single free atom. The present theoretical work focuses on investigating analytically how the momentum distribution of a momentum superposition state of a quantum system, such as a momentum Gaussian wave-packet state of a single freely moving atom, affects the STIRAP state transfer in these decelerating and accelerating processes. The complete STIRAP state transfer and the unitarity of these processes are emphasized in the investigation. It is shown that the momentum distribution has an important influence upon the STIRAP state-transfer efficiency. In the ideal adiabatic condition these unitary decelerating and accelerating processes are studied in detail for a freely moving atom. A general adiabatic condition for the basic STIRAP unitary decelerating and accelerating processes is also derived analytically. The unitary decelerating and accelerating processes may be used to manipulate and control, in time and space, a Gaussian wave-packet motional state of a free atom. For details, see: Xijia Miao, http://arxiv.org/abs/quant-ph/0707.0063.

  1. Optimization of accelerator parameters using normal form methods on high-order transfer maps

    SciTech Connect

    Snopok, Pavel; /Michigan State U.

    2007-05-01

    Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) Skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of the systematic skew quadrupole errors in dipoles; (b) Calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) Computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (general restrictions for that are not much stronger than the typical restrictions imposed on the behavior of the particles in the accelerator), then the motion in the new coordinates has a very clean representation, allowing one to extract more information about the dynamics of particles, and they are very convenient for purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. The algorithms used to solve the problems are specific to collider rings and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems as it gives much extra information about the disturbing factors.
In addition to the fact that the dynamics of particles is represented

  2. Accelerating radio astronomy cross-correlation with graphics processing units

    NASA Astrophysics Data System (ADS)

    Clark, M. A.; LaPlante, P. C.; Greenhill, L. J.

    2013-05-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from 'large-N' arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on NVIDIA's Fermi architecture, sustaining up to 79% of the peak single-precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared with application-specific integrated circuit (ASIC) and field programmable gate array (FPGA) implementations have the potential to greatly shorten the cycle of correlator development and deployment, for cases where some power-consumption penalty can be tolerated.
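    The core arithmetic of the X-engine, the pairwise correlation of antenna signals, can be sketched in NumPy; the frequency channelization, data tiling, and GPU kernel details the paper describes are omitted:

```python
import numpy as np

def xengine(signals):
    """Time-averaged cross-correlation matrix <s_i * conj(s_j)> for all
    antenna pairs. `signals` is a complex array of shape
    (n_antennas, n_samples). This is only the arithmetic the paper maps
    onto the GPU, not its optimized implementation.
    """
    n_samples = signals.shape[1]
    return signals @ signals.conj().T / n_samples

# two antennas receiving the same unit-power signal correlate perfectly
s = np.array([[1 + 0j, 1j], [1 + 0j, 1j]])
corr = xengine(s)
```

    In a real correlator this product is computed per frequency channel and accumulated over many short time blocks.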

  3. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    NASA Astrophysics Data System (ADS)

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-01

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  4. Graphics processing units accelerated semiclassical initial value representation molecular dynamics.

    PubMed

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-01

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly. PMID:24811627

  5. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    SciTech Connect

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  6. Accelerating chemical database searching using graphics processing units.

    PubMed

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097. PMID:21696144
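    The Elias gamma code the authors use is simple to implement: each positive integer (e.g. a gap between set-bit positions in a sparse fingerprint) becomes a self-delimiting bit string. A minimal sketch, independent of the paper's GPU implementation:

```python
def elias_gamma_encode(n):
    """Elias gamma code: (bit-length - 1) zeros, then n in binary."""
    assert n >= 1, "Elias gamma encodes positive integers only"
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def elias_gamma_decode(bits):
    """Decode a concatenation of Elias gamma codes back into integers."""
    out, i = [], 0
    while i < len(bits):
        z = 0
        while bits[i] == "0":               # count leading zeros
            z += 1
            i += 1
        out.append(int(bits[i:i + z + 1], 2))
        i += z + 1
    return out

# e.g. compress the gaps between set-bit positions of a fingerprint
stream = "".join(elias_gamma_encode(g) for g in [1, 4, 9])
```

    Small gaps get short codes, which is why gap-encoded sparse fingerprints compress well under this scheme.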

  7. Using Qualitative Observation To Document Group Processes in Accelerated Schools Training: Techniques and Results.

    ERIC Educational Resources Information Center

    McFarland, Katherine; Batten, Constance

    This paper describes the use of qualitative observation techniques for gathering and analyzing data related to group processes during an Accelerated Schools Model training session. The purposes for this research were to observe the training process in order better to facilitate present continuation and future training, to develop questions for…

  8. Integrative Acoustic Mapping Reveals Hudson River Sediment Processes and Habitats

    NASA Astrophysics Data System (ADS)

    Nitsche, F. O.; Bell, R.; Carbotte, S. M.; Ryan, W. B. F.; Slagle, A.; Chillrud, S.; Kenna, T.; Flood, R.; Ferrini, V.; Cerrato, R.; McHugh, C.; Strayer, D.

    2005-06-01

    Rivers and estuaries around the world are the focus of human settlements and activities. Needs for clean water, ecosystem preservation, commercial navigation, industrial development, and recreational access compete for the use of estuaries, and management of these resources requires a detailed understanding of estuarine morphology and sediment dynamics. This article presents an overview of the first estuary-wide study of a heavily used estuary, the Hudson River, based on high-resolution acoustic mapping of the river bottom. The integration of three high-resolution acoustic methods with extensive sampling reveals an unexpected complexity of bottom features and allows detailed classification of the benthic environment in terms of riverbed morphology, sediment type, and sedimentary processes.

  9. Accelerator Production of Tritium project process waste assessment

    SciTech Connect

    Carson, S.D.; Peterson, P.K.

    1995-09-01

    DOE has made a commitment to compliance with all applicable environmental regulatory requirements. In this respect, it is important to consider and design all tritium supply alternatives so that they can comply with these requirements. The management of waste is an integral part of this activity and it is therefore necessary to estimate the quantities and specific wastes that will be generated by all tritium supply alternatives. A thorough assessment of waste streams includes waste characterization, quantification, and the identification of treatment and disposal options. The waste assessment for APT has been covered in two reports. The first report was a process waste assessment (PWA) that identified and quantified waste streams associated with both target designs and fulfilled the requirements of APT Work Breakdown Structure (WBS) Item 5.5.2.1. This second report is an expanded version of the first that includes all of the data of the first report, plus an assessment of treatment and disposal options for each waste stream identified in the initial report. The latter information was initially planned to be issued as a separate Waste Treatment and Disposal Options Assessment Report (WBS Item 5.5.2.2).

  10. Accelerating compartmental modeling on a graphical processing unit.

    PubMed

    Ben-Shalom, Roy; Liberman, Gilad; Korngreen, Alon

    2013-01-01

    Compartmental modeling is a widely used tool in neurophysiology, but the detail and scope of such models are frequently limited by a lack of computational resources. Here we implement compartmental modeling on low-cost Graphical Processing Units (GPUs), which significantly increases simulation speed compared to NEURON. Testing two methods for solving the current diffusion equation system revealed which method is more useful for specific neuron morphologies. Regions of applicability were investigated using a range of simulations, from a single membrane potential trace simulated in a simple fork morphology to multiple traces on multiple realistic cells. A peak runtime 150-fold faster than the CPU was achieved. This application can be used for statistical analysis and data-fitting optimizations of compartmental models and may be used for simultaneously simulating large populations of neurons. Since GPUs are forging ahead and proving to be more cost-effective than CPUs, this may significantly decrease the cost of computational power and open new computational possibilities for laboratories with limited budgets. PMID:23508232

  11. Graphics processing unit acceleration of computational electromagnetic methods

    NASA Astrophysics Data System (ADS)

    Inman, Matthew

    The use of Graphical Processing Units (GPUs) for scientific applications has been evolving and expanding for the past decade. GPUs provide an alternative to the CPU for creating and executing the numerical codes often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating-point co-processors that can be programmed not only to render complex graphics but also to perform the complex mathematical calculations often encountered in scientific computing. GPUs currently in production often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations to be used in a variety of situations in which computational time has heretofore been a limiting factor, such as educational courses. Teaching electromagnetics often relies on simple example problems because of the simulation times needed to analyze more complex ones. GPU-based simulations will be shown to allow demonstrations of more advanced problems than previously feasible by adapting the methods for use on the GPU. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate various techniques and ideas previously unrealizable.

  12. Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference

    NASA Astrophysics Data System (ADS)

    Marzouk, Y.; Parno, M.

    2014-12-01

    We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—i.e., a Knothe-Rosenblatt re-arrangement—using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
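    A minimal 1-D sketch of the idea: run random-walk Metropolis in reference coordinates and push each state through an invertible transport map. Unlike the paper's adaptively constructed lower-triangular maps, the map here is a fixed, hand-chosen affine one, so everything below is illustrative only:

```python
import numpy as np

def map_preconditioned_rwm(log_target, T, log_jac, n=20000, step=1.0, seed=0):
    """Random-walk Metropolis in reference coordinates r, pushed through a
    fixed invertible map T. The chain targets the pullback density
    log_target(T(r)) + log_jac(r), so x = T(r) samples the target. The
    paper instead adapts the map from past MCMC states; this fixed-map
    1-D version is an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    r = 0.0
    lp = log_target(T(r)) + log_jac(r)
    samples = np.empty(n)
    for i in range(n):
        r_new = r + step * rng.normal()
        lp_new = log_target(T(r_new)) + log_jac(r_new)
        if np.log(rng.random()) < lp_new - lp:   # Metropolis-Hastings rule
            r, lp = r_new, lp_new
        samples[i] = T(r)
    return samples

# exact affine transport map for a N(3, 0.5^2) target
mu, sigma = 3.0, 0.5
xs = map_preconditioned_rwm(
    log_target=lambda x: -0.5 * ((x - mu) / sigma) ** 2,
    T=lambda r: mu + sigma * r,
    log_jac=lambda r: np.log(sigma),  # |T'(r)| = sigma (constant)
)
```

    With an exact map, the reference-space chain sees a standard normal regardless of the target's location and scale, which is the intuition behind the reported speedups when the map is learned adaptively.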

  13. Smartphone-based noise mapping: Integrating sound level meter app data into the strategic noise mapping process.

    PubMed

    Murphy, Enda; King, Eoin A

    2016-08-15

    The strategic noise mapping process of the EU has now been ongoing for more than ten years. However, despite the fact that a significant volume of research has been conducted on the process and related issues there has been little change or innovation in how relevant authorities and policymakers are conducting the process since its inception. This paper reports on research undertaken to assess the possibility for smartphone-based noise mapping data to be integrated into the traditional strategic noise mapping process. We compare maps generated using the traditional approach with those generated using smartphone-based measurement data. The advantage of the latter approach is that it has the potential to remove the need for exhaustive input data into the source calculation model for noise prediction. In addition, the study also tests the accuracy of smartphone-based measurements against simultaneous measurements taken using traditional sound level meters in the field. PMID:27115622

  14. Mapping seafloor volcanism and its record of tectonic processes

    NASA Astrophysics Data System (ADS)

    Kalnins, L. M.; Valentine, A. P.; Trampert, J.

    2013-12-01

    One relatively obvious surface reflection of certain types of tectonic and mantle processes is volcanic activity. Ocean covers two thirds of our planet, so naturally much of this evidence will be marine, yet the evidence of volcanic activity in the oceans remains very incompletely mapped. Many seamounts, the products of 'excess' volcanism, have been identified (10,000--20,000 over 1 km in height, depending on the study), but it is estimated that up to 60% of seamounts in this height range remain unmapped. Given the scale of the task, identification of probable seamounts is a process that clearly needs to be automated, but identifying naturally occurring features such as these is difficult because of the degree of inherent variation. A very promising avenue for these questions lies in the use of learning algorithms, such as neural networks, designed to have complex pattern recognition capabilities. Building on the work of Valentine et al. (2013), we present preliminary results of a new global seamount study based on neural network methods. Advantages of this approach include an intrinsic measure of confidence in the seamount identification and full automation, allowing easy re-picking to suit the requirements of different types of studies. Here, we examine the resulting spatial and temporal distribution of marine volcanism and consider what insights this offers into the shifting patterns of plate tectonics and mantle activity. We also consider the size distribution of the seamounts and explore possible classes based on shape and their distributions, potentially reflecting both differing formational processes and later erosional processes. Valentine, A. P., L. M. Kalnins, and J. Trampert (2013), Discovery and analysis of topographic features using learning algorithms: A seamount case study, Geophysical Research Letters, 40(12), p. 3048--3054.

  15. Representing physiological processes and their participants with PhysioMaps

    PubMed Central

    2013-01-01

    Background As the number and size of biological knowledge resources for physiology grows, researchers need improved tools for searching and integrating knowledge and physiological models. Unfortunately, current resources—databases, simulation models, and knowledge bases, for example—are only occasionally and idiosyncratically explicit about the semantics of the biological entities and processes that they describe. Results We present a formal approach, based on the semantics of biophysics as represented in the Ontology of Physics for Biology, that divides physiological knowledge into three partitions: structural knowledge, process knowledge and biophysical knowledge. We then computationally integrate these partitions across multiple structural and biophysical domains as computable ontologies by which such knowledge can be archived, reused, and displayed. Our key result is the semi-automatic parsing of biosimulation model code into PhysioMaps that can be displayed and interrogated for qualitative responses to hypothetical perturbations. Conclusions Strong, explicit semantics of biophysics can provide a formal, computational basis for integrating physiological knowledge in a manner that supports visualization of the physiological content of biosimulation models across spatial scales and biophysical domains. PMID:23735231

  16. Swarm accelerometer data processing from raw accelerations to thermospheric neutral densities

    NASA Astrophysics Data System (ADS)

    Siemes, Christian; de Teixeira da Encarnação, João; Doornbos, Eelco; van den IJssel, Jose; Kraus, Jiří; Pereštý, Radek; Grunwaldt, Ludwig; Apelbaum, Guy; Flury, Jakob; Holmdahl Olsen, Poul Erik

    2016-05-01

The Swarm satellites were launched on November 22, 2013, and carry accelerometers and GPS receivers as part of their scientific payload. The GPS receivers not only provide the position and time for the magnetic field measurements, but are also used for determining non-gravitational forces like drag and radiation pressure acting on the spacecraft. The accelerometers measure these forces directly, at much finer resolution than the GPS receivers, from which thermospheric neutral densities can be derived. Unfortunately, the acceleration measurements suffer from a variety of disturbances, the most prominent being slow temperature-induced bias variations and sudden bias changes. In this paper, we describe the new, improved four-stage processing that is applied for transforming the disturbed acceleration measurements into scientifically valuable thermospheric neutral densities. In the first stage, the sudden bias changes in the acceleration measurements are manually removed using a dedicated software tool. The second stage is the calibration of the accelerometer measurements against the non-gravitational accelerations derived from the GPS receiver, which includes the correction for the slow temperature-induced bias variations. The identification of validity periods for calibration and correction parameters is part of the second stage. In the third stage, the calibrated and corrected accelerations are merged with the non-gravitational accelerations derived from the observations of the GPS receiver by a weighted average in the spectral domain, where the weights depend on the frequency. The fourth stage consists of transforming the corrected and calibrated accelerations into thermospheric neutral densities. We present the first results of the processing of Swarm C acceleration measurements from June 2014 to May 2015. We started with Swarm C because its acceleration measurements contain far fewer disturbances than those of Swarm A and have a higher signal-to-noise ratio.
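The third stage's frequency-dependent merge can be sketched as follows. This is a minimal illustration of a spectral-domain weighted average, not the Swarm processing code; the hard low/high split and the `crossover_bin` parameter are assumptions made for the sketch.

```python
import cmath

def dft(x):
    """Naive forward DFT (O(n^2)); adequate for a short illustration."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Naive inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def merge_spectral(acc_accelerometer, acc_gps, crossover_bin):
    """Blend two equally sampled acceleration series in the frequency domain:
    take the GPS-derived series below `crossover_bin` (low frequencies) and
    the accelerometer series at and above it (high frequencies)."""
    A, G = dft(acc_accelerometer), dft(acc_gps)
    n = len(A)
    merged = []
    for k in range(n):
        f = min(k, n - k)                       # symmetric distance from DC
        w = 1.0 if f >= crossover_bin else 0.0  # hard split, for illustration
        merged.append(w * A[k] + (1.0 - w) * G[k])
    return [z.real for z in idft(merged)]
```

In a real pipeline the weights would vary smoothly with frequency rather than switching abruptly at one bin.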

  17. Quasi-steady stages in the process of premixed flame acceleration in narrow channels

    NASA Astrophysics Data System (ADS)

    Valiev, D. M.; Bychkov, V.; Akkerman, V.; Eriksson, L.-E.; Law, C. K.

    2013-09-01

    The present paper addresses the phenomenon of spontaneous acceleration of a premixed flame front propagating in micro-channels, with subsequent deflagration-to-detonation transition. It has recently been shown experimentally [M. Wu, M. Burke, S. Son, and R. Yetter, Proc. Combust. Inst. 31, 2429 (2007)], 10.1016/j.proci.2006.08.098, computationally [D. Valiev, V. Bychkov, V. Akkerman, and L.-E. Eriksson, Phys. Rev. E 80, 036317 (2009)], 10.1103/PhysRevE.80.036317, and analytically [V. Bychkov, V. Akkerman, D. Valiev, and C. K. Law, Phys. Rev. E 81, 026309 (2010)], 10.1103/PhysRevE.81.026309 that the flame acceleration undergoes different stages, from an initial exponential regime to quasi-steady fast deflagration with saturated velocity. The present work focuses on the final saturation stages in the process of flame acceleration, when the flame propagates with supersonic velocity with respect to the channel walls. It is shown that an intermediate stage may occur during acceleration with quasi-steady velocity, noticeably below the Chapman-Jouguet deflagration speed. The intermediate stage is followed by additional flame acceleration and subsequent saturation to the Chapman-Jouguet deflagration regime. We elucidate the intermediate stage by the joint effect of gas pre-compression ahead of the flame front and the hydraulic resistance. The additional acceleration is related to viscous heating at the channel walls, being of key importance at the final stages. The possibility of explosion triggering is also demonstrated.

  18. FAST Observations of Acceleration Processes in the Cusp--Evidence for Parallel Electric Fields

    NASA Technical Reports Server (NTRS)

    Pfaff, R. F., Jr.; Carlson, C.; McFadden, J.; Ergun, R.; Clemmons, J.; Klumpar, D.; Strangeway, R.

    1999-01-01

    The existence of precipitating keV ions in the Earth's cusp, originating in the magnetosheath, provides a unique means to test our understanding of particle acceleration and parallel electric fields in the lower-altitude acceleration region. On numerous occasions, the FAST (Fast Auroral Snapshot) spacecraft has encountered the Earth's cusp regions near its apogee of 4175 km, which are characterized by their signatures of dispersed keV ion injections. The FAST instruments also reveal a complex microphysics inherent to many, but not all, of the cusp regions encountered by the spacecraft, including upgoing ion beams and conics, inverted-V electrons, upgoing electron beams, and spiky DC-coupled electric fields and plasma waves. Detailed inspection of the FAST data often shows clear modulation of the precipitating magnetosheath ions, indicating that they are affected by local electric potentials. For example, the magnetosheath ion precipitation is sometimes abruptly shut off precisely in regions where downgoing localized inverted-V electrons are observed. Such observations support the existence of a localized process, such as parallel electric fields, above the spacecraft that accelerates the electrons downward and consequently impedes the ion precipitation. Other acceleration events in the cusp are sometimes organized with an apparent cellular structure that suggests Alfven waves or other large-scale phenomena are controlling the localized potentials. We examine several cusp encounters by the FAST satellite in which the modulation of energetic particle populations reveals evidence of localized acceleration, most likely by parallel electric fields.

  19. Speech processing using conditional observable maximum likelihood continuity mapping

    DOEpatents

    Hogden, John; Nix, David

    2004-01-13

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  20. Acceleration of Early-Photon Fluorescence Molecular Tomography with Graphics Processing Units

    PubMed Central

    Wang, Xin; Zhang, Bin; Cao, Xu; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-01-01

    Fluorescence molecular tomography (FMT) with early photons can improve the spatial resolution and fidelity of the reconstructed results. However, its computational scale is large, which limits its applications. In this paper, we introduce an acceleration strategy for early-photon FMT with graphics processing units (GPUs). The whole FMT solution was divided into several modules, and the time consumption of each module was studied. In this strategy, the two most time-consuming modules (the Gd and W modules) were accelerated on the GPU, while the other modules remained coded in MATLAB. Several simulation studies with a heterogeneous digital mouse atlas were performed to confirm the performance of the acceleration strategy. The results confirmed the feasibility of the strategy and showed that the processing speed was improved significantly. PMID:23606899

  1. Nanomanufacturing Portfolio: Manufacturing Processes and Applications to Accelerate Commercial Use of Nanomaterials

    SciTech Connect

    Industrial Technologies Program

    2011-01-05

    This brochure describes the 31 R&D projects that AMO supports to accelerate the commercial manufacture and use of nanomaterials for enhanced energy efficiency. These cost-shared projects seek to exploit the unique properties of nanomaterials to improve the functionality of industrial processes and products.

  2. Sampling frequency affects the processing of Actigraph raw acceleration data to activity counts.

    PubMed

    Brønd, Jan Christian; Arvidsson, Daniel

    2016-02-01

    ActiGraph acceleration data are processed through several steps (including band-pass filtering to attenuate unwanted signal frequencies) to generate the activity counts commonly used in physical activity research. We performed three experiments to investigate the effect of sampling frequency on the generation of activity counts. Ideal acceleration signals were produced in the MATLAB software. Thereafter, ActiGraph GT3X+ monitors were spun in a mechanical setup. Finally, 20 subjects performed walking and running wearing GT3X+ monitors. Acceleration data from all experiments were collected with different sampling frequencies, and activity counts were generated with the ActiLife software. With the default 30-Hz (or 60-Hz, 90-Hz) sampling frequency, the generation of activity counts was performed as intended with 50% attenuation of acceleration signals with a frequency of 2.5 Hz by the signal frequency band-pass filter. Frequencies above 5 Hz were eliminated totally. However, with other sampling frequencies, acceleration signals above 5 Hz escaped the band-pass filter to a varied degree and contributed to additional activity counts. Similar results were found for the spinning of the GT3X+ monitors, although the amount of activity counts generated was less, indicating that raw data stored in the GT3X+ monitor is processed. Between 600 and 1,600 more counts per minute were generated with the sampling frequencies 40 and 100 Hz compared with 30 Hz during running. Sampling frequency affects the processing of ActiGraph acceleration data to activity counts. Researchers need to be aware of this error when selecting sampling frequencies other than the default 30 Hz. PMID:26635347
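One plausible way to see why a filter tuned for 30 Hz misbehaves at other sampling rates: digital filter coefficients fix the cutoff as a fraction of the Nyquist frequency, so the cutoff expressed in hertz scales with the sampling rate. The sketch below is a schematic illustration of that scaling under this assumption, not ActiLife's actual filter implementation.

```python
def cutoff_hz(normalized_cutoff, fs):
    """Digital filter coefficients fix the cutoff as a fraction of the
    Nyquist frequency (fs / 2); in hertz, the cutoff therefore scales
    linearly with the sampling rate fs."""
    return normalized_cutoff * fs / 2.0

# Hypothetical filter tuned so that 2.5 Hz sees 50% attenuation at the
# default 30 Hz rate (Nyquist = 15 Hz -> normalized cutoff = 2.5 / 15)
norm = 2.5 / 15.0
print(cutoff_hz(norm, 30))    # 2.5 Hz: the intended behaviour
print(cutoff_hz(norm, 100))   # ~8.3 Hz: content above 5 Hz now passes
```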

  3. Graphics processing unit-accelerated double random phase encoding for fast image encryption

    NASA Astrophysics Data System (ADS)

    Lee, Jieun; Yi, Faliu; Saifullah, Rao; Moon, Inkyu

    2014-11-01

    We propose a fast double random phase encoding (DRPE) algorithm using a graphics processing unit (GPU)-based stream-processing model. A performance analysis of the accelerated DRPE implementation, which employs the Compute Unified Device Architecture programming environment, is presented. We show that the proposed methodology executed on a GPU can dramatically increase encryption speed compared with central processing unit sequential computing. Our experimental results demonstrate that in encrypting an image of 1000×1000 pixels, where each pixel has a 32-bit depth, our GPU version of the DRPE scheme can be approximately two times faster than the advanced encryption standard algorithm implemented on a GPU. In addition, the quality of parallel processing of the presented DRPE acceleration method is evaluated with performance parameters such as speedup, efficiency, and redundancy.
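For readers unfamiliar with DRPE, the core transform is two random phase masks separated by a Fourier transform. Below is a minimal 1-D sketch using a naive O(n^2) DFT in pure Python (a GPU implementation would batch FFTs instead); it is an illustration of the general DRPE scheme, not the paper's implementation.

```python
import cmath

def dft(x, sign=-1):
    """Naive DFT; sign=-1 forward, sign=+1 inverse (with 1/n scaling)."""
    n = len(x)
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if sign == 1 else out

def drpe_encrypt(img, phase1, phase2):
    """DRPE on a 1-D signal: phase mask in the spatial domain, Fourier
    transform, second phase mask in the frequency domain, inverse transform."""
    masked = [v * cmath.exp(1j * p) for v, p in zip(img, phase1)]
    spectrum = dft(masked)
    return dft([s * cmath.exp(1j * p) for s, p in zip(spectrum, phase2)], sign=1)

def drpe_decrypt(enc, phase1, phase2):
    """Undo both masks in reverse order to recover the plaintext signal."""
    spectrum = dft(enc)
    masked = dft([s * cmath.exp(-1j * p) for s, p in zip(spectrum, phase2)], sign=1)
    return [v * cmath.exp(-1j * p) for v, p in zip(masked, phase1)]
```

Because every step is a unitary transform or an invertible phase multiplication, decryption with the same two masks recovers the input exactly (up to floating-point error).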

  4. Acceleration and transport processes - Verification and observations. [of particles in interstellar medium

    NASA Technical Reports Server (NTRS)

    Jokipii, J. R.

    1983-01-01

    The general problem of diffusive transport and acceleration of energetic charged particles is considered. The transport of solar-flare particles, solar modulation of galactic cosmic rays, and shock acceleration processes in the solar wind are examined, and observational tests are summarized. It is concluded that the basic diffusive transport equation is a useful approximation in situations like the solar wind, where turbulent scattering by magnetic irregularities is sufficient to maintain near isotropy. The application of this equation to the interstellar medium and other, more distant astrophysical regimes is then discussed, and implications for gamma-ray astrophysics are outlined. Finally, the evidence for interstellar turbulence is reviewed and its consequences briefly discussed.

  5. Measurement of dynamic strength at high pressures using magnetically applied pressure-shear (MAPS) on the Sandia Z accelerator

    NASA Astrophysics Data System (ADS)

    Alexander, C.; Haill, T.; Dalton, D.; Rovang, D.; Lamppa, D.

    2013-06-01

    The recently developed magnetically applied pressure-shear (MAPS) technique used to measure dynamic material strength at high pressures on magneto-hydrodynamic (MHD) drive pulsed power platforms has been implemented on the Sandia Z accelerator. MAPS relies on an external magnetic field normal to the plane of the MHD drive current to directly induce a shear stress wave in addition to the usual longitudinal stress wave. This shear wave is used to directly probe the strength of a sample. By implementing this technique on Z, far greater pressures can be attained than were previously available using other MHD facilities. In addition, the use of isentropic compression will limit sample heating allowing the measurement to be made at a much lower temperature than under shock compression. Details of the experimental approach, including design considerations and analysis of the results, will be presented along with the results of Z experiments measuring the strength of tantalum at pressures up to 50 GPa, a five-fold increase in pressure over previous results using this technique. Sandia National Labs is a multi-program laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  6. Risk-Based Decision Process for Accelerated Closure of a Nuclear Weapons Facility

    SciTech Connect

    Butler, L.; Norland, R. L.; DiSalvo, R.; Anderson, M.

    2003-02-25

    Nearly 40 years of nuclear weapons production at the Rocky Flats Environmental Technology Site (RFETS or Site) resulted in contamination of soil and underground systems and structures with hazardous substances, including plutonium, uranium and hazardous waste constituents. The Site was placed on the National Priorities List in 1989. There are more than 370 Individual Hazardous Substance Sites (IHSSs) at RFETS. Accelerated cleanup and closure of RFETS is being achieved through implementation and refinement of a regulatory framework that fosters programmatic and technical innovations: (1) extensive use of ''accelerated actions'' to remediate IHSSs, (2) development of a risk-based screening process that triggers and helps define the scope of accelerated actions consistent with the final remedial action objectives for the Site, (3) use of field instrumentation for real time data collection, (4) a data management system that renders near real time field data assessment, and (5) a regulatory agency consultative process to facilitate timely decisions. This paper presents the process and interim results for these aspects of the accelerated closure program applied to Environmental Restoration activities at the Site.

  7. Recent developments in the application of electron accelerators for polymer processing

    NASA Astrophysics Data System (ADS)

    Chmielewski, A. G.; Al-Sheikhly, M.; Berejka, A. J.; Cleland, M. R.; Antoniak, M.

    2014-01-01

    There are now over 1700 high-current electron beam (EB) accelerators being used worldwide in industrial applications, most of which involve polymer processing. In contrast to the use of heat, which transfers only about 5-10% of input energy into energy useful for materials modification, radiation processing is very energy efficient, with 60% or more of the input energy to an accelerator being available for affecting materials. Historic markets, such as the crosslinking of wire and cable jacketing, of heat-shrinkable tubing and films, of partial crosslinking of tire components, and of low-energy EB to cure or dry inks and coatings, remain strong. Accelerator manufacturers have made equipment more affordable by down-sizing units while maintaining high beam currents. Very powerful accelerators with 700 kW output have made X-ray conversion a practical alternative to the historic use of radioisotopes, mainly cobalt-60, for applications such as medical device sterilization. New EB end-uses are emerging, such as the development of nano-composites and nano-gels and the use of EB processing to facilitate biofuel production. These present opportunities for future research and development.

  8. Acceleration processes in the quasi-steady magnetoplasmadynamic discharge. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Boyle, M. J.

    1974-01-01

    The flow field characteristics within the discharge chamber and exhaust of a quasi-steady magnetoplasmadynamic (MPD) arcjet were examined to clarify the nature of the plasma acceleration process. The observation of discharge characteristics unperturbed by insulator ablation and terminal voltage fluctuations, first requires the satisfaction of three criteria: the use of refractory insulator materials; a mass injection geometry tailored to provide propellant to both electrode regions of the discharge; and a cathode of sufficient surface area to permit nominal MPD arcjet operation for given combinations of arc current and total mass flow. The axial velocity profile and electromagnetic discharge structure were measured for an arcjet configuration which functions nominally at 15.3 kA and 6 g/sec argon mass flow. An empirical two-flow plasma acceleration model is advanced which delineates inner and outer flow regions and accounts for the observed velocity profile and calculated thrust of the accelerator.

  9. General description of electromagnetic radiation processes based on instantaneous charge acceleration in 'endpoints'

    SciTech Connect

    James, Clancy W.; Falcke, Heino; Huege, Tim; Ludwig, Marianne

    2011-11-15

    We present a methodology for calculating the electromagnetic radiation from accelerated charged particles. Our formulation - the 'endpoint formulation' - combines numerous results developed in the literature in relation to radiation arising from particle acceleration using a complete, and completely general, treatment. We do this by describing particle motion via a series of discrete, instantaneous acceleration events, or 'endpoints', with each such event being treated as a source of emission. This method implicitly allows for particle creation and destruction, and is suited to direct numerical implementation in either the time or frequency domains. In this paper we demonstrate the complete generality of our method for calculating the radiated field from charged particle acceleration, and show how it reduces to the classical named radiation processes such as synchrotron, Tamm's description of Vavilov-Cherenkov, and transition radiation under appropriate limits. Using this formulation, we are immediately able to answer outstanding questions regarding the phenomenology of radio emission from ultra-high-energy particle interactions in both the earth's atmosphere and the moon. In particular, our formulation makes it apparent that the dominant emission component of the Askaryan effect (coherent radio-wave radiation from high-energy particle cascades in dense media) comes from coherent 'bremsstrahlung' from particle acceleration, rather than coherent Vavilov-Cherenkov radiation.
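Schematically, each endpoint (an instantaneous start or stop of a particle track, sampled over a time step $\Delta t$) contributes a time-domain field of the form below. This is the standard endpoint expression in Gaussian units, reproduced from memory as an aide-memoire rather than from the paper's own notation; unit conventions vary between implementations.

$$\vec{E}_{\pm}(\vec{x}, t) \;=\; \pm\,\frac{q}{c\,\Delta t}\;\frac{\hat{r} \times \left( \hat{r} \times \vec{\beta}^{*} \right)}{R \left( 1 - n\,\vec{\beta}^{*} \cdot \hat{r} \right)}$$

Here the plus sign applies to an acceleration event (track start) and the minus sign to a deceleration event (track stop), $R$ is the distance from the endpoint to the observer, $\hat{r}$ the unit vector toward the observer, $n$ the refractive index of the medium, and $\vec{\beta}^{*}$ the particle velocity at the endpoint in units of $c$. Summing such contributions over all endpoints of a simulated cascade reproduces the named radiation processes in the appropriate limits.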

  10. Real-time Process Monitoring and Temperature Mapping of the 3D Polymer Printing Process

    SciTech Connect

    Dinwiddie, Ralph Barton; Love, Lonnie J; Rowe, John C

    2013-01-01

    An extended range IR camera was used to make temperature measurements of samples as they are being manufactured. The objective is to quantify the temperature variation inside the system as parts are being fabricated, as well as quantify the temperature of a part during fabrication. The IR camera was used to map the temperature within the build volume of the oven and surface temperature measurement of a part as it was being manufactured. The development of the temperature map of the oven provides insight into the global temperature variation within the oven that may lead to understanding variations in the properties of parts as a function of location. The observation of the temperature variation of a part that fails during construction provides insight into how the deposition process itself impacts temperature distribution within a single part leading to failure.

  11. Mapping.

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1979-01-01

    The area of geological mapping in the United States in 1978 increased greatly over that reported in 1977; state geological maps were added for California, Idaho, Nevada, and Alaska last year. (Author/BB)

  12. Plastic Deformation Behavior and Processing Maps of 35CrMo Steel

    NASA Astrophysics Data System (ADS)

    Xiao, Zheng-bing; Huang, Yuan-chun; Liu, Yu

    2016-03-01

    Hot deformation behavior of 35CrMo steel was investigated by compression tests in the temperature range of 850 to 1150 °C and strain rate range of 0.01 to 20 s-1 on a Gleeble-3810 thermal simulator. According to processing maps constructed from the experimental data using the principle of dynamic materials modeling (DMM), at a strain of 0.8 three safe regions with comparatively high efficiency of power dissipation were identified: (850 to 920) °C/(0.01 to 0.02) s-1, (850 to 900) °C/(10 to 20) s-1, and (1050 to 1150) °C/(0.01 to 1) s-1. The domain of (920 to 1150) °C/(2.7 to 20) s-1 lies within the instability range, where the efficiency of power dissipation is around 0.05. The deformed optical microstructure indicated that the combination of low deformation temperature (850 °C) and a relatively high strain rate (20 s-1) resulted in the smallest dynamically recrystallized grains, but coarser grains were obtained when a much higher strain rate was employed (50 s-1). A lower strain rate or a higher temperature will accelerate the growth of grains, and both high temperature and high strain rate can cause microcracks in the deformed steel. Integration of the processing map with the optical microstructure identified the region of (850 to 900) °C/(10 to 20) s-1 as the ideal condition for the hot deformation of 35CrMo steel.
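The efficiency of power dissipation used in DMM processing maps comes from the relation η = 2m/(m+1), where m is the strain-rate sensitivity of the flow stress, m = ∂ln σ / ∂ln ε̇. A minimal sketch with invented flow-stress numbers (not data from this study):

```python
import math

def power_dissipation_efficiency(stress_lo, stress_hi, rate_lo, rate_hi):
    """Dynamic Materials Model: estimate the strain-rate sensitivity m as a
    finite difference of log(flow stress) vs log(strain rate), then return
    the efficiency of power dissipation eta = 2m / (m + 1)."""
    m = (math.log(stress_hi) - math.log(stress_lo)) / \
        (math.log(rate_hi) - math.log(rate_lo))
    return 2.0 * m / (m + 1.0)

# Illustrative (assumed) flow stresses of 120 and 180 MPa measured at
# strain rates of 0.01 and 1.0 s^-1 at a fixed temperature and strain
eta = power_dissipation_efficiency(120.0, 180.0, 0.01, 1.0)
print(eta)  # ~0.16
```

Regions of the temperature/strain-rate plane with high η (and a stable flow criterion) are the "safe" processing domains such as those listed above.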

  13. Draw-in Map - A Road Map for Simulation-Guided Die Tryout and Stamping Process Control

    SciTech Connect

    Wang Chuantao; Zhang, Jimmy J.; Goan, Norman

    2005-08-05

    Sheet metal forming is a displacement- or draw-in-controlled manufacturing process in which a flat blank is drawn into a die cavity to form an automotive body panel. The draw-in amount is the single most important stamping manufacturing index: it controls all forming characteristics (strains, stresses, thinning, etc.), stamping failures (splits, wrinkles, surface distortion, etc.), and line die operations and automation. The Draw-in Map is engineered during math-based die development via advanced stamping simulation technology. It is then provided to die makers in plants as a road map for math-guided die tryout, in which the tryout workers follow the engineered tryout conditions and match the engineered draw-in amount, so that tryout time and cost are greatly reduced and quality is ensured. The Map can also be used as a math-based trouble-shooting tool to identify the causes of formability problems in stamping production. The engineered Draw-in Map has been applied to all draw die tryout for GM vehicle programs since 1998. A minimum 50% reduction in both lead time and cost, and significant improvement in panel quality during tryout, have been reported. This paper presents the concept and the process of applying the engineered Draw-in Map in die tryout.

  14. Classifying fractionated electrograms in human atrial fibrillation using monophasic action potentials and activation mapping: Evidence for localized drivers, rate acceleration, and nonlocal signal etiologies

    PubMed Central

    Narayan, Sanjiv M.; Wright, Matthew; Derval, Nicolas; Jadidi, Amir; Forclaz, Andrei; Nault, Isabelle; Miyazaki, Shinsuke; Sacher, Frédéric; Bordachar, Pierre; Clémenty, Jacques; Jaïs, Pierre; Haïssaguerre, Michel; Hocini, Mélèze

    2011-01-01

    BACKGROUND Complex fractionated electrograms (CFAEs) detected during substrate mapping for atrial fibrillation (AF) reflect etiologies that are difficult to separate. Without knowledge of local refractoriness and activation sequence, CFAEs may represent rapid localized activity, disorganized wave collisions, or far-field electrograms. OBJECTIVE The purpose of this study was to separate CFAE types in human AF, using monophasic action potentials (MAPs) to map local refractoriness in AF and multipolar catheters to map activation sequence. METHODS MAP and adjacent activation sequences at 124 biatrial sites were studied in 18 patients prior to AF ablation (age 57 ± 13 years, left atrial diameter 45 ± 8 mm). AF cycle length, bipolar voltage, and spectral dominant frequency were measured to characterize types of CFAE. RESULTS CFAE were observed at 91 sites, most of which showed discrete MAPs and (1) pansystolic local activity (8%); (2) CFAE after AF acceleration, often with MAP alternans (8%); or (3) nonlocal (far-field) signals (67%). A fourth CFAE pattern lacked discrete MAPs (17%), consistent with spatial disorganization. CFAE with discrete MAPs and pansystolic activation (consistent with rapid localized AF sites) had shorter cycle length (P <.05) and lower voltage (P <.05) and trended to have higher dominant frequency than other CFAE sites. Many CFAEs, particularly at the septa and coronary sinus, represented far-field signals. CONCLUSION CFAEs in human AF represent distinct functional types that may be separated using MAPs and activation sequence. In a minority of cases, CFAEs indicate localized rapid AF sites. The majority of CFAEs reflect far-field signals, AF acceleration, or disorganization. These results may help to interpret CFAE during AF substrate mapping. PMID:20955820

  15. Optimization of process parameters for the manufacturing of rocket casings: A study using processing maps

    NASA Astrophysics Data System (ADS)

    Avadhani, G. S.

    2003-12-01

    Maraging steels possess ultrahigh strength combined with ductility and toughness and could be easily fabricated and heat-treated. Bulk metalworking of maraging steels is an important step in the component manufacture. To optimize the hot-working parameters (temperature and strain rate) for the ring rolling process of maraging steel used for the manufacture of rocket casings, a systematic study was conducted to characterize the hot working behavior by developing processing maps for γ-iron and an indigenous 250 grade maraging steel. The hot deformation behavior of binary alloys of iron with Ni, Co, and Mo, which are major constituents of maraging steel, is also studied. Results from the investigation suggest that all the materials tested exhibit a domain of dynamic recrystallization (DRX). From the instability maps, it was revealed that strain rates above 10 s-1 are not suitable for hot working of these materials. An important result from the stress-strain behavior is that while Co strengthens γ-iron, Ni and Mo cause flow softening. Temperatures around 1125 °C and strain rate range between 0.001 and 0.1 s-1 are suitable for the hot working of maraging steel in the DRX domain. Also, higher strain rates may be used in the meta-dynamic recrystallization domain above 1075 °C for high strain rate applications such as ring rolling. The microstructural mechanisms identified from the processing maps along with grain size analyses and hot ductility measurements could be used to design hot-working schedules for maraging steel.

  16. Challenges Encountered during the Processing of the BNL ERL 5 Cell Accelerating Cavity

    SciTech Connect

    A. Burrill; I. Ben-Zvi; R. Calaga; H. Hahn; V. Litvinenko; G. T. McIntyre; P. Kneisel; J. Mammosser; J. P. Preble; C. E. Reece; R. A. Rimmer; J. Saunders

    2007-08-01

    One of the key components for the Energy Recovery Linac being built by the Electron cooling group in the Collider Accelerator Department is the 5 cell accelerating cavity which is designed to accelerate 2 MeV electrons from the gun up to 15-20 MeV, allow them to make one pass through the ring and then decelerate them back down to 2 MeV prior to sending them to the dump. This cavity was designed by BNL and fabricated by AES in Medford, NY. Following fabrication it was sent to Thomas Jefferson Lab in VA for chemical processing, testing and assembly into a string assembly suitable for shipment back to BNL and integration into the ERL. The steps involved in this processing sequence will be reviewed and the deviations from processing of similar SRF cavities will be discussed. The lessons learned from this process are documented to help future projects where the scope is different from that normally encountered.

  17. Distinguishing Between Quasi-static and Alfvénic Auroral Acceleration Processes

    NASA Astrophysics Data System (ADS)

    Lysak, R. L.; Song, Y.

    2013-12-01

    Models for the acceleration of auroral particles fall into two general classes. Quasi-static processes, such as double layers or magnetic mirror supported potential drops, produce a nearly monoenergetic beam of precipitating electrons and upward flowing ion beams. Time-dependent acceleration processes, often associated with kinetic Alfvén waves, can produce a broader range of energies and often have a strongly field-aligned pitch angle distribution. Both processes are associated with strong perpendicular electric fields as well as the parallel electric fields that are largely responsible for the particle acceleration. These electric fields and the related magnetic perturbations can be characterized by the ratio of the electric field to a perpendicular magnetic perturbation, which is related to the Pedersen conductivity in the static case and the Alfvén velocity in the time-dependent case. However, these considerations can be complicated by the interaction between upward and downward propagating waves. The relevant time and space scales of these processes will be assessed and the consequences for observation by orbiting spacecraft and ground-based instrumentation will be determined. These features will be illustrated by numerical simulations of the magnetosphere-ionosphere coupling with emphasis on what a virtual spacecraft passing through the simulation would be expected to observe.
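The diagnostic ratio described above can be made concrete: in the quasi-static limit E/B ≈ 1/(μ0 Σ_P), while in the Alfvénic limit E/B ≈ v_A = B/√(μ0 ρ). A small sketch with illustrative auroral-zone values (the numbers are assumptions, not values from the abstract):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def e_over_b_static(sigma_pedersen_siemens):
    """Quasi-static limit: E / B = 1 / (mu0 * Sigma_P)."""
    return 1.0 / (MU0 * sigma_pedersen_siemens)

def e_over_b_alfvenic(b_tesla, mass_density_kg_m3):
    """Alfvenic limit: E / B = v_A = B / sqrt(mu0 * rho)."""
    return b_tesla / math.sqrt(MU0 * mass_density_kg_m3)

# Assumed values: Sigma_P = 10 S; B = 10 uT and rho = 1e-18 kg/m^3
# for a low-density auroral flux tube
print(e_over_b_static(10.0))           # ~8.0e4 m/s
print(e_over_b_alfvenic(1e-5, 1e-18))  # ~8.9e6 m/s
```

Measuring E/δB along an orbit and comparing it against these two scales is one practical way a spacecraft can distinguish the two acceleration classes.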

  18. CHALLENGES ENCOUNTERED DURING THE PROCESSING OF THE BNL ERL 5 CELL ACCELERATING CAVITY

    SciTech Connect

    BURRILL,A.

    2007-06-25

    One of the key components for the Energy Recovery Linac being built by the Electron cooling group in the Collider Accelerator Department is the 5 cell accelerating cavity which is designed to accelerate 2 MeV electrons from the gun up to 15-20 MeV, allow them to make one pass through the ring and then decelerate them back down to 2 MeV prior to sending them to the dump. This cavity was designed by BNL and fabricated by AES in Medford, NY. Following fabrication it was sent to Thomas Jefferson Lab in VA for chemical processing, testing and assembly into a string assembly suitable for shipment back to BNL for integration into the ERL. The steps involved in this processing sequence will be reviewed and the deviations from processing of similar SRF cavities will be discussed. The lessons learned from this process are documented to help future projects where the scope is different from that normally encountered.

  19. A whole body vibration perception map and associated acceleration loads at the lower leg, hip and head.

    PubMed

    Sonza, Anelise; Völkel, Nina; Zaro, Milton A; Achaval, Matilde; Hennig, Ewald M

    2015-07-01

    Whole-body vibration (WBV) training has become popular in recent years. However, WBV may be harmful to the human body. The goal of this study was to determine the acceleration magnitudes at different body segments for different frequencies of WBV. Additionally, vibration sensation ratings by subjects served to create vibration magnitude perception and discomfort maps of the human body. In the first of two experiments, 65 young adults, mean (± SD) age 23 (± 3.0) years, participated in WBV severity perception ratings based on a Borg scale. Measurements were performed at 12 different frequencies and two intensities (3 and 5 mm amplitude) of rotational-mode WBV. On a separate day, a second experiment (n = 40) included vertical accelerometry of the head, hip and lower leg with the same WBV settings. The highest lower-limb vibration magnitude perception on the Borg scale was "extremely intense" for frequencies between 21 and 25 Hz; "somewhat hard" for the trunk region (11-25 Hz) and "fairly light" for the head (13-25 Hz). The highest vertical accelerations were found at a frequency of 23 Hz at the tibia, 9 Hz at the hip and 13 Hz at the head. At 5 mm amplitude, 61.5% of the subjects reported discomfort in the foot region (21-25 Hz), 46.2% for the lower back (17, 19 and 21 Hz) and 23% for the abdominal region (9-13 Hz). The range of 3-7 Hz represents the safest frequency range, with magnitudes less than 1 g·s for all studied regions. PMID:25962379

  20. Integrating Clinical Experiences in a TESOL Teacher Education Program: Curriculum Mapping as Process

    ERIC Educational Resources Information Center

    Baecher, Laura

    2012-01-01

    Across all certification areas, teacher education is being challenged to better integrate clinical experiences with coursework. This article describes the process of curriculum mapping and its impact on the organization of clinical experiences in a master's TESOL program over a 1-year redesign process. Although curriculum mapping has been…

  1. Evaluation of the Intel Xeon Phi Co-processor to accelerate the sensitivity map calculation for PET imaging

    NASA Astrophysics Data System (ADS)

    Dey, T.; Rodrigue, P.

    2015-07-01

    We aim to evaluate the Intel Xeon Phi coprocessor for acceleration of 3D Positron Emission Tomography (PET) image reconstruction. We focus on the sensitivity map calculation as one computationally intensive part of PET image reconstruction, since it is a promising candidate for acceleration with the Many Integrated Core (MIC) architecture of the Xeon Phi. The computation of the voxels in the field of view (FoV) can be done in parallel, and the 10³ to 10⁴ samples needed to calculate the detection probability of each voxel can take advantage of vectorization. We use the ray tracing kernels of the Embree project to calculate the hit points of the sample rays with the detector, and in a second step the sum of the radiological path, taking attenuation into account, is determined. The core components are implemented using the Intel single instruction multiple data compiler (ISPC) to enable a portable implementation showing efficient vectorization on both the Xeon Phi and the host platform. On the Xeon Phi, the calculation of the radiological path is also implemented in hardware-specific intrinsic instructions (so-called 'intrinsics') to allow manually optimized vectorization. For parallelization, both OpenMP and ISPC tasking (based on pthreads) are evaluated. Our implementation achieved a scalability factor of 0.90 on the Xeon Phi coprocessor (model 5110P) with 60 cores at 1 GHz. Only minor differences were found between parallelization with OpenMP and the ISPC tasking feature. The implementation using intrinsics was found to be about 12% faster than the portable ISPC version. With this version, a speedup of 1.43 was achieved on the Xeon Phi coprocessor compared to the host system (HP SL250s Gen8) equipped with two Xeon (E5-2670) CPUs, with 8 cores at 2.6 to 3.3 GHz each. Using a second Xeon Phi card, the speedup could be further increased to 2.77. No significant differences were found between the results of the different Xeon Phi and host implementations.
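    The split the abstract describes, parallelism over FoV voxels plus vectorization over each voxel's sample rays, can be sketched outside ISPC. A hedged Python sketch follows, with NumPy standing in for the vector lanes and a thread pool for the OpenMP/ISPC tasking layer; the exponential-survival model of the detection probability is an assumption of this sketch, not the paper's exact kernel:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def voxel_sensitivity(paths_for_one_voxel):
    """Detection probability of a single voxel: the mean over its
    sample rays of exp(-radiological path), i.e. ray survival."""
    return float(np.exp(-paths_for_one_voxel).mean())

def sensitivity_map(paths, n_workers=4):
    """paths: (n_voxels, n_samples) array of radiological path sums.
    The per-voxel reduction is vectorized with NumPy (standing in for
    ISPC); voxel chunks run on a thread pool (standing in for the
    OpenMP / ISPC tasking layer)."""
    out = np.empty(paths.shape[0])
    chunks = np.array_split(np.arange(paths.shape[0]), n_workers)

    def work(idx):
        # vectorized survival probability for one chunk of voxels
        out[idx] = np.exp(-paths[idx]).mean(axis=1)

    with ThreadPoolExecutor(n_workers) as pool:
        list(pool.map(work, chunks))
    return out
```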

  2. Redesign of Process Map to Increase Efficiency: Reducing Procedure Time 1 in Cervical-Cancer Brachytherapy

    PubMed Central

    Damato, Antonio L.; Cormack, Robert A.; Bhagwat, Mandar S.; Buzurovic, Ivan; Finucane, Susan; Hansen, Jorgen L.; O’Farrell, Desmond A.; Offiong, Alecia; Randall, Una; Friesen, Scott; Lee, Larissa J.; Viswanathan, Akila N.

    2014-01-01

    Purpose To increase intra-procedural efficiency in the use of clinical resources and to decrease planning time for cervical-cancer brachytherapy treatments through redesign of the procedure’s process map. Methods and Materials A multi-disciplinary team identified all tasks and associated resources involved in cervical-cancer brachytherapy in our institution, and arranged them in a process map. A redesign of the treatment planning component of the process map was conducted with the goal of minimizing planning time. Planning time was measured on 20 consecutive insertions, of which 10 were performed with standard procedures and 10 with the redesigned process map, and results compared. Statistical significance (p <0.05) was measured with a 2-tailed T-test. Results Twelve tasks involved in cervical-cancer brachytherapy treatments were identified. The process map showed that in standard procedures, the treatment planning tasks were performed sequentially. The process map was redesigned to specify that contouring and some planning tasks are performed concomitantly. Some quality assurance (QA) tasks were reorganized to minimize adverse effects of a possible error on procedure time. Test “dry runs” followed by live implementation confirmed the applicability of the new process map to clinical conditions. A 29% reduction in planning time (p <0.01) was observed with the introduction of the redesigned process map. Conclusions A process map for cervical-cancer brachytherapy was generated. The treatment planning component of the process map was redesigned, resulting in a 29% decrease in planning time and a streamlining of the QA process. PMID:25572438

  3. Fast Mapping Across Time: Memory Processes Support Children’s Retention of Learned Words

    PubMed Central

    Vlach, Haley A.; Sandhofer, Catherine M.

    2012-01-01

    Children’s remarkable ability to map linguistic labels to referents in the world is commonly called fast mapping. The current study examined children’s (N = 216) and adults’ (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children’s retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel forgetting functions of domain-general memory processes. Memory processes are critical to children’s word learning and the role of one such process, forgetting, is discussed in detail – forgetting supports extended mapping by promoting the memory and generalization of words and categories. PMID:22375132

  4. Small scale particle acceleration processes in the auroral region: Remote sensing and "in situ" measurements

    NASA Astrophysics Data System (ADS)

    Pottelette, R.; Berthomier, M.; Pickett, J.

    2012-04-01

    Among the many poorly understood problems in auroral physics, the microphysical problems occupy a central position. Thanks to the high sampling and telemetry rates implemented on spacecraft devoted to the study of the auroral regions, such as POLAR, FAST, CLUSTER 2…, it has become possible to determine the role of transient nonlinear structures in the basic microscopic processes regulating magnetosphere-ionosphere interactions. Great progress has been achieved in probing the nonlinear turbulent plasma processes which accelerate energetic particles and generate radiation such as the Auroral Kilometric Radiation (AKR) in the auroral upward current region. Narrow-in-altitude acceleration layers (double layers) can be identified by using both particle and wave measurements. Double layers immersed in the plasma necessarily accelerate particles along the magnetic field, thereby generating locally strong turbulent processes leading to the formation of phase-space holes. As predicted by numerical simulations, we will emphasize the asymmetric character of the turbulence generated in the regions located upstream (low-potential side) and downstream (high-potential side) of a double layer. We will also point out that monitoring the time variation of the frequency drift of the elementary AKR radiators makes it possible to qualitatively infer the dynamics and the spatial extent of the field-aligned potential drops.

  5. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that can be used to determine intensity level, which can be divided into two main categories. The first category comprises social data sources, in which intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions with the Modified Mercalli Scale. The second category comprises observations from different physical sensors installed with the specific purpose of obtaining an instrumentally derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes, relating the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing, separately and jointly, from both social and physical sensors in order to produce near-real-time intensity maps, and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms on the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computation of combined intensity levels and production of combined intensity maps in near-real time. The results compare three types of intensity maps created from physical, social and combined data sources.
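    A minimal sketch of the joint estimate described above: an instrumental intensity from a peak-ground-acceleration regression, blended with a crowd-sourced value. The regression coefficients and the blending weight below are illustrative placeholders, not the relationships used in the study.

```python
import math

def mmi_from_pga(pga_cm_s2, a=1.78, b=1.55):
    """Instrumental intensity from peak ground acceleration via a
    generic MMI = a + b*log10(PGA) regression; the coefficients are
    illustrative placeholders, not the study's relationship."""
    return a + b * math.log10(pga_cm_s2)

def combine(instrumental, social, w_inst=0.7):
    """Weighted blend of the instrumental and crowd-sourced estimates;
    the weight is an assumption of this sketch."""
    return w_inst * instrumental + (1.0 - w_inst) * social

inst = mmi_from_pga(60.0)            # ~6% g peak acceleration
print(round(combine(inst, 5.0), 2))  # 4.68
```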

  6. Real-time reprogrammable low-level image processing: edge detection and edge tracking accelerator

    NASA Astrophysics Data System (ADS)

    Meribout, M.; Hou, Kun M.

    1993-10-01

    Currently, in image processing, segmentation algorithms compromise between real-time video-rate processing and accurate results. In this paper, we present an efficient, non-recursive filter derived from the Deriche filter. This algorithm is implemented in hardware using FPGA technology, which permits video-rate edge detection. In addition, the FPGA board is used as an edge tracking accelerator: it allows us to greatly reduce execution time by avoiding scanning the whole image. We also present the architecture of our vision system, dedicated to building a 3D scene every 200 ms.

  7. Measurement of the velocities in the transient acceleration process using all-fiber photonic Doppler velocimetry

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Wu, Chong-qing; Song, Hong-wei; Yu, Tao; Xu, Jing-jing

    2011-05-01

    Based on an analysis of basic photonic Doppler velocimetry (PDV), a formula to measure velocity variation within a single cycle is put forward. PDV has been improved in three aspects, namely the laser, the detector and the data processing. A measurement system for the velocity of the initial stage of a shock motion has been demonstrated, and instantaneous velocity measurements have been performed. The experimental results are in good agreement with the values obtained from the accelerometer. Compared with the traditional fringe method, the proposed method can identify instantaneous velocity variation, so it is particularly suitable for measuring the velocity in the transient acceleration process of shock waves and detonation waves.
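    The underlying PDV relation, beat frequency proportional to surface velocity, can be written down directly. The 1550 nm wavelength below is the usual telecom-fiber choice and an assumption here, since the abstract does not state it:

```python
def pdv_velocity(beat_freq_hz, wavelength_m=1550e-9):
    """PDV beat-frequency-to-velocity conversion: a surface moving at
    velocity v Doppler-shifts the returned light, giving a beat
    frequency f_b = 2*v / lambda, hence v = lambda * f_b / 2."""
    return wavelength_m * beat_freq_hz / 2.0

# a 1 GHz beat frequency corresponds to roughly 775 m/s at 1550 nm
print(pdv_velocity(1e9))
```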

  8. An accelerated method of computing nonlinear processes in instruments with longitudinal interaction

    NASA Astrophysics Data System (ADS)

    Pikunov, V. M.; Prokopev, V. E.; Sandalov, A. N.

    1985-04-01

    The use of the reference particle method for investigating nonlinear processes in instruments with longitudinal interaction is considered in an attempt to accelerate the computation of the processes. It is demonstrated that, coupled with interpolation formulas based on Kotelnikov series, the method yields effective numerical algorithms in the framework of discrete models of electron flux. A comparison of the method with a disk model of an electron flux for the case of a multiresonator klystron was performed for klystron bunchers with 50 to 80 percent efficiency. It is concluded that the computation time was reduced by a factor of 3-10 while maintaining satisfactory accuracy.

  9. In-situ plasma processing to increase the accelerating gradients of SRF cavities

    SciTech Connect

    Doleans, Marc; Afanador, Ralph; Barnhart, Debra L.; Degraff, Brian D.; Gold, Steven W.; Hannah, Brian S.; Howell, Matthew P.; Kim, Sang-Ho; Mammosser, John; McMahan, Christopher J.; Neustadt, Thomas S.; Saunders, Jeffrey W.; Tyagi, Puneet V.; Vandygriff, Daniel J.; Vandygriff, David M.; Ball, Jeffrey Allen; Blokland, Willem; Crofford, Mark T.; Lee, Sung-Woo; Stewart, Stephen; Strong, William Herb

    2015-12-31

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. This article discusses the main developments and results from the plasma processing R&D and presents experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus.

  10. In-situ plasma processing to increase the accelerating gradients of SRF cavities

    DOE PAGESBeta

    Doleans, Marc; Afanador, Ralph; Barnhart, Debra L.; Degraff, Brian D.; Gold, Steven W.; Hannah, Brian S.; Howell, Matthew P.; Kim, Sang-Ho; Mammosser, John; McMahan, Christopher J.; et al

    2015-12-31

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. This article discusses the main developments and results from the plasma processing R&D and presents experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus.

  11. In-situ plasma processing to increase the accelerating gradients of superconducting radio-frequency cavities

    NASA Astrophysics Data System (ADS)

    Doleans, M.; Tyagi, P. V.; Afanador, R.; McMahan, C. J.; Ball, J. A.; Barnhart, D. L.; Blokland, W.; Crofford, M. T.; Degraff, B. D.; Gold, S. W.; Hannah, B. S.; Howell, M. P.; Kim, S.-H.; Lee, S.-W.; Mammosser, J.; Neustadt, T. S.; Saunders, J. W.; Stewart, S.; Strong, W. H.; Vandygriff, D. J.; Vandygriff, D. M.

    2016-03-01

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. In this article, the main developments and results from the plasma processing R&D are presented and experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus are discussed.

  12. Investigation of physico-chemical processes in hypervelocity MHD-gas acceleration wind tunnels

    SciTech Connect

    Alfyorov, V.I.; Dmitriev, L.M.; Yegorov, B.V.; Markachev, Yu.E.

    1995-12-31

    The calculation results for nonequilibrium physicochemical processes in the circuit of a hypersonic MHD-gas acceleration wind tunnel are presented. The flow in the primary nozzle is shown to be in thermodynamic equilibrium at the plenum-chamber conditions T₀ = 3400 K, P₀ = (2-3)×10⁵ Pa, M = 2. Variations in the static pressure due to the oxidation reactions of Na and K are pointed out. The channels of energy transfer from the electric field to different degrees of freedom of an accelerated gas with Na and K seeds are considered. A calculation procedure for gas dynamic and kinetic processes in the MHD channel using measured parameters is suggested. The calculated results are compared with data obtained under a thermodynamic gas equilibrium assumption. The flow in the secondary nozzle is calculated under the same assumptions and the gas parameters at its exit are evaluated. Particular attention is given to the influence of seeds on flows over bodies. It is shown that the seeds exert a very small influence on the flow behind a normal shock wave. The seeds behind an oblique shock wave accelerate the deactivation of vibrations of N₂, but this effect is insignificant.

  13. Novel mapping in non-equilibrium stochastic processes

    NASA Astrophysics Data System (ADS)

    Heseltine, James; Kim, Eun-jin

    2016-04-01

    We investigate the time evolution of a non-equilibrium system in view of the change in information and provide a novel mapping relation which quantifies the change in information far from equilibrium and the proximity of a non-equilibrium state to the attractor. Specifically, we utilize a nonlinear stochastic model where the stochastic noise plays the role of incoherent regulation of the dynamical variable x, and analytically compute the rate of change in information (information velocity) from the time-dependent probability distribution function. From this, we quantify the total change in information in terms of the information length L and the associated action J, where L represents the distance that the system travels in the fluctuation-based, statistical metric space parameterized by time. As the initial probability density function's mean position μ is decreased from the final equilibrium value μ* (the carrying capacity), L and J increase monotonically with interesting power-law mapping relations. In comparison, as μ is increased from μ*, L and J increase slowly until they level off to a constant value. This manifests the proximity of the state to the attractor, caused by a strong correlation for large μ through large fluctuations. Our proposed mapping relation provides a new way of understanding the progression of complexity in a non-equilibrium system in view of information change and the structure of the underlying attractor.
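    The information length described above can be checked numerically for the simplest case, a Gaussian of fixed width whose mean drifts, where analytically L = |Δμ|/σ. A sketch follows; the grid sizes and parameters are arbitrary choices, and the discretization is a plain Riemann sum, not the paper's method:

```python
import numpy as np

def information_length(p_of_t, dx, dt):
    """Discrete L = sum_t dt * sqrt( sum_x dx * (dp/dt)^2 / p ) for a
    sampled time-dependent PDF p_of_t of shape (n_times, n_x)."""
    dpdt = np.gradient(p_of_t, dt, axis=0)
    vel2 = np.sum(dpdt**2 / p_of_t * dx, axis=1)  # squared info velocity
    return float(np.sum(np.sqrt(vel2)) * dt)

# Gaussian of fixed width sigma whose mean drifts linearly from 0 to 1;
# analytically L = |mu_final - mu_initial| / sigma = 2 here.
x = np.linspace(-6.0, 7.0, 2001)
t = np.linspace(0.0, 1.0, 401)
sigma = 0.5
mu = t
p = np.exp(-(x[None, :] - mu[:, None])**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(information_length(p, x[1] - x[0], t[1] - t[0]))  # ~ 2.0
```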

  14. Status of development of actinide blanket processing flowsheets for accelerator transmutation of nuclear waste

    SciTech Connect

    Dewey, H.J.; Jarvinen, G.D.; Marsh, S.F.; Schroeder, N.C.; Smith, B.F.; Villarreal, R.; Walker, R.B.; Yarbro, S.L.; Yates, M.A.

    1993-09-01

    An accelerator-driven subcritical nuclear system is briefly described that transmutes actinides and selected long-lived fission products. An application of this accelerator transmutation of nuclear waste (ATW) concept to spent fuel from a commercial nuclear power plant is presented as an example. The emphasis here is on a possible aqueous processing flowsheet to separate the actinides and selected long-lived fission products from the remaining fission products within the transmutation system. In the proposed system the actinides circulate through the thermal neutron flux as a slurry of oxide particles in heavy water in two loops with different average residence times: one loop for neptunium and plutonium and one for americium and curium. Material from the Np/Pu loop is processed with a short cooling time (5-10 days) because of the need to keep the total actinide inventory low for this particular ATW application. The high radiation and thermal load from the irradiated material places severe constraints on the separation processes that can be used. The oxide particles are dissolved in nitric acid and a quaternary ammonium anion exchanger is used to extract neptunium, plutonium, technetium, and palladium. After further cooling (about 90 days), the Am, Cm and higher actinides are extracted using a TALSPEAK-type process. The proposed operations were chosen because they have been successfully tested for processing high-level radioactive fuels or wastes in gram to kilogram quantities.

  15. Plasma Processing of SRF Cavities for the next Generation Of Particle Accelerators

    SciTech Connect

    Vuskovic, Leposava

    2015-11-23

    The cost-effective production of high-frequency accelerating fields is the foundation for the next generation of particle accelerators. The Ar/Cl2 plasma etching technology holds the promise of a major reduction in cavity preparation costs. Plasma-based dry niobium surface treatment provides an excellent opportunity to remove bulk niobium, eliminate surface imperfections, increase cavity quality factor, and bring accelerating fields to higher levels. At the same time, the developed technology will be more environmentally friendly than the hydrogen fluoride-based wet etching technology. Plasma etching of the inner surfaces of standard multi-cell SRF cavities is the main goal of this research, in order to eliminate contaminants, including niobium oxides, in the penetration depth region. Successful plasma processing of multi-cell cavities will establish this method as a viable technique in the quest for more efficient components of next generation particle accelerators. In this project a single-cell pillbox cavity plasma etching system was developed and etching conditions were determined. An actual single-cell SRF cavity (1497 MHz) was plasma etched based on the pillbox cavity results, and the first RF test of this plasma-etched cavity at cryogenic temperature was obtained. The system can also be used for other surface modifications, including tailoring niobium surface properties and surface passivation or nitriding for better performance of SRF cavities. The results of this plasma processing technology may be applied to most current SRF cavity fabrication projects. In the course of this project it has been demonstrated that a capacitively coupled radio-frequency discharge can be successfully used for etching curved niobium surfaces, in particular the inner walls of SRF cavities. The results could also be applicable to the inner or concave surfaces of any 3D structure other than an SRF cavity.

  16. ACCELERATED PROCESSING OF SB4 AND PREPARATION FOR SB5 PROCESSING AT DWPF

    SciTech Connect

    Herman, C

    2008-12-01

    The Defense Waste Processing Facility (DWPF) initiated processing of Sludge Batch 4 (SB4) in May 2007. SB4 was the first DWPF sludge batch to contain significant quantities of HM or high Al sludge. Initial testing with SB4 simulants showed potential negative impacts to DWPF processing; therefore, Savannah River National Laboratory (SRNL) performed extensive testing in an attempt to optimize processing. SRNL's testing has resulted in the highest DWPF production rates since start-up. During SB4 processing, DWPF also began incorporating waste streams from the interim salt processing facilities to initiate coupled operations. While DWPF has been processing SB4, the Liquid Waste Organization (LWO) and the SRNL have been preparing Sludge Batch 5 (SB5). SB5 has undergone low-temperature aluminum dissolution to reduce the mass of sludge for vitrification and will contain a small fraction of Purex sludge. A high-level review of SB4 processing and the SB5 preparation studies will be provided.

  17. Use of Networked Collaborative Concept Mapping To Measure Team Processes and Team Outcomes.

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; O'Neil, Harold F., Jr.; Herl, Howard E.; Dennis, Robert A.

    The feasibility of using a computer-based networked collaborative concept mapping system to measure teamwork skills was studied. A concept map is a node-link-node representation of content, where the nodes represent concepts and links represent relationships between connected concepts. Teamwork processes were examined for a group concept mapping…

  18. The Impact of Concept Mapping on the Process of Problem-Based Learning

    ERIC Educational Resources Information Center

    Zwaal, Wichard; Otting, Hans

    2012-01-01

    A concept map is a graphical tool to activate and elaborate on prior knowledge, to support problem solving, promote conceptual thinking and understanding, and to organize and memorize knowledge. The aim of this study is to determine if the use of concept mapping (CM) in a problem-based learning (PBL) curriculum enhances the PBL process. The paper…

  19. Mapping Experiment as a Learning Process: How the First Electromagnetic Motor Was Invented.

    ERIC Educational Resources Information Center

    Gooding, David

    1990-01-01

    Introduced is a notation to map out an experiment as an active process in a real-world environment and display the human aspect written out of most narratives. Comparing maps of accounts can show how knowledge-construction depends on narrative reconstruction. Emphasized are nonverbal and procedural aspects of discovery and invention. (KR)

  20. Modeling the Communication Process: The Map Is not the Territory.

    ERIC Educational Resources Information Center

    Bowman, Joel P.; Targowski, Andrew S.

    1987-01-01

    Presents a brief overview of the most significant models of the communication process, evaluates the communication models of the greatest relevance to business communication, and establishes a foundation for a new conception of that process. (JC)

  1. Open-source graphics processing unit-accelerated ray tracer for optical simulation

    NASA Astrophysics Data System (ADS)

    Mauch, Florian; Gronle, Marc; Lyda, Wolfram; Osten, Wolfgang

    2013-05-01

    Ray tracing still is the workhorse in optical design and simulation. Its basic principle, propagating light as a set of mutually independent rays, implies a linear dependency between the computational effort and the number of rays involved in the problem. At the same time, the mutual independence of the light rays bears a huge potential for parallelization of the computational load. This potential has recently been recognized in the visualization community, where graphics processing unit (GPU)-accelerated ray tracing is used to render photorealistic images. However, precision requirements in optical simulation are substantially higher than in visualization, and therefore performance results known from visualization cannot be expected to transfer to optical simulation one-to-one. In this contribution, we present an open-source implementation of a GPU-accelerated ray tracer, based on Nvidia's acceleration engine OptiX, that traces in double precision and exploits the massively parallel architecture of modern graphics cards. We compare its performance to a CPU-based tracer that has been developed in parallel.
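    Double precision is the operative constraint here. As a minimal illustration of a tracing primitive carried out entirely in IEEE-754 doubles, consider a ray-sphere intersection; this is a generic sketch, not code from the OptiX implementation:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance of a unit-direction ray with a
    sphere, or None if the ray misses. Python floats are IEEE-754
    doubles, matching the precision requirement of optical simulation."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # quadratic coefficient a == 1 for a unit direction
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# a ray along +z from the origin hits a unit sphere centered at z = 5 at t = 4
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```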

  2. Successes and lessons learned: How to accelerate the base closure process

    SciTech Connect

    Larkin, V.C.; Stoll, R.

    1994-12-31

    Naval Station Puget Sound, Seattle, was nominated for closure by the Base Closure Commission in 1991 (BRAC II) and will be transferred in September of 1995. Historic activities have resulted in petroleum-related environmental issues. Unlike many bases being closed, the politically sensitive issues are not the economics of job losses. Because homeless housing is expected to be included in the selected reuse plan, the primary concerns of the public are reduced real estate values and public safety. In addition to a reuse plan adopted by the Seattle City Council, the Muckleshoot Indian tribe has also submitted an alternative reuse plan to the Navy. Acceleration methods described in this paper include methods for beginning the environmental impact statement (EIS) process before reuse plans are finalized; tracking development of engineering alternatives in parallel with environmental investigations; using field screening data to begin developing plans and specifications for remediation, instead of waiting 6 weeks for analytical results and data validation; using efficient communication techniques to facilitate accelerated review of technical documents by the BCT; expediting removal actions and performing "cleanups incidental to investigation"; and effectively facilitating members of the Restoration Advisory Board with divergent points of view. This paper will describe acceleration methods that proved to be effective and methods that could be modified to be more effective at other sites.

  3. Investigation of acceleration processes in the 14th July 2005 flare series in AR 10786

    NASA Astrophysics Data System (ADS)

    Sizykh, Tatyana; Kashapova, Larisa

    We present the results of a study of acceleration processes in the flare series that occurred on 14th July 2005 on the western limb of the Sun. Our investigation is based on HXR data obtained by RHESSI. An increase of solar flare activity was observed, with an X1.2-class flare at its culmination. The presence of accelerated electrons (a power-law component of the HXR spectrum at energies above 25 keV) was clearly evident only in the first (C3.8) and the last of the studied flares. We applied lgT-1/2lgEM diagrams (Jakimiec et al., 1986) for a quantitative study of the HXR spectra of all flares. For the flares that showed a significant flux of accelerated electrons, we also used diagrams based on parameters obtained from the non-thermal part of the spectrum (flux, spectral index, spectral curvature; Grigis & Benz, 2009). A possible scenario for the evolution of this active region is discussed.

  4. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speeds for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing performed by Exelis VIS shows that orthorectification can take as long as two hours with a WorldView-1 35,000 x 35,000 pixel image. With GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can be put to use by first responders and by scientists making rapid discoveries with near-real-time data, and data centers gain an operational means to quickly process and disseminate data.

  5. Shifted Hamming distance: a fast and accurate SIMD-friendly filter to accelerate alignment verification in read mapping

    PubMed Central

    Xin, Hongyi; Greth, John; Emmons, John; Pekhimenko, Gennady; Kingsford, Carl; Alkan, Can; Mutlu, Onur

    2015-01-01

    Motivation: Calculating the edit-distance (i.e. minimum number of insertions, deletions and substitutions) between short DNA sequences is the primary task performed by seed-and-extend based mappers, which compare billions of sequences. In practice, only sequence pairs with a small edit-distance provide useful scientific data. However, the majority of sequence pairs analyzed by seed-and-extend based mappers differ by significantly more errors than what is typically allowed. Such error-abundant sequence pairs needlessly waste resources and severely hinder the performance of read mappers. Therefore, it is crucial to develop a fast and accurate filter that can rapidly and efficiently detect error-abundant string pairs and remove them from consideration before more computationally expensive methods are used. Results: We present a simple and efficient algorithm, Shifted Hamming Distance (SHD), which accelerates the alignment verification procedure in read mapping, by quickly filtering out error-abundant sequence pairs using bit-parallel and SIMD-parallel operations. SHD only filters string pairs that contain more errors than a user-defined threshold, making it fully comprehensive. It also maintains high accuracy with moderate error threshold (up to 5% of the string length) while achieving a 3-fold speedup over the best previous algorithm (Gene Myers’s bit-vector algorithm). SHD is compatible with all mappers that perform sequence alignment for verification. Availability and implementation: We provide an implementation of SHD in C with Intel SSE instructions at: https://github.com/CMU-SAFARI/SHD. Contact: hxin@cmu.edu, calkan@cs.bilkent.edu.tr or onur@cmu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25577434
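The bit-parallel filtering idea behind SHD can be shown in a toy form: XOR-style Hamming masks of the read against the reference at every shift within the error budget are ANDed together, and only the mismatches that survive every shift count against the threshold. This simplified sketch uses plain Python integers where the real implementation uses SSE bit vectors, and it omits SHD's speculative-removal refinements.

```python
def shd_filter(read, ref, e):
    """Toy Shifted Hamming Distance filter (simplified from Xin et al.).

    Builds a Hamming mask of `read` against `ref` shifted by -e..+e
    positions (bit i set = mismatch at read position i), ANDs the masks,
    and counts surviving mismatches. Pairs whose count exceeds e are
    rejected before costly edit-distance verification; a shift-tolerant
    mismatch (an indel candidate) is forgiven because some shifted copy
    matches there.
    """
    n = len(read)
    combined = (1 << n) - 1          # start with all positions mismatched
    for shift in range(-e, e + 1):
        mask = 0
        for i in range(n):
            j = i + shift
            if not (0 <= j < n and read[i] == ref[j]):
                mask |= 1 << i       # mismatch at this shift
        combined &= mask             # keep only universal mismatches
    # True: pair passes the filter and goes on to full verification.
    return bin(combined).count('1') <= e

# Identical strings pass; hopeless pairs are rejected early.
print(shd_filter("ACGT", "ACGT", 1))   # passes
print(shd_filter("AAAA", "TTTT", 1))   # rejected
```

The filter is fully comprehensive in the paper's sense: ANDing masks can only clear mismatch bits, so a pair within the error threshold is never rejected, while error-abundant pairs are discarded in a handful of word-level operations.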

  6. Ground Test of the Urine Processing Assembly for Accelerations and Transfer Functions

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Almond, Deborah F. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of the ground test of the urine processing assembly for accelerations and transfer functions. Details are given on the test setup, test data, data analysis, analytical results, and microgravity assessment. The conclusions of the tests include the following: (1) the single input/multiple output method is useful if the data is acquired by tri-axial accelerometers and inputs can be considered uncorrelated; (2) tying coherence with the matrix yields higher confidence in results; (3) the WRS#2 rack ORUs need to be isolated; (4) and future work includes a plan for characterizing performance of isolation materials.

  7. Turbulent Magnetohydrodynamic Acceleration Processes: Theory, SSX Experiments, and Connections to Space and Astrophysics

    SciTech Connect

    W Matthaeus; M Brown

    2006-07-15

    This is the final technical report for a funded program to provide theoretical support to the Swarthmore Spheromak Experiment. We examined MHD relaxation, reconnection between two spheromaks, particle acceleration by these processes, and collisionless effects, e.g., the Hall effect near the reconnection zone. Throughout the project, applications to space plasma physics and astrophysics were included. Towards the end of the project we were examining a more fully turbulent relaxation associated with unconstrained dynamics in SSX. We employed experimental methods, spacecraft observations, and analytical and numerical techniques.

  8. Study of the near-electrode processes in quasi-steady plasma accelerators with impenetrable electrodes

    NASA Astrophysics Data System (ADS)

    Kozlov, A. N.

    2012-01-01

    Near-electrode processes in a coaxial plasma accelerator with equipotential impenetrable electrodes are simulated using a two-dimensional (generally, time-dependent) two-fluid MHD model with allowance for the Hall effect and the plasma conductivity tensor. The simulations confirm the theoretically predicted mechanism of the so-called "crisis of current" caused by the Hall effect. The simulation results are compared with available experimental data. The influence of both the method of plasma supply to the channel and an additional longitudinal magnetic field on the development of near-electrode instabilities preceding the crisis of current is studied.

  9. Segregating metabolic processes into different microbial cells accelerates the consumption of inhibitory substrates.

    PubMed

    Lilja, Elin E; Johnson, David R

    2016-07-01

    Different microbial cell types typically specialize at performing different metabolic processes. A canonical example is substrate cross-feeding, where one cell type converts a primary substrate into an intermediate and another cell type consumes the intermediate. While substrate cross-feeding is widely observed, its consequences for ecosystem processes are often unclear. How does substrate cross-feeding affect the rate or extent of substrate consumption? We hypothesized that substrate cross-feeding eliminates competition between different enzymes and reduces the accumulation of growth-inhibiting intermediates, thus accelerating substrate consumption. We tested this hypothesis using isogenic mutants of the bacterium Pseudomonas stutzeri that either completely consume nitrate to dinitrogen gas or cross-feed the intermediate nitrite. We demonstrate that nitrite cross-feeding eliminates inter-enzyme competition and, in turn, reduces nitrite accumulation. We further demonstrate that nitrite cross-feeding accelerates substrate consumption, but only when nitrite has growth-inhibiting effects. Knowledge about inter-enzyme competition and the inhibitory effects of intermediates could therefore be important for deciding how to best segregate different metabolic processes into different microbial cell types to optimize a desired biotransformation. PMID:26771930

  10. Physical processes at work in sub-30 fs, PW laser pulse-driven plasma accelerators: Towards GeV electron acceleration experiments at CILEX facility

    NASA Astrophysics Data System (ADS)

    Beck, A.; Kalmykov, S. Y.; Davoine, X.; Lifschitz, A.; Shadwick, B. A.; Malka, V.; Specka, A.

    2014-03-01

    Optimal regimes and physical processes at work are identified for the first round of laser wakefield acceleration experiments proposed at a future CILEX facility. The Apollon-10P CILEX laser, delivering fully compressed, near-PW-power pulses of sub-25 fs duration, is well suited for driving electron density wakes in the blowout regime in cm-length gas targets. Early destruction of the pulse (partly due to energy depletion) prevents electrons from reaching dephasing, limiting the energy gain to about 3 GeV. However, the optimal operating regimes, found with reduced and full three-dimensional particle-in-cell simulations, show high energy efficiency, with about 10% of incident pulse energy transferred to 3 GeV electron bunches with sub-5% energy spread, half-nC charge, and absolutely no low-energy background. This optimal acceleration occurs in 2 cm length plasmas of electron density below 1018 cm-3. Due to their high charge and low phase space volume, these multi-GeV bunches are tailor-made for staged acceleration planned in the framework of the CILEX project. The hallmarks of the optimal regime are electron self-injection at the early stage of laser pulse propagation, stable self-guiding of the pulse through the entire acceleration process, and no need for an external plasma channel. With the initial focal spot closely matched for the nonlinear self-guiding, the laser pulse stabilizes transversely within two Rayleigh lengths, preventing subsequent evolution of the accelerating bucket. This dynamics prevents continuous self-injection of background electrons, preserving low phase space volume of the bunch through the plasma. Near the end of propagation, an optical shock builds up in the pulse tail. This neither disrupts pulse propagation nor produces any noticeable low-energy background in the electron spectra, which is in striking contrast with most of existing GeV-scale acceleration experiments.

  11. Revealing the flux: Using processed Husimi maps to visualize dynamics of bound systems and mesoscopic transport

    NASA Astrophysics Data System (ADS)

    Mason, Douglas J.; Borunda, Mario F.; Heller, Eric J.

    2015-04-01

    We elaborate upon the "processed Husimi map" representation for visualizing quantum wave functions using coherent states as a measurement of the local phase space to produce a vector field related to the probability flux. Adapted from the Husimi projection, the processed Husimi map is mathematically related to the flux operator under certain limits but offers a robust and flexible alternative since it can operate away from these limits and in systems that exhibit zero flux. The processed Husimi map is further capable of revealing the full classical dynamics underlying a quantum wave function since it reverse engineers the wave function to yield the underlying classical ray structure. We demonstrate the capabilities of processed Husimi maps on bound systems with and without electromagnetic fields, as well as on open systems on and off resonance, to examine the relationship between closed system eigenstates and mesoscopic transport.

  12. DSN microwave antenna holography. Part 2: Data processing and display of high-resolution effective maps

    NASA Technical Reports Server (NTRS)

    Rochblatt, D. J.; Rahmat-Samii, Y.; Mumford, J. H.

    1986-01-01

    The results of a recently completed computer graphic package for the process and display of holographically recorded data into effective aperture maps are presented. The term effective maps (labelled provisional on the holograms) signifies that the maps include contributions of surface mechanical errors as well as other electromagnetic factors (phase error due to feed/subreflector misalignment, linear phase error contribution due to pointing errors, subreflector flange diffraction effects, and strut diffraction shadows). While these maps do not show the true mechanical surface errors, they nevertheless show the equivalent errors, which are effective in determining overall antenna performance. Final steps to remove electromagnetic pointing and misalignment factors are now in progress. The processing and display of high-resolution effective maps of a 64m antenna (DSS 63) are presented.

  13. MetricMap: an embedding technique for processing distance-based queries in metric spaces.

    PubMed

    Wang, Jason T L; Wang, Xiong; Shasha, Dennis; Zhang, Kaizhong

    2005-10-01

    In this paper, we present an embedding technique, called MetricMap, which is capable of estimating distances in a pseudometric space. Given a database of objects and a distance function for the objects, which is a pseudometric, we map the objects to vectors in a pseudo-Euclidean space with a reasonably low dimension while preserving the distance between two objects approximately. Such an embedding technique can be used as an approximate oracle to process a broad class of distance-based queries. It is also adaptable to data mining applications such as data clustering and classification. We present the theory underlying MetricMap and conduct experiments to compare MetricMap with other methods including MVP-tree and M-tree in processing the distance-based queries. Experimental results on both protein and RNA data show the good performance and the superiority of MetricMap over the other methods. PMID:16240772
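The pruning power of a distance-preserving embedding can be illustrated with a simple pivot-based construction in the spirit of MetricMap. This is not the paper's pseudo-Euclidean algorithm; it is a minimal Lipschitz-style stand-in whose key property (the embedded distance lower-bounds the true distance, so it never produces false dismissals in range queries) follows directly from the triangle inequality.

```python
import itertools

def pivot_embed(objects, dist, pivots):
    """Map each object to its vector of distances to a few pivot objects.

    A simplified, pivot-based embedding in the spirit of MetricMap (the
    paper's pseudo-Euclidean construction is more elaborate). By the
    triangle inequality, the L-infinity distance between two embedded
    vectors never exceeds the true distance, so it can safely prune
    candidates in distance-based queries.
    """
    return {o: tuple(dist(o, p) for p in pivots) for o in objects}

def linf(u, v):
    """Chebyshev (L-infinity) distance between embedded vectors."""
    return max(abs(a - b) for a, b in zip(u, v))

def hamming(a, b):
    """Toy metric on equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

objs = ["AAAA", "AAAT", "TTTT", "ATAT"]
emb = pivot_embed(objs, hamming, pivots=["AAAA", "TTTT"])

# Lower-bound property: embedded distance <= true distance for all pairs,
# so any pair whose embedded distance exceeds a query radius can be
# discarded without ever computing the expensive true distance.
for a, b in itertools.combinations(objs, 2):
    assert linf(emb[a], emb[b]) <= hamming(a, b)
```

The same lower-bounding contract is what lets MetricMap act as an approximate oracle in front of exact methods such as MVP-tree or M-tree search.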

  14. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-07-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and simulation results compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. 
Furthermore, we argue not to use

  15. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, M.; Buss, R.; Scherrer, S.; Margreth, M.; Zappa, M.

    2015-12-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexity of automatic DRP mapping approaches affects hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison and a deviation map were derived. The automatically derived DRP-maps were used in synthetic runoff simulations with an adapted version of the hydrological model PREVAH, and simulation results compared with those from simulations using the reference maps. The DRP-maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP-maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. 
We therefore recommend not only using expert

  16. New Applications of Ultraviolet Spectroscopy to the Identification of Coronal Heating and Solar Wind Acceleration Processes

    NASA Astrophysics Data System (ADS)

    Cranmer, S. R.

    2001-05-01

    The Ultraviolet Coronagraph Spectrometer (UVCS) aboard SOHO has revealed surprisingly extreme plasma conditions in the extended solar corona. This presentation reviews several new ways that UVCS and future spectroscopic instruments can be used to identify the physical processes responsible for producing the various components of the solar wind. The most promising mechanism for heating and accelerating heavy ions remains the dissipation of ion cyclotron waves, but the origin of these waves---as well as the dominant direction of propagation relative to the background magnetic field---is not yet known. Ultraviolet spectroscopy of a sufficient number of ions would be able to pinpoint the precise magnetohydrodynamic modes and the relative amounts of damping, turbulent cascade, and local plasma instability in the corona. (A simple graphical comparison of line-width ratios will be presented as a first step in this direction.) Spectroscopic observations with sufficient sensitivity can also detect departures from Gaussian line shapes that are unique identifiers of non-Maxwellian velocity distributions arising from cyclotron (or other) processes. Even without these next-generation diagnostics, UVCS data are continuing to put constraints on how the heating and acceleration mechanisms respond to changes in the ``background'' properties of coronal holes and streamers; i.e., geometry, latitude, and density. These provide crucial scaling relations in the acceleration region of the fast and slow solar wind that must be reproduced by any candidate theory. This work is supported by the National Aeronautics and Space Administration under grant NAG5-10093 to the Smithsonian Astrophysical Observatory, by Agenzia Spaziale Italiana, and by the Swiss contribution to the ESA PRODEX program.

  17. Spatiotemporal processing of linear acceleration: primary afferent and central vestibular neuron responses

    NASA Technical Reports Server (NTRS)

    Angelaki, D. E.; Dickman, J. D.

    2000-01-01

    Spatiotemporal convergence and two-dimensional (2-D) neural tuning have been proposed as a major neural mechanism in the signal processing of linear acceleration. To examine this hypothesis, we studied the firing properties of primary otolith afferents and central otolith neurons that respond exclusively to horizontal linear accelerations of the head (0.16-10 Hz) in alert rhesus monkeys. Unlike primary afferents, the majority of central otolith neurons exhibited 2-D spatial tuning to linear acceleration. As a result, central otolith dynamics vary as a function of movement direction. During movement along the maximum sensitivity direction, the dynamics of all central otolith neurons differed significantly from those observed for the primary afferent population. Specifically at low frequencies (acceleration. At least three different groups of central response dynamics were described according to the properties observed for motion along the maximum sensitivity direction. "High-pass" neurons exhibited increasing gains and phase values as a function of frequency. "Flat" neurons were characterized by relatively flat gains and constant phase lags (approximately 20-55 degrees). A few neurons ("low-pass") were characterized by decreasing gain and phase as a function of frequency. The response dynamics of central otolith neurons suggest that the approximately 90 degrees phase lags observed at low frequencies are not the result of a neural integration but rather the effect of nonminimum phase behavior, which could arise at least partly through spatiotemporal convergence. Neither afferent nor central otolith neurons discriminated between gravitational and inertial components of linear acceleration. Thus response sensitivity was indistinguishable during 0.5-Hz pitch oscillations and fore-aft movements.

  18. UV Irradiation Accelerates Amyloid Precursor Protein (APP) Processing and Disrupts APP Axonal Transport

    PubMed Central

    Almenar-Queralt, Angels; Falzone, Tomas L.; Shen, Zhouxin; Lillo, Concepcion; Killian, Rhiannon L.; Arreola, Angela S.; Niederst, Emily D.; Ng, Kheng S.; Kim, Sonia N.; Briggs, Steven P.; Williams, David S.

    2014-01-01

    Overexpression and/or abnormal cleavage of amyloid precursor protein (APP) are linked to Alzheimer's disease (AD) development and progression. However, the molecular mechanisms regulating cellular levels of APP or its processing, and the physiological and pathological consequences of altered processing, are not well understood. Here, using mouse and human cells, we found that neuronal damage induced by UV irradiation leads to specific APP, APLP1, and APLP2 decline by accelerating their secretase-dependent processing. Pharmacological inhibition of endosomal/lysosomal activity partially protects against UV-induced APP processing, implying a contribution of the endosomal and/or lysosomal compartments to this process. We found that a biological consequence of UV-induced γ-secretase processing of APP is impairment of APP axonal transport. To probe the functional consequences of impaired APP axonal transport, we isolated and analyzed presumptive APP-containing axonal transport vesicles from mouse cortical synaptosomes using electron microscopy, biochemical, and mass spectrometry analyses. We identified a population of morphologically heterogeneous organelles that contains APP, the secretase machinery, molecular motors, and previously proposed and new residents of APP vesicles. These possible cargoes are enriched in proteins whose dysfunction could contribute to neuronal malfunction and diseases of the nervous system including AD. Together, these results suggest that damage-induced APP processing might impair APP axonal transport, which could result in failure of synaptic maintenance and neuronal dysfunction. PMID:24573290

  19. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches. PMID:24298424
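The general idea behind rescaling a single Monte Carlo run can be sketched with the "white Monte Carlo" trick: one baseline simulation records the exit path length of each detected photon, and reflectance for any absorption coefficient then follows from Beer-Lambert reweighting with no re-simulation. This is an illustrative sketch of the principle, not the paper's MATLAB/GPU implementation, and the path lengths below are made-up example values.

```python
import numpy as np

def rescale_reflectance(path_lengths, mu_a):
    """Rescale one absorption-free Monte Carlo run to any mu_a.

    `path_lengths` are the total tissue path lengths (cm) of photons
    that exited toward the detector in a single baseline simulation.
    Beer-Lambert reweighting, w_i = exp(-mu_a * L_i), turns that one
    run into a diffuse reflectance estimate for any absorption
    coefficient mu_a (1/cm) -- the step that is embarrassingly
    parallel and hence a natural fit for a GPU.
    """
    weights = np.exp(-mu_a * np.asarray(path_lengths, dtype=float))
    return float(weights.mean())

# Hypothetical baseline run: exit path lengths (cm) for five photons.
L = [0.5, 1.2, 0.8, 2.0, 0.3]
R0 = rescale_reflectance(L, 0.0)  # no absorption: every photon counts fully
R1 = rescale_reflectance(L, 1.0)  # stronger absorption: lower reflectance
```

Sweeping `mu_a` over a grid then costs one vectorized reweighting per value instead of one full photon-transport simulation per value, which is the source of the orders-of-magnitude speedup the abstract reports.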

  20. Business process mapping techniques for ISO 9001 and 14001 certifications

    SciTech Connect

    Klement, R.E.; Richardson, G.D.

    1997-11-01

    AlliedSignal Federal Manufacturing and Technologies/Kansas City (FM and T/KC) produces nonnuclear components for nuclear weapons. The company has operated the plant for the US Department of Energy (DOE) since 1949. Throughout the history of the plant, procedures have been written to reflect the nuclear weapons industry best practices, and the facility has built a reputation for producing high quality products. The purpose of this presentation is to demonstrate how Total Quality principles were used at FM and T/KC to document processes for ISO 9001 and 14001 certifications. The information presented will lead to a better understanding of business administration by aligning procedures to key business processes within a business model; converting functional-based procedures to process-based procedures for total integrated resource management; and assigning ownership, validation, and metrics to procedures/processes, adding value to a company's profitability.

  1. Terahertz digital holography image processing based on MAP algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Guang-Hao; Li, Qi

    2015-04-01

    Terahertz digital holography combines terahertz technology with digital holography, fully exploiting the advantages of both. Unfortunately, the quality of terahertz digital holography reconstruction images is severely degraded by speckle noise, which hinders the popularization of this technology. In this paper, a maximum a posteriori (MAP) estimation filter is harnessed for the restoration of the digitally reconstructed images. The filtering results are compared with images filtered by a Wiener filter and by conventional frequency-domain filters, from both subjective and objective perspectives. For objective assessment, we adopted the speckle index (SPKI) and the edge preserving index (EPI) to quantify image quality. A Canny edge detector is also used to outline the target in the original and reconstructed images, which then plays an important role in the evaluation of filter performance. All the analyses indicate that the MAP estimation filter performs superiorly compared with the other two competitors and has enhanced the terahertz digital holography reconstruction images to a certain degree, allowing more accurate boundary identification.
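One of the objective metrics the abstract mentions, the speckle index, can be sketched as the average ratio of local standard deviation to local mean over small windows (a common definition; the paper's exact variant may differ). Lower values indicate stronger speckle suppression, so a filter is judged by how much it lowers SPKI while keeping EPI high.

```python
import numpy as np

def speckle_index(img, win=3):
    """Speckle index: mean of (local std / local mean) over win x win
    windows. A common definition used to score speckle suppression;
    lower is better. Windows with zero mean are skipped to avoid
    division by zero.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    ratios = []
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            patch = img[i:i + win, j:j + win]
            m = patch.mean()
            if m > 0:
                ratios.append(patch.std() / m)
    return float(np.mean(ratios))

# A uniform patch has zero local variation; a checkered patch does not,
# so the speckle index separates smooth from speckled regions.
smooth = np.full((3, 3), 5.0)
noisy = np.array([[1, 9, 1], [9, 1, 9], [1, 9, 1]], dtype=float)
```

In a filter comparison, SPKI computed on each filtered reconstruction (MAP, Wiener, frequency-domain) gives the quantitative ranking, while EPI guards against filters that suppress speckle by simply blurring edges away.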

  2. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  3. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  4. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  5. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  6. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  7. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  8. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  9. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  10. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  11. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  12. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  13. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  14. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  15. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  16. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  17. He-Ne laser irradiation acceleration of healing process of open gingival wounds in cats

    NASA Astrophysics Data System (ADS)

    Abramovici, Armand; Roisman, P.; Hirsch, A.; Segal, S.; Fischer, J.

    1994-09-01

    A histopathological study of the effect of He-Ne laser treatment on the evolution of open gingival wounds in cats is presented. After the second postoperative day, the irradiation first initiates a massive inflammatory cell exudate together with an antiedematic effect. A substantial acceleration of the healing process is noted on the sixth day in irradiated tissues, whereas the operated but untreated gingiva still shows, at this time, inflammatory exudate and isolated fibroblasts. The persistence of inflammatory exudate and edema in the untreated gingiva appears to delay the healing process. The biostimulatory effect of the soft laser seems to act photodynamically both at the intracellular level and in the extracellular milieu, promoting the proliferative capacity of fibroblasts and the capillary bud formation necessary for rapid differentiation of connective tissue. The controlled application of He-Ne laser treatment is suggested for dental healing procedures.

  18. Modified Anderson Method for Accelerating 3D-RISM Calculations Using Graphics Processing Unit.

    PubMed

    Maruyama, Yutaka; Hirata, Fumio

    2012-09-11

    A fast algorithm is proposed to solve the three-dimensional reference interaction site model (3D-RISM) theory on a graphics processing unit (GPU). 3D-RISM theory is a powerful tool for investigating biomolecular processes in solution; however, such calculations are often both memory-intensive and time-consuming. We sought to accelerate these calculations using GPUs but, to work around the limited memory size of GPUs, we modified the less memory-intensive "Anderson method" to give faster convergence in 3D-RISM calculations. Using this method on a Tesla C2070 GPU, we reduced the total computational time by a factor of 8 (1.4-fold from the modified Anderson method and 5.7-fold from the GPU) compared to calculations with the conventional method on an Intel Xeon machine (eight cores, 3.33 GHz). PMID:26605714
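
The Anderson mixing scheme the abstract refers to can be sketched for a generic fixed-point problem x = g(x). The following is a minimal illustration, not the authors' 3D-RISM implementation; the function name, history depth m, and tolerances are illustrative assumptions.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-10, max_iter=200):
    """Solve x = g(x), accelerating the Picard iteration by Anderson
    mixing over the last m residuals."""
    x = np.asarray(x0, dtype=float).copy()
    X, G = [], []                      # histories of iterates and g-values
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x                     # residual of the fixed-point equation
        if np.linalg.norm(f) < tol:
            return x
        X.append(x); G.append(gx)
        X, G = X[-(m + 1):], G[-(m + 1):]
        k = len(X)
        if k == 1:
            x = gx                     # plain Picard step to start the history
            continue
        # Least-squares mixing: choose gamma to minimize the extrapolated
        # residual, then combine the stored g-values accordingly.
        F = [G[i] - X[i] for i in range(k)]
        dF = np.column_stack([F[i + 1] - F[i] for i in range(k - 1)])
        dG = np.column_stack([G[i + 1] - G[i] for i in range(k - 1)])
        gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
        x = G[-1] - dG @ gamma
    return x
```

On the scalar test problem x = cos(x), this converges to the fixed point in far fewer iterations than plain Picard iteration, which is the effect exploited above to reduce the number of expensive 3D-RISM evaluations.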

  19. ATCOM: accelerated image processing for terrestrial long-range imaging through atmospheric effects

    NASA Astrophysics Data System (ADS)

    Curt, Petersen F.; Paolini, Aaron

    2013-05-01

    Long-range video surveillance performance is often severely diminished due to atmospheric turbulence. The larger apertures typically used for video-rate operation at long-range are particularly susceptible to scintillation and blurring effects that limit the overall diffraction efficiency and resolution. In this paper, we present research progress made toward a digital signal processing technique which aims to mitigate the effects of turbulence in real-time. Our previous work in this area focused on an embedded implementation for portable applications. Our more recent research has focused on functional enhancements to the same algorithm using general-purpose hardware. We present some techniques that were successfully employed to accelerate processing of high-definition color video streams and study performance under nonideal conditions involving moving objects and panning cameras. Finally, we compare the real-time performance of two implementations using a CPU and a GPU.

  20. Arsenite exposure accelerates aging process regulated by the transcription factor DAF-16/FOXO in Caenorhabditis elegans.

    PubMed

    Yu, Chan-Wei; How, Chun Ming; Liao, Vivian Hsiu-Chuan

    2016-05-01

    Arsenic is a known human carcinogen, and high levels of arsenic contamination in food, soils, water, and air are of toxicological concern. Arsenic remains a contaminant of emerging interest, yet its effects on the aging process have received little attention. In this study, we investigated the effects and the underlying mechanisms of chronic arsenite exposure on the aging process in Caenorhabditis elegans. The results showed that prolonged arsenite exposure significantly decreased lifespan compared to that of non-exposed worms. In addition, arsenite exposure (100 μM) caused significant changes in age-dependent biomarkers, including decreased defecation frequency and age-dependent accumulation of intestinal lipofuscin and lipid peroxidation in C. elegans. Further evidence revealed that the intracellular reactive oxygen species (ROS) level increased significantly in an age-dependent manner upon 100 μM arsenite exposure. Moreover, the mRNA levels of transcriptional markers of aging (hsp-16.1, hsp-16.49, and hsp-70) were increased in aged worms under arsenite exposure (100 μM). Finally, we showed that daf-16 mutant worms were more sensitive to arsenite exposure (100 μM) with respect to lifespan, and that aged daf-16 mutants failed to induce the expression of the DAF-16 target gene sod-3 under arsenite exposure (100 μM). Our study demonstrated that chronic arsenite exposure accelerates the aging process in C. elegans, and that the overproduction of intracellular ROS and the transcription factor DAF-16/FOXO play roles in mediating this acceleration. This study implicates a potential ecotoxicological and health risk of arsenic in the environment. PMID:26796881

  1. Extracting Process and Mapping Management for Heterogeneous Systems

    NASA Astrophysics Data System (ADS)

    Hagara, Igor; Tanuška, Pavol; Duchovičová, Soňa

    2013-12-01

    Many papers describe three common methods of extracting data from primary systems. This paper defines how to select the correct method, or combination of methods, to minimize the impact on the production system and its routine operation. Before using any method, it is necessary to know the primary system and its database structures in order to make optimal use of the actual data structures and to produce the best design for the ETL process. Database structures are usually categorized into groups that characterize their quality. This classification helps to find the ideal method for each group and thus to design an ETL process with minimal impact on the data warehouse and the production system.

  2. Supporting the Learning Process with Collaborative Concept Mapping Using Computer-based Communication Tools and Processes.

    ERIC Educational Resources Information Center

    De Simone, Christina; Schmid, Richard F.; McEwen, Laura A.

    2001-01-01

    Studied the effects of a combination of student collaboration, concept mapping, and electronic technologies with 26 students in a graduate level learning theories class. Findings suggest that concept mapping and collaborative learning techniques complement each other, and that students found the combined approach useful. (SLD)

  3. Initial investigation using statistical process control for quality control of accelerator beam steering

    PubMed Central

    2011-01-01

    Background This study seeks to increase clinical operational efficiency and accelerator beam consistency by retrospectively investigating the application of statistical process control (SPC) to linear accelerator beam steering parameters, to determine the utility of such a methodology in detecting changes prior to equipment failure (interlocks actuated). Methods Steering coil currents (SCC) for the transverse and radial planes are set such that a reproducibly useful photon or electron beam is available. SCC are sampled and stored in the control console computer each day during the morning warm-up. The transverse and radial positioning and angle SCC for the photon beam energies were evaluated using average and range (Xbar-R) process control charts (PCC). The weekly average and range values (subgroup n = 5) for each steering coil were used to develop the PCC. SCC from September 2009 (annual calibration) until two weeks following a beam steering failure in June 2010 were evaluated. PCC limits were calculated using the first twenty subgroups. Appropriate action limits were developed using conventional SPC guidelines. Results The PCC high-alarm action limit was set at 6 standard deviations from the mean. A value exceeding this limit would require beam scanning and evaluation by the physicist and engineer. Two low alarms were used to indicate negative trends. Alarms received following establishment of limits (week 20) are indicative of a non-random cause of deviation (Xbar chart) and/or an uncontrolled process (R chart). Transverse angle SCC for 6 MV and 15 MV indicated a high alarm 90 and 108 days prior to equipment failure, respectively. A downward trend in this parameter continued, in high alarm, until failure. Transverse position and radial angle SCC for 6 and 15 MV indicated low alarms starting as early as 124 and 116 days prior to failure, respectively. Conclusion Radiotherapy clinical efficiency and accelerator beam consistency may be improved by instituting SPC
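
The Xbar-R chart construction described above follows conventional SPC practice; a minimal sketch is given below, assuming the standard Shewhart constants for subgroup size n = 5. The 6-sigma alarm policy in the study is a local choice and is not reproduced here.

```python
import numpy as np

# Standard Shewhart chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Center lines and conventional 3-sigma limits for Xbar and R charts.
    subgroups: k subgroups of n = 5 samples each (e.g., one week of daily
    steering-coil-current readings per subgroup).  Returns
    (lower limit, center line, upper limit) for each chart."""
    sub = np.asarray(subgroups, dtype=float)
    xbar = sub.mean(axis=1)                  # subgroup means
    r = sub.max(axis=1) - sub.min(axis=1)    # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "r": (D3 * rbar, rbar, D4 * rbar),
    }
```

New subgroup means falling outside the Xbar limits signal a non-random cause; new ranges above the R-chart upper limit signal an uncontrolled process, mirroring the alarm logic described in the abstract.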

  4. Development of Process Maps in Two-Wire Tandem Submerged Arc Welding Process of HSLA Steel

    NASA Astrophysics Data System (ADS)

    Kiran, D. V.; Alam, S. A.; De, A.

    2013-04-01

    Appropriate selection of welding conditions to guarantee the requisite weld joint mechanical properties is difficult because of the complex interactions among the welding variables. An approach is presented here to identify suitable welding conditions in typical two-wire tandem submerged arc welding (SAW-T), which involves many welding variables. First, an objective function is defined that depicts the squared error between the mechanical properties of the weld joint and those of the base material. A set of artificial neural network (ANN)-based models is developed next to estimate the weld joint properties as a function of the welding conditions, using experimentally measured results. The neural network model-based predictions are then used to create a set of process map contours that depict the minimum achievable values of the objective function and the corresponding welding conditions. In typical SAW-T of HSLA steel, a welding speed from 9.0 to 11.5 mm/s, a leading wire current from 530 to 580 A, and a trailing wire negative current from 680 to 910 A are found to be optimal.

  5. A polarization-based frequency scanning interferometer and the signal processing acceleration method based on parallel processing architecture

    NASA Astrophysics Data System (ADS)

    Lee, Seung Hyun; Kim, Min Young

    The FSI system, one of the most promising optical surface measurement techniques, generally delivers superior optical performance compared with other 3-dimensional measuring methods because its hardware structure is fixed in operation and only the light frequency is scanned in a specific spectral band, without vertical scanning of the target surface or the objective lens. An FSI system collects a set of interference fringe images by changing the frequency of the light source. It then transforms the intensity data of the acquired images into frequency information and calculates the height profile of the target objects with the help of FFT-based frequency analysis. However, it still suffers from optical noise from the target surface and from relatively long processing times due to the number of images acquired in the frequency scanning phase. First, a polarization-based frequency scanning interferometry (PFSI) is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a λ/4 plate in front of the reference mirror, a λ/4 plate in front of the target object, a polarizing beam splitter, a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a λ/2 plate between the PBS and the polarizer of the light source. Using the proposed system, we can solve the problem of low contrast in the acquired fringe images by means of the polarization technique, and we can control the light distribution of the object beam and the reference beam. Second, a signal processing acceleration method is proposed for PFSI based on a parallel processing architecture, which consists of parallel processing hardware and software such as a GPU (Graphics Processing Unit) and CUDA (Compute Unified Device Architecture). As a result, the processing time reaches the tact-time level required for real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the obtained results show the effectiveness of the proposed system and method.
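
The FFT step described above, which turns an intensity-versus-frequency fringe record into a height, can be illustrated with a toy single-pixel calculation. This is a hypothetical sketch of the generic FSI principle, not the paper's PFSI pipeline; the function name and sampling values are invented for illustration.

```python
import numpy as np

C = 299792458.0  # speed of light in vacuum, m/s

def height_from_fringes(intensity, df):
    """Recover a surface height from one pixel's fringe record in a
    frequency scanning interferometer.  `intensity` holds the fringe
    samples taken at optical-frequency steps of `df` (Hz); the dominant
    FFT bin gives the round-trip delay 2h/c."""
    n = len(intensity)
    spec = np.abs(np.fft.rfft(intensity - np.mean(intensity)))
    k = int(np.argmax(spec))       # dominant fringe rate over the scan
    tau = k / (n * df)             # round-trip delay, seconds
    return C * tau / 2.0           # one-way height, meters
```

With a pure fringe cos(2π·τ·f) sampled over the scan, the recovered height is exact up to the FFT bin spacing c/(2·n·df); practical systems refine the peak location by interpolation.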

  6. Surface damage correction, and atomic level smoothing of optics by Accelerated Neutral Atom Beam (ANAB) Processing

    NASA Astrophysics Data System (ADS)

    Walsh, M.; Chau, K.; Kirkpatrick, S.; Svrluga, R.

    2014-10-01

    Surface damage and surface contamination of optics have long been a source of problems for the laser, lithography, and other industries. Nano-sized surface defects may present significant performance issues in optical materials for deep-UV and EUV applications. The effect of nanometer-sized surface damage (scratches, pits, and organics) on the surface of optics made of traditional materials and of newer, more exotic materials is a limiting factor for high-end performance. Angstrom-level smoothing of materials such as calcium fluoride, spinel, zinc sulfide, BK7, and others presents a unique set of challenges. Exogenesis Corporation, using its proprietary Accelerated Neutral Atom Beam (ANAB) technology, is able to remove nano-scale surface damage and contamination, leaving many material surfaces with a roughness of typically around one angstrom. This process technology has been demonstrated on nonlinear crystals and various other high-end optical materials. This paper describes the ANAB technology and summarizes smoothing results for various materials that have been processed with ANAB. All surface measurement data for the paper were produced via AFM analysis. Exogenesis Corporation's ANAB processing technology is a new and unique surface modification technique that has been demonstrated to be highly effective at correcting nano-scale surface defects. ANAB is a non-contact vacuum process comprising an intense beam of accelerated, electrically neutral gas atoms with average energies of a few tens of electron volts. The ANAB process does not apply the normal forces associated with traditional polishing techniques. ANAB efficiently removes surface contaminants and nano-scale scratches, bumps, and other asperities under low-energy physical sputtering conditions as the removal action proceeds. ANAB may be used to remove a precisely controlled, uniform thickness of material without any increase in surface roughness, regardless of the total amount of material removed. The ANAB process does not

  7. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units

    PubMed Central

    2013-01-01

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long-time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling. PMID:24634618

  8. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units

    PubMed Central

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is that 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778

  9. On-site installation and shielding of a mobile electron accelerator for radiation processing

    NASA Astrophysics Data System (ADS)

    Catana, Dumitru; Panaitescu, Julian; Axinescu, Silviu; Manolache, Dumitru; Matei, Constantin; Corcodel, Calin; Ulmeanu, Magdalena; Bestea, Virgil

    1995-05-01

    The development of radiation processing of some bulk products, e.g., grains or potatoes, would be furthered if the irradiation could be carried out at the place of storage, i.e., the silo. A promising solution is proposed, consisting of a mobile electron accelerator installed on a pair of trucks and traveling from one customer to another. The energy of the accelerated electrons was chosen as 5 MeV, with 10 to 50 kW beam power. Irradiation is possible either with electrons or with bremsstrahlung. A major problem with the above solution is the provision of adequate shielding at the customer site at a minimum investment cost. Plans are presented for a bunker that houses the truck carrying the radiation head. The beam is directed vertically downward, through the truck floor, a transport pipe, and a scanning horn. The irradiation takes place in a pit, through which the products are transported on a belt. The belt path is chosen so as to minimize openings in the shielding. Shielding calculations are presented assuming a working regime with 5 MeV bremsstrahlung. Leakage and scattered radiation are taken into account.

  10. Graphics processing unit (GPU)-accelerated particle filter framework for positron emission tomography image reconstruction.

    PubMed

    Yu, Fengchao; Liu, Huafeng; Hu, Zhenghui; Shi, Pengcheng

    2012-04-01

    As a consequence of the random nature of photon emissions and detections, the data collected by a positron emission tomography (PET) imaging system can be shown to be Poisson distributed. Meanwhile, there have been considerable efforts within the tracer kinetic modeling communities aimed at establishing the relationship between the PET data and the physiological parameters that affect the uptake and metabolism of the tracer. Both statistical and physiological models are important to PET reconstruction. The majority of previous efforts are based on simplified, nonphysical mathematical expressions, such as Poisson modeling of the measured data, which is, on the whole, carried out without consideration of the underlying physiology. In this paper, we propose a graphics processing unit (GPU)-accelerated reconstruction strategy that can take both the statistical model and the physiological model into consideration with the aid of state-space evolution equations. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify these two constraints in a general framework. In order to accelerate reconstruction, GPU-based parallel computing is introduced. Experiments with Zubal-thorax-phantom data, Monte Carlo simulated phantom data, and real phantom data show the power of the method. Furthermore, thanks to the computing power of the GPU, the reconstruction time is practical for clinical application. PMID:22472843

  11. High-Speed Digital Signal Processing Method for Detection of Repeating Earthquakes Using GPGPU-Acceleration

    NASA Astrophysics Data System (ADS)

    Kawakami, Taiki; Okubo, Kan; Uchida, Naoki; Takeuchi, Nobunao; Matsuzawa, Toru

    2013-04-01

    Repeating earthquakes occur on the same asperity at the plate boundary. These earthquakes have an important property: the seismic waveforms observed at an identical observation site are very similar regardless of their occurrence time. The slip histories of repeating earthquakes could reveal the existence of asperities: the analysis of repeating earthquakes can detect the characteristics of the asperities and realize temporal and spatial monitoring of slip at the plate boundary. Moreover, we expect medium-term prediction of earthquakes at the plate boundary by means of the analysis of repeating earthquakes. Although previous work has mostly clarified the existence of asperities and repeating earthquakes, and the relationship between asperities and quasi-static slip areas, a stable and robust method for the automatic detection of repeating earthquakes has not yet been established. Furthermore, in order to process the enormous data volumes (so-called big data), speeding up the signal processing is an important issue. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for signal processing in various fields of study. This movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly; that is, a PC (personal computer) with GPUs might be a personal supercomputer. GPU computing provides a high-performance computing environment at a lower cost than before. Therefore, the use of GPUs contributes to a significant reduction of the execution time in signal processing of huge seismic data sets. In this study, first, we applied band-limited Fourier phase correlation as a fast method of detecting repeating earthquakes. This method utilizes only band-limited phase information and yields the correlation values between two seismic signals. Secondly, we employ a coherence function using three orthogonal components (East-West, North-South, and Up-Down) of seismic data as a
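
The band-limited Fourier phase correlation mentioned above can be sketched as a generic phase-only correlation. This is an illustrative CPU version under assumed band and sampling parameters, not the authors' GPGPU code.

```python
import numpy as np

def phase_correlation(a, b, fs, band=(1.0, 8.0)):
    """Band-limited phase-only correlation of two equal-length traces
    sampled at fs Hz.  Spectral amplitude is discarded; only the phase
    of the cross-spectrum inside `band` contributes.  Returns the
    correlation as a function of lag, with zero lag at the center."""
    n = len(a)
    cross = np.fft.rfft(a) * np.conj(np.fft.rfft(b))
    mag = np.abs(cross)
    # Normalize each bin to unit magnitude (phase only), guard zeros.
    phase = np.where(mag > 0, cross / np.maximum(mag, 1e-30), 0)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    cc = np.fft.irfft(np.where(keep, phase, 0), n)
    return np.fft.fftshift(cc)
```

Because amplitude is discarded, two occurrences of the same repeating earthquake produce a sharp correlation peak at their relative lag even when their amplitudes differ, which is what makes the phase-only form attractive for detection.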

  12. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    SciTech Connect

    Wang, J; Templeton, D C; Harris, D B

    2011-01-21

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, while our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method, combined with conventional methods, significantly improves the network detection ability in an efficient manner.
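
Empirical matched field processing builds detectors from templates of previously observed (master) events. A much-simplified single-channel analogue is sliding normalized cross-correlation, sketched below; the threshold and window handling are illustrative assumptions, not the study's MFP implementation.

```python
import numpy as np

def template_detect(trace, template, threshold=0.7):
    """Slide a master-event template along a continuous trace and flag
    sample offsets whose normalized cross-correlation exceeds
    `threshold`.  Returns (offset, correlation) pairs."""
    m = len(template)
    t = template - template.mean()
    t = t / np.linalg.norm(t)              # unit-norm, zero-mean template
    detections = []
    for i in range(len(trace) - m + 1):
        w = trace[i:i + m] - trace[i:i + m].mean()
        nw = np.linalg.norm(w)
        if nw == 0:
            continue                       # dead window, nothing to match
        cc = float(t @ w) / nw
        if cc >= threshold:
            detections.append((i, cc))
    return detections
```

A waveform identical to the template scores exactly 1.0, while uncorrelated noise scores near zero, so events far below the conventional detection threshold can still be picked up when they repeat a known waveform.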

  13. Machine and Process System Diagnostics Using One-Step Prediction Maps

    SciTech Connect

    Breeding, J.E.; Damiano, B.; Tucker, R.W., Jr.

    1999-05-10

    This paper describes a method for machine or process system diagnostics that uses one-step prediction maps. The method uses nonlinear time series analysis techniques to form a one-step prediction map that estimates the next time series data point when given a sequence of previously measured time series data points. The difference between the predicted and measured time series values is a measure of the map error. The average value of this error should remain within some bound as long as both the dynamic system and its operating condition remain unchanged. However, changes in the dynamic system or the operating condition will cause an increase in the average map error. Thus, for a constant operating condition, monitoring the average map error over time should indicate when a change has occurred in the dynamic system. Furthermore, the map error itself forms a time series that can be analyzed to detect changes in the system dynamics. The paper provides technical background on the nonlinear analysis techniques used in the diagnostic method, describes the creation of one-step prediction maps and their application to machine or process system diagnostics, and then presents results obtained from applying the diagnostic method to simulated and measured data.
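
The one-step prediction map idea can be sketched with a nearest-neighbor predictor on a delay-embedded series: train the map on a healthy signal, then monitor the average prediction error on new data. The embedding dimension and the logistic-map test dynamics below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def build_map(series, dim=3):
    """Delay-embed a training series into (state, next-value) pairs."""
    X = np.array([series[i:i + dim] for i in range(len(series) - dim)])
    y = np.asarray(series[dim:])
    return X, y

def one_step_errors(X, y, test, dim=3):
    """For each test state, predict the next value from the nearest
    training state and return the absolute one-step errors."""
    errs = []
    for i in range(len(test) - dim):
        state = test[i:i + dim]
        j = int(np.argmin(np.linalg.norm(X - state, axis=1)))
        errs.append(abs(y[j] - test[i + dim]))
    return np.array(errs)
```

On a chaotic test signal with unchanged dynamics the mean error stays near the embedding resolution, while a shift in the underlying dynamics (e.g., a changed system parameter) raises the mean error, which is exactly the alarm condition the paper monitors.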

  14. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Imaged Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  15. Assessing diversity and quality in primary care through the multimethod assessment process (MAP).

    PubMed

    Kairys, Jo Ann; Orzano, John; Gregory, Patrice; Stroebel, Christine; DiCicco-Bloom, Barbara; Roemheld-Hamm, Beatrix; Kobylarz, Fred A; Scott, John G; Coppola, Lisa; Crabtree, Benjamin F

    2002-01-01

    The U.S. health care system serves a diverse population, often resulting in significant disparities in delivery and quality of care. Nevertheless, most quality improvement efforts fail to systematically assess diversity and associated disparities. This article describes application of the multimethod assessment process (MAP) for understanding disparities in relation to diversity, cultural competence, and quality improvement in clinical practice. MAP is an innovative quality improvement methodology that integrates quantitative and qualitative techniques and produces a system level understanding of organizations to guide quality improvement interventions. A demonstration project in a primary care practice illustrates the utility of MAP for assessing diversity. PMID:12938252

  16. Mapping social processes at work in nursing knowledge development.

    PubMed

    Hamilton, Patti; Willis, Eileen; Henderson, Julie; Harvey, Clare; Toffoli, Luisa; Abery, Elizabeth; Verrall, Claire

    2014-09-01

    In this paper, we suggest a blueprint for combining bibliometrics and critical analysis as a way to review published scientific works in nursing. This new approach is neither a systematic review nor meta-analysis. Instead, it is a way for researchers and clinicians to understand how and why current nursing knowledge developed as it did. Our approach will enable consumers and producers of nursing knowledge to recognize and take into account the social processes involved in the development, evaluation, and utilization of new nursing knowledge. We offer a rationale and a strategy for examining the socially-sanctioned actions by which nurse scientists signal to readers the boundaries of their thinking about a problem, the roots of their ideas, and the significance of their work. These actions - based on social processes of authority, credibility, and prestige - have bearing on the careers of nurse scientists and on the ways the knowledge they create enters into the everyday world of nurse clinicians and determines their actions at the bedside, as well as their opportunities for advancement. PMID:24636054

  17. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    SciTech Connect

    Wright, R.M.; Zander, M.E.; Brown, S.K.; Sandoval, D.P.; Gilpatrick, J.D.; Gibson, H.E.

    1992-09-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.
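    The reduction from a camera frame to beam profile data described above can be illustrated with a toy moment analysis. The imagetool algorithms themselves are not given in this abstract, so the frame below and the centroid/width estimator are illustrative assumptions, not the GTA implementation:

```python
# Project a toy beam image onto one axis and estimate the profile's
# center and width from its first and second moments.
import math

image = [  # faux camera frame: intensity of the beam-gas interaction light
    [0, 1, 3, 1, 0],
    [1, 4, 9, 4, 1],
    [0, 2, 5, 2, 0],
]

profile = [sum(col) for col in zip(*image)]  # column sums = horizontal profile
total = sum(profile)
centroid = sum(i * v for i, v in enumerate(profile)) / total
width = math.sqrt(sum(v * (i - centroid) ** 2 for i, v in enumerate(profile)) / total)
print(profile, round(centroid, 2), round(width, 2))
```

    A real system would first subtract background and calibrate pixel pitch to millimeters; the moments then give the beam position and RMS size directly.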

  19. Acceleration of Electron Repulsion Integral Evaluation on Graphics Processing Units via Use of Recurrence Relations.

    PubMed

    Miao, Yipu; Merz, Kenneth M

    2013-02-12

    Electron repulsion integral (ERI) calculation on graphics processing units (GPUs) can significantly accelerate quantum chemical calculations. Herein, the ab initio self-consistent-field (SCF) calculation is implemented on GPUs using recurrence relations, one of the fastest ERI evaluation algorithms currently available. A direct-SCF scheme to assemble the Fock matrix efficiently is presented, wherein ERIs are evaluated on the fly to avoid CPU-GPU data transfer, a well-known architectural bottleneck in GPU-specific computation. Realized speedups on GPUs reach 10-100 times relative to traditional CPU nodes, with accuracies better than 1 × 10^-7 for systems with more than 4000 basis functions. PMID:26588740

  20. [Architecture and design of a processes map in a Urology Department].

    PubMed

    Mora, Jose Ramón; Luján, Marcos

    2015-01-01

    Any organization that intends to orient itself toward process management must understand that it is a system and which factors characterize it. Health care institutions are open, mixed systems. It is within this system that the value chain of the productive process occurs, generating a very complex integrated management system, since the main recipients of the productive system are people with health needs. Applying a process management approach in clinical centers, departments, and units means that, once the processes have been identified, they must be organized according to their mission, establishing an architecture of boxes and connections known as a process map. A process map is therefore the graphical representation of the organizational management system, which may be deployed at various levels by applying modeling techniques. In this article we review the conceptual framework of the health care productive system and process-focused management, incorporating didactic material based on experiences from various centers and health services. PMID:25688532

  1. Effects of a Story Map on Accelerated Reader Postreading Test Scores in Students with High-Functioning Autism

    ERIC Educational Resources Information Center

    Stringfield, Suzanne Griggs; Luscre, Deanna; Gast, David L.

    2011-01-01

    In this study, three elementary-aged boys with high-functioning autism (HFA) were taught to use a graphic organizer called a Story Map as a postreading tool during language arts instruction. Students learned to accurately complete the Story Map. The effect of the intervention on story recall was assessed within the context of a multiple-baseline…

  2. Modeling and image processing for visualization of volcanic mapping

    SciTech Connect

    Pareschi, M.T.; Bernstein, R.

    1989-07-01

    In countries such as Italy, Japan, and Mexico, where active volcanoes are located in highly populated areas, the problem of risk reduction is very important. Actual knowledge about volcanic behavior does not allow deterministic event prediction or the forecasting of eruptions. However, areas exposed to eruptions can be analyzed if eruption characteristics can be inferred or assumed. Models to simulate volcanic eruptions and identify hazardous areas have been developed by collaboration between the IBM Italy Pisa Scientific Center and the Earth Science Department of Pisa University (supported by the Italian National Group of Volcanology of the Italian National Research Council). The input to the models is the set of assumed eruption characteristics: the topology of the phenomenon (ash fall, pyroclastic flow, etc.), vent position, total eruptible mass, wind profile, etc. The output of the models shows volcanic product distribution at ground level. These models are reviewed and their use in hazard estimation (compared with the more traditional techniques currently in use) is outlined. Effective use of these models, by public administrators and planners in preparing plans for the evacuation of hazardous zones, requires the clear and effective display of model results. Techniques to display and visualize such data have been developed by the authors. In particular, a computer program has been implemented on the IBM 7350 Image Processing System to display model outputs, representing both volume (in two dimensions) and distribution of ejected material, and to superimpose the displays upon satellite images that show 3D oblique views of terrain. This form of presentation, realized for various sets of initial conditions and eruption times, represents a very effective visual tool for volcanic hazard zoning and evacuation planning.

  3. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

    Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n^3) + O(n^2 m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with the huge training sets that are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we treat the Gaussian processes as implicit functions and extract iso-surfaces from the resulting scalar fields, the continuous occupancy maps, using marching cubes. In this way, we are able to build two types of map representation within a single Gaussian process framework. Experimental results with 2-D simulated data show that the accuracy of our approximate method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments. PMID:23893758
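    The local-GP idea, partitioning the data so the cubic cost applies only to small clusters, can be sketched in a few lines. The following is a minimal pure-Python illustration, not the paper's implementation: the RBF kernel, its length scale, the nearest-cluster selection rule, and the regression (rather than classification) formulation are all simplifying assumptions.

```python
import math

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    return math.exp(-(a - b) ** 2 / (2 * ls * ls))

def solve(A, y):
    """Solve A x = y by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xq, noise=1e-3):
    """Exact GP regression mean at query xq: k*^T (K + noise*I)^-1 y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(xq, x) * a for x, a in zip(xs, alpha))

def local_gp_predict(clusters, xq):
    # Pick the nearest cluster (coarse partition), then run an exact GP
    # on that cluster only, so the O(n^3) solve sees a small n.
    xs, ys = min(clusters, key=lambda c: min(abs(x - xq) for x in c[0]))
    return gp_predict(xs, ys, xq)

# Occupancy training data: +1 = occupied, -1 = free, split into two clusters.
clusters = [
    ([0.0, 0.2, 0.4], [-1.0, -1.0, -1.0]),   # free region
    ([2.0, 2.2, 2.4], [1.0, 1.0, 1.0]),      # occupied region
]
print(local_gp_predict(clusters, 0.1))   # near the free cluster, so negative
print(local_gp_predict(clusters, 2.1))   # near the occupied cluster, so positive
```

    With k clusters of n/k points each, the total training cost drops from O(n^3) to roughly k·O((n/k)^3), which is the speedup the abstract reports.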

  4. Operational SAR Data Processing in GIS Environments for Rapid Disaster Mapping

    NASA Astrophysics Data System (ADS)

    Meroni, A.; Bahr, T.

    2013-05-01

    Access to SAR data can be highly important, even critical, for disaster mapping. Updating a GIS with contemporary information from SAR data allows a reliable set of geospatial information to be delivered in support of civilian operations, e.g., search and rescue missions. We therefore present in this paper the operational processing of SAR data within a GIS environment for rapid disaster mapping, exemplified by the November 2010 flash flood in the Veneto region, Italy. A series of COSMO-SkyMed acquisitions was processed in ArcGIS® using a single-sensor, multi-mode, multi-temporal approach. The relevant processing steps were combined using the ArcGIS ModelBuilder to create a new model for rapid disaster mapping in ArcGIS, which can be accessed both via a desktop and a server environment.
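    One common way to derive a flood map from SAR backscatter, which this abstract does not spell out, is simple thresholding: smooth open water reflects the radar pulse away from the sensor and appears dark. Below is a hedged sketch of that idea with an invented scene and an assumed, calibration-dependent threshold; the actual COSMO-SkyMed workflow in the paper is more involved.

```python
# Toy SAR backscatter grid in dB; low values (smooth open water) read dark.
scene = [
    [-6.2, -5.8, -14.9, -15.3],
    [-7.1, -6.5, -15.8, -16.0],
    [-6.8, -16.2, -15.1, -6.9],
]

THRESHOLD_DB = -12.0  # assumed cutoff; depends on sensor calibration and scene

def flood_mask(grid, threshold=THRESHOLD_DB):
    """Flag cells whose backscatter falls below the water threshold."""
    return [[1 if v < threshold else 0 for v in row] for row in grid]

mask = flood_mask(scene)
flooded = sum(map(sum, mask))
print(f"{flooded} of {len(scene) * len(scene[0])} cells flagged as water")
```

    In a GIS model such as the ModelBuilder chain described above, this per-pixel step would be preceded by speckle filtering and followed by vectorizing the mask for overlay with infrastructure layers.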

  5. ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors.

    PubMed

    Lai, Vicky Tzuyin; Curran, Tim

    2013-12-01

    Cognitive linguists suggest that understanding metaphors requires activation of conceptual mappings between the involved concepts. We tested whether mappings are indeed in use during metaphor comprehension, and what mapping means as a cognitive process with Event-Related Potentials. Participants read literal, conventional metaphorical, novel metaphorical, and anomalous target sentences preceded by primes with related or unrelated mappings. Experiment 1 used sentence-primes to activate related mappings, and Experiment 2 used simile-primes to induce comparison thinking. In the unprimed conditions of both experiments, metaphors elicited N400s more negative than the literals. In Experiment 1, related sentence-primes reduced the metaphor-literal N400 difference in conventional, but not in novel metaphors. In Experiment 2, related simile-primes reduced the metaphor-literal N400 difference in novel, but not clearly in conventional metaphors. We suggest that mapping as a process occurs in metaphors, and the ways in which it can be facilitated by comparison differ between conventional and novel metaphors. PMID:24182839

  6. Variability of Mass Dependence of Auroral Acceleration Processes with Solar Activity

    NASA Technical Reports Server (NTRS)

    Ghielmetti, Arthur G.

    1997-01-01

    The objectives of this investigation are to improve understanding of the mass-dependent variability of the auroral acceleration processes and so to clarify apparent discrepancies regarding the altitude and local time variations with solar cycle by investigating: (1) the global morphological relationships between auroral electric field structures and the related particle signatures under varying conditions of solar activity, and (2) the relationships between the electric field structures and particle signatures in selected events that are representative of the different conditions occurring during a solar cycle. The investigation is based in part on the Lockheed UFI data base of UpFlowing Ion (UFI) events in the 500 eV to 16 keV energy range and associated electrons in the energy range 70 eV to 24 keV. This data base was constructed from data acquired by the ion mass spectrometer on the S3-3 satellite in the altitude range of 1 to 1.3 Re. The launch of the POLAR spacecraft in early 1996 and the successful operation of its TIMAS ion mass spectrometer have provided us with data from within the auroral acceleration regions during the current solar minimum. The perigee of POLAR is at about 1 Re, comparable to that of S3-3. The higher sensitivity and time resolution of TIMAS compared to the ion mass spectrometer on S3-3, together with its wider energy range of 15 eV to 33 keV, facilitate more detailed studies of upflowing ions.

  7. The influence of acceleration forces on nucleation, solidification, and deformation processes in tin single crystals

    NASA Technical Reports Server (NTRS)

    Johnston, M. H.; Baldwin, D. H.

    1974-01-01

    An apparatus was designed and assembled to directionally solidify single crystals under the influence of acceleration forces of various magnitudes. The investigation conducted showed that acceleration gradients produce a preferred growth orientation effect not previously observed for tin. Convection currents at approximately 5-g encourage multiple nucleation and subsequent random orientation of growth direction. Deformation effects such as recrystallization and twinning are observed at acceleration levels greater than 2-g.

  8. Retrospective analysis of linear accelerator output constancy checks using process control techniques.

    PubMed

    Sanghangthum, Taweap; Suriyapee, Sivalee; Srisatit, Somyot; Pawlicki, Todd

    2013-01-01

    Shewhart control charts have previously been suggested as a process control tool for use in routine linear accelerator (linac) output verifications. However, a comprehensive approach to process control has not been investigated for linac output verifications. The purpose of this work is to investigate a comprehensive process control approach to linac output constancy quality assurance (QA). The RBA-3 dose constancy check was used to verify outputs of photon beams and electron beams delivered by a Varian Clinac 21EX linac. The data were collected during 2009 to 2010. Shewhart-type control charts, exponentially weighted moving average (EWMA) charts, and capability indices were applied to these processes. The Shewhart-type individuals chart (X-chart) was used, and the number of data points used to calculate the control limits was varied. The parameters tested for the EWMA charts (the smoothing parameter λ and the control limit width L) were λ = 0.05, L = 2.492; λ = 0.10, L = 2.703; and λ = 0.20, L = 2.860, as well as the number of points used to estimate the initial process mean and variation. Lastly, the number of in-control data points used to determine process capability (Cp) and acceptability (Cpk) was investigated, comparing the first in-control run to the longest in-control run of the process data. Cp and Cpk values greater than 1.0 were considered acceptable. The 95% confidence intervals were reported. The X-charts detected systematic errors (e.g., device setup errors). In-control run lengths on the X-charts varied from 5 to 30 output measurements (about one to seven months). EWMA charts showed in-control runs ranging from 9 to 33 output measurements (about two to eight months). The Cp and Cpk ratios are higher than 1.0 for all energies, except 12 and 20 MeV. However, 10 MV and 6, 9, and 16 MeV were in question when considering the 95% confidence limits. The X-chart should be calculated using 8-12 data points. For the EWMA chart, using 4 data points
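    The EWMA statistic and its time-varying control limits used above can be sketched directly. A minimal Python illustration, taking λ = 0.10 and L = 2.703 from the parameter sets quoted in the abstract; the simulated output readings and the in-control mean and sigma are invented for the demonstration:

```python
import math

# EWMA statistic: z_i = lam*x_i + (1-lam)*z_{i-1}
# Control limits: mean +/- L*sigma*sqrt(lam/(2-lam) * (1-(1-lam)^(2i)))
def ewma_chart(data, mean, sigma, lam=0.10, L=2.703):
    z, results = mean, []
    for i, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        w = sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        results.append((z, mean - L * w, mean + L * w, abs(z - mean) > L * w))
    return results

# Simulated linac output ratios (in-control mean 1.000, sigma 0.005)
# with a slow upward drift in the last few measurements.
readings = [1.001, 0.998, 1.002, 0.999, 1.006, 1.009, 1.012, 1.014]
results = ewma_chart(readings, mean=1.000, sigma=0.005)
for z, lo, hi, alarm in results:
    print(f"z={z:.4f} limits=({lo:.4f}, {hi:.4f}) {'ALARM' if alarm else 'ok'}")
```

    Because the EWMA pools information across measurements, it flags the slow drift at the final reading even though no single reading is far from target, which is why EWMA charts suit slowly developing output changes better than individuals charts.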

  9. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, Williama

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  10. Click chemistry approach to conventional vegetable tanning process: accelerated method with improved organoleptic properties.

    PubMed

    Krishnamoorthy, Ganesan; Ramamurthy, Govindaswamy; Sadulla, Sayeed; Sastry, Thotapalli Parvathaleswara; Mandal, Asit Baran

    2014-09-01

    Click chemistry approaches are tailored to generate molecular building blocks quickly and reliably by joining small units together selectively, covalently, stably, and irreversibly. Vegetable tannins such as hydrolyzable and condensed tannins can produce rather stable radicals or inhibit the progress of radicals; they are prone to oxidation, such as photo- and auto-oxidation, and their anti-oxidant nature is well known. Much remains to be done to understand the extent of the variation in leather stability, color variation (the lightening and darkening reactions of leather), and poor resistance to water uptake over prolonged periods. In the present study, we report click chemistry approaches to accelerated vegetable tanning processes based on periodate-catalyzed formation of oxidized hydrolyzable and condensed tannins for high exhaustion with improved properties. The distribution of oxidized vegetable tannin, the thermal stability in terms of shrinkage temperature (Ts) and denaturation temperature (Td), resistance to collagenolytic activities, and organoleptic properties of the tanned leather, as well as its eco-friendly characteristics, were investigated. Scanning electron microscopic analysis indicates a tight cross section of the leather. Differential scanning calorimetric analysis shows that the Td of the leather is greater than that of vegetable-tanned leather and equal to that of aldehyde-tanned leather. The leathers exhibited fullness, softness, good color, and good general appearance compared to those tanned with non-oxidized vegetable tannin. The developed process benefits from a significant reduction in total solids and better biodegradability in the effluent, compared to non-oxidized vegetable tannins. PMID:24888617

  11. An Accelerated Analytical Process for the Development of STR Profiles for Casework Samples.

    PubMed

    Laurin, Nancy; Frégeau, Chantal J

    2015-07-01

    Significant efforts are being devoted to the development of methods enabling rapid generation of short tandem repeat (STR) profiles in order to reduce turnaround times for the delivery of human identification results from biological evidence. Some of the proposed solutions are still costly and low throughput. This study describes the optimization of an analytical process enabling the generation of complete STR profiles (single-source or mixed profiles) for human identification in approximately 5 h. This accelerated process uses currently available reagents and standard laboratory equipment. It includes a 30-min lysis step, a 27-min DNA extraction using the Promega Maxwell(®) 16 System, DNA quantification in <1 h using the Qiagen Investigator(®) Quantiplex HYres kit, fast amplification (<26 min) of the loci included in AmpFℓSTR(®) Identifiler(®), and analysis of the profiles on the 3500-series Genetic Analyzer. This combination of fast individual steps produces high-quality profiling results and offers a cost-effective alternative approach to rapid DNA analysis. PMID:25782346

  12. Incipient fault detection and identification in process systems using accelerating neural network learning

    SciTech Connect

    Parlos, A.G.; Muthusami, J.; Atiya, A.F. (Dept. of Nuclear Engineering)

    1994-02-01

    The objective of this paper is to present the development and numerical testing of a robust fault detection and identification (FDI) system using artificial neural networks (ANNs), for incipient (slowly developing) faults occurring in process systems. The challenge in using ANNs in FDI systems arises because of one's desire to detect faults of varying severity, faults from noisy sensors, and multiple simultaneous faults. To address these issues, it becomes essential to have a learning algorithm that ensures quick convergence to a high level of accuracy. A recently developed accelerated learning algorithm, namely a form of an adaptive back propagation (ABP) algorithm, is used for this purpose. The ABP algorithm is used for the development of an FDI system for a process composed of a direct current motor, a centrifugal pump, and the associated piping system. Simulation studies indicate that the FDI system has significantly high sensitivity to incipient fault severity, while exhibiting insensitivity to sensor noise. For multiple simultaneous faults, the FDI system detects the fault with the predominant signature. The major limitation of the developed FDI system is encountered when it is subjected to simultaneous faults with similar signatures. During such faults, the inherent limitation of pattern-recognition-based FDI methods becomes apparent. Thus, alternate, more sophisticated FDI methods become necessary to address such problems. Even though the effectiveness of pattern-recognition-based FDI methods using ANNs has been demonstrated, further testing using real-world data is necessary.
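    The paper's adaptive back propagation (ABP) algorithm is not specified in this abstract. As an illustration of the general idea of accelerated learning through an adaptive step size, the sketch below uses the well-known "bold driver" heuristic on a toy one-parameter loss; the heuristic, the toy loss, and all parameter values are assumptions for demonstration, not the authors' ABP.

```python
# "Bold driver" adaptive learning rate: grow eta after a loss decrease,
# cut it sharply after an increase (and reject the offending step).
def train(w0, grad, loss, eta=0.1, grow=1.05, shrink=0.5, steps=50):
    w, prev = w0, loss(w0)
    for _ in range(steps):
        cand = w - eta * grad(w)
        cur = loss(cand)
        if cur <= prev:          # accept the step and speed up
            w, prev, eta = cand, cur, eta * grow
        else:                    # reject the step and slow down
            eta *= shrink
    return w, prev

# Toy quadratic "fault signature" fit: minimize (w - 3)^2.
w, final_loss = train(0.0, grad=lambda w: 2 * (w - 3), loss=lambda w: (w - 3) ** 2)
print(round(w, 3), final_loss)
```

    The appeal for fault detection training, as the abstract argues, is quick convergence: the step size grows while progress is steady and backs off automatically near instability, without hand-tuning a schedule.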

  13. Vibrotactile masking experiments reveal accelerated somatosensory processing in congenitally blind Braille readers

    PubMed Central

    Bhattacharjee, Arindam; Ye, Amanda J.; Lisak, Joy A.; Vargas, Maria G.; Goldreich, Daniel

    2010-01-01

    Braille reading is a demanding task that requires the identification of rapidly varying tactile patterns. During proficient reading, neighboring characters impact the fingertip at about 100-ms intervals, and adjacent raised dots within a character at 50-ms intervals. Because the brain requires time to interpret afferent sensorineural activity, among other reasons, tactile stimuli separated by such short temporal intervals pose a challenge to perception. How, then, do proficient Braille readers successfully interpret inputs arising from their fingertips at such rapid rates? We hypothesized that somatosensory perceptual consolidation occurs more rapidly in proficient Braille readers. If so, Braille readers should outperform sighted participants on masking tasks, which demand rapid perceptual processing, but would not necessarily outperform the sighted on tests of simple vibrotactile sensitivity. To investigate, we conducted two-interval forced-choice vibrotactile detection, amplitude discrimination, and masking tasks on the index fingertips of 89 sighted and 57 profoundly blind humans. Sighted and blind participants had similar unmasked detection (25-ms target tap) and amplitude discrimination (compared to 100-micron reference tap) thresholds, but congenitally blind Braille readers, the fastest readers among the blind participants, exhibited significantly less masking than the sighted (masker: 50-Hz, 50-micron; target-masker delays ±50 and ±100 ms). Indeed, Braille reading speed correlated significantly and specifically with masking task performance, and in particular with the backward masking decay time constant. We conclude that vibrotactile sensitivity is unchanged, but that perceptual processing is accelerated in congenitally blind Braille readers. PMID:20980584

  14. In Vivo Hypobaric Hypoxia Performed During the Remodeling Process Accelerates Bone Healing in Mice

    PubMed Central

    Durand, Marjorie; Collombet, Jean-Marc; Frasca, Sophie; Begot, Laurent; Lataillade, Jean-Jacques; Le Bousse-Kerdilès, Marie-Caroline

    2014-01-01

    We investigated the effects of respiratory hypobaric hypoxia on femoral bone-defect repair in mice because hypoxia is believed to influence both mesenchymal stromal cell (MSC) and hematopoietic stem cell mobilization, a process involved in the bone-healing mechanism. To mimic conditions of non-weight-bearing limb immobilization in patients suffering from bone trauma, our hypoxic mouse model was further subjected to hind-limb unloading. A hole was drilled in the right femur of adult male C57/BL6J mice. Four days after surgery, mice were subjected to hind-limb unloading for 1 week. Seven days after surgery, mice were either housed for 4 days in a hypobaric room (FiO2 at 10%) or kept under normoxic conditions. Unsuspended control mice were housed in either hypobaric or normoxic conditions. Animals were sacrificed on postsurgery day 11 to allow for collection of both contralateral and lesioned femurs, blood, and spleen. As assessed by microtomography, delayed hypoxia enhanced bone-healing efficiency by increasing the closing of the cortical defect and the newly synthesized bone volume in the cavity by +55% and +35%, respectively. Proteome analysis and histomorphometric data suggested that bone-repair improvement likely results from the acceleration of the natural bone-healing process rather than from extended mobilization of MSC-derived osteoprogenitors. Hind-limb unloading had hardly any effect beyond delayed hypoxia-enhanced bone-healing efficiency. PMID:24944208

  15. Project Zero Delay: a process for accelerating the activation of cancer clinical trials.

    PubMed

    Kurzrock, Razelle; Pilat, Susan; Bartolazzi, Marcel; Sanders, Dwana; Van Wart Hood, Jill; Tucker, Stanley D; Webster, Kevin; Mallamaci, Michael A; Strand, Steven; Babcock, Eileen; Bast, Robert C

    2009-09-10

    Drug development in cancer research is lengthy and expensive. One of the rate-limiting steps is the initiation of first-in-human (phase I) trials. Three to 6 months can elapse between investigational new drug (IND) approval by the US Food and Drug Administration and the entry of a first patient. Issues related to patient participation have been well analyzed, but the administrative processes relevant to implementing clinical trials have received less attention. While industry and academia often partner for the performance of phase I studies, their administrative processes are generally performed independently, and their timelines driven by different priorities: safety reviews, clinical operations, regulatory submissions, and contracting of clinical delivery vendors for industry; contracts, budgets, and institutional review board approval for academia. Both processes converge on US Food and Drug Administration approval of an IND. In the context of a strategic alliance between M. D. Anderson Cancer Center and AstraZeneca Pharmaceuticals LP, a concerted effort has been made to eliminate delays in implementing clinical trials. These efforts focused on close communications, identifying and matching key timelines, alignment of priorities, and tackling administrative processes in parallel, rather than sequentially. In a recent, first-in-human trial, the study was activated and the first patient identified in 46 days from completion of the final study protocol and about 48 hours after final US Food and Drug Administration IND approval, reducing the overall timeline by about 3 months, while meeting all clinical good practice guidelines. Eliminating administrative delays can accelerate the evaluation of new drugs without compromising patient safety or the quality of clinical research. PMID:19652061

  16. Using concept maps to explore preservice teachers' perceptions of science content knowledge, teaching practices, and reflective processes

    NASA Astrophysics Data System (ADS)

    Somers, Judy L.

    This qualitative study examined seven preservice teachers' perceptions of their science content knowledge, teaching practices, and reflective processes through the use of the metacognitive strategy of concept maps. Included in the paper is a review of literature in the areas of preservice teachers' perceptions of teaching, concept development, concept mapping, science content understanding, and reflective process as a part of metacognition. The key questions addressed include the use of concept maps to indicate organization and understanding of science content, mapping strategies to indicate perceptions of teaching practice, and the influence of concept maps on reflective process. There is also a comparison of preservice teachers' perceptions of concept map usage with the purposes and practices of maps as described by experienced teachers. Data were collected primarily through interviews, observations, a pre and post concept mapping activity, and an analysis of those concept maps using a rubric developed for this study. Findings showed that concept map usage clarified students' understanding of the organization and relationships within content area and that the process of creating the concept maps increased participants' understanding of the selected content. The participants felt that the visual element of concept mapping was an important factor in improving content understanding. These participants saw benefit in using concept maps as planning tools and as instructional tools. They did not recognize the use of concept maps as assessment tools. When the participants were able to find personal relevance in and through their concept maps they were better able to be reflective about the process. The experienced teachers discussed student understanding and skill development as the primary purpose of concept map usage, while they were able to use concept maps to accomplish multiple purposes in practice.

  17. Hot Compression Deformation Behavior and Processing Maps of Mg-Gd-Y-Zr Alloy

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Zhou, Wei-Min; Li, Song; Li, Xiao-Ling; Lu, Chen

    2013-09-01

    Hot compression deformation behavior and processing maps of the Mg-Gd-Y-Zr alloy were investigated in this paper. Compression tests were conducted over the temperature range from 300 to 450 °C and the strain rate range from 0.001 to 1.0 s^-1. It is found that the flow stress behavior is described by the hyperbolic sine constitutive equation, from which an average activation energy of 251.96 kJ/mol is calculated. From the flow stress behavior, the processing maps are calculated and analyzed according to the dynamic materials model. In the processing maps, the variation of the efficiency of power dissipation is plotted as a function of temperature and strain rate, and the domains of flow instability are identified. The maps exhibit a domain of dynamic recrystallization at temperatures of 375-450 °C and strain rates of 0.001-0.03 s^-1, which are the optimum parameters for hot working of the alloy.
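    In the dynamic materials model referenced above, the efficiency of power dissipation plotted on a processing map is η = 2m/(m+1), where m = ∂(ln σ)/∂(ln ε̇) is the strain-rate sensitivity estimated from the flow stress data. A minimal sketch with invented flow-stress values (the paper's measured data are not reproduced here):

```python
import math

# Strain-rate sensitivity m = d(ln sigma)/d(ln strain_rate),
# estimated from two flow-stress readings at the same temperature.
def strain_rate_sensitivity(stress1, rate1, stress2, rate2):
    return (math.log(stress2) - math.log(stress1)) / (math.log(rate2) - math.log(rate1))

def power_dissipation_efficiency(m):
    # Dynamic materials model: eta = 2m / (m + 1)
    return 2 * m / (m + 1)

# Illustrative flow stresses (MPa) at one temperature for two strain rates (s^-1).
m = strain_rate_sensitivity(60.0, 0.001, 95.0, 0.03)
eta = power_dissipation_efficiency(m)
print(f"m = {m:.3f}, efficiency = {eta:.1%}")
```

    Repeating this over a grid of temperatures and strain rates, then contouring η, yields the processing map; high-η domains (often dynamic recrystallization) mark favorable hot-working windows.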

  18. Learning from Nature - Mapping of Complex Hydrological and Geomorphological Process Systems for More Realistic Modelling of Hazard-related Maps

    NASA Astrophysics Data System (ADS)

    Chifflard, Peter; Tilch, Nils

    2010-05-01

    Introduction Hydrological and geomorphological processes in nature are often very diverse and complex. This is partly due to regional characteristics which vary over time and space, as well as to changeable process-initiating and -controlling factors. Despite awareness of this complexity, such aspects are usually neglected in the modelling of hazard-related maps for several reasons. But particularly when it comes to creating more realistic maps, this would be an essential component to consider. The first important step towards solving this problem is to collect data on regional conditions which vary over time and geographical location, along with indicators of complex processes. Data should be acquired promptly during and after events, and subsequently digitally combined and analysed. Study area In June 2009, considerable damage occurred in the residential area of Klingfurth (Lower Austria) as a result of high pre-event wetness and repeated heavy rainfall, leading to flooding, debris flow deposits and gravitational mass movements. One of the causes is the fact that the meso-scale watershed (16 km²) of the Klingfurth stream is characterised by adverse geological and hydrological conditions. Additionally, the river network, with its discharge concentration within the residential zone, contributes considerably to flooding, particularly during excessive rainfall across the entire region, as the flood peaks from different parts of the catchment area are superposed. First results of mapping Hydro(geo)logical surveys across the entire catchment area have shown that over 600 gravitational mass movements of various types and stages have occurred. 516 of those have acted as a bed load source, while 325 mass movements had not yet reached their final stage and could thus supply bed load in the future. It should be noted that large mass movements in the initial or intermediate stage were predominantly found in clayey-silty areas and weathered material

  19. A method to evaluate dose errors introduced by dose mapping processes for mass conserving deformations

    PubMed Central

    Yan, C.; Hugo, G.; Salguero, F. J.; Saleh-Sayah, N.; Weiss, E.; Sleeman, W. C.; Siebers, J. V.

    2012-01-01

    Purpose: To present a method to evaluate the dose mapping error introduced by the dose mapping process, and to apply the method to evaluate the error introduced by the 4D dose calculation process implemented in a research version of a commercial treatment planning system for a patient case. Methods: The average dose accumulated in a finite volume should be unchanged when the dose delivered to one anatomic instance of that volume is mapped to a different anatomic instance, provided that the tissue deformation between the anatomic instances is mass conserving. The average dose to a finite volume on image S is defined as d̄_S = e_S/m_S, where e_S is the energy deposited in the mass m_S contained in the volume. Since mass and energy should be conserved when d̄_S is mapped to an image R (d̄_{S→R} = d̄_R), the mean dose mapping error is defined as Δd̄_m = |d̄_R − d̄_S| = |e_R/m_R − e_S/m_S|, where e_R and e_S are integral doses (energy deposited) and m_R and m_S are the masses within the region of interest (ROI) on image R and the corresponding ROI on image S, R and S being two anatomic instances of the same patient. Alternatively, application of simple differential propagation yields the differential dose mapping error, Δd̄_d = |(∂d̄/∂e)Δe + (∂d̄/∂m)Δm| = |(e_S − e_R)/m_R − ((m_S − m_R)/m_R²)·e_R| = α|d̄_R − d̄_S|, with α = m_S/m_R. A 4D treatment plan on a ten-phase 4D-CT lung patient is used to demonstrate the dose mapping error evaluations for a patient case, in which the accumulated dose D̄_R = Σ_{S=0..9} d̄_{S→R} and the associated error values (ΔD̄_m and ΔD̄_d) are calculated for a uniformly spaced set of ROIs. Results: For the single sample patient dose distribution, the average accumulated differential dose mapping error is 4.3%, the average absolute differential dose mapping error is 10.8%, and the average accumulated mean dose mapping error is 5.0%. Accumulated differential dose mapping errors within the gross tumor volume (GTV) and planning target volume (PTV) are lower, 0
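    A minimal numerical sketch of the two error definitions above, using made-up energies and masses; it also checks the stated identity Δd̄_d = α·Δd̄_m with α = m_S/m_R.

```python
# Mean and differential dose mapping errors for one ROI, following the
# abstract's definitions. Energies e (J) and masses m (kg) are illustrative.
e_S, m_S = 2.00, 0.050   # energy deposited and mass on image S (hypothetical)
e_R, m_R = 1.90, 0.048   # energy and mass of the corresponding ROI on image R

d_S = e_S / m_S                       # mean dose on S
d_R = e_R / m_R                       # mean dose on R after mapping
mean_error = abs(d_R - d_S)           # mean dose mapping error

# Differential propagation of d = e/m:
#   delta_d = |(e_S - e_R)/m_R - (m_S - m_R) * e_R / m_R**2|
diff_error = abs((e_S - e_R) / m_R - (m_S - m_R) * e_R / m_R**2)

# The abstract's identity: diff_error == alpha * mean_error, alpha = m_S/m_R
alpha = m_S / m_R
assert abs(diff_error - alpha * mean_error) < 1e-9
print(mean_error, diff_error)
```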

  20. Accelerated Cardiac T2 Mapping using Breath-hold Multi-Echo Fast Spin-Echo Pulse Sequence with Compressed sensing and Parallel Imaging

    PubMed Central

    Feng, Li; Otazo, Ricardo; Jung, Hong; Jensen, Jens H.; Ye, Jong C.; Sodickson, Daniel K.; Kim, Daniel

    2010-01-01

    Cardiac T2 mapping is a promising method for quantitative assessment of myocardial edema and iron overload. We have developed a new multi-echo fast spin echo (ME-FSE) pulse sequence for breath-hold T2 mapping with acceptable spatial resolution. We propose to further accelerate this new ME-FSE pulse sequence using k-t FOCal Underdetermined System Solver (FOCUSS) adapted with a framework that utilizes both compressed sensing and parallel imaging (e.g., GRAPPA) to achieve higher spatial resolution. We imaged twelve control subjects in mid-ventricular short-axis planes and compared the accuracy of T2 measurements obtained using ME-FSE with GRAPPA and ME-FSE with k-t FOCUSS. For image reconstruction, we used a bootstrapping two-step approach, where in the first step the fast Fourier transform was used as the sparsifying transform and in the final step principal component analysis was used as the sparsifying transform. Compared with T2 measurements obtained using GRAPPA, T2 measurements obtained using k-t FOCUSS were in excellent agreement (mean difference = 0.04 ms; upper/lower 95% limits of agreement were 2.26/−2.19 ms). The proposed accelerated ME-FSE pulse sequence with k-t FOCUSS is a promising investigational method for rapid T2 measurement of the heart with relatively high spatial resolution (1.7 mm × 1.7 mm). PMID:21360737

  1. The Maneuver Planning Process for the Microwave Anisotropy Probe (MAP) Mission

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Andrews, Stephen; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Microwave Anisotropy Probe (MAP) was successfully launched from Kennedy Space Center's Eastern Range on June 30, 2001. MAP will measure the cosmic microwave background as a follow-up to NASA's Cosmic Background Explorer (COBE) mission of the early 1990s. MAP will take advantage of its mission orbit about the Sun-Earth/Moon L2 Lagrangian point to produce results with higher resolution, sensitivity, and accuracy than COBE. A strategy comprising highly eccentric phasing loops with a lunar gravity assist was utilized to provide a zero-cost insertion into a lissajous orbit about L2. Maneuvers were executed at the phasing loop perigees to correct for launch vehicle errors and to target the lunar gravity assist so that a suitable orbit about L2 was achieved. This paper discusses the maneuver planning process for designing, verifying, and executing MAP's maneuvers, including the tools used and how they interacted. The maneuver planning process was iterative and crossed several disciplines, including trajectory design, attitude control, propulsion, power, thermal, communications, and ground planning. Several commercial, off-the-shelf (COTS) packages were used to design the maneuvers. STK/Astrogator was used as the trajectory design tool; all maneuvers were designed in Astrogator to ensure that the Moon was met at the correct time and orientation to provide the energy needed to achieve an orbit about L2. The Mathworks Matlab product was used to develop a tool for generating command quaternions. The command quaternion table (CQT) was used to drive the attitude during the perigee maneuvers. The MatrixX toolset, originally written by Integrated Systems, Inc. and now distributed by Mathworks, was used to create HiFi, a high-fidelity simulator of the MAP attitude control system. HiFi was used to test the CQT and to make sure that all attitude requirements were met during the maneuver. In addition, all ACS data plotting and output were generated in

  2. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation technology cannot meet the massive processing and storage requirements. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that uses parallel processing to implement the mean shift algorithm for remote sensing image segmentation based on the MapReduce model. The approach not only ensures the quality of the remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm thus shows practical significance and value.
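    A minimal single-process sketch of the map/reduce split described above, applied to 1-D gray values; the real system runs the mean shift mapper on a Hadoop-style cluster over image tiles, and all names and parameters here are illustrative.

```python
# Emulated MapReduce mean shift: the "map" step shifts each pixel value
# toward the mean of its neighbours within a bandwidth, and the "reduce"
# step merges converged values into cluster modes (segments).

def mean_shift_map(point, points, bandwidth=2.0):
    """One mean-shift iteration for a single (gray-value) pixel."""
    neighbours = [p for p in points if abs(p - point) <= bandwidth]
    return sum(neighbours) / len(neighbours)

def mean_shift_reduce(shifted, tol=0.5):
    """Merge shifted values that landed within tol into one cluster mode."""
    modes = []
    for p in sorted(shifted):
        if not modes or p - modes[-1][-1] > tol:
            modes.append([p])
        else:
            modes[-1].append(p)
    return [sum(m) / len(m) for m in modes]

pixels = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]                 # two intensity clusters
shifted = [mean_shift_map(p, pixels) for p in pixels]   # "map" phase
for _ in range(5):                                      # iterate to converge
    shifted = [mean_shift_map(p, shifted) for p in shifted]
print(mean_shift_reduce(shifted))                       # two modes, ~1.03 and ~8.07
```

    In a MapReduce deployment, each mapper would process one image tile and the reducer would merge modes across tile boundaries; this toy keeps everything in one process to show the data flow only.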

  3. Digital mapping of side-scan sonar data with the Woods Hole Image Processing System software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing, and digitally mosaicking high- and low-resolution sidescan sonar data. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for pre-processing sidescan sonar data. To extend the capabilities of the UNIX-based programs, digital mapping techniques have been developed. This report describes the initial development of an automated digital mapping procedure. Included is a description of the programs and steps required to complete the digital mosaicking on a UNIX-based computer system, and a comparison of techniques that the user may wish to select.

  4. A remediation contractor's view of accelerating the cleanup process

    SciTech Connect

    Librizzi, W.J.; Phelps, G.S.

    1994-12-31

    Superfund, since its passage in December 1980, has been under continual evaluation and change. Progress has been made over the past 13 years. To date, EPA under Superfund has completed 220 long-term cleanups, with 1,100 in various stages of completion. In addition, Superfund has been a catalyst for the development of new innovative cleanup technologies. In this regard, EPA has identified more than 150 innovative technologies now being used to treat contaminated soil, groundwater, sludge, and sediments. Despite these noted accomplishments, continued criticisms of the program focus on Superfund weaknesses. These include: inconsistent cleanups; high transactional costs; perceived unfairness in liability; an overlapping federal/state relationship; inadequate community involvement; and impediments to economic development. Techniques that can accelerate the hazardous waste cleanup process are discussed further in this paper. They include: strengthened interrelationships between the design and remediation contractors; the role of the remediation contractor in the implementation of presumptive remedies; a proactive community relations program; partnering; and early and frequent interface with regulatory agencies.

  5. Graphics processing unit accelerated one-dimensional blood flow computation in the human arterial tree.

    PubMed

    Itu, Lucian; Sharma, Puneet; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2013-12-01

    One-dimensional blood flow models have been used extensively for computing pressure and flow waveforms in the human arterial circulation. We propose an improved numerical implementation based on a graphics processing unit (GPU) to accelerate the execution time of the one-dimensional model. A novel parallel hybrid CPU-GPU algorithm with compact copy operations (PHCGCC) and a parallel GPU-only (PGO) algorithm are developed, which are compared against previously introduced PHCG versions, a single-threaded CPU-only algorithm, and a multi-threaded CPU-only algorithm. Different second-order numerical schemes (Lax-Wendroff and Taylor series) are evaluated for the numerical solution of the one-dimensional model, and the computational setups include physiologically motivated non-periodic (Windkessel) and periodic (structured tree) boundary conditions (BC) and elastic and viscoelastic wall laws. Both the PHCGCC and the PGO implementations improved the execution time significantly. The speed-up values over the single-threaded CPU-only implementation range from 5.26 to 8.10×, whereas the speed-up values over the multi-threaded CPU-only implementation range from 1.84 to 4.02×. The PHCGCC algorithm performs best for an elastic wall law with non-periodic BC and for viscoelastic wall laws, whereas the PGO algorithm performs best for an elastic wall law with periodic BC. PMID:24009129
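    The Lax-Wendroff scheme named above can be sketched for the scalar advection equation; the actual 1-D blood flow system is a nonlinear pair of equations, so this serial Python toy only shows the stencil whose per-node loop the GPU implementations parallelize.

```python
# One Lax-Wendroff step for linear advection u_t + a u_x = 0.
# Each interior node depends only on its two neighbours, which is why the
# loop maps naturally onto one GPU thread per grid node.

def lax_wendroff_step(u, a, dt, dx):
    c = a * dt / dx                     # Courant number (stability: |c| <= 1)
    new = u[:]                          # boundary values kept fixed
    for i in range(1, len(u) - 1):
        new[i] = (u[i]
                  - 0.5 * c * (u[i + 1] - u[i - 1])
                  + 0.5 * c * c * (u[i + 1] - 2 * u[i] + u[i - 1]))
    return new

u = [0.0] * 20
u[5] = 1.0                              # a pulse advecting to the right
for _ in range(4):
    u = lax_wendroff_step(u, a=1.0, dt=0.05, dx=0.1)
peak = max(range(len(u)), key=u.__getitem__)
print(peak)                             # pulse peak has moved to the right
```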

  6. Closing the gap: accelerating the translational process in nanomedicine by proposing standardized characterization techniques

    PubMed Central

    Khorasani, Ali A; Weaver, James L; Salvador-Morales, Carolina

    2014-01-01

    On the cusp of widespread permeation of nanomedicine, academia, industry, and government have invested substantial financial resources in developing new ways to better treat diseases. Materials have unique physical and chemical properties at the nanoscale compared with their bulk or small-molecule analogs. These unique properties have been greatly advantageous in providing innovative solutions for medical treatments at the bench level. However, nanomedicine research has not yet fully permeated the clinical setting because of several limitations. Among these limitations are the lack of universal standards for characterizing nanomaterials and the limited knowledge that we possess regarding the interactions between nanomaterials and biological entities such as proteins. In this review, we report on recent developments in the characterization of nanomaterials as well as the newest information about the interactions between nanomaterials and proteins in the human body. We propose a standard set of techniques for universal characterization of nanomaterials. We also address relevant regulatory issues involved in the translational process for the development of drug molecules and drug delivery systems. Adherence and refinement of a universal standard in nanomaterial characterization as well as the acquisition of a deeper understanding of nanomaterials and proteins will likely accelerate the use of nanomedicine in common practice to a great extent. PMID:25525356

  7. Acceleration of High Angular Momentum Electron Repulsion Integrals and Integral Derivatives on Graphics Processing Units.

    PubMed

    Miao, Yipu; Merz, Kenneth M

    2015-04-14

    We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA) enabled graphical processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivative of the ERIs. However, only s, p, and d ERIs and s and p derivatives could be executed simultaneously on GPUs using the current version of CUDA and the current generation of NVidia GPUs with a previously described algorithm [Miao and Merz J. Chem. Theory Comput. 2013, 9, 965-976.]. Hence, we developed an algorithm to compute f type ERIs and d type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI derivative computation yielded speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications. PMID:26574356

  8. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway.

    PubMed

    Tooker, Brian C; Brindley, Stephen M; Chiarappa-Zucca, Marina L; Turteltaub, Kenneth W; Newman, Lee S

    2015-01-01

    Exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, it was determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ. PMID:24932923

  9. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway

    SciTech Connect

    Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.; Turteltaub, Kenneth W.; Newman, Lee S.

    2014-06-16

    We report that exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, we determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.

  10. Accelerator Mass Spectrometry Detection of Beryllium Ions in the Antigen Processing and Presentation Pathway

    PubMed Central

    Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.; Turteltaub, Kenneth W.; Newman, Lee S.

    2015-01-01

    Exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, it was determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ. PMID:24932923

  11. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway

    DOE PAGES Beta

    Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.; Turteltaub, Kenneth W.; Newman, Lee S.

    2014-06-16

    We report that exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, we determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.

  12. Processing techniques for the production of an experimental computer-generated shaded-relief map

    USGS Publications Warehouse

    Judd, Damon D.

    1986-01-01

    The data consisted of forty-eight 1° by 1° blocks of resampled digital elevation model (DEM) data. These data were digitally mosaicked and assigned colors based on intervals of elevation values. The color-coded data set was then used to create a shaded-relief image that was photographically composited with cartographic line information to produce a shaded-relief map. The majority of the processing was completed at the National Mapping Division EROS Data Center in Sioux Falls, South Dakota.

  13. The Use of Multiple Data Sources in the Process of Topographic Maps Updating

    NASA Astrophysics Data System (ADS)

    Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.

    2016-06-01

    The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats, from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agency portals. Images stored in the archives of satellite missions such as Sentinel and Landsat can be downloaded free of charge. Their main advantages are the large coverage area and a rather good spatial resolution, which enables the use of these images for map updating at an appropriate scale. In our study we focused our research on the 1:50,000 scale map. Globally available DEMs could represent an appropriate input for watershed delineation and stream network generation, which can be used as support for updating the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and Digital Terrain Models are the main products that can be used for feature extraction and update. On the other side, the use of georeferenced analogue basemaps represents a significant addition to the process. Concerning thematic maps, the classic representation of the terrain by contour lines derived from the DTM remains the best method of representing the earth's surface on a map; nevertheless, correlation with other layers such as hydrography is mandatory. In the context of the current national coverage of the Digital Terrain Model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as TOPRO5 and at the

  14. Flow Behavior and Processing Maps of a Low-Carbon Steel During Hot Deformation

    NASA Astrophysics Data System (ADS)

    Yang, Xiawei; Li, Wenya

    2015-12-01

    The hot isothermal compression tests of a low-carbon steel containing 0.20 pct C were performed in the temperature range of 973 K to 1273 K (700 °C to 1000 °C) and at the strain rate range of 0.001 to 1 s-1. The results show that the flow stress is dependent on deformation temperature and strain rate (decreasing with increasing temperature and/or increasing with increasing strain rate). The flow stresses predicted by Arrhenius-type and artificial neural network models were both in good agreement with experimental data, although the prediction accuracy of the latter is better than that of the former. A processing map can be obtained by superimposing an instability map on a power dissipation map. Finally, an FEM model was successfully established to simulate the compression test process of this steel. The processing map combined with the FEM model can be very beneficial in solving the problems of residual stress, distortion, and flow instability of components.
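    A sketch of the dynamic materials model quantities behind such a processing map, assuming the standard definitions (strain-rate sensitivity m and efficiency of power dissipation η = 2m/(m+1)); the stress values below are invented, not data from this steel.

```python
import math

# Dynamic materials model quantities for one temperature slice of a
# processing map. A real map evaluates these over a full (T, strain rate)
# grid and overlays an instability criterion on the efficiency contours.

strain_rates = [0.001, 0.01, 0.1, 1.0]   # 1/s
stresses = [50.0, 62.0, 78.0, 97.0]      # MPa at one temperature (made up)

def sensitivity(rates, sigmas, i):
    """Central-difference slope of log(stress) vs log(strain rate)."""
    return ((math.log(sigmas[i + 1]) - math.log(sigmas[i - 1]))
            / (math.log(rates[i + 1]) - math.log(rates[i - 1])))

m = sensitivity(strain_rates, stresses, 1)   # m at strain rate 0.01 1/s
eta = 2 * m / (m + 1)                        # efficiency of power dissipation
print(round(m, 3), round(eta, 3))
```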

  15. Lightweight Hyperspectral Mapping System and a Novel Photogrammetric Processing Chain for UAV-based Sensing

    NASA Astrophysics Data System (ADS)

    Suomalainen, Juha; Franke, Jappe; Anders, Niels; Iqbal, Shahzad; Wenting, Philip; Becker, Rolf; Kooistra, Lammert

    2014-05-01

    We have developed a lightweight Hyperspectral Mapping System (HYMSY) and a novel processing chain for UAV-based mapping. The HYMSY consists of a custom pushbroom spectrometer (range 450-950 nm, FWHM 9 nm, ~20 lines/s, 328 pixels/line), a consumer camera (collecting a 16 MPix raw image every 2 seconds), a GPS-Inertial Navigation System (GPS-INS), and synchronization and data storage units. The weight of the system at take-off is 2.0 kg, allowing us to mount it on a relatively small octocopter. The novel processing chain exploits photogrammetry in the georectification of the hyperspectral data. In the first stage, the photos are processed in photogrammetric software, producing a high-resolution RGB orthomosaic, a Digital Surface Model (DSM), and the photogrammetric UAV/camera position and attitude at the moment of each photo. These photogrammetric camera positions are then used to enhance the internal accuracy of the GPS-INS data. The enhanced GPS-INS data are then used to project the hyperspectral data over the photogrammetric DSM, producing a georectified end product. The presented photogrammetric processing chain allows fully automated georectification of hyperspectral data using a compact GPS-INS unit, while still producing, in UAV use, higher georeferencing accuracy than would be possible with the traditional processing method. During 2013, we operated HYMSY on 150+ octocopter flights at 60+ sites or days. On a typical flight we have produced, for a 2-10 ha area: an RGB orthoimage mosaic at 1-5 cm resolution, a DSM at 5-10 cm resolution, and a hyperspectral datacube at 10-50 cm resolution. The targets have mostly been vegetated, including potatoes, wheat, sugar beets, onions, tulips, coral reefs, and heathlands. In this poster we present the Hyperspectral Mapping System and the photogrammetric processing chain, with some of our first mapping results.

  16. Mapping and modelling spatial patterns of dominant processes in a karstic catchment

    NASA Astrophysics Data System (ADS)

    Reszler, Christian; Stadler, Hermann; Komma, Jürgen; Blöschl, Günter

    2014-05-01

    This paper presents a framework combining hydrogeological mapping and hydrological modelling for dominant process identification in karstic catchments. The aim is to identify areas with a potential for surface erosion and solute input into a karst system. Hydrogeological mapping is based on a mapping catalogue whose items can be transferred directly to model structure and parameters. The items contain mappable properties such as geological units, overlying loose material/debris, and soils. The synthesis of these properties leads to the identification of dominant hydrological mechanisms, particularly the interplay between direct infiltration and surface runoff. Model structure and a priori model parameters can be set and modified based on hydrogeological expert evaluation. This enhances the calibration and validation procedure and includes the formulation of a conceptual karst drainage module. Besides spring discharge data, water quality data (e.g., SAC 254) are used to obtain a better understanding of the karst system and its drainage characteristics and to estimate particle travel time.

  17. Topological data analysis of contagion maps for examining spreading processes on networks.

    PubMed

    Taylor, Dane; Klimm, Florian; Harrington, Heather A; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A; Mucha, Peter J

    2015-01-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges (for example, due to airline transportation or communication media) allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct 'contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks. PMID:26194875
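The contagion-map construction can be sketched minimally by assuming the simplest deterministic contagion, in which a node's activation time equals its hop distance from the seed; the paper's maps use richer threshold contagions, so this is only an illustrative reduction:

```python
import numpy as np
from collections import deque

def activation_times(adj, seed):
    """BFS activation times for a deterministic contagion started at `seed`.
    `adj` is an adjacency list: adj[u] = iterable of neighbours of u."""
    n = len(adj)
    t = np.full(n, np.inf)
    t[seed] = 0
    q = deque([seed])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if t[v] == np.inf:
                t[v] = t[u] + 1
                q.append(v)
    return t

def contagion_map(adj):
    """Each node becomes a point whose coordinates are its activation
    times over all seed choices, yielding a point cloud for TDA."""
    return np.array([activation_times(adj, s) for s in range(len(adj))]).T
```

The resulting point cloud can then be fed to dimensionality and persistent-homology analyses.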

  18. Topological data analysis of contagion maps for examining spreading processes on networks

    NASA Astrophysics Data System (ADS)

    Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.

    2015-07-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges (for example, due to airline transportation or communication media) allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct `contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.

  19. Process Control and Characterization of NiCr Coatings by HVOF-DJ2700 System: A Process Map Approach

    NASA Astrophysics Data System (ADS)

    Valarezo, Alfredo; Choi, Wanhuk B.; Chi, Weiguang; Gouldstone, Andrew; Sampath, Sanjay

    2010-09-01

    The concept of 'process maps' has been utilized to study the fundamentals of process-structure-property relationships in high velocity oxygen fuel (HVOF) sprayed coatings. Ni-20%Cr was chosen as a representative material for metallic alloys. In this paper, integrated experiments including diagnostic studies, splat collection, coating deposition, and property characterization were carried out in an effort to investigate the effects of fuel gas chemistry (fuel/oxygen ratio), total gas flow, and energy input on particle states: particle temperature (T) and velocity (V), coating formation dynamics, and properties. Coatings were deposited on an in situ curvature sensor to study residual stress evolution. The results were reconciled within the framework of process maps linking torch parameters with particle states (1st order map) and relating particle state with deposit properties (2nd order map). A strong influence of particle velocity on induced compressive stresses through the peening effect is discussed. The complete tracking of the coating buildup history including particle state, residual stress evolution and deposition temperature, in addition to single splat analysis, allows the interpretation of resultant coating microstructures and properties and enables coating design with desired properties.

  20. Mapping knowledge translation and innovation processes in Cancer Drug Development: the case of liposomal doxorubicin.

    PubMed

    Fajardo-Ortiz, David; Duran, Luis; Moreno, Laura; Ochoa, Hector; Castaño, Victor M

    2014-01-01

    We explored how the knowledge translation and innovation processes are structured when they result in innovations, as in the case of liposomal doxorubicin research. In order to map the processes, a literature network analysis was made through Cytoscape, and semantic analysis was performed with GOPubmed, which is based on the controlled vocabularies MeSH (Medical Subject Headings) and GO (Gene Ontology). We found clusters related to different stages of technological development (invention, innovation and imitation) and of the knowledge translation process (preclinical, translational and clinical research), and we were able to map the historic emergence of Doxil as a paradigmatic nanodrug. This research could be a powerful methodological tool for decision-making and innovation management in drug delivery research. PMID:25182125

  1. Insights into siloxane removal from biogas in biotrickling filters via process mapping-based analysis.

    PubMed

    Soreanu, Gabriela

    2016-03-01

    Data process mapping using response surface methodology (RSM)-based computational techniques is performed in this study for the diagnosis of a laboratory-scale biotrickling filter applied for siloxane (i.e. octamethylcyclotetrasiloxane (D4) and decamethylcyclopentasiloxane (D5)) removal from biogas. A mathematical model describing the process performance (i.e. Si removal efficiency, %) was obtained as a function of key operating parameters (e.g. biogas flowrate, D4 and D5 concentrations). The contour plots and the response surfaces generated for the obtained objective function indicate a minimization trend in siloxane removal performance; however, a maximum performance of approximately 60% Si removal efficiency was recorded. Analysis of the process mapping results provides indicators for improving biological system performance. PMID:26745382
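The RSM step can be illustrated by fitting a second-order response surface to observed removal efficiencies over two factors; this is a generic sketch, not the study's actual model or data:

```python
import numpy as np

def fit_quadratic_response(X, y):
    """Fit a full second-order response surface for two factors:
    y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    """Evaluate the fitted surface at new factor settings."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return A @ coef
```

Contour plots of `predict` over a factor grid then play the role of the response surfaces discussed in the abstract.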

  2. A working environment for digital planetary data processing and mapping using ISIS and GRASS GIS

    USGS Publications Warehouse

    Frigeri, A.; Hare, T.; Neteler, M.; Coradini, A.; Federico, C.; Orosei, R.

    2011-01-01

    Since the beginning of planetary exploration, mapping has been fundamental to summarize observations returned by scientific missions. Sensor-based mapping has been used to highlight specific features from the planetary surfaces by means of processing. Interpretative mapping makes use of instrumental observations to produce thematic maps that summarize observations of actual data into a specific theme. Geologic maps, for example, are thematic interpretative maps that focus on the representation of materials and processes and their relative timing. The advancements in technology of the last 30 years have allowed us to develop specialized systems where the mapping process can be made entirely in the digital domain. The spread of networked computers on a global scale allowed the rapid propagation of software and digital data such that every researcher can now access digital mapping facilities on their desktop. The efforts to keep planetary mission data accessible to the scientific community have led to the creation of standardized digital archives that facilitate the access to different datasets by software capable of processing these data from the raw level to the map-projected one. Geographic Information Systems (GIS) have been developed to optimize the storage, the analysis, and the retrieval of spatially referenced Earth-based environmental geodata; over the last decade these computer programs have become popular among the planetary science community, and recent mission data have started to be distributed in formats compatible with these systems. Among all the systems developed for the analysis of planetary and spatially referenced data, we have created a working environment combining two software suites that have similar characteristics in their modular design, their development history, their policy of distribution and their support system. The first, the Integrated Software for Imagers and Spectrometers (ISIS) developed by the United States Geological Survey

  3. A working environment for digital planetary data processing and mapping using ISIS and GRASS GIS

    NASA Astrophysics Data System (ADS)

    Frigeri, Alessandro; Hare, Trent; Neteler, Markus; Coradini, Angioletta; Federico, Costanzo; Orosei, Roberto

    2011-09-01

    Since the beginning of planetary exploration, mapping has been fundamental to summarize observations returned by scientific missions. Sensor-based mapping has been used to highlight specific features from the planetary surfaces by means of processing. Interpretative mapping makes use of instrumental observations to produce thematic maps that summarize observations of actual data into a specific theme. Geologic maps, for example, are thematic interpretative maps that focus on the representation of materials and processes and their relative timing. The advancements in technology of the last 30 years have allowed us to develop specialized systems where the mapping process can be made entirely in the digital domain. The spread of networked computers on a global scale allowed the rapid propagation of software and digital data such that every researcher can now access digital mapping facilities on their desktop. The efforts to keep planetary mission data accessible to the scientific community have led to the creation of standardized digital archives that facilitate the access to different datasets by software capable of processing these data from the raw level to the map-projected one. Geographic Information Systems (GIS) have been developed to optimize the storage, the analysis, and the retrieval of spatially referenced Earth-based environmental geodata; over the last decade these computer programs have become popular among the planetary science community, and recent mission data have started to be distributed in formats compatible with these systems. Among all the systems developed for the analysis of planetary and spatially referenced data, we have created a working environment combining two software suites that have similar characteristics in their modular design, their development history, their policy of distribution and their support system. The first, the Integrated Software for Imagers and Spectrometers (ISIS) developed by the United States Geological Survey

  4. Graphic processing unit accelerated real-time partially coherent beam generator

    NASA Astrophysics Data System (ADS)

    Ni, Xiaolong; Liu, Zhi; Chen, Chunyi; Jiang, Huilin; Fang, Hanhan; Song, Lujun; Zhang, Su

    2016-07-01

    A method of using liquid crystals (LCs) to generate a partially coherent beam in real time is described. An expression for generating a partially coherent beam is given and calculated using a graphic processing unit (GPU), i.e., the GeForce GTX 680. A liquid-crystal on silicon (LCOS) device with 256 × 256 pixels is used as the partially coherent beam generator (PCBG). An optimization method with partition convolution is used to improve the generating speed of our LC PCBG. The total time needed to generate a random phase map with a coherence width range from 0.015 mm to 1.5 mm is less than 2.4 ms for calculation and readout with the GPU; adding the time needed for the CPU to read and send to the LCOS and the response time of the LC PCBG, the real-time partially coherent beam (PCB) generation frequency of our LC PCBG is up to 312 Hz. To our knowledge, it is the first real-time partially coherent beam generator. A series of experiments based on double-pinhole interference were performed. The results show that to generate a laser beam with a coherence width of 0.9 mm and 1.5 mm, with a mean error of approximately 1%, the required RMS values were 0.021306 and 0.020883 and the required PV values were 0.073576 and 0.072998, respectively.
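A common surrogate for generating a partially coherent beam is a sequence of correlated random phase screens. The sketch below low-pass filters white noise in the Fourier domain, with the Gaussian filter width standing in for the coherence width; this is an assumed generic model, not the authors' exact expression:

```python
import numpy as np

def random_phase_screen(n, coherence_px, strength=1.0, rng=None):
    """Correlated random phase map: white noise spectrally filtered by a
    Gaussian so that the spatial correlation length is ~coherence_px."""
    rng = np.random.default_rng(rng)
    noise = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx)
    # Gaussian low-pass filter; wider coherence -> narrower passband.
    H = np.exp(-0.5 * (FX**2 + FY**2) * (2 * np.pi * coherence_px) ** 2)
    phase = np.fft.ifft2(np.fft.fft2(noise) * H).real
    phase /= phase.std() + 1e-12          # normalize to unit RMS
    return strength * phase
```

Cycling such screens on the LCOS at the display rate is what makes the generated beam effectively partially coherent on average.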

  5. Hot Deformation Characteristics of GH625 and Development of a Processing Map

    NASA Astrophysics Data System (ADS)

    Zhou, H. T.; Liu, R. R.; Liu, Z. C.; Zhou, X.; Peng, Q. Z.; Zhong, F. H.; Peng, Y.

    2013-09-01

    The hot deformation behavior of GH625 is investigated by compression tests in the temperature range of 950-1150 °C and the strain rate range of 0.001-5 s-1. It is found that the flow stress behavior is described by the hyperbolic sine constitutive equation with an average activation energy of 421 kJ/mol. From the flow stress curves, processing maps are constructed and analyzed according to the dynamic materials model. In the processing map, the variation of the efficiency of power dissipation is plotted as a function of temperature and strain rate, and the maps exhibit a domain of dynamic recrystallization occurring in the temperature range of 950-1150 °C and the strain rate range of 0.005-0.13 s-1, which are the optimum parameters for hot working of the alloy. Meanwhile, the instability zones of flow behavior can also be recognized from the maps.
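In the dynamic materials model underlying such maps, the efficiency of power dissipation is eta = 2m/(m+1), where m is the strain-rate sensitivity d(ln sigma)/d(ln rate) at fixed temperature and strain. A minimal sketch of that calculation along one temperature line:

```python
import numpy as np

def power_dissipation_efficiency(strain_rates, stresses):
    """Dynamic materials model: m = dln(sigma)/dln(rate), eta = 2m/(m+1).
    `stresses` are flow stresses at fixed T and strain over `strain_rates`."""
    ln_rate = np.log(strain_rates)
    ln_sig = np.log(stresses)
    m = np.gradient(ln_sig, ln_rate)      # strain-rate sensitivity
    return 2 * m / (m + 1)
```

Evaluating eta over a grid of temperatures and strain rates, and contouring it, yields the processing map described in the abstract.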

  6. An Approach to Optimize Size Parameters of Forging by Combining Hot-Processing Map and FEM

    NASA Astrophysics Data System (ADS)

    Hu, H. E.; Wang, X. Y.; Deng, L.

    2014-11-01

    The size parameters of a 6061 aluminum alloy rib-web forging were optimized by using a hot-processing map and the finite element method (FEM) based on high-temperature compression data. The results show that the stress level of the alloy can be represented by a Zener-Hollomon parameter in a hyperbolic sine-type equation with a hot deformation activation energy of 343.7 kJ/mol. Dynamic recovery and dynamic recrystallization proceeded concurrently during high-temperature deformation of the alloy. Optimal hot-processing parameters for the alloy, corresponding to the peak efficiency value of 0.42, are 753 K and 0.001 s-1. The instability domain occurs at deformation temperatures lower than 653 K. FEM is a viable method to validate the hot-processing map in actual manufacture by analyzing the effect of corner radius, rib width, and web thickness on the workability of rib-web forging of the alloy. Size parameters of die forgings can be optimized conveniently by combining the hot-processing map and FEM.
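The Zener-Hollomon parameter and the hyperbolic-sine law mentioned above can be sketched as follows; Q is the activation energy reported in the abstract, while A, alpha, and n are hypothetical fitted constants for illustration:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def zener_hollomon(strain_rate, T, Q=343.7e3):
    """Z = rate * exp(Q / RT); Q defaults to the 6061 value above (J/mol)."""
    return strain_rate * np.exp(Q / (R * T))

def flow_stress(Z, A, alpha, n):
    """Invert the hyperbolic-sine law Z = A * sinh(alpha*sigma)^n
    to recover the flow stress sigma for a given Z."""
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha
```

Given compression data, A, alpha, and n would be fitted so that measured stresses collapse onto a single Z-sigma curve.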

  7. Suitability of aero-geophysical methods for generating conceptual soil maps and their use in modelling process-related susceptibility maps

    NASA Astrophysics Data System (ADS)

    Tilch, Nils; Römer, Alexander; Jochum, Birgit; Schattauer, Ingrid

    2014-05-01

    In recent years, large-scale disasters have occurred several times in Austria, characterized not only by flooding but also by numerous shallow landslides and debris flows. Therefore, for the purpose of risk prevention, national and regional authorities require more objective and realistic maps with information about the spatially variable susceptibility of the geosphere to hazard-relevant gravitational mass movements. There are many proven methods and models (e.g. neural networks, logistic regression, heuristic methods) available to create such process-related susceptibility maps (e.g. for shallow gravitational mass movements in soil). But numerous national and international studies show that the suitability of a method depends on the quality of process data and parameter maps (e.g. Tilch & Schwarz 2011, Schwarz & Tilch 2011). It is therefore important that maps with detailed, process-oriented information on the process-relevant geosphere also be considered. One major disadvantage is that area-wide process-relevant information exists only occasionally. Similarly, in Austria soil maps are often available only for treeless areas. However, in almost all previous studies, whatever geological and geotechnical maps happened to exist were used, often specially adapted to the issues and objectives at hand. This is one reason why conceptual soil maps must very often be derived from geological maps with only hard-rock information, which often have rather low quality. Based on these maps, for example, adjacent areas of different geological composition and process-relevant physical properties are delineated with razor-sharp boundaries, which rarely occur in nature. In order to obtain more realistic information about the spatial variability of the process-relevant geosphere (soil cover) and its physical properties, aero-geophysical measurements (electromagnetic, radiometric) carried out by helicopter in different regions of Austria were interpreted

  8. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  9. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  10. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  11. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  12. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  13. Journey to the Edges: Social Structures and Neural Maps of Intergroup Processes

    PubMed Central

    Fiske, Susan T.

    2013-01-01

    This article explores boundaries of the intellectual map of intergroup processes, going to the macro (social structure) boundary and the micro (neural systems) boundary. Both are illustrated with my own and others’ work on social structures and on neural structures related to intergroup processes. Analyzing the impact of social structures on intergroup processes led to insights about distinct forms of sexism and underlies current work on forms of ageism. The stereotype content model also starts with the social structure of intergroup relations (interdependence and status) and predicts images, emotions, and behaviors. Social structure has much to offer the social psychology of intergroup processes. At the other, less explored boundary, social neuroscience addresses the effects of social contexts on neural systems relevant to intergroup processes. Both social structural and neural analyses circle back to traditional social psychology as converging indicators of intergroup processes. PMID:22435843

  14. Development of a new flux map processing code for moveable detector system in PWR

    SciTech Connect

    Li, W.; Lu, H.; Li, J.; Dang, Z.; Zhang, X.

    2013-07-01

    This paper presents an introduction to the development of the flux map processing code MAPLE developed by the China Nuclear Power Technology Research Institute (CNPPJ), China Guangdong Nuclear Power Group (CGN). The method for obtaining the three-dimensional 'measured' power distribution from the measurement signals is also described. Three methods, namely the Weight Coefficient Method (WCM), the Polynomial Expand Method (PEM) and the Thin Plate Spline (TPS) method, have been applied to fit the deviation between measured and predicted results for the two-dimensional radial plane. The measured flux map data of the LINGAO nuclear power plant (NPP) are processed using MAPLE as a test case to compare the effectiveness of the three methods, combined with the 3D neutronics code COCO. Assembly power distribution results show that the MAPLE results are reasonable and satisfactory. More verification and validation of the MAPLE code will be carried out in the future. (authors)
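The Thin Plate Spline option for fitting the radial-plane deviation can be sketched with SciPy's RBF interpolator; this is a generic sketch of the TPS idea, not the MAPLE implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_deviation_tps(xy_measured, deviation, xy_grid):
    """Thin-plate-spline fit of the measured-minus-predicted power
    deviation at instrumented assembly positions (xy_measured, shape
    (m, 2)), evaluated over the full core grid (xy_grid, shape (n, 2))."""
    tps = RBFInterpolator(xy_measured, deviation,
                          kernel='thin_plate_spline')
    return tps(xy_grid)
```

The fitted deviation surface is then added to the predicted radial power map to produce the 'measured' distribution at uninstrumented assemblies.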

  15. Spotlight-Mode Synthetic Aperture Radar Processing for High-Resolution Lunar Mapping

    NASA Technical Reports Server (NTRS)

    Harcke, Leif; Weintraub, Lawrence; Yun, Sang-Ho; Dickinson, Richard; Gurrola, Eric; Hensley, Scott; Marechal, Nicholas

    2010-01-01

    During the 2008-2009 year, the Goldstone Solar System Radar was upgraded to support radar mapping of the lunar poles at 4 m resolution. The finer resolution of the new system and the accompanying migration through resolution cells called for spotlight, rather than delay-Doppler, imaging techniques. A new pre-processing system supports fast-time Doppler removal and motion compensation to a point. Two spotlight imaging techniques, which compensate for phase errors due to (i) out-of-focus-plane motion of the radar and (ii) local topography, have been implemented and tested. One is based on the polar format algorithm followed by a unique autofocus technique; the other is a full bistatic time-domain backprojection technique. The processing system yields imagery of the specified resolution. Products enabled by this new system include topographic mapping through radar interferometry, and change detection techniques (amplitude and coherent change) for geolocation of the NASA LCROSS mission impact site.
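Time-domain backprojection of range-compressed pulses can be sketched as follows; this is a generic monostatic textbook formulation, not the GSSR processing code:

```python
import numpy as np

def backproject(pulses, positions, ranges, grid_x, grid_y, wavelength):
    """For each radar position, sample the range-compressed pulse at each
    pixel's range and remove the two-way propagation phase, then sum
    coherently over the aperture."""
    X, Y = np.meshgrid(grid_x, grid_y)
    image = np.zeros(X.shape, dtype=complex)
    for (px, py), pulse in zip(positions, pulses):
        R = np.hypot(X - px, Y - py)                  # pixel-to-antenna range
        s = (np.interp(R, ranges, pulse.real)
             + 1j * np.interp(R, ranges, pulse.imag))
        image += s * np.exp(4j * np.pi * R / wavelength)
    return image
```

Because each pixel's range and phase are computed exactly, this formulation absorbs arbitrary radar motion and local topography at the cost of O(pixels x pulses) work.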

  16. Foundations of MRI phase imaging and processing for Quantitative Susceptibility Mapping (QSM).

    PubMed

    Schweser, Ferdinand; Deistung, Andreas; Reichenbach, Jürgen R

    2016-03-01

    Quantitative Susceptibility Mapping (QSM) is a novel MRI-based technique that relies on estimates of the magnetic field distribution in the tissue under examination. Several sophisticated data processing steps are required to extract the magnetic field distribution from raw MRI phase measurements. The objective of this review article is to provide a general overview and to discuss several underlying assumptions and limitations of the pre-processing steps that need to be applied to MRI phase data before the final field-to-source inversion can be performed. Beginning with the fundamental relation between the MRI signal and tissue magnetic susceptibility, this review covers the reconstruction of magnetic field maps from multi-channel phase images, background field correction, and provides an overview of state-of-the-art QSM solution strategies. PMID:26702760
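The first pre-processing step, turning multi-echo phase into a field map, can be sketched as a per-voxel linear fit of temporally unwrapped phase against echo time; this is a simplified single-coil sketch, omitting the spatial unwrapping and background-field steps the review covers:

```python
import numpy as np

def field_map(phase, echo_times):
    """Per-voxel linear fit of unwrapped phase vs TE; the slope divided
    by 2*pi is the local field offset in Hz.
    `phase`: array of shape (n_echoes, ...) in radians."""
    te = np.asarray(echo_times)
    ph = np.unwrap(phase, axis=0)              # temporal unwrapping
    te_c = te - te.mean()
    # Closed-form least-squares slope, vectorized over all voxels.
    slope = np.tensordot(te_c, ph - ph.mean(axis=0), axes=(0, 0)) \
        / (te_c ** 2).sum()
    return slope / (2 * np.pi)                 # Hz
```

Temporal unwrapping only works when the inter-echo phase step stays below pi, which constrains the echo spacing in practice.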

  17. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two-dimensional obstacle map space, using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation of the algorithm utilizes the Intel Performance Primitives and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an intelligent robot to "see" for path planning and obstacle avoidance.
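The histogram-reduction idea can be sketched in a few lines: collapse each image column's disparities into bins, and mark bins with enough support as obstacles. This is a simplified sketch of the XH-map transformation, without the floor-calibration step:

```python
import numpy as np

def xh_map(disparity, d_bins, min_count=20):
    """Collapse a disparity image (h, w) into a top-down obstacle grid:
    rows index disparity bins (~inverse depth), columns index image
    columns (~bearing). A bin is marked when enough pixels support it."""
    h, w = disparity.shape
    grid = np.zeros((len(d_bins) - 1, w), dtype=bool)
    for col in range(w):
        counts, _ = np.histogram(disparity[:, col], bins=d_bins)
        grid[:, col] = counts >= min_count
    return grid
```

Pixels flagged as floor or invalid can simply be given a disparity outside the bin range so the histogram ignores them.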

  18. Characterization of Hot Deformation Behavior of Hastelloy C-276 Using Constitutive Equation and Processing Map

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Zhang, Liwen; Shen, Wenfei; Li, Mengfei; Gu, Sendong

    2015-01-01

    In order to clarify the microstructural evolution and workability of Hastelloy C-276 during hot forming to obtain excellent mechanical properties, the hot deformation behavior of this superalloy is characterized. Cylindrical specimens were isothermally compressed in the temperature range of 1000-1200 °C and the strain rate range of 0.001-5 s-1 on a Gleeble 1500 thermal-mechanical simulator. The flow curves and microstructural investigation indicate that dynamic recrystallization is the prime softening mechanism under the evaluated deformation conditions. The constitutive equation was presented as a function of the deformation temperature, strain rate, and strain, and the deformation activation energy was about 450 kJ/mol. Processing maps based on the dynamic materials model at strains of 0.2, 0.4, 0.6, 0.8, and 1.0 were established, and the processing map at a strain of 1.0 shows good correspondence to the microstructural observations. The domains in the processing map in which the efficiency of power dissipation (η) is higher than 0.25 correspond to sufficient dynamic recrystallization and are suggested to be the optimum working areas for Hastelloy C-276.

  19. Characterizing physical habitats in rivers using map-derived drivers of fluvial geomorphic processes

    NASA Astrophysics Data System (ADS)

    Bizzi, Simone; Lerner, David N.

    2012-10-01

    New understanding of fluvial geomorphological processes has successfully informed flood mitigation strategies and rehabilitation schemes in recent years. However, well-established geomorphological assessments are location-specific and demanding in terms of the resources and expertise required, and their routine application for regional or national river characterization, although desirable, is unlikely at present. This paper proposes a framework based on GIS procedures, empirical relationships and the self-organizing map for the analysis and classification of map-derived drivers of fluvial morphological processes. The geomorphic controls analysed are: channel gradient and hydrology, specific stream power, river order and floodplain extent. The case study is a gravel-bed river in England. Using the self-organizing map, we analyse patterns of these controls along the river longitudinal profile and identify clusters of similar configuration. The reciprocal relationships that emerge amongst the geomorphic controls reflect the hierarchical nature of fluvial systems and are consistent with the current theoretical understanding of fluvial processes. Field observations from the River Habitat Survey are used to prove the influence of geomorphic drivers on reach-scale morphological forms. Six clusters are identified which describe six distinctive channel types. These proved to be characterized by distinctive configurations of geomorphic drivers and specific sets of physical habitat features. The method successfully characterizes the notable transitions in channel character along the river course. The framework is suitable for regional or national scale assessments through automatic GIS and statistical procedures with moderate effort.
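The clustering step can be illustrated with a minimal online self-organizing map; this is a generic sketch, not the authors' configuration:

```python
import numpy as np

def train_som(X, grid=(4, 1), iters=2000, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal online self-organizing map. Each map unit has a position
    on the `grid` and a weight vector in data space; the best matching
    unit (BMU) and its neighbours are pulled toward each sample."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    W = rng.standard_normal((gy * gx, X.shape[1]))
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))
        lr = lr0 * np.exp(-t / iters)              # decaying learning rate
        sig = sigma0 * np.exp(-t / iters)          # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sig ** 2))
        W += lr * h[:, None] * (x - W)
    return W

def assign_clusters(X, W):
    """Label each sample with its nearest map unit."""
    return np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
```

In the study's setting, X would hold the standardized geomorphic controls per river reach, and the resulting units would play the role of the six channel-type clusters.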

  20. Interactive remote data processing using Pixelize Wavelet Filtration (PWF-method) and PeriodMap analysis

    NASA Astrophysics Data System (ADS)

    Sych, Robert; Nakariakov, Valery; Anfinogentov, Sergey

    Wavelet analysis is suitable for investigating waves and oscillations in the solar atmosphere, which are limited in both time and frequency. We have developed an algorithm to detect these waves using Pixelize Wavelet Filtration (the PWF method). This method provides information about the presence of propagating and non-propagating waves in the observational data (image cubes) and localizes them precisely in time as well as in space. We tested the algorithm and found that the results of coronal wave detection are consistent with those obtained by visual inspection. For fast exploration of the data cube, we additionally applied the previously developed PeriodMap analysis. This method is based on the Fast Fourier Transform and, at an initial stage, allows a quick search for "hot" regions with peak harmonic oscillations and determines the spatial distribution at the significant harmonics. We propose splitting the detection procedure for coronal waves into two parts: first, the PeriodMap analysis (fast preparation), and second, using the information about the spatial distribution of oscillation sources to apply the PWF method (slow preparation). There are two possible algorithms for working with the data, in automatic and hands-on operation modes. In the first, we use multiple PWF analysis to prepare narrowband maps in frequency subbands spaced by factors of two and/or harmonic PWF analysis for separate harmonics in a spectrum. In the second, we manually select the necessary spectral subband and temporal interval and then construct narrowband maps. For practical implementation of the proposed methods, we have developed a remote data processing system at the Institute of Solar-Terrestrial Physics, Irkutsk. The system is based on the data processing server http://pwf.iszf.irk.ru. The main aim of this resource is the calculation, via remote access through a local and/or global network (Internet), of narrowband maps of wave sources, both in the whole spectral band and at significant harmonics.
In addition
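The PeriodMap-style pre-scan (a per-pixel FFT picking the dominant harmonic) can be sketched as follows; a simplified illustration, not the ISTP server code:

```python
import numpy as np

def period_map(cube, dt=1.0):
    """First-look period map for an (nt, ny, nx) image cube: per-pixel
    power spectrum via the real FFT, returning the dominant oscillation
    period and its spectral power for every pixel."""
    nt = cube.shape[0]
    detrended = cube - cube.mean(axis=0)        # remove the mean level
    spec = np.abs(np.fft.rfft(detrended, axis=0)) ** 2
    freqs = np.fft.rfftfreq(nt, dt)
    idx = spec[1:].argmax(axis=0) + 1           # skip the DC bin
    power = np.take_along_axis(spec, idx[None], axis=0)[0]
    return 1.0 / freqs[idx], power
```

Pixels whose dominant power stands out would then be the "hot" regions passed on to the slower PWF stage.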

  1. PREFACE: 3rd International Workshop on Materials Analysis and Processing in Magnetic Fields (MAP3)

    NASA Astrophysics Data System (ADS)

    Sakka, Yoshio; Hirota, Noriyuki; Horii, Shigeru; Ando, Tsutomu

    2009-07-01

    The 3rd International Workshop on Materials Analysis and Processing in Magnetic Fields (MAP3) was held on 14-16 May 2008 at the University of Tokyo, Japan. The first was held in March 2004 at the National High Magnetic Field Laboratory in Tallahassee, USA. Two years later the second took place in Grenoble, France. MAP3 was held as a University of Tokyo International Symposium, jointly with the MANA Workshop on Materials Processing by External Stimulation and the JSPS CORE Program of Construction of the World Center on Electromagnetic Processing of Materials. At the end of MAP3 it was decided that the next meeting, MAP4, will be held in Atlanta, USA in 2010. Processing in magnetic fields is a rapidly expanding research area with a wide range of promising applications in materials science. MAP3 focused on the magnetic field interactions involved in the study and processing of materials in all disciplines ranging from physics to chemistry and biology: magnetic field effects on chemical, physical, and biological phenomena; magnetic field effects on electrochemical phenomena; magnetic field effects on thermodynamic phenomena; magnetic field effects on hydrodynamic phenomena; magnetic field effects on crystal growth; magnetic processing of materials; diamagnetic levitation; the magneto-Archimedes effect; spin chemistry; application of magnetic fields to analytical chemistry; magnetic orientation; control of structure by magnetic fields; magnetic separation and purification; magnetic field-induced phase transitions; materials properties in high magnetic fields; development of NMR and MRI; medical applications of magnetic fields; novel magnetic phenomena; physical property measurement by magnetic fields; and high magnetic field generation. MAP3 consisted of 84 presentations including 16 invited talks.
This volume of Journal of Physics: Conference Series contains the proceeding of MAP3 with 34 papers that provide a scientific record of the topics covered by the conference with the special topics (13 papers) in

  2. Electron bunching in a Penning trap and accelerating process for CO2 gas mixture active medium

    NASA Astrophysics Data System (ADS)

    Tian, Xiu-Fang; Wu, Cong-Feng; Jia, Qi-Ka

    2015-12-01

    In PASER (particle acceleration by stimulated emission of radiation), in the presence of an active medium incorporated in a Penning trap, moving electrons can become bunched, and once they gain enough energy they escape the trap, forming an optical injector. These bunched electrons can enter the next PASER section, filled with the same active medium, to be accelerated. In this paper, electron dynamics in the presence of a gas-mixture active medium incorporated in a Penning trap is analyzed by developing an idealized 1D model. We evaluate the energy exchange occurring as the train of electrons traverses the next PASER section. The results show that the oscillating electrons can be bunched at the resonant frequency of the active medium. The influences of the trapping time and the population inversion are analyzed, showing that the longer the electrons are trapped, the more energy the accelerated electrons gain from the medium, and that with increasing population inversion the decelerated electrons are virtually unchanged while the accelerated electrons more than double their peak energy values. The simulation results show that the gas active medium needs a lower population inversion to bunch the electrons than a solid active medium, so the experimental conditions can be achieved more easily. Supported by the National Natural Science Foundation of China (10675116) and the Major State Basic Research Development Programme of China (2011CB808301).

  3. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    USGS Publications Warehouse

    Parsons, D.R.; Jackson, P.R.; Czuba, J.A.; Engel, F.L.; Rhoads, B.L.; Oberg, K.A.; Best, J.L.; Mueller, D.S.; Johnson, K.K.; Riley, J.D.

    2013-01-01

    The use of acoustic Doppler current profilers (ADCP) for discharge measurements and three-dimensional flow mapping has increased rapidly in recent years and has been primarily driven by advances in acoustic technology and signal processing. Recent research has developed a variety of methods for processing data obtained from a range of ADCP deployments and this paper builds on this progress by describing new software for processing and visualizing ADCP data collected along transects in rivers or other bodies of water. The new utility, the Velocity Mapping Toolbox (VMT), allows rapid processing (vector rotation, projection, averaging and smoothing), visualization (planform and cross-section vector and contouring), and analysis of a range of ADCP-derived datasets. The paper documents the data processing routines in the toolbox and presents a set of diverse examples that demonstrate its capabilities. The toolbox is applicable to the analysis of ADCP data collected in a wide range of aquatic environments and is made available as open-source code along with this publication.
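
    The core rotation step that VMT applies to moving-vessel ADCP velocities can be sketched as follows; the angle convention and the function name are illustrative assumptions, not VMT's actual API:

```python
import math

def rotate_velocities(u_east, v_north, theta_deg):
    """Rotate east/north velocity pairs into along-section (primary)
    and across-section (secondary) components. theta_deg is the
    section azimuth measured counterclockwise from east (an assumed
    convention)."""
    theta = math.radians(theta_deg)
    rotated = []
    for u, v in zip(u_east, v_north):
        primary = u * math.cos(theta) + v * math.sin(theta)
        secondary = -u * math.sin(theta) + v * math.cos(theta)
        rotated.append((primary, secondary))
    return rotated

# 1 m/s due east, viewed in an east-aligned section, is purely primary flow.
print(rotate_velocities([1.0], [0.0], 0.0))
```

    The projection and averaging steps then operate on these rotated components for each repeat transect.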

  4. Mapping of Inner and Outer Celestial Bodies Using New Global and Local Topographic Data Derived from Photogrammetric Image Processing

    NASA Astrophysics Data System (ADS)

    Karachevtseva, I. P.; Kokhanov, A. A.; Rodionova, J. F.; Zharkova, A. Yu.; Lazareva, M. S.

    2016-06-01

    New estimates of fundamental geodetic parameters and of the global and local topography of planets and satellites provide basic coordinate systems for mapping, as well as opportunities for studying processes on their surfaces. The main targets of our study are Europa, Ganymede, Callisto and Io (satellites of Jupiter), Enceladus (a satellite of Saturn), and terrestrial planetary bodies, including Mercury, the Moon and Phobos, one of the Martian satellites. In particular, based on new global shape models derived from three-dimensional control point networks and the processing of high-resolution stereo images, we have carried out studies of topography and morphology. As a visual representation of the results, various planetary maps of different scales and thematic content were created. For example, for Phobos we have produced a new atlas with 43 maps, as well as various wall maps (differing from the maps in the atlas in format and design): a basemap and topographic and geomorphological maps. In addition, we compiled geomorphological maps of Ganymede at the local level, and a global hypsometric map of Enceladus. Mercury's topography was represented as a hypsometric globe for the first time. Mapping of the Moon was carried out using new super-resolution images (0.5-1 m/pixel) of the activity regions of the first Soviet planetary rovers (Lunokhod-1 and -2). New results of planetary mapping have been demonstrated to the scientific community at planetary map exhibitions (Planetary Maps Exhibitions, 2015), organized by the MExLab team in the frame of the International Map Year, celebrated in 2015-2016. The cartographic products have multipurpose applications: for example, the Mercury globe is popular for teaching and public outreach, and maps like those of the Moon and Phobos provide cartographic support for Solar System exploration.

  5. Processing of airborne lidar bathymetry data for detailed sea floor mapping

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. Michael

    2014-10-01

    Airborne bathymetric lidar has proven to be a valuable sensor for rapid and accurate sounding of shallow water areas. With advanced processing of the lidar data, detailed mapping of the sea floor, with its various objects and vegetation, is possible. This mapping capability has a wide range of applications, including the detection of mine-like objects, the mapping of marine natural resources and fish spawning areas, and support for national and international environmental monitoring directives. Although data sets collected by subsea systems give a high degree of credibility, they benefit from combination with lidar when surveying and monitoring larger areas. With lidar-based sea floor maps containing information on substrate and attached vegetation, field investigations become more efficient: field data collection can be directed to selected areas and even focused on identifying specific targets detected in the lidar map. The purpose of this work is to describe the performance of detection and classification of sea floor objects and vegetation by a lidar seeing through the water column. With both experimental and simulated data we examine the lidar signal characteristics as a function of bottom depth, substrate type, and vegetation. The experimental evaluation is based on lidar data from field-documented sites, where field data were taken from underwater video recordings. To accurately extract the information from the received lidar signal, it is necessary to account for the air-water interface and the water medium. The information content is hidden in the lidar depth data, also referred to as point data, and in the shape of the received lidar waveform. The returned lidar signal is affected by environmental factors such as bottom depth and water turbidity, as well as by lidar system factors such as laser beam footprint size and sounding density.
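
    Accounting for the air-water interface amounts, at its simplest, to bending the beam with Snell's law and using the in-water speed of light. The sketch below is a toy depth estimate under those two corrections, not the processing chain of any particular bathymetric lidar:

```python
import math

N_WATER = 1.33                      # refractive index of sea water (assumed)
C_WATER = 299792458.0 / N_WATER     # in-water speed of light, m/s

def water_depth(travel_time_s, incidence_deg):
    """Toy bottom-depth estimate from the two-way in-water travel time
    of a lidar pulse: refract the beam at the air-water interface with
    Snell's law, then take the vertical component of the slant path."""
    theta_air = math.radians(incidence_deg)
    theta_w = math.asin(math.sin(theta_air) / N_WATER)  # refracted angle
    slant = C_WATER * travel_time_s / 2.0               # one-way path length
    return slant * math.cos(theta_w)

# At nadir the slant path equals the depth; off-nadir the same travel
# time implies a shallower bottom.
t = 2 * 10.0 / C_WATER              # two-way time for a 10 m bottom at nadir
print(round(water_depth(t, 0.0), 3), round(water_depth(t, 20.0), 3))
```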

  6. Topological data analysis of contagion maps for examining spreading processes on networks

    PubMed Central

    Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.

    2015-01-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth’s surface; however, in modern contagions long-range edges—for example, due to airline transportation or communication media—allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct “contagion maps” that use multiple contagions on a network to map the nodes as a point cloud. By analyzing the topology, geometry, and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modeling, forecast, and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks. PMID:26194875
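
    The construction can be sketched as follows: each node is represented by the vector of times at which it is reached by contagions seeded around every other node, giving a point cloud with one point per node. This is a minimal deterministic threshold-model sketch of the idea, not the authors' implementation:

```python
def contagion_map(adj, threshold=0.3, max_steps=50):
    """For each seed node, run a synchronous threshold contagion
    (a node activates when the active fraction of its neighbors
    reaches `threshold`) and record every node's activation time.
    Node v is then the vector of its activation times across all
    contagions: a point in R^n."""
    nodes = sorted(adj)
    cloud = {v: [] for v in nodes}
    for seed in nodes:
        active = {seed} | set(adj[seed])   # seed a node and its neighborhood
        times = {v: (0 if v in active else None) for v in nodes}
        for t in range(1, max_steps + 1):
            newly = set()
            for v in nodes:
                if times[v] is None:
                    frac = sum(u in active for u in adj[v]) / len(adj[v])
                    if frac >= threshold:
                        newly.add(v)
                        times[v] = t
            if not newly:
                break
            active |= newly
        for v in nodes:
            cloud[v].append(times[v] if times[v] is not None else max_steps + 1)
    return cloud

# On a 6-node ring the contagion spreads as a wave, so activation times
# vary smoothly with position, and the point cloud inherits the ring's
# one-dimensional geometry.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(contagion_map(ring)[0])
```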

  7. The Maneuver Planning Process for the Microwave Anisotropy Probe (MAP) Mission

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Andrews, Stephen F.; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Microwave Anisotropy Probe (MAP) mission utilized a strategy combining highly eccentric phasing loops with a lunar gravity assist to provide a zero-cost insertion into a Lissajous orbit about the Sun-Earth/Moon L2 point. Maneuvers were executed at the phasing loop perigees to correct for launch vehicle errors and to target the lunar gravity assist so that a suitable orbit at L2 was achieved. This paper will discuss the maneuver planning process for designing, verifying, and executing MAP's maneuvers. This paper will also describe how commercial off-the-shelf (COTS) tools were used to execute these tasks and produce a command sequence ready for upload to the spacecraft. These COTS tools included Satellite Tool Kit, MATLAB, and Matrix-X.

  8. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95%), and progress is being made towards identifying the mapped spectral classes.
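
    The maximum likelihood step described above assigns each unknown pixel to the training class whose spectral distribution makes it most probable. A minimal sketch, assuming class-conditional Gaussians with diagonal covariance (a simplification of the full method; class names and values are invented):

```python
import math

def train(classes):
    """Per-band mean and variance for each class (diagonal-covariance
    Gaussian assumption)."""
    stats = {}
    for name, samples in classes.items():
        n, bands = len(samples), len(samples[0])
        mean = [sum(s[b] for s in samples) / n for b in range(bands)]
        var = [sum((s[b] - mean[b]) ** 2 for s in samples) / n + 1e-6
               for b in range(bands)]
        stats[name] = (mean, var)
    return stats

def classify(pixel, stats):
    """Assign the pixel to the class with the highest Gaussian
    log-likelihood."""
    def loglik(mean, var):
        return -0.5 * sum(math.log(2 * math.pi * v) + (x - m) ** 2 / v
                          for x, m, v in zip(pixel, mean, var))
    return max(stats, key=lambda c: loglik(*stats[c]))

# Two spectral classes in two bands, then an unknown pixel:
stats = train({"water": [[10, 12], [11, 13], [9, 11]],
               "forest": [[40, 42], [41, 43], [39, 41]]})
print(classify([12, 14], stats))  # prints "water"
```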

  10. Using saliency maps to separate competing processes in infant visual cognition.

    PubMed

    Althaus, Nadja; Mareschal, Denis

    2012-01-01

    This article presents an eye-tracking study using a novel combination of visual saliency maps and "area-of-interest" analyses to explore online feature extraction during category learning in infants. Category learning in 12-month-olds (N = 22) involved a transition from looking at high-saliency image regions to looking at more informative, highly variable object parts. In contrast, 4-month-olds (N = 27) exhibited a different pattern displaying a similar decreasing impact of saliency accompanied by a steady focus on the object's center, indicating that targeted feature extraction during category learning develops across the 1st year of life. These results illustrate how the effects of lower and higher level processes may be disentangled using a combined saliency map and area-of-interest analysis. PMID:22533474
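
    A saliency map of the kind used in such analyses can be approximated, at its crudest, by a center-surround intensity contrast; real models add colour and orientation channels. The sketch below is illustrative only:

```python
import numpy as np

def saliency_map(img, surround=5):
    """Minimal center-surround saliency: a pixel is salient when its
    intensity differs from the mean of its local surround. The result
    is normalised to [0, 1]."""
    pad = surround // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    sal = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            surround_mean = padded[i:i + surround, j:j + surround].mean()
            sal[i, j] = abs(img[i, j] - surround_mean)
    if sal.max() > 0:
        sal /= sal.max()
    return sal

# A single bright pixel on a dark background is maximally salient.
img = np.zeros((9, 9)); img[4, 4] = 1.0
print(saliency_map(img)[4, 4])  # prints 1.0
```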

  11. Mapping the particle acceleration in the cool core of the galaxy cluster RX J1720.1+2638

    SciTech Connect

    Giacintucci, S.; Markevitch, M.; Brunetti, G.; Venturi, T.; ZuHone, J. A.

    2014-11-01

    We present new deep, high-resolution radio images of the diffuse minihalo in the cool core of the galaxy cluster RX J1720.1+2638. The images have been obtained with the Giant Metrewave Radio Telescope at 317, 617, and 1280 MHz and with the Very Large Array at 1.5, 4.9, and 8.4 GHz, with angular resolutions ranging from 1'' to 10''. This represents the best radio spectral and imaging data set for any minihalo. Most of the radio flux of the minihalo arises from a bright central component with a maximum radius of ∼80 kpc. A fainter tail of emission extends out from the central component to form a spiral-shaped structure with a length of ∼230 kpc, seen at frequencies 1.5 GHz and below. We find indication of a possible steepening of the total radio spectrum of the minihalo at high frequencies. Furthermore, a spectral index image shows that the spectrum of the diffuse emission steepens with increasing distance along the tail. A striking spatial correlation is observed between the minihalo emission and two cold fronts visible in the Chandra X-ray image of this cool core. These cold fronts confine the minihalo, as also seen in numerical simulations of minihalo formation by sloshing-induced turbulence. All these observations favor the hypothesis that the radio-emitting electrons in cluster cool cores are produced by turbulent re-acceleration.

  12. Locality-Aware Parallel Process Mapping for Multi-Core HPC Systems

    SciTech Connect

    Hursey, Joshua J; Squyres, Jeffrey M.; Dontje, Terry

    2011-01-01

    High Performance Computing (HPC) systems are composed of servers containing an ever-increasing number of cores. With such high processor core counts, non-uniform memory access (NUMA) architectures are almost universally used to reduce inter-processor and memory communication bottlenecks by distributing processors and memory throughout a server-internal networking topology. Application studies have shown that tuning the placement of processes in a server's NUMA topology to the application can have a dramatic impact on performance. The performance implications are magnified when running a parallel job across multiple server nodes, especially with large-scale HPC applications. This paper presents the Locality-Aware Mapping Algorithm (LAMA) for distributing the individual processes of a parallel application across processing resources in an HPC system, paying particular attention to the internal server NUMA topologies. The algorithm is able to support both homogeneous and heterogeneous hardware systems, and it dynamically adapts to the available hardware and the user-specified process layout at run-time. As implemented in Open MPI, LAMA provides 362,880 mapping permutations and is able to naturally scale out to additional hardware resources as they become available in future architectures.
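
    The figure of 362,880 permutations corresponds to the orderings of nine hardware levels (9! = 362,880); each ordering is one mapping policy, with the first level in the order varying fastest. The sketch below illustrates the idea with invented level names and a toy round-robin layout; it is not Open MPI's implementation:

```python
from math import factorial

# Nine hardware levels: 9! orderings = 362,880 mapping policies.
# Level names here are illustrative assumptions.
LEVELS = ["node", "board", "socket", "numa", "L3", "L2", "L1",
          "core", "hwthread"]
assert factorial(len(LEVELS)) == 362880

def map_processes(n_procs, counts, order):
    """Assign each rank a coordinate in the hardware hierarchy, varying
    the first level in `order` fastest (round-robin at that level).
    `counts` gives the number of elements per level."""
    assignments = []
    for rank in range(n_procs):
        coord, r = {}, rank
        for lvl in order:
            coord[lvl] = r % counts[lvl]
            r //= counts[lvl]
        assignments.append(coord)
    return assignments

# "By core": consecutive ranks land on different cores of a socket first.
layout = map_processes(4, {"core": 2, "socket": 2}, ["core", "socket"])
print([(a["socket"], a["core"]) for a in layout])
```

    Reversing the order to `["socket", "core"]` would instead spread consecutive ranks across sockets first, which is exactly the kind of policy choice the level permutation encodes.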

  13. Automatic Mapping Of Large Signal Processing Systems To A Parallel Machine

    NASA Astrophysics Data System (ADS)

    Printz, Harry; Kung, H. T.; Mummert, Todd; Scherer, Paul M.

    1989-12-01

    Since the spring of 1988, Carnegie Mellon University and the Naval Air Development Center have been working together to implement several large signal processing systems on the Warp parallel computer. In the course of this work, we have developed a prototype of a software tool that can automatically and efficiently map signal processing systems to distributed-memory parallel machines, such as Warp. We have used this tool to produce Warp implementations of small test systems. The automatically generated programs compare favorably with hand-crafted code. We believe this tool will be a significant aid in the creation of high-speed signal processing systems. We assume that signal processing systems have the following characteristics:
    • They can be described by directed graphs of computational tasks; these graphs may contain thousands of task vertices.
    • Some tasks can be parallelized in a systolic or data-partitioned manner, while others cannot be parallelized at all.
    • The side effects of each task, if any, are limited to changes in local variables.
    • Each task has a data-independent execution time bound, which may be expressed as a function of the way it is parallelized and the number of processors it is mapped to.
    In this paper we describe techniques to automatically map such systems to Warp-like parallel machines. We identify and address key issues in gracefully combining different parallel programming styles, in allocating processor, memory, and communication bandwidth, and in generating and scheduling efficient parallel code. When iWarp, the VLSI version of the Warp machine, becomes available in 1990, we will extend this tool to generate efficient code for very large applications, which may require as many as 3000 iWarp processors, with an aggregate peak performance of 60 gigaflops.

  14. IntenCD: an application for CD uniformity mapping of photomask and process control at maskshops

    NASA Astrophysics Data System (ADS)

    Kim, Heebom; Lee, MyoungSoo; Lee, Sukho; Sung, Young-Su; Kim, Byunggook; Woo, Sang-Gyun; Cho, HanKu; Yishai, Michael Ben; Shoval, Lior; Couderc, Christophe

    2008-05-01

    Lithographic process steps used in today's integrated circuit production require tight control of critical dimensions (CD). With new design rules dropping to 32 nm and emerging double patterning processes, parameters that were of secondary importance in previous technology generations have now become determining for the overall CD budget in the wafer fab. One of these key parameters is the intra-field mask CD uniformity (CDU) error, which is considered to consume an increasing portion of the overall CD budget for IC fabrication process. Consequently, it has become necessary to monitor and characterize CDU in both the maskshop and the wafer fab. Here, we describe the introduction of a new application for CDU monitoring into the mask making process at Samsung. The IntenCDTM application, developed by Applied Materials, is implemented on an aerial mask inspection tool. It uses transmission inspection data, which contains information about CD variation over the mask, to create a dense yet accurate CDU map of the whole mask. This CDU map is generated in parallel to the normal defect inspection run, thus adding minimal overhead to the regular inspection time. We present experimental data showing examples of mask induced CD variations from various sources such as geometry, transmission and phase variations. We show how these small variations were captured by IntenCDTM and demonstrate a high level of correlation between CD SEM analysis and IntenCDTM mapping of mask CDU. Finally, we suggest a scheme for integrating the IntenCDTM application as part of mask qualification procedure at maskshops.

  15. Comparison of ArcToolbox and Terrain Tiles processing procedures for inundation mapping in mountainous terrain.

    PubMed

    Darnell, Andrew; Wise, Richard; Quaranta, John

    2013-01-01

    Floodplain management consists of efforts to reduce flood damage to critical infrastructure and to protect the life and health of individuals from flooding. A major component of this effort is the monitoring of flood control structures such as dams because the potential failure of these structures may have catastrophic consequences. To prepare for these threats, engineers use inundation maps that illustrate the flood resulting from high river stages. To create the maps, the structure and river systems are modeled using engineering software programs, and hydrologic events are used to simulate the conditions leading to the failure of the structure. The output data are then exported to other software programs for the creation of inundation maps. Although the computer programs for this process have been established, the processing procedures vary and yield inconsistent results. Thus, these processing methods need to be examined to determine the functionality of each in floodplain management practices. The main goal of this article is to present the development of a more integrated, accurate, and precise graphical interface tool for interpretation by emergency managers and floodplain engineers. To accomplish this purpose, a potential dam failure was simulated and analyzed for a candidate river system using two processing methods: ArcToolbox and Terrain Tiles. The research involved performing a comparison of the outputs, which revealed that both procedures yielded similar inundations for single river reaches. However, the results indicated key differences when examining outputs for large river systems. On the basis of criteria involving the hydrologic accuracy and effects on infrastructure, the Terrain Tiles inundation surpassed the ArcToolbox inundation in terms of following topography and depicting flow rates and flood extents at confluences, bends, and tributary streams. Thus, the Terrain Tiles procedure is a more accurate representation of flood extents for use by

  16. Electromagnetic oil field mapping for improved process monitoring and reservoir characterization: A poster presentation

    SciTech Connect

    Waggoner, J.R.; Mansure, A.J.

    1992-02-01

    This report is a permanent record of a poster paper presented by the authors at the Third International Reservoir Characterization Technical Conference in Tulsa, Oklahoma on November 3-5, 1991. The subject is electromagnetic (EM) techniques that are being developed to monitor oil recovery processes in order to improve overall process performance. The potential impact of EM surveys is very significant, primarily in the areas of locating oil, identifying oil inside and outside the pattern, characterizing flow units, and pseudo-real-time process control to optimize process performance and efficiency. Since a map of resistivity alone has little direct application to these areas, an essential part of the EM technique is understanding the relationship between the process and the formation resistivity at all scales, and integrating this understanding into reservoir characterization and simulation. First is a discussion of completed work on the core-scale petrophysics of resistivity changes in an oil recovery process; a steamflood is used as an example. A system has been developed for coupling the petrophysics of resistivity with reservoir simulation to simulate the formation resistivity structure arising from a recovery process. Preliminary results are given for an investigation into the effect of heterogeneity and anisotropy on the EM technique, as well as the use of the resistivity simulator to interpret EM data in terms of reservoir and process parameters. Examples illustrate the application of the EM technique to improve process monitoring and reservoir characterization.

  17. Optimization of the accelerated curing process of concrete using a fibre Bragg grating-based control system and microwave technology

    NASA Astrophysics Data System (ADS)

    Fabian, Matthias; Jia, Yaodong; Shi, Shi; McCague, Colum; Bai, Yun; Sun, Tong; Grattan, Kenneth T. V.

    2016-05-01

    In this paper, an investigation into the suitability of using fibre Bragg gratings (FBGs) for monitoring the accelerated curing process of concrete in a microwave heating environment is presented. In this approach, the temperature data provided by the FBGs are used to regulate automatically the microwave power so that a pre-defined temperature profile is maintained to optimize the curing process, achieving early strength values comparable to those of conventional heat-curing techniques but with significantly reduced energy consumption. The immunity of the FBGs to interference from the microwave radiation used ensures stable readings in the targeted environment, unlike conventional electronic sensor probes.
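
    The control loop described, an FBG temperature reading regulating microwave power against a pre-defined curing profile, can be caricatured with a proportional controller; every coefficient below is invented for illustration and the thermal model is deliberately crude:

```python
def regulate(profile, ambient=20.0, gain=0.1, heat_coeff=5.0, loss=0.05):
    """Toy closed loop: an FBG-style temperature reading feeds a
    proportional controller that sets the microwave power fraction so
    the curing temperature follows a pre-defined profile. Heating is
    proportional to power, cooling to the excess over ambient."""
    temp, history = ambient, []
    for target in profile:
        error = target - temp
        power = max(0.0, min(1.0, gain * error))        # clamp to 0..100 %
        temp += heat_coeff * power - loss * (temp - ambient)
        history.append(temp)
    return history

# Ramp to 60 degC over 40 steps, then hold: the loop tracks the setpoint
# with the steady-state offset typical of a pure proportional controller.
profile = [20.0 + i for i in range(41)] + [60.0] * 20
temps = regulate(profile)
print(round(temps[-1], 1))
```

    A real controller would add integral action to remove the offset; the point here is only the feedback structure, with the microwave-immune FBG reading closing the loop.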

  18. The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners

    ERIC Educational Resources Information Center

    Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

    2012-01-01

    The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

  19. Observations of energetic water-group ions at Comet Giacobini-Zinner - Implications for ion acceleration processes

    NASA Astrophysics Data System (ADS)

    Richardson, I. G.; Cowley, S. W. H.; Hynds, R. J.; Tranquille, C.; Sanderson, T. R.; Wenzel, K.-P.

    1987-10-01

    Observations of energetic water-group pick-up ions made by the EPAS instrument during the ICE fly-by of Comet P/Giacobini-Zinner are investigated for evidence concerning the processes which accelerate the ions from initial pick-up energies of around 10 keV to energies of a few hundred keV. The form of the ion spectrum in the ion rest frame is investigated and compared with theoretical suggestions that exponential energy distributions might be produced by either first- or second-order Fermi acceleration in the cometary environment. It is shown that such distributions do not fit the data at all well, but rather that the observed distribution functions closely approximate an exponential in ion speed. The variations in ion intensity and spectral hardness which occur during the comet encounter are investigated, and an indication of the degree of isotropy of the ion distribution in the rest frame of the flow is obtained.

  20. What can we learn from inverse methods regarding the processes behind the acceleration and retreat of Helheim glacier (Greenland)?

    NASA Astrophysics Data System (ADS)

    Gagliardini, O.; Gillet-chaulet, F.; Martin, N.; Monnier, J.; Singh, J.

    2011-12-01

    Greenland outlet glaciers control the ice discharge toward the sea and the resulting contribution to sea level rise. The physical processes at the root of the observed acceleration and retreat (a decrease of the back force at the calving terminus, an increase of basal lubrication, and a decrease of the lateral friction) are still not well understood. All three processes certainly play a role, but their relative contributions have not yet been quantified. Helheim glacier, located on the east coast of Greenland, has undergone an enhanced retreat since 2003, and this retreat was concurrent with accelerated ice flow. In this study, a flowline dataset including surface elevation, surface velocity and front position of Helheim from 2001 to 2006 is used to quantify the sensitivity of each of these processes. For this we use the full-Stokes finite element ice flow model DassFlow/Ice, which includes an adjoint code and a full 4D-var data assimilation process in which the control variables are the basal and lateral friction parameters as well as the calving front pressure. For each available date, the sensitivity of each process is first studied and an optimal distribution is then inferred from the surface measurements. Using this optimal distribution of the parameters, a transient simulation is performed over the whole dataset period. The relative contributions of the basal friction, lateral friction and front back force are then discussed in the light of these new results.

  1. Accelerating the commercialization of university technologies for military healthcare applications: the role of the proof of concept process

    NASA Astrophysics Data System (ADS)

    Ochoa, Rosibel; DeLong, Hal; Kenyon, Jessica; Wilson, Eli

    2011-06-01

    The von Liebig Center for Entrepreneurism and Technology Advancement at UC San Diego (vonliebig.ucsd.edu) is focused on accelerating technology transfer and commercialization through programs and education on entrepreneurism. Technology Acceleration Projects (TAPs), which offer pre-venture grants and extensive mentoring on technology commercialization, are a key component of its model, developed over the past ten years with the support of a grant from the von Liebig Foundation. In 2010, the von Liebig Entrepreneurism Center partnered with the U.S. Army Telemedicine and Advanced Technology Research Center (TATRC) to develop a regional model of the Technology Acceleration Program, initially focused on military research, to be deployed across the nation to increase awareness of military medical needs and to accelerate the commercialization of novel technologies to treat patients. Participants in these challenges are multi-disciplinary teams of graduate students and faculty in engineering, medicine and business, representing universities and research institutes in a region and selected via a competitive process, who receive commercialization assistance and funding grants to support the translation of their research discoveries into products or services. To validate this model, a pilot program focused on the commercialization of wireless healthcare technologies, targeting campuses in Southern California, has been conducted with the additional support of Qualcomm, Inc. Three projects representing three different universities in Southern California were selected out of forty-five applications from ten different universities and research institutes. Over the next twelve months, these teams will conduct proof-of-concept studies, technology development and preliminary market research to determine the commercial feasibility of their technologies.
This first regional program will help build the needed tools and processes to adapt and replicate this model across other regions in the

  2. Molecular Mechanisms and Evolutionary Processes Contributing to Accelerated Divergence of Gene Expression on the Drosophila X Chromosome.

    PubMed

    Coolon, Joseph D; Stevenson, Kraig R; McManus, C Joel; Yang, Bing; Graveley, Brenton R; Wittkopp, Patricia J

    2015-10-01

    In species with a heterogametic sex, population genetics theory predicts that DNA sequences on the X chromosome can evolve faster than comparable sequences on autosomes. Both neutral and nonneutral evolutionary processes can generate this pattern. Complex traits like gene expression are not predicted to have accelerated evolution by these theories, yet a "faster-X" pattern of gene expression divergence has recently been reported for both Drosophila and mammals. Here, we test the hypothesis that accelerated adaptive evolution of cis-regulatory sequences on the X chromosome is responsible for this pattern by comparing the relative contributions of cis- and trans-regulatory changes to patterns of faster-X expression divergence observed between strains and species of Drosophila with a range of divergence times. We find support for this hypothesis, especially among male-biased genes, when comparing different species. However, we also find evidence that trans-regulatory differences contribute to a faster-X pattern of expression divergence both within and between species. This contribution is surprising because trans-acting regulators of X-linked genes are generally assumed to be randomly distributed throughout the genome. We found, however, that X-linked transcription factors appear to preferentially regulate expression of X-linked genes, providing a potential mechanistic explanation for this result. The contribution of trans-regulatory variation to faster-X expression divergence was larger within than between species, suggesting that it is more likely to result from neutral processes than positive selection. These data show how accelerated evolution of both coding and noncoding sequences on the X chromosome can lead to accelerated expression divergence on the X chromosome relative to autosomes. PMID:26041937

  3. Operational SAR Data Processing in GIS Environments for Rapid Disaster Mapping

    NASA Astrophysics Data System (ADS)

    Bahr, Thomas

    2014-05-01

    The use of SAR data has become increasingly popular in recent years across a wide array of industries. Access to SAR data can be critical, especially for public safety. Updating a GIS with current information derived from SAR data makes it possible to deliver a reliable set of geospatial information to support civilian operations, e.g. search and rescue missions. SAR imaging offers the great advantage over its optical counterparts of not being affected by darkness, meteorological conditions such as clouds and fog, or the smoke and dust frequently associated with disaster zones. In this paper we present the operational processing of SAR data within a GIS environment for rapid disaster mapping. For this technique we integrated the SARscape modules for ENVI with ArcGIS®, eliminating the need to switch between software packages; the premier algorithms for SAR image analysis can thus be accessed directly from ArcGIS desktop and server environments. They allow processing and analyzing SAR data in near real time and with minimal user interaction. This is exemplified by the November 2010 flash flood in the Veneto region, Italy. The Bacchiglione River burst its banks on Nov. 2nd after two days of heavy rainfall throughout the northern Italian region. The community of Bovolenta, 22 km SSE of Padova, was covered by several meters of water. People were requested to stay in their homes; several roads, highway sections and railroads had to be closed. The extent of this flooding is documented by a series of Cosmo-SkyMed acquisitions with a GSD of 2.5 m (StripMap mode). Cosmo-SkyMed is a constellation of four Earth observation satellites whose very frequent coverage enables monitoring at high temporal resolution. These data are processed in ArcGIS using a single-sensor, multi-mode, multi-temporal approach consisting of 3 steps: (1) The single images are filtered with a Gamma DE-MAP filter. 
(2) The filtered images are geocoded using a reference

  4. Laser Processing on the Surface of Niobium Superconducting Radio-Frequency Accelerator Cavities

    NASA Astrophysics Data System (ADS)

    Singaravelu, Senthilraja; Klopf, Michael; Krafft, Geoffrey; Kelley, Michael

    2011-03-01

    Superconducting radio-frequency (SRF) niobium cavities are at the heart of an increasing number of particle accelerators. Their performance is dominated by a several-nm-thick layer at the interior surface. Maximizing its smoothness is found to be critical, and aggressive chemical treatments are employed to this end. We describe laser-induced surface melting as an alternative "greener" approach. Modeling guided the selection of parameters for irradiation with a Q-switched Nd:YAG laser. The resulting topography was examined by SEM, AFM and stylus profilometry.

  5. Study on Flow Stress Model and Processing Map of Homogenized Mg-Gd-Y-Zn-Zr Alloy During Thermomechanical Processes

    NASA Astrophysics Data System (ADS)

    Xue, Yong; Zhang, Zhimin; Lu, Guang; Xie, Zhiping; Yang, Yongbiao; Cui, Ya

    2015-02-01

    A series of billets was compressed to a 50% height reduction on a hot-process simulator to study the plastic flow behavior of homogenized as-cast Mg-13Gd-4Y-2Zn-0.6Zr alloy. The test alloy was heat treated at 520 °C for 12 h before the thermomechanical experiments. The process temperature ranged from 300 to 480 °C, and the strain rate was varied between 0.001 and 0.5 s-1. A flow stress model was established according to an Arrhenius-type equation; in this model, flow stress is regarded as a function of the peak stress, peak strain, and strain, and a softening factor is used to characterize the dynamic softening that occurred during deformation. Meanwhile, processing maps based on dynamic material modeling were constructed. The optimum temperature and strain rate for hot working of the test alloy were 480 °C and 0.01 s-1, respectively. Furthermore, flow instability occurred in two areas: temperatures from 350 to 480 °C at strain rates of 0.01-0.1 s-1, and temperatures from 450 to 480 °C at a strain rate of 0.1 s-1. According to the determined hot deformation parameters, four components were successfully formed, and the ultimate tensile strength, yield strength, and elongation of the component were 386 MPa, 331 MPa, and 6.3%, respectively.

  6. Processive acceleration of actin barbed-end assembly by N-WASP

    PubMed Central

    Khanduja, Nimisha; Kuhn, Jeffrey R.

    2014-01-01

    Neuronal Wiskott–Aldrich syndrome protein (N-WASP)–activated actin polymerization drives extension of invadopodia and podosomes into the basement layer. In addition to activating Arp2/3, N-WASP binds actin-filament barbed ends, and both N-WASP and barbed ends are tightly clustered in these invasive structures. We use nanofibers coated with N-WASP WWCA domains as model cell surfaces and single-actin-filament imaging to determine how clustered N-WASP affects Arp2/3-independent barbed-end assembly. Individual barbed ends captured by WWCA domains grow at or below their diffusion-limited assembly rate. At high filament densities, however, overlapping filaments form buckles between their nanofiber tethers and myosin attachment points. These buckles grew ∼3.4-fold faster than the diffusion-limited rate of unattached barbed ends. N-WASP constructs with and without the native polyproline (PP) region show similar rate enhancements in the absence of profilin, but profilin slows barbed-end acceleration from constructs containing the PP region. Increasing Mg2+ to enhance filament bundling increases the frequency of filament buckle formation, consistent with a requirement of accelerated assembly on barbed-end bundling. We propose that this novel N-WASP assembly activity provides an Arp2/3-independent force that drives nascent filament bundles into the basement layer during cell invasion. PMID:24227886

  7. Improved laser damage threshold performance of calcium fluoride optical surfaces via Accelerated Neutral Atom Beam (ANAB) processing

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, S.; Walsh, M.; Svrluga, R.; Thomas, M.

    2015-11-01

    Optics are not keeping up with the pace of laser advancements. The laser industry is rapidly increasing its power capabilities and reducing wavelengths, which has exposed optics as a weak link in lifetime failures for these advanced systems. Nanometer-sized surface defects (scratches, pits, bumps and residual particles) on the surface of optics are a significant limiting factor for high-end performance. Angstrom-level smoothing of materials such as calcium fluoride, spinel, magnesium fluoride, zinc sulfide, LBO and others presents a unique challenge for traditional polishing techniques. Exogenesis Corporation, using its new and proprietary Accelerated Neutral Atom Beam (ANAB) technology, is able to remove nano-scale surface damage and particle contamination, leaving many material surfaces with roughness typically around one Angstrom. This surface defect mitigation via ANAB processing can be shown to increase performance properties of high-intensity optical materials. This paper describes the ANAB technology and summarizes smoothing results for calcium fluoride laser windows. It further correlates laser damage threshold improvements with the smoothing produced by ANAB surface treatment. All ANAB processing was performed at Exogenesis Corporation using an nAccel100TM Accelerated Particle Beam processing tool. All surface measurement data for the paper was produced via AFM analysis on a Park Model XE70 AFM, and all laser damage testing was performed at Spica Technologies, Inc. Exogenesis Corporation's ANAB processing technology is a new and unique surface modification technique that has been demonstrated to be highly effective at correcting nano-scale surface defects. ANAB is a non-contact vacuum process consisting of an intense beam of accelerated, electrically neutral gas atoms with average energies of a few tens of electron volts. The ANAB process does not apply the mechanical forces associated with traditional polishing techniques. ANAB efficiently removes surface

  8. Microstructural control in hot working of IN-718 superalloy using processing map

    SciTech Connect

    Srinivasan, N.; Prasad, Y.V.R.K. (Dept. of Metallurgy)

    1994-10-01

    The hot-working characteristics of IN-718 are studied in the temperature range 900 °C to 1200 °C and strain rate range 0.001 to 100 s-1 using hot compression tests. Processing maps for hot working are developed on the basis of the strain-rate sensitivity variations with temperature and strain rate and interpreted using a dynamic materials model. The map exhibits two domains of dynamic recrystallization (DRX): one occurring at 950 °C and 0.001 s-1 with an efficiency of power dissipation of 37 pct and the other at 1200 °C and 0.1 s-1 with an efficiency of 40 pct. Dynamic recrystallization in the former domain is nucleated by the δ(Ni3Nb) precipitates and results in fine-grained microstructure. In the high-temperature DRX domain, carbides dissolve in the matrix and make interstitial carbon atoms available for increasing the rate of dislocation generation for DRX nucleation. It is recommended that IN-718 may be hot-forged initially at 1200 °C and 0.1 s-1 and finish-forged at 950 °C and 0.001 s-1 so that fine-grained structure may be achieved. The available forging practice validates these results from processing maps. At temperatures lower than 1000 °C and strain rates higher than 1 s-1, the material exhibits adiabatic shear bands. Also, at temperatures higher than 1150 °C and strain rates more than 1 s-1, IN-718 exhibits intercrystalline cracking. Both these regimes may be avoided in hot-working IN-718.

  9. Microstructural control in hot working of IN-718 superalloy using processing map

    NASA Astrophysics Data System (ADS)

    Srinivasan, N.; Prasad, Y. V. R. K.

    1994-10-01

    The hot-working characteristics of IN-718 are studied in the temperature range 900 °C to 1200 °C and strain rate range 0.001 to 100 s-1 using hot compression tests. Processing maps for hot working are developed on the basis of the strain-rate sensitivity variations with temperature and strain rate and interpreted using a dynamic materials model. The map exhibits two domains of dynamic recrystallization (DRX): one occurring at 950 °C and 0.001 s-1 with an efficiency of power dissipation of 37 pct and the other at 1200 °C and 0.1 s-1 with an efficiency of 40 pct. Dynamic recrystallization in the former domain is nucleated by the δ(Ni3Nb) precipitates and results in fine-grained microstructure. In the high-temperature DRX domain, carbides dissolve in the matrix and make interstitial carbon atoms available for increasing the rate of dislocation generation for DRX nucleation. It is recommended that IN-718 may be hot-forged initially at 1200 °C and 0.1 s-1 and finish-forged at 950 °C and 0.001 s-1 so that fine-grained structure may be achieved. The available forging practice validates these results from processing maps. At temperatures lower than 1000 °C and strain rates higher than 1 s-1, the material exhibits adiabatic shear bands. Also, at temperatures higher than 1150 °C and strain rates more than 1 s-1, IN-718 exhibits intercrystalline cracking. Both these regimes may be avoided in hot-working IN-718.
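
Both records above derive the map from the strain-rate sensitivity m of the flow stress, with power-dissipation efficiency η = 2m/(m+1) under the dynamic materials model and Prasad's instability criterion ξ < 0. A minimal NumPy sketch of that bookkeeping at a single temperature, using synthetic flow-stress values rather than measured IN-718 data:

```python
import numpy as np

# Illustrative flow-stress values (MPa) over a strain-rate sweep at one
# temperature; synthetic numbers, not measured IN-718 data.
strain_rates = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])  # s-1
stress = np.array([120.0, 160.0, 210.0, 270.0, 340.0, 420.0])  # MPa

# Strain-rate sensitivity m = d(ln sigma)/d(ln strain-rate), estimated
# numerically along the log-log flow curve.
m = np.gradient(np.log(stress), np.log(strain_rates))

# Power-dissipation efficiency of the dynamic materials model.
eta = 2.0 * m / (m + 1.0)

# Prasad instability parameter; xi < 0 flags flow instability.
xi = np.gradient(np.log(m / (m + 1.0)), np.log(strain_rates)) + m

for sr, eff, unstable in zip(strain_rates, eta, xi < 0):
    flag = "  (unstable)" if unstable else ""
    print(f"strain rate {sr:7.3f} s-1: efficiency {100 * eff:5.1f}%{flag}")
```

A full processing map repeats this calculation over a temperature grid and contours η and ξ in the temperature/log-strain-rate plane.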

  10. SBGNViz: A Tool for Visualization and Complexity Management of SBGN Process Description Maps

    PubMed Central

    Dogrusoz, Ugur; Sumer, Selcuk Onur; Aksoy, Bülent Arman; Babur, Özgün; Demir, Emek

    2015-01-01

    Background Information about cellular processes and pathways is becoming increasingly available in detailed, computable standard formats such as BioPAX and SBGN. Effective visualization of this information is a key recurring requirement for biological data analysis, especially for -omic data. Biological data analysis is rapidly migrating to web based platforms; thus there is a substantial need for sophisticated web based pathway viewers that support these platforms and other use cases. Results Towards this goal, we developed a web based viewer named SBGNViz for process description maps in SBGN (SBGN-PD). SBGNViz can visualize both BioPAX and SBGN formats. Unique features of SBGNViz include the ability to nest nodes to arbitrary depths to represent molecular complexes and cellular locations, automatic pathway layout, editing and highlighting facilities to enable focus on sub-maps, and the ability to inspect pathway members for detailed information from EntrezGene. SBGNViz can be used within a web browser without any installation and can be readily embedded into web pages. SBGNViz has two editions built with ActionScript and JavaScript. The JavaScript edition, which also works on touch enabled devices, introduces novel methods for managing and reducing complexity of large SBGN-PD maps for more effective analysis. Conclusion SBGNViz fills an important gap by making the large and fast-growing corpus of rich pathway information accessible to web based platforms. SBGNViz can be used in a variety of contexts and in multiple scenarios ranging from visualization of the results of a single study in a web page to building data analysis platforms. PMID:26030594

  11. An algorithm for automated layout of process description maps drawn in SBGN

    PubMed Central

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Motivation: Evolving technology has increased the focus on genomics. The combination of today’s advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specialize on process description (PD) maps as defined by SBGN. Results: We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes and extensively making use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. Availability and implementation: An implementation of our algorithm in Java is available within ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26363029
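
The CoSE base algorithm mentioned above is a force-directed (spring-embedder) layout: edges act as springs with an ideal length while all node pairs repel. A toy sketch of that force scheme on an arbitrary 5-node graph (not the actual CoSE/ChiLay implementation, and without the SBGN-specific heuristics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes and a few edges (not an SBGN map; just the scheme).
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
pos = rng.standard_normal((n, 2))

k_spring, k_repel, step = 0.1, 0.05, 0.1
for _ in range(300):
    force = np.zeros_like(pos)
    # Attractive spring force along each edge (ideal length 1).
    for i, j in edges:
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d) + 1e-9
        f = k_spring * (dist - 1.0) * d / dist
        force[i] += f
        force[j] -= f
    # Pairwise repulsion keeps non-adjacent nodes apart.
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist2 = d @ d + 1e-9
            f = k_repel * d / dist2
            force[i] -= f
            force[j] += f
    pos += step * force

print(np.round(pos, 2))
```

The SBGN-specific extensions described in the paper add further forces and movement rules on top of this scheme, e.g. placing substrates and products on opposite sides of a process node and tiling complex members compactly.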

  12. Boosting intelligence analysis process and situation awareness using the self-organizing map

    NASA Astrophysics Data System (ADS)

    Kärkkäinen, Anssi P.

    2009-05-01

    Situational awareness is critical on the modern battlefield. A large amount of intelligence information is collected to improve decision-making processes, but in many cases this information load actually slows analysis and decision-making for lack of suitable tools and methods to process it. To improve the decision-making process and situational awareness, much research has been devoted to analyzing and visualizing intelligence data automatically. Different data fusion and mining techniques are applied to produce an understandable situational picture. Automated processes are based on a data model used for information exchange between war operators. The data model requires formal message structures, which simplifies information processing in many cases. In this paper, generated formal intelligence message data are visualized and analyzed using the self-organizing map (SOM). The SOM is a widely used neural network model that has shown its effectiveness in representing multidimensional data in two- or three-dimensional space. The results show that multidimensional intelligence data can be visualized and classified with this technique. The SOM can be used for monitoring intelligence message data (e.g., for error hunting), message classification, and finding correlations. Thus, with the SOM it is possible to speed up the intelligence process and make better and faster decisions.
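
The self-organizing map at the core of the approach above can be stated compactly: each input is assigned to its best-matching weight vector on a 2-D grid, and that unit and its grid neighbors are pulled toward the input. A NumPy sketch trained on synthetic two-cluster data standing in for multidimensional message records (all sizes and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "messages": 200 samples from two 4-D clusters (illustrative
# stand-in for multidimensional intelligence records).
a = rng.normal(0.0, 0.3, size=(100, 4))
b = rng.normal(2.0, 0.3, size=(100, 4))
data = np.vstack([a, b])

# 6x6 map of weight vectors with their grid coordinates.
rows, cols, dim = 6, 6, 4
weights = rng.standard_normal((rows * cols, dim))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

def train(weights, data, epochs=20, lr0=0.5, sigma0=2.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)  # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best match
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)         # grid distance
            h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood
            weights += lr * h[:, None] * (x - weights)
    return weights

weights = train(weights, data)
# The two clusters should map to separate regions of the grid.
bmu_a = np.argmin(((weights - a.mean(0)) ** 2).sum(axis=1))
bmu_b = np.argmin(((weights - b.mean(0)) ** 2).sum(axis=1))
print(bmu_a, bmu_b)
```

Plotting which map unit each message activates gives the 2-D visualization used for classification and correlation hunting in the paper.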

  13. Accelerated Cure Project for Multiple Sclerosis

    MedlinePlus

    The Accelerated Cure Project for MS is a non-profit, 501(c)( ... determines its causes and mechanisms. ...

  14. Drawing the Circle: Collaborative Mind Mapping as a Process for Developing a Constructivist Teacher Preparation Program.

    ERIC Educational Resources Information Center

    Oldfather, Penny; And Others

    1994-01-01

    Describes what research has found about the use of collaborative mind mapping to facilitate the development of constructivist preservice teacher education programs. The paper discusses explicit connections between collaborative mind mapping and constructivism, varieties of mind maps, mind mapping as an invitation to thought, and how to use mind…

  15. Hot Deformation Processing Map and Microstructural Evaluation of the Ni-Based Superalloy IN-738LC

    NASA Astrophysics Data System (ADS)

    Sajjadi, S. A.; Chaichi, A.; Ezatpour, H. R.; Maghsoudlou, A.; Kalaie, M. A.

    2016-04-01

    Hot deformation behavior of the Ni-based superalloy IN-738LC was investigated by means of hot compression tests over the temperature range of 1000-1200 °C and strain rate range of 0.01-1 s-1. The obtained peak flow stresses were related to strain rate and temperature through the hyperbolic sine equation with an activation energy of 950 kJ/mol. The dynamic material model was used to obtain the processing map of IN-738LC. Microstructural analysis was carried out to study the characteristics of each domain represented in the processing map. The results showed that dynamic recrystallization occurs in the temperature range of 1150-1200 °C and strain rate of 0.1 s-1 with a maximum power dissipation efficiency of 35%. The unstable domain appeared in the temperature range of 1000-1200 °C and strain rate of 1 s-1, owing to the occurrence of severe deformation bands and grain boundary cracking.
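
The hyperbolic sine equation referenced above is the Sellars-Tegart relation, strain-rate = A·[sinh(ασ)]^n·exp(−Q/RT), usually handled through the Zener-Hollomon parameter Z = strain-rate·exp(Q/RT). A sketch inverting it for peak flow stress; only Q = 950 kJ/mol comes from the abstract, while A, α and n are illustrative placeholders:

```python
import math

# Sellars-Tegart law: strain_rate = A * sinh(alpha*sigma)^n * exp(-Q/RT).
# Q is taken from the abstract; A, alpha and n are assumed constants.
Q = 950e3          # activation energy, J/mol (from the abstract)
R = 8.314          # gas constant, J/(mol K)
A = 1e33           # pre-exponential, s-1 (illustrative)
alpha = 0.004      # stress multiplier, 1/MPa (illustrative)
n = 4.0            # stress exponent (illustrative)

def peak_stress(strain_rate, T_kelvin):
    """Invert the sinh law for flow stress (MPa) via the Zener-Hollomon Z."""
    Z = strain_rate * math.exp(Q / (R * T_kelvin))
    return math.asinh((Z / A) ** (1.0 / n)) / alpha

# Faster and colder deformation (higher Z) should give a higher flow stress.
print(round(peak_stress(1.0, 1273.15), 1), round(peak_stress(0.01, 1473.15), 1))
```

With these constants the predicted stress rises monotonically with Z (lower temperature or higher strain rate), the trend the abstract's peak-stress fit captures.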

  16. Geological Mapping of Fortuna Tessera (V-2): Venus and Earth's Archean Process Comparisons

    NASA Technical Reports Server (NTRS)

    Head, James W.; Hurwitz, D. M.; Ivanov, M. A.; Basilevsky, A. T.; Kumar, P. Senthil

    2008-01-01

    The geological features, structures, thermal conditions, interpreted processes, and outstanding questions related to both the Earth's Archean and Venus share many similarities and we are using a problem-oriented approach to Venus mapping, guided by insight from the Archean record of the Earth, to gain new insight into the evolution of Venus and Earth's Archean. The Earth's preserved and well-documented Archean record provides important insight into high heat-flux tectonic and magmatic environments and structures and the surface of Venus reveals the current configuration and recent geological record of analogous high-temperature environments unmodified by subsequent several billion years of segmentation and overprinting, as on Earth. Elsewhere we have addressed the nature of the Earth's Archean, the similarities to and differences from Venus, and the specific Venus and Earth-Archean problems on which progress might be made through comparison. Here we present the major goals of the Venus-Archean comparison and show how preliminary mapping of the geology of the V-2 Fortuna Tessera quadrangle is providing insight on these problems. We have identified five key themes and questions common to both the Archean and Venus, the assessment of which could provide important new insights into the history and processes of both planets.

  17. An Indexed, Mapped Mutant Library Enables Reverse Genetics Studies of Biological Processes in Chlamydomonas reinhardtii.

    PubMed

    Li, Xiaobo; Zhang, Ru; Patena, Weronika; Gang, Spencer S; Blum, Sean R; Ivanova, Nina; Yue, Rebecca; Robertson, Jacob M; Lefebvre, Paul A; Fitz-Gibbon, Sorel T; Grossman, Arthur R; Jonikas, Martin C

    2016-02-01

    The green alga Chlamydomonas reinhardtii is a leading unicellular model for dissecting biological processes in photosynthetic eukaryotes. However, its usefulness has been limited by difficulties in obtaining mutants in specific genes of interest. To allow generation of large numbers of mapped mutants, we developed high-throughput methods that (1) enable easy maintenance of tens of thousands of Chlamydomonas strains by propagation on agar media and by cryogenic storage, (2) identify mutagenic insertion sites and physical coordinates in these collections, and (3) validate the insertion sites in pools of mutants by obtaining >500 bp of flanking genomic sequences. We used these approaches to construct a stably maintained library of 1935 mapped mutants, representing disruptions in 1562 genes. We further characterized randomly selected mutants and found that 33 out of 44 insertion sites (75%) could be confirmed by PCR, and 17 out of 23 mutants (74%) contained a single insertion. To demonstrate the power of this library for elucidating biological processes, we analyzed the lipid content of mutants disrupted in genes encoding proteins of the algal lipid droplet proteome. This study revealed a central role of the long-chain acyl-CoA synthetase LCS2 in the production of triacylglycerol from de novo-synthesized fatty acids. PMID:26764374

  18. Mapping Rule-Based And Stochastic Constraints To Connection Architectures: Implication For Hierarchical Image Processing

    NASA Astrophysics Data System (ADS)

    Miller, Michael I.; Roysam, Badrinath; Smith, Kurt R.

    1988-10-01

    Essential to the solution of ill-posed problems in vision and image processing is the need to use object constraints in the reconstruction. While Bayesian methods have shown the greatest promise, a fundamental difficulty has persisted in that many of the available constraints are in the form of deterministic rules rather than probability distributions and are thus not readily incorporated as Bayesian priors. In this paper, we propose a general method for mapping a large class of rule-based constraints to their equivalent stochastic Gibbs' distribution representation. This mapping allows us to solve stochastic estimation problems over rule-generated constraint spaces within a Bayesian framework. As part of this approach we derive a method based on Langevin's stochastic differential equation and a regularization technique based on the classical autologistic transfer function that allows us to update every site simultaneously regardless of the neighbourhood structure. This allows us to implement a completely parallel method for generating the constraint sets corresponding to the regular grammar languages on massively parallel networks. We illustrate these ideas by formulating the image reconstruction problem based on a hierarchy of rule-based and stochastic constraints, and derive a fully parallel estimator structure. We also present results computed on the AMT DAP500 massively parallel digital computer, a mesh-connected 32x32 array of processing elements configured in a Single-Instruction, Multiple-Data stream architecture.
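
The Langevin-equation update mentioned above can be sketched independently of the image model: gradient descent on an energy U plus injected Gaussian noise yields samples from the Gibbs distribution exp(−U), and every site can be updated simultaneously. A minimal illustration with a quadratic U, so the target density is a standard normal (step size and chain count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Quadratic energy U(x) = x^2 / 2, so exp(-U) is a standard normal density.
def grad_U(x):
    return x

eps = 0.01                       # discretization step of the Langevin SDE
x = np.zeros(1000)               # 1000 sites/chains updated in parallel
for _ in range(2000):
    noise = rng.standard_normal(x.shape)
    # Discretized Langevin update: drift down the energy gradient + noise.
    x = x - eps * grad_U(x) + np.sqrt(2 * eps) * noise

# The empirical moments should approximate N(0, 1).
print(round(float(x.mean()), 2), round(float(x.std()), 2))
```

Replacing grad_U with the gradient of a Gibbs energy encoding the rule-based constraints gives the fully parallel site-update scheme the abstract describes.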

  19. Real-time dual-mode standard/complex Fourier-domain OCT system using graphics processing unit accelerated 4D signal processing and visualization

    NASA Astrophysics Data System (ADS)

    Zhang, Kang; Kang, Jin U.

    2011-03-01

    We realized a real-time dual-mode standard/complex Fourier-domain optical coherence tomography (FD-OCT) system using graphics processing unit (GPU) accelerated 4D (3D+time) signal processing and visualization. For both standard and complex FD-OCT modes, the signal processing tasks were implemented on a dual-GPU architecture that included λ-to-k spectral re-sampling, fast Fourier transform (FFT), modified Hilbert transform, logarithmic scaling, and volume rendering. The maximum A-scan processing speeds achieved are >3,000,000 lines/s for the standard 1024-pixel FD-OCT, and >500,000 lines/s for the complex 1024-pixel FD-OCT. Multiple volume renderings of the same 3D data set were performed and displayed with different view angles. The GPU-acceleration technique is highly cost-effective and can be easily integrated into most ultrahigh-speed FD-OCT systems to overcome the 3D data processing and visualization bottlenecks.
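
The standard-mode per-A-scan stages listed above (λ-to-k resampling, FFT, log scaling) can be sketched on the CPU with NumPy; in the paper these run as GPU kernels, and the input here is a simulated single-reflector interferogram rather than acquired data:

```python
import numpy as np

# Simulate one spectral interferogram: a single reflector at depth z0
# produces a cosine fringe across wavenumber k (illustrative signal only).
n_pix = 1024
lam = np.linspace(800e-9, 880e-9, n_pix)        # wavelength axis (m)
k_nonuniform = 2 * np.pi / lam                  # k is nonuniform in lambda
z0 = 0.3e-3                                     # reflector depth (m)
spectrum = 1.0 + 0.5 * np.cos(2 * k_nonuniform * z0)

# Step 1: lambda-to-k resampling onto a uniform wavenumber grid
# (arrays reversed so the interpolation abscissa is increasing).
k_uniform = np.linspace(k_nonuniform.min(), k_nonuniform.max(), n_pix)
resampled = np.interp(k_uniform, k_nonuniform[::-1], spectrum[::-1])

# Step 2: FFT to the depth domain; Step 3: log-scaled magnitude A-scan.
ascan = 20 * np.log10(np.abs(np.fft.fft(resampled - resampled.mean())) + 1e-12)

# The reflector appears as a peak away from DC in the positive-depth half.
peak_bin = int(np.argmax(ascan[1:n_pix // 2]) + 1)
print(peak_bin)
```

The complex-mode Hilbert transform and the volume-rendering stage are omitted; per A-scan the work is dominated by the FFT, which is what batched GPU execution accelerates.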

  20. VLSI architectures for geometrical mapping problems in high-definition image processing

    NASA Astrophysics Data System (ADS)

    Kim, K.; Lee, J.

    This paper explores a VLSI architecture for geometrical mapping address computation. The geometric transformation is discussed in the context of plane projective geometry, which invokes a set of basic transformations to be implemented for general image processing. Homogeneous and 2-dimensional Cartesian coordinates are employed to represent the transformations, each of which is implemented via an augmented CORDIC as a processing element. A specific scheme for a processor, which utilizes full pipelining at the macro level and parallel constant-factor-redundant arithmetic with full pipelining at the micro level, is assessed to produce a single VLSI chip for HDTV applications using state-of-the-art MOS technology.

  1. VLSI architectures for geometrical mapping problems in high-definition image processing

    NASA Technical Reports Server (NTRS)

    Kim, K.; Lee, J.

    1991-01-01

    This paper explores a VLSI architecture for geometrical mapping address computation. The geometric transformation is discussed in the context of plane projective geometry, which invokes a set of basic transformations to be implemented for general image processing. Homogeneous and 2-dimensional Cartesian coordinates are employed to represent the transformations, each of which is implemented via an augmented CORDIC as a processing element. A specific scheme for a processor, which utilizes full pipelining at the macro level and parallel constant-factor-redundant arithmetic with full pipelining at the micro level, is assessed to produce a single VLSI chip for HDTV applications using state-of-the-art MOS technology.
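
The CORDIC processing element referenced in both records above rotates a vector using only shift-and-add iterations against a small arctangent table. A floating-point sketch of the basic rotation mode (hardware versions use fixed-point arithmetic and fold the scale factor into a constant):

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    """Rotate (x, y) by `angle` radians using shift-and-add CORDIC steps."""
    # Precomputed arctan(2^-i) table and the cumulative gain correction.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for a in angles:
        gain *= math.cos(a)
    z = angle  # residual angle still to rotate through
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    # Each micro-rotation scales the vector by 1/cos; compensate once.
    return x * gain, y * gain

# Rotating (1, 0) by 90 degrees should give approximately (0, 1).
print(cordic_rotate(1.0, 0.0, math.pi / 2))
```

In silicon the multiplications by 2^-i are wire shifts, which is why CORDIC suits a fully pipelined address-computation datapath.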

  2. Web mapping system for complex processing and visualization of environmental geospatial datasets

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons, such as the inherent heterogeneity of environmental datasets, large dataset volumes, the complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of unified terminology, the development of environmental geodata access, processing and visualization services, as well as client applications, turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, and display of map legends and corresponding metadata. It should be noted that modern web mapping systems, as integrated geoportal applications, are developed on the basis of SOA and might be considered complexes of interconnected software tools for working with geospatial data. In this report a complex web mapping system is presented, including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive. The GIS web client consists of three basic tiers: (1) a tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format; (2) a tier of JavaScript objects implementing methods for handling NetCDF metadata, a Task XML object for configuring user calculations and input and output formats, and OGC WMS/WFS cartographical services; (3) a graphical user interface (GUI) tier of JavaScript objects realizing the web application's business logic. The metadata tier consists of a number of JSON objects containing technical information describing the geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc). The middleware tier of JavaScript objects implementing methods for handling geospatial

  3. Studies of Nb3Sn Strands Based on the Restacked-Rod Process for High Field Accelerator Magnets

    DOE PAGES Beta

    Barzi, E.; Bossert, M.; Gallo, G.; Lombardo, V.; Turrioni, D.; Yamada, R.; Zlobin, A. V.

    2011-12-21

    A major thrust in Fermilab's accelerator magnet R&D program is the development of Nb3Sn wires which meet target requirements for high field magnets, such as high critical current density, low effective filament size, and the capability to withstand the cabling process. The performance of a number of strands with 150/169 restack design produced by Oxford Superconducting Technology was studied for round and deformed wires. To optimize the maximum plastic strain, finite element modeling was also used as an aid in the design. Results of mechanical, transport and metallographic analyses are presented for round and deformed wires.

  4. Impact absorption of four processed soft denture liners as influenced by accelerated aging.

    PubMed

    Kawano, F; Koran, A; Nuryanti, A; Inoue, S

    1997-01-01

    The cushioning effect of soft denture liners was evaluated by using a free drop test with an accelerometer. Materials tested included SuperSoft (Coe Laboratories, Chicago, IL), Kurepeet-Dough (Kreha Chemical, Tokyo), Molteno Soft (Molten, Hiroshima, Japan), and Molloplast-B (Molloplast Regneri, Karlsruhe, Germany). All materials were found to reduce the impact force when compared to acrylic denture base resin. A 2.4-mm layer of soft denture material demonstrated good impact absorption, and Molloplast-B and Molteno had excellent impact absorption. When the soft denture liner was kept in an accelerated aging chamber for 900 hours, the damping effect recorded increased for all materials tested. Aging of all materials also affected the cushioning effect. PMID:9484071

  5. Relative characterization of rosemary samples according to their geographical origins using microwave-accelerated distillation, solid-phase microextraction and Kohonen self-organizing maps.

    PubMed

    Tigrine-Kordjani, N; Chemat, F; Meklati, B Y; Tuduri, L; Giraudel, J L; Montury, M

    2007-09-01

    For centuries, rosemary (Rosmarinus officinalis L.) has been used to prepare essential oils which, even now, are highly valued due to their various biological activities. Nevertheless, it has been noted that these activities often depend on the origin of the rosemary plant and the method of extraction. Since both of these quality parameters can greatly influence the chemical composition of rosemary oil, an original analytical method was developed where "dry distillation" was coupled to headspace solid-phase microextraction (HS-SPME) and then a data mining technique using the Kohonen self-organizing map algorithm was applied to the data obtained. This original approach uses the newly described microwave-accelerated distillation technique (MAD) and HS-SPME; neither of these techniques require external solvent and so this approach provides a novel "green" chemistry sampling method in the field of biological matrix analysis. The large data set obtained was then treated with a rarely used chemometric technique based on nonclassical statistics. Applied to 32 rosemary samples collected at the same time from 12 different sites in the north of Algeria, this method highlighted a strong correlation between the volatile chemical compositions of the samples and their origins, and it therefore allowed the samples to be grouped according to geographical distribution. Moreover, the method allowed us to identify the constituents that exerted the most influence during classification. PMID:17646972

  6. Young coconut juice can accelerate the healing process of cutaneous wounds

    PubMed Central

    2012-01-01

    Background Estrogen has been reported to accelerate cutaneous wound healing. This research studies the effect of young coconut juice (YCJ), presumably containing estrogen-like substances, on cutaneous wound healing in ovariectomized rats. Methods Four groups of female rats (6 in each group) were included in this study: sham-operated, ovariectomized (ovx), ovx receiving estradiol benzoate (EB) injections intraperitoneally, and ovx receiving YCJ orally. Two equidistant 1-cm full-thickness skin incisional wounds were made two weeks after ovariectomy. The rats were sacrificed at the end of the third and the fourth week of the study, and their serum estradiol (E2) level was measured by chemiluminescent immunoassay. The skin was excised and examined in histological sections stained with H&E, and immunostained using anti-estrogen receptor (ER-α and ER-β) antibodies. Results Wound healing was accelerated in ovx rats receiving YCJ, as compared to controls. This was associated with a significantly higher density of immunostaining for ER-α and ER-β in keratinocytes, fibroblasts, white blood cells, fat cells, sebaceous glands, skeletal muscles, and hair shafts and follicles. It was also associated with a thicker epidermis and dermis, but a thinner hypodermis. In addition, the number and size of hair follicles immunoreactive for both ER-α and ER-β were highest in the ovx+YCJ group, as compared to the ovx+EB group. Conclusions This study demonstrates that YCJ has estrogen-like characteristics, which in turn seem to have beneficial effects on cutaneous wound healing. PMID:23234369

  7. Remote sensing of the energy of Jovian auroral electrons with STIS: a clue to unveil plasma acceleration processes

    NASA Astrophysics Data System (ADS)

    Gerard, Jean-Claude

    2013-10-01

    The polar aurora, an important energy source for the Earth's upper atmosphere, is about two orders of magnitude more intense at Jupiter, where it releases approximately 10 GW in Jupiter's thermosphere. So far, HST observations of Jupiter's aurora have concentrated on the morphology and the relationship between the solar wind and the brightness distribution. While STIS-MAMA is still operational, time is now critical to move into a new era in which FUV long-slit spectroscopy and the spatial scanning capabilities of HST are combined. We propose to use this powerful tool to remotely sense the characteristics of the precipitated electrons by slewing the spectral slit over the different auroral components. It will then be possible to associate electron energies with spatial auroral components and constrain the acceleration mechanisms (field-aligned acceleration, magnetic field reconnection, pitch angle electron scattering) associated with specific emission regions. For this, a combination of FUV imaging with STIS long-slit spectroscopy will map the spatial variations of the auroral depth and thus the energy of the precipitated electrons. These results will be compared with current models of the Jovian magnetosphere-ionosphere interactions and will provide key inputs to a 3-D model of Jupiter's atmospheric global heat budget and dynamics currently under development. This compact, timely program is designed to provide a major step forward in understanding the physical interactions taking place in Jupiter's magnetosphere and their effects on giant planets' atmospheres, a likely paradigm for many fast-spinning giant planets with massive magnetic fields in the universe.

  8. In-Database Raster Analytics: Map Algebra and Parallel Processing in Oracle Spatial Georaster

    NASA Astrophysics Data System (ADS)

    Xie, Q. J.; Zhang, Z. Z.; Ravada, S.

    2012-07-01

    Over the past decade several products have used enterprise database technology to store and manage geospatial imagery and raster data inside an RDBMS, which in turn provides the best manageability and security. With data volumes growing exponentially, real-time or near-real-time processing and analysis of such big data become more challenging. Oracle Spatial GeoRaster, unlike most other products, takes an enterprise database-centric approach to both data management and data processing. This paper describes one of the central components of this database-centric approach: the processing engine built completely inside the database. Part of this processing engine is raster algebra, which we call In-database Raster Analytics. This paper discusses the three key characteristics of this in-database analytics engine and their benefits. First, it moves the processing closer to the data instead of moving the data to the processing, which helps achieve greater performance by overcoming the bottleneck of computer networks. Second, we designed and implemented a new raster algebra expression language. This language is based on PL/SQL and is currently focused on the "local" function type of map algebra. It includes general arithmetic, logical, and relational operators and any combination of them, which dramatically improves the analytical capability of the GeoRaster database. The third feature is parallel processing of such operations to further improve performance. This paper also presents some sample use cases. The testing results demonstrate that this in-database approach to raster analytics can effectively help solve the biggest performance challenges we face today with big raster and image data.
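The "local" function type of map algebra mentioned above combines co-registered layers cell by cell. A minimal NumPy sketch of local arithmetic, relational, and logical operators (this is not the GeoRaster PL/SQL language itself, and the band values are invented):

```python
import numpy as np

# Two co-registered raster layers (e.g. red and near-infrared bands).
red = np.array([[0.1, 0.2], [0.3, 0.4]])
nir = np.array([[0.5, 0.4], [0.6, 0.8]])

# A "local" map-algebra expression combines layers cell by cell.
# Here: an NDVI-style arithmetic expression plus a relational mask.
ndvi = (nir - red) / (nir + red)
vegetated = ndvi > 0.4                 # relational operator, per cell
masked = np.where(vegetated, ndvi, np.nan)  # logical selection, per cell
```

Each output cell depends only on the corresponding input cells, which is what makes the "local" class of operations straightforward to parallelize, whether in NumPy or inside the database.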

  9. Neogene cratonic erosion fluxes and landform evolution processes from regional regolith mapping (Burkina Faso, West Africa)

    NASA Astrophysics Data System (ADS)

    Grimaud, Jean-Louis; Chardon, Dominique; Metelka, Václav; Beauvais, Anicet; Bamba, Ousmane

    2015-07-01

    The regionally correlated and dated regolith-paleolandform sequence of Sub-Saharan West Africa offers a unique opportunity to constrain continental-scale regolith dynamics as the key part of the sediment routing system. In this study, a regolith mapping protocol is developed and applied at the scale of Southwestern Burkina Faso. Mapping combines field survey and remote sensing data to reconstruct the topography of the last pediplain that formed over West Africa in the Early and Mid-Miocene (24-11 Ma). The nature and preservation pattern of the pediplain are controlled by the spatial variation of bedrock lithology and are partitioned among large drainage basins. Quantification of pediplain dissection and drainage growth allows definition of a cratonic background denudation rate of 2 m/My and a minimum characteristic timescale of 20 Ma for shield resurfacing. These results may be used to simulate minimum export fluxes of drainage basins of constrained size over geological timescales. Background cratonic denudation results in a clastic export flux of ~4 t/km2/year, limited by the low denudation efficiency of slope processes and the correspondingly high regolith storage capacity of tropical shields. These salient characteristics of shield surface dynamics would tend to smooth shields' riverine export fluxes through geological time.
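The conversion from a denudation rate to an areal mass flux is simple arithmetic. The sketch below assumes a mean rock density of 2.7 t/m^3 and a clastic (solid) fraction of ~0.75 of the total flux, the remainder leaving in solution; both values are illustrative assumptions, not taken from the paper, chosen to show how a 2 m/My rate yields an export flux of order 4 t/km^2/yr:

```python
# Converting a denudation rate (m/My) to a mass flux per unit area (t/km^2/yr).
denudation_m_per_My = 2.0
density_t_per_m3 = 2.7     # assumed mean rock density (illustrative)
clastic_fraction = 0.75    # assumed solid fraction of total flux (illustrative)

# 1 km^2 = 1e6 m^2 and 1 My = 1e6 yr, so the factors cancel:
m3_per_km2_per_yr = denudation_m_per_My * 1e6 / 1e6   # volume flux, m^3/km^2/yr
total_flux = m3_per_km2_per_yr * density_t_per_m3     # total mass flux, t/km^2/yr
clastic_flux = total_flux * clastic_fraction          # solid (clastic) part
```

With these assumptions the total flux is 5.4 t/km^2/yr and the clastic part is about 4 t/km^2/yr, of the same order as the paper's figure.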

  10. Hot Deformation Characteristics of 13Cr-4Ni Stainless Steel Using Constitutive Equation and Processing Map

    NASA Astrophysics Data System (ADS)

    Kishor, Brij; Chaudhari, G. P.; Nath, S. K.

    2016-06-01

    Hot compression tests were performed to study the hot deformation characteristics of 13Cr-4Ni stainless steel. The tests were performed in the strain rate range of 0.001-10 s-1 and temperature range of 900-1100 °C using Gleeble® 3800 simulator. A constitutive equation of Arrhenius type was established based on the experimental data to calculate the different material constants, and average value of apparent activation energy was found to be 444 kJ/mol. Zener-Hollomon parameter, Z, was estimated in order to characterize the flow stress behavior. Power dissipation and instability maps developed on the basis of dynamic materials model for true strain of 0.5 show optimum hot working conditions corresponding to peak efficiency range of about 28-32%. These lie in the temperature range of 950-1025 °C and corresponding strain rate range of 0.001-0.01 s-1 and in the temperature range of 1050-1100 °C and corresponding strain rate range of 0.01-0.1 s-1. The flow characteristics in these conditions show dynamic recrystallization behavior. The microstructures are correlated to the different stability domains indicated in the processing map.
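The Zener-Hollomon parameter combines strain rate and temperature in a single variable, Z = strain_rate * exp(Q/RT). A small Python sketch using the apparent activation energy reported above (the function name and example conditions are illustrative):

```python
import math

def zener_hollomon(strain_rate_s, temp_C, Q_J_per_mol=444e3):
    """Z = strain_rate * exp(Q / (R*T)), with Q ~ 444 kJ/mol as reported."""
    R = 8.314                 # gas constant, J/(mol K)
    T = temp_C + 273.15       # absolute temperature, K
    return strain_rate_s * math.exp(Q_J_per_mol / (R * T))

# Low Z (hot and slow) favors dynamic recrystallization; high Z (cooler and
# fast) corresponds to harder deformation conditions.
z_soft = zener_hollomon(0.001, 1100)   # within the reported optimum window
z_hard = zener_hollomon(10.0, 900)     # opposite corner of the test matrix
```

Comparing the two corners of the tested range shows how strongly Z separates the deformation regimes characterized in the paper.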

  12. Heat Capacity Mapping Radiometer (HCMR) data processing algorithm, calibration, and flight performance evaluation

    NASA Technical Reports Server (NTRS)

    Bohse, J. R.; Bewtra, M.; Barnes, W. L.

    1979-01-01

    The rationale and procedures used in the radiometric calibration and correction of Heat Capacity Mapping Mission (HCMM) data are presented. Instrument-level testing and calibration of the Heat Capacity Mapping Radiometer (HCMR) were performed by the sensor contractor ITT Aerospace/Optical Division. The principal results are included. From the instrumental characteristics and calibration data obtained during ITT acceptance tests, an algorithm for post-launch processing was developed. Integrated spacecraft-level sensor calibration was performed at Goddard Space Flight Center (GSFC) approximately two months before launch. This calibration provided an opportunity to validate the data calibration algorithm. Instrumental parameters and results of the validation are presented and the performances of the instrument and the data system after launch are examined with respect to the radiometric results. Anomalies and their consequences are discussed. Flight data indicates a loss in sensor sensitivity with time. The loss was shown to be recoverable by an outgassing procedure performed approximately 65 days after the infrared channel was turned on. It is planned to repeat this procedure periodically.

  13. Data processing for fabrication of GMT primary segments: raw data to final surface maps

    NASA Astrophysics Data System (ADS)

    Tuell, Michael T.; Hubler, William; Martin, Hubert M.; West, Steven C.; Zhou, Ping

    2014-07-01

    The Giant Magellan Telescope (GMT) primary mirror is a 25 meter f/0.7 surface composed of seven 8.4 meter circular segments, six of which are identical off-axis segments. The fabrication and testing challenges with these severely aspheric segments (about 14 mm of aspheric departure, mostly astigmatism) are well documented. Converting the raw phase data to useful surface maps involves many steps and compensations, including large corrections for image distortion from the off-axis null test, misalignment of the null test, departure from the ideal support forces, and temperature gradients in the mirror. The final correction simulates the active-optics correction that will be made at the telescope. Data are collected and phase maps are computed in 4D Technology's 4Sight™ software. The data are saved to an .h5 (HDF5) file and imported into MATLAB® for further analysis. A semi-automated data pipeline has been developed to reduce the analysis time as well as the potential for error. As each operation is performed, results and analysis parameters are appended to a data file, so in the end the history of data processing is embedded in the file. A report and a spreadsheet are automatically generated to display the final statistics as well as how each compensation term varied during data acquisition. This gives us valuable statistics and provides a quick starting point for investigating atypical results.

  14. An Investigation into Hot Deformation Characteristics and Processing Maps of High-Strength Armor Steel

    NASA Astrophysics Data System (ADS)

    Bobbili, Ravindranadh; Madhu, V.

    2015-12-01

    The isothermal hot compression tests of high-strength armor steel over wide ranges of strain rates (0.01-10 /s) and deformation temperatures (950-1100 °C) are carried out using Gleeble thermo-simulation machine. The true stress-strain data obtained from the experiments are employed to establish the constitutive equations based on the strain-compensated Arrhenius model. With strain-compensated Arrhenius model, good agreement between the experimental and predicted values is achieved, which represents the highest accuracy in comparison with the other models. The hot deformation activation energy is estimated to be 512 kJ/mol. By employing dynamic material model, the processing maps of high-strength armor steel at various strains are established. A maximum efficiency of about 45% of power dissipation is obtained at high temperature and low strain rate. Due to the high power dissipation efficiency and excellent processing ability in dynamic recrystallization zone for metal material, the optimum processing conditions are selected such that the temperature range is between 1050 and 1100°C and the strain rate range is between 0.01 and 0.1/s. Transmission electron microscopy observations show that the dislocation density is directly associated with the value of processing efficiency.
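In the dynamic materials model used to build such processing maps, the power-dissipation efficiency is derived from the strain-rate sensitivity m as eta = 2m/(m+1). The sketch below applies that standard formula to invented flow-stress values (the numbers are not from the paper):

```python
import math

def strain_rate_sensitivity(sigma1, sigma2, rate1, rate2):
    """m = d(ln sigma)/d(ln strain_rate), estimated from two test points."""
    return (math.log(sigma2) - math.log(sigma1)) / (math.log(rate2) - math.log(rate1))

def dissipation_efficiency(m):
    """Dynamic materials model efficiency: eta = 2m/(m+1)."""
    return 2.0 * m / (m + 1.0)

# Illustrative flow stresses (MPa) measured at two strain rates (1/s).
m = strain_rate_sensitivity(80.0, 120.0, 0.01, 0.1)
eta = dissipation_efficiency(m)   # fraction of power dissipated by microstructural change
```

Regions of the temperature/strain-rate plane with high eta (the paper reports a ~45% peak) mark favorable hot-working windows, typically associated with dynamic recrystallization.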

  15. High-resolution mapping of combustion processes and implications for CO2 emissions

    NASA Astrophysics Data System (ADS)

    Wang, R.; Tao, S.; Ciais, P.; Shen, H. Z.; Huang, Y.; Chen, H.; Shen, G. F.; Wang, B.; Li, W.; Zhang, Y. Y.; Lu, Y.; Zhu, D.; Chen, Y. C.; Liu, X. P.; Wang, W. T.; Wang, X. L.; Liu, W. X.; Li, B. G.; Piao, S. L.

    2013-05-01

    High-resolution mapping of fuel combustion and CO2 emission provides valuable information for modeling pollutant transport, developing mitigation policy, and inverse modeling of CO2 fluxes. Previous global emission maps included only a few fuel types, and emissions were estimated on a grid by distributing national fuel data on an equal per capita basis, using population density maps. This process distorts the geographical distribution of emissions within countries. In this study, a sub-national disaggregation method (SDM) of fuel data is applied to establish a global 0.1° × 0.1° geo-referenced inventory of fuel combustion (PKU-FUEL) and corresponding CO2 emissions (PKU-CO2) based upon 64 fuel sub-types for the year 2007. Uncertainties of the emission maps are evaluated using a Monte Carlo method. It is estimated that CO2 emission from combustion sources including fossil fuel, biomass, and solid wastes in 2007 was 11.2 Pg C yr-1 (9.1 Pg C yr-1 and 13.3 Pg C yr-1 as 5th and 95th percentiles). Of this, emission from fossil fuel combustion is 7.83 Pg C yr-1, very close to the estimate of the International Energy Agency (7.87 Pg C yr-1). By replacing national data disaggregation with sub-national data in this study, the average 95th-minus-5th-percentile range of CO2 emission over all grid points is reduced from 417 to 68.2 Mg km-2 yr-1. The spread is reduced because the uneven distribution of per capita fuel consumption within countries is better taken into account by using sub-national fuel consumption data directly. A significant difference in per capita CO2 emissions between urban and rural areas was found in developing countries (2.08 vs. 0.598 Mg C/(cap. × yr)), but not in developed countries (3.55 vs. 3.41 Mg C/(cap. × yr)). This implies that rapid urbanization of developing countries is very likely to drive up their emissions in the future.
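The distortion introduced by equal-per-capita disaggregation can be seen in a toy two-region example (all numbers are invented for illustration):

```python
# Disaggregating a national fuel total onto two regions two ways:
# (a) proportional to population (the per-capita assumption), versus
# (b) using reported sub-national fuel data directly.
national_fuel = 100.0
population = {"urban": 30.0, "rural": 70.0}        # millions of people
subnational_fuel = {"urban": 60.0, "rural": 40.0}  # reported regional totals

pop_total = sum(population.values())
per_capita_alloc = {k: national_fuel * v / pop_total for k, v in population.items()}

# Per-capita allocation gives urban 30, rural 70; the reported sub-national
# totals (urban 60, rural 40) quantify how far that assumption misplaces fuel.
distortion = {k: subnational_fuel[k] - per_capita_alloc[k] for k in population}
```

Summed over many grid cells, exactly this kind of misallocation is what the sub-national disaggregation method removes, which is why the uncertainty range per grid point shrinks so sharply.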

  16. Developmental Changes in Processing Speed: Influence of Accelerated Education for Gifted Children

    ERIC Educational Resources Information Center

    Duan, Xiaoju; Shi, Jiannong; Zhou, Dan

    2010-01-01

    There are two major hypotheses concerning the developmental trends of processing speeds. These hypotheses explore both local and global trends. The study presented here investigates the effects of people's different knowledge on the speed with which they are able to process information. The participants in this study are gifted children aged 9,…

  17. Application of Low Level, Uniform Ultrasound Field for Acceleration of Enzymatic Bio-processing of Cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Enzymatic bio-processing of cotton generates significantly less hazardous wastewater effluents, which are readily biodegradable, but it also has several critical shortcomings that impede its acceptance by industries: expensive processing costs and slow reaction rates. Our research has found that th...

  18. Implications of acceleration environments on scaling materials processing in space to production

    NASA Technical Reports Server (NTRS)

    Demel, Ken

    1990-01-01

    Some considerations regarding materials processing in space are covered from a commercial perspective. Key areas include power, proprietary data, operational requirements (including logistics), and also the center of gravity location, and control of that location with respect to materials processing payloads.

  19. Topographic power spectral density study of the effect of surface treatment processes on niobium for superconducting radio frequency accelerator cavities

    SciTech Connect

    Charles Reece, Hui Tian, Michael Kelley, Chen Xu

    2012-04-01

    Microroughness is viewed as a critical issue for attaining optimum performance of superconducting radio frequency accelerator cavities. The principal surface smoothing methods are buffered chemical polish (BCP) and electropolish (EP). The resulting topography is characterized by atomic force microscopy (AFM). The power spectral density (PSD) of AFM data provides a more thorough description of the topography than a single-value roughness measurement. In this work, one dimensional average PSD functions derived from topography of BCP and EP with different controlled starting conditions and durations have been fitted with a combination of power law, K correlation, and shifted Gaussian models to extract characteristic parameters at different spatial harmonic scales. While the simplest characterizations of these data are not new, the systematic tracking of scale-specific roughness as a function of processing is new and offers feedback for tighter process prescriptions more knowledgably targeted at beneficial niobium topography for superconducting radio frequency applications.
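A one-dimensional PSD of a topographic profile can be estimated with an FFT. The sketch below is illustrative only (the normalization convention and the synthetic profile are assumptions, not the paper's fitting procedure); it recovers the dominant spatial frequency of a synthetic rough surface:

```python
import numpy as np

def psd_1d(profile, dx):
    """One-dimensional power spectral density of an evenly sampled height profile.

    profile: surface heights sampled every dx; returns (spatial frequencies, PSD).
    """
    n = len(profile)
    h = profile - profile.mean()           # remove the DC offset
    H = np.fft.rfft(h)
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = (np.abs(H) ** 2) * dx / n        # one (of several) common normalizations
    return freqs, psd

# Synthetic rough profile: long-period waviness (period 2.0) plus fine noise.
rng = np.random.default_rng(0)
x = np.arange(1024) * 0.01                 # sampling interval dx = 0.01
profile = 0.5 * np.sin(2 * np.pi * x / 2.0) + 0.05 * rng.normal(size=x.size)
freqs, psd = psd_1d(profile, dx=0.01)
peak_freq = freqs[np.argmax(psd[1:]) + 1]  # dominant spatial frequency (~0.5)
```

Fitting model curves (power law, K correlation, shifted Gaussian) to such a spectrum, as the paper does, then yields scale-specific roughness parameters rather than a single RMS number.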

  20. Evaluation of an accelerated mineral carbonation process using Class F fly ash to sequester flue gas CO2

    NASA Astrophysics Data System (ADS)

    Erikson, Lisa N.

    Accelerated mineral carbonation (AMC) processes have been recommended as a mitigation technique to reduce coal-fired flue gas carbon dioxide (CO 2) emissions. This research concerns an AMC process which reacts humid flue gas with fly ash in a fluidized bed reactor (FBR). In theory the alkaline products in the fly ash will react with the CO2 in the flue gas, producing stable mineral carbonates. Although the original goal of the current testing was to optimize an AMC process, recent testing revealed no significant mineralization taking place over the duration of the tests. Further evaluation of the AMC process indicated that small amounts of available alkalinity in the fly ash and limited amounts of water in the flue gas compromised the mineralization reactions. The use of a FBR coupled with an elemental analyzer for carbon measurement may have resulted in quantifying movement of unburned carbon as opposed to changes in carbonation. A subsequent review of previous testing regarding this AMC process indicated less than 1% CO2 uptake by the fly ash with past claims of mineralization based on incorrect data reduction. In general, AMC processes prove to be impractical for the reduction of large quantities of CO2 emissions produced by coal-fired power plants.

  1. Tests of an environmental and personnel safe cleaning process for BNL accelerator and storage ring components

    SciTech Connect

    Foerster, C.L.; Lanni, C.; Lee, R.; Mitchell, G.; Quade, W.

    1996-10-01

    A large measure of the successful operation of the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory (BNL) for over a decade can be attributed to the cleaning of its UHV components during and after construction. A new UHV cleaning process, which had to be environmentally and personnel safe, was needed to replace the harsh, unfriendly process still in use. Dow Advanced Cleaning Systems was contracted to develop a replacement process that avoids harsh chemicals yet cleans vacuum surfaces as well as the existing process. Acceptance of the replacement process was primarily based on Photon Stimulated Desorption (PSD) measurements of beam tube samples run on NSLS beam line U10B. One-meter-long beam tube samples were fabricated from aluminum, 304 stainless steel, and oxygen-free copper. Initially, coupon samples were cleaned and passed preliminary testing for the proposed process. Next, beam tube samples of each material were cleaned, and the PSD was measured on beam line U10B using white light with a critical energy of 487 eV. Prior to cleaning, the samples were contaminated with a mixture of cutting oils, lubricants, vacuum oils, and vacuum grease; the contaminated samples were then baked. Samples of each material were also cleaned with the existing process after the same preparation. Beam tube samples were exposed to between 10^22 and 10^23 photons per meter for a PSD measurement. Desorption yields for H2, CO, CO2, CH4, and H2O are reported for both the existing and the replacement cleaning processes. Preliminary data, residual gas scans, and PSD results are given and discussed. The new process is also compared with new cleaning methods developed in other laboratories.

  2. Constitutive Modeling, Microstructure Evolution, and Processing Map for a Nitride-Strengthened Heat-Resistant Steel

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Feng; Sha, Wei; Yan, Wei; Wang, Wei; Shan, Yi-Yin; Yang, Ke

    2014-08-01

    A constitutive equation was established to describe the deformation behavior of a nitride-strengthened (NS) steel through isothermal compression simulation tests. All parameters in the constitutive equation, including the constants and the activation energy, were precisely calculated for the NS steel. The stress-strain curves also showed two different linear relationships between critical stress and critical strain in the NS steel, due to the augmentation of the auxiliary softening effect of the dynamic strain-induced transformation. In the calculated processing maps, as the Zener-Hollomon value changed, three domains of different levels of workability were found: an excellent-workability region with equiaxed-grain microstructure, a good-workability region with "stripe" microstructure, and a poor-workability region with a mixed martensitic-ferritic microstructure. With increasing strain, the poor-workability region first expanded, then shrank until it barely existed, but appeared again at a strain of 0.6.

  3. DIGITAL PROCESSING TECHNIQUES FOR IMAGE MAPPING WITH LANDSAT TM AND SPOT SIMULATOR DATA.

    USGS Publications Warehouse

    Chavez, Pat S., Jr.

    1984-01-01

    To overcome certain problems associated with the visual selection of Landsat TM bands for image mapping, the author used a quantitative technique that ranks the 20 possible three-band combinations based upon their information content. Standard deviations and correlation coefficients can be used to compute a value called the Optimum Index Factor (OIF) for each of the 20 possible combinations. SPOT simulator images were digitally processed and compared with Landsat-4 Thematic Mapper (TM) images covering a semi-arid region in northern Arizona and a highly vegetated urban area near Washington, D.C. Statistical comparisons indicate that certain TM three-band combinations contain more radiometric or color information than the three SPOT bands.
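The Optimum Index Factor ranks each three-band combination by the ratio of summed band standard deviations to summed absolute pairwise correlations, so combinations of high-variance, weakly correlated bands score highest. A sketch on synthetic bands (the OIF formula is the standard one attributed to Chavez; the image data are invented):

```python
import numpy as np
from itertools import combinations

def optimum_index_factor(bands, combo):
    """OIF = (sum of band std devs) / (sum of |pairwise correlations|)."""
    stds = sum(np.std(bands[b]) for b in combo)
    corrs = sum(
        abs(np.corrcoef(bands[a].ravel(), bands[b].ravel())[0, 1])
        for a, b in combinations(combo, 2)
    )
    return stds / corrs

# Synthetic 4-band image: bands 0 and 1 nearly duplicate each other,
# while bands 2 and 3 carry independent information.
rng = np.random.default_rng(0)
base = rng.normal(size=(32, 32))
bands = [base,
         base + 0.01 * rng.normal(size=(32, 32)),
         rng.normal(size=(32, 32)),
         rng.normal(size=(32, 32))]

ranked = sorted(combinations(range(4), 3),
                key=lambda c: optimum_index_factor(bands, c), reverse=True)
best = ranked[0]
```

The top-ranked combination avoids pairing the two redundant bands, which is exactly the behavior used to pick informative TM three-band composites.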

  4. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    NASA Astrophysics Data System (ADS)

    Jie, Liang; Li, KenLi; Shi, Lin; Liu, RangSu; Mei, Jing

    2014-01-01

    Molecular dynamics simulation is a powerful tool for simulating and analyzing complex physical processes and phenomena at the atomic scale, predicting the natural time evolution of a system of atoms. Precise simulation of physical processes places strong demands on both simulation size and computing timescale, so finding available computing resources is crucial to accelerating the computation. General-purpose graphics processing units (GPGPUs) are increasingly used for such workloads because of their high floating-point throughput, wide memory bandwidth, and improved programmability. Targeting the most time-consuming component of MD simulations of liquid metal solidification, this paper presents a fine-grained spatial decomposition method that accelerates the neighbor-list update and the interaction force calculation by taking advantage of modern graphics processing units (GPUs), enlarging the simulation to a system of 10,000,000 atoms. In addition, a number of evaluations and tests are discussed, ranging from executions on different precision-enabled CUDA versions, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than implementations on 16-core CPU clusters. On the basis of the simulated results, comparisons between the theoretical and experimental results show good agreement, and more complete and larger cluster structures, as in actual macroscopic materials, are observed. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed during metal solidification are observed in the large-sized system.
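The neighbor-list update that dominates the cost can be organized as a cell-based spatial decomposition: bin atoms into cells no smaller than the cutoff, then compare each atom only against its own and adjacent cells. The serial Python sketch below illustrates the idea the paper parallelizes on the GPU (periodic boundaries are omitted and all parameters are invented):

```python
import numpy as np
from collections import defaultdict

def neighbor_list(positions, box, cutoff):
    """Cell-list neighbor search: near-linear binning instead of all-pairs O(N^2)."""
    ncell = max(1, int(box // cutoff))   # cells per axis; cell size >= cutoff
    size = box / ncell
    cells = defaultdict(list)
    for idx, p in enumerate(positions):
        key = tuple(np.minimum((p // size).astype(int), ncell - 1))
        cells[key].append(idx)

    pairs = set()
    for (cx, cy, cz), members in cells.items():
        # Only the 27 cells around each cell can contain neighbors within cutoff.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            if i < j and np.linalg.norm(positions[i] - positions[j]) < cutoff:
                                pairs.add((i, j))
    return pairs

rng = np.random.default_rng(0)
pts = rng.random((200, 3)) * 10.0
pairs = neighbor_list(pts, box=10.0, cutoff=1.0)
```

On a GPU, the per-cell and per-atom work maps naturally onto threads, which is the fine-grained decomposition the paper exploits.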

  6. Demonstration of wetland vegetation mapping in Florida from computer-processed satellite and aircraft multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Butera, M. K.

    1979-01-01

    The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.

  7. A colour-map plugin for the open source, Java based, image processing package, ImageJ

    NASA Astrophysics Data System (ADS)

    Moodley, Keagan; Murrell, Hugh

    2004-07-01

    We present an interactive approach to the pseudo-colouring of greyscale images. We implement the technique by computing mappings from a three-dimensional (3D) colour space to a one-dimensional greyscale space (i.e., R^3 to R). To compute our maps, we employ both linear and nonlinear interpolation in 3D colour space. We validate our work by applying our maps to greyscale images, resulting in significant image enhancement. Applications include space imagery, geological topographies, medical scans, and many more. Our tool is coded as a Java plug-in for the open source image processing package, ImageJ.
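One common way to realize such a pseudo-colouring is piecewise-linear interpolation between control colours placed along the grey axis. The sketch below is an illustrative scheme in Python/NumPy, not the plugin's own mapping (the control points and colours are invented):

```python
import numpy as np

def pseudo_colour(grey, control_points, control_colours):
    """Map a greyscale image (values in [0, 1]) to RGB by linear interpolation
    between control colours anchored at positions along the grey axis."""
    channels = [np.interp(grey, control_points, control_colours[:, c])
                for c in range(3)]
    return np.stack(channels, axis=-1)

# A simple blue -> green -> red ramp.
points = np.array([0.0, 0.5, 1.0])
colours = np.array([[0.0, 0.0, 1.0],   # blue at grey 0.0
                    [0.0, 1.0, 0.0],   # green at grey 0.5
                    [1.0, 0.0, 0.0]])  # red at grey 1.0

grey = np.linspace(0, 1, 5).reshape(1, 5)
rgb = pseudo_colour(grey, points, colours)
```

Nonlinear variants replace `np.interp` with, for example, spline interpolation through the same control colours; the interactivity comes from letting the user drag the control points.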

  8. Mapping Glacial Weathering Processes with Thermal Infrared Remote Sensing: A Case Study at Robertson Glacier, Canada

    NASA Astrophysics Data System (ADS)

    Rutledge, A. M.; Christensen, P. R.; Shock, E.; Canovas, P. A., III

    2014-12-01

    Geologic weathering processes in cold environments, especially subglacial chemical processes acting on rock and sediment, are not well characterized due to the difficulty of accessing these environments. Glacial weathering of geologic materials contributes to the solute flux in meltwater and provides a potential source of energy to chemotrophic microbes, and is thus an important component to understand. In this study, we use Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data to map the extent of glacial weathering in the front range of the Canadian Rockies using remotely detected infrared spectra. We ground-truth our observations using laboratory infrared spectroscopy, x-ray diffraction, and geochemical analyses of field samples. The major goals of the project are to quantify weathering inputs to the glacial energy budget, and to link in situ sampling with remote sensing capabilities. Robertson Glacier, Alberta, Canada is an excellent field site for this technique as it is easily accessible and its retreating stage allows sampling of fresh subglacial and englacial sediments. Infrared imagery of the region was collected with the ASTER satellite instrument. At that same time, samples of glacially altered rock and sediments were collected on a downstream transect of the glacier and outwash plain. Infrared laboratory spectroscopy and x-ray diffraction were used to determine the composition and abundance of minerals present. Geochemical data were also collected at each location, and ice and water samples were analyzed for major and minor elements. Our initial conclusion is that the majority of the weathering seems to be occurring at the glacier-rock interface rather than in the outwash stream. Results from both laboratory and ASTER data indicate the presence of leached weathering rinds. A general trend of decreasing carbonate abundances with elevation (i.e. residence time in ice) is observed, which is consistent with increasing calcium ion

  9. Accelerated process development for protease production in continuous multi-stage cultures.

    PubMed

    Raninger, A; Steiner, W

    2003-06-01

A fermentation process was developed and optimized for the production of a specific protease from Bacillus licheniformis PWD-1. Media formulations were constructed and crucial environmental parameters were optimized to enhance growth and product formation. Process dynamics of substrate consumption and biomass, product, and by-product formation were determined under controlled conditions in a bioreactor. Using kinetic data from batch- and continuous-culture experiments, a fed-batch process was developed producing proteolytic activities 10 times those found during regular batch culture. In one-stage continuous stirred-tank culture, protease formation was completely decoupled from sporulation. Shift experiments in one-stage continuous cultures led to the development of a two-stage continuous stirred-tank fermentation process using optimized conditions for growth in the first stage and protease formation in the second stage. Accordingly, the basis for continuous production of the enzyme on a pilot scale was established. PMID:12652475

  10. Comparing Two Forms of Concept Map Critique Activities to Facilitate Knowledge Integration Processes in Evolution Education

    ERIC Educational Resources Information Center

    Schwendimann, Beat A.; Linn, Marcia C.

    2016-01-01

Concept map activities often lack a subsequent revision step that facilitates knowledge integration. This study compares two collaborative critique activities using a Knowledge Integration Map (KIM), a form of concept map. Four classes of high school biology students (n = 81) using an online inquiry-based learning unit on evolution were assigned…

  11. Specialized and independent processing of orientation and shape in visual field maps LO1 and LO2.

    PubMed

    Silson, Edward H; McKeefry, Declan J; Rodgers, Jessica; Gouws, Andre D; Hymers, Mark; Morland, Antony B

    2013-03-01

    We identified human visual field maps, LO1 and LO2, in object-selective lateral occipital cortex. Using transcranial magnetic stimulation (TMS), we assessed the functions of these maps in the perception of orientation and shape. TMS of LO1 disrupted orientation, but not shape, discrimination, whereas TMS of LO2 disrupted shape, but not orientation, discrimination. This double dissociation suggests that specialized and independent processing of different visual attributes occurs in LO1 and LO2. PMID:23377127

  12. Brightest Fermi-LAT flares of PKS 1222+216: implications on emission and acceleration processes

    SciTech Connect

    Kushwaha, Pankaj; Singh, K. P.; Sahayanathan, Sunder

    2014-11-20

We present a high time resolution study of the two brightest γ-ray outbursts from the blazar PKS 1222+216 observed by the Fermi Large Area Telescope (LAT) in 2010. The γ-ray light curves obtained in four different energy bands, 0.1-3, 0.1-0.3, 0.3-1, and 1-3 GeV, with time bins of six hours, show asymmetric profiles with similar rise times in all bands but a rapid decline during the April flare and a gradual one during the June flare. The light curves during the April flare show an ∼2 day long plateau in the 0.1-0.3 GeV emission, erratic variations in the 0.3-1 GeV emission, and a daily recurring feature in the 1-3 GeV emission until the rapid rise and decline within a day. The June flare shows a monotonic rise until the peak, followed by a gradual decline powered mainly by the multi-peak 0.1-0.3 GeV emission. The peak fluxes during both flares are similar except in the 1-3 GeV band, where the April flux is twice that of the June flare. Hardness ratios during the April flare indicate spectral hardening in the rising phase followed by softening during the decay. We attribute this behavior to the development of a shock associated with an increase in acceleration efficiency, followed by its decay leading to spectral softening. The June flare suggests hardening during the rise followed by a complicated energy-dependent behavior during the decay. The observed features during the June flare favor multiple emission regions, while the overall flaring episode can be related to jet dynamics.
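The hardness ratios used above to diagnose spectral hardening are a simple two-band flux diagnostic. A minimal sketch, assuming the common (hard − soft)/(hard + soft) definition (the abstract does not state which convention was used) and purely hypothetical flux values:

```python
def hardness_ratio(f_hard, f_soft):
    """Photon-flux hardness ratio (F_hard - F_soft) / (F_hard + F_soft).

    Ranges from -1 (all soft) to +1 (all hard); an increase over time
    indicates spectral hardening."""
    return (f_hard - f_soft) / (f_hard + f_soft)

# Hypothetical fluxes (ph cm^-2 s^-1) in a hard (1-3 GeV) and soft (0.1-0.3 GeV) band:
rising = hardness_ratio(2.1e-7, 5.0e-7)
decaying = hardness_ratio(1.2e-7, 6.0e-7)
assert rising > decaying  # harder spectrum in the rising phase than in the decay
```

With a flux ratio instead of a normalized difference the qualitative trend is the same; only the scale changes.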

  13. The IBA Rhodotron: an industrial high-voltage high-powered electron beam accelerator for polymers radiation processing

    NASA Astrophysics Data System (ADS)

    Van Lancker, Marc; Herer, Arnold; Cleland, Marshall R.; Jongen, Yves; Abs, Michel

    1999-05-01

The Rhodotron is a high-voltage, high-power electron beam accelerator based on a design concept first proposed in 1989 by J. Pottier of the French Atomic Energy Commission, Commissariat à l'Energie Atomique (CEA). In December 1991, the Belgian particle accelerator manufacturer, Ion Beam Applications s.a. (IBA), entered into an exclusive agreement with the CEA to develop and industrialize the Rhodotron. Electron beams have long been used as the preferential method to cross-link a variety of polymers, either in their bulk state or in their final form. Used extensively in the wire and cable industry to toughen insulating jackets, electron beam-treated plastics can demonstrate improved tensile and impact strength, greater abrasion resistance, increased temperature resistance and dramatically improved fire retardation. Electron beams are used to selectively cross-link or degrade a wide range of polymers in resin pellet form. Electron beams are also used for rapid curing of advanced composites, for cross-linking of floor-heating and sanitary pipes and for cross-linking of formed plastic parts. Other applications include: in-house and contract medical device sterilization, food irradiation in both electron and X-ray modes, pulp processing, electron beam doping of semiconductors, gemstone coloration and general irradiation research. IBA currently markets three models of the Rhodotron, all capable of 10 MeV and alternate beam energies from 3 MeV upwards. The Rhodotron models TT100, TT200 and TT300 are typically specified with guaranteed beam powers of 35, 80 and 150 kW, respectively. Founded in 1986, IBA, a spin-off of the Cyclotron Research Center at the University of Louvain (UCL) in Belgium, is a pioneer in accelerator design for industrial-scale production.

  14. Hot deformation behavior and processing map of a 9Cr ferritic/martensitic ODS steel

    NASA Astrophysics Data System (ADS)

    Zhang, Guangming; Zhou, Zhangjian; Sun, Hongying; Zou, Lei; Wang, Man; Li, Shaofu

    2014-12-01

The hot deformation behavior of 9Cr oxide-dispersion-strengthened (ODS) steel fabricated through mechanical alloying and hot isostatic pressing (HIP) was investigated through hot compression tests on a Gleeble-1500D simulator in the temperature range of 1050-1200 °C and strain rate range of 0.001-1 s-1. The relationship between the rheological stress and the strain rate was also studied. The activation energy and the stress and material parameters of the hyperbolic-sine equation were determined from the data obtained, and a processing map was constructed. The results show that the flow stress decreases as the temperature increases and as the strain rate decreases, i.e., the 9Cr ODS steel exhibits positive strain rate sensitivity. Dynamic recrystallization is clearly influenced by both temperature and strain rate. The results of this study may provide a good reference for the selection of hot working parameters for 9Cr ODS steel. The optimum processing domains are at 1200 °C with a strain rate of 1 s-1 and in the range of 1080-1100 °C with strain rates between 0.018 s-1 and 0.05 s-1.
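The hyperbolic-sine (Sellars–Tegart) equation named in this abstract relates strain rate, temperature, and flow stress, usually via the temperature-compensated strain rate Z = ε̇·exp(Q/RT). A sketch of how flow stress is recovered from fitted parameters; the parameter values below are purely illustrative, not the fitted values from the study:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def zener_hollomon(strain_rate, T_kelvin, Q):
    """Temperature-compensated strain rate Z = strain_rate * exp(Q / (R T)),
    with Q the apparent activation energy in J/mol."""
    return strain_rate * math.exp(Q / (R * T_kelvin))

def flow_stress(Z, A, alpha, n):
    """Invert the Sellars-Tegart law  Z = A * [sinh(alpha * sigma)]^n  for sigma (MPa
    if alpha is in MPa^-1)."""
    return math.asinh((Z / A) ** (1.0 / n)) / alpha

# Illustrative (made-up) constants for a test at 1100 C (1373 K), 0.1 s^-1:
Z = zener_hollomon(0.1, 1373.0, 450e3)
sigma = flow_stress(Z, A=1e15, alpha=0.012, n=5.0)
```

Raising the temperature lowers Z and hence the predicted flow stress, matching the trend reported in the abstract.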

  15. Hardware acceleration of lucky-region fusion (LRF) algorithm for image acquisition and processing

    NASA Astrophysics Data System (ADS)

    Maignan, William; Koeplinger, David; Carhart, Gary W.; Aubailly, Mathieu; Kiamilev, Fouad; Liu, J. Jiang

    2013-05-01

"Lucky-region fusion" (LRF) is an image processing technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames, and "fuses" them into a final image with improved quality. In previous research, the LRF algorithm had been implemented on a PC using a compiled programming language. However, the PC usually does not have sufficient processing power to handle the real-time extraction, processing and reduction required when the LRF algorithm is applied not to single still images but to real-time video from fast, high-resolution image sensors. This paper describes a hardware implementation of the LRF algorithm on a Virtex 6 field programmable gate array (FPGA) to achieve real-time video processing. The novelty in our approach is the creation of a "black box" LRF video processing system with a standard camera link input, a user control interface, and a standard camera link output.
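The core LRF idea, selecting the sharpest regions from a stack of short-exposure frames, can be sketched in a few lines. This is a simplified per-pixel version with a local-variance-of-Laplacian sharpness metric; the published algorithm also blends lucky regions smoothly rather than hard-selecting pixels, so treat this only as an illustration of the principle:

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def lucky_region_fusion(frames, win=9):
    """Fuse short-exposure frames by picking, per pixel, the frame whose local
    sharpness (mean squared Laplacian over a win x win window) is highest.

    frames: sequence of 2-D arrays of identical shape."""
    frames = np.asarray(frames, dtype=float)
    # Per-frame sharpness maps: smooth the squared Laplacian over a local window.
    sharp = np.stack([uniform_filter(laplace(f) ** 2, size=win) for f in frames])
    best = np.argmax(sharp, axis=0)  # index of the sharpest frame at each pixel
    return np.take_along_axis(frames, best[None], axis=0)[0]
```

Feeding it one blurred and one sharp frame returns the sharp frame's pixels wherever the sharpness metric favors them, which is the behavior the FPGA pipeline implements in hardware.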

  16. Quantification of Geologic Lineaments by Manual and Machine Processing Techniques. [Landsat satellites - mapping/geological faults

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Moik, J. G.; Shoup, W. C.

    1975-01-01

    The effect of operator variability and subjectivity in lineament mapping and methods to minimize or eliminate these problems by use of several machine preprocessing methods was studied. Mapped lineaments of a test landmass were used and the results were compared statistically. The total number of fractures mapped by the operators and their average lengths varied considerably, although comparison of lineament directions revealed some consensus. A summary map (785 linears) produced by overlaying the maps generated by the four operators shows that only 0.4 percent were recognized by all four operators, 4.7 percent by three, 17.8 percent by two, and 77 percent by one operator. Similar results were obtained in comparing these results with another independent group. This large amount of variability suggests a need for the standardization of mapping techniques, which might be accomplished by a machine aided procedure. Two methods of machine aided mapping were tested, both simulating directional filters.

  17. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units

    NASA Astrophysics Data System (ADS)

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of its high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, the MC simulation-based, GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  18. NREL Develops Accelerated Sample Activation Process for Hydrogen Storage Materials (Fact Sheet)

    SciTech Connect

    Not Available

    2010-12-01

    This fact sheet describes NREL's accomplishments in developing a new sample activation process that reduces the time to prepare samples for measurement of hydrogen storage from several days to five minutes and provides more uniform samples. Work was performed by NREL's Chemical and Materials Science Center.

  19. Insights on Arctic Sea Ice Processes from New Seafloor and Coastline Mapping

    NASA Astrophysics Data System (ADS)

    Nghiem, S. V.; Hall, D. K.; Rigor, I. G.; Clemente-Colon, P.; Li, P.; Neumann, G.

    2014-12-01

    The seafloor can exert a significant control on Arctic sea ice patterns by guiding the distribution of ocean water masses and river discharge in the Arctic Ocean. Satellite observations of sea ice and surface temperature are used together with bathymetry data to understand dynamic and thermodynamic processes of sea ice. In particular, data from satellite radars, including scatterometer and synthetic aperture radar (SAR) instruments, are used to identify and map sea ice with different spatial and temporal resolutions across the Arctic. Data from a satellite spectroradiometer, such as MODIS, are used to accurately measure surface temperature under clear sky conditions. For seafloor measurements, advances have been made with new observations surveyed to modern standards in different regions of the Arctic, enabling the production of an improved bathymetry dataset, such as the International Bathymetric Chart of the Arctic Ocean Version 3.0 (IBCAO 3.0) released in 2012. The joint analyses of these datasets reveal that the seafloor can govern warm- and cold-water distribution and thereby dictate sea ice patterns on the sea surface from small local scales to a large regional scale extending over thousands of km. Satellite results show that warm river waters can intrude into the Arctic Ocean and affect sea ice melt hundreds of km away from the river mouths. The Arctic rivers bring significant heat as their waters come from sources across vast watersheds influenced by warm continental climate effects in summertime. In the case of the Mackenzie River, results from the analysis with the new IBCAO 3.0 indicated that the formation and break-up of landfast sea ice is related to the depth and not the slope of the seafloor. In turn, such ice processes can impact the discharge and distribution of warm river waters and influence the melting of sea ice. Animations of satellite observations of sea ice overlaid on both the old and new versions of IBCAO will be presented to illustrate

  20. Updated mapping and seismic reflection data processing along the Queen Charlotte fault system, southeast Alaska

    NASA Astrophysics Data System (ADS)

    Walton, M. A. L.; Gulick, S. P. S.; Haeussler, P. J.; Rohr, K.; Roland, E. C.; Trehu, A. M.

    2014-12-01

The Queen Charlotte Fault (QCF) is an obliquely convergent strike-slip system that accommodates offset between the Pacific and North America plates in southeast Alaska and western Canada. Two recent earthquakes, including a M7.8 thrust event near Haida Gwaii on 28 October 2012, have sparked renewed interest in the margin and led to further study of how convergent stress is accommodated along the fault. Recent studies have looked in detail at offshore structure, concluding that a change in strike of the QCF at ~53.2 degrees north has led to significant differences in stress and the style of strain accommodation along-strike. We provide updated fault mapping and seismic images to supplement and support these results. One of the highest-quality seismic reflection surveys along the Queen Charlotte system to date, EW9412, was shot aboard the R/V Maurice Ewing in 1994. The survey was last processed to post-stack time migration for a 1999 publication. Due to heightened interest in high-quality imaging along the fault, we have completed updated processing of the EW9412 seismic reflection data and provide prestack migrations with water-bottom multiple reduction. Our new imaging better resolves fault and basement surfaces at depth, as well as the highly deformed sediments within the Queen Charlotte Terrace. In addition to re-processing the EW9412 seismic reflection data, we have compiled and re-analyzed a series of publicly available USGS seismic reflection data that obliquely cross the QCF. Using these data, we are able to provide updated maps of the Queen Charlotte fault system, adding considerable detail along the northernmost QCF where it links up with the Chatham Strait and Transition fault systems. Our results support conclusions that the changing geometry of the QCF leads to fundamentally different convergent stress accommodation north and south of ~53.2 degrees; namely, reactivated splay faults to the north vs. thickening of sediments and the upper crust to the south.

  1. Retinotopic mapping of categorical and coordinate spatial relation processing in early visual cortex.

    PubMed

    van der Ham, Ineke J M; Duijndam, Maarten J A; Raemaekers, Mathijs; van Wezel, Richard J A; Oleksiak, Anna; Postma, Albert

    2012-01-01

Spatial relations are commonly divided into two classes. Categorical relations are abstract relations that define areas of spatial equivalence, whereas coordinate relations are metric and concern exact distances. Categorical and coordinate relation processing are thought to rely on at least partially separate neurocognitive mechanisms, as reflected by differential lateralization patterns, particularly in the parietal cortex. In this study we address this textbook principle from a new angle. We studied retinotopic activation in early visual cortex, as a reflection of attentional distribution, in a spatial working memory task with either a categorical or a coordinate instruction. Participants were asked to memorize a dot position with regard to a central cross, and to indicate whether a subsequent dot position matched the first, either categorically (opposite quadrant of the cross) or coordinately (same distance to the centre of the cross). BOLD responses across the retinotopic maps of V1, V2, and V3 indicate that the spatial distribution of cortical activity differed between the categorical and coordinate instructions throughout the retention interval: a more local focus was found during categorical processing, whereas the focus was more global for coordinate processing. This effect was strongest in V3, approached significance in V2, and was absent in V1. Furthermore, during stimulus encoding the two instructions led to different levels of activation in V3: a stronger increase in activity was found for categorical processing. Together, this is the first demonstration that instructions for specific types of spatial relations may yield distinct attentional patterns that are already reflected in activity early in the visual cortex. PMID:22723872
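The two matching rules described in the task can be stated compactly in code. A hypothetical sketch (function names, coordinate convention with the cross centre at the origin, and the tolerance are mine, not the study's):

```python
import math

def opposite_quadrant_match(p1, p2):
    """Categorical instruction: the probe matches if it lies in the quadrant of
    the central cross opposite the memorized dot, i.e. both coordinates flip sign.
    Points are (x, y) relative to the centre of the cross."""
    return p1[0] * p2[0] < 0 and p1[1] * p2[1] < 0

def same_distance_match(p1, p2, rel_tol=0.05):
    """Coordinate instruction: the probe matches if its distance to the centre of
    the cross equals the memorized dot's distance (tolerance is hypothetical)."""
    return math.isclose(math.hypot(*p1), math.hypot(*p2), rel_tol=rel_tol)
```

Note that the two rules are independent: a probe can match categorically without matching coordinately, and vice versa, which is what lets the task dissociate the two processing modes.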

  2. Searching for optimal setting conditions in technological processes using parametric estimation models and neural network mapping approach: a tutorial.

    PubMed

    Fjodorova, Natalja; Novič, Marjana

    2015-09-01

Engineering optimization is a key goal in manufacturing and service industries. In this tutorial we present the concept of traditional parametric estimation models (Factorial Design (FD) and Central Composite Design (CCD)) for finding optimal setting parameters of technological processes. The 2D mapping method based on Auto Associative Neural Networks (ANN) (particularly, the Feed Forward Bottle Neck Neural Network (FFBN NN)) is then described in comparison with the traditional methods. The FFBN NN mapping technique enables visualization of all optimal solutions in the considered processes by projecting input as well as output parameters into the same coordinates of the 2D map. This supports a more efficient way of improving the performance of existing systems. The two methods were compared on the basis of the optimization of solder paste printing processes as well as the optimization of cheese properties. Applying both methods enables a double check, which increases the reliability of the selected optima or specification limits. PMID:26388367

  3. [Acceleration of osmotic dehydration process through ohmic heating of foods: raspberries (Rubus idaeus)].

    PubMed

    Simpson, Ricardo R; Jiménez, Maite P; Carevic, Erica G; Grancelli, Romina M

    2007-06-01

Raspberries (Rubus idaeus) were osmotically dehydrated by applying a conventional method under the assumption of a homogeneous solution, in a 62% glucose solution at 50 degrees C. Raspberries were also osmotically dehydrated using ohmic heating in a 57% glucose solution at a variable voltage (to maintain the temperature between 40 and 50 degrees C) and an electric field intensity <100 V/cm. Comparing the results from both experiments made it evident that processing time is reduced when the ohmic heating technique is used; in some cases the reduction reached 50%. This is explained by an effect additional to thermal damage that is generated in an ohmic process, known as electroporation. PMID:17992985

  4. Web Based Rapid Mapping of Disaster Areas using Satellite Images, Web Processing Service, Web Mapping Service, Frequency Based Change Detection Algorithm and J-iView

    NASA Astrophysics Data System (ADS)

    Bandibas, J. C.; Takarada, S.

    2013-12-01

    Timely identification of areas affected by natural disasters is very important for a successful rescue and effective emergency relief efforts. This research focuses on the development of a cost effective and efficient system of identifying areas affected by natural disasters, and the efficient distribution of the information. The developed system is composed of 3 modules which are the Web Processing Service (WPS), Web Map Service (WMS) and the user interface provided by J-iView (fig. 1). WPS is an online system that provides computation, storage and data access services. In this study, the WPS module provides online access of the software implementing the developed frequency based change detection algorithm for the identification of areas affected by natural disasters. It also sends requests to WMS servers to get the remotely sensed data to be used in the computation. WMS is a standard protocol that provides a simple HTTP interface for requesting geo-registered map images from one or more geospatial databases. In this research, the WMS component provides remote access of the satellite images which are used as inputs for land cover change detection. The user interface in this system is provided by J-iView, which is an online mapping system developed at the Geological Survey of Japan (GSJ). The 3 modules are seamlessly integrated into a single package using J-iView, which could rapidly generate a map of disaster areas that is instantaneously viewable online. The developed system was tested using ASTER images covering the areas damaged by the March 11, 2011 tsunami in northeastern Japan. The developed system efficiently generated a map showing areas devastated by the tsunami. Based on the initial results of the study, the developed system proved to be a useful tool for emergency workers to quickly identify areas affected by natural disasters.
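The WMS requests described above follow the OGC Web Map Service standard, in which GetMap is a plain HTTP GET with well-defined query parameters. A minimal sketch of building such a request; the server URL, layer name, and bounding box are hypothetical, not the actual GSJ endpoints:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, width, height,
                   crs="EPSG:4326", fmt="image/png", version="1.3.0"):
    """Build a WMS 1.3.0 GetMap request URL using the standard OGC parameters.
    Note: for WMS 1.3.0 with EPSG:4326, the BBOX axis order is lat/lon."""
    params = {
        "SERVICE": "WMS", "VERSION": version, "REQUEST": "GetMap",
        "LAYERS": ",".join(layers), "STYLES": "",
        "CRS": crs, "BBOX": ",".join(map(str, bbox)),
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical server and layer name for an ASTER scene of the 2011 tsunami area:
url = wms_getmap_url("https://example.org/wms", ["aster_20110311"],
                     (37.5, 140.5, 39.0, 142.1), 1024, 768)
```

In the system described, the WPS module would issue requests like this to WMS servers to fetch the imagery used as input to the change detection step.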

  5. GPR data processing for 3D fracture mapping in a marble quarry (Thassos, Greece)

    NASA Astrophysics Data System (ADS)

    Grandjean, G.; Gourry, J. C.

    1996-11-01

    Ground Penetrating Radar (GPR) has been successfully applied to detect and map fractures in marble quarries. The aim was to distinguish quickly intact marketable marble areas from fractured ones in order to improve quarry management. The GPR profiling method was chosen because it is non destructive and quickly provides a detailed image of the subsurface. It was performed in domains corresponding to future working areas in real quarry-exploitation conditions. Field surveying and data processing were adapted to the local characteristics of the fractures: E-W orientation, sub-vertical dip, and karst features. After the GPR profiles had been processed, using methods adapted from seismics (amplitude compensation, filtering and Fourier migration), the interpreted fractures from a 12 × 24 × 15 m zone were incorporated into a 3D model. Due to the low electrical conductivity of the marble, GPR provides penetration depths of about 8 and 15 m, and resolutions of about 1 and 5 cm for frequencies of 900 and 300 MHz respectively. The detection power thus seems to be sufficient to recommend use of this method. As requested by the quarriers, the 3D representation can be used directly by themselves to locate high- or low-quality marble areas. Comparison between the observed surface fractures and the fractures detected using GPR showed reasonable correlation.

  6. Process maps for plasma spray: Part 1: Plasma-particle interactions

    SciTech Connect

Gilmore, Delwyn L.; Neiser, Richard A., Jr.; Wan, Yuepeng; Sampath, Sanjay

    2000-01-26

    This is the first paper of a two part series based on an integrated study carried out at Sandia National Laboratories and the State University of New York at Stony Brook. The aim of the study is to develop a more fundamental understanding of plasma-particle interactions, droplet-substrate interactions, deposit formation dynamics and microstructural development as well as final deposit properties. The purpose is to create models that can be used to link processing to performance. Process maps have been developed for air plasma spray of molybdenum. Experimental work was done to investigate the importance of such spray parameters as gun current, auxiliary gas flow, and powder carrier gas flow. In-flight particle diameters, temperatures, and velocities were measured in various areas of the spray plume. Samples were produced for analysis of microstructures and properties. An empirical model was developed, relating the input parameters to the in-flight particle characteristics. Multi-dimensional numerical simulations of the plasma gas flow field and in-flight particles under different operating conditions were also performed. In addition to the parameters which were experimentally investigated, the effect of particle injection velocity was also considered. The simulation results were found to be in good general agreement with the experimental data.

  7. Effect of Grain Size Distribution on Processing Maps for Isothermal Compression of Inconel 718 Superalloy

    NASA Astrophysics Data System (ADS)

    Wang, Jianguo; Liu, Dong; Hu, Yang; Yang, Yanhui; Zhu, Xinglin

    2016-02-01

Cylindrical specimens of Inconel 718 alloy with three types of grain size distribution were used in compression tests, and processing maps were developed over 940-1040 °C and 0.001-10 s-1. Equiaxed fine grains are the most effective in promoting dynamic softening. For the partially recrystallized microstructure, the peak efficiency of power dissipation occurs at a strain rate of 0.001 s-1 in the temperature range of 1000-1020 °C. To obtain a homogeneous microstructure with fine grains, the partially recrystallized microstructure should be deformed at low temperatures and slow strain rates. The area fraction of the instability domains decreases with increasing strain. The peak efficiency of power dissipation increases with decreasing average grain size. The efficiency of power dissipation is stimulated by the precipitation of δ phase at slow strain rates of 0.001-0.01 s-1, and by the initial deformed substructure at strain rates of 0.1-1 s-1. Equiaxed fine grain is the optimum state for the forging process and dynamic recrystallization. The grain size distribution has only a slight influence on microstructure evolution at high temperatures.
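The "efficiency of power dissipation" plotted in such processing maps comes from the dynamic material model, where the strain-rate sensitivity m = ∂ln σ/∂ln ε̇ gives η = 2m/(m+1). A sketch of computing both from flow-stress data at one temperature; the stress values below are made up for illustration, not taken from the study:

```python
import numpy as np

def dissipation_efficiency(strain_rates, flow_stresses):
    """Dynamic material model quantities underlying a processing map:
    m   = d(ln sigma) / d(ln strain_rate)   (strain-rate sensitivity)
    eta = 2m / (m + 1)                      (power-dissipation efficiency).
    Returns (m, eta) evaluated at each tabulated strain rate."""
    m = np.gradient(np.log(flow_stresses), np.log(strain_rates))
    return m, 2.0 * m / (m + 1.0)

# Hypothetical flow-stress readings (MPa) at one temperature:
rates = np.array([1e-3, 1e-2, 1e-1, 1.0])
sigma = np.array([120.0, 160.0, 210.0, 275.0])
m, eta = dissipation_efficiency(rates, sigma)
```

Repeating this across the temperature grid and contouring η over (T, ε̇) yields the processing map; unstable domains are then flagged with an instability criterion such as Prasad's ξ(ε̇) < 0.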

  8. Hot Deformation Characteristics and Processing Maps of the Cu-Cr-Zr-Ag Alloy

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chai, Zhe; Volinsky, Alex A.; Sun, Huili; Tian, Baohong; Liu, Ping; Liu, Yong

    2016-03-01

The hot deformation behavior of the Cu-Cr-Zr-Ag alloy has been investigated by hot compression tests in the 650-950 °C temperature and 0.001-10 s-1 strain rate ranges using a Gleeble-1500D thermo-mechanical simulator. The microstructure evolution of the alloy during deformation was characterized using optical and transmission electron microscopy. The flow stress decreases with increasing deformation temperature and increases with increasing strain rate. The apparent activation energy for hot deformation of the alloy was 343.23 kJ/mol. A constitutive equation based on the hyperbolic-sine law was established to characterize the flow stress as a function of strain rate and deformation temperature. Processing maps were established based on the dynamic material model. The optimal processing parameters for hot deformation of the Cu-Cr-Zr-Ag alloy are 900-950 °C and strain rates of 0.001-0.1 s-1. The evolution of the DRX microstructure strongly depends on the deformation temperature and the strain rate.

  9. Basic Electropolishing Process Research and Development in Support of Improved Reliable Performance SRF Cavities for the Future Accelerator

    SciTech Connect

Tian, H.; Reece, C.E.; Kelley, M.J.

    2009-05-01

Future accelerators require unprecedented cavity performance, which is strongly influenced by interior surface nanosmoothness. Electropolishing is the technique of choice to be developed for high-field superconducting radiofrequency cavities. Electrochemical impedance spectroscopy (EIS) and related techniques point to an electropolishing mechanism for Nb in a sulfuric and hydrofluoric acid electrolyte that is controlled by a compact surface salt film under F- diffusion-limited mass transport. These and other findings are currently guiding a systematic characterization to form the basis for cavity process optimization with respect to parameters such as flow rate, electrolyte composition, and temperature. This integrated analysis is expected to provide optimum EP parameter sets for controlled, reproducible, and uniform surface leveling of Nb SRF cavities.

  10. Nanoscale mapping of excitonic processes in single-layer MoS2 using tip-enhanced photoluminescence microscopy.

    PubMed

    Su, Weitao; Kumar, Naresh; Mignuzzi, Sandro; Crain, Jason; Roy, Debdulal

    2016-05-19

    In two-dimensional (2D) semiconductors, photoluminescence originating from recombination processes involving neutral electron-hole pairs (excitons) and charged complexes (trions) is strongly affected by the localized charge transfer due to inhomogeneous interactions with the local environment and surface defects. Herein, we demonstrate the first nanoscale mapping of excitons and trions in single-layer MoS2 using the full spectral information obtained via tip-enhanced photoluminescence (TEPL) microscopy along with tip-enhanced Raman spectroscopy (TERS) imaging of a 2D flake. Finally, we show the mapping of the PL quenching centre in single-layer MoS2 with an unprecedented spatial resolution of 20 nm. In addition, our research shows that unlike in aperture-scanning near field microscopy, preferential exciton emission mapping at the nanoscale using TEPL and Raman mapping using TERS can be obtained simultaneously using this method that can be used to correlate the structural and excitonic properties. PMID:27152366

  11. Audit Report on "Waste Processing and Recovery Act Acceleration Efforts for Contact-Handled Transuranic Waste at the Hanford Site"

    SciTech Connect

    2010-05-01

The Department of Energy's Office of Environmental Management's (EM), Richland Operations Office (Richland), is responsible for disposing of the Hanford Site's (Hanford) transuranic (TRU) waste, including nearly 12,000 cubic meters of radioactive contact-handled TRU wastes. Prior to disposing of this waste at the Department's Waste Isolation Pilot Plant (WIPP), Richland must certify that it meets WIPP's waste acceptance criteria. To be certified, the waste must be characterized, screened for prohibited items, treated (if necessary) and placed into a satisfactory disposal container. In a February 2008 amendment to an existing Record of Decision (Decision), the Department announced its plan to ship up to 8,764 cubic meters of contact-handled TRU waste from Hanford and other waste generator sites to the Advanced Mixed Waste Treatment Project (AMWTP) at Idaho's National Laboratory (INL) for processing and certification prior to disposal at WIPP. The Department decided to maximize the use of the AMWTP's automated waste processing capabilities to compact and, thereby, reduce the volume of contact-handled TRU waste. Compaction reduces the number of shipments and permits WIPP to more efficiently use its limited TRU waste disposal capacity. The Decision noted that the use of AMWTP would avoid the time and expense of establishing a processing capability at other sites. In May 2009, EM allocated $229 million of American Recovery and Reinvestment Act of 2009 (Recovery Act) funds to support Hanford's Solid Waste Program, including Hanford's contact-handled TRU waste. Besides providing jobs, these funds were intended to accelerate cleanup in the short term. We initiated this audit to determine whether the Department was effectively using Recovery Act funds to accelerate processing of Hanford's contact-handled TRU waste. Relying on the availability of Recovery Act funds, the Department changed course and approved an alternative plan that could increase costs by about $25 million.

  12. Influence of processing procedure on the quality of Radix Scrophulariae: a quantitative evaluation of the main compounds obtained by accelerated solvent extraction and high-performance liquid chromatography.

    PubMed

    Cao, Gang; Wu, Xin; Li, Qinglin; Cai, Hao; Cai, Baochang; Zhu, Xuemei

    2015-02-01

    An improved high-performance liquid chromatography with diode array detection combined with accelerated solvent extraction method was used to simultaneously determine six compounds in crude and processed Radix Scrophulariae samples. Accelerated solvent extraction parameters such as extraction solvent, temperature, number of cycles, and analysis procedure were systematically optimized. The results indicated that compared with crude Radix Scrophulariae samples, the processed samples had lower contents of harpagide and harpagoside but higher contents of catalpol, acteoside, angoroside C, and cinnamic acid. The established method was sufficiently rapid and reliable for the global quality evaluation of crude and processed herbal medicines. PMID:25431110

  13. Hot deformation characterization of duplex low-density steel through 3D processing map development

    SciTech Connect

    Mohamadizadeh, A.; Zarei-Hanzaki, A.; Abedi, H.R.; Mehtonen, S.; Porter, D.

    2015-09-15

    The high temperature deformation behavior of duplex low-density Fe–18Mn–8Al–0.8C steel was investigated at temperatures in the range of 600–1000 °C. The primary constitutive analysis indicated that the Zener–Hollomon parameter, which represents the coupled effects of temperature and strain rate, varies significantly with the amount of deformation. Accordingly, 3D processing maps were developed considering the effect of strain and were used to determine the safe and unsafe deformation conditions in association with the microstructural evolution. Deformation in efficiency domain I (900–1100 °C, 10⁻²–10⁻³ s⁻¹) was found to be safe at different strains due to the occurrence of dynamic recrystallization in austenite. The safe efficiency domain II (700–900 °C, 1–10⁻¹ s⁻¹), which appeared at a logarithmic strain of 0.4, was characterized by deformation-induced ferrite formation. Scanning electron microscopy revealed that microband formation and crack initiation at ferrite/austenite interphases were the main causes of deformation instability at 600–800 °C, 10⁻²–10⁻³ s⁻¹. The degree of instability was found to decrease with increasing strain due to the uniformity of the microbanded structure obtained at higher strains. Shear band formation at 900–1100 °C, 1–10⁻¹ s⁻¹ was verified by electron backscattered diffraction. Local dynamic recrystallization of austenite and deformation-induced ferrite formation were observed within shear-banded regions as results of flow localization. - Highlights: • The 3D processing map is developed for duplex low-density Fe–Mn–Al–C steel. • The efficiency domains shrink, expand or appear with increasing strain. • The occurrence of DRX and DIFF increases the power efficiency. • Crack initiation
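
    The dynamic-materials-model quantities behind such processing maps can be computed directly from flow-stress data. The sketch below uses entirely hypothetical flow-stress values at one temperature and strain; it evaluates the strain-rate sensitivity m, the power-dissipation efficiency η = 2m/(m+1), and the Prasad instability parameter ξ = ∂ln(m/(m+1))/∂ln ε̇ + m, whose negative values mark unsafe domains:

```python
import numpy as np

# Hypothetical flow-stress data (MPa) at one temperature over log-spaced
# strain rates; real values would come from hot-compression tests.
strain_rates = np.array([1e-3, 1e-2, 1e-1, 1.0])      # s^-1
flow_stress  = np.array([60.0, 85.0, 120.0, 170.0])   # MPa

ln_rate = np.log(strain_rates)
ln_sigma = np.log(flow_stress)

# Strain-rate sensitivity m = d ln(sigma) / d ln(rate), from a quadratic fit
coeffs = np.polyfit(ln_rate, ln_sigma, 2)
m = np.polyval(np.polyder(coeffs), ln_rate)

# Power-dissipation efficiency eta = 2m/(m+1)
eta = 2.0 * m / (m + 1.0)

# Prasad instability parameter xi = d ln(m/(m+1)) / d ln(rate) + m
ln_term = np.log(m / (m + 1.0))
xi = np.gradient(ln_term, ln_rate) + m
```

    Repeating this over a grid of temperatures and strains, and contouring η and ξ, yields the 2D and 3D maps described in the abstract.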

  14. Linear Accelerators

    NASA Astrophysics Data System (ADS)

    Sidorin, Anatoly

    2010-01-01

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  15. Linear Accelerators

    SciTech Connect

    Sidorin, Anatoly

    2010-01-05

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  16. SHARAKU: an algorithm for aligning and clustering read mapping profiles of deep sequencing in non-coding RNA processing

    PubMed Central

    Tsuchiya, Mariko; Amano, Kojiro; Abe, Masaya; Seki, Misato; Hase, Sumitaka; Sato, Kengo; Sakakibara, Yasubumi

    2016-01-01

    Motivation: Deep sequencing of the transcripts of regulatory non-coding RNA generates footprints of post-transcriptional processes. After obtaining sequence reads, the short reads are mapped to a reference genome, and specific mapping patterns, called read mapping profiles, can be detected; these are distinct from random non-functional degradation patterns. These patterns reflect the maturation processes that lead to the production of shorter RNA sequences. Recent next-generation sequencing studies have revealed not only the typical maturation process of miRNAs but also the various processing mechanisms of small RNAs derived from tRNAs and snoRNAs. Results: We developed an algorithm termed SHARAKU to align two read mapping profiles of next-generation sequencing outputs for non-coding RNAs. In contrast with previous work, SHARAKU incorporates the primary and secondary sequence structures into an alignment of read mapping profiles to allow for the detection of common processing patterns. Using a benchmark simulated dataset, SHARAKU exhibited superior performance to previous methods in correctly clustering the read mapping profiles with respect to 5′-end and 3′-end processing from degradation patterns, and in detecting similar processing patterns in deriving the shorter RNAs. Further, using experimental data of small RNA sequencing for the common marmoset brain, SHARAKU succeeded in identifying significant clusters of read mapping profiles for similar processing patterns of small derived RNA families expressed in the brain. Availability and Implementation: The source code of our program SHARAKU is available at http://www.dna.bio.keio.ac.jp/sharaku/, and the simulated dataset used in this work is available at the same link. Accession code: The sequence data from the whole RNA transcripts in the hippocampus of the left brain used in this work is available from the DNA DataBank of Japan (DDBJ) Sequence Read Archive (DRA) under the accession number DRA
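
    SHARAKU additionally incorporates primary and secondary structure, but the basic idea of comparing two read mapping profiles can be illustrated with a plain dynamic-time-warping alignment. This is a simplification with toy profiles, not the actual SHARAKU algorithm:

```python
import numpy as np

def dtw_distance(p, q):
    """Dynamic-time-warping distance between two read-depth profiles."""
    n, m = len(p), len(q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i - 1] - q[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy profiles: a and b share the same 5'-end processing peak, shifted
# by one position; c has a different pattern altogether.
a = np.array([0, 5, 9, 9, 4, 1, 0, 0, 0], float)
b = np.array([0, 0, 5, 9, 9, 4, 1, 0, 0], float)
c = np.array([9, 4, 1, 0, 0, 0, 0, 4, 9], float)

same_pattern = dtw_distance(a, b)   # small: same shape, shifted
diff_pattern = dtw_distance(a, c)   # large: different processing pattern
```

    Clustering the pairwise distances then groups profiles with similar processing patterns, which is the role the full algorithm plays.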

  17. Sediment budget for a polluted Hawaiian reef using hillslope monitoring and process mapping (Invited)

    NASA Astrophysics Data System (ADS)

    Stock, J. D.; Rosener, M.; Schmidt, K. M.; Hanshaw, M. N.; Brooks, B. A.; Tribble, G.; Jacobi, J.

    2010-12-01

    Pollution from coastal watersheds threatens the ecology of the nearshore, including tropical reefs. Suspended sediment concentrations off the reefs of Molokai, Hawaii, chronically exceed a toxic 10 mg/L, threatening reef ecosystems. We hypothesize that historic conversion of hillslope processes from soil creep to overland flow increased both the magnitude and frequency of erosion. To create a process sediment budget, we used surficial and ecological mapping, hillslope and stream gages, and novel sensors to locate, quantify and model the generation of fine sediments polluting the reef. Ecological and geomorphic mapping from LiDAR and multi-spectral imagery located overland flow areas with vegetation cover below a threshold preventing erosion. Here, feral goat grazing exposed volcanic soils whose low matrix hydraulic conductivities (1-25 mm/hour) promote Horton overland flow. We instrumented steep, barren hillslopes with soil moisture sensors, overland flow meters, Parshall flumes, ISCO sediment samplers, and a rain gage, and conducted repeat tripod LiDAR surveys and infiltration tests. To characterize soil resistance to overland flow erosion, we used a Cohesive Strength Meter (CSM) to simulate water stress. At the mouth of the 13.5 km² watershed we used a USGS stream gage with an ISCO sediment sampler to estimate total load. Over 3 years, storms triggered overland flow during rainfall intensities above 10-15 mm/hr. Overland flow meters indicate such flows can be up to 3 cm deep, with a tendency to deepen downslope. CSM tests indicate that these depths are insufficient to erode soils where vegetation is dense, but far above threshold values of 2-3 mm for bare soils. Sediment rating curves for both hillslope and downstream catchment gages show clockwise hysteresis during the first intense storms in the fall, becoming linear later in the season. During fall storms, sediment concentration is often 10× higher at a given stage.
Revised annual lowering rates from experimental hillslopes are
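
    Sediment rating curves of the kind described are typically power laws, C = aQᵇ, fitted in log-log space; clockwise hysteresis is handled by fitting the rising and falling limbs separately. A minimal sketch with made-up rising-limb data:

```python
import numpy as np

# Hypothetical discharge Q (m^3/s) and suspended-sediment concentration
# C (mg/L) pairs from a single storm's rising limb.
Q = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
C = np.array([20.0, 55.0, 150.0, 410.0, 1100.0])

# Fit the rating curve C = a * Q**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
a = np.exp(log_a)
```

    Integrating the fitted curve over the discharge record gives the load estimate that a budget like this one is built from.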

  18. Accelerated evaluation of the robustness of treatment plans against geometric uncertainties by Gaussian processes

    NASA Astrophysics Data System (ADS)

    Sobotta, B.; Söhn, M.; Alber, M.

    2012-12-01

    In order to provide a consistently high quality treatment, it is of great interest to assess the robustness of a treatment plan under the influence of geometric uncertainties. One possible method to implement this is to run treatment simulations for all scenarios that may arise from these uncertainties. These simulations may be evaluated in terms of the statistical distribution of the outcomes (as given by various dosimetric quality metrics) or statistical moments thereof, e.g. mean and/or variance. This paper introduces a method to compute the outcome distribution and all associated values of interest in a very efficient manner. This is accomplished by substituting the original patient model with a surrogate provided by a machine learning algorithm. This Gaussian process (GP) is trained to mimic the behavior of the patient model based on only very few samples. Once trained, the GP surrogate takes the place of the patient model in all subsequent calculations. The approach is demonstrated on two examples. The achieved computational speedup is more than one order of magnitude.
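
    The surrogate idea can be sketched with a tiny NumPy Gaussian-process (RBF-kernel) regression: a hypothetical 1D setup-shift → dose-metric patient model is sampled at only five points, and the trained surrogate then evaluates thousands of error scenarios at negligible cost. All functions and numbers here are illustrative, not the paper's patient model:

```python
import numpy as np

def rbf(x1, x2, ell=2.0):
    """Squared-exponential kernel between two 1D point sets."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

# Hypothetical dosimetric quality metric as a function of a 1D setup
# shift (mm); stands in for the expensive full patient model.
def patient_model(shift_mm):
    return 70.0 - 0.5 * shift_mm**2

# Train the GP surrogate on very few expensive model evaluations ...
x_train = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])
y_train = patient_model(x_train)

K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

# ... then evaluate it cheaply for thousands of geometric-error scenarios.
scenarios = np.random.default_rng(0).normal(0.0, 2.0, 10000)
y_surrogate = rbf(scenarios, x_train) @ alpha

mean_metric = y_surrogate.mean()   # first moment of the outcome distribution
```

    Higher moments (variance, percentiles) come from the same `y_surrogate` array, which is where the order-of-magnitude speedup is realized.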

  19. Rock varnish in New York: An accelerated snapshot of accretionary processes

    NASA Astrophysics Data System (ADS)

    Krinsley, David H.; Dorn, Ronald I.; DiGregorio, Barry E.; Langworthy, Kurt A.; Ditto, Jeffrey

    2012-02-01

    Samples of manganiferous rock varnish collected from fluvial, bedrock outcrop and Erie Barge Canal settings in New York state host a variety of diatom, fungal and bacterial microbial forms that are enhanced in manganese and iron. Use of a Dual-Beam Focused Ion Beam Scanning Electron Microscope to manipulate the varnish in situ reveals microbial forms that would not have otherwise been identified. The relative abundance of Mn-Fe-enriched biotic forms in New York samples is far greater than varnishes collected from warm deserts. Moisture availability has long been noted as a possible control on varnish growth rates, a hypothesis consistent with the greater abundance of Mn-enhancing bioforms. Sub-micron images of incipient varnish formation reveal that varnishing in New York probably starts with the mortality of microorganisms that enhanced Mn on bare mineral surfaces; microbial death results in the adsorption of the Mn-rich sheath onto the rock in the form of filamentous networks. Clay minerals are then cemented by remobilization of the Mn-rich material. Thus, the previously unanswered question of what comes first - clay mineral deposition or enhancement of Mn - can be answered in New York because of the faster rate of varnish growth. In contrast, very slow rates of varnishing seen in warm deserts, of microns per thousand years, make it less likely that collected samples will reveal varnish accretionary processes than samples collected from fast-accreting moist settings.

  20. Hot Compression of TC8M-1: Constitutive Equations, Processing Map, and Microstructure Evolution

    NASA Astrophysics Data System (ADS)

    Yue, Ke; Chen, Zhiyong; Liu, Jianrong; Wang, Qingjiang; Fang, Bo; Dou, Lijun

    2016-06-01

    Hot compression of TC8M-1 was carried out under isothermal working conditions with temperature from 1173 K to 1323 K (900 °C to 1050 °C), strain rate from 0.001 to 10/s, and height reduction from 20 to 80 pct (corresponding true strain from 0.22 to 1.61). Constitutive equations were constructed and apparent activation energies of 149.5 and 617.4 kJ/mol were obtained for deformation in the β and upper α/β phase regions, respectively. Microstructure examination confirmed the dominant role of dynamic recrystallization in the α/β phase region and that of dynamic recovery in the β phase region, with the occurrence of grain boundary sliding at very low strain rate (0.001/s) in both regions. Based on the dynamic materials model, processing maps were constructed, providing optimal domains for hot working at the temperature of 1253 K (980 °C) and the strain rate of 0.01 to 0.1/s, or at 1193 K to 1213 K (920 °C to 940 °C) and 0.001/s. Moreover, our results indicated that the initial temperature non-uniformity along the specimen axis before compression existed and influenced the strain distribution, which contributed to the abnormal oscillations and/or abrupt rise-up of true stress and inhomogeneous deformation.
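
    Constitutive analyses like this one follow the standard Zener–Hollomon / hyperbolic-sine formulation, ε̇ = A[sinh(ασ)]ⁿ exp(−Q/RT), so Z = ε̇ exp(Q/RT) and the peak stress is σ = (1/α)·asinh((Z/A)^(1/n)). The sketch below uses the β-region activation energy reported in the abstract, but A, n and α are purely illustrative values, not the fitted TC8M-1 constants:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def zener_hollomon(strain_rate, T_kelvin, Q):
    """Z = strain_rate * exp(Q / (R T))."""
    return strain_rate * np.exp(Q / (R * T_kelvin))

def peak_stress(Z, A, n, alpha):
    """Invert the sinh law strain_rate = A*sinh(alpha*sigma)**n * exp(-Q/RT)."""
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

Q_beta = 149.5e3                  # J/mol, beta-region value from the abstract
A, n, alpha = 1.0e5, 4.0, 0.01    # hypothetical material constants

Z = zener_hollomon(0.01, 1273.0, Q_beta)   # strain rate 0.01/s at 1273 K
sigma = peak_stress(Z, A, n, alpha)        # predicted peak stress, MPa
```

    Lower temperatures or higher strain rates raise Z and hence the predicted flow stress, which is the coupling the abstract refers to.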

  1. Water quality mapping and assessment, and weathering processes of selected aflaj in Oman.

    PubMed

    Ghrefat, Habes Ahmad; Jamarh, Ahmad; Al-Futaisi, Ahmed; Al-Abri, Badr

    2011-10-01

    There are more than 4,000 aflaj (singular: falaj, a traditional dug water channel) distributed in different regions of Oman. The chemical characteristics of the water in 42 aflaj were studied to evaluate the major ion chemistry; the geochemical processes controlling water composition; and the suitability of the water for drinking, domestic, and irrigation uses. GIS-based maps indicate that the spatial distributions of chemical properties and concentrations vary both within and between regions. The molar ratios of (Ca + Mg)/Total cations, (Na + K)/Total cations, (Ca + Mg)/(Na + K), (Ca + Mg)/(HCO₃ + SO₄), and Na/Cl reveal that the water chemistry of the majority of aflaj is dominated by carbonate weathering and evaporite dissolution, with a minor contribution from silicate weathering. The concentrations of most of the elements were less than the permissible limits of Omani standards and WHO guidelines for drinking water and domestic use and do not generally pose any health or environmental problems. Some aflaj in the Ash Sharqiyah and Muscat regions can be used for irrigation only with slight to severe restriction because of high levels of electrical conductivity, total dissolved solids, chloride, and sodium absorption ratio. PMID:21210214
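
    Molar ratios of this kind are computed directly from reported ion concentrations. A sketch with invented concentrations (mg/L) for a single sample; a Na/Cl ratio near 1, for example, would be consistent with the halite (evaporite) dissolution the study invokes:

```python
# Hypothetical major-ion concentrations (mg/L) for one falaj sample.
sample = {"Ca": 60.0, "Mg": 24.0, "Na": 46.0, "K": 4.0,
          "HCO3": 244.0, "SO4": 96.0, "Cl": 71.0}

molar_mass = {"Ca": 40.08, "Mg": 24.31, "Na": 22.99, "K": 39.10,
              "HCO3": 61.02, "SO4": 96.06, "Cl": 35.45}

# Convert mg/L to mmol/L, then form the diagnostic molar ratios.
mmol = {ion: sample[ion] / molar_mass[ion] for ion in sample}

cations = mmol["Ca"] + mmol["Mg"] + mmol["Na"] + mmol["K"]
ratios = {
    "(Ca+Mg)/cations": (mmol["Ca"] + mmol["Mg"]) / cations,
    "(Ca+Mg)/(HCO3+SO4)": (mmol["Ca"] + mmol["Mg"]) / (mmol["HCO3"] + mmol["SO4"]),
    "Na/Cl": mmol["Na"] / mmol["Cl"],
}
```
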

  2. Processing multi temporal Thematic Mapper data for mapping the submarine shelf of the Island Kerkennah

    NASA Astrophysics Data System (ADS)

    Katlane, Rim; Berges, Jean-Claude; Beltrando, Gérard; Zargouni, Fouad

    2014-05-01

    The Gulf of Gabes in Tunisia is unique among Mediterranean coastal environments for its shallow water extension and tide amplitude. The Kerkennah islands, located in this gulf, are characterized by a -10 m isobath a few kilometers from the shoreline and by a lithology dominated by soft rocks (sandstone and Mio-Pliocene clay). These features, combined with sea level rise and active subsidence, constitute major risk factors. The islands' vulnerability is increased by sebkha (salted lowland) extension, which now accounts for 45% of the total area. Thus, assessing littoral sea depth change is a key issue for risk monitoring. Our study relies on the 30-year archive of the Landsat 5 TM sensor managed by GSFC/NASA. The depth assessment was carried out with an empirical method based on the TM1 channel, which has the best water penetration properties (up to 25 m). We focused on the summer period and selected images from July 1986, August 1987, June 2003 and July 2009. After a first step of preprocessing to ensure data homogeneity, we produced sub-aquatic morphology change maps. The observed features (submarine channel enlargement, cell sinking) are consistent with the hypothesis of the ebb tide as the leading process.
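
    Empirical single-band depth retrieval is commonly done with a log-linear (Lyzenga-style) model, z = a + b·ln(L − L_deep), calibrated against points of known depth. The sketch below runs on synthetic TM1-like radiances; the study's actual calibration method and coefficients are not specified here:

```python
import numpy as np

# Hypothetical calibration: radiance L over points of known depth z,
# following z = a + b*ln(L - L_deep), where L_deep is the deep-water
# radiance (sensor response over optically deep water).
L_deep = 50.0
z_known = np.array([2.0, 5.0, 10.0, 15.0, 20.0])      # depths, m
L = L_deep + 120.0 * np.exp(-0.12 * z_known)          # synthetic radiances

# Fit the log-linear model, then apply it pixel by pixel.
b, a = np.polyfit(np.log(L - L_deep), z_known, 1)

def depth(L_pixel):
    return a + b * np.log(L_pixel - L_deep)
```

    Differencing the depth grids retrieved from the 1986, 1987, 2003 and 2009 scenes would then yield change maps of the kind described.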

  3. Hot Compression of TC8M-1: Constitutive Equations, Processing Map, and Microstructure Evolution

    NASA Astrophysics Data System (ADS)

    Yue, Ke; Chen, Zhiyong; Liu, Jianrong; Wang, Qingjiang; Fang, Bo; Dou, Lijun

    2016-04-01

    Hot compression of TC8M-1 was carried out under isothermal working conditions with temperature from 1173 K to 1323 K (900 °C to 1050 °C), strain rate from 0.001 to 10/s, and height reduction from 20 to 80 pct (corresponding true strain from 0.22 to 1.61). Constitutive equations were constructed and apparent activation energies of 149.5 and 617.4 kJ/mol were obtained for deformation in the β and upper α/β phase regions, respectively. Microstructure examination confirmed the dominant role of dynamic recrystallization in the α/β phase region and that of dynamic recovery in the β phase region, with the occurrence of grain boundary sliding at very low strain rate (0.001/s) in both regions. Based on the dynamic materials model, processing maps were constructed, providing optimal domains for hot working at the temperature of 1253 K (980 °C) and the strain rate of 0.01 to 0.1/s, or at 1193 K to 1213 K (920 °C to 940 °C) and 0.001/s. Moreover, our results indicated that the initial temperature non-uniformity along the specimen axis before compression existed and influenced the strain distribution, which contributed to the abnormal oscillations and/or abrupt rise-up of true stress and inhomogeneous deformation.

  4. Constitutive Modeling of Dynamic Recrystallization Kinetics and Processing Maps of Solution and Aging FGH96 Superalloy

    NASA Astrophysics Data System (ADS)

    Nie, Longfei; Zhang, Liwen; Zhu, Zhi; Xu, Wei

    2013-12-01

    The hot deformation behavior of solution-and-aging FGH96 superalloy was investigated in the deformation temperature range of 1000-1175 °C and the strain rate range of 0.001-5.0/s on a Gleeble-1500 thermo-mechanical simulator. The results show that the true stress-strain curves are typical of the occurrence of dynamic recrystallization (DRX). The values of the activation energy and of the material constants A and n were obtained through the hyperbolic sine relation between the peak stress and the Zener-Hollomon parameter. Optical microscopy observations of the grains showed that the Zener-Hollomon parameter clearly affected the DRX grain size. In addition, the constitutive equations and a DRX kinetics model were built. The processing maps at strains of 0.3 and 0.6 were obtained on the basis of the dynamic materials model. The results predict instability regions at around 1050 °C when the strain rate exceeds 0.01/s.
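
    DRX kinetics models of this kind are usually Avrami-type, X = 1 − exp(−k((ε − ε_c)/ε_p)ⁿ). A sketch with illustrative constants only (k is chosen so that X = 0.5 one peak-strain past the critical strain), not the fitted FGH96 model:

```python
import numpy as np

def drx_fraction(strain, eps_c, eps_p, k=0.693, n=2.0):
    """Avrami-type DRX kinetics: X = 1 - exp(-k*((eps-eps_c)/eps_p)**n)."""
    x = np.clip(strain - eps_c, 0.0, None) / eps_p   # no DRX below eps_c
    return 1.0 - np.exp(-k * x ** n)

# eps_c: critical strain for DRX onset; eps_p: peak strain (hypothetical).
eps = np.linspace(0.0, 1.0, 101)
X = drx_fraction(eps, eps_c=0.1, eps_p=0.3)
```
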

  5. Using surface curvature to map geomorphic process regimes in a bedrock landscape, Henry Mountains, Utah

    NASA Astrophysics Data System (ADS)

    Corbett, S.; Sklar, L. S.; Davis, J.

    2009-12-01

    Linkages between form and process are much better understood in soil-mantled landscapes than in bedrock landscapes, despite the wide occurrence of bedrock landscapes in arid and mountainous terrain. Soil-mantled hillslope topography can be characterized by hillslope gradient and its spatial derivative, which is commonly referred to as curvature and defined as the Laplacian of elevation. Surface curvature can also be quantified using techniques that are invariant to the orientation of the surface. These approaches are useful in many geoscience applications, including structural analysis of folded surfaces within deforming crustal blocks. Here we explore the use of surface curvature of bedrock topography as a metric to identify and map distinct geomorphic process regimes in a landscape devoid of soil cover. Our study site is Simpson Creek, a 2.5 km² watershed on the east flank of Mt. Hillers in the Henry Mountains, Utah, which drains to the Colorado River in Glen Canyon. The land surface is entirely exposed Navajo Sandstone bedrock, with isolated patches of wind-blown sand deposits. The channel network is discontinuous, with alternating reaches of steep, deeply-incised, frequently-potholed slots, and lower-gradient, sand-bedded channels. Hillslope topography is characterized by dome-shaped and sub-linear ridges, and is influenced by prominent structural joints. We calculate two measures of the surface-normal curvature using an ALSM-derived digital elevation model. The mean and Gaussian surface curvatures are the average and product respectively of the magnitudes of the maximum and minimum curvature vectors, obtained by differentiating a polynomial fit at each point in a grid with 1 m spacing. Plots of mean versus Gaussian curvature reveal distinct clusters of landscape elements, which we associate with specific process regimes. In this parameter space, there are four quadrants, classified as dome, basin, synformal saddle and antiformal saddle.
The channel and valley
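
    The mean/Gaussian-curvature classification can be sketched as follows: second derivatives of a gridded elevation surface give H and K, and the sign quadrant of (H, K) assigns each point to dome, basin or saddle. This is a generic differential-geometry computation on a synthetic surface, not the authors' polynomial-fitting procedure; with z up and this sign convention, a dome has H < 0 and K > 0:

```python
import numpy as np

# Synthetic 1 m grid with a dome at the origin (stand-in for an ALSM DEM).
x, y = np.meshgrid(np.arange(-20, 21.0), np.arange(-20, 21.0))
z = 10.0 * np.exp(-(x**2 + y**2) / 200.0)

zy, zx = np.gradient(z)          # first derivatives (axis 0 = y, axis 1 = x)
zyy, zyx = np.gradient(zy)
zxy, zxx = np.gradient(zx)

# Mean (H) and Gaussian (K) curvature of the surface z(x, y).
p, q = zx, zy
H = ((1 + q**2) * zxx - 2 * p * q * zxy + (1 + p**2) * zyy) \
    / (2 * (1 + p**2 + q**2) ** 1.5)
K = (zxx * zyy - zxy**2) / (1 + p**2 + q**2) ** 2

def classify(H, K):
    """Quadrants of (H, K) space: dome, basin, antiformal/synformal saddle."""
    if K > 0:
        return "dome" if H < 0 else "basin"
    return "antiformal saddle" if H < 0 else "synformal saddle"
```

    Applying `classify` over a whole DEM produces the process-regime map; the dome apex here (grid center, index [20, 20]) falls in the dome quadrant.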

  6. Tools for developing a quality management program: proactive tools (process mapping, value stream mapping, fault tree analysis, and failure mode and effects analysis).

    PubMed

    Rath, Frank

    2008-01-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings. PMID:18406925

  7. Tools for Developing a Quality Management Program: Proactive Tools (Process Mapping, Value Stream Mapping, Fault Tree Analysis, and Failure Mode and Effects Analysis)

    SciTech Connect

    Rath, Frank

    2008-05-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.

  8. Side-scan sonar mapping: Pseudo-real-time processing and mosaicking techniques

    SciTech Connect

    Danforth, W.W.; Schwab, W.C.; O'Brien, T.F. ); Karl, H. )

    1990-05-01

    The US Geological Survey (USGS) surveyed 1,000 km² of the continental shelf off San Francisco during a 17-day cruise, using a 120-kHz side-scan sonar system, and produced a digitally processed sonar mosaic of the survey area. The data were processed and mosaicked in real time using software developed at the Lamont-Doherty Geological Observatory and modified by the USGS, a substantial task due to the enormous amount of data produced by high-resolution side-scan systems. Approximately 33 megabytes of data were acquired every 1.5 hr. The real-time sonar images were displayed on a PC-based workstation and the data were transferred to a UNIX minicomputer where the sonar images were slant-range corrected, enhanced using an averaging method of desampling and a linear-contrast stretch, merged with navigation, geographically oriented at a user-selected scale, and finally output to a thermal printer. The hard-copy output was then used to construct a mosaic of the survey area. The final product of this technique is a UTM-projected map-mosaic of sea-floor backscatter variations, which could be used, for example, to locate appropriate sites for sediment sampling to ground truth the sonar imagery while still at sea. More importantly, reconnaissance surveys of this type allow for the analysis and interpretation of the mosaic during a cruise, thus greatly reducing the preparation time needed for planning follow-up studies of a particular area.
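
    Slant-range correction, the first processing step mentioned, assumes a flat seafloor: the across-track ground range of each return is √(slant² − altitude²). A minimal sketch (not the Lamont-Doherty/USGS code itself):

```python
import numpy as np

def slant_to_ground(slant_range, towfish_altitude):
    """Flat-seafloor slant-range correction for side-scan sonar returns."""
    sr = np.asarray(slant_range, dtype=float)
    alt = float(towfish_altitude)
    # Returns arriving before the bottom echo (sr < alt) map to range 0.
    return np.sqrt(np.clip(sr**2 - alt**2, 0.0, None))

# A return at 50 m slant range with the fish 30 m off the bottom
# actually lies 40 m to the side (3-4-5 triangle).
ground = slant_to_ground(50.0, 30.0)   # -> 40.0
```

    Resampling each ping onto this corrected range axis is what removes the characteristic near-nadir compression before mosaicking.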

  9. BESIII Physics Data Storing and Processing on HBase and MapReduce

    NASA Astrophysics Data System (ADS)

    LEI, Xiaofeng; Li, Qiang; Kan, Bowen; Sun, Gongxing; Sun, Zhenyu

    2015-12-01

    In the past years, we have successfully applied Hadoop to high-energy physics analysis. Although it has improved the efficiency of data analysis and reduced the cost of cluster building, there is still room for optimization: inflexible pre-selection, inefficient random data reading, and an I/O bottleneck caused by the Fuse layer used to access HDFS. In order to change this situation, this paper presents a new platform for storing and analysing high-energy physics data. The data structure is changed from DST tree-like files to HBase according to the features of the data and of the analysis processes, since HBase is more suitable for random data reading than DST files and allows HDFS to be accessed directly. Several optimization measures are taken to obtain good performance. A customized protocol is defined for data serializing and deserializing to decrease the storage space used in HBase. To make full use of the locality of data storage in HBase, a new MapReduce model and a new split policy for HBase regions are proposed. In addition, a dynamic, pluggable, easy-to-use TAG (event metadata) based pre-selection subsystem is established. It can help physicists filter out up to 99.9% of uninteresting data if the conditions are set properly. This means that substantial I/O resources can be saved, CPU usage can be improved, and the time consumed by data analysis can be reduced. Finally, several use cases were designed; the test results show that the new platform performs well, running 3.4 times faster with pre-selection and 20% faster without pre-selection, and that the new platform is stable and scalable.
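
    TAG-based pre-selection works by scanning small metadata records and fetching full event records from the store only for rows that can pass the physics cuts. A schematic sketch; the field names, cuts and row-key scheme are invented for illustration, not the BESIII schema:

```python
# Hypothetical event metadata (TAG) records: small, cheap to scan, keyed
# by the location of the full event record in the event store.
tags = [
    {"row_key": "run01_evt%06d" % i,
     "n_charged": i % 7,
     "total_energy": 0.5 * (i % 9)}
    for i in range(1000)
]

# Pre-selection predicate: skip reading any event that cannot pass.
def preselect(tag):
    return tag["n_charged"] >= 4 and tag["total_energy"] > 2.0

selected_keys = [t["row_key"] for t in tags if preselect(t)]
fraction_skipped = 1.0 - len(selected_keys) / len(tags)
```

    Only `selected_keys` would then be read in full, which is where the I/O saving comes from.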

  10. Assessment of hydrothermal processes associated with Proterozoic mineral systems in Finland using self-organizing maps.

    NASA Astrophysics Data System (ADS)

    Lerssi, J.; Sorjonen-Ward, P.; Fraser, S. J.; Ruotsalainen, A.

    2009-04-01

    An increasingly urgent challenge in mineral system analysis is to extract relevant information from diverse datasets, and to effectively discriminate between "hydrothermal noise" and the alteration and structures that may relate to significant mineralization potential. The interpretation of geophysical data is notorious for the problem of ambiguity in defining source dimensions and geometry. An additional issue, which also applies to geochemical and hyperspectral datasets in terrain that has been overprinted by several tectonic, metamorphic and hydrothermal events, is that while anomalies represent the sum of geological processes affecting an area, we are usually interested in extracting the signals diagnostic of a mineralizing event. Spatial analysis using weights of evidence, fuzzy logic and neural networks has been widely applied to mineral prospectivity assessment in recent years. Here, however, we present an alternative, albeit complementary, approach based on the concept of self-organizing maps [1], in which natural patterns in large, unstructured datasets are derived, correlated and readily visualized; this provides an alternative route to the analysis of geophysical and geochemical anomalies and their integration with other geological data. We have applied SiroSOM software to airborne and ground magnetic, EM and radiometric data for two mutually adjacent areas in eastern Finland that have superficially similar structural architecture and geophysical expression, yet differ significantly in terms of mineral system character: (1) the Outokumpu Cu-Co-Zn-Ni system, hosted by metamorphosed serpentinites and their hydrothermal derivatives, which are usually highly magnetic due to both magnetite and pyrrhotite; (2) the Hammaslahti Cu-Zn system, hosted by coarse-clastic turbidites intercalated with mafic volcanics and graphitic pelites having characteristically intense magnetic and EM responses. Although the initial stage of the analysis is unsupervised, ongoing iteration and
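
    The core of the self-organizing-map approach can be sketched in a few lines of NumPy: each sample updates its best-matching node and that node's neighbors, with the learning rate and neighborhood radius decaying over training. This is a toy 1-D map on two synthetic clusters standing in for, e.g., the responses of two rock units, not SiroSOM itself:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "geophysical" samples: two clusters of 2D feature vectors.
data = np.vstack([rng.normal([0.0, 0.0], 0.1, (100, 2)),
                  rng.normal([1.0, 1.0], 0.1, (100, 2))])

# A 1-D self-organizing map with 10 nodes.
nodes = rng.uniform(0.0, 1.0, (10, 2))
positions = np.arange(10)

for epoch in range(50):
    lr = 0.5 * (1.0 - epoch / 50.0)            # decaying learning rate
    radius = 3.0 * (1.0 - epoch / 50.0) + 0.5  # decaying neighborhood radius
    for x in data[rng.permutation(len(data))]:
        bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))  # best-matching unit
        h = np.exp(-((positions - bmu) ** 2) / (2 * radius ** 2))
        nodes += lr * h[:, None] * (x - nodes)

# After training, nodes should concentrate near the two cluster centers.
d0 = np.min(((nodes - [0.0, 0.0]) ** 2).sum(axis=1))
d1 = np.min(((nodes - [1.0, 1.0]) ** 2).sum(axis=1))
```

    Mapping each field sample to its best-matching node then gives the cluster labels that are inspected and iterated on in the supervised stage.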

  11. Mapping mass movement processes using terrestrial LIDAR: a swift mechanism for hazard and disaster risk assessment

    NASA Astrophysics Data System (ADS)

    Garnica-Peña, Ricardo; Murillo-García, Franny; Alcántara-Ayala, Irasema

    2014-05-01

    The impact of disasters associated with mass movement processes has increased in the past decades. Whether triggered by earthquakes, volcanic activity or rainfall, mass movement processes have affected people, infrastructure, economic activities and the environment in different parts of the world. Extensive damage is particularly linked to rainfall-induced landslides due to the occurrence of tropical storms, hurricanes, and the combination of different meteorological phenomena over exposed vulnerable communities. Therefore, landslide susceptibility analysis and hazard and risk assessments are considered significant mechanisms to lessen the impact of disasters. Ideally, these procedures ought to be carried out before disasters take place. However, under intense or persistent periods of rainfall, the evaluation of potentially unstable slopes becomes a critical issue. Such evaluations are constrained by the availability of resources, capabilities, and scientific and technological tools. Among them, remote sensing has proved to be a valuable tool to evaluate areas affected by mass movement processes during the post-disaster stage. Nonetheless, the high cost of imagery acquisition inhibits its wide use. High-resolution topographic field surveys consequently turn out to be an essential approach to address landslide evaluation needs. In this work, we present the evaluation and mapping of a series of mass movement processes induced by hurricane Ingrid in September 2013 in Teziutlán, Puebla, México, a municipality situated 265 km northeast of Mexico City. Geologically, Teziutlán is characterised by the presence, in the North, of siltstones and conglomerates of the Middle Jurassic, whereas the central and Southern sectors consist of volcanic deposits of various types: andesitic tuffs of Tertiary age, and basalts, rhyolitic tuffs and ignimbrites from the Quaternary. Major relief structures are formed by the accumulation of volcanic material; lava domes, partially buried

  12. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    NASA Astrophysics Data System (ADS)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. 
In particular

  13. Higher Education Planning for a Strategic Goal with a Concept Mapping Process at a Small Private College

    ERIC Educational Resources Information Center

    Driscoll, Deborah P.

    2010-01-01

    Faculty, staff, and administrators at a small independent college determined that planning with a Concept Mapping process efficiently produced strategic thinking and action plans for the accomplishment of a strategic goal to expand experiential learning within the curriculum. One year into a new strategic plan, the college enjoyed enrollment…

  14. Urban land use mapping by machine processing of ERTS-1 multispectral data: A San Francisco Bay area example

    NASA Technical Reports Server (NTRS)

    Ellefsen, R.; Swain, P. H.; Wray, J. R.

    1973-01-01

    A study to develop computer-produced urban land use maps from satellite multispectral scanner data is reported. Data processing is discussed along with results for the San Francisco Bay area, which was chosen as the test area.

  15. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-01

    Using a grid-based method to search for critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with the CPU implementation, we find a large difference in elapsed time, with the GPU always faster. We tested two GPUs, one designed for video games and the other for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other in HPC systems, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, for any of the tested GPUs and CPUs. We found that a GPU designed for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, in particular when single precision is used. PMID:25345784
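
    The grid-based search is straightforward to sketch: every grid cell is tested independently for a vanishing density gradient, and that per-cell independence is precisely what makes the search map well onto thousands of GPU threads. A minimal NumPy illustration of the cell test (the function name and tolerance are ours, not the authors'; a real implementation would refine each candidate, e.g. with a Newton step):

```python
import numpy as np

def critical_point_candidates(rho, spacing, tol=1e-3):
    """Flag grid cells where |grad rho| is near zero.

    rho: 3-D array of electron density sampled on a uniform grid.
    Each cell is tested independently, so on a GPU one thread can
    evaluate one cell; here NumPy vectorizes the same test.
    """
    gx, gy, gz = np.gradient(rho, spacing)          # central differences
    gnorm = np.sqrt(gx**2 + gy**2 + gz**2)
    mask = gnorm < tol * gnorm.max()                # near-vanishing gradient
    return list(zip(*np.where(mask)))
```

For a single Gaussian density the cell at the origin is flagged, as expected for its one maximum.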

  16. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    SciTech Connect

    Badal, Andreu; Badano, Aldo

    2009-11-15

    Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speedup factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
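
    The kernel of such a simulation is sampling exponential free paths against the medium's attenuation coefficient, one independent photon at a time, which is why the workload parallelizes so well on a GPU. A toy CPU version of that kernel for a homogeneous slab (our own sketch, not the PENELOPE/CUDA code):

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons, rng):
    """Estimate the fraction of photons crossing a homogeneous slab
    without interacting.  Free paths are drawn from the exponential
    distribution with attenuation coefficient mu; the analytic answer
    is exp(-mu * thickness)."""
    survived = 0
    for _ in range(n_photons):
        # inverse-transform sampling; 1 - u avoids log(0)
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness:
            survived += 1
    return survived / n_photons
```

Each photon history is independent, so on a GPU every thread would run this loop body for its own photon and the results would be reduced at the end.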

  17. Mass transport perspective on an accelerated exclusion process: Analysis of augmented current and unit-velocity phases

    NASA Astrophysics Data System (ADS)

    Dong, Jiajia; Klumpp, Stefan; Zia, R. K. P.

    2013-02-01

    In an accelerated exclusion process (AEP), each particle can “hop” to its adjacent site if empty as well as “kick” the frontmost particle when joining a cluster of size ℓ⩽ℓmax. With various choices of the interaction range, ℓmax, we find that the steady state of AEP can be found in a homogeneous phase with augmented currents (AC) or a segregated phase with holes moving at unit velocity (UV). Here we present a detailed study on the emergence of the novel phases, from two perspectives: the AEP and a mass transport process (MTP). In the latter picture, the system in the UV phase is composed of a condensate in coexistence with a fluid, while the transition from AC to UV can be regarded as condensation. Using Monte Carlo simulations, exact results for special cases, and analytic methods in a mean field approach (within the MTP), we focus on steady state currents and cluster sizes. Excellent agreement between data and theory is found, providing an insightful picture for understanding this model system.
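
    The hop-and-kick dynamics can be simulated directly. The sketch below is our simplified reading of the AEP rules on a ring (particles chosen at random; subtleties of clusters wrapping the whole ring are ignored):

```python
import random

def aep_step(occ, lmax, rng):
    """One attempted move of an accelerated exclusion process on a ring.

    A randomly chosen particle hops forward if its neighbour is empty;
    if the hop attaches it to a cluster of size l <= lmax, the cluster's
    frontmost particle is 'kicked' to the empty site beyond the cluster.
    Returns the number of particles that moved (0, 1, or 2)."""
    n = len(occ)
    sites = [i for i, o in enumerate(occ) if o]
    s = rng.choice(sites)
    nxt = (s + 1) % n
    if occ[nxt]:
        return 0                      # blocked: plain exclusion
    occ[s], occ[nxt] = 0, 1           # hop
    moved = 1
    # measure the cluster immediately ahead of the hopped particle
    j, l = (nxt + 1) % n, 0
    while occ[j] and l < n:
        j, l = (j + 1) % n, l + 1
    if 1 <= l <= lmax:                # joined a cluster: kick its front
        front = (nxt + l) % n
        occ[front], occ[j] = 0, 1     # j is the empty site beyond it
        moved += 1
    return moved
```

Exclusion (at most one particle per site) and particle number are conserved by construction, and the per-step `moved` count accumulates into the current.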

  18. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each two regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
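
    The comparison statistics used in this study are simple to reproduce. A sketch with hypothetical helper names, operating on per-class pixel counts taken from the two map images:

```python
import math

def percent_agreement(a, b):
    """Per-class agreement between two maps' pixel counts:
    the smaller count as a percentage of the larger."""
    return [100.0 * min(x, y) / max(x, y) for x, y in zip(a, b)]

def pearson_r(a, b):
    """Pearson correlation coefficient between two count vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)
```

For counts such as [100, 200, 300] versus [110, 190, 310] the per-class agreements are roughly 91%, 95% and 97%, and Pearson's r is close to +1, the pattern the study reports.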

  19. Accelerators and the Accelerator Community

    SciTech Connect

    Malamud, Ernest; Sessler, Andrew

    2008-06-01

    In this paper, standing back--looking from afar--and adopting a historical perspective, the field of accelerator science is examined. How it grew, what are the forces that made it what it is, where it is now, and what it is likely to be in the future are the subjects explored. Clearly, a great deal of personal opinion is invoked in this process.

  20. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing, Sky Maps, and Basic Results

    NASA Technical Reports Server (NTRS)

    Weiland, J.L.; Hill, R.S.; Odegard, N.; Larson, D.; Bennett, C.L.; Dunkley, J.; Jarosik, N.; Page, L.; Spergel, D.N.; Halpern, M.; Meyer, S.S.; Tucker, G.S.; Wright, E.L.

    2008-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) is a Medium-Class Explorer (MIDEX) satellite aimed at elucidating cosmology through full-sky observations of the cosmic microwave background (CMB). The WMAP full-sky maps of the temperature and polarization anisotropy in five frequency bands provide our most accurate view to date of conditions in the early universe. The multi-frequency data facilitate the separation of the CMB signal from foreground emission arising both from our Galaxy and from extragalactic sources. The CMB angular power spectrum derived from these maps exhibits a highly coherent acoustic peak structure which makes it possible to extract a wealth of information about the composition and history of the universe, as well as the processes that seeded the fluctuations. WMAP data have played a key role in establishing ΛCDM as the new standard model of cosmology (Bennett et al. 2003; Spergel et al. 2003; Hinshaw et al. 2007; Spergel et al. 2007): a flat universe dominated by dark energy, supplemented by dark matter and atoms with density fluctuations seeded by a Gaussian, adiabatic, nearly scale-invariant process. The basic properties of this universe are determined by five numbers: the density of matter, the density of atoms, the age of the universe (or equivalently, the Hubble constant today), the amplitude of the initial fluctuations, and their scale dependence. By accurately measuring the first few peaks in the angular power spectrum, WMAP data have enabled the following accomplishments: Showing that dark matter must be non-baryonic and interact only weakly with atoms and radiation. The WMAP measurement of the dark matter density puts important constraints on supersymmetric dark matter models and on the properties of other dark matter candidates. With five years of data and a better determination of our beam response, this measurement has been significantly improved. Precise determination of the density of atoms in the universe. 
The agreement between

  1. Monitoring of pigmented and wooden surfaces in accelerated ageing processes by FT-Raman spectroscopy and multivariate control charts.

    PubMed

    Marengo, Emilio; Robotti, Elisa; Liparota, Maria Cristina; Gennaro, Maria Carla

    2004-07-01

    Two of the most suitable analytical techniques used in the field of cultural heritage are NIR (near-infrared) and Raman spectroscopy. FT-Raman spectroscopy coupled to multivariate control charts is applied here for the development of a new method for monitoring the conservation state of pigmented and wooden surfaces. These materials were exposed to different accelerated ageing processes in order to evaluate the effect of the applied treatments on their surfaces. In this work, a new approach to the monitoring of cultural heritage, based on the principles of statistical process control (SPC), has been developed: the conservation state of samples simulating works-of-art has been treated like an industrial process, monitored with multivariate control charts owing to the complexity of the spectroscopic data collected. The Raman spectra were analysed by principal component analysis (PCA) and the relevant principal components (PCs) were used for constructing multivariate Shewhart and cumulative sum (CUSUM) control charts. These tools were successfully applied for the identification of relevant modifications occurring on the surfaces. CUSUM charts, however, proved to be more effective in identifying the exact beginning of the applied treatment. In the case of wooden boards, where a sufficient number of PCs were available, simultaneous scores monitoring and residuals tracking (SMART) charts were also investigated. The exposure to a basic attack and to high temperatures produced deep changes in the wooden samples, clearly identified by the multivariate Shewhart, CUSUM and SMART charts. A change on the pigment surface was detected after exposure to an acidic solution and to UV light, while no effect was identified on the painted surface after exposure to natural atmospheric events. PMID:18969526
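
    The chart-construction pipeline, PCA scores followed by a tabular CUSUM, can be sketched as follows (the reference value k and decision limit h below are illustrative choices, not the authors'):

```python
import numpy as np

def pc1_scores(spectra):
    """Project mean-centred spectra onto the first principal component."""
    X = spectra - spectra.mean(axis=0)
    # SVD of the centred data: the first right-singular vector is PC1
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]

def tabular_cusum(x, target, k, h):
    """One-sided tabular CUSUM pair; returns the index of the first
    sample whose cumulative deviation from the target exceeds h,
    or None if the series stays in control."""
    c_plus = c_minus = 0.0
    for i, xi in enumerate(x):
        c_plus = max(0.0, c_plus + (xi - target) - k)
        c_minus = max(0.0, c_minus + (target - xi) - k)
        if c_plus > h or c_minus > h:
            return i
    return None
```

On synthetic spectra where an "ageing" band appears partway through the series, the CUSUM flags the very first altered spectrum, which matches the paper's observation that CUSUM charts pinpoint the beginning of a treatment.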

  2. Fluid expulsion sites on the Cascadia accretionary prism: mapping diagenetic deposits with processed GLORIA imagery

    USGS Publications Warehouse

    Carson, Bobb; Seke, Erol; Paskevich, Valerie F.; Holmes, Mark L.

    1994-01-01

    Point-discharge fluid expulsion on accretionary prisms is commonly indicated by diagenetic deposition of calcium carbonate cements and gas hydrates in near-surface (<10 m below seafloor; mbsf) hemipelagic sediment. The contrasting clastic and diagenetic lithologies should be apparent in side scan images. However, sonar also responds to variations in bottom slope, so unprocessed images mix topographic and lithologic information. We have processed GLORIA imagery from the Oregon continental margin to remove topographic effects. A synthetic side scan image was created initially from Sea Beam bathymetric data and then was subtracted iteratively from the original GLORIA data until topographic features disappeared. The residual image contains high-amplitude backscattering that we attribute to diagenetic deposits associated with fluid discharge, based on submersible mapping, Ocean Drilling Program drilling, and collected samples. Diagenetic deposits are concentrated (1) near an out-of-sequence thrust fault on the second ridge landward of the base of the continental slope, (2) along zones characterized by deep-seated strike-slip faults that cut transversely across the margin, and (3) in undeformed Cascadia Basin deposits which overlie incipient thrust faults seaward of the toe of the prism. There is no evidence of diagenetic deposition associated with the frontal thrust that rises from the décollement. If the décollement is an important aquifer, apparently the fluids are passed either to the strike-slip faults which intersect the décollement or to the incipient faults in Cascadia Basin for expulsion. Diagenetic deposits seaward of the prism toe probably consist dominantly of gas hydrates
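
    One way to sketch the iterative subtraction is as repeated regression: fit the residual image against the bathymetry-derived synthetic image and subtract a fraction of the fitted component until the two are uncorrelated. This is our simplified variant of the idea, not the authors' exact procedure:

```python
import numpy as np

def remove_topography(observed, synthetic, iters=25, step=0.5):
    """Iteratively subtract the component of a sonar image explained by
    a bathymetry-derived synthetic image, leaving a residual in which
    high backscatter reflects lithology rather than slope."""
    residual = observed.astype(float).copy()
    syn = synthetic - synthetic.mean()
    for _ in range(iters):
        r = residual - residual.mean()
        beta = (r * syn).sum() / (syn * syn).sum()  # least-squares fit
        if abs(beta) < 1e-9:
            break
        residual -= step * beta * syn               # partial subtraction
    return residual
```

With a partial step the topographic component decays geometrically across iterations; what survives in the residual is the backscatter anomaly that the synthetic image cannot explain.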

  3. Mapping acoustic emissions from hydraulic fracture treatments using coherent array processing: Concept

    SciTech Connect

    Harris, D.B.; Sherwood, R.J.; Jarpe, S.P.; Harben, P.E.

    1991-09-01

    Hydraulic fracturing is a widely-used well completion technique for enhancing the recovery of gas and oil in low-permeability formations. Hydraulic fracturing consists of pumping fluids into a well under high pressure (1000--5000 psi) to wedge-open and extend a fracture into the producing formation. The fracture acts as a conduit for gas and oil to flow back to the well, significantly increasing communication with larger volumes of the producing formation. A considerable amount of research has been conducted on the use of acoustic (microseismic) emission to delineate fracture growth. The use of transient signals to map the location of discrete sites of emission along fractures has been the focus of most research on methods for delineating fractures. These methods depend upon timing the arrival of compressional (P) or shear (S) waves from discrete fracturing events at one or more clamped geophones in the treatment well or in adjacent monitoring wells. Using a propagation model, the arrival times are used to estimate the distance from each sensor to the fracturing event. Coherent processing methods appear to have sufficient resolution in the 75 to 200 Hz band to delineate the extent of fractures induced by hydraulic fracturing. The medium velocity structure must be known with a 10% accuracy or better and no major discontinuities should be undetected. For best results, the receiving array must be positioned directly opposite the perforations (same depths) at a horizontal range of 200 to 400 feet from the region to be imaged. Sources of acoustic emission may be detectable down to a single-sensor SNR of 0.25 or somewhat less. These conclusions are limited by the assumptions of this study: good coupling to the formation, acoustic propagation, and accurate knowledge of the velocity structure.

  4. Regional assessment of boreal forest productivity using an ecological process model and remote sensing parameter maps.

    PubMed

    Kimball, J. S.; Keyser, A. R.; Running, S. W.; Saatchi, S. S.

    2000-06-01

    An ecological process model (BIOME-BGC) was used to assess boreal forest regional net primary production (NPP) and response to short-term, year-to-year weather fluctuations based on spatially explicit, land cover and biomass maps derived by radar remote sensing, as well as soil, terrain and daily weather information. Simulations were conducted at a 30-m spatial resolution, over a 1205 km(2) portion of the BOREAS Southern Study Area of central Saskatchewan, Canada, over a 3-year period (1994-1996). Simulations of NPP for the study region were spatially and temporally complex, averaging 2.2 (+/- 0.6), 1.8 (+/- 0.5) and 1.7 (+/- 0.5) Mg C ha(-1) year(-1) for 1994, 1995 and 1996, respectively. Spatial variability of NPP was strongly controlled by the amount of aboveground biomass, particularly photosynthetic leaf area, whereas biophysical differences between broadleaf deciduous and evergreen coniferous vegetation were of secondary importance. Simulations of NPP were strongly sensitive to year-to-year variations in seasonal weather patterns, which influenced the timing of spring thaw and deciduous bud-burst. Reductions in annual NPP of approximately 17 and 22% for 1995 and 1996, respectively, were attributed to 3- and 5-week delays in spring thaw relative to 1994. Boreal forest stands with greater proportions of deciduous vegetation were more sensitive to the timing of spring thaw than evergreen coniferous stands. Similar relationships were found by comparing simulated snow depth records with 10-year records of aboveground NPP measurements obtained from biomass harvest plots within the BOREAS region. These results highlight the importance of sub-grid scale land cover complexity in controlling boreal forest regional productivity, the dynamic response of the biome to short-term interannual climate variations, and the potential implications of climate change and other large-scale disturbances. PMID:12651512

  5. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and getting the students close to modern laboratory techniques.

  6. PARTICLE ACCELERATOR

    DOEpatents

    Teng, L.C.

    1960-01-19

    A combination of two accelerators, a cyclotron and a ring-shaped accelerator which has a portion disposed tangentially to the cyclotron, is described. Means are provided to transfer particles from the cyclotron to the ring accelerator including a magnetic deflector within the cyclotron, a magnetic shield between the ring accelerator and the cyclotron, and a magnetic inflector within the ring accelerator.

  7. EMITTING ELECTRONS SPECTRA AND ACCELERATION PROCESSES IN THE JET OF Mrk 421: FROM THE LOW STATE TO THE GIANT FLARE STATE

    SciTech Connect

    Yan Dahai; Zhang Li; Fan Zhonghui; Zeng Houdun; Yuan Qiang

    2013-03-10

    We investigate the electron energy distributions (EEDs) and the acceleration processes in the jet of Mrk 421 through fitting the spectral energy distributions (SEDs) in different active states in the frame of a one-zone synchrotron self-Compton model. After assuming two possible EEDs formed in different acceleration models: the shock-accelerated power law with exponential cut-off (PLC) EED and the stochastic-turbulence-accelerated log-parabolic (LP) EED, we fit the observed SEDs of Mrk 421 in both low and giant flare states using the Markov Chain Monte Carlo method which constrains the model parameters in a more efficient way. The results from our calculations indicate that (1) the PLC and LP models give comparably good fits for the SED in the low state, but the variations of model parameters from low state to flaring can be reasonably explained only in the case of the PLC in the low state; and (2) the LP model gives better fits compared to the PLC model for the SED in the flare state, and the intra-day/night variability observed at GeV-TeV bands can be accommodated only in the LP model. The giant flare may be attributed to the stochastic turbulence re-acceleration of the shock-accelerated electrons in the low state. Therefore, we may conclude that shock acceleration is dominant in the low state, while stochastic turbulence acceleration is dominant in the flare state. Moreover, our result shows that the extrapolated TeV spectra from the best-fit SEDs from optical through GeV with the two EEDs are different. It should be considered with caution when such extrapolated TeV spectra are used to constrain extragalactic background light models.
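
    The two candidate EEDs are standard parameterizations and easy to write down; a sketch with illustrative parameter names:

```python
import numpy as np

def eed_plc(gamma, n0, p, gamma_c):
    """Shock-type EED: power law with exponential cut-off,
    N(g) = n0 * g**(-p) * exp(-g / g_c)."""
    return n0 * gamma ** (-p) * np.exp(-gamma / gamma_c)

def eed_lp(gamma, n0, gamma0, a, b):
    """Stochastic-turbulence-type EED: log-parabola,
    N(g) = n0 * (g/g0)**(-(a + b * log10(g/g0)))."""
    x = gamma / gamma0
    return n0 * x ** (-(a + b * np.log10(x)))
```

Fitting either form to the observed SEDs then reduces to exploring (n0, p, gamma_c) or (n0, gamma0, a, b) with a sampler such as the Markov Chain Monte Carlo method used in the paper.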

  8. Five-Year Wilkinson Microwave Anisotropy Probe Observations: Data Processing, Sky Maps, and Basic Results

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Weiland, J. L.; Hill, R. S.; Odegard, N.; Larson, D.; Bennett, C. L.; Dunkley, J.; Gold, B.; Greason, M. R.; Jarosik, N.; Komatsu, E.; Nolta, M. R.; Page, L.; Spergel, D. N.; Wollack, E.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wright, E. L.

    2010-01-01

    We present new full-sky temperature and polarization maps in five frequency bands from 23 to 94 GHz, based on data from the first five years of the Wilkinson Microwave Anisotropy Probe (WMAP) sky survey. The new maps are consistent with previous maps and are more sensitive. The five-year maps incorporate several improvements in data processing made possible by the additional years of data and by a more complete analysis of the instrument calibration and in-flight beam response. We present several new tests for systematic errors in the polarization data and conclude that W-band polarization data is not yet suitable for cosmological studies, but we suggest directions for further study. We do find that Ka-band data is suitable for use; in conjunction with the additional years of data, the addition of Ka band to the previously used Q- and V-band channels significantly reduces the uncertainty in the optical depth parameter, tau. Further scientific results from the five-year data analysis are presented in six companion papers and are summarized in Section 7 of this paper. With the five-year WMAP data, we detect no convincing deviations from the minimal six-parameter ΛCDM model: a flat universe dominated by a cosmological constant, with adiabatic and nearly scale-invariant Gaussian fluctuations. Using WMAP data combined with measurements of Type Ia supernovae and Baryon Acoustic Oscillations in the galaxy distribution, we find (68% CL uncertainties): Ω_b h² = 0.02267 +0.00058/−0.00059, Ω_c h² = 0.1131 ± 0.0034, Ω_Λ = 0.726 ± 0.015, n_s = 0.960 ± 0.013, τ = 0.084 ± 0.016, and Δ²_R = (2.445 ± 0.096) × 10⁻⁹ at k = 0.002 Mpc⁻¹. From these we derive σ_8 = 0.812 ± 0.026, H_0 = 70.5 ± 1.3 km s⁻¹ Mpc⁻¹, Ω_b = 0.0456 ± 0.0015, Ω_c = 0.228 ±

  9. Unmanned aircraft systems image collection and computer vision image processing for surveying and mapping that meets professional needs

    NASA Astrophysics Data System (ADS)

    Peterson, James Preston, II

    Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
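
    The NSSDA statistic referenced above is computed from checkpoint RMSE with fixed multipliers (1.7308 for horizontal accuracy when RMSE_x ≈ RMSE_y, 1.9600 for vertical, both at 95% confidence); a minimal sketch:

```python
import math

def rmse(errors):
    """Root mean square of per-checkpoint errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def nssda_horizontal(dx, dy):
    """NSSDA horizontal accuracy at 95% confidence.

    dx, dy: per-checkpoint differences between surveyed and mapped
    coordinates.  When RMSE_x and RMSE_y are approximately equal,
    the standard multiplier 1.7308 applies to the radial RMSE."""
    rmse_r = math.sqrt(rmse(dx) ** 2 + rmse(dy) ** 2)
    return 1.7308 * rmse_r

def nssda_vertical(dz):
    """NSSDA vertical accuracy at 95% confidence (normal errors)."""
    return 1.9600 * rmse(dz)
```

Running this over the surveyed check targets for each flight and target pattern yields the accuracy figures that are compared against the ASPRS/NSSDA thresholds.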

  10. Multiscale Processes of Hurricane Sandy (2012) as Revealed by the CAMVis-MAP

    NASA Astrophysics Data System (ADS)

    Shen, B.; Li, J. F.; Cheung, S.

    2013-12-01

    In late October 2012, Storm Sandy made landfall near Brigantine, New Jersey, devastating surrounding areas and causing tremendous economic loss and hundreds of fatalities (Blake et al., 2013). An estimated $50 billion in damage made Sandy the second costliest tropical cyclone (TC) in US history, surpassed only by Hurricane Katrina (2005). Central questions to be addressed include (1) to what extent the lead time of severe storm prediction, for storms such as Sandy, can be extended (e.g., Emanuel 2012); and (2) whether and how advanced global models, supercomputing technology and numerical algorithms can help effectively illustrate the complicated physical processes associated with the evolution of such storms. In this study, the predictability of Sandy is addressed with a focus on short-term (or extended-range) genesis prediction as the first step toward the goal of understanding the relationship between extreme events, such as Sandy, and the current climate. The newly deployed Coupled Advanced global mesoscale Modeling (GMM) and concurrent Visualization (CAMVis) system is used for this study. We show remarkable simulations of Hurricane Sandy with the GMM, including realistic 7-day track and intensity forecasts and genesis predictions with a lead time of up to 6 days (e.g., Shen et al., 2013, GRL, submitted). We then discuss the enabling role of high-resolution 4-D (time-X-Y-Z) visualizations in illustrating the TC's transient dynamics and its interaction with tropical waves. In addition, we have finished the parallel implementation of the ensemble empirical mode decomposition (PEEMD, Cheung et al., 2013, AGU13, submitted) method, which will soon be integrated into the multiscale analysis package (MAP) for the analysis of tropical weather systems such as TCs and tropical waves. 
While the original EEMD has previously shown superior performance in decomposition of nonlinear (local) and non-stationary data into different intrinsic modes which stay within the natural

  11. Evaluation of acceleration and deceleration cardiac processes using phase-rectified signal averaging in healthy and idiopathic dilated cardiomyopathy subjects.

    PubMed

    Bas, Rosana; Vallverdú, Montserrat; Valencia, Jose F; Voss, Andreas; de Luna, Antonio Bayés; Caminal, Pere

    2015-02-01

    The aim of the present study was to investigate the suitability of the Phase-Rectified Signal Averaging (PRSA) method for improved risk prediction in cardiac patients. Moreover, this technique, which separately evaluates acceleration and deceleration processes of cardiac rhythm, allows the effect of sympathetic and vagal modulations of beat-to-beat intervals to be characterized. Holter recordings of idiopathic dilated cardiomyopathy (IDC) patients were analyzed: high-risk (HR), who suffered sudden cardiac death (SCD) during the follow-up; and low-risk (LR), without any kind of cardiac-related death. Moreover, a control group of healthy subjects was analyzed. PRSA indexes were analyzed, for different time scales T and wavelet scales s, from RR series of 24 h ECG recordings, awake periods and sleep periods. Also, the behavior of these indexes on simulated data was analyzed and compared with real-data results. Outcomes demonstrated the capacity of PRSA to discriminate healthy subjects from IDC patients, and HR from LR patients, significantly better than traditional temporal and spectral measures. The behavior of the PRSA indexes agrees with experimental evidence related to cardiac autonomic modulations. Also, these parameters reflect more regularity of the autonomic nervous system (ANS) in HR patients. PMID:25585858
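
    The PRSA procedure itself is compact: select acceleration or deceleration anchor beats, extract windows of 2L beats around each anchor, and average the windows phase-aligned. A simplified sketch (no ectopic-beat filtering; the capacity measure follows the common (x0 + x1 − x−1 − x−2)/4 quantification used for acceleration/deceleration capacity):

```python
import numpy as np

def prsa(rr, T=1, L=50):
    """Phase-rectified signal averaging of an RR-interval series.

    Anchors are beats where the mean of the next T intervals exceeds
    (deceleration) or falls below (acceleration) the mean of the
    previous T intervals.  Windows of 2L beats around each anchor are
    averaged, phase-aligning the rhythm's slowings and quickenings
    regardless of when they occur."""
    rr = np.asarray(rr, dtype=float)
    dec, acc = [], []
    for i in range(L, len(rr) - L):
        after = rr[i:i + T].mean()
        before = rr[i - T:i].mean()
        win = rr[i - L:i + L]
        if after > before:          # deceleration anchor
            dec.append(win)
        elif after < before:        # acceleration anchor
            acc.append(win)
    acc_avg = np.mean(acc, axis=0) if acc else None
    dec_avg = np.mean(dec, axis=0) if dec else None
    return acc_avg, dec_avg

def capacity(avg, L=50):
    """Central-point quantification of the averaged PRSA window."""
    return (avg[L] + avg[L + 1] - avg[L - 1] - avg[L - 2]) / 4.0
```

By construction, deceleration capacity comes out positive and acceleration capacity negative, and their magnitudes summarize the vagal and sympathetic modulations the abstract refers to.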

  12. World Stress Map Release 2003 - A key to tectonic processes and industrial applications

    NASA Astrophysics Data System (ADS)

    Müller, B.; Reinecker, J.; Heidbach, O.; Fuchs, K.

    2003-04-01

    Geoscientists are exploring and penetrating the interior of the Earth's crust to recover solids, fluids and gas from it and to store them within it. Management of subsurface structures such as boreholes or reservoirs has to take the existing tectonic stress into account, either to use it to advantage or at least to minimize manmade stress concentrations and destructive effects such as borehole breakouts. The World Map of tectonic stress (in short: World Stress Map or WSM) is a fundamental geophysical database. The impact of the WSM on various aspects of modern civilization is pointed out, ranging from seismic hazard quantification to the increasing interest of industry in the WSM. The WSM has become a valuable tool applied to a wide range of technological problems within the stressed crust, such as oil reservoir management and the stability of underground openings (tunnels, boreholes and waste disposal sites). The new release 2003 of the WSM now contains more than 13,500 stress data sets. All data were classified according to a unified quality ranking, which ensures the comparability of data originating from a wide range of measurement methods. The database is freely available at http://www.world-stress-map.org. With the new version 1.1 of the interactive tool CASMO (Create A Stress Map Online), users can request their own stress maps.

  13. Nanoscale mapping of excitonic processes in single-layer MoS2 using tip-enhanced photoluminescence microscopy

    NASA Astrophysics Data System (ADS)

    Su, Weitao; Kumar, Naresh; Mignuzzi, Sandro; Crain, Jason; Roy, Debdulal

    2016-05-01

    In two-dimensional (2D) semiconductors, photoluminescence originating from recombination processes involving neutral electron-hole pairs (excitons) and charged complexes (trions) is strongly affected by the localized charge transfer due to inhomogeneous interactions with the local environment and surface defects. Herein, we demonstrate the first nanoscale mapping of excitons and trions in single-layer MoS2 using the full spectral information obtained via tip-enhanced photoluminescence (TEPL) microscopy along with tip-enhanced Raman spectroscopy (TERS) imaging of a 2D flake. Finally, we show the mapping of the PL quenching centre in single-layer MoS2 with an unprecedented spatial resolution of 20 nm. In addition, our research shows that unlike in aperture-scanning near field microscopy, preferential exciton emission mapping at the nanoscale using TEPL and Raman mapping using TERS can be obtained simultaneously using this method that can be used to correlate the structural and excitonic properties.

  14. Investigation into the hot workability of the as-extruded WE43 magnesium alloy using processing map.

    PubMed

    Wang, Lixiao; Fang, Gang; Leeflang, Sander; Duszczyk, Jurek; Zhou, Jie

    2014-04-01

    The research concerned the characterization of the hot-working behavior of the as-extruded WE43 magnesium alloy, a candidate for biomedical applications, and the construction of processing maps to guide the choice of forming process parameters. Isothermal uniaxial compression tests were performed over a temperature range of 350-480°C and a strain rate range of 0.001-10 s^{-1}. The flow stresses obtained were used to construct processing maps. Domains in the processing maps corresponding to the relevant deformation mechanisms, i.e., dynamic recrystallization (DRX), dynamic recovery (DRV) and flow instability, were identified according to power dissipation efficiency and flow instability parameter values. Microstructures of compression-tested specimens were examined to validate these deformation mechanisms. Two mechanisms of DRX nucleation, i.e., particle-stimulated nucleation (PSN) and grain boundary bulging, were found to be operative in the low-temperature and high-temperature DRX domains, respectively. Flow instability was related to adiabatic shear bands and abnormal grain growth. An optimum condition for the hot working of this alloy was determined to be a temperature of 475°C and a strain rate of 0.1 s^{-1}. PMID:24508713
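    In the dynamic materials model commonly used for such maps, the power dissipation efficiency is eta = 2m/(m+1), where m = d(ln sigma)/d(ln strain-rate) is the strain-rate sensitivity estimated from the measured flow stresses. A sketch on a small, purely illustrative flow-stress grid (not the WE43 data):

    ```python
    import numpy as np

    # Hypothetical flow-stress grid at a fixed strain:
    # rows = temperatures, columns = strain rates (NOT measured data).
    strain_rates = np.array([0.001, 0.01, 0.1, 1.0, 10.0])   # s^-1
    temperatures = np.array([350.0, 400.0, 450.0, 480.0])    # deg C
    sigma = np.array([[80, 100, 130, 170, 220],
                      [60,  78, 102, 135, 180],
                      [45,  60,  80, 108, 145],
                      [38,  52,  70,  95, 128]], dtype=float)  # MPa

    ln_rate = np.log(strain_rates)
    ln_sigma = np.log(sigma)

    # Strain-rate sensitivity m = d(ln sigma)/d(ln strain rate) at each T
    m = np.gradient(ln_sigma, ln_rate, axis=1)

    # Power dissipation efficiency map (Prasad's formulation)
    eta = 2.0 * m / (m + 1.0)
    ```

    Contouring `eta` over the temperature/strain-rate plane, together with an instability criterion, gives the processing map itself.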

  15. User alternatives in post-processing for raster-to-vector conversion. [Landsat-based forest mapping

    NASA Technical Reports Server (NTRS)

    Logan, T. L.; Woodcock, C. E.

    1983-01-01

    A number of Landsat-based coniferous forest stratum maps have been created of the Eldorado National Forest in California. These maps were produced in raster image format which is not directly usable by the U.S. Forest Service's vector-based Wildland Resource Information System (WRIS). As a solution, raster-to-vector conversion software has been developed for processing classified images into polygonal data structures. Before conversion, however, the digital classification images must be simplified to remove high spatial variance ('noise', 'speckle') and meet a USFS ten acre minimum requirement. A post-processing (simplification) strategy different from those commonly used in raster image processing may be desired for preparing maps for conversion to vector format, because simplification routines typically permit diagonal connections in the process of reclassifying pixels and forming new polygons. Diagonal connections are often undesirable when converting to vector format because they permit polygons to effectively cross over each other and occupy a common location. Three alternative methodologies are discussed for simplifying raster data for conversion to vector format.
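    The diagonal-connection issue can be made concrete with a toy connected-component labeler: under 8-connectivity, two pixels touching only at a corner form one region (and the derived polygons would effectively cross), whereas 4-connectivity keeps them separate. A self-contained sketch, not the software described above:

    ```python
    from collections import deque

    def label(grid, connectivity=4):
        """Connected-component labeling of a binary raster.

        connectivity=4 forbids diagonal adjacency, which keeps regions
        polygon-safe for raster-to-vector conversion; connectivity=8
        allows diagonal connections.
        """
        rows, cols = len(grid), len(grid[0])
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        if connectivity == 8:
            nbrs += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
        labels = [[0] * cols for _ in range(rows)]
        count = 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] and not labels[r][c]:
                    count += 1
                    queue = deque([(r, c)])
                    labels[r][c] = count
                    while queue:  # breadth-first flood fill
                        y, x = queue.popleft()
                        for dy, dx in nbrs:
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and grid[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = count
                                queue.append((ny, nx))
        return count, labels

    # Two pixels touching only at a corner:
    grid = [[1, 0],
            [0, 1]]
    n4, _ = label(grid, connectivity=4)   # two separate regions
    n8, _ = label(grid, connectivity=8)   # one diagonally-joined region
    ```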

  16. Image based cardiac acceleration map using statistical shape and 3D+t myocardial tracking models; in-vitro study on heart phantom

    NASA Astrophysics Data System (ADS)

    Pashaei, Ali; Piella, Gemma; Planes, Xavier; Duchateau, Nicolas; de Caralt, Teresa M.; Sitges, Marta; Frangi, Alejandro F.

    2013-03-01

    It has been demonstrated that the acceleration signal has potential to monitor heart function and adaptively optimize Cardiac Resynchronization Therapy (CRT) systems. In this paper, we propose a non-invasive method for computing myocardial acceleration from 3D echocardiographic sequences. Displacement of the myocardium was estimated using a two-step approach: (1) 3D automatic segmentation of the myocardium at end-diastole using 3D Active Shape Models (ASM); (2) propagation of this segmentation along the sequence using non-rigid 3D+t image registration (temporal diffeomorphic free-form deformation, TDFFD). Acceleration was obtained locally at each point of the myocardium from local displacement. The framework has been tested on images from a realistic physical heart phantom (DHP-01, Shelley Medical Imaging Technologies, London, ON, CA) in which the displacement of some control regions was known. Good correlation has been demonstrated between the displacement estimated by the algorithms and the phantom setup. Due to the limited temporal resolution, the acceleration signals are sparse and highly noisy. The study suggests a non-invasive technique to measure cardiac acceleration that may be used to improve the monitoring of cardiac mechanics and the optimization of CRT.
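    Once a displacement time series is available at a myocardial point, a local acceleration estimate follows from a central second difference. This is a generic sketch of that step (not the paper's pipeline), which also illustrates why coarse temporal resolution makes the result noisy: the second difference amplifies frame-to-frame error by a factor of order 1/dt^2.

    ```python
    import numpy as np

    def acceleration(disp, dt):
        """Acceleration from a tracked displacement series via the central
        second difference a[i] = (d[i+1] - 2 d[i] + d[i-1]) / dt**2.
        Endpoints are dropped; in practice disp is smoothed first."""
        disp = np.asarray(disp, dtype=float)
        return (disp[2:] - 2.0 * disp[1:-1] + disp[:-2]) / dt**2

    # Sanity check on quadratic motion d(t) = 0.5 * a * t^2 with a = 3,
    # for which the central second difference is exact:
    dt = 0.01
    t = np.arange(0, 1, dt)
    a = acceleration(0.5 * 3.0 * t**2, dt)
    ```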

  17. Compact Plasma Accelerator

    NASA Technical Reports Server (NTRS)

    Foster, John E.

    2004-01-01

    A plasma accelerator has been conceived for both material-processing and spacecraft-propulsion applications. This accelerator generates and accelerates ions within a very small volume. Because of its compactness, this accelerator could be nearly ideal for primary or station-keeping propulsion for spacecraft having masses between 1 and 20 kg. Because this accelerator is designed to generate beams of ions having energies between 50 and 200 eV, it could also be used for surface modification or activation of thin films.

  18. Graphics processing unit-accelerated non-rigid registration of MR images to CT images during CT-guided percutaneous liver tumor ablations

    PubMed Central

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G.; Shekhar, Raj; Hata, Nobuhiko

    2015-01-01

    Rationale and Objectives Accuracy and speed are essential for intraprocedural nonrigid MR-to-CT image registration in the assessment of tumor margins during CT-guided liver tumor ablations. While both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique based on volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of the GPU-accelerated volume subdivision-based nonrigid registration technique with those of the conventional nonrigid B-spline registration technique. Materials and Methods Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of the ROI was used only for the B-spline technique. Registration accuracies (Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD)) and total processing time, including contouring of ROIs and computation, were compared using a paired Student's t-test. Results Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% vs 89.3 ± 4.9% (p = 0.41) for DSC and 13.1 ± 5.2 mm vs 11.4 ± 6.3 mm (p = 0.15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 s vs 557 ± 116 s (p < 0.000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (p = 0.71). Conclusion The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time.
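    The accuracy metric used above, the Dice Similarity Coefficient, is simple to compute from two binary segmentation masks: DSC = 2|A ∩ B| / (|A| + |B|). A minimal illustration on toy masks (not the study's data):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice Similarity Coefficient between two binary masks."""
        a = np.asarray(a).astype(bool)
        b = np.asarray(b).astype(bool)
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    # Toy 4x4 masks: 'a' covers 4 pixels, 'b' covers 6, with 4 shared.
    a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
    b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
    # dice(a, b) = 2*4 / (4 + 6) = 0.8
    ```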

  19. Tuning maps for setpoint changes and load disturbance upsets in a three capacity process under multivariable control

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Smith, Ira C.

    1991-01-01

    Tuning maps are an aid in the controller tuning process because they provide a convenient way for the plant operator to determine the consequences of adjusting different controller parameters. In this application the maps provide a graphical representation of the effect of varying the gains in the state feedback matrix on startup and load disturbance transients for a three-capacity process. Nominally, the three-tank system, represented in diagonal form, has Proportional-Integral (PI) control on each loop. Cross coupling is then introduced between the loops by using non-zero off-diagonal proportional parameters. Changes in transient behavior due to setpoint and load changes are examined by varying the gains of the cross-coupling terms.
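    One point of such a tuning map can be evaluated by simulating the setpoint transient for a given off-diagonal gain and recording a transient metric such as the peak response; sweeping the cross gain over a grid then yields the map. The plant below is an illustrative stand-in (three decoupled first-order lags under per-loop PI control), not the paper's three-capacity model:

    ```python
    import numpy as np

    def simulate(kp_cross, dt=0.01, steps=3000):
        """Forward-Euler simulation of three decoupled tanks x' = -x + u with
        PI control toward setpoint r, plus an off-diagonal proportional term
        feeding loop 0's error into loop 1's input (illustrative setup).
        Returns the final state and the peak value reached by each loop."""
        x = np.zeros(3)
        integ = np.zeros(3)
        r = np.array([1.0, 1.0, 1.0])
        kp, ki = 2.0, 1.0
        peak = np.zeros(3)
        for _ in range(steps):
            e = r - x
            integ += e * dt
            u = kp * e + ki * integ
            u[1] += kp_cross * e[0]      # cross-coupling term
            x += (-x + u) * dt
            peak = np.maximum(peak, x)
        return x, peak

    # One tuning-map point: all loops settle at the setpoint, while the
    # coupled loop's transient differs from the identical uncoupled loops.
    x_end, peak = simulate(kp_cross=1.5)
    ```

    Repeating this over a grid of `kp_cross` values and plotting the recorded peaks (or settling times) against the gain produces the kind of map the abstract describes.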

  20. Landslide susceptibility mapping by combining the three methods Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process in Dozain basin

    NASA Astrophysics Data System (ADS)

    Tazik, E.; Jahantab, Z.; Bakhtiari, M.; Rezaei, A.; Kazem Alavipanah, S.

    2014-10-01

    Landslides are among the most important natural hazards that lead to modification of the environment, so the study of this phenomenon is important in many areas. Given the climatic, geologic, and geomorphologic characteristics of the region, the purpose of this study was landslide hazard assessment using Fuzzy Logic, Frequency Ratio and the Analytical Hierarchy Process (AHP) in the Dozein basin, Iran. First, landslides that occurred in the Dozein basin were identified using aerial photos and field studies. The landslide-influencing parameters used in this study, including slope, aspect, elevation, lithology, precipitation, land cover, distance from fault, distance from road and distance from river, were obtained from different sources and maps. Using these factors and the identified landslides, fuzzy membership values were calculated by frequency ratio. Then, to account for the importance of each factor in landslide susceptibility, factor weights were determined based on a questionnaire and the AHP method. Finally, the fuzzy map of each factor was multiplied by the weight obtained with the AHP method. For computing prediction accuracy, the produced map was verified by comparison with existing landslide locations. The results indicate that the combination of Fuzzy Logic, Frequency Ratio and AHP is a relatively good estimator of landslide susceptibility in the study area. According to the landslide susceptibility map, about 51% of the occurred landslides fall into the high and very high susceptibility zones, while approximately 26% are located in the low and very low susceptibility zones.
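    The final combination step described above reduces to a weighted overlay: each factor's fuzzy membership layer is multiplied by its AHP weight and the results are summed cell by cell. A sketch with hypothetical layers and weights for three of the nine factors (the layer values and weights are illustrative, not the study's):

    ```python
    import numpy as np

    # Hypothetical fuzzy membership layers (values in [0, 1]) on a tiny
    # 2x2 grid; real layers come from frequency-ratio normalization.
    layers = {
        "slope":     np.array([[0.9, 0.2], [0.4, 0.7]]),
        "lithology": np.array([[0.6, 0.1], [0.5, 0.8]]),
        "road_dist": np.array([[0.3, 0.2], [0.6, 0.9]]),
    }
    # Hypothetical AHP weights (normalized to sum to 1 after the
    # pairwise-comparison step).
    weights = {"slope": 0.5, "lithology": 0.3, "road_dist": 0.2}

    # Weighted overlay: multiply each layer by its weight and sum.
    susceptibility = sum(weights[k] * layers[k] for k in layers)
    ```

    Because the memberships lie in [0, 1] and the weights sum to 1, the resulting susceptibility index also lies in [0, 1] and can be sliced into the very-low to very-high zones mentioned in the abstract.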

  1. Processing Flexible Form-to-Meaning Mappings: Evidence for Enriched Composition as Opposed to Indeterminacy

    ERIC Educational Resources Information Center

    Roehm, Dietmar; Sorace, Antonella; Bornkessel-Schlesewsky, Ina

    2013-01-01

    Sometimes, the relationship between form and meaning in language is not one-to-one. Here, we used event-related brain potentials (ERPs) to illuminate the neural correlates of such flexible syntax-semantics mappings during sentence comprehension by examining split-intransitivity. While some ("rigid") verbs consistently select one…

  2. A Soft OR Approach to Fostering Systems Thinking: SODA Maps plus Joint Analytical Process

    ERIC Educational Resources Information Center

    Wang, Shouhong; Wang, Hai

    2016-01-01

    Higher order thinking skills are important for managers. Systems thinking is an important type of higher order thinking in business education. This article investigates a soft Operations Research approach to teaching and learning systems thinking. It outlines the integrative use of Strategic Options Development and Analysis maps for visualizing…

  3. Using Saliency Maps to Separate Competing Processes in Infant Visual Cognition

    ERIC Educational Resources Information Center

    Althaus, Nadja; Mareschal, Denis

    2012-01-01

    This article presents an eye-tracking study using a novel combination of visual saliency maps and "area-of-interest" analyses to explore online feature extraction during category learning in infants. Category learning in 12-month-olds (N = 22) involved a transition from looking at high-saliency image regions to looking at more informative, highly…

  4. USING IMAGE PROCESSING METHODS WITH RASTER EDITING TOOLS FOR MAPPING EELGRASS DISTRIBUTIONS IN PACIFIC NORHWEST ESTUARIES

    EPA Science Inventory

    False-color near-infrared (CIR) aerial photography of seven Oregon estuaries was acquired at extreme low tides and digitally orthorectified with a ground pixel resolution of 25 cm to provide data for intertidal vegetation mapping. Exposed, semi-exposed and some submerged eelgras...

  5. ISSUES IN DIGITAL IMAGE PROCESSING OF AERIAL PHOTOGRAPHY FOR MAPPING SUBMERSED AQUATIC VEGETATION

    EPA Science Inventory

    The paper discusses the numerous issues that needed to be addressed when developing a methodology for mapping Submersed Aquatic Vegetation (SAV) from digital aerial photography. Specifically, we discuss 1) choice of film; 2) consideration of tide and weather constraints; 3) in-s...

  6. Processing the CONSOL Energy, Inc. Mine Maps and Records Collection at the University of Pittsburgh

    ERIC Educational Resources Information Center

    Rougeux, Debora A.

    2011-01-01

    This article describes the efforts of archivists and student assistants at the University of Pittsburgh's Archives Service Center to organize, describe, store, and provide timely and efficient access to over 8,000 maps of underground coal mines in southwestern Pennsylvania, as well the records that accompanied them, donated by CONSOL Energy, Inc.…

  7. Exploring Students' Mapping Behaviors and Interactive Discourses in a Case Diagnosis Problem: Sequential Analysis of Collaborative Causal Map Drawing Processes

    ERIC Educational Resources Information Center

    Lee, Woon Jee

    2012-01-01

    The purpose of this study was to explore the nature of students' mapping and discourse behaviors while constructing causal maps to articulate their understanding of a complex, ill-structured problem. In this study, six graduate-level students were assigned to one of three pair groups, and each pair used the causal mapping software program,…

  8. Accelerating patient-care improvement in the ED.

    PubMed

    Forrester, Nancy E

    2003-08-01

    Quality improvement is always in the best interest of healthcare providers. One hospital examined the patient-care delivery process used in its emergency department to determine ways to improve patient satisfaction while increasing the effectiveness and efficiency of healthcare delivery. The hospital used activity-based costing (ABC) plus additional data related to rework, information opportunity costs, and other effectiveness measures to create a process map that helped it accelerate diagnosis and improve redesign of the care process. PMID:12938618

  9. Added value products for imaging remote sensing by processing actual GNSS reflectometry delay doppler maps

    NASA Astrophysics Data System (ADS)

    Schiavulli, Domenico; Frappart, Frédéric; Ramilien, Guillaume; Darrozes, José; Nunziata, Ferdinando; Migliaccio, Maurizio

    2016-04-01

    Global Navigation Satellite System Reflectometry (GNSS-R) is an innovative and promising tool for remote sensing. It is based on the exploitation of GNSS signals reflected off the Earth's surface as signals of opportunity to infer geophysical information about the reflecting surface. The main advantages of GNSS-R with respect to dedicated sensors are: unprecedented spatial-temporal coverage, due to the availability of a great number of transmitting satellites (e.g. GPS, Galileo, Glonass); long GNSS mission lifetimes; and cost effectiveness, since only a simple receiver is needed. In recent years several works have demonstrated the usefulness of this technique in a range of Earth Observation applications. All these applications presented results obtained using a receiver mounted on an aircraft or on a fixed platform. Moreover, spaceborne missions have been launched or are planned: UK-DMC, TechDemoSat-1 (TDS-1), NASA CYGNSS, and GEROS-ISS. Practically, GNSS-R can be seen as a bistatic radar system in which the GNSS satellites continuously transmit L-band all-weather, night-and-day signals that are reflected off a surface, called the Glistening Zone (GZ), and a receiver measures the scattered microwave signals in terms of Delay-Doppler Maps (DDMs) or delay waveforms. These two products have been widely studied in the literature to extract compact parameters for different remote sensing applications. However, products measured in the Delay-Doppler (DD) domain are not able to provide any spatial information about the scattering scene. This can be a drawback for applications related to imaging remote sensing, e.g. target detection, sea/land and sea/ice transitions, and oil spill detection. To overcome these limitations, some deconvolution techniques have been proposed in the state of the art, aiming at the reconstruction of a radar image of the observed scene by processing the measured DDMs. These techniques have been tested on DDMs related to simulated marine scenarios.
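    To illustrate the deconvolution idea in its simplest form: if the DDM is modeled as a 2D convolution of the code's Woodward Ambiguity Function (WAF) with the surface scattering distribution, a regularized inverse filter can recover the scene. The sketch below uses a toy triangular WAF and a Wiener filter; it is a generic illustration under that convolution assumption, not one of the published techniques:

    ```python
    import numpy as np

    def wiener_deconvolve(ddm, waf, noise_to_signal=1e-2):
        """Recover a scattering map from a DDM, assuming the DDM is
        (approximately) the 2D circular convolution of the WAF with the
        surface scattering distribution; a Wiener filter regularizes the
        inversion where the WAF spectrum is small."""
        H = np.fft.fft2(waf, s=ddm.shape)
        G = np.fft.fft2(ddm)
        W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
        return np.real(np.fft.ifft2(G * W))

    # Synthetic check: blur a point scatterer with a toy WAF and invert.
    waf = np.outer([0.5, 1.0, 0.5], [0.5, 1.0, 0.5])   # toy ambiguity function
    scene = np.zeros((32, 32)); scene[10, 12] = 1.0
    ddm = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(waf, s=scene.shape)))
    est = wiener_deconvolve(ddm, waf, noise_to_signal=1e-3)
    ```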

  10. An Improved Arrhenius Constitutive Model and Three-Dimensional Processing Map of a Solution-Treated Ni-Based Superalloy

    NASA Astrophysics Data System (ADS)

    Li, Hong-Bin; Feng, Yun-Li

    2016-01-01

    The hot deformation behaviors of a solution-treated Ni-based superalloy are investigated by hot compression tests over wide ranges of strain rate and forming temperature. Based on the experimental data, the effects of forming temperature and strain rate on the hot deformation behaviors are discussed in detail. Considering the effects of strain on material constants, comprehensive constitutive models are developed to describe the relationships between the flow stress, strain rate and forming temperature for the studied superalloy. The three-dimensional processing map is constructed to optimize the hot working parameters. Meanwhile, the microstructures are analyzed to correlate with the processing map. It is found that the flow stress is sensitive to the forming temperature, strain rate and deformation degree. With the increase of forming temperature or the decrease of strain rate, the flow stress significantly decreases. The predicted flow stresses agree well with experimentally measured results, which confirm that the developed constitutive model can accurately estimate the flow stress of the studied superalloy. The three-dimensional processing map shows that the optimum deformation windows for hot working are the domains with 980-1,040°C or 0.001-0.1 s^{-1} when the strain is 0.6. Also, it is found that the dynamically recrystallized grain size increases with the increase of forming temperature or the decrease of strain rate.
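    The hyperbolic-sine Arrhenius model referred to here links flow stress to the Zener-Hollomon parameter Z = strain_rate * exp(Q/(R T)) via sigma = (1/alpha) * arcsinh((Z/A)^(1/n)). The constants below are placeholders for a generic Ni-based superalloy, not the fitted values from this paper:

    ```python
    import numpy as np

    R = 8.314  # universal gas constant, J/(mol K)

    def flow_stress(strain_rate, T_kelvin, Q, A, alpha, n):
        """Hyperbolic-sine Arrhenius constitutive model.
        Q (activation energy), A, alpha, n are strain-dependent material
        constants fitted from compression tests."""
        Z = strain_rate * np.exp(Q / (R * T_kelvin))       # Zener-Hollomon
        return np.arcsinh((Z / A) ** (1.0 / n)) / alpha    # MPa if alpha in 1/MPa

    # Placeholder constants (illustrative only):
    Q, A, alpha, n = 450e3, 1e16, 0.005, 5.0
    s_base = flow_stress(0.01, 1273.0, Q, A, alpha, n)
    s_hot  = flow_stress(0.01, 1313.0, Q, A, alpha, n)   # hotter  -> softer
    s_fast = flow_stress(0.10, 1273.0, Q, A, alpha, n)   # faster  -> harder
    ```

    The two comparison points reproduce the trend stated in the abstract: flow stress decreases with forming temperature and increases with strain rate.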

  11. An Improved Arrhenius Constitutive Model and Three-Dimensional Processing Map of a Solution-Treated Ni-Based Superalloy

    NASA Astrophysics Data System (ADS)

    Li, Hong-Bin; Feng, Yun-Li

    2016-01-01

    The hot deformation behaviors of a solution-treated Ni-based superalloy are investigated by hot compression tests over wide ranges of strain rate and forming temperature. Based on the experimental data, the effects of forming temperature and strain rate on the hot deformation behaviors are discussed in detail. Considering the effects of strain on material constants, comprehensive constitutive models are developed to describe the relationships between the flow stress, strain rate and forming temperature for the studied superalloy. The three-dimensional processing map is constructed to optimize the hot working parameters. Meanwhile, the microstructures are analyzed to correlate with the processing map. It is found that the flow stress is sensitive to the forming temperature, strain rate and deformation degree. With the increase of forming temperature or the decrease of strain rate, the flow stress significantly decreases. The predicted flow stresses agree well with experimentally measured results, which confirm that the developed constitutive model can accurately estimate the flow stress of the studied superalloy. The three-dimensional processing map shows that the optimum deformation windows for hot working are the domains with 980-1,040°C or 0.001-0.1 s^{-1} when the strain is 0.6. Also, it is found that the dynamically recrystallized grain size increases with the increase of forming temperature or the decrease of strain rate.

  12. Susceptibility mapping of shallow landslides using kernel-based Gaussian process, support vector machines and logistic regression

    NASA Astrophysics Data System (ADS)

    Colkesen, Ismail; Sahin, Emrehan Kutlug; Kavzoglu, Taskin

    2016-06-01

    Identification of landslide prone areas and production of accurate landslide susceptibility zonation maps have been crucial topics for hazard management studies. Since the prediction of susceptibility is one of the main processing steps in landslide susceptibility analysis, selection of a suitable prediction method plays an important role in the success of the susceptibility zonation process. Although simple statistical algorithms (e.g. logistic regression) have been widely used in the literature, the use of advanced non-parametric algorithms in landslide susceptibility zonation has recently become an active research topic. The main purpose of this study is to investigate the possible application of kernel-based Gaussian process regression (GPR) and support vector regression (SVR) for producing landslide susceptibility map of Tonya district of Trabzon, Turkey. Results of these two regression methods were compared with logistic regression (LR) method that is regarded as a benchmark method. Results showed that while kernel-based GPR and SVR methods generally produced similar results (90.46% and 90.37%, respectively), they outperformed the conventional LR method by about 18%. While confirming the superiority of the GPR method, statistical tests based on ROC statistics, success rate and prediction rate curves revealed the significant improvement in susceptibility map accuracy by applying kernel-based GPR and SVR methods.
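    At the core of the GPR method is the Gaussian-process posterior mean, f* = K(X*, X)[K(X, X) + sigma_n^2 I]^{-1} y, with an RBF kernel. The sketch below implements just that prediction step on a toy one-feature stand-in for the conditioning factors (the data, kernel length scale and noise level are illustrative, not the study's):

    ```python
    import numpy as np

    def rbf(X1, X2, length=1.0):
        """Squared-exponential (RBF) kernel matrix."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)

    def gpr_predict(X, y, Xs, noise=1e-2, length=1.0):
        """GP regression posterior mean: Ks @ (K + noise*I)^-1 @ y."""
        K = rbf(X, X, length) + noise * np.eye(len(X))
        Ks = rbf(Xs, X, length)
        return Ks @ np.linalg.solve(K, y)

    # Toy 'conditioning factor' -> landslide presence (0/1) data:
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(60, 1))
    y = (X[:, 0] > 0).astype(float)      # hypothetical susceptibility rule
    Xs = np.array([[-2.0], [2.0]])       # a 'stable' and a 'prone' location
    pred = gpr_predict(X, y, Xs, noise=1e-2, length=0.8)
    ```

    Thresholding or ranking such continuous predictions over all grid cells produces the susceptibility zonation map; the ROC-based comparison in the abstract operates on exactly this kind of output.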

  13. Effect of the Drying Process on the Intensification of Phenolic Compounds Recovery from Grape Pomace Using Accelerated Solvent Extraction

    PubMed Central

    Rajha, Hiba N.; Ziegler, Walter; Louka, Nicolas; Hobaika, Zeina; Vorobiev, Eugene; Boechzelt, Herbert G.; Maroun, Richard G.

    2014-01-01

    In light of their environmental and economic interests, food byproducts have been increasingly exploited and valorized for their richness in dietary fibers and antioxidants. Phenolic compounds are antioxidant bioactive molecules highly present in grape byproducts. Herein, the accelerated solvent extraction (ASE) of phenolic compounds from wet and dried grape pomace, at 45 °C, was conducted and the highest phenolic compounds yield (PCY) for wet (16.2 g GAE/100 g DM) and dry (7.28 g GAE/100 g DM) grape pomace extracts were obtained with 70% ethanol/water solvent at 140 °C. The PCY obtained from wet pomace was up to two times better compared to the dry byproduct and up to 15 times better compared to the same food matrices treated with conventional methods. With regard to resveratrol, the corresponding dry pomace extract had a better free radical scavenging activity (49.12%) than the wet extract (39.8%). The drying pretreatment process seems to improve the antiradical activity, especially when the extraction by ASE is performed at temperatures above 100 °C. HPLC-DAD analysis showed that the diversity of the flavonoid and the non-flavonoid compounds found in the extracts was seriously affected by the extraction temperature and the pretreatment of the raw material. This diversity seems to play a key role in the scavenging activity demonstrated by the extracts. Our results emphasize the use of ASE as a promising method for the preparation of highly concentrated and bioactive phenolic extracts that could be used in several industrial applications. PMID:25322155

  14. Effect of the drying process on the intensification of phenolic compounds recovery from grape pomace using accelerated solvent extraction.

    PubMed

    Rajha, Hiba N; Ziegler, Walter; Louka, Nicolas; Hobaika, Zeina; Vorobiev, Eugene; Boechzelt, Herbert G; Maroun, Richard G

    2014-01-01

    In light of their environmental and economic interests, food byproducts have been increasingly exploited and valorized for their richness in dietary fibers and antioxidants. Phenolic compounds are antioxidant bioactive molecules highly present in grape byproducts. Herein, the accelerated solvent extraction (ASE) of phenolic compounds from wet and dried grape pomace, at 45 °C, was conducted and the highest phenolic compounds yield (PCY) for wet (16.2 g GAE/100 g DM) and dry (7.28 g GAE/100 g DM) grape pomace extracts were obtained with 70% ethanol/water solvent at 140 °C. The PCY obtained from wet pomace was up to two times better compared to the dry byproduct and up to 15 times better compared to the same food matrices treated with conventional methods. With regard to resveratrol, the corresponding dry pomace extract had a better free radical scavenging activity (49.12%) than the wet extract (39.8%). The drying pretreatment process seems to improve the antiradical activity, especially when the extraction by ASE is performed at temperatures above 100 °C. HPLC-DAD analysis showed that the diversity of the flavonoid and the non-flavonoid compounds found in the extracts was seriously affected by the extraction temperature and the pretreatment of the raw material. This diversity seems to play a key role in the scavenging activity demonstrated by the extracts. Our results emphasize the use of ASE as a promising method for the preparation of highly concentrated and bioactive phenolic extracts that could be used in several industrial applications. PMID:25322155

  15. Identifying Human Disease Genes through Cross-Species Gene Mapping of Evolutionary Conserved Processes

    PubMed Central

    Poot, Martin; Badea, Alexandra; Williams, Robert W.; Kas, Martien J.

    2011-01-01

    Background Understanding complex networks that modulate development in humans is hampered by genetic and phenotypic heterogeneity within and between populations. Here we present a method that exploits natural variation in highly diverse mouse genetic reference panels in which genetic and environmental factors can be tightly controlled. The aim of our study is to test a cross-species genetic mapping strategy, which compares data from gene mapping in human patients with functional data obtained by QTL mapping in recombinant inbred mouse strains in order to prioritize human disease candidate genes. Methodology We exploit evolutionary conservation of developmental phenotypes to discover gene variants that influence brain development in humans. We studied corpus callosum volume in a recombinant inbred mouse panel (C57BL/6J×DBA/2J, BXD strains) using high-field-strength MRI technology. We aligned mouse mapping results for this neuro-anatomical phenotype with genetic data from patients with abnormal corpus callosum (ACC) development. Principal Findings From the 61 syndromes which involve an ACC, 51 human candidate genes have been identified. Through interval mapping, we identified a single significant QTL on mouse chromosome 7 for corpus callosum volume, with a QTL peak located between 25.5 and 26.7 Mb. Comparing the genes in this mouse QTL region with those associated with human syndromes (involving ACC) and those covered by copy number variations (CNV) yielded a single overlap, namely HNRPU in humans and Hnrpul1 in mice. Further analysis of corpus callosum volume in BXD strains revealed that the corpus callosum was significantly larger in BXD mice with a B genotype at the Hnrpul1 locus than in BXD mice with a D genotype at Hnrpul1 (F = 22.48, p < 9.87×10^-5). Conclusion This approach that exploits highly diverse mouse strains provides an efficient and effective translational bridge to study the etiology of human developmental disorders, such as autism and schizophrenia.

  16. Plasma inverse transition acceleration

    SciTech Connect

    Xie, Ming

    2001-06-18

    It can be proved fundamentally from the reciprocity theorem with which electromagnetism is endowed that corresponding to each spontaneous process of radiation by a charged particle there is an inverse process which defines a unique acceleration mechanism: from Cherenkov radiation to inverse Cherenkov acceleration (ICA) [1], from Smith-Purcell radiation to inverse Smith-Purcell acceleration (ISPA) [2], and from undulator radiation to inverse undulator acceleration (IUA) [3]. There is no exception. Yet, for nearly 30 years after each of the aforementioned inverse processes was clarified for laser acceleration, inverse transition acceleration (ITA), despite speculation [4], has remained the least understood, and above all, no practical implementation of ITA had been found, until now. Unlike all its counterparts, in which phase synchronism is established one way or another such that a particle can continuously gain energy from an acceleration wave, the ITA to be discussed here, termed plasma inverse transition acceleration (PITA), operates under a fundamentally different principle. As a result, the discovery of PITA was delayed for decades, waiting for a conceptual breakthrough in accelerator physics: the principle of alternating gradient acceleration [5, 6, 7, 8, 9, 10]. In fact, PITA was invented [7, 8] as one of several realizations of the new principle.

  17. Utilization of Workflow Process Maps to Analyze Gaps in Critical Event Notification at a Large, Urban Hospital.

    PubMed

    Bowen, Meredith; Prater, Adam; Safdar, Nabile M; Dehkharghani, Seena; Fountain, Jack A

    2016-08-01

    Stroke care is a time-sensitive workflow involving multiple specialties acting in unison, often relying on one-way paging systems to alert care providers. The goal of this study was to map and quantitatively evaluate such a system and address communication gaps with system improvements. A workflow process map of the stroke notification system at a large, urban hospital was created via observation and interviews with hospital staff. We recorded pager communication regarding 45 patients in the emergency department (ED), neuroradiology reading room (NRR), and a clinician residence (CR), categorizing transmissions as successful or unsuccessful (dropped or unintelligible). Data analysis and consultation with information technology staff and the vendor informed a quality intervention: replacing one paging antenna and adding another. Data from a 1-month post-intervention period were collected. Error rates before and after were compared using a chi-squared test. Seventy-five pages regarding 45 patients were recorded pre-intervention; 88 pages regarding 86 patients were recorded post-intervention. Initial transmission error rates in the ED, NRR, and CR were 40.0, 22.7, and 12.0%. Post-intervention, error rates were 5.1, 18.8, and 1.1%, a statistically significant improvement in the ED (p < 0.0001) and CR (p = 0.004) but not the NRR (p = 0.208). This intervention resulted in measurable improvement in pager communication to the ED and CR. While results in the NRR were not significant, this intervention bolsters the utility of workflow process maps. The workflow process map effectively defined communication failure parameters, allowing for systematic testing and intervention to improve communication in essential clinical locations. PMID:26667658
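The before/after comparison described above is a standard Pearson chi-squared test on a 2×2 contingency table. A minimal sketch in Python; the page counts below are hypothetical, since the abstract reports only rates and totals:

```python
def chi2_2x2(fail_pre, ok_pre, fail_post, ok_post):
    """Pearson chi-squared statistic for a 2x2 contingency table
    (rows: pre/post intervention; columns: failed/successful pages)."""
    table = [[fail_pre, ok_pre], [fail_post, ok_post]]
    total = fail_pre + ok_pre + fail_post + ok_post
    row = [sum(r) for r in table]
    col = [fail_pre + fail_post, ok_pre + ok_post]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical ED counts: 30/75 errors pre-intervention, 4/79 post
stat = chi2_2x2(30, 45, 4, 75)
print(stat > 3.841)  # exceeds the 5% critical value for 1 df -> significant
```

For one degree of freedom, a statistic above 3.841 corresponds to p < 0.05.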

  18. Machine processing of S-192 and supporting aircraft data: Studies of atmospheric effects, agricultural classifications, and land resource mapping

    NASA Technical Reports Server (NTRS)

    Thomson, F.

    1975-01-01

    Two tasks of machine processing of S-192 multispectral scanner data are reviewed. In the first task, the effects of changing atmospheric and base altitude on the ability to machine-classify agricultural crops were investigated. A classifier and atmospheric effects simulation model was devised and its accuracy verified by comparison of its predicted results with S-192 processed results. In the second task, land resource maps of a mountainous area near Cripple Creek, Colorado were prepared from S-192 data collected on 4 August 1973.

  19. World Stress Map of the Earth: a key to tectonic processes and technological applications

    NASA Astrophysics Data System (ADS)

    Fuchs, Karl; Müller, Birgit

    2001-08-01

    Modern civilisation explores and penetrates the interior of the Earth's crust, recovers from it and stores into it solids, fluids and gases to a hitherto unprecedented degree. Management of underground structures such as boreholes or reservoirs takes into account the existing stress, either to take advantage of it or at least to minimise the effects of man-made stress. This paper presents the World Map of Tectonic Stresses (in short: World Stress Map or WSM) as a fundamental geophysical database. The impact of the WSM is pointed out in the context of global tectonics, in seismic hazard quantification, and in a wide range of technological problems in industrial applications such as oil reservoir management and the stability of underground openings (tunnels, boreholes and waste disposal sites).

  20. Evaluation of Waveform Mapping as a Signal Processing Tool for Quantitative Ultrasonic NDE

    NASA Technical Reports Server (NTRS)

    Johnston, Patrick H.; Kishoni, Doron

    1993-01-01

    The mapping of one pulsed waveform into another, more desirable waveform by the application of a time-domain filter has been employed in a number of NDE situations. The primary goal of these applications has been to improve the range resolution of an ultrasonic signal for detection of echoes arising from particular interfaces masked by the response of the transducer. The work presented here addresses the use of this technique for resolution enhancement in imaging situations and in mapping signals from different transducers to a common target waveform, allowing maintenance of quantitative calibration of ultrasonic systems. We also describe the use of this technique in terms of the frequency analysis of the resulting waveforms.

  1. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three-level parallel middleware that can be interfaced to a large-scale global design environment for code-independent, multidisciplinary analysis using high-fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computational tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  2. Production of a water quality map of Saginaw Bay by computer processing of LANDSAT-2 data

    NASA Technical Reports Server (NTRS)

    Mckeon, J. B.; Rogers, R. H.; Smith, V. E.

    1977-01-01

    Surface truth and LANDSAT measurements collected July 31, 1975, for Saginaw Bay were used to demonstrate a technique for producing a color-coded water quality map. On this map, color was used as a code to quantify five discrete ranges in the following water quality parameters: (1) temperature, (2) Secchi depth, (3) chloride, (4) conductivity, (5) total Kjeldahl nitrogen, (6) total phosphorus, (7) chlorophyll a, (8) total solids, and (9) suspended solids. The relationship between LANDSAT measurements and water quality was established through a set of linear regression equations in which the water quality parameters are the dependent variables and the LANDSAT measurements are the independent variables. Although the procedure is scene- and surface-truth-dependent, it provides both a basis for extrapolating water quality parameters from point samples to unsampled areas and a synoptic view of water mass boundaries over the 3000 sq. km bay area made from one day's ship data that is superior, in many ways, to the traditional machine-contoured maps made from three days' ship data.
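The regression technique summarized above can be sketched as an ordinary least-squares fit per water quality parameter; the band radiances, coefficients, and the choice of Secchi depth below are synthetic placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "surface truth": 4 MSS band radiances per sampled station
bands = rng.uniform(10, 60, size=(40, 4))
true_coeffs = np.array([0.5, -0.2, 0.8, 0.1])   # invented for illustration
secchi = bands @ true_coeffs + 3.0 + rng.normal(0, 0.1, size=40)

# One linear regression per parameter: value ~ b0 + b1*band1 + ... + b4*band4
X = np.column_stack([np.ones(len(bands)), bands])
coeffs, *_ = np.linalg.lstsq(X, secchi, rcond=None)

# Extrapolate the fitted relationship to every unsampled pixel in the scene
pixels = rng.uniform(10, 60, size=(1000, 4))
predicted = np.column_stack([np.ones(len(pixels)), pixels]) @ coeffs
print(predicted.shape)  # -> (1000,)
```

This is the sense in which the method is scene- and surface-truth-dependent: the fitted coefficients are only valid for the scene and samples they were trained on.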

  3. The use of concept maps during knowledge elicitation in ontology development processes – the nutrigenomics use case

    PubMed Central

    Castro, Alexander Garcia; Rocca-Serra, Philippe; Stevens, Robert; Taylor, Chris; Nashar, Karim; Ragan, Mark A; Sansone, Susanna-Assunta

    2006-01-01

    Background Incorporation of ontologies into annotations has enabled 'semantic integration' of complex data, making explicit the knowledge within a certain field. One of the major bottlenecks in developing bio-ontologies is the lack of a unified methodology. Different methodologies have been proposed for different scenarios, but there is no agreed-upon standard methodology for building ontologies. The involvement of geographically distributed domain experts, the need for domain experts to lead the design process, the application of the ontologies and the life cycles of bio-ontologies are amongst the features not considered by previously proposed methodologies. Results Here, we present a methodology for developing ontologies within the biological domain. We describe our scenario, competency questions, results and milestones for each methodological stage. We introduce the use of concept maps during knowledge acquisition phases as a feasible transition between domain expert and knowledge engineer. Conclusion The contributions of this paper are the thorough description of the steps we suggest when building an ontology, example use of concept maps, consideration of applicability to the development of lower-level ontologies and application to decentralised environments. We have found that within our scenario concept maps played an important role in the development process. PMID:16725019

  4. Genome-Wide QTL Mapping for Wheat Processing Quality Parameters in a Gaocheng 8901/Zhoumai 16 Recombinant Inbred Line Population

    PubMed Central

    Jin, Hui; Wen, Weie; Liu, Jindong; Zhai, Shengnan; Zhang, Yan; Yan, Jun; Liu, Zhiyong; Xia, Xianchun; He, Zhonghu

    2016-01-01

    Dough rheological and starch pasting properties play an important role in determining processing quality in bread wheat (Triticum aestivum L.). In the present study, a recombinant inbred line (RIL) population derived from a Gaocheng 8901/Zhoumai 16 cross grown in three environments was used to identify quantitative trait loci (QTLs) for dough rheological and starch pasting properties evaluated by Mixograph, Rapid Visco-Analyzer (RVA), and Mixolab parameters using the wheat 90 and 660 K single nucleotide polymorphism (SNP) chip assays. A high-density linkage map constructed with 46,961 polymorphic SNP markers from the wheat 90 and 660 K SNP assays spanned a total length of 4121 cM, with an average chromosome length of 196.2 cM and marker density of 0.09 cM/marker; 6596 new SNP markers were anchored to the bread wheat linkage map, with 1046 and 5550 markers from the 90 and 660 K SNP assays, respectively. Composite interval mapping identified 119 additive QTLs on 20 chromosomes except 4D; among them, 15 accounted for more than 10% of the phenotypic variation across two or three environments. Twelve QTLs for Mixograph parameters, 17 for RVA parameters and 55 for Mixolab parameters were new. Eleven QTL clusters were identified. The closely linked SNP markers can be used in marker-assisted wheat breeding in combination with the Kompetitive Allele Specific PCR (KASP) technique for improvement of processing quality in bread wheat. PMID:27486464

  5. Towards the effect of transverse inhomogeneity of electromagnetic pulse on the process of ion acceleration in the RPDA regime

    NASA Astrophysics Data System (ADS)

    Lezhnin, K. V.; Kamenets, F. F.; Beskin, V. S.; Kando, M.; Esirkepov, T. Z.; Bulanov, S. V.

    2015-05-01

    The stability of acceleration of ions in the RPDA regime against a transverse shift of the cluster target relative to Gaussian and super-Gaussian laser pulses is considered. It is shown that the maximum energy of the ions decreases as the shift increases, since the target escapes the acceleration domain. The effect of self-focusing for the super-Gaussian pulse profile is found and interpreted. An analytical approach based on the relativistic mirror model is developed. We also conduct PIC simulations that support our theoretical estimates. The results obtained can be applied to the optimization of ion acceleration by laser radiation pressure with mass-limited targets.

  6. Application of ERTS images and image processing to regional geologic problems and geologic mapping in northern Arizona

    NASA Technical Reports Server (NTRS)

    Goetz, A. F. H. (Principal Investigator); Billingsley, F. C.; Gillespie, A. R.; Abrams, M. J.; Squires, R. L.; Shoemaker, E. M.; Lucchitta, I.; Elston, D. P.

    1975-01-01

    The author has identified the following significant results. Computer image processing was shown to be both valuable and necessary in the extraction of the proper subset of the 200 million bits of information in an ERTS image to be applied to a specific problem. Spectral reflectivity information obtained from the four MSS bands can be correlated with in situ spectral reflectance measurements after path radiance effects have been removed and a proper normalization has been made. A detailed map of the major fault systems in a 90,000 sq km area in northern Arizona was compiled from high altitude photographs and pre-existing published and unpublished map data. With the use of ERTS images, three major fault systems, the Sinyala, Bright Angel, and Mesa Butte, were identified and their full extent measured. A byproduct of the regional studies was the identification of possible sources of shallow ground water, a scarce commodity in these regions.

  7. Plasma accelerators

    SciTech Connect

    Ruth, R.D.; Chen, P.

    1986-03-01

    In this paper we discuss plasma accelerators which might provide high gradient accelerating fields suitable for TeV linear colliders. In particular we discuss two types of plasma accelerators which have been proposed, the Plasma Beat Wave Accelerator and the Plasma Wake Field Accelerator. We show that the electric fields in the plasma for both schemes are very similar, and thus the dynamics of the driven beams are very similar. The differences appear in the parameters associated with the driving beams. In particular to obtain a given accelerating gradient, the Plasma Wake Field Accelerator has a higher efficiency and a lower total energy for the driving beam. Finally, we show for the Plasma Wake Field Accelerator that one can accelerate high quality low emittance beams and, in principle, obtain efficiencies and energy spreads comparable to those obtained with conventional techniques.

  8. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    The existing research on acceleration control mainly focuses on optimization of the velocity trajectory with respect to a criterion formulation that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system for the EV (including the PMSM, the inverter and the battery) is modeled to obtain a detailed energy-consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle-speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control solved by the analytical algorithm and that solved by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
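The trade-off the paper optimizes, energy consumption versus acceleration time via a penalty term, can be illustrated with a toy speed-grid search. All vehicle parameters and the efficiency model below are invented for illustration, and with speed as the only state the search separates per grid step rather than requiring the paper's full DP with a time constraint:

```python
# Toy minimum-energy acceleration schedule on a fixed speed grid.
# All vehicle parameters below are illustrative, not from the paper.
m, dv, v_target = 1500.0, 1.0, 20.0   # mass (kg), speed step and target (m/s)
accels = [0.5, 1.0, 1.5, 2.0]         # candidate accelerations (m/s^2)
lam = 500.0                           # time-penalty weight (J per second)

def drag(v):                          # rolling + aerodynamic resistance (N)
    return 200.0 + 0.4 * v * v

def step_cost(v, a):
    """Energy plus time penalty for accelerating from v to v+dv at rate a."""
    dt = dv / a
    vm = v + dv / 2                       # mid-step speed
    force = m * a + drag(vm)
    eff = max(0.3, 0.9 - 1e-4 * force)    # efficiency drops at high torque
    return force * vm * dt / eff + lam * dt

# With speed as the only state, the optimisation separates per grid step;
# a full DP (as in the paper) would also carry elapsed time as a state.
plan, total = [], 0.0
for k in range(int(v_target / dv)):
    cost, a = min((step_cost(k * dv, a), a) for a in accels)
    plan.append(a)
    total += cost
print(len(plan), plan[0], plan[-1])
```

In this toy model the chosen acceleration falls from 1.5 m/s² at low speed to 1.0 m/s² near the target, a crude analogue of the optimal torque-versus-speed curve described above.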

  9. Formation Mechanisms, Structure, and Properties of HVOF-Sprayed WC-CoCr Coatings: An Approach Toward Process Maps

    NASA Astrophysics Data System (ADS)

    Varis, T.; Suhonen, T.; Ghabchi, A.; Valarezo, A.; Sampath, S.; Liu, X.; Hannula, S.-P.

    2014-08-01

    Our study focuses on understanding the damage tolerance and performance reliability of WC-CoCr coatings. In this paper, the formation of HVOF-sprayed tungsten carbide-based cermet coatings is studied through an integrated strategy: first-order process maps are created by using online diagnostics to assess particle states in relation to process conditions. Coating properties such as hardness, wear resistance, elastic modulus, residual stress, and fracture toughness are discussed with the goal of establishing a linkage between properties and particle characteristics via second-order process maps. A strong influence of particle state on the mechanical properties, wear resistance, and residual stress state of the coating was observed. Within the processing window used (particle temperature ranged from 1687 to 1831 °C and particle velocity from 577 to 621 m/s), the coating hardness varied from 1021 to 1507 HV and the modulus from 257 to 322 GPa. The variation in the coating's mechanical state is suggested to relate to microstructural changes arising from carbide dissolution, which affects the properties of the matrix and, on the other hand, the cohesive properties of the lamellae. The complete tracking of the coating particle state and its linking to mechanical properties and residual stresses enables coating design with desired properties.

  10. Social comparison processes, narrative mapping and their shaping of the cancer experience: a case study of an elite athlete.

    PubMed

    Sparkes, Andrew C; Pérez-Samaniego, Víctor; Smith, Brett

    2012-09-01

    Drawing on data generated by life history interviews and fieldwork observations we illuminate the ways in which a young elite athlete named David (a pseudonym) gave meaning to his experiences of cancer that eventually led to his death. Central to this process were the ways in which David utilized both social comparisons and a narrative map provided by the published autobiography of Lance Armstrong (2000). Our analysis reveals the selective manner in which social comparison processes operated around the following key dimensions: mental attitude to treatment; the sporting body; the ageing body; and physical appearance. The manner in which different comparison targets were chosen, the ways in which these were framed by Armstrong's autobiography, and the work that the restitution narrative as an actor did in this process are also examined. Some reflections are offered regarding the experiential consequences of the social comparison processes utilized by David when these are shaped by specific forms of embodiment and selective narrative maps of cancer survival. PMID:22199179

  11. Mapping of geomorphic processes on abandoned fields and cultivated land in small catchments in semi-arid Spain

    NASA Astrophysics Data System (ADS)

    Geißler, C.; Ries, J. B.; Marzolff, I.

    2009-04-01

    In semi-arid landscapes, vegetation succession on abandoned agricultural land is a long-lasting process owing to the water deficit that persists for most of the year. During this phase of succession, geomorphic processes such as the formation and development of rills and gullies lead to a more or less constant deterioration of the abandoned land. Soil degradation by flowing water also takes place on currently cultivated land and under quasi-natural vegetation. In small catchments such as gully catchments, the topography and the land cover (abandoned land, cultivated land, quasi-natural vegetation) are highly important factors in gully formation and soil degradation. Another important point is the distribution of the different land cover units and therefore the connectivity of the catchment, as described by Bracken & Croke (2007). In this study, 11 catchments of single gullies have been mapped geomorphologically and compared to the rate of gully development derived from small-format aerial photography. It could be shown that there is a high variability of processes due to differences in topography and in the way the land is or has been cultivated. On abandoned land, geomorphic processes are highly active and enhance or even predetermine the direction of headcut movement. Another result is that geomorphological mapping of these gully catchments revealed interactions and dependencies of linear erosion features, such as the connection to the main drainage line, e.g. the gully. In the larger of the observed catchments (>5 ha) it became clear that some catchments have morphological features that tend to enhance connectivity (long rills, shallow drainage lines) and some have features that tend to restrict connectivity (terraces, dense vegetation). In "more connected" catchments the retreat rate of the headcut is generally higher. By the method of geomorphological mapping, valuable information about the soil-degrading processes can be obtained.

  12. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2015-09-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting, may not be adequate. Multivariate geostatistical methods can be more significant, especially when studying secondary variables, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitation from 1984 to 2014 for 268 meteorological stations in Turkey has been used to construct country-wide maps. Besides linear regression, inverse-square-distance weighting and ordinary co-Kriging (OCK) have been used and compared with each other. Elevation, slope, and aspect data for each station have also been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
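Of the interpolators compared above, inverse-square-distance weighting is the simplest to sketch (co-Kriging additionally models cross-covariances with the secondary variables). The station coordinates and precipitation values below are invented:

```python
def idw(stations, target, power=2):
    """Inverse-distance-weighted estimate at `target` from (x, y, value) stations."""
    num = den = 0.0
    for x, y, value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value          # exact hit: return the station value
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

# Toy annual-precipitation stations (x, y in km; value in cm) -- illustrative
stations = [(0, 0, 40.0), (10, 0, 55.0), (0, 10, 60.0), (10, 10, 45.0)]
print(round(idw(stations, (5, 5)), 1))  # -> 50.0
```

At the symmetric centre point all four weights are equal, so the estimate reduces to the plain mean of the station values.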

  13. Characterization of hot deformation behavior of brasses using processing maps: Part I. α Brass

    NASA Astrophysics Data System (ADS)

    Padmavardhani, D.; Prasad, Y. V. R. K.

    1991-12-01

    The constitutive flow behavior of α brass in the temperature range of 500°C to 850°C and the strain rate range of 0.001 to 100 s⁻¹ has been characterized with the help of a power dissipation map generated on the basis of the principles of the Dynamic Materials Model. The map revealed a domain of dynamic recrystallization in the temperature range of 750°C to 850°C and in the strain rate range of 0.001 to 1 s⁻¹, with a maximum efficiency of power dissipation of about 54 pct. The optimum hot working conditions are 850°C and 0.001 s⁻¹, and these match those generally employed in industrial practice. In the temperature range of 550°C to 750°C and at strain rates lower than 0.01 s⁻¹, the efficiency of power dissipation decreases with decreasing strain rate, with its minimum at 650°C. In this regime, solute drag effects similar to dynamic strain aging occur and impair the hot workability. The material undergoes microstructural instabilities at temperatures of 500°C to 650°C and at strain rates of 10 to 100 s⁻¹, as predicted by the continuum instability criterion. The manifestations of the instabilities have been observed to be adiabatic shear bands.
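In the Dynamic Materials Model underlying such maps, the efficiency of power dissipation is η = 2m/(m+1), where m = ∂(ln σ)/∂(ln ε̇) is the strain-rate sensitivity of the flow stress. A sketch with invented flow-stress data:

```python
import math

def dissipation_efficiency(stress, strain_rate):
    """Efficiency of power dissipation in the Dynamic Materials Model:
    eta = 2m/(m+1), with m = d(ln sigma)/d(ln edot) estimated here by a
    log-log finite difference between adjacent data points."""
    etas = []
    for i in range(len(stress) - 1):
        m = (math.log(stress[i + 1]) - math.log(stress[i])) / \
            (math.log(strain_rate[i + 1]) - math.log(strain_rate[i]))
        etas.append(2 * m / (m + 1))
    return etas

# Illustrative flow-stress data at one temperature (MPa vs. s^-1)
strain_rate = [0.001, 0.01, 0.1, 1.0]
stress = [20.0, 35.0, 60.0, 100.0]
print([round(e, 2) for e in dissipation_efficiency(stress, strain_rate)])
# -> [0.39, 0.38, 0.36]
```

The quoted peak efficiency of about 54 pct corresponds to m ≈ 0.37, since solving 2m/(m+1) = 0.54 gives m = 0.54/1.46.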

  14. Geomorphology, acoustic backscatter, and processes in Santa Monica Bay from multibeam mapping.

    PubMed

    Gardner, James V; Dartnell, Peter; Mayer, Larry A; Hughes Clarke, John E

    2003-01-01

    Santa Monica Bay was mapped in 1996 using a high-resolution multibeam system, providing the first substantial update of the submarine geomorphology since the initial compilation by Shepard and Emery [(1941) Geol. Soc. Amer. Spec. Paper 31]. The multibeam mapping generated not only high-resolution bathymetry, but also coregistered, calibrated acoustic backscatter at 95 kHz. The geomorphology has been subdivided into six provinces: shelf, marginal plateau, submarine canyon, basin slope, apron, and basin. The dimensions, gradients, and backscatter characteristics of each province are described and related to a combination of tectonics, climate, sea level, and sediment supply. Fluctuations of eustatic sea level have had a profound effect on the area: by periodically eroding the surface of the Santa Monica plateau, by extending the mouth of the Los Angeles River to various locations along the shelf break, and by connecting submarine canyons to rivers. A wetter glacial climate undoubtedly supplied more sediment to the rivers, which then transported the increased sediment load to the low-stand coastline and canyon heads. The trends of Santa Monica Canyon and several bathymetric highs suggest a complex tectonic stress field that has controlled the various segments. There is no geomorphic evidence to suggest that Redondo Canyon is fault controlled. The San Pedro fault can be extended more than 30 km to the northwest by the alignment of a series of bathymetric highs and abrupt changes in direction of channel thalwegs. PMID:12648948

  15. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    PubMed

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

    In this paper, we present a real-time preprocessing algorithm for the enhancement of endoscopic images. A novel dictionary-based color mapping algorithm is used for reproducing the color information from a theme image. The theme image is selected from a nearby anatomical location. A database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the change of the theme image. This method is used on low-contrast grayscale white-light images and raw narrow-band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. Statistical visual representations and universal image-quality measures show that the proposed method can highlight the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, the structural similarity index, the mean structural similarity index, and structure and hue similarity. The color enhancement was measured using a color enhancement factor that shows considerable improvement. The proposed algorithm has low, linear time complexity, which results in higher execution speed than related works. PMID:26759756
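A dictionary-style mapping of the kind described, gray level to theme color, can be sketched as a lookup table built from a theme image. The random arrays below stand in for real endoscopy frames, and the luminance proxy is an assumption, not the paper's method:

```python
import numpy as np

def build_color_map(theme_gray, theme_rgb, levels=256):
    """Build a gray-level -> RGB lookup table from a theme image:
    each gray level maps to the mean theme colour of pixels at that level."""
    lut = np.zeros((levels, 3))
    overall = theme_rgb.reshape(-1, 3).mean(axis=0)
    for g in range(levels):
        mask = theme_gray == g
        lut[g] = theme_rgb[mask].mean(axis=0) if mask.any() else overall
    return lut

def colorize(gray, lut):
    return lut[gray]            # apply the lookup table pixel-wise

rng = np.random.default_rng(1)
theme_rgb = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
theme_gray = theme_rgb.mean(axis=2).astype(np.uint8)   # crude luminance proxy
lut = build_color_map(theme_gray, theme_rgb)

gray_endo = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
print(colorize(gray_endo, lut).shape)  # -> (32, 32, 3)
```

Because the table is rebuilt whenever the theme image changes, the map stays "dynamic" in the sense the abstract describes, while lookup itself remains a constant-time, per-pixel operation.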

  16. Using Medical Text Extraction, Reasoning and Mapping System (MTERMS) to Process Medication Information in Outpatient Clinical Notes

    PubMed Central

    Zhou, Li; Plasek, Joseph M; Mahoney, Lisa M; Karipineni, Neelima; Chang, Frank; Yan, Xuemin; Chang, Fenny; Dimaggio, Dana; Goldman, Debora S.; Rocha, Roberto A.

    2011-01-01

    Clinical information is often coded using different terminologies and therefore is not interoperable. Our goal is to develop a general natural language processing (NLP) system, called the Medical Text Extraction, Reasoning and Mapping System (MTERMS), which encodes clinical text using different terminologies and simultaneously establishes dynamic mappings between them. MTERMS applies a modular, pipeline approach flowing from a preprocessor, semantic tagger, terminology mapper, context analyzer, and parser to structure input clinical notes. Evaluators manually reviewed 30 free-text and 10 structured outpatient clinical notes and compared them to MTERMS output. MTERMS achieved an overall F-measure of 90.6 for free-text notes and 94.0 for structured notes for medication and temporal information. The local medication terminology had 83.0% coverage, compared to RxNorm's 98.0% coverage, for free-text notes. 61.6% of mappings between the terminologies are exact matches. Capture of duration was significantly improved (91.7% vs. 52.5%) over systems in the third i2b2 challenge. PMID:22195230
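The F-measure quoted above is the harmonic mean of precision and recall. A minimal sketch; the counts below are hypothetical, chosen only to reproduce a 90.6 score, and are not from the study:

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-measure from true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts for extracted medication mentions
print(round(100 * f_measure(tp=453, fp=47, fn=47), 1))  # -> 90.6
```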

  17. A comparison of real-time blade-element and rotor-map helicopter simulations using parallel processing

    NASA Technical Reports Server (NTRS)

    Corliss, Lloyd; Du Val, Ronald W.; Gillman, Herbert, III; Huynh, Loc C.

    1990-01-01

    In recent efforts by NASA, the Army, and Advanced Rotorcraft Technology, Inc. (ART), the application of parallel processing techniques to real-time simulation has been studied. Traditionally, real-time helicopter simulations have omitted the modeling of high-frequency phenomena in order to achieve real-time operation on affordable computers. Parallel processing technology can now provide the means for significantly improving the fidelity of real-time simulation, and one specific area for improvement is the modeling of rotor dynamics. This paper focuses on the results of a piloted simulation in which a traditional rotor-map mathematical model was compared with a more sophisticated blade-element mathematical model that had been implemented using parallel processing hardware and software technology.

  18. Hot deformation behavior of uniform fine-grained GH4720Li alloy based on its processing map

    NASA Astrophysics Data System (ADS)

    Yu, Qiu-ying; Yao, Zhi-hao; Dong, Jian-xin

    2016-01-01

    The hot deformation behavior of a uniform fine-grained GH4720Li alloy was studied in the temperature range from 1040 to 1130°C and the strain-rate range from 0.005 to 0.5 s⁻¹ using hot compression testing. Processing maps were constructed on the basis of the compression data and a dynamic materials model. Considerable flow softening associated with superplasticity was observed at strain rates of 0.01 s⁻¹ or lower. According to the processing map and observations of the microstructure, the uniform fine-grained microstructure remains intact at 1100°C or lower because of easily activated dynamic recrystallization (DRX), whereas obvious grain growth is observed at 1130°C. Metallurgical instabilities in the form of non-uniform microstructures under higher and lower Zener-Hollomon parameters are induced by local plastic flow and by faster local dissolution of primary γ', respectively. The optimum processing conditions at all of the investigated strains are proposed as 1090-1130°C at 0.08-0.5 s⁻¹ or 0.005-0.008 s⁻¹, and 1040-1085°C at 0.005-0.06 s⁻¹.

  19. Implementation of a semi-automated post-processing system for parametric MRI mapping of human breast cancer.

    PubMed

    Lee, Robert E; Welch, E Brian; Cobb, Jared G; Sinha, Tuhin; Gore, John C; Yankeelov, Thomas E

    2009-08-01

    Magnetic resonance imaging (MRI) investigations of breast cancer incorporate computationally intense techniques to develop parametric maps of pathophysiological tissue characteristics. Common approaches employ, for example, quantitative measurements of T1, the apparent diffusion coefficient, and kinetic modeling based on dynamic contrast-enhanced MRI (DCE-MRI). In this paper, an integrated medical image post-processing and archive system (MIPAS) is presented. MIPAS demonstrates how image post-processing and user-interface programs, written in the Interactive Data Language (IDL) with data storage provided by a Microsoft Access database and the file system, can reduce the turnaround time for creating MRI parametric maps and provide additional organization for clinical trials. The results of developing the MIPAS are discussed, including potential limitations of the use of IDL for the application framework and how the MIPAS design supports extension to other programming languages and imaging modalities. We also show that network storage of images and metadata incurs a significant (p < 0.05) increase in data retrieval time compared to collocated storage. The system shows promise for becoming both a robust research picture archival and communications system working with the standard hospital PACS and an image post-processing environment that extends to other medical imaging modalities. PMID:18446412

  20. Direct Current Accelerators for Industrial Applications

    NASA Astrophysics Data System (ADS)

    Hellborg, Ragnar; Whitlow, Harry J.

    2011-02-01

    Direct current accelerators form the basis of many front-line industrial processes. They have many advantages that have kept them at the forefront of technology for many decades, such as a small and easily managed environmental footprint. In this article, the basic principles of the different subsystems (ion and electron sources, high voltage generation, control, etc.) are outlined. Some well-known (ion implantation and polymer processing) and lesser-known (electron beam lithography and particle-induced X-ray aerosol mapping) applications are reviewed.