Science.gov

Sample records for accelerated processing map

  1. 77 FR 21991 - Federal Housing Administration (FHA): Multifamily Accelerated Processing (MAP)-Lender and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... URBAN DEVELOPMENT Federal Housing Administration (FHA): Multifamily Accelerated Processing (MAP)--Lender and Underwriter Eligibility Criteria and Credit Watch for MAP Lenders AGENCY: Office of the Assistant... processes for determining lender and underwriter eligibility and tier qualification for MAP...

  2. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU, with up to a 22x speedup depending on the computational task. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with existing methods: To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
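
    A quick way to sanity-check the paper's GPU-vs-CPU precision comparison on any machine is to contrast single-precision arithmetic (typical of GPU pipelines) with double precision on a large reduction. The sketch below is an illustrative stand-in, not the authors' Matlab/GPU code.

    ```python
    # Illustrative stand-in (not the authors' Matlab/GPU code): compare a large
    # reduction in float32, typical of GPU pipelines, against float64.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_normal(10_000_000)

    sum64 = np.sum(data.astype(np.float64))      # CPU-style double precision
    sum32 = np.sum(data.astype(np.float32))      # GPU-style single precision

    rel_err = abs(sum64 - float(sum32)) / max(abs(sum64), 1e-30)
    print(f"float64 sum: {sum64:.6f}")
    print(f"float32 sum: {float(sum32):.6f}")
    print(f"relative discrepancy: {rel_err:.2e}")
    ```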

  3. Asymmetric neighborhood functions accelerate ordering process of self-organizing maps

    SciTech Connect

    Ota, Kaiichiro; Aoki, Takaaki; Kurata, Koji; Aoyagi, Toshio

    2011-02-15

    A self-organizing map (SOM) algorithm can generate a topographic map from a high-dimensional stimulus space to a low-dimensional array of units. Because a topographic map preserves neighborhood relationships between the stimuli, the SOM can be applied to certain types of information processing such as data visualization. During the learning process, however, topological defects frequently emerge in the map. The presence of defects tends to drastically slow down the formation of a globally ordered topographic map. To remove such topological defects, it has been reported that an asymmetric neighborhood function is effective, but only in the simple case of mapping one-dimensional stimuli to a chain of units. In this paper, we demonstrate that even when high-dimensional stimuli are used, the asymmetric neighborhood function is effective for both artificial and real-world data. Our results suggest that applying the asymmetric neighborhood function to the SOM algorithm improves the reliability of the algorithm and enables it to process complicated, high-dimensional data.
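
    The idea of an asymmetric neighborhood function is easy to prototype. Below is a minimal SOM sketch in which the Gaussian neighborhood is centered slightly off the winning unit; the shift vector, learning rate, and schedules are illustrative assumptions, not the authors' exact scheme.

    ```python
    # Minimal SOM with an asymmetric Gaussian neighborhood: the neighborhood is
    # centered slightly off the winning unit (shift below). All rates and
    # schedules are illustrative, not the authors' exact scheme.
    import numpy as np

    rng = np.random.default_rng(1)
    grid = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2)
    W = rng.random((100, 3))                     # 10x10 units mapping 3-D stimuli
    shift = np.array([0.5, 0.0])                 # asymmetry: offset along +x

    for t in range(2000):
        x = rng.random(3)                        # random 3-D stimulus
        bmu = np.argmin(((W - x) ** 2).sum(1))   # best-matching unit
        sigma = 3.0 * np.exp(-t / 1000)          # shrinking neighborhood width
        center = grid[bmu] + shift               # center offset from the winner
        h = np.exp(-((grid - center) ** 2).sum(1) / (2 * sigma ** 2))
        W += 0.1 * h[:, None] * (x - W)          # standard SOM weight update
    ```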

  4. Asymmetric neighborhood functions accelerate ordering process of self-organizing maps

    NASA Astrophysics Data System (ADS)

    Ota, Kaiichiro; Aoki, Takaaki; Kurata, Koji; Aoyagi, Toshio

    2011-02-01

    A self-organizing map (SOM) algorithm can generate a topographic map from a high-dimensional stimulus space to a low-dimensional array of units. Because a topographic map preserves neighborhood relationships between the stimuli, the SOM can be applied to certain types of information processing such as data visualization. During the learning process, however, topological defects frequently emerge in the map. The presence of defects tends to drastically slow down the formation of a globally ordered topographic map. To remove such topological defects, it has been reported that an asymmetric neighborhood function is effective, but only in the simple case of mapping one-dimensional stimuli to a chain of units. In this paper, we demonstrate that even when high-dimensional stimuli are used, the asymmetric neighborhood function is effective for both artificial and real-world data. Our results suggest that applying the asymmetric neighborhood function to the SOM algorithm improves the reliability of the algorithm and enables it to process complicated, high-dimensional data.

  5. Acceleration mapping on Consort 5

    NASA Astrophysics Data System (ADS)

    Naumann, Robert J.

    1994-09-01

    The Consort 5 rocket carrying a set of commercial low-gravity experiments experienced a significant side thrust from an apparent burn-through of the second-stage motor just prior to cut-off. The resulting angular momentum could not be removed by the attitude rate control system, so the payload was left in an uncontrollable rocking/tumbling mode. Although the primary low-gravity mission requirements could not be met, it was hoped that some science could be salvaged by mapping the acceleration field over the vehicle so that each investigator could correlate his or her results with the acceleration environment at his or her experiment location. This required some detective work to obtain the body rates and moment-of-inertia ratios needed to solve the full set of Euler equations for a tri-axial rigid body. The techniques for acceleration mapping described in this paper may be applicable to other low-gravity missions.
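
    The mapping step described above can be sketched numerically: integrate the torque-free Euler equations for a tri-axial rigid body, then evaluate the acceleration at a payload point r as a = dw/dt x r + w x (w x r). The inertia ratios, body rates, and location below are illustrative, not the Consort 5 values.

    ```python
    # Integrate the torque-free Euler equations and evaluate the acceleration at
    # a body-fixed point r: a = dw/dt x r + w x (w x r). All values are
    # illustrative, not the Consort 5 inertia ratios or body rates.
    import numpy as np

    I = np.array([1.0, 2.0, 3.0])                # principal moments (illustrative)
    w = np.array([0.05, 0.3, 0.02])              # body rates, rad/s
    r = np.array([0.4, 0.0, 0.8])                # experiment location, m
    dt = 0.001

    def wdot(w):
        # Torque-free Euler equations in principal axes.
        return np.array([(I[1] - I[2]) * w[1] * w[2] / I[0],
                         (I[2] - I[0]) * w[2] * w[0] / I[1],
                         (I[0] - I[1]) * w[0] * w[1] / I[2]])

    for _ in range(10_000):                      # midpoint (RK2) integration
        k1 = wdot(w)
        w = w + dt * wdot(w + 0.5 * dt * k1)

    a = np.cross(wdot(w), r) + np.cross(w, np.cross(w, r))
    print("acceleration at r (m/s^2):", a, "|a| =", np.linalg.norm(a))
    ```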

  6. Interstellar Mapping and Acceleration Probe (IMAP)

    NASA Astrophysics Data System (ADS)

    Schwadron, Nathan

    2016-04-01

    Our piece of cosmic real estate, the heliosphere, is the domain of all human existence - an astrophysical case history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and, in concert with Voyager 1 and Voyager 2, is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (~5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX and INCA, will discover the substructure of the IBEX ribbon and will reveal global maps of our heliosphere in unprecedented resolution. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. The "A" in IMAP refers to acceleration of energetic particles. With its combination of highly sensitive pickup and suprathermal ion sensors, IMAP will provide the species and spectral coverage, as well as unprecedented temporal resolution, to associate emerging suprathermal tails with interplanetary structures and discover the underlying physical acceleration processes. These key measurements will provide what has been a critical missing piece, the suprathermal seed particles, in our understanding of particle acceleration to high energies.

  7. Accelerator simulation of astrophysical processes

    NASA Technical Reports Server (NTRS)

    Tombrello, T. A.

    1983-01-01

    Phenomena that involve accelerated ions in stellar processes and that can be simulated with laboratory accelerators are described. Stellar evolutionary phases, such as the CNO cycle, have been partially explored with accelerators, up to the consumption of He by alpha particle radiative capture reactions. Further experimentation is indicated on reactions featuring N-13(p,gamma)O-14, O-15(alpha,gamma)Ne-19, and O-14(alpha,p)F-17. Accelerated beams interacting with thin foils produce reaction products that permit a determination of possible elemental abundances in stellar objects. Additionally, isotopic ratios observed in chondrites can be duplicated with accelerator beam interactions, and thus constraints can be set on the conditions producing the meteorites. Data from isotopic fractionation in sputtering (i.e., blasting surface atoms from a material using a low-energy ion beam) lead to possible models for processes occurring in supernova explosions. Finally, molecules can be synthesized with accelerators and compared with spectroscopic observations of stellar winds.

  8. The US Muon Accelerator Program (MAP)

    SciTech Connect

    Bross, Alan D. (Fermilab)

    2010-12-01

    The US Department of Energy Office of High Energy Physics has recently approved a Muon Accelerator Program (MAP). The primary goal of this effort is to deliver a Design Feasibility Study for a Muon Collider after a 7 year R&D program. This paper presents a brief physics motivation for, and the description of, a Muon Collider facility and then gives an overview of the program. I will then describe in some detail the primary components of the effort.

  9. Accelerated stochastic diffusion processes

    NASA Astrophysics Data System (ADS)

    Garbaczewski, Piotr

    1990-07-01

    We give a purely probabilistic demonstration that all effects of non-random (external, conservative) forces on the diffusion process can be encoded in the Nelson ansatz for the second Newton law. Each random path of the process carries, together with its probabilistic weight, a complex-valued phase-accumulation weight. Summation (integration) of these weights over random paths leads, respectively, to the transition probability density and the transition amplitude between two spatial points in a given time interval. The Bohm-Vigier, Fenyes-Nelson-Guerra and Feynman descriptions of quantum particle behaviour are in fact equivalent.
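
    As a purely illustrative companion to the path-summation picture, the sketch below estimates a diffusion's transition probability density by simulating many random paths (Euler-Maruyama) with a drift derived from a conservative force; the quartic potential is an arbitrary choice, not one from the paper.

    ```python
    # Purely illustrative: estimate a transition probability density by summing
    # over sampled random paths (Euler-Maruyama) with drift b(x) = -V'(x).
    # The quartic double-well potential is an arbitrary choice.
    import numpy as np

    rng = np.random.default_rng(2)

    def V_prime(x):
        return x ** 3 - x                        # V(x) = x^4/4 - x^2/2

    T, n_steps, n_paths, x0 = 1.0, 200, 100_000, 0.0
    dt = T / n_steps

    x = np.full(n_paths, x0)
    for _ in range(n_steps):                     # one Euler-Maruyama step per loop
        x += -V_prime(x) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

    hist, edges = np.histogram(x, bins=60, range=(-3, 3), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    print("estimated p(x=1, T=1 | x0=0):", hist[np.argmin(abs(centers - 1.0))])
    ```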

  10. ESS Accelerator Cryoplant Process Design

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Arnold, P.; Hees, W.; Hildenbeutel, J.; Weisend, J. G., II

    2015-12-01

    The European Spallation Source (ESS) is a neutron-scattering facility being built with extensive international collaboration in Lund, Sweden. The ESS accelerator will deliver protons with 5 MW of power to the target at 2.0 GeV, with a nominal current of 62.5 mA. The superconducting part of the accelerator is about 300 meters long and contains 43 cryomodules. The ESS accelerator cryoplant (ACCP) will provide the cooling for the cryomodules and for the cryogenic distribution system that delivers the helium to the cryomodules. The ACCP will cover three cryogenic circuits: bath cooling for the cavities at 2 K, the thermal shields at around 40 K, and power-coupler thermalisation with 4.5 K forced helium cooling. The open competitive bid for the ACCP took place in 2014, with Linde Kryotechnik AG selected as the vendor. This paper summarizes the progress in ACCP development and engineering. The current status is presented, including final cooling requirements, preliminary process design, system configuration, machine concept and layout, main parameters and features, the solution for the acceptance tests, and an exergy and efficiency analysis.

  11. Interstellar Mapping and Acceleration Probe (IMAP)

    NASA Astrophysics Data System (ADS)

    Schwadron, N. A.; Opher, M.; Kasper, J.; Mewaldt, R.; Moebius, E.; Spence, H. E.; Zurbuchen, T. H.

    2016-11-01

    Our piece of cosmic real estate, the heliosphere, is the domain of all human existence - an astrophysical case history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX is the first mission to explore the global heliosphere and, in concert with Voyager 1 and Voyager 2, is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (˜5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. This paper summarizes the next quantum leap enabled by IMAP, which will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX and INCA, will discover the substructure of the IBEX ribbon and will reveal, with unprecedented resolution, global maps of our heliosphere. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. Voyager 2 moves outward in the same region of sky covered by a portion of the IBEX ribbon. Voyager 2's plasma measurements will create singular opportunities for discovery in the context of IMAP's global measurements. IMAP, like ACE before it, will be a keystone of the Heliophysics System Observatory by providing comprehensive measurements of interstellar neutral atoms and pickup ions, the solar wind distribution, composition, and magnetic field, as well as suprathermal ion and energetic particle observations.

  12. cudaMap: a GPU accelerated program for gene expression connectivity mapping

    PubMed Central

    2013-01-01

    Background Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. Results cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Conclusion Emerging ‘omics’ technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap
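
    The data-parallel structure that cudaMap exploits can be illustrated with a simplified connection score (not cudaMap's exact statistic): score a signed gene signature against many reference rank profiles at once with vectorized operations, the same pattern a GPU kernel would run.

    ```python
    # Simplified connection score (not cudaMap's exact statistic): score a signed
    # gene signature against many reference rank profiles at once; the vectorized
    # matrix product is the same data-parallel pattern a GPU kernel would run.
    import numpy as np

    rng = np.random.default_rng(3)
    n_genes, n_refs, sig_size = 10_000, 500, 50

    # Each reference profile ranks all genes (1 = most down, n = most up).
    ranks = np.argsort(rng.random((n_refs, n_genes)), axis=1).argsort(axis=1) + 1

    sig_genes = rng.choice(n_genes, sig_size, replace=False)   # signature genes
    sig_signs = rng.choice([-1, 1], sig_size)                  # up/down regulation

    centered = ranks[:, sig_genes] - (n_genes + 1) / 2         # center the ranks
    raw = centered @ sig_signs                                 # score per reference

    # Normalize by the largest score any signature of this size could achieve.
    max_mag = np.sort(np.abs(np.arange(1, n_genes + 1) - (n_genes + 1) / 2))[::-1]
    scores = raw / max_mag[:sig_size].sum()
    print("score range across references:", scores.min(), scores.max())
    ```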

  13. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system that has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single-molecule restriction maps from image datasets of immobilized, restriction-digested, and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity, and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  14. Accelerating Parameter Mapping with a Locally Low Rank Constraint

    PubMed Central

    Zhang, Tao; Pauly, John M.; Levesque, Ives R.

    2014-01-01

    Purpose: To accelerate MR parameter mapping (MRPM) using a locally low rank (LLR) constraint, and the combination of parallel imaging (PI) and the LLR constraint. Theory and Methods: An LLR method is developed for MRPM and compared with a globally low rank (GLR) method in a multi-echo spin-echo T2 mapping experiment. For acquisition with coil arrays, a combined LLR and PI method is proposed. The proposed method is evaluated in a variable flip angle T1 mapping experiment and compared with the LLR method and PI alone. Results: In the multi-echo spin-echo T2 mapping experiment, the LLR method is more accurate than the GLR method for acceleration factors 2 and 3, especially for tissues with high T2 values. Variable flip angle T1 mapping is achieved by acquiring datasets with 10 flip angles, each dataset accelerated by a factor of 6, and reconstructed by the proposed method with a small normalized root mean square error of 0.025. Conclusion: The LLR method is likely superior to the GLR method for MRPM. The proposed combined LLR and PI method has better performance than the two methods alone, especially with highly accelerated acquisition. PMID:24500817
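
    The LLR building block can be sketched as follows: soft-threshold the singular values of each small spatial block across the parameter dimension (e.g., echoes in T2 mapping). The block size and threshold are illustrative, and a full reconstruction would alternate this step with a data-consistency step.

    ```python
    # One locally-low-rank (LLR) shrinkage pass: soft-threshold singular values
    # of each spatial block across the parameter dimension (e.g., echoes). A full
    # reconstruction would alternate this with a data-consistency step.
    import numpy as np

    rng = np.random.default_rng(4)
    ny, nx, ne, b, lam = 64, 64, 8, 8, 0.1       # grid, echoes, block, threshold
    images = rng.standard_normal((ny, nx, ne))   # stand-in echo image series

    out = np.zeros_like(images)
    for i in range(0, ny, b):
        for j in range(0, nx, b):
            block = images[i:i+b, j:j+b].reshape(-1, ne)   # (b*b) x ne matrix
            U, s, Vt = np.linalg.svd(block, full_matrices=False)
            s = np.maximum(s - lam, 0.0)                   # soft threshold
            out[i:i+b, j:j+b] = ((U * s) @ Vt).reshape(b, b, ne)
    ```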

  15. Auroral plasma acceleration processes at Mars

    NASA Astrophysics Data System (ADS)

    Lundin, R.; Barabash, S.; Winningham, D.

    2012-09-01

    Following the first Mars Express (MEX) findings of auroral plasma acceleration above Martian magnetic anomalies [1, 2], a more detailed analysis is carried out of the physical processes that lead to plasma acceleration and of how they connect to the dynamo and energy-source regions. The ultimate energy source for Martian plasma acceleration is the solar wind. The question is, by what mechanisms are solar wind energy and momentum transferred into the magnetic flux tubes that connect to Martian magnetic anomalies? What are the key plasma acceleration processes that lead to aurora and the associated ionospheric plasma outflow from Mars? The experimental setup on MEX limits our capability to carry out "auroral physics" at Mars. However, with knowledge acquired from the Earth, we may draw some analogies with terrestrial auroral physics. Using the limited data set available, consisting primarily of ASPERA and MARSIS data, an interesting picture of aurora at Mars emerges. There are strong similarities between the accelerated/heated electrons and ions in the nightside high-altitude region above Mars and the electron/ion acceleration above terrestrial discrete aurora. Nearly monoenergetic downgoing electrons are observed in conjunction with nearly monoenergetic upgoing ions. Monoenergetic counterstreaming ions and electrons are the signature of plasma acceleration in quasi-static electric fields. However, compared to the Earth's aurora, where the auroral process is guided by a dipole field, aurora at Mars is expected to form complex patterns in the multipole environment governed by the Martian crustal magnetic field regions. Moreover, temporal/spatial scales are different at Mars. It is therefore of interest to mention another characteristic common to Earth and Mars: plasma acceleration by waves. Low-frequency (Alfvén) waves are a very powerful means of plasma acceleration in the Earth's magnetosphere. Low-frequency waves associated with plasma acceleration

  16. One map policy (OMP) implementation strategy to accelerate mapping of regional spatial planing (RTRW) in Indonesia

    NASA Astrophysics Data System (ADS)

    Hasyim, Fuad; Subagio, Habib; Darmawan, Mulyanto

    2016-06-01

    The preparation of spatial planning documents requires basic geospatial information and accurate thematic data. These issues have recently become important because spatial planning maps are an integral attachment of the draft regional regulation on spatial planning (PERDA). The geospatial information needed for the preparation of spatial planning maps can be divided into two major groups: (i) basic geospatial information (IGD), consisting of Indonesian topographic maps (RBI), coastal and marine environmental maps (LPI), and the geodetic control network; and (ii) thematic geospatial information (IGT). Currently, most local governments in Indonesia have not finished their draft regulations on spatial planning, owing in part to technical constraints. Constraints on spatial planning mapping include the availability of large-scale basic geospatial information, the availability of mapping guidelines, and human resources. The ideal conditions to be achieved for spatial planning maps are: (i) the availability of up-to-date geospatial information at the scales needed for spatial planning maps, (ii) mapping guidelines for spatial planning to support local governments in completing their PERDA, and (iii) capacity building of local government human resources to complete spatial planning maps. The OMP strategies formulated to achieve these conditions are: (i) accelerating provision of IGD at scales of 1:50,000, 1:25,000, and 1:5,000; (ii) accelerating the mapping and integration of thematic geospatial information (IGT) through stocktaking of availability and mapping guidelines; (iii) developing mapping guidelines and disseminating spatial utilization information; and (iv) training human resources in mapping technology.

  17. Experiment specific processing of residual acceleration data

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. I. D.

    1992-01-01

    To date, most Spacelab residual acceleration data collection projects have resulted in databases that are overwhelming to investigators of low-gravity experiments. This paper introduces a simple passive accelerometer system to measure low-frequency accelerations. Model responses for experiments using actual acceleration data are produced, and correlations are made between experiment response and the accelerometer time history in order to test the idea that recorded acceleration data and experimental responses can be usefully correlated. Spacelab 3 accelerometer data are used as input to a variety of experiment models, and sensitivity limits are obtained for particular experiment classes. The modeling results are being used to create experiment-specific residual acceleration data processing schemes for interested investigators.

  18. Symplectic maps and chromatic optics in particle accelerators

    SciTech Connect

    Cai, Yunhai

    2015-07-06

    Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to the third-order of the momentum deviation.
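
    A minimal example of the symplectic building blocks such maps compose is a linear rotation (the lattice) followed by a thin sextupole kick; both steps are exactly symplectic, so long-term tracking shows no spurious damping or growth. The tune, strength, and amplitude below are arbitrary illustrations.

    ```python
    # One-turn map in one transverse plane: linear rotation (the lattice)
    # followed by a thin sextupole kick; both steps are exactly symplectic.
    import numpy as np

    nu, k2 = 0.205, 1.0                          # tune and sextupole strength
    c, s = np.cos(2 * np.pi * nu), np.sin(2 * np.pi * nu)

    def one_turn(x, p):
        x, p = c * x + s * p, -s * x + c * p     # rotation (symplectic)
        return x, p - k2 * x ** 2                # thin-lens kick (symplectic)

    x, p = 0.01, 0.0
    for _ in range(10_000):                      # long-term tracking, no spurious
        x, p = one_turn(x, p)                    # damping or growth
    print(f"after 1e4 turns: x={x:.5f}, p={p:.5f}")
    ```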

  19. Symplectic maps and chromatic optics in particle accelerators

    DOE PAGES

    Cai, Yunhai

    2015-07-06

    Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to the third-order of the momentum deviation.

  20. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Kooney, Alex; Bjorkman, Gerry; Russell, Carolyn; Smelser, Jerry (Technical Monitor)

    2002-01-01

    In FSW (friction stir welding), the weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule in conjunction with process control ensures a consistent and predictable weld performance.
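
    The RPM-vs-IPM process map lends itself to a simple numerical sketch: grid the two axes, mark an acceptable-weld envelope, and take its centroid as the sweet spot. The heat-input criterion below is a made-up stand-in for real weld qualification data.

    ```python
    # Toy process map: grid RPM x IPM, mark an acceptable-weld envelope, and
    # take its centroid as the "sweet spot". The heat-input criterion is
    # invented, standing in for real weld qualification data.
    import numpy as np

    rpm = np.linspace(200, 1200, 101)            # spindle speed axis
    ipm = np.linspace(2, 20, 91)                 # travel speed axis
    R, I = np.meshgrid(rpm, ipm, indexing="ij")

    heat = R / I                                 # crude heat-input proxy
    ok = (heat > 40) & (heat < 120)              # acceptable-weld envelope (toy)

    print(f"sweet spot ~ {R[ok].mean():.0f} RPM at {I[ok].mean():.1f} IPM")
    ```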

  21. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Bjorkman, Gerry; Kooney, Alex; Russell, Carolyn

    2003-01-01

    The weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule in conjunction with process control ensures a consistent and predictable weld performance.

  22. Collaborative Concept Mapping Processes Mediated by Computer.

    ERIC Educational Resources Information Center

    Chiu, Chiung-Hui; Wu, Wei-Shuo; Huang, Chun-Chieh

    This paper reports on a study that investigated group learning processes in computer-supported collaborative concept mapping. Thirty 5th grade Taiwanese students were selected to attend a computer-mediated collaborative concept mapping activity. Dialog messages and map products tracked and recorded by the mapping system were analyzed. The…

  23. Mapping of acceleration field in FSA configuration of a LIS

    NASA Astrophysics Data System (ADS)

    Nassisi, V.; Delle Side, D.; Monteduro, L.; Giuffreda, E.

    2016-05-01

    The Front Surface Acceleration (FSA) obtained in Laser Ion Source (LIS) systems is one of the most interesting methods to produce accelerated protons and ions. We implemented a LIS to study the ion acceleration mechanisms. In this device, the plasma is generated by a KrF excimer laser operating at 248 nm, focused on an aluminum target mounted inside a vacuum chamber. The laser energy was varied from 28 to 56 mJ/pulse and focused onto the target by a 15 cm focal lens, forming a spot 0.05 cm in diameter. A high-impedance resistive probe was used to map the electric potential inside the chamber, near the target. To avoid the effect of plasma particles striking the probe, a PVC shield was installed; particles inevitably struck the shield, but their influence on the probe was negligible. We detected the time-resolved profiles of the electric potential while moving the probe from 4.7 cm to 6.2 cm with respect to the main target axis, with the height of the shield above the surface normal at the target symmetry center at about 3 cm. The corresponding electric field can be very important for elucidating the phenomenon responsible for the formation of the accelerating field. The field depends on the distance x as 1/x^1.85 at 28 mJ laser energy, 1/x^1.77 at 49 mJ, and 1/x^1.74 at 56 mJ; the dependence changes only slightly across the three cases, with the exponent decreasing at increasing laser energy. It is possible to hypothesize that the electric field strength stems from the contributions of an electrostatic and an induced field. Considering exclusively the induced field at the center of the created plasma, a strength of some tens of kV/m could be reached, which could deliver ions of up to 1 keV energy. These values were supported by measurements performed with an electrostatic barrier.
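
    An exponent like the 1/x^1.85 dependence quoted above is typically extracted by a log-log least-squares fit; the sketch below demonstrates the procedure on synthetic stand-in data, not the measured values.

    ```python
    # Recover a power-law exponent from probe data by a log-log least-squares
    # fit. The data below are synthetic stand-ins generated with exponent 1.85.
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.array([4.7, 5.0, 5.3, 5.6, 5.9, 6.2])        # probe positions, cm
    E = 3.2e4 * x ** -1.85 * (1 + 0.02 * rng.standard_normal(x.size))

    slope, intercept = np.polyfit(np.log(x), np.log(E), 1)
    print(f"fitted exponent: {-slope:.2f} (true value 1.85)")
    ```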

  24. Novel phases in an accelerated exclusion process

    NASA Astrophysics Data System (ADS)

    Dong, Jiajia; Klumpp, Stefan; Zia, Royce K. P.

    2013-03-01

    We introduce a class of distance-dependent interactions in an accelerated exclusion process (AEP) inspired by the cooperative speed-up observed in transcribing RNA polymerases. In the simplest scenario, each particle hops to the neighboring site if it is vacant and, when joining a cluster of particles, triggers the frontmost particle of the cluster to hop. Through both simulation and theoretical work, we discover that the steady state of the AEP displays a discontinuous transition under periodic boundary conditions: the system passes from being homogeneous (with augmented currents) to phase-segregated. More surprisingly, the current-density relation in the phase-segregated state is simply J = 1 - ρ, indicating that the particles (or holes) move at unit velocity despite the inclusion of long-range interactions. Supported by US NSF grants DMR-1104820 and DMR-1005417.
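
    The quoted hopping rule is simple to simulate. The toy script below implements it on a ring with random sequential updates and compares the measured current with the phase-segregated prediction J = 1 - ρ; the system size, density, and time normalization are illustrative choices.

    ```python
    # Toy AEP on a ring with random sequential updates: a particle hops into a
    # vacant site and, if it thereby joins a cluster, the cluster's frontmost
    # particle also hops. Size, density, and time normalization are illustrative.
    import numpy as np

    rng = np.random.default_rng(6)
    L, rho, sweeps = 200, 0.3, 500
    N = int(rho * L)
    occ = np.zeros(L, bool)
    occ[rng.choice(L, N, replace=False)] = True

    hops = 0
    for _ in range(sweeps * N):                  # N particle picks per sweep
        i = rng.choice(np.flatnonzero(occ))
        if not occ[(i + 1) % L]:
            occ[i], occ[(i + 1) % L] = False, True
            hops += 1
            j = (i + 2) % L
            if occ[j]:                           # joined a cluster: find front
                while occ[(j + 1) % L]:
                    j = (j + 1) % L
                occ[j], occ[(j + 1) % L] = False, True
                hops += 1

    J = hops / (sweeps * L)                      # hops per site per sweep
    print(f"rho={rho}: J ~ {J:.3f}; segregated-phase prediction 1-rho = {1-rho}")
    ```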

  25. Space charges can significantly affect the dynamics of accelerator maps

    NASA Astrophysics Data System (ADS)

    Bountis, Tassos; Skokos, Charalampos

    2006-10-01

    Space charge effects can be very important for the dynamics of intense particle beams, as they repeatedly pass through nonlinear focusing elements, aiming to maximize the beam's luminosity properties in the storage rings of a high energy accelerator. In the case of hadron beams, whose charge distribution can be considered as “frozen” within a cylindrical core of small radius compared to the beam's dynamical aperture, analytical formulas have been recently derived [C. Benedetti, G. Turchetti, Phys. Lett. A 340 (2005) 461] for the contribution of space charges within first order Hamiltonian perturbation theory. These formulas involve distribution functions which, in general, do not lead to expressions that can be evaluated in closed form. In this Letter, we apply this theory to an example of a charge distribution, whose effect on the dynamics can be derived explicitly and in closed form, both in the case of 2-dimensional as well as 4-dimensional mapping models of hadron beams. We find that, even for very small values of the “perveance” (strength of the space charge effect) the long term stability of the dynamics changes considerably. In the flat beam case, the outer invariant “tori” surrounding the origin disappear, decreasing the size of the beam's dynamical aperture, while beyond a certain threshold the beam is almost entirely lost. Analogous results in mapping models of beams with 2-dimensional cross section demonstrate that in that case also, even for weak tune depressions, orbital diffusion is enhanced and many particles whose motion was bounded now escape to infinity, indicating that space charges can impose significant limitations on the beam's luminosity.

  26. Details and justifications for the MAP concept specification for acceleration above 63 GeV

    SciTech Connect

    Berg, J. Scott

    2014-02-28

    The Muon Accelerator Program (MAP) requires a concept specification for each of the accelerator systems. The muon accelerators will bring the beam from a total energy of 63 GeV to the maximum energy that will fit on the Fermilab site. Justifications and supporting references are included, providing more detail than will appear in the concept specification itself.

  27. Construction of symplectic maps for nonlinear motion of particles in accelerators

    NASA Astrophysics Data System (ADS)

    Berg, J. S.; Warnock, R. L.; Ruth, R. D.; Forest, É.

    1994-01-01

    We explore an algorithm for the construction of symplectic maps to describe nonlinear particle motion in circular accelerators. We emphasize maps for motion over one or a few full turns, which may provide an economical way of studying long-term stability in large machines such as the Superconducting Super Collider (SSC). The map is defined implicitly by a mixed-variable generating function, represented as a Fourier series in betatron angle variables, with coefficients given as B-spline functions of action variables and the total energy. Despite the implicit definition, iteration of the map proves to be a fast process. The method is illustrated with a realistic model of the SSC. We report extensive tests of accuracy and iteration time in various regions of phase space, and demonstrate the results by using single-turn maps to follow trajectories symplectically for 10^7 turns on a workstation computer. The same method may be used to construct the Poincaré map of Hamiltonian systems in other fields of physics.

  28. Process Mapping: Tools, Techniques, & Critical Success Factors.

    ERIC Educational Resources Information Center

    Kalman, Howard K.

    2002-01-01

    Explains process mapping as an analytical tool and a process intervention that performance technologists can use to improve human performance by reducing error variance. Highlights include benefits of process mapping; and critical success factors, including organizational readiness, time commitment by participants, and the availability of a…

  29. Interstellar Mapping and Acceleration Probe (IMAP) - Its Time Has Come!

    NASA Astrophysics Data System (ADS)

    Schwadron, N.; Kasper, J. C.; Mewaldt, R. A.; Moebius, E.; Opher, M.; Spence, H. E.; Zurbuchen, T.

    2014-12-01

    Our piece of cosmic real estate, the heliosphere, is the domain of all human existence -- an astrophysical case history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and, in concert with Voyager 1 and Voyager 2, is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. The enigmatic IBEX ribbon is an unanticipated discovery demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX, will discover the substructure of the IBEX ribbon and will reveal global maps of our heliosphere in unprecedented resolution. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. Voyager 2 moves outward in the vicinity of the IBEX ribbon, and its plasma measurements will create singular opportunities for discovery in the context of IMAP's global measurements. IMAP, like ACE before it, will be a keystone of the Heliophysics System Observatory by providing comprehensive cosmic ray, energetic particle, pickup ion, suprathermal ion, neutral atom, solar wind, solar wind heavy ion, and magnetic field observations to diagnose the changing space environment and understand the fundamental origins of particle acceleration. Thus, IMAP is a mission whose time has come. IMAP is the highest-ranked next Solar Terrestrial Probe in the Decadal Survey.

  30. Speech processing using maximum likelihood continuity mapping

    SciTech Connect

    Hogden, J.E.

    2000-04-18

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  31. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  32. Mapping the Collaborative Research Process

    ERIC Educational Resources Information Center

    Kochanek, Julie Reed; Scholz, Carrie; Garcia, Alicia N.

    2015-01-01

    Despite significant federal investments in the production of high-quality education research, the direct use of that research in policy and practice is not evident. Some education researchers are increasingly employing collaborative research models that use structures and processes to integrate practitioners into the research process in an effort…

  33. Highly Efficient Proteolysis Accelerated by Electromagnetic Waves for Peptide Mapping

    PubMed Central

    Chen, Qiwen; Liu, Ting; Chen, Gang

    2011-01-01

    Proteomics will contribute greatly to the understanding of gene functions in the post-genomic era. In proteome research, protein digestion is a key procedure prior to mass spectrometry identification. During the past decade, a variety of electromagnetic waves have been employed to accelerate proteolysis. This review focuses on the recent advances and the key strategies of these novel proteolysis approaches for digesting and identifying proteins. The subjects covered include microwave-accelerated protein digestion, infrared-assisted proteolysis, ultraviolet-enhanced protein digestion, laser-assisted proteolysis, and future prospects. It is expected that these novel proteolysis strategies accelerated by various electromagnetic waves will become powerful tools in proteome research and will find wide applications in high throughput protein digestion and identification. PMID:22379392

  34. Ultrasonic acceleration of enzymatic processing of cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Enzymatic bio-processing of cotton generates significantly less hazardous wastewater effluents, which are readily biodegradable, but it also has several critical shortcomings that impede its acceptance by industries: expensive processing costs and slow reaction rates. It has been found that the intr...

  35. Rapid learning-based video stereolization using graphic processing unit acceleration

    NASA Astrophysics Data System (ADS)

    Sun, Tian; Jung, Cheolkon; Wang, Lei; Kim, Joongkyu

    2016-09-01

    Video stereolization has received much attention in recent years due to the lack of stereoscopic three-dimensional (3-D) content. Although video stereolization can enrich stereoscopic 3-D content, automatic two-dimensional-to-3-D conversion at low computational cost is hard to achieve. We propose rapid learning-based video stereolization using graphics processing unit (GPU) acceleration. We first generate an initial depth map based on learning from examples. Then, we refine the depth map using saliency and cross-bilateral filtering to make object boundaries clear. Finally, we perform depth-image-based rendering to generate stereoscopic 3-D views. To accelerate the computation of video stereolization, we provide a parallelizable hybrid GPU-central processing unit (CPU) solution suitable for running on the GPU. Experimental results demonstrate that the proposed method is nearly 180 times faster than CPU-based processing and achieves performance comparable to state-of-the-art methods.
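
    The cross-bilateral refinement step named above can be sketched directly: smooth the initial depth map with weights that combine spatial closeness and color similarity in the guide frame, so depth edges snap to object boundaries. The window size and sigmas below are illustrative.

    ```python
    # Cross-bilateral depth refinement sketch: weights combine spatial closeness
    # and similarity in the guide image. Window size and sigmas are illustrative.
    import numpy as np

    rng = np.random.default_rng(9)
    H, W, win, sig_s, sig_c = 48, 64, 5, 2.0, 0.1
    guide = rng.random((H, W))                   # stand-in for the color frame
    depth = rng.random((H, W))                   # stand-in initial depth map

    r = win // 2
    out = np.zeros_like(depth)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sig_s ** 2))
            w_c = np.exp(-(guide[y0:y1, x0:x1] - guide[y, x]) ** 2
                         / (2 * sig_c ** 2))
            w = w_s * w_c
            out[y, x] = (w * depth[y0:y1, x0:x1]).sum() / w.sum()
    ```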

  36. Computational Tools for Accelerating Carbon Capture Process Development

    SciTech Connect

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. The tools are used to develop an optimized process from rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

  37. Nuclear processes and accelerated particles in solar flares

    NASA Technical Reports Server (NTRS)

    Ramaty, R.

    1987-01-01

    Nuclear processes and particle acceleration in solar flares are discussed and the theory of gamma-ray and neutron production is reviewed. Gamma-ray, neutron, and charged-particle observations of solar flares are compared with predictions, and the implications of these comparisons for particle energy spectra, total numbers, anisotropies, electron-to-proton ratios, and acceleration mechanisms are considered. Elemental and isotopic abundances of the ambient gas derived from gamma-ray observations have also been compared to abundances obtained from observations of escaping accelerated particles and other sources.

  38. Granger-causality maps of diffusion processes

    NASA Astrophysics Data System (ADS)

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A.

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
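
    The linear starting point of the method is easy to reproduce: fit restricted and full autoregressive models by least squares and take GC(x -> y) = ln(var_restricted / var_full). The sketch below uses a synthetic coupled pair where x drives y; the paper's local linear treatment would apply the same ratio within state-space neighborhoods.

    ```python
    # Minimal global (linear) Granger causality: compare residual variances of
    # restricted and full AR models. Synthetic data with a planted x -> y link.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 5000
    x = np.zeros(n)
    y = np.zeros(n)
    for t in range(1, n):                        # y depends on past x: x -> y
        x[t] = 0.8 * x[t-1] + rng.standard_normal()
        y[t] = 0.5 * y[t-1] + 0.4 * x[t-1] + rng.standard_normal()

    def resid_var(target, regressors):
        beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
        return np.var(target - regressors @ beta)

    Y, Ylag, Xlag = y[1:], y[:-1, None], x[:-1, None]
    full = resid_var(Y, np.hstack([Ylag, Xlag]))
    restricted = resid_var(Y, Ylag)
    print(f"GC(x->y) = {np.log(restricted / full):.3f}")  # > 0: x helps predict y
    ```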

  39. Granger-causality maps of diffusion processes.

    PubMed

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.

  40. Field size dependent mapping of medical linear accelerator radiation leakage

    NASA Astrophysics Data System (ADS)

    Vũ Bezin, Jérémi; Veres, Attila; Lefkopoulos, Dimitri; Chavaudra, Jean; Deutsch, Eric; de Vathaire, Florent; Diallo, Ibrahima

    2015-03-01

    The purpose of this study was to investigate the suitability of a graphics library based model for the assessment of linear accelerator radiation leakage. Transmission through the shielding elements was evaluated using the build-up factor corrected exponential attenuation law, and the contribution from the electron guide was estimated using the approximation of a linear isotropic radioactive source. Model parameters were estimated by fitting a series of thermoluminescent dosimeter leakage measurements, performed up to 100 cm from the beam central axis along three directions. The distribution of leakage data at the patient plane reflected the architecture of the shielding elements. Thus, the maximum leakage dose was found under the collimator when only one jaw shielded the primary beam, and was about 0.08% of the dose at the isocentre. Overall, we observe that the main contributor to the leakage dose according to our model was the electron beam guide. The average difference between the measurements used to calibrate the model and the calculations from the model was about 7%. Finally, graphics library modelling is a ready and suitable way to estimate the leakage dose distribution on a personal computer. Such data could be useful for dosimetric evaluations in late effect studies.
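
    The two leakage ingredients named above, build-up-corrected exponential transmission and a linear isotropic source for the electron guide, can be sketched as follows; the geometry, attenuation coefficient, and constant build-up factor are illustrative assumptions, not the fitted values.

    ```python
    # Leakage-model sketch: build-up-corrected exponential attenuation through a
    # shield, plus a linear isotropic source integrated as many point sources.
    # mu, t, B, and the geometry are illustrative, not the paper's fitted values.
    import numpy as np

    mu, t, B = 0.06, 80.0, 1.5                   # 1/mm, shield thickness, build-up
    transmission = B * np.exp(-mu * t)           # attenuation through the head

    z = np.linspace(0.0, 1000.0, 201)            # guide segments along z, mm
    dz = z[1] - z[0]
    point = np.array([300.0, 1000.0])            # field point: (z, lateral), mm
    r2 = (z - point[0]) ** 2 + point[1] ** 2
    guide = np.sum(dz / (4 * np.pi * r2))        # line source, inverse-square sum

    print(f"shield transmission factor: {transmission:.2e}")
    print(f"guide line-source kernel (per unit source strength): {guide:.2e}")
    ```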

  41. Accelerating Computation of Large Biological Datasets using MapReduce Framework.

    PubMed

    Wang, Chao; Dai, Dong; Li, Xi; Wang, Aili; Zhou, Xuehai

    2016-04-05

    The maximal information coefficient (MIC) has been proposed to discover relationships and associations between pairs of variables. Accelerating the MIC calculation poses significant challenges for bioinformatics scientists, especially in genome sequencing and biological annotation. In this paper we explore a parallel approach that uses the MapReduce framework to improve the computing efficiency and throughput of the MIC computation. The acceleration system includes biological data storage on HDFS, preprocessing algorithms, a distributed memory cache mechanism, and the partitioning of MapReduce jobs. Building on this acceleration approach, we extend the traditional two-variable algorithm to a multiple-variable algorithm. The experimental results show that our parallel solution provides a linear speedup compared with the original algorithm, without affecting correctness or sensitivity.
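
    The map/reduce split can be illustrated with Python's multiprocessing standing in for Hadoop MapReduce, and a simple binned mutual information standing in for the full MIC statistic (MIC maximizes a normalized mutual information over many grid resolutions; only one grid is used here).

    ```python
    # Map/reduce sketch: multiprocessing stands in for Hadoop; binned mutual
    # information stands in for MIC (which searches over many grid resolutions).
    import numpy as np
    from multiprocessing import Pool
    from itertools import combinations

    rng = np.random.default_rng(8)
    DATA = rng.standard_normal((20, 1000))       # 20 variables, 1000 samples
    DATA[1] = DATA[0] ** 2 + 0.1 * rng.standard_normal(1000)  # planted relation

    def mi_pair(pair):                           # "map" task: score one pair
        i, j = pair
        c, _, _ = np.histogram2d(DATA[i], DATA[j], bins=10)
        p = c / c.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        mask = p > 0
        mi = (p[mask] * np.log(p[mask] / (px @ py)[mask])).sum()
        return (i, j, float(mi))

    if __name__ == "__main__":
        with Pool(4) as pool:                    # distribute the map tasks
            results = pool.map(mi_pair, combinations(range(20), 2))
        print(max(results, key=lambda r: r[2]))  # "reduce": strongest pair
    ```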

  42. A Psychophysiological Mapping of Cognitive Processes.

    DTIC Science & Technology

    1986-06-26

    Approved for public release. Prepared by John A. Stern, Ph.D., Robert Goldstein, Ph.D., and Lance O. Bauer, Ph.D., Washington University. Unclassified. Title: A Psychophysiological Mapping of Cognitive Processes. ... (Bradshaw & Nettleton, 1983; Bryden, 1982; Hellige, 1983) related to the specialized processing abilities of the left and right cerebral hemispheres.

  43. Accelerated Whole-Brain Multi-Parameter Mapping using Blind Compressed Sensing

    PubMed Central

    Bhave, Sampada; Lingala, Sajan Goud; Johnson, Casey P.; Magnotta, Vincent A.; Jacob, Mathews

    2015-01-01

    Purpose: To introduce a blind compressed sensing (BCS) framework to accelerate multi-parameter MR mapping, and to demonstrate its feasibility in high-resolution, whole-brain T1ρ and T2 mapping. Methods: BCS models the evolution of magnetization at every pixel as a sparse linear combination of bases in a dictionary. Unlike compressed sensing (CS), the dictionary and the sparse coefficients are jointly estimated from under-sampled data. The large number of non-orthogonal bases in BCS accounts for more complex signals than low-rank representations. The low degree of freedom of BCS, attributed to the sparse coefficients, translates to fewer artifacts at high acceleration factors (R). Results: In 2D retrospective under-sampling experiments, the mean square errors in T1ρ and T2 maps were observed to be within 0.1% up to R=10. BCS was observed to be more robust to patient-specific motion than other CS schemes and resulted in minimal degradation of parameter maps in the presence of motion. Our results suggest that BCS can provide an acceleration factor of 8 in prospective 3D imaging with reasonable reconstructions. Conclusion: BCS considerably reduces the scan time for multi-parameter mapping of the whole brain with minimal artifacts, and is more robust to motion-induced signal changes than current CS and PCA-based techniques. PMID:25850952

  44. Probabilistic earthquake acceleration and velocity maps for the United States and Puerto Rico

    USGS Publications Warehouse

    Algermissen, S.T.; Perkins, D.M.; Thenhaus, P.C.; Hanson, S.L.; Bender, B.L.

    1990-01-01

    The ground-motion maps presented here (maps A-D) show the expected earthquake-caused maximum horizontal acceleration and velocity in rock in the contiguous United States, Alaska, Hawaii, and Puerto Rico. There is a 90 percent probability that the maximum horizontal acceleration and velocity shown on the maps will not be exceeded in time periods of 50 and 250 years (average return periods for the expected ground motion of 474 and 2,372 years). Rock is taken here to mean material having a shear-wave velocity of between 0.75 and 0.90 kilometers per second (Algermissen and Perkins, 1976).
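
    The quoted return periods follow from a Poisson recurrence model: the probability that the mapped motion is not exceeded during an exposure time t is exp(-t/T), so 90 percent non-exceedance gives T = -t / ln(0.9). A quick check:

    ```python
    # Worked check of the return periods quoted above, assuming a Poisson model:
    # P(no exceedance in t years) = exp(-t/T)  =>  T = -t / ln(0.9) for P = 0.9.
    import math

    for t in (50, 250):
        T = -t / math.log(0.9)
        print(f"{t}-yr exposure, 90% non-exceedance -> return period ~ {T:.0f} yr")
    # prints ~474 and ~2372 years, matching the values in the map documentation.
    ```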

  45. Secondary electron emission from plasma processed accelerating cavity grade niobium

    NASA Astrophysics Data System (ADS)

    Basovic, Milos

    Advances in particle accelerator technology have enabled numerous fundamental discoveries in 20th-century physics. Extensive interdisciplinary research has always supported further development of accelerator technology in the effort to reach each new energy frontier. Accelerating cavities, which are used to transfer energy to accelerated charged particles, have been one of the main focuses of research and development in the particle accelerator field. Over the last fifty years, in the race to break energy barriers, there has been constant improvement of the maximum stable accelerating field achieved in accelerating cavities. Every increase in the maximum attainable accelerating field allowed higher-energy upgrades of existing accelerators and more compact designs of new accelerators, and each new and improved technology was faced with ever-emerging limiting factors. At standard high accelerating gradients of more than 25 MV/m, free electrons inside the cavities are accelerated by the field, gaining enough energy to produce more electrons in their interactions with the cavity walls. The electron production is exponential, and the energy the electrons transfer to the cavity walls can trigger detrimental processes that limit the performance of the cavity. The root cause of the growth in the number of free electrons is a phenomenon called Secondary Electron Emission (SEE). Even though the phenomenon has been known and studied for over a century, there are still no effective means of controlling it. The ratio between the electrons emitted from a surface and the impacting electrons is defined as the Secondary Electron Yield (SEY); a SEY larger than 1 designates an increase in the total number of electrons. In the design of accelerator cavities, the goal is to reduce the SEY as much as possible using any form of surface manipulation. In this dissertation, an experimental setup was developed and used to study the SEY of various plasma-treated sample surfaces.

  46. TU-AB-BRD-01: Process Mapping

    SciTech Connect

    Palta, J.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are caused by flaws in clinical processes rather than by device failures. This suggests the need for a quality management program based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis (FMEA), and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has adapted these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do occur, that they are more likely to be detected before they propagate through the process, compromising treatment outcome and harming the patient. Such a prospective approach forms the basis of the work of Task Group 100, which has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how they can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning objectives: learn how to design a process map for a radiotherapy process.
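
    A concrete flavor of the risk ranking such a program builds on a process map: score each failure mode for occurrence (O), severity (S), and lack of detectability (D), and rank by the risk priority number RPN = O x S x D. The failure modes and scores below are invented examples, not TG-100 values.

    ```python
    # FMEA-style ranking sketch: rank failure modes by RPN = O x S x D.
    # The failure modes and scores are invented examples, not TG-100 values.
    failure_modes = [
        ("wrong CT dataset imported",        2, 9, 4),
        ("contour drawn on wrong structure", 4, 8, 3),
        ("MU transcription error",           3, 7, 6),
        ("couch shift applied twice",        2, 6, 5),
    ]
    ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
    for name, o, s, d in ranked:
        print(f"RPN {o*s*d:4d}  O={o} S={s} D={d}  {name}")
    ```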

  47. AIRS Maps from Space Processing Software

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.

  48. Value Stream Mapping: Foam Collection and Processing.

    SciTech Connect

    Sorensen, Christian

    2015-07-01

    The effort to collect and process foam for the purpose of recycling performed by the Material Sustainability and Pollution Prevention (MSP2) team at Sandia National Laboratories is an incredible one, but in order to make it run more efficiently it needed some tweaking. This project started in June of 2015. We used the Value Stream Mapping process to allow us to look at the current state of the foam collection and processing operation. We then thought of all the possible ways the process could be improved. Soon after that we discussed which of the "dreams" were feasible. And finally, we assigned action items to members of the team so as to ensure that the improvements actually occur. These improvements will then, due to varying factors, continue to occur over the next couple years.

  9. Induction linear accelerators for commercial photon irradiation processing

    SciTech Connect

    Matthews, S.M.

    1989-01-13

    A number of proposed irradiation processes require bulk rather than surface exposure to intense ionizing radiation. Typical examples are irradiation of food packaged in pallet-size containers, processing of sewer sludge for recycling as landfill and fertilizer, sterilization of prepackaged medical disposables, treatment of municipal water supplies for pathogen reduction, etc. Volumetric processing of dense, bulky products with ionizing radiation requires high-energy photon sources, because electrons are not penetrating enough to provide uniform bulk dose deposition in thick, dense samples. Induction Linear Accelerator (ILA) technology developed at the Lawrence Livermore National Laboratory promises to play a key role in providing solutions to this problem, as discussed in this paper.

  10. Enzyme clustering accelerates processing of intermediates through metabolic channeling

    PubMed Central

    Castellana, Michele; Wilson, Maxwell Z.; Xu, Yifan; Joshi, Preeti; Cristea, Ileana M.; Rabinowitz, Joshua D.; Gitai, Zemer; Wingreen, Ned S.

    2015-01-01

    We present a quantitative model to demonstrate that coclustering multiple enzymes into compact agglomerates accelerates the processing of intermediates, yielding the same efficiency benefits as direct channeling, a well-known mechanism in which enzymes are funneled between enzyme active sites through a physical tunnel. The model predicts the separation and size of coclusters that maximize metabolic efficiency, and this prediction is in agreement with previously reported spacings between coclusters in mammalian cells. For direct validation, we study a metabolic branch point in Escherichia coli and experimentally confirm the model prediction that enzyme agglomerates can accelerate the processing of a shared intermediate by one branch, and thus regulate steady-state flux division. Our studies establish a quantitative framework to understand coclustering-mediated metabolic channeling and its application to both efficiency improvement and metabolic regulation. PMID:25262299

  11. Detecting chaos in particle accelerators through the frequency map analysis method.

    PubMed

    Papaphilippou, Yannis

    2014-06-01

    The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role during the accelerator design phase and also during operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. Recent developments in the methods employed for the analysis of chaotic beam motion are then detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations as well as in experimental data.
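
    To make the method concrete, here is a toy sketch of a frequency-map diffusion indicator, assuming turn-by-turn tracking data for one particle. Production frequency map analysis codes refine the tune estimate with NAFF-style interpolation; the bare FFT peak below is a crude stand-in.

        import numpy as np

        def tune(x):
            """Fundamental-frequency (tune) estimate from turn-by-turn
            data: the location of the largest FFT peak."""
            x = np.asarray(x, float) - np.mean(x)
            spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            return (np.argmax(spec[1:]) + 1) / len(x)

        def diffusion_indicator(x):
            """Tune drift between the first and second halves of the
            data; near-zero drift indicates regular motion, large
            drift flags chaotic motion."""
            n = len(x) // 2
            return np.log10(abs(tune(x[n:]) - tune(x[:n])) + 1e-16)

    Evaluating this indicator over a grid of initial conditions produces the frequency map used to locate resonances and chaotic regions.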

  12. Magnetohydrodynamic Particle Acceleration Processes: SSX Experiments, Theory, and Astrophysical Applications

    SciTech Connect

    Brown, Michael R.

    2006-11-16

    Project Title: Magnetohydrodynamic Particle Acceleration Processes: SSX Experiments, Theory, and Astrophysical Applications. PI: Michael R. Brown, Swarthmore College. The purpose of the project was to provide theoretical and modeling support to the Swarthmore Spheromak Experiment (SSX). Accordingly, the theoretical effort was tightly integrated into the SSX experimental effort. During the grant period, Michael Brown and his experimental collaborators at Swarthmore, with assistance from W. Matthaeus as appropriate, made substantial progress in understanding the physics of SSX plasmas.

  13. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    PubMed Central

    Clayton, C. E.; Adli, E.; Allen, J.; An, W.; Clarke, C. I.; Corde, S.; Frederico, J.; Gessner, S.; Green, S. Z.; Hogan, M. J.; Joshi, C.; Litos, M.; Lu, W.; Marsh, K. A.; Mori, W. B.; Vafaei-Najafabadi, N.; Xu, X.; Yakimenko, V.

    2016-01-01

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not been yet measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m^-1 to a similar degree of accuracy. These results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity. PMID:27527569

  14. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    SciTech Connect

    Clayton, C. E.; Adli, E.; Allen, J.; An, W.; Clarke, C. I.; Corde, S.; Frederico, J.; Gessner, S.; Green, S. Z.; Hogan, M. J.; Joshi, C.; Litos, M.; Lu, W.; Marsh, K. A.; Mori, W. B.; Vafaei-Najafabadi, N.; Xu, X.; Yakimenko, V.

    2016-08-16

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not been yet measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m^-1 to a similar degree of accuracy. Lastly, these results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity.

  15. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    DOE PAGES

    Clayton, C. E.; Adli, E.; Allen, J.; ...

    2016-08-16

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not been yet measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m^-1 to a similar degree of accuracy. Lastly, these results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity.

  16. High energy electron beam processing experiments with induction accelerators

    NASA Astrophysics Data System (ADS)

    Goodman, D. L.; Birx, D. L.; Dave, V. R.

    1995-05-01

    Induction accelerators are capable of producing very high electron beam power for processing at energies of 1-10 MeV. A high energy electron beam (HEEB) material processing system based on all-solid-state induction accelerator technology is in operation at Science Research Laboratory. The system delivers 50 ns, 500 A current pulses at 1.5 MeV and is capable of operating at high power (500 kW) and high (~5 kHz) repetition rate. HEEB processing with induction accelerators is useful for a wide variety of applications, including the joining of high temperature materials, powder metallurgical fabrication, treatment of organic-contaminated wastewater and the curing of polymer matrix composites. High temperature HEEB experiments at SRL have demonstrated the brazing of carbon-carbon composites to metallic substrates and the melting and sintering of powders for graded-alloy fabrication. Other experiments have demonstrated efficient destruction of low-concentration organic contaminants in water and low temperature free-radical cross-linking of fiber-reinforced composites with acrylated resin matrices.

  17. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel, and three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU, and decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrate the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
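
    The operator-splitting idea that exposes the parallelism can be sketched in a few lines. The following is a schematic 1-D cable version, not the paper's SAN/atrium model: the per-cell reaction update is the part that maps naturally onto GPU threads, while the diffusion update couples neighbouring cells.

        import numpy as np

        def split_step(v, state, dt, D, dx, reaction):
            """One operator-splitting step: per-cell ionic reaction
            (fully independent across cells, hence parallelizable),
            followed by an explicit finite-difference diffusion update."""
            dv, state = reaction(v, state, dt)   # independent per cell
            v = v + dt * dv
            lap = np.zeros_like(v)
            lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
            return v + dt * D * lap, state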

  18. Acceleration Processes in the Cusp: Observations by the FAST Satellite

    NASA Technical Reports Server (NTRS)

    Pfaff, R. F.; Carlson, C.; McFadden, J.; Ergun, R.; Clemmons, J.; Klumpar, D.; Moebius, E.; Elphic, R.; Strangeway, R.

    1999-01-01

    The Fast Auroral Snapshot (FAST) spacecraft has encountered the Earth's cusp regions near its apogee of 4175 km on numerous occasions during its first two and a half years of operations. The cusp encounters are identified by their signatures of keV dispersed ion injections of solar wind origin. The FAST instruments reveal a complex microphysics inherent to many, but not all, of the cusp regions encountered by the spacecraft, which often include upgoing ion beams within regions of downgoing electrons that may appear as a series of inverted-V features with energies near a few hundred eV. In many instances, upgoing electron beams have also been observed. Intense (>100 mV/m) spiky DC-coupled electric fields and plasma waves are common features of the cusp encounters and provide further evidence for the presence of such local acceleration processes. In some cases, the FAST data show clear modulation of the precipitating magnetosheath ions, indicating that they are affected by local electric potentials, as evidenced by simultaneous electron acceleration within such intervals. Furthermore, the acceleration events are sometimes organized with an apparent cellular structure, suggesting that Alfvén waves or other large scale phenomena are controlling the localized potentials. We examine several cusp encounters in detail in order to study the complex relation of the cusp energetic particle populations with the plasma waves and DC electric fields.

  19. Acceleration Processes in the Cusp -- Observations by the FAST Satellite

    NASA Technical Reports Server (NTRS)

    Pfaff, R. F.; Carlson, C.; Clemmons, J.; Klumpar, D.; Moebius, E.; Elphic, R.; Strangeway, R.

    1999-01-01

    The FAST spacecraft has encountered the Earth's cusp regions near its apogee of 4175 km on numerous occasions during its first two and a half years of operations. The cusp encounters are identified by their signatures of keV dispersed ion injections of solar wind origin. The FAST instruments reveal a complex microphysics inherent to many, but not all, of the cusp regions encountered by the spacecraft, which often include upgoing ion beams within regions of downgoing electrons that may appear as a series of inverted-V features with energies near a few hundred eV. In many instances, upgoing electron beams have also been observed. Intense (>100 mV/m) spiky DC-coupled electric fields and plasma waves are common features of the cusp encounters and provide further evidence for the presence of such local acceleration processes. In some cases, the FAST data show clear modulation of the precipitating magnetosheath ions, indicating that they are affected by local electric potentials, as evidenced by simultaneous electron acceleration within such intervals. Furthermore, the acceleration events are sometimes organized with an apparent cellular structure, suggesting that Alfvén waves or other large scale phenomena are controlling the localized potentials. We examine several cusp encounters in detail in order to study the complex relation of the cusp energetic particle populations with the plasma waves and DC electric fields.

  20. Enhancements to Demilitarization Process Maps Program (ProMap)

    DTIC Science & Technology

    2016-10-14

    [Fragment of a milestone-status table; a leading row is truncated. The recoverable rows are: 3. Quarterly Technical and Business Status Report, due 20 Dec 15, 100% / 100%; 4. Task 3: ProMap program update that imports MIDAS, due 30 Jun 16, 100% / 100%; 5. Quarterly Technical and Business Status Report, due 20 Mar 16, 100% / 100%; 6. Quarterly Technical and Business Status Report, due 20 Jun 16, 100% / 100%.]

  1. Particle Acceleration via Reconnection Processes in the Supersonic Solar Wind

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; le Roux, J. A.; Webb, G. M.; Dosch, A.; Khabarova, O.

    2014-12-01

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We derive a gyrophase-averaged transport equation for particles experiencing pitch-angle scattering and energization in a super-Alfvénic flowing plasma experiencing multiple small-scale reconnection events. A simpler advection-diffusion transport equation for a nearly isotropic particle distribution is derived. The dominant charged particle energization processes are (1) the electric field induced by quasi-2D magnetic island merging and (2) magnetic island contraction. The magnetic island topology ensures that charged particles are trapped in regions where they experience repeated interactions with the induced electric field or contracting magnetic islands. Steady-state solutions of the isotropic transport equation with only the induced electric field and a fixed source yield a power-law spectrum for the accelerated particles with index α = -(3 + M_A)/2, where M_A is the Alfvén Mach number. Considering only magnetic island contraction yields power-law-like solutions with index -3(1 + τ_c/(8τ_diff)), where τ_c/τ_diff is the ratio of timescales between magnetic island contraction and charged particle diffusion. The general solution is a power-law-like solution with an index that depends on the Alfvén Mach number and the timescale ratio τ_diff/τ_c. Observed power-law distributions of energetic particles in the quiet supersonic solar wind at 1 AU may be a consequence of particle acceleration associated with dissipative small-scale reconnection processes in a turbulent plasma, including the widely reported c^-5 (c is the particle speed) spectra observed by Fisk & Gloeckler and Mewaldt et al.
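
    For a feel of the numbers, the quoted index is simple arithmetic in the Alfvén Mach number. As an illustrative case (the specific value is ours, not a value given in the abstract), an Alfvén Mach number of 7 reproduces the widely reported c^-5 spectra:

        \[
          \alpha = -\frac{3 + M_A}{2}, \qquad
          M_A = 7 \;\Rightarrow\; \alpha = -\frac{3 + 7}{2} = -5 .
        \]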

  2. Acceleration of Topographic Map Production Using Semi-Automatic DTM from Dsm Radar Data

    NASA Astrophysics Data System (ADS)

    Rizaldy, Aldino; Mayasari, Ratna

    2016-06-01

    Badan Informasi Geospasial (BIG) is the government institution in Indonesia responsible for providing topographic maps at several map scales. For medium map scales, e.g. 1:25,000 or 1:50,000, a DSM derived from radar data is a very good solution, since radar is able to penetrate the cloud cover that usually blankets the tropical areas of Indonesia. The radar DSM is produced using radargrammetry and interferometry techniques. The conventional method of DTM production uses a "stereo-mate", a stereo image created from the radar DSM and an ORRI (Ortho-Rectified Radar Image), on which a human operator manually digitizes mass points and breaklines at a digital stereoplotter workstation. This technique is accurate but very costly and time consuming, and it requires a large pool of human operators. Since the DSMs are already generated, it is possible to filter the DSM to a DTM using several techniques. This paper studies the possibility of DSM-to-DTM filtering using techniques usually applied in LIDAR point cloud filtering. The accuracy of this method is also calculated using a sufficient number of check points. If the accuracy meets the requirements, this method has great potential to accelerate the production of topographic maps in Indonesia.
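
    As a flavour of the LIDAR-style filtering the paper investigates, here is a minimal single-scale morphological ground filter. Real progressive filters iterate over growing windows and slope-dependent thresholds; the function name and thresholds below are illustrative.

        import numpy as np
        from scipy import ndimage

        def dsm_to_dtm(dsm, window=15, height_tol=2.0):
            """Mask non-ground cells of a gridded DSM: a grey opening
            estimates the bare-ground surface, and cells rising more
            than height_tol (metres) above it are treated as
            vegetation or buildings and removed."""
            opened = ndimage.grey_opening(dsm, size=(window, window))
            ground = (dsm - opened) <= height_tol
            return np.where(ground, dsm, np.nan)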

  3. Implementation of a custom hardware-accelerator for short-read mapping using Burrows-Wheeler alignment.

    PubMed

    Waidyasooriya, Hasitha Muthumala; Hariyama, Masanori; Kameyama, Michitaka

    2013-01-01

    The mapping of millions of short DNA fragments to a large genome is a great challenge in modern computational biology. Mapping a large genome in software usually takes many hours or days. However, recent progress in programmable hardware such as field-programmable gate arrays (FPGAs) provides a cost-effective solution to this challenge: FPGAs contain millions of programmable logic gates with which to design massively parallel accelerators. This paper proposes a hardware architecture to accelerate short-read mapping using Burrows-Wheeler alignment. The speed-up of the proposed architecture is estimated to be at least 10 times that of the equivalent software implementation.
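
    The kernel being accelerated is the FM-index backward search at the heart of Burrows-Wheeler alignment. A self-contained toy version (naive occurrence counting stands in for the sampled rank tables a hardware design would keep in on-chip memory) looks like this:

        def fm_index(text):
            """Toy FM-index: BWT via rotation sort, plus the C array."""
            text += "$"
            rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
            bwt = "".join(r[-1] for r in rotations)
            C, total = {}, 0
            for ch in sorted(set(bwt)):   # C[ch] = #chars lexicographically < ch
                C[ch], total = total, total + bwt.count(ch)
            return bwt, C

        def backward_search(bwt, C, pattern):
            """Suffix-array interval [lo, hi) of exact matches, or None."""
            occ = lambda ch, i: bwt[:i].count(ch)  # O(1) with rank tables
            lo, hi = 0, len(bwt)
            for ch in reversed(pattern):
                lo = C.get(ch, 0) + occ(ch, lo)
                hi = C.get(ch, 0) + occ(ch, hi)
                if lo >= hi:
                    return None
            return lo, hi

        # backward_search(*fm_index("ACGTACGT"), "GTA") -> non-empty interval

    Each character of a read triggers two rank queries; pipelining these memory lookups across many reads at once is where the parallel hardware pays off.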

  4. One-Dimensional Particle Processes with Acceleration/Braking Asymmetry

    NASA Astrophysics Data System (ADS)

    Furtlehner, Cyril; Lasgouttes, Jean-Marc; Samsonov, Maxim

    2012-07-01

    The slow-to-start mechanism is known to play an important role in the particular shape of the fundamental diagram of traffic and to be associated with hysteresis effects of traffic flow. We study this question in the context of exclusion and queueing processes by including an asymmetry between deceleration and acceleration in the formulation of these processes. For exclusion processes, this corresponds to a multi-class process with transition asymmetry between different speed levels, while for queueing processes we consider a non-reversible stochastic dependency of the service rate on the number of clients. The relationship between these two families of models is analyzed on the ring geometry, along with their steady-state properties. Spatial condensation phenomena and metastability are observed, depending on the level of the aforementioned asymmetry. In addition, we provide a large-deviation formulation of the fundamental diagram which includes the level of fluctuations, in the canonical ensemble, when the stationary state is expressed as a product form of such generalized queues.
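
    A toy single-speed simulation conveys the braking/acceleration asymmetry. This is a much-simplified cousin of the paper's multi-class process, written only to illustrate the slow-to-start rule; all names and parameter values are ours.

        import numpy as np

        def slow_to_start_ring(n=200, density=0.3, p_restart=0.3,
                               steps=100_000, seed=0):
            """Ring exclusion process in which a particle blocked on its
            last attempt restarts only with probability p_restart < 1.
            Returns the mean flux (hops per update), the quantity behind
            the fundamental diagram."""
            rng = np.random.default_rng(seed)
            occ = np.zeros(n, bool)
            occ[rng.choice(n, int(density * n), replace=False)] = True
            stopped = np.zeros(n, bool)
            hops = 0
            for _ in range(steps):
                i = rng.integers(n)              # random sequential update
                j = (i + 1) % n
                if not occ[i]:
                    continue
                if occ[j]:
                    stopped[i] = True            # braking: particle blocked
                elif rng.random() < (p_restart if stopped[i] else 1.0):
                    occ[i], occ[j] = False, True # acceleration succeeded
                    stopped[j] = False
                    hops += 1
            return hops / steps

    Sweeping the density and plotting the returned flux traces out a fundamental diagram whose shape depends on p_restart, the asymmetry parameter.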

  5. Engineering functionality gradients by dip coating process in acceleration mode.

    PubMed

    Faustini, Marco; Ceratti, Davide R; Louis, Benjamin; Boudot, Mickael; Albouy, Pierre-Antoine; Boissière, Cédric; Grosso, David

    2014-10-08

    In this work, unique functional devices exhibiting controlled gradients of properties are fabricated by dip-coating process in acceleration mode. Through this new approach, thin films with "on-demand" thickness graded profiles at the submillimeter scale are prepared in an easy and versatile way, compatible for large-scale production. The technique is adapted to several relevant materials, including sol-gel dense and mesoporous metal oxides, block copolymers, metal-organic framework colloids, and commercial photoresists. In the first part of the Article, an investigation on the effect of the dip coating speed variation on the thickness profiles is reported together with the critical roles played by the evaporation rate and by the viscosity on the fluid draining-induced film formation. In the second part, dip-coating in acceleration mode is used to induce controlled variation of functionalities by playing on structural, chemical, or dimensional variations in nano- and microsystems. In order to demonstrate the full potentiality and versatility of the technique, original graded functional devices are made including optical interferometry mirrors with bidirectional gradients, one-dimensional photonic crystals with a stop-band gradient, graded microfluidic channels, and wetting gradient to induce droplet motion.

  6. Particle acceleration via reconnection processes in the supersonic solar wind

    SciTech Connect

    Zank, G. P.; Le Roux, J. A.; Webb, G. M.; Dosch, A.; Khabarova, O.

    2014-12-10

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We derive a gyrophase-averaged transport equation for particles experiencing pitch-angle scattering and energization in a super-Alfvénic flowing plasma experiencing multiple small-scale reconnection events. A simpler advection-diffusion transport equation for a nearly isotropic particle distribution is derived. The dominant charged particle energization processes are (1) the electric field induced by quasi-2D magnetic island merging and (2) magnetic island contraction. The magnetic island topology ensures that charged particles are trapped in regions where they experience repeated interactions with the induced electric field or contracting magnetic islands. Steady-state solutions of the isotropic transport equation with only the induced electric field and a fixed source yield a power-law spectrum for the accelerated particles with index α = -(3 + M_A)/2, where M_A is the Alfvén Mach number. Considering only magnetic island contraction yields power-law-like solutions with index -3(1 + τ_c/(8τ_diff)), where τ_c/τ_diff is the ratio of timescales between magnetic island contraction and charged particle diffusion. The general solution is a power-law-like solution with an index that depends on the Alfvén Mach number and the timescale ratio τ_diff/τ_c. Observed power-law distributions of energetic particles observed in the quiet supersonic solar wind at 1 AU may be a consequence of particle acceleration associated with dissipative small-scale reconnection processes in a turbulent plasma, including the widely reported c^-5 (c particle

  7. A GPU Accelerated Simulation Program for Electron Cooling Process

    NASA Astrophysics Data System (ADS)

    Zhang, He; Huang, He; Li, Rui; Chen, Jie; Luo, Li-Shi

    2015-04-01

    Electron cooling is essential to achieve high luminosity in the Medium-energy Electron-Ion Collider (MEIC) project at Jefferson Lab. A bunched electron beam with energy above 50 MeV is used to cool coasting and/or bunched ion beams. Although the conventional electron cooling technique has been widely used, such an implementation in MEIC is still challenging. We are developing a simulation program for the electron cooling process to fulfill the needs of the electron cooling system design for MEIC. The program simulates the evolution of the ion beam under the intrabeam scattering (IBS) effect and the electron cooling effect using a Monte Carlo method. To accelerate the calculation, the program is developed on a GPU platform. We will present some preliminary simulation results. Work supported by the Department of Energy, Laboratory Directed Research and Development Funding, under Contract No. DE-AC05-06OR23177.

  8. Radiation mapping inside the bunkers of medium energy accelerators using a robotic carrier.

    PubMed

    Ravishankar, R; Bhaumik, T K; Bandyopadhyay, T; Purkait, M; Jena, S C; Mishra, S K; Sharma, S; Agashe, V; Datta, K; Sarkar, B; Datta, C; Sarkar, D; Pal, P K

    2013-10-01

    Knowledge of the ambient and peak radiation levels prevailing inside the bunkers of accelerator facilities is essential for assessing accidental human exposure inside the bunkers and for protecting sensitive electronic equipment by minimizing its exposure to high-intensity mixed radiation fields. Dynamic radiation field mapping inside bunkers is rare, although dose-rate data are generally available at specific locations in every particle accelerator facility. Taking into account that the neutron fields present during cyclotron operation inside the bunkers span energies from thermal up to the energy of the accelerated charged projectiles, together with prompt photons and other particles, neutron and gamma survey meters with extended energy ranges were attached to a robotic carrier. The robotic carrier's movement was controlled remotely from the control room with the help of multiple visible-range optical cameras installed inside the bunkers, and wireless and wired communication protocols supported its movement and the data acquisition from the survey meters. The Variable Energy Cyclotron Centre, Kolkata, has positive-ion accelerating facilities such as the K-130 Room Temperature Cyclotron, the K-500 Superconducting Cyclotron and a forthcoming 30 MeV high-beam-current proton medical cyclotron. Dose-rate data for the K-130 Room Temperature Cyclotron were collected for various energies of alpha and proton beams losing their total energy at different stages on different materials, at various strategic locations of radiological importance inside the bunkers. The measurements established that radiation levels inside the machine bunker change dynamically depending upon the beam type, beam energy, machine operation parameters, deflector condition, slit placement and central-region beam tuning. The inferences obtained from associating dose rates with parameters like beam intensity and the type and energy of projectiles helped in

  9. GPU accelerated processing of astronomical high frame-rate videosequences

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Švihlík, Jan; Krasula, Lukáš; Fliegel, Karel; Páta, Petr

    2015-09-01

    Astronomical instruments located around the world produce an incredibly large amount of potentially interesting scientific data. Astronomical research is expanding into large and highly sensitive telescopes, and the total volume of data per night of operations increases with the quality and resolution of state-of-the-art CCD/CMOS detectors. Since many ground-based astronomical experiments are placed in remote locations with limited access to the Internet, the problem of data storage must be solved; in practice, current data acquisition, processing and analysis algorithms require review, and decisions about the importance of the data have to be made in a very short time. This work deals with GPU-accelerated processing of high frame-rate astronomical video sequences, mostly originating from the experiment MAIA (Meteor Automatic Imager and Analyser), an instrument primarily focused on observing faint meteoric events with high time resolution. The instrument, priced below 2000 euro, consists of an image intensifier and a gigabit Ethernet camera running at 61 fps. With resolution better than VGA, the system produces up to 2 TB of scientifically valuable video data per night. The main goal of the paper is not to optimize any particular GPU algorithm, but to propose and evaluate parallel GPU algorithms able to process a huge number of video sequences in order to delete all uninteresting data.

  10. Selective sinoatrial node optical mapping to investigate the mechanism of sinus rate acceleration

    NASA Astrophysics Data System (ADS)

    Lin, Shien-Fong; Shinohara, Tetsuji; Joung, Boyoung; Chen, Peng-Sheng

    2011-03-01

    Studies using isolated sinoatrial node (SAN) cells indicate that rhythmic spontaneous sarcoplasmic reticulum Ca release (the "Ca clock") plays an important role in SAN automaticity. However, it is difficult to translate these findings into the intact SAN because the SAN is embedded in the right atrium (RA): cross-contamination of the optical signals between the SAN and the RA has prevented definitive testing of the Ca clock hypothesis in the intact SAN. We used a novel approach to selectively map the intact SAN and examine Ca clock function in the intact RA. We simultaneously mapped intracellular Ca (Cai) and membrane potential (Vm) in 7 isolated, Langendorff-perfused normal canine RAs. Electrical conduction from the SAN to the RA was inhibited with high-potassium (10 mmol/L) Tyrode's solution, allowing selective optical mapping of Vm and Cai of the SAN. Isoproterenol (ISO, 0.03 μmol/L) decreased the cycle length of the sinus beats from 586 ± 17 ms at baseline to 366 ± 32 ms, and shifted the leading pacemaker site from the middle or inferior SAN to the superior SAN in all RAs. The Cai upstroke preceded the Vm upstroke at the leading pacemaker site by up to 18 ± 2 ms. The ISO-induced changes to the SAN were inhibited by ryanodine (3 μmol/L), but not by ZD7288 (3 μmol/L), a selective If blocker. We conclude that a high extracellular potassium concentration results in intermittent SAN-RA conduction block, allowing selective optical mapping of the intact SAN. Acceleration of Ca cycling in the superior SAN underlies the mechanism of sinus tachycardia during sympathetic stimulation.

  11. Accelerated Searches of Gravitational Waves Using Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Chung, Shin Kee; Wen, Linqing; Blair, David; Cannon, Kipp

    2010-06-01

    The existence of gravitational waves was predicted by Albert Einstein. Black hole and neutron star binary systems will produce strong gravitational waves through their inspiral and eventual merger. The analysis of gravitational wave data is computationally intensive, requiring matched filtering of terabytes of data with a bank of at least 3000 numerical templates that represent predicted waveforms. We need to complete the analysis in real time (within the duration of the signal) in order to enable follow-up observations with conventional optical or radio telescopes. We report a novel application of graphics processing units (GPUs) for the purpose of accelerating the search pipelines for gravitational waves from coalescing binary systems of compact objects. A speed-up of 16-fold in total has been achieved with an NVIDIA GeForce 8800 Ultra GPU card compared with a standard central processing unit (CPU). We show that further improvements are possible and discuss the reduction in the number of CPUs required for the detection of inspiral sources afforded by the use of GPUs.
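
    The computational core is FFT-based matched filtering of the data against each template. A minimal white-noise sketch follows; real pipelines first whiten the data by the detector noise power spectral density, a step omitted here for brevity.

        import numpy as np

        def matched_filter_snr(data, template):
            """Cross-correlate a waveform template against detector data
            via FFTs and return the signal-to-noise-ratio time series,
            assuming white noise."""
            n = len(data)
            corr = np.fft.irfft(np.fft.rfft(data) *
                                np.conj(np.fft.rfft(template, n)), n)
            return corr / np.sqrt(np.dot(template, template))

    With a bank of at least 3000 templates, this becomes thousands of large FFTs per data block, which is the kind of workload a GPU can batch efficiently.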

  12. Uav Data Processing for Rapid Mapping Activities

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2015-08-01

    During disaster and emergency situations, geospatial data plays an important role as a framework for decision support systems. As one component of basic geospatial data, large-scale topographical maps are mandatory in order to enable geospatial analysis for quite a number of societal challenges. The increasing role of geo-information in disaster management consequently requires that geospatial aspects be included in the analysis, so that different geospatial datasets can be combined to produce reliable geospatial analyses, especially in the context of disaster preparedness and emergency response. A well-known issue in this context is the fast delivery of relevant geospatial data, expressed by the term "Rapid Mapping". The Unmanned Aerial Vehicle (UAV) is the rising geospatial data platform, attractive for modelling and monitoring a disaster area with low cost and timely acquisition in such a critical period of time. Disaster-related object extraction is of special interest for many applications. In this paper, UAV-borne data has been used to support rapid mapping activities in combination with high-resolution airborne Interferometric Synthetic Aperture Radar (IFSAR) data. A real disaster instance from 2013, the Mount Sinabung eruption in Northern Sumatra, Indonesia, is used as the benchmark test for the rapid mapping activities presented in this paper. In this context, a reliable IFSAR dataset from an airborne data acquisition in 2011 has been used as a comparison dataset for accuracy investigation and assessment of the 3D reconstruction. Finally, this paper presents a proper geo-referencing and feature extraction method for UAV data to support rapid mapping activities.

  13. Beyond data collection in digital mapping: interpretation, sketching and thought process elements in geological map making

    NASA Astrophysics Data System (ADS)

    Watkins, Hannah; Bond, Clare; Butler, Rob

    2016-04-01

    Geological mapping techniques have advanced significantly in recent years, from paper fieldslips to Toughbook, smartphone and tablet mapping; but how do the methods used to create a geological map affect the thought processes that result in the final map interpretation? Geological maps have many key roles in the geosciences, including understanding geological processes and geometries in 3D, interpreting geological histories and understanding stratigraphic relationships in 2D and 3D. Here we consider the impact of the methods used to create a map on the thought processes that result in the final geological map interpretation. As mapping technology has advanced, the way in which we produce geological maps has also changed. Traditional geological mapping is undertaken using paper fieldslips, pencils and compass clinometers, and the map interpretation evolves through time as data are collected. This interpretive process is often supported by a field notebook in which observations, ideas and alternative geological models are recorded and explored with the use of sketches and evolutionary diagrams. In combination, the field map and notebook can be used to challenge the map interpretation and consider its uncertainties. These uncertainties, and the balance of data to interpretation, are often lost in the creation of published 'fair' copy geological maps. The advent of Toughbooks, smartphones and tablets has changed the process of map creation. Digital data collection, particularly through the use of inbuilt gyrometers in phones and tablets, has turned smartphones into geological mapping tools that can be used to collect large amounts of geological data quickly. With GPS functionality these data are also geospatially located, assuming good GPS connectivity, and can be linked to georeferenced in-field photography. In contrast, line drawing, for example for lithological boundary interpretation and sketching

  14. Analyzing Collision Processes with the Smartphone Acceleration Sensor

    ERIC Educational Resources Information Center

    Vogt, Patrik; Kuhn, Jochen

    2014-01-01

    It has been illustrated several times how the built-in acceleration sensors of smartphones can be used gainfully for quantitative experiments in school and university settings (see the overview in Ref. 1). The physical issues in that case are manifold and apply, for example, to free fall, radial acceleration, several pendula, or the exploitation…

  15. Scanning probe acceleration microscopy (SPAM) in fluids: Mapping mechanical properties of surfaces at the nanoscale

    PubMed Central

    Legleiter, Justin; Park, Matthew; Cusick, Brian; Kowalewski, Tomasz

    2006-01-01

    One of the major thrusts in proximal probe techniques is combination of imaging capabilities with simultaneous measurements of physical properties. In tapping mode atomic force microscopy (TMAFM), the most straightforward way to accomplish this goal is to reconstruct the time-resolved force interaction between the tip and surface. These tip–sample forces can be used to detect interactions (e.g., binding sites) and map material properties with nanoscale spatial resolution. Here, we describe a previously unreported approach, which we refer to as scanning probe acceleration microscopy (SPAM), in which the TMAFM cantilever acts as an accelerometer to extract tip–sample forces during imaging. This method utilizes the second derivative of the deflection signal to recover the tip acceleration trajectory. The challenge in such an approach is that with real, noisy data, the second derivative of the signal is strongly dominated by the noise. This problem is solved by taking advantage of the fact that most of the information about the deflection trajectory is contained in the higher harmonics, making it possible to filter the signal by “comb” filtering, i.e., by taking its Fourier transform and inverting it while selectively retaining only the intensities at integer harmonic frequencies. Such a comb filtering method works particularly well in fluid TMAFM because of the highly distorted character of the deflection signal. Numerical simulations and in situ TMAFM experiments on supported lipid bilayer patches on mica are reported to demonstrate the validity of this approach. PMID:16551751

  16. Scanning probe acceleration microscopy (SPAM) in fluids: Mapping mechanical properties of surfaces at the nanoscale

    NASA Astrophysics Data System (ADS)

    Legleiter, Justin; Park, Matthew; Cusick, Brian; Kowalewski, Tomasz

    2006-03-01

    One of the major thrusts in proximal probe techniques is combination of imaging capabilities with simultaneous measurements of physical properties. In tapping mode atomic force microscopy (TMAFM), the most straightforward way to accomplish this goal is to reconstruct the time-resolved force interaction between the tip and surface. These tip-sample forces can be used to detect interactions (e.g., binding sites) and map material properties with nanoscale spatial resolution. Here, we describe a previously unreported approach, which we refer to as scanning probe acceleration microscopy (SPAM), in which the TMAFM cantilever acts as an accelerometer to extract tip-sample forces during imaging. This method utilizes the second derivative of the deflection signal to recover the tip acceleration trajectory. The challenge in such an approach is that with real, noisy data, the second derivative of the signal is strongly dominated by the noise. This problem is solved by taking advantage of the fact that most of the information about the deflection trajectory is contained in the higher harmonics, making it possible to filter the signal by “comb” filtering, i.e., by taking its Fourier transform and inverting it while selectively retaining only the intensities at integer harmonic frequencies. Such a comb filtering method works particularly well in fluid TMAFM because of the highly distorted character of the deflection signal. Numerical simulations and in situ TMAFM experiments on supported lipid bilayer patches on mica are reported to demonstrate the validity of this approach.

  17. Scanning probe acceleration microscopy (SPAM) in fluids: mapping mechanical properties of surfaces at the nanoscale.

    PubMed

    Legleiter, Justin; Park, Matthew; Cusick, Brian; Kowalewski, Tomasz

    2006-03-28

    One of the major thrusts in proximal probe techniques is combination of imaging capabilities with simultaneous measurements of physical properties. In tapping mode atomic force microscopy (TMAFM), the most straightforward way to accomplish this goal is to reconstruct the time-resolved force interaction between the tip and surface. These tip-sample forces can be used to detect interactions (e.g., binding sites) and map material properties with nanoscale spatial resolution. Here, we describe a previously unreported approach, which we refer to as scanning probe acceleration microscopy (SPAM), in which the TMAFM cantilever acts as an accelerometer to extract tip-sample forces during imaging. This method utilizes the second derivative of the deflection signal to recover the tip acceleration trajectory. The challenge in such an approach is that with real, noisy data, the second derivative of the signal is strongly dominated by the noise. This problem is solved by taking advantage of the fact that most of the information about the deflection trajectory is contained in the higher harmonics, making it possible to filter the signal by "comb" filtering, i.e., by taking its Fourier transform and inverting it while selectively retaining only the intensities at integer harmonic frequencies. Such a comb filtering method works particularly well in fluid TMAFM because of the highly distorted character of the deflection signal. Numerical simulations and in situ TMAFM experiments on supported lipid bilayer patches on mica are reported to demonstrate the validity of this approach.
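
    The comb filter these records describe is a few lines of NumPy, assuming the deflection record spans an integer number of cantilever drive cycles; the function and parameter names are illustrative.

        import numpy as np

        def comb_filter(deflection, n_cycles):
            """Keep only the DC term and the integer harmonics of the
            drive frequency (FFT bins 0, n_cycles, 2*n_cycles, ...) and
            invert the transform."""
            spec = np.fft.rfft(deflection)
            kept = np.zeros_like(spec)
            kept[::n_cycles] = spec[::n_cycles]
            return np.fft.irfft(kept, len(deflection))

        def tip_acceleration(deflection, n_cycles, dt):
            """Second time derivative of the comb-filtered deflection:
            the tip acceleration trajectory, proportional to the
            tip-sample force in the accelerometer picture."""
            clean = comb_filter(deflection, n_cycles)
            return np.gradient(np.gradient(clean, dt), dt)

    Filtering before differentiating is the whole trick: the second derivative amplifies broadband noise, but zeroing everything off the harmonic comb removes that noise while retaining the distorted, harmonic-rich fluid-TMAFM signal.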

  18. A hybrid CPU-GPU accelerated framework for fast mapping of high-resolution human brain connectome.

    PubMed

    Wang, Yu; Du, Haixiao; Xia, Mingrui; Ren, Ling; Xu, Mo; Xie, Teng; Gong, Gaolang; Xu, Ningyi; Yang, Huazhong; He, Yong

    2013-01-01

    Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). A very large amount of brain imaging data has now been collected, and high-resolution connectome research places very high demands on computational capability. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's correlation coefficient between all pairs of time series of gray-matter voxels, and then constructed unweighted, undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graph properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network took 80-150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power-law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and has the potential to accelerate the mapping of the human brain connectome in normal and disease states.
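
    In outline, the per-subject network construction is the following (a NumPy sketch with illustrative names; at 58 k voxels the all-pairs correlation matrix is exactly the part worth offloading to a GPU):

        import numpy as np

        def functional_network(ts, sparsity=0.001):
            """ts: array of shape (time, voxels).  Returns the unweighted
            adjacency matrix obtained by keeping the strongest voxel-pair
            correlations up to the requested sparsity."""
            r = np.corrcoef(ts.T)              # voxel-by-voxel Pearson r
            np.fill_diagonal(r, -np.inf)       # exclude self-connections
            n = r.shape[0]
            k = max(1, int(sparsity * n * (n - 1) / 2))
            thresh = np.sort(r[np.triu_indices(n, 1)])[-k]
            return r >= thresh

    Graph metrics (small-worldness, modularity, degree distribution) are then computed on this adjacency matrix and compared against matched random networks.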

  19. Accelerating the Next Generation Long Read Mapping with the FPGA-Based System.

    PubMed

    Chen, Peng; Wang, Chao; Li, Xi; Zhou, Xuehai

    2014-01-01

    Comparing newly determined sequences against the subject sequences stored in databases is a critical job in bioinformatics. Fortunately, a recent survey reports that state-of-the-art aligners are already fast enough to handle the huge volume of short sequence reads in reasonable time. However, aligning the long sequence reads (>400 bp) generated by next-generation sequencing (NGS) technology is still quite inefficient with present aligners, and the challenge becomes more and more serious as the lengths and amounts of sequence reads both keep increasing with improvements in sequencing technology. It is therefore extremely urgent to enhance the performance of long read alignment. In this paper, we propose a novel FPGA-based system to improve the efficiency of long read mapping. Compared to the state-of-the-art long read aligner BWA-SW, our accelerating platform achieves high performance with almost the same sensitivity. Experiments demonstrate that, for reads with lengths ranging from 512 up to 4,096 base pairs, the described system obtains a 10x-48x speedup for the bottleneck of the software. For the whole mapping procedure, the FPGA-based platform achieves a 1.8x-3.3x speedup versus the BWA-SW aligner, reducing the alignment time from weeks to days.

  20. Preliminary map of peak horizontal ground acceleration for the Hanshin-Awaji earthquake of January 17, 1995, Japan - Description of Mapped Data Sets

    USGS Publications Warehouse

    Borcherdt, R.D.; Mark, R.K.

    1995-01-01

    The Hanshin-Awaji earthquake (also known as the Hyogo-ken Nanbu and the Great Hanshin earthquake) provided an unprecedented set of measurements of strong ground shaking. The measurements constitute the most comprehensive set of strong-motion recordings yet obtained for sites underlain by soft soil deposits of Holocene age within a few kilometers of the crustal rupture zone. The recordings, obtained on or near many important structures, provide an important new empirical data set for evaluating input ground motion levels and site amplification factors for codes and site-specific design procedures worldwide. This report describes the data used to prepare a preliminary map summarizing the strong-motion data in relation to seismicity and underlying geology (Wentworth, Borcherdt, and Mark, 1995; Figure 1, hereafter referred to as Figure 1/I). The map shows station locations, peak acceleration values, and generalized acceleration contours superimposed on pertinent seismicity and the geologic map of Japan. The map (Figure 1/I) indicates a zone of high acceleration, with ground motions throughout the zone greater than 400 gal and locally greater than 800 gal. This zone encompasses the area of most intense damage, mapped as JMA intensity level 7, which extends through Kobe City. The zone of most intense damage is parallel to, but displaced slightly from, the surface projection of the crustal rupture zone implied by aftershock locations. The zone is underlain by soft-soil deposits of Holocene age.

  1. Mapping individual logical processes in information searching

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.

    1974-01-01

    An interactive dialog with a computerized information collection was recorded and plotted in the form of a flow chart. The process permits one to identify the logical processes employed in considerable detail and is therefore suggested as a tool for measuring individual thought processes in a variety of situations. A sample of an actual test case is given.

  2. Image and geometry processing with Oriented and Scalable Map.

    PubMed

    Hua, Hao

    2016-05-01

    We turn the Self-organizing Map (SOM) into an Oriented and Scalable Map (OS-Map) by generalizing the neighborhood function and the winner selection. The homogeneous Gaussian neighborhood function is replaced with the matrix exponential. Thus we can specify the orientation either in the map space or in the data space. Moreover, we associate the map's global scale with the locality of winner selection. Our model is suited for a number of graphical applications such as texture/image synthesis, surface parameterization, and solid texture synthesis. OS-Map is more generic and versatile than the task-specific algorithms for these applications. Our work reveals the overlooked strength of SOMs in processing images and geometries.
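
    One plausible reading of the matrix-exponential neighborhood is a heat-kernel-like spread over an anisotropically weighted map lattice, which would yield the orientation control the abstract mentions. The sketch below follows that reading and is not taken from the paper; the weights and names are illustrative.

        import numpy as np
        from scipy.linalg import expm

        def oriented_neighborhood(rows, cols, t=2.0, w_row=1.0, w_col=0.2):
            """Neighborhood weights exp(t*A) for a grid whose horizontal
            and vertical edges carry different weights; column w of the
            result replaces the isotropic Gaussian bump around winner w."""
            n = rows * cols
            A = np.zeros((n, n))
            for r in range(rows):
                for c in range(cols):
                    i = r * cols + c
                    if c + 1 < cols:
                        A[i, i + 1] = A[i + 1, i] = w_row  # horizontal edge
                    if r + 1 < rows:
                        A[i, i + cols] = A[i + cols, i] = w_col  # vertical
            H = expm(t * A)
            return H / H.max(axis=0)   # peak-normalize each unit's bump

        # SOM-style update for data point x with winner index w:
        #   units += lr * H[:, w][:, None] * (x - units)

    Making w_row and w_col unequal stretches the neighborhood along one lattice axis, and the scale parameter t plays the role of the usual Gaussian width.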

  3. Accelerating molecular docking calculations using graphics processing units.

    PubMed

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2011-04-25

    The generation of molecular conformations and the evaluation of interaction potentials are common tasks in molecular modeling applications, particularly in protein-ligand or protein-protein docking programs. In this work, we present a GPU-accelerated approach capable of speeding up these tasks considerably. For the evaluation of interaction potentials in the context of rigid protein-protein docking, the GPU-accelerated approach reached speedup factors of over 50 compared to an optimized CPU-based implementation. Treating the ligand and donor groups in the protein binding site as flexible, speedup factors of up to 16 can be observed in the evaluation of protein-ligand interaction potentials. Additionally, we introduce a parallel version of our protein-ligand docking algorithm PLANTS that can take advantage of this GPU-accelerated scoring function evaluation. We compared the GPU-accelerated parallel version to the same algorithm running on the CPU and also to the highly optimized sequential CPU-based version. Depending on the ligand size and the number of rotatable bonds, speedup factors of up to 10 and 7, respectively, can be observed. Finally, a fitness landscape analysis in the context of rigid protein-protein docking was performed; using a systematic grid-based search methodology, the GPU-accelerated version outperformed the CPU-based version with speedup factors of up to 60.

  4. Estimating Cortical Feature Maps with Dependent Gaussian Processes.

    PubMed

    Hughes, Nicholas J; Goodhill, Geoffrey J

    2016-11-02

    A striking example of brain organisation is the stereotyped arrangement of cell preferences in the visual cortex for edges of particular orientations in the visual image. These "orientation preference maps" appear to have remarkably consistent statistical properties across many species. However, fine-scale analysis of these properties requires the accurate reconstruction of maps from imaging data which is highly noisy. A new approach for solving this reconstruction problem is to use Bayesian Gaussian process methods, which produce more accurate results than classical techniques. However, so far this work has not considered the fact that maps for several other features of visual input coexist with the orientation preference map, and that these maps have mutually dependent spatial arrangements. Here we extend the Gaussian process framework to the multiple-output case, so that we can consider multiple maps simultaneously. We demonstrate that this improves reconstruction of multiple maps compared to both classical techniques and the single-output approach, can encode the empirically observed relationships, and is easily extendible. This provides the first principled approach for studying the spatial relationships between feature maps in visual cortex.

  5. Modular Automated Processing System (MAPS) for analysis of biological samples.

    SciTech Connect

    Gil, Geun-Cheol; Chirica, Gabriela S.; Fruetel, Julia A.; VanderNoot, Victoria A.; Branda, Steven S.; Schoeniger, Joseph S.; Throckmorton, Daniel J.; Brennan, James S.; Renzi, Ronald F.

    2010-10-01

    We have developed a novel Modular Automated Processing System (MAPS) that enables reliable, high-throughput analysis as well as sample-customized processing. The system comprises a set of independent modules that carry out individual sample processing functions: cell lysis; protein concentration based on hydrophobic, ion-exchange and affinity interactions; interferent depletion; buffer exchange; and enzymatic digestion of proteins of interest. Taking advantage of its unique capacity for enclosed processing of intact bioparticulates (viruses, spores) and complex serum samples, we have used MAPS for analysis of BSL1 and BSL2 samples to identify specific protein markers through integration with the portable microChemLab™ and MALDI.

  6. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    SciTech Connect

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Serianni, G.; Nakano, H.; Takeiri, Y.; Tsumori, K.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  7. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.

    2011-09-01

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  8. Emitting electron spectra and acceleration processes in the jet of PKS 0447-439

    NASA Astrophysics Data System (ADS)

    Zhou, Yao; Yan, Dahai; Dai, Benzhong; Zhang, Li

    2014-02-01

    We investigate the electron energy distributions (EEDs) and the corresponding acceleration processes in the jet of PKS 0447-439, and estimate its redshift, by modeling its observed spectral energy distribution (SED) in the frame of a one-zone synchrotron self-Compton (SSC) model. Three EEDs formed in different acceleration scenarios are assumed: the power-law with exponential cut-off (PLC) EED (shock-acceleration scenario, or the case of the EED approaching equilibrium in the stochastic-acceleration scenario), the log-parabolic (LP) EED (stochastic-acceleration scenario with acceleration dominating), and the broken power-law (BPL) EED (no-acceleration scenario). The corresponding fluxes of both synchrotron and SSC emission are then calculated. The model is applied to PKS 0447-439, and the modeled SEDs are compared to the observed SED of this object using the Markov chain Monte Carlo method. The results show that the PLC model fails to fit the observed SED well, while the LP and BPL models give comparably good fits. This indicates that a stochastic acceleration process may act in the emitting region of PKS 0447-439 with the EED far from equilibrium (acceleration dominating), or that no acceleration process operates in the emitting region. The redshift of PKS 0447-439 is also estimated in our fitting: z = 0.16 ± 0.05 for the LP case and z = 0.17 ± 0.04 for the BPL case.
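
    For reference, the standard log-parabolic form (our notation, not necessarily the paper's) is a power law with a running index whose curvature parameter b is the signature of stochastic acceleration; b → 0 recovers the pure power law, while the BPL model instead joins two power laws at a break energy:

        \[
          N(\gamma) \;\propto\;
          \left(\frac{\gamma}{\gamma_0}\right)^{-\left[a \,+\, b\,\log_{10}(\gamma/\gamma_0)\right]},
        \]

    where a is the spectral index at the reference energy γ_0.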

  9. Observations of shock acceleration processes in the solar wind

    NASA Technical Reports Server (NTRS)

    Scholer, M.

    1986-01-01

    Substantial evidence was accumulated over more than two decades that ion acceleration occurs at all collisionless shocks sampled directly in the solar system. The various shock waves in the heliosphere and the associated energetic particle phenomena are shown schematically. Three shocks have attracted considerable attention in recent years: corotating shocks due to the interaction of fast and slow solar wind streams during solar minimum, travelling interplanetary shocks due to coronal mass ejections, and planetary bow shocks. The signatures of these shocks and of their energetic particles are briefly reviewed. The most prominent theoretical models for shock acceleration are also reviewed. Recent observations at the earth's bow shock and at quasi-parallel interplanetary shocks are discussed in detail.

  10. Accelerators for E-beam and X-ray processing

    NASA Astrophysics Data System (ADS)

    Auslender, V. L.; Bryazgin, A. A.; Faktorovich, B. L.; Gorbunov, V. A.; Kokin, E. N.; Korobeinikov, M. V.; Krainov, G. S.; Lukin, A. N.; Maximov, S. A.; Nekhaev, V. E.; Panfilov, A. D.; Radchenko, V. N.; Tkachenko, V. O.; Tuvik, A. A.; Voronin, L. A.

    2002-03-01

    In recent years the demand for pasteurization and disinfestation of various food products (meat, chicken, seafood, vegetables, fruits, etc.) has increased. The treatment of these products on an industrial scale requires powerful electron accelerators with an energy of 5-10 MeV and a beam power of at least 50 kW. The report describes the ILU accelerators, which cover an energy range up to 10 MeV and beam powers up to 150 kW. The different irradiation schemes in electron beam and X-ray modes for various products are described. The designs of the X-ray converter and the 90° beam bending system are also given.

  11. Bayesian active learning of neural firing rate maps with transformed gaussian process priors.

    PubMed

    Park, Mijung; Weller, J Patrick; Horwitz, Gregory D; Pillow, Jonathan W

    2014-08-01

    A firing rate map, also known as a tuning curve, describes the nonlinear relationship between a neuron's spike rate and a low-dimensional stimulus (e.g., orientation, head direction, contrast, color). Here we investigate Bayesian active learning methods for estimating firing rate maps in closed-loop neurophysiology experiments. These methods can accelerate the characterization of such maps through the intelligent, adaptive selection of stimuli. Specifically, we explore the manner in which the prior and utility function used in Bayesian active learning affect stimulus selection and performance. Our approach relies on a flexible model that involves a nonlinearly transformed gaussian process (GP) prior over maps and conditionally Poisson spiking. We show that infomax learning, which selects stimuli to maximize the information gain about the firing rate map, exhibits strong dependence on the seemingly innocuous choice of nonlinear transformation function. We derive an alternate utility function that selects stimuli to minimize the average posterior variance of the firing rate map and analyze the surprising relationship between prior parameterization, stimulus selection, and active learning performance in GP-Poisson models. We apply these methods to color tuning measurements of neurons in macaque primary visual cortex.
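
    As a concrete illustration of the variance-minimizing utility discussed above, the sketch below runs a greedy uncertainty-sampling loop over a gridded one-dimensional tuning curve. It deliberately simplifies the paper's model (Gaussian rather than Poisson observations, and no nonlinear transformation of the GP), and the kernel parameters, noise level, and stimulus grid are all illustrative assumptions.

        # Minimal sketch of variance-based active stimulus selection with a GP prior.
        # Simplifications vs. the paper: Gaussian (not Poisson) likelihood and no
        # nonlinear transformation; all parameters below are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        grid = np.linspace(0, 1, 101)                # candidate stimuli (e.g., hue)
        true_map = 20 * np.exp(-0.5 * ((grid - 0.6) / 0.08) ** 2) + 2  # hidden tuning curve

        def rbf(a, b, ell=0.1, sig=25.0):
            return sig * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

        X, y, noise = [], [], 4.0
        for trial in range(30):
            if X:
                Kxx = rbf(np.array(X), np.array(X)) + noise * np.eye(len(X))
                Ksx = rbf(grid, np.array(X))
                mean = Ksx @ np.linalg.solve(Kxx, np.array(y))
                var = rbf(grid, grid).diagonal() - np.einsum(
                    'ij,ji->i', Ksx, np.linalg.solve(Kxx, Ksx.T))
            else:
                mean, var = np.zeros_like(grid), rbf(grid, grid).diagonal()
            idx = np.argmax(var)                     # probe the most uncertain stimulus
            X.append(grid[idx])
            y.append(true_map[idx] + rng.normal(0, np.sqrt(noise)))

        # estimate shown uses all but the final sample
        print("mean absolute error of final estimate:", np.abs(mean - true_map).mean())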

  12. Microscopic Processes On Radiation from Accelerated Particles in Relativistic Jets

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Hardee, P. E.; Mizuno, Y.; Medvedev, M.; Zhang, B.; Sol, H.; Niemiec, J.; Pohl, M.; Nordlund, A.; Fredriksen, J.; Lyubarsky, Y.; Hartmann, D. H.; Fishman, G. J.

    2009-01-01

    Nonthermal radiation observed from astrophysical systems containing relativistic jets and shocks, e.g., gamma-ray bursts (GRBs), active galactic nuclei (AGNs), and Galactic microquasar systems, usually has power-law emission spectra. Recent PIC simulations of relativistic electron-ion (electron-positron) jets injected into a stationary medium show that particle acceleration occurs within the downstream jet. In collisionless relativistic shocks, plasma waves and their associated instabilities (e.g., the Buneman instability, other two-stream instabilities, and the Weibel (filamentation) instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. The simulation results show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields. These magnetic fields contribute to the electron's transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties than synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.

  13. Accelerating parallel transmit array B1 mapping in high field MRI with slice undersampling and interpolation by kriging.

    PubMed

    Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans

    2014-08-01

    Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, knowledge of the complex-valued RF transmit sensitivities (B1 maps) of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving, and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows mapping a complete set of eight volumetric field maps of the human head in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on the Fourier transform, as well as against a B1-map interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster, with a loss of accuracy potentially limited to about 5%.
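
    A minimal sketch of ordinary kriging, the geostatistical interpolator named above, is given below. The Gaussian covariance model and every parameter are illustrative assumptions, not values from the published sequence.

        # Ordinary kriging of scattered (complex) samples onto query points.
        # Covariance model and parameters are illustrative only.
        import numpy as np

        def ordinary_kriging(xs, vals, xq, ell=2.0, sill=1.0):
            """Predict field values at query points xq from samples (xs, vals)."""
            def cov(a, b):
                d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return sill * np.exp(-0.5 * d2 / ell ** 2)
            n = len(xs)
            # Kriging system: sample covariances plus a Lagrange row that forces
            # the weights to sum to one (ordinary-kriging unbiasedness constraint).
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = cov(xs, xs)
            A[n, n] = 0.0
            b = np.ones((n + 1, len(xq)))
            b[:n] = cov(xs, xq)
            w = np.linalg.solve(A, b)[:n]        # one weight column per query point
            return w.T @ vals

        # toy usage: interpolate a smooth complex map from 40 scattered samples
        rng = np.random.default_rng(1)
        xs = rng.uniform(0, 10, (40, 2))
        vals = np.exp(1j * xs[:, 0] / 3) * np.exp(-((xs - 5) ** 2).sum(1) / 40)
        print(ordinary_kriging(xs, vals, np.array([[5.0, 5.0], [2.0, 8.0]])))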

  14. Accelerate!

    PubMed

    Kotter, John P

    2012-11-01

    The old ways of setting and implementing strategy are failing us, writes the author of Leading Change, in part because we can no longer keep up with the pace of change. Organizational leaders are torn between trying to stay ahead of increasingly fierce competition and needing to deliver this year's results. Although traditional hierarchies and managerial processes--the components of a company's "operating system"--can meet the daily demands of running an enterprise, they are rarely equipped to identify important hazards quickly, formulate creative strategic initiatives nimbly, and implement them speedily. The solution Kotter offers is a second system--an agile, networklike structure--that operates in concert with the first to create a dual operating system. In such a system the hierarchy can hand off the pursuit of big strategic initiatives to the strategy network, freeing itself to focus on incremental changes to improve efficiency. The network is populated by employees from all levels of the organization, giving it organizational knowledge, relationships, credibility, and influence. It can liberate information from silos with ease. It has a dynamic structure free of bureaucratic layers, permitting a level of individualism, creativity, and innovation beyond the reach of any hierarchy. The network's core is a guiding coalition that represents each level and department in the hierarchy, with a broad range of skills. Its drivers are members of a "volunteer army" who are energized by and committed to the coalition's vividly formulated, high-stakes vision and strategy. Kotter has helped eight organizations, public and private, build dual operating systems over the past three years. He predicts that such systems will lead to long-term success in the 21st century--for shareholders, customers, employees, and companies themselves.

  15. Learning process mapping heuristics under stochastic sampling overheads

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Wah, Benjamin W.

    1991-01-01

    A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to find the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. The statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.

  16. Intelligent process mapping through systematic improvement of heuristics

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.

    1992-01-01

    The present system for automatic learning/evaluation of novel heuristic methods applicable to the mapping of communication-process sets on a computer network has its basis in the testing of a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new postgame-analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.

  17. Heralded processes on continuous-variable spaces as quantum maps

    SciTech Connect

    Ferreyrol, Franck; Spagnolo, Nicolò; Blandino, Rémi; Barbieri, Marco; Tualle-Brouri, Rosa

    2014-12-04

    Heralded processes, which only succeed when a measurement on part of the system gives the desired result, are particularly interesting for continuous variables. They permit non-Gaussian transformations that are necessary for several continuous-variable quantum information tasks. However, while maps and quantum process tomography are commonly used to describe quantum transformations in discrete-variable spaces, they are much rarer in the continuous-variable domain. Moreover, no convenient tool for representing maps in a way better adapted to the particularities of continuous variables has yet been explored. In this paper we try to fill this gap by presenting such a tool.

  18. Quantum stochastic processes for maps on Hilbert C*-modules

    SciTech Connect

    Heo, Jaeseong; Ji, Un Cig

    2011-05-15

    We discuss pairs (φ, Φ) of maps, where φ is a map between C*-algebras and Φ is a φ-module map between Hilbert C*-modules, which are generalizations of representations of Hilbert C*-modules. A covariant version of Stinespring's theorem for such a pair (φ, Φ) is established, and quantum stochastic processes constructed from pairs ({φ_t}, {Φ_t}) of families of such maps are studied. We prove that the quantum stochastic process J = {J_t} constructed from a φ-quantum dynamical semigroup Φ = {Φ_t} is a j-map for the quantum stochastic process j = {j_t} constructed from the given quantum dynamical semigroup φ = {φ_t}, and that J is covariant if the φ-quantum dynamical semigroup Φ is covariant.

  19. Accelerating the FE-Simulation of Roll Forming Processes with the Aid of specific Process's Properties

    NASA Astrophysics Data System (ADS)

    Abrass, Ahmad; Özel, Mahmut; Groche, Peter

    2011-08-01

    Roll forming is an effective and economical sheet forming process that is well-established in industry for the manufacturing of large quantities of profile-shaped products. In cold-roll forming, a metal sheet is fed through successive pairs of forming rolls until it is formed into the desired cross-sectional profile. The deformation of the sheet is complex. For this reason, theoretical analysis is very difficult, especially if the strain distribution and the occurring forces are to be determined [1]. The design of roll forming processes depends upon a large number of variables and mainly relies upon experience-based knowledge [2]. In order to overcome these challenges and to optimize such processes, FE-simulations are used. The simulation of these processes is time-consuming. The main objective of this work is to accelerate the simulation of roll forming processes by taking advantage of their steady-state properties. These properties allow the transformation of points on the sheet metal according to a mathematical function. This transformation function is determined with the help of the finite element method, and the subsequent forming steps are then computed based on the generated function. With the aid of this method, the computational time can be reduced effectively. The details of the FE-model and the new numerical algorithms will be described. Furthermore, the results of numerical simulations with and without the application of the developed method will be compared with regard to computational time and numerical results.

  20. Processive phosphorylation of ERK MAP kinase in mammalian cells

    PubMed Central

    Aoki, Kazuhiro; Yamada, Masashi; Kunida, Katsuyuki; Yasuda, Shuhei; Matsuda, Michiyuki

    2011-01-01

    The mitogen-activated protein (MAP) kinase pathway is comprised of a three-tiered kinase cascade. The distributive kinetic mechanism of two-site MAP kinase phosphorylation inherently generates a nonlinear switch-like response. However, a linear graded response of MAP kinase has also been observed in mammalian cells, and its molecular mechanism remains unclear. To dissect these input-output behaviors, we quantitatively measured the kinetic parameters involved in the MEK (MAPK/ERK kinase)-ERK MAP kinase signaling module in HeLa cells. Using a numerical analysis based on experimentally determined parameters, we predicted in silico and validated in vivo that ERK is processively phosphorylated in HeLa cells. Finally, we identified molecular crowding as a critical factor that converts distributive phosphorylation into processive phosphorylation. We proposed the term quasi-processive phosphorylation to describe this mode of ERK phosphorylation that is operated under the physiological condition of molecular crowding. The generality of this phenomenon may provide a new paradigm for a diverse set of biochemical reactions including multiple posttranslational modifications. PMID:21768338
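
    To make the distributive-versus-processive distinction concrete, the toy steady-state calculation below contrasts the two mechanisms under plain mass-action kinetics. The rate constants and the calculation are illustrative only; the paper works with measured parameters and models molecular crowding explicitly.

        # Toy steady-state comparison of distributive vs. processive two-site
        # phosphorylation under mass-action kinetics (illustrative constants).
        import numpy as np

        K = np.logspace(-2, 2, 401)    # relative kinase activity
        a = b = 1.0                    # phosphorylation/dephosphorylation rate ratios

        # Distributive: S0 -> S1 -> S2 needs two independent kinase encounters,
        # so the doubly phosphorylated fraction rises as K^2 at low activity.
        f_dist = a * b * K**2 / (1 + a * K + a * b * K**2)
        # Processive: one encounter carries S0 straight to S2 (first order in K).
        f_proc = a * K / (1 + a * K)

        def hill(Kv, f):
            """Classic estimate n_H = log(81) / log(EC90 / EC10)."""
            ec10 = Kv[np.argmin(np.abs(f - 0.1))]
            ec90 = Kv[np.argmin(np.abs(f - 0.9))]
            return np.log(81) / np.log(ec90 / ec10)

        print("distributive n_H ~", round(hill(K, f_dist), 2))   # ~1.4: switch-like
        print("processive   n_H ~", round(hill(K, f_proc), 2))   # 1.0: graded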

  1. Computational Tools for Accelerating Carbon Capture Process Development

    SciTech Connect

    Miller, David

    2013-01-01

    The goals of the work reported are: to develop new computational tools and models to enable industry to more rapidly develop and deploy new advanced energy technologies; to demonstrate the capabilities of the CCSI Toolset on non-proprietary case studies; and to deploy the CCSI Toolset to industry. Challenges of simulating carbon capture (and other) processes include: dealing with multiple scales (particle, device, and whole process scales); integration across scales; verification, validation, and uncertainty; and decision support. The tools cover: risk analysis and decision making; validated, high-fidelity CFD; high-resolution filtered sub-models; process design and optimization tools; advanced process control and dynamics; process models; basic data sub-models; and cross-cutting integration tools.

  2. Hardware acceleration vs. algorithmic acceleration: can GPU-based processing beat complexity optimization for CT?

    NASA Astrophysics Data System (ADS)

    Neophytou, Neophytos; Xu, Fang; Mueller, Klaus

    2007-03-01

    Three-dimensional computed tomography (CT) is a compute-intensive process, due to the large amounts of source and destination data, and this limits the speed at which a reconstruction can be obtained. There are two main approaches to cope with this problem: (i) lowering the overall computational complexity via algorithmic means, and/or (ii) running CT on specialized high-performance hardware. Since the latter requires considerable capital investment into rather inflexible hardware, the former option is all one has typically available in a traditional CPU-based computing environment. However, the emergence of programmable commodity graphics hardware (GPUs) has changed this situation in a decisive way. In this paper, we show that GPUs represent a commodity high-performance parallel architecture that resonates very well with the computational structure and operations inherent to CT. Using formal arguments as well as experiments we demonstrate that GPU-based 'brute-force' CT (i.e., CT at regular complexity) can be significantly faster than CPU-based as well as GPU-based CT with optimal complexity, at least for practical data sizes. Therefore, the answer to the title question: "Can GPU-based processing beat complexity optimization for CT?" is "Absolutely!"
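
    As a concrete picture of what "brute-force CT at regular complexity" computes, the sketch below implements the plain backprojection loop (the compute-heavy step of filtered backprojection) for parallel-beam geometry in NumPy; on a GPU, one thread per output pixel runs essentially this arithmetic. Shapes and the toy sinogram are illustrative.

        # 'Brute-force' (regular-complexity) backprojection for parallel-beam CT.
        import numpy as np

        def backproject(sinogram, thetas, N):
            """sinogram: (n_angles, n_detectors); thetas in radians; N x N output."""
            recon = np.zeros((N, N))
            xs = np.arange(N) - N / 2 + 0.5
            X, Y = np.meshgrid(xs, xs)
            n_det = sinogram.shape[1]
            for proj, th in zip(sinogram, thetas):
                t = X * np.cos(th) + Y * np.sin(th) + n_det / 2   # detector coordinate
                i = np.clip(t.astype(int), 0, n_det - 2)
                frac = np.clip(t - i, 0.0, 1.0)
                recon += (1 - frac) * proj[i] + frac * proj[i + 1]  # linear interpolation
            return recon * np.pi / len(thetas)

        # toy usage: an (unfiltered) backprojection of a centered disk's sinogram
        N, n_ang = 64, 90
        thetas = np.linspace(0, np.pi, n_ang, endpoint=False)
        xs = np.arange(N) - N / 2 + 0.5
        disk = 2 * np.sqrt(np.clip((N / 4) ** 2 - xs ** 2, 0, None))  # disk projection
        img = backproject(np.tile(disk, (n_ang, 1)), thetas, N)
        print("center vs. corner:", round(img[N // 2, N // 2], 1), round(img[0, 0], 1))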

  3. Studies of acceleration processes in the corona using ion measurements on the solar probe mission

    NASA Technical Reports Server (NTRS)

    Gloeckler, G.

    1978-01-01

    The energy spectra and composition of particles escaping from the Sun provide essential information on the mechanisms responsible for their acceleration, and may also be used to characterize the regions where they are accelerated and confined and through which they propagate. The suprathermal energy range, which extends from solar wind energies (approximately 1 keV) to about 1 MeV/nucleon, is of special interest to studies of nonthermal acceleration processes because a large fraction of particles is likely to be accelerated into this energy range. Data obtained from near-earth observations of particles in the suprathermal energy range are reviewed. The necessary capabilities of an ion composition experiment on the solar probe mission and the required ion measurements are discussed. A possible configuration of an instrument consisting of an electrostatic deflection system, modest post-acceleration, and a time-of-flight versus energy system is described, as well as its possible location on the spacecraft.
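
    The time-of-flight versus energy technique mentioned above rests on elementary non-relativistic kinematics (a standard relation, not one quoted from the report): measuring a particle's energy E and its flight time τ over a path of length L determines its mass, and hence its species.

        % Mass identification in a time-of-flight vs. energy telescope
        % (standard kinematics; symbols ours): flight path L, flight time tau.
        E = \tfrac{1}{2} m v^{2}, \qquad v = \frac{L}{\tau}
        \quad\Longrightarrow\quad
        m = \frac{2 E \tau^{2}}{L^{2}}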

  4. Acceleration of the GAMESS-UK electronic structure package on graphical processing units.

    PubMed

    Wilkinson, Karl A; Sherwood, Paul; Guest, Martyn F; Naidoo, Kevin J

    2011-07-30

    The approach used to calculate the two-electron integrals in many electronic structure packages, including the Generalized Atomic and Molecular Electronic Structure System-UK (GAMESS-UK), has been designed for CPU-based compute units. We redesigned the two-electron compute algorithm for acceleration on a graphical processing unit (GPU). We report the acceleration strategy and illustrate it on the (ss|ss) type integrals. This strategy is general for Fortran-based codes and uses the Accelerator compiler from Portland Group International and GPU-based accelerators from Nvidia. The evaluation of (ss|ss) type integrals within calculations using Hartree-Fock ab initio methods and density functional theory is accelerated on single and quad GPU hardware systems by factors of 43 and 153, respectively. The overall speedup for a single self-consistent field cycle is at least a factor of eight on a single GPU compared with a single CPU.
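
    For orientation, the (ss|ss) class of integrals mentioned above has a closed form for primitive s-type Gaussians; the textbook expression is reproduced below in generic notation (not drawn from the paper), where P and Q are the Gaussian product centers of the bra and ket pairs and F0 is the Boys function.

        % Primitive (ss|ss) two-electron repulsion integral over unnormalized s-type
        % Gaussians with exponents alpha, beta, gamma, delta at centers A, B, C, D.
        (AB|CD) = \frac{2\pi^{5/2}}{(\alpha+\beta)(\gamma+\delta)\sqrt{\alpha+\beta+\gamma+\delta}}
        \, \exp\!\left[-\tfrac{\alpha\beta}{\alpha+\beta}|A-B|^{2}
                       -\tfrac{\gamma\delta}{\gamma+\delta}|C-D|^{2}\right]
        \, F_{0}\!\left(\tfrac{(\alpha+\beta)(\gamma+\delta)}{\alpha+\beta+\gamma+\delta}\,|P-Q|^{2}\right),
        \qquad
        F_{0}(x) = \tfrac{1}{2}\sqrt{\pi/x}\;\operatorname{erf}\!\left(\sqrt{x}\right)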

  5. Application of quantitative trait locus mapping and transcriptomics to studies of the senescence-accelerated phenotype in rats

    PubMed Central

    2014-01-01

    Background Etiology of complex disorders, such as cataract and neurodegenerative diseases including age-related macular degeneration (AMD), remains poorly understood due to the paucity of animal models, fully replicating the human disease. Previously, two quantitative trait loci (QTLs) associated with early cataract, AMD-like retinopathy, and some behavioral aberrations in senescence-accelerated OXYS rats were uncovered on chromosome 1 in a cross between OXYS and WAG rats. To confirm the findings, we generated interval-specific congenic strains, WAG/OXYS-1.1 and WAG/OXYS-1.2, carrying OXYS-derived loci of chromosome 1 in the WAG strain. Both congenic strains displayed early cataract and retinopathy but differed clinically from OXYS rats. Here we applied a high-throughput RNA sequencing (RNA-Seq) strategy to facilitate nomination of the candidate genes and functional pathways that may be responsible for these differences and can contribute to the development of the senescence-accelerated phenotype of OXYS rats. Results First, the size and map position of QTL-derived congenic segments were determined by comparative analysis of coding single-nucleotide polymorphisms (SNPs), which were identified for OXYS, WAG, and congenic retinal RNAs after sequencing. The transferred locus was not what we expected in WAG/OXYS-1.1 rats. In rat retina, 15442 genes were expressed. Coherent sets of differentially expressed genes were identified when we compared RNA-Seq retinal profiles of 20-day-old WAG/OXYS-1.1, WAG/OXYS-1.2, and OXYS rats. The genes most different in the average expression level between the congenic strains included those generally associated with the Wnt, integrin, and TGF-β signaling pathways, widely involved in neurodegenerative processes. Several candidate genes (including Arhgap33, Cebpg, Gtf3c1, Snurf, Tnfaip3, Yme1l1, Cbs, Car9 and Fn1) were found to be either polymorphic in the congenic loci or differentially expressed between the strains. These genes may

  6. Transport processes in the middle atmosphere: Reflections after MAP

    NASA Technical Reports Server (NTRS)

    Grose, W. L.

    1989-01-01

    The Middle Atmosphere Program (MAP) has provided a focus for considerable research on atmospheric radiative, chemical, and dynamical processes and the mutual coupling among these processes. In particular, major advances have occurred in the understanding of constituent transport as a result of near-global measurements obtained during MAP from several satellite-based instruments (e.g., LIMS, SAMS, SAGE, and SSU, among others). Using selected portions of these data, progress in understanding transport processes is reviewed, with special emphasis on dynamically active periods. Examples are presented which demonstrate coupling between chemistry and dynamics. In addition to the constituent data, the use of Ertel's potential vorticity, inferred from satellite temperature data, as a diagnostic for interpreting transport phenomena is reviewed. Finally, the use of 3-D model simulations, in conjunction with the satellite data, to provide additional insight into fundamental transport mechanisms is briefly illustrated.

  7. The Process of Thematic Mapping in Case Conceptualization.

    PubMed

    Ridley, Charles R; Jeffrey, Christina E; Roberson, Richard B

    2017-04-01

    This article, the 4th in a series of 5, introduces the 3-stage process of thematic mapping: theme identification, theme interpretation, and theme intervention. Theme identification is based on inductive reasoning, in which clinicians seek to discover and describe behavioral patterns in emotionally charged episodes. Theme interpretation subsequently initiates a process of deductive reasoning, wherein clinicians distill the generalized pattern into dominant and subthemes. Each theme is then labeled with a compelling metaphor that is representative of the theme interpretation. In the 3rd stage, theme intervention, clinicians seek to change the dysfunctional dominant and subthemes through collaboration with the clients. The process unfolds within 5 overarching parameters: a focus on comprehensiveness, simplification, maximal objectivity/impartial subjectivity, observation and inference, and an idiographic approach. Alternative models of case formulation are offered in comparison to thematic mapping.

  8. UAV Data Processing for Large Scale Topographical Mapping

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2014-06-01

    Large scale topographical mapping in third world countries is a prominent challenge in the geospatial industry nowadays. On the one side the demand is significantly increasing, while on the other hand it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr.4/yr.2011 on Geospatial Information in Indonesia, large scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Large scale topographical mapping usually relies on conventional aerial survey campaigns to provide high resolution 3D geospatial data sources. Having grown widely as a leisure hobby, aero models in the form of so-called Unmanned Aerial Vehicles (UAVs) offer alternative semi-photogrammetric aerial data acquisition possibilities suitable for relatively small Areas of Interest (AOI), i.e. <5,000 hectares. For detailed spatial planning purposes in Indonesia this area size can be used as a mapping unit, since planning usually concentrates on the sub-district (kecamatan) level. In this paper different camera and processing software systems are analyzed to identify the optimum UAV data acquisition campaign components in combination with the data processing scheme. The selected AOI covers the cultural heritage site of Borobudur Temple, one of the Seven Wonders of the World. A detailed accuracy assessment concentrates on the object features of the temple in the first place. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) is integrated in order to provide Digital Elevation Models (DEM) as the main interest of the topographic mapping activity. Incorporating the optimum number of GCPs in the UAV photo data processing increases the accuracy along with its high resolution of 5 cm Ground Sampling Distance (GSD). Finally this result will be used as the benchmark for alternative geospatial
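
    The 5 cm GSD quoted above follows from the standard photogrammetric scale relation (a generic formula, shown with hypothetical camera parameters for illustration; the paper's actual camera and flying height may differ):

        # Ground sampling distance: GSD = pixel_pitch * flying_height / focal_length.
        def gsd_cm(pixel_pitch_um: float, focal_mm: float, height_m: float) -> float:
            return (pixel_pitch_um * 1e-6) * height_m / (focal_mm * 1e-3) * 100.0

        # e.g. a hypothetical 4.4 um pixel sensor with a 24 mm lens flown at 270 m
        print(round(gsd_cm(4.4, 24.0, 270.0), 2), "cm")   # -> ~4.95 cm GSD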

  9. Subcortical mapping of calculation processing in the right parietal lobe.

    PubMed

    Della Puppa, Alessandro; De Pellegrin, Serena; Lazzarini, Anna; Gioffrè, Giorgio; Rustemi, Oriela; Cagnin, Annachiara; Scienza, Renato; Semenza, Carlo

    2015-05-01

    Preservation of calculation processing in brain surgery is crucial for patients' quality of life. Over the last decade, surgical electrostimulation was used to identify and preserve the cortical areas involved in such processing. Conversely, subcortical connectivity among different areas implicated in this function remains unclear, and the role of surgery in this domain has not been explored so far. The authors present the first 2 cases in which the subcortical functional sites involved in calculation were identified during right parietal lobe surgery. Two patients affected by a glioma located in the right parietal lobe underwent surgery with the aid of MRI neuronavigation. No calculation deficits were detected during preoperative assessment. Cortical and subcortical mapping were performed using a bipolar stimulator. The current intensity was determined by progressively increasing the amplitude by 0.5-mA increments (from a baseline of 1 mA) until a sensorimotor response was elicited. Then, addition and multiplication calculation tasks were administered. Corticectomy was performed according to both the MRI neuronavigation data and the functional findings obtained through cortical mapping. Direct subcortical electrostimulation was repeatedly performed during tumor resection. Subcortical functional sites for multiplication and addition were detected in both patients. Electrostimulation interfered with calculation processing during cortical mapping as well. Functional sites were spared during tumor removal. The postoperative course was uneventful, and calculation processing was preserved. Postoperative MRI showed complete resection of the tumor. The present preliminary study shows for the first time how functional mapping can be a promising method to intraoperatively identify the subcortical functional sites involved in calculation processing. This report therefore supports direct electrical stimulation as a promising tool to improve the current knowledge on

  10. Refining each process step to accelerate the development of biorefineries

    DOE PAGES

    Chandra, Richard P.; Ragauskas, Art J.

    2016-06-21

    Research over the past decade has been mainly focused on overcoming hurdles in the pretreatment, enzymatic hydrolysis, and fermentation steps of biochemical processing. Pretreatments have improved significantly in their ability to fractionate and recover the cellulose, hemicellulose, and lignin components of biomass while producing substrates containing carbohydrates that can be easily broken down by hydrolytic enzymes. There is a rapid movement towards pretreatment processes that incorporate mechanical treatments that make use of existing infrastructure in the pulp and paper industry, which has experienced a downturn in its traditional markets. Enzyme performance has also made great strides with breakthrough developments in nonhydrolytic protein components, such as lytic polysaccharide monooxygenases, as well as the improvement of enzyme cocktails. The fermentability of pretreated and hydrolyzed sugar streams has been improved through strategies such as the use of reducing agents for detoxification, strain selection, and strain improvements. Although significant progress has been made, tremendous challenges still remain to advance each step of biochemical conversion, especially when processing woody biomass. In addition to technical and scale-up issues within each step of the bioconversion process, biomass feedstock supply and logistics challenges still remain at the forefront of biorefinery research.

  11. Accelerating COTS Middleware Acquisition: The i-Mate Process

    SciTech Connect

    Liu, Anna; Gorton, Ian

    2003-03-05

    Most major organizations now use some commercial-off-the-shelf middleware components to run their businesses. Key drivers behind this growth include ever-increasing Internet usage and the ongoing need to integrate heterogeneous legacy systems to streamline business processes. As organizations do more business online, they need scalable, high-performance software infrastructures to handle transactions and provide access to core systems.

  12. Refining each process step to accelerate the development of biorefineries

    SciTech Connect

    Chandra, Richard P.; Ragauskas, Art J.

    2016-06-21

    Research over the past decade has been mainly focused on overcoming hurdles in the pretreatment, enzymatic hydrolysis, and fermentation steps of biochemical processing. Pretreatments have improved significantly in their ability to fractionate and recover the cellulose, hemicellulose, and lignin components of biomass while producing substrates containing carbohydrates that can be easily broken down by hydrolytic enzymes. There is a rapid movement towards pretreatment processes that incorporate mechanical treatments that make use of existing infrastructure in the pulp and paper industry, which has experienced a downturn in its traditional markets. Enzyme performance has also made great strides with breakthrough developments in nonhydrolytic protein components, such as lytic polysaccharide monooxygenases, as well as the improvement of enzyme cocktails. The fermentability of pretreated and hydrolyzed sugar streams has been improved through strategies such as the use of reducing agents for detoxification, strain selection, and strain improvements. Although significant progress has been made, tremendous challenges still remain to advance each step of biochemical conversion, especially when processing woody biomass. In addition to technical and scale-up issues within each step of the bioconversion process, biomass feedstock supply and logistics challenges still remain at the forefront of biorefinery research.

  13. Blocking the association of HDAC4 with MAP1S accelerates autophagy clearance of mutant Huntingtin.

    PubMed

    Yue, Fei; Li, Wenjiao; Zou, Jing; Chen, Qi; Xu, Guibin; Huang, Hai; Xu, Zhen; Zhang, Sheng; Gallinari, Paola; Wang, Fen; McKeehan, Wallace L; Liu, Leyuan

    2015-10-01

    Autophagy controls and executes the turnover of abnormally aggregated proteins. MAP1S interacts with the autophagy marker LC3 and positively regulates autophagy flux. HDAC4 associates with the aggregation-prone mutant huntingtin protein (mHTT) that causes Huntington's disease, and colocalizes with it in cytosolic inclusions. A yeast two-hybrid screen had suggested that HDAC4 interacts with MAP1S. Here, we found that MAP1S interacts with HDAC4 via an HDAC4-binding domain (HBD). HDAC4 destabilizes MAP1S, suppresses autophagy flux, and promotes the accumulation of mHTT aggregates. This occurs through increased deacetylation of acetylated MAP1S. Either suppression of HDAC4 with siRNA or overexpression of the MAP1S HBD leads to stabilization of MAP1S, activation of autophagy flux, and clearance of mHTT aggregates. Therefore, specific interruption of the HDAC4-MAP1S interaction with short peptides or small molecules to enhance autophagy flux may relieve the toxicity of mHTT associated with Huntington's disease and improve the symptoms of HD patients.

  14. Process mapping as a tool for home health network analysis.

    PubMed

    Pluto, Delores M; Hirshorn, Barbara A

    2003-01-01

    Process mapping is a qualitative tool that allows service providers, policy makers, researchers, and other concerned stakeholders to get a "bird's eye view" of a home health care organizational network or a very focused, in-depth view of a component of such a network. It can be used to share knowledge about community resources directed at the older population, identify gaps in resource availability and access, and promote on-going collaborative interactions that encourage systemic policy reassessment and programmatic refinement. This article is a methodological description of process mapping, which explores its utility as a practice and research tool, illustrates its use in describing service-providing networks, and discusses some of the issues that are key to successfully using this methodology.

  15. Accelerating Malware Detection via a Graphics Processing Unit

    DTIC Science & Technology

    2010-09-01

    The PE format is an updated version of the Common Object File Format (COFF) [Mic06].

  16. Accelerated mapping of magnetic susceptibility using 3D planes-on-a-paddlewheel (POP) EPI at ultra-high field strength.

    PubMed

    Stäb, Daniel; Bollmann, Steffen; Langkammer, Christian; Bredies, Kristian; Barth, Markus

    2017-04-01

    With the advent of ultra-high field MRI scanners in clinical research, susceptibility based MRI has recently gained increasing interest because of its potential to assess subtle tissue changes underlying neurological pathologies/disorders. Conventional, but rather slow, three-dimensional (3D) spoiled gradient-echo (GRE) sequences are typically employed to assess the susceptibility of tissue. 3D echo-planar imaging (EPI) represents a fast alternative but generally comes with echo-time restrictions, geometrical distortions and signal dropouts that can become severe at ultra-high fields. In this work we assess quantitative susceptibility mapping (QSM) at 7 T using non-Cartesian 3D EPI with a planes-on-a-paddlewheel (POP) trajectory, which is created by rotating a standard EPI readout train around its own phase encoding axis. We show that the threefold accelerated non-Cartesian 3D POP EPI sequence enables very fast, whole brain susceptibility mapping at an isotropic resolution of 1 mm and that the high image quality has sufficient signal-to-noise ratio in the phase data for reliable QSM processing. The susceptibility maps obtained were comparable with regard to QSM values and geometric distortions to those calculated from a conventional 4 min 3D GRE scan using the same QSM processing pipeline.
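
    To visualize the trajectory described above, the sketch below generates the sample locations of an idealized POP acquisition: one Cartesian EPI readout plane rotated in equal steps about its phase-encoding axis. The geometry is a schematic assumption; readout timing, ramp sampling, and blips are omitted.

        # Idealized planes-on-a-paddlewheel (POP) k-space trajectory: a Cartesian
        # EPI plane rotated about its phase-encoding axis (here kz). Illustrative only.
        import numpy as np

        def pop_trajectory(n_read=64, n_phase=64, n_rot=64, kmax=1.0):
            kr = np.linspace(-kmax, kmax, n_read)      # frequency-encode samples
            kz = np.linspace(-kmax, kmax, n_phase)     # phase-encode (rotation axis)
            plane_r, plane_z = np.meshgrid(kr, kz)     # one EPI plane before rotation
            samples = []
            for m in range(n_rot):                     # paddlewheel rotation angles
                phi = np.pi * m / n_rot
                kx = plane_r * np.cos(phi)
                ky = plane_r * np.sin(phi)
                samples.append(np.stack([kx, ky, plane_z], axis=-1))
            return np.stack(samples)                   # (n_rot, n_phase, n_read, 3)

        traj = pop_trajectory()
        print(traj.shape)   # 64 rotated planes sharing the kz axis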

  17. Conceptual Framework for the Mapping of Management Process with Information Technology in a Business Process

    PubMed Central

    Chellappa, Swarnalatha; Nagarajan, Asha

    2015-01-01

    This study of a component framework reveals the importance of mapping management processes to technology in a business environment. We define ERP as a software tool that has to provide a business solution, but not necessarily an integration of all departments. Any business process can be classified as a management process, an operational process, or a supportive process. We went through the entire management process and were able to identify the influencing components to be mapped to a technology for a business solution. Governance, strategic management, and decision making are thoroughly discussed, and the need to map these components onto the ERP is clearly explained. We also suggest that implementation of this framework might reduce ERP failures and, in particular, rectify ERP misfit. PMID:25861688

  18. Conceptual framework for the mapping of management process with information technology in a business process.

    PubMed

    Rajarathinam, Vetrickarthick; Chellappa, Swarnalatha; Nagarajan, Asha

    2015-01-01

    This study of a component framework reveals the importance of mapping management processes to technology in a business environment. We define ERP as a software tool that has to provide a business solution, but not necessarily an integration of all departments. Any business process can be classified as a management process, an operational process, or a supportive process. We went through the entire management process and were able to identify the influencing components to be mapped to a technology for a business solution. Governance, strategic management, and decision making are thoroughly discussed, and the need to map these components onto the ERP is clearly explained. We also suggest that implementation of this framework might reduce ERP failures and, in particular, rectify ERP misfit.

  19. Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    2012-02-01

    In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and large bandwidth to many shared memories takes advantage of the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.
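
    The inner kernel of a Hirsch-Fye sweep is a rank-one update of the Green function matrix after each accepted auxiliary-spin flip; batching ("delaying") many such updates is what creates the data re-use the GPU exploits. Below is a generic Sherman-Morrison sketch of that operation, in generic conventions rather than the solver's exact formulas.

        # Generic Sherman-Morrison rank-1 inverse update, the linear-algebra kernel
        # underlying Hirsch-Fye Green-function updates (conventions are generic).
        import numpy as np

        def sherman_morrison_update(Ainv, u, v):
            """Return (A + u v^T)^{-1} given Ainv = A^{-1}, in O(n^2) work."""
            Au = Ainv @ u
            vA = v @ Ainv
            return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

        rng = np.random.default_rng(0)
        n = 6
        A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
        u, v = rng.normal(size=n), rng.normal(size=n)
        fast = sherman_morrison_update(np.linalg.inv(A), u, v)
        slow = np.linalg.inv(A + np.outer(u, v))
        print("max deviation:", np.abs(fast - slow).max())  # ~1e-15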

  20. Microwave heating and the acceleration of polymerization processes

    NASA Astrophysics Data System (ADS)

    Parodi, Fabrizio

    1999-12-01

    Microwave power irradiation of dielectrics is nowadays well recognized and extensively used as an exceptionally efficient and versatile heating technique. Beyond this, since the early 1980s it has revealed an unexpected, and still far from elucidated, capacity to cause reaction and yield enhancements in a great variety of chemical processes. These phenomena are currently referred to as specific or nonthermal effects of microwaves. An overview of these effects and of the interpretations given to date is provided, with particular attention to achievements in the microwave processing of slow-curing thermosetting resins. Tailored quaternary cyanoalkoxyalkyl ammonium halide catalysts, which further emphasize the microwave enhancement of the curing kinetics of isocyanate/epoxy and epoxy/anhydride resin systems, are presented here. Their catalytic efficiency under microwave irradiation, microwave heatability, and dielectric properties are discussed and interpreted with the aid of the results of semi-empirical quantum mechanical calculations and molecular dynamics simulations in vacuo. An ion-hopping conduction mechanism has been recognized as the dominant source of the microwave absorption capacity of these catalysts. Dipolar relaxation losses from their strongly dipolar cations, conversely, appear to be responsible for the peculiar catalytic effects displayed under microwave heating. This would occur through a well-focused, molecular-level microwave overheating of the intermediate reactive anionic groupings that the cations, as nearest neighbors of such negatively charged molecular sites, could indirectly cause.

  1. Accelerating radio astronomy cross-correlation with graphics processing units

    NASA Astrophysics Data System (ADS)

    Clark, M. A.; LaPlante, P. C.; Greenhill, L. J.

    2013-05-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from 'large-N' arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on NVIDIA's Fermi architecture, sustaining up to 79% of the peak single-precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared with application-specific integrated circuit (ASIC) and field programmable gate array (FPGA) implementations have the potential to greatly shorten the cycle of correlator development and deployment, for cases where some power-consumption penalty can be tolerated.
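
    A minimal sketch of what an FX-correlator X-engine computes is given below: for each frequency channel, it accumulates the cross-multiply products between every antenna pair. The array shapes are illustrative; production engines tile exactly this computation across GPU threads and shared memory.

        # Minimal X-engine: per-channel visibility matrix from channelized voltages.
        import numpy as np

        n_ant, n_chan, n_time = 8, 16, 128
        rng = np.random.default_rng(0)
        # channelized voltages: (time, channel, antenna), complex-valued
        x = (rng.normal(size=(n_time, n_chan, n_ant))
             + 1j * rng.normal(size=(n_time, n_chan, n_ant)))

        # visibility matrix V[c, i, j] = sum_t x[t, c, i] * conj(x[t, c, j])
        V = np.einsum('tci,tcj->cij', x, x.conj())

        # Hermitian by construction, so only the upper triangle need be stored
        i, j = np.triu_indices(n_ant)
        baselines = V[:, i, j]               # (n_chan, n_baselines)
        print(baselines.shape, np.allclose(V, V.conj().transpose(0, 2, 1)))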

  2. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    SciTech Connect

    Tamascelli, Dario; Dambrosio, Francesco Saverio; Conte, Riccardo; Ceotto, Michele

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered, and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20) versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W), and the critical issues related to the GPU implementation, are discussed. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.

  3. Accelerating chemical database searching using graphics processing units.

    PubMed

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097.
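
    For reference, a textbook implementation of the Elias gamma code named above is sketched below; fingerprint systems typically apply it to the gaps between set bit positions. This is the generic scheme, not the paper's exact storage layout.

        # Minimal Elias gamma encoder/decoder (positive integers only).
        def elias_gamma_encode(n: int) -> str:
            assert n >= 1
            binary = bin(n)[2:]                 # n in binary, no '0b' prefix
            return '0' * (len(binary) - 1) + binary

        def elias_gamma_decode(bits: str):
            out, pos = [], 0
            while pos < len(bits):
                zeros = 0
                while bits[pos] == '0':         # unary length prefix
                    zeros += 1
                    pos += 1
                out.append(int(bits[pos:pos + zeros + 1], 2))
                pos += zeros + 1
            return out

        stream = ''.join(elias_gamma_encode(g) for g in [1, 5, 17, 2])
        print(stream, elias_gamma_decode(stream))   # -> [1, 5, 17, 2]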

  4. Mapping Pixel Windows To Vectors For Parallel Processing

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    1996-01-01

    Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from square subarray (window) of n x n pixels within electronic imaging device containing np x np array of pixels to linear array of n² input terminals of electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.
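
    A software analogue of this window-to-vector mapping is the "im2col" layout: every n x n window of an np x np image is flattened to a length-n² vector so a parallel stage can consume all windows at once. The sketch below is illustrative, not a model of the hardware.

        # Flatten every n x n window of an image into a row vector (im2col layout).
        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        def windows_to_vectors(image: np.ndarray, n: int) -> np.ndarray:
            """(np, np) image -> (num_windows, n*n) matrix, one row per window."""
            wins = sliding_window_view(image, (n, n))   # (np-n+1, np-n+1, n, n)
            return wins.reshape(-1, n * n)

        img = np.arange(36).reshape(6, 6)
        vecs = windows_to_vectors(img, 3)
        print(vecs.shape)      # (16, 9): sixteen 3x3 windows, each a 9-vector
        print(vecs[0])         # first window, row-major: [0 1 2 6 7 8 12 13 14]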

  5. Graphics processing unit acceleration of computational electromagnetic methods

    NASA Astrophysics Data System (ADS)

    Inman, Matthew

    The use of Graphical Processing Units (GPUs) for scientific applications has been evolving and expanding for over a decade. GPUs provide an alternative to the CPU for the creation and execution of the numerical codes that are often relied upon to perform simulations in computational electromagnetics. While originally designed purely to display graphics on the user's monitor, GPUs today are essentially powerful floating-point co-processors that can be programmed not only to render complex graphics but also to perform the complex mathematical calculations often encountered in scientific computing. The GPUs currently being produced often contain hundreds of separate cores able to access large amounts of high-speed dedicated memory. By utilizing the power offered by such a specialized processor, it is possible to drastically speed up the calculations required in computational electromagnetics. This increase in speed allows GPU-based simulations to be used in a variety of situations where computational time has heretofore been a limiting factor, such as in educational courses. Teaching electromagnetics often relies upon simple example problems because of the simulation times needed to analyze more complex ones. The use of GPU-based simulations will be shown to allow demonstrations of more advanced problems than previously possible by adapting the methods for use on the GPU. Modules will be developed for a wide variety of teaching situations, utilizing the speed of the GPU to demonstrate various techniques and ideas previously unrealizable.

  6. Accelerator Production of Tritium project process waste assessment

    SciTech Connect

    Carson, S.D.; Peterson, P.K.

    1995-09-01

    DOE has made a commitment to compliance with all applicable environmental regulatory requirements. In this respect, it is important to consider and design all tritium supply alternatives so that they can comply with these requirements. The management of waste is an integral part of this activity and it is therefore necessary to estimate the quantities and specific wastes that will be generated by all tritium supply alternatives. A thorough assessment of waste streams includes waste characterization, quantification, and the identification of treatment and disposal options. The waste assessment for APT has been covered in two reports. The first report was a process waste assessment (PWA) that identified and quantified waste streams associated with both target designs and fulfilled the requirements of APT Work Breakdown Structure (WBS) Item 5.5.2.1. This second report is an expanded version of the first that includes all of the data of the first report, plus an assessment of treatment and disposal options for each waste stream identified in the initial report. The latter information was initially planned to be issued as a separate Waste Treatment and Disposal Options Assessment Report (WBS Item 5.5.2.2).

  7. Acceleration of the remediation process through interim action

    SciTech Connect

    Clark, T.R.; Throckmorton, J.D.; Hampshire, L.H.; Dalga, D.G.; Janke, R.J.

    1993-11-01

    During the Remedial Investigation and Feasibility Study (RI/FS) phase of a CERCLA cleanup, it is possible to implement interim actions at a site "to respond to an immediate site threat or take advantage of an opportunity to significantly reduce risk quickly." An interim action is a short term action that addresses threats to public health and safety and is generally followed by the RI/FS process to achieve complete long term protection of human health and the environment. Typically, an interim action is small in scope and can be implemented quickly to reduce risks, such as the addition of a security fence around a known or suspected hazard, or construction of a temporary cap to reduce run-on/run-off from a contaminant source. For more specialized situations, however, the possibility exists to apply the intent of the interim action guidance to a much larger project scope. The primary focus of this paper is the discussion of the interim action approach for streamlined remedial action and presentation of an example large-scale project utilizing this approach at the Fernald Environmental Management Project (FEMP).

  8. Estimating and mapping ecological processes influencing microbial community assembly

    DOE PAGES

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; ...

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  9. Estimating and mapping ecological processes influencing microbial community assembly

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  10. Estimating and mapping ecological processes influencing microbial community assembly

    PubMed Central

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-01-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725

  11. Optimization of accelerator parameters using normal form methods on high-order transfer maps

    SciTech Connect

    Snopok, Pavel

    2007-05-01

    Methods of analysis of the dynamics of ensembles of charged particles in collider rings are developed. The following problems are posed and solved using normal form transformations and other methods of perturbative nonlinear dynamics: (1) Optimization of the Tevatron dynamics: (a) Skew quadrupole correction of the dynamics of particles in the Tevatron in the presence of the systematic skew quadrupole errors in dipoles; (b) Calculation of the nonlinear tune shift with amplitude based on the results of measurements and the linear lattice information; (2) Optimization of the Muon Collider storage ring: (a) Computation and optimization of the dynamic aperture of the Muon Collider 50 x 50 GeV storage ring using higher order correctors; (b) 750 x 750 GeV Muon Collider storage ring lattice design matching the Tevatron footprint. The normal form coordinates have a very important advantage over the particle optical coordinates: if the transformation can be carried out successfully (general restrictions for that are not much stronger than the typical restrictions imposed on the behavior of the particles in the accelerator), then the motion in the new coordinates has a very clean representation, allowing one to extract more information about the dynamics of particles, and they are very convenient for the purposes of visualization. All the problem formulations include the derivation of the objective functions, which are later used in the optimization process using various optimization algorithms. Algorithms used to solve the problems are specific to collider rings, and applicable to similar problems arising on other machines of the same type. The details of the long-term behavior of the systems are studied to ensure their stability for the desired number of turns. The algorithm of the normal form transformation is of great value for such problems as it gives much extra information about the disturbing factors. In addition to the fact that the dynamics of particles is represented

  12. Alginate-hyaluronan composite hydrogels accelerate wound healing process.

    PubMed

    Catanzano, O; D'Esposito, V; Acierno, S; Ambrosio, M R; De Caro, C; Avagliano, C; Russo, P; Russo, R; Miro, A; Ungaro, F; Calignano, A; Formisano, P; Quaglia, F

    2015-10-20

    In this paper we propose polysaccharide hydrogels combining alginate (ALG) and hyaluronan (HA) as a biofunctional platform for dermal wound repair. Hydrogels produced by internal gelation were homogeneous and easy to handle. Rheological evaluation of the gelation kinetics of ALG/HA mixtures at different ratios clarified the effect of HA on the ALG cross-linking process. Disk-shaped hydrogels, at different ALG/HA ratios, were characterized for morphology, homogeneity and mechanical properties. Results suggest that, although the presence of HA significantly slows down gelation kinetics, the concentration of cross-links reached at the end of gelation is scarcely affected. The in vitro activity of ALG/HA dressings was tested on adipose-derived multipotent adult stem cells (Ad-MSC) and an immortalized keratinocyte cell line (HaCaT). Hydrogels did not interfere with cell viability in either cell line, but significantly promoted gap closure in a scratch assay at early (1 day) and late (5 days) stages as compared to hydrogels made of ALG alone (p<0.01 and 0.001 for Ad-MSC and HaCaT, respectively). In vivo wound healing studies, conducted on a rat model of excised wounds, indicated that after 5 days ALG/HA hydrogels significantly promoted wound closure as compared to ALG ones (p<0.001). Overall, these results demonstrate that the integration of HA in a physically cross-linked ALG hydrogel can be a versatile strategy to promote wound healing that can be easily translated to a clinical setting.

  13. Accelerating the CERCLA process using plug-in records of decision

    SciTech Connect

    Williams, E.G.; Smallbeck, D.R.

    1995-12-31

    The inefficiencies of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA or Superfund) process are well recognized. Years of study and oftentimes millions of dollars are expended at Superfund sites before any cleanup begins. An accelerated approach to the CERCLA process was designed and implemented at the Fort Ord Superfund site in Monterey County, California. The approach, developed at the same time as and in concert with the US Environmental Protection Agency's (EPA's) Superfund Accelerated Clean-Up Model (SACM), included the preparation of two "plug-in" records of decision (RODs). These RODs, and the process to utilize them, were carefully designed to meet specific project objectives. Implementation of this accelerated program has allowed for a no-further-action designation or remediation of many areas of concern at the site up to 6 years ahead of schedule and at savings in excess of a million dollars.

  14. In-situ diagnostics and degradation mapping of a mixed-mode accelerated stress test for proton exchange membranes

    NASA Astrophysics Data System (ADS)

    Lai, Yeh-Hung; Fly, Gerald W.

    2015-01-01

    With increasing availability of more durable membrane materials for proton exchange membrane fuel cells, there is a need for a more stressful test that combines chemical and mechanical stressors to enable accelerated screening of promising membrane candidates. Equally important is the need for in-situ diagnostic methods with sufficient spatial resolution that can provide insights into how membranes degrade to facilitate the development of durable fuel cell systems. In this article, we report an accelerated membrane stress test and a degradation diagnostic method that satisfy both needs. By applying high-amplitude cycles of electrical load to a fuel cell fed with low-RH reactant gases, a wide range of mechanical and chemical stressful conditions can be created within the cell which leads to rapid degradation of a mechanically robust Ion Power™ N111-IP membrane. Using an in-situ shorting/crossover diagnostic method on a segmented fuel cell fixture that provides 100 local current measurements, we are able to monitor the progression and map the degradation modes of shorting, thinning, and crossover leak over the entire membrane. Results from this test method have been validated by conventional metrics of fluoride release rates, physical crossover leak rates, pinhole mapping, and cross-sectional measurements.

  15. Swarm accelerometer data processing from raw accelerations to thermospheric neutral densities

    NASA Astrophysics Data System (ADS)

    Siemes, Christian; de Teixeira da Encarnação, João; Doornbos, Eelco; van den IJssel, Jose; Kraus, Jiří; Pereštý, Radek; Grunwaldt, Ludwig; Apelbaum, Guy; Flury, Jakob; Holmdahl Olsen, Poul Erik

    2016-05-01

    The Swarm satellites were launched on November 22, 2013, and carry accelerometers and GPS receivers as part of their scientific payload. The GPS receivers not only provide the position and time for the magnetic field measurements, but are also used for determining non-gravitational forces like drag and radiation pressure acting on the spacecraft. The accelerometers measure these forces directly, at much finer resolution than the GPS receivers, and thermospheric neutral densities can be derived from their measurements. Unfortunately, the acceleration measurements suffer from a variety of disturbances, the most prominent being slow temperature-induced bias variations and sudden bias changes. In this paper, we describe the new, improved four-stage processing that is applied for transforming the disturbed acceleration measurements into scientifically valuable thermospheric neutral densities. In the first stage, the sudden bias changes in the acceleration measurements are manually removed using a dedicated software tool. The second stage is the calibration of the accelerometer measurements against the non-gravitational accelerations derived from the GPS receiver, which includes the correction for the slow temperature-induced bias variations. The identification of validity periods for calibration and correction parameters is part of the second stage. In the third stage, the calibrated and corrected accelerations are merged with the non-gravitational accelerations derived from the observations of the GPS receiver by a weighted average in the spectral domain, where the weights depend on the frequency. The fourth stage consists of transforming the corrected and calibrated accelerations into thermospheric neutral densities. We present the first results of the processing of Swarm C acceleration measurements from June 2014 to May 2015. We started with Swarm C because its acceleration measurements contain far fewer disturbances than those of Swarm A and have a higher signal-to-noise ratio
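    The third stage lends itself to a compact illustration. The sketch below (Python) shows one plausible form of a frequency-dependent weighted average of the two acceleration series; the crossover frequency, the logistic weight shape, and all parameter values are assumptions for illustration, not the mission's actual weighting scheme.

```python
# Illustrative sketch of a spectral-domain merge: blend accelerometer data
# (trusted at high frequencies) with GPS-derived non-gravitational
# accelerations (trusted at low frequencies). Weight shape and crossover
# frequency are hypothetical stand-ins.
import numpy as np

def spectral_merge(acc_imu, acc_gps, dt, f_cross=1e-3):
    """Blend two equally sampled acceleration series in the Fourier domain."""
    n = len(acc_imu)
    freqs = np.fft.rfftfreq(n, d=dt)
    # Smooth weight: ~0 below f_cross (take GPS), ~1 above (take accelerometer)
    w_acc = 1.0 / (1.0 + (f_cross / np.maximum(freqs, 1e-12)) ** 2)
    w_acc[0] = 0.0  # keep the mean (DC) level from the GPS-derived series
    merged = w_acc * np.fft.rfft(acc_imu) + (1.0 - w_acc) * np.fft.rfft(acc_gps)
    return np.fft.irfft(merged, n=n)

# Example: one day of 10 s samples containing an orbital-period signal
dt = 10.0
t = np.arange(0.0, 86400.0, dt)
truth = 1e-7 * np.sin(2 * np.pi * t / 5400.0)
noisy_acc = truth + 1e-8 * np.random.randn(len(t))
merged = spectral_merge(noisy_acc, truth, dt)
```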

  16. Quasi-steady stages in the process of premixed flame acceleration in narrow channels

    NASA Astrophysics Data System (ADS)

    Valiev, D. M.; Bychkov, V.; Akkerman, V.; Eriksson, L.-E.; Law, C. K.

    2013-09-01

    The present paper addresses the phenomenon of spontaneous acceleration of a premixed flame front propagating in micro-channels, with subsequent deflagration-to-detonation transition. It has recently been shown experimentally [M. Wu, M. Burke, S. Son, and R. Yetter, Proc. Combust. Inst. 31, 2429 (2007), doi:10.1016/j.proci.2006.08.098], computationally [D. Valiev, V. Bychkov, V. Akkerman, and L.-E. Eriksson, Phys. Rev. E 80, 036317 (2009), doi:10.1103/PhysRevE.80.036317], and analytically [V. Bychkov, V. Akkerman, D. Valiev, and C. K. Law, Phys. Rev. E 81, 026309 (2010), doi:10.1103/PhysRevE.81.026309] that the flame acceleration undergoes different stages, from an initial exponential regime to quasi-steady fast deflagration with saturated velocity. The present work focuses on the final saturation stages in the process of flame acceleration, when the flame propagates with supersonic velocity with respect to the channel walls. It is shown that an intermediate stage may occur during acceleration with quasi-steady velocity, noticeably below the Chapman-Jouguet deflagration speed. The intermediate stage is followed by additional flame acceleration and subsequent saturation to the Chapman-Jouguet deflagration regime. We elucidate the intermediate stage by the joint effect of gas pre-compression ahead of the flame front and the hydraulic resistance. The additional acceleration is related to viscous heating at the channel walls, being of key importance at the final stages. The possibility of explosion triggering is also demonstrated.

  17. Locating and quantifying ceramic body armor impact forces on a compliant torso using acceleration mapping

    NASA Astrophysics Data System (ADS)

    Cardi, Adam A.; Adams, Douglas E.; Walsh, Shawn

    2006-03-01

    This research experimentally implements a new method to identify the location and magnitude of a single impulsive excitation to ceramic body armor, which is supported on a compliant torso. The method could easily be extended to other flexibly supported components that undergo rigid body dynamics. Impact loads are identified in two steps. First, the location of the impact force is determined from time domain acceleration responses by comparing them to an array of reference acceleration time histories. Then based on the estimated location, reference frequency response functions are used to reconstruct the input force in the frequency domain through a least squares inverse problem. Experimental results demonstrate the validity of this method at both low energy excitations, which are produced by a medium modally-tuned impact hammer, and at high energy excitations, which are produced by dropping rods with masses up to 0.6 kilograms from a height of 2 meters. The maximum error in the estimated location or magnitude for the low energy excitations on the 10 cm square ceramic body armor was 7.07 mm with an average error of 1.09 mm. In comparing the estimated force for the low energy excitations to the force recorded by the transducer in the modal impact hammer, the maximum error in the predicted force amplitude was 6.78 percent and the maximum error in the predicted impulse was 6.44 percent. For the high energy excitations, which produced accelerations at the measurement locations up to 50 times greater than that of the low energy excitations, the maximum error in the predicted location of the input force was 15 mm with an average error of 6.64 mm. There was no force transducer to capture the input force on the body armor from the rod, but from non-energy-dissipative projectile motion equations the validity of the solutions was confirmed by comparing the impulses.
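    The second identification step can be sketched compactly. Assuming the impact location has already been selected by comparison against the reference time histories, the force spectrum follows from a per-frequency least-squares inverse of the reference frequency response functions; the sketch below (Python, with hypothetical array names and shapes) illustrates the idea only and is not the authors' implementation.

```python
# Sketch of frequency-domain force reconstruction: solve A(w) = H(w) * F(w)
# per frequency bin in the least-squares sense, where A are measured
# acceleration spectra and H are reference FRFs at the estimated location.
import numpy as np

def reconstruct_force(acc, frf, dt):
    """acc: (n_sensors, n_samples) accelerations measured for one impact.
    frf: (n_sensors, n_freqs) reference FRFs (acceleration per unit force)
    at the estimated location, with n_freqs = n_samples // 2 + 1.
    Returns the reconstructed force time history."""
    n = acc.shape[1]
    A = np.fft.rfft(acc, axis=1)                 # measured spectra
    F = np.empty(A.shape[1], dtype=complex)
    for k in range(A.shape[1]):
        h = frf[:, k].reshape(-1, 1)             # single-input column
        # least-squares solution of A[:, k] = h * F[k]
        F[k] = np.linalg.lstsq(h, A[:, k], rcond=None)[0][0]
    return np.fft.irfft(F, n=n)
```

    The first step (location) amounts to picking the reference point whose stored acceleration time histories best match the measured ones, e.g. by maximum cross-correlation across sensors.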

  18. Nanomanufacturing Portfolio: Manufacturing Processes and Applications to Accelerate Commercial Use of Nanomaterials

    SciTech Connect

    Industrial Technologies Program

    2011-01-05

    This brochure describes the 31 R&D projects that AMO supports to accelerate the commercial manufacture and use of nanomaterials for enhanced energy efficiency. These cost-shared projects seek to exploit the unique properties of nanomaterials to improve the functionality of industrial processes and products.

  19. Acceleration of short and long DNA read mapping without loss of accuracy using suffix array

    PubMed Central

    Tárraga, Joaquín; Arnau, Vicente; Martínez, Héctor; Moreno, Raul; Cazorla, Diego; Salavert-Torres, José; Blanquer-Espert, Ignacio; Dopazo, Joaquín; Medina, Ignacio

    2014-01-01

    HPG Aligner applies suffix arrays for DNA read mapping. This implementation produces a highly sensitive and extremely fast mapping of DNA reads that scales up almost linearly with read length. The approach presented here is faster (over 20× for long reads) and more sensitive (over 98% in a wide range of read lengths) than the current state-of-the-art mappers. HPG Aligner is not only an optimal alternative for current sequencers but also the only solution available to cope with longer reads and growing throughputs produced by forthcoming sequencing technologies. Availability and implementation: https://github.com/opencb/hpg-aligner. Contact: jdopazo@cipf.es or imedina@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25143289
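    As background to the index at the heart of this approach, the sketch below shows exact-match read lookup against a suffix array via binary search. A production aligner such as the one described adds error-tolerant search, seeding, and far more efficient index construction, so this is illustration only.

```python
# Minimal suffix-array read lookup: build the sorted suffix index, then
# binary-search the interval of suffixes that begin with the read.

def build_suffix_array(text):
    """Sorted suffix start positions (O(n^2 log n); fine for a demo)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, read):
    """All exact occurrences of `read`, via two binary searches."""
    m = len(read)
    lo, hi = 0, len(sa)
    while lo < hi:                                # leftmost suffix >= read
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < read:
            lo = mid + 1
        else:
            hi = mid
    first = lo
    hi = len(sa)
    while lo < hi:                                # leftmost suffix > read
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= read:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[first:lo])

genome = "GATTACAGATTACA"
sa = build_suffix_array(genome)
print(find_occurrences(genome, sa, "GATTA"))      # -> [0, 7]
```

    Because each query is O(m log n) independent of read count, the lookup parallelizes trivially across reads, which is consistent with the near-linear scaling with read length reported above.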

  20. Sampling frequency affects the processing of Actigraph raw acceleration data to activity counts.

    PubMed

    Brønd, Jan Christian; Arvidsson, Daniel

    2016-02-01

    ActiGraph acceleration data are processed through several steps (including band-pass filtering to attenuate unwanted signal frequencies) to generate the activity counts commonly used in physical activity research. We performed three experiments to investigate the effect of sampling frequency on the generation of activity counts. Ideal acceleration signals were produced in the MATLAB software. Thereafter, ActiGraph GT3X+ monitors were spun in a mechanical setup. Finally, 20 subjects performed walking and running wearing GT3X+ monitors. Acceleration data from all experiments were collected with different sampling frequencies, and activity counts were generated with the ActiLife software. With the default 30-Hz (or 60-Hz, 90-Hz) sampling frequency, the generation of activity counts was performed as intended with 50% attenuation of acceleration signals with a frequency of 2.5 Hz by the signal frequency band-pass filter. Frequencies above 5 Hz were eliminated totally. However, with other sampling frequencies, acceleration signals above 5 Hz escaped the band-pass filter to a varied degree and contributed to additional activity counts. Similar results were found for the spinning of the GT3X+ monitors, although the amount of activity counts generated was less, indicating that raw data stored in the GT3X+ monitor is processed. Between 600 and 1,600 more counts per minute were generated with the sampling frequencies 40 and 100 Hz compared with 30 Hz during running. Sampling frequency affects the processing of ActiGraph acceleration data to activity counts. Researchers need to be aware of this error when selecting sampling frequencies other than the default 30 Hz.
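    The practical implication is to resample non-default recordings to 30 Hz before generating counts. The sketch below (Python/SciPy) illustrates the idea with a generic Butterworth band-pass as a stand-in for ActiLife's proprietary filter; the band edges, filter order, and count scaling are assumptions, not the vendor's coefficients.

```python
# Crude activity-count surrogate: resample to the 30 Hz design rate, band-pass
# filter, rectify, and sum per epoch. The Butterworth filter here is a generic
# stand-in for the proprietary ActiLife band-pass.
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

def counts_per_epoch(acc_g, fs, epoch_s=60):
    """Per-epoch activity-count surrogate from a raw acceleration trace (g)."""
    if fs != 30:
        acc_g = resample_poly(acc_g, 30, fs)   # resample to the design rate
        fs = 30
    b, a = butter(2, [0.25, 2.5], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, acc_g)           # suppress <0.25 Hz and >2.5 Hz
    rectified = np.abs(filtered)
    n = int(epoch_s * fs)
    n_epochs = len(rectified) // n
    return rectified[: n_epochs * n].reshape(n_epochs, n).sum(axis=1)
```

    Skipping the resampling step is precisely the failure mode the study documents: signal content above 5 Hz leaks through a filter designed for a 30 Hz stream and inflates the counts.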

  1. Acceleration processes in the quasi-steady magnetoplasmadynamic discharge. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Boyle, M. J.

    1974-01-01

    The flow field characteristics within the discharge chamber and exhaust of a quasi-steady magnetoplasmadynamic (MPD) arcjet were examined to clarify the nature of the plasma acceleration process. The observation of discharge characteristics unperturbed by insulator ablation and terminal voltage fluctuations first requires the satisfaction of three criteria: the use of refractory insulator materials; a mass injection geometry tailored to provide propellant to both electrode regions of the discharge; and a cathode of sufficient surface area to permit nominal MPD arcjet operation for given combinations of arc current and total mass flow. The axial velocity profile and electromagnetic discharge structure were measured for an arcjet configuration which functions nominally at 15.3 kA and 6 g/sec argon mass flow. An empirical two-flow plasma acceleration model is advanced which delineates inner and outer flow regions and accounts for the observed velocity profile and calculated thrust of the accelerator.

  2. Analysis of the Acceleration Process of SEPs by an Interplanetary Shock for Bastille Day Event

    NASA Astrophysics Data System (ADS)

    Le, G. M.; Han, Y. B.

    Based on the solar energetic particle (SEP) data from the ACE and GOES satellites, the acceleration of SEPs by a CME-driven shock in interplanetary space was investigated. The results showed that the acceleration of SEPs by the Bastille CME-driven shock ran through the whole space from the Sun to the magnetosphere. The highest energy of SEPs accelerated by the shock was greater than 100 MeV. A magnetic bottle associated with the CME captured a large number of high energy particles, some of them having energies greater than 100 MeV. Based on the solar wind magnetic field data observed by ACE, we found that the magnetic bottle associated with the Bastille CME was in fact the sheath region caused by the CME.

  3. RF Processing of X-Band Accelerator Structures at the NLCTA

    SciTech Connect

    Adolphsen, Chris

    2000-08-24

    During the initial phase of operation, the linacs of the Next Linear Collider (NLC) will contain roughly 5,000 X-Band accelerator structures that will accelerate beams of electrons and positrons to 250 GeV. These structures will nominally operate at an unloaded gradient of 72 MV/m. As part of the NLC R and D program, several prototype structures have been built and operated at the Next Linear Collider Test Accelerator (NLCTA) at SLAC. Here, the effect of high gradient operation on the structure performance has been studied. Significant progress was made during the past year after the NLCTA power sources were upgraded to reliably produce the required NLC power levels and beyond. This paper describes the structures, the processing methodology and the observed effects of high gradient operation.

  4. Risk-Based Decision Process for Accelerated Closure of a Nuclear Weapons Facility

    SciTech Connect

    Butler, L.; Norland, R. L.; DiSalvo, R.; Anderson, M.

    2003-02-25

    Nearly 40 years of nuclear weapons production at the Rocky Flats Environmental Technology Site (RFETS or Site) resulted in contamination of soil and underground systems and structures with hazardous substances, including plutonium, uranium and hazardous waste constituents. The Site was placed on the National Priority List in 1989. There are more than 370 Individual Hazardous Substance Sites (IHSSs) at RFETS. Accelerated cleanup and closure of RFETS is being achieved through implementation and refinement of a regulatory framework that fosters programmatic and technical innovations: (1) extensive use of "accelerated actions" to remediate IHSSs, (2) development of a risk-based screening process that triggers and helps define the scope of accelerated actions consistent with the final remedial action objectives for the Site, (3) use of field instrumentation for real time data collection, (4) a data management system that renders near real time field data assessment, and (5) a regulatory agency consultative process to facilitate timely decisions. This paper presents the process and interim results for these aspects of the accelerated closure program applied to Environmental Restoration activities at the Site.

  5. General description of electromagnetic radiation processes based on instantaneous charge acceleration in ''endpoints''

    SciTech Connect

    James, Clancy W.; Falcke, Heino; Huege, Tim; Ludwig, Marianne

    2011-11-15

    We present a methodology for calculating the electromagnetic radiation from accelerated charged particles. Our formulation - the 'endpoint formulation' - combines numerous results developed in the literature in relation to radiation arising from particle acceleration using a complete, and completely general, treatment. We do this by describing particle motion via a series of discrete, instantaneous acceleration events, or 'endpoints', with each such event being treated as a source of emission. This method implicitly allows for particle creation and destruction, and is suited to direct numerical implementation in either the time or frequency domains. In this paper we demonstrate the complete generality of our method for calculating the radiated field from charged particle acceleration, and show how it reduces to the classical named radiation processes such as synchrotron, Tamm's description of Vavilov-Cherenkov, and transition radiation under appropriate limits. Using this formulation, we are immediately able to answer outstanding questions regarding the phenomenology of radio emission from ultra-high-energy particle interactions in both the earth's atmosphere and the moon. In particular, our formulation makes it apparent that the dominant emission component of the Askaryan effect (coherent radio-wave radiation from high-energy particle cascades in dense media) comes from coherent 'bremsstrahlung' from particle acceleration, rather than coherent Vavilov-Cherenkov radiation.

  6. Accelerating the cosmic microwave background map-making procedure through preconditioning

    NASA Astrophysics Data System (ADS)

    Szydlarski, M.; Grigori, L.; Stompor, R.

    2014-12-01

    Estimation of the sky signal from sequences of time ordered data is one of the key steps in cosmic microwave background (CMB) data analysis, commonly referred to as the map-making problem. Some of the most popular and general methods proposed for this problem involve solving generalised least-squares (GLS) equations with non-diagonal noise weights given by a block-diagonal matrix with Toeplitz blocks. In this work, we study new map-making solvers potentially suitable for applications to the largest anticipated data sets. They are based on iterative conjugate gradient (CG) approaches enhanced with novel, parallel, two-level preconditioners. We apply the proposed solvers to examples of simulated non-polarised and polarised CMB observations and a set of idealised scanning strategies with sky coverage ranging from a nearly full sky down to small sky patches. We discuss their implementation for massively parallel computational platforms and their performance for a broad range of parameters that characterise the simulated data sets in detail. We find that our best new solver can outperform carefully optimised standard solvers used today by a factor of as much as five in terms of the convergence rate and a factor of up to four in terms of the time to solution, without significantly increasing the memory consumption and the volume of inter-processor communication. The performance of the new algorithms is also found to be more stable and robust and less dependent on specific characteristics of the analysed data set. We therefore conclude that the proposed approaches are well suited to address successfully challenges posed by new and forthcoming CMB data sets.
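    The GLS equation being solved can be illustrated in a few lines. The sketch below (Python/SciPy) solves (P^T N^-1 P) m = P^T N^-1 d by preconditioned conjugate gradients with a simple hit-count (Jacobi-style) preconditioner; white noise is assumed for brevity, whereas the paper treats Toeplitz noise weights, and its two-level preconditioners are considerably more elaborate than this stand-in.

```python
# Toy GLS map-making solve: recover a sky map m from time-ordered data
# d = P m + n via preconditioned conjugate gradients.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import LinearOperator, cg

n_samples, n_pix = 10000, 50
pointing = np.random.randint(0, n_pix, n_samples)        # sample -> sky pixel
P = csr_matrix((np.ones(n_samples), (np.arange(n_samples), pointing)),
               shape=(n_samples, n_pix))
true_map = np.random.randn(n_pix)
d = P @ true_map + 0.1 * np.random.randn(n_samples)      # white noise only

inv_n = np.full(n_samples, 1.0 / 0.1**2)                 # diagonal N^-1

def apply_A(m):                                          # m -> P^T N^-1 P m
    m = np.ravel(m)
    return P.T @ (inv_n * (P @ m))

A = LinearOperator((n_pix, n_pix), matvec=apply_A)
b = P.T @ (inv_n * d)

hits = np.maximum(P.T @ inv_n, 1e-12)                    # diag(A): hit counts
M = LinearOperator((n_pix, n_pix), matvec=lambda x: np.ravel(x) / hits)
m_hat, info = cg(A, b, M=M)                              # preconditioned CG
assert info == 0
```

    In practice the preconditioner choice dominates the convergence rate, which is exactly the axis the paper explores with its parallel two-level constructions.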

  7. Distinguishing Between Quasi-static and Alfvénic Auroral Acceleration Processes

    NASA Astrophysics Data System (ADS)

    Lysak, R. L.; Song, Y.

    2013-12-01

    Models for the acceleration of auroral particles fall into two general classes. Quasi-static processes, such as double layers or magnetic mirror supported potential drops, produce a nearly monoenergetic beam of precipitating electrons and upward flowing ion beams. Time-dependent acceleration processes, often associated with kinetic Alfvén waves, can produce a broader range of energies and often have a strongly field-aligned pitch angle distribution. Both processes are associated with strong perpendicular electric fields as well as the parallel electric fields that are largely responsible for the particle acceleration. These electric fields and the related magnetic perturbations can be characterized by the ratio of the electric field to a perpendicular magnetic perturbation, which is related to the Pedersen conductivity in the static case and the Alfvén velocity in the time-dependent case. However, these considerations can be complicated by the interaction between upward and downward propagating waves. The relevant time and space scales of these processes will be assessed and the consequences for observation by orbiting spacecraft and ground-based instrumentation will be determined. These features will be illustrated by numerical simulations of the magnetosphere-ionosphere coupling with emphasis on what a virtual spacecraft passing through the simulation would be expected to observe.

  8. Challenges Encountered during the Processing of the BNL ERL 5 Cell Accelerating Cavity

    SciTech Connect

    A. Burrill; I. Ben-Zvi; R. Calaga; H. Hahn; V. Litvinenko; G. T. McIntyre; P. Kneisel; J. Mammosser; J. P. Preble; C. E. Reece; R. A. Rimmer; J. Saunders

    2007-08-01

    One of the key components for the Energy Recovery Linac being built by the Electron cooling group in the Collider Accelerator Department is the 5 cell accelerating cavity which is designed to accelerate 2 MeV electrons from the gun up to 15-20 MeV, allow them to make one pass through the ring and then decelerate them back down to 2 MeV prior to sending them to the dump. This cavity was designed by BNL and fabricated by AES in Medford, NY. Following fabrication it was sent to Thomas Jefferson Lab in VA for chemical processing, testing and assembly into a string assembly suitable for shipment back to BNL and integration into the ERL. The steps involved in this processing sequence will be reviewed and the deviations from processing of similar SRF cavities will be discussed. The lessons learned from this process are documented to help future projects where the scope is different from that normally encountered.

  9. CHALLENGES ENCOUNTERED DURING THE PROCESSING OF THE BNL ERL 5 CELL ACCELERATING CAVITY

    SciTech Connect

    BURRILL,A.

    2007-06-25

    One of the key components for the Energy Recovery Linac being built by the Electron cooling group in the Collider Accelerator Department is the 5 cell accelerating cavity which is designed to accelerate 2 MeV electrons from the gun up to 15-20 MeV, allow them to make one pass through the ring and then decelerate them back down to 2 MeV prior to sending them to the dump. This cavity was designed by BNL and fabricated by AES in Medford, NY. Following fabrication it was sent to Thomas Jefferson Lab in VA for chemical processing, testing and assembly into a string assembly suitable for shipment back to BNL for integration into the ERL. The steps involved in this processing sequence will be reviewed and the deviations from processing of similar SRF cavities will be discussed. The lessons learned from this process are documented to help future projects where the scope is different from that normally encountered.

  10. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  11. Flowsheet report for baseline actinide blanket processing for accelerator transmutation of waste

    SciTech Connect

    Walker, R.B.

    1992-04-08

    We provide a flowsheet analysis of the chemical processing of actinide and fission product materials from the actinide blanket of an accelerator-based transmutation concept. An initial liquid ion exchange step is employed to recover unburned plutonium and neptunium, so that they can be returned quickly to the transmuter. The remaining materials, consisting of fission products and trivalent actinides (americium, curium), are processed after a cooling period. A reverse Talspeak process is employed to separate these trivalent actinides from lanthanides and other fission products.

  12. Spatial structure of the neck and acceleration processes in a micropinch

    NASA Astrophysics Data System (ADS)

    Dolgov, A. N.; Klyachin, N. A.; Prokhorovich, D. E.

    2016-12-01

    It is shown that the spatial structure of the micropinch neck during the transition from magnetohydrodynamic to radiative compression and the bremsstrahlung spectrum of the discharge in the photon energy range of up to 30 keV depend on the configuration of the inner electrode of the coaxial electrode system of the micropinch discharge. Analysis of the experimental results indicates that the acceleration processes in the electron component of the micropinch plasma develop earlier than radiative compression.

  13. Mapping Soil Structure, Identification and Monitoring of Soil processes

    NASA Astrophysics Data System (ADS)

    Tabbagh, A.

    2006-05-01

    As in other domains of earth exploration, geophysical surface analysis tools are very well adapted to the 3D mapping of soil structure. They generate well-sampled information which can be used for plotting large-scale soil maps as a guide for determining water and fertilizer requirements in precision agriculture, and can be used to assist with the delineation of polluted areas. In the case of small soil volumes they can also be used to localise cracks and preferential flow paths. Electrical measurement methods are the most suitable for the above applications because of the sensitivity of electrical conductivity to clay and water content, as well as to salinity. Dielectric permittivity exhibits the most direct relationship to free liquid water content, but GPR necessitates very short sampling intervals, and TDR measurements are limited to water content monitoring at one or several specific points. Electrical resistivity measurements have been successful in monitoring spatial soil characteristics, to follow both structural changes such as crack opening, and water displacements such as liquid uptake by plants. Self potential is sensitive to the presence of on-going redox biological activity, and 'streaming potential' is expected to provide a direct assessment of Darcy's velocity, as do temperature variations. Indirectly, all of these parameters may help in the determination of hydraulic conductivity. Apart from short-term changes, on a daily to seasonal scale, long term changes such as pedogenesis processes on a secular scale and anthropogenic influences are revealed by variations in magnetic properties, which can be charted using both magnetic and electromagnetic prospection methods.

  14. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
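    For orientation, a minimal CPU reference of the per-A-scan pipeline that such GPU accelerators implement is sketched below: background subtraction, wavelength-to-wavenumber resampling, numerical dispersion compensation, and an inverse FFT to depth space. All parameter names and values are illustrative assumptions, not the authors' implementation.

```python
# Simplified per-A-scan FD-OCT processing chain. Assumes `wavelengths` is in
# ascending order, so the wavenumber axis k = 2*pi/lambda is descending and
# must be reversed before interpolation.
import numpy as np

def process_ascan(spectrum, wavelengths, background, a2=0.0, a3=0.0):
    k = 2 * np.pi / wavelengths                      # nonuniform wavenumber axis
    k_lin = np.linspace(k.min(), k.max(), len(k))    # uniform k grid for the FFT
    signal = spectrum - background                   # remove the DC reference term
    resampled = np.interp(k_lin, k[::-1], signal[::-1])
    kc = k_lin.mean()
    phase = a2 * (k_lin - kc)**2 + a3 * (k_lin - kc)**3  # dispersion correction
    depth_profile = np.fft.ifft(resampled * np.exp(-1j * phase))
    return np.abs(depth_profile[: len(k) // 2])      # one-sided magnitude A-scan
```

    Since the paper identifies host-to-GPU memory transfer rather than arithmetic as the rate limiter, a practical accelerator would batch many such A-scans per transfer to amortize the copy overhead.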

  15. Speech processing using conditional observable maximum likelihood continuity mapping

    DOEpatents

    Hogden, John; Nix, David

    2004-01-13

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  16. A whole body vibration perception map and associated acceleration loads at the lower leg, hip and head.

    PubMed

    Sonza, Anelise; Völkel, Nina; Zaro, Milton A; Achaval, Matilde; Hennig, Ewald M

    2015-07-01

    Whole-body vibration (WBV) training has become popular in recent years. However, WBV may be harmful to the human body. The goal of this study was to determine the acceleration magnitudes at different body segments for different frequencies of WBV. Additionally, vibration sensation ratings by subjects served to create perception vibration magnitude and discomfort maps of the human body. In the first of two experiments, 65 young adults with a mean (± SD) age of 23 (± 3.0) years participated in WBV severity perception ratings, based on a Borg scale. Measurements were performed at 12 different frequencies and two intensities (3 and 5 mm amplitudes) of rotational mode WBV. On a separate day, a second experiment (n = 40) included vertical accelerometry of the head, hip and lower leg with the same WBV settings. The highest lower limb vibration magnitude perception based on the Borg scale was extremely intense for the frequencies between 21 and 25 Hz; somewhat hard for the trunk region (11-25 Hz) and fairly light for the head (13-25 Hz). The highest vertical accelerations were found at a frequency of 23 Hz at the tibia, 9 Hz at the hip and 13 Hz at the head. At 5 mm amplitude, 61.5% of the subjects reported discomfort in the foot region (21-25 Hz), 46.2% for the lower back (17, 19 and 21 Hz) and 23% for the abdominal region (9-13 Hz). The range of 3-7 Hz represents the safest frequency range, with magnitudes less than 1 g·sec for all studied regions.

  17. Evaluation of the Intel Xeon Phi Co-processor to accelerate the sensitivity map calculation for PET imaging

    NASA Astrophysics Data System (ADS)

    Dey, T.; Rodrigue, P.

    2015-07-01

    We aim to evaluate the Intel Xeon Phi coprocessor for acceleration of 3D Positron Emission Tomography (PET) image reconstruction. We focus on the sensitivity map calculation as one computationally intensive part of PET image reconstruction, since it is a promising candidate for acceleration with the Many Integrated Core (MIC) architecture of the Xeon Phi. The computation of the voxels in the field of view (FoV) can be done in parallel, and the 10^3 to 10^4 samples needed to calculate the detection probability of each voxel can take advantage of vectorization. We use the ray tracing kernels of the Embree project to calculate the hit points of the sample rays with the detector, and in a second step the sum of the radiological path, taking into account attenuation, is determined. The core components are implemented using the Intel single instruction multiple data compiler (ISPC) to enable a portable implementation showing efficient vectorization on both the Xeon Phi and the host platform. On the Xeon Phi, the calculation of the radiological path is also implemented in hardware-specific intrinsic instructions (so-called 'intrinsics') to allow manually-optimized vectorization. For parallelization, both OpenMP and ISPC tasking (based on pthreads) are evaluated. Our implementation achieved a scalability factor of 0.90 on the Xeon Phi coprocessor (model 5110P) with 60 cores at 1 GHz. Only minor differences were found between parallelization with OpenMP and the ISPC tasking feature. The implementation using intrinsics was found to be about 12% faster than the portable ISPC version. With this version, a speedup of 1.43 was achieved on the Xeon Phi coprocessor compared to the host system (HP SL250s Gen8) equipped with two Xeon (E5-2670) CPUs, with 8 cores at 2.6 to 3.3 GHz each. Using a second Xeon Phi card the speedup could be further increased to 2.77. No significant differences were found between the results of the different Xeon Phi and the Host implementations. The examination
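    To make the structure of the computation concrete, the toy sketch below mimics the per-voxel sampling loop in plain Python: for each voxel, random line-of-response directions are sampled and an attenuation-weighted detection probability is accumulated. The geometry, attenuation values, and sample counts are simplistic stand-ins, not the authors' Embree-based kernels.

```python
# Toy 2D stand-in for a sensitivity-map kernel: for each voxel, sample random
# line-of-response directions and accumulate the probability that both
# annihilation photons survive attenuation on the way out of the object.
import numpy as np

rng = np.random.default_rng(0)
n_vox = 16
mu = np.full((n_vox, n_vox), 0.01)        # toy attenuation map (per voxel unit)

def survival(mu, p0, direction, step=0.5):
    """exp(-line integral of mu) from p0 along `direction` until off the map."""
    p = p0.astype(float).copy()
    path = 0.0
    while 0.0 <= p[0] < mu.shape[0] and 0.0 <= p[1] < mu.shape[1]:
        path += mu[int(p[0]), int(p[1])] * step
        p = p + step * direction
    return np.exp(-path)

n_rays = 100                               # the paper quotes 10^3-10^4 per voxel
sens = np.zeros((n_vox, n_vox))
for i in range(n_vox):
    for j in range(n_vox):
        p0 = np.array([i + 0.5, j + 0.5])
        total = 0.0
        for _ in range(n_rays):
            theta = rng.uniform(0.0, np.pi)
            u = np.array([np.cos(theta), np.sin(theta)])
            # both photons of the pair must escape to be detected in coincidence
            total += survival(mu, p0, u) * survival(mu, p0, -u)
        sens[i, j] = total / n_rays
```

    The two inner loops are exactly what the MIC port vectorizes (the per-ray samples) and parallelizes (the per-voxel work).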

  18. In-situ plasma processing to increase the accelerating gradients of SRF cavities

    SciTech Connect

    Doleans, Marc; Afanador, Ralph; Barnhart, Debra L.; Degraff, Brian D.; Gold, Steven W.; Hannah, Brian S.; Howell, Matthew P.; Kim, Sang-Ho; Mammosser, John; McMahan, Christopher J.; Neustadt, Thomas S.; Saunders, Jeffrey W.; Tyagi, Puneet V.; Vandygriff, Daniel J.; Vandygriff, David M.; Ball, Jeffrey Allen; Blokland, Willem; Crofford, Mark T.; Lee, Sung-Woo; Stewart, Stephen; Strong, William Herb

    2015-12-31

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. This article discusses the main developments and results from the plasma processing R&D and presents experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus.

  19. In-situ plasma processing to increase the accelerating gradients of SRF cavities

    DOE PAGES

    Doleans, Marc; Afanador, Ralph; Barnhart, Debra L.; ...

    2015-12-31

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. This article discusses the main developments and results from the plasma processing R&D and presents experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus.

  20. Activation processes in a medical linear accelerator and spatial distribution of activation products.

    PubMed

    Fischer, Helmut W; Tabot, Ben E; Poppe, Björn

    2006-12-21

    Activation products have been identified by in situ gamma spectroscopy at the isocentre of a medical linear accelerator shortly after termination of a high energy photon beam irradiation with 15 x 15 cm field size. Spectra have been recorded either with an open or with a closed collimator. Whilst some activation products disappear from the spectrum with closed collimator or exhibit reduced count rates, others remain with identical intensity. The former isotopes are neutron-deficient and mostly decay by positron emission or electron capture; the latter have neutron excess and decay by β- emission. This new finding is consistent with the assumption that photons in the primary beam produce activation products by (γ,n) reactions in the treatment head and subsequently the neutrons created in these processes undergo (n,γ) reactions creating activation products in a much larger area. These findings are expected to be generally applicable to all medical high energy linear accelerators.

  1. Using graphics processing units to accelerate perturbation Monte Carlo simulation in a turbid medium

    NASA Astrophysics Data System (ADS)

    Cai, Fuhong; He, Sailing

    2012-04-01

    We report a fast perturbation Monte Carlo (PMC) algorithm accelerated by graphics processing units (GPU). The two-step PMC simulation [Opt. Lett. 36, 2095 (2011)] is performed by storing the seeds instead of the photon's trajectory, and thus the requirement on computer random-access memory (RAM) becomes minimal. The two-step PMC is therefore well suited for implementation on a GPU. In a standard simulation of spatially-resolved photon migration in turbid media, the acceleration ratio between using a GPU and using a conventional CPU is about 1000. Furthermore, since the two-step PMC algorithm records the effective seeds, i.e., those associated with the photons that reach a region of interest, and then re-runs the MC simulation based on the recorded effective seeds, the radiative transfer equation (RTE) can be solved by two-step PMC not only with an arbitrary change in the absorption coefficient, but also with a large change in the scattering coefficient.
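    The seed-replay idea is easy to demonstrate. In the sketch below, a toy replayable 1D random-walk "photon" stands in for a real radiative transfer kernel: the first pass stores only the seeds (and path lengths) of detected photons, and the second pass reweights or replays them under perturbed optical properties. All model details here are hypothetical placeholders.

```python
# Two-step perturbation Monte Carlo in miniature: store seeds, not paths.
import math
import random

def propagate(seed, mu_s, n_steps=200, detector_x=10.0):
    """Toy replayable photon: a 1D random walk with exponential free paths.
    An identical seed reproduces the identical trajectory, so seeds can
    substitute for stored trajectories."""
    rng = random.Random(seed)
    x, path = 0.0, 0.0
    for _ in range(n_steps):
        step = rng.expovariate(mu_s)            # free path set by scattering
        x += step if rng.random() < 0.5 else -step
        path += step
        if x >= detector_x:
            return True, path                   # reached the detector plane
    return False, path

# Step 1: baseline run; keep only the "effective" seeds and their path lengths
mu_a0, mu_s0, n_photons = 0.01, 1.0, 5000
effective = []
for seed in range(n_photons):
    detected, path = propagate(seed, mu_s0)
    if detected:
        effective.append((seed, path))

# Step 2: an absorption-only perturbation just reweights the stored paths
# (Beer-Lambert); a scattering perturbation would instead replay each stored
# seed with propagate(seed, mu_s1), which is why seeds (not weights) are kept.
mu_a1 = 0.02
baseline = sum(math.exp(-mu_a0 * L) for _, L in effective) / n_photons
perturbed = sum(math.exp(-mu_a1 * L) for _, L in effective) / n_photons
```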

  2. Using graphics processing units to accelerate perturbation Monte Carlo simulation in a turbid medium.

    PubMed

    Cai, Fuhong; He, Sailing

    2012-04-01

    We report a fast perturbation Monte Carlo (PMC) algorithm accelerated by graphics processing units (GPU). The two-step PMC simulation [Opt. Lett. 36, 2095 (2011)] is performed by storing the seeds instead of the photon's trajectory, and thus the requirement on computer random-access memory (RAM) becomes minimal. The two-step PMC is therefore well suited for implementation on a GPU. In a standard simulation of spatially-resolved photon migration in turbid media, the acceleration ratio between using a GPU and using a conventional CPU is about 1000. Furthermore, since the two-step PMC algorithm records the effective seeds, i.e., those associated with the photons that reach a region of interest, and then re-runs the MC simulation based on the recorded effective seeds, the radiative transfer equation (RTE) can be solved by two-step PMC not only with an arbitrary change in the absorption coefficient, but also with a large change in the scattering coefficient.

  3. Effects of Knowledge Map Characteristics in Information Processing.

    ERIC Educational Resources Information Center

    Wiegmann, Douglas A.; And Others

    1992-01-01

    Three experiments involving 102 college students examined the impact of knowledge map configuration on acquisition of information. Variations in spatial configuration, map format, and link structure affected encoding and retrieval of information. These effects were mediated by the users' spatial and verbal abilities. (SLD)

  4. Plasma Processing of SRF Cavities for the next Generation Of Particle Accelerators

    SciTech Connect

    Vuskovic, Leposava

    2015-11-23

    The cost-effective production of high frequency accelerating fields is the foundation for the next generation of particle accelerators. The Ar/Cl2 plasma etching technology holds the promise of a major reduction in cavity preparation costs. Plasma-based dry niobium surface treatment provides an excellent opportunity to remove bulk niobium, eliminate surface imperfections, increase cavity quality factor, and bring accelerating fields to higher levels. At the same time, the developed technology will be more environmentally friendly than the hydrogen fluoride-based wet etching technology. Plasma etching of the inner surfaces of standard multi-cell SRF cavities is the main goal of this research, in order to eliminate contaminants, including niobium oxides, in the penetration depth region. Successful plasma processing of multi-cell cavities will establish this method as a viable technique in the quest for more efficient components of next generation particle accelerators. In this project the single-cell pill box cavity plasma etching system was developed and etching conditions were determined. An actual single-cell SRF cavity (1497 MHz) was plasma etched based on the pill box cavity results. The first RF test of this plasma etched cavity at cryogenic temperature was obtained. The system can also be used for other surface modifications, including tailoring niobium surface properties, surface passivation or nitriding for better performance of SRF cavities. The results of this plasma processing technology may be applied to most of the current SRF cavity fabrication projects. In the course of this project it has been demonstrated that a capacitively coupled radio-frequency discharge can be successfully used for etching curved niobium surfaces, in particular the inner walls of SRF cavities. The results could also be applicable to the inner or concave surfaces of any 3D structure other than an SRF cavity.

  5. Mapping and manipulating optoelectronic processes in emerging photovoltaic materials

    NASA Astrophysics Data System (ADS)

    Leblebici, Sibel Yontz

    The goal of the work in this dissertation is to understand and overcome the limiting optoelectronic processes in emerging second generation photovoltaic devices. There is an urgent need to mitigate global climate change by reducing greenhouse gas emissions. Renewable energy from photovoltaics has great potential to reduce emissions if the energy to manufacture the solar cell is much lower than the energy the solar cell generates. Two emerging thin film solar cell materials, organic semiconductors and hybrid organic-inorganic perovskites, meet this requirement because the active layers are processed at low temperatures, e.g. 150 °C. Other advantages of these two classes of materials include solution processability, composition from abundant materials, strong light absorption, highly tunable bandgaps, and low cost. Organic solar cells have evolved significantly from 1% efficient devices in 1989 to 11% efficient devices today. Although organic semiconductors are highly tunable and inexpensive, the main challenges to overcome are the large exciton binding energies and poor understanding of exciton dynamics. In my thesis, I optimized solar cells based on three new solution processable azadipyrromethene-based small molecules. I used the highest performing molecule to study the effect of increasing the permittivity of the material by incorporating a high permittivity small molecule into the active layer. The studies on two model systems, small donor molecules and a polymer-fullerene bulk heterojunction, show that Frenkel and charge transfer exciton binding energies can be manipulated by controlling permittivity, which impacts the solar cell efficiency. Hybrid organic-inorganic perovskite materials have similar advantages to organic semiconductors, but they are not excitonic, which is an added advantage for these materials. Although photovoltaics based on hybrid halide perovskite materials have exceeded 20% efficiency in only a few years of optimization, the loss mechanisms

  6. Hybrid-integrated optical acceleration seismometer and its digital processing system

    NASA Astrophysics Data System (ADS)

    En, De; Chen, Caihe; Cui, Yuming; Tang, Donglin; Liang, Zhengxi; Gao, Hongyu

    2005-02-01

    A hybrid-integrated optical acceleration seismometer and its digital signal processing system have been researched and developed. A simplified system diagram of the seismometer is given and its principle is explained. The seismometer is composed of a seismic mass, integrated optical chips and a Michelson interferometer light path; the Michelson integrated optical chips are the critical sensor elements. A simplified diagram of the digital signal processing system is also given. Because an advanced digital signal processing (DSP) chip, equipped with the necessary circuits, is used in the digital signal processing system, highly accurate detection of the acceleration signal has been achieved and environmental interference signals have been effectively compensated. Test results indicate that the accelerometer has a good frequency response both below and well above the resonant frequency, and that the output signal corresponds to the input signal. Finally, the seismometer frequency-response curve is given.

  7. ACCELERATED PROCESSING OF SB4 AND PREPARATION FOR SB5 PROCESSING AT DWPF

    SciTech Connect

    Herman, C

    2008-12-01

    The Defense Waste Processing Facility (DWPF) initiated processing of Sludge Batch 4 (SB4) in May 2007. SB4 was the first DWPF sludge batch to contain significant quantities of HM or high Al sludge. Initial testing with SB4 simulants showed potential negative impacts to DWPF processing; therefore, Savannah River National Laboratory (SRNL) performed extensive testing in an attempt to optimize processing. SRNL's testing has resulted in the highest DWPF production rates since start-up. During SB4 processing, DWPF also began incorporating waste streams from the interim salt processing facilities to initiate coupled operations. While DWPF has been processing SB4, the Liquid Waste Organization (LWO) and the SRNL have been preparing Sludge Batch 5 (SB5). SB5 has undergone low-temperature aluminum dissolution to reduce the mass of sludge for vitrification and will contain a small fraction of Purex sludge. A high-level review of SB4 processing and the SB5 preparation studies will be provided.

  8. Real-time Process Monitoring and Temperature Mapping of the 3D Polymer Printing Process

    SciTech Connect

    Dinwiddie, Ralph Barton; Love, Lonnie J; Rowe, John C

    2013-01-01

    An extended range IR camera was used to make temperature measurements of samples as they were being manufactured. The objective was to quantify the temperature variation inside the system as parts are fabricated, as well as the temperature of a part during fabrication. The IR camera was used to map the temperature within the build volume of the oven and to measure the surface temperature of a part as it was being manufactured. The temperature map of the oven provides insight into the global temperature variation within the oven, which may lead to an understanding of variations in the properties of parts as a function of build location. The observation of the temperature variation of a part that fails during construction provides insight into how the deposition process itself impacts the temperature distribution within a single part, leading to failure.

  9. Successes and lessons learned: How to accelerate the base closure process

    SciTech Connect

    Larkin, V.C.; Stoll, R.

    1994-12-31

    Naval Station Puget Sound, Seattle, was nominated for closure by the Base Closure Commission in 1991 (BRAC II) and will be transferred in September of 1995. Historic activities have resulted in petroleum-related environmental issues. Unlike many bases being closed, the politically sensitive issues are not the economics of job losses. Because homeless housing is expected to be included in the selected reuse plan, the primary concerns of the public are reduced real estate values and public safety. In addition to a reuse plan adopted by the Seattle City Council, the Muckleshoot Indian tribe has also submitted an alternative reuse plan to the Navy. Acceleration methods described in this paper include methods for beginning the environmental impact statement (EIS) process before reuse plans are finalized; tracking development of engineering alternatives in parallel with environmental investigations; using field screening data to begin developing plans and specifications for remediation, instead of waiting 6 weeks for analytical results and data validation; using efficient communication techniques to facilitate accelerated review of technical documents by the BCT; expediting removal actions and performing "cleanups incidental to investigation"; and effectively facilitating members of the Restoration Advisory Board with divergent points of view. This paper will describe acceleration methods that proved to be effective and methods that could be modified to be more effective at other sites.

  10. Physical processes at work in sub-30 fs, PW laser pulse-driven plasma accelerators: Towards GeV electron acceleration experiments at CILEX facility

    NASA Astrophysics Data System (ADS)

    Beck, A.; Kalmykov, S. Y.; Davoine, X.; Lifschitz, A.; Shadwick, B. A.; Malka, V.; Specka, A.

    2014-03-01

    Optimal regimes and physical processes at work are identified for the first round of laser wakefield acceleration experiments proposed at a future CILEX facility. The Apollon-10P CILEX laser, delivering fully compressed, near-PW-power pulses of sub-25 fs duration, is well suited for driving electron density wakes in the blowout regime in cm-length gas targets. Early destruction of the pulse (partly due to energy depletion) prevents electrons from reaching dephasing, limiting the energy gain to about 3 GeV. However, the optimal operating regimes, found with reduced and full three-dimensional particle-in-cell simulations, show high energy efficiency, with about 10% of incident pulse energy transferred to 3 GeV electron bunches with sub-5% energy spread, half-nC charge, and absolutely no low-energy background. This optimal acceleration occurs in 2 cm length plasmas of electron density below 1018 cm-3. Due to their high charge and low phase space volume, these multi-GeV bunches are tailor-made for staged acceleration planned in the framework of the CILEX project. The hallmarks of the optimal regime are electron self-injection at the early stage of laser pulse propagation, stable self-guiding of the pulse through the entire acceleration process, and no need for an external plasma channel. With the initial focal spot closely matched for the nonlinear self-guiding, the laser pulse stabilizes transversely within two Rayleigh lengths, preventing subsequent evolution of the accelerating bucket. This dynamics prevents continuous self-injection of background electrons, preserving low phase space volume of the bunch through the plasma. Near the end of propagation, an optical shock builds up in the pulse tail. This neither disrupts pulse propagation nor produces any noticeable low-energy background in the electron spectra, which is in striking contrast with most of existing GeV-scale acceleration experiments.

  11. Ground Test of the Urine Processing Assembly for Accelerations and Transfer Functions

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Almond, Deborah F. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of the ground test of the urine processing assembly for accelerations and transfer functions. Details are given on the test setup, test data, data analysis, analytical results, and microgravity assessment. The conclusions of the tests include the following: (1) the single-input/multiple-output method is useful if the data are acquired by tri-axial accelerometers and the inputs can be considered uncorrelated; (2) tying coherence to the transfer function matrix yields higher confidence in the results; (3) the WRS#2 rack ORUs need to be isolated; and (4) future work includes a plan for characterizing the performance of isolation materials.
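
    The single-input/multiple-output conclusion above rests on standard frequency-response and coherence estimation. Below is a minimal sketch using scipy.signal; the signals, sample rate, and coherence threshold are invented stand-ins, not WRS#2 rack test data.

    ```python
    import numpy as np
    from scipy import signal

    fs = 1000.0                      # Hz, assumed sample rate (illustrative)
    t = np.arange(0, 60, 1 / fs)
    x = np.random.randn(t.size)      # stand-in input acceleration (one axis)
    y = np.convolve(x, np.exp(-np.arange(200) / 20.0), mode="same") \
        + 0.1 * np.random.randn(t.size)   # stand-in response at one location

    # H1 transfer-function estimator: H(f) = Pxy(f) / Pxx(f)
    f, pxx = signal.welch(x, fs=fs, nperseg=4096)
    _, pxy = signal.csd(x, y, fs=fs, nperseg=4096)
    h1 = pxy / pxx

    # Coherence flags the frequencies where the estimate is trustworthy
    # (and where the uncorrelated-inputs assumption of the SIMO method holds).
    _, coh = signal.coherence(x, y, fs=fs, nperseg=4096)
    trusted = coh > 0.9
    print(f[trusted][:5], np.abs(h1[trusted])[:5])
    ```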

  12. Turbulent Magnetohydrodynamic Acceleration Processes: Theory, SSX Experiments, and Connections to Space and Astrophysics

    SciTech Connect

    W Matthaeus; M Brown

    2006-07-15

    This is the final technical report for a funded program to provide theoretical support to the Swarthmore Spheromak Experiment (SSX). We examined MHD relaxation, reconnection between two spheromaks, particle acceleration by these processes, and collisionless effects, e.g., the Hall effect near the reconnection zone. Throughout the project, applications to space plasma physics and astrophysics were included. Towards the end of the project we were examining a more fully turbulent relaxation associated with unconstrained dynamics in SSX. We employed experimental and spacecraft observations as well as analytical and numerical methods.

  13. Draw-in Map — A Road Map for Simulation-Guided Die Tryout and Stamping Process Control

    NASA Astrophysics Data System (ADS)

    Wang, Chuantao (C. T.); Zhang, Jimmy J.; Goan, Norman

    2005-08-01

    Sheet metal forming is a displacement- or draw-in-controlled manufacturing process in which a flat blank is drawn into a die cavity to form an automotive body panel. Draw-in amount is the single most important stamping manufacturing index: it controls all forming characteristics (strains, stresses, thinning, etc.), stamping failures (splits, wrinkles, surface distortion, etc.), and line die operations and automation. The Draw-in Map is engineered for math-based die development via advanced stamping simulation technology. The Draw-in Map is then provided to die makers in plants as a road map for math-guided die tryout, in which the tryout workers follow the engineered tryout conditions and match the engineered draw-in amounts, so that tryout time and cost are greatly reduced and quality is ensured. The Map can also be used as a math-based troubleshooting tool to identify the causes of formability problems in stamping production. The engineered Draw-in Map has been applied to all draw die tryouts for all GM vehicle programs since 1998. A minimum 50% reduction in both lead time and cost, and significant improvement in panel quality in tryout, have been reported. This paper presents the concept and the process of applying the engineered Draw-in Map in die tryout.

  14. Draw-in Map - A Road Map for Simulation-Guided Die Tryout and Stamping Process Control

    SciTech Connect

    Wang Chuantao; Zhang, Jimmy J.; Goan, Norman

    2005-08-05

    Sheet metal forming is a displacement- or draw-in-controlled manufacturing process in which a flat blank is drawn into a die cavity to form an automotive body panel. Draw-in amount is the single most important stamping manufacturing index: it controls all forming characteristics (strains, stresses, thinning, etc.), stamping failures (splits, wrinkles, surface distortion, etc.), and line die operations and automation. The Draw-in Map is engineered for math-based die development via advanced stamping simulation technology. The Draw-in Map is then provided to die makers in plants as a road map for math-guided die tryout, in which the tryout workers follow the engineered tryout conditions and match the engineered draw-in amounts, so that tryout time and cost are greatly reduced and quality is ensured. The Map can also be used as a math-based troubleshooting tool to identify the causes of formability problems in stamping production. The engineered Draw-in Map has been applied to all draw die tryouts for all GM vehicle programs since 1998. A minimum 50% reduction in both lead time and cost, and significant improvement in panel quality in tryout, have been reported. This paper presents the concept and the process of applying the engineered Draw-in Map in die tryout.

  15. Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models

    SciTech Connect

    Curtis, J.H.; Michelotti, M.D.; Riemer, N.; Heath, M.T.; West, M.

    2016-10-01

    Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate an approximately 50-fold increase in algorithm efficiency.
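
    The abstract does not spell out the Binned Algorithm itself; the following is one plausible realization of the binning idea (a bin-level binomial draw bounded by each bin's maximum removal rate, followed by per-particle acceptance-rejection), with a toy deposition rate rather than the paper's dry-deposition model.

    ```python
    import numpy as np

    def binned_removal_step(diams, rate, bin_edges, dt, rng):
        """One step of stochastic particle removal using size bins.

        Per bin, a candidate count is drawn from a binomial using the bin's
        maximum removal rate; each candidate is then removed with probability
        rate(d) / rate_max.  The resulting removal probability,
        (1 - exp(-k_max*dt)) * k/k_max, matches the exact 1 - exp(-k*dt)
        to first order in dt, while needing far fewer random draws.
        """
        keep = np.ones(diams.size, dtype=bool)
        bins = np.digitize(diams, bin_edges)
        for b in np.unique(bins):
            idx = np.where(bins == b)[0]
            k_max = rate(diams[idx]).max()
            p_bin = 1.0 - np.exp(-k_max * dt)      # upper-bound removal prob.
            n_cand = rng.binomial(idx.size, p_bin)
            cand = rng.choice(idx, size=n_cand, replace=False)
            accept = rng.random(n_cand) < rate(diams[cand]) / k_max
            keep[cand[accept]] = False
        return diams[keep]

    rng = np.random.default_rng(0)
    diams = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=100_000)  # microns
    dep_rate = lambda d: 1e-5 * (1.0 + d)          # toy deposition rate, 1/s
    edges = np.logspace(-2, 1, 30)
    diams = binned_removal_step(diams, dep_rate, edges, dt=60.0, rng=rng)
    ```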

  16. Spatiotemporal processing of linear acceleration: primary afferent and central vestibular neuron responses

    NASA Technical Reports Server (NTRS)

    Angelaki, D. E.; Dickman, J. D.

    2000-01-01

    Spatiotemporal convergence and two-dimensional (2-D) neural tuning have been proposed as a major neural mechanism in the signal processing of linear acceleration. To examine this hypothesis, we studied the firing properties of primary otolith afferents and central otolith neurons that respond exclusively to horizontal linear accelerations of the head (0.16-10 Hz) in alert rhesus monkeys. Unlike primary afferents, the majority of central otolith neurons exhibited 2-D spatial tuning to linear acceleration. As a result, central otolith dynamics vary as a function of movement direction. During movement along the maximum sensitivity direction, the dynamics of all central otolith neurons differed significantly from those observed for the primary afferent population. At least three different groups of central response dynamics were described according to the properties observed for motion along the maximum sensitivity direction. "High-pass" neurons exhibited increasing gains and phase values as a function of frequency. "Flat" neurons were characterized by relatively flat gains and constant phase lags (approximately 20-55 degrees). A few neurons ("low-pass") were characterized by decreasing gain and phase as a function of frequency. The response dynamics of central otolith neurons suggest that the approximately 90-degree phase lags observed at low frequencies are not the result of a neural integration but rather the effect of nonminimum phase behavior, which could arise at least partly through spatiotemporal convergence. Neither afferent nor central otolith neurons discriminated between gravitational and inertial components of linear acceleration. Thus, response sensitivity was indistinguishable during 0.5-Hz pitch oscillations and fore-aft movements.

  17. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach of using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor for GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
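
    The rescaling idea referred to above is commonly implemented by re-weighting recorded photon path lengths via Beer-Lambert attenuation (so-called white Monte Carlo). A minimal CPU sketch follows; the recorded arrays are random stand-ins, not output of the authors' GPU/MATLAB package.

    ```python
    import numpy as np

    # Suppose a single baseline Monte Carlo run recorded, for each photon that
    # exited the tissue surface, its total path length (cm) and exit weight.
    # These arrays are stand-ins for the real simulation output.
    path_len = np.random.default_rng(1).exponential(scale=1.0, size=1_000_000)
    weight = np.ones_like(path_len)
    mua0 = 0.0          # baseline run with zero absorption ("white" Monte Carlo)
    n_launched = 2_000_000

    def rescaled_reflectance(mua):
        """Diffuse reflectance at absorption mua via Beer-Lambert reweighting:
        each detected photon is attenuated by exp(-(mua - mua0) * path)."""
        return np.sum(weight * np.exp(-(mua - mua0) * path_len)) / n_launched

    # Sweep absorption coefficients without re-running the Monte Carlo:
    for mua in (0.01, 0.1, 1.0):   # 1/cm
        print(mua, rescaled_reflectance(mua))
    ```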

  18. Redesign of Process Map to Increase Efficiency: Reducing Procedure Time in Cervical-Cancer Brachytherapy

    PubMed Central

    Damato, Antonio L.; Cormack, Robert A.; Bhagwat, Mandar S.; Buzurovic, Ivan; Finucane, Susan; Hansen, Jorgen L.; O’Farrell, Desmond A.; Offiong, Alecia; Randall, Una; Friesen, Scott; Lee, Larissa J.; Viswanathan, Akila N.

    2014-01-01

    Purpose: To increase intra-procedural efficiency in the use of clinical resources and to decrease planning time for cervical-cancer brachytherapy treatments through redesign of the procedure’s process map. Methods and Materials: A multi-disciplinary team identified all tasks and associated resources involved in cervical-cancer brachytherapy in our institution, and arranged them in a process map. A redesign of the treatment planning component of the process map was conducted with the goal of minimizing planning time. Planning time was measured on 20 consecutive insertions, of which 10 were performed with standard procedures and 10 with the redesigned process map, and the results were compared. Statistical significance (p < 0.05) was measured with a two-tailed t-test. Results: Twelve tasks involved in cervical-cancer brachytherapy treatments were identified. The process map showed that in standard procedures, the treatment planning tasks were performed sequentially. The process map was redesigned to specify that contouring and some planning tasks are performed concomitantly. Some quality assurance (QA) tasks were reorganized to minimize adverse effects of a possible error on procedure time. Test “dry runs” followed by live implementation confirmed the applicability of the new process map to clinical conditions. A 29% reduction in planning time (p < 0.01) was observed with the introduction of the redesigned process map. Conclusions: A process map for cervical-cancer brachytherapy was generated. The treatment planning component of the process map was redesigned, resulting in a 29% decrease in planning time and a streamlining of the QA process. PMID:25572438

  19. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that can be used to determine the intensity level, which can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing separately and jointly from both social and physical sensors in order to produce near-real-time intensity maps, and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 South Napa, CA, earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computing of combined intensity levels and production of combined intensity maps in near real time. The results compare three types of intensity maps created based on physical, social, and combined data sources. Here we correlate
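
    For the "physical sensor" branch, instrumental intensity is typically obtained from a published regression on peak ground motion. The sketch below uses the Wald et al. (1999) PGA relation as one common choice; the abstract does not state which relationship the authors actually used, so this is purely illustrative.

    ```python
    import numpy as np

    def mmi_from_pga(pga_cms2):
        """Instrumental Modified Mercalli intensity from peak ground
        acceleration, using the Wald et al. (1999) California regression
        (PGA in cm/s^2, intended for intensities of roughly V and above).
        One common choice; not necessarily the relation used in the paper."""
        pga = np.asarray(pga_cms2, dtype=float)
        return 3.66 * np.log10(pga) - 1.66

    print(mmi_from_pga([50.0, 100.0, 300.0]))  # -> roughly MMI 4.6, 5.7, 7.4
    ```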

  20. Fast Mapping Across Time: Memory Processes Support Children's Retention of Learned Words.

    PubMed

    Vlach, Haley A; Sandhofer, Catherine M

    2012-01-01

    Children's remarkable ability to map linguistic labels to referents in the world is commonly called fast mapping. The current study examined children's (N = 216) and adults' (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children's retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel forgetting functions of domain-general memory processes. Memory processes are critical to children's word learning and the role of one such process, forgetting, is discussed in detail - forgetting supports extended mapping by promoting the memory and generalization of words and categories.

  1. Fast Mapping Across Time: Memory Processes Support Children’s Retention of Learned Words

    PubMed Central

    Vlach, Haley A.; Sandhofer, Catherine M.

    2012-01-01

    Children’s remarkable ability to map linguistic labels to referents in the world is commonly called fast mapping. The current study examined children’s (N = 216) and adults’ (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children’s retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel forgetting functions of domain-general memory processes. Memory processes are critical to children’s word learning and the role of one such process, forgetting, is discussed in detail – forgetting supports extended mapping by promoting the memory and generalization of words and categories. PMID:22375132

  2. Utilization of Integrated Process Control, Data Capture, and Data Analysis in Construction of Accelerator Systems

    SciTech Connect

    Bonnie Madre; Charles Reece; Joseph Ozelis; Valerie Bookwalter

    2003-05-12

    Jefferson Lab has developed a web-based system that integrates commercial database, data analysis, document archiving and retrieval, and user interface software into a coherent knowledge management product (Pansophy). This product provides important tools for the successful pursuit of major projects such as accelerator system development and construction, by offering elements of process and procedure control, data capture and review, and data mining and analysis. After a period of initial development, Pansophy is now being used in Jefferson Lab's SNS superconducting linac construction effort, as a means for structuring and implementing the QA program, for process control and tracking, and for cryomodule test data capture and presentation/analysis. Development of Pansophy is continuing, in particular of the data query and analysis functions that are the cornerstone of its utility.

  3. Arsenite exposure accelerates aging process regulated by the transcription factor DAF-16/FOXO in Caenorhabditis elegans.

    PubMed

    Yu, Chan-Wei; How, Chun Ming; Liao, Vivian Hsiu-Chuan

    2016-05-01

    Arsenic is a known human carcinogen, and high levels of arsenic contamination in food, soils, water, and air are of toxicological concern. Arsenic remains a contaminant of emerging interest, yet its effects on the aging process have received little attention. In this study, we investigated the effects and the underlying mechanisms of chronic arsenite exposure on the aging process in Caenorhabditis elegans. The results showed that prolonged arsenite exposure caused a significantly decreased lifespan compared to non-exposed worms. In addition, arsenite exposure (100 μM) caused significant changes in age-dependent biomarkers, including a decrease in defecation frequency and accumulations of intestinal lipofuscin and lipid peroxidation in an age-dependent manner in C. elegans. Further evidence revealed that the intracellular reactive oxygen species (ROS) level was significantly increased in an age-dependent manner upon 100 μM arsenite exposure. Moreover, the mRNA levels of transcriptional markers of aging (hsp-16.1, hsp-16.49, and hsp-70) were increased in aged worms under arsenite exposure (100 μM). Finally, we showed that daf-16 mutant worms were more sensitive to arsenite exposure (100 μM) with respect to lifespan, and the expression of the DAF-16 target gene sod-3 failed to be induced in aged daf-16 mutants under arsenite exposure (100 μM). Our study demonstrated that chronic arsenite exposure results in an accelerated aging process in C. elegans. The overproduction of intracellular ROS and the transcription factor DAF-16/FOXO play roles in mediating the accelerated aging process under arsenite exposure in C. elegans. This study implicates a potential ecotoxicological and health risk of arsenic in the environment.

  4. Ion distributions in the vicinity of Mars: Signatures of heating and acceleration processes

    NASA Astrophysics Data System (ADS)

    Nilsson, H.; Stenberg, G.; Futaana, Y.; Holmström, M.; Barabash, S.; Lundin, R.; Edberg, N. J. T.; Fedorov, A.

    2012-02-01

    More than three years of data from the ASPERA-3 instrument on board Mars Express have been used to compile average distribution functions of ions in and around the Mars induced magnetosphere. We present samples of average distribution functions, as well as average flux patterns based on them, all suitable for detailed comparison with models of the near-Mars space environment. The average heavy ion distributions close to the planet form thermal populations with a temperature of 3 to 10 eV. The distribution functions in the tail consist of two populations: one cold, which is an extension of the low-altitude population, and one accelerated population of ionospheric origin ions. All significant fluxes of heavy ions in the tail are tailward. The heavy ions in the magnetosheath form a plume with the flow aligned with the bow shock, and a more radial flow direction than the solar wind origin flow. Summarizing the escape processes, ionospheric ions are heated close to the planet, presumably through wave-particle interaction. These heated populations are accelerated in the tailward direction in a restricted region. Another significant escape path is through the magnetosheath. A part of the ionospheric population is likely accelerated in the radial direction, out into the magnetosheath, although pickup of an oxygen exosphere may also be a viable source for this escape. Increased energy input from the solar wind during CIR events appears to mainly increase the number flux of escaping particles; the average energy of the escaping particles is not strongly affected. Heavy ions on the dayside may precipitate and cause sputtering of the atmosphere, though fluxes are likely lower than 0.4 × 10^23 s^-1.

  5. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units

    PubMed Central

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A.; Anastasio, Mark A.

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. Results: The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Conclusions: Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction. PMID:23387778
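
    The abstract does not reproduce the imaging operators themselves; the sketch below shows only a generic delay-and-sum style backprojection for optoacoustics, with made-up geometry, to indicate the voxel/detector loop structure that GPU implementations of this kind parallelize.

    ```python
    import numpy as np

    def delay_and_sum(sinogram, det_pos, voxels, c, fs):
        """Naive 3D optoacoustic backprojection: each voxel accumulates the
        detector samples at its acoustic time of flight |x - x_d| / c.
        A GPU version parallelizes this loop body over voxels and detectors;
        this is a simplified stand-in, not the paper's FBP operator."""
        n_det, n_t = sinogram.shape
        image = np.zeros(len(voxels))
        for d in range(n_det):
            dist = np.linalg.norm(voxels - det_pos[d], axis=1)
            idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_t - 1)
            image += sinogram[d, idx]
        return image / n_det

    # Made-up geometry: 64 detectors on a 5 cm ring, random voxel cloud.
    rng = np.random.default_rng(2)
    ang = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    det = 0.05 * np.column_stack([np.cos(ang), np.sin(ang), np.zeros(64)])
    vox = rng.uniform(-0.01, 0.01, size=(1000, 3))
    img = delay_and_sum(rng.standard_normal((64, 2048)), det, vox,
                        c=1500.0, fs=40e6)
    ```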

  6. Investigation on the internal acceleration process of the outer radiation belt using the particle filter

    NASA Astrophysics Data System (ADS)

    Toyama, H.; Miyoshi, Y.; Ueno, G.; Koshiishi, H.; Matsumoto, H.; Shiokawa, K.

    2013-12-01

    It is known that high-energy electrons in the radiation belts often cause satellite anomalies and malfunctions. Thus, a forecast of the time variation of the energetic electrons is necessary to protect satellites in the radiation belts. Time variations of the radiation belt electrons have been modeled with the Fokker-Planck equation. Performance of a forecast using the Fokker-Planck equation depends on the parameters used in the model, so improvement of the parameters is important for space weather forecasting. We performed data assimilation using the particle filter with a code developed by Miyoshi et al. [2006], using 1000 particles in the calculation. In this study, the phase space density, the diffusion coefficient, the wave amplitude, and the source amplitude of the internal acceleration compose the state vector. The observation vector consists of the differential flux measured by the Tsubasa satellite. We also apply the particle smoother to estimate the smoothed distribution. While there were several discrepancies between the simulation without data assimilation and the observations, the data assimilation improves the simulation result and captures the typical flux variations of the outer belt during magnetic storms. We also discuss the internal acceleration process on the basis of the source amplitude estimated through the data assimilation.
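
    For reference, a generic bootstrap particle-filter update cycle (predict, weight by the likelihood of the observed flux, resample) is sketched below. The state vector and observation model here are placeholders, not the Miyoshi et al. radiation-belt code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles = 1000                      # same count as in the study

    # Placeholder state: log phase-space density plus a source amplitude.
    state = rng.normal([0.0, 1.0], [0.5, 0.3], size=(n_particles, 2))

    def predict(state):
        """Stand-in for one Fokker-Planck model step with process noise."""
        return state + rng.normal(0.0, 0.05, size=state.shape)

    def likelihood(state, obs_flux, sigma=0.2):
        """Stand-in observation model: flux ~ exp(state[:, 0])."""
        model_flux = np.exp(state[:, 0])
        return np.exp(-0.5 * ((obs_flux - model_flux) / sigma) ** 2)

    for obs_flux in [1.2, 1.5, 1.1]:        # fake satellite measurements
        state = predict(state)
        w = likelihood(state, obs_flux)
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)   # resample
        state = state[idx]

    print(state.mean(axis=0))   # posterior mean of the state vector
    ```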

  7. Graphics processing unit (GPU)-accelerated particle filter framework for positron emission tomography image reconstruction.

    PubMed

    Yu, Fengchao; Liu, Huafeng; Hu, Zhenghui; Shi, Pengcheng

    2012-04-01

    As a consequence of the random nature of photon emissions and detections, the data collected by a positron emission tomography (PET) imaging system can be shown to be Poisson distributed. Meanwhile, there have been considerable efforts within the tracer kinetic modeling communities aimed at establishing the relationship between the PET data and physiological parameters that affect the uptake and metabolism of the tracer. Both statistical and physiological models are important to PET reconstruction. The majority of previous efforts are based on simplified, nonphysical mathematical expressions, such as Poisson modeling of the measured data, which are, on the whole, formulated without consideration of the underlying physiology. In this paper, we propose a graphics processing unit (GPU)-accelerated reconstruction strategy that can take both the statistical model and the physiological model into consideration with the aid of state-space evolution equations. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify these two constraints into a general framework. In order to accelerate reconstruction, GPU-based parallel computing is introduced. Experiments with Zubal thorax phantom data, Monte Carlo simulated phantom data, and real phantom data show the power of the method. Furthermore, thanks to the computing power of the GPU, the reconstruction time is practical for clinical application.

  8. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  9. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units.

    PubMed

    Lindert, Steffen; Bucher, Denis; Eastman, Peter; Pande, Vijay; McCammon, J Andrew

    2013-11-12

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling.
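
    The OpenMM/AMOEBA implementation details are not given in the abstract, but aMD implementations apply the standard boost potential of Hamelberg et al. (2004); a minimal sketch follows, with illustrative threshold and alpha values rather than the paper's parametrization.

    ```python
    def amd_boost(v, e_thresh, alpha):
        """Standard aMD boost (Hamelberg et al., 2004): when the potential V
        falls below the threshold E, add dV = (E - V)^2 / (alpha + E - V),
        which flattens energy basins and enhances sampling.  E and alpha
        here are made-up numbers, not the paper's AMOEBA settings."""
        if v >= e_thresh:
            return 0.0
        return (e_thresh - v) ** 2 / (alpha + (e_thresh - v))

    # Effective potential sampled by aMD at one instantaneous V:
    v = -5000.0    # kcal/mol, illustrative instantaneous potential energy
    print(v + amd_boost(v, e_thresh=-4800.0, alpha=200.0))   # -> -4900.0
    ```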

  10. Accelerated evolution after gene duplication: a time-dependent process affecting just one copy.

    PubMed

    Pegueroles, Cinta; Laurie, Steve; Albà, M Mar

    2013-08-01

    Gene duplication is widely regarded as a major mechanism modeling genome evolution and function. However, the mechanisms that drive the evolution of the two, initially redundant, gene copies are still ill defined. Many gene duplicates experience evolutionary rate acceleration, but the relative contribution of positive selection and random drift to the retention and subsequent evolution of gene duplicates, and for how long the molecular clock may be distorted by these processes, remains unclear. Focusing on rodent genes that duplicated before and after the mouse and rat split, we find significantly increased sequence divergence after duplication in only one of the copies, which in nearly all cases corresponds to the novel daughter copy, independent of the mechanism of duplication. We observe that the evolutionary rate of the accelerated copy, measured as the ratio of nonsynonymous to synonymous substitutions, is on average 5-fold higher in the period spanning 4-12 My after the duplication than it was before the duplication. This increase can be explained, at least in part, by the action of positive selection according to the results of the maximum likelihood-based branch-site test. Subsequently, the rate decelerates until purifying selection completely returns to preduplication levels. Reversion to the original rates has already been accomplished 40.5 My after the duplication event, corresponding to a genetic distance of about 0.28 synonymous substitutions per site. Differences in tissue gene expression patterns parallel those of substitution rates, reinforcing the role of neofunctionalization in explaining the evolution of young gene duplicates.

  11. High-Speed Digital Signal Processing Method for Detection of Repeating Earthquakes Using GPGPU-Acceleration

    NASA Astrophysics Data System (ADS)

    Kawakami, Taiki; Okubo, Kan; Uchida, Naoki; Takeuchi, Nobunao; Matsuzawa, Toru

    2013-04-01

    Repeating earthquakes occur on the same asperity at the plate boundary. These earthquakes have an important property: the seismic waveforms observed at an identical observation site are very similar regardless of their occurrence time. The slip histories of repeating earthquakes can reveal the existence of asperities: analysis of repeating earthquakes can characterize the asperities and enable temporal and spatial monitoring of slip at the plate boundary. Moreover, we expect medium-term predictions of earthquakes at the plate boundary by means of repeating-earthquake analysis. Although previous works mostly clarified the existence of asperities and repeating earthquakes, and the relationship between asperities and quasi-static slip areas, a stable and robust method for automatic detection of repeating earthquakes has not been established yet. Furthermore, in order to process enormous data sets (so-called big data), speeding up the signal processing is an important issue. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for signal processing in various fields of study. This movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, so that a PC (personal computer) with GPUs can act as a personal supercomputer. GPU computing provides a high-performance computing environment at a lower cost than before; the use of GPUs therefore contributes to a significant reduction of the execution time in signal processing of huge seismic data sets. In this study, we first applied band-limited Fourier phase correlation as a fast method of detecting repeating earthquakes. This method utilizes only band-limited phase information and yields the correlation values between two seismic signals. Secondly, we employ a coherence function using three orthogonal components (East-West, North-South, and Up-Down) of seismic data as a
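
    A minimal version of the band-limited Fourier phase correlation described above can be written in a few lines; the pass band below is an illustrative assumption, not the one used by the authors.

    ```python
    import numpy as np

    def phase_correlation(x, y, fs, band=(1.0, 8.0)):
        """Band-limited Fourier phase correlation of two seismograms.

        Only the phase of the cross-spectrum is kept (unit magnitude),
        restricted to the pass band; a sharp peak marks the lag at which
        the two waveforms are highly similar, as expected for repeating
        events.  Band edges are illustrative."""
        n = len(x)
        f = np.fft.rfftfreq(n, d=1.0 / fs)
        cross = np.conj(np.fft.rfft(x)) * np.fft.rfft(y)
        mag = np.abs(cross)
        phase = np.divide(cross, mag, out=np.zeros_like(cross), where=mag > 0)
        phase[(f < band[0]) | (f > band[1])] = 0.0   # band limitation
        return np.fft.irfft(phase, n)

    rng = np.random.default_rng(3)
    sig = rng.standard_normal(4096)
    corr = phase_correlation(sig, np.roll(sig, 37), fs=100.0)
    print(corr.argmax())   # -> 37: the imposed lag between the two copies
    ```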

  12. Using a commercial graphical processing unit and the CUDA programming language to accelerate scientific image processing applications

    NASA Astrophysics Data System (ADS)

    Broussard, Randy P.; Ives, Robert W.

    2011-01-01

    In the past two years the processing power of video graphics cards has quadrupled and is approaching supercomputer levels. State-of-the-art graphics processing units (GPUs) boast theoretical computational performance in the range of 1.5 trillion floating point operations per second (1.5 teraflops). This processing power is readily accessible to the scientific community at a relatively small cost. High-level programming languages are now available that give access to the internal architecture of the graphics card, allowing greater algorithm optimization. This research takes the memory-access-intensive portions of an image-based iris identification algorithm and hosts them on a GPU using the C++ compatible CUDA language. The selected segmentation algorithm uses basic image processing techniques such as image inversion, value squaring, thresholding, dilation, and erosion, and memory/computationally intensive calculations such as the circular Hough transform. Portions of the iris segmentation algorithm were accelerated by a factor of 77 over the 2008 GPU results. Some parts of the algorithm ran at speeds that were over 1600 times faster than their CPU counterparts. Strengths and limitations of the GPU Single Instruction Multiple Data architecture are discussed. Memory access times, instruction execution times, programming details, and code samples are presented as part of the research.
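
    As a CPU-side reference for the kinds of pixel-wise and morphological steps listed above (the CUDA kernels themselves are not reproduced here), consider the following sketch on stand-in image data; the threshold value is arbitrary.

    ```python
    import numpy as np
    from scipy import ndimage

    # Stand-in image; a real pipeline would load an iris image instead.
    img = np.random.default_rng(4).random((480, 640))

    inverted = 1.0 - img                 # image inversion
    squared = inverted ** 2              # value squaring
    mask = squared > 0.5                 # global threshold (illustrative)
    mask = ndimage.binary_dilation(mask, iterations=2)
    mask = ndimage.binary_erosion(mask, iterations=2)

    # Each step maps one output pixel from a small neighborhood of input
    # pixels -- the data-parallel pattern that suits the GPU's
    # single-instruction-multiple-data architecture.
    print(mask.mean())
    ```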

  13. Elevated-temperature-induced acceleration of PACT clearing process of mouse brain tissue

    NASA Astrophysics Data System (ADS)

    Yu, Tingting; Qi, Yisong; Zhu, Jingtan; Xu, Jianyi; Gong, Hui; Luo, Qingming; Zhu, Dan

    2017-01-01

    Tissue optical clearing techniques show great potential for neural imaging with high resolution, especially for connectomics in the brain. The passive clarity technique (PACT) is a relatively simple clearing method based on incubation, which has great advantages in tissue transparency, fluorescence preservation, and immunostaining compatibility for imaging tissue blocks. However, the method suffers from a long processing time. Previous studies indicated that increasing temperature can speed up the clearing. In this work, we aim to systematically and quantitatively study this influence based on PACT with graded increases in temperature. We investigated the optical clearing of brain tissue blocks at different temperatures and found that elevated temperature could accelerate the clearing process but also influenced the fluorescence intensity. By balancing the advantages against the drawbacks, we conclude that 42–47 °C is an alternative temperature range for PACT, which can not only produce a faster clearing process but also retain the original advantages of PACT: well-preserved endogenous fluorescence, fine morphology maintenance, and immunostaining compatibility.

  14. Elevated-temperature-induced acceleration of PACT clearing process of mouse brain tissue

    PubMed Central

    Yu, Tingting; Qi, Yisong; Zhu, Jingtan; Xu, Jianyi; Gong, Hui; Luo, Qingming; Zhu, Dan

    2017-01-01

    Tissue optical clearing techniques show great potential for neural imaging with high resolution, especially for connectomics in the brain. The passive clarity technique (PACT) is a relatively simple clearing method based on incubation, which has great advantages in tissue transparency, fluorescence preservation, and immunostaining compatibility for imaging tissue blocks. However, the method suffers from a long processing time. Previous studies indicated that increasing temperature can speed up the clearing. In this work, we aim to systematically and quantitatively study this influence based on PACT with graded increases in temperature. We investigated the optical clearing of brain tissue blocks at different temperatures and found that elevated temperature could accelerate the clearing process but also influenced the fluorescence intensity. By balancing the advantages against the drawbacks, we conclude that 42–47 °C is an alternative temperature range for PACT, which can not only produce a faster clearing process but also retain the original advantages of PACT: well-preserved endogenous fluorescence, fine morphology maintenance, and immunostaining compatibility. PMID:28139694

  15. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    SciTech Connect

    Wright, R.M.; Zander, M.E.; Brown, S.K.; Sandoval, D.P.; Gilpatrick, J.D.; Gibson, H.E.

    1992-09-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  16. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    SciTech Connect

    Wright, R.M.; Zander, M.E.; Brown, S.K.; Sandoval, D.P.; Gilpatrick, J.D.; Gibson, H.E.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  17. In-flight, online processing and mapping of airborne gamma spectrometry data

    NASA Astrophysics Data System (ADS)

    Bucher, Benno; Rybach, Ladislaus; Schwarz, Georg

    2005-03-01

    Airborne gamma spectrometry is a technique especially useful for environmental monitoring and emergency preparedness. Because time is a critical factor in emergency response, fast data processing and mapping software is needed that also supports online monitoring and data processing. Therefore, new online data processing and mapping software was developed, which successfully displays the gamma spectra, the ground activity, and the topographical data. The software was successfully tested during various survey flights.

  18. On the mapping associated with the complex representation of functions and processes.

    NASA Technical Reports Server (NTRS)

    Harger, R. O.

    1972-01-01

    The mapping between function spaces that is implied by the representation of a real 'bandpass' function by a complex 'low-pass' function is explicitly accepted. The discussion is extended to the representation of stationary random processes, where the mapping is between spaces of random processes. This approach clarifies the nature of the complex representation, especially in the case of random processes, and, in addition, derives the properties of the complex representation.

  19. Mapping Experiment as a Learning Process: How the First Electromagnetic Motor Was Invented.

    ERIC Educational Resources Information Center

    Gooding, David

    1990-01-01

    Introduced is a notation to map out an experiment as an active process in a real-world environment and display the human aspect written out of most narratives. Comparing maps of accounts can show how knowledge-construction depends on narrative reconstruction. Emphasized are nonverbal and procedural aspects of discovery and invention. (KR)

  20. Beyond Event Segmentation: Spatial- and Social-Cognitive Processes in Verb-to-Action Mapping

    ERIC Educational Resources Information Center

    Friend, Margaret; Pace, Amy

    2011-01-01

    The present article investigates spatial- and social-cognitive processes in toddlers' mapping of concepts to real-world events. In 2 studies we explore how event segmentation might lay the groundwork for extracting actions from the event stream and conceptually mapping novel verbs to these actions. In Study 1, toddlers demonstrated the ability to…

  1. Stochastic Modeling and Analysis of Multiple Nonlinear Accelerated Degradation Processes through Information Fusion

    PubMed Central

    Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product’s performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner’s ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters. PMID:27509499
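
    A general (time-transformed) Wiener degradation path of the kind underlying such ADT models can be simulated in a few lines. The sketch below uses a power-law time scale as one simple choice of nonlinearity and omits the copula coupling between parameters; all parameter values are illustrative, not the tuner case study's.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def nonlinear_wiener_path(t, theta, sigma, b):
        """One degradation path of a time-transformed Wiener process:
        X(t) = theta * Lambda(t) + sigma * B(Lambda(t)), Lambda(t) = t**b.
        The power-law Lambda is one simple nonlinearity; the paper's model
        form and parameter values are not reproduced here."""
        lam = t ** b
        dB = rng.normal(0.0, np.sqrt(np.diff(lam, prepend=0.0)))
        return theta * lam + sigma * np.cumsum(dB)

    t = np.linspace(0.01, 100.0, 500)     # hours under accelerated stress
    for _ in range(3):                    # a few sample paths
        x = nonlinear_wiener_path(t, theta=0.05, sigma=0.2, b=0.8)
        print(x[-1])                      # degradation level at end of test
    ```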

  2. Venus and the Earth's Archean: Geological mapping and process comparisons

    NASA Astrophysics Data System (ADS)

    Head, J. W.; Hurwitz, D. M.; Ivanov, M. A.; Basilevsky, A. T.; Senthil Kumar, P.

    2008-09-01

    Introduction. The geological features, structures, thermal conditions, interpreted processes, and outstanding questions related to both the Earth's Archean and Venus share many similarities [1-3] and we are using a problem-oriented approach to Venus mapping, guided by insight from the Archean record of the Earth, to gain new perspectives on the evolution of Venus and Earth's Archean. The Earth's preserved and well-documented Archean record [4] provides important insight into high heat-flux tectonic and magmatic environments and structures [5] and the surface of Venus reveals the current configuration and recent geological record of analogous high-temperature environments unmodified by subsequent several billion years of segmentation and overprinting, as on Earth. Here we address the nature of the Earth's Archean, the similarities to and differences from Venus, and the specific Venus and Earth-Archean problems on which progress might be made through comparison. The Earth's Archean and its Relation to Venus. The Archean period of Earth's history extends from accretion/initial crust formation (sometimes called the Hadean) to 2.5 Ga and is thought of by most workers as being a transitional period between the earliest Earth and later periods largely dominated by plate tectonics (Proterozoic and Phanerozoic) [2, 4]. Thus the Archean is viewed as recording a critical period in Earth's history in which a transition took place from the types of primary and early secondary crusts seen on the Moon, Mars and Mercury [6] (and largely missing in the record of the Earth), to the style of crustal accretion and plate tectonics characterizing later Earth history. The Archean is also characterized by enhanced crustal and mantle temperatures leading to differences in deformation style and volcanism (e.g., komatiites) [2]. The preserved Archean crust is exposed in ~36 different cratons [4], forming the cores of most continental regions, and is composed of gneisses, plutons and

  3. Processing of radioactive waste by the use of low energy (≤ 100 MeV) charged particle accelerators. Optimization problems

    SciTech Connect

    Mushnikov, V.N.; Ozhigov, L.S.; Khizhnyak, N.A.

    1993-12-31

    The radiation processing of long-lived radiotoxic elements is based on transmutation reactions under the action of particles of various types and energies. Among the different particle sources, the most promising is the proton accelerator. The present work studied the process of radiation deactivation in a stationary proton flux as a function of flux density and energy. The Bateman-Robinson differential equations were solved.

  4. 24 CFR 200.1500 - Sanctions against a MAP lender.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Sanctions against a MAP lender. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1500 Sanctions against a MAP lender. (a) In addition...

  5. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  6. 24 CFR 200.1500 - Sanctions against a MAP lender.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Sanctions against a MAP lender. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1500 Sanctions against a MAP lender. (a) In addition...

  7. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  8. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  9. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  10. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  11. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  12. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  13. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  14. 24 CFR 200.1500 - Sanctions against a MAP lender.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Sanctions against a MAP lender. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1500 Sanctions against a MAP lender. (a) In addition...

  15. 24 CFR 200.1500 - Sanctions against a MAP lender.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Sanctions against a MAP lender. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1500 Sanctions against a MAP lender. (a) In addition...

  16. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  17. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  18. 24 CFR 200.1500 - Sanctions against a MAP lender.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Sanctions against a MAP lender. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1500 Sanctions against a MAP lender. (a) In addition...

  19. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  20. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  1. 24 CFR 200.1535 - MAP Lender Review Board.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false MAP Lender Review Board. 200.1535... URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1535 MAP Lender Review Board. (a) Authority—(1) Sanctions....

  2. 24 CFR 200.1520 - Termination of MAP privileges.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Termination of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1520 Termination of MAP privileges. (a) In...

  3. 24 CFR 200.1515 - Suspension of MAP privileges.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Suspension of MAP privileges. 200... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1515 Suspension of MAP privileges. (a) In general....

  4. Process for Generating Engine Fuel Consumption Map: Future Atkinson Engine with Cooled EGR and Cylinder Deactivation

    EPA Pesticide Factsheets

    This document summarizes the process followed to combine GT-POWER engine model data and laboratory engine dynamometer test data to generate a full engine fuel consumption map, which can be used in EPA's ALPHA vehicle simulations.

  5. Revealing the flux: Using processed Husimi maps to visualize dynamics of bound systems and mesoscopic transport

    NASA Astrophysics Data System (ADS)

    Mason, Douglas J.; Borunda, Mario F.; Heller, Eric J.

    2015-04-01

    We elaborate upon the "processed Husimi map" representation for visualizing quantum wave functions using coherent states as a measurement of the local phase space to produce a vector field related to the probability flux. Adapted from the Husimi projection, the processed Husimi map is mathematically related to the flux operator under certain limits but offers a robust and flexible alternative since it can operate away from these limits and in systems that exhibit zero flux. The processed Husimi map is further capable of revealing the full classical dynamics underlying a quantum wave function since it reverse engineers the wave function to yield the underlying classical ray structure. We demonstrate the capabilities of processed Husimi maps on bound systems with and without electromagnetic fields, as well as on open systems on and off resonance, to examine the relationship between closed system eigenstates and mesoscopic transport.
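
    The basic building block, the Husimi projection of a wave function onto a coherent state at a phase-space point, can be sketched in one dimension as below. The "processing" into a flux-like vector field is not reproduced here, and all parameters are illustrative (the wave function is not normalized, so values are relative).

    ```python
    import numpy as np

    def husimi(psi, x, x0, p0, sigma):
        """Husimi projection of a 1D wave function: squared overlap with a
        normalized coherent state centered at phase-space point (x0, p0).
        The paper's 'processed' map further converts such projections into
        a flux-like vector field; only the basic projection is shown."""
        g = (2 * np.pi * sigma**2) ** -0.25 \
            * np.exp(-(x - x0) ** 2 / (4 * sigma**2) + 1j * p0 * x)
        dx = x[1] - x[0]
        return np.abs(np.sum(np.conj(g) * psi) * dx) ** 2

    x = np.linspace(-20, 20, 2048)
    k = 2.0
    psi = np.exp(1j * k * x) * np.exp(-x**2 / 50)   # right-moving wave packet
    # Scanning p0 at fixed x0 peaks near the packet's momentum k = 2:
    for p0 in (1.0, 2.0, 3.0):
        print(p0, husimi(psi, x, x0=0.0, p0=p0, sigma=2.0))
    ```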

  6. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-07-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and simulation results compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue not to use
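
    As an illustration (not from the record), a minimal sketch of one agreement measure commonly used for such categorical map comparisons, Cohen's kappa between two DRP rasters; the function name and array layout are assumptions:

```python
import numpy as np

def cohens_kappa(map_a, map_b, n_classes):
    """Chance-corrected agreement between two categorical DRP rasters,
    given as integer class-label arrays of the same shape."""
    a, b = np.ravel(map_a), np.ravel(map_b)
    conf = np.zeros((n_classes, n_classes))
    np.add.at(conf, (a, b), 1)                    # joint class-frequency table
    conf /= conf.sum()
    p_obs = np.trace(conf)                        # observed agreement
    p_exp = conf.sum(axis=1) @ conf.sum(axis=0)   # agreement expected by chance
    return (p_obs - p_exp) / (1.0 - p_exp)
```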

  7. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, M.; Buss, R.; Scherrer, S.; Margreth, M.; Zappa, M.

    2015-12-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexity of automatic DRP mapping approaches affects hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison and a deviation map were derived. The automatically derived DRP-maps were used in synthetic runoff simulations with an adapted version of the hydrological model PREVAH, and simulation results compared with those from simulations using the reference maps. The DRP-maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP-maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. We therefore recommend not only using expert

  8. Effects of a Story Map on Accelerated Reader Postreading Test Scores in Students with High-Functioning Autism

    ERIC Educational Resources Information Center

    Stringfield, Suzanne Griggs; Luscre, Deanna; Gast, David L.

    2011-01-01

    In this study, three elementary-aged boys with high-functioning autism (HFA) were taught to use a graphic organizer called a Story Map as a postreading tool during language arts instruction. Students learned to accurately complete the Story Map. The effect of the intervention on story recall was assessed within the context of a multiple-baseline…

  9. Vibrotactile masking experiments reveal accelerated somatosensory processing in congenitally blind braille readers.

    PubMed

    Bhattacharjee, Arindam; Ye, Amanda J; Lisak, Joy A; Vargas, Maria G; Goldreich, Daniel

    2010-10-27

    Braille reading is a demanding task that requires the identification of rapidly varying tactile patterns. During proficient reading, neighboring characters impact the fingertip at ∼100 ms intervals, and adjacent raised dots within a character at 50 ms intervals. Because the brain requires time to interpret afferent sensorineural activity, among other reasons, tactile stimuli separated by such short temporal intervals pose a challenge to perception. How, then, do proficient Braille readers successfully interpret inputs arising from their fingertips at such rapid rates? We hypothesized that somatosensory perceptual consolidation occurs more rapidly in proficient Braille readers. If so, Braille readers should outperform sighted participants on masking tasks, which demand rapid perceptual processing, but would not necessarily outperform the sighted on tests of simple vibrotactile sensitivity. To investigate, we conducted two-interval forced-choice vibrotactile detection, amplitude discrimination, and masking tasks on the index fingertips of 89 sighted and 57 profoundly blind humans. Sighted and blind participants had similar unmasked detection (25 ms target tap) and amplitude discrimination (compared with 100 μm reference tap) thresholds, but congenitally blind Braille readers, the fastest readers among the blind participants, exhibited significantly less masking than the sighted (masker, 50 Hz, 50 μm; target-masker delays, ±50 and ±100 ms). Indeed, Braille reading speed correlated significantly and specifically with masking task performance, and in particular with the backward masking decay time constant. We conclude that vibrotactile sensitivity is unchanged but that perceptual processing is accelerated in congenitally blind Braille readers.

  10. Incipient fault detection and identification in process systems using accelerating neural network learning

    SciTech Connect

    Parlos, A.G.; Muthusami, J.; Atiya, A.F. . Dept. of Nuclear Engineering)

    1994-02-01

    The objective of this paper is to present the development and numerical testing of a robust fault detection and identification (FDI) system using artificial neural networks (ANNs), for incipient (slowly developing) faults occurring in process systems. The challenge in using ANNs in FDI systems arises from the need to detect faults of varying severity, faults from noisy sensors, and multiple simultaneous faults. To address these issues, it becomes essential to have a learning algorithm that ensures quick convergence to a high level of accuracy. A recently developed accelerated learning algorithm, namely a form of an adaptive back propagation (ABP) algorithm, is used for this purpose. The ABP algorithm is used for the development of an FDI system for a process composed of a direct current motor, a centrifugal pump, and the associated piping system. Simulation studies indicate that the FDI system has high sensitivity to incipient fault severity, while exhibiting insensitivity to sensor noise. For multiple simultaneous faults, the FDI system detects the fault with the predominant signature. The major limitation of the developed FDI system is encountered when it is subjected to simultaneous faults with similar signatures. During such faults, the inherent limitation of pattern-recognition-based FDI methods becomes apparent. Thus, alternate, more sophisticated FDI methods become necessary to address such problems. Even though the effectiveness of pattern-recognition-based FDI methods using ANNs has been demonstrated, further testing using real-world data is necessary.
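
    The specific ABP update is not reproduced in this record; as a stand-in, here is a generic adaptive-learning-rate rule of the same family ("bold driver"), with conventional constants that are our assumption:

```python
def bold_driver(lr, loss, prev_loss, grow=1.05, shrink=0.5):
    """Adaptive learning-rate heuristic: enlarge the step while the
    training loss keeps falling, cut it sharply when the loss rises."""
    return lr * grow if loss < prev_loss else lr * shrink
```

    Applied once per training epoch, rules of this kind speed convergence on smooth error surfaces, which is the property the FDI application relies on.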

  11. Detailed Modeling of Physical Processes in Electron Sources for Accelerator Applications

    NASA Astrophysics Data System (ADS)

    Chubenko, Oksana; Afanasev, Andrei

    2017-01-01

    At present, electron sources are essential in a wide range of applications - from common technical use to exploring the nature of matter. Depending on the application requirements, different methods and materials are used to generate electrons. State-of-the-art accelerator applications set a number of often-conflicting requirements for electron sources (e.g., quantum efficiency vs. polarization, current density vs. lifetime, etc). Development of advanced electron sources includes modeling and design of cathodes, material growth, fabrication of cathodes, and cathode testing. The detailed simulation and modeling of physical processes is required in order to shed light on the exact mechanisms of electron emission and to develop new-generation electron sources with optimized efficiency. The purpose of the present work is to study physical processes in advanced electron sources and develop scientific tools, which could be used to predict electron emission from novel nano-structured materials. In particular, the area of interest includes bulk/superlattice gallium arsenide (bulk/SL GaAs) photo-emitters and nitrogen-incorporated ultrananocrystalline diamond ((N)UNCD) photo/field-emitters. Work supported by The George Washington University and Euclid TechLabs LLC.

  12. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, William

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km resolution on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  13. Generalized Temporal Acceleration Scheme for Kinetic Monte Carlo Simulations of Surface Catalytic Processes by Scaling the Rates of Fast Reactions.

    PubMed

    Dybeck, Eric Christopher; Plaisance, Craig Patrick; Neurock, Matthew

    2017-02-14

    A novel algorithm has been developed to achieve temporal acceleration during kinetic Monte Carlo (KMC) simulations of surface catalytic processes. This algorithm allows for the direct simulation of reaction networks containing kinetic processes occurring on vastly disparate timescales which computationally overburden standard KMC methods. Previously developed methods for temporal acceleration in KMC have been designed for specific systems and often require a priori information from the user such as identifying the fast and slow processes. In the approach presented herein, quasi-equilibrated processes are identified automatically based on previous executions of the forward and reverse reactions. Temporal acceleration is achieved by automatically scaling the intrinsic rate constants of the quasi-equilibrated processes, bringing their rates closer to the timescales of the slow kinetically relevant non-equilibrated processes. All reactions are still simulated directly, although with modified rate constants. Abrupt changes in the underlying dynamics of the reaction network are identified during the simulation and the reaction rate constants are rescaled accordingly. The algorithm has been utilized here to model the Fischer-Tropsch synthesis reaction over ruthenium nanoparticles. This reaction network has multiple timescale-disparate processes which would be intractable to simulate without the aid of temporal acceleration. The accelerated simulations are found to give reaction rates and selectivities indistinguishable from those calculated by an equivalent mean-field kinetic model. The computational savings of the algorithm can span many orders of magnitude in realistic systems and the computational cost is not limited by the magnitude of the timescale disparity in the system processes. Furthermore, the algorithm has been designed in a generic fashion and can easily be applied to other surface catalytic processes of interest.
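
    A minimal sketch of the central idea, assuming per-reaction-pair rate arrays and execution counters (all names and thresholds are ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmc_step(rates):
    """One event of the direct (Gillespie-type) KMC method."""
    total = rates.sum()
    dt = rng.exponential(1.0 / total)                 # time advance
    event = int(np.searchsorted(np.cumsum(rates), rng.uniform(0.0, total)))
    return event, dt

def damp_quasi_equilibrated(pair_rates, n_fwd, n_rev, tol=0.05, alpha=0.01):
    """A reversible pair whose forward and reverse executions have nearly
    balanced over a sampling window is flagged as quasi-equilibrated and
    both of its rate constants are scaled down, narrowing the timescale
    gap to the slow, kinetically relevant processes."""
    imbalance = np.abs(n_fwd - n_rev) / np.maximum(n_fwd + n_rev, 1)
    scaled = pair_rates.copy()
    scaled[imbalance < tol] *= alpha
    return scaled
```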

  14. Business process mapping techniques for ISO 9001 and 14001 certifications

    SciTech Connect

    Klement, R.E.; Richardson, G.D.

    1997-11-01

    AlliedSignal Federal Manufacturing and Technologies/Kansas City (FM&T/KC) produces nonnuclear components for nuclear weapons. The company has operated the plant for the US Department of Energy (DOE) since 1949. Throughout the history of the plant, procedures have been written to reflect the nuclear weapons industry best practices, and the facility has built a reputation for producing high quality products. The purpose of this presentation is to demonstrate how Total Quality principles were used at FM&T/KC to document processes for ISO 9001 and 14001 certifications. The information presented to the reader will lead to a better understanding of business administration by aligning procedures to key business processes within a business model; converting functional-based procedures to process-based procedures for total integrated resource management; and assigning ownership, validation, and metrics to procedures/processes, adding value to a company's profitability.

  15. Mapping the rupture process of moderate earthquakes by inverting accelerograms

    USGS Publications Warehouse

    Hellweg, M.; Boatwright, J.

    1999-01-01

    We present a waveform inversion method that uses recordings of small events as Green's functions to map the rupture growth of moderate earthquakes. The method fits P and S waveforms from many stations simultaneously in an iterative procedure to estimate the subevent rupture time and amplitude relative to the Green's function event. We invert the accelerograms written by two moderate Parkfield earthquakes using smaller events as Green's functions. The first earthquake (M = 4.6) occurred on November 14, 1993, at a depth of 11 km under Middle Mountain, in the assumed preparation zone for the next Parkfield main shock. The second earthquake (M = 4.7) occurred on December 20, 1994, some 6 km to the southeast, at a depth of 9 km on a section of the San Andreas fault with no previous microseismicity and little inferred coseismic slip in the 1966 Parkfield earthquake. The inversion results are strikingly different for the two events. The average stress release in the 1993 event was 50 bars, distributed over a geometrically complex area of 0.9 km2. The average stress release in the 1994 event was only 6 bars, distributed over a roughly elliptical area of 20 km2. The ruptures of both events appear to grow spasmodically into relatively complex shapes: the inversion only constrains the ruptures to grow more slowly than the S wave velocity but does not use smoothness constraints. Copyright 1999 by the American Geophysical Union.
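
    A minimal sketch of the amplitude and rupture-time estimation step for a single subevent, assuming one record and one empirical Green's function as numpy arrays (function and variable names are ours):

```python
import numpy as np

def fit_subevent(d, g):
    """Scan candidate rupture delays; for each delay t, the least-squares
    amplitude of the delayed Green's function g_t relative to the record
    d is a = <d, g_t> / <g_t, g_t>. Return the best (delay, amplitude)."""
    best = (np.inf, 0, 0.0)
    for t in range(len(d) - len(g) + 1):
        g_t = np.zeros_like(d)
        g_t[t:t + len(g)] = g
        a = float(d @ g_t) / float(g_t @ g_t)
        resid = np.sum((d - a * g_t) ** 2)
        if resid < best[0]:
            best = (resid, t, a)
    return best[1], best[2]   # delay in samples, relative amplitude
```

    The full method iterates such fits over many subevents and stations simultaneously; this sketch shows only the single-station building block.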

  16. Mapping the rupture process of moderate earthquakes by inverting accelerograms

    NASA Astrophysics Data System (ADS)

    Hellweg, Margaret; Boatwright, John

    1999-04-01

    We present a waveform inversion method that uses recordings of small events as Green's functions to map the rupture growth of moderate earthquakes. The method fits P and S waveforms from many stations simultaneously in an iterative procedure to estimate the subevent rupture time and amplitude relative to the Green's function event. We invert the accelerograms written by two moderate Parkfield earthquakes using smaller events as Green's functions. The first earthquake (M = 4.6) occurred on November 14, 1993, at a depth of 11 km under Middle Mountain, in the assumed preparation zone for the next Parkfield main shock. The second earthquake (M = 4.7) occurred on December 20, 1994, some 6 km to the southeast, at a depth of 9 km on a section of the San Andreas fault with no previous microseismicity and little inferred coseismic slip in the 1966 Parkfield earthquake. The inversion results are strikingly different for the two events. The average stress release in the 1993 event was 50 bars, distributed over a geometrically complex area of 0.9 km2. The average stress release in the 1994 event was only 6 bars, distributed over a roughly elliptical area of 20 km2. The ruptures of both events appear to grow spasmodically into relatively complex shapes: the inversion only constrains the ruptures to grow more slowly than the S wave velocity but does not use smoothness constraints.

  17. Functional mapping of quantitative trait loci underlying the character process: a theoretical framework.

    PubMed

    Ma, Chang-Xing; Casella, George; Wu, Rongling

    2002-08-01

    Unlike a character measured at a finite set of landmark points, function-valued traits are those that change as a function of some independent and continuous variable. These traits, also called infinite-dimensional characters, can be described as the character process and include a number of biologically, economically, or biomedically important features, such as growth trajectories, allometric scalings, and norms of reaction. Here we present a new statistical infrastructure for mapping quantitative trait loci (QTL) underlying the character process. This strategy, termed functional mapping, integrates mathematical relationships of different traits or variables within the genetic mapping framework. Logistic mapping proposed in this article can be viewed as an example of functional mapping. Logistic mapping is based on a universal biological law that for each and every living organism growth over time follows an exponential growth curve (e.g., logistic or S-shaped). A maximum-likelihood approach based on a logistic-mixture model, implemented with the EM algorithm, is developed to provide the estimates of QTL positions, QTL effects, and other model parameters responsible for growth trajectories. Logistic mapping displays a tremendous potential to increase the power of QTL detection, the precision of parameter estimation, and the resolution of QTL localization due to the small number of parameters to be estimated, the pleiotropic effect of a QTL on growth, and/or residual correlations of growth at different ages. More importantly, logistic mapping allows for testing numerous biologically important hypotheses concerning the genetic basis of quantitative variation, thus gaining an insight into the critical role of development in shaping plant and animal evolution and domestication. The power of logistic mapping is demonstrated by an example of a forest tree, in which one QTL affecting stem growth processes is detected on a linkage group using our method, whereas it cannot
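
    For reference, the logistic growth law and mixture likelihood that the abstract alludes to, written in the standard functional-mapping form (notation ours, not the authors'):

```latex
% Expected growth trajectory for QTL genotype j:
g_j(t) = \frac{a_j}{1 + b_j e^{-r_j t}}
% Mixture likelihood over n individuals, with conditional QTL-genotype
% probabilities p_{j|i} derived from the marker data:
L(\Omega) = \prod_{i=1}^{n} \sum_{j} p_{j|i}\, f_j(\mathbf{y}_i),
% where f_j is a multivariate normal density whose mean vector is
% (g_j(t_1), \ldots, g_j(t_T)) and whose covariance models the residual
% correlation of growth across ages.
```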

  18. Functional mapping of quantitative trait loci underlying the character process: a theoretical framework.

    PubMed Central

    Ma, Chang-Xing; Casella, George; Wu, Rongling

    2002-01-01

    Unlike a character measured at a finite set of landmark points, function-valued traits are those that change as a function of some independent and continuous variable. These traits, also called infinite-dimensional characters, can be described as the character process and include a number of biologically, economically, or biomedically important features, such as growth trajectories, allometric scalings, and norms of reaction. Here we present a new statistical infrastructure for mapping quantitative trait loci (QTL) underlying the character process. This strategy, termed functional mapping, integrates mathematical relationships of different traits or variables within the genetic mapping framework. Logistic mapping proposed in this article can be viewed as an example of functional mapping. Logistic mapping is based on a universal biological law that for each and every living organism growth over time follows an exponential growth curve (e.g., logistic or S-shaped). A maximum-likelihood approach based on a logistic-mixture model, implemented with the EM algorithm, is developed to provide the estimates of QTL positions, QTL effects, and other model parameters responsible for growth trajectories. Logistic mapping displays a tremendous potential to increase the power of QTL detection, the precision of parameter estimation, and the resolution of QTL localization due to the small number of parameters to be estimated, the pleiotropic effect of a QTL on growth, and/or residual correlations of growth at different ages. More importantly, logistic mapping allows for testing numerous biologically important hypotheses concerning the genetic basis of quantitative variation, thus gaining an insight into the critical role of development in shaping plant and animal evolution and domestication. The power of logistic mapping is demonstrated by an example of a forest tree, in which one QTL affecting stem growth processes is detected on a linkage group using our method, whereas it cannot

  19. Bisphenol A exposure accelerated the aging process in the nematode Caenorhabditis elegans.

    PubMed

    Tan, Ling; Wang, Shunchang; Wang, Yun; He, Mei; Liu, Dahai

    2015-06-01

    Bisphenol A (BPA) is a well-known environmental estrogenic disruptor that causes adverse effects. Recent studies have found that chronic exposure to BPA is associated with a high incidence of several age-related diseases. Aging is characterized by progressive functional decline, which affects quality of life. However, the effects of BPA on the aging process are largely unknown. In the present study, using the nematode Caenorhabditis elegans as a model, we investigated the influence of BPA exposure on the aging process. The decrease in body length, fecundity, and population size and the increase in egg-laying defects suggested that BPA exposure resulted in fitness loss and reproductive aging in this animal. Lifetime exposure of worms to BPA shortened the lifespan in a dose-dependent manner. Moreover, prolonged BPA exposure resulted in age-related behavior degeneration and the accumulation of lipofuscin and lipid peroxide products. The expression of mitochondria-specific HSP-6 and endoplasmic reticulum (ER)-related HSP-70 exhibited a hormetic decrease. The expression of ER-related HSP-4 decreased significantly, while HSP-16.2 showed a dose-dependent increase. The decreased expression of GCS-1 and GST-4 implicated reduced antioxidant ability under BPA exposure, and the increase in SOD-3 expression might be caused by elevated levels of reactive oxygen species (ROS) production. Finally, BPA exposure increased the generation of hydrogen peroxide-related ROS and superoxide anions. Our results suggest that BPA exposure resulted in an accelerated aging process in C. elegans mediated by the induction of oxidative stress.

  20. Non-invasive Mapping of Face Processing by Navigated Transcranial Magnetic Stimulation

    PubMed Central

    Maurer, Stefanie; Giglhuber, Katrin; Sollmann, Nico; Kelm, Anna; Ille, Sebastian; Hauck, Theresa; Tanigawa, Noriko; Ringel, Florian; Boeckh-Behrens, Tobias; Meyer, Bernhard; Krieg, Sandro M.

    2017-01-01

    Background: Besides motor and language function, tumor resections within the frontal and parietal lobe have also been reported to cause neuropsychological impairment like prosopagnosia. Objective: Since non-navigated transcranial magnetic stimulation (TMS) has previously been used to map neuropsychological cortical function, this study aims to evaluate the feasibility and spatial discrimination of repetitive navigated TMS (rTMS) mapping for detection of face processing impairment in healthy volunteers. The study was also designed to establish this examination for preoperative mapping in brain tumor patients. Methods: Twenty healthy and purely right-handed volunteers (11 female, 9 male) underwent rTMS mapping for cortical face processing function using 5 Hz/10 pulses. Both hemispheres were investigated randomly with an interval of 2 weeks between mapping sessions. Fifty-two predetermined cortical spots of the whole hemispheres were mapped after baseline measurement. The task consisted of 80 portraits of popular persons, which had to be named while rTMS was applied. Results: In 80% of all subjects rTMS elicited naming errors in the right middle middle frontal gyrus (mMFG). Concerning anomia errors, the highest error rate (35%) was achieved in the bilateral triangular inferior frontal gyrus (trIFG). With regard to similarly or wrongly named persons, we observed 10% error rates mainly in the bilateral frontal lobes. Conclusion: It seems feasible to map the cortical face processing function and to generate face processing impairment via rTMS. The observed localizations are well in accordance with the contemporary literature, and the mapping did not interfere with rTMS-induced language impairment. The clinical usefulness of preoperative mapping has to be evaluated subsequently. PMID:28167906

  1. Extracting Process and Mapping Management for Heterogeneous Systems

    NASA Astrophysics Data System (ADS)

    Hagara, Igor; Tanuška, Pavol; Duchovičová, Soňa

    2013-12-01

    Many papers describe three common methods of selecting data from primary systems. This paper defines how to choose the correct method, or combination of methods, to minimize the impact on the production system and its normal operation. Before applying any method, it is necessary to know the primary system and its database structures, so that the actual data-structure setup can be exploited optimally and the best ETL process designed. Database structures are usually categorized into groups that characterize their quality. This classification helps to find the ideal method for each group and thus to design an ETL solution with minimal impact on the data warehouse and the production system.

  2. Graphics processing unit accelerated one-dimensional blood flow computation in the human arterial tree.

    PubMed

    Itu, Lucian; Sharma, Puneet; Kamen, Ali; Suciu, Constantin; Comaniciu, Dorin

    2013-12-01

    One-dimensional blood flow models have been used extensively for computing pressure and flow waveforms in the human arterial circulation. We propose an improved numerical implementation based on a graphics processing unit (GPU) to accelerate the execution time of the one-dimensional model. A novel parallel hybrid CPU-GPU algorithm with compact copy operations (PHCGCC) and a parallel GPU-only (PGO) algorithm are developed, which are compared against previously introduced PHCG versions, a single-threaded CPU-only algorithm, and a multi-threaded CPU-only algorithm. Different second-order numerical schemes (Lax-Wendroff and Taylor series) are evaluated for the numerical solution of the one-dimensional model, and the computational setups include physiologically motivated non-periodic (Windkessel) and periodic (structured tree) boundary conditions (BC), and elastic and viscoelastic wall laws. Both the PHCGCC and the PGO implementations improved the execution time significantly. The speed-up values over the single-threaded CPU-only implementation range from 5.26 to 8.10×, whereas the speed-up values over the multi-threaded CPU-only implementation range from 1.84 to 4.02×. The PHCGCC algorithm performs best for an elastic wall law with non-periodic BC and for viscoelastic wall laws, whereas the PGO algorithm performs best for an elastic wall law with periodic BC.
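
    For concreteness, a sketch of one Lax-Wendroff (Richtmyer two-step) update for a 1-D conservation law u_t + f(u)_x = 0, the scheme family named above; boundary treatment and the blood-flow-specific flux are omitted:

```python
import numpy as np

def lax_wendroff_step(u, f, dt, dx):
    """Second-order Richtmyer two-step update on interior points."""
    F = f(u)
    u_half = 0.5 * (u[1:] + u[:-1]) - 0.5 * (dt / dx) * (F[1:] - F[:-1])
    F_half = f(u_half)
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] - (dt / dx) * (F_half[1:] - F_half[:-1])
    return u_new
```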

  3. Closing the gap: accelerating the translational process in nanomedicine by proposing standardized characterization techniques.

    PubMed

    Khorasani, Ali A; Weaver, James L; Salvador-Morales, Carolina

    2014-01-01

    On the cusp of widespread permeation of nanomedicine, academia, industry, and government have invested substantial financial resources in developing new ways to better treat diseases. Materials have unique physical and chemical properties at the nanoscale compared with their bulk or small-molecule analogs. These unique properties have been greatly advantageous in providing innovative solutions for medical treatments at the bench level. However, nanomedicine research has not yet fully permeated the clinical setting because of several limitations. Among these limitations are the lack of universal standards for characterizing nanomaterials and the limited knowledge that we possess regarding the interactions between nanomaterials and biological entities such as proteins. In this review, we report on recent developments in the characterization of nanomaterials as well as the newest information about the interactions between nanomaterials and proteins in the human body. We propose a standard set of techniques for universal characterization of nanomaterials. We also address relevant regulatory issues involved in the translational process for the development of drug molecules and drug delivery systems. Adherence and refinement of a universal standard in nanomaterial characterization as well as the acquisition of a deeper understanding of nanomaterials and proteins will likely accelerate the use of nanomedicine in common practice to a great extent.

  4. Closing the gap: accelerating the translational process in nanomedicine by proposing standardized characterization techniques

    PubMed Central

    Khorasani, Ali A; Weaver, James L; Salvador-Morales, Carolina

    2014-01-01

    On the cusp of widespread permeation of nanomedicine, academia, industry, and government have invested substantial financial resources in developing new ways to better treat diseases. Materials have unique physical and chemical properties at the nanoscale compared with their bulk or small-molecule analogs. These unique properties have been greatly advantageous in providing innovative solutions for medical treatments at the bench level. However, nanomedicine research has not yet fully permeated the clinical setting because of several limitations. Among these limitations are the lack of universal standards for characterizing nanomaterials and the limited knowledge that we possess regarding the interactions between nanomaterials and biological entities such as proteins. In this review, we report on recent developments in the characterization of nanomaterials as well as the newest information about the interactions between nanomaterials and proteins in the human body. We propose a standard set of techniques for universal characterization of nanomaterials. We also address relevant regulatory issues involved in the translational process for the development of drug molecules and drug delivery systems. Adherence and refinement of a universal standard in nanomaterial characterization as well as the acquisition of a deeper understanding of nanomaterials and proteins will likely accelerate the use of nanomedicine in common practice to a great extent. PMID:25525356

  5. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
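
    As a baseline for what the GPU variant accelerates, a serial direct-method SSA sketch (names ours; the paper's data structures are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie(x0, stoich, propensity, t_end):
    """Serial direct-method SSA: sample the waiting time from the total
    propensity, pick a reaction in proportion to its propensity, apply
    its stoichiometry, and repeat until t_end."""
    t, x = 0.0, np.array(x0, dtype=float)
    path = [(t, x.copy())]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 == 0.0:
            break                                   # no reaction can fire
        t += rng.exponential(1.0 / a0)
        j = int(np.searchsorted(np.cumsum(a), rng.uniform(0.0, a0)))
        x += stoich[j]
        path.append((t, x.copy()))
    return path

# Example: isomerization A -> B with rate constant k = 0.5
trajectory = gillespie(
    x0=[100, 0],
    stoich=np.array([[-1, 1]]),
    propensity=lambda x: np.array([0.5 * x[0]]),
    t_end=10.0,
)
```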

  6. A long-standing hyperglycaemic condition impairs skin barrier by accelerating skin ageing process.

    PubMed

    Park, Hwa-Young; Kim, Jae-Hong; Jung, Minyoung; Chung, Choon Hee; Hasham, Rosnani; Park, Chang Seo; Choi, Eung Ho

    2011-12-01

    Uncontrolled chronic hyperglycaemia including type 2 diabetes mellitus (DM) induces many skin problems related to chronic impaired skin barrier state. However, little is known about the skin barrier state of chronic hyperglycaemia patients, the dysfunction of which may be a major cause of their skin problems. In this study, we investigated whether a long-standing hyperglycaemic condition including type 2 DM impairs skin barrier homoeostasis in proportion to the duration and its pathomechanism. We utilized the Otsuka Long-Evans Tokushima Fatty (OLETF) rats as an animal model of long-standing hyperglycaemia and Long-Evans Tokushima Otsuka rats as a control strain. We confirmed that a long-standing hyperglycaemia delayed skin barrier homoeostasis, which correlated with haemoglobin A1c levels. OLETF rats as a long-standing hyperglycaemia model exhibited decreased epidermal lipid synthesis and antimicrobial peptide expression with increasing age. Decreased epidermal lipid synthesis accounted for decreased lamellar body production. In addition, OLETF rats had significantly higher serum levels of advanced glycation end products (AGEs) and elevated levels of the receptor for AGE in the epidermis. A long-standing hyperglycaemic condition impairs skin barrier function including permeability and antimicrobial barriers by accelerating skin ageing process in proportion to the duration of hyperglycaemia, which could be a major pathophysiology underlying cutaneous complications of DM.

  7. Operational shoreline mapping with high spatial resolution radar and geographic processing

    USGS Publications Warehouse

    Rangoonwala, Amina; Jones, Cathleen E; Chi, Zhaohui; Ramsey III, Elijah W.

    2017-01-01

    A comprehensive mapping technology was developed utilizing standard image processing and available GIS procedures to automate shoreline identification and mapping from 2 m synthetic aperture radar (SAR) HH amplitude data. The development used four NASA Uninhabited Aerial Vehicle SAR (UAVSAR) data collections between summer 2009 and 2012 and a fall 2012 collection of wetlands dominantly fronted by vegetated shorelines along the Mississippi River Delta that are beset by severe storms, toxic releases, and relative sea-level rise. In comparison to shorelines interpreted from 0.3 m and 1 m orthophotography, the automated GIS 10 m alongshore sampling found SAR shoreline mapping accuracy to be ±2 m, well within the lower range of reported shoreline mapping accuracies. The high comparability was obtained even though water levels differed between the SAR and photography image pairs and included all shorelines regardless of complexity. The SAR mapping technology is highly repeatable and extendable to other SAR instruments with similar operational functionality.

  8. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway

    SciTech Connect

    Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.; Turteltaub, Kenneth W.; Newman, Lee S.

    2014-06-16

    We report that exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, we determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.

  9. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway

    DOE PAGES

    Tooker, Brian C.; Brindley, Stephen M.; Chiarappa-Zucca, Marina L.; ...

    2014-06-16

    We report that exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, we determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.

  10. Accelerator mass spectrometry detection of beryllium ions in the antigen processing and presentation pathway.

    PubMed

    Tooker, Brian C; Brindley, Stephen M; Chiarappa-Zucca, Marina L; Turteltaub, Kenneth W; Newman, Lee S

    2015-01-01

    Exposure to small amounts of beryllium (Be) can result in beryllium sensitization and progression to Chronic Beryllium Disease (CBD). In CBD, beryllium is presented to Be-responsive T-cells by professional antigen-presenting cells (APC). This presentation drives T-cell proliferation and pro-inflammatory cytokine (IL-2, TNFα, and IFNγ) production and leads to granuloma formation. The mechanism by which beryllium enters an APC and is processed to become part of the beryllium antigen complex has not yet been elucidated. Developing techniques for beryllium detection with enough sensitivity has presented a barrier to further investigation. The objective of this study was to demonstrate that Accelerator Mass Spectrometry (AMS) is sensitive enough to quantify the amount of beryllium presented by APC to stimulate Be-responsive T-cells. To achieve this goal, APC - which may or may not stimulate Be-responsive T-cells - were cultured with Be-ferritin. Then, by utilizing AMS, the amount of beryllium processed for presentation was determined. Further, IFNγ intracellular cytokine assays were performed to demonstrate that Be-ferritin (at levels used in the experiments) could stimulate Be-responsive T-cells when presented by an APC of the correct HLA type (HLA-DP0201). The results indicated that Be-responsive T-cells expressed IFNγ only when APC with the correct HLA type were able to process Be for presentation. Utilizing AMS, it was determined that APC with HLA-DP0201 had membrane fractions containing 0.17-0.59 ng Be and APC with HLA-DP0401 had membrane fractions bearing 0.40-0.45 ng Be. However, HLA-DP0401 APC had 20-times more Be associated with the whole cells (57.68-61.12 ng) than HLA-DP0201 APC (0.90-3.49 ng). As these findings demonstrate, AMS detection of picogram levels of Be processed by APC is possible. Further, regardless of form, Be requires processing by APC to successfully stimulate Be-responsive T-cells to generate IFNγ.

  11. Boric Acid Reduces the Formation of DNA Double Strand Breaks and Accelerates Wound Healing Process.

    PubMed

    Tepedelen, Burcu Erbaykent; Soya, Elif; Korkmaz, Mehmet

    2016-12-01

    Boron is absorbed through the digestive and respiratory systems and is thought to be converted to boric acid (BA), more than 90 % of which is distributed to all tissues. The biochemical essentiality of boron arises from boric acid, because it affects the activity of several enzymes involved in metabolism. DNA damage repair mechanisms and the regulation of oxidative stress are quite important in the transition from normal to cancerous cells; thus, this study investigated the protective effect of boric acid on DNA damage and wound healing in a human epithelial cell line. For this purpose, the amount of DNA damage induced by irinotecan (CPT-11), etoposide (ETP), doxorubicin (Doxo), and H2O2 was determined by immunofluorescence through phosphorylation of H2AX (Ser139) and pATM (Ser1981) in the absence and presence of BA. Moreover, the effect of BA on wound healing was investigated in epithelial cells treated with these agents. Our results demonstrated that the number of H2AX (Ser139) foci decreased significantly in the presence of BA, while wound healing was accelerated by BA compared with control and drug-only-treated cells. Overall, the results indicate that BA reduced the formation of DNA double strand breaks caused by these agents and improved the wound healing process. We therefore suggest that boric acid has important therapeutic effectiveness and may be used in the treatment of inflammatory diseases in which oxidative stress and the wound healing process play an important role.

  12. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  13. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    SciTech Connect

    Wang, J; Templeton, D C; Harris, D B

    2011-01-21

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
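
    A minimal sketch of a Bartlett-style matched field statistic built from an empirical steering template, which is the flavor of detector the abstract describes (array layout and names are our assumptions):

```python
import numpy as np

def bartlett_mfp(snapshot, template):
    """Normalized coherence between a complex data snapshot and an
    empirical steering template, both shaped (channels, frequencies);
    averaged over frequency. Values near 1.0 indicate a match."""
    num = np.abs(np.sum(np.conj(template) * snapshot, axis=0)) ** 2
    den = (np.sum(np.abs(template) ** 2, axis=0)
           * np.sum(np.abs(snapshot) ** 2, axis=0))
    return float(np.mean(num / den))
```

    Sliding this statistic over continuous data and thresholding it yields a detector that can pick up events too weak for conventional phase pickers.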

  14. Mapping social processes at work in nursing knowledge development.

    PubMed

    Hamilton, Patti; Willis, Eileen; Henderson, Julie; Harvey, Clare; Toffoli, Luisa; Abery, Elizabeth; Verrall, Claire

    2014-09-01

    In this paper, we suggest a blueprint for combining bibliometrics and critical analysis as a way to review published scientific works in nursing. This new approach is neither a systematic review nor meta-analysis. Instead, it is a way for researchers and clinicians to understand how and why current nursing knowledge developed as it did. Our approach will enable consumers and producers of nursing knowledge to recognize and take into account the social processes involved in the development, evaluation, and utilization of new nursing knowledge. We offer a rationale and a strategy for examining the socially-sanctioned actions by which nurse scientists signal to readers the boundaries of their thinking about a problem, the roots of their ideas, and the significance of their work. These actions - based on social processes of authority, credibility, and prestige - have bearing on the careers of nurse scientists and on the ways the knowledge they create enters into the everyday world of nurse clinicians and determines their actions at the bedside, as well as their opportunities for advancement.

  15. With high resolution DEM to enhanced maps of Dominant Runoff Processes (DRP)

    NASA Astrophysics Data System (ADS)

    Margreth, Michael; Naef, Felix

    2010-05-01

    The reaction of a river to intense rainfall depends on the distribution of the dominant runoff processes (DRP): Hortonian Overland Flow (HOF), Saturated Overland Flow (SOF), Sub-surface Flow (SSF), or Deep Percolation (DP) within its catchment area. A decision scheme to determine the DRP was implemented in a GIS, using high-resolution data on soils, geology, land use, and topography. With the scheme, a DRP map was derived for the Canton of Zurich, an area of 1730 km2 that lies in the Swiss Plateau and covers a wide range of topography, geology, and flood-producing precipitation regimes. Detailed soil maps are essential for deriving high-resolution DRP maps because they contain information about soil infiltration and storage capacity. In the Canton of Zurich, only a small part of the forested areas is covered by detailed soil maps. Information such as soil depth and soil water regime had to be derived from the forest vegetation map (1:5'000). In this map, plant species, grouped into forest communities, are delineated according to their preferred site conditions. Besides geology, topography, and climate, soil water regime and soil depth also influence the occurrence of plant species. However, a comparison between the soil water regime indicated by detailed soil maps and the forest vegetation map shows that not all forest communities are selective for soil water regime and soil depth. Thus, only some forest communities can be used to derive the DRP. For the other forest communities, an automatic method had to be developed to derive soil water regime and soil depth from a high-resolution geological map and a laser-scanned DEM. With the high-resolution topographic information, small creeks, drainage ditches, and erosion ditches could be identified. These areas indicate where a fast runoff reaction during heavy rainfall can be expected. Creeks and drainage ditches suggest that soils do not drain properly and are saturated

  16. Graphic processing unit accelerated real-time partially coherent beam generator

    NASA Astrophysics Data System (ADS)

    Ni, Xiaolong; Liu, Zhi; Chen, Chunyi; Jiang, Huilin; Fang, Hanhan; Song, Lujun; Zhang, Su

    2016-07-01

    A method of using liquid crystals (LCs) to generate a partially coherent beam in real-time is described. An expression for generating a partially coherent beam is given and calculated using a graphic processing unit (GPU), i.e., the GeForce GTX 680. A liquid-crystal on silicon (LCOS) with 256 × 256 pixels is used as the partially coherent beam generator (PCBG). An optimizing method with partition convolution is used to improve the generating speed of our LC PCBG. The total time needed to generate a random phase map with a coherence width range from 0.015 mm to 1.5 mm is less than 2.4 ms for calculation and readout with the GPU; adding the time needed for the CPU to read and send to LCOS with the response time of the LC PCBG, the real-time partially coherent beam (PCB) generation frequency of our LC PCBG is up to 312 Hz. To our knowledge, it is the first real-time partially coherent beam generator. A series of experiments based on double-pinhole interference was performed. The results show that, to generate laser beams with coherence widths of 0.9 mm and 1.5 mm at a mean error of approximately 1%, the required RMS values were 0.021306 and 0.020883 and the PV values 0.073576 and 0.072998, respectively.
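
    One common recipe for such a generator (not necessarily the authors' implementation) is to low-pass filter complex white noise in the Fourier domain so that the phase correlation width matches the desired coherence width:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_phase_map(n, dx, coherence_width, phase_std=1.0):
    """Gaussian-correlated random phase screen on an n x n grid with
    pixel pitch dx; coherence_width sets the correlation length."""
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    filt = np.exp(-(np.pi * coherence_width) ** 2 * (fx ** 2 + fy ** 2) / 2.0)
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    phase = np.fft.ifft2(noise * filt).real
    return phase_std * phase / phase.std()
```

    Displaying a stream of such maps on the LCOS at the device frame rate is what turns a coherent laser into an effectively partially coherent source.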

  17. Graphics Processing Unit Acceleration and Parallelization of GENESIS for Large-Scale Molecular Dynamics Simulations.

    PubMed

    Jung, Jaewoon; Naruse, Akira; Kobayashi, Chigusa; Sugita, Yuji

    2016-10-11

    The graphics processing unit (GPU) has become a popular computational platform for molecular dynamics (MD) simulations of biomolecules. A significant speedup in the simulations of small- or medium-size systems using only a few computer nodes with a single or multiple GPUs has been reported. Because of GPU memory limitation and slow communication between GPUs on different computer nodes, it is not straightforward to accelerate MD simulations of large biological systems that contain a few million or more atoms on massively parallel supercomputers with GPUs. In this study, we develop a new scheme in our MD software, GENESIS, to reduce the total computational time on such computers. Computationally intensive real-space nonbonded interactions are computed mainly on GPUs in the scheme, while less intensive bonded interactions and communication-intensive reciprocal-space interactions are performed on CPUs. On the basis of the midpoint cell method as a domain decomposition scheme, we invent the single particle interaction list for reducing the GPU memory usage. Since total computational time is limited by the reciprocal-space computation, we utilize the RESPA multiple time-step integration and reduce the CPU resting time by assigning a subset of nonbonded interactions on CPUs as well as on GPUs when the reciprocal-space computation is skipped. We validated our GPU implementations in GENESIS on BPTI and a membrane protein, porin, by MD simulations and an alanine tripeptide by REMD simulations. Benchmark calculations on the TSUBAME supercomputer showed that an MD simulation of a million-atom system was scalable up to 256 computer nodes with GPUs.

  18. Process for Generating Engine Fuel Consumption Map: Ricardo Cooled EGR Boost 24-bar Standard Car Engine Tier 2 Fuel

    EPA Pesticide Factsheets

    This document summarizes the process followed to utilize the fuel consumption map of a Ricardo modeled engine and vehicle fuel consumption data to generate a full engine fuel consumption map which can be used by EPA's ALPHA vehicle simulations.

  19. Operational SAR Data Processing in GIS Environments for Rapid Disaster Mapping

    NASA Astrophysics Data System (ADS)

    Meroni, A.; Bahr, T.

    2013-05-01

    Access to SAR data can be critically important for disaster mapping. Updating a GIS with contemporary information from SAR data allows a reliable set of geospatial information to be delivered in support of civilian operations, e.g., search and rescue missions. In this paper we therefore present the operational processing of SAR data within a GIS environment for rapid disaster mapping, exemplified by the November 2010 flash flood in the Veneto region, Italy. A series of COSMO-SkyMed acquisitions was processed in ArcGIS® using a single-sensor, multi-mode, multi-temporal approach. The relevant processing steps were combined using the ArcGIS ModelBuilder to create a new model for rapid disaster mapping in ArcGIS, which can be accessed via both desktop and server environments.

  20. ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors.

    PubMed

    Lai, Vicky Tzuyin; Curran, Tim

    2013-12-01

    Cognitive linguists suggest that understanding metaphors requires activation of conceptual mappings between the involved concepts. We tested whether mappings are indeed in use during metaphor comprehension, and what mapping means as a cognitive process with Event-Related Potentials. Participants read literal, conventional metaphorical, novel metaphorical, and anomalous target sentences preceded by primes with related or unrelated mappings. Experiment 1 used sentence-primes to activate related mappings, and Experiment 2 used simile-primes to induce comparison thinking. In the unprimed conditions of both experiments, metaphors elicited N400s more negative than the literals. In Experiment 1, related sentence-primes reduced the metaphor-literal N400 difference in conventional, but not in novel metaphors. In Experiment 2, related simile-primes reduced the metaphor-literal N400 difference in novel, but not clearly in conventional metaphors. We suggest that mapping as a process occurs in metaphors, and the ways in which it can be facilitated by comparison differ between conventional and novel metaphors.

  1. Schooling in Times of Acceleration

    ERIC Educational Resources Information Center

    Buddeberg, Magdalena; Hornberg, Sabine

    2017-01-01

    Modern societies are characterised by forms of acceleration, which influence social processes. Sociologist Hartmut Rosa has systematised temporal structures by focusing on three categories of social acceleration: technical acceleration, acceleration of social change, and acceleration of the pace of life. All three processes of acceleration are…

  2. Monte Carlo Modeling of Electronuclear Processes in Experimental Accelerator Driven Systems

    NASA Astrophysics Data System (ADS)

    Polanski, Aleksander

    2000-01-01

    The paper presents results of Monte Carlo modeling of an experimental Accelerator Driven System (ADS), which employs a subcritical assembly and a 660 MeV proton accelerator operating at the Laboratory of Nuclear Problems of the Joint Institute for Nuclear Research in Dubna. Mixed-oxide (PuO2 + UO2) MOX fuel designed for reactor use will be adopted for the core of the assembly. The present conceptual design of the experimental subcritical assembly in Dubna is based on a core with a nominal unit capacity of 30 kW (thermal). This corresponds to a multiplication coefficient keff = 0.945 and an accelerator beam power of 1 kW.
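
    As a rough, hedged check of the quoted subcriticality, the standard source-multiplication relation M = 1/(1 - keff) gives the average number of neutrons produced in the assembly per external source neutron:

        # Back-of-the-envelope source multiplication for a subcritical assembly.
        # The relation M = 1/(1 - k_eff) is standard; the resulting thermal power
        # also depends on the spallation neutron yield of the 660 MeV proton beam,
        # which is not specified in the abstract.
        k_eff = 0.945
        M = 1.0 / (1.0 - k_eff)
        print(f"source multiplication M = {M:.1f}")  # about 18 for k_eff = 0.945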

  3. Using concept maps to explore preservice teachers' perceptions of science content knowledge, teaching practices, and reflective processes

    NASA Astrophysics Data System (ADS)

    Somers, Judy L.

    This qualitative study examined seven preservice teachers' perceptions of their science content knowledge, teaching practices, and reflective processes through the use of the metacognitive strategy of concept maps. Included in the paper is a review of literature in the areas of preservice teachers' perceptions of teaching, concept development, concept mapping, science content understanding, and reflective process as a part of metacognition. The key questions addressed include the use of concept maps to indicate organization and understanding of science content, mapping strategies to indicate perceptions of teaching practice, and the influence of concept maps on reflective process. There is also a comparison of preservice teachers' perceptions of concept map usage with the purposes and practices of maps as described by experienced teachers. Data were collected primarily through interviews, observations, a pre and post concept mapping activity, and an analysis of those concept maps using a rubric developed for this study. Findings showed that concept map usage clarified students' understanding of the organization and relationships within content area and that the process of creating the concept maps increased participants' understanding of the selected content. The participants felt that the visual element of concept mapping was an important factor in improving content understanding. These participants saw benefit in using concept maps as planning tools and as instructional tools. They did not recognize the use of concept maps as assessment tools. When the participants were able to find personal relevance in and through their concept maps they were better able to be reflective about the process. The experienced teachers discussed student understanding and skill development as the primary purpose of concept map usage, while they were able to use concept maps to accomplish multiple purposes in practice.

  4. Processing Map and Mechanism of Hot Deformation of a Corrosion-Resistant Nickel-Based Alloy

    NASA Astrophysics Data System (ADS)

    Wang, L.; Liu, F.; Zuo, Q.; Cheng, J. J.; Chen, C. F.

    2017-01-01

    The hot deformation behavior of a corrosion-resistant nickel-based alloy was studied in the temperature range of 1050-1200 °C and the strain rate range of 0.001-10 s⁻¹ by employing hot compression tests. A processing-map approach was used to reveal the hot workability and microstructural evolution during hot deformation. The results show that the different stable domains in the processing map, associated with the microstructure evolution, can be ascribed to different dynamic recrystallization (DRX) mechanisms. The discontinuous dynamic recrystallization (DDRX) grains evolved by the necklace mechanism are finer than those evolved by the ordinary mechanism, arising from the strong nucleation process and the growth process, respectively. At low temperature and high strain rate, a flow instability domain occurs, due to continuous dynamic recrystallization (CDRX) based on the evolution of deformation micro-bands within the deformed grains. Based on the processing map, a DRX mechanism map is established, which can provide guidance for designing the desired microstructure.

  5. Impaired letter-string processing in developmental dyslexia: what visual-to-phonology code mapping disorder?

    PubMed

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-05-01

    Poor parallel letter-string processing in developmental dyslexia has been taken as evidence of a poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In particular, reports of poor letter/digit-string processing but preserved symbol-string processing were viewed as evidence of poor visual-to-phonology code mapping, in line with the phonological theory of developmental dyslexia. We assessed here the visual-to-phonological-code mapping disorder hypothesis. In Experiment 1, letter-string, digit-string and colour-string processing was assessed to disentangle a phonological versus visual familiarity account of the letter/digit versus symbol dissociation. Against a visual-to-phonological-code mapping disorder but in support of a familiarity account, results showed poor letter/digit-string processing but preserved colour-string processing in dyslexic children. In Experiment 2, two letter-string report tasks were used, one of which was performed simultaneously with a highly taxing phonological task. Results show that dyslexic children are similarly impaired in letter-string report whether or not a concurrent phonological task is performed at the same time. Taken together, these results provide strong evidence against a phonological account of poor letter-string processing in developmental dyslexia.

  6. A new mapping acquisition and processing system for simultaneous PIXE-RBS analysis with external beam

    NASA Astrophysics Data System (ADS)

    Pichon, L.; Beck, L.; Walter, Ph.; Moignard, B.; Guillou, T.

    2010-06-01

    The combination of ion beam analysis techniques is particularly fruitful for the study of cultural heritage objects. For several years, the AGLAE facility of the Louvre laboratory has been implementing these techniques with an external beam. The recent set-up permits carrying out PIXE, PIGE and RBS simultaneously on the same analyzed spot with a particle beam of approximately 20 μm diameter. A new mapping system has been developed in order to provide elemental concentration maps from the PIXE and RBS spectra. This system combines the Genie2000 spectroscopy software with homemade software that creates maps by coupling the acquisition with the object position. Each pixel of the PIXE and RBS maps contains the spectrum normalised by the dose. After analysing each pixel of the PIXE maps (low- and high-energy X-ray spectra) with the Gupixwin peak-fitting software, quantitative elemental concentrations are obtained for the major and trace elements. This paper presents the quantitative elemental maps extracted from the PIXE spectra and the development of RBS data processing for light-element distribution and thin-layer characterization. Examples on rock painting and lustrous ceramics are presented.

  7. A method to evaluate dose errors introduced by dose mapping processes for mass conserving deformations

    SciTech Connect

    Yan, C.; Hugo, G.; Salguero, F. J.; Saleh-Sayah, N.; Weiss, E.; Sleeman, W. C.; Siebers, J. V.

    2012-04-15

    Purpose: To present a method to evaluate the dose mapping error introduced by the dose mapping process and to apply the method to the 4D dose calculation process implemented in a research version of a commercial treatment planning system for a patient case. Methods: The average dose accumulated in a finite volume should be unchanged when the dose delivered to one anatomic instance of that volume is mapped to a different anatomic instance, provided that the tissue deformation between the anatomic instances is mass conserving. The average dose to a finite volume on image $S$ is defined as $d_S = e_S/m_S$, where $e_S$ is the energy deposited in the mass $m_S$ contained in the volume. Since mass and energy should be conserved when $d_S$ is mapped to an image $R$ ($d_{S\rightarrow R} = d_R$), the mean dose mapping error is defined as $\Delta d_m = |d_R - d_S| = |e_R/m_R - e_S/m_S|$, where $e_R$ and $e_S$ are integral doses (energies deposited) and $m_R$ and $m_S$ are the masses within the region of interest (ROI) on image $R$ and the corresponding ROI on image $S$, with $R$ and $S$ being two anatomic instances of the same patient. Alternatively, application of simple differential propagation yields the differential dose mapping error $\Delta d_d = |(\partial d/\partial e)\,\Delta e + (\partial d/\partial m)\,\Delta m| = |(e_S - e_R)/m_R - ((m_S - m_R)/m_R^2)\,e_R| = \alpha\,|d_R - d_S|$ with $\alpha = m_S/m_R$. A 4D treatment plan on a ten-phase 4D-CT lung patient is used to demonstrate the dose mapping error evaluations for a patient case, in which the accumulated dose $D_R = \sum_{S=0}^{9} d_{S\rightarrow R}$ and the associated error values ($\Delta D_m$ and $\Delta D_d$) are calculated for a uniformly spaced set of ROIs. Results: For the single sample patient dose
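
    Both error measures defined above reduce to simple functions of the integral doses and ROI masses; a small sketch (variable names are illustrative):

        # Mean and differential dose-mapping errors from the formulas above.
        # e_S, e_R: energy deposited (integral dose) in the ROI on images S and R;
        # m_S, m_R: masses of the corresponding ROIs.
        def mean_dose_error(e_S, m_S, e_R, m_R):
            return abs(e_R / m_R - e_S / m_S)  # |d_R - d_S|

        def diff_dose_error(e_S, m_S, e_R, m_R):
            alpha = m_S / m_R
            return alpha * abs(e_R / m_R - e_S / m_S)  # alpha * |d_R - d_S|

        # With equal ROI masses the two measures coincide:
        print(mean_dose_error(1.00, 1.0, 0.98, 1.0))  # 0.02
        print(diff_dose_error(1.00, 1.0, 0.98, 1.0))  # 0.02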

  8. Mapping a Process of Negotiated Identity among Incarcerated Male Juvenile Offenders

    ERIC Educational Resources Information Center

    Abrams, Laura S.; Hyun, Anna

    2009-01-01

    Building on theories of youth identity transitions, this study maps a process of negotiated identity among incarcerated young men. Data are drawn from an ethnographic study of three juvenile correctional institutions and longitudinal semistructured interviews with facility residents. Cross-case analysis of 10 cases finds that youth offenders adapted…

  9. The Maneuver Planning Process for the Microwave Anisotropy Probe (MAP) Mission

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Andrews, Stephen; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Microwave Anisotropy Probe (MAP) was successfully launched from Kennedy Space Center's Eastern Range on June 30, 2001. MAP will measure the cosmic microwave background as a follow-up to NASA's Cosmic Background Explorer (COBE) mission of the early 1990s. MAP will take advantage of its mission orbit about the Sun-Earth/Moon L2 Lagrangian point to produce results with higher resolution, sensitivity, and accuracy than COBE. A strategy comprising highly eccentric phasing loops with a lunar gravity assist was utilized to provide a zero-cost insertion into a lissajous orbit about L2. Maneuvers were executed at the phasing-loop perigees to correct for launch vehicle errors and to target the lunar gravity assist so that a suitable orbit at L2 was achieved. This paper discusses the maneuver planning process for designing, verifying, and executing MAP's maneuvers, including the tools used and how they interacted. The maneuver planning process was iterative and crossed several disciplines, including trajectory design, attitude control, propulsion, power, thermal, communications, and ground planning. Several commercial, off-the-shelf (COTS) packages were used to design the maneuvers. STK/Astrogator was used as the trajectory design tool. All maneuvers were designed in Astrogator to ensure that the Moon was met at the correct time and orientation to provide the energy needed to achieve an orbit about L2. The Mathworks Matlab product was used to develop a tool for generating command quaternions. The command quaternion table (CQT) was used to drive the attitude during the perigee maneuvers. The MatrixX toolset, originally written by Integrated Systems, Inc. and now distributed by Mathworks, was used to create HiFi, a high-fidelity simulator of the MAP attitude control system. HiFi was used to test the CQT and to make sure that all attitude requirements were met during the maneuver. In addition, all ACS data plotting and output were generated in

  10. Digital mapping of side-scan sonar data with the Woods Hole Image Processing System software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution sidescan sonar data. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. A recently developed UNIX-based image-processing software system includes a series of task-specific programs for pre-processing sidescan sonar data. To extend the capabilities of the UNIX-based programs, digital mapping techniques have been developed. This report describes the initial development of an automated digital mapping procedure. Included is a description of the programs and steps required to complete the digital mosaicking on a UNIX-based computer system, and a comparison of techniques that the user may wish to select.

  11. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm under the MapReduce model. The approach not only preserves segmentation quality but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm thus has both practical significance and realization value.
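
    The decomposition maps naturally onto MapReduce because each mean shift update is independent per point; a single-machine sketch of the idea (not the authors' cluster code):

        import numpy as np

        def shift_point(p, points, bandwidth):
            # One mean shift update: move p toward the kernel-weighted local mean.
            w = np.exp(-np.sum((points - p) ** 2, axis=1) / (2 * bandwidth ** 2))
            return (w[:, None] * points).sum(axis=0) / w.sum()

        def mean_shift(points, bandwidth=1.0, n_iter=20):
            for _ in range(n_iter):
                # "map" phase: embarrassingly parallel per-point shifts
                # (one task per image split on a real cluster)
                shifted = [shift_point(p, points, bandwidth) for p in points]
                # "reduce" phase: gather the shifted points for the next round
                points = np.array(shifted)
            return points

        data = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
        modes = mean_shift(data)  # points collapse toward the cluster modes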

  12. Molecular dynamics-based virtual screening: accelerating the drug discovery process by high-performance computing.

    PubMed

    Ge, Hu; Wang, Yu; Li, Chanjuan; Chen, Nanhao; Xie, Yufang; Xu, Mengyan; He, Yingyan; Gu, Xinchun; Wu, Ruibo; Gu, Qiong; Zeng, Liang; Xu, Jun

    2013-10-28

    High-performance computing (HPC) has become a national strategic technology in a number of countries. One hypothesis is that HPC can accelerate biopharmaceutical innovation. Our experimental data demonstrate that HPC can significantly accelerate biopharmaceutical innovation by employing molecular dynamics-based virtual screening (MDVS). Without HPC, MDVS for a 10K-compound library with tens of nanoseconds of MD simulations requires years of computer time. In contrast, a state-of-the-art HPC system can be 600 times faster than an eight-core PC server in screening a typical drug target (which contains about 40K atoms). Also, careful design of the GPU/CPU architecture can reduce HPC costs. However, the communication cost of parallel computing is a bottleneck that acts as the main limit on further virtual screening improvements for drug innovation.

  13. The Use of Multiple Data Sources in the Process of Topographic Maps Updating

    NASA Astrophysics Data System (ADS)

    Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.

    2016-06-01

    The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats, from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agency portals. Images stored in the archives of satellite missions such as Sentinel and Landsat can be downloaded free of charge. Their main advantages are the large coverage area and rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research on the use of these images for the 1:50,000 scale map. Globally available DEMs can provide appropriate input for watershed delineation and stream network generation, which can be used to support updates of the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and Digital Terrain Models are the main products that can be used for feature extraction and updating. On the other hand, the use of georeferenced analogue basemaps is a significant addition to the process. Concerning thematic maps, the classic representation of the terrain by contour lines derived from a DTM remains the best method of depicting the earth's surface on a map; nevertheless, correlation with other layers such as hydrography is mandatory. In the context of the current national coverage of the Digital Terrain Model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as TOPRO5 and at the

  14. Processing techniques for the production of an experimental computer-generated shaded-relief map

    USGS Publications Warehouse

    Judd, Damon D.

    1986-01-01

    The data consisted of forty-eight 1° by 1° blocks of resampled digital elevation model (DEM) data. These data were digitally mosaicked and assigned colors based on intervals of elevation values. The color-coded data set was then used to create a shaded-relief image that was photographically composited with cartographic line information to produce a shaded-relief map. The majority of the processing was completed at the National Mapping Division EROS Data Center in Sioux Falls, South Dakota.

  15. Ab initio nonadiabatic dynamics of multichromophore complexes: a scalable graphical-processing-unit-accelerated exciton framework.

    PubMed

    Sisto, Aaron; Glowacki, David R; Martinez, Todd J

    2014-09-16

    ("fragmenting") a molecular system and then stitching it back together. In this Account, we address both of these problems, the first by using graphical processing units (GPUs) and electronic structure algorithms tuned for these architectures and the second by using an exciton model as a framework in which to stitch together the solutions of the smaller problems. The multitiered parallel framework outlined here is aimed at nonadiabatic dynamics simulations on large supramolecular multichromophoric complexes in full atomistic detail. In this framework, the lowest tier of parallelism involves GPU-accelerated electronic structure theory calculations, for which we summarize recent progress in parallelizing the computation and use of electron repulsion integrals (ERIs), which are the major computational bottleneck in both density functional theory (DFT) and time-dependent density functional theory (TDDFT). The topmost tier of parallelism relies on a distributed memory framework, in which we build an exciton model that couples chromophoric units. Combining these multiple levels of parallelism allows access to ground and excited state dynamics for large multichromophoric assemblies. The parallel excitonic framework is in good agreement with much more computationally demanding TDDFT calculations of the full assembly.

  16. Optimization of the accelerated curing process of concrete using a fibre Bragg grating-based control system and microwave technology

    NASA Astrophysics Data System (ADS)

    Fabian, Matthias; Jia, Yaodong; Shi, Shi; McCague, Colum; Bai, Yun; Sun, Tong; Grattan, Kenneth T. V.

    2016-05-01

    In this paper, an investigation into the suitability of fibre Bragg gratings (FBGs) for monitoring the accelerated curing process of concrete in a microwave heating environment is presented. In this approach, the temperature data provided by the FBGs are used to automatically regulate the microwave power so that a pre-defined temperature profile is maintained to optimize the curing process, achieving early strength values comparable to those of conventional heat-curing techniques but with significantly reduced energy consumption. The immunity of the FBGs to interference from the microwave radiation ensures stable readings in the targeted environment, unlike conventional electronic sensor probes.
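
    A minimal sketch of the kind of feedback loop described above, assuming a toy first-order thermal model and a simple proportional controller (all gains and plant constants are illustrative):

        def run_control(profile, gain=50.0, dt=1.0, loss=0.02, ambient=20.0):
            """Track a predefined curing temperature profile by modulating power."""
            T, powers = ambient, []
            for T_set in profile:
                power = max(0.0, gain * (T_set - T))             # microwave power command
                T += dt * (0.01 * power - loss * (T - ambient))  # toy heating model
                powers.append(power)
            return powers

        # Ramp from 20 to 60 degrees C, then hold; the controller throttles the
        # power back as the sensor-reported temperature approaches the set point.
        profile = [20.0 + i for i in range(41)] + [60.0] * 60
        run_control(profile)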

  17. Accelerating quantum chemistry calculations with graphical processing units - toward in high-density (HD) silico drug discovery.

    PubMed

    Hagiwara, Yohsuke; Ohno, Kazuki; Orita, Masaya; Koga, Ryota; Endo, Toshio; Akiyama, Yutaka; Sekijima, Masakazu

    2013-09-01

    The growing power of central processing units (CPUs) has made it possible to use quantum mechanical (QM) calculations for in silico drug discovery. However, limited CPU power makes large-scale in silico screening, such as virtual screening with QM calculations, a challenge. Recently, general-purpose computing on graphics processing units (GPGPU) has offered an alternative because of its significantly faster computation compared with CPUs. Here, we review a GPGPU-based supercomputer, TSUBAME2.0, and its promise for next-generation, high-density (HD) in silico drug discovery.

  18. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  19. 24 CFR 200.1530 - Bases for sanctioning a MAP lender.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Bases for sanctioning a MAP lender... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1530 Bases for sanctioning a MAP lender. It is...

  20. 24 CFR 200.1530 - Bases for sanctioning a MAP lender.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Bases for sanctioning a MAP lender... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1530 Bases for sanctioning a MAP lender. It is...

  1. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  2. 24 CFR 200.1530 - Bases for sanctioning a MAP lender.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Bases for sanctioning a MAP lender... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1530 Bases for sanctioning a MAP lender. It is...

  3. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  4. 24 CFR 200.1530 - Bases for sanctioning a MAP lender.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Bases for sanctioning a MAP lender... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1530 Bases for sanctioning a MAP lender. It is...

  5. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  6. 24 CFR 200.1530 - Bases for sanctioning a MAP lender.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Bases for sanctioning a MAP lender... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1530 Bases for sanctioning a MAP lender. It is...

  7. 24 CFR 200.1545 - Appeals of MAP Lender Review Board decisions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Appeals of MAP Lender Review Board... HOUSING AND URBAN DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality Assurance Enforcement § 200.1545 Appeals of MAP Lender Review Board decisions....

  8. Lightweight Hyperspectral Mapping System and a Novel Photogrammetric Processing Chain for UAV-based Sensing

    NASA Astrophysics Data System (ADS)

    Suomalainen, Juha; Franke, Jappe; Anders, Niels; Iqbal, Shahzad; Wenting, Philip; Becker, Rolf; Kooistra, Lammert

    2014-05-01

    We have developed a lightweight Hyperspectral Mapping System (HYMSY) and a novel processing chain for UAV-based mapping. The HYMSY consists of a custom pushbroom spectrometer (range 450-950 nm, FWHM 9 nm, ~20 lines/s, 328 pixels/line), a consumer camera (collecting a 16 MPix raw image every 2 seconds), a GPS-Inertial Navigation System (GPS-INS), and synchronization and data storage units. The weight of the system at take-off is 2.0 kg, allowing us to mount it on a relatively small octocopter. The novel processing chain exploits photogrammetry in the georectification of the hyperspectral data. In the first stage, the photos are processed in photogrammetric software, producing a high-resolution RGB orthomosaic, a Digital Surface Model (DSM), and the photogrammetric UAV/camera position and attitude at the moment of each photo. These photogrammetric camera positions are then used to enhance the internal accuracy of the GPS-INS data. The enhanced GPS-INS data are then used to project the hyperspectral data over the photogrammetric DSM, producing a georectified end product. The presented photogrammetric processing chain allows fully automated georectification of hyperspectral data using a compact GPS-INS unit while still producing higher georeferencing accuracy in UAV use than would be possible with the traditional processing method. During 2013, we operated HYMSY on 150+ octocopter flights at 60+ sites or days. On a typical flight we produced, for a 2-10 ha area, an RGB orthoimage mosaic at 1-5 cm resolution, a DSM at 5-10 cm resolution, and a hyperspectral datacube at 10-50 cm resolution. The targets have mostly been vegetated, including potatoes, wheat, sugar beets, onions, tulips, coral reefs, and heathlands. In this poster we present the Hyperspectral Mapping System and the photogrammetric processing chain with some of our first mapping results.

  9. Topological data analysis of contagion maps for examining spreading processes on networks

    NASA Astrophysics Data System (ADS)

    Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.

    2015-07-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges--for example, due to airline transportation or communication media--allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct 'contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecasting and control of spreading processes. Our approach also highlights contagion maps as a viable tool for inferring low-dimensional structure in networks.

  10. Mapping the particle acceleration in the cool core of the galaxy cluster RX J1720.1+2638

    SciTech Connect

    Giacintucci, S.; Markevitch, M.; Brunetti, G.; Venturi, T.; ZuHone, J. A.

    2014-11-01

    We present new deep, high-resolution radio images of the diffuse minihalo in the cool core of the galaxy cluster RX J1720.1+2638. The images have been obtained with the Giant Metrewave Radio Telescope at 317, 617, and 1280 MHz and with the Very Large Array at 1.5, 4.9, and 8.4 GHz, with angular resolutions ranging from 1'' to 10''. This represents the best radio spectral and imaging data set for any minihalo. Most of the radio flux of the minihalo arises from a bright central component with a maximum radius of ∼80 kpc. A fainter tail of emission extends out from the central component to form a spiral-shaped structure with a length of ∼230 kpc, seen at frequencies of 1.5 GHz and below. We find indications of a possible steepening of the total radio spectrum of the minihalo at high frequencies. Furthermore, a spectral index image shows that the spectrum of the diffuse emission steepens with increasing distance along the tail. A striking spatial correlation is observed between the minihalo emission and two cold fronts visible in the Chandra X-ray image of this cool core. These cold fronts confine the minihalo, as also seen in numerical simulations of minihalo formation by sloshing-induced turbulence. All these observations favor the hypothesis that the radio-emitting electrons in cluster cool cores are produced by turbulent re-acceleration.

  11. A signal processing view of strip-mapping synthetic aperture radar

    NASA Technical Reports Server (NTRS)

    Munson, David C., Jr.; Visentin, Robert L.

    1989-01-01

    The authors derive the fundamental strip-mapping SAR (synthetic aperture radar) imaging equations from first principles. They show that the resolution mechanism relies on the geometry of the imaging situation rather than on the Doppler effect. Both the airborne and spaceborne cases are considered. Range processing is discussed by presenting an analysis of pulse compression and formulating a mathematical model of the radar return signal. This formulation is used to obtain the airborne SAR model. The authors study the resolution mechanism and derive the signal processing relations needed to produce a high-resolution image. They introduce spotlight-mode SAR and briefly indicate how polar-format spotlight processing can be used in strip-mapping SAR. They discuss a number of current and future research directions in SAR imaging.
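
    A short numerical illustration of the pulse-compression step discussed above, using a linear FM (chirp) pulse and a matched filter (all parameters are illustrative):

        import numpy as np

        fs, T, B = 100e6, 10e-6, 20e6                 # sample rate, pulse length, bandwidth
        t = np.arange(0, T, 1.0 / fs)
        chirp = np.exp(1j * np.pi * (B / T) * t ** 2)  # linear FM pulse

        echo = np.concatenate([np.zeros(300), chirp, np.zeros(300)])  # delayed return
        compressed = np.correlate(echo, chirp, mode="same")           # matched filter

        peak = np.argmax(np.abs(compressed))  # the peak marks the target delay
        # After compression the range resolution is c / (2B), about 7.5 m here,
        # independent of the (much longer) transmitted pulse length.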

  12. Mapping knowledge translation and innovation processes in Cancer Drug Development: the case of liposomal doxorubicin.

    PubMed

    Fajardo-Ortiz, David; Duran, Luis; Moreno, Laura; Ochoa, Hector; Castaño, Victor M

    2014-09-03

    We explored how the knowledge translation and innovation processes are structured when they result in innovations, as in the case of liposomal doxorubicin research. In order to map the processes, a literature network analysis was made with Cytoscape, and semantic analysis was performed with GOPubmed, which is based on the controlled vocabularies MeSH (Medical Subject Headings) and GO (Gene Ontology). We found clusters related to different stages of technological development (invention, innovation and imitation) and of the knowledge translation process (preclinical, translational and clinical research), and we were able to map the historic emergence of Doxil as a paradigmatic nanodrug. This approach could be a powerful methodological tool for decision-making and innovation management in drug delivery research.

  13. Potensoft: MATLAB-based software for potential field data processing, modeling and mapping

    NASA Astrophysics Data System (ADS)

    Özgü Arısoy, M.; Dikmen, Ünal

    2011-07-01

    Open-source software including an easy-to-use graphical user interface (GUI) has been developed for the processing, modeling and mapping of gravity and magnetic data. The program, called Potensoft, is a set of functions written in MATLAB. The most common application of Potensoft is spatial- and frequency-domain filtering of gravity and magnetic data. The GUI helps the user easily change all the required parameters. One of the major advantages of the program is that it displays the input and processed maps in a preview window, thereby allowing the user to track the results during the ongoing process. The source code can be modified depending on the user's goals. This paper discusses the main features of the program, and its capabilities are demonstrated by means of illustrative examples. The main objective is to introduce and encourage usage of the developed package for academic, teaching and professional purposes.
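
    A typical wavenumber-domain filter of the kind such packages provide is upward continuation, which attenuates short-wavelength anomalies by exp(-|k|h); a sketch on a synthetic grid (the grid and parameters are illustrative):

        import numpy as np

        def upward_continue(grid, dx, height):
            """Continue a gridded potential field upward by `height` (same units as dx)."""
            ny, nx = grid.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
            k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
            return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * height)))

        anomaly = np.random.randn(128, 128)  # stand-in gravity/magnetic grid
        smoothed = upward_continue(anomaly, dx=100.0, height=500.0)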

  14. Insights into siloxane removal from biogas in biotrickling filters via process mapping-based analysis.

    PubMed

    Soreanu, Gabriela

    2016-03-01

    Data process mapping using response surface methodology (RSM)-based computational techniques is performed in this study for the diagnosis of a laboratory-scale biotrickling filter applied to siloxane (i.e. octamethylcyclotetrasiloxane (D4) and decamethylcyclopentasiloxane (D5)) removal from biogas. A mathematical model describing the process performance (i.e. Si removal efficiency, %) was obtained as a function of key operating parameters (e.g. biogas flowrate and D4 and D5 concentrations). The contour plots and response surfaces generated for the obtained objective function indicate a minimization trend in siloxane removal performance; however, a maximum performance of approximately 60% Si removal efficiency was recorded. Analysis of the process mapping results provides indicators for improving the biological system performance.
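
    RSM fits of this kind are typically quadratic surfaces estimated by least squares; a sketch with synthetic data (the variables and coefficients are illustrative, not the study's values):

        import numpy as np

        x1 = np.random.uniform(0.5, 2.0, 30)    # biogas flowrate (arbitrary units)
        x2 = np.random.uniform(5.0, 50.0, 30)   # siloxane concentration
        y = 60.0 - 5.0 * x1 - 0.3 * x2 + np.random.randn(30)  # Si removal, %

        # Design matrix of the full quadratic model:
        # y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        # Contour plots of X @ coef over the (x1, x2) plane give the kind of
        # response surfaces and contour maps analyzed in such studies.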

  15. What can we learn from inverse methods regarding the processes behind the acceleration and retreat of Helheim glacier (Greenland)?

    NASA Astrophysics Data System (ADS)

    Gagliardini, O.; Gillet-chaulet, F.; Martin, N.; Monnier, J.; Singh, J.

    2011-12-01

    Greenland outlet glaciers control the ice discharge toward the sea and the resulting contribution to sea level rise. The physical processes at the root of the observed acceleration and retreat - a decrease of the back force at the calving terminus, an increase of basal lubrication, and a decrease of lateral friction - are still not well understood. All three processes certainly play a role, but their relative contributions have not yet been quantified. Helheim glacier, located on the east coast of Greenland, has undergone an enhanced retreat since 2003, and this retreat was concurrent with accelerated ice flow. In this study, a flowline dataset including surface elevation, surface velocity and front position of Helheim from 2001 to 2006 is used to quantify the sensitivity of each of these processes. For that, we use the full-Stokes finite element ice flow model DassFlow/Ice, including an adjoint code and a full 4D-Var data assimilation process in which the control variables are the basal and lateral friction parameters as well as the calving front pressure. For each available date, the sensitivity of each process is first studied and an optimal distribution is then inferred from the surface measurements. Using this optimal distribution of the parameters, a transient simulation is performed over the whole dataset period. The relative contributions of basal friction, lateral friction and front back force are then discussed in the light of these new results.

  16. Data processing strategy of Raman chemical maps: data characteristics and behavior

    NASA Astrophysics Data System (ADS)

    Lee, Eunah

    2007-09-01

    Raman maps, when acquired and processed successfully, produce Raman chemical images, which provide detailed information on the spatial distribution and morphology of individual chemical species in samples. The advantages of Raman chemical images are most significant when the sample is chemically and structurally complicated. In pharmaceutical applications, these Raman chemical images can be used to understand and develop drug formulations, drug delivery mechanisms, and drug-cellular interactions. Studies using Raman hyperspectral imaging - the term that encompasses the entire procedure from data measurement to processing and interpretation - are increasing and gaining wider acceptance due to recent improvements in Raman instrumentation and software. Since Raman maps are a collection of numerous Raman spectra of different chemical species, spectral characteristics such as scattering strength, fluorescence level, and baselines vary a great deal within a single data set. To acquire and process a Raman map successfully, this heterogeneity must be taken into consideration. This paper shows the impact of the signal-to-noise ratio (S/N) on data processing strategies and their results. It is demonstrated that the S/N of the original data is critical for good classification and scientifically meaningful results regardless of the processing strategy.

  17. A working environment for digital planetary data processing and mapping using ISIS and GRASS GIS

    NASA Astrophysics Data System (ADS)

    Frigeri, Alessandro; Hare, Trent; Neteler, Markus; Coradini, Angioletta; Federico, Costanzo; Orosei, Roberto

    2011-09-01

    Since the beginning of planetary exploration, mapping has been fundamental to summarize observations returned by scientific missions. Sensor-based mapping has been used to highlight specific features from the planetary surfaces by means of processing. Interpretative mapping makes use of instrumental observations to produce thematic maps that summarize observations of actual data into a specific theme. Geologic maps, for example, are thematic interpretative maps that focus on the representation of materials and processes and their relative timing. The advancements in technology of the last 30 years have allowed us to develop specialized systems where the mapping process can be made entirely in the digital domain. The spread of networked computers on a global scale allowed the rapid propagation of software and digital data such that every researcher can now access digital mapping facilities on his desktop. The efforts to maintain planetary missions data accessible to the scientific community have led to the creation of standardized digital archives that facilitate the access to different datasets by software capable of processing these data from the raw level to the map projected one. Geographic Information Systems (GIS) have been developed to optimize the storage, the analysis, and the retrieval of spatially referenced Earth based environmental geodata; since the last decade these computer programs have become popular among the planetary science community, and recent mission data start to be distributed in formats compatible with these systems. Among all the systems developed for the analysis of planetary and spatially referenced data, we have created a working environment combining two software suites that have similar characteristics in their modular design, their development history, their policy of distribution and their support system. The first, the Integrated Software for Imagers and Spectrometers (ISIS) developed by the United States Geological Survey

  18. A working environment for digital planetary data processing and mapping using ISIS and GRASS GIS

    USGS Publications Warehouse

    Frigeri, A.; Hare, T.; Neteler, M.; Coradini, A.; Federico, C.; Orosei, R.

    2011-01-01

    Since the beginning of planetary exploration, mapping has been fundamental to summarize observations returned by scientific missions. Sensor-based mapping has been used to highlight specific features from the planetary surfaces by means of processing. Interpretative mapping makes use of instrumental observations to produce thematic maps that summarize observations of actual data into a specific theme. Geologic maps, for example, are thematic interpretative maps that focus on the representation of materials and processes and their relative timing. The advancements in technology of the last 30 years have allowed us to develop specialized systems where the mapping process can be made entirely in the digital domain. The spread of networked computers on a global scale allowed the rapid propagation of software and digital data such that every researcher can now access digital mapping facilities on his desktop. The efforts to maintain planetary missions data accessible to the scientific community have led to the creation of standardized digital archives that facilitate the access to different datasets by software capable of processing these data from the raw level to the map projected one. Geographic Information Systems (GIS) have been developed to optimize the storage, the analysis, and the retrieval of spatially referenced Earth based environmental geodata; since the last decade these computer programs have become popular among the planetary science community, and recent mission data start to be distributed in formats compatible with these systems. Among all the systems developed for the analysis of planetary and spatially referenced data, we have created a working environment combining two software suites that have similar characteristics in their modular design, their development history, their policy of distribution and their support system. The first, the Integrated Software for Imagers and Spectrometers (ISIS) developed by the United States Geological Survey

  19. Accelerating the commercialization of university technologies for military healthcare applications: the role of the proof of concept process

    NASA Astrophysics Data System (ADS)

    Ochoa, Rosibel; DeLong, Hal; Kenyon, Jessica; Wilson, Eli

    2011-06-01

    The von Liebig Center for Entrepreneurism and Technology Advancement at UC San Diego (vonliebig.ucsd.edu) is focused on accelerating technology transfer and commercialization through programs and education on entrepreneurism. Technology Acceleration Projects (TAPs), which offer pre-venture grants and extensive mentoring on technology commercialization, are a key component of its model, which has been developed over the past ten years with the support of a grant from the von Liebig Foundation. In 2010, the von Liebig Entrepreneurism Center partnered with the U.S. Army Telemedicine and Advanced Technology Research Center (TATRC) to develop a regional Technology Acceleration Program model, initially focused on military research, to be deployed across the nation to increase awareness of military medical needs and to accelerate the commercialization of novel technologies to treat the patient. Participants in these challenges are multi-disciplinary teams of graduate students and faculty in engineering, medicine and business, representing universities and research institutes in a region and selected via a competitive process, who receive commercialization assistance and funding grants to support translation of their research discoveries into products or services. To validate this model, a pilot program focused on the commercialization of wireless healthcare technologies targeting campuses in Southern California has been conducted with the additional support of Qualcomm, Inc. Three projects representing three different universities in Southern California were selected out of forty-five applications from ten different universities and research institutes. Over the next twelve months, these teams will conduct proof-of-concept studies, technology development and preliminary market research to determine the commercial feasibility of their technologies. This first regional program will help build the tools and processes needed to adapt and replicate this model across other regions in the

  20. Neutron activation processes simulation in an Elekta medical linear accelerator head.

    PubMed

    Juste, B; Miró, R; Verdú, G; Díez, S; Campayo, J M

    2014-01-01

    Monte Carlo estimates of the giant-dipole-resonance (GDR) photoneutrons inside the Elekta Precise LINAC head (emitting a 15 MV photon beam) were performed using MCNP6 (the general-purpose Monte Carlo N-Particle code, version 6). Each component of the LINAC head geometry and its materials was modelled in detail using the manufacturer's information. Primary photons generate photoneutrons, and their transport across the treatment head was simulated, including the (n, γ) reactions that produce activation products. MCNP6 was used to develop a method for quantifying the activation of accelerator components. The approach described in this paper is useful for quantifying the origin and the amount of nuclear activation.

  1. [Effect of pilot UASB-SFSBR-MAP process for the large scale swine wastewater treatment].

    PubMed

    Wang, Liang; Chen, Chong-Jun; Chen, Ying-Xu; Wu, Wei-Xiang

    2013-03-01

    In this paper, a treatment process consisting of a UASB, a step-fed sequencing batch reactor (SFSBR) and a magnesium ammonium phosphate (MAP) precipitation reactor was built to treat large-scale swine wastewater, aiming to overcome drawbacks of conventional anaerobic-aerobic and SBR treatment processes such as low denitrification efficiency, high operating costs and high nutrient losses. Based on this treatment process, a pilot plant was constructed. The experimental results showed that the removal efficiencies of COD, NH4+-N and TP reached 95.1%, 92.7% and 88.8%, respectively; the recovery rates of NH4+-N and TP by the MAP process reached 23.9% and 83.8%; and the effluent quality was superior to the discharge standard of pollutants for livestock and poultry breeding (GB 18596-2001), with mass concentrations of COD, TN, NH4+-N, TP and SS no higher than 135, 116, 43, 7.3 and 50 mg·L⁻¹, respectively. The process developed was reliable, kept a self-balance of carbon source and alkalinity, and reached high nutrient recovery efficiency, at an operating cost equal to that of the traditional anaerobic-aerobic treatment process. The treatment process therefore has high application and dissemination value and is suitable for the treatment of large-scale swine wastewater in China.

  2. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. The motion correction was performed by minimizing the sum of the pixel values of the differential image over axially and laterally pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than the acquisition rate of the line scan camera (46.9 kHz). Our OCT system thus provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
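
    A sketch of the BTM search and differential angiography described above: find the integer axial/lateral shift that best aligns two sequential structural frames, then take the squared difference as the flow signal. The frame data below are synthetic, and on a GPU the per-shift costs would be evaluated in parallel:

        import numpy as np

        def btm_corrected_angio(f1, f2, max_shift=3):
            best, best_cost = (0, 0), np.inf
            for dz in range(-max_shift, max_shift + 1):       # axial shifts
                for dx in range(-max_shift, max_shift + 1):   # lateral shifts
                    cost = np.sum((f1 - np.roll(f2, (dz, dx), axis=(0, 1))) ** 2)
                    if cost < best_cost:
                        best, best_cost = (dz, dx), cost
            dz, dx = best
            return (f1 - np.roll(f2, (dz, dx), axis=(0, 1))) ** 2  # angiogram

        frame1 = np.random.rand(64, 64)
        frame2 = np.roll(frame1, (1, 2), axis=(0, 1))  # pure bulk motion, no flow
        angio = btm_corrected_angio(frame1, frame2)    # ~zero after correction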

  3. An Approach to Optimize Size Parameters of Forging by Combining Hot-Processing Map and FEM

    NASA Astrophysics Data System (ADS)

    Hu, H. E.; Wang, X. Y.; Deng, L.

    2014-11-01

    The size parameters of a 6061 aluminum alloy rib-web forging were optimized by combining a hot-processing map and the finite element method (FEM), based on high-temperature compression data. The results show that the stress level of the alloy can be represented by a Zener-Hollomon parameter in a hyperbolic sine-type equation with a hot deformation activation energy of 343.7 kJ/mol. Dynamic recovery and dynamic recrystallization proceeded concurrently during high-temperature deformation of the alloy. The optimal hot-processing parameters for the alloy, corresponding to the peak efficiency value of 0.42, are 753 K and 0.001 s⁻¹. The instability domain occurs at deformation temperatures lower than 653 K. FEM is a viable method for validating the hot-processing map in actual manufacturing, by analyzing the effect of corner radius, rib width, and web thickness on the workability of the rib-web forging. Size parameters of die forgings can thus be optimized conveniently by combining a hot-processing map and FEM.
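
    With the reported activation energy, the Zener-Hollomon parameter Z = (strain rate) * exp(Q/RT) can be evaluated directly for any hot-working condition, for example at the optimal condition quoted above:

        import math

        Q = 343.7e3   # J/mol, hot deformation activation energy from the study
        R = 8.314     # J/(mol K)

        def zener_hollomon(strain_rate, T_kelvin):
            return strain_rate * math.exp(Q / (R * T_kelvin))

        Z = zener_hollomon(1e-3, 753.0)  # optimal domain: 753 K, 0.001 1/s
        # The flow stress then follows the hyperbolic-sine law
        # Z = A * sinh(alpha * sigma)**n, with A, alpha and n fitted
        # from the compression data.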

  4. Improved laser damage threshold performance of calcium fluoride optical surfaces via Accelerated Neutral Atom Beam (ANAB) processing

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, S.; Walsh, M.; Svrluga, R.; Thomas, M.

    2015-11-01

    Optics are not keeping up with the pace of laser advancements. The laser industry is rapidly increasing its power capabilities and reducing wavelengths, which has exposed optics as a weak link in lifetime failures of these advanced systems. Nanometer-sized surface defects (scratches, pits, bumps and residual particles) on the surfaces of optics are a significant limiting factor for high-end performance. Angstrom-level smoothing of materials such as calcium fluoride, spinel, magnesium fluoride, zinc sulfide, LBO and others presents a unique challenge for traditional polishing techniques. Exogenesis Corporation, using its new and proprietary Accelerated Neutral Atom Beam (ANAB) technology, is able to remove nano-scale surface damage and particle contamination, leaving many material surfaces with a roughness typically around one Angstrom. This surface defect mitigation via ANAB processing can be shown to increase the performance properties of high-intensity optical materials. This paper describes the ANAB technology and summarizes smoothing results for calcium fluoride laser windows. It further correlates laser damage threshold improvements with the smoothing produced by ANAB surface treatment. All ANAB processing was performed at Exogenesis Corporation using an nAccel100TM Accelerated Particle Beam processing tool. All surface measurement data for the paper were produced via AFM analysis on a Park Model XE70 AFM, and all laser damage testing was performed at Spica Technologies, Inc. Exogenesis Corporation's ANAB processing technology is a new and unique surface modification technique that has been demonstrated to be highly effective at correcting nano-scale surface defects. ANAB is a non-contact vacuum process comprising an intense beam of accelerated, electrically neutral gas atoms with average energies of a few tens of electron volts. The ANAB process does not apply the mechanical forces associated with traditional polishing techniques. ANAB efficiently removes surface

  5. Journey to the Edges: Social Structures and Neural Maps of Intergroup Processes

    PubMed Central

    Fiske, Susan T.

    2013-01-01

    This article explores the boundaries of the intellectual map of intergroup processes, going to the macro (social structure) boundary and the micro (neural systems) boundary. Both are illustrated with my own and others' work on social structures and on neural structures related to intergroup processes. Analyzing the impact of social structures on intergroup processes led to insights about distinct forms of sexism and underlies current work on forms of ageism. The stereotype content model also starts with the social structure of intergroup relations (interdependence and status) and predicts images, emotions, and behaviors. Social structure has much to offer the social psychology of intergroup processes. At the other, less explored boundary, social neuroscience addresses the effects of social contexts on neural systems relevant to intergroup processes. Both social structural and neural analyses circle back to traditional social psychology as converging indicators of intergroup processes. PMID:22435843

  6. Spotlight-Mode Synthetic Aperture Radar Processing for High-Resolution Lunar Mapping

    NASA Technical Reports Server (NTRS)

    Harcke, Leif; Weintraub, Lawrence; Yun, Sang-Ho; Dickinson, Richard; Gurrola, Eric; Hensley, Scott; Marechal, Nicholas

    2010-01-01

    During the 2008-2009 year, the Goldstone Solar System Radar was upgraded to support radar mapping of the lunar poles at 4 m resolution. The finer resolution of the new system and the accompanying migration through resolution cells called for spotlight, rather than delay-Doppler, imaging techniques. A new pre-processing system supports fast-time Doppler removal and motion compensation to a point. Two spotlight imaging techniques, which compensate for phase errors due to (i) out-of-focus-plane motion of the radar and (ii) local topography, have been implemented and tested. One is based on the polar format algorithm followed by a unique autofocus technique; the other is a full bistatic time-domain backprojection technique. The processing system yields imagery at the specified resolution. Products enabled by this new system include topographic mapping through radar interferometry and change detection techniques (amplitude and coherent change) for geolocation of the NASA LCROSS mission impact site.

  7. Development of a new flux map processing code for moveable detector system in PWR

    SciTech Connect

    Li, W.; Lu, H.; Li, J.; Dang, Z.; Zhang, X.

    2013-07-01

    This paper presents an introduction to the development of the flux map processing code MAPLE, developed by the China Nuclear Power Technology Research Institute (CNPRI) of the China Guangdong Nuclear Power Group (CGN). The method used to obtain the three-dimensional 'measured' power distribution from the measurement signals is also described. Three methods, namely the Weight Coefficient Method (WCM), the Polynomial Expansion Method (PEM) and the Thin Plate Spline (TPS) method, have been applied to fit the deviation between measured and predicted results on the two-dimensional radial plane. The measured flux map data of the LINGAO nuclear power plant (NPP) are processed using MAPLE as a test case to compare the effectiveness of the three methods, combined with the 3D neutronics code COCO. Assembly power distribution results show that the MAPLE results are reasonable and satisfactory. More verification and validation of the MAPLE code will be carried out in the future. (authors)
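
    Of the three fitting options named above, the thin plate spline is the easiest to sketch; here SciPy's radial basis function interpolator stands in for MAPLE's implementation, on synthetic detector data (all coordinates and deviations are illustrative):

        import numpy as np
        from scipy.interpolate import Rbf

        # Synthetic stand-ins: detector positions (normalized radial-plane
        # coordinates) and the measured-minus-predicted power deviation there.
        x = np.random.uniform(-1.0, 1.0, 40)
        y = np.random.uniform(-1.0, 1.0, 40)
        dev = 0.02 * np.sin(3 * x) * np.cos(2 * y)

        tps = Rbf(x, y, dev, function="thin_plate")  # thin plate spline fit
        grid = np.linspace(-1.0, 1.0, 50)
        XX, YY = np.meshgrid(grid, grid)
        dev_map = tps(XX, YY)  # smooth deviation surface over the radial plane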

  8. Optical mapping of Kamchatka's volcanic deposits using digitally processed band-selective photographs

    NASA Astrophysics Data System (ADS)

    Ivanov, S. I.; Novikov, V. V.; Popov, A. P.; Tadzhidinov, Kh. G.

    1989-08-01

    A procedure is described for the digital processing of band-selective aerial photographs of volcano-bearing surfaces. The brightness and color parameters of samples of volcanic rocks and soils in their natural bedding are examined, and the results of two-parameter (albedo-color) mapping for an area around the Tolbachin Volcano are discussed. It is shown that the information obtained with this procedure yields accurate predictions of geochemical properties of volcanic deposits from optical data.

  9. Studies of Nb3Sn Strands Based on the Restacked-Rod Process for High Field Accelerator Magnets

    DOE PAGES

    Barzi, E.; Bossert, M.; Gallo, G.; ...

    2011-12-21

    A major thrust in Fermilab's accelerator magnet R&D program is the development of Nb3Sn wires which meet target requirements for high field magnets, such as high critical current density, low effective filament size, and the capability to withstand the cabling process. The performance of a number of strands with 150/169 restack design produced by Oxford Superconducting Technology was studied for round and deformed wires. To optimize the maximum plastic strain, finite element modeling was also used as an aid in the design. Results of mechanical, transport and metallographic analyses are presented for round and deformed wires.

  10. Young coconut juice can accelerate the healing process of cutaneous wounds

    PubMed Central

    2012-01-01

Background Estrogen has been reported to accelerate cutaneous wound healing. This research studies the effect of young coconut juice (YCJ), presumably containing estrogen-like substances, on cutaneous wound healing in ovariectomized rats. Methods Four groups of female rats (6 in each group) were included in this study. These included sham-operated, ovariectomized (ovx), ovx receiving estradiol benzoate (EB) injections intraperitoneally, and ovx receiving YCJ orally. Two equidistant 1-cm full-thickness skin incisional wounds were made two weeks after ovariectomy. The rats were sacrificed at the end of the third and the fourth week of the study, and their serum estradiol (E2) level was measured by chemiluminescent immunoassay. The skin was excised and examined in histological sections stained with H&E, and immunostained using anti-estrogen receptor (ER-α and ER-β) antibodies. Results Wound healing was accelerated in ovx rats receiving YCJ, as compared to controls. This was associated with a significantly higher density of immunostaining for ER-α and ER-β in keratinocytes, fibroblasts, white blood cells, fat cells, sebaceous glands, skeletal muscles, and hair shafts and follicles. It was also associated with thicker epidermis and dermis, but thinner hypodermis. In addition, the number and size of hair follicles immunoreactive for both ER-α and ER-β were highest in the ovx+YCJ group, as compared to the ovx+EB group. Conclusions This study demonstrates that YCJ has estrogen-like characteristics, which in turn seem to have beneficial effects on cutaneous wound healing. PMID:23234369

  11. Melatonin improves inflammation processes in liver of senescence-accelerated prone male mice (SAMP8).

    PubMed

    Cuesta, Sara; Kireev, Roman; Forman, Katherine; García, Cruz; Escames, Germaine; Ariznavarreta, Carmen; Vara, Elena; Tresguerres, Jesús A F

    2010-12-01

Aging is associated with an increase in oxidative stress and inflammation. The aim of this study was to investigate the effect of aging on various physiological parameters related to inflammation in livers obtained from two types of male mouse models, senescence-accelerated prone (SAMP8) and senescence-accelerated resistant (SAMR1) mice, and to study the influence of one month of melatonin administration (1 mg/kg/day) to old SAMP8 mice on these parameters. The parameters studied were the mRNA expression of TNF-α, iNOS, IL-1β, HO-1, HO-2, MCP1, NFκB1, NFκB2, NFκB protein (NKAP) and IL-10, all measured by real-time reverse transcription polymerase chain reaction (RT-PCR). Furthermore, we analyzed the protein expression of TNF-α, iNOS, IL-1β, HO-1, HO-2, and IL-10 by Western blot. Aging increased oxidative stress and inflammation, especially in the liver of SAMP8 mice. Treatment with melatonin decreased the mRNA expression of TNF-α, IL-1β, HO (HO-1 and HO-2), iNOS, MCP1, NFκB1, NFκB2 and NKAP in old male mice. The protein expression of TNF-α and IL-1β also decreased, and that of IL-10 increased, with melatonin treatment; no significant differences were observed in the rest of the parameters analyzed. The present study showed that aging was related to inflammation in livers obtained from old male senescence-prone (SAMP8) and old male senescence-resistant (SAMR1) mice, with the alterations being more evident in the former. Exogenous administration of melatonin was able to reduce inflammation.

  12. Energetic ions and electrons and their acceleration processes in the magnetotail

    NASA Astrophysics Data System (ADS)

    Scholer, Manfred

Many years of observations of energetic particle fluxes in the geomagnetic tail have shown that these particles exhibit a bursty appearance on all time scales. However, the bursty appearance is often merely due to multiple entries and exits of the spacecraft into and out of the plasma sheet, which always contains varying fluxes of energetic particles. Therefore these bursts should not in every case be immediately associated with reconnection. Nevertheless, the fact that charged particles are accelerated to high energies within the magnetosphere has to be explained, and reconnection may ultimately be a promising candidate. In addition to these entries and exits into and out of the plasma sheet, there occur short-term bursts well within the plasma sheet which may be the direct signature of reconnection. At the boundary of the recovering plasma sheet, earthward directed beams of energetic ions have been observed which may be due to more steady-state reconnection in the distant tail. During plasma sheet dropout at substorm onset, short-lived (~40 s) high energy particle bursts occur which are related to the newly created near-Earth neutral line. Recent results from the ISEE 3 deep tail mission have revealed the existence of fast tailward moving plasma structures which are preceded by energetic electron and ion beams. The observed velocity dispersions during the appearance of these beams allow a determination of the source location. Finally, it is noted that the vast literature on energetic burst observations in the geomagnetic tail has to be contrasted with the existence of only a few theoretical papers which deal with particle acceleration to high energies during reconnection in a more quantitative way.

  13. Interactive remote data processing using Pixelize Wavelet Filtration (PWF-method) and PeriodMap analysis

    NASA Astrophysics Data System (ADS)

    Sych, Robert; Nakariakov, Valery; Anfinogentov, Sergey

Wavelet analysis is suitable for investigating waves and oscillations in the solar atmosphere, which are limited in both time and frequency. We have developed an algorithm to detect such waves using Pixelize Wavelet Filtration (the PWF method). This method obtains information about the presence of propagating and non-propagating waves in the observational data (cubes of images) and localizes them precisely in time as well as in space. We tested the algorithm and found that the results of coronal wave detection are consistent with those obtained by visual inspection. For fast exploration of a data cube, we additionally applied the previously developed PeriodMap analysis. This method is based on the Fast Fourier Transform and, at an initial stage, quickly locates "hot" regions with peak harmonic oscillations and determines the spatial distribution at the significant harmonics. We propose splitting the detection of coronal waves into two parts: in the first part, we apply the PeriodMap analysis (fast preparation); in the second part, we use the information about the spatial distribution of oscillation sources to apply the PWF method (slow preparation). There are two possible ways of working with the data: an automatic and a hands-on operation mode. In the first, we use multiple PWF analysis to prepare narrowband maps in frequency subbands at multiples of two and/or harmonic PWF analysis for separate harmonics in a spectrum. In the second, we manually select the necessary spectral subband and temporal interval and then construct narrowband maps. For practical implementation of the proposed methods, we have developed a remote data processing system at the Institute of Solar-Terrestrial Physics, Irkutsk. The system is based on the data processing server http://pwf.iszf.irk.ru. The main aim of this resource is the calculation, via remote access through a local and/or global network (Internet), of narrowband maps of wave sources both in the whole spectral band and at significant harmonics. In addition
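
    The PeriodMap pre-scan reduces to an FFT along the time axis of the data cube, keeping the strongest harmonic per pixel so that "hot" oscillating regions can be located quickly. A minimal illustrative NumPy version, with all names assumed:

```python
import numpy as np

def period_map(cube, dt):
    """Dominant oscillation period per pixel (illustrative).

    cube : (n_t, n_y, n_x) image sequence; dt : cadence in seconds.
    """
    n_t = cube.shape[0]
    detrended = cube - cube.mean(axis=0)           # remove static background
    spec = np.abs(np.fft.rfft(detrended, axis=0))  # amplitude spectrum per pixel
    freqs = np.fft.rfftfreq(n_t, d=dt)
    peak = spec[1:].argmax(axis=0) + 1             # skip the zero-frequency bin
    return 1.0 / freqs[peak]                       # dominant period [s]
```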

  14. The Common Element Effect of Abstract-to-Abstract Mapping in Language Processing.

    PubMed

    Chen, Xuqian; Wang, Guixiang; Liang, Yuchan

    2016-01-01

Since the 1990s, there has been much discussion about how concepts are learned and processed. Many researchers believe that experienced bodily states (i.e., embodied experiences) are an important factor affecting the learning and use of concepts, and metaphorical mappings between abstract concepts, such as TIME and POWER, and concrete concepts, such as SPATIAL ORIENTATION and STRUCTURED EXPERIENCEs, suggest connections between abstract and concrete concepts. In much of the recent literature, common elements (e.g., concrete concepts) are shared by different abstract-concrete metaphorical expressions. We therefore hypothesized that mappings might also be found between two abstract concepts that share common elements, even though they have no symbolic connections. In the present study, two lexical decision tasks were arranged, and the priming effect between TIME and ABSTRACT ACTIONs was used as an index to test our hypothesis. Results showed a robust priming effect when a target verb and its prime belonged to the same duration type (TIME-consistent condition). These findings suggest that mapping between concepts is affected by common elements. We propose a dynamic model in which mappings between concepts are influenced by common elements, including symbolic or embodied information. What kind of elements (linguistic or embodied) can be used would depend on how difficult it is for a concept to be learned or accessed.

  15. Processing of noised residual stress phase maps by using a 3D phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Viotti, Matias R.; Fantin, Analucia V.; Albertazzi, Armando; Willemann, Daniel P.

    2013-07-01

The measurement of residual stress by using digital speckle pattern interferometry (DSPI) combined with the hole drilling technique is a valuable and fast tool for integrity evaluation of civil structures and mechanical parts. However, in some cases, measured phase maps are badly corrupted by noise, which makes phase unwrapping a difficult and unsuccessful task. Following the recommendations given by the ASTM E837 standard, 20 consecutive hole steps should be performed for the measurement of non-uniform stresses. As a consequence, 20 difference phase maps along the hole depth are available to the DSPI technique. An adaptive phase unwrapping algorithm can unwrap these images by following paths along well-modulated pixels, performing either two-dimensional phase unwrapping (paths inside the difference phase map of a single hole step) or 3D phase unwrapping (similar to temporal phase unwrapping, with paths running through well-modulated pixels in a previous or subsequent hole image). Non-corrupted and corrupted hole-drilling tests were processed with a traditional phase unwrapping algorithm as well as with the proposed 3D approach. Comparisons between unwrapped and simulated phase maps showed that the proposed 3D method agreed with the simulations better than the 2D results did.
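
    The 2D-versus-3D distinction can be tried with scikit-image's unwrap_phase, which accepts both single images and 3D stacks. Note this is a generic reliability-guided unwrapper, not the authors' adaptive modulation-following algorithm, and the synthetic phase below is a placeholder:

```python
import numpy as np
from skimage.restoration import unwrap_phase

# Synthetic stand-in for 20 difference phase maps, one per hole step,
# stacked along the first axis so hole depth acts as a third dimension.
z = np.linspace(0.0, 1.0, 20)[:, None, None]
yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
true_phase = 30.0 * z * np.exp(-(xx**2 + yy**2))  # smooth, spans many cycles
wrapped = np.angle(np.exp(1j * true_phase))       # wrap into (-pi, pi]

# 2D route: unwrap each hole step independently
unwrapped_2d = np.stack([unwrap_phase(w) for w in wrapped])

# 3D route: unwrapping paths may also run through the previous or
# subsequent hole images, analogous to temporal phase unwrapping
unwrapped_3d = unwrap_phase(wrapped)
```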

  16. The Common Element Effect of Abstract-to-Abstract Mapping in Language Processing

    PubMed Central

    Chen, Xuqian; Wang, Guixiang; Liang, Yuchan

    2016-01-01

Since the 1990s, there has been much discussion about how concepts are learned and processed. Many researchers believe that experienced bodily states (i.e., embodied experiences) are an important factor affecting the learning and use of concepts, and metaphorical mappings between abstract concepts, such as TIME and POWER, and concrete concepts, such as SPATIAL ORIENTATION and STRUCTURED EXPERIENCEs, suggest connections between abstract and concrete concepts. In much of the recent literature, common elements (e.g., concrete concepts) are shared by different abstract-concrete metaphorical expressions. We therefore hypothesized that mappings might also be found between two abstract concepts that share common elements, even though they have no symbolic connections. In the present study, two lexical decision tasks were arranged, and the priming effect between TIME and ABSTRACT ACTIONs was used as an index to test our hypothesis. Results showed a robust priming effect when a target verb and its prime belonged to the same duration type (TIME-consistent condition). These findings suggest that mapping between concepts is affected by common elements. We propose a dynamic model in which mappings between concepts are influenced by common elements, including symbolic or embodied information. What kind of elements (linguistic or embodied) can be used would depend on how difficult it is for a concept to be learned or accessed. PMID:27822192

  17. Beyond event segmentation: spatial- and social-cognitive processes in verb-to-action mapping.

    PubMed

    Friend, Margaret; Pace, Amy

    2011-05-01

    The present article investigates spatial- and social-cognitive processes in toddlers' mapping of concepts to real-world events. In 2 studies we explore how event segmentation might lay the groundwork for extracting actions from the event stream and conceptually mapping novel verbs to these actions. In Study 1, toddlers demonstrated the ability to segment a novel multiaction event by selecting a single action for behavioral reenactment when prompted. In Study 2, a single action embedded in the event sequence was specified as the referent for a novel verb through cues to support intentional inference. As in Study 1, toddlers spontaneously segmented the sequence. In addition, they mapped a novel label to the embedded action specified by the novel label and intentional cues. These data are consistent with current thinking that suggests a convergence of spatial and social cognition in verb learning. In the present research, toddlers built upon event segmentation skills and used intentional inference when mapping verbs to actions embedded in the motion stream.

  18. PREFACE: 3rd International Workshop on Materials Analysis and Processing in Magnetic Fields (MAP3)

    NASA Astrophysics Data System (ADS)

    Sakka, Yoshio; Hirota, Noriyuki; Horii, Shigeru; Ando, Tsutomu

    2009-07-01

The 3rd International Workshop on Materials Analysis and Processing in Magnetic Fields (MAP3) was held on 14-16 May 2008 at the University of Tokyo, Japan. The first workshop was held in March 2004 at the National High Magnetic Field Laboratory in Tallahassee, USA. Two years later the second took place in Grenoble, France. MAP3 was held as a University of Tokyo International Symposium, jointly with the MANA Workshop on Materials Processing by External Stimulation and the JSPS CORE Program of Construction of the World Center on Electromagnetic Processing of Materials. At the end of MAP3 it was decided that the next workshop, MAP4, will be held in Atlanta, USA in 2010. Processing in magnetic fields is a rapidly expanding research area with a wide range of promising applications in materials science. MAP3 focused on the magnetic field interactions involved in the study and processing of materials in all disciplines ranging from physics to chemistry and biology: magnetic field effects on chemical, physical, biological, electrochemical, thermodynamic and hydrodynamic phenomena; magnetic field effects on crystal growth; magnetic processing of materials; diamagnetic levitation; the magneto-Archimedes effect; spin chemistry; application of magnetic fields to analytical chemistry; magnetic orientation; control of structure by magnetic fields; magnetic separation and purification; magnetic field-induced phase transitions; materials properties in high magnetic fields; development of NMR and MRI; medical applications of magnetic fields; novel magnetic phenomena; physical property measurement by magnetic fields; and high magnetic field generation. MAP3 consisted of 84 presentations including 16 invited talks. This volume of Journal of Physics: Conference Series contains the proceedings of MAP3, with 34 papers that provide a scientific record of the topics covered by the conference with the special topics (13 papers) in

  19. HiCUP: pipeline for mapping and processing Hi-C data.

    PubMed

    Wingett, Steven; Ewels, Philip; Furlan-Magaril, Mayra; Nagano, Takashi; Schoenfelder, Stefan; Fraser, Peter; Andrews, Simon

    2015-01-01

    HiCUP is a pipeline for processing sequence data generated by Hi-C and Capture Hi-C (CHi-C) experiments, which are techniques used to investigate three-dimensional genomic organisation. The pipeline maps data to a specified reference genome and removes artefacts that would otherwise hinder subsequent analysis. HiCUP also produces an easy-to-interpret yet detailed quality control (QC) report that assists in refining experimental protocols for future studies. The software is freely available and has already been used for processing Hi-C and CHi-C data in several recently published peer-reviewed studies.

  20. HiCUP: pipeline for mapping and processing Hi-C data

    PubMed Central

    Wingett, Steven; Ewels, Philip; Furlan-Magaril, Mayra; Nagano, Takashi; Schoenfelder, Stefan; Fraser, Peter; Andrews, Simon

    2015-01-01

    HiCUP is a pipeline for processing sequence data generated by Hi-C and Capture Hi-C (CHi-C) experiments, which are techniques used to investigate three-dimensional genomic organisation. The pipeline maps data to a specified reference genome and removes artefacts that would otherwise hinder subsequent analysis. HiCUP also produces an easy-to-interpret yet detailed quality control (QC) report that assists in refining experimental protocols for future studies. The software is freely available and has already been used for processing Hi-C and CHi-C data in several recently published peer-reviewed studies. PMID:26835000

  1. Mapping of Inner and Outer Celestial Bodies Using New Global and Local Topographic Data Derived from Photogrammetric Image Processing

    NASA Astrophysics Data System (ADS)

    Karachevtseva, I. P.; Kokhanov, A. A.; Rodionova, J. F.; Zharkova, A. Yu.; Lazareva, M. S.

    2016-06-01

New estimations of fundamental geodetic parameters and the global and local topography of planets and satellites provide basic coordinate systems for mapping as well as opportunities for studies of processes on their surfaces. The main targets of our study are Europa, Ganymede, Callisto and Io (satellites of Jupiter), Enceladus (a satellite of Saturn), and terrestrial planetary bodies, including Mercury, the Moon and Phobos, one of the Martian satellites. In particular, based on new global shape models derived from three-dimensional control point networks and processing of high-resolution stereo images, we have carried out studies of topography and morphology. As a visual representation of the results, various planetary maps with different scales and thematic directions were created. For example, for Phobos we have produced a new atlas with 43 maps, as well as various wall maps (differing from the maps in the atlas in format and design): a basemap, and topographic and geomorphological maps. In addition, we compiled geomorphological maps of Ganymede at the local level, and a global hypsometric map of Enceladus. Mercury's topography was represented as a hypsometric globe for the first time. Mapping of the Moon was carried out using new super-resolution images (0.5-1 m/pixel) for the activity regions of the first Soviet planetary rovers (Lunokhod-1 and -2). New results of planetary mapping have been demonstrated to the scientific community at planetary map exhibitions (Planetary Maps Exhibitions, 2015), organized by the MExLab team in the frame of the International Map Year, celebrated in 2015-2016. Cartographic products have multipurpose applications: for example, the Mercury globe is popular for teaching and public outreach, while maps like those of the Moon and Phobos provide cartographic support for Solar System exploration.

  2. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    USGS Publications Warehouse

    Parsons, D.R.; Jackson, P.R.; Czuba, J.A.; Engel, F.L.; Rhoads, B.L.; Oberg, K.A.; Best, J.L.; Mueller, D.S.; Johnson, K.K.; Riley, J.D.

    2013-01-01

    The use of acoustic Doppler current profilers (ADCP) for discharge measurements and three-dimensional flow mapping has increased rapidly in recent years and has been primarily driven by advances in acoustic technology and signal processing. Recent research has developed a variety of methods for processing data obtained from a range of ADCP deployments and this paper builds on this progress by describing new software for processing and visualizing ADCP data collected along transects in rivers or other bodies of water. The new utility, the Velocity Mapping Toolbox (VMT), allows rapid processing (vector rotation, projection, averaging and smoothing), visualization (planform and cross-section vector and contouring), and analysis of a range of ADCP-derived datasets. The paper documents the data processing routines in the toolbox and presents a set of diverse examples that demonstrate its capabilities. The toolbox is applicable to the analysis of ADCP data collected in a wide range of aquatic environments and is made available as open-source code along with this publication.

  3. Processing of airborne lidar bathymetry data for detailed sea floor mapping

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. Michael

    2014-10-01

Airborne bathymetric lidar has proven to be a valuable sensor for rapid and accurate sounding of shallow water areas. With advanced processing of the lidar data, detailed mapping of the sea floor with various objects and vegetation is possible. This mapping capability has a wide range of applications, including detection of mine-like objects, mapping of marine natural resources and fish spawning areas, and support for the fulfillment of national and international environmental monitoring directives. Although data sets collected by subsea systems give a high degree of credibility, they can benefit from a combination with lidar for surveying and monitoring larger areas. With lidar-based sea floor maps containing information on substrate and attached vegetation, field investigations become more efficient. Field data collection can be directed to selected areas and even focused on identification of specific targets detected in the lidar map. The purpose of this work is to describe the performance of detection and classification of sea floor objects and vegetation, with the lidar seeing through the water column. With both experimental and simulated data, we examine the lidar signal characteristics as they depend on bottom depth, substrate type, and vegetation. The experimental evaluation is based on lidar data from field-documented sites, where field data were taken from underwater video recordings. To accurately extract the information from the received lidar signal, it is necessary to account for the air-water interface and the water medium. The information content is hidden in the lidar depth data, also referred to as point data, and in the shape of the received lidar waveform. The returned lidar signal is affected by environmental factors such as bottom depth and water turbidity, as well as lidar system factors such as laser beam footprint size and sounding density.

  4. Relative characterization of rosemary samples according to their geographical origins using microwave-accelerated distillation, solid-phase microextraction and Kohonen self-organizing maps.

    PubMed

    Tigrine-Kordjani, N; Chemat, F; Meklati, B Y; Tuduri, L; Giraudel, J L; Montury, M

    2007-09-01

    For centuries, rosemary (Rosmarinus officinalis L.) has been used to prepare essential oils which, even now, are highly valued due to their various biological activities. Nevertheless, it has been noted that these activities often depend on the origin of the rosemary plant and the method of extraction. Since both of these quality parameters can greatly influence the chemical composition of rosemary oil, an original analytical method was developed where "dry distillation" was coupled to headspace solid-phase microextraction (HS-SPME) and then a data mining technique using the Kohonen self-organizing map algorithm was applied to the data obtained. This original approach uses the newly described microwave-accelerated distillation technique (MAD) and HS-SPME; neither of these techniques require external solvent and so this approach provides a novel "green" chemistry sampling method in the field of biological matrix analysis. The large data set obtained was then treated with a rarely used chemometric technique based on nonclassical statistics. Applied to 32 rosemary samples collected at the same time from 12 different sites in the north of Algeria, this method highlighted a strong correlation between the volatile chemical compositions of the samples and their origins, and it therefore allowed the samples to be grouped according to geographical distribution. Moreover, the method allowed us to identify the constituents that exerted the most influence during classification.
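
    As a reminder of what the Kohonen algorithm does with such data, here is a minimal self-organizing map in NumPy, with each input imagined as a vector of volatile-compound abundances for one rosemary sample. It is an illustrative implementation only; the grid size, decay schedules and names are all assumptions.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen SOM mapping samples onto a 2D grid (illustrative)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(grid[0] * grid[1], data.shape[1]))  # codebook vectors
    coords = np.indices(grid).reshape(2, -1).T.astype(float) # unit grid coords
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                       # decaying rate
        sigma = sigma0 * np.exp(-t / epochs)                 # shrinking radius
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))      # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distances
            h = np.exp(-d2 / (2.0 * sigma ** 2))             # neighborhood
            w += lr * h[:, None] * (x - w)                   # pull units toward x
    return w
```

    Samples are then grouped by their best matching units, which is how geographic clusters become visible on the trained map.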

  5. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance

    PubMed Central

    Mano, Omer

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system’s response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data. PMID:28068420
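
    For reference, a plain CPU estimate of the spike-triggered covariance looks roughly like the NumPy sketch below; the GPU module parallelizes the same accumulation of outer products. The single-channel stimulus layout, window length and names are assumptions.

```python
import numpy as np

def spike_triggered_covariance(stimulus, spikes, lags=20):
    """STC relative to the prior stimulus covariance (illustrative).

    stimulus : (n_t,) stimulus samples; spikes : (n_t,) spike counts.
    """
    # Matrix of stimulus history windows, shape (n_t - lags + 1, lags)
    windows = np.lib.stride_tricks.sliding_window_view(stimulus, lags)
    counts = spikes[lags - 1:]                  # spike count at each window end
    sta = (windows * counts[:, None]).sum(0) / counts.sum()  # spike-trig. avg.
    centered = windows - sta
    c_spike = (centered.T * counts) @ centered / (counts.sum() - 1)
    c_prior = np.cov(windows, rowvar=False)     # covariance of all windows
    return c_spike - c_prior
```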

  6. Developmental Changes in Processing Speed: Influence of Accelerated Education for Gifted Children

    ERIC Educational Resources Information Center

    Duan, Xiaoju; Shi, Jiannong; Zhou, Dan

    2010-01-01

    There are two major hypotheses concerning the developmental trends of processing speeds. These hypotheses explore both local and global trends. The study presented here investigates the effects of people's different knowledge on the speed with which they are able to process information. The participants in this study are gifted children aged 9,…

  7. Application of Low Level, Uniform Ultrasound Field for Acceleration of Enzymatic Bio-processing of Cotton

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Enzymatic bio-processing of cotton generates significantly less hazardous wastewater effluents, which are readily biodegradable, but it also has several critical shortcomings that impede its acceptance by industries: expensive processing costs and slow reaction rates. Our research has found that th...

  8. Implications of acceleration environments on scaling materials processing in space to production

    NASA Technical Reports Server (NTRS)

    Demel, Ken

    1990-01-01

Some considerations regarding materials processing in space are covered from a commercial perspective. Key areas include power, proprietary data, operational requirements (including logistics), and the center-of-gravity location and its control with respect to materials processing payloads.

  9. Topological data analysis of contagion maps for examining spreading processes on networks

    PubMed Central

    Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.

    2015-01-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth’s surface; however, in modern contagions long-range edges—for example, due to airline transportation or communication media—allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct “contagion maps” that use multiple contagions on a network to map the nodes as a point cloud. By analyzing the topology, geometry, and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modeling, forecast, and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks. PMID:26194875

  10. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially, pictureprints resembling low-resolution photographs were generated in each of the four ERTS-1 channels. Data found within rectangular training fields were then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data were classified with a high degree of accuracy (greater than 95 percent), and progress is being made towards identifying the mapped spectral classes.
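
    The maximum likelihood step amounts to evaluating a Gaussian log-likelihood per spectral class and assigning each pixel to the argmax. A hedged NumPy sketch with invented names, assuming equal class priors:

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Gaussian maximum likelihood classification (illustrative).

    pixels : (n, b) spectral vectors; means : (c, b) and covs : (c, b, b)
    hold one mean and covariance per training class (e.g. 13 spectral groups).
    """
    scores = []
    for mu, cov in zip(means, covs):
        diff = pixels - mu
        maha = np.einsum('nb,bc,nc->n', diff, np.linalg.inv(cov), diff)
        # Log-likelihood up to a constant, assuming equal priors
        scores.append(-0.5 * (np.linalg.slogdet(cov)[1] + maha))
    return np.argmax(scores, axis=0)            # class index per pixel
```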

  11. Topographic power spectral density study of the effect of surface treatment processes on niobium for superconducting radio frequency accelerator cavities

    SciTech Connect

Reece, Charles; Tian, Hui; Kelley, Michael; Xu, Chen

    2012-04-01

    Microroughness is viewed as a critical issue for attaining optimum performance of superconducting radio frequency accelerator cavities. The principal surface smoothing methods are buffered chemical polish (BCP) and electropolish (EP). The resulting topography is characterized by atomic force microscopy (AFM). The power spectral density (PSD) of AFM data provides a more thorough description of the topography than a single-value roughness measurement. In this work, one dimensional average PSD functions derived from topography of BCP and EP with different controlled starting conditions and durations have been fitted with a combination of power law, K correlation, and shifted Gaussian models to extract characteristic parameters at different spatial harmonic scales. While the simplest characterizations of these data are not new, the systematic tracking of scale-specific roughness as a function of processing is new and offers feedback for tighter process prescriptions more knowledgably targeted at beneficial niobium topography for superconducting radio frequency applications.
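
    A one-dimensional PSD of an AFM scan line is commonly estimated with an FFT-based method such as Welch's; averaging over many lines gives the average PSD to which the models are fitted. The sketch below is illustrative only, with an invented stand-in profile and no power-law/K-correlation fitting:

```python
import numpy as np
from scipy.signal import welch

# Hypothetical AFM line scan: 512 height samples over a 10 micron profile
n, length_um = 512, 10.0
rng = np.random.default_rng(2)
height_nm = np.cumsum(rng.normal(0.0, 0.5, n))   # stand-in rough profile

# One-dimensional PSD versus spatial frequency [1/um]
fs = n / length_um                               # samples per micron
f, psd = welch(height_nm, fs=fs, nperseg=256, detrend='linear')

# Band-limited RMS roughness recovered by integrating the PSD
rms_nm = np.sqrt(np.trapz(psd[1:], f[1:]))
```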

  12. Geomorphologic mapping of titan's polar terrains: Constraining surface processes and landscape evolution

    NASA Astrophysics Data System (ADS)

    Birch, S. P. D.; Hayes, A. G.; Dietrich, W. E.; Howard, A. D.; Bristow, C. S.; Malaska, M. J.; Moore, J. M.; Mastrogiuseppe, M.; Hofgartner, J. D.; Williams, D. A.; White, O. L.; Soderblom, J. M.; Barnes, J. W.; Turtle, E. P.; Lunine, J. I.; Wood, C. A.; Neish, C. D.; Kirk, R. L.; Stofan, E. R.; Lorenz, R. D.; Lopes, R. M. C.

    2017-01-01

    We present a geomorphologic map of Titan's polar terrains. The map was generated from a combination of Cassini Synthetic Aperture Radar (SAR) and Imaging Science Subsystem imaging products, as well as altimetry, SARTopo and radargrammetry topographic datasets. In combining imagery with topographic data, our geomorphologic map reveals a stratigraphic sequence from which we infer process interactions between units. In mapping both polar regions with the same geomorphologic units, we conclude that processes that formed the terrains of the north polar region also acted to form the landscape we observe at the south. Uniform, SAR-dark plains are interpreted as sedimentary deposits, and are bounded by moderately dissected uplands. These plains contain the highest density of filled and empty lake depressions, and canyons. These units unconformably overlay a basement rock that outcrops as mountains and SAR-bright dissected terrains at various elevations across both poles. All these units are then superposed by surficial units that slope towards the seas, suggestive of subsequent overland transport of sediment. From estimates of the depths of the embedded empty depressions and canyons that drain into the seas, the SAR-dark plains must be >600 m thick in places, though the thickness may vary across the poles. At the lowest elevations of each polar region, there are large seas, which are currently liquid methane/ethane filled at the north and empty at the south. The large plains deposits and the surrounding hillslopes may represent remnant landforms that are a result of previously vast polar oceans, where larger liquid bodies may have allowed for a sustained accumulation of soluble and insoluble sediments, potentially forming layered sedimentary deposits. Coupled with vertical crustal movements, the resulting layers would be of varying solubilities and erosional resistances, allowing formation of the complex landscape that we observe today.

  13. Land use/land cover mapping using multi-scale texture processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Wong, S. N.; Sarker, M. L. R.

    2014-02-01

Land use/land cover (LULC) maps are useful for many purposes, and for a long time remote sensing techniques have been used for LULC mapping with different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS were used to perform land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five different window sizes (from 3×3 to 11×11). Three different classifiers, i.e. Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN) and Support Vector Machine (SVM), were used to classify the texture parameters of different spectral bands individually and of all bands together, using the same training and validation samples. Results indicated that texture parameters of all bands together generally showed a better performance (overall accuracy = 90.10%) for LULC mapping, whereas a single spectral band could only achieve an overall accuracy of 72.67%. This research also found an improvement of the overall accuracy (OA) using a single-texture multi-scale approach (OA = 89.10%) and a single-scale multi-texture approach (OA = 90.10%) compared with all original bands (OA = 84.02%), because of the complementary information from different bands and different texture algorithms. All three classifiers showed high accuracy when using the different texture approaches, but SVM generally showed higher accuracy (90.10%) compared to MLC (89.10%) and ANN (89.67%), especially for complex classes such as urban and road.
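
    Texture measures such as contrast and homogeneity are conventionally derived from a gray-level co-occurrence matrix (GLCM) inside a sliding window. Below is a slow but explicit scikit-image illustration; the window size, quantization and names are assumptions, and measures like mean, variance and entropy would have to be computed from the GLCM directly:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19

def texture_map(band, win=5, levels=32, prop='homogeneity'):
    """Per-pixel GLCM texture of one spectral band (illustrative, slow)."""
    q = (band / band.max() * (levels - 1)).astype(np.uint8)  # quantize
    half = win // 2
    out = np.zeros_like(band, dtype=float)
    for i in range(half, band.shape[0] - half):
        for j in range(half, band.shape[1] - half):
            patch = q[i - half:i + half + 1, j - half:j + half + 1]
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            out[i, j] = graycoprops(glcm, prop).mean()       # avg over angles
    return out
```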

  14. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    SciTech Connect

    Jie, Liang; Li, KenLi; Shi, Lin; Liu, RangSu; Mei, Jing

    2014-01-15

Molecular dynamics simulation is a powerful tool to simulate and analyze complex physical processes and phenomena at the atomic level, predicting the natural time-evolution of a system of atoms. Precise simulation of physical processes places strong requirements on both simulation size and computing timescale, so finding available computing resources is crucial to accelerate computation. A tremendous computational resource, the general-purpose graphics processing unit (GPGPU), has recently been utilized for general-purpose computing due to its high floating-point performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming components of MD simulations of liquid metal solidification processes, this paper presents a fine-grained spatial decomposition method that accelerates neighbor-list updates and interaction force calculations by taking advantage of modern graphics processing units (GPUs), enlarging the simulation to a system involving 10 000 000 atoms. In addition, a number of evaluations and tests, ranging from executions on different precision-enabled CUDA versions, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores, are discussed. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than 16-core CPU cluster implementations. On the basis of the simulated results, comparisons between the theoretical and experimental results were carried out, showing good agreement between the two, with more complete and larger cluster structures observed in the actual macroscopic materials. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed in the processes of metal solidification are observed with large

  15. Accelerating solidification process simulation for large-sized system of liquid metal atoms using GPU with CUDA

    NASA Astrophysics Data System (ADS)

    Jie, Liang; Li, KenLi; Shi, Lin; Liu, RangSu; Mei, Jing

    2014-01-01

Molecular dynamics simulation is a powerful tool to simulate and analyze complex physical processes and phenomena at the atomic level, predicting the natural time-evolution of a system of atoms. Precise simulation of physical processes places strong requirements on both simulation size and computing timescale, so finding available computing resources is crucial to accelerate computation. A tremendous computational resource, the general-purpose graphics processing unit (GPGPU), has recently been utilized for general-purpose computing due to its high floating-point performance, wide memory bandwidth and enhanced programmability. Targeting the most time-consuming components of MD simulations of liquid metal solidification processes, this paper presents a fine-grained spatial decomposition method that accelerates neighbor-list updates and interaction force calculations by taking advantage of modern graphics processing units (GPUs), enlarging the simulation to a system involving 10 000 000 atoms. In addition, a number of evaluations and tests, ranging from executions on different precision-enabled CUDA versions, over various types of GPU (NVIDIA 480GTX, 580GTX and M2050), to CPU clusters with different numbers of CPU cores, are discussed. The experimental results demonstrate that GPU-based calculations are typically 9∼11 times faster than the corresponding sequential execution and approximately 1.5∼2 times faster than 16-core CPU cluster implementations. On the basis of the simulated results, comparisons between the theoretical and experimental results were carried out, showing good agreement between the two, with more complete and larger cluster structures observed in the actual macroscopic materials. Moreover, different nucleation and evolution mechanisms of nano-clusters and nano-crystals formed in the processes of metal solidification are observed with large-sized systems.

  16. Dynamic recrystallization behavior and processing map of the Cu-Cr-Zr-Nd alloy.

    PubMed

    Zhang, Yi; Sun, Huili; Volinsky, Alex A; Tian, Baohong; Song, Kexing; Chai, Zhe; Liu, Ping; Liu, Yong

    2016-01-01

Hot deformation behavior of the Cu-Cr-Zr-Nd alloy was studied by hot compression tests in the temperature range of 650-950 °C and the strain rate range of 0.001-10 s⁻¹ using a Gleeble-1500D thermo-mechanical simulator. The results showed that the flow stress is strongly dependent on the deformation temperature and the strain rate. With an increase in temperature or a decrease in strain rate, the flow stress significantly decreases. The hot deformation activation energy of the alloy is about 404.84 kJ/mol, and a constitutive equation for the alloy based on the hyperbolic-sine equation was established. Based on the dynamic material model, the processing map was constructed to optimize the deformation parameters. The optimal processing parameters for hot working of the Cu-Cr-Zr-Nd alloy are in the temperature range of 900-950 °C and the strain rate range of 0.1-1 s⁻¹. A fully dynamically recrystallized structure with fine and homogeneous grain size can be obtained at the optimal processing conditions. The microstructure of specimens deformed under different conditions was analyzed and connected with the processing map. Fracture surfaces were examined to identify instability conditions.
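
    The hyperbolic-sine constitutive law referred to here is conventionally written in the Sellars-Tegart form together with the Zener-Hollomon parameter; the abstract reports Q ≈ 404.84 kJ/mol but does not restate the fitted A, α and n, so the following is the standard template rather than the paper's exact equation:

```latex
\dot{\varepsilon} = A\left[\sinh(\alpha\sigma)\right]^{n}
\exp\!\left(-\frac{Q}{RT}\right),
\qquad
Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right),
```

    where \(\dot{\varepsilon}\) is the strain rate, \(\sigma\) the flow stress, \(Q\) the activation energy, \(R\) the gas constant and \(T\) the absolute temperature.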

  17. IntenCD: an application for CD uniformity mapping of photomask and process control at maskshops

    NASA Astrophysics Data System (ADS)

    Kim, Heebom; Lee, MyoungSoo; Lee, Sukho; Sung, Young-Su; Kim, Byunggook; Woo, Sang-Gyun; Cho, HanKu; Yishai, Michael Ben; Shoval, Lior; Couderc, Christophe

    2008-05-01

Lithographic process steps used in today's integrated circuit production require tight control of critical dimensions (CD). With new design rules dropping to 32 nm and emerging double patterning processes, parameters that were of secondary importance in previous technology generations have now become decisive for the overall CD budget in the wafer fab. One of these key parameters is the intra-field mask CD uniformity (CDU) error, which is considered to consume an increasing portion of the overall CD budget for the IC fabrication process. Consequently, it has become necessary to monitor and characterize CDU in both the maskshop and the wafer fab. Here, we describe the introduction of a new application for CDU monitoring into the mask making process at Samsung. The IntenCD™ application, developed by Applied Materials, is implemented on an aerial mask inspection tool. It uses transmission inspection data, which contain information about CD variation over the mask, to create a dense yet accurate CDU map of the whole mask. This CDU map is generated in parallel to the normal defect inspection run, thus adding minimal overhead to the regular inspection time. We present experimental data showing examples of mask-induced CD variations from various sources such as geometry, transmission and phase variations. We show how these small variations were captured by IntenCD™ and demonstrate a high level of correlation between CD SEM analysis and IntenCD™ mapping of mask CDU. Finally, we suggest a scheme for integrating the IntenCD™ application as part of the mask qualification procedure at maskshops.

  18. Brightest Fermi-LAT flares of PKS 1222+216: implications on emission and acceleration processes

    SciTech Connect

    Kushwaha, Pankaj; Singh, K. P.; Sahayanathan, Sunder

    2014-11-20

    We present a high time resolution study of the two brightest γ-ray outbursts from a blazar PKS 1222+216 observed by the Fermi Large Area Telescope (LAT) in 2010. The γ-ray light curves obtained in four different energy bands, 0.1-3, 0.1-0.3, 0.3-1, and 1-3 GeV, with time bins of six hours, show asymmetric profiles with similar rise times in all the bands but a rapid decline during the April flare and a gradual one during the June flare. The light curves during the April flare show an ∼2 day long plateau in 0.1-0.3 GeV emission, erratic variations in 0.3-1 GeV emission, and a daily recurring feature in 1-3 GeV emission until the rapid rise and decline within a day. The June flare shows a monotonic rise until the peak, followed by a gradual decline powered mainly by the multi-peak 0.1-0.3 GeV emission. The peak fluxes during both the flares are similar except in the 1-3 GeV band in April, which is twice the corresponding flux during the June flare. Hardness ratios during the April flare indicate spectral hardening in the rising phase followed by softening during the decay. We attribute this behavior to the development of a shock associated with an increase in acceleration efficiency followed by its decay leading to spectral softening. The June flare suggests hardening during the rise followed by a complicated energy dependent behavior during the decay. Observed features during the June flare favor multiple emission regions while the overall flaring episode can be related to jet dynamics.

  19. Electromagnetic oil field mapping for improved process monitoring and reservoir characterization: A poster presentation

    SciTech Connect

    Waggoner, J.R.; Mansure, A.J.

    1992-02-01

This report is a permanent record of a poster paper presented by the authors at the Third International Reservoir Characterization Technical Conference in Tulsa, Oklahoma on November 3-5, 1991. The subject is electromagnetic (EM) techniques that are being developed to monitor oil recovery processes to improve overall process performance. The potential impact of EM surveys is very significant, primarily in the areas of locating oil, identifying oil inside and outside the pattern, characterizing flow units, and pseudo-real-time process control to optimize process performance and efficiency. Since a map of resistivity alone has little direct application to these areas, an essential part of the EM technique is understanding the relationship between the process and the formation resistivity at all scales, and integrating this understanding into reservoir characterization and simulation. First is a discussion of completed work on the core-scale petrophysics of resistivity changes in an oil recovery process; a steamflood is used as an example. A system has been developed for coupling the petrophysics of resistivity with reservoir simulation to simulate the formation resistivity structure arising from a recovery process. Preliminary results are given for an investigation into the effect of heterogeneity and anisotropy on the EM technique, as well as the use of the resistivity simulator to interpret EM data in terms of reservoir and process parameters. Examples illustrate the application of the EM technique to improve process monitoring and reservoir characterization.
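
    The core-scale link between saturation changes and resistivity is commonly parameterized by an Archie-type relation; the following is that textbook form, offered only as orientation, since steamflood petrophysics adds temperature and salinity dependences not shown here:

```latex
\rho_t = a\,\rho_w\,\phi^{-m}\,S_w^{-n},
```

    where \(\rho_t\) is the formation resistivity, \(\rho_w\) the brine resistivity, \(\phi\) the porosity, \(S_w\) the water saturation, and \(a\), \(m\), \(n\) empirical constants.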

  20. A single aerobic exercise session accelerates movement execution but not central processing.

    PubMed

    Beyer, Kit B; Sage, Michael D; Staines, W Richard; Middleton, Laura E; McIlroy, William E

    2017-03-27

    Previous research has demonstrated that aerobic exercise has disparate effects on speed of processing and movement execution. In simple and choice reaction tasks, aerobic exercise appears to increase speed of movement execution while speed of processing is unaffected. In the flanker task, aerobic exercise has been shown to reduce response time on incongruent trials more than congruent trials, purportedly reflecting a selective influence on speed of processing related to cognitive control. However, it is unclear how changes in speed of processing and movement execution contribute to these exercise-induced changes in response time during the flanker task. This study examined how a single session of aerobic exercise influences speed of processing and movement execution during a flanker task using electromyography to partition response time into reaction time and movement time, respectively. Movement time decreased during aerobic exercise regardless of flanker congruence but returned to pre-exercise levels immediately after exercise. Reaction time during incongruent flanker trials decreased over time in both an aerobic exercise and non-exercise control condition indicating it was not specifically influenced by exercise. This disparate influence of aerobic exercise on movement time and reaction time indicates the importance of partitioning response time when examining the influence of aerobic exercise on speed of processing. The decrease in reaction time over time independent of aerobic exercise indicates that interpreting pre-to-post exercise changes in behavior requires caution.

  1. On the convergence of implicit iteration process with error for a finite family of asymptotically nonexpansive mappings

    NASA Astrophysics Data System (ADS)

    Chang, S. S.; Tan, K. K.; Lee, H. W. J.; Chan, Chi Kin

    2006-01-01

    The purpose of this paper is to study the weak and strong convergence of implicit iteration process with errors to a common fixed point for a finite family of asymptotically nonexpansive mappings and nonexpansive mappings in Banach spaces. The results presented in this paper extend and improve the corresponding results of [H. Bauschke, The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space, J. Math. Anal. Appl. 202 (1996) 150-159; B. Halpern, Fixed points of nonexpansive maps, Bull. Amer. Math. Soc. 73 (1967) 957-961; P.L. Lions, Approximation de points fixes de contractions, C. R. Acad. Sci. Paris, Ser. A 284 (1977), 1357-1359; S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, J. Math. Anal. Appl. 75 (1980) 287-292; Z.H. Sun, Strong convergence of an implicit iteration process for a finite family of asymptotically quasi-nonexpansive mappings, J. Math. Anal. Appl. 286 (2003) 351-358; R. Wittmann, Approximation of fixed points of nonexpansive mappings, Arch. Math. 58 (1992) 486-491; H.K. Xu, M.G. Ori, An implicit iterative process for nonexpansive mappings, Numer. Funct. Anal. Optimiz. 22 (2001) 767-773; Y.Y. Zhou, S.S. Chang, Convergence of implicit iterative process for a finite family of asymptotically nonexpansive mappings in Banach spaces, Numer. Funct. Anal. Optimiz. 23 (2002) 911-921].
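
    For orientation, the implicit iteration process with errors for a finite family \(\{T_1, \dots, T_N\}\) studied in this literature typically takes the Xu-Ori form below; the precise conditions imposed on the control sequence and the errors are those of the paper and are not reproduced here:

```latex
x_n = \alpha_n x_{n-1} + (1 - \alpha_n)\, T_{n \bmod N}\, x_n + u_n,
\qquad n \ge 1,
```

    with \(\{\alpha_n\} \subset (0, 1)\) and \(\{u_n\}\) an error sequence satisfying suitable summability or boundedness conditions.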

  2. The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners

    ERIC Educational Resources Information Center

    Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

    2012-01-01

    The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

  3. Linear Accelerators

    NASA Astrophysics Data System (ADS)

    Sidorin, Anatoly

    2010-01-01

In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  4. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries inherited from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
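
    The cluster parallelization amounts to computing independent Green's functions on different GPUs and gathering them to assemble the Jacobian. The mpi4py sketch below shows only that distribution pattern, with a static round-robin split standing in for the paper's load-balancing method; the pair count and the placeholder kernel are invented:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_pairs = 96                              # hypothetical source-detector pairs
my_pairs = range(rank, n_pairs, size)     # static round-robin split

def greens_function(pair_id):
    # Stand-in for one GPU Monte Carlo photon simulation
    rng = np.random.default_rng(pair_id)
    return rng.random(1000)               # placeholder flux distribution

local = {p: greens_function(p) for p in my_pairs}
parts = comm.gather(local, root=0)        # collect all results on rank 0
if rank == 0:
    greens = {k: v for part in parts for k, v in part.items()}
    # ...form the Jacobian from the Green's functions and reconstruct
```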

  5. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units

    NASA Astrophysics Data System (ADS)

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and different Green's functions representing the flux distribution in media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries inherited from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  6. A graphics processing unit accelerated motion correction algorithm and modular system for real-time fMRI.

    PubMed

    Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon

    2013-07-01

Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project (www.bioimagesuite.org). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.

  7. Hot Deformation and Processing Maps of Al-15%B4C Composites Containing Sc and Zr

    NASA Astrophysics Data System (ADS)

    Qin, Jian; Zhang, Zhan; Chen, X.-Grant

    2017-03-01

    Hot deformation behavior and processing maps of three Al-15%B4C composites, denoted as the base composite (Al-15vol.%B4C), S40 (Al-15vol.%B4C-0.4wt.%Sc) and SZ40 (Al-15vol.%B4C-0.4wt.%Sc-0.24wt.%Zr), were studied by uniaxial compression tests performed at various deformation temperatures and strain rates. The constitutive equations of the three composites were established to describe the effect of temperature and strain rate on hot deformation behavior. Using the established constitutive equations, the predicted flow stresses under various deformation conditions agreed well with the experimental data. The peak flow stress of the composites increased with the addition of Sc and Zr, which was attributed to the synergistic effect of solute atoms and dynamic precipitation. The addition of Sc and Zr increased the activation energy for hot deformation of Al-B4C composites. The processing maps of the three composites were constructed to evaluate their hot workability. Safe domains with optimal deformation conditions were identified for all three composites; in these domains, dynamic recovery and dynamic recrystallization acted as the softening mechanisms. The addition of Sc and Zr limited the dynamic softening process, especially dynamic recrystallization. The microstructure analysis revealed that flow instability was attributable to void formation, cracking and flow localization during hot deformation of the composites.
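
    Constitutive fits of this kind commonly use the hyperbolic-sine (Garofalo) law, sigma = (1/alpha) * asinh((Z/A)**(1/n)) with the Zener-Hollomon parameter Z = rate * exp(Q/RT). The minimal sketch below shows how such a fit predicts flow stress; all material constants are placeholders, not the paper's fitted values for Al-15%B4C.

      # Hedged illustration of the hyperbolic-sine constitutive law;
      # A, alpha, n and Q below are invented placeholders.
      import numpy as np

      R = 8.314                       # gas constant, J/(mol K)
      Q = 250e3                       # activation energy, J/mol (placeholder)
      A, alpha, n = 1e16, 0.012, 5.0  # material constants (placeholders)

      def flow_stress(strain_rate, T_kelvin):
          Z = strain_rate * np.exp(Q / (R * T_kelvin))  # Zener-Hollomon
          return np.arcsinh((Z / A) ** (1.0 / n)) / alpha  # MPa if alpha in 1/MPa

      print(flow_stress(0.1, 773.0))  # e.g. 0.1 1/s at 500 degrees C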

  8. A new technique for processing airborne gamma ray spectrometry data for mapping low level contaminations.

    PubMed

    Aage, H K; Korsbech, U; Bargholz, K; Hovgaard, J

    1999-12-01

    A new technique for processing airborne gamma ray spectrometry data has been developed. It is based on the noise-adjusted singular value decomposition (NASVD) method introduced by Hovgaard in 1997. The new technique opens the way for mapping very low contamination levels. It was tested with data from Latvia, where the remaining contamination from the 1986 Chernobyl accident, together with fallout from the atmospheric nuclear weapon tests, includes 137Cs at levels often well below 1 kBq/m2 equivalent surface contamination. The limiting factors for obtaining reliable results are radon in the air, spectrum stability and accurate altitude measurements.
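
    A minimal numpy sketch of the NASVD idea follows, assuming simple Poisson counting statistics; the function name, scaling and component count are illustrative choices, not Hovgaard's implementation.

      # Hedged sketch: noise-adjust spectra, truncate the SVD, reconstruct.
      import numpy as np

      def nasvd_denoise(spectra, n_components=4):
          """spectra: (n_measurements, n_channels) array of raw counts."""
          # Scale channels by the expected Poisson standard deviation
          # (sqrt of the mean spectrum shape).
          mean_spec = spectra.mean(axis=0)
          scale = np.sqrt(np.maximum(mean_spec, 1e-12))
          scaled = spectra / scale
          # Keep only the leading singular components; the rest is noise.
          U, s, Vt = np.linalg.svd(scaled, full_matrices=False)
          s[n_components:] = 0.0
          return (U * s) @ Vt * scale

      rng = np.random.default_rng(0)
      true = np.outer(np.linspace(1, 2, 500), np.exp(-np.linspace(0, 5, 256)))
      noisy = rng.poisson(true * 50).astype(float)
      print(nasvd_denoise(noisy).shape)  # (500, 256)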

  9. NREL Develops Accelerated Sample Activation Process for Hydrogen Storage Materials (Fact Sheet)

    SciTech Connect

    Not Available

    2010-12-01

    This fact sheet describes NREL's accomplishments in developing a new sample activation process that reduces the time to prepare samples for measurement of hydrogen storage from several days to five minutes and provides more uniform samples. Work was performed by NREL's Chemical and Materials Science Center.

  10. Planck 2015 results. VIII. High Frequency Instrument data processing: Calibration and maps

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bock, J. J.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Lellouch, E.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Moss, A.; Mottet, S.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vibert, L.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    This paper describes the processing applied to the cleaned, time-ordered information obtained from the Planck High Frequency Instrument (HFI) with the aim of producing photometrically calibrated maps in temperature and (for the first time) in polarization. The data from the entire 2.5-year HFI mission include almost five full-sky surveys. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To obtain the best accuracy on the calibration over such a large range, two different photometric calibration schemes have been used. The 545 and 857 GHz data are calibrated using models of planetary atmospheric emission. The lower frequencies (from 100 to 353 GHz) are calibrated using the time-variable cosmological microwave background dipole, which we call the orbital dipole. This source of calibration depends only on the satellite velocity with respect to the solar system. Using a CMB temperature of TCMB = 2.7255 ± 0.0006 K, it permits an independent measurement of the amplitude of the CMB solar dipole (3364.3 ± 1.5 μK), which is approximately 1σ higher than the WMAP measurement, with a direction that is consistent between the two experiments. We describe the pipeline used to produce the maps of intensity and linear polarization from the HFI timelines, and the scheme used to set the zero level of the maps a posteriori. We also summarize the noise characteristics of the HFI maps in the 2015 Planck data release and present some null tests to assess their quality. Finally, we discuss the major systematic effects, in particular the leakage induced by flux mismatch between the detectors, which leads to a spurious polarization signal.
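
    As a back-of-envelope check of the quoted numbers (not Planck pipeline code), the dipole amplitude induced by motion at speed v is dT ≈ T_CMB * v / c; the velocities below are approximate values assumed for illustration.

      # Worked arithmetic: dipole amplitudes from motion-induced Doppler shift.
      T_CMB = 2.7255        # K
      c = 2.998e8           # m/s
      v_orbital = 30.0e3    # m/s, approx. orbital speed (the "orbital dipole")
      v_solar = 370.0e3     # m/s, approx. solar-system barycentre speed
      print("orbital dipole ~ %.0f uK" % (1e6 * T_CMB * v_orbital / c))  # ~273 uK
      print("solar dipole   ~ %.0f uK" % (1e6 * T_CMB * v_solar / c))    # ~3364 uK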

  11. Operational SAR Data Processing in GIS Environments for Rapid Disaster Mapping

    NASA Astrophysics Data System (ADS)

    Bahr, Thomas

    2014-05-01

    The use of SAR data has become increasingly popular in recent years and in a wide array of industries. Having access to SAR can be highly important and critical, especially for public safety. Updating a GIS with contemporary information from SAR data allows a reliable set of geospatial information to be delivered to advance civilian operations, e.g. search and rescue missions. SAR imaging offers the great advantage over its optical counterparts of not being affected by darkness, meteorological conditions such as clouds or fog, or the smoke and dust frequently associated with disaster zones. In this paper we present the operational processing of SAR data within a GIS environment for rapid disaster mapping. For this technique we integrated the SARscape modules for ENVI with ArcGIS®, eliminating the need to switch between software packages. The premier algorithms for SAR image analysis can thereby be accessed directly from ArcGIS desktop and server environments. They allow processing and analyzing SAR data in almost real time and with minimal user interaction. This is exemplified by the November 2010 flash flood in the Veneto region, Italy. The Bacchiglione River burst its banks on Nov. 2nd after two days of heavy rainfall throughout the northern Italian region. The community of Bovolenta, 22 km SSE of Padova, was covered by several meters of water. People were requested to stay in their homes; several roads, highway sections and railroads had to be closed. The extent of this flooding is documented by a series of Cosmo-SkyMed acquisitions with a GSD of 2.5 m (StripMap mode). Cosmo-SkyMed is a constellation of four Earth observation satellites whose very frequent coverage enables monitoring at very high temporal resolution. This data is processed in ArcGIS using a single-sensor, multi-mode, multi-temporal approach consisting of 3 steps: (1) The single images are filtered with a Gamma DE-MAP filter. (2) The filtered images are geocoded using a reference

  12. Single-step affinity purification of enzyme biotherapeutics: a platform methodology for accelerated process development.

    PubMed

    Brower, Kevin P; Ryakala, Venkat K; Bird, Ryan; Godawat, Rahul; Riske, Frank J; Konstantinov, Konstantin; Warikoo, Veena; Gamble, Jean

    2014-01-01

    Downstream sample purification for quality attribute analysis is a significant bottleneck in process development for non-antibody biologics. Multi-step chromatography process train purifications are typically required prior to many critical analytical tests. This prerequisite leads to limited throughput, long lead times to obtain purified product, and significant resource requirements. In this work, immunoaffinity purification technology has been leveraged to achieve single-step affinity purification of two different enzyme biotherapeutics (Fabrazyme® [agalsidase beta] and Enzyme 2) with polyclonal and monoclonal antibodies, respectively, as ligands. Target molecules were rapidly isolated from cell culture harvest in sufficient purity to enable analysis of critical quality attributes (CQAs). Most importantly, this is the first study that demonstrates the application of predictive analytics techniques to predict critical quality attributes of a commercial biologic. The data obtained using the affinity columns were used to generate appropriate models to predict quality attributes that would be obtained after traditional multi-step purification trains. These models empower process development decision-making with drug substance-equivalent product quality information without generation of actual drug substance. Optimization was performed to ensure maximum target recovery and minimal target protein degradation. The methodologies developed for Fabrazyme were successfully reapplied for Enzyme 2, indicating platform opportunities. The impact of the technology is significant, including reductions in time and personnel requirements, rapid product purification, and substantially increased throughput. Applications are discussed, including upstream and downstream process development support to achieve the principles of Quality by Design (QbD) as well as integration with bioprocesses as a process analytical technology (PAT).

  13. Analytical control of process impurities in Pazopanib hydrochloride by impurity fate mapping.

    PubMed

    Li, Yan; Liu, David Q; Yang, Shawn; Sudini, Ravinder; McGuire, Michael A; Bhanushali, Dharmesh S; Kord, Alireza S

    2010-08-01

    Understanding the origin and fate of organic impurities within the manufacturing process along with a good control strategy is an integral part of the quality control of drug substance. Following the underlying principles of quality by design (QbD), a systematic approach to analytical control of process impurities by impurity fate mapping (IFM) has been developed and applied to the investigation and control of impurities in the manufacturing process of Pazopanib hydrochloride, an anticancer drug approved recently by the U.S. FDA. This approach requires an aggressive chemical and analytical search for potential impurities in the starting materials, intermediates and drug substance, and experimental studies to track their fate through the manufacturing process in order to understand the process capability for rejecting such impurities. Comprehensive IFM can provide elements of control strategies for impurities. This paper highlights the critical roles that analytical sciences play in the IFM process and impurity control. The application of various analytical techniques (HPLC, LC-MS, NMR, etc.) and development of sensitive and selective methods for impurity detection, identification, separation and quantification are highlighted with illustrative examples. As an essential part of the entire control strategy for Pazopanib hydrochloride, analytical control of impurities with 'meaningful' specifications and the 'right' analytical methods is addressed. In particular, IFM provides scientific justification that can allow for control of process impurities up-stream at the starting materials or intermediates whenever possible.

  14. Denoising NMR time-domain signal by singular-value decomposition accelerated by graphics processing units.

    PubMed

    Man, Pascal P; Bonhomme, Christian; Babonneau, Florence

    2014-01-01

    We present a post-processing method that decreases the NMR spectrum noise without line shape distortion. As a result, the signal-to-noise (S/N) ratio of a spectrum increases. This method, called the Cadzow enhancement procedure, is based on the singular-value decomposition of the time-domain signal. We also provide software whose execution takes a few seconds for typical data when run on a modern graphics processing unit. We tested this procedure not only on the low-sensitivity nucleus (29)Si in hybrid materials but also on the low-gyromagnetic-ratio quadrupolar nucleus (87)Sr in the reference sample Sr(NO3)2. Improving the spectrum S/N ratio facilitates the determination of the T/Q ratio of hybrid materials. The method is also applicable to simulated spectra, resulting in shorter simulation durations for powder averaging. An estimate of the number of singular values needed for denoising is also provided.
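
    A minimal CPU-only numpy sketch of one Cadzow iteration is given below, assuming a simple Hankel embedding and a user-chosen rank; the paper's software runs the SVD on a GPU, and all names and parameters here are illustrative.

      # Hedged sketch: embed the FID in a Hankel matrix, truncate its SVD,
      # then average anti-diagonals back into a time-domain signal.
      import numpy as np

      def cadzow_denoise(fid, rank, n_iter=1):
          n = len(fid)
          L = n // 2 + 1
          for _ in range(n_iter):
              H = np.array([fid[i:i + n - L + 1] for i in range(L)])  # Hankel
              U, s, Vt = np.linalg.svd(H, full_matrices=False)
              H = (U[:, :rank] * s[:rank]) @ Vt[:rank]                # rank cut
              # Average anti-diagonals to restore the Hankel structure.
              fid = np.array([np.mean(H[::-1].diagonal(k))
                              for k in range(-L + 1, n - L + 1)])
          return fid

      rng = np.random.default_rng(0)
      t = np.arange(1024)
      sig = np.exp(2j * np.pi * 0.05 * t - t / 400.0)   # decaying exponential
      noisy = sig + 0.3 * (rng.standard_normal(1024)
                           + 1j * rng.standard_normal(1024))
      print(np.abs(cadzow_denoise(noisy, rank=1) - sig).mean())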

  15. [Acceleration of osmotic dehydration process through ohmic heating of foods: raspberries (Rubus idaeus)].

    PubMed

    Simpson, Ricardo R; Jiménez, Maite P; Carevic, Erica G; Grancelli, Romina M

    2007-06-01

    Raspberries (Rubus idaeus) were osmotically dehydrated by applying a conventional method under the assumption of a homogeneous solution, in a 62% glucose solution at 50 degrees C. Raspberries were also osmotically dehydrated using ohmic heating in a 57% glucose solution at a variable voltage (to maintain the temperature between 40 and 50 degrees C) and an electric field intensity <100 V/cm. Comparing the results from both experiments made it evident that processing time is reduced when the ohmic heating technique is used; in some cases this reduction reached 50%. This is explained by an effect additional to the thermal damage generated in an ohmic process, known as electroporation.

  16. Data Streaming for Metabolomics: Accelerating Data Processing and Analysis from Days to Minutes

    PubMed Central

    2016-01-01

    The speed and throughput of analytical platforms has been a driving force in recent years in the “omics” technologies and while great strides have been accomplished in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of the increasingly complex data files and studies with a greater number of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large scale example on a 1000 sample data set demonstrated a 10 000-fold time savings, reducing data analysis time from days to minutes. Further, XCMS Stream has the capability to increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of data acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition. PMID:27983788

  17. Data Streaming for Metabolomics: Accelerating Data Processing and Analysis from Days to Minutes.

    PubMed

    Montenegro-Burke, J Rafael; Aisporna, Aries E; Benton, H Paul; Rinehart, Duane; Fang, Mingliang; Huan, Tao; Warth, Benedikt; Forsberg, Erica; Abe, Brian T; Ivanisevic, Julijana; Wolan, Dennis W; Teyton, Luc; Lairson, Luke; Siuzdak, Gary

    2017-01-17

    The speed and throughput of analytical platforms has been a driving force in recent years in the "omics" technologies and while great strides have been accomplished in both chromatography and mass spectrometry, data analysis times have not benefited at the same pace. Even though personal computers have become more powerful, data transfer times still represent a bottleneck in data processing because of the increasingly complex data files and studies with a greater number of samples. To meet the demand of analyzing hundreds to thousands of samples within a given experiment, we have developed a data streaming platform, XCMS Stream, which capitalizes on the acquisition time to compress and stream recently acquired data files to data processing servers, mimicking just-in-time production strategies from the manufacturing industry. The utility of this XCMS Online-based technology is demonstrated here in the analysis of T cell metabolism and other large-scale metabolomic studies. A large scale example on a 1000 sample data set demonstrated a 10 000-fold time savings, reducing data analysis time from days to minutes. Further, XCMS Stream has the capability to increase the efficiency of downstream biochemical dependent data acquisition (BDDA) analysis by initiating data conversion and data processing on subsets of data acquired, expanding its application beyond data transfer to smart preliminary data decision-making prior to full acquisition.
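
    A toy sketch of the streaming idea follows; this is not XCMS Stream itself. Each completed raw file is compressed and shipped while acquisition continues, so transfer overlaps the run instead of following it. The send_to_server call and the file loop are stand-ins for a real transfer layer and instrument.

      # Hedged producer-consumer sketch of compress-and-stream during acquisition.
      import queue, threading, zlib

      acquired = queue.Queue()

      def send_to_server(name, blob):
          # Stand-in for a network transfer to the processing server.
          print(f"streamed {name}: {len(blob)} bytes")

      def uploader():
          while True:
              name, payload = acquired.get()
              if name is None:          # sentinel: acquisition finished
                  break
              send_to_server(name, zlib.compress(payload))

      worker = threading.Thread(target=uploader)
      worker.start()
      for i in range(5):                # stand-in for the instrument run
          acquired.put((f"sample_{i}.raw", bytes(100_000)))  # file completed
      acquired.put((None, None))
      worker.join()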

  18. Basic Electropolishing Process Research and Development in Support of Improved Reliable Performance SRF Cavities for the Future Accelerator

    SciTech Connect

    Tian, H.; Reece, C. E.; Kelley, M. J.

    2009-05-01

    Future accelerators require unprecedented cavity performance, which is strongly influenced by interior surface nanosmoothness. Electropolishing is the technique of choice to be developed for high-field superconducting radiofrequency cavities. Electrochemical impedance spectroscopy (EIS) and related techniques point to an electropolishing mechanism for Nb in a sulfuric and hydrofluoric acid electrolyte that is controlled by a compact surface salt film under F- diffusion-limited mass-transport control. These and other findings are currently guiding a systematic characterization to form the basis for cavity process optimization of parameters such as flow rate, electrolyte composition and temperature. This integrated analysis is expected to provide optimum EP parameter sets for controlled, reproducible and uniform surface leveling of Nb SRF cavities.

  19. PRIME process: an alternative to multiple layer resist systems and high accelerating voltage for e-beam lithography

    NASA Astrophysics Data System (ADS)

    Tedesco, Serge V.; Pierrat, Christophe; Vinet, Francoise; Florin, Brigitte; Lerme, Michel; Guibert, Jean C.

    1990-05-01

    A new positive working system for e-beam lithography, called PRIME (Positive Resist IMage by dry Etching), is proposed. High contrast (about 6) and 75 nm line/space resolution in 0.35 µm thick resist are achieved. Very steep profiles can be obtained in thick resist even at low accelerating voltage, such as a 0.2 µm hole in 1.2 µm thick resist at 20 keV. To quantify both intra- and inter-proximity effects on positive tone resist, specific two-layer electrical test chips were designed. The PRIME process was then compared, in terms of proximity effect magnitude at 20 kV and 50 kV, to RAY-PF resist, clearly showing advantages over such three-component novolac-based positive resists.

  20. Process mapping evaluation of medication reconciliation in academic teaching hospitals: a critical step in quality improvement

    PubMed Central

    Holbrook, Anne; Bowen, James M; Patel, Harsit; O'Brien, Chris; You, John J; Tahavori, Roshan; Doleweerd, Jeff; Berezny, Tim; Perri, Dan; Nieuwstraten, Carmine; Troyan, Sue; Patel, Ameen

    2016-01-01

    Background Medication reconciliation (MedRec) has been a mandated or recommended activity in Canada, the USA and the UK for nearly 10 years. Accreditation bodies in North America will soon require MedRec for every admission, transfer and discharge of every patient. Studies of MedRec have revealed unintentional discrepancies in prescriptions but no clear evidence that clinically important outcomes are improved, leading to widely variable practices. Our objective was to apply process mapping methodology to MedRec to clarify current processes and resource usage, identify potential efficiencies and gaps in care, and make recommendations for improvement in the light of current literature evidence of effectiveness. Methods Process engineers observed and recorded all MedRec activities at 3 academic teaching hospitals, from initial emergency department triage to patient discharge, for general internal medicine patients. Process maps were validated with frontline staff, then with the study team, managers and patient safety leads to summarise current problems and discuss solutions. Results Across all of the 3 hospitals, 5 general problem themes were identified: lack of use of all available medication sources, duplication of effort creating inefficiency, lack of timeliness of completion of the Best Possible Medication History, lack of standardisation of the MedRec process, and suboptimal communication of MedRec issues between physicians, pharmacists and nurses. Discussion MedRec as practised in this environment requires improvements in quality, timeliness, consistency and dissemination. Further research exploring efficient use of resources, in terms of personnel and costs, is required. PMID:28039294

  1. SBGNViz: A Tool for Visualization and Complexity Management of SBGN Process Description Maps

    PubMed Central

    Dogrusoz, Ugur; Sumer, Selcuk Onur; Aksoy, Bülent Arman; Babur, Özgün; Demir, Emek

    2015-01-01

    Background Information about cellular processes and pathways is becoming increasingly available in detailed, computable standard formats such as BioPAX and SBGN. Effective visualization of this information is a key recurring requirement for biological data analysis, especially for -omic data. Biological data analysis is rapidly migrating to web based platforms; thus there is a substantial need for sophisticated web based pathway viewers that support these platforms and other use cases. Results Towards this goal, we developed a web based viewer named SBGNViz for process description maps in SBGN (SBGN-PD). SBGNViz can visualize both BioPAX and SBGN formats. Unique features of SBGNViz include the ability to nest nodes to arbitrary depths to represent molecular complexes and cellular locations, automatic pathway layout, editing and highlighting facilities to enable focus on sub-maps, and the ability to inspect pathway members for detailed information from EntrezGene. SBGNViz can be used within a web browser without any installation and can be readily embedded into web pages. SBGNViz has two editions built with ActionScript and JavaScript. The JavaScript edition, which also works on touch enabled devices, introduces novel methods for managing and reducing complexity of large SBGN-PD maps for more effective analysis. Conclusion SBGNViz fills an important gap by making the large and fast-growing corpus of rich pathway information accessible to web based platforms. SBGNViz can be used in a variety of contexts and in multiple scenarios ranging from visualization of the results of a single study in a web page to building data analysis platforms. PMID:26030594

  2. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks, each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  3. Audit Report on "Waste Processing and Recovery Act Acceleration Efforts for Contact-Handled Transuranic Waste at the Hanford Site"

    SciTech Connect

    2010-05-01

    The Department of Energy's Office of Environmental Management's (EM), Richland Operations Office (Richland), is responsible for disposing of the Hanford Site's (Hanford) transuranic (TRU) waste, including nearly 12,000 cubic meters of radioactive contact-handled TRU wastes. Prior to disposing of this waste at the Department's Waste Isolation Pilot Plant (WIPP), Richland must certify that it meets WIPP's waste acceptance criteria. To be certified, the waste must be characterized, screened for prohibited items, treated (if necessary) and placed into a satisfactory disposal container. In a February 2008 amendment to an existing Record of Decision (Decision), the Department announced its plan to ship up to 8,764 cubic meters of contact-handled TRU waste from Hanford and other waste generator sites to the Advanced Mixed Waste Treatment Project (AMWTP) at Idaho's National Laboratory (INL) for processing and certification prior to disposal at WIPP. The Department decided to maximize the use of the AMWTP's automated waste processing capabilities to compact and, thereby, reduce the volume of contact-handled TRU waste. Compaction reduces the number of shipments and permits WIPP to more efficiently use its limited TRU waste disposal capacity. The Decision noted that the use of AMWTP would avoid the time and expense of establishing a processing capability at other sites. In May 2009, EM allocated $229 million of American Recovery and Reinvestment Act of 2009 (Recovery Act) funds to support Hanford's Solid Waste Program, including Hanford's contact-handled TRU waste. Besides providing jobs, these funds were intended to accelerate cleanup in the short term. We initiated this audit to determine whether the Department was effectively using Recovery Act funds to accelerate processing of Hanford's contact-handled TRU waste. Relying on the availability of Recovery Act funds, the Department changed course and approved an alternative plan that could increase costs by about $25 million

  4. Hot Deformation Processing Map and Microstructural Evaluation of the Ni-Based Superalloy IN-738LC

    NASA Astrophysics Data System (ADS)

    Sajjadi, S. A.; Chaichi, A.; Ezatpour, H. R.; Maghsoudlou, A.; Kalaie, M. A.

    2016-04-01

    Hot deformation behavior of the Ni-based superalloy IN-738LC was investigated by means of hot compression tests over the temperature range of 1000-1200 °C and strain rate range of 0.01-1 s-1. The obtained peak flow stresses were related to strain rate and temperature through the hyperbolic sine equation with an activation energy of 950 kJ/mol. The dynamic material model was used to obtain the processing map of IN-738LC. Analysis of the microstructure was carried out in order to study the characteristics of each domain represented in the processing map. The results showed that dynamic recrystallization occurs in the temperature range of 1150-1200 °C and strain rate of 0.1 s-1, with a maximum power dissipation efficiency of 35%. The unstable domain appeared in the temperature range of 1000-1200 °C and strain rate of 1 s-1, owing to the occurrence of severe deformation bands and grain-boundary cracking.
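
    For readers unfamiliar with dynamic-material-model maps, the sketch below shows the standard construction: the strain-rate sensitivity m = dln(sigma)/dln(rate) and the power dissipation efficiency eta = 2m/(m+1), computed on a synthetic flow-stress grid. None of the numbers are the IN-738LC data.

      # Hedged sketch of building a processing-map efficiency grid.
      import numpy as np

      temps = np.array([1000., 1050., 1100., 1150., 1200.]) + 273.15  # K
      rates = np.array([0.01, 0.1, 1.0])                              # 1/s
      rng = np.random.default_rng(1)
      sigma = (200 * rates[None, :] ** 0.18                           # MPa
               * np.exp(-(temps[:, None] - 1273.15) / 400)
               * (1 + 0.02 * rng.random((5, 3))))                     # synthetic

      m = np.gradient(np.log(sigma), np.log(rates), axis=1)  # rate sensitivity
      eta = 2 * m / (m + 1)                                  # efficiency
      print(np.round(eta, 2))  # contour eta over (T, rate) to get the map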

  5. Mapping the connectivity underlying multimodal (verbal and non-verbal) semantic processing: a brain electrostimulation study.

    PubMed

    Moritz-Gasser, Sylvie; Herbet, Guillaume; Duffau, Hugues

    2013-08-01

    Accessing the meaning of words, objects, people and facts is a human ability, made possible by semantic processing. Although studies concerning its cortical organization are abundant, the subcortical connectivity underlying this semantic network has received less attention. We used intraoperative direct electrostimulation, which mimics a transient virtual lesion, during brain surgery for glioma in eight awake patients to map the anatomical white matter substrate subserving the semantic system. Patients performed a picture naming task and a non-verbal semantic association test during the electrical mapping. Direct electrostimulation of the inferior fronto-occipital fascicle, a poorly known ventral association pathway which runs throughout the brain, induced semantic disturbances in all cases. These transient disorders were highly reproducible and concerned verbal as well as non-verbal output. Our results highlight for the first time the essential role of the left inferior fronto-occipital fascicle in multimodal (and not only verbal) semantic processing. On the basis of these original findings, and in light of phylogenetic considerations regarding this fascicle, we suggest its possible implication in the monitoring of the human level of consciousness related to semantic memory, namely noetic consciousness.

  6. Mapping Rule-Based And Stochastic Constraints To Connection Architectures: Implication For Hierarchical Image Processing

    NASA Astrophysics Data System (ADS)

    Miller, Michael I.; Roysam, Badrinath; Smith, Kurt R.

    1988-10-01

    Essential to the solution of ill-posed problems in vision and image processing is the need to use object constraints in the reconstruction. While Bayesian methods have shown the greatest promise, a fundamental difficulty has persisted in that many of the available constraints are in the form of deterministic rules rather than probability distributions and are thus not readily incorporated as Bayesian priors. In this paper, we propose a general method for mapping a large class of rule-based constraints to their equivalent stochastic Gibbs distribution representation. This mapping allows us to solve stochastic estimation problems over rule-generated constraint spaces within a Bayesian framework. As part of this approach we derive a method based on Langevin's stochastic differential equation and a regularization technique based on the classical autologistic transfer function that allows us to update every site simultaneously regardless of the neighbourhood structure. This allows us to implement a completely parallel method for generating the constraint sets corresponding to the regular grammar languages on massively parallel networks. We illustrate these ideas by formulating the image reconstruction problem based on a hierarchy of rule-based and stochastic constraints, and derive a fully parallel estimator structure. We also present results computed on the AMT DAP500 massively parallel digital computer, a mesh-connected 32x32 array of processing elements configured in a Single-Instruction, Multiple-Data stream architecture.
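
    A toy numpy version of the Langevin idea follows: sampling a Gibbs distribution p(x) proportional to exp(-E(x)) by integrating dx = -dE/dx dt + sqrt(2 dt) dW, with every site updated simultaneously. The smoothness energy is an illustrative stand-in for the paper's rule-generated constraints.

      # Hedged sketch: fully parallel Langevin sampling of a Gibbs field.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal((32, 32))   # image-like field, all sites at once
      dt = 1e-3

      def grad_energy(x):
          # Smoothness prior: E = 0.5 * sum of squared neighbour differences
          # (gradient = negative discrete Laplacian) plus a quadratic well.
          lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                 np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
          return -lap + x

      for _ in range(1000):               # simultaneous site updates
          x += (-grad_energy(x) * dt
                + np.sqrt(2 * dt) * rng.standard_normal(x.shape))
      print(x.std())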

  7. Geological Mapping of Fortuna Tessera (V-2): Venus and Earth's Archean Process Comparisons

    NASA Technical Reports Server (NTRS)

    Head, James W.; Hurwitz, D. M.; Ivanov, M. A.; Basilevsky, A. T.; Kumar, P. Senthil

    2008-01-01

    The geological features, structures, thermal conditions, interpreted processes, and outstanding questions related to both the Earth's Archean and Venus share many similarities and we are using a problem-oriented approach to Venus mapping, guided by insight from the Archean record of the Earth, to gain new insight into the evolution of Venus and Earth's Archean. The Earth's preserved and well-documented Archean record provides important insight into high heat-flux tectonic and magmatic environments and structures and the surface of Venus reveals the current configuration and recent geological record of analogous high-temperature environments unmodified by subsequent several billion years of segmentation and overprinting, as on Earth. Elsewhere we have addressed the nature of the Earth's Archean, the similarities to and differences from Venus, and the specific Venus and Earth-Archean problems on which progress might be made through comparison. Here we present the major goals of the Venus-Archean comparison and show how preliminary mapping of the geology of the V-2 Fortuna Tessera quadrangle is providing insight on these problems. We have identified five key themes and questions common to both the Archean and Venus, the assessment of which could provide important new insights into the history and processes of both planets.

  8. Emerging techniques for assisting and accelerating food freezing processes: A review of recent research progresses.

    PubMed

    Cheng, Lina; Sun, Da-Wen; Zhu, Zhiwei; Zhang, Zi

    2017-03-04

    Freezing plays an important role in food preservation and the emergence of rapid freezing technologies can be highly beneficial to the food industry. This paper reviews some novel food freezing technologies, including high-pressure freezing (HPF), ultrasound-assisted freezing (UAF), electrically disturbed freezing (EF) and magnetically disturbed freezing (MF), microwave-assisted freezing (MWF), and osmo-dehydro-freezing (ODF). HPF and UAF can initiate ice nucleation rapidly, leading to uniform distribution of ice crystals and the control of their size and shape. Specifically, the former is focused on increasing the degree of supercooling, whereas the latter aims to decrease it. Direct current electric freezing (DC-EF) and alternating current electric freezing (AC-EF) exhibit different effects on ice nucleation. DC-EF can promote ice nucleation and AC-EF has the opposite effect. Furthermore, ODF has been successfully used for freezing various vegetables and fruit. MWF cannot control the nucleation temperature, but can decrease supercooling degree, thus decreasing the size of ice crystals. The heat and mass transfer processes during ODF have been investigated experimentally and modeled mathematically. More studies should be carried out to understand the effects of these technologies on food freezing process.

  9. The "step feature" of suprathermal ion distributions: a discriminator between acceleration processes?

    NASA Astrophysics Data System (ADS)

    Fahr, H. J.; Fichtner, H.

    2012-09-01

    The discussion of exactly which process causes the preferred build-up of v^-5 power-law tails in the velocity distribution of suprathermal particles in the solar wind is still ongoing. Criteria allowing one to discriminate between the various suggestions that have been made would be useful in order to clarify the physics behind these tails. With this study, we draw attention to the so-called "step feature" of the velocity distributions and offer a criterion that allows one to distinguish between those scenarios that employ velocity diffusion, i.e. second-order Fermi processes, which are prime candidates in the present debate. With an analytical approximation to the self-consistently obtained velocity diffusion coefficient, we solve the transport equation for suprathermal particles. The numerical simulation reveals that this form of the diffusion coefficient naturally leads to the step feature of the velocity distributions. This finding favours - at least in regions where the step feature appears (i.e. for heliocentric distances up to about 11 AU and at lower energies) - standard velocity diffusion as a consequence of the particles' interactions with the plasma wave turbulence, as opposed to diffusion caused by velocity fluctuation-induced compressions and rarefactions.

  10. Influence of processing procedure on the quality of Radix Scrophulariae: a quantitative evaluation of the main compounds obtained by accelerated solvent extraction and high-performance liquid chromatography.

    PubMed

    Cao, Gang; Wu, Xin; Li, Qinglin; Cai, Hao; Cai, Baochang; Zhu, Xuemei

    2015-02-01

    An improved high-performance liquid chromatography with diode array detection combined with accelerated solvent extraction method was used to simultaneously determine six compounds in crude and processed Radix Scrophulariae samples. Accelerated solvent extraction parameters such as extraction solvent, temperature, number of cycles, and analysis procedure were systematically optimized. The results indicated that compared with crude Radix Scrophulariae samples, the processed samples had lower contents of harpagide and harpagoside but higher contents of catalpol, acteoside, angoroside C, and cinnamic acid. The established method was sufficiently rapid and reliable for the global quality evaluation of crude and processed herbal medicines.

  11. Web mapping system for complex processing and visualization of environmental geospatial datasets

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons, such as the inherent heterogeneity of environmental datasets, large dataset volumes, the complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of unified terminology, the development of environmental geodata access, processing and visualization services as well as client applications turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, displaying map legends and corresponding metadata information. It should be noted that modern web mapping systems as integrated geoportal applications are developed based on the SOA and might be considered as complexes of interconnected software tools for working with geospatial data. In the report a complex web mapping system including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive is presented. The GIS web client comprises three basic tiers:
    1. A tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format.
    2. A tier of JavaScript objects implementing methods handling:
       - NetCDF metadata
       - a Task XML object for configuring user calculations, input and output formats
       - OGC WMS/WFS cartographical services
    3. A graphical user interface (GUI) tier of JavaScript objects realizing the web application business logic.
    The metadata tier consists of a number of JSON objects containing technical information describing geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc). The middleware tier of JavaScript objects implementing methods for handling geospatial

  12. VLSI architectures for geometrical mapping problems in high-definition image processing

    NASA Technical Reports Server (NTRS)

    Kim, K.; Lee, J.

    1991-01-01

    This paper explores a VLSI architecture for geometrical mapping address computation. The geometric transformation is discussed in the context of plane projective geometry, which invokes a set of basic transformations to be implemented for general image processing. Homogeneous and 2-dimensional cartesian coordinates are employed to represent the transformations, each of which is implemented via an augmented CORDIC as a processing element. A specific scheme for a processor, which utilizes full pipelining at the macro level and parallel constant-factor-redundant arithmetic with full pipelining at the micro level, is assessed to produce a single VLSI chip for HDTV applications using state-of-the-art MOS technology.
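
    The homogeneous-coordinate mapping referred to here reduces, per pixel, to a 3x3 projective transform; a small numpy illustration follows. The paper implements this with CORDIC processing elements in hardware, and the matrix below is an arbitrary example.

      # Hedged sketch: plane projective transform via homogeneous coordinates.
      import numpy as np

      H = np.array([[ 1.2,  0.1, 5.0],   # example 3x3 projective matrix
                    [-0.1,  1.2, 3.0],
                    [ 1e-4, 0.0, 1.0]])

      def project(points_xy):
          pts = np.c_[points_xy, np.ones(len(points_xy))]  # to homogeneous
          out = pts @ H.T
          return out[:, :2] / out[:, 2:3]                  # back to cartesian

      print(project(np.array([[0., 0.], [100., 50.]])))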

  13. Accelerators and the Accelerator Community

    SciTech Connect

    Malamud, Ernest; Sessler, Andrew

    2008-06-01

    In this paper, standing back--looking from afar--and adopting a historical perspective, the field of accelerator science is examined. How it grew, what are the forces that made it what it is, where it is now, and what it is likely to be in the future are the subjects explored. Clearly, a great deal of personal opinion is invoked in this process.

  14. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily managed by the students, who can perform simple hands-on activities, stimulating interest in physics and bringing the students close to modern laboratory techniques.

  15. SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy

    SciTech Connect

    Huynh, J; Kim, Y

    2015-06-15

    Purpose: To apply a risk-based assessment and analysis technique (AAPM TG 100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of the personnel involved in eye plaque brachytherapy were defined for the retinal specialist, radiation oncologist, nurse and medical physicist. The entire procedure was examined carefully: major processes were identified first, and the details of each major process were then followed. Results: Seventy-one total potential modes were identified. The eight major processes (with the corresponding number of detailed modes) are patient consultation (2 modes), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3). Half of the total modes (36) are related to the physicist, although the physicist is not involved in steps such as the actual procedures of suturing and removing the plaque. Conclusion: Failure modes can arise not only from physicist-related procedures such as treatment planning and source activity calibration, but also from more clinical procedures performed by other medical staff. Improving the accuracy of communication for non-physicist-related clinical procedures could be one approach to preventing human errors, while a more rigorous physics double check would reduce errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effects analysis (FMEA) will identify the top tiers of modes by ranking all possible modes with a risk priority number (RPN). For those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.
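
    The FMEA step anticipated in the conclusion ranks modes by risk priority number, RPN = severity x occurrence x detectability. A minimal sketch follows; the mode names and scores are invented placeholders, not the study's data.

      # Hedged sketch: rank failure modes by RPN (severity, occurrence, detectability).
      modes = [
          ("wrong seed activity entered in plan", 9, 3, 4),
          ("plaque assembled with seed gap",      7, 2, 5),
          ("tumor localization image mislabeled", 8, 2, 3),
      ]
      ranked = sorted(modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
      for name, s, o, d in ranked:
          print(f"RPN={s*o*d:3d}  {name}")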

  16. The Potential for Signal Integration and Processing in Interacting Map Kinase Cascades

    PubMed Central

    Schwacke, John H.; Voit, Eberhard O.

    2009-01-01

    The cellular response to environmental stimuli requires biochemical information processing through which sensory inputs and cellular status are integrated and translated into appropriate responses by way of interacting networks of enzymes. One such network, the Mitogen Activated Protein (MAP) kinase cascade is a highly conserved signal transduction module that propagates signals from cell surface receptors to various cytosolic and nuclear targets by way of a phosphorylation cascade. We have investigated the potential for signal processing within a network of interacting feed-forward kinase cascades typified by the MAP kinase cascade. A genetic algorithm was used to search for sets of kinetic parameters demonstrating representative key input-output patterns of interest. We discuss two of the networks identified in our study, one implementing the exclusive-or function (XOR) and another implementing what we refer to as an in-band detector (IBD) or two-sided threshold. These examples confirm the potential for logic and amplitude-dependent signal processing in interacting MAP kinase cascades demonstrating limited cross-talk. Specifically, the XOR function allows the network to respond to either one, but not both signals simultaneously, while the IBD permits the network to respond exclusively to signals within a given range of strength, and to suppress signals below as well as above this range. The solution to the XOR problem is interesting in that it requires only two interacting pathways, crosstalk at only one layer, and no feedback or explicit inhibition. These types of responses are not only biologically relevant but constitute signal processing modules that can be combined to create other logical functions and that, in contrast to amplification, cannot be achieved with a single cascade or with two non-interacting cascades. Our computational results revealed surprising similarities between experimental data describing the JNK/MKK4/MKK7 pathway and the solution for

  17. PARTICLE ACCELERATOR

    DOEpatents

    Teng, L.C.

    1960-01-19

    A combination of two accelerators, a cyclotron and a ring-shaped accelerator which has a portion disposed tangentially to the cyclotron, is described. Means are provided to transfer particles from the cyclotron to the ring accelerator, including a magnetic deflector within the cyclotron, a magnetic shield between the ring accelerator and the cyclotron, and a magnetic inflector within the ring accelerator.

  18. Natural ageing process accelerates the release of Ag from functional textile in various exposure scenarios

    NASA Astrophysics Data System (ADS)

    Ding, Dahu; Chen, Lulu; Dong, Shaowei; Cai, Hao; Chen, Jifei; Jiang, Canlan; Cai, Tianming

    2016-11-01

    The natural ageing process occurs throughout the life cycle of textile products and may influence the release behavior of additives such as silver nanoparticles (Ag NPs). In this study, we assessed the release of Ag NPs from an Ag NP-functionalized textile in five different exposure scenarios (tap water (TW), pond water (PW), rain water (RW), artificial sweat (AS), and detergent solution (DS), with deionized water (DW) as a reference), all of which are very likely to occur throughout the life cycle of the textile. For the pristine textile, although the most pronounced release was found in DW (6–15 μg Ag/g textile), the highest release rate was found in RW (around 7 μg Ag/(g textile·h)). After the ageing treatment, the total released Ag increased by 75.7–386.0% in DW, AS and DS. Morphological analysis clearly showed that the Ag NPs were detached from the surface of the textile fibre by the ageing treatment. This study provides useful information for the risk assessment of nano-enhanced textile products.

  19. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    PubMed

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.

  20. Accelerated evaluation of the robustness of treatment plans against geometric uncertainties by Gaussian processes.

    PubMed

    Sobotta, B; Söhn, M; Alber, M

    2012-12-07

    In order to provide a consistently high quality treatment, it is of great interest to assess the robustness of a treatment plan under the influence of geometric uncertainties. One possible method to implement this is to run treatment simulations for all scenarios that may arise from these uncertainties. These simulations may be evaluated in terms of the statistical distribution of the outcomes (as given by various dosimetric quality metrics) or statistical moments thereof, e.g. mean and/or variance. This paper introduces a method to compute the outcome distribution and all associated values of interest in a very efficient manner. This is accomplished by substituting the original patient model with a surrogate provided by a machine learning algorithm. This Gaussian process (GP) is trained to mimic the behavior of the patient model based on only very few samples. Once trained, the GP surrogate takes the place of the patient model in all subsequent calculations. The approach is demonstrated on two examples. The achieved computational speedup is more than one order of magnitude.
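
    A minimal numpy sketch of the surrogate idea follows, assuming an RBF-kernel GP mean predictor trained on a handful of simulated geometric-shift scenarios and then evaluated over a large scenario sweep; the stand-in metric function, kernel length scale and all sizes are illustrative.

      # Hedged sketch: GP surrogate replacing expensive treatment simulations.
      import numpy as np

      def rbf(a, b, length=2.0):
          d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
          return np.exp(-0.5 * d2 / length**2)

      rng = np.random.default_rng(0)
      X_train = rng.uniform(-5, 5, (12, 2))   # few sampled (x, y) shifts, mm
      y_train = np.cos(0.3 * np.linalg.norm(X_train, axis=1))  # stand-in metric

      K = rbf(X_train, X_train) + 1e-8 * np.eye(len(X_train))
      alpha = np.linalg.solve(K, y_train)     # GP fit (noise-free mean)

      X_new = rng.uniform(-5, 5, (10000, 2))  # full scenario sweep
      y_pred = rbf(X_new, X_train) @ alpha    # cheap surrogate evaluation
      print(y_pred.mean(), y_pred.std())      # outcome distribution moments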

  1. Natural ageing process accelerates the release of Ag from functional textile in various exposure scenarios

    PubMed Central

    Ding, Dahu; Chen, Lulu; Dong, Shaowei; Cai, Hao; Chen, Jifei; Jiang, Canlan; Cai, Tianming

    2016-01-01

    The natural ageing process occurs throughout the life cycle of textile products and may influence the release behavior of additives such as silver nanoparticles (Ag NPs). In this study, we assessed the release of Ag NPs from an Ag NP-functionalized textile in five different exposure scenarios (tap water (TW), pond water (PW), rain water (RW), artificial sweat (AS), and detergent solution (DS), with deionized water (DW) as a reference), all of which are very likely to occur throughout the life cycle of the textile. For the pristine textile, although the most pronounced release was found in DW (6–15 μg Ag/g textile), the highest release rate was found in RW (around 7 μg Ag/(g textile·h)). After the ageing treatment, the total released Ag increased by 75.7–386.0% in DW, AS and DS. Morphological analysis clearly showed that the Ag NPs were detached from the surface of the textile fibre by the ageing treatment. This study provides useful information for the risk assessment of nano-enhanced textile products. PMID:27869136

  2. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    PubMed Central

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  3. Rock varnish in New York: An accelerated snapshot of accretionary processes

    NASA Astrophysics Data System (ADS)

    Krinsley, David H.; Dorn, Ronald I.; DiGregorio, Barry E.; Langworthy, Kurt A.; Ditto, Jeffrey

    2012-02-01

    Samples of manganiferous rock varnish collected from fluvial, bedrock outcrop and Erie Barge Canal settings in New York state host a variety of diatom, fungal and bacterial microbial forms that are enhanced in manganese and iron. Use of a Dual-Beam Focused Ion Beam Scanning Electron Microscope to manipulate the varnish in situ reveals microbial forms that would not have otherwise been identified. The relative abundance of Mn-Fe-enriched biotic forms in New York samples is far greater than varnishes collected from warm deserts. Moisture availability has long been noted as a possible control on varnish growth rates, a hypothesis consistent with the greater abundance of Mn-enhancing bioforms. Sub-micron images of incipient varnish formation reveal that varnishing in New York probably starts with the mortality of microorganisms that enhanced Mn on bare mineral surfaces; microbial death results in the adsorption of the Mn-rich sheath onto the rock in the form of filamentous networks. Clay minerals are then cemented by remobilization of the Mn-rich material. Thus, the previously unanswered question of what comes first - clay mineral deposition or enhancement of Mn - can be answered in New York because of the faster rate of varnish growth. In contrast, very slow rates of varnishing seen in warm deserts, of microns per thousand years, make it less likely that collected samples will reveal varnish accretionary processes than samples collected from fast-accreting moist settings.

  4. Natural ageing process accelerates the release of Ag from functional textile in various exposure scenarios.

    PubMed

    Ding, Dahu; Chen, Lulu; Dong, Shaowei; Cai, Hao; Chen, Jifei; Jiang, Canlan; Cai, Tianming

    2016-11-21

    The natural ageing process occurs throughout the life cycle of textile products and may influence the release behavior of additives such as silver nanoparticles (Ag NPs). In this study, we assessed the release of Ag NPs from an Ag NP-functionalized textile in five exposure scenarios likely to occur throughout the textile's life cycle (tap water (TW), pond water (PW), rain water (RW), artificial sweat (AS), and detergent solution (DS), with deionized water (DW) as a reference). For the pristine textile, although the most remarkable total release was found in DW (6-15 μg Ag/g textile), the highest release rate was found in RW (around 7 μg Ag/(g textile·h)). After the ageing treatment, the total released Ag increased by 75.7-386.0% in DW, AS and DS. Morphological analysis clearly showed that Ag NPs were detached from the surface of the textile fibre by the ageing treatment. This study provides useful information for risk assessment of nano-enhanced textile products.

  5. Making clinical case-based learning in veterinary medicine visible: analysis of collaborative concept-mapping processes and reflections.

    PubMed

    Khosa, Deep K; Volet, Simone E; Bolton, John R

    2014-01-01

    The value of collaborative concept mapping in assisting students to develop an understanding of complex concepts across a broad range of basic and applied science subjects is well documented. Less is known about students' learning processes that occur during the construction of a concept map, especially in the context of clinical cases in veterinary medicine. This study investigated the unfolding collaborative learning processes that took place in real-time concept mapping of a clinical case by veterinary medical students and explored students' and their teacher's reflections on the value of this activity. This study had two parts. The first part investigated the cognitive and metacognitive learning processes of two groups of students who displayed divergent learning outcomes in a concept mapping task. Meaningful group differences were found in their level of learning engagement in terms of the extent to which they spent time understanding and co-constructing knowledge along with completing the task at hand. The second part explored students' and their teacher's views on the value of concept mapping as a learning and teaching tool. The students' and their teacher's perceptions revealed congruent and contrasting notions about the usefulness of concept mapping. The relevance of concept mapping to clinical case-based learning in veterinary medicine is discussed, along with directions for future research.

  6. Neogene cratonic erosion fluxes and landform evolution processes from regional regolith mapping (Burkina Faso, West Africa)

    NASA Astrophysics Data System (ADS)

    Grimaud, Jean-Louis; Chardon, Dominique; Metelka, Václav; Beauvais, Anicet; Bamba, Ousmane

    2015-07-01

    The regionally correlated and dated regolith-paleolandform sequence of Sub-Saharan West Africa offers a unique opportunity to constrain continental-scale regolith dynamics as the key part of the sediment routing system. In this study, a regolith mapping protocol is developed and applied at the scale of Southwestern Burkina Faso. Mapping combines field survey and remote sensing data to reconstruct the topography of the last pediplain that formed over West Africa in the Early and Mid-Miocene (24-11 Ma). The nature and preservation pattern of the pediplain are controlled by the spatial variation of bedrock lithology and are partitioned among large drainage basins. Quantification of pediplain dissection and drainage growth allows definition of a cratonic background denudation rate of 2 m/My and a minimum characteristic timescale of 20 Ma for shield resurfacing. These results may be used to simulate minimum export fluxes of drainage basins of constrained size over geological timescales. Background cratonic denudation results in a clastic export flux of ~ 4 t/km2/year, which is limited by low denudation efficiency of slope processes and correlatively high regolith storage capacity of tropical shields. These salient characteristics of shields' surface dynamics would tend to smooth the riverine export fluxes of shields through geological time.
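
    The conversion from a background denudation rate to a mass export flux is a one-line calculation; the sketch below reproduces the reported order of magnitude under an assumed rock density and clastic fraction (both are assumptions, not values taken from the record above).

      # Back-of-envelope check: denudation rate -> clastic export flux.
      denudation_m_per_my = 2.0        # cratonic background rate from the study, m/My
      rock_density_t_per_m3 = 2.7      # assumed average rock density (not from the study)
      clastic_fraction = 0.75          # assumed clastic share; the rest leaves in solution

      flux_t_per_km2_yr = (denudation_m_per_my * 1e-6      # m/yr
                           * rock_density_t_per_m3         # -> t/m2/yr
                           * 1e6                           # -> t/km2/yr
                           * clastic_fraction)
      print(f"{flux_t_per_km2_yr:.1f} t/km2/yr")           # ~4 t/km2/yr, matching the reported order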

  7. Data processing for fabrication of GMT primary segments: raw data to final surface maps

    NASA Astrophysics Data System (ADS)

    Tuell, Michael T.; Hubler, William; Martin, Hubert M.; West, Steven C.; Zhou, Ping

    2014-07-01

    The Giant Magellan Telescope (GMT) primary mirror is a 25 meter f/0.7 surface composed of seven 8.4 meter circular segments, six of which are identical off-axis segments. The fabrication and testing challenges with these severely aspheric segments (about 14 mm of aspheric departure, mostly astigmatism) are well documented. Converting the raw phase data to useful surface maps involves many steps and compensations. They include large corrections for: image distortion from the off-axis null test; misalignment of the null test; departure from the ideal support forces; and temperature gradients in the mirror. The final correction simulates the active-optics correction that will be made at the telescope. Data are collected and phase maps are computed in 4D Technology's 4Sight™ software. The data are saved to a .h5 (HDF5) file and imported into MATLAB® for further analysis. A semi-automated data pipeline has been developed to reduce both the analysis time and the potential for error. As each operation is performed, results and analysis parameters are appended to a data file, so in the end the history of data processing is embedded in the file. A report and a spreadsheet are automatically generated to display the final statistics as well as how each compensation term varied during the data acquisition. This gives us valuable statistics and provides a quick starting point for investigating atypical results.
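
    The pipeline's key bookkeeping idea - every compensation step appends its parameters to the data file so the full processing history travels with the surface map - is easy to sketch. The actual pipeline is MATLAB-based and works on 4D Technology's .h5 exports; the group and attribute names below are invented for illustration.

      import datetime
      import json

      import h5py

      def apply_step(path, step_name, params, transform):
          """Apply one compensation step to the stored surface map and log
          the step and its parameters inside the same HDF5 file."""
          with h5py.File(path, "r+") as f:
              surf = f["surface_map"][...]
              f["surface_map"][...] = transform(surf, **params)
              hist = f.require_group("processing_history")      # invented layout
              entry = hist.create_group(f"{len(hist):03d}_{step_name}")
              entry.attrs["timestamp"] = datetime.datetime.now().isoformat()
              entry.attrs["parameters"] = json.dumps(params)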

  8. Hot Deformation Characteristics of 13Cr-4Ni Stainless Steel Using Constitutive Equation and Processing Map

    NASA Astrophysics Data System (ADS)

    Kishor, Brij; Chaudhari, G. P.; Nath, S. K.

    2016-07-01

    Hot compression tests were performed to study the hot deformation characteristics of 13Cr-4Ni stainless steel. The tests were performed in the strain rate range of 0.001-10 s-1 and temperature range of 900-1100 °C using Gleeble® 3800 simulator. A constitutive equation of Arrhenius type was established based on the experimental data to calculate the different material constants, and average value of apparent activation energy was found to be 444 kJ/mol. Zener-Hollomon parameter, Z, was estimated in order to characterize the flow stress behavior. Power dissipation and instability maps developed on the basis of dynamic materials model for true strain of 0.5 show optimum hot working conditions corresponding to peak efficiency range of about 28-32%. These lie in the temperature range of 950-1025 °C and corresponding strain rate range of 0.001-0.01 s-1 and in the temperature range of 1050-1100 °C and corresponding strain rate range of 0.01-0.1 s-1. The flow characteristics in these conditions show dynamic recrystallization behavior. The microstructures are correlated to the different stability domains indicated in the processing map.
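
    The two quantities named above follow from standard definitions: the Zener-Hollomon parameter Z = ε̇ exp(Q/RT), and the dynamic-materials-model efficiency of power dissipation η = 2m/(m + 1), where m is the strain-rate sensitivity d(ln σ)/d(ln ε̇). A minimal sketch using the activation energy reported above (the stress data would come from the compression tests):

      import numpy as np

      R = 8.314       # gas constant, J/(mol K)
      Q = 444e3       # reported apparent activation energy, J/mol

      def zener_hollomon(strain_rate, T_celsius):
          """Z = strain_rate * exp(Q / (R * T)), with T in kelvin."""
          return strain_rate * np.exp(Q / (R * (T_celsius + 273.15)))

      def dissipation_efficiency(ln_stress, ln_strain_rate):
          """eta = 2m/(m + 1), with m taken as the local slope of
          ln(stress) versus ln(strain rate) at fixed T and strain."""
          m = np.gradient(ln_stress, ln_strain_rate)
          return 2 * m / (m + 1)

      print(f"Z at 1000 C, 0.01 1/s: {zener_hollomon(0.01, 1000.0):.2e}")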

  9. The neurophysiology of human biological motion processing: a high-density electrical mapping study.

    PubMed

    Krakowski, Aaron I; Ross, Lars A; Snyder, Adam C; Sehatpour, Pejman; Kelly, Simon P; Foxe, John J

    2011-05-01

    The neural processing of biological motion (BM) is of profound experimental interest since it is often through the movement of another that we interpret their immediate intentions. Neuroimaging points to a specialized cortical network for processing biological motion. Here, high-density electrical mapping and source-analysis techniques were employed to interrogate the timing of information processing across this network. Participants viewed point-light-displays depicting standard body movements (e.g. jumping), while event-related potentials (ERPs) were recorded and compared to ERPs to scrambled motion control stimuli. In a pair of experiments, three major phases of BM-specific processing were identified: 1) The earliest phase of BM-sensitive modulation was characterized by a positive shift of the ERP between 100 and 200 ms after stimulus onset. This modulation was observed exclusively over the right hemisphere and source-analysis suggested a likely generator in close proximity to regions associated with general motion processing (KO/hMT). 2) The second phase of BM-sensitivity occurred from 200 to 350 ms, characterized by a robust negative-going ERP modulation over posterior middle temporal regions bilaterally. Source-analysis pointed to bilateral generators at or near the posterior superior temporal sulcus (STS). 3) A third phase of processing was evident only in our second experiment, where participants actively attended the BM aspect of the stimuli, and was manifest as a centro-parietal positive ERP deflection, likely related to later cognitive processes. These results point to very early sensory registration of biological motion, and highlight the interactive role of the posterior STS in analyzing the movements of other living organisms.

  10. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    NASA Astrophysics Data System (ADS)

    Jackson, R.; Parsons, D. R.; Czuba, J. A.; Mueller, D. S.; Rhoads, B. L.; Engel, F.; Oberg, K. A.; Best, J. L.; Johnson, K. K.; Riley, J. D.

    2011-12-01

    In addition to their common application to measurement of discharge in rivers, acoustic Doppler current profilers (ADCP) provide valuable hydrodynamic data required for understanding geomorphic and fluvial processes. The increasing use of ADCPs to explore the characteristics of complex natural flows has led to a need for standardized post-processing methods for managing, analyzing, and displaying three-dimensional velocity data. Thus far, no standard analytical technique exists for averaging velocity data from multiple ADCP transects to produce a composite depiction of three-dimensional velocity fields. A new software tool, the Velocity Mapping Toolbox (VMT), is presented herein to address this important need. VMT is a Matlab-based toolbox for processing, analyzing, and displaying velocity data collected along multiple ADCP transects. The software can be used to explore patterns of three-dimensional fluid motion through several methods for calculation of secondary flows and includes capabilities for analyzing the acoustic backscatter and bathymetric data from the ADCP. A user-friendly graphical user interface (GUI) enhances program functionality and provides ready access to two- and three- dimensional plotting functions, allowing rapid display and interrogation of velocity, backscatter, and bathymetry data. This presentation describes the basic processing methods employed by VMT and highlights the capabilities of the toolbox through some example applications.

  11. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-05

    Using a grid-based method to search for critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). Contrasting the GPU implementation with one running on central processing units (CPUs), we found a large difference in the elapsed time of the two implementations, with the GPUs being the fastest. We tested two GPUs, one designed for video games and the other for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other in HPC systems, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation runs around 10× faster on GPUs than on 16 CPUs, for any of the tested GPUs and CPUs. We found that a GPU designed for video games can be used for our application without any problem, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, in particular when single precision is used.
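
    The structure that makes this problem suit a GPU is that every grid cell is an independent test for a vanishing density gradient. A minimal CPU sketch of that idea follows (NumPy; swapping the import for CuPy moves the same array code to the GPU). This illustrates the data-parallel pattern only, not the authors' algorithm.

      import numpy as np   # "import cupy as np" would run the same code on a GPU

      def candidate_critical_points(rho, spacing=1.0, tol=1e-4):
          """Flag grid cells where |grad rho| is near zero; every cell is
          an independent test, i.e. one GPU thread per cell."""
          gx, gy, gz = np.gradient(rho, spacing)
          gmag = np.sqrt(gx**2 + gy**2 + gz**2)
          return np.argwhere(gmag < tol)     # indices of candidate critical points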

  12. Mass transport perspective on an accelerated exclusion process: Analysis of augmented current and unit-velocity phases

    NASA Astrophysics Data System (ADS)

    Dong, Jiajia; Klumpp, Stefan; Zia, R. K. P.

    2013-02-01

    In an accelerated exclusion process (AEP), each particle can “hop” to its adjacent site if empty as well as “kick” the frontmost particle when joining a cluster of size ℓ⩽ℓmax. With various choices of the interaction range, ℓmax, we find that the steady state of AEP can be found in a homogeneous phase with augmented currents (AC) or a segregated phase with holes moving at unit velocity (UV). Here we present a detailed study on the emergence of the novel phases, from two perspectives: the AEP and a mass transport process (MTP). In the latter picture, the system in the UV phase is composed of a condensate in coexistence with a fluid, while the transition from AC to UV can be regarded as condensation. Using Monte Carlo simulations, exact results for special cases, and analytic methods in a mean field approach (within the MTP), we focus on steady state currents and cluster sizes. Excellent agreement between data and theory is found, providing an insightful picture for understanding this model system.

  13. Mass transport perspective on an accelerated exclusion process: analysis of augmented current and unit-velocity phases.

    PubMed

    Dong, Jiajia; Klumpp, Stefan; Zia, R K P

    2013-02-01

    In an accelerated exclusion process (AEP), each particle can "hop" to its adjacent site if empty as well as "kick" the frontmost particle when joining a cluster of size ℓ≤ℓ(max). With various choices of the interaction range, ℓ(max), we find that the steady state of AEP can be found in a homogeneous phase with augmented currents (AC) or a segregated phase with holes moving at unit velocity (UV). Here we present a detailed study on the emergence of the novel phases, from two perspectives: the AEP and a mass transport process (MTP). In the latter picture, the system in the UV phase is composed of a condensate in coexistence with a fluid, while the transition from AC to UV can be regarded as condensation. Using Monte Carlo simulations, exact results for special cases, and analytic methods in a mean field approach (within the MTP), we focus on steady state currents and cluster sizes. Excellent agreement between data and theory is found, providing an insightful picture for understanding this model system.
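
    The hop-and-kick rule is simple enough to reproduce in a few lines of Monte Carlo (a sketch of the model as described above, not the authors' code; whether the cluster size count includes the arriving particle is a convention this sketch glosses over).

      import random

      def aep_sweep(occ, l_max):
          """One Monte Carlo sweep of the accelerated exclusion process on a
          ring. occ is a 0/1 occupation list with at least one hole. A particle
          hops right when the next site is empty; if it thereby joins a cluster
          of size <= l_max, that cluster's frontmost particle is kicked into
          the hole ahead of it."""
          L = len(occ)
          for _ in range(L):
              i = random.randrange(L)
              nxt = (i + 1) % L
              if occ[i] and not occ[nxt]:
                  occ[i], occ[nxt] = 0, 1          # ordinary exclusion hop
                  j, size = (nxt + 1) % L, 0
                  while occ[j]:                    # measure the cluster just joined
                      size += 1
                      j = (j + 1) % L              # j ends on the hole ahead of the cluster
                  if 1 <= size <= l_max:
                      occ[(j - 1) % L], occ[j] = 0, 1   # kick the frontmost particle
          return occ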

  14. High-resolution mapping of combustion processes and implications for CO2 emissions

    NASA Astrophysics Data System (ADS)

    Wang, R.; Tao, S.; Ciais, P.; Shen, H. Z.; Huang, Y.; Chen, H.; Shen, G. F.; Wang, B.; Li, W.; Zhang, Y. Y.; Lu, Y.; Zhu, D.; Chen, Y. C.; Liu, X. P.; Wang, W. T.; Wang, X. L.; Liu, W. X.; Li, B. G.; Piao, S. L.

    2013-05-01

    High-resolution mapping of fuel combustion and CO2 emission provides valuable information for modeling pollutant transport, developing mitigation policy, and inverse modeling of CO2 fluxes. Previous global emission maps included only a few fuel types, and emissions were estimated on a grid by distributing national fuel data on an equal per capita basis, using population density maps. This process distorts the geographical distribution of emissions within countries. In this study, a sub-national disaggregation method (SDM) of fuel data is applied to establish a global 0.1° × 0.1° geo-referenced inventory of fuel combustion (PKU-FUEL) and corresponding CO2 emissions (PKU-CO2) based upon 64 fuel sub-types for the year 2007. Uncertainties of the emission maps are evaluated using a Monte Carlo method. It is estimated that CO2 emission from combustion sources including fossil fuel, biomass, and solid wastes in 2007 was 11.2 Pg C yr-1 (9.1 Pg C yr-1 and 13.3 Pg C yr-1 as 5th and 95th percentiles). Of this, the emission from fossil fuel combustion was 7.83 Pg C yr-1, which is very close to the estimate of the International Energy Agency (7.87 Pg C yr-1). By replacing national data disaggregation with sub-national data in this study, the average 95th minus 5th percentile range of CO2 emission over all grid points is reduced from 417 to 68.2 Mg km-2 yr-1. The spread is reduced because the uneven distribution of per capita fuel consumption within countries is better taken into account by using sub-national fuel consumption data directly. A significant difference in per capita CO2 emissions between urban and rural areas was found in developing countries (2.08 vs. 0.598 Mg C/(cap. × yr)), but not in developed countries (3.55 vs. 3.41 Mg C/(cap. × yr)). This implies that the rapid urbanization of developing countries is very likely to drive up their emissions in the future.
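
    The difference between per-capita and sub-national disaggregation is only which administrative level supplies the fuel totals that get spread over the grid; using province-level totals preserves within-country contrasts. A toy sketch of the allocation step (names and structure illustrative, not the PKU-FUEL code):

      import numpy as np

      def disaggregate(fuel_by_region, pop_density, region_mask):
          """Distribute each region's fuel total over its grid cells in
          proportion to population. Passing province totals instead of one
          national total is the essence of the sub-national method (SDM)."""
          emission = np.zeros_like(pop_density, dtype=float)
          for region, total in fuel_by_region.items():
              cells = region_mask == region
              emission[cells] = total * pop_density[cells] / pop_density[cells].sum()
          return emission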

  15. The Chasm Green Machine: A Rapid Data Acquisition and Mapping System For Direct Observation of Shallow Hydrological Processes.

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Merrett, S.

    CHASM (Catchment Hydrology and Sustainable Management) is a major UK-funded project investigating all aspects of hydrological observation from point to basin scale. The project includes a mobile instrumentation component, now referred to as the 'Green Machine'. This research facility includes an All Terrain Vehicle, an on-board high-resolution GPS mapping and navigation system, an EM31, an EM38 and a Seistronix seismic kit. The goal of the project is to observe and map unsaturated and saturated zone hydrological processes within soils and drift through repeated mapping campaigns. The hydrogeophysics kit will be validated against a dense series of ground-based observations from soil moisture deficit, suction and piezometric logging instruments. The Green Machine also includes a portable drilling and coring kit that can reach 10 m in depth. Thus, the EM31 will attempt to map macroscale fluxes in the water table position, the EM38 will attempt to map the soil moisture deficit, and the seismic profile will show the depth of soil and drift. This 'go anywhere', rapid mapping facility will attempt to map hydrological processes in four dimensions in a non-intrusive and extensive manner. Whilst this paper reflects only the experimental design and some early results, it is hoped that the Green Machine will play an active role in the future of hydrogeophysical research.

  16. DIGITAL PROCESSING TECHNIQUES FOR IMAGE MAPPING WITH LANDSAT TM AND SPOT SIMULATOR DATA.

    USGS Publications Warehouse

    Chavez, Pat S.; ,

    1984-01-01

    To overcome certain problems associated with the visual selection of Landsat TM bands for image mapping, the author used a quantitative technique that ranks the 20 possible three-band combinations by their information content. Standard deviations and correlation coefficients can be used to compute a value called the Optimum Index Factor (OIF) for each of the 20 possible combinations. SPOT simulator images were digitally processed and compared with Landsat-4 Thematic Mapper (TM) images covering a semi-arid region in northern Arizona and a highly vegetated urban area near Washington, D.C. Statistical comparisons indicate that more radiometric or color information exists in certain TM three-band combinations than in the three SPOT bands.
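
    The OIF itself has a simple closed form: for each three-band combination, the sum of the three band standard deviations divided by the sum of the absolute values of the three pairwise correlation coefficients; with the six reflective TM bands this yields exactly the 20 combinations mentioned above. A minimal sketch:

      from itertools import combinations

      import numpy as np

      def oif_ranking(bands):
          """Rank all three-band combinations by the Optimum Index Factor:
          OIF = sum of band std devs / sum of |pairwise correlations|."""
          flat = [np.asarray(b).ravel() for b in bands]
          scores = {}
          for i, j, k in combinations(range(len(bands)), 3):
              stds = flat[i].std() + flat[j].std() + flat[k].std()
              corrs = sum(abs(np.corrcoef(flat[a], flat[b])[0, 1])
                          for a, b in ((i, j), (i, k), (j, k)))
              scores[(i, j, k)] = stds / corrs
          return sorted(scores.items(), key=lambda kv: -kv[1])   # best combination first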

  17. Mapping variable ring polymer molecular dynamics: A path-integral based method for nonadiabatic processes

    NASA Astrophysics Data System (ADS)

    Ananth, Nandini

    2013-09-01

    We introduce mapping-variable ring polymer molecular dynamics (MV-RPMD), a model dynamics for the direct simulation of multi-electron processes. An extension of the RPMD idea, this method is based on an exact, imaginary time path-integral representation of the quantum Boltzmann operator using continuous Cartesian variables for both electronic states and nuclear degrees of freedom. We demonstrate the accuracy of the MV-RPMD approach in calculations of real-time, thermal correlation functions for a range of two-state single-mode model systems with different coupling strengths and asymmetries. Further, we show that the ensemble of classical trajectories employed in these simulations preserves the Boltzmann distribution and provides a direct probe into real-time coupling between electronic state transitions and nuclear dynamics.

  18. Demonstration of wetland vegetation mapping in Florida from computer-processed satellite and aircraft multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Butera, M. K.

    1979-01-01

    The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost-effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.

  19. Effect of accelerated electron beam on mechanical properties of human cortical bone: influence of different processing methods.

    PubMed

    Kaminski, Artur; Grazka, Ewelina; Jastrzebska, Anna; Marowska, Joanna; Gut, Grzegorz; Wojciechowski, Artur; Uhrynowska-Tyszkiewicz, Izabela

    2012-08-01

    Accelerated electron beam (EB) irradiation has been used for many years in a number of tissue banks as an effective method for sterilising human tissue grafts. Accelerated EB, in contrast to the more often used gamma photons, is a form of ionizing radiation characterized by lower penetration; however, it is more effective in producing ionisation, so the exposure time needed to reach the same level of sterility is shorter. Several factors, including the dose and temperature of irradiation, processing conditions, and the source of irradiation, may influence the mechanical properties of a bone graft. The purpose of this study was to evaluate the effect of e-beam irradiation with doses of 25 or 35 kGy, performed on dry ice or at ambient temperature, on the mechanical properties of non-defatted or defatted compact bone grafts. Left and right femurs from six male cadaveric donors, aged from 46 to 54 years, were transversely cut into slices of 10 mm height, parallel to the longitudinal axis of the bone. Compact bone rings were assigned to eight experimental groups according to the processing method (defatted or non-defatted), e-beam irradiation dose (25 or 35 kGy) and temperature conditions of irradiation (ambient temperature or dry ice). Axial compression testing was performed with a material testing machine. Results obtained for the elastic and plastic regions of the stress-strain curves, examined by univariate analysis, are described. Based on multivariate analysis including all groups, it was found that the temperature of e-beam irradiation and defatting had no consistent significant effect on the evaluated mechanical parameters of compact bone rings. In contrast, irradiation with both doses significantly decreased the ultimate strain and its derivative, toughness, while not affecting the ultimate stress (bone strength). As no deterioration of mechanical properties was observed in the elastic region, the reduction of the energy

  20. Mapping Glacial Weathering Processes with Thermal Infrared Remote Sensing: A Case Study at Robertson Glacier, Canada

    NASA Astrophysics Data System (ADS)

    Rutledge, A. M.; Christensen, P. R.; Shock, E.; Canovas, P. A., III

    2014-12-01

    Geologic weathering processes in cold environments, especially subglacial chemical processes acting on rock and sediment, are not well characterized due to the difficulty of accessing these environments. Glacial weathering of geologic materials contributes to the solute flux in meltwater and provides a potential source of energy to chemotrophic microbes, and is thus an important component to understand. In this study, we use Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data to map the extent of glacial weathering in the front range of the Canadian Rockies using remotely detected infrared spectra. We ground-truth our observations using laboratory infrared spectroscopy, x-ray diffraction, and geochemical analyses of field samples. The major goals of the project are to quantify weathering inputs to the glacial energy budget, and to link in situ sampling with remote sensing capabilities. Robertson Glacier, Alberta, Canada is an excellent field site for this technique as it is easily accessible and its retreating stage allows sampling of fresh subglacial and englacial sediments. Infrared imagery of the region was collected with the ASTER satellite instrument. At that same time, samples of glacially altered rock and sediments were collected on a downstream transect of the glacier and outwash plain. Infrared laboratory spectroscopy and x-ray diffraction were used to determine the composition and abundance of minerals present. Geochemical data were also collected at each location, and ice and water samples were analyzed for major and minor elements. Our initial conclusion is that the majority of the weathering seems to be occurring at the glacier-rock interface rather than in the outwash stream. Results from both laboratory and ASTER data indicate the presence of leached weathering rinds. A general trend of decreasing carbonate abundances with elevation (i.e. residence time in ice) is observed, which is consistent with increasing calcium ion

  1. Comparing Two Forms of Concept Map Critique Activities to Facilitate Knowledge Integration Processes in Evolution Education

    ERIC Educational Resources Information Center

    Schwendimann, Beat A.; Linn, Marcia C.

    2016-01-01

    Concept map activities often lack a subsequent revision step that facilitates knowledge integration. This study compares two collaborative critique activities using a Knowledge Integration Map (KIM), a form of concept map. Four classes of high school biology students (n = 81) using an online inquiry-based learning unit on evolution were assigned…

  2. Laser acceleration

    NASA Astrophysics Data System (ADS)

    Tajima, T.; Nakajima, K.; Mourou, G.

    2017-02-01

    The fundamental idea of Laser Wakefield Acceleration (LWFA) is reviewed. An ultrafast intense laser pulse drives a coherent wakefield with a relativistic amplitude robustly supported by the plasma. While the large amplitude of wakefields involves collective resonant oscillations of the eigenmode of the entire plasma electrons, the wake phase velocity ~c and the ultrafastness of the laser pulse give the wake its stability and rigidity. A large number of worldwide experiments show rapid progress in realizing this concept, toward both the high-energy accelerator prospect and broad applications. The strong interest in this field has been spurring and stimulating novel laser technologies, including Chirped Pulse Amplification, Thin Film Compression, the Coherent Amplification Network, and Relativistic Mirror Compression. These in turn have combined with LWFA to form a new genre of high field science, with many parameters of merit in this field increasing exponentially of late. This science has triggered a number of worldwide research centers and initiatives. The associated physics of ion acceleration, X-ray generation, and astrophysical processes of ultrahigh energy cosmic rays is reviewed. Applications such as X-ray free electron lasers, cancer therapy, and radioisotope production are considered. A new avenue of LWFA using nanomaterials is also emerging.

  3. Adaptive Classification of Landscape Process and Function: An Integration of Geoinformatics and Self-Organizing Maps

    SciTech Connect

    Coleman, Andre M.

    2009-07-17

    The advanced geospatial information extraction and analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), provide a topology-preserving means for reducing and understanding complex data relationships in the landscape. The Adaptive Landscape Classification Procedure (ALCP) is presented as an adaptive and evolutionary capability in which varying types of data can be assimilated to address different management needs such as hydrologic response, erosion potential, habitat structure, instrumentation placement, and various forecast or what-if scenarios. This paper describes how the evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenically influenced processes. Establishing relationships among high-dimensional datasets through neurocomputing-based pattern recognition methods can help 1) resolve large volumes of data into a structured and meaningful form; 2) provide an approach for inferring landscape processes in areas that have limited data available but exhibit similar landscape characteristics; and 3) discover the value of individual variables or groups of variables that contribute to specific processes in the landscape. Classification of hydrologic patterns in the landscape is demonstrated.
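
    The SOM update at the heart of such a procedure fits in a few lines: each input vector pulls its best-matching unit, and that unit's grid neighbors, toward it, which is what preserves topology. The sketch below is a generic SOM, not the ALCP implementation; grid size and schedules are illustrative.

      import numpy as np

      def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
          """Minimal self-organizing map: weight vectors on a 2D grid of units."""
          rng = np.random.default_rng(0)
          w = rng.random((grid[0], grid[1], data.shape[1]))
          ii, jj = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
          for t in range(epochs):
              lr = lr0 * (1 - t / epochs)                  # decaying learning rate
              sigma = sigma0 * (1 - t / epochs) + 0.5      # shrinking neighborhood
              for x in rng.permutation(data):
                  d = ((w - x) ** 2).sum(axis=2)
                  bi, bj = np.unravel_index(d.argmin(), grid)   # best-matching unit
                  h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
                  w += lr * h[..., None] * (x - w)         # pull BMU and neighbors toward x
          return w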

  4. Unisensory processing and multisensory integration in schizophrenia: a high-density electrical mapping study.

    PubMed

    Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M

    2011-10-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder.

  5. EMITTING ELECTRONS SPECTRA AND ACCELERATION PROCESSES IN THE JET OF Mrk 421: FROM THE LOW STATE TO THE GIANT FLARE STATE

    SciTech Connect

    Yan Dahai; Zhang Li; Fan Zhonghui; Zeng Houdun; Yuan Qiang

    2013-03-10

    We investigate the electron energy distributions (EEDs) and the acceleration processes in the jet of Mrk 421 through fitting the spectral energy distributions (SEDs) in different active states in the frame of a one-zone synchrotron self-Compton model. After assuming two possible EEDs formed in different acceleration models: the shock-accelerated power law with exponential cut-off (PLC) EED and the stochastic-turbulence-accelerated log-parabolic (LP) EED, we fit the observed SEDs of Mrk 421 in both low and giant flare states using the Markov Chain Monte Carlo method which constrains the model parameters in a more efficient way. The results from our calculations indicate that (1) the PLC and LP models give comparably good fits for the SED in the low state, but the variations of model parameters from low state to flaring can be reasonably explained only in the case of the PLC in the low state; and (2) the LP model gives better fits compared to the PLC model for the SED in the flare state, and the intra-day/night variability observed at GeV-TeV bands can be accommodated only in the LP model. The giant flare may be attributed to the stochastic turbulence re-acceleration of the shock-accelerated electrons in the low state. Therefore, we may conclude that shock acceleration is dominant in the low state, while stochastic turbulence acceleration is dominant in the flare state. Moreover, our result shows that the extrapolated TeV spectra from the best-fit SEDs from optical through GeV with the two EEDs are different. It should be considered with caution when such extrapolated TeV spectra are used to constrain extragalactic background light models.

  6. Two-tank suspended growth process for accelerating the detoxification kinetics of hydrocarbons requiring initial monooxygenation reactions.

    PubMed

    Dahlen, Elizabeth P; Rittmann, Bruce E

    2002-01-01

    An experimental evaluation demonstrated that suspended growth systems operated in a two-tank accelerator/aerator configuration significantly increased the overall removal rates for phenol and 2,4-dichlorophenol (2,4-DCP), aromatic hydrocarbons that require initial monooxygenations. The accelerator tank is a small volume that receives the influent and recycled biomass. It has a high ratio of electron donor (BOD) to electron acceptor (O2). Biomass in the accelerator should be enriched in reduced nicotinamide adenine dinucleotide (NADH + H+) and have a very high specific growth rate, conditions that should accelerate the kinetics of monooxygenation reactions. For the more slowly degraded 2,4-DCP, the average percentage removal increased from 74% to 93%, even though the volume of the two-tank system was smaller than that of the one-tank system in most experiments. The average volumetric and biomass-specific removal rates increased by 50% and 100%, respectively, in the two-tank system, compared to a one-tank system. The greatest enhancement in 2,4-DCP removal occurred when the accelerator tank comprised approximately 20% of the system volume. Biomass in the accelerator tank was significantly enriched in NADH + H+ when its dissolved oxygen (DO) concentration was below 0.25 mg/L, a situation having a high ratio of donor to acceptor. The accelerator biomass had its highest NADH + H+ content for the experiments that had the highest rate of 2,4-DCP removal. Biomass in the accelerator also had a much higher specific growth rate than in the aerator or the system overall, and the specific growth rate in the accelerator was inversely correlated to the accelerator volume.

  7. Quantification of Geologic Lineaments by Manual and Machine Processing Techniques. [Landsat satellites - mapping/geological faults

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Moik, J. G.; Shoup, W. C.

    1975-01-01

    The effect of operator variability and subjectivity in lineament mapping was studied, together with methods to minimize or eliminate these problems through several machine preprocessing methods. Mapped lineaments of a test landmass were used and the results were compared statistically. The total number of fractures mapped by the operators and their average lengths varied considerably, although comparison of lineament directions revealed some consensus. A summary map (785 linears) produced by overlaying the maps generated by the four operators shows that only 0.4 percent were recognized by all four operators, 4.7 percent by three, 17.8 percent by two, and 77 percent by one operator. Similar results were obtained in comparing these results with another independent group. This large amount of variability suggests a need for the standardization of mapping techniques, which might be accomplished by a machine-aided procedure. Two methods of machine-aided mapping were tested, both simulating directional filters.

  8. Practice comparisons between accelerated resolution therapy, eye movement desensitization and reprocessing and cognitive processing therapy with case examples.

    PubMed

    Hernandez, Diego F; Waits, Wendi; Calvio, Lisseth; Byrne, Mary

    2016-12-01

    Recent outcomes for Cognitive Processing Therapy (CPT) and Prolonged Exposure (PE) therapy indicate that as many as 60-72% of patients retain their PTSD diagnosis after treatment with CPT or PE. One emerging therapy with the potential to augment existing trauma-focused therapies is Accelerated Resolution Therapy (ART). ART is currently being used alongside evidence-based approaches at Fort Belvoir Community Hospital and, by report, has been both positive for clients and less taxing on professionals trained in ART. The following is an in-practice theoretical comparison of CPT, EMDR and ART, with case examples from Fort Belvoir Community Hospital. While all three approaches share common elements and interventions, ART distinguishes itself through its emphasis on the rescripting of traumatic events and the brevity of the intervention. While these case reports are not part of a formal study, they suggest that ART has the potential to augment and enhance the current delivery of mental health care in military environments.

  9. Searching for optimal setting conditions in technological processes using parametric estimation models and neural network mapping approach: a tutorial.

    PubMed

    Fjodorova, Natalja; Novič, Marjana

    2015-09-03

    Engineering optimization is an important goal in manufacturing and service industries. In this tutorial we present the concept of traditional parametric estimation models (Factorial Design (FD) and Central Composite Design (CCD)) for searching for optimal setting parameters of technological processes. The 2D mapping method based on Auto-Associative Neural Networks (ANNs), particularly the Feed Forward Bottle Neck Neural Network (FFBN NN), is then described in comparison with the traditional methods. The FFBN NN mapping technique enables visualization of all optimal solutions in the considered processes, because input as well as output parameters are projected into the same coordinates of the 2D map. This supports a more efficient way of improving the performance of existing systems. The two methods were compared on the basis of the optimization of solder paste printing processes as well as the optimization of cheese properties. Applying both methods enables a double check, which increases the reliability of the selected optima or specification limits.
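
    The FFBN NN is an autoassociative network trained to reproduce its inputs through a two-unit bottleneck, so each sample's bottleneck activations serve as its coordinates on the 2D map. A minimal sketch with scikit-learn (layer sizes and training settings are illustrative, not those of the tutorial):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def bottleneck_map(X):
          """Train an autoassociative net (targets = inputs) with a two-unit
          bottleneck, then read each sample's 2D map coordinates from the
          bottleneck layer's activations."""
          net = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                             max_iter=5000, random_state=0)
          net.fit(X, X)
          a = X
          for W, b in list(zip(net.coefs_, net.intercepts_))[:2]:   # stop at the bottleneck
              a = np.tanh(a @ W + b)
          return a   # shape (n_samples, 2): positions on the 2D map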

  10. Updated mapping and seismic reflection data processing along the Queen Charlotte fault system, southeast Alaska

    NASA Astrophysics Data System (ADS)

    Walton, M. A. L.; Gulick, S. P. S.; Haeussler, P. J.; Rohr, K.; Roland, E. C.; Trehu, A. M.

    2014-12-01

    The Queen Charlotte Fault (QCF) is an obliquely convergent strike-slip system that accommodates offset between the Pacific and North America plates in southeast Alaska and western Canada. Two recent earthquakes, including a M7.8 thrust event near Haida Gwaii on 28 October 2012, have sparked renewed interest in the margin and led to further study of how convergent stress is accommodated along the fault. Recent studies have looked in detail at offshore structure, concluding that a change in strike of the QCF at ~53.2 degrees north has led to significant differences in stress and the style of strain accommodation along-strike. We provide updated fault mapping and seismic images to supplement and support these results. One of the highest-quality seismic reflection surveys along the Queen Charlotte system to date, EW9412, was shot aboard the R/V Maurice Ewing in 1994. The survey was last processed to post-stack time migration for a 1999 publication. Due to heightened interest in high-quality imaging along the fault, we have completed updated processing of the EW9412 seismic reflection data and provide prestack migrations with water-bottom multiple reduction. Our new imaging better resolves fault and basement surfaces at depth, as well as the highly deformed sediments within the Queen Charlotte Terrace. In addition to re-processing the EW9412 seismic reflection data, we have compiled and re-analyzed a series of publicly available USGS seismic reflection data that obliquely cross the QCF. Using these data, we are able to provide updated maps of the Queen Charlotte fault system, adding considerable detail along the northernmost QCF where it links up with the Chatham Strait and Transition fault systems. Our results support conclusions that the changing geometry of the QCF leads to fundamentally different convergent stress accommodation north and south of ~53.2 degrees; namely, reactivated splay faults to the north vs. thickening of sediments and the upper crust to the south

  11. Mapping phonemic processing zones along human perisylvian cortex: an electro-corticographic investigation

    PubMed Central

    Molholm, Sophie; Mercier, Manuel R.; Liebenthal, Einat; Schwartz, Theodore H.; Ritter, Walter; Foxe, John J.; De Sanctis, Pierfilippo

    2015-01-01

    The auditory system is organized such that progressively more complex features are represented across successive cortical hierarchical stages. Just when and where the processing of phonemes, fundamental elements of the speech signal, is achieved in this hierarchy remains a matter of vigorous debate. Non-invasive measures of phonemic representation have been somewhat equivocal. While some studies point to a primary role for middle/anterior regions of the superior temporal gyrus (STG), others implicate the posterior STG. Differences in stimulation, task and inter-individual anatomical/functional variability may account for these discrepant findings. Here, we sought to clarify this issue by mapping phonemic representation across left perisylvian cortex, taking advantage of the excellent sampling density afforded by intracranial recordings in humans. We asked whether one or both major divisions of the STG were sensitive to phonemic transitions. The high signal-to-noise characteristics of direct intracranial recordings allowed for analysis at the individual participant level, circumventing issues of inter-individual anatomic and functional variability that may have obscured previous findings at the group level of analysis. The mismatch negativity (MMN), an electro-physiological response elicited by changes in repetitive streams of stimulation, served as our primary dependent measure. Oddball configurations of pairs of phonemes, spectro-temporally matched non-phonemes, and simple tones were presented. The loci of the MMN clearly differed as a function of stimulus type. Phoneme representation was most robust over middle/anterior STG/STS, but was also observed over posterior STG/SMG. These data point to multiple phonemic processing zones along perisylvian cortex, both anterior and posterior to primary auditory cortex. This finding is considered within the context of a dual stream model of auditory processing in which functionally distinct ventral and dorsal auditory

  12. Columnar-Equiaxed Transition in Solidification processing: The ESA-MAP CETSOL project

    NASA Astrophysics Data System (ADS)

    Billia, Bernard; Gandin, Charles-André; Zimmermann, Gerhard; Browne, David; Dupouy, Marie-Danielle

    2005-03-01

    Many castings are the result of a competition between the growth of columnar and equiaxed grains. Microstructures are at the center of materials science and engineering, and solidification is the most important processing route for structural materials, especially metals and alloys. Presently, microstructure models remain mostly based on diffusive transport mechanisms, so there is a need for critical benchmark data to test fundamental theories of microstructure formation, which often necessitates solidification experiments in the reduced-gravity environment of space. Accordingly, the CETSOL (Columnar-Equiaxed Transition in SOLidification processing) MAP project of ESA brings together European groups with complementary skills to carry out experiments and model the processes, in particular in view of utilizing the reduced-gravity environment afforded by the International Space Station (ISS) to obtain benchmark data. The ultimate objective of the CETSOL research program is to contribute significantly to the improvement of integrated modeling of grain structure in industrially important castings. To reach this goal, the approach is devised to deepen the quantitative understanding of the basic physical principles that, from the microscopic to the macroscopic scales, govern microstructure formation in solidification processing under diffusive conditions and with fluid flow in the melt. Pending questions are attacked by well-defined model experiments on technical alloys and/or model transparent systems, by physical modeling at microstructure and mesoscopic scales (e.g. a large columnar front or equiaxed crystals), and by numerical simulation at all scales, up to the macroscopic scale of a casting with integrated numerical models.

  13. Hot Deformation Characteristics and Processing Maps of the Cu-Cr-Zr-Ag Alloy

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chai, Zhe; Volinsky, Alex A.; Sun, Huili; Tian, Baohong; Liu, Ping; Liu, Yong

    2016-03-01

    The hot deformation behavior of the Cu-Cr-Zr-Ag alloy has been investigated by hot compression tests in the 650-950 °C temperature and 0.001-10 s-1 strain rate ranges using a Gleeble-1500D thermo-mechanical simulator. The microstructure evolution of the alloy during deformation was characterized using optical and transmission electron microscopy. The flow stress decreases with increasing deformation temperature and increases with increasing strain rate. The apparent activation energy for hot deformation of the alloy was 343.23 kJ/mol. A constitutive equation of the alloy based on the hyperbolic-sine equation was established to characterize the flow stress as a function of the strain rate and the deformation temperature. Processing maps were established based on the dynamic material model. The optimal processing parameters for hot deformation of the Cu-Cr-Zr-Ag alloy are 900-950 °C at strain rates of 0.001-0.1 s-1. The evolution of the DRX microstructure strongly depends on the deformation temperature and the strain rate.
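
    The hyperbolic-sine (Sellars-Tegart type) law referred to above is ε̇ = A [sinh(ασ)]^n exp(-Q/RT), which inverts to σ = (1/α) arcsinh((Z/A)^(1/n)) with Z the Zener-Hollomon parameter. A sketch using the reported activation energy; A, α and n below are placeholders, not the fitted constants of this alloy:

      import numpy as np

      R = 8.314          # gas constant, J/(mol K)
      Q = 343.23e3       # reported apparent activation energy, J/mol
      A, alpha, n = 1e12, 0.012, 5.0   # placeholder constants (alpha in 1/MPa)

      def flow_stress(strain_rate, T_kelvin):
          """sigma = (1/alpha) * asinh((Z/A)**(1/n)), Z = rate * exp(Q/(R*T))."""
          Z = strain_rate * np.exp(Q / (R * T_kelvin))
          return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

      print(f"{flow_stress(0.01, 900 + 273.15):.0f} MPa (placeholder constants)")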

  14. Mapping Flow Localization Processes in Deformation of Irradiated Reactor Structural Alloys

    SciTech Connect

    Farrell, K.

    2002-07-18

    Metals that can sustain plastic deformation homogeneously throughout their bulk tend to be tough and malleable. Often, however, if a metal has been hardened it will no longer deform uniformly. Instead, the deformation occurs in narrow bands on a microscopic scale wherein stresses and strains become concentrated in localized zones. This strain localization degrades the mechanical properties of the metal by causing premature plastic instability failure or by inducing the formation of cracks. Irradiation with neutrons hardens a metal and makes it more prone to deformation by strain localization. Although this has been known since the earliest days of radiation damage studies, a full measure of the connection between neutron irradiation hardening and strain localization is wanting, particularly in commercial alloys used in the construction of nuclear reactors. Therefore, the goal of this project is to systematically map the extent of involvement of strain localization processes in plastic deformation of three reactor alloys that have been neutron irradiated. The deformation processes are to be identified and related to changes in the tensile properties of the alloys as functions of neutron fluence (dose) and degree of plastic strain. The intent is to define the role of strain localization in radiation embrittlement phenomena. The three test materials are a tempered bainitic A533B steel, representing reactor pressure vessel steel, an annealed 316 stainless steel and annealed Zircaloy-4 representing reactor internal components.

  15. Process maps for plasma spray: Part 1: Plasma-particle interactions

    SciTech Connect

    GILMORE,DELWYN L.; NEISER JR.,RICHARD A.; WAN,YUEPENG; SAMPATH,SANJAY

    2000-01-26

    This is the first paper of a two part series based on an integrated study carried out at Sandia National Laboratories and the State University of New York at Stony Brook. The aim of the study is to develop a more fundamental understanding of plasma-particle interactions, droplet-substrate interactions, deposit formation dynamics and microstructural development as well as final deposit properties. The purpose is to create models that can be used to link processing to performance. Process maps have been developed for air plasma spray of molybdenum. Experimental work was done to investigate the importance of such spray parameters as gun current, auxiliary gas flow, and powder carrier gas flow. In-flight particle diameters, temperatures, and velocities were measured in various areas of the spray plume. Samples were produced for analysis of microstructures and properties. An empirical model was developed, relating the input parameters to the in-flight particle characteristics. Multi-dimensional numerical simulations of the plasma gas flow field and in-flight particles under different operating conditions were also performed. In addition to the parameters which were experimentally investigated, the effect of particle injection velocity was also considered. The simulation results were found to be in good general agreement with the experimental data.

  16. Effect of Grain Size Distribution on Processing Maps for Isothermal Compression of Inconel 718 Superalloy

    NASA Astrophysics Data System (ADS)

    Wang, Jianguo; Liu, Dong; Hu, Yang; Yang, Yanhui; Zhu, Xinglin

    2016-02-01

    Cylindrical specimens of Inconel 718 alloy with three types of grain size distribution were used in compression tests, and processing maps were developed for the ranges 940-1040 °C and 0.001-10 s-1. Equiaxed fine grains are the most effective in promoting dynamic softening. For the partially recrystallized microstructure, the peak efficiency of power dissipation occurs at a strain rate of 0.001 s-1 and in the temperature range of 1000-1020 °C. In order to obtain a homogeneous microstructure with fine grains, the partially recrystallized microstructure should be deformed at low temperatures and slow strain rates. The area fraction of the instability domains decreases with increasing strain. The peak efficiency of power dissipation increases with decreasing average grain size. The efficiency of power dissipation is stimulated by the precipitation of δ phase at slow strain rates of 0.001-0.01 s-1, and by the initial deformed substructure at strain rates of 0.1-1 s-1. Equiaxed fine grain is the optimum state for the forging process and dynamic recrystallization. The grain size distribution has only a slight influence on the microstructure evolution at high temperatures.

  17. GPR data processing for 3D fracture mapping in a marble quarry (Thassos, Greece)

    NASA Astrophysics Data System (ADS)

    Grandjean, G.; Gourry, J. C.

    1996-11-01

    Ground Penetrating Radar (GPR) has been successfully applied to detect and map fractures in marble quarries. The aim was to distinguish quickly intact marketable marble areas from fractured ones in order to improve quarry management. The GPR profiling method was chosen because it is non destructive and quickly provides a detailed image of the subsurface. It was performed in domains corresponding to future working areas in real quarry-exploitation conditions. Field surveying and data processing were adapted to the local characteristics of the fractures: E-W orientation, sub-vertical dip, and karst features. After the GPR profiles had been processed, using methods adapted from seismics (amplitude compensation, filtering and Fourier migration), the interpreted fractures from a 12 × 24 × 15 m zone were incorporated into a 3D model. Due to the low electrical conductivity of the marble, GPR provides penetration depths of about 8 and 15 m, and resolutions of about 1 and 5 cm for frequencies of 900 and 300 MHz respectively. The detection power thus seems to be sufficient to recommend use of this method. As requested by the quarriers, the 3D representation can be used directly by themselves to locate high- or low-quality marble areas. Comparison between the observed surface fractures and the fractures detected using GPR showed reasonable correlation.

  18. The PhOCoe Model--ergonomic pattern mapping in participatory design processes.

    PubMed

    Silva e Santos, Marcello

    2012-01-01

    The discipline and practice of human factors and ergonomics is quite rich in terms of the available analysis, development and evaluation tools and methods for its various processes. However, we lack effective instruments to map or regulate, comprehensively and effectively, cognitive and organizational impacts, especially environmental ones. Moreover, when ergonomic transformation through design - such as a new workstation design or even an entire new facility - is at play, ergonomics professionals tend to stay at the margins, relying solely on design professionals and engineers. There is vast empirical evidence showing that the participation of ergonomists as project facilitators may contribute to an effective professional synergy among the various stakeholders in a multidisciplinary venue. When that happens, everyone wins - users and designers alike - because potential conflicts, raised in the midst of option selection, are dissipated in exchange for more convergent design alternatives. This paper presents a method for participatory design in which users are encouraged to participate actively in the whole design process by sharing their real work activities with the design team. The negotiated results, inferred from the ergonomic action and translated into a new design, are then compiled into an "Ergonomic Pattern Manual". This handbook of ergonomics-oriented design guidelines is meant to be consulted in recurrent design project situations in which similar patterns might be used. The main drive is simple: nobody knows better than the workers themselves what an adequate workplace design solution (equipment, workstation, office layout) should be.

  19. Constitutive Modeling for Flow Behavior of Medium-Carbon Bainitic Steel and Its Processing Maps

    NASA Astrophysics Data System (ADS)

    Yang, Zhinan; Li, Yingnan; Li, Yanguo; Zhang, Fucheng; Zhang, Ming

    2016-11-01

    The hot deformation behavior of a medium-carbon bainitic steel was studied in a temperature range of 900-1100 °C and a strain rate range of 0.01-10 s-1. With increasing strain, the flow stress displays three tendencies: a continuous increase under most conditions and a peak stress with and without a steady-state region. Accurate constitutive modeling was proposed and exhibits a correlation coefficient of 0.984 and an average absolute relative error of 0.063 between the experimental and predicted stress values. The activation energy of the steel increased from 393 to 447 kJ/mol, when the strain increased from 0.1 to 0.4, followed by a slight fluctuation at higher strain. Finally, processing maps under different strains were constructed and exhibit a varied instability region with increasing strain. Microstructural observations show that a mischcrystal structure formed in the specimens that worked on the instability regions, which resulted from the occurrence of flow localization. Some deformation twins were also observed in certain specimens and were responsible for negative m-values. The optimum hot working processing parameters for the studied steel were 989-1012 °C, 0.01-0.02 s-1 and 1034-1066 °C, 0.07-0.22 s-1, and a full dynamic recrystallization structure with fine homogeneous grains could be obtained.

  20. Graphics processing unit-accelerated non-rigid registration of MR images to CT images during CT-guided percutaneous liver tumor ablations

    PubMed Central

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G.; Shekhar, Raj; Hata, Nobuhiko

    2015-01-01

    Rationale and Objectives: Accuracy and speed are essential for intraprocedural nonrigid MR-to-CT image registration in the assessment of tumor margins during CT-guided liver tumor ablations. While both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique based on volume subdivision with hardware acceleration using a graphical processing unit (GPU). We compared the registration accuracy and processing time of the GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Materials and Methods: Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of the ROI was used only for the B-spline technique. Registration accuracies (Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD)) and total processing time, including contouring of ROIs and computation, were compared using a paired Student's t-test. Results: Accuracies of the GPU-accelerated and B-spline registrations were, respectively, 88.3 ± 3.7% vs 89.3 ± 4.9% (p = 0.41) for DSC and 13.1 ± 5.2 mm vs 11.4 ± 6.3 mm (p = 0.15) for HD. Total processing time of the GPU-accelerated and B-spline registration techniques was 88 ± 14 s vs 557 ± 116 s (p < 0.000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (p = 0.71). Conclusion: The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated
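
    A minimal sketch of the two reported accuracy metrics, assuming boolean segmentation masks and surface points in millimeters (these are the generic definitions, not the authors' implementation):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def dice(a, b):
        """Dice Similarity Coefficient between two boolean masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hd95(pts_a, pts_b):
        """95% Hausdorff Distance between two (N, 3) surface point sets."""
        d_ab = cKDTree(pts_b).query(pts_a)[0]   # nearest-neighbour distances A -> B
        d_ba = cKDTree(pts_a).query(pts_b)[0]   # nearest-neighbour distances B -> A
        return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
    ```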

  1. Hot deformation characterization of duplex low-density steel through 3D processing map development

    SciTech Connect

    Mohamadizadeh, A.; Zarei-Hanzaki, A.; Abedi, H.R.; Mehtonen, S.; Porter, D.

    2015-09-15

    The high temperature deformation behavior of duplex low-density Fe–18Mn–8Al–0.8C steel was investigated at temperatures in the range of 600–1000 °C. The primary constitutive analysis indicated that the Zener–Hollomon parameter, which represents the coupled effects of temperature and strain rate, varies significantly with the amount of deformation. Accordingly, 3D processing maps were developed considering the effect of strain and were used to determine the safe and unsafe deformation conditions in association with the microstructural evolution. Deformation in efficiency domain I (900–1100 °C, 10⁻²–10⁻³ s⁻¹) was found to be safe at different strains due to the occurrence of dynamic recrystallization in austenite. The safe efficiency domain II (700–900 °C, 1–10⁻¹ s⁻¹), which appeared at a logarithmic strain of 0.4, was characterized by deformation-induced ferrite formation. Scanning electron microscopy revealed that microband formation and crack initiation at ferrite/austenite interfaces were the main causes of deformation instability at 600–800 °C, 10⁻²–10⁻³ s⁻¹. The degree of instability was found to decrease with increasing strain due to the uniformity of the microbanded structure obtained at higher strains. Shear band formation at 900–1100 °C, 1–10⁻¹ s⁻¹ was verified by electron backscattered diffraction. Local dynamic recrystallization of austenite and deformation-induced ferrite formation were observed within the shear-banded regions as results of flow localization. - Highlights: • The 3D processing map is developed for duplex low-density Fe–Mn–Al–C steel. • The efficiency domains shrink, expand or appear with increasing strain. • The occurrence of DRX and DIFF increases the power efficiency. • Crack initiation
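
    Processing maps of this type normally follow the Prasad formulation, in which the strain-rate sensitivity m gives a power-dissipation efficiency eta = 2m/(m+1) and a flow-instability criterion xi < 0. A sketch under that assumption (the paper may use a variant):

    ```python
    import numpy as np

    def processing_map_quantities(log_strain_rate, log_stress):
        """Prasad-type map quantities from flow stress at fixed strain and T.

        log_strain_rate : 1D grid of ln(strain rate)
        log_stress      : ln(flow stress) sampled on that grid
        Assumes m > 0; a negative m is itself treated as an instability marker.
        """
        m = np.gradient(log_stress, log_strain_rate)   # strain-rate sensitivity
        eta = 2.0 * m / (m + 1.0)                      # power-dissipation efficiency
        xi = np.gradient(np.log(m / (m + 1.0)), log_strain_rate) + m
        return eta, xi                                 # flow instability where xi < 0
    ```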

  2. Image based cardiac acceleration map using statistical shape and 3D+t myocardial tracking models; in-vitro study on heart phantom

    NASA Astrophysics Data System (ADS)

    Pashaei, Ali; Piella, Gemma; Planes, Xavier; Duchateau, Nicolas; de Caralt, Teresa M.; Sitges, Marta; Frangi, Alejandro F.

    2013-03-01

    It has been demonstrated that the acceleration signal has potential to monitor heart function and adaptively optimize Cardiac Resynchronization Therapy (CRT) systems. In this paper, we propose a non-invasive method for computing myocardial acceleration from 3D echocardiographic sequences. Displacement of the myocardium was estimated using a two-step approach: (1) 3D automatic segmentation of the myocardium at end-diastole using 3D Active Shape Models (ASM); (2) propagation of this segmentation along the sequence using non-rigid 3D+t image registration (temporal diffeomorphic free-form deformation, TDFFD). Acceleration was obtained locally at each point of the myocardium from the local displacement. The framework was tested on images from a realistic physical heart phantom (DHP-01, Shelley Medical Imaging Technologies, London, ON, CA) in which the displacement of some control regions was known. Good correlation was demonstrated between the displacement estimated by the algorithms and the phantom setup. Due to the limited temporal resolution, the acceleration signals are sparse and highly noisy. The study suggests a non-invasive technique to measure cardiac acceleration that may be used to improve the monitoring of cardiac mechanics and the optimization of CRT.
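
    The acceleration step amounts to differentiating the tracked displacement twice, which is exactly why the abstract notes the signals are sparse and noisy. A minimal sketch with smoothing (the frame rate and filter settings are illustrative, not the paper's):

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    def acceleration_from_displacement(disp, fs, window=7, order=3):
        """Twice-differentiate one myocardial point's displacement trace.

        disp : 1D displacement samples from the 3D+t tracking
        fs   : frame rate of the echo sequence, in Hz
        Smoothing first is essential: each differentiation amplifies noise.
        """
        smooth = savgol_filter(disp, window, order)
        vel = np.gradient(smooth, 1.0 / fs)
        return np.gradient(vel, 1.0 / fs)
    ```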

  3. SHARAKU: an algorithm for aligning and clustering read mapping profiles of deep sequencing in non-coding RNA processing

    PubMed Central

    Tsuchiya, Mariko; Amano, Kojiro; Abe, Masaya; Seki, Misato; Hase, Sumitaka; Sato, Kengo; Sakakibara, Yasubumi

    2016-01-01

    Motivation: Deep sequencing of the transcripts of regulatory non-coding RNA generates footprints of post-transcriptional processes. After sequence reads are obtained, the short reads are mapped to a reference genome, and specific mapping patterns, called read mapping profiles, can be detected that are distinct from random non-functional degradation patterns. These patterns reflect the maturation processes that lead to the production of shorter RNA sequences. Recent next-generation sequencing studies have revealed not only the typical maturation process of miRNAs but also the various processing mechanisms of small RNAs derived from tRNAs and snoRNAs. Results: We developed an algorithm termed SHARAKU to align two read mapping profiles of next-generation sequencing outputs for non-coding RNAs. In contrast with previous work, SHARAKU incorporates the primary and secondary sequence structures into an alignment of read mapping profiles to allow for the detection of common processing patterns. Using a benchmark simulated dataset, SHARAKU exhibited superior performance to previous methods in correctly clustering the read mapping profiles with respect to 5′-end and 3′-end processing from degradation patterns, and in detecting similar processing patterns in deriving the shorter RNAs. Further, using experimental small RNA sequencing data from the common marmoset brain, SHARAKU succeeded in identifying significant clusters of read mapping profiles for similar processing patterns of small derived RNA families expressed in the brain. Availability and Implementation: The source code of our program SHARAKU is available at http://www.dna.bio.keio.ac.jp/sharaku/, and the simulated dataset used in this work is available at the same link. Accession code: The sequence data from the whole RNA transcripts in the hippocampus of the left brain used in this work is available from the DNA DataBank of Japan (DDBJ) Sequence Read Archive (DRA) under the accession number DRA
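
    SHARAKU's scoring additionally uses primary and secondary structure, but the core idea of aligning two read mapping profiles can be illustrated with a plain dynamic-programming alignment of per-position coverage vectors. This is a generic DTW-style sketch, not the published algorithm:

    ```python
    import numpy as np

    def profile_alignment_cost(p, q):
        """Dynamic-programming alignment of two per-position read-depth profiles."""
        n, m = len(p), len(q)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(p[i - 1] - q[j - 1])          # local depth mismatch
                D[i, j] = cost + min(D[i - 1, j],        # gap in q
                                     D[i, j - 1],        # gap in p
                                     D[i - 1, j - 1])    # match
        return D[n, m]
    ```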

  4. Effect of the drying process on the intensification of phenolic compounds recovery from grape pomace using accelerated solvent extraction.

    PubMed

    Rajha, Hiba N; Ziegler, Walter; Louka, Nicolas; Hobaika, Zeina; Vorobiev, Eugene; Boechzelt, Herbert G; Maroun, Richard G

    2014-10-15

    In light of their environmental and economic interest, food byproducts have been increasingly exploited and valorized for their richness in dietary fibers and antioxidants. Phenolic compounds are antioxidant bioactive molecules highly present in grape byproducts. Herein, accelerated solvent extraction (ASE) of phenolic compounds from wet and dried (45 °C) grape pomace was conducted, and the highest phenolic compound yields (PCY) for wet (16.2 g GAE/100 g DM) and dry (7.28 g GAE/100 g DM) grape pomace extracts were obtained with a 70% ethanol/water solvent at 140 °C. The PCY obtained from wet pomace was up to two times higher than that of the dry byproduct and up to 15 times higher than that of the same food matrices treated with conventional methods. With regard to resveratrol, the corresponding dry pomace extract had a better free-radical scavenging activity (49.12%) than the wet extract (39.8%). The drying pretreatment seems to improve the antiradical activity, especially when the extraction by ASE is performed at temperatures above 100 °C. HPLC-DAD analysis showed that the diversity of the flavonoid and non-flavonoid compounds found in the extracts was strongly affected by the extraction temperature and the pretreatment of the raw material. This diversity seems to play a key role in the scavenging activity demonstrated by the extracts. Our results emphasize ASE as a promising method for the preparation of highly concentrated and bioactive phenolic extracts that could be used in several industrial applications.

  5. Effect of the Drying Process on the Intensification of Phenolic Compounds Recovery from Grape Pomace Using Accelerated Solvent Extraction

    PubMed Central

    Rajha, Hiba N.; Ziegler, Walter; Louka, Nicolas; Hobaika, Zeina; Vorobiev, Eugene; Boechzelt, Herbert G.; Maroun, Richard G.

    2014-01-01

    In light of their environmental and economic interest, food byproducts have been increasingly exploited and valorized for their richness in dietary fibers and antioxidants. Phenolic compounds are antioxidant bioactive molecules highly present in grape byproducts. Herein, accelerated solvent extraction (ASE) of phenolic compounds from wet and dried (45 °C) grape pomace was conducted, and the highest phenolic compound yields (PCY) for wet (16.2 g GAE/100 g DM) and dry (7.28 g GAE/100 g DM) grape pomace extracts were obtained with a 70% ethanol/water solvent at 140 °C. The PCY obtained from wet pomace was up to two times higher than that of the dry byproduct and up to 15 times higher than that of the same food matrices treated with conventional methods. With regard to resveratrol, the corresponding dry pomace extract had a better free-radical scavenging activity (49.12%) than the wet extract (39.8%). The drying pretreatment seems to improve the antiradical activity, especially when the extraction by ASE is performed at temperatures above 100 °C. HPLC-DAD analysis showed that the diversity of the flavonoid and non-flavonoid compounds found in the extracts was strongly affected by the extraction temperature and the pretreatment of the raw material. This diversity seems to play a key role in the scavenging activity demonstrated by the extracts. Our results emphasize ASE as a promising method for the preparation of highly concentrated and bioactive phenolic extracts that could be used in several industrial applications. PMID:25322155

  6. Sediment budget for a polluted Hawaiian reef using hillslope monitoring and process mapping (Invited)

    NASA Astrophysics Data System (ADS)

    Stock, J. D.; Rosener, M.; Schmidt, K. M.; Hanshaw, M. N.; Brooks, B. A.; Tribble, G.; Jacobi, J.

    2010-12-01

    Pollution from coastal watersheds threatens the ecology of the nearshore, including tropical reefs. Suspended sediment concentrations off the reefs of Molokai, Hawaii, chronically exceed a toxic threshold of 10 mg/L, threatening reef ecosystems. We hypothesize that historic conversion of hillslope processes from soil creep to overland flow increased both the magnitude and frequency of erosion. To create a process sediment budget, we used surficial and ecological mapping, hillslope and stream gages, and novel sensors to locate, quantify and model the generation of the fine sediments polluting the reef. Ecological and geomorphic mapping from LiDAR and multi-spectral imagery located overland-flow areas with vegetation cover below the threshold that prevents erosion. Here, feral goat grazing exposed volcanic soils whose low matrix hydraulic conductivities (1-25 mm/hour) promote Horton overland flow. We instrumented steep, barren hillslopes with soil moisture sensors, overland flow meters, Parshall flumes, ISCO sediment samplers, and a rain gage, and conducted repeat tripod LiDAR surveys and infiltration tests. To characterize soil resistance to overland-flow erosion, we used a Cohesive Strength Meter (CSM) to simulate water stress. At the mouth of the 13.5 km² watershed we used a USGS stream gage with an ISCO sediment sampler to estimate total load. Over 3 years, storms triggered overland flow during rainfall intensities above 10-15 mm/hr. Overland flow meters indicate such flows can be up to 3 cm deep, with a tendency to deepen downslope. CSM tests indicate that these depths are insufficient to erode soils where vegetation is dense, but far above the threshold values of 2-3 mm for bare soils. Sediment rating curves for both hillslope and downstream catchment gages show clockwise hysteresis during the first intense storms in the fall, becoming linear later in the season. During fall storms, sediment concentration is often 10X higher at a given stage. Revised annual lowering rates from experimental hillslopes are

  7. Movable RF probe eliminates need for calibration in plasma accelerators

    NASA Technical Reports Server (NTRS)

    Miller, D. B.

    1967-01-01

    Movable RF antenna probe in plasma accelerators continuously maps the RF field both within and beyond the accelerator. It eliminates the need for installing probes in the accelerator walls. The moving RF probe can be used to map the RF electrical field under various accelerator conditions.

  8. Process Mapping in a Pediatric Emergency Department to Minimize Missed Urinary Tract Infections

    PubMed Central

    Singh, Valene; Belostotsky, Vladimir; Roy, Madan; Yamamura, Deborah; Gambarotto, Kathryn; Lau, Keith

    2016-01-01

    Urinary tract infections (UTIs) are common in young children and are seen frequently in emergency departments (EDs). Left untreated, UTIs can lead to more severe conditions. Our goal was to undertake a quality improvement (QI) initiative to help minimize the number of children with missed UTIs in a newly established tertiary care pediatric emergency department (PED). A retrospective chart review was undertaken to identify missed UTIs in children < 3 years old who presented to a children's hospital's ED with positive urine cultures. There was no treatment or follow-up for 12% of positive urine cultures, indicating a missed or possibly missed UTI in a significant number of children. Key stakeholders were then gathered and process mapping (PM) was completed, in which gaps and barriers were identified and interventions subsequently implemented. A follow-up chart review was completed to assess the impact of PM in reducing the number of missed UTIs. Following PM and its implementation within the ED, there was no treatment or follow-up in only 1% of cases. Based on our results, the number of potentially missed UTIs in the ED decreased dramatically, indicating that PM can be a successful QI tool in an acute care pediatric setting. PMID:27974897

  9. Recombination at laser-processed local base contacts by dynamic infrared lifetime mapping

    NASA Astrophysics Data System (ADS)

    Müller, Jens; Bothe, Karsten; Gatz, Sebastian; Haase, Felix; Mader, Christoph; Brendel, Rolf

    2010-12-01

    Laser-processed local metal contacts to Si solar cells are a promising approach to combining high efficiency with low production cost. Understanding carrier transport and recombination in locally contacted solar cells requires numerical simulations with experimentally verified input parameters. One of these input parameters is the reverse saturation current density J0,cont at the local base contact. We determine J0,cont by means of area-averaged charge carrier lifetime measurements and an analytical model that distinguishes between recombination at the metal contacts and at the passivated interface between the contacts. The calibration-free dynamic infrared lifetime mapping technique is used. We measure local reverse saturation current densities J0,cont = 2×10³ to 2×10⁷ fA/cm² at metal contacts to p-type float-zone material with resistivities ρ = 0.5 to 200 Ω cm. Laser contact openings (LCOs) formed by laser ablation of an amorphous Si/SiNx passivation stack and subsequent physical vapor deposition of aluminum are used as the contact formation technique. Laser-fired contacts (LFCs) were also applied to the same passivation stack and metallization. We observe no difference in J0,cont between LCO and LFC. Our results indicate degradation of the passivation stack by the laser treatment in the vicinity of the LCOs and LFCs.

  10. Tools for developing a quality management program: proactive tools (process mapping, value stream mapping, fault tree analysis, and failure mode and effects analysis).

    PubMed

    Rath, Frank

    2008-01-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.

  11. Tools for Developing a Quality Management Program: Proactive Tools (Process Mapping, Value Stream Mapping, Fault Tree Analysis, and Failure Mode and Effects Analysis)

    SciTech Connect

    Rath, Frank

    2008-05-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.

  12. BESIII Physics Data Storing and Processing on HBase and MapReduce

    NASA Astrophysics Data System (ADS)

    Lei, Xiaofeng; Li, Qiang; Kan, Bowen; Sun, Gongxing; Sun, Zhenyu

    2015-12-01

    In past years, we have successfully applied Hadoop to high-energy physics analysis. Although it has improved the efficiency of data analysis and reduced the cost of cluster building, there is still room for optimization: inflexible pre-selection, inefficient random data reading, and an I/O bottleneck caused by the Fuse layer used to access HDFS. To change this situation, this paper presents a new platform for high-energy physics data storage and analysis. The data structure is changed from DST tree-like files to HBase according to the features of the data and the analysis processes, since HBase is more suitable for random data reading than DST files and enables HDFS to be accessed directly. Several optimization measures were taken to obtain good performance. A customized protocol is defined for data serialization and deserialization to decrease the storage space in HBase. To make full use of the locality of data storage in HBase, a new MapReduce model and a new split policy for HBase regions are proposed. In addition, a dynamic, pluggable, easy-to-use TAG (event metadata) based pre-selection subsystem is established. It can help physicists filter out 99.9% of uninteresting data if the conditions are set properly, meaning that substantial I/O resources can be saved, CPU usage can be improved, and the time consumed by data analysis can be reduced. Finally, several use cases were designed; the test results show that the new platform performs excellently, being 3.4 times faster with pre-selection and 20% faster without pre-selection, and that the new platform is stable and scalable.
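
    As an illustration of the TAG-based pre-selection idea, the sketch below scans an HBase table with a server-side filter so that rejected events never leave the region servers. It uses the happybase Thrift client; the host, table name, column families, and filter condition are all made up for the example and are not from the paper:

    ```python
    import happybase  # Thrift-based Python HBase client

    # Hypothetical layout: one row per event, TAG metadata in 'tag',
    # serialized event payload in 'dst'.
    conn = happybase.Connection('hbase-thrift-host')
    events = conn.table('besiii_events')

    # Server-side filter on a TAG column (illustrative condition).
    flt = "SingleColumnValueFilter('tag', 'ncharged', >=, 'binary:2')"

    n_selected = 0
    for row_key, data in events.scan(columns=['dst:payload'], filter=flt):
        n_selected += 1   # real analysis would deserialize data[b'dst:payload']
    print(n_selected, "events passed the TAG pre-selection")
    ```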

  13. Side-scan sonar mapping: Pseudo-real-time processing and mosaicking techniques

    SciTech Connect

    Danforth, W.W.; Schwab, W.C.; O'Brien, T.F.; Karl, H.

    1990-05-01

    The US Geological Survey (USGS) surveyed 1,000 km² of the continental shelf off San Francisco during a 17-day cruise, using a 120-kHz side-scan sonar system, and produced a digitally processed sonar mosaic of the survey area. The data were processed and mosaicked in real time using software developed at the Lamont-Doherty Geological Observatory and modified by the USGS, a substantial task due to the enormous amount of data produced by high-resolution side-scan systems. Approximately 33 megabytes of data were acquired every 1.5 hr. The real-time sonar images were displayed on a PC-based workstation and the data were transferred to a UNIX minicomputer where the sonar images were slant-range corrected, enhanced using an averaging method of desampling and a linear-contrast stretch, merged with navigation, geographically oriented at a user-selected scale, and finally output to a thermal printer. The hard-copy output was then used to construct a mosaic of the survey area. The final product of this technique is a UTM-projected map-mosaic of sea-floor backscatter variations, which could be used, for example, to locate appropriate sites for sediment sampling to ground-truth the sonar imagery while still at sea. More importantly, reconnaissance surveys of this type allow for the analysis and interpretation of the mosaic during a cruise, thus greatly reducing the preparation time needed for planning follow-up studies of a particular area.

  14. Exploring the Interactive Patterns of Concept Map-Based Online Discussion: A Sequential Analysis of Users' Operations, Cognitive Processing, and Knowledge Construction

    ERIC Educational Resources Information Center

    Wu, Sheng-Yi; Chen, Sherry Y.; Hou, Huei-Tse

    2016-01-01

    Concept maps can be used as a cognitive tool to assist learners' knowledge construction. However, in a concept map-based online discussion environment, studies that take into consideration learners' manipulative actions of composing concept maps, cognitive process among learners' discussion, and social knowledge construction at the same time are…

  15. Urban land use mapping by machine processing of ERTS-1 multispectral data: A San Francisco Bay area example

    NASA Technical Reports Server (NTRS)

    Ellefsen, R.; Swain, P. H.; Wray, J. R.

    1973-01-01

    A study to develop computer-produced urban land use maps from satellite multispectral scanner data is reported. Data processing is discussed along with results for the San Francisco Bay area, which was chosen as the test area.

  16. Higher Education Planning for a Strategic Goal with a Concept Mapping Process at a Small Private College

    ERIC Educational Resources Information Center

    Driscoll, Deborah P.

    2010-01-01

    Faculty, staff, and administrators at a small independent college determined that planning with a Concept Mapping process efficiently produced strategic thinking and action plans for the accomplishment of a strategic goal to expand experiential learning within the curriculum. One year into a new strategic plan, the college enjoyed enrollment…

  17. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    NASA Astrophysics Data System (ADS)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory and Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. In particular

  18. Mapping mass movement processes using terrestrial LIDAR: a swift mechanism for hazard and disaster risk assessment

    NASA Astrophysics Data System (ADS)

    Garnica-Peña, Ricardo; Murillo-García, Franny; Alcántara-Ayala, Irasema

    2014-05-01

    The impact of disasters associated with mass movement processes has increased in the past decades. Whether triggered by earthquakes, volcanic activity or rainfall, mass movement processes have affected people, infrastructure, economic activities and the environment in different parts of the world. Extensive damage is particularly linked to rainfall-induced landslides due to the occurrence of tropical storms, hurricanes, and the combination of different meteorological phenomena over exposed vulnerable communities. Therefore, landslide susceptibility analysis and hazard and risk assessments are considered significant mechanisms to lessen the impact of disasters. Ideally, these procedures ought to be carried out before disasters take place. However, under intense or persistent periods of rainfall, the evaluation of potentially unstable slopes becomes a critical issue. Such evaluations are constrained by the availability of resources, capabilities and scientific and technological tools. Among them, remote sensing has proved to be a valuable tool to evaluate areas affected by mass movement processes during the post-disaster stage. Nonetheless, the high cost of imagery acquisition inhibits its wide use. High-resolution topographic field surveys consequently turn out to be an essential approach to address landslide evaluation needs. In this work, we present the evaluation and mapping of a series of mass movement processes induced by hurricane Ingrid in September 2013 in Teziutlán, Puebla, México, a municipality situated 265 km northeast of Mexico City. Geologically, Teziutlán is characterised by the presence, in the north, of siltstones and conglomerates of the Middle Jurassic, whereas the central and southern sectors consist of volcanic deposits of various types: andesitic tuffs of Tertiary age, and basalts, rhyolitic tuffs and ignimbrites from the Quaternary. Major relief structures are formed by the accumulation of volcanic material; lava domes, partially buried

  19. Future accelerators (?)

    SciTech Connect

    John Womersley

    2003-08-21

    I describe the future accelerator facilities that are currently foreseen for electroweak scale physics, neutrino physics, and nuclear structure. I will explore the physics justification for these machines, and suggest how the case for future accelerators can be made.

  20. Social comparison processes, narrative mapping and their shaping of the cancer experience: a case study of an elite athlete.

    PubMed

    Sparkes, Andrew C; Pérez-Samaniego, Víctor; Smith, Brett

    2012-09-01

    Drawing on data generated by life history interviews and fieldwork observations we illuminate the ways in which a young elite athlete named David (a pseudonym) gave meaning to his experiences of cancer that eventually led to his death. Central to this process were the ways in which David utilized both social comparisons and a narrative map provided by the published autobiography of Lance Armstrong (2000). Our analysis reveals the selective manner in which social comparison processes operated around the following key dimensions: mental attitude to treatment; the sporting body; the ageing body; and physical appearance. The manner in which different comparison targets were chosen, the ways in which these were framed by Armstrong's autobiography, and the work that the restitution narrative as an actor did in this process are also examined. Some reflections are offered regarding the experiential consequences of the social comparison processes utilized by David when these are shaped by specific forms of embodiment and selective narrative maps of cancer survival.

  1. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    Existing research on acceleration control mainly focuses on optimizing the velocity trajectory with respect to a criterion that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter and the battery) is modeled to obtain a detailed consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control solved by the analytical algorithm and that solved by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
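
    A toy version of the dynamic-programming formulation: discretize speed, step backward in time, and accumulate electrical energy for each feasible acceleration. All vehicle and loss parameters below are illustrative, not the paper's:

    ```python
    import numpy as np

    m, dt, T = 1500.0, 0.1, 10.0      # mass [kg], time step [s], time budget [s]
    v_target, nv = 20.0, 201          # target speed [m/s], speed-grid size
    v_grid = np.linspace(0.0, v_target, nv)
    a_max = 3.0                       # traction-limited acceleration [m/s^2]

    def elec_power(v, a):
        """Hypothetical battery power: inertia + aero + rolling, ~85% efficiency."""
        force = m * a + 0.4 * v**2 + 150.0
        return max(force * v, 0.0) / 0.85

    cost = np.full(nv, np.inf)
    cost[-1] = 0.0                    # terminal condition: v_target reached
    for _ in range(int(T / dt)):      # backward induction over time steps
        new = np.full(nv, np.inf)
        for i, v in enumerate(v_grid):
            for j in range(i, nv):    # accelerate or hold speed
                a = (v_grid[j] - v) / dt
                if a > a_max:
                    break
                c = elec_power(v, a) * dt + cost[j]
                if c < new[i]:
                    new[i] = c
        cost = new
    print("minimum energy from standstill: %.1f kJ" % (cost[0] / 1e3))
    ```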

  2. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS and then combined using IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and the percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
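
    The comparison described here is between per-class pixel counts of the two rasters rather than pixel-by-pixel overlap. A minimal NumPy sketch of those statistics:

    ```python
    import numpy as np

    def count_agreement(map_a, map_b):
        """Percent agreement of per-class pixel counts between two categorical
        rasters, plus Pearson's correlation of the count vectors."""
        classes = np.union1d(np.unique(map_a), np.unique(map_b))
        counts_a = np.array([(map_a == c).sum() for c in classes])
        counts_b = np.array([(map_b == c).sum() for c in classes])
        agreement = 100.0 * np.minimum(counts_a, counts_b) / np.maximum(counts_a, counts_b)
        r = np.corrcoef(counts_a, counts_b)[0, 1]
        return dict(zip(classes.tolist(), agreement)), r
    ```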

  3. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing, Sky Maps, and Basic Results

    NASA Technical Reports Server (NTRS)

    Weiland, J.L.; Hill, R.S.; Odegard, N.; Larson, D.; Bennett, C.L.; Dunkley, J.; Jarosik, N.; Page, L.; Spergel, D.N.; Halpern, M.; Meyer, S.S.; Tucker, G.S.; Wright, E.L.

    2008-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) is a Medium-Class Explorer (MIDEX) satellite aimed at elucidating cosmology through full-sky observations of the cosmic microwave background (CMB). The WMAP full-sky maps of the temperature and polarization anisotropy in five frequency bands provide our most accurate view to date of conditions in the early universe. The multi-frequency data facilitate the separation of the CMB signal from foreground emission arising both from our Galaxy and from extragalactic sources. The CMB angular power spectrum derived from these maps exhibits a highly coherent acoustic peak structure which makes it possible to extract a wealth of information about the composition and history of the universe, as well as the processes that seeded the fluctuations. WMAP data have played a key role in establishing ΛCDM as the new standard model of cosmology (Bennett et al. 2003; Spergel et al. 2003; Hinshaw et al. 2007; Spergel et al. 2007): a flat universe dominated by dark energy, supplemented by dark matter and atoms, with density fluctuations seeded by a Gaussian, adiabatic, nearly scale-invariant process. The basic properties of this universe are determined by five numbers: the density of matter, the density of atoms, the age of the universe (or equivalently, the Hubble constant today), the amplitude of the initial fluctuations, and their scale dependence. By accurately measuring the first few peaks in the angular power spectrum, WMAP data have enabled the following accomplishments: showing that the dark matter must be non-baryonic and interact only weakly with atoms and radiation (the WMAP measurement of the dark matter density puts important constraints on supersymmetric dark matter models and on the properties of other dark matter candidates; with five years of data and a better determination of our beam response, this measurement has been significantly improved); precise determination of the density of atoms in the universe. The agreement between

  4. Interactive Query Processing in Big Data Systems: A Cross Industry Study of MapReduce Workloads

    DTIC Science & Technology

    2012-04-02

    This study examines MapReduce behavior using existing mental models, then describes the MapReduce workload traces (§ 3). The next few sections present empirical evidence on questions such as: (1) How uniformly or skewed are the data accesses? How much temporal locality exists? (2) For workload-level provisioning and load shaping: How

  5. Additive electroplating technology as a post-CMOS process for the production of MEMS acceleration-threshold switches for transportation applications

    NASA Astrophysics Data System (ADS)

    Michaelis, Sven; Timme, Hans-Jörg; Wycisk, Michael; Binder, Josef

    2000-06-01

    This paper presents an acceleration-threshold sensor fabricated with an electroplating technology which can be integrated on top of a pre-processed CMOS signal processing circuit. The device can be manufactured using a standard low-cost CMOS production line and then adding the mechanical sensor elements via a specialized back-end process. This makes the system especially interesting for automotive applications, such as airbag safety systems or transportation shock monitoring systems, where smaller size, improved functionality, high reliability and low costs are important.

  6. High Gradient Accelerator Research

    SciTech Connect

    Temkin, Richard

    2016-07-12

    The goal of the MIT program of research on high gradient acceleration is the development of advanced acceleration concepts that lead to a practical and affordable next generation linear collider at the TeV energy level. Other applications, which are more near-term, include accelerators for materials processing; medicine; defense; mining; security; and inspection. The specific goals of the MIT program are: • Pioneering theoretical research on advanced structures for high gradient acceleration, including photonic structures and metamaterial structures; evaluation of the wakefields in these advanced structures • Experimental research to demonstrate the properties of advanced structures both in low-power microwave cold test and high-power, high-gradient test at megawatt power levels • Experimental research on microwave breakdown at high gradient including studies of breakdown phenomena induced by RF electric fields and RF magnetic fields; development of new diagnostics of the breakdown process • Theoretical research on the physics and engineering features of RF vacuum breakdown • Maintaining and improving the Haimson/MIT 17 GHz accelerator, the highest frequency operational accelerator in the world, a unique facility for accelerator research • Providing the Haimson/MIT 17 GHz accelerator facility to outside users • Active participation in the US DOE program of High Gradient Collaboration, including joint work with SLAC and with Los Alamos National Laboratory; participation of MIT students in research at the national laboratories • Training the next generation of Ph.D. students in the field of accelerator physics.

  7. Fluid expulsion sites on the Cascadia accretionary prism: mapping diagenetic deposits with processed GLORIA imagery

    USGS Publications Warehouse

    Carson, Bobb; Seke, Erol; Paskevich, Valerie F.; Holmes, Mark L.

    1994-01-01

    Point-discharge fluid expulsion on accretionary prisms is commonly indicated by diagenetic deposition of calcium carbonate cements and gas hydrates in near-surface (<10 m below seafloor; mbsf) hemipelagic sediment. The contrasting clastic and diagenetic lithologies should be apparent in side-scan images. However, sonar also responds to variations in bottom slope, so unprocessed images mix topographic and lithologic information. We have processed GLORIA imagery from the Oregon continental margin to remove topographic effects. A synthetic side-scan image was created initially from Sea Beam bathymetric data and then was subtracted iteratively from the original GLORIA data until topographic features disappeared. The residual image contains high-amplitude backscattering that we attribute to diagenetic deposits associated with fluid discharge, based on submersible mapping, Ocean Drilling Program drilling, and collected samples. Diagenetic deposits are concentrated (1) near an out-of-sequence thrust fault on the second ridge landward of the base of the continental slope, (2) along zones characterized by deep-seated strike-slip faults that cut transversely across the margin, and (3) in undeformed Cascadia Basin deposits which overlie incipient thrust faults seaward of the toe of the prism. There is no evidence of diagenetic deposition associated with the frontal thrust that rises from the décollement. If the décollement is an important aquifer, apparently the fluids are passed either to the strike-slip faults which intersect the décollement or to the incipient faults in Cascadia Basin for expulsion. Diagenetic deposits seaward of the prism toe probably consist dominantly of gas hydrates
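
    The topography removal amounts to fitting the bathymetry-derived synthetic image to the sonar data and subtracting it, leaving backscatter (lithology) in the residual. A much-simplified one-step least-squares version of that idea; the USGS iterative procedure was certainly more involved:

    ```python
    import numpy as np

    def remove_topography(gloria, synthetic):
        """Subtract the best-fitting multiple of a synthetic side-scan image.

        gloria, synthetic : 2D co-registered arrays; the residual should
        mainly reflect backscatter variations rather than bottom slope.
        """
        g = gloria.astype(float)
        s = synthetic.astype(float)
        gain = (g * s).sum() / (s * s).sum()   # least-squares fit of s in g
        return g - gain * s
    ```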

  8. Two-dimensional intraventricular flow mapping by digital processing conventional color-Doppler echocardiography images.

    PubMed

    Garcia, Damien; Del Alamo, Juan C; Tanne, David; Yotti, Raquel; Cortina, Cristina; Bertrand, Eric; Antoranz, José Carlos; Perez-David, Esther; Rieu, Régis; Fernandez-Aviles, Francisco; Bermejo, Javier

    2010-10-01

    Doppler echocardiography remains the most widely used clinical modality for the evaluation of left ventricular (LV) function. Current Doppler ultrasound methods, however, are limited to the representation of a single flow velocity component. We thus developed a novel technique to construct 2D time-resolved (2D+t) LV velocity fields from conventional transthoracic clinical acquisitions. Combining color-Doppler velocities with LV wall positions, the cross-beam blood velocities were calculated using the continuity equation under a planar flow assumption. To validate the algorithm, 2D Doppler flow mapping and laser particle image velocimetry (PIV) measurements were carried out in an atrio-ventricular duplicator. Phase-contrast magnetic resonance (MR) acquisitions were used to measure in vivo the error due to the 2D flow assumption and to potential scan-plane misalignment. Finally, the applicability of the Doppler technique was tested in the clinical setting. In vitro experiments demonstrated that the new method yields an accurate quantitative description of the main vortex that forms during the cardiac cycle (mean error for vortex radius, position and circulation). MR image analysis showed that the error due to the planar flow assumption is close to 15% and does not preclude the characterization of major vortex properties in either the normal or the dilated LV. These results are yet to be confirmed by a head-to-head clinical validation study. Clinical Doppler studies showed that the method is readily applicable and that a single large anterograde vortex develops in the healthy ventricle, while supplementary retrograde swirling structures may appear in the diseased heart. The proposed echocardiographic method based on the continuity equation is fast, clinically compliant, and does not require complex training. This technique will potentially enable investigators to study additional quantitative aspects of intraventricular flow dynamics in the clinical setting by
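
    Under the planar-flow assumption, the missing cross-beam component follows from 2D continuity, integrated from a boundary where the velocity is known. A minimal sketch on a regular Cartesian grid; the published method works in the ultrasound scan geometry with moving LV wall boundary conditions:

    ```python
    import numpy as np

    def crossbeam_velocity(v_beam, dx, dy):
        """Recover the cross-beam velocity from the Doppler (beam) component.

        v_beam : 2D beam-aligned velocity, axis 0 = beam direction (y), axis 1 = x.
        Assumes planar incompressible flow (dv_x/dx + dv_y/dy = 0) and zero
        cross-beam velocity on the first column (taken to lie on the wall).
        """
        dvy_dy = np.gradient(v_beam, dy, axis=0)
        v_cross = -np.cumsum(dvy_dy, axis=1) * dx   # integrate -dv_y/dy along x
        v_cross -= v_cross[:, :1]                   # enforce zero on the wall column
        return v_cross
    ```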

  9. Decolorization of anthraquinone dye intermediate and its accelerating effect on reduction of azo acid dyes by Sphingomonas xenophaga in anaerobic-aerobic process.

    PubMed

    Lu, Hong; Zhou, Jiti; Wang, Jing; Ai, Haixin; Zheng, Chunli; Yang, Yusuo

    2008-09-01

    Decolorization of 1-aminoanthraquinone-2-sulfonic acid (ASA-2) and its accelerating effect on the reduction of azo acid dyes by Sphingomonas xenophaga QYY were investigated. The study showed that ASA-2 could be efficiently decolorized by strain QYY under aerobic conditions, according to the analysis of total organic carbon removal and UV-VIS spectral changes. Moreover, strain QYY was able to reduce azo acid dyes under anaerobic conditions. The effects of various operating conditions, such as carbon sources, temperature, and pH, on the reduction rate were studied. It was demonstrated that ASA-2, used as a redox mediator, could accelerate the reduction process. Consequently, the reduction of azo acid dyes mediated by ASA-2 and the decolorization of ASA-2 by strain QYY could be achieved in an anaerobic-aerobic process.

  10. Five-Year Wilkinson Microwave Anisotropy Probe Observations: Data Processing, Sky Maps, and Basic Results

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Weiland, J. L.; Hill, R. S.; Odegard, N.; Larson, D.; Bennett, C. L.; Dunkley, J.; Gold, B.; Greason, M. R.; Jarosik, N.; Komatsu, E.; Nolta, M. R.; Page, L.; Spergel, D. N.; Wollack, E.; Halpern, M.; Kogut, A.; Limon, M.; Meyer, S. S.; Tucker, G. S.; Wright, E. L.

    2010-01-01

    We present new full-sky temperature and polarization maps in five frequency bands from 23 to 94 GHz, based on data from the first five years of the Wilkinson Microwave Anisotropy Probe (WMAP) sky survey. The new maps are consistent with previous maps and are more sensitive. The five-year maps incorporate several improvements in data processing made possible by the additional years of data and by a more complete analysis of the instrument calibration and in-flight beam response. We present several new tests for systematic errors in the polarization data and conclude that W-band polarization data is not yet suitable for cosmological studies, but we suggest directions for further study. We do find that Ka-band data is suitable for use; in conjunction with the additional years of data, the addition of Ka band to the previously used Q- and V-band channels significantly reduces the uncertainty in the optical depth parameter, τ. Further scientific results from the five-year data analysis are presented in six companion papers and are summarized in Section 7 of this paper. With the five-year WMAP data, we detect no convincing deviations from the minimal six-parameter ΛCDM model: a flat universe dominated by a cosmological constant, with adiabatic and nearly scale-invariant Gaussian fluctuations. Using WMAP data combined with measurements of Type Ia supernovae and baryon acoustic oscillations in the galaxy distribution, we find (68% CL uncertainties): Ω_b h² = 0.02267 (+0.00058/−0.00059), Ω_c h² = 0.1131 ± 0.0034, Ω_Λ = 0.726 ± 0.015, n_s = 0.960 ± 0.013, τ = 0.084 ± 0.016, and Δ²_R = (2.445 ± 0.096) × 10⁻⁹ at k = 0.002 Mpc⁻¹. From these we derive σ₈ = 0.812 ± 0.026, H₀ = 70.5 ± 1.3 km s⁻¹ Mpc⁻¹, Ω_b = 0.0456 ± 0.0015, Ω_c = 0.228 ±

  11. Inferring coastal processes from regional-scale mapping of 222Radon and salinity: examples from the Great Barrier Reef, Australia.

    PubMed

    Stieglitz, Thomas C; Cook, Peter G; Burnett, William C

    2010-07-01

    The radon isotope 222Rn and salinity in coastal surface water were mapped on regional scales, to improve the understanding of coastal processes and their spatial variability. Radon was measured with a surface-towed, continuously recording multi-detector setup on a moving vessel. Numerous processes and locations of land-ocean interaction along the Central Great Barrier Reef coastline were identified and interpreted based on the data collected. These included riverine fluxes, terrestrially-derived fresh submarine groundwater discharge (SGD) and the tidal pumping of seawater through mangrove forests. Based on variations in the relationship of the tracers radon and salinity, some aspects of regional freshwater inputs to the coastal zone and to estuaries could be assessed. Concurrent mapping of radon and salinity allowed an efficient qualitative assessment of land-ocean interaction on various spatial and temporal scales, indicating that such surveys on coastal scales can be a useful tool to obtain an overview of SGD locations and processes.

  12. Multiscale Processes of Hurricane Sandy (2012) as Revealed by the CAMVis-MAP

    NASA Astrophysics Data System (ADS)

    Shen, B.; Li, J. F.; Cheung, S.

    2013-12-01

    In late October 2012, Hurricane Sandy made landfall near Brigantine, New Jersey, devastating surrounding areas and causing tremendous economic loss and hundreds of fatalities (Blake et al., 2013). Estimated damage of $50 billion made Sandy the second-costliest tropical cyclone (TC) in US history, surpassed only by Hurricane Katrina (2005). Central questions to be addressed include (1) to what extent the lead time of severe storm prediction, for storms such as Sandy, can be extended (e.g., Emanuel 2012); and (2) whether and how advanced global models, supercomputing technology and numerical algorithms can help effectively illustrate the complicated physical processes associated with the evolution of such storms. In this study, the predictability of Sandy is addressed with a focus on short-term (or extended-range) genesis prediction as a first step toward understanding the relationship between extreme events such as Sandy and the current climate. The newly deployed Coupled Advanced global mesoscale Modeling (GMM) and concurrent Visualization (CAMVis) system is used for this study. We show remarkable simulations of Hurricane Sandy with the GMM, including realistic 7-day track and intensity forecasts and genesis predictions with a lead time of up to 6 days (e.g., Shen et al., 2013, GRL, submitted). We then discuss the enabling role of high-resolution 4D (time-X-Y-Z) visualizations in illustrating the TC's transient dynamics and its interaction with tropical waves. In addition, we have finished the parallel implementation of the ensemble empirical mode decomposition (PEEMD, Cheung et al., 2013, AGU13, submitted) method, which will soon be integrated into the multiscale analysis package (MAP) for the analysis of tropical weather systems such as TCs and tropical waves. While the original EEMD has previously shown superior performance in decomposing nonlinear (local) and non-stationary data into different intrinsic modes that stay within the natural

  13. User alternatives in post-processing for raster-to-vector conversion. [Landsat-based forest mapping

    NASA Technical Reports Server (NTRS)

    Logan, T. L.; Woodcock, C. E.

    1983-01-01

    A number of Landsat-based coniferous forest stratum maps have been created of the Eldorado National Forest in California. These maps were produced in raster image format which is not directly usable by the U.S. Forest Service's vector-based Wildland Resource Information System (WRIS). As a solution, raster-to-vector conversion software has been developed for processing classified images into polygonal data structures. Before conversion, however, the digital classification images must be simplified to remove high spatial variance ('noise', 'speckle') and meet a USFS ten acre minimum requirement. A post-processing (simplification) strategy different from those commonly used in raster image processing may be desired for preparing maps for conversion to vector format, because simplification routines typically permit diagonal connections in the process of reclassifying pixels and forming new polygons. Diagonal connections are often undesirable when converting to vector format because they permit polygons to effectively cross over each other and occupy a common location. Three alternative methodologies are discussed for simplifying raster data for conversion to vector format.
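
    The diagonal-connection issue can be reproduced directly: under 8-connectivity a diagonal run of pixels forms one region, yet the background is simultaneously connected across the same diagonal, so the derived polygons would overlap. A small SciPy illustration of the 4- vs 8-connectivity choice:

    ```python
    import numpy as np
    from scipy import ndimage

    cls = np.array([[1, 0, 0],
                    [0, 1, 0],
                    [0, 0, 1]])   # a diagonal run of class-1 pixels

    four_conn  = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]   # no diagonal links
    eight_conn = np.ones((3, 3), dtype=int)          # diagonal links allowed

    print(ndimage.label(cls, structure=four_conn)[1])    # 3 separate regions
    print(ndimage.label(cls, structure=eight_conn)[1])   # 1 region
    ```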

  14. PETROMAP: MS-DOS software package for quantitative processing of X-ray maps of zoned minerals

    NASA Astrophysics Data System (ADS)

    Cossio, Roberto; Borghi, Alessandro

    1998-10-01

    This paper shows an application of energy-dispersive spectrometry (EDS) for digital acquisition of multi-element X-ray compositional maps of minerals in polished thin sections. A square matrix of n EDS spectra with known X, Y coordinates is collected, converted and exported to a personal computer. Each spectrum of the matrix is processed and the apparent concentration of each analyzed element is calculated by means of PETROMAP, a program written in QuickBasic which applies a quantitative ZAF/FLS correction. The results of processing are comparable to conventional quantitative microprobe analyses, with similar counting statistics. The output is a numerical matrix, compatible with the most popular graphics and spreadsheet programs, from which it is possible to produce two-dimensional pseudocolored or black/white maps of wt% oxides, mole fractions, and mineral end-members. The procedure has been tested using a metamorphic garnet of the medium-grade Stilo unit (Calabrian Arc, Southern Italy).

  15. Tuning maps for setpoint changes and load disturbance upsets in a three capacity process under multivariable control

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Smith, Ira C.

    1991-01-01

    Tuning maps are an aid in the controller tuning process because they provide a convenient way for the plant operator to determine the consequences of adjusting different controller parameters. In this application the maps provide a graphical representation of the effect of varying the gains in the state feedback matrix on startup and load-disturbance transients for a three-capacity process. Nominally, the three-tank system, represented in diagonal form, has Proportional-Integral control on each loop. Cross-coupling is then introduced between the loops by using non-zero off-diagonal proportional parameters, as in the sketch below. Changes in transient behavior due to setpoint and load changes are examined by varying the gains of the cross-coupling terms.
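
    A compact simulation of that setup: three first-order capacities in diagonal form, PI control per loop, and non-zero off-diagonal proportional gains introducing cross-coupling. All numbers are illustrative, not from the study; a tuning map would sweep the off-diagonal entries and record transient metrics such as overshoot and settling time:

    ```python
    import numpy as np

    tau = np.array([1.0, 2.0, 4.0])        # capacity time constants [s]
    Kp = np.array([[2.0, 0.3, 0.0],        # off-diagonal terms couple the loops
                   [0.0, 2.0, 0.3],
                   [0.0, 0.0, 2.0]])
    Ki = np.array([1.0, 0.5, 0.25])        # per-loop integral gains
    dt, T = 0.01, 20.0
    x, integ = np.zeros(3), np.zeros(3)
    sp = np.ones(3)                        # simultaneous setpoint step

    for _ in range(int(T / dt)):
        err = sp - x
        integ += err * dt
        u = Kp @ err + Ki * integ          # multivariable P + per-loop I action
        x += dt * (u - x) / tau            # first-order capacity dynamics
    print("final levels:", np.round(x, 3))
    ```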

  16. Recombinant growth factor mixtures induce cell cycle progression and the upregulation of type I collagen in human skin fibroblasts, resulting in the acceleration of wound healing processes.

    PubMed

    Lee, Do Hyun; Choi, Kyung-Ha; Cho, Jae-We; Kim, So Young; Kwon, Tae Rin; Choi, Sun Young; Choi, Yoo Mi; Lee, Jay; Yoon, Ho Sang; Kim, Beom Joon

    2014-05-01

    Application of growth factor mixtures has been used for wound healing and as an anti-wrinkle agent. The aim of this study was to evaluate the effect of recombinant growth factor mixtures (RGFM) on the expression of cell cycle regulatory proteins, type I collagen, and the wound healing processes of acute animal wound models. The results showed that RGFM induced increased rates of cell proliferation and cell migration in human skin fibroblasts (HSF). In addition, expression of cyclin D1, cyclin E, cyclin-dependent kinase (Cdk)4, and Cdk2 proteins was markedly increased by the growth factor mixture treatment in fibroblasts. Expression of type I collagen was also increased in growth factor mixture-treated HSF. Moreover, the growth factor mixture-induced upregulation of type I collagen was associated with the activation of Smad2/3. In the animal model, RGFM-treated mice showed accelerated wound closure, with the closure rate increasing as early as day 7, as well as faster re-epithelization and reduced inflammatory cell infiltration compared with phosphate-buffered saline (PBS)-treated mice. In conclusion, the results indicated that RGFM has the potential to accelerate wound healing through the upregulation of type I collagen, which is partly mediated by activation of the Smad2/3-dependent signaling pathway as well as by cell cycle progression in HSF. The topical application of growth factor mixtures to acute and chronic skin wounds may accelerate the epithelization process through these molecular mechanisms.

  17. Plasma acceleration above martian magnetic anomalies.

    PubMed

    Lundin, R; Winningham, D; Barabash, S; Frahm, R; Holmström, M; Sauvaud, J-A; Fedorov, A; Asamura, K; Coates, A J; Soobiah, Y; Hsieh, K C; Grande, M; Koskinen, H; Kallio, E; Kozyra, J; Woch, J; Fraenz, M; Brain, D; Luhmann, J; McKenna-Lawler, S; Orsini, R S; Brandt, P; Wurz, P

    2006-02-17

    Auroras are caused by accelerated charged particles precipitating along magnetic field lines into a planetary atmosphere, the auroral brightness being roughly proportional to the precipitating particle energy flux. The Analyzer of Space Plasma and Energetic Atoms experiment on the Mars Express spacecraft has made a detailed study of acceleration processes on the nightside of Mars. We observed accelerated electrons and ions in the deep nightside high-altitude region of Mars that map geographically to interface/cleft regions associated with martian crustal magnetization regions. By integrating electron and ion acceleration energy down to the upper atmosphere, we saw energy fluxes in the range of 1 to 50 milliwatts per square meter. These conditions are similar to those producing bright discrete auroras above Earth. Discrete auroras at Mars are therefore expected to be associated with plasma acceleration in diverging magnetic flux tubes above crustal magnetization regions, the auroras being distributed geographically in a complex pattern by the many multipole magnetic field lines extending into space.

  18. Tailoring online information retrieval to user's needs based on a logical semantic approach to natural language processing and UMLS mapping.

    PubMed

    Kossman, Susan; Jones, Josette; Brennan, Patricia Flatley

    2007-10-11

    Depression can derail teenagers' lives and cause serious chronic health problems. Acquiring pertinent knowledge and skills supports care management, but retrieving appropriate information can be difficult. This poster presents a strategy to tailor online information to user attributes using a logical semantic approach to natural language processing (NLP) and mapping propositions to UMLS terms. This approach capitalizes on existing NLM resources and presents a potentially sustainable plan for meeting consumers' and providers' information needs.

  19. On the retention of high-energy protons and nuclei with charges Z ≥ 2 in large solar flares after the process of their acceleration

    NASA Technical Reports Server (NTRS)

    Volodichev, N. N.; Kuzhevsky, B. M.; Nechaev, O. Y.; Savenko, I. A.

    1985-01-01

    Data are presented which suggest that protons with energies of up to several GeV are retained on the Sun after the process of their acceleration. The protons are on average retained for 15 min, irrespective of the solar flare heliolatitude and of the accelerated particle energy, which ranges from 100 MeV to several GeV. It is suggested that the particles are retained in a magnetic trap formed in a solar active region. No Z ≥ 2 nuclei of solar origin were detected during large solar flares. The absence of the 500 MeV/nucleon nuclei with Z ≥ 2 may be due to their retention in the magnetic trap which also retains the high-energy protons. During the trapping time, the approx. 500 MeV/nucleon nuclei with Z ≥ 2 may be lost through nuclear interactions and ionization loss.

  20. ISSUES IN DIGITAL IMAGE PROCESSING OF AERIAL PHOTOGRAPHY FOR MAPPING SUBMERSED AQUATIC VEGETATION

    EPA Science Inventory

    The paper discusses the numerous issues that needed to be addressed when developing a methodology for mapping Submersed Aquatic Vegetation (SAV) from digital aerial photography. Specifically, we discuss 1) choice of film; 2) consideration of tide and weather constraints; 3) in-s...

  1. Using Saliency Maps to Separate Competing Processes in Infant Visual Cognition

    ERIC Educational Resources Information Center

    Althaus, Nadja; Mareschal, Denis

    2012-01-01

    This article presents an eye-tracking study using a novel combination of visual saliency maps and "area-of-interest" analyses to explore online feature extraction during category learning in infants. Category learning in 12-month-olds (N = 22) involved a transition from looking at high-saliency image regions to looking at more…

  2. A Soft OR Approach to Fostering Systems Thinking: SODA Maps plus Joint Analytical Process

    ERIC Educational Resources Information Center

    Wang, Shouhong; Wang, Hai

    2016-01-01

    Higher order thinking skills are important for managers. Systems thinking is an important type of higher order thinking in business education. This article investigates a soft Operations Research approach to teaching and learning systems thinking. It outlines the integrative use of Strategic Options Development and Analysis maps for visualizing…

  3. USING IMAGE PROCESSING METHODS WITH RASTER EDITING TOOLS FOR MAPPING EELGRASS DISTRIBUTIONS IN PACIFIC NORHWEST ESTUARIES

    EPA Science Inventory

    False-color near-infrared (CIR) aerial photography of seven Oregon estuaries was acquired at extreme low tides and digitally orthorectified with a ground pixel resolution of 25 cm to provide data for intertidal vegetation mapping. Exposed, semi-exposed and some submerged eelgras...

  4. Conceptual Maps: Measuring Learning Processes of Engineering Students Concerning Sustainable Development

    ERIC Educational Resources Information Center

    Segalas, J.; Ferrer-Balas, D.; Mulder, K. F.

    2008-01-01

    In the 1990s, courses on sustainable development (SD) were introduced in technological universities. After some years of practice, there is increased interest in the evaluation of the most effective ways for teaching SD. This paper introduces the use of conceptual maps as a tool to measure the knowledge acquired by students when taking a…

  5. Processing Flexible Form-to-Meaning Mappings: Evidence for Enriched Composition as Opposed to Indeterminacy

    ERIC Educational Resources Information Center

    Roehm, Dietmar; Sorace, Antonella; Bornkessel-Schlesewsky, Ina

    2013-01-01

    Sometimes, the relationship between form and meaning in language is not one-to-one. Here, we used event-related brain potentials (ERPs) to illuminate the neural correlates of such flexible syntax-semantics mappings during sentence comprehension by examining split-intransitivity. While some ("rigid") verbs consistently select one…

  6. Processing the CONSOL Energy, Inc. Mine Maps and Records Collection at the University of Pittsburgh

    ERIC Educational Resources Information Center

    Rougeux, Debora A.

    2011-01-01

    This article describes the efforts of archivists and student assistants at the University of Pittsburgh's Archives Service Center to organize, describe, store, and provide timely and efficient access to over 8,000 maps of underground coal mines in southwestern Pennsylvania, as well as the records that accompanied them, donated by CONSOL Energy, Inc.…

  7. Exploring Students' Mapping Behaviors and Interactive Discourses in a Case Diagnosis Problem: Sequential Analysis of Collaborative Causal Map Drawing Processes

    ERIC Educational Resources Information Center

    Lee, Woon Jee

    2012-01-01

    The purpose of this study was to explore the nature of students' mapping and discourse behaviors while constructing causal maps to articulate their understanding of a complex, ill-structured problem. In this study, six graduate-level students were assigned to one of three pair groups, and each pair used the causal mapping software program,…

  8. Susceptibility mapping of shallow landslides using kernel-based Gaussian process, support vector machines and logistic regression

    NASA Astrophysics Data System (ADS)

    Colkesen, Ismail; Sahin, Emrehan Kutlug; Kavzoglu, Taskin

    2016-06-01

    Identification of landslide prone areas and production of accurate landslide susceptibility zonation maps have been crucial topics for hazard management studies. Since the prediction of susceptibility is one of the main processing steps in landslide susceptibility analysis, selection of a suitable prediction method plays an important role in the success of the susceptibility zonation process. Although simple statistical algorithms (e.g. logistic regression) have been widely used in the literature, the use of advanced non-parametric algorithms in landslide susceptibility zonation has recently become an active research topic. The main purpose of this study is to investigate the possible application of kernel-based Gaussian process regression (GPR) and support vector regression (SVR) for producing landslide susceptibility map of Tonya district of Trabzon, Turkey. Results of these two regression methods were compared with logistic regression (LR) method that is regarded as a benchmark method. Results showed that while kernel-based GPR and SVR methods generally produced similar results (90.46% and 90.37%, respectively), they outperformed the conventional LR method by about 18%. While confirming the superiority of the GPR method, statistical tests based on ROC statistics, success rate and prediction rate curves revealed the significant improvement in susceptibility map accuracy by applying kernel-based GPR and SVR methods.
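    A hedged sketch of the three-way comparison, using scikit-learn on synthetic data (the study's real conditioning factors and susceptibility labels are replaced by random stand-ins; model settings are illustrative, not the paper's):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF
        from sklearn.svm import SVR
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Stand-ins for conditioning factors (slope, lithology, ...) and labels
        X, y = make_classification(n_samples=600, n_features=8, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        models = {
            "GPR": GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=0.1),
            "SVR": SVR(kernel="rbf", C=1.0),
            "LR": LogisticRegression(max_iter=1000),
        }
        for name, m in models.items():
            m.fit(Xtr, ytr)
            # The two regressors output continuous susceptibility scores; LR
            # outputs class probabilities. Both feed directly into ROC AUC.
            score = m.predict_proba(Xte)[:, 1] if name == "LR" else m.predict(Xte)
            print(name, "ROC AUC = %.3f" % roc_auc_score(yte, score))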

  9. Added value products for imaging remote sensing by processing actual GNSS reflectometry delay doppler maps

    NASA Astrophysics Data System (ADS)

    Schiavulli, Domenico; Frappart, Frédéric; Ramilien, Guillaume; Darrozes, José; Nunziata, Ferdinando; Migliaccio, Maurizio

    2016-04-01

    Global Navigation Satellite System Reflectometry (GNSS-R) is an innovative and promising tool for remote sensing. It is based on the exploitation of GNSS signals reflected off Earth's surface as signals of opportunity to infer geophysical information about the reflecting surface. The main advantages of GNSS-R with respect to dedicated sensors are the unprecedented spatial-temporal coverage due to the availability of a large number of transmitting satellites (e.g., GPS, Galileo, GLONASS), the long-term GNSS mission life, and cost effectiveness, since only a simple receiver is needed. In recent years, several works have demonstrated the usefulness of this technique in a range of Earth Observation applications. All of these applications presented results obtained by using a receiver mounted on an aircraft or on a fixed platform. Moreover, spaceborne missions have been launched or are planned: UK-DMC, TechDemoSat-1 (TDS-1), NASA CYGNSS, and GEROS-ISS. In practice, GNSS-R can be seen as a bistatic radar system in which the GNSS satellites continuously transmit all-weather, night-and-day L-band signals that are reflected off a surface area called the Glistening Zone (GZ), and a receiver measures the scattered microwave signals in terms of Delay-Doppler Maps (DDMs) or delay waveforms. These two products have been widely studied in the literature to extract compact parameters for different remote sensing applications. However, products measured in the Delay-Doppler (DD) domain are not able to provide any spatial information about the scattering scene. This is a drawback for imaging applications such as target detection, sea/land and sea/ice transition mapping, and oil spill detection. To overcome these limitations, deconvolution techniques have been proposed in the literature that aim to reconstruct a radar image of the observed scene by processing the measured DDMs. These techniques have been tested on DDMs related to simulated marine scenario

  10. Detecting Buried Archaeological Remains by the Use of Geophysical Data Processing with 'Diffusion Maps' Methodology

    NASA Astrophysics Data System (ADS)

    Eppelbaum, Lev

    2015-04-01

    observe that as a result of the above operations we embedded the original data into 3-dimensional space where data related to the AT subsurface are well separated from the N data. This 3D set of the data representatives can be used as a reference set for the classification of newly arriving data. Geophysically it means a reliable division of the studied areas for the AT-containing and not containing (N) these objects. Testing this methodology for delineation of archaeological cavities by magnetic and gravity data analysis displayed an effectiveness of this approach. References Alperovich, L., Eppelbaum, L., Zheludev, V., Dumoulin, J., Soldovieri, F., Proto, M., Bavusi, M. and Loperte, A., 2013. A new combined wavelet methodology applied to GPR and ERT data in the Montagnole experiment (French Alps). Journal of Geophysics and Engineering, 10, No. 2, 025017, 1-17. Averbuch, A., Hochman, K., Rabin, N., Schclar, A. and Zheludev, V., 2010. A diffusion frame-work for detection of moving vehicles. Digital Signal Processing, 20, No.1, 111-122. Averbuch A.Z., Neittaanmäki, P., and Zheludev, V.A., 2014. Spline and Spline Wavelet Methods with Applications to Signal and Image Processing. Volume I: Periodic Splines. Springer. Coifman, R.R. and Lafon, S., 2006. Diffusion maps, Applied and Computational Harmonic Analysis. Special issue on Diffusion Maps and Wavelets, 21, No. 7, 5-30. Eppelbaum, L.V., 2011. Study of magnetic anomalies over archaeological targets in urban conditions. Physics and Chemistry of the Earth, 36, No. 16, 1318-1330. Eppelbaum, L.V., 2014a. Geophysical observations at archaeological sites: Estimating informational content. Archaeological Prospection, 21, No. 2, 25-38. Eppelbaum, L.V. 2014b. Four Color Theorem and Applied Geophysics. Applied Mathematics, 5, 358-366. Eppelbaum, L.V., Alperovich, L., Zheludev, V. and Pechersky, A., 2011. Application of informational and wavelet approaches for integrated processing of geophysical data in complex environments. Proceed
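    A minimal diffusion-maps embedding in the sense of Coifman and Lafon (2006), cited above, sketched for a generic feature matrix (rows standing in for processed geophysical measurement points; the kernel scale and dimensions are illustrative assumptions):

        import numpy as np

        def diffusion_map(X, eps=1.0, n_coords=3, t=1):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
            K = np.exp(-d2 / eps)                                 # Gaussian affinities
            P = K / K.sum(axis=1, keepdims=True)                  # Markov matrix
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)                        # sort by eigenvalue
            vals, vecs = vals.real[order], vecs.real[:, order]
            # Diffusion coordinates lambda_k^t * psi_k, skipping the trivial
            # constant eigenvector that belongs to eigenvalue 1
            return (vals[1:n_coords + 1] ** t) * vecs[:, 1:n_coords + 1]

        X = np.random.rand(200, 6)     # stand-in for integrated survey data
        emb = diffusion_map(X)         # 200 x 3 embedding, as in the 3D set above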

  11. Utilization of Workflow Process Maps to Analyze Gaps in Critical Event Notification at a Large, Urban Hospital.

    PubMed

    Bowen, Meredith; Prater, Adam; Safdar, Nabile M; Dehkharghani, Seena; Fountain, Jack A

    2016-08-01

    Stroke care is a time-sensitive workflow involving multiple specialties acting in unison, often relying on one-way paging systems to alert care providers. The goal of this study was to map and quantitatively evaluate such a system and to address communication gaps with system improvements. A workflow process map of the stroke notification system at a large, urban hospital was created via observation and interviews with hospital staff. We recorded pager communication regarding 45 patients in the emergency department (ED), neuroradiology reading room (NRR), and a clinician residence (CR), categorizing transmissions as successful or unsuccessful (dropped or unintelligible). Data analysis and consultation with information technology staff and the vendor informed a quality intervention: replacing one paging antenna and adding another. Data from a 1-month post-intervention period were collected. Error rates before and after were compared using a chi-squared test. Seventy-five pages regarding 45 patients were recorded pre-intervention; 88 pages regarding 86 patients were recorded post-intervention. Initial transmission error rates in the ED, NRR, and CR were 40.0, 22.7, and 12.0 %. Post-intervention, error rates were 5.1, 18.8, and 1.1 %, a statistically significant improvement in the ED (p < 0.0001) and CR (p = 0.004) but not the NRR (p = 0.208). This intervention resulted in measurable improvement in pager communication to the ED and CR. While results in the NRR were not significant, this intervention bolsters the utility of workflow process maps. The workflow process map effectively defined communication failure parameters, allowing for systematic testing and intervention to improve communication in essential clinical locations.
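    The before/after comparison reduces to a chi-squared test on a 2x2 contingency table, sketched below with scipy; the counts are illustrative placeholders consistent with the reported rates, not the study's raw data:

        from scipy.stats import chi2_contingency

        #                  [failed, succeeded] pager transmissions
        pre_intervention = [30, 45]      # ~40% error rate (ED, pre)
        post_intervention = [5, 83]      # ~5% error rate (ED, post)

        chi2, p, dof, expected = chi2_contingency([pre_intervention,
                                                   post_intervention])
        print("chi2 = %.2f, p = %.4g" % (chi2, p))   # small p -> significant drop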

  12. Machine processing of S-192 and supporting aircraft data: Studies of atmospheric effects, agricultural classifications, and land resource mapping

    NASA Technical Reports Server (NTRS)

    Thomson, F.

    1975-01-01

    Two tasks of machine processing of S-192 multispectral scanner data are reviewed. In the first task, the effects of changing atmospheric and base altitude on the ability to machine-classify agricultural crops were investigated. A classifier and atmospheric effects simulation model was devised and its accuracy verified by comparison of its predicted results with S-192 processed results. In the second task, land resource maps of a mountainous area near Cripple Creek, Colorado were prepared from S-192 data collected on 4 August 1973.

  13. Hydrogeological Mapping and Hydrological Process Modelling for understanding the interaction of surface runoff and infiltration in a karstic catchment

    NASA Astrophysics Data System (ADS)

    Stadler, Hermann; Reszler, Christian; Komma, Jürgen; Poltnig, Walter; Strobl, Elmar; Blöschl, Günter

    2013-04-01

    This paper presents a study at the interface between hydrogeology and hydrology, concerning the mapping of surface runoff generation areas in a karstic catchment. The governing processes range from surface runoff with subsequent infiltration to direct infiltration and further deep percolation into different karst conduits. The aim is to identify areas with a potential for surface erosion and thus identify the hazard of solute/contaminant input into the karst system during aestival thundershowers, which can affect water quality at springs draining the karst massif. In keeping with hydrogeological methods, the emphasis of the study is on field investigations based on hydrogeological mapping and field measurements, in order to gain extensive knowledge about processes and their spatial distribution in the catchment and to establish a site-specific Dominant Process Concept (DPC). Based on the hydrogeological map, which describes the lithological units relating to their hydrogeological classification, mapping focuses on the following attributes of the overlying loose material/debris and soils: (i) infiltration capability, (ii) soil depth (as a measure of storage capacity), and (iii) potential surface flow length. Detailed mapping is performed in the reference area, where a variety of data are acquired, such as soil grain size distribution and soil moisture through TDR measurements at characteristic points. The reference area borders both end-members of the dominant surface runoff processes as described above. Geomorphologic analyses based on a 1 m resolution laser scan assist in allocating sinks and flow accumulation paths in the catchment. By a regionalisation model, developed and calibrated based on the results in the reference areas, the process disposition is transposed onto the whole study area. In a further step, a hydrological model will be set up, where model structure and parameters are identified based on the working steps described above and following the DPC. The model will be

  14. Scalability of the LEU-Modified Cintichem Process: 3-MeV Van de Graaff and 35-MeV Electron Linear Accelerator Studies

    SciTech Connect

    Rotsch, David A.; Brossard, Tom; Roussin, Ethan; Quigley, Kevin; Chemerisov, Sergey; Gromov, Roman; Jonah, Charles; Hafenrichter, Lohman; Tkac, Peter; Krebs, John; Vandegrift, George F.

    2016-10-31

    Molybdenum-99, the parent of Tc-99m, can be produced from fission of U-235 in nuclear reactors and purified from fission products by the Cintichem process, later modified for low-enriched uranium (LEU) targets. The key step in this process is the precipitation of Mo with α-benzoin oxime (ABO). The stability of this complex under radiation has been examined. Molybdenum-ABO was irradiated with 3 MeV electrons produced by a Van de Graaff generator and 35 MeV electrons produced by a 50 MeV/25 kW electron linear accelerator. Dose equivalents of 1.7–31.2 kCi of Mo-99 were administered to freshly prepared Mo-ABO. Irradiated samples of Mo-ABO were processed according to the LEU Modified-Cintichem process. The Van de Graaff data indicated good radiation stability of the Mo-ABO complex up to ~15 kCi dose equivalents of Mo-99 and nearly complete destruction at doses >24 kCi Mo-99. The linear accelerator data indicate that even at a dose equivalent of 6.2 kCi of Mo-99, the sample lost ~20% of its Mo-99. The 20% loss of Mo-99 at this low dose may be attributed to thermal decomposition of the product from the heat deposited in the sample during irradiation.

  15. Friction Mapping as a Tool for Measuring the Elastohydrodynamic Contact Running-in Process

    DTIC Science & Technology

    2015-10-01

    Lubricated gear and bearing contacts typically demonstrate a period of wear called “running-in” when they are first put into service, during which time the … values, depending on the ramp direction and extent of the mapping range. Subject terms: elastohydrodynamic, lubrication, wear, gears, bearings. Elastohydrodynamic lubricated contacts are common in vehicle transmission gears and bearings. The hydrodynamic pressure built up

  16. Exploiting comparative mapping among Brassica species to accelerate the physical delimitation of a genic male-sterile locus (BnRf) in Brassica napus.

    PubMed

    Xie, Yanzhou; Dong, Faming; Hong, Dengfeng; Wan, Lili; Liu, Pingwu; Yang, Guangsheng

    2012-07-01

    The recessive genic male sterility (RGMS) line 9012AB has been used as an important pollination control system for rapeseed hybrid production in China. Here, we report our study on physical mapping of one male-sterile locus (BnRf) in 9012AB by exploiting the comparative genomics among Brassica species. The genetic maps around BnRf from previous reports were integrated and enriched with markers from the Brassica A7 chromosome. Subsequent collinearity analysis of these markers contributed to the identification of a novel ancestral karyotype block F that possibly encompasses BnRf. Fourteen insertion/deletion markers were further developed from this conserved block and genotyped in three large backcross populations, leading to the construction of high-resolution local genetic maps where the BnRf locus was restricted to a less than 0.1-cM region. Moreover, it was observed that the target region in Brassica napus shares a high collinearity relationship with a region from the Brassica rapa A7 chromosome. A BnRf-cosegregated marker (AT3G23870) was then used to screen a B. napus bacterial artificial chromosome (BAC) library. From the resulting 16 positive BAC clones, one (JBnB089D05) was identified to most possibly contain the BnRf (c) allele. With the assistance of the genome sequence from the Brassica rapa homolog, the 13.8-kb DNA fragment covering both closest flanking markers from the BAC clone was isolated. Gene annotation based on the comparison of microcollinear regions among Brassica napus, B. rapa and Arabidopsis showed that five potential open reading frames reside in this fragment. These results provide a foundation for the characterization of the BnRf locus and allow a better understanding of the chromosome evolution around BnRf.

  17. Predictive Brain Mechanisms in Sound-to-Meaning Mapping during Speech Processing.

    PubMed

    Lyu, Bingjiang; Ge, Jianqiao; Niu, Zhendong; Tan, Li Hai; Gao, Jia-Hong

    2016-10-19

    Spoken language comprehension relies not only on the identification of individual words, but also on the expectations arising from contextual information. A distributed frontotemporal network is known to facilitate the mapping of speech sounds onto their corresponding meanings. However, how prior expectations influence this efficient mapping at the neuroanatomical level, especially in terms of individual words, remains unclear. Using fMRI, we addressed this question in the framework of the dual-stream model by scanning native speakers of Mandarin Chinese, a language highly dependent on context. We found that, within the ventral pathway, the violated expectations elicited stronger activations in the left anterior superior temporal gyrus and the ventral inferior frontal gyrus (IFG) for the phonological-semantic prediction of spoken words. Functional connectivity analysis showed that expectations were mediated by both top-down modulation from the left ventral IFG to the anterior temporal regions and enhanced cross-stream integration through strengthened connections between different subregions of the left IFG. By further investigating the dynamic causality within the dual-stream model, we elucidated how the human brain accomplishes sound-to-meaning mapping for words in a predictive manner.

  18. Production of a water quality map of Saginaw Bay by computer processing of LANDSAT-2 data

    NASA Technical Reports Server (NTRS)

    Mckeon, J. B.; Rogers, R. H.; Smith, V. E.

    1977-01-01

    Surface truth and LANDSAT measurements collected July 31, 1975, for Saginaw Bay were used to demonstrate a technique for producing a color-coded water quality map. On this map, color was used as a code to quantify five discrete ranges in the following water quality parameters: (1) temperature, (2) Secchi depth, (3) chloride, (4) conductivity, (5) total Kjeldahl nitrogen, (6) total phosphorus, (7) chlorophyll a, (8) total solids, and (9) suspended solids. The LANDSAT and water quality relationship was established through the use of a set of linear regression equations in which the water quality parameters are the dependent variables and LANDSAT measurements are the independent variables. Although the procedure is scene and surface truth dependent, it provides both a basis for extrapolating water quality parameters from point samples to unsampled areas and a synoptic view of water mass boundaries over the 3000 sq. km bay area, made from one day's ship data, that is superior in many ways to the traditional machine-contoured maps made from three days' ship data.
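    The regression step can be sketched in a few lines (assumptions: synthetic stand-ins for the surface truth, one parameter modeled, ordinary least squares on the four MSS bands):

        import numpy as np

        rng = np.random.default_rng(0)
        bands = rng.uniform(10, 60, size=(40, 4))   # MSS bands 4-7 at 40 stations
        secchi = (3.0 - 0.04 * bands[:, 1] + 0.02 * bands[:, 3]
                  + rng.normal(0, 0.1, 40))         # synthetic Secchi depth (m)

        A = np.column_stack([np.ones(len(bands)), bands])   # intercept + 4 bands
        coef, *_ = np.linalg.lstsq(A, secchi, rcond=None)

        # Applying the fitted equation to every pixel extrapolates the point
        # samples to unsampled areas; banding the predictions into five ranges
        # gives the color code used for the map.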

  19. Genome-Wide QTL Mapping for Wheat Processing Quality Parameters in a Gaocheng 8901/Zhoumai 16 Recombinant Inbred Line Population

    PubMed Central

    Jin, Hui; Wen, Weie; Liu, Jindong; Zhai, Shengnan; Zhang, Yan; Yan, Jun; Liu, Zhiyong; Xia, Xianchun; He, Zhonghu

    2016-01-01

    Dough rheological and starch pasting properties play an important role in determining processing quality in bread wheat (Triticum aestivum L.). In the present study, a recombinant inbred line (RIL) population derived from a Gaocheng 8901/Zhoumai 16 cross grown in three environments was used to identify quantitative trait loci (QTLs) for dough rheological and starch pasting properties evaluated by Mixograph, Rapid Visco-Analyzer (RVA), and Mixolab parameters using the wheat 90 and 660 K single nucleotide polymorphism (SNP) chip assays. A high-density linkage map constructed with 46,961 polymorphic SNP markers from the wheat 90 and 660 K SNP assays spanned a total length of 4121 cM, with an average chromosome length of 196.2 cM and marker density of 0.09 cM/marker; 6596 new SNP markers were anchored to the bread wheat linkage map, with 1046 and 5550 markers from the 90 and 660 K SNP assays, respectively. Composite interval mapping identified 119 additive QTLs on 20 chromosomes (all except 4D); among them, 15 accounted for more than 10% of the phenotypic variation across two or three environments. Twelve QTLs for Mixograph parameters, 17 for RVA parameters and 55 for Mixolab parameters were new. Eleven QTL clusters were identified. The closely linked SNP markers can be used in marker-assisted wheat breeding in combination with the Kompetitive Allele Specific PCR (KASP) technique for improvement of processing quality in bread wheat. PMID:27486464
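    As a greatly simplified, hedged stand-in for the QTL scan (composite interval mapping as used in the study also conditions on cofactor markers; here a plain single-marker regression scan on synthetic RIL genotypes illustrates the LOD statistic):

        import numpy as np

        rng = np.random.default_rng(2)
        n_lines, n_markers = 180, 500
        geno = rng.integers(0, 2, size=(n_lines, n_markers))    # RIL genotypes
        trait = geno[:, 42] * 0.8 + rng.normal(0, 1, n_lines)   # one true QTL

        def lod_score(g, y):
            r = np.corrcoef(g, y)[0, 1]
            # LOD for simple marker regression: -(n/2) * log10(1 - r^2)
            return -(len(y) / 2) * np.log10(1 - r ** 2)

        lods = np.array([lod_score(geno[:, j], trait) for j in range(n_markers)])
        print("peak marker:", lods.argmax(), "LOD = %.1f" % lods.max())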

  20. Airborne mapping of earth-atmosphere exchange processes and remote sensing of surface characteristics over heterogeneous areas

    SciTech Connect

    Schuepp, P.H.; Ogunjemiyo, S.; Mitic, C.M.

    1996-10-01

    Given the spatial heterogeneity of much of the biosphere, and the difficulty of establishing representative observation points at the surface, airborne flux observations coupled with airborne and satellite-based remote sensing play an increasing role in the description of surface-atmosphere exchange processes. Our paper summarizes flux mapping procedures based on low-level airborne sampling by the Canadian Twin Otter research aircraft over three ecosystems with different degrees of spatial heterogeneity (grassland, mixed agricultural land and boreal forest). Observations show that the degree to which flux maps for heat, moisture and trace gases are correlated, among themselves and with maps of radiometrically observable surface features, cannot be generalized. This means that, wherever possible, algorithms for the prediction of surface-atmosphere exchange processes based on remote sensing observations should be developed for - and tested in - each structurally different ecosystem. The flexibility of aircraft deployment serves well both for gathering the data needed to develop such algorithms and for testing them at scales that integrate over an adequate sample of the various components that constitute a spatially heterogeneous ecosystem. 23 refs., 4 figs.

  1. The use of concept maps during knowledge elicitation in ontology development processes – the nutrigenomics use case

    PubMed Central

    Castro, Alexander Garcia; Rocca-Serra, Philippe; Stevens, Robert; Taylor, Chris; Nashar, Karim; Ragan, Mark A; Sansone, Susanna-Assunta

    2006-01-01

    Background Incorporation of ontologies into annotations has enabled 'semantic integration' of complex data, making explicit the knowledge within a certain field. One of the major bottlenecks in developing bio-ontologies is the lack of a unified methodology. Different methodologies have been proposed for different scenarios, but there is no agreed-upon standard methodology for building ontologies. The involvement of geographically distributed domain experts, the need for domain experts to lead the design process, the application of the ontologies and the life cycles of bio-ontologies are amongst the features not considered by previously proposed methodologies. Results Here, we present a methodology for developing ontologies within the biological domain. We describe our scenario, competency questions, results and milestones for each methodological stage. We introduce the use of concept maps during knowledge acquisition phases as a feasible transition between domain expert and knowledge engineer. Conclusion The contributions of this paper are the thorough description of the steps we suggest when building an ontology, example use of concept maps, consideration of applicability to the development of lower-level ontologies and application to decentralised environments. We have found that within our scenario conceptual maps played an important role in the development process. PMID:16725019

  2. Application of ERTS images and image processing to regional geologic problems and geologic mapping in northern Arizona

    NASA Technical Reports Server (NTRS)

    Goetz, A. F. H. (Principal Investigator); Billingsley, F. C.; Gillespie, A. R.; Abrams, M. J.; Squires, R. L.; Shoemaker, E. M.; Lucchitta, I.; Elston, D. P.

    1975-01-01

    The author has identified the following significant results. Computer image processing was shown to be both valuable and necessary in the extraction of the proper subset of the 200 million bits of information in an ERTS image to be applied to a specific problem. Spectral reflectivity information obtained from the four MSS bands can be correlated with in situ spectral reflectance measurements after path radiance effects have been removed and a proper normalization has been made. A detailed map of the major fault systems in a 90,000 sq km area in northern Arizona was compiled from high altitude photographs and pre-existing published and unpublished map data. With the use of ERTS images, three major fault systems, the Sinyala, Bright Angel, and Mesa Butte, were identified and their full extent measured. A byproduct of the regional studies was the identification of possible sources of shallow ground water, a scarce commodity in these regions.

  3. A Study of Variables That Affect Results in the ASTM D2274 Accelerated Stability Test. Part 1. Laboratory, Operator, and Process Variable Effects.

    DTIC Science & Technology

    1987-04-01

    Abbreviations from the report: … indicator adsorption; GC, gas chromatography; HPLC, high-pressure liquid chromatography; Hz, hertz; LCO, light-cycle oils; L/hr, liters per hour; µm, micrometers; mg, … APPENDIX A - QUESTIONNAIRE ON THE USE OF THE ASTM TEST FOR OXIDATION STABILITY OF DISTILLATE FUEL OIL (ACCELERATED

  4. LINEAR ACCELERATOR

    DOEpatents

    Colgate, S.A.

    1958-05-27

    An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperate to provide a stable and focused particle beam.

  5. Formation Mechanisms, Structure, and Properties of HVOF-Sprayed WC-CoCr Coatings: An Approach Toward Process Maps

    NASA Astrophysics Data System (ADS)

    Varis, T.; Suhonen, T.; Ghabchi, A.; Valarezo, A.; Sampath, S.; Liu, X.; Hannula, S.-P.

    2014-08-01

    Our study focuses on understanding the damage tolerance and performance reliability of WC-CoCr coatings. In this paper, the formation of HVOF-sprayed tungsten carbide-based cermet coatings is studied through an integrated strategy: first-order process maps are created by using online diagnostics to assess particle states in relation to process conditions. Coating properties such as hardness, wear resistance, elastic modulus, residual stress, and fracture toughness are discussed with the goal of establishing a linkage between properties and particle characteristics via second-order process maps. A strong influence of particle state on the mechanical properties, wear resistance, and residual stress state of the coating was observed. Within the processing window used (particle temperature ranged from 1687 to 1831 °C and particle velocity from 577 to 621 m/s), the coating hardness varied from 1021 to 1507 HV and the modulus from 257 to 322 GPa. The variation in coating mechanical state is suggested to relate to the microstructural changes arising from carbide dissolution, which affects the properties of the matrix and, on the other hand, the cohesive properties of the lamella. The complete tracking of the coating particle state and its linking to mechanical properties and residual stresses enables coating design with desired properties.

  6. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    NASA Astrophysics Data System (ADS)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
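    The core of the FFT approach can be sketched as follows (assumptions: a periodic grid so the covariance matrix is exactly circulant, an illustrative exponential covariance, and conjugate gradients in place of a direct solve; replacing numpy with cupy, which mirrors the numpy API, would move the FFTs onto a CUDA GPU):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        n = 128                                  # interpolation grid is n x n
        x = np.arange(n)
        dx = np.minimum(x, n - x)                # periodic (circulant) distances
        r = np.hypot(dx[:, None], dx[None, :])
        kernel_hat = np.fft.fft2(np.exp(-r / 10.0))   # covariance spectrum

        def cov_matvec(v):
            # Multiplying by the circulant covariance matrix is a convolution,
            # i.e. a pointwise product in the Fourier domain: O(n^2 log n)
            # instead of an O(n^4) dense matvec.
            V = np.fft.fft2(v.reshape(n, n))
            return np.real(np.fft.ifft2(kernel_hat * V)).ravel()

        C = LinearOperator((n * n, n * n), matvec=cov_matvec)
        obs = np.random.rand(n * n)              # stand-in for gridded residuals
        weights, info = cg(C, obs)               # Kriging system solved without
                                                 # ever forming the dense matrix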

  7. Accelerator simulation activities at the SSCL

    SciTech Connect

    Bourianoff, G.

    1992-11-01

    This paper summarizes the activities related to accelerator simulation at the SSC Laboratory during the recent past. Topics include operational simulations (injection, extraction, and correction); performance prediction for a specified lattice design, in particular the effect of higher-order multipoles on linear aperture and the effect of power supply ripple on emittance growth in the collider; and the development and application of advanced techniques for particle tracking, e.g., parallel processing and mapping techniques.

  8. Machine processing of Landsat MSS data and DMA topographic data for forest cover type mapping

    NASA Technical Reports Server (NTRS)

    Fleming, M. D.; Hoffer, R. M.

    1979-01-01

    A study is examined whose objective was to develop and test techniques that utilize both digital topographic data and Landsat MSS spectral data to map forest cover types. Emphasis is given to the topographic distribution model (TDM), which combines point-by-point information about forest species, elevation, slope, and aspect to quantitatively describe topographic positions. Results show the stratified random sample approach to be very effective for developing the TDM, while the use of topographic data significantly improved the overall classification accuracy of forest cover types as compared to using spectral data alone.
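    The combined-channel idea reduces to stacking topographic layers with the spectral bands and classifying each pixel; the sketch below uses a Gaussian maximum-likelihood rule, a common classifier of that era, though the record does not name the classifier, and all data are synthetic stand-ins (two cover types, 4 spectral + 3 topographic channels):

        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(3)
        # Training pixels for two cover types, 7 channels each
        X0 = rng.normal(0.0, 1.0, (100, 7))
        X1 = rng.normal(1.0, 1.0, (100, 7))

        # Fit one multivariate Gaussian per class from its training pixels
        classes = [multivariate_normal(Xc.mean(0), np.cov(Xc.T))
                   for Xc in (X0, X1)]

        pixels = rng.normal(0.5, 1.0, (500, 7))         # pixels to classify
        loglik = np.column_stack([c.logpdf(pixels) for c in classes])
        labels = loglik.argmax(axis=1)                  # ML class per pixel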

  9. Snowcover mapping by machine processing of Skylab and LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Bartolucci, L. A.; Hoffer, R. M.; Luther, S. G.

    1975-01-01

    Skylab and LANDSAT MSS data were analyzed using computer-aided analysis techniques (CAAT). Results indicated that the middle infrared wavelength bands of the Skylab S-192 scanner would allow effective discrimination between snowcover and water-droplet clouds, whereas the limited spectral response of the LANDSAT-1 and -2 scanners does not allow such spectral discrimination. Five spectral classes of snowcover were defined and mapped. These classes were found to be related to differences in the proportion of snow and forest cover in the individual resolution elements.

  10. Concept Mapping

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    Concept maps are graphical ways of working with ideas and presenting information. They reveal patterns and relationships and help students to clarify their thinking, and to process, organize and prioritize. Displaying information visually--in concept maps, word webs, or diagrams--stimulates creativity. Being able to think logically teaches…

  11. Breakthrough: Fermilab Accelerator Technology

    ScienceCinema

    None

    2016-07-12

    There are more than 30,000 particle accelerators in operation around the world. At Fermilab, scientists are collaborating with other laboratories and industry to optimize the manufacturing processes for a new type of powerful accelerator that uses superconducting niobium cavities. Experimenting with unique polishing materials, a Fermilab team has now developed an efficient and environmentally friendly way of creating cavities that can propel particles with more than 30 million volts per meter.

  12. Section 7.3. accelerator facilities. Technology review of accelerator facilities

    NASA Astrophysics Data System (ADS)

    McKeown, Joseph

    New initiatives in basic science, accelerator engineering and market development continue to stimulate applications of electron accelerators. Contributions from scientific experts in each of these segments have been assimilated to reflect the present status of accelerator technology in radiation processing.

  13. Acceleration switch

    DOEpatents

    Abbin, Jr., Joseph P.; Devaney, Howard F.; Hake, Lewis W.

    1982-08-17

    The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.

  14. Acceleration switch

    DOEpatents

    Abbin, J.P. Jr.; Devaney, H.F.; Hake, L.W.

    1979-08-29

    The disclosure relates to an improved integrating acceleration switch of the type having a mass suspended within a fluid filled chamber, with the motion of the mass initially opposed by a spring and subsequently not so opposed.

  15. ION ACCELERATOR

    DOEpatents

    Bell, J.S.

    1959-09-15

    An arrangement for the drift tubes in a linear accelerator is described whereby each drift tube acts to shield the particles from the influence of the accelerating field and focuses the particles passing through the tube. In one embodiment the drift tube is split longitudinally into quadrants supported along the axis of the accelerator by webs from a yoke, the quadrants, webs, and yoke being of magnetic material. A magnetic focusing action is produced by energizing a winding on each web to set up a magnetic field between adjacent quadrants. In the other embodiment the quadrants are electrically insulated from each other and have opposite-polarity voltages on adjacent quadrants to provide an electric focusing field for the particles, with the quadrants spaced sufficiently close to shield the particles within the tube from the accelerating electric field.

  16. Mapping of geomorphic processes on abandoned fields and cultivated land in small catchments in semi-arid Spain

    NASA Astrophysics Data System (ADS)

    Geißler, C.; Ries, J. B.; Marzolff, I.

    2009-04-01

    In semi-arid landscapes, vegetation succession on abandoned agricultural land is a long-lasting process due to the water deficit that prevails for most of the year. During this phase of succession, geomorphic processes such as the formation and development of rills and gullies lead to a more or less constant deterioration of the abandoned land. Soil degradation by flowing water also takes place on currently cultivated land and under quasi-natural vegetation. In small catchments such as gully catchments, the topography and the land cover (abandoned land, cultivated land, quasi-natural vegetation) are highly important factors in gully formation and soil degradation. Another important point is the distribution of different land cover units and therefore the connectivity of the catchment, as described by Bracken & Croke (2007). In this study, 11 catchments of single gullies have been mapped geomorphologically and compared to the rate of gully development derived from small-format aerial photography. It could be shown that there is a high variability of processes due to differences in topography and in the way the land is or has been cultivated. On abandoned land, geomorphic processes are highly active and enhance or even predetermine the direction of headcut movement. Another result is that geomorphological mapping of these gully catchments revealed interactions and dependencies of linear erosion features, such as the connection to the main drainage line, e.g. the gully. In the larger of the observed catchments (>5 ha) it became clear that some catchments have morphological features that tend to enhance connectivity (long rills, shallow drainage lines) and some have features that tend to restrict connectivity (terraces, dense vegetation). In "more connected" catchments the retreat rate of the headcut is generally higher. By the method of geomorphological mapping, valuable information about the soil degrading processes

  17. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40 A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate 1 h long-pulse ion beams at -1 MV, to be neutralised for plasma heating. Because vacuum sparking occurs frequently in the accelerators, snubbers are used to limit the fault arc current and so improve ITER safety. However, recent analyses of the reference design have raised concerns. A general nonlinear transformer theory is developed for the snubber, unifying the different design models of earlier snubbers with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  18. LINEAR ACCELERATOR

    DOEpatents

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external sunface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  19. Geomorphology, acoustic backscatter, and processes in Santa Monica Bay from multibeam mapping.

    PubMed

    Gardner, James V; Dartnell, Peter; Mayer, Larry A; Hughes Clarke, John E

    2003-01-01

    Santa Monica Bay was mapped in 1996 using a high-resolution multibeam system, providing the first substantial update of the submarine geomorphology since the initial compilation by Shepard and Emery [(1941) Geol. Soc. Amer. Spec. Paper 31]. The multibeam mapping generated not only high-resolution bathymetry, but also coregistered, calibrated acoustic backscatter at 95 kHz. The geomorphology has been subdivided into six provinces: shelf, marginal plateau, submarine canyon, basin slope, apron, and basin. The dimensions, gradients, and backscatter characteristics of each province are described and related to a combination of tectonics, climate, sea level, and sediment supply. Fluctuations of eustatic sea level have had a profound effect on the area: periodically eroding the surface of the Santa Monica plateau, extending the mouth of the Los Angeles River to various locations along the shelf break, and connecting submarine canyons to rivers. A wetter glacial climate undoubtedly generated more sediment for the rivers, which then transported the increased sediment load to the low-stand coastline and canyon heads. The trends of Santa Monica Canyon and several bathymetric highs suggest a complex tectonic stress field that has controlled the various segments. There is no geomorphic evidence to suggest that Redondo Canyon is fault controlled. The San Pedro fault can be extended more than 30 km to the northwest by the alignment of a series of bathymetric highs and abrupt changes in the direction of channel thalwegs.

  20. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2017-01-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting, may not be adequate. Multivariate geostatistical methods can be more effective, especially when secondary variables are available, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, inverse square distance weighting and ordinary co-Kriging (OCK) have been used and compared to each other. Elevation, slope, and aspect data for each station have also been taken into account as secondary variables, whose use reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.
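    A simplified sketch of the Kriging step (ordinary Kriging of station precipitation with an exponential covariance; ordinary co-Kriging extends the same linear system with cross-covariance terms for the secondary variables such as elevation, slope, and aspect; all numbers are synthetic stand-ins):

        import numpy as np

        rng = np.random.default_rng(1)
        pts = rng.uniform(0, 100, size=(30, 2))   # station coordinates (km)
        z = rng.uniform(40, 120, size=30)         # mean annual precipitation (cm)

        def cov(h, sill=1.0, rang=30.0):
            return sill * np.exp(-h / rang)       # exponential covariance model

        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        n = len(pts)
        A = np.ones((n + 1, n + 1))               # ordinary Kriging system with
        A[:n, :n] = cov(D)                        # a Lagrange multiplier row/col
        A[n, n] = 0.0

        def krige(target):
            d = np.linalg.norm(pts - target, axis=1)
            b = np.append(cov(d), 1.0)            # unbiasedness constraint
            w = np.linalg.solve(A, b)[:n]         # Kriging weights
            return w @ z

        estimate = krige(np.array([50.0, 50.0]))  # prediction at one grid node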