Sample records for accelerated processing map

  1. 77 FR 21991 - Federal Housing Administration (FHA): Multifamily Accelerated Processing (MAP)-Lender and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... Administration (FHA): Multifamily Accelerated Processing (MAP)--Lender and Underwriter Eligibility Criteria and... FOR FURTHER INFORMATION CONTACT: Terry W. Clark, Office of Multifamily Development, Office of... qualifications could underwrite loans involving more complex multifamily housing programs and transactions. II...

  2. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
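
    The record above offloads Matlab post-processing to an Nvidia GPU. As a rough Python illustration of the same CPU/GPU co-processing pattern, the sketch below band-pass filters a stack of simulated LSPS-like traces with CuPy when a GPU is available and falls back to NumPy otherwise; the workload, array sizes, and filter bounds are illustrative assumptions, not the authors' pipeline.

        import numpy as np

        try:
            import cupy as xp      # GPU path (assumes CUDA and the cupy package)
            ON_GPU = True
        except ImportError:
            xp = np                # CPU fallback keeps the sketch runnable
            ON_GPU = False

        def bandpass_stack(traces, low=5, high=50):
            """Crude FFT band-pass over the last axis of a (trials x samples) stack."""
            spec = xp.fft.rfft(traces, axis=-1)
            spec[..., :low] = 0    # zero frequency bins outside [low, high)
            spec[..., high:] = 0
            return xp.fft.irfft(spec, n=traces.shape[-1], axis=-1)

        traces = xp.asarray(np.random.randn(512, 4096))  # simulated recording stack
        filtered = bandpass_stack(traces)
        if ON_GPU:
            filtered = xp.asnumpy(filtered)              # copy result back to host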

  3. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  4. cudaMap: a GPU accelerated program for gene expression connectivity mapping

    PubMed Central

    2013-01-01

    Background Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. Results cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Conclusion Emerging ‘omics’ technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap
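
    Since the abstract explains the workload (scoring one gene signature against many reference expression profiles), a toy version helps fix ideas. The sketch below computes a signed-rank connection score in plain NumPy; the signature, the crude normalization, and the random "profiles" are illustrative assumptions, and cudaMap's real scoring and significance testing follow sscMap and run on the GPU.

        import numpy as np

        def connection_score(ref_profile, signature):
            """Toy signed-rank connection score for one reference profile.
            signature maps gene index -> +1 (up-regulated) or -1 (down)."""
            ranks = ref_profile.argsort().argsort() + 1      # 1 = most down-regulated
            n = len(ref_profile)
            raw = sum(sign * ranks[gene] for gene, sign in signature.items())
            max_raw = sum(range(n, n - len(signature), -1))  # crude normalizer
            return raw / max_raw

        rng = np.random.default_rng(0)
        profiles = rng.standard_normal((1000, 20000))  # 1000 reference profiles
        sig = {12: +1, 404: -1, 999: +1}               # hypothetical 3-gene signature
        scores = [connection_score(p, sig) for p in profiles]

    The GPU win in cudaMap comes from evaluating each profile (and each permutation used for significance testing) in its own thread; the loop above is exactly the part that parallelizes.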

  5. cudaMap: a GPU accelerated program for gene expression connectivity mapping.

    PubMed

    McArt, Darragh G; Bankhead, Peter; Dunne, Philip D; Salto-Tellez, Manuel; Hamilton, Peter; Zhang, Shu-Dong

    2013-10-11

    Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.

  6. Asymmetric neighborhood functions accelerate ordering process of self-organizing maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ota, Kaiichiro; Aoki, Takaaki; Kurata, Koji

    2011-02-15

    A self-organizing map (SOM) algorithm can generate a topographic map from a high-dimensional stimulus space to a low-dimensional array of units. Because a topographic map preserves neighborhood relationships between the stimuli, the SOM can be applied to certain types of information processing such as data visualization. During the learning process, however, topological defects frequently emerge in the map. The presence of defects tends to drastically slow down the formation of a globally ordered topographic map. To remove such topological defects, it has been reported that an asymmetric neighborhood function is effective, but only in the simple case of mapping one-dimensional stimuli to a chain of units. In this paper, we demonstrate that even when high-dimensional stimuli are used, the asymmetric neighborhood function is effective for both artificial and real-world data. Our results suggest that applying the asymmetric neighborhood function to the SOM algorithm improves the reliability of the algorithm. In addition, it enables processing of complicated, high-dimensional data by using this algorithm.
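
    A compact way to see the idea is a 1-D SOM whose neighborhood function is wider on one side of the winning unit. In the sketch below the asymmetry is a simple one-sided width factor, an illustrative stand-in for the paper's asymmetric neighborhood function; all constants are assumptions.

        import numpy as np

        def train_som(data, n_units=50, iters=20000, asym=2.0, seed=0):
            """1-D SOM; asym > 1 widens the Gaussian neighborhood on one side."""
            rng = np.random.default_rng(seed)
            w = rng.standard_normal((n_units, data.shape[1]))
            pos = np.arange(n_units, dtype=float)
            for t in range(iters):
                x = data[rng.integers(len(data))]
                win = np.argmin(((w - x) ** 2).sum(axis=1))    # best-matching unit
                d = pos - win
                sigma = 10.0 * np.exp(-3.0 * t / iters)        # shrinking radius
                width = np.where(d >= 0, sigma * asym, sigma)  # asymmetric widths
                h = np.exp(-(d / width) ** 2)
                lr = 0.5 * np.exp(-3.0 * t / iters)            # decaying learning rate
                w += lr * h[:, None] * (x - w)
            return w

        weights = train_som(np.random.default_rng(1).random((1000, 3)))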

  7. BowMapCL: Burrows-Wheeler Mapping on Multiple Heterogeneous Accelerators.

    PubMed

    Nogueira, David; Tomas, Pedro; Roma, Nuno

    2016-01-01

    The computational demand of exact-search procedures has pressed the exploitation of parallel processing accelerators to reduce the execution time of many applications. However, this often imposes strict restrictions in terms of the problem size and implementation efforts, mainly due to their possibly distinct architectures. To circumvent this limitation, a new exact-search alignment tool (BowMapCL) based on the Burrows-Wheeler Transform and FM-Index is presented. Contrasting to other alternatives, BowMapCL is based on a unified implementation using OpenCL, allowing the exploitation of multiple and possibly different devices (e.g., NVIDIA, AMD/ATI, and Intel GPUs/APUs). Furthermore, to efficiently exploit such heterogeneous architectures, BowMapCL incorporates several techniques to promote its performance and scalability, including multiple buffering, work-queue task-distribution, and dynamic load-balancing, together with index partitioning, bit-encoding, and sampling. When compared with state-of-the-art tools, the attained results showed that BowMapCL (using a single GPU) is 2 × to 7.5 × faster than mainstream multi-threaded CPU BWT-based aligners, like Bowtie, BWA, and SOAP2; and up to 4 × faster than the best performing state-of-the-art GPU implementations (namely, SOAP3 and HPG-BWT). When multiple and completely distinct devices are considered, BowMapCL efficiently scales the offered throughput, ensuring a convenient load-balance of the involved processing in the several distinct devices.
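
    The core of any BWT/FM-index aligner is backward search, which counts exact matches without scanning the text. A minimal pure-Python version is sketched below (naive O(n^2) index construction; BowMapCL's OpenCL kernels, index partitioning, and bit-encoding are far more elaborate).

        def bwt_index(text):
            """Build the BWT, C array, and occurrence table for an FM-index."""
            text += "$"
            rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
            bwt = "".join(r[-1] for r in rotations)
            C, total = {}, 0
            for c in sorted(set(bwt)):        # C[c] = count of characters < c
                C[c] = total
                total += bwt.count(c)
            occ = {c: [0] for c in C}         # occ[c][i] = count of c in bwt[:i]
            for ch in bwt:
                for c in occ:
                    occ[c].append(occ[c][-1] + (ch == c))
            return C, occ

        def backward_search(pattern, C, occ):
            """Return the number of exact occurrences of pattern in the text."""
            lo, hi = 0, len(next(iter(occ.values()))) - 1
            for ch in reversed(pattern):
                if ch not in C:
                    return 0
                lo = C[ch] + occ[ch][lo]
                hi = C[ch] + occ[ch][hi]
                if lo >= hi:
                    return 0
            return hi - lo

        C, occ = bwt_index("ACGTACGTAC")
        print(backward_search("ACG", C, occ))   # -> 2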

  8. A hybrid short read mapping accelerator

    PubMed Central

    2013-01-01

    Background The rapid growth of short read datasets poses a new challenge to the short read mapping problem in terms of sensitivity and execution speed. Existing methods often use a restrictive error model for computing the alignments to improve speed, whereas more flexible error models are generally too slow for large-scale applications. A number of short read mapping software tools have been proposed. However, designs based on hardware are relatively rare. Field programmable gate arrays (FPGAs) have been successfully used in a number of specific application areas, such as the DSP and communications domains due to their outstanding parallel data processing capabilities, making them a competitive platform to solve problems that are “inherently parallel”. Results We present a hybrid system for short read mapping utilizing both FPGA-based hardware and CPU-based software. The computation intensive alignment and the seed generation operations are mapped onto an FPGA. We present a computationally efficient, parallel block-wise alignment structure (Align Core) to approximate the conventional dynamic programming algorithm. The performance is compared to the multi-threaded CPU-based GASSST and BWA software implementations. For single-end alignment, our hybrid system achieves faster processing speed than GASSST (with a similar sensitivity) and BWA (with a higher sensitivity); for pair-end alignment, our design achieves a slightly worse sensitivity than that of BWA but has a higher processing speed. Conclusions This paper shows that our hybrid system can effectively accelerate the mapping of short reads to a reference genome based on the seed-and-extend approach. The performance comparison to the GASSST and BWA software implementations under different conditions shows that our hybrid design achieves a high degree of sensitivity and requires less overall execution time with only modest FPGA resource utilization. Our hybrid system design also shows that the performance
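
    The seed-and-extend structure the paper maps onto hardware is easy to show in miniature: exact seed lookup in an index, then a cheap extension check. The sketch below uses Hamming-distance extension as an illustrative stand-in for the FPGA Align Core's banded dynamic programming.

        def seed_and_extend(read, ref, seed_len=8, max_mismatches=2):
            """Minimal seed-and-extend mapper over a string reference."""
            index = {}                         # seed table: k-mer -> positions
            for i in range(len(ref) - seed_len + 1):
                index.setdefault(ref[i:i + seed_len], []).append(i)
            hits = []
            for pos in index.get(read[:seed_len], []):
                window = ref[pos:pos + len(read)]
                if len(window) < len(read):
                    continue                   # read runs off the reference end
                mism = sum(a != b for a, b in zip(read, window))
                if mism <= max_mismatches:
                    hits.append((pos, mism))
            return hits

        ref = "ACGTACGTTACGGACGTACGTTAGG"
        print(seed_and_extend("ACGTACGTTA", ref))   # -> [(0, 0), (13, 0)]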

  9. Direct and accelerated parameter mapping using the unscented Kalman filter.

    PubMed

    Zhao, Li; Feng, Xue; Meyer, Craig H

    2016-05-01

    To accelerate parameter mapping using a new paradigm that combines image reconstruction and model regression as a parameter state-tracking problem. In T2 mapping, the T2 map is first encoded in parameter space by multi-TE measurements and then encoded by Fourier transformation with readout/phase encoding gradients. Using a state transition function and a measurement function, the unscented Kalman filter can describe T2 mapping as a dynamic system and directly estimate the T2 map from the k-space data. The proposed method was validated with a numerical brain phantom and volunteer experiments with a multiple-contrast spin echo sequence. Its performance was compared with a conjugate-gradient nonlinear inversion method at undersampling factors of 2 to 8. An accelerated pulse sequence was developed based on this method to achieve prospective undersampling. Compared with the nonlinear inversion reconstruction, the proposed method had higher precision, improved structural similarity and reduced normalized root mean squared error, with acceleration factors up to 8 in numerical phantom and volunteer studies. This work describes a new perspective on parameter mapping by state tracking. The unscented Kalman filter provides a highly accelerated and efficient paradigm for T2 mapping. © 2015 Wiley Periodicals, Inc.
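
    The paper's contribution is estimating the T2 map directly from k-space with an unscented Kalman filter; reproducing the UKF here would run long, so the sketch below shows only the underlying multi-TE signal model, fitted per voxel with an ordinary nonlinear least-squares routine instead. Echo times and noise level are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def t2_model(te, s0, t2):
            """Multi-echo spin-echo magnitude signal: S(TE) = S0 * exp(-TE/T2)."""
            return s0 * np.exp(-te / t2)

        te = np.array([10.0, 30.0, 50.0, 70.0, 90.0, 110.0])   # echo times (ms)
        true_s0, true_t2 = 100.0, 80.0
        rng = np.random.default_rng(0)
        signal = t2_model(te, true_s0, true_t2) + rng.normal(0, 1.0, te.size)

        (est_s0, est_t2), _ = curve_fit(t2_model, te, signal, p0=(signal[0], 50.0))
        print(f"estimated T2 = {est_t2:.1f} ms (true {true_t2} ms)")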

  10. GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries

    NASA Astrophysics Data System (ADS)

    Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh

    2018-04-01

    Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of the defect signatures such as porosity from x-ray CT images. Generating the depth map along a particular direction for the given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems using a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators as well as handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely-tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (> 60 frames-per-second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
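
    For an orthographic ray direction, the depth-map step reduces to a per-ray scan of the voxelized part, which is exactly what the paper parallelizes across GPU threads. A NumPy sketch over a toy voxel volume (all geometry and spacing assumed):

        import numpy as np

        def depth_maps(vox, dz=1.0):
            """Cast z-axis rays through a voxel grid; return entry depth and
            total material path length per (x, y) ray."""
            occupied = vox > 0
            hit = occupied.any(axis=2)
            entry = occupied.argmax(axis=2).astype(float) * dz  # first filled voxel
            entry[~hit] = np.nan                                # ray misses the part
            thickness = occupied.sum(axis=2) * dz               # material traversed
            return entry, thickness

        vox = np.zeros((64, 64, 64))
        vox[16:48, 16:48, 20:40] = 1     # a solid block standing in for CAD geometry
        entry, thickness = depth_maps(vox)
        print(entry[32, 32], thickness[32, 32])   # -> 20.0 20.0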

  11. Interstellar Mapping and Acceleration Probe (IMAP)

    NASA Astrophysics Data System (ADS)

    Schwadron, Nathan

    2016-04-01

    Our piece of cosmic real-estate, the heliosphere, is the domain of all human existence - an astrophysical case-history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and in concert with Voyager 1 and Voyager 2 is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (~5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP with 100 times the combined resolution and sensitivity of IBEX and INCA will discover the substructure of the IBEX ribbon and will reveal in unprecedented resolution global maps of our heliosphere. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. The "A" in IMAP refers to acceleration of energetic particles. With its combination of highly sensitive pickup and suprathermal ion sensors, IMAP will provide the species and spectral coverage as well as unprecedented temporal resolution to associate emerging suprathermal tails with interplanetary structures and discover underlying physical acceleration processes. These key measurements will provide what has been a critical missing piece of suprathermal seed particles in our understanding of particle acceleration to high

  12. One map policy (OMP) implementation strategy to accelerate mapping of regional spatial planning (RTRW) in Indonesia

    NASA Astrophysics Data System (ADS)

    Hasyim, Fuad; Subagio, Habib; Darmawan, Mulyanto

    2016-06-01

    The preparation of spatial planning documents requires basic geospatial information and thematic accuracy. Recently these issues have become important because spatial planning maps are an inseparable attachment of the regional act draft on spatial planning (PERDA). The geospatial information needed for the preparation of spatial planning maps can be divided into two major groups: (i) basic geospatial information (IGD), consisting of Indonesia topographic maps (RBI), coastal and marine environmental maps (LPI), and the geodetic control network, and (ii) thematic geospatial information (IGT). Currently, most local governments in Indonesia have not finished their regulation drafts on spatial planning due to several constraints, including technical aspects. Some constraints in mapping for spatial planning are the availability of large-scale basic geospatial information, the availability of mapping guidelines, and human resources. The ideal conditions to be achieved for spatial planning maps are: (i) the availability of updated geospatial information at the scales needed for spatial planning maps, (ii) mapping guidelines for spatial planning to support local governments in completing their PERDA, and (iii) capacity building of local government human resources to complete spatial planning maps. The OMP strategies formulated to achieve these conditions are: (i) accelerating provision of IGD at scales of 1:50,000, 1:25,000 and 1:5,000, (ii) accelerating the mapping and integration of thematic geospatial information (IGT) through stocktaking of availability and mapping guidelines, (iii) the development of mapping guidelines and dissemination of spatial utilization, and (iv) training of human resources in mapping technology.

  13. Symplectic maps and chromatic optics in particle accelerators

    DOE PAGES

    Cai, Yunhai

    2015-07-06

    Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles, and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to third order in the momentum deviation.
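
    A minimal concrete example of a symplectic one-turn map with a single sextupolar nonlinearity is the area-preserving Henon map, a standard toy in accelerator optics; the paper's treatment adds octupoles, decapoles, and explicit momentum dependence, so take this only as the flavor of the method. The tune value is an arbitrary assumption.

        import numpy as np

        def henon_turn(x, p, nu=0.205):
            """One turn: rotation by phase advance mu plus a thin sextupole kick.
            The Jacobian determinant is exactly 1, so the map is symplectic."""
            mu = 2 * np.pi * nu
            kick = p + x * x                    # thin-lens sextupole kick
            return (np.cos(mu) * x + np.sin(mu) * kick,
                    -np.sin(mu) * x + np.cos(mu) * kick)

        x, p = 0.1, 0.0
        orbit = []
        for _ in range(1000):                   # track for 1000 turns
            x, p = henon_turn(x, p)
            orbit.append((x, p))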

  14. Standard map in magnetized relativistic systems: fixed points and regular acceleration.

    PubMed

    de Sousa, M C; Steffens, F M; Pakter, R; Rizzato, F B

    2010-08-01

    We investigate the concept of a standard map for the interaction of relativistic particles and electrostatic waves of arbitrary amplitudes, under the action of external magnetic fields. The map is adequate for physical settings where waves and particles interact impulsively, and allows a series of analytical results to be obtained exactly. Unlike the traditional form of the standard map, the present map is nonlinear in the wave amplitude and displays a series of peculiar properties. Among these properties we discuss the relation between fixed points of the map and accelerator regimes.
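
    For reference, the traditional (non-relativistic) form the authors generalize is the Chirikov standard map, iterated below; the relativistic, magnetized map of the paper replaces this linear rotation and is nonlinear in the wave amplitude.

        import numpy as np

        def standard_map(theta, p, K, n_steps):
            """Chirikov standard map: p' = p + K sin(theta), theta' = theta + p'."""
            out = np.empty((n_steps, 2))
            for i in range(n_steps):
                p = (p + K * np.sin(theta)) % (2 * np.pi)
                theta = (theta + p) % (2 * np.pi)
                out[i] = theta, p
            return out

        orbit = standard_map(theta=1.0, p=0.5, K=0.9, n_steps=5000)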

  15. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    DOE PAGES

    Clayton, C. E.; Adli, E.; Allen, J.; ...

    2016-08-16

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not yet been measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m−1 to a similar degree of accuracy. Lastly, these results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity.

  16. Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity

    PubMed Central

    Clayton, C. E.; Adli, E.; Allen, J.; An, W.; Clarke, C. I.; Corde, S.; Frederico, J.; Gessner, S.; Green, S. Z.; Hogan, M. J.; Joshi, C.; Litos, M.; Lu, W.; Marsh, K. A.; Mori, W. B.; Vafaei-Najafabadi, N.; Xu, X.; Yakimenko, V.

    2016-01-01

    The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not yet been measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m−1 to a similar degree of accuracy. These results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity. PMID:27527569

  17. Mapping and energization in the magnetotail. II - Particle acceleration

    NASA Technical Reports Server (NTRS)

    Kaufmann, Richard L.; Larson, Douglas J.; Lu, Chen

    1993-01-01

    Mapping with the Tsyganenko (1989) or T89 magnetosphere model has been examined previously. In the present work, an attempt is made to evaluate quantitatively what the selection of T89 implies for steady-state particle energization. The Heppner and Maynard (1987) or HM87 electric field model is mapped from the ionosphere to the equatorial plane, and the electric currents associated with T89 are evaluated. Consideration is also given to the nature of the acceleration that occurs when cross-tail current is suddenly diverted to the ionosphere.

  18. SU-F-T-475: An Evaluation of the Overlap Between the Acceptance Testing and Commissioning Processes for Conventional Medical Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, A; Rangaraj, D; Perez-Andujar, A

    2016-06-15

    Purpose: This work's objective is to determine the overlap of processes, in terms of sub-processes and time, between acceptance testing and commissioning of a conventional medical linear accelerator and to evaluate the time saved by consolidating the two processes. Method: A process map for acceptance testing for medical linear accelerators was created from vendor documentation (Varian and Elekta). Using AAPM TG-106 and in-house commissioning procedures, a process map was created for commissioning of said accelerators. The time to complete each sub-process in each process map was evaluated. Redundancies in the processes were found and the time spent on each was calculated. Results: Mechanical testing significantly overlaps between the two processes - redundant work here amounts to 9.5 hours. Many non-scanning beam dosimetry tests overlap, resulting in another 6 hours of overlap. Beam scanning overlaps somewhat - acceptance tests include evaluating PDDs and multiple profiles for only one field size, while commissioning includes beam scanning at multiple field sizes and profile depths. This overlap results in another 6 hours of rework. Absolute dosimetry, field outputs, and end-to-end tests are not done at all in acceptance testing. Finally, all imaging tests done in acceptance are repeated in commissioning, resulting in about 8 hours of rework. The total time overlap between the two processes is about 30 hours. Conclusion: The process mapping done in this study shows that there are no tests done in acceptance testing that are not also recommended for commissioning. This results in about 30 hours of redundant work when preparing a conventional linear accelerator for clinical use. Considering these findings in the context of the 5000 linacs in the United States, consolidating acceptance testing and commissioning would have allowed for the treatment of an additional 25000 patients using no additional resources.
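
    The abstract's bookkeeping can be reproduced directly from its own numbers; the per-patient figure in the last line is implied by, not stated in, the abstract.

        overlap_hours = 9.5 + 6 + 6 + 8     # mechanical, dosimetry, scanning, imaging
        print(overlap_hours)                # 29.5 -> "about 30 hours" per machine
        fleet_hours = 5000 * overlap_hours  # ~5000 linacs in the United States
        print(fleet_hours / 25000)          # ~5.9 machine-hours per added patient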

  19. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
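
    The heart of the proposal is regularizing along the parametric dimension rather than in image space. The sketch below is a deliberately simplified analogue: a quadratic smoothness (Tikhonov) penalty on a single voxel's signal evolution, standing in for the paper's l1 sparsity penalty inside a full compressed-sensing reconstruction; the curve shape, noise level, and lambda are assumptions.

        import numpy as np

        n = 64
        t = np.linspace(0, 1, n)
        truth = np.exp(-3 * t)                    # smooth signal evolution
        y = truth + np.random.default_rng(0).normal(0, 0.05, n)

        D = np.diff(np.eye(n), n=2, axis=0)       # second-difference operator
        lam = 10.0
        # minimize ||x - y||^2 + lam * ||D x||^2  =>  (I + lam D'D) x = y
        x = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
        print(float(np.abs(x - truth).max()))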

  20. Enhancements to Demilitarization Process Maps Program (ProMap)

    DTIC Science & Technology

    2016-10-14

    ... map tool, ProMap, was improved by implementing new features and sharing data with the MIDAS and AMDIT databases. Specifically, process efficiency was... improved by 1) providing access to APE information contained in the AMDIT database directly from inside ProMap when constructing a process map, 2) ... what equipment can be efficiently used to demil a particular munition. Associated with this task was the upgrade of the AMDIT database so that...

  21. Jupiter's Auroras Acceleration Processes

    NASA Image and Video Library

    2017-09-06

    This image, created with data from Juno's Ultraviolet Imaging Spectrometer (UVS), marks the path of Juno's readings of Jupiter's auroras, highlighting the electron measurements that show the discovery of the so-called discrete auroral acceleration processes indicated by the "inverted Vs" in the lower panel (Figure 1). This signature points to powerful magnetic-field-aligned electric potentials that accelerate electrons toward the atmosphere to energies that are far greater than what drives the most intense aurora at Earth. Scientists are looking into why the same processes are not the main factor in Jupiter's most powerful auroras. https://photojournal.jpl.nasa.gov/catalog/PIA21937

  22. Preliminary map of peak horizontal ground acceleration for the Hanshin-Awaji earthquake of January 17, 1995, Japan - Description of Mapped Data Sets

    USGS Publications Warehouse

    Borcherdt, R.D.; Mark, R.K.

    1995-01-01

    The Hanshin-Awaji earthquake (also known as the Hyogo-ken Nanbu and the Great Hanshin earthquake) provided an unprecedented set of measurements of strong ground shaking. The measurements constitute the most comprehensive set of strong-motion recordings yet obtained for sites underlain by soft soil deposits of Holocene age within a few kilometers of the crustal rupture zone. The recordings, obtained on or near many important structures, provide an important new empirical data set for evaluating input ground motion levels and site amplification factors for codes and site-specific design procedures worldwide. This report describes the data used to prepare a preliminary map summarizing the strong motion data in relation to seismicity and underlying geology (Wentworth, Borcherdt, and Mark, 1995; Figure 1, hereafter referred to as Figure 1/I). The map shows station locations, peak acceleration values, and generalized acceleration contours superimposed on pertinent seismicity and the geologic map of Japan. The map (Figure 1/I) indicates a zone of high acceleration with ground motions throughout the zone greater than 400 gal and locally greater than 800 gal. This zone encompasses the area of most intense damage mapped as JMA intensity level 7, which extends through Kobe City. The zone of most intense damage is parallel to, but displaced slightly from, the surface projection of the crustal rupture zone implied by aftershock locations. The zone is underlain by soft-soil deposits of Holocene age.

  23. The status and road map of Turkish Accelerator Center (TAC)

    NASA Astrophysics Data System (ADS)

    Yavaş, Ö.

    2012-02-01

    The Turkish Accelerator Center (TAC) project is supported by the State Planning Organization (SPO) of Turkey and coordinated by Ankara University. After completion of the Feasibility Report (FR) in 2000 and the Conceptual Design Report (CDR) in 2005, the third phase of the project started in 2006 as an inter-university project including ten Turkish universities with the support of SPO. The third phase of the project has two main scientific goals: to prepare the Technical Design Report (TDR) of TAC and, as a first step, to establish an Infrared Free Electron Laser (IR FEL) facility named the Turkish Accelerator and Radiation Laboratory at Ankara (TARLA). The facility is planned to be completed in 2015 and will be based on a 15-40 MeV superconducting linac. In this paper, the main aims, national and regional importance, main parts, main parameters, status and road map of the Turkish Accelerator Center are presented.

  24. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single molecule restriction maps from image datasets of immobilized, restriction digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  25. Detecting chaos in particle accelerators through the frequency map analysis method.

    PubMed

    Papaphilippou, Yannis

    2014-06-01

    The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role during the accelerator design phase and also during operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. The recent developments in the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated in particle tracking simulations as well as in experimental data.
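
    Frequency map analysis boils down to estimating each orbit's tune on two time windows and flagging orbits whose tune drifts. A bare-bones version for the kicked-rotor (standard) map is sketched below; the FFT-peak tune estimate is deliberately coarse, where NAFF-style refinement would be used in practice.

        import numpy as np

        def tune(x):
            """Coarse tune estimate: location of the FFT peak, in cycles/turn."""
            spec = np.abs(np.fft.rfft(x - x.mean()))
            return np.argmax(spec) / len(x)

        def tune_diffusion(K, theta=1.0, p=0.5, turns=4096):
            xs = np.empty(turns)
            for i in range(turns):                 # track the kicked rotor
                p = p + K * np.sin(theta)
                theta = theta + p
                xs[i] = np.cos(theta)              # bounded observable of the orbit
            half = turns // 2
            return abs(tune(xs[:half]) - tune(xs[half:]))

        for K in (0.5, 1.2):                       # weakly vs. strongly kicked
            print(K, tune_diffusion(K))            # larger tune drift signals chaos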

  26. 75 FR 62410 - Notice of Proposed Information Collection: Comment Request; The Multifamily Accelerated...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-08

    ... Information Collection: Comment Request; The Multifamily Accelerated Processing Guide AGENCY: Office of the... also lists the following information: Title of Proposal: Multifamily Accelerated Processing Guide (MAP...-0541. Description of the need for the information and proposed use: Multifamily Accelerated Processing...

  27. Diffusive Shock Acceleration and Reconnection Acceleration Processes

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; Hunana, P.; Mostafavi, P.; Le Roux, J. A.; Li, Gang; Webb, G. M.; Khabarova, O.; Cummings, A.; Stone, E.; Decker, R.

    2015-12-01

    Shock waves, as shown by simulations and observations, can generate high levels of downstream vortical turbulence, including magnetic islands. We consider a combination of diffusive shock acceleration (DSA) and downstream magnetic-island-reconnection-related processes as an energization mechanism for charged particles. Observations of electron and ion distributions downstream of interplanetary shocks and the heliospheric termination shock (HTS) are frequently inconsistent with the predictions of classical DSA. We utilize a recently developed transport theory for charged particles propagating diffusively in a turbulent region filled with contracting and reconnecting plasmoids and small-scale current sheets. Particle energization associated with the anti-reconnection electric field, a consequence of magnetic island merging, and magnetic island contraction, are considered. For the former only, we find that (i) the spectrum is a hard power law in particle speed, and (ii) the downstream solution is constant. For downstream plasmoid contraction only, (i) the accelerated spectrum is a hard power law in particle speed; (ii) the particle intensity for a given energy peaks downstream of the shock, and the distance to the peak location increases with increasing particle energy, and (iii) the particle intensity amplification for a particular particle energy, f(x, c/c_0)/f(0, c/c_0), is not 1, as predicted by DSA, but increases with increasing particle energy. The general solution combines both the reconnection-induced electric field and plasmoid contraction. The energetic particle intensity profile observed by Voyager 2 downstream of the HTS appears to support a particle acceleration mechanism that combines both DSA and magnetic-island-reconnection-related processes.

  28. A mini-photofragment translational spectrometer with ion velocity map imaging using low voltage acceleration

    NASA Astrophysics Data System (ADS)

    Qi, Wenke; Jiang, Pan; Lin, Dan; Chi, Xiaoping; Cheng, Min; Du, Yikui; Zhu, Qihe

    2018-01-01

    A mini time-sliced ion velocity map imaging photofragment translational spectrometer using low-voltage acceleration has been constructed. The innovation of this apparatus is the use of a relatively low voltage (30-150 V) in place of the traditional high voltage (650-4000 V) to accelerate and focus the fragment ions. The overall length of the flight path is merely 12 cm. The instrument has many advantages, such as a compact structure, less interference, and ease of operation and control. Low-voltage acceleration gives a longer turn-around time to the photofragment ions, forming a thicker Newton sphere, which provides sufficient time for slicing. Ion trajectory simulation has been performed to determine the structure dimensions and the operating voltages. The photodissociation and multiphoton ionization of O2 at 224.999 nm is used to calibrate the ion images and examine the overall performance of the new spectrometer. The velocity resolution (Δv/v) of this spectrometer from O2 photodissociation is about 0.8%, which is better than most previous results using high acceleration voltages. For the case of CF3I dissociation at 277.38 nm, many CF3 vibrational states have been resolved, and the anisotropy parameter has been measured. The application of low-voltage acceleration has shown its advantages for ion velocity map imaging (VMI) apparatus. The miniaturization of VMI instruments can be realized while preserving high resolution.

  29. Experiments to Distribute Map Generalization Processes

    NASA Astrophysics Data System (ADS)

    Berli, Justin; Touya, Guillaume; Lokhat, Imran; Regnauld, Nicolas

    2018-05-01

    Automatic map generalization requires the use of computationally intensive processes often unable to deal with large datasets. Distributing the generalization process is the only way to make it scalable and usable in practice. But map generalization is a highly contextual process, and the surroundings of a map feature need to be known to generalize the feature, which is a problem as distribution might partition the dataset and parallelize the processing of each part. This paper proposes experiments to evaluate the past propositions to distribute map generalization, and to identify the main remaining issues. The past propositions to distribute map generalization are first discussed, and then the experiment hypotheses and apparatus are described. The experiments confirmed that regular partitioning was the quickest strategy, but also the least effective in taking context into account. The geographical partitioning, though less effective for now, is quite promising regarding the quality of the results as it better integrates the geographical context.
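
    Regular partitioning is simple to express: cut the dataset into cells and hand each cell to a worker, accepting that features near cell borders lose their context (the weakness the experiments confirm). A sketch with an assumed, trivial "generalization" operator:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def generalize(points):
            """Stand-in generalization: snap coordinates to a coarser grid."""
            return np.round(points, 2)

        def partition(points, n_cells=4):
            """Regular partitioning into vertical strips along x."""
            xs = points[:, 0]
            edges = np.linspace(xs.min(), xs.max(), n_cells + 1)
            cell = np.clip(np.digitize(xs, edges) - 1, 0, n_cells - 1)
            return [points[cell == i] for i in range(n_cells)]

        if __name__ == "__main__":
            pts = np.random.default_rng(0).random((100_000, 2))
            with ProcessPoolExecutor() as pool:
                parts = list(pool.map(generalize, partition(pts)))
            result = np.vstack(parts)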

  30. Transient aerodynamic characteristics of vans during the accelerated overtaking process

    NASA Astrophysics Data System (ADS)

    Liu, Li-ning; Wang, Xing-shen; Du, Guang-sheng; Liu, Zheng-gang; Lei, Li

    2018-04-01

    This paper studies the influence of the accelerated overtaking process on vehicles' transient aerodynamic characteristics through 3-D numerical simulations with dynamic meshes and the sliding interface technique. Numerical accuracy is verified by experimental results. The aerodynamic characteristics of vehicles in the uniform overtaking process and the accelerated overtaking process are compared. It is shown that the speed variation of the overtaking van influences the aerodynamic characteristics of both vans, with greater influence on the overtaken van than on the overtaking van. The simulations of three different accelerated overtaking processes show that the greater the acceleration of the overtaking van, the larger the aerodynamic coefficients of the overtaken van. When the acceleration of the overtaking van increases by 1 m/s^2, the maximum drag force, side force and yawing moment coefficients of the overtaken van all increase by more than 6%, seriously affecting the power performance and stability of the vehicles. The analysis of the pressure fields under different accelerated conditions reveals the cause of the variations in the aerodynamic characteristics of the vehicles.

  31. Monitoring oil displacement processes with k-t accelerated spin echo SPI.

    PubMed

    Li, Ming; Xiao, Dan; Romero-Zerón, Laura; Balcom, Bruce J

    2016-03-01

    Magnetic resonance imaging (MRI) is a robust tool to monitor oil displacement processes in porous media. Conventional MRI measurement times can be lengthy, which hinders monitoring time-dependent displacements. Knowledge of the oil and water microscopic distribution is important because their pore scale behavior reflects the oil trapping mechanisms. The oil and water pore scale distribution is reflected in the magnetic resonance T2 signal lifetime distribution. In this work, a pure phase-encoding MRI technique, spin echo SPI (SE-SPI), was employed to monitor oil displacement during water flooding and polymer flooding. A k-t acceleration method, with low-rank matrix completion, was employed to improve the temporal resolution of the SE-SPI MRI measurements. Comparison to conventional SE-SPI T2 mapping measurements revealed that the k-t accelerated measurement was more sensitive and provided higher-quality results. It was demonstrated that the k-t acceleration decreased the average measurement time from 66.7 to 20.3 min in this work. A perfluorinated oil, containing no 1H, and H2O brine were employed to distinguish oil and water phases in model flooding experiments. High-quality 1D water saturation profiles were acquired from the k-t accelerated SE-SPI measurements. Spatially and temporally resolved T2 distributions were extracted from the profile data. The shift in the 1H T2 distribution of water in the pore space to longer lifetimes during water flooding and polymer flooding is consistent with increased water content in the pore space. Copyright © 2015 John Wiley & Sons, Ltd.
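
    The k-t acceleration rests on the (space x time) data matrix being approximately low rank. A generic singular-value-thresholding sketch of that completion step follows; it is not the authors' reconstruction code, and the rank, sampling fraction, and threshold are assumptions.

        import numpy as np

        def complete(M, mask, tau=5.0, iters=200):
            """Fill unsampled entries of M by iterative singular-value shrinkage,
            re-imposing the measured samples after every step."""
            X = np.where(mask, M, 0.0)
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                X = (U * np.maximum(s - tau, 0.0)) @ Vt
                X[mask] = M[mask]
            return X

        rng = np.random.default_rng(0)
        data = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 40))  # rank 3
        mask = rng.random(data.shape) < 0.3        # keep 30% of the k-t samples
        recovered = complete(data, mask)
        print(float(np.abs(recovered - data).max()))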

  32. The Interstellar Mapping and Acceleration Probe - A Mission to Discover the Origin of Particle Acceleration and its Fundamental Connection to the Global Interstellar Interaction

    NASA Astrophysics Data System (ADS)

    Schwadron, N.

    2017-12-01

    Our piece of cosmic real-estate, the heliosphere, is the domain of all human existence - an astrophysical case-history of the successful evolution of life in a habitable system. The Interstellar Boundary Explorer (IBEX) was the first mission to explore the global heliosphere and in concert with Voyager 1 and Voyager 2 is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (~5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The global structure of the heliosphere is highly complex and influenced by competing factors ranging from the local interstellar magnetic field, suprathermal populations both within and beyond the heliopause, and the detailed flow properties of the LISM. Global heliospheric structure and microphysics in turn influence the acceleration of energetic particles and create feedbacks that modify the interstellar interaction as a whole. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics and probe the acceleration of suprathermal and higher energy particles at a time when the space environment is rapidly evolving. IMAP ultimately connects the acceleration processes observed directly at 1 AU with unprecedented sensitivity and temporal resolution with the global structure of our heliosphere. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. IMAP, like ACE before it, will be a keystone of the Heliophysics System Observatory by providing comprehensive energetic particle, pickup ion, suprathermal ion, neutral atom, solar wind, solar wind heavy ion, and magnetic field observations to diagnose

  33. Advanced MAP for Real-Time Process Control

    NASA Astrophysics Data System (ADS)

    Shiobara, Yasuhisa; Matsudaira, Takayuki; Sashida, Yoshio; Chikuma, Makoto

    1987-10-01

    MAP, a communications protocol for factory automation proposed by General Motors [1], has been accepted by users throughout the world and is rapidly becoming a user standard. In fact, it is now a LAN standard for factory automation. MAP is intended to interconnect different devices, such as computers and programmable devices, made by different manufacturers, enabling them to exchange information. It is based on the OSI intercomputer communications protocol standard under development by the ISO. With progress and standardization, MAP is being investigated for application to process control fields other than factory automation [2]. The transmission response time of the network system and centralized management of data exchanged with various devices for distributed control are important in the case of real-time process control with programmable controllers, computers, and instruments connected to a LAN system. MAP/EPA and MINI MAP aim at reduced overhead in protocol processing and enhanced transmission response. If applied to real-time process control, a protocol based on point-to-point and request-response transactions limits throughput and transmission response. This paper describes an advanced MAP LAN system applied to real-time process control by adding a new data transmission control that performs multicasting communication voluntarily and periodically in the priority order of data to be exchanged.

  34. Accelerating parallel transmit array B1 mapping in high field MRI with slice undersampling and interpolation by kriging.

    PubMed

    Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans

    2014-08-01

    Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, the knowledge of the RF complex-valued B1 transmit-sensitivities of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows mapping a complete set of eight volumetric field maps of the human head in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on Fourier transform as well as to a B1-maps interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster with a loss of accuracy limited potentially to about 5%.
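
    Ordinary kriging itself is compact: solve one linear system built from an assumed covariance model, then apply the weights to the sampled values. The sketch below kriges scattered complex-valued samples onto a grid; the Gaussian covariance and its length scale are assumptions (in practice the covariance model would be fitted to the data, and the sequence kriges undersampled slices rather than random points).

        import numpy as np

        def krige(xy, values, query, length=0.2):
            """Ordinary kriging with a Gaussian covariance model."""
            def cov(a, b):
                d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2 * length ** 2))
            n = len(xy)
            K = np.ones((n + 1, n + 1))      # last row/col: unbiasedness constraint
            K[:n, :n] = cov(xy, xy)
            K[n, n] = 0.0
            rhs = np.ones((n + 1, len(query)))
            rhs[:n] = cov(xy, query)
            weights = np.linalg.solve(K, rhs)[:n]
            return weights.T @ values        # works for complex-valued samples

        rng = np.random.default_rng(0)
        xy = rng.random((50, 2))                       # sampled positions
        b1 = np.exp(1j * 2 * np.pi * xy[:, 0])         # toy complex B1 samples
        grid = np.stack(np.meshgrid(np.linspace(0, 1, 16),
                                    np.linspace(0, 1, 16)), -1).reshape(-1, 2)
        b1_interp = krige(xy, b1, grid)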

  35. Challenges and Opportunities: One Stop Processing of Automatic Large-Scale Base Map Production Using Airborne LIDAR Data Within GIS Environment. Case Study: Makassar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Widyaningrum, E.; Gorte, B. G. H.

    2017-05-01

    LiDAR data acquisition is recognized as one of the fastest solutions to provide base data for large-scale topographic base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme to accelerate large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressively advancing technology, Geographic Information Systems (GIS) open possibilities for automatic geospatial data processing and analyses. Considering further needs for spatial data sharing and integration, one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed in terms of its completeness, correctness, quality, and the confusion matrix.

  36. Suprathermal and Solar Energetic Particles - Key questions for the Interstellar Mapping and Acceleration Probe (IMAP)

    NASA Astrophysics Data System (ADS)

    Desai, M. I.; McComas, D. J.; Christian, E. R.; Mewaldt, R. A.; Schwadron, N.

    2014-12-01

    Solar energetic particles or SEPs from suprathermal (few keV) up to relativistic (~few GeV) speeds are accelerated near the Sun in at least two ways, namely, (1) by magnetic reconnection-driven processes during solar flares resulting in impulsive SEPs and (2) at fast coronal-mass-ejection-driven shock waves that produce large gradual SEP events. Large gradual SEP events are of particular interest because the accompanying high-energy (>10s MeV) protons pose serious radiation threats to human explorers living and working outside low-Earth orbit and to technological assets such as communications and scientific satellites in space. However, a complete understanding of SEP events has eluded us primarily because their properties, as observed near Earth orbit, are smeared due to mixing and contributions from many important physical effects. Thus, despite being studied for decades, several key questions regarding SEP events remain unanswered. These include (1) What are the contributions of co-temporal flares, jets, and CME shocks to impulsive and gradual SEP events?; (2) Do flares contribute to large SEP events directly by providing high-energy particles and/or by providing the suprathermal seed population?; (3) What are the roles of ambient turbulence/waves and self-generated waves?; (4) What are the origins of the source populations and how do their temporal and spatial variations affect SEP properties?; and (5) How do diffusion and scattering during acceleration and propagation through the interplanetary medium affect SEP properties observed out in the heliosphere? This talk describes how during the next decade, inner heliospheric measurements from the Solar Probe Plus and Solar Orbiter in conjunction with high sensitivity measurements from the Interstellar Mapping and Acceleration Probe will provide the ground-truth for various models of particle acceleration and transport and address these questions.

  37. Speech processing using maximum likelihood continuity mapping

    DOEpatents

    Hogden, John E.

    2000-01-01

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  38. Speech processing using maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.E.

    Speech processing is obtained that, given a probabilistic mapping between static speech sounds and pseudo-articulator positions, allows sequences of speech sounds to be mapped to smooth sequences of pseudo-articulator positions. In addition, a method for learning a probabilistic mapping between static speech sounds and pseudo-articulator position is described. The method for learning the mapping between static speech sounds and pseudo-articulator position uses a set of training data composed only of speech sounds. The said speech processing can be applied to various speech analysis tasks, including speech recognition, speaker recognition, speech coding, speech synthesis, and voice mimicry.

  39. Accelerated time-resolved three-dimensional MR velocity mapping of blood flow patterns in the aorta using SENSE and k-t BLAST.

    PubMed

    Stadlbauer, Andreas; van der Riet, Wilma; Crelier, Gerard; Salomonowitz, Erich

    2010-07-01

    To assess the feasibility and potential limitations of the acceleration techniques SENSE and k-t BLAST for time-resolved three-dimensional (3D) velocity mapping of aortic blood flow, and to quantify differences in peak velocity versus heart phase curves. Time-resolved 3D blood flow patterns were investigated in eleven volunteers and two patients suffering from aortic diseases with accelerated PC-MR sequences in combination with either SENSE (R=2) or k-t BLAST (6-fold). Both sequences showed similar data acquisition times and hence acceleration efficiency. Flow-field streamlines were calculated and visualized using the GTFlow software tool in order to reconstruct 3D aortic blood flow patterns. Differences between the peak velocities from single-slice PC-MRI experiments using SENSE 2 and k-t BLAST 6 were calculated for the whole cardiac cycle and averaged over all volunteers. Reconstruction of 3D flow patterns in volunteers revealed attenuation of blood flow dynamics for k-t BLAST 6 compared to SENSE 2, in terms of 3D streamlines showing fewer and less distinct vortices and a reduction in peak velocity, which is caused by temporal blurring. Pathologic blood flow patterns in patients with aortic diseases were detected solely by time-resolved 3D MR velocity mapping in combination with SENSE. For volunteers, we found a broadening and flattening of the peak velocity versus heart phase diagram between the two acceleration techniques, which is evidence of the temporal blurring of the k-t BLAST approach. We demonstrated the feasibility of SENSE and detected potential limitations of k-t BLAST when used for time-resolved 3D velocity mapping. The effects of higher k-t BLAST acceleration factors have to be considered for application in 3D velocity mapping. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  20. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with its GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.

  1. A General Accelerated Degradation Model Based on the Wiener Process.

    PubMed

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool for evaluating material service reliability and safety by analyzing performance degradation data. Traditional stochastic process models address mainly linear or linearized degradation paths and are not applicable where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed for accelerated degradation data analysis. The general model accounts for both unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. Statistical inference procedures are given to estimate the unknown parameters under both constant-stress and step-stress ADT. A simulation example and two real applications demonstrate that the proposed method yields reliable lifetime evaluation results compared with the existing linear and time-scale-transformation Wiener processes in both linear and nonlinear ADT analyses.

  2. A General Accelerated Degradation Model Based on the Wiener Process

    PubMed Central

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient tool for evaluating material service reliability and safety by analyzing performance degradation data. Traditional stochastic process models address mainly linear or linearized degradation paths and are not applicable where the degradation process cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed for accelerated degradation data analysis. The general model accounts for both unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. Statistical inference procedures are given to estimate the unknown parameters under both constant-stress and step-stress ADT. A simulation example and two real applications demonstrate that the proposed method yields reliable lifetime evaluation results compared with the existing linear and time-scale-transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107
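
    The abstract gives no formulas, but a common general form of such a model writes the degradation as X(t) = mu*Lambda(t) + sigma*B(Lambda(t)) with a nonlinear time scale Lambda(t) = t^q, reducing to the linear Wiener process at q = 1. The Python sketch below simulates such paths and recovers the time-scale exponent by profile maximum likelihood; it is a minimal illustration under these assumed forms (unit-to-unit random effects and acceleration variables, which the paper's model includes, are omitted), not the authors' implementation.

      import numpy as np

      def simulate_paths(mu, sigma, q, t, n_units, rng):
          """Simulate Wiener degradation paths X(t) = mu*t^q + sigma*B(t^q)."""
          lam = t ** q                          # transformed (nonlinear) time scale
          dlam = np.diff(lam, prepend=0.0)
          incs = rng.normal(mu * dlam, sigma * np.sqrt(dlam), size=(n_units, t.size))
          return np.cumsum(incs, axis=1)

      def profile_loglik(q, t, X):
          """Profile log-likelihood of q; mu and sigma^2 have closed-form MLEs."""
          lam = t ** q
          dlam = np.diff(lam, prepend=0.0)
          dX = np.diff(X, axis=1, prepend=0.0)   # independent Gaussian increments
          mu_hat = dX.sum() / (X.shape[0] * dlam.sum())
          s2_hat = np.mean((dX - mu_hat * dlam) ** 2 / dlam)
          n = dX.size
          return -0.5 * n * np.log(2 * np.pi * s2_hat) \
                 - 0.5 * X.shape[0] * np.log(dlam).sum() - n / 2

      rng = np.random.default_rng(0)
      t = np.linspace(0.1, 10, 50)
      X = simulate_paths(mu=1.2, sigma=0.3, q=1.5, t=t, n_units=20, rng=rng)
      qs = np.linspace(0.5, 2.5, 81)             # grid search over the exponent
      q_hat = qs[np.argmax([profile_loglik(q, t, X) for q in qs])]
      print("estimated time-scale exponent q:", q_hat)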

  3. Symplectic Propagation of the Map, Tangent Map and Tangent Map Derivative through Quadrupole and Combined-Function Dipole Magnets without Truncation

    NASA Astrophysics Data System (ADS)

    Bruhwiler, D. L.; Cary, J. R.; Shasharina, S.

    1998-04-01

    The MAPA accelerator modeling code symplectically advances the full nonlinear map, tangent map and tangent map derivative through all accelerator elements. The tangent map and its derivative are nonlinear generalizations of Brown's first- and second-order matrices (K. Brown, SLAC-75, Rev. 4 (1982), pp. 107-118), and they are valid even near the edges of the dynamic aperture, which may be beyond the radius of convergence of a truncated Taylor series. In order to avoid truncation of the map and its derivatives, the Hamiltonian is split into pieces for which the map can be obtained analytically. Yoshida's method (H. Yoshida, Phys. Lett. A 150 (1990), pp. 262-268) is then used to obtain a symplectic approximation to the map, while the tangent map and its derivative are composed at each step so that they are obtained with equal accuracy. We discuss our splitting of the quadrupole and combined-function dipole Hamiltonians and show that typically only a few steps are required for a high-energy accelerator.
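
    As a concrete illustration of the splitting idea (not the MAPA code's actual splitting, and without the tangent-map machinery), the Python sketch below advances a single transverse phase-space coordinate through a thick quadrupole by composing exactly solvable drift and kick maps with Yoshida's fourth-order coefficients; every step of the composition is symplectic by construction.

      import numpy as np

      def drift(z, L):
          """Exact map of H1 = p^2/2 over length L (drift)."""
          x, p = z
          return np.array([x + L * p, p])

      def kick(z, L, k):
          """Exact map of H2 = k*x^2/2 over length L (quadrupole kick)."""
          x, p = z
          return np.array([x, p - L * k * x])

      def leapfrog(z, L, k):
          """Second-order symplectic step: half drift, full kick, half drift."""
          return drift(kick(drift(z, L / 2), L, k), L / 2)

      def yoshida4(z, L, k):
          """Yoshida's fourth-order composition of three leapfrog steps."""
          w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
          w0 = 1.0 - 2.0 * w1                    # = -2^(1/3) * w1
          for w in (w1, w0, w1):
              z = leapfrog(z, w * L, k)
          return z

      # One focusing quadrupole of strength k over length L, split into n steps.
      z, k, L, n = np.array([1e-3, 0.0]), 2.0, 0.5, 4
      for _ in range(n):
          z = yoshida4(z, L / n, k)
      # Exact solution for comparison: x(L) = x0 * cos(sqrt(k)*L)
      print(z[0], 1e-3 * np.cos(np.sqrt(k) * L))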

  4. The use of process mapping in healthcare quality improvement projects.

    PubMed

    Antonacci, Grazia; Reed, Julie E; Lennox, Laura; Barlow, James

    2018-05-01

    Introduction Process mapping provides insight into the systems and processes in which improvement interventions are introduced and is seen as useful in healthcare quality improvement projects, yet there is little empirical evidence on its use in healthcare practice. This study advances understanding of the benefits and success factors of process mapping within quality improvement projects. Methods Eight quality improvement projects were purposively selected from different healthcare settings within the UK's National Health Service. Data were gathered from multiple sources, including interviews exploring participants' experience of using process mapping in their projects and their perceptions of the benefits and challenges related to its use. These were analysed using inductive analysis. Results Participants reported eight key benefits of process mapping (gathering a shared understanding of the reality; identifying improvement opportunities; engaging stakeholders in the project; defining the project's objectives; monitoring project progress; learning; increased empathy; simplicity of the method) and five factors related to successful process mapping exercises (simple and appropriate visual representation; information gathered from multiple stakeholders; the facilitator's experience and soft skills; basic training; iterative use of process mapping throughout the project). Conclusions The findings highlight the benefits and versatility of process mapping and provide practical suggestions to improve its use in practice.

  5. A Hybrid CPU-GPU Accelerated Framework for Fast Mapping of High-Resolution Human Brain Connectome

    PubMed Central

    Ren, Ling; Xu, Mo; Xie, Teng; Gong, Gaolang; Xu, Ningyi; Yang, Huazhong; He, Yong

    2013-01-01

    Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). A very large amount of brain imaging data has now been collected, and high-resolution connectome research places very high demands on computational capability. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's correlation coefficient between all pairs of time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graph properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process took 80-150 minutes per network, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power-law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and has the potential to accelerate the mapping of the human brain connectome in normal and disease states. PMID:23675425
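
    As an illustration of the core computation (the correlation-then-threshold step, not the authors' hybrid CPU-GPU pipeline), the following Python sketch builds an unweighted network at a target sparsity. At the paper's 58 k-voxel scale the correlation matrix no longer fits comfortably in memory and must be computed in blocks, which is exactly where GPU offloading pays off; all sizes here are toy values.

      import numpy as np

      def correlation_network(ts, sparsity):
          """Build an unweighted, undirected network from voxel time series.

          ts: (n_voxels, n_timepoints) array of gray-matter voxel time series.
          sparsity: fraction of strongest correlations kept as edges.
          At real scale (~58k voxels), compute r in blocks or on a GPU.
          """
          n = ts.shape[0]
          r = np.corrcoef(ts)                   # Pearson correlation, n x n
          iu = np.triu_indices(n, k=1)          # unique voxel pairs
          n_edges = int(sparsity * iu[0].size)  # edge count at target sparsity
          thr = np.partition(r[iu], -n_edges)[-n_edges]   # correlation cutoff
          adj = (r >= thr)                      # symmetric boolean adjacency
          np.fill_diagonal(adj, False)
          return adj

      rng = np.random.default_rng(0)
      ts = rng.standard_normal((500, 200))      # toy data; real case is ~58k voxels
      adj = correlation_network(ts, sparsity=0.0017)   # 0.17% as in the abstract
      print(adj.sum() // 2, "edges")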

  6. Granger-causality maps of diffusion processes.

    PubMed

    Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A

    2016-02-01

    Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free, it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
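
    The local linear treatment of the paper fits linear models in neighborhoods of state space; the Python sketch below shows only the underlying global linear Granger causality (two nested autoregressions compared by residual variance), which is the quantity being localized. The lag order, toy data, and log variance-ratio statistic are illustrative choices.

      import numpy as np

      def granger_xy(x, y, p=2):
          """Linear Granger causality from x to y with lag order p.

          Compares residual variance of y regressed on its own past
          (restricted) vs. its own past plus x's past (full); returns the
          log variance ratio, > 0 when x helps predict y.
          """
          n = len(y)
          Y = y[p:]
          own = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
          other = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
          ones = np.ones((n - p, 1))
          Xr = np.hstack([ones, own])            # restricted model
          Xf = np.hstack([ones, own, other])     # full model
          rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
          rss_f = np.sum((Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]) ** 2)
          return np.log(rss_r / rss_f)

      rng = np.random.default_rng(1)
      x = rng.standard_normal(2000)
      y = np.zeros(2000)
      for t in range(2, 2000):                  # y driven by lagged x
          y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()
      print("x->y:", granger_xy(x, y), " y->x:", granger_xy(y, x))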

  7. Beyond data collection in digital mapping: interpretation, sketching and thought process elements in geological map making

    NASA Astrophysics Data System (ADS)

    Watkins, Hannah; Bond, Clare; Butler, Rob

    2016-04-01

    Geological mapping techniques have advanced significantly in recent years, from paper fieldslips to Toughbook, smartphone and tablet mapping; but how do the methods used to create a geological map affect the thought processes that result in the final map interpretation? Geological maps have many key roles in the geosciences, including understanding geological processes and geometries in 3D, interpreting geological histories and understanding stratigraphic relationships in 2D and 3D. As mapping technology has advanced, the way in which we produce geological maps has also changed. Traditional geological mapping is undertaken using paper fieldslips, pencils and compass clinometers, and the map interpretation evolves through time as data are collected. This interpretive process is often supported by a field notebook recording observations, ideas and alternative geological models explored with the use of sketches and evolutionary diagrams. In combination, the field map and notebook can be used to challenge the map interpretation and consider its uncertainties. These uncertainties, and the balance of data to interpretation, are often lost in the creation of published 'fair' copy geological maps. The advent of Toughbooks, smartphones and tablets has changed the process of map creation. Digital data collection, particularly through the use of inbuilt gyrometers in phones and tablets, has turned smartphones into geological mapping tools that can be used to collect large amounts of geological data quickly. With GPS functionality these data are also geospatially located, assuming good GPS connectivity, and can be linked to georeferenced in-field photography. In contrast, line drawing, for example for lithological boundary interpretation and sketching

  8. In-Storage Embedded Accelerator for Sparse Pattern Processing

    DTIC Science & Technology

    2016-08-13

    Only fragments of this report's abstract survived extraction (the original interleaves figure captions and the author list with the text). Authors: Sang-Woo Jun, Huy T. Nguyen, Vijay Gadepally, and Arvind (MIT Lincoln Laboratory and MIT). Recoverable fragments: the system approaches the performance of a RAM disk; since the configuration offloads most of the processing onto the FPGA, the host software consists of only two threads; BlueDBM's efficiency comes from an in-store processing paradigm that uses the FPGA. (Figure 13 caption: Documents Processed vs. CPU Threads.)

  9. Susceptibility of materials processing experiments to low-level accelerations

    NASA Technical Reports Server (NTRS)

    Naumann, R. J.

    1981-01-01

    The types of materials processing experiments being considered for Shuttle can be grouped into four categories: (1) contained solidification experiments; (2) quasi-containerless experiments; (3) containerless experiments; and (4) fluids experiments. Low-level steady accelerations, compensated and uncompensated transient accelerations, and rotation-induced flows are factors that must be considered in the acceleration environment of a space vehicle; their importance depends on the type of experiment being performed. Some control of these factors may be exercised through the location and orientation of the experiment relative to the Shuttle and through the orbital vehicle attitude chosen for the mission. The various residual accelerations can have serious consequences for the control of an experiment and must be factored into the design and operation of the apparatus.

  10. Radiative processes of uniformly accelerated entangled atoms

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.

    2016-05-01

    We study radiative processes of uniformly accelerated entangled atoms interacting with an electromagnetic field prepared in the Minkowski vacuum state. We discuss the structure of the rate of variation of the atomic energy for two atoms traveling along different hyperbolic world lines. We identify the contributions of vacuum fluctuations and radiation reaction to the generation of entanglement as well as to the decay of entangled states. Our results resemble the situation in which two inertial atoms are coupled individually to two spatially separated cavities at different temperatures. In addition, for equal accelerations we find that the maximally entangled antisymmetric Bell state is a decoherence-free state.

  11. Managing mapping data using commercial data base management software.

    USGS Publications Warehouse

    Elassal, A.A.

    1985-01-01

    Electronic computers are involved in almost every aspect of the map making process. This involvement has become so thorough that it is practically impossible to find a recently developed process or device in the mapping field which does not employ digital processing in some form or another. This trend, which has been evolving over two decades, is accelerated by significant improvements in the capability, reliability, and cost-effectiveness of electronic devices. Computerized mapping processes and devices share a common need for machine-readable data, and integrating groups of these components into automated mapping systems requires careful planning for the data flow among them. Exploring the utility of commercial data base management software to assist in this task is the subject of this paper. -Author

  12. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Bjorkman, Gerry; Kooney, Alex; Russell, Carolyn

    2003-01-01

    The weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds, and the sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule, in conjunction with process control, ensures a consistent and predictable weld performance.

  13. Emitting electron spectra and acceleration processes in the jet of PKS 0447-439

    NASA Astrophysics Data System (ADS)

    Zhou, Yao; Yan, Dahai; Dai, Benzhong; Zhang, Li

    2014-02-01

    We investigate the electron energy distributions (EEDs) and the corresponding acceleration processes in the jet of PKS 0447-439, and estimate its redshift, by modeling its observed spectral energy distribution (SED) in the framework of a one-zone synchrotron self-Compton (SSC) model. Three EEDs formed in different acceleration scenarios are assumed: the power-law with exponential cut-off (PLC) EED (shock-acceleration scenario, or the case of the EED approaching equilibrium in the stochastic-acceleration scenario), the log-parabolic (LP) EED (stochastic-acceleration scenario with acceleration dominating), and the broken power-law (BPL) EED (no-acceleration scenario). The corresponding fluxes of both synchrotron and SSC emission are then calculated. The model is applied to PKS 0447-439, and the modeled SEDs are compared to the observed SED of this object using the Markov Chain Monte Carlo method. The results show that the PLC model fails to fit the observed SED well, while the LP and BPL models give comparably good fits. The results indicate that it is possible that a stochastic acceleration process acts in the emitting region of PKS 0447-439 with the EED far from equilibrium (acceleration dominating), or that no acceleration process operates in the emitting region. The redshift of PKS 0447-439 is also estimated in our fitting: z = 0.16 ± 0.05 for the LP case and z = 0.17 ± 0.04 for the BPL case.
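
    For reference, standard parameterizations of the three assumed EEDs (the abstract itself gives no formulas, and the authors' exact conventions may differ) are:

      N_{\rm PLC}(\gamma) = N_0\,\gamma^{-\alpha}\exp(-\gamma/\gamma_{\rm c}),
      N_{\rm LP}(\gamma)  = N_0\,(\gamma/\gamma_0)^{-\alpha-\beta\log(\gamma/\gamma_0)},
      N_{\rm BPL}(\gamma) = N_0\,\gamma^{-p_1}\ \text{for}\ \gamma\le\gamma_{\rm b},\qquad
                            N_0\,\gamma_{\rm b}^{\,p_2-p_1}\,\gamma^{-p_2}\ \text{for}\ \gamma>\gamma_{\rm b}.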

  14. Self-organizing map (SOM) of space acceleration measurement system (SAMS) data.

    PubMed

    Sinha, A; Smith, A D

    1999-01-01

    In this paper, space acceleration measurement system (SAMS) data have been classified using self-organizing map (SOM) networks without any supervision; i.e., no a priori knowledge is assumed about input patterns belonging to a certain class. Input patterns are created on the basis of power spectral densities of SAMS data. Results for SAMS data from the STS-50 and STS-57 missions are presented. The following issues are discussed in detail: the impact of the number of neurons, global ordering of SOM weight vectors, the effectiveness of a SOM in data classification, and the effects of shifting time windows in the generation of input patterns. The concept of a 'cascade of SOM networks' is also developed and tested. It has been found that a SOM network can successfully classify SAMS data obtained during the STS-50 and STS-57 missions.

  15. Self-organizing map (SOM) of space acceleration measurement system (SAMS) data

    NASA Technical Reports Server (NTRS)

    Sinha, A.; Smith, A. D.

    1999-01-01

    In this paper, space acceleration measurement system (SAMS) data have been classified using self-organizing map (SOM) networks without any supervision; i.e., no a priori knowledge is assumed about input patterns belonging to a certain class. Input patterns are created on the basis of power spectral densities of SAMS data. Results for SAMS data from the STS-50 and STS-57 missions are presented. The following issues are discussed in detail: the impact of the number of neurons, global ordering of SOM weight vectors, the effectiveness of a SOM in data classification, and the effects of shifting time windows in the generation of input patterns. The concept of a 'cascade of SOM networks' is also developed and tested. It has been found that a SOM network can successfully classify SAMS data obtained during the STS-50 and STS-57 missions.
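
    A minimal one-dimensional Kohonen SOM, sketched in Python below, illustrates the unsupervised classification described above: weight vectors are pulled toward each input within a shrinking lattice neighborhood, and globally ordered prototypes emerge. The feature vectors stand in for PSD-based input patterns; the mission data, window lengths, and network sizes of the paper are not reproduced.

      import numpy as np

      def train_som(data, n_units=16, epochs=40, rng=None):
          """1-D Kohonen self-organizing map.

          data: (n_samples, n_features) inputs, e.g. log power spectral
          densities. Returns (n_units, n_features) weight vectors, ordered
          along the 1-D lattice after training.
          """
          rng = rng or np.random.default_rng(0)
          W = rng.standard_normal((n_units, data.shape[1]))
          grid = np.arange(n_units)
          for e in range(epochs):
              eta = 0.5 * (1 - e / epochs)                     # learning rate
              sig = max(n_units / 2 * (1 - e / epochs), 0.5)   # radius shrinks
              for v in rng.permutation(data):
                  bmu = np.argmin(np.linalg.norm(W - v, axis=1))   # best match
                  h = np.exp(-(grid - bmu) ** 2 / (2 * sig ** 2))  # neighborhood
                  W += eta * h[:, None] * (v - W)
          return W

      # Toy 'PSD' patterns from two vibration regimes; the SOM separates them.
      rng = np.random.default_rng(2)
      a = rng.normal(0.0, 0.1, (50, 32)); a[:, 5] += 1.0    # spectral peak at bin 5
      b = rng.normal(0.0, 0.1, (50, 32)); b[:, 20] += 1.0   # spectral peak at bin 20
      W = train_som(np.vstack([a, b]))
      print(np.argmin(np.linalg.norm(W - a[0], axis=1)),
            np.argmin(np.linalg.norm(W - b[0], axis=1)))    # distinct winning units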

  16. Secondary electron emission from plasma processed accelerating cavity grade niobium

    NASA Astrophysics Data System (ADS)

    Basovic, Milos

    Advances in particle accelerator technology have enabled numerous fundamental discoveries in 20th century physics, and extensive interdisciplinary research has supported further development of accelerator technology in the effort to reach each new energy frontier. Accelerating cavities, which are used to transfer energy to accelerated charged particles, have been one of the main focuses of research and development in the particle accelerator field. Over the last fifty years, in the race to break energy barriers, there has been constant improvement of the maximum stable accelerating field achieved in accelerating cavities. Every increase in the maximum attainable accelerating field allowed higher-energy upgrades of existing accelerators and more compact designs of new accelerators, and each new and improved technology was faced with newly emerging limiting factors. At standard high accelerating gradients of more than 25 MV/m, free electrons inside the cavities are accelerated by the field, gaining enough energy to produce more electrons in their interactions with the walls of the cavity. The electron production is exponential, and the electron energy transfer to the walls of a cavity can trigger detrimental processes that limit the performance of the cavity. The root cause of the free electron number gain is a phenomenon called Secondary Electron Emission (SEE). Even though the phenomenon has been known and studied for over a century, there are still no effective means of controlling it. The ratio between the electrons emitted from the surface and the impacting electrons is defined as the Secondary Electron Yield (SEY); a SEY larger than 1 designates an increase in the total number of electrons. In the design of accelerator cavities, the goal is to reduce the SEY as far as possible using any form of surface manipulation. In this dissertation, an experimental setup was developed and used to study the SEY of various sample surfaces that were treated

  17. Secondary Electron Emission from Plasma Processed Accelerating Cavity Grade Niobium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basovic, Milos

    Advances in particle accelerator technology have enabled numerous fundamental discoveries in 20th century physics, and extensive interdisciplinary research has supported further development of accelerator technology in the effort to reach each new energy frontier. Accelerating cavities, which are used to transfer energy to accelerated charged particles, have been one of the main focuses of research and development in the particle accelerator field. Over the last fifty years, in the race to break energy barriers, there has been constant improvement of the maximum stable accelerating field achieved in accelerating cavities. Every increase in the maximum attainable accelerating field allowed higher-energy upgrades of existing accelerators and more compact designs of new accelerators, and each new and improved technology was faced with newly emerging limiting factors. At standard high accelerating gradients of more than 25 MV/m, free electrons inside the cavities are accelerated by the field, gaining enough energy to produce more electrons in their interactions with the walls of the cavity. The electron production is exponential, and the electron energy transfer to the walls of a cavity can trigger detrimental processes that limit the performance of the cavity. The root cause of the free electron number gain is a phenomenon called Secondary Electron Emission (SEE). Even though the phenomenon has been known and studied for over a century, there are still no effective means of controlling it. The ratio between the electrons emitted from the surface and the impacting electrons is defined as the Secondary Electron Yield (SEY); a SEY larger than 1 designates an increase in the total number of electrons. In the design of accelerator cavities, the goal is to reduce the SEY as far as possible using any form of surface manipulation. In this dissertation, an experimental setup was developed and used to study the SEY of various sample surfaces that were

  18. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Kooney, Alex; Bjorkman, Gerry; Russell, Carolyn; Smelser, Jerry (Technical Monitor)

    2002-01-01

    In FSW (friction stir welding), the weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule in conjunction with process control ensures a consistent and predictable weld performance.

  19. Particle Acceleration and Heating Processes at the Dayside Magnetopause

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Lapenta, G.; Richard, R. L.; El-Alaoui, M.; Walker, R. J.; Schriver, D.

    2017-12-01

    It is well established that electrons and ions are accelerated and heated during magnetic reconnection at the dayside magnetopause. However, a detailed description of the actual physical mechanisms driving these processes, and of where they operate, is still incomplete. Many basic mechanisms are known to accelerate particles, including resonant wave-particle interactions as well as stochastic, Fermi, and betatron acceleration; moreover, acceleration and heating processes can occur over different scales. We have carried out kinetic simulations to investigate the mechanisms by which electrons and ions are accelerated and heated at the dayside magnetopause. The simulation model uses the results of global magnetohydrodynamic (MHD) simulations to set the initial state and the evolving boundary conditions of fully kinetic implicit particle-in-cell (iPic3D) simulations for different solar wind and interplanetary magnetic field conditions. This approach allows us to include large domains in both space and energy. In particular, some of these regional simulations include both the magnetopause and the bow shock in the kinetic domain, encompassing a range of particle energies from a few eV in the solar wind to keV in the magnetospheric boundary layer. We analyze the results of the iPic3D simulations by discussing wave spectra and particle velocity distribution functions observed in the different regions of the simulation domain, and by using large-scale kinetic (LSK) computations to follow the particles' time histories. We discuss the relevance of our results by comparing them with local observations by the MMS spacecraft.

  20. Interstellar Mapping and Acceleration Probe (IMAP)

    NASA Astrophysics Data System (ADS)

    Schwadron, N. A.; Opher, M.; Kasper, J.; Mewaldt, R.; Moebius, E.; Spence, H. E.; Zurbuchen, T. H.

    2016-11-01

    Our piece of cosmic real estate, the heliosphere, is the domain of all human existence - an astrophysical case history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX is the first mission to explore the global heliosphere and in concert with Voyager 1 and Voyager 2 is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (˜5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. This paper summarizes the next quantum leap enabled by IMAP that will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP with 100 times the combined resolution and sensitivity of IBEX and INCA will discover the substructure of the IBEX ribbon and will reveal, with unprecedented resolution, global maps of our heliosphere. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. Voyager 2 moves outward in the same region of sky covered by a portion of the IBEX ribbon. Voyager 2’s plasma measurements will create singular opportunities for discovery in the context of IMAP's global measurements. IMAP, like ACE before, will be a keystone of the Heliophysics System Observatory by providing comprehensive measurements of interstellar neutral atoms and pickup ions, the solar wind distribution, composition, and magnetic field, as well as suprathermal ion, energetic

  1. Process mapping and the integration of care

    PubMed Central

    Santana, Silvina; Redondo, Patrícia

    2011-01-01

    Introduction The main objective of this work is to show how process mapping may contribute to the improvement of intra- and inter-organizational integration of care. Theory and methods Under a logic of service integration, quality of care depends not only on how the internal processes are implemented, but also on the quality of the transitions of care with external entities. We conducted a case study at a health centre located in the Centre Region of Portugal. Data were collected during the first semester of 2009. Petri nets were used as the modeling tool. Results We mapped eleven processes directly involving a patient. The informality of many of the processes became evident: activities are guided by formalisms imposed by law and by the good practices of professionals, and some processes are not normalized or represented in the computerized information system. The media most used to communicate with other entities are the phone and paper. Under the RNCCI (Rede Nacional de Cuidados Integrados e Continuados - National Network for Integrated Care), the information is organized in an integrated manner, and the processes are supported by a customized, nation-wide, web-based information system; however, this platform is not integrated with the other applications in use. Conclusions and discussion We have demonstrated the viability and the benefits of process mapping techniques in the context of a health centre. Process mapping allowed us to identify and understand the 'what', 'why', 'when', 'where' and 'who' of each process, sub-process, task and activity, and to develop graphical views of the processes.

  2. Thermal Spray Maps: Material Genomics of Processing Technologies

    NASA Astrophysics Data System (ADS)

    Ang, Andrew Siao Ming; Sanpo, Noppakun; Sesso, Mitchell L.; Kim, Sun Yung; Berndt, Christopher C.

    2013-10-01

    There is currently no method whereby material properties of thermal spray coatings may be predicted from fundamental processing inputs such as temperature-velocity correlations. The first step in such an important understanding would involve establishing a foundation that consolidates the thermal spray literature so that known relationships could be documented and any trends identified. This paper presents a method to classify and reorder thermal spray data so that relationships and correlations between competing processes and materials can be identified. Extensive data mining of published experimental work was performed to create thermal spray property-performance maps, known as "TS maps" in this work. Six TS maps will be presented. The maps are based on coating characteristics of major importance; i.e., porosity, microhardness, adhesion strength, and the elastic modulus of thermal spray coatings.

  3. CIMOSA process classification for business process mapping in non-manufacturing firms: A case study

    NASA Astrophysics Data System (ADS)

    Latiffianti, Effi; Siswanto, Nurhadi; Wiratno, Stefanus Eko; Saputra, Yudha Andrian

    2017-11-01

    A business process map is an important means of enabling an enterprise to manage its value chain effectively. One widely used approach to classifying business processes for mapping purposes is the Computer Integrated Manufacturing System Open Architecture (CIMOSA). CIMOSA was initially designed for enterprises based on Computer Integrated Manufacturing (CIM) systems. This paper analyzes the use of the CIMOSA process classification for business process mapping in firms that do not fall within the area of CIM. Three firms from different business areas that have used the CIMOSA process classification were observed: an airline, a marketing and trading firm for oil and gas products, and an industrial estate management firm. The research shows that CIMOSA can be used in non-manufacturing firms with some adjustment; as evidenced by the case studies, the adjustment includes the addition, removal, or modification of some processes suggested by the CIMOSA process classification.

  4. Recent Advances in Understanding Particle Acceleration Processes in Solar Flares

    NASA Astrophysics Data System (ADS)

    Zharkova, V. V.; Arzner, K.; Benz, A. O.; Browning, P.; Dauphin, C.; Emslie, A. G.; Fletcher, L.; Kontar, E. P.; Mann, G.; Onofri, M.; Petrosian, V.; Turkmani, R.; Vilmer, N.; Vlahos, L.

    2011-09-01

    We review basic theoretical concepts in particle acceleration, with particular emphasis on processes likely to occur in regions of magnetic reconnection. Several new developments are discussed, including detailed studies of reconnection in three-dimensional magnetic field configurations (e.g., current sheets, collapsing traps, separatrix regions) and stochastic acceleration in a turbulent environment. Fluid, test-particle, and particle-in-cell approaches are used and results compared. While these studies show considerable promise in accounting for the various observational manifestations of solar flares, they are limited by a number of factors, mostly relating to available computational power. Not the least of these issues is the need to explicitly incorporate the electrodynamic feedback of the accelerated particles themselves on the environment in which they are accelerated. A brief prognosis for future advancement is offered.

  5. Experimental Results from a Resonant Dielectric Laser Accelerator

    NASA Astrophysics Data System (ADS)

    Yoder, Rodney; McNeur, Joshua; Sozer, Esin; Travish, Gil; Hazra, Kiran Shankar; Matthews, Brian; England, Joel; Peralta, Edgar; Wu, Ziran

    2015-04-01

    Laser-powered accelerators have the potential to operate with very large accelerating gradients (~ GV/m) and represent a path toward extremely compact colliders and accelerator technology. Optical-scale laser-powered devices based on field-shaping structures (known as dielectric laser accelerators, or DLAs) have been described and demonstrated recently. Here we report on the first experimental results from the Micro-Accelerator Platform (MAP), a DLA based on a slab-symmetric resonant optical-scale structure. As a resonant (rather than near-field) device, the MAP is distinct from other DLAs. Its cavity resonance enhances its accelerating field relative to the incoming laser fields, which are coupled efficiently through a diffractive optic on the upper face of the device. The MAP demonstrated modest accelerating gradients in recent experiments, in which it was powered by a Ti:Sapphire laser well below its breakdown limit. More detailed results and some implications for future developments will be discussed. Supported in part by the U.S. Defense Threat Reduction Agency (UCLA); U.S. Dept of Energy (SLAC); and DARPA (SLAC).

  6. Torque-based optimal acceleration control for electric vehicle

    NASA Astrophysics Data System (ADS)

    Lu, Dongbin; Ouyang, Minggao

    2014-03-01

    Existing research on acceleration control mainly focuses on optimizing the velocity trajectory with respect to a criterion that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem for conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively, but acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field-oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter and the battery) is modeled analytically rather than through a detailed consumption map. An analytical algorithm is proposed to derive the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP); this method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in the simulations and experiments is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control obtained by the analytical algorithm and that obtained by DP, and that both greatly reduce energy use compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in an EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
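
    To make the DP formulation concrete, the toy Python sketch below runs a backward value iteration over a discretized speed grid, choosing a traction force at each step to minimize electrical energy while reaching a target speed within the time horizon. The vehicle and drive parameters, the constant-efficiency drive model, and the resistance law are all invented for illustration and are far simpler than the paper's PMSM/FOC model.

      import numpy as np

      m, eta, dt, N = 1500.0, 0.85, 0.5, 40          # mass, drive efficiency, horizon
      v_grid = np.linspace(0, 30, 121)               # speed states, m/s
      F_opts = np.linspace(0, 4000, 21)              # traction force choices, N

      def resist(v):                                  # rolling + aerodynamic drag
          return 150.0 + 0.4 * v ** 2

      INF = 1e18
      J = np.where(v_grid >= 25.0, 0.0, INF)          # terminal cost: reach 25 m/s
      for step in range(N):                           # backward value iteration
          Jn = np.full_like(J, INF)
          for i, v in enumerate(v_grid):
              for F in F_opts:
                  v2 = v + (F - resist(v)) / m * dt   # speed after one step
                  j = np.clip(np.searchsorted(v_grid, v2), 0, len(v_grid) - 1)
                  e = F * v / eta * dt                # electrical energy this step
                  Jn[i] = min(Jn[i], e + J[j])        # snap v2 to the grid
          J = Jn
      print("minimum energy from rest: %.0f kJ" % (J[0] / 1e3))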

  7. Smartphone-based noise mapping: Integrating sound level meter app data into the strategic noise mapping process.

    PubMed

    Murphy, Enda; King, Eoin A

    2016-08-15

    The strategic noise mapping process of the EU has now been ongoing for more than ten years. However, despite the fact that a significant volume of research has been conducted on the process and related issues there has been little change or innovation in how relevant authorities and policymakers are conducting the process since its inception. This paper reports on research undertaken to assess the possibility for smartphone-based noise mapping data to be integrated into the traditional strategic noise mapping process. We compare maps generated using the traditional approach with those generated using smartphone-based measurement data. The advantage of the latter approach is that it has the potential to remove the need for exhaustive input data into the source calculation model for noise prediction. In addition, the study also tests the accuracy of smartphone-based measurements against simultaneous measurements taken using traditional sound level meters in the field. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Accelerating sino-atrium computer simulations with graphic processing units.

    PubMed

    Zhang, Hong; Xiao, Zheng; Lin, Shien-fong

    2015-01-01

    Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation involved usually makes such research difficult, given the limited computational power of central processing units (CPUs). In this paper, an accelerating approach using graphics processing units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward, and the strategy with the shortest running time was further optimized by considering block size, data transfer and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time of the non-optimized GPU program decreased by 62% with respect to a serial program running on the CPU, and by 80% after the program was optimized. The larger the tissue, the more significant the acceleration became. The results demonstrate the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
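
    The operator splitting idea can be shown on a generic excitable cable: each step alternates a reaction update (independent per cell, hence ideal for one-GPU-thread-per-cell parallelism) with a diffusion update that couples neighbors. The Python sketch below uses FitzHugh-Nagumo dynamics as a stand-in for the paper's SAN and atrial cell models; all parameters are illustrative.

      import numpy as np

      def reaction(v, w, dt, a=0.1, eps=0.01, b=0.5):
          """FitzHugh-Nagumo point dynamics; independent per cell (parallel)."""
          dv = v * (1 - v) * (v - a) - w
          dw = eps * (v - b * w)
          return v + dt * dv, w + dt * dw

      def diffuse(v, dt, D=1.0, dx=0.2):
          """Explicit finite-difference diffusion step (couples neighbors).

          Periodic boundaries via np.roll; stable since D*dt/dx^2 <= 0.5.
          """
          lap = (np.roll(v, 1) + np.roll(v, -1) - 2 * v) / dx ** 2
          return v + dt * D * lap

      n, dt = 530, 0.01                      # e.g. 500 SAN cells + 30 atrial cells
      v, w = np.zeros(n), np.zeros(n)
      v[:10] = 1.0                           # stimulate one end of the cable
      for _ in range(5000):                  # split step: react, then diffuse
          v, w = reaction(v, w, dt)
          v = diffuse(v, dt)
      print("excited cells:", int((v > 0.5).sum()))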

  9. Accelerated transport and growth with symmetrized dynamics

    NASA Astrophysics Data System (ADS)

    Merikoski, Juha

    2013-12-01

    In this paper we consider a model of accelerated dynamics with the rules modified from those of the recently proposed [Dong et al., Phys. Rev. Lett. 109, 130602 (2012), 10.1103/PhysRevLett.109.130602] accelerated exclusion process (AEP) such that particle-vacancy symmetry is restored to facilitate a mapping to a solid-on-solid growth model in 1+1 dimensions. In addition to kicking a particle ahead of the moving particle, as in the AEP, in our model another particle from behind is drawn, provided it is within the "distance of interaction" denoted by ℓmax. We call our model the doubly accelerated exclusion process (DAEP). We observe accelerated transport and interface growth and widening of the cluster size distribution for cluster sizes above ℓmax, when compared with the ordinary totally asymmetric exclusion process (TASEP). We also characterize the difference between the TASEP, AEP, and DAEP by computing a "staggered" order parameter, which reveals the local order in the steady state. This order in part explains the behavior of the particle current as a function of density. The differences of the steady states are also reflected by the behavior of the temporal and spatial correlation functions in the interface picture.
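
    For orientation, the Python sketch below simulates the ordinary TASEP baseline against which the AEP and DAEP are compared; the stationary current per site on a ring should approach rho(1 - rho). The kick move of the AEP and the additional backward draw of the DAEP would be grafted onto this hop rule, but their precise selection rules are specific to the papers and are not reproduced here.

      import numpy as np

      def tasep_sweep(occ, rng):
          """One Monte Carlo sweep of the ordinary TASEP on a ring.

          Each of the n attempts picks a random site; a particle there hops
          one site to the right if that site is empty. Returns the number of
          successful hops, so hops/n estimates the current per site.
          """
          n = occ.size
          hops = 0
          for i in rng.integers(0, n, size=n):
              if occ[i] and not occ[(i + 1) % n]:
                  occ[i], occ[(i + 1) % n] = False, True
                  hops += 1
          return hops

      rng = np.random.default_rng(3)
      occ = rng.random(400) < 0.3                 # ring at density ~0.3
      current = np.mean([tasep_sweep(occ, rng) for _ in range(2000)]) / occ.size
      print("current ~", round(current, 3), "; rho(1-rho) =", 0.3 * 0.7)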

  10. Enzyme clustering accelerates processing of intermediates through metabolic channeling

    PubMed Central

    Castellana, Michele; Wilson, Maxwell Z.; Xu, Yifan; Joshi, Preeti; Cristea, Ileana M.; Rabinowitz, Joshua D.; Gitai, Zemer; Wingreen, Ned S.

    2015-01-01

    We present a quantitative model to demonstrate that coclustering multiple enzymes into compact agglomerates accelerates the processing of intermediates, yielding the same efficiency benefits as direct channeling, a well-known mechanism in which enzymes are funneled between enzyme active sites through a physical tunnel. The model predicts the separation and size of coclusters that maximize metabolic efficiency, and this prediction is in agreement with previously reported spacings between coclusters in mammalian cells. For direct validation, we study a metabolic branch point in Escherichia coli and experimentally confirm the model prediction that enzyme agglomerates can accelerate the processing of a shared intermediate by one branch, and thus regulate steady-state flux division. Our studies establish a quantitative framework to understand coclustering-mediated metabolic channeling and its application to both efficiency improvement and metabolic regulation. PMID:25262299

  11. Conceptual Framework for the Mapping of Management Process with Information Technology in a Business Process

    PubMed Central

    Chellappa, Swarnalatha; Nagarajan, Asha

    2015-01-01

    This study of a component framework reveals the importance of mapping management processes to technology in a business environment. We define ERP as a software tool that has to provide a business solution, but not necessarily an integration of all departments. Any business process can be classified as a management process, an operational process or a supportive process. We have gone through the entire management process and were able to identify the influencing components to be mapped to a technology for a business solution. Governance, strategic management, and decision making are thoroughly discussed, and the need to map these components to the ERP is clearly explained. We also suggest that implementation of this framework might reduce ERP failures and, in particular, rectify ERP misfit. PMID:25861688

  12. Conceptual framework for the mapping of management process with information technology in a business process.

    PubMed

    Rajarathinam, Vetrickarthick; Chellappa, Swarnalatha; Nagarajan, Asha

    2015-01-01

    This study of a component framework reveals the importance of mapping management processes to technology in a business environment. We define ERP as a software tool that has to provide a business solution, but not necessarily an integration of all departments. Any business process can be classified as a management process, an operational process or a supportive process. We have gone through the entire management process and were able to identify the influencing components to be mapped to a technology for a business solution. Governance, strategic management, and decision making are thoroughly discussed, and the need to map these components to the ERP is clearly explained. We also suggest that implementation of this framework might reduce ERP failures and, in particular, rectify ERP misfit.

  13. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
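
    A toy Python analog of the described object hierarchy (illustrative only; the actual MAPA libraries are C++ and include the GUI, generic hash-table parameter storage, and tangent-map machinery omitted here) might look like:

      import numpy as np

      class Element:
          """Generic dynamical element; parameters live in a dict, mirroring
          the string-keyed hash-table storage described in the paper."""
          def __init__(self, **params):
              self.params = dict(params)
          def advance(self, z):             # override: map phase space forward
              return z

      class Drift(Element):
          def advance(self, z):
              x, xp = z
              return np.array([x + self.params["L"] * xp, xp])

      class ThinQuad(Element):
          def advance(self, z):
              x, xp = z
              return np.array([x, xp - self.params["kL"] * x])

      class Accelerator(Element):
          """Composes element maps; advance() walks the whole lattice."""
          def __init__(self, elements):
              super().__init__()
              self.elements = elements
          def advance(self, z):
              for e in self.elements:
                  z = e.advance(z)
              return z

      fodo = Accelerator([ThinQuad(kL=0.8), Drift(L=1.0),
                          ThinQuad(kL=-0.8), Drift(L=1.0)])
      print(fodo.advance(np.array([1e-3, 0.0])))   # one pass through a FODO cell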

  14. Processing and analysis of cardiac optical mapping data obtained with potentiometric dyes

    PubMed Central

    Laughner, Jacob I.; Ng, Fu Siong; Sulkin, Matthew S.; Arthur, R. Martin

    2012-01-01

    Optical mapping has become an increasingly important tool to study cardiac electrophysiology in the past 20 years. Multiple methods are used to process and analyze cardiac optical mapping data, and no consensus currently exists regarding the optimum methods. The specific methods chosen to process optical mapping data are important because inappropriate data processing can affect the content of the data and thus alter the conclusions of the studies. Details of the different steps in processing optical imaging data, including image segmentation, spatial filtering, temporal filtering, and baseline drift removal, are provided in this review. We also provide descriptions of the common analyses performed on data obtained from cardiac optical imaging, including activation mapping, action potential duration mapping, repolarization mapping, conduction velocity measurements, and optical action potential upstroke analysis. Optical mapping is often used to study complex arrhythmias, and we also discuss dominant frequency analysis and phase mapping techniques used for the analysis of cardiac fibrillation. PMID:22821993
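
    A minimal Python sketch of the processing chain reviewed above (spatial filtering, temporal filtering, baseline drift removal, then activation mapping from the maximum upstroke derivative) is given below; the filter types, orders, and cutoffs are illustrative choices, not recommendations from the review.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.signal import butter, filtfilt, detrend

      def process_stack(frames, fs, spatial_sigma=1.0, f_cut=100.0):
          """frames: (n_t, ny, nx) optical mapping movie; fs: sampling rate (Hz)."""
          # Spatial filtering: Gaussian blur within each frame (sigma=0 in time)
          frames = gaussian_filter(frames, sigma=(0, spatial_sigma, spatial_sigma))
          # Temporal filtering: zero-phase low-pass along the time axis
          b, a = butter(3, f_cut / (fs / 2), btype="low")
          frames = filtfilt(b, a, frames, axis=0)
          # Baseline drift removal: subtract a linear trend per pixel
          frames = detrend(frames, axis=0, type="linear")
          return frames

      def activation_map(frames, fs):
          """Activation time per pixel from the maximum temporal derivative."""
          dvdt = np.gradient(frames, axis=0) * fs
          return np.argmax(dvdt, axis=0) / fs      # seconds

      fs = 1000.0
      t = np.arange(500) / fs
      frames = np.tile(np.sin(2 * np.pi * 2 * t)[:, None, None], (1, 8, 8))
      frames += 0.05 * np.random.default_rng(4).standard_normal(frames.shape)
      act = activation_map(process_stack(frames, fs), fs)
      print(act.shape, act.min(), act.max())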

  15. Particle acceleration on a chip: A laser-driven micro-accelerator for research and industry

    NASA Astrophysics Data System (ADS)

    Yoder, R. B.; Travish, G.

    2013-03-01

    Particle accelerators are conventionally built from radio-frequency metal cavities, but this technology limits the maximum energy available and prevents miniaturization. In the past decade, laser-powered acceleration has been intensively studied as an alternative technology promising much higher accelerating fields in a smaller footprint and taking advantage of recent advances in photonics. Among the more promising approaches are those based on dielectric field-shaping structures. These ``dielectric laser accelerators'' (DLAs) scale with the laser wavelength employed and can be many orders of magnitude smaller than conventional accelerators; DLAs may enable the production of high-intensity, ultra-short relativistic electron bunches in a chip-scale device. When combined with a high- Z target or an optical-period undulator, these systems could produce high-brilliance x-rays from a breadbox-sized device having multiple applications in imaging, medicine, and homeland security. In our research program we have developed one such DLA, the Micro-Accelerator Platform (MAP). We describe the fundamental physics, our fabrication and testing program, and experimental results to date, along with future prospects for MAP-based light-sources and some remaining challenges. Supported in part by the Defense Threat Reduction Agency and National Nuclear Security Administration.

  16. TU-AB-BRD-01: Process Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palta, J.

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before amore » failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk based quality management program. Learning Objectives: Learn how to design a process map for a radiotherapy process Learn

  17. Evaluating secular acceleration in geomagnetic field model GRIMM-3

    NASA Astrophysics Data System (ADS)

    Lesur, V.; Wardinski, I.

    2012-12-01

    Secular acceleration of the magnetic field is the rate of change of its secular variation. One of the main results of studying the magnetic data collected by the German survey satellite CHAMP was the mapping of the field acceleration and its evolution in time. Questions remain about the accuracy of the modeled acceleration and the effect of the applied regularization processes. We have evaluated to what extent the regularization affects the temporal variability of the Gauss coefficients. We have also obtained estimates of the temporal variability of the Gauss coefficients using alternative approaches to the usual smoothing norms for regularization. Except for the dipole term, the secular acceleration of the Gauss coefficients is fairly well described up to spherical harmonic degree 5 or 6. There is no clear evidence from observatory data that the spectrum of this acceleration is underestimated at the Earth's surface. Assuming a resistive mantle, the observed acceleration supports a characteristic time scale for the secular variation of the order of 11 years.
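
    For reference, with the internal field written in the usual Gauss-coefficient expansion (standard notation, not taken from the paper), the secular variation (SV) and secular acceleration (SA) are the first and second time derivatives of the coefficients:

      V(r,\theta,\phi,t) = a\sum_{l=1}^{L}\sum_{m=0}^{l}\left(\frac{a}{r}\right)^{l+1}
          \left[g_l^m(t)\cos m\phi + h_l^m(t)\sin m\phi\right]P_l^m(\cos\theta),
          \qquad \mathbf{B} = -\nabla V,

      \mathrm{SV}: \{\dot g_l^m,\ \dot h_l^m\},\qquad
      \mathrm{SA}: \{\ddot g_l^m,\ \ddot h_l^m\}.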

  18. A physical process of the radial acceleration of disc galaxies

    NASA Astrophysics Data System (ADS)

    Wilhelm, Klaus; Dwivedi, Bhola N.

    2018-03-01

    An impact model of gravity designed to emulate Newton's law of gravitation is applied to the radial acceleration of disc galaxies. Based on this model (Wilhelm et al. 2013), the rotation velocity curves can be understood without the need to postulate any dark matter contribution. The increased acceleration in the plane of the disc is a consequence of multiple interactions of gravitons (called 'quadrupoles' in the original paper) and their subsequent propagation in this plane rather than in three-dimensional space. The concept provides a physical process that relates the fitted acceleration scale defined by McGaugh et al. (2016) to the mean free path length of gravitons in the discs of galaxies. It may also explain the gravitational interaction at low acceleration levels in MOdified Newtonian Dynamics (MOND; Milgrom 1983, 1994, 2015, 2016). Three examples are discussed in some detail: the spiral galaxies NGC 7814, NGC 6503 and M 33.
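
    The acceleration-scale fit referred to is the radial acceleration relation of McGaugh et al. (2016), quoted here for context (it is not written out in the abstract):

      g_{\mathrm{obs}} = \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/g_{\dagger}}}},
      \qquad g_{\dagger} \approx 1.2\times 10^{-10}\ \mathrm{m\,s^{-2}},

    where g_bar is the acceleration expected from the observed baryons and g_obs is the acceleration implied by the rotation curve; the impact model ties the scale g_dagger to the graviton mean free path in the disc.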

  19. Learning process mapping heuristics under stochastic sampling overheads

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Wah, Benjamin W.

    1991-01-01

    A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint; its goal is to obtain the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. Here, the statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The resulting improvement in performance under this more realistic assumption is presented, along with methods that alleviate the additional complexity.

  20. Intelligent process mapping through systematic improvement of heuristics

    NASA Technical Reports Server (NTRS)

    Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.

    1992-01-01

    The present system for the automatic learning and evaluation of novel heuristic methods, applicable to the mapping of communicating-process sets onto a computer network, is based on testing a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new heuristic methods through postgame analysis, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.

  1. Fast Mapping Across Time: Memory Processes Support Children's Retention of Learned Words.

    PubMed

    Vlach, Haley A; Sandhofer, Catherine M

    2012-01-01

    Children's remarkable ability to map linguistic labels to referents in the world is commonly called fast mapping. The current study examined children's (N = 216) and adults' (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children's retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel the forgetting functions of domain-general memory processes. Memory processes are critical to children's word learning, and the role of one such process, forgetting, is discussed in detail: forgetting supports extended mapping by promoting the memory and generalization of words and categories.

  2. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with OpenACC compiler directives and CUDA Fortran; a mixed implementation of both OpenACC and CUDA is demonstrated, with CUDA required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10x or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed, and optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.

  3. Effects of Experiment Location and Orbiter Attitude on the Residual Acceleration On-Board STS-73 (USML-2)

    NASA Technical Reports Server (NTRS)

    Hakimzadeh, Roshanak; McPherson, Kevin M.; Matisak, Brian P.; Wagar, William O.

    1997-01-01

    A knowledge of the quasi-steady acceleration environment on the NASA Space Shuttle Orbiter is of particular importance for materials processing experiments which are limited by slow diffusive processes. The quasi-steady (less than 1 Hz) acceleration environment on STS-73 (USML-2) was measured using the Orbital Acceleration Research Experiment (OARE) accelerometer. One of the facilities flown on USML-2 was the Crystal Growth Furnace (CGF), which was used by several Principal Investigators (PIs) to grow crystals. In this paper, the OARE data mapped to the sample melt location within this furnace are presented, along with the ratio of the axial to radial components of the quasi-steady acceleration at the melt site. Effects of Orbiter attitude on the acceleration data are discussed.

  4. Demystifying process mapping: a key step in neurosurgical quality improvement initiatives.

    PubMed

    McLaughlin, Nancy; Rodstein, Jennifer; Burke, Michael A; Martin, Neil A

    2014-08-01

    Reliable delivery of optimal care can be challenging for care providers. Health care leaders have integrated various business tools to assist them and their teams in ensuring consistent delivery of safe and top-quality care. The cornerstone to all quality improvement strategies is the detailed understanding of the current state of a process, captured by process mapping. Process mapping empowers caregivers to audit how they are currently delivering care to subsequently strategically plan improvement initiatives. As a community, neurosurgery has clearly shown dedication to enhancing patient safety and delivering quality care. A care redesign strategy named NERVS (Neurosurgery Enhanced Recovery after surgery, Value, and Safety) is currently being developed and piloted within our department. Through this initiative, a multidisciplinary team led by a clinician neurosurgeon has process mapped the way care is currently being delivered throughout the entire episode of care. Neurosurgeons are becoming leaders in quality programs, and their education on the quality improvement strategies and tools is essential. The authors present a comprehensive review of process mapping, demystifying its planning, its building, and its analysis. The particularities of using process maps, initially a business tool, in the health care arena are discussed, and their specific use in an academic neurosurgical department is presented.

  5. Access to Data Accelerates Innovation and Adoption of Geothermal

    Science.gov Websites

    May 18, 2018. A map of the continental U.S. is overlaid with a colored map showing deep geothermal heat potential.

  6. Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings

    NASA Astrophysics Data System (ADS)

    Hussein Maibed, Zena

    2018-05-01

    In this paper we introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive and general extended mappings. The existence of common fixed points for these processes in Hilbert spaces is also studied.

  7. Development of Maximum Considered Earthquake Ground Motion Maps

    USGS Publications Warehouse

    Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.

    2000-01-01

    The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.

  8. Mapping Perinatal Nursing Process Measurement Concepts to Standard Terminologies.

    PubMed

    Ivory, Catherine H

    2016-07-01

    The use of standard terminologies is an essential component for using data to inform practice and conduct research; perinatal nursing data standardization is needed. This study explored whether 76 distinct process elements important for perinatal nursing were present in four American Nurses Association-recognized standard terminologies. The 76 process elements were taken from a valid paper-based perinatal nursing process measurement tool. Using terminology-supported browsers, the elements were manually mapped to the selected terminologies by the researcher. A five-member expert panel validated 100% of the mapping findings. The majority of the process elements (n = 63, 83%) were present in SNOMED-CT, 28% (n = 21) in LOINC, 34% (n = 26) in ICNP, and 15% (n = 11) in CCC. SNOMED-CT and LOINC are terminologies currently recommended for use to facilitate interoperability in the capture of assessment and problem data in certified electronic medical records. Study results suggest that SNOMED-CT and LOINC contain perinatal nursing process elements and are useful standard terminologies to support perinatal nursing practice in electronic health records. Terminology mapping is the first step toward incorporating traditional paper-based tools into electronic systems.

  9. Suitability of aero-geophysical methods for generating conceptual soil maps and their use in the modeling of process-related susceptibility maps

    NASA Astrophysics Data System (ADS)

    Tilch, Nils; Römer, Alexander; Jochum, Birgit; Schattauer, Ingrid

    2014-05-01

    In past years, large-scale disasters have occurred several times in Austria, characterized not only by flooding but also by numerous shallow landslides and debris flows. For the purpose of risk prevention, national and regional authorities therefore require more objective and realistic maps of the spatially variable susceptibility of the geosphere to hazard-relevant gravitational mass movements. Many proven methods and models (e.g. neural networks, logistic regression, heuristic methods) are available to create such process-related susceptibility maps (e.g. for shallow gravitational mass movements in soil). However, numerous national and international studies show that the suitability of a method depends on the quality of the process data and parameter maps (e.g. Tilch & Schwarz 2011, Schwarz & Tilch 2011). It is therefore important that maps with detailed, process-oriented information on the process-relevant geosphere also be considered. One major disadvantage is that area-wide process-relevant information exists only occasionally; similarly, in Austria soil maps are often available only for treeless areas. In almost all previous studies, whatever geological and geotechnical maps happened to exist were used, often ones specially adapted to other issues and objectives. This is one reason why conceptual soil maps must frequently be derived from geological maps containing only hard-rock information, which often have rather low quality. Based on such maps, adjacent areas of different geological composition and process-relevant physical properties are delineated razor-sharp, which rarely occurs in nature. In order to obtain more realistic information about the spatial variability of the process-relevant geosphere (soil cover) and its physical properties, aerogeophysical measurements (electromagnetic, radiometric) carried out by helicopter in different regions of Austria were interpreted.

  10. Are supernova remnants quasi-parallel or quasi-perpendicular accelerators

    NASA Technical Reports Server (NTRS)

    Spangler, S. R.; Leckband, J. A.; Cairns, I. H.

    1989-01-01

    Observations of shock waves in the solar system show a pronounced difference in the plasma wave and particle environment depending on whether the shock is propagating along or perpendicular to the interplanetary magnetic field. Theories of particle acceleration developed for quasi-parallel and quasi-perpendicular shocks, when extended to the interstellar medium, suggest that the relativistic electrons in radio supernova remnants are accelerated by either the Q-parallel or Q-perpendicular mechanism. A model for the galactic magnetic field and published maps of supernova remnants were used to search for a dependence of structure on the angle Phi. Results show no tendency for the remnants as a whole to favor the relationship expected for either mechanism, although individual sources resemble model remnants of one or the other acceleration process.

  11. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    NASA Technical Reports Server (NTRS)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in their original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as to balance the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
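
    A toy illustration of the core idea, with names and layout invented here for illustration (not the paper's API): the index maps a logical (time, latitude band, longitude band) chunk to a byte range in a native file, so a mapper reads only its own partition.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ChunkKey:
        day: int          # time step
        lat_band: int     # coarse latitude band
        lon_band: int     # coarse longitude band

    @dataclass
    class ChunkLocation:
        path: str         # native file holding the chunk
        offset: int       # byte offset of the chunk within the file
        length: int       # chunk size in bytes

    # The index bridges the logical array model and the physical layout.
    index: dict[ChunkKey, ChunkLocation] = {}

    def register(key: ChunkKey, loc: ChunkLocation) -> None:
        index[key] = loc

    def read_chunk(key: ChunkKey) -> bytes:
        """A mapper reads only its own partition via the index, preserving
        data locality instead of scanning whole files."""
        loc = index[key]
        with open(loc.path, "rb") as f:
            f.seek(loc.offset)
            return f.read(loc.length)
    ```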

  12. AIRS Maps from Space Processing Software

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
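
    Under one plausible reading of the gridding step (grid conventions assumed here, not taken from the software), the re-projection amounts to binning swath samples onto a fixed 0.25-degree global grid and averaging:

    ```python
    import numpy as np

    def grid_swath(lat, lon, values, res=0.25):
        """Average swath samples onto a global lat/lon grid of `res` degrees."""
        nlat, nlon = int(180 / res), int(360 / res)
        rows = np.clip(((lat + 90.0) / res).astype(int), 0, nlat - 1)
        cols = np.clip(((lon + 180.0) / res).astype(int), 0, nlon - 1)
        total = np.zeros((nlat, nlon))
        count = np.zeros((nlat, nlon))
        np.add.at(total, (rows, cols), values)   # accumulate samples per cell
        np.add.at(count, (rows, cols), 1)
        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(count > 0, total / count, np.nan)  # NaN = no data
    ```

    Multi-day averages then fall out of summing the total and count grids across days before dividing.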

  13. SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model

    NASA Astrophysics Data System (ADS)

    Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria

    2016-04-01

    The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of a stratigraphic seismic response at different periods, grid solving the calibrated Emul-spectra model. In addition, the spectral topographic amplification is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of the numerical simulations related to isolated reliefs using GIS morphometric data. In this way, different sets of seismic response maps are developed on which maps of design acceleration response spectra are also defined by means of an enveloping technique.

  14. GPU-BSM: A GPU-Based Tool to Map Bisulfite-Treated Reads

    PubMed Central

    Manconi, Andrea; Orro, Alessandro; Manca, Emanuele; Armano, Giuliano; Milanesi, Luciano

    2014-01-01

    Cytosine DNA methylation is an epigenetic mark implicated in several biological processes. Bisulfite treatment of DNA is acknowledged as the gold standard technique to study methylation. This technique introduces changes in the genomic DNA by converting cytosines to uracils while 5-methylcytosines remain nonreactive. During PCR amplification, 5-methylcytosines are amplified as cytosine, whereas uracils and thymines are amplified as thymine. To detect the methylation levels, bisulfite-treated reads must be aligned against a reference genome. Mapping these reads to a reference genome represents a significant computational challenge mainly due to the increased search space and the loss of information introduced by the treatment. To deal with this computational challenge we devised GPU-BSM, a tool based on modern Graphics Processing Units. Graphics Processing Units are hardware accelerators that are increasingly being used successfully to accelerate general-purpose scientific applications. GPU-BSM is a tool able to map bisulfite-treated reads from whole genome bisulfite sequencing and reduced representation bisulfite sequencing, and to estimate methylation levels, with the goal of detecting methylation. Due to the massive parallelization obtained by exploiting graphics cards, GPU-BSM aligns bisulfite-treated reads faster than other cutting-edge solutions, while outperforming most of them in terms of unique mapped reads. PMID:24842718
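
    The "loss of information introduced by the treatment" comes from collapsing the alphabet before alignment; a small sketch of that in-silico conversion step (the GPU alignment itself is far beyond a few lines):

    ```python
    def convert_read(seq: str) -> str:
        """Bisulfite read as sequenced: unmethylated C reads as T."""
        return seq.upper().replace("C", "T")

    def convert_reference(ref: str) -> tuple[str, str]:
        """Two converted references: C->T for the top strand,
        G->A for the opposite strand."""
        ref = ref.upper()
        return ref.replace("C", "T"), ref.replace("G", "A")

    # Methylation calling then compares aligned read bases back against the
    # *original* reference: a C retained in the read over a reference C
    # indicates a methylated (nonreactive) cytosine.
    ```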

  15. Heralded processes on continuous-variable spaces as quantum maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreyrol, Franck; Spagnolo, Nicolò; Blandino, Rémi

    2014-12-04

    Heralding processes, which only succeed when a measurement on part of the system gives the desired result, are particularly interesting for continuous variables. They permit the non-Gaussian transformations that are necessary for several continuous-variable quantum information tasks. However, while maps and quantum process tomography are commonly used to describe quantum transformations in discrete-variable spaces, they are much rarer in the continuous-variable domain. Moreover, no convenient tool for representing maps in a way better adapted to the particularities of continuous variables has yet been explored. In this paper we try to fill this gap by presenting such a tool.

  16. The spinning disc: studying radial acceleration and its damping process with smartphone acceleration sensors

    NASA Astrophysics Data System (ADS)

    Hochberg, K.; Gröber, S.; Kuhn, J.; Müller, A.

    2014-03-01

    Here, we show the possibility of analysing circular motion and acceleration using the acceleration sensors of smartphones. For instance, the known linear dependence of the radial acceleration on the distance to the centre (at constant angular frequency) can be shown using multiple smartphones attached to a revolving disc. As a second example, the decrease of the radial acceleration and the rotation frequency due to friction can be measured and fitted with a quadratic function, in accordance with theory. Finally, because the disc is not set up exactly horizontal, each smartphone measures a component of the gravitational acceleration that adds to the radial acceleration during one half of the period and subtracts from the radial acceleration during the other half. Hence, every graph shows a small modulation, which can be used to determine the rotation frequency, thus converting a ‘nuisance effect’ into a source of useful information, making additional measurements with stopwatches or the like unnecessary.
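
    In symbols (the small tilt angle δ and phase φ are notation introduced here, not taken from the paper):

    ```latex
    % Radial acceleration at distance r from the centre of the disc:
    a_r = \omega^{2} r
    % A small tilt \delta adds a rotating component of gravity to each sensor,
    % modulating the reading at the spin frequency:
    a_{\mathrm{meas}}(t) \approx \omega^{2} r + g \sin\delta \, \sin(\omega t + \varphi)
    ```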

  17. Probing SEP Acceleration Processes With Near-relativistic Electrons

    NASA Astrophysics Data System (ADS)

    Haggerty, Dennis K.; Roelof, Edmond C.

    2009-11-01

    Processes in the solar corona are prodigious accelerators of near-relativistic electrons. Only a small fraction of these electrons escape the low corona, yet they are by far the most abundant species observed in Solar Energetic Particle events. These beam-like energetic electron events are sometimes time-associated with coronal mass ejections from the western solar hemisphere. However, a significant number of events are observed without any apparent association with a transient event. The relationship between solar energetic particle events, coronal mass ejections, and near-relativistic electron events is better ordered when we classify the intensity time profiles during the duration of the beam-like anisotropies into three broad categories: 1) Spikes (rapid and equal rise and decay), 2) Pulses (rapid rise, slower decay), and 3) Ramps (rapid rise followed by a plateau). We report on the results of a study that is based on our catalog (covering nearly the complete Solar Cycle 23) of 216 near-relativistic electron events and their association with: solar electromagnetic emissions, shocks driven by coronal mass ejections, models of the coronal magnetic fields and energetic protons. We conclude that electron events with time-intensity profiles of Spikes and Pulses are associated with explosive events in the low corona while events with time-intensity profiles of Ramps are associated with the injection/acceleration process of the CME driven shock.

  18. Mapping spatial patterns with morphological image processing

    Treesearch

    Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham

    2006-01-01

    We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
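
    A minimal sketch of pixel-level pattern classes from binary morphology, in the spirit of (but not identical to) the classification above; `scipy.ndimage` supplies the operators:

    ```python
    import numpy as np
    from scipy import ndimage

    def classify_pattern(forest: np.ndarray) -> np.ndarray:
        """forest: boolean land-cover map. Returns 0=background, 1=core,
        2=edge (forest on the outer boundary), 3=perforated (forest around
        interior holes)."""
        core = ndimage.binary_erosion(forest)
        # Fill holes to tell outer edges apart from perimeters of interior gaps.
        filled = ndimage.binary_fill_holes(forest)
        holes = filled & ~forest
        near_hole = ndimage.binary_dilation(holes) & forest
        out = np.zeros(forest.shape, dtype=np.uint8)
        out[forest] = 2                 # default: edge
        out[near_hole & ~core] = 3      # perforation perimeter
        out[core] = 1                   # interior core
        return out
    ```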

  19. Mapping the Information Trace in Local Field Potentials by a Computational Method of Two-Dimensional Time-Shifting Synchronization Likelihood Based on Graphic Processing Unit Acceleration.

    PubMed

    Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You

    2017-12-01

    The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.
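
    A simplified, single-shift sketch of synchronization likelihood in NumPy (illustrative only; the paper's 2-dimensional time-shifting variant additionally sweeps delay-time and onset-time offsets between the two embeddings, and the embedding and threshold parameters below are assumptions):

    ```python
    import numpy as np

    def synchronization_likelihood(x, y, emb=5, lag=2, p_ref=0.05):
        """SL(i) ~ P(states of y are close | states of x are close), with a
        per-channel distance threshold set at the p_ref percentile."""
        def embed(s):
            n = len(s) - (emb - 1) * lag
            return np.stack([s[i * lag : i * lag + n] for i in range(emb)], axis=1)
        X, Y = embed(np.asarray(x, float)), embed(np.asarray(y, float))
        dx = np.linalg.norm(X[:, None] - X[None], axis=-1)
        dy = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
        np.fill_diagonal(dx, np.inf)
        np.fill_diagonal(dy, np.inf)
        ex = np.quantile(dx[np.isfinite(dx)], p_ref)
        ey = np.quantile(dy[np.isfinite(dy)], p_ref)
        close_x, close_y = dx <= ex, dy <= ey
        hits = (close_x & close_y).sum(axis=1)
        return hits / np.maximum(close_x.sum(axis=1), 1)
    ```

    The pairwise distance matrices make the quadratic cost explicit, which is the computational burden the GPU implementation parallelizes.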

  20. Acceleration processes in the quasi-steady magnetoplasmadynamic discharge. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Boyle, M. J.

    1974-01-01

    The flow field characteristics within the discharge chamber and exhaust of a quasi-steady magnetoplasmadynamic (MPD) arcjet were examined to clarify the nature of the plasma acceleration process. The observation of discharge characteristics unperturbed by insulator ablation and terminal voltage fluctuations first requires the satisfaction of three criteria: the use of refractory insulator materials; a mass injection geometry tailored to provide propellant to both electrode regions of the discharge; and a cathode of sufficient surface area to permit nominal MPD arcjet operation for given combinations of arc current and total mass flow. The axial velocity profile and electromagnetic discharge structure were measured for an arcjet configuration which functions nominally at 15.3 kA and 6 g/sec argon mass flow. An empirical two-flow plasma acceleration model is advanced which delineates inner and outer flow regions and accounts for the observed velocity profile and calculated thrust of the accelerator.

  1. Particle Acceleration via Reconnection Processes in the Supersonic Solar Wind

    NASA Astrophysics Data System (ADS)

    Zank, G. P.; le Roux, J. A.; Webb, G. M.; Dosch, A.; Khabarova, O.

    2014-12-01

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We derive a gyrophase-averaged transport equation for particles experiencing pitch-angle scattering and energization in a super-Alfvénic flowing plasma experiencing multiple small-scale reconnection events. A simpler advection-diffusion transport equation for a nearly isotropic particle distribution is derived. The dominant charged particle energization processes are (1) the electric field induced by quasi-2D magnetic island merging and (2) magnetic island contraction. The magnetic island topology ensures that charged particles are trapped in regions where they experience repeated interactions with the induced electric field or contracting magnetic islands. Steady-state solutions of the isotropic transport equation with only the induced electric field and a fixed source yield a power-law spectrum for the accelerated particles with index α = -(3 + M_A)/2, where M_A is the Alfvén Mach number. Considering only magnetic island contraction yields power-law-like solutions with index -3(1 + τ_c/(8τ_diff)), where τ_c/τ_diff is the ratio of timescales between magnetic island contraction and charged particle diffusion. The general solution is a power-law-like solution with an index that depends on the Alfvén Mach number and the timescale ratio τ_diff/τ_c. Observed power-law distributions of energetic particles in the quiet supersonic solar wind at 1 AU may be a consequence of particle acceleration associated with dissipative small-scale reconnection processes in a turbulent plasma, including the widely reported c^-5 (c = particle speed) spectra observed by Fisk & Gloeckler and Mewaldt et al.
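
    As a quick worked check of the quoted merging-only index (an illustration, not a result taken from the paper):

    ```latex
    \alpha = -\frac{3 + M_A}{2}, \qquad
    M_A = 7 \;\Longrightarrow\; \alpha = -\frac{3 + 7}{2} = -5 ,
    ```

    so an Alfvén Mach number of 7 reproduces the widely reported c^-5 spectrum.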

  2. On the mapping associated with the complex representation of functions and processes.

    NASA Technical Reports Server (NTRS)

    Harger, R. O.

    1972-01-01

    The mapping between function spaces that is implied by the representation of a real 'bandpass' function by a complex 'low-pass' function is made explicit. The discussion is extended to the representation of stationary random processes, where the mapping is between spaces of random processes. This approach clarifies the nature of the complex representation, especially in the case of random processes, and, in addition, yields the properties of the complex representation.
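
    A short numerical illustration of the bandpass-to-lowpass mapping the abstract formalizes, using the analytic-signal construction (one standard realization; the paper's exact formulation may differ, and all parameters here are illustrative):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs, fc = 10_000.0, 1_000.0                          # sample rate, carrier (Hz)
    t = np.arange(0, 0.1, 1 / fs)
    envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 50 * t)   # slow 'message'
    x = envelope * np.cos(2 * np.pi * fc * t)           # real bandpass signal

    # Analytic signal, then shift the spectrum down by the carrier:
    z = hilbert(x) * np.exp(-2j * np.pi * fc * t)       # complex lowpass signal
    recovered = np.abs(z)                               # ~ the envelope

    # Away from the edges, the complex representation recovers the message.
    print(np.allclose(recovered[100:-100], envelope[100:-100], atol=0.05))
    ```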

  3. Facilitating the exploitation of ERTS-1 imagery using snow enhancement techniques. [geological fault maps of Massachusetts and Connecticut]

    NASA Technical Reports Server (NTRS)

    Wobber, F. J. (Principal Investigator); Martin, K. R.; Amato, R. V.; Leshendok, T.

    1973-01-01

    The author has identified the following significant results. The applications of ERTS-1 imagery for geological fracture mapping regardless of season have been repeatedly confirmed. The enhancement provided by a differential cover of snow increases the number and length of fracture-lineaments which can be detected with ERTS-1 data and accelerates the fracture mapping process for a variety of practical applications. The geological mapping benefits of the program will be realized in geographic areas where data are most needed - complex glaciated terrain and areas of deep residual soils. ERTS-1 derived fracture-lineament maps which provide detail well in excess of existing geological maps are now available in the Massachusetts-Connecticut area. The large quantity of new data provided by ERTS-1 may accelerate and improve field mapping now in progress in the area. Numerous other user groups have requested data on the techniques. This represents a major change in operating philosophy for groups who to date judged that snow obscured geological detail.

  4. Schooling in Times of Acceleration

    ERIC Educational Resources Information Center

    Buddeberg, Magdalena; Hornberg, Sabine

    2017-01-01

    Modern societies are characterised by forms of acceleration, which influence social processes. Sociologist Hartmut Rosa has systematised temporal structures by focusing on three categories of social acceleration: technical acceleration, acceleration of social change, and acceleration of the pace of life. All three processes of acceleration are…

  5. Digital images in the map revision process

    NASA Astrophysics Data System (ADS)

    Newby, P. R. T.

    Progress towards the adoption of digital (or softcopy) photogrammetric techniques for database and map revision is reviewed. Particular attention is given to the Ordnance Survey of Great Britain, the author's former employer, where digital processes are under investigation but have not yet been introduced for routine production. Developments which may lead to increasing automation of database update processes appear promising, but because of the cost and practical problems associated with managing as well as updating large digital databases, caution is advised when considering the transition to softcopy photogrammetry for revision tasks.

  6. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

    Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from the high computational complexity of O(n^3) + O(n^2 m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training sets, which are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields, continuous occupancy maps, using marching cubes. By doing so, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
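
    A condensed sketch of the cluster-then-fit idea, with scikit-learn as an assumed stand-in (the paper's own kernels and implementation differ): partition the training points, fit one Gaussian process classifier per cluster, and route each query to its nearest cluster.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.gaussian_process import GaussianProcessClassifier

    def fit_local_gps(X, y, n_clusters=8):
        """X: (n, 3) points; y: (n,) occupancy labels in {0, 1}. One GP per
        cluster keeps the O(n^3) cost local instead of global. Assumes every
        cluster contains both free and occupied samples."""
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
        models = []
        for k in range(n_clusters):
            mask = km.labels_ == k
            models.append(GaussianProcessClassifier().fit(X[mask], y[mask]))
        return km, models

    def predict_occupancy(km, models, Xq):
        """Route each query point to its nearest cluster's local GP."""
        idx = km.predict(Xq)
        p = np.empty(len(Xq))
        for k, gpc in enumerate(models):
            mask = idx == k
            if mask.any():
                p[mask] = gpc.predict_proba(Xq[mask])[:, 1]  # P(occupied)
        return p
    ```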

  7. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate -1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  8. On the safety of ITER accelerators

    PubMed Central

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate −1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267

  9. Soil mapping and processes modelling for sustainable land management: a review

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Brevik, Eric; Muñoz-Rojas, Miriam; Miller, Bradley; Smetanova, Anna; Depellegrin, Daniel; Misiune, Ieva; Novara, Agata; Cerda, Artemi

    2017-04-01

    Soil maps and models are fundamental for correct and sustainable land management (Pereira et al., 2017). They are important in the assessment of the territory and the implementation of sustainable measures in urban areas, agriculture, forests, and ecosystem services, among others. Soil maps represent an important basis for the evaluation and restoration of degraded areas, an important issue for our society as a consequence of climate change and the increasing pressure of humans on ecosystems (Brevik et al. 2016; Depellegrin et al., 2016). Understanding soil spatial variability and the phenomena that influence its dynamics is crucial to the implementation of sustainable practices that prevent degradation and decrease the economic costs of soil restoration. In this context, soil maps and models are important to identify areas affected by degradation and to optimize the resources available to restore them. Overall, soil data, alone or integrated with data from other sciences, are an important part of sustainable land management, and this information is extremely important for land managers and decision makers implementing sustainable land management policies. The objective of this work is to present a review of the advantages of soil mapping and process modelling for sustainable land management. References: Brevik, E., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Baumgarten, A., Jordán, A. (2016) Historical perspectives and future needs in soil mapping, classification and pedological modelling, Geoderma, 264, Part B, 256-274. Depellegrin, D.A., Pereira, P., Misiune, I., Egarter-Vigl, L. (2016) Mapping Ecosystem Services in Lithuania. International Journal of Sustainable Development and World Ecology, 23, 441-455. Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B., Smetanova, A., Depellegrin, D., Misiune, I., Novara, A., Cerda, A. (2017) Soil mapping and process modelling for sustainable land management. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B

  10. GRAIL Gravity Map of Orientale Basin

    NASA Image and Video Library

    2016-10-27

    This color-coded map shows the strength of surface gravity around Orientale basin on Earth's moon, derived from data obtained by NASA's GRAIL mission. The GRAIL mission produced a very high-resolution map of gravity over the surface of the entire moon. This plot is zoomed in on the part of that map that features Orientale basin, where the two GRAIL spacecraft flew extremely low near the end of their mission. Their close proximity to the basin made the probes' measurements particularly sensitive to the gravitational acceleration there (due to the inverse-square law). The color scale plots the gravitational acceleration in units of "gals," where 1 gal is one centimeter per second squared, or about 1/1000th of the gravitational acceleration at Earth's surface. (The unit was devised in honor of the astronomer Galileo.) Labels on the x and y axes represent latitude and longitude. http://photojournal.jpl.nasa.gov/catalog/PIA21050

  11. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis; /Fermilab; Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  12. A Web-Based Interactive Mapping System of State Wide School Performance: Integrating Google Maps API Technology into Educational Achievement Data

    ERIC Educational Resources Information Center

    Wang, Kening; Mulvenon, Sean W.; Stegman, Charles; Anderson, Travis

    2008-01-01

    Google Maps API (Application Programming Interface), released in late June 2005 by Google, is an amazing technology that allows users to embed Google Maps in their own Web pages with JavaScript. Google Maps API has accelerated the development of new Google Maps based applications. This article reports a Web-based interactive mapping system…

  13. Highly Efficient Proteolysis Accelerated by Electromagnetic Waves for Peptide Mapping

    PubMed Central

    Chen, Qiwen; Liu, Ting; Chen, Gang

    2011-01-01

    Proteomics will contribute greatly to the understanding of gene functions in the post-genomic era. In proteome research, protein digestion is a key procedure prior to mass spectrometry identification. During the past decade, a variety of electromagnetic waves have been employed to accelerate proteolysis. This review focuses on the recent advances and the key strategies of these novel proteolysis approaches for digesting and identifying proteins. The subjects covered include microwave-accelerated protein digestion, infrared-assisted proteolysis, ultraviolet-enhanced protein digestion, laser-assisted proteolysis, and future prospects. It is expected that these novel proteolysis strategies accelerated by various electromagnetic waves will become powerful tools in proteome research and will find wide applications in high throughput protein digestion and identification. PMID:22379392

  14. Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.

    PubMed

    Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh

    2017-07-03

    Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predict other unexplored connections.

  15. TAC Proton Accelerator Facility: The Status and Road Map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Algin, E.; Akkus, B.; Caliskan, A.

    2011-06-28

    The Proton Accelerator (PA) Project is at a stage of development, working towards a Technical Design Report under the roof of the larger-scale Turkish Accelerator Center (TAC) Project. The project is supported by the Turkish State Planning Organization. The PA facility will be constructed in a series of stages including a 3 MeV test stand, a 55 MeV linac which can be extended to 100+ MeV, and then a full 1-3 GeV proton synchrotron or superconducting linac. In this article, the science applications, an overview, and the current status of the PA Project are given.

  16. Vibration environment - Acceleration mapping strategy and microgravity requirements for Spacelab and Space Station

    NASA Technical Reports Server (NTRS)

    Martin, Gary L.; Baugher, Charles R.; Delombard, Richard

    1990-01-01

    In order to define the acceleration requirements for future Shuttle and Space Station Freedom payloads, methods and hardware characterizing accelerations on microgravity experiment carriers are discussed. The different aspects of the acceleration environment and the acceptable disturbance levels are identified. The space acceleration measurement system features an adjustable bandwidth, wide dynamic range, data storage, and ability to be easily reconfigured and is expected to fly on the Spacelab Life Sciences-1. The acceleration characterization and analysis project describes the Shuttle acceleration environment and disturbance mechanisms, and facilitates the implementation of the microgravity research program.

  17. Seismic hazard maps for Haiti

    USGS Publications Warehouse

    Frankel, Arthur; Harmsen, Stephen; Mueller, Charles; Calais, Eric; Haase, Jennifer

    2011-01-01

    We have produced probabilistic seismic hazard maps of Haiti for peak ground acceleration and response spectral accelerations that include the hazard from the major crustal faults, subduction zones, and background earthquakes. The hazard from the Enriquillo-Plantain Garden, Septentrional, and Matheux-Neiba fault zones was estimated using fault slip rates determined from GPS measurements. The hazard from the subduction zones along the northern and southeastern coasts of Hispaniola was calculated from slip rates derived from GPS data and the overall plate motion. Hazard maps were made for a firm-rock site condition and for a grid of shallow shear-wave velocities estimated from topographic slope. The maps show substantial hazard throughout Haiti, with the highest hazard in Haiti along the Enriquillo-Plantain Garden and Septentrional fault zones. The Matheux-Neiba Fault exhibits high hazard in the maps for 2% probability of exceedance in 50 years, although its slip rate is poorly constrained.

  18. Particle acceleration via reconnection processes in the supersonic solar wind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zank, G. P.; Le Roux, J. A.; Webb, G. M.

    An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We derive a gyrophase-averaged transport equation for particles experiencing pitch-angle scattering and energization in a super-Alfvénic flowing plasma experiencing multiple small-scale reconnection events. A simpler advection-diffusion transport equation for a nearly isotropic particle distribution is derived. The dominant charged particle energization processes are (1) the electric field induced by quasi-2D magnetic island merging and (2) magnetic island contraction. The magnetic island topology ensures that charged particles are trapped in regions where they experience repeated interactions with the induced electric field or contracting magnetic islands. Steady-state solutions of the isotropic transport equation with only the induced electric field and a fixed source yield a power-law spectrum for the accelerated particles with index α = -(3 + M_A)/2, where M_A is the Alfvén Mach number. Considering only magnetic island contraction yields power-law-like solutions with index -3(1 + τ_c/(8τ_diff)), where τ_c/τ_diff is the ratio of timescales between magnetic island contraction and charged particle diffusion. The general solution is a power-law-like solution with an index that depends on the Alfvén Mach number and the timescale ratio τ_diff/τ_c. Observed power-law distributions of energetic particles in the quiet supersonic solar wind at 1 AU may be a consequence of particle acceleration associated with dissipative small-scale reconnection processes in a turbulent plasma, including the widely reported c^-5 (c = particle speed) spectra observed by Fisk & Gloeckler and Mewaldt et al.

  19. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.

    2011-09-01

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  20. Using mind mapping techniques for rapid qualitative data analysis in public participation processes.

    PubMed

    Burgess-Allen, Jilla; Owen-Smith, Vicci

    2010-12-01

    In a health service environment where timescales for patient participation in service design are short and resources scarce, a balance needs to be achieved between research rigour and the timeliness and utility of the findings of patient participation processes. To develop a pragmatic mind mapping approach to managing the qualitative data from patient participation processes. While this article draws on experience of using mind maps in a variety of participation processes, a single example is used to illustrate the approach. In this example mind maps were created during the course of patient participation focus groups. Two group discussions were also transcribed verbatim to allow comparison of the rapid mind mapping approach with traditional thematic analysis of qualitative data. The illustrative example formed part of a local alcohol service review which included consultation with local alcohol service users, their families and staff groups. The mind mapping approach provided a pleasing graphical format for representing the key themes raised during the focus groups. It helped stimulate and galvanize discussion and keep it on track, enhanced transparency and group ownership of the data analysis process, allowed a rapid dynamic between data collection and feedback, and was considerably faster than traditional methods for the analysis of focus groups, while resulting in similar broad themes. This study suggests that the use of a mind mapping approach to managing qualitative data can provide a pragmatic resolution of the tension between limited resources and quality in patient participation processes. © 2010 The Authors. Health Expectations © 2010 Blackwell Publishing Ltd.

  1. First muon acceleration using a radio-frequency accelerator

    NASA Astrophysics Data System (ADS)

    Bae, S.; Choi, H.; Choi, S.; Fukao, Y.; Futatsukawa, K.; Hasegawa, K.; Iijima, T.; Iinuma, H.; Ishida, K.; Kawamura, N.; Kim, B.; Kitamura, R.; Ko, H. S.; Kondo, Y.; Li, S.; Mibe, T.; Miyake, Y.; Morishita, T.; Nakazawa, Y.; Otani, M.; Razuvaev, G. P.; Saito, N.; Shimomura, K.; Sue, Y.; Won, E.; Yamazaki, T.

    2018-05-01

    Muons have been accelerated using a radio-frequency accelerator for the first time. Negative muonium atoms (Mu-), which are bound states of a positive muon (μ+) and two electrons, are generated from μ+'s through the electron capture process in an aluminum degrader. The generated Mu-'s are initially electrostatically accelerated and injected into a radio-frequency quadrupole linac (RFQ). In the RFQ, the Mu-'s are accelerated to 89 keV. The accelerated Mu-'s are identified by momentum measurement and time of flight. This compact muon linac opens the door to various muon accelerator applications, including particle physics measurements and the construction of a transmission muon microscope.

  2. Digital Signal Processing and Generation for a DC Current Transformer for Particle Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zorzetti, Silvia

    2013-01-01

    The thesis topic, digital signal processing and generation for a DC current transformer, focuses on one of the most fundamental beam diagnostics in the field of particle accelerators: the measurement of the beam intensity, or beam current. The technology of the DC current transformer (DCCT) is well known and used in many areas, including particle accelerator beam instrumentation, as a non-invasive (shunt-free) method to monitor the DC current in a conducting wire or, in our case, the current of charged particles travelling inside an evacuated metal pipe. So far, custom and commercial DCCTs are entirely based on analog technologies and signal processing, which makes them inflexible, sensitive to component aging, and difficult to maintain and calibrate.

  3. Time Recovery for a Complex Process Using Accelerated Dynamics.

    PubMed

    Paz, S Alexis; Leiva, Ezequiel P M

    2015-04-14

    The hyperdynamics method (HD) developed by Voter (J. Chem. Phys. 1997, 106, 4665) sets the theoretical basis for constructing an accelerated simulation scheme that retains time-scale information. Since HD is based on transition state theory, pseudoequilibrium conditions (PEC) must be satisfied before any system in a trapped state may be accelerated. As the system evolves, many trapped states may appear, and the PEC must hold in each one to accelerate the escape. However, since the system evolution is a priori unknown, the PEC cannot be permanently assumed to be true. Furthermore, the different parameters of the bias function used may need drastic recalibration during this evolution. To overcome these problems, we present a general scheme for switching between HD and conventional molecular dynamics (MD) automatically during the simulation. Criteria based on the energetic properties of the system are introduced to decide when HD should start and finish. In addition, a very simple bias function is proposed, leading to a straightforward on-the-fly setup of the required parameters. A way to measure the quality of the simulation is suggested. The efficiency of the present hybrid HD-MD method is tested for a two-dimensional model potential and for the coalescence process of two nanoparticles. In spite of the considerable complexity of the latter system (165 degrees of freedom), some relevant mechanistic properties were recovered with the present method.
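
    The time-recovery bookkeeping at the heart of HD fits in a few lines: each biased step advances physical time by the boosted increment Δt·exp(ΔV/kT). A schematic sketch (the bias-energy callable and the example numbers are placeholders, not the authors' implementation):

    ```python
    import math

    K_B = 8.617333262e-5  # Boltzmann constant, eV/K

    def hyperdynamics_time(steps, dt, temperature, bias_energy):
        """Accumulate the recovered physical ('hyper') time over a biased run.
        bias_energy(step) -> Delta V applied at that step (eV); zero whenever
        the simulation is running plain, unbiased MD."""
        beta = 1.0 / (K_B * temperature)
        t_hyper = 0.0
        for s in range(steps):
            dV = bias_energy(s)              # 0 on plain-MD segments
            t_hyper += dt * math.exp(beta * dV)
        return t_hyper

    # Example: 10,000 steps of 1 fs at 300 K with a constant 0.1 eV boost
    print(hyperdynamics_time(10_000, 1e-15, 300.0, lambda s: 0.1))
    ```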

  4. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
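
    The per-A-scan chain that such accelerators offload is, at its core, background subtraction, resampling to uniform wavenumber, dispersion compensation, and an FFT. A compact NumPy sketch of one simplified variant (the quadratic dispersion phase and the parameter names are assumptions, not the paper's pipeline):

    ```python
    import numpy as np

    def process_bscan(spectra, k_lin, k_raw, disp_a2=0.0):
        """spectra: (n_lines, n_pix) raw spectrometer data, one row per A-scan.
        k_raw: measured (increasing) wavenumber axis; k_lin: linearized axis."""
        # 1) Remove the DC background common to all lines.
        spec = spectra - spectra.mean(axis=0, keepdims=True)
        # 2) Resample each spectrum to be uniform in wavenumber k.
        spec = np.stack([np.interp(k_lin, k_raw, row) for row in spec])
        # 3) Numerical dispersion compensation as a quadratic phase in k.
        phase = np.exp(-1j * disp_a2 * (k_lin - k_lin.mean()) ** 2)
        # 4) Inverse FFT along k gives the depth profile of each A-scan.
        return np.abs(np.fft.ifft(spec * phase, axis=1))
    ```

    On a GPU, steps 1-4 map naturally onto per-line threads plus a batched FFT, which is why the host-device memory transfer, not the arithmetic, ends up setting the maximum line rate.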

  5. Ozonation of oil sands process-affected water accelerates microbial bioremediation.

    PubMed

    Martin, Jonathan W; Barri, Thaer; Han, Xiumei; Fedorak, Phillip M; El-Din, Mohamed Gamal; Perez, Leonidas; Scott, Angela C; Jiang, Jason Tiange

    2010-11-01

    Ozonation can degrade toxic naphthenic acids (NAs) in oil sands process-affected water (OSPW), but even after extensive treatment a residual NA fraction remains. Here we hypothesized that mild ozonation would selectively oxidize the most biopersistent NA fraction, thereby accelerating subsequent NA biodegradation and toxicity removal by indigenous microbes. OSPW was ozonated to achieve approximately 50% and 75% NA degradation, and the major ozonation byproducts included oxidized NAs (i.e., hydroxy- or keto-NAs). However, oxidized NAs are already present in untreated OSPW and were shown to be formed during the microbial biodegradation of NAs. Ozonation alone did not affect OSPW toxicity, based on Microtox; however, there was a significant acceleration of toxicity removal in ozonated OSPW following inoculation with native microbes. Furthermore, all residual NAs biodegraded significantly faster in ozonated OSPW. The opposite trend was found for ozonated commercial NAs, which are known to contain no significant biopersistent fraction. Thus, we suggest that ozonation preferentially degraded the most biopersistent OSPW NA fraction, and that ozonation is complementary to the biodegradation capacity of microbial populations in OSPW. The toxicity of ozonated OSPW to higher organisms needs to be assessed, but there is promise that this technique could be applied to accelerate the bioremediation of large volumes of OSPW in Northern Alberta, Canada.

  6. Assessment of HER2 status in breast cancer biopsies is not affected by accelerated tissue processing.

    PubMed

    Bulte, Joris P; Halilovic, Altuna; Kalkman, Shona; van Cleef, Patricia H J; van Diest, Paul J; Strobbe, Luc J A; de Wilt, Johannes H W; Bult, Peter

    2018-03-01

    To establish whether core needle biopsy (CNB) specimens processed with an accelerated processing method with short fixation time can be used to determine accurately the human epidermal growth factor receptor 2 (HER2) status of breast cancer. A consecutive case-series from two high-volume breast clinics was created. We compared routine HER2 immunohistochemistry (IHC) assessment between accelerated processing CNB specimens and routinely processed postoperative excision specimens. Additional amplification-based testing was performed in cases with equivocal results. The formalin fixation time was less than 2 h and between 6 and 72 h, respectively. Fluorescence in-situ hybridisation and multiplex ligation-dependent probe amplification were used for amplification testing. One hundred and forty-four cases were included, 15 of which were HER2-positive on the routinely processed excision specimens. On the CNB specimens, 44 were equivocal on IHC and required an amplification-based test. Correlation between the CNB specimens and the corresponding excision specimens was high for final HER2 status, with an accuracy of 97% and a kappa of 0.85. HER2 status can be determined reliably on CNB specimens with accelerated processing time using standard clinical testing methods. Using this accelerated technology the minimum 6 h of formalin fixation, which current guidelines consider necessary, can be decreased safely. This allows for a complete and expedited histology-based diagnosis of breast lesions in the setting of a one-stop-shop, same-day breast clinic. © 2018 The Authors. Histopathology Published by John Wiley & Sons Ltd.

  7. Scanning probe acceleration microscopy (SPAM) in fluids: Mapping mechanical properties of surfaces at the nanoscale

    NASA Astrophysics Data System (ADS)

    Legleiter, Justin; Park, Matthew; Cusick, Brian; Kowalewski, Tomasz

    2006-03-01

    One of the major thrusts in proximal probe techniques is the combination of imaging capabilities with simultaneous measurements of physical properties. In tapping mode atomic force microscopy (TMAFM), the most straightforward way to accomplish this goal is to reconstruct the time-resolved force interaction between the tip and surface. These tip-sample forces can be used to detect interactions (e.g., binding sites) and map material properties with nanoscale spatial resolution. Here, we describe a previously unreported approach, which we refer to as scanning probe acceleration microscopy (SPAM), in which the TMAFM cantilever acts as an accelerometer to extract tip-sample forces during imaging. This method utilizes the second derivative of the deflection signal to recover the tip acceleration trajectory. The challenge in such an approach is that with real, noisy data, the second derivative of the signal is strongly dominated by the noise. This problem is solved by taking advantage of the fact that most of the information about the deflection trajectory is contained in the higher harmonics, making it possible to filter the signal by “comb” filtering, i.e., by taking its Fourier transform and inverting it while selectively retaining only the intensities at integer harmonic frequencies. Such a comb filtering method works particularly well in fluid TMAFM because of the highly distorted character of the deflection signal. Numerical simulations and in situ TMAFM experiments on supported lipid bilayer patches on mica are reported to demonstrate the validity of this approach.
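
    The comb-filtering step lends itself to a short prototype. A minimal sketch, assuming a uniformly sampled deflection trace and illustrative parameter names (not the authors' code): keep only the spectral intensity near integer harmonics of the drive frequency, then differentiate twice to recover the acceleration trajectory:

    ```python
    import numpy as np

    def comb_filter(deflection, fs, f0, n_harmonics=20, width_bins=2):
        """Zero the spectrum everywhere except near integer multiples of
        the drive frequency f0, then invert the transform."""
        n = len(deflection)
        spectrum = np.fft.rfft(deflection)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        keep = np.zeros(spectrum.shape, dtype=bool)
        for k in range(1, n_harmonics + 1):
            keep |= np.abs(freqs - k * f0) <= width_bins * fs / n
        return np.fft.irfft(spectrum * keep, n)

    def tip_acceleration(deflection, fs, f0):
        """Cantilever-as-accelerometer: second time-derivative of the
        comb-filtered deflection; multiply by the effective mass for force."""
        dt = 1.0 / fs
        filtered = comb_filter(deflection, fs, f0)
        return np.gradient(np.gradient(filtered, dt), dt)
    ```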

  8. Process mapping as a framework for performance improvement in emergency general surgery.

    PubMed

    DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad

    2017-12-01

    Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.

  9. Process mapping as a framework for performance improvement in emergency general surgery.

    PubMed

    DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad

    2018-02-01

    Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.

  10. Plasma inverse transition acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Ming

    It can be proved fundamentally from the reciprocity theorem with which electromagnetism is endowed that corresponding to each spontaneous process of radiation by a charged particle there is an inverse process which defines a unique acceleration mechanism: from Cherenkov radiation to inverse Cherenkov acceleration (ICA) [1], from Smith-Purcell radiation to inverse Smith-Purcell acceleration (ISPA) [2], and from undulator radiation to inverse undulator acceleration (IUA) [3]. There is no exception. Yet, for nearly 30 years after each of the aforementioned inverse processes was clarified for laser acceleration, inverse transition acceleration (ITA), despite speculation [4], has remained the least understood, and above all, no practical implementation of ITA had been found, until now. Unlike all its counterparts, in which phase synchronism is established one way or another such that a particle can continuously gain energy from an acceleration wave, the ITA discussed here, termed plasma inverse transition acceleration (PITA), operates under a fundamentally different principle. As a result, the discovery of PITA was delayed for decades, waiting for a conceptual breakthrough in accelerator physics: the principle of alternating gradient acceleration [5, 6, 7, 8, 9, 10]. In fact, PITA was invented [7, 8] as one of several realizations of the new principle.

  11. Field methods and data processing techniques associated with mapped inventory plots

    Treesearch

    William A. Bechtold; Stanley J. Zarnoch

    1999-01-01

    The U.S. Forest Inventory and Analysis (FIA) and Forest Health Monitoring (FHM) programs utilize a fixed-area mapped-plot design as the national standard for extensive forest inventories. The mapped-plot design is explained, as well as the rationale for its selection as the national standard. Ratio-of-means estimators are presented as a method to process data from...

  12. High-throughput SNP genotyping in Cucurbita pepo for map construction and quantitative trait loci mapping

    PubMed Central

    2012-01-01

    of these markers are located in the coding regions of genes involved in different physiological processes. The platform will also be useful for future mapping and diversity studies, and will be essential in order to accelerate the process of breeding new and better-adapted squash varieties. PMID:22356647

  13. Process mapping as a tool for home health network analysis.

    PubMed

    Pluto, Delores M; Hirshorn, Barbara A

    2003-01-01

    Process mapping is a qualitative tool that allows service providers, policy makers, researchers, and other concerned stakeholders to get a "bird's eye view" of a home health care organizational network or a very focused, in-depth view of a component of such a network. It can be used to share knowledge about community resources directed at the older population, identify gaps in resource availability and access, and promote on-going collaborative interactions that encourage systemic policy reassessment and programmatic refinement. This article is a methodological description of process mapping, which explores its utility as a practice and research tool, illustrates its use in describing service-providing networks, and discusses some of the issues that are key to successfully using this methodology.

  14. Magnetohydrodynamic Particle Acceleration Processes: SSX Experiments, Theory, and Astrophysical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Michael R.

    2006-11-16

    Project Title: Magnetohydrodynamic Particle Acceleration Processes: SSX Experiments, Theory, and Astrophysical Applications. PI: Michael R. Brown, Swarthmore College. The purpose of the project was to provide theoretical and modeling support to the Swarthmore Spheromak Experiment (SSX). Accordingly, the theoretical effort was tightly integrated into the SSX experimental effort. During the grant period, Michael Brown and his experimental collaborators at Swarthmore, with assistance from W. Matthaeus as appropriate, made substantial progress in understanding the physics of SSX plasmas.

  15. Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chitarin, G.; University of Padova, Dept. of Management and Engineering, strad. S. Nicola, 36100 Vicenza; Agostinetti, P.

    2011-09-26

    For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.

  16. Visualizing complex processes using a cognitive-mapping tool to support the learning of clinical reasoning.

    PubMed

    Wu, Bian; Wang, Minhong; Grotzer, Tina A; Liu, Jun; Johnson, Janice M

    2016-08-22

    Practical experience with clinical cases has played an important role in supporting the learning of clinical reasoning. However, learning through practical experience involves complex processes difficult to be captured by students. This study aimed to examine the effects of a computer-based cognitive-mapping approach that helps students to externalize the reasoning process and the knowledge underlying the reasoning process when they work with clinical cases. A comparison between the cognitive-mapping approach and the verbal-text approach was made by analyzing their effects on learning outcomes. Fifty-two third-year or higher students from two medical schools participated in the study. Students in the experimental group used the computer-based cognitive-mapping approach, while the control group used the verbal-text approach, to make sense of their thinking and actions when they worked with four simulated cases over 4 weeks. For each case, students in both groups reported their reasoning process (involving data capture, hypotheses formulation, and reasoning with justifications) and the underlying knowledge (involving identified concepts and the relationships between the concepts) using the given approach. The learning products (cognitive maps or verbal text) revealed that students in the cognitive-mapping group outperformed those in the verbal-text group in the reasoning process, but not in making sense of the knowledge underlying the reasoning process. No significant differences were found in a knowledge posttest between the two groups. The computer-based cognitive-mapping approach has shown a promising advantage over the verbal-text approach in improving students' reasoning performance. Further studies are needed to examine the effects of the cognitive-mapping approach in improving the construction of subject-matter knowledge on the basis of practical experience.

  17. Accelerating artificial intelligence with reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw

    Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing the computationally intense portions of an algorithm into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute cycle of traditional processors, and possibly exploiting a greater level of parallelism. Artificial intelligence is one such field, offering many different algorithms that can be accelerated. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.

  18. First USGS urban seismic hazard maps predict the effects of soils

    USGS Publications Warehouse

    Cramer, C.H.; Gomberg, J.S.; Schweig, E.S.; Waldron, B.A.; Tucker, K.

    2006-01-01

    Probabilistic and scenario urban seismic hazard maps have been produced for Memphis, Shelby County, Tennessee, covering a six-quadrangle area of the city. The nine probabilistic maps are for peak ground acceleration and 0.2 s and 1.0 s spectral acceleration and for 10%, 5%, and 2% probability of being exceeded in 50 years. Six scenario maps for these three ground motions have also been generated for both an M7.7 and an M6.2 earthquake on the southwest arm of the New Madrid seismic zone ending at Marked Tree, Arkansas. All maps include the effect of local geology. Relative to the national seismic hazard maps, the effect of the thick sediments beneath Memphis is to decrease 0.2 s probabilistic ground motions by 0-30% and increase 1.0 s probabilistic ground motions by ~100%. Probabilistic peak ground accelerations remain at levels similar to the national maps, although the ground-motion gradient across Shelby County is reduced and ground motions are more uniform within the county. The M7.7 scenario maps show ground motions similar to the 5%-in-50-year probabilistic maps. As an effect of local geology, both the M7.7 and M6.2 scenario maps show a more uniform seismic ground-motion hazard across Shelby County than scenario maps with constant site conditions (i.e., NEHRP B/C boundary).

  19. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated
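
    Both reported accuracy metrics are standard and easy to reproduce. An illustrative sketch (binary masks for the Dice similarity coefficient, surface point sets in mm for the 95% Hausdorff distance; not the authors' implementation):

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def dice(a, b):
        """DSC = 2|A∩B| / (|A| + |B|) for two binary segmentation masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hausdorff95(pts_a, pts_b):
        """95th-percentile symmetric Hausdorff distance between two point
        sets; a direct O(n*m) evaluation, fine for surface voxel lists."""
        d = cdist(pts_a, pts_b)
        return max(np.percentile(d.min(axis=1), 95),
                   np.percentile(d.min(axis=0), 95))
    ```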

  20. Analyzing collision processes with the smartphone acceleration sensor

    NASA Astrophysics Data System (ADS)

    Vogt, Patrik; Kuhn, Jochen

    2014-02-01

    It has been illustrated several times how the built-in acceleration sensors of smartphones can be used gainfully for quantitative experiments in school and university settings (see the overview in Ref. 1). The physical issues in that case are manifold and apply, for example, to free fall (Ref. 2), radial acceleration (Ref. 3), several pendula, or the exploitation of everyday contexts (Ref. 6). This paper supplements these applications and presents an experiment to study elastic and inelastic collisions. In addition to the masses of the two impact partners, their velocities before and after the collision are of importance, and these velocities can be determined by numerical integration of the measured acceleration profile.
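
    The velocity reconstruction amounts to one cumulative trapezoidal integration of the logged sensor trace. A minimal sketch (assuming gravity has already been removed from the acceleration samples):

    ```python
    import numpy as np

    def velocity_from_acceleration(t, a, v0=0.0):
        """Trapezoidal integration of an acceleration profile a(t) in m/s^2;
        read the pre- and post-collision velocities off the returned trace
        to evaluate momentum conservation or a restitution coefficient."""
        t, a = np.asarray(t, dtype=float), np.asarray(a, dtype=float)
        dv = 0.5 * (a[1:] + a[:-1]) * np.diff(t)
        return v0 + np.concatenate(([0.0], np.cumsum(dv)))
    ```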

  1. Friction Mapping as a Tool for Measuring the Elastohydrodynamic Contact Running-in Process

    DTIC Science & Technology

    2015-10-01

    ARL-TR-7501 ● OCT 2015 ● US Army Research Laboratory. Friction Mapping as a Tool for Measuring the Elastohydrodynamic Contact Running-in Process, by Stephen Berkebile, Vehicle Technology Directorate. Final report, covering 1 January-30 June 2015.

  2. Operational shoreline mapping with high spatial resolution radar and geographic processing

    USGS Publications Warehouse

    Rangoonwala, Amina; Jones, Cathleen E; Chi, Zhaohui; Ramsey, Elijah W.

    2017-01-01

    A comprehensive mapping technology was developed utilizing standard image processing and available GIS procedures to automate shoreline identification and mapping from 2 m synthetic aperture radar (SAR) HH amplitude data. The development used four NASA Uninhabited Aerial Vehicle SAR (UAVSAR) data collections between summer 2009 and 2012 and a fall 2012 collection of wetlands dominantly fronted by vegetated shorelines along the Mississippi River Delta that are beset by severe storms, toxic releases, and relative sea-level rise. In comparison to shorelines interpreted from 0.3 m and 1 m orthophotography, the automated GIS 10 m alongshore sampling found SAR shoreline mapping accuracy to be ±2 m, well within the lower range of reported shoreline mapping accuracies. The high comparability was obtained even though water levels differed between the SAR and photography image pairs and included all shorelines regardless of complexity. The SAR mapping technology is highly repeatable and extendable to other SAR instruments with similar operational functionality.

  3. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine intensity level, which can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing separately and jointly from both social and physical sensors in order to produce near real-time intensity maps and compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 2014 South Napa, CA earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time, compute the combined intensity level with low latency, and produce combined intensity maps in near-real time. The results compare three types of intensity maps created based on physical, social and combined data sources. Here we correlate

  4. Compact Plasma Accelerator

    NASA Technical Reports Server (NTRS)

    Foster, John E.

    2004-01-01

    A plasma accelerator has been conceived for both material-processing and spacecraft-propulsion applications. This accelerator generates and accelerates ions within a very small volume. Because of its compactness, this accelerator could be nearly ideal for primary or station-keeping propulsion for spacecraft having masses between 1 and 20 kg. Because this accelerator is designed to generate beams of ions having energies between 50 and 200 eV, it could also be used for surface modification or activation of thin films.

  5. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  6. The Use of Multiple Data Sources in the Process of Topographic Maps Updating

    NASA Astrophysics Data System (ADS)

    Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.

    2016-06-01

    The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agency portals. Images stored in the archives of satellite missions such as Sentinel, Landsat and others can be downloaded free of charge. The main advantages are the large coverage area and the rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research of these images on the 1:50,000 scale map. Globally available DEMs could represent an appropriate input for watershed delineation and stream network generation, which can be used as support for updating the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and Digital Terrain Models are the main products that can be used for feature extraction and update. On the other side, the use of georeferenced analogue basemaps represents a significant addition to the process. Concerning thematic maps, the classic representation of the terrain by contour lines derived from the DTM remains the best method of depicting the earth's surface on a map; nevertheless, correlation with other layers such as hydrography is mandatory. In the context of the current national coverage of the Digital Terrain Model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as TOPRO5 and at the

  7. Analyzing Collision Processes with the Smartphone Acceleration Sensor

    ERIC Educational Resources Information Center

    Vogt, Patrik; Kuhn, Jochen

    2014-01-01

    It has been illustrated several times how the built-in acceleration sensors of smartphones can be used gainfully for quantitative experiments in school and university settings (see the overview in Ref. 1 ). The physical issues in that case are manifold and apply, for example, to free fall, radial acceleration, several pendula, or the exploitation…

  8. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adapted this filter's RTL implementation to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
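
    The filter family in question has a classic software analogue: a first-order recursive (IIR) filter run causally and then anti-causally approximates convolution with a symmetric decaying exponential, applied separably per axis. A minimal CPU sketch of that signal-processing idea (the HWA's RTL specifics are not public in the abstract):

    ```python
    import numpy as np

    def exp_filter_1d(x, alpha):
        """One-pole recursive smoother: y[i] = alpha*x[i] + (1-alpha)*y[i-1]."""
        y = np.empty(len(x), dtype=float)
        acc = float(x[0])
        for i, v in enumerate(x):
            acc = alpha * v + (1.0 - alpha) * acc
            y[i] = acc
        return y

    def exp_filter_2d(img, alpha):
        """Separable approximation: a causal plus an anti-causal pass along
        each axis yields an effective symmetric exponential kernel."""
        img = np.asarray(img, dtype=float)
        for axis in (0, 1):
            img = np.apply_along_axis(lambda r: exp_filter_1d(r, alpha), axis, img)
            img = np.apply_along_axis(
                lambda r: exp_filter_1d(r[::-1], alpha)[::-1], axis, img)
        return img
    ```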

  9. The Impact of Concept Mapping on the Process of Problem-Based Learning

    ERIC Educational Resources Information Center

    Zwaal, Wichard; Otting, Hans

    2012-01-01

    A concept map is a graphical tool to activate and elaborate on prior knowledge, to support problem solving, promote conceptual thinking and understanding, and to organize and memorize knowledge. The aim of this study is to determine if the use of concept mapping (CM) in a problem-based learning (PBL) curriculum enhances the PBL process. The paper…

  10. Estimating and mapping ecological processes influencing microbial community assembly

    DOE PAGES

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; ...

    2015-05-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth.

  11. Estimating and mapping ecological processes influencing microbial community assembly

    PubMed Central

    Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.

    2015-01-01

    Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725

  12. Paleobathymetric Reconstruction of Ross Sea: seismic data processing and regional reflectors mapping

    NASA Astrophysics Data System (ADS)

    Olivo, Elisabetta; De Santis, Laura; Wardell, Nigel; Geletti, Riccardo; Busetti, Martina; Sauli, Chiara; Bergamasco, Andrea; Colleoni, Florence; Vanzella, Walter; Sorlien, Christopher; Wilson, Doug; De Conto, Robert; Powell, Ross; Bart, Phil; Luyendyk, Bruce

    2017-04-01

    PURPOSE: New maps of some major unconformities of the Ross Sea have been reconstructed by using seismic data grids from new and reprocessed seismic profiles, combined with the acoustic velocities from previous works. This work is carried out with the support of PNRA and in the frame of the bilateral Italy-USA project GLAISS (Global Sea Level Rise & Antarctic Ice Sheet Stability predictions), funded by the Ministry of Foreign Affairs. Paleobathymetric maps are reconstructed for 30, 14 and 4 million years ago, three 'key moments' for the glacial history of the Antarctic Ice Sheet coinciding with global climatic changes. The paleobathymetric maps will then be used for numerical simulations focused on the width and thickness of the Ross Sea Ice Sheet. PRELIMINARY RESULTS: The first step was to create TWT maps of three main unconformities (RSU6, RSU4, and RSU2) of the Ross Sea, revisiting and updating the ANTOSTRAT maps; through the interpretation of sedimentary bodies and erosional features, used to infer active or old processes along the slope, we identified the main seismic unconformities. We used the IHS Kingdom academic license. The different groups contributed the analysis of the Eastern Ross Sea continental slope and rise (OGS), of the Central Basin (KOPRI), and of the western and central Ross Sea (Univ. of Santa Barbara and OGS), where new drill sites and seismic profiles were collected after the publication of the ANTOSTRAT maps. Then we joined our interpretation with previous interpretations. We examined previous processing of several seismic lines and all the old acoustic velocity analyses. In addition, we reprocessed some lines in order to achieve higher data coverage. Then, combining the TWT maps of the unconformities with the old and new velocity data, we created new depth maps of the study area. The new depth maps will then be used for reconstructing the paleobathymetry of the Ross Sea by applying the backstripping technique.

  13. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; /Fermilab; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modelling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization. The Com

  14. Use of Networked Collaborative Concept Mapping To Measure Team Processes and Team Outcomes.

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; O'Neil, Harold F., Jr.; Herl, Howard E.; Dennis, Robert A.

    The feasibility of using a computer-based networked collaborative concept mapping system to measure teamwork skills was studied. A concept map is a node-link-node representation of content, where the nodes represent concepts and links represent relationships between connected concepts. Teamwork processes were examined for a group concept mapping…

  15. Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Wimmer, Michael

    2016-02-01

    With the enormous advances in acquisition technology over recent years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method for texturing a set of depth maps in a preprocessing step and stitching them at runtime has been proposed to represent large scenes. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from the image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.

  16. Application of process mapping to understand integration of high risk medicine care bundles within community pharmacy practice.

    PubMed

    Weir, Natalie M; Newham, Rosemary; Corcoran, Emma D; Ali Atallah Al-Gethami, Ashwag; Mohammed Abd Alridha, Ali; Bowie, Paul; Watson, Anne; Bennie, Marion

    2017-11-21

    The Scottish Patient Safety Programme - Pharmacy in Primary Care collaborative is a quality improvement initiative adopting the Institute of Healthcare Improvement Breakthrough Series collaborative approach. The programme developed and piloted High Risk Medicine (HRM) Care Bundles (CB), focused on warfarin and non-steroidal anti-inflammatories (NSAIDs), within 27 community pharmacies over 4 NHS Regions. Each CB involves clinical assessment and patient education, although the CB content varies between regions. To support national implementation, this study aims to understand how the pilot pharmacies integrated the HRM CBs into routine practice to inform the development of a generic HRM CB process map. Regional process maps were developed in 4 pharmacies through simulation of the CB process, staff interviews and documentation of resources. Commonalities were collated to develop a process map for each HRM, which were used to explore variation at a national event. A single, generic process map was developed which underwent validation by case study testing. The findings allowed development of a generic process map applicable to warfarin and NSAID CB implementation. Five steps were identified as required for successful CB delivery: patient identification; clinical assessment; pharmacy CB prompt; CB delivery; and documentation. The generic HRM CB process map encompasses the staff and patients' journey and the CB's integration into routine community pharmacy practice. Pharmacist involvement was required only for clinical assessment, indicating suitability for whole-team involvement. Understanding CB integration into routine practice has positive implications for successful implementation. The generic process map can be used to develop targeted resources, and/or be disseminated to facilitate CB delivery and foster whole team involvement. Similar methods could be utilised within other settings, to allow those developing novel services to distil the key processes and consider

  17. Accelerated numerical processing of electronically recorded holograms with reduced speckle noise.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2013-09-01

    The numerical reconstruction of digitally recorded holograms suffers from speckle noise. An accelerated method that uses general-purpose computing in graphics processing units to reduce that noise is shown. The proposed methodology utilizes parallelized algorithms to record, reconstruct, and superimpose multiple uncorrelated holograms of a static scene. For the best tradeoff between reduction of the speckle noise and processing time, the method records, reconstructs, and superimposes six holograms of 1024 × 1024 pixels in 68 ms; for this case, the methodology reduces the speckle noise by 58% compared with that exhibited by a single hologram. The fully parallelized method running on a commodity graphics processing unit is one order of magnitude faster than the same technique implemented on a regular CPU using its multithreading capabilities. Experimental results are shown to validate the proposal.
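
    The noise-reduction principle itself is easy to state in code: average the intensity reconstructions of N uncorrelated holograms of the same static scene, and the speckle contrast drops roughly as 1/sqrt(N) for fully developed, uncorrelated speckle (~59% for N = 6, consistent with the ~58% reported). A CPU sketch with NumPy (the paper's GPU kernels are not reproduced):

    ```python
    import numpy as np

    def superimpose(reconstructions):
        """Average a stack of N reconstructed intensity images
        (shape: N x rows x cols) of the same static scene."""
        return np.mean(np.asarray(reconstructions, dtype=float), axis=0)

    def speckle_contrast(intensity):
        """Standard speckle metric: sigma/mean over a nominally uniform
        region; ~1 for fully developed speckle, lower after averaging."""
        return intensity.std() / intensity.mean()
    ```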

  18. Spatial structure of the neck and acceleration processes in a micropinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolgov, A. N., E-mail: alnikdolgov@mail.ru; Klyachin, N. A., E-mail: NAKlyachin@mephi.ru; Prokhorovich, D. E., E-mail: prokhorovich73@mail.ru

    2016-12-15

    It is shown that the spatial structure of the micropinch neck during the transition from magnetohydrodynamic to radiative compression and the bremsstrahlung spectrum of the discharge in the photon energy range of up to 30 keV depend on the configuration of the inner electrode of the coaxial electrode system of the micropinch discharge. Analysis of the experimental results indicates that the acceleration processes in the electron component of the micropinch plasma develop earlier than radiative compression.

  19. Beyond Event Segmentation: Spatial- and Social-Cognitive Processes in Verb-to-Action Mapping

    ERIC Educational Resources Information Center

    Friend, Margaret; Pace, Amy

    2011-01-01

    The present article investigates spatial- and social-cognitive processes in toddlers' mapping of concepts to real-world events. In 2 studies we explore how event segmentation might lay the groundwork for extracting actions from the event stream and conceptually mapping novel verbs to these actions. In Study 1, toddlers demonstrated the ability to…

  20. Modeling a Material's Instantaneous Velocity during Acceleration Driven by a Detonation's Gas-Push Process

    NASA Astrophysics Data System (ADS)

    Backofen, Joseph E.

    2005-07-01

    This paper will describe both the scientific findings and the model developed in order to quantify a material's instantaneous velocity versus position, time, or the expansion ratio of an explosive's gaseous products while its gas pressure is accelerating the material. The formula derived to represent this gas-push process for the second stage of the BRIGS Two-Step Detonation Propulsion Model was found to fit the published experimental data available for twenty explosives very well. When the formula's two key parameters (the ratio Vinitial / Vfinal and ExpansionRatioFinal) were adjusted slightly from the average values that closely describe many explosives to values representing measured data for a particular explosive, the formula's representation of that explosive's gas-push process improved. The time derivative of the velocity formula, representing acceleration and/or pressure, compares favorably to Jones-Wilkins-Lee equation-of-state model calculations performed using published JWL parameters.
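
    The abstract benchmarks the model's time derivative against Jones-Wilkins-Lee (JWL) equation-of-state calculations. For reference, the standard published JWL pressure-volume form (not reproduced in the abstract) is

    ```latex
    P(V) = A\left(1 - \frac{\omega}{R_1 V}\right)e^{-R_1 V}
         + B\left(1 - \frac{\omega}{R_2 V}\right)e^{-R_2 V}
         + \frac{\omega E}{V},
    ```

    where V is the relative volume (expansion ratio), E the detonation energy per unit volume, and A, B, R_1, R_2, and ω the tabulated JWL parameters for a given explosive.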

  1. Engineering functionality gradients by dip coating process in acceleration mode.

    PubMed

    Faustini, Marco; Ceratti, Davide R; Louis, Benjamin; Boudot, Mickael; Albouy, Pierre-Antoine; Boissière, Cédric; Grosso, David

    2014-10-08

    In this work, unique functional devices exhibiting controlled gradients of properties are fabricated by a dip-coating process in acceleration mode. Through this new approach, thin films with "on-demand" thickness-graded profiles at the submillimeter scale are prepared in an easy and versatile way, compatible with large-scale production. The technique is adapted to several relevant materials, including sol-gel dense and mesoporous metal oxides, block copolymers, metal-organic framework colloids, and commercial photoresists. In the first part of the article, an investigation of the effect of dip-coating speed variation on the thickness profiles is reported, together with the critical roles played by the evaporation rate and the viscosity in the draining-induced film formation. In the second part, dip coating in acceleration mode is used to induce controlled variation of functionalities by playing on structural, chemical, or dimensional variations in nano- and microsystems. In order to demonstrate the full potential and versatility of the technique, original graded functional devices are made, including optical interferometry mirrors with bidirectional gradients, one-dimensional photonic crystals with a stop-band gradient, graded microfluidic channels, and a wetting gradient to induce droplet motion.
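
    A back-of-the-envelope sketch shows why a speed program translates into a thickness gradient: each substrate position leaves the meniscus at a single instant, so local thickness tracks the instantaneous withdrawal speed. The draining-regime power law h = k·u^(2/3) used below is a textbook Landau-Levich assumption, not this paper's calibrated model; in the capillary (evaporation-dominated) regime the trend inverts toward h ∝ 1/u:

    ```python
    import numpy as np

    def graded_thickness(times, speeds, k=1.0, alpha=2.0 / 3.0):
        """Map a dip-coater speed program u(t) onto a thickness profile h(x).
        h = k * u**alpha is an assumed draining-regime law; k lumps the
        fluid properties (viscosity, surface tension, density)."""
        t = np.asarray(times, dtype=float)
        u = np.asarray(speeds, dtype=float)
        # substrate position passing the drying line at each instant
        x = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))
        return x, k * u ** alpha

    # Example: accelerating from 0.1 to 1.0 mm/s over 60 s yields a
    # monotonic thickness gradient along the withdrawal axis.
    t = np.linspace(0.0, 60.0, 601)
    x, h = graded_thickness(t, 0.1 + 0.015 * t)
    ```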

  2. SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model

    NASA Astrophysics Data System (ADS)

    Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.

    2015-06-01

    SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code-architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature to the methodology. In this process, the one-dimensional linear equivalent analysis produces acceleration response spectra of shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of stratigraphic seismic response at different periods, grid-solving the calibrated Spectra model. In addition, the spectral topographic amplification is also computed by means of a numerical prediction model. This latter is built to match the results of the numerical simulations related to isolated reliefs using GIS topographic attributes. In this way, different sets of seismic response maps are developed, from which maps of seismic design response spectra are also derived by means of an enveloping technique.
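
    The final-optimizer step maps onto standard tooling. A sketch of Levenberg-Marquardt refinement of a parametric response-spectrum model against one trainer spectrum (`model` and its parameterization are placeholders, not SiSeRHMap's Spectra module):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def refine_spectrum_model(periods, trainer_sa, model, p0):
        """Fit model(periods, p) to a trainer acceleration response
        spectrum by minimizing the residual with method='lm'
        (Levenberg-Marquardt, as named in the abstract)."""
        residual = lambda p: model(periods, p) - trainer_sa
        return least_squares(residual, np.asarray(p0, dtype=float), method="lm")
    ```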

  3. Hot deformation constitutive equation and processing map of Alloy 690

    NASA Astrophysics Data System (ADS)

    Feng, Han; Zhang, Songchuang; Ma, Mingjuan; Song, Zhigang

    The hot deformation behavior of alloy 690 was studied in the temperature range of 800-1300 °C and the strain rate range of 0.1-10 s⁻¹ by hot compression tests in a Gleeble 1500+ thermal mechanical simulator. The results indicated that the flow stress of alloy 690 is sensitive to deformation temperature and strain rate, and that peak stress increases with decreasing temperature and increasing strain rate. In addition, the hot deformation activation parameters were calculated; the apparent activation energy of this alloy is about 300 kJ/mol. The constitutive equation which can be used to relate peak stress to the absolute temperature and strain rate was obtained. It was further found that the processing maps exhibit two domains, which are considered the optimum windows for hot working. The microstructure observations of the specimens deformed in this domain showed a fully dynamically recrystallized (DRX) structure. There was a flow instability domain in the processing map where hot working should be avoided.
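
    The abstract does not reproduce the constitutive equation itself; hot-deformation studies of this kind conventionally use the hyperbolic-sine Arrhenius form together with the Zener-Hollomon parameter, so, as an assumption about the intended form:

    ```latex
    \dot{\varepsilon} = A\,\left[\sinh(\alpha\sigma_{p})\right]^{n}
                        \exp\!\left(-\frac{Q}{RT}\right),
    \qquad
    Z = \dot{\varepsilon}\,\exp\!\left(\frac{Q}{RT}\right),
    ```

    where σ_p is the peak stress, ε̇ the strain rate, T the absolute temperature, and, per the abstract, Q ≈ 300 kJ/mol for Alloy 690; A, α, and n are fitted material constants.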

  4. Evaluation of the Intel Xeon Phi Co-processor to accelerate the sensitivity map calculation for PET imaging

    NASA Astrophysics Data System (ADS)

    Dey, T.; Rodrigue, P.

    2015-07-01

    We aim to evaluate the Intel Xeon Phi coprocessor for acceleration of 3D Positron Emission Tomography (PET) image reconstruction. We focus on the sensitivity map calculation as one computationally intensive part of PET image reconstruction, since it is a promising candidate for acceleration with the Many Integrated Core (MIC) architecture of the Xeon Phi. The computation of the voxels in the field of view (FoV) can be done in parallel, and the 10^3 to 10^4 samples needed to calculate the detection probability of each voxel can take advantage of vectorization. We use the ray tracing kernels of the Embree project to calculate the hit points of the sample rays with the detector, and in a second step the sum along the radiological path, taking attenuation into account, is determined. The core components are implemented using the Intel single instruction multiple data compiler (ISPC) to enable a portable implementation showing efficient vectorization on both the Xeon Phi and the host platform. On the Xeon Phi, the calculation of the radiological path is also implemented in hardware-specific intrinsic instructions (so-called `intrinsics') to allow manually optimized vectorization. For parallelization, both OpenMP and ISPC tasking (based on pthreads) are evaluated. Our implementation achieved a scalability factor of 0.90 on the Xeon Phi coprocessor (model 5110P) with 60 cores at 1 GHz. Only minor differences were found between parallelization with OpenMP and the ISPC tasking feature. The implementation using intrinsics was found to be about 12% faster than the portable ISPC version. With this version, a speedup of 1.43 was achieved on the Xeon Phi coprocessor compared to the host system (HP SL250s Gen8) equipped with two Xeon (E5-2670) CPUs, with 8 cores at 2.6 to 3.3 GHz each. Using a second Xeon Phi card, the speedup could be further increased to 2.77. No significant differences were found between the results of the different Xeon Phi and host implementations. The examination
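
    The per-voxel computation described above can be stated compactly. An illustrative Monte Carlo sketch (plain NumPy, not the paper's Embree/ISPC code; `sample_rays` is an assumed helper returning, per sampled ray, a geometric detection weight and the radiological path integral of the attenuation map):

    ```python
    import numpy as np

    def sensitivity_map(voxel_centers, sample_rays, n_samples=1000):
        """Estimate each FoV voxel's detection sensitivity by averaging the
        attenuated detection probability over sampled rays to the detector."""
        sens = np.empty(len(voxel_centers))
        for j, center in enumerate(voxel_centers):
            w, mu_l = sample_rays(center, n_samples)   # weights, ∫mu dl per ray
            sens[j] = np.mean(w * np.exp(-mu_l))       # attenuated probability
        return sens  # voxels are independent: the loop parallelizes/vectorizes
    ```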

  5. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  6. Staging of RF-accelerating Units in a MEMS-based Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Seidl, P. A.; Ji, Q.; Feinberg, E.; Waldron, W. L.; Schenkel, T.; Ardanuc, S.; Vinayakumar, K. B.; Lal, A.

    Multiple Electrostatic Quadrupole Array Linear Accelerators (MEQALACs) provide an opportunity to realize compact radio- frequency (RF) accelerator structures that can deliver very high beam currents. MEQALACs have been previously realized with acceleration gap distances and beam aperture sizes of the order of centimeters. Through advances in Micro-Electro-Mechanical Systems (MEMS) fabrication, MEQALACs can now be scaled down to the sub-millimeter regime and batch processed on wafer substrates. In this paper we show first results from using three RF stages in a compact MEMS-based ion accelerator. The results presented show proof-of-concept with accelerator structures formed from printed circuit boards using a 3 × 3 beamlet arrangement and noble gas ions at 10 keV. We present a simple model to describe the measured results. We also discuss some of the scaling behaviour of a compact MEQALAC. The MEMS-based approach enables a low-cost, highly versatile accelerator covering a wide range of currents (10 μA to 100 mA) and beam energies (100 keV to several MeV). Applications include ion-beam analysis, mass spectrometry, materials processing, and at very high beam powers, plasma heating.

  7. Staging of RF-accelerating Units in a MEMS-based Ion Accelerator

    DOE PAGES

    Persaud, A.; Seidl, P. A.; Ji, Q.; ...

    2017-10-26

    Multiple Electrostatic Quadrupole Array Linear Accelerators (MEQALACs) provide an opportunity to realize compact radio- frequency (RF) accelerator structures that can deliver very high beam currents. MEQALACs have been previously realized with acceleration gap distances and beam aperture sizes of the order of centimeters. Through advances in Micro-Electro-Mechanical Systems (MEMS) fabrication, MEQALACs can now be scaled down to the sub-millimeter regime and batch processed on wafer substrates. In this paper we show first results from using three RF stages in a compact MEMS-based ion accelerator. The results presented show proof-of-concept with accelerator structures formed from printed circuit boards using a 3 × 3 beamlet arrangement and noble gas ions at 10 keV. We present a simple model to describe the measured results. We also discuss some of the scaling behaviour of a compact MEQALAC. The MEMS-based approach enables a low-cost, highly versatile accelerator covering a wide range of currents (10 μA to 100 mA) and beam energies (100 keV to several MeV). Applications include ion-beam analysis, mass spectrometry, materials processing, and at very high beam powers, plasma heating.

  8. Staging of RF-accelerating Units in a MEMS-based Ion Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Persaud, A.; Seidl, P. A.; Ji, Q.

    Multiple Electrostatic Quadrupole Array Linear Accelerators (MEQALACs) provide an opportunity to realize compact radio- frequency (RF) accelerator structures that can deliver very high beam currents. MEQALACs have been previously realized with acceleration gap distances and beam aperture sizes of the order of centimeters. Through advances in Micro-Electro-Mechanical Systems (MEMS) fabrication, MEQALACs can now be scaled down to the sub-millimeter regime and batch processed on wafer substrates. In this paper we show first results from using three RF stages in a compact MEMS-based ion accelerator. The results presented show proof-of-concept with accelerator structures formed from printed circuit boards using a 3 × 3 beamlet arrangement and noble gas ions at 10 keV. We present a simple model to describe the measured results. We also discuss some of the scaling behaviour of a compact MEQALAC. The MEMS-based approach enables a low-cost, highly versatile accelerator covering a wide range of currents (10 μA to 100 mA) and beam energies (100 keV to several MeV). Applications include ion-beam analysis, mass spectrometry, materials processing, and at very high beam powers, plasma heating.

  9. In-situ plasma processing to increase the accelerating gradients of SRF cavities

    DOE PAGES

    Doleans, Marc; Afanador, Ralph; Barnhart, Debra L.; ...

    2015-12-31

    A new in-situ plasma processing technique is being developed at the Spallation Neutron Source (SNS) to improve the performance of the cavities in operation. The technique utilizes a low-density reactive oxygen plasma at room temperature to remove top surface hydrocarbons. The plasma processing technique increases the work function of the cavity surface and reduces the overall amount of vacuum and electron activity during cavity operation; in particular it increases the field emission onset, which enables cavity operation at higher accelerating gradients. Experimental evidence also suggests that the SEY of the Nb surface decreases after plasma processing, which helps mitigate multipacting issues. This article discusses the main developments from the plasma processing R&D and presents experimental results for in-situ plasma processing of dressed cavities in the SNS horizontal test apparatus.

  10. Phase quality map based on local multi-unwrapped results for two-dimensional phase unwrapping.

    PubMed

    Zhong, Heping; Tang, Jinsong; Zhang, Sen

    2015-02-01

    The efficiency of a phase unwrapping algorithm and the reliability of the corresponding unwrapped result are two key problems in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) or interferometric synthetic aperture sonar (InSAS) data. In this paper, a new phase quality map is designed and implemented in a graphics processing unit (GPU) environment, which greatly accelerates the unwrapping process of the quality-guided algorithm and enhances the correctness of the unwrapped result. In a local wrapped phase window, the center point is selected as the reference point, and then two unwrapped results are computed by integrating in two different simple ways. After the two local unwrapped results are computed, the total difference between them is regarded as the phase quality value of the center point. In order to accelerate the computation of the newly proposed quality map, we have implemented it in a GPU environment. The wrapped phase data are first uploaded to device memory, and then the kernel function is called in the device to compute the phase quality in parallel by blocks of threads. Unwrapping tests performed on simulated and real InSAS data confirm the accuracy and efficiency of the proposed method.
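
    As a concrete reading of the abstract, the following is a minimal CPU sketch in Python/NumPy of the proposed quality measure: each pixel's local window is unwrapped twice from its center (rows-then-columns and columns-then-rows) and the total difference of the two results is the quality value. The function names, window handling, and integration paths are our assumptions, not the authors' code; the GPU version evaluates the per-pixel loop in parallel thread blocks.

      import numpy as np

      def wrap(p):
          # wrap phase values into (-pi, pi]
          return (p + np.pi) % (2 * np.pi) - np.pi

      def unwrap_1d(v, k, anchor=None):
          # 1-D unwrap by summing wrapped finite differences outward from
          # index k, whose unwrapped value is fixed (to v[k] or to `anchor`)
          u = np.empty(len(v))
          u[k] = v[k] if anchor is None else anchor
          for i in range(k + 1, len(v)):
              u[i] = u[i - 1] + wrap(v[i] - v[i - 1])
          for i in range(k - 1, -1, -1):
              u[i] = u[i + 1] + wrap(v[i] - v[i + 1])
          return u

      def quality_map(phi, r=2):
          # quality of a pixel = total difference of two local unwrapped
          # results (rows-then-columns vs columns-then-rows); smaller means
          # more reliable, matching a quality-guided unwrapping order
          ny, nx = phi.shape
          n = 2 * r + 1
          q = np.full((ny, nx), np.inf)
          for i in range(r, ny - r):
              for j in range(r, nx - r):
                  w = phi[i - r:i + r + 1, j - r:j + r + 1]
                  ua, ub = np.empty((n, n)), np.empty((n, n))
                  ua[r, :] = unwrap_1d(w[r, :], r)   # center row first
                  ub[:, r] = unwrap_1d(w[:, r], r)   # center column first
                  for k in range(n):
                      ua[:, k] = unwrap_1d(w[:, k], r, anchor=ua[r, k])
                      ub[k, :] = unwrap_1d(w[k, :], r, anchor=ub[k, r])
                  q[i, j] = np.abs(ua - ub).sum()
          return q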

  11. Introduction to Particle Acceleration in the Cosmos

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Horwitz, J. L.; Perez, J.; Quenby, J.

    2005-01-01

    Accelerated charged particles have been used on Earth since 1930 to explore the very essence of matter, for industrial applications, and for medical treatments. Throughout the universe nature employs a dizzying array of acceleration processes to produce particles spanning twenty orders of magnitude in energy range, while shaping our cosmic environment. Here, we introduce and review the basic physical processes causing particle acceleration in astrophysical plasmas from geospace to the outer reaches of the cosmos. These processes are chiefly divided into four categories: adiabatic and other forms of non-stochastic acceleration, magnetic energy storage and stochastic acceleration, shock acceleration, and plasma wave and turbulent acceleration. The purpose of this introduction is to set the stage and context for the individual papers comprising this monograph.

  12. Impaired letter-string processing in developmental dyslexia: what visual-to-phonology code mapping disorder?

    PubMed

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-05-01

    Poor parallel letter-string processing in developmental dyslexia was taken as evidence of poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In particular, report of poor letter/digit-string processing but preserved symbol-string processing was viewed as evidence of poor visual-to-phonology code mapping, in line with the phonological theory of developmental dyslexia. We assessed here the visual-to-phonological-code mapping disorder hypothesis. In Experiment 1, letter-string, digit-string and colour-string processing was assessed to disentangle a phonological versus visual familiarity account of the letter/digit versus symbol dissociation. Against a visual-to-phonological-code mapping disorder but in support of a familiarity account, results showed poor letter/digit-string processing but preserved colour-string processing in dyslexic children. In Experiment 2, two tasks of letter-string report were used, one of which was performed simultaneously with a highly taxing phonological task. Results show that dyslexic children are similarly impaired in letter-string report whether or not a concurrent phonological task is performed. Taken together, these results provide strong evidence against a phonological account of poor letter-string processing in developmental dyslexia. Copyright © 2012 John Wiley & Sons, Ltd.

  13. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data volumes, traditional remote sensing image segmentation technology cannot meet the demands of massive remote sensing image processing and storage. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the MeanShift segmentation algorithm under the MapReduce model. The approach preserves segmentation quality while improving segmentation speed, thereby better meeting real-time requirements. The MapReduce-based parallel MeanShift segmentation algorithm therefore has both practical significance and realizable value.
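
    To make the decomposition concrete, here is a hedged Python sketch of a MapReduce-style MeanShift segmentation: tiles are segmented independently in the map step with a toy mean-shift filter and stitched back together in the reduce step. The tiling scheme, bandwidth, and all names are illustrative assumptions; a production system would run the mapper and reducer under a Hadoop-style framework rather than in-process.

      import numpy as np

      def mean_shift_filter(pixels, bw=0.1, iters=5):
          # toy mean shift in color space: each sample moves toward the
          # kernel-weighted mean of all samples, settling near a mode
          shifted = pixels.copy()
          for _ in range(iters):
              for k in range(len(shifted)):
                  d2 = ((shifted - shifted[k]) ** 2).sum(axis=1)
                  wts = np.exp(-d2 / (2.0 * bw * bw))
                  shifted[k] = (wts[:, None] * shifted).sum(0) / wts.sum()
          return shifted

      def mapper(tile_id, tile):
          # map step: segment one image tile independently of all others
          h, w, c = tile.shape
          modes = mean_shift_filter(tile.reshape(-1, c) / 255.0)
          yield tile_id, (modes * 255).reshape(h, w, c).astype(np.uint8)

      def reducer(keyed_tiles, tiles_y, tiles_x, th, tw, c):
          # reduce step: stitch per-tile results into one output mosaic
          out = np.zeros((tiles_y * th, tiles_x * tw, c), np.uint8)
          for (ty, tx), tile in keyed_tiles:
              out[ty * th:(ty + 1) * th, tx * tw:(tx + 1) * tw] = tile
          return out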

  14. Probing electron acceleration and x-ray emission in laser-plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thaury, C.; Ta Phuoc, K.; Corde, S.

    2013-06-15

    While laser-plasma accelerators have demonstrated a strong potential in the acceleration of electrons up to giga-electronvolt energies, few experimental tools for studying the acceleration physics have been developed. In this paper, we demonstrate a method for probing the acceleration process. A second laser beam, propagating perpendicular to the main beam, is focused on the gas jet a few nanoseconds before the main beam creates the accelerating plasma wave. This second beam is intense enough to ionize the gas and form a density depletion, which will locally inhibit the acceleration. The position of the density depletion is scanned along the interaction length to probe the electron injection and acceleration, and the betatron X-ray emission. To illustrate the potential of the method, the variation of the injection position with the plasma density is studied.

  15. Spatiotemporal processing of linear acceleration: primary afferent and central vestibular neuron responses

    NASA Technical Reports Server (NTRS)

    Angelaki, D. E.; Dickman, J. D.

    2000-01-01

    Spatiotemporal convergence and two-dimensional (2-D) neural tuning have been proposed as a major neural mechanism in the signal processing of linear acceleration. To examine this hypothesis, we studied the firing properties of primary otolith afferents and central otolith neurons that respond exclusively to horizontal linear accelerations of the head (0.16-10 Hz) in alert rhesus monkeys. Unlike primary afferents, the majority of central otolith neurons exhibited 2-D spatial tuning to linear acceleration. As a result, central otolith dynamics vary as a function of movement direction. During movement along the maximum sensitivity direction, the dynamics of all central otolith neurons differed significantly from those observed for the primary afferent population. Specifically at low frequencies (acceleration. At least three different groups of central response dynamics were described according to the properties observed for motion along the maximum sensitivity direction. "High-pass" neurons exhibited increasing gains and phase values as a function of frequency. "Flat" neurons were characterized by relatively flat gains and constant phase lags (approximately 20-55 degrees ). A few neurons ("low-pass") were characterized by decreasing gain and phase as a function of frequency. The response dynamics of central otolith neurons suggest that the approximately 90 degrees phase lags observed at low frequencies are not the result of a neural integration but rather the effect of nonminimum phase behavior, which could arise at least partly through spatiotemporal convergence. Neither afferent nor central otolith neurons discriminated between gravitational and inertial components of linear acceleration. Thus response sensitivity was indistinguishable during 0.5-Hz pitch oscillations and fore-aft movements

  16. Problem of Auroral Oval Mapping and Multiscale Auroral Structures

    NASA Astrophysics Data System (ADS)

    Antonova, Elizaveta; Stepanova, Marina; Kirpichev, Igor; Vovchenko, Vadim; Vorobjev, Viachislav; Yagodkina, Oksana

    The problem of the auroral oval mapping to the equatorial plane is reanalyzed taking into account the latest results of the analysis of plasma pressure distribution at low altitudes and at the equatorial plane. Statistical pictures of pressure distribution at low altitudes are obtained using data of DMSP observations. We obtain the statistical pictures of pressure distribution at the equatorial plane using data of the THEMIS mission. Results of THEMIS observations demonstrate the existence of a plasma ring surrounding the Earth at geocentric distances from ~6 to ~12 Re. Plasma pressure in the ring is nearly isotropic and its averaged values are larger than 0.2 nPa. We take into account that isotropic plasma pressure is constant along the field line and that the existence of field-aligned potential drops in the region of the acceleration of auroral electrons leads to pressure decrease at low altitudes. We show that most of the quiet-time auroral oval does not map to the real plasma sheet; it maps instead to the plasma ring surrounding the Earth. We also show that transverse currents in the plasma ring are closed inside the magnetosphere, forming the high-latitude continuation of the ordinary ring current. The obtained results are used to explain the ring-like form of the auroral oval. We also analyze the processes of the formation of multiscale auroral structures, including thin auroral arcs, and discuss the difficulties of the theories of Alfvénic acceleration of auroral electrons.

  17. Constitutive behavior and processing maps of low-expansion GH909 superalloy

    NASA Astrophysics Data System (ADS)

    Yao, Zhi-hao; Wu, Shao-cong; Dong, Jian-xin; Yu, Qiu-ying; Zhang, Mai-cang; Han, Guang-wei

    2017-04-01

    The hot deformation behavior of GH909 superalloy was studied systematically using isothermal hot compression tests in a temperature range of 960 to 1040°C and at strain rates from 0.02 to 10 s⁻¹ with a height reduction as large as 70%. The relations considering flow stress, temperature, and strain rate were evaluated via power-law, hyperbolic sine, and exponential constitutive equations under different strain conditions. An exponential equation was found to be the most appropriate for process modeling. The processing maps for the superalloy were constructed for strains of 0.2, 0.4, 0.6, and 0.8 on the basis of the dynamic material model, and a total processing map that includes all the investigated strains was proposed. Metallurgical instabilities in the instability domain, mainly located at higher strain rates, manifested as adiabatic shear bands and cracking. The stability domain occurred at 960-1040°C and at strain rates less than 0.2 s⁻¹; these conditions are recommended for optimum hot working of GH909 superalloy.

  18. Lightweight Hyperspectral Mapping System and a Novel Photogrammetric Processing Chain for UAV-based Sensing

    NASA Astrophysics Data System (ADS)

    Suomalainen, Juha; Franke, Jappe; Anders, Niels; Iqbal, Shahzad; Wenting, Philip; Becker, Rolf; Kooistra, Lammert

    2014-05-01

    We have developed a lightweight Hyperspectral Mapping System (HYMSY) and a novel processing chain for UAV-based mapping. The HYMSY consists of a custom pushbroom spectrometer (range 450-950 nm, FWHM 9 nm, ~20 lines/s, 328 pixels/line), a consumer camera (collecting a 16 MPix raw image every 2 seconds), a GPS-Inertial Navigation System (GPS-INS), and synchronization and data storage units. The weight of the system at take-off is 2.0 kg, allowing us to mount it on a relatively small octocopter. The novel processing chain exploits photogrammetry in the georectification process of the hyperspectral data. At the first stage the photos are processed in photogrammetric software, producing a high-resolution RGB orthomosaic, a Digital Surface Model (DSM), and photogrammetric UAV/camera position and attitude at the moment of each photo. These photogrammetric camera positions are then used to enhance the internal accuracy of the GPS-INS data. The enhanced GPS-INS data are then used to project the hyperspectral data over the photogrammetric DSM, producing a georectified end product. The presented photogrammetric processing chain allows fully automated georectification of hyperspectral data using a compact GPS-INS unit, while still producing, in UAV use, higher georeferencing accuracy than would be possible with the traditional processing method. During 2013, we operated HYMSY on 150+ octocopter flights at 60+ sites or days. On a typical flight we produced, for a 2-10 ha area, an RGB orthoimage mosaic at 1-5 cm resolution, a DSM at 5-10 cm resolution, and a hyperspectral datacube at 10-50 cm resolution. The targets have mostly been vegetated, including potatoes, wheat, sugar beets, onions, tulips, coral reefs, and heathlands. In this poster we present the Hyperspectral Mapping System and the photogrammetric processing chain with some of our first mapping results.

  19. Toward accelerating landslide mapping with interactive machine learning techniques

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Lachiche, Nicolas; Malet, Jean-Philippe; Kerle, Norman; Puissant, Anne

    2013-04-01

    Despite important advances in the development of more automated methods for landslide mapping from optical remote sensing images, the elaboration of inventory maps after major triggering events still remains a tedious task. Image classification with expert-defined rules typically still requires significant manual labour for the elaboration and adaptation of rule sets for each particular case. Machine learning algorithms, by contrast, have the ability to learn and identify complex image patterns from labelled examples but may require relatively large amounts of training data. In order to reduce the amount of required training data, active learning has evolved as a key concept to guide the sampling for applications such as document classification, genetics and remote sensing. The general underlying idea of most active learning approaches is to initialize a machine learning model with a small training set, and to subsequently exploit the model state and/or the data structure to iteratively select the most valuable samples that should be labelled by the user and added to the training set. With relatively few queries and labelled samples, an active learning strategy should ideally yield at least the same accuracy as an equivalent classifier trained with many randomly selected samples. Our study was dedicated to the development of an active learning approach for landslide mapping from VHR remote sensing images with special consideration of the spatial distribution of the samples. The developed approach is a region-based query heuristic that guides the user's attention towards a few compact spatial batches rather than distributed points, resulting in time savings of 50% and more compared to standard active learning techniques. The approach was tested with multi-temporal and multi-sensor satellite images capturing recent large-scale triggering events in Brazil and China and demonstrated balanced user's and producer's accuracies between 74% and 80%. The assessment also
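
    For orientation, here is a minimal pool-based uncertainty-sampling loop in Python (scikit-learn) illustrating the generic active-learning cycle the abstract builds on. The authors' actual contribution, region-based compact batch queries, is simplified here to plain smallest-margin batches, and all names and parameters are hypothetical.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def active_learning(X_pool, oracle_y, n_init=10, rounds=20, batch=5, seed=0):
          # start from a small random labeled set, then repeatedly label
          # the samples the current model is least certain about
          rng = np.random.default_rng(seed)
          labeled = rng.choice(len(X_pool), n_init, replace=False).tolist()
          clf = LogisticRegression(max_iter=1000)
          for _ in range(rounds):
              clf.fit(X_pool[labeled], oracle_y[labeled])
              p = np.sort(clf.predict_proba(X_pool), axis=1)
              margin = p[:, -1] - p[:, -2]      # small margin = ambiguous
              margin[labeled] = np.inf          # never re-query labeled data
              labeled += np.argsort(margin)[:batch].tolist()
          return clf, labeled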

  20. Operational SAR Data Processing in GIS Environments for Rapid Disaster Mapping

    NASA Astrophysics Data System (ADS)

    Bahr, Thomas

    2014-05-01

    The use of SAR data has become increasingly popular in recent years and in a wide array of industries. Having access to SAR can be highly important and critical, especially for public safety. Updating a GIS with contemporary information from SAR data makes it possible to deliver a reliable set of geospatial information to advance civilian operations, e.g. search and rescue missions. SAR imaging offers the great advantage, over its optical counterparts, of not being affected by darkness, meteorological conditions such as clouds, fog, etc., or smoke and dust, frequently associated with disaster zones. In this paper we present the operational processing of SAR data within a GIS environment for rapid disaster mapping. For this technique we integrated the SARscape modules for ENVI with ArcGIS®, eliminating the need to switch between software packages. Thereby the premier algorithms for SAR image analysis can be directly accessed from ArcGIS desktop and server environments. They allow processing and analyzing SAR data in almost real time and with minimum user interaction. This is exemplified by the November 2010 flash flood in the Veneto region, Italy. The Bacchiglione River burst its banks on Nov. 2nd after two days of heavy rainfall throughout the northern Italian region. The community of Bovolenta, 22 km SSE of Padova, was covered by several meters of water. People were requested to stay in their homes; several roads, highway sections and railroads had to be closed. The extent of this flooding is documented by a series of Cosmo-SkyMed acquisitions with a GSD of 2.5 m (StripMap mode). Cosmo-SkyMed is a constellation of four Earth observation satellites, allowing very frequent coverage, which enables monitoring at very high temporal resolution. This data is processed in ArcGIS using a single-sensor, multi-mode, multi-temporal approach consisting of 3 steps: (1) The single images are filtered with a Gamma DE-MAP filter. (2) The filtered images are geocoded using a reference

  1. Making clinical case-based learning in veterinary medicine visible: analysis of collaborative concept-mapping processes and reflections.

    PubMed

    Khosa, Deep K; Volet, Simone E; Bolton, John R

    2014-01-01

    The value of collaborative concept mapping in assisting students to develop an understanding of complex concepts across a broad range of basic and applied science subjects is well documented. Less is known about students' learning processes that occur during the construction of a concept map, especially in the context of clinical cases in veterinary medicine. This study investigated the unfolding collaborative learning processes that took place in real-time concept mapping of a clinical case by veterinary medical students and explored students' and their teacher's reflections on the value of this activity. This study had two parts. The first part investigated the cognitive and metacognitive learning processes of two groups of students who displayed divergent learning outcomes in a concept mapping task. Meaningful group differences were found in their level of learning engagement in terms of the extent to which they spent time understanding and co-constructing knowledge along with completing the task at hand. The second part explored students' and their teacher's views on the value of concept mapping as a learning and teaching tool. The students' and their teacher's perceptions revealed congruent and contrasting notions about the usefulness of concept mapping. The relevance of concept mapping to clinical case-based learning in veterinary medicine is discussed, along with directions for future research.

  2. Texture segmentation: do the processing units on the saliency map increase with eccentricity?

    PubMed

    Schade, Ursula; Meinecke, Cristina

    2011-01-01

    The saliency map is a computational model and has been constructed for simulating human saliency processing, e.g. pop-out target detection (e.g. Itti & Koch, 2000). In this study the spatial structure of the saliency map was investigated. It is proposed that the saliency map is structured into processing units whose size increases with retinal eccentricity. In two experiments the distance between a target in the stimulus and an irrelevant structure in the mask was varied systematically. Our findings make two main points. First, in texture segmentation tasks the saliency signals from two texture irregularities interfere when these irregularities appear within a critical spatial distance. Second, the critical distances increase with target eccentricity. The eccentricity-dependent critical distances can be interpreted as crowding effects. It is assumed that, in addition to target eccentricity, the strength of a saliency signal can also determine the spatial area of its impairing influence. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. A working environment for digital planetary data processing and mapping using ISIS and GRASS GIS

    USGS Publications Warehouse

    Frigeri, A.; Hare, T.; Neteler, M.; Coradini, A.; Federico, C.; Orosei, R.

    2011-01-01

    Since the beginning of planetary exploration, mapping has been fundamental to summarize observations returned by scientific missions. Sensor-based mapping has been used to highlight specific features from the planetary surfaces by means of processing. Interpretative mapping makes use of instrumental observations to produce thematic maps that summarize observations of actual data into a specific theme. Geologic maps, for example, are thematic interpretative maps that focus on the representation of materials and processes and their relative timing. The advancements in technology of the last 30 years have allowed us to develop specialized systems where the mapping process can be made entirely in the digital domain. The spread of networked computers on a global scale allowed the rapid propagation of software and digital data such that every researcher can now access digital mapping facilities on his desktop. The efforts to maintain planetary missions data accessible to the scientific community have led to the creation of standardized digital archives that facilitate the access to different datasets by software capable of processing these data from the raw level to the map projected one. Geographic Information Systems (GIS) have been developed to optimize the storage, the analysis, and the retrieval of spatially referenced Earth based environmental geodata; since the last decade these computer programs have become popular among the planetary science community, and recent mission data start to be distributed in formats compatible with these systems. Among all the systems developed for the analysis of planetary and spatially referenced data, we have created a working environment combining two software suites that have similar characteristics in their modular design, their development history, their policy of distribution and their support system. The first, the Integrated Software for Imagers and Spectrometers (ISIS) developed by the United States Geological Survey

  4. Trialability, observability and risk reduction accelerating individual innovation adoption decisions.

    PubMed

    Hayes, Kathryn J; Eljiz, Kathy; Dadich, Ann; Fitzgerald, Janna-Anneke; Sloan, Terry

    2015-01-01

    The purpose of this paper is to provide a retrospective analysis of computer simulation's role in accelerating individual innovation adoption decisions. The process innovation examined is Lean Systems Thinking, and the organizational context is the imaging department of an Australian public hospital. Intrinsic case study methods including observation, interviews with radiology and emergency personnel about scheduling procedures, mapping patient appointment processes and document analysis were used over three years and then complemented with retrospective interviews with key hospital staff. The multiple data sources and methods were combined in a pragmatic and reflexive manner to explore an extreme case that provides potential to act as an instructive template for effective change. Computer simulation of process change ideas offered by staff to improve patient-flow accelerated the adoption of the process changes, largely because animated computer simulation permitted experimentation (trialability), provided observable predictions of change results (observability) and minimized perceived risk. The difficulty of making accurate comparisons between time periods in a health care setting is acknowledged. This work has implications for policy, practice and theory, particularly for inducing the rapid diffusion of process innovations to address challenges facing health service organizations and national health systems. Originality/value - The research demonstrates the value of animated computer simulation in presenting the need for change, identifying options, and predicting change outcomes and is the first work to indicate the importance of trialability, observability and risk reduction in individual adoption decisions in health services.

  5. A remote sensing research agenda for mapping and monitoring biodiversity

    NASA Technical Reports Server (NTRS)

    Stoms, D. M.; Estes, J. E.

    1993-01-01

    A remote sensing research agenda designed to expand the knowledge of the spatial distribution of species richness and its ecological determinants and to predict its response to global change is proposed. Emphasis is placed on current methods of mapping species richness of both plants and animals, hypotheses concerning the biophysical factors believed to determine patterns of species richness, and anthropogenic processes causing the accelerating rate of extinctions. It is concluded that biodiversity should be incorporated more prominently into the global change and earth system science paradigms.

  6. Mapping common aphasia assessments to underlying cognitive processes and their neural substrates

    PubMed Central

    Lacey, Elizabeth H.; Skipper-Kallal, LM; Xing, S; Fama, ME; Turkeltaub, PE

    2017-01-01

    Background Understanding the relationships between clinical tests, the processes they measure, and the brain networks underlying them, is critical in order for clinicians to move beyond aphasia syndrome classification toward specification of individual language process impairments. Objective To understand the cognitive, language, and neuroanatomical factors underlying scores of commonly used aphasia tests. Methods 25 behavioral tests were administered to a group of 38 chronic left hemisphere stroke survivors and a high resolution MRI was obtained. Test scores were entered into a principal components analysis to extract the latent variables (factors) measured by the tests. Multivariate lesion-symptom mapping was used to localize lesions associated with the factor scores. Results The principal components analysis yielded four dissociable factors, which we labeled Word Finding/Fluency, Comprehension, Phonology/Working Memory Capacity, and Executive Function. While many tests loaded onto the factors in predictable ways, some relied heavily on factors not commonly associated with the tests. Lesion symptom mapping demonstrated discrete brain structures associated with each factor, including frontal, temporal, and parietal areas extending beyond the classical language network. Specific functions mapped onto brain anatomy largely in correspondence with modern neural models of language processing. Conclusions An extensive clinical aphasia assessment identifies four independent language functions, relying on discrete parts of the left middle cerebral artery territory. A better understanding of the processes underlying cognitive tests and the link between lesion and behavior may lead to improved aphasia diagnosis, and may yield treatments better targeted to an individual’s specific pattern of deficits and preserved abilities. PMID:28135902

  7. Mapping Common Aphasia Assessments to Underlying Cognitive Processes and Their Neural Substrates.

    PubMed

    Lacey, Elizabeth H; Skipper-Kallal, Laura M; Xing, Shihui; Fama, Mackenzie E; Turkeltaub, Peter E

    2017-05-01

    Understanding the relationships between clinical tests, the processes they measure, and the brain networks underlying them, is critical in order for clinicians to move beyond aphasia syndrome classification toward specification of individual language process impairments. To understand the cognitive, language, and neuroanatomical factors underlying scores of commonly used aphasia tests. Twenty-five behavioral tests were administered to a group of 38 chronic left hemisphere stroke survivors and a high-resolution magnetic resonance image was obtained. Test scores were entered into a principal components analysis to extract the latent variables (factors) measured by the tests. Multivariate lesion-symptom mapping was used to localize lesions associated with the factor scores. The principal components analysis yielded 4 dissociable factors, which we labeled Word Finding/Fluency, Comprehension, Phonology/Working Memory Capacity, and Executive Function. While many tests loaded onto the factors in predictable ways, some relied heavily on factors not commonly associated with the tests. Lesion symptom mapping demonstrated discrete brain structures associated with each factor, including frontal, temporal, and parietal areas extending beyond the classical language network. Specific functions mapped onto brain anatomy largely in correspondence with modern neural models of language processing. An extensive clinical aphasia assessment identifies 4 independent language functions, relying on discrete parts of the left middle cerebral artery territory. A better understanding of the processes underlying cognitive tests and the link between lesion and behavior may lead to improved aphasia diagnosis, and may yield treatments better targeted to an individual's specific pattern of deficits and preserved abilities.

  8. Rotation number of integrable symplectic mappings of the plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zolkin, Timofey; Nagaitsev, Sergei; Danilov, Viatcheslav

    2017-04-11

    Symplectic mappings are discrete-time analogs of Hamiltonian systems. They appear in many areas of physics, including, for example, accelerators, plasma, and fluids. Integrable mappings, a subclass of symplectic mappings, are equivalent to a Twist map, with a rotation number, constant along the phase trajectory. In this letter, we propose a succinct expression to determine the rotation number and present two examples. Similar to the period of the bounded motion in Hamiltonian systems, the rotation number is the most fundamental property of integrable maps and it provides a way to analyze the phase-space dynamics.
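
    As a quick numerical illustration of the concept (ours, not the letter's closed-form expression), the rotation number of an orbit can be estimated by averaging the wrapped polar-angle advance per iteration; here it is checked on a linear one-turn map whose tune is known.

      import numpy as np

      def rotation_number(step, x, p, n=10000):
          # average wrapped angle advance per iteration, in units of 2*pi
          theta, total = np.arctan2(p, x), 0.0
          for _ in range(n):
              x, p = step(x, p)
              t = np.arctan2(p, x)
              total += (t - theta + np.pi) % (2 * np.pi) - np.pi
              theta = t
          return abs(total) / (2 * np.pi * n)

      nu = 0.31                                   # known tune for the check
      c, s = np.cos(2 * np.pi * nu), np.sin(2 * np.pi * nu)

      def one_turn(x, p):
          # linear symplectic rotation, the integrable reference case
          return c * x + s * p, -s * x + c * p

      print(rotation_number(one_turn, 0.1, 0.0))  # prints ~0.31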

  9. Radial-Poloidal Mapping of the Energy Distribution of Electrons Accelerated by Lower Hybrid Waves in the Scrape-Off Layer

    NASA Astrophysics Data System (ADS)

    Gunn, J. P.; Petržílka, V.; Fuchs, V.; Ekedahl, A.; Goniche, M.; Hillaret, J.; Kočan, M.; Saint-Laurent, F.

    2009-11-01

    According to theory, Landau damping transfers the power carried by the high-n∥ (n∥ > 50) components of the lower hybrid (LH) wave to thermal SOL electrons and stochastically accelerates them up to a few keV [1]. What amounts to a few percent of the injected LH power is thus transported along field lines and strikes plasma facing components, leading to the formation of well known "LH hot spots." We report on the first measurements of both the energy distribution from 0 to 1 keV and the radial-poloidal distributions of the accelerated electrons using a retarding field analyzer. Two distinct electron populations are present: a cold, thermal population with temperatures between 10 and 30 eV, and a suprathermal component. Only partial attenuation of the electron flux was achieved at maximum applied voltage, indicating energies greater than 1 keV. Detailed 2D mapping of the hot spots was obtained by varying the safety factor stepwise during a single discharge. The radial width of the suprathermal electron beam at full power is rather large, at least about 5-6 cm, in contrast to Landau damping theory of the launched wave, which predicts that the radial width of the hot spots should not exceed a few millimetres [2]. The electron flux far from the grill is intermittent, with a typical burst rate of the order of 10 kHz.

  10. Performance and Environmental Test Results of the High Voltage Hall Accelerator Engineering Development Unit

    NASA Technical Reports Server (NTRS)

    Kamhawi, Hani; Haag, Thomas; Huang, Wensheng; Shastry, Rohit; Pinero, Luis; Peterson, Todd; Mathers, Alex

    2012-01-01

    NASA Science Mission Directorate's In-Space Propulsion Technology Program is sponsoring the development of a 3.5 kW-class engineering development unit Hall thruster for implementation in NASA science and exploration missions. NASA Glenn and Aerojet are developing a high-fidelity high voltage Hall accelerator that can achieve specific impulse magnitudes greater than 2,700 seconds and xenon throughput capability in excess of 300 kilograms. Performance, plume mapping, thermal characterization, and vibration tests of the high voltage Hall accelerator engineering development unit have been performed. Performance test results indicated that at 3.9 kW the thruster achieved a total thrust efficiency of 58% and a specific impulse of 2,700 seconds. Thermal characterization tests indicated that the thruster component temperatures were within the prescribed material maximum operating temperature limits during full power thruster operation. Thruster vibration tests indicated that the thruster survived the 3-axis qualification full-level random vibration test series; pre- and post-vibration performance mappings were almost identical. Finally, an update on the development progress of a power processing unit and a xenon feed system is provided.

  11. Revision of Time-Independent Probabilistic Seismic Hazard Maps for Alaska

    USGS Publications Warehouse

    Wesson, Robert L.; Boyd, Oliver S.; Mueller, Charles S.; Bufe, Charles G.; Frankel, Arthur D.; Petersen, Mark D.

    2007-01-01

    We present here time-independent probabilistic seismic hazard maps of Alaska and the Aleutians for peak ground acceleration (PGA) and 0.1, 0.2, 0.3, 0.5, 1.0 and 2.0 second spectral acceleration at probability levels of 2 percent in 50 years (annual probability of 0.000404), 5 percent in 50 years (annual probability of 0.001026) and 10 percent in 50 years (annual probability of 0.0021). These maps represent a revision of existing maps based on newly obtained data and assumptions reflecting best current judgments about methodology and approach. These maps have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States. A significant improvement relative to the 2002 methodology is the ability to include variable slip rate along a fault where appropriate. These maps incorporate new data, the responses to comments received at workshops held in Fairbanks and Anchorage, Alaska, in May, 2005, and comments received after draft maps were posted on the National Seismic Hazard Mapping Web Site. These maps will be proposed for adoption in future revisions to the International Building Code. In this documentation we describe the maps and in particular explain and justify changes that have been made relative to the 1999 maps. We are also preparing a series of experimental maps of time-dependent hazard that will be described in future documents.

  12. SHARAKU: an algorithm for aligning and clustering read mapping profiles of deep sequencing in non-coding RNA processing.

    PubMed

    Tsuchiya, Mariko; Amano, Kojiro; Abe, Masaya; Seki, Misato; Hase, Sumitaka; Sato, Kengo; Sakakibara, Yasubumi

    2016-06-15

    Deep sequencing of the transcripts of regulatory non-coding RNA generates footprints of post-transcriptional processes. After obtaining sequence reads, the short reads are mapped to a reference genome, and specific mapping patterns can be detected called read mapping profiles, which are distinct from random non-functional degradation patterns. These patterns reflect the maturation processes that lead to the production of shorter RNA sequences. Recent next-generation sequencing studies have revealed not only the typical maturation process of miRNAs but also the various processing mechanisms of small RNAs derived from tRNAs and snoRNAs. We developed an algorithm termed SHARAKU to align two read mapping profiles of next-generation sequencing outputs for non-coding RNAs. In contrast with previous work, SHARAKU incorporates the primary and secondary sequence structures into an alignment of read mapping profiles to allow for the detection of common processing patterns. Using a benchmark simulated dataset, SHARAKU exhibited superior performance to previous methods for correctly clustering the read mapping profiles with respect to 5'-end processing and 3'-end processing from degradation patterns and in detecting similar processing patterns in deriving the shorter RNAs. Further, using experimental data of small RNA sequencing for the common marmoset brain, SHARAKU succeeded in identifying the significant clusters of read mapping profiles for similar processing patterns of small derived RNA families expressed in the brain. The source code of our program SHARAKU is available at http://www.dna.bio.keio.ac.jp/sharaku/, and the simulated dataset used in this work is available at the same link. Accession code: The sequence data from the whole RNA transcripts in the hippocampus of the left brain used in this work is available from the DNA DataBank of Japan (DDBJ) Sequence Read Archive (DRA) under the accession number DRA004502. yasu@bio.keio.ac.jp Supplementary data are available

  13. Accelerating DNA analysis applications on GPU clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste

    DNA analysis is an emerging application of high performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data that need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphics Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search and on the number of matches, and poses significant challenges on current high performance software and hardware implementations. An adequate mapping of the algorithm on the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. Load balancing also plays a crucial role when considering the limited bandwidth among the nodes of these systems. In this paper we present an efficient implementation of the Aho-Corasick algorithm for high performance clusters accelerated with GPUs. We discuss how we partitioned and adapted the algorithm to fit the Tesla C1060 GPU and then present an MPI based implementation for a heterogeneous high performance cluster. We compare this implementation to MPI and MPI with pthreads based implementations for a homogeneous cluster of x86 processors, discussing the stability vs. the performance and the scaling of the solutions, taking into consideration aspects such as the bandwidth among the different
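
    Because the record centers on Aho-Corasick, a compact single-threaded Python reference of the automaton (goto trie, failure links, output sets) may help as a baseline for the GPU/MPI partitioning the paper discusses; this is our illustration, not the paper's implementation.

      from collections import deque

      def build(patterns):
          # goto trie, failure links, and per-state sets of matched patterns
          goto, fail, out = [{}], [0], [set()]
          for pat in patterns:
              s = 0
              for ch in pat:
                  if ch not in goto[s]:
                      goto.append({}); fail.append(0); out.append(set())
                      goto[s][ch] = len(goto) - 1
                  s = goto[s][ch]
              out[s].add(pat)
          q = deque(goto[0].values())          # BFS fills failure links
          while q:
              s = q.popleft()
              for ch, t in goto[s].items():
                  q.append(t)
                  f = fail[s]
                  while f and ch not in goto[f]:
                      f = fail[f]
                  fail[t] = goto[f].get(ch, 0)
                  out[t] |= out[fail[t]]
          return goto, fail, out

      def search(text, goto, fail, out):
          # single pass over the text; report (start, pattern) per match
          s, hits = 0, []
          for i, ch in enumerate(text):
              while s and ch not in goto[s]:
                  s = fail[s]
              s = goto[s].get(ch, 0)
              hits += [(i - len(p) + 1, p) for p in out[s]]
          return hits

      print(search("GCATGATTACATT", *build(["GATTACA", "TACA", "CAT"])))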

  14. Short-term acclimation to warmer temperatures accelerates leaf carbon exchange processes across plant types.

    PubMed

    Smith, Nicholas G; Dukes, Jeffrey S

    2017-11-01

    While temperature responses of photosynthesis and plant respiration are known to acclimate over time in many species, few studies have been designed to directly compare process-level differences in acclimation capacity among plant types. We assessed short-term (7 day) temperature acclimation of the maximum rate of Rubisco carboxylation (Vcmax), the maximum rate of electron transport (Jmax), the maximum rate of phosphoenolpyruvate carboxylase carboxylation (Vpmax), and foliar dark respiration (Rd) in 22 plant species that varied in lifespan (annual and perennial), photosynthetic pathway (C3 and C4), and climate of origin (tropical and nontropical) grown under fertilized, well-watered conditions. In general, acclimation to warmer temperatures increased the rate of each process. The relative increase in different photosynthetic processes varied by plant type, with C3 species tending to preferentially accelerate CO2-limited photosynthetic processes and respiration and C4 species tending to preferentially accelerate light-limited photosynthetic processes under warmer conditions. Rd acclimation to warmer temperatures caused a reduction in temperature sensitivity that resulted in slower rates at high leaf temperatures. Rd acclimation was similar across plant types. These results suggest that temperature acclimation of the biochemical processes that underlie plant carbon exchange is common across different plant types, but that acclimation to warmer temperatures tends to have a relatively greater positive effect on the processes most limiting to carbon assimilation, which differ by plant type. The acclimation responses observed here suggest that warmer conditions should lead to increased rates of carbon assimilation when water and nutrients are not limiting. © 2017 John Wiley & Sons Ltd.

  15. Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.

    PubMed

    Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei

    2013-04-01

    The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than that of conventional image reconstruction. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing by employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should make the PROPELLER-EPI technique practical for clinical use. Copyright © 2011 by the American Society of Neuroimaging.
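
    A rough NumPy sketch of a conjugate-phase (phase-demodulation) correction of the kind the abstract describes shows where the Ny-fold cost arises; the variable names, timing model, and simplifications (no readout-axis effects, magnitude output) are our assumptions rather than the authors' algorithm.

      import numpy as np

      def conjugate_phase_epi(hyb, fmap_hz, esp_s):
          # hyb     : (Ny, Nx) EPI data after FFT along readout only,
          #           indexed [phase-encode line, x position]
          # fmap_hz : (Ny, Nx) B0 field map in Hz
          # esp_s   : echo spacing, i.e. time between consecutive ky lines
          # Every output pixel demodulates the off-resonance phase accrued
          # at each line's acquisition time: an O(Ny^2) step per column,
          # the Ny-fold overhead noted above, and the part that maps
          # naturally onto parallel GPU threads.
          Ny, Nx = hyb.shape
          ky = np.fft.fftfreq(Ny)               # cycles per pixel
          t = np.arange(Ny) * esp_s             # acquisition time per line
          y = np.arange(Ny)
          img = np.empty((Ny, Nx), complex)
          for j in range(Nx):                   # columns are independent
              ph = np.exp(2j * np.pi * (np.outer(y, ky)
                                        + np.outer(fmap_hz[:, j], t)))
              img[:, j] = ph @ hyb[:, j]
          return np.abs(img)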

  16. Spotlight-Mode Synthetic Aperture Radar Processing for High-Resolution Lunar Mapping

    NASA Technical Reports Server (NTRS)

    Harcke, Leif; Weintraub, Lawrence; Yun, Sang-Ho; Dickinson, Richard; Gurrola, Eric; Hensley, Scott; Marechal, Nicholas

    2010-01-01

    During the 2008-2009 year, the Goldstone Solar System Radar was upgraded to support radar mapping of the lunar poles at 4 m resolution. The finer resolution of the new system and the accompanying migration through resolution cells called for spotlight, rather than delay-Doppler, imaging techniques. A new pre-processing system supports fast-time Doppler removal and motion compensation to a point. Two spotlight imaging techniques which compensate for phase errors due to i) out of focus-plane motion of the radar and ii) local topography, have been implemented and tested. One is based on the polar format algorithm followed by a unique autofocus technique, the other is a full bistatic time-domain backprojection technique. The processing system yields imagery of the specified resolution. Products enabled by this new system include topographic mapping through radar interferometry, and change detection techniques (amplitude and coherent change) for geolocation of the NASA LCROSS mission impact site.

  17. Marshak Lectureship: The Turkish Accelerator Center, TAC

    NASA Astrophysics Data System (ADS)

    Yavas, Omer

    2012-02-01

    The Turkish Accelerator Center (TAC) project comprises five different electron and proton accelerator complexes, to be built over 15 years with a phased approach. The Turkish Government funds the project. Currently there are 23 universities in Turkey associated with the TAC project. The currently funded project, which runs until 2013, aims *To establish a superconducting linac based infra-red free electron laser and Bremsstrahlung Facility (TARLA) at the Golbasi Campus of Ankara University, *To establish the Institute of Accelerator Technologies at Ankara University, and *To complete the Technical Design Report of TAC. The proposed facilities are a 3^rd generation Synchrotron Radiation facility, a SASE-FEL facility, a GeV-scale Proton Accelerator facility and an electron-positron collider serving as a super charm factory. In this talk, an overview of the general status and road map of the TAC project will be given. The national and regional importance of TAC will be expressed and the structure of national and international collaborations will be explained.

  18. Accelerators for E-beam and X-ray processing

    NASA Astrophysics Data System (ADS)

    Auslender, V. L.; Bryazgin, A. A.; Faktorovich, B. L.; Gorbunov, V. A.; Kokin, E. N.; Korobeinikov, M. V.; Krainov, G. S.; Lukin, A. N.; Maximov, S. A.; Nekhaev, V. E.; Panfilov, A. D.; Radchenko, V. N.; Tkachenko, V. O.; Tuvik, A. A.; Voronin, L. A.

    2002-03-01

    In recent years the demand for pasteurization and disinsection of various food products (meat, chicken, seafood, vegetables, fruits, etc.) has increased. The treatment of these products on an industrial scale requires powerful electron accelerators with energy of 5-10 MeV and beam power of at least 50 kW. The report describes the ILU accelerators with energy up to 10 MeV and beam power up to 150 kW. The different irradiation schemes in electron beam and X-ray modes for various products are described. The design of the X-ray converter and the 90° beam bending system are also given.

  19. Operation regimes of a dielectric laser accelerator

    NASA Astrophysics Data System (ADS)

    Hanuka, Adi; Schächter, Levi

    2018-04-01

    We investigate three operation regimes in dielectric laser driven accelerators: maximum efficiency, maximum charge, and maximum loaded gradient. We demonstrate, using a self-consistent approach, that loaded gradients of the order of 1 to 6 [GV/m], efficiencies of 20% to 80%, and electron flux of 10^14 [el/s] are feasible, without significant concerns regarding damage threshold fluence. The latter imposes that the total charge per squared wavelength is constant (a total of 10^6 per μm²). We conceive this configuration as a zero-order design that should be considered for the road map of future accelerators.

  20. Does Value Stream Mapping affect the structure, process, and outcome quality in care facilities? A systematic review.

    PubMed

    Nowak, Marina; Pfaff, Holger; Karbach, Ute

    2017-08-24

    Quality improvement within health and social care facilities is needed and has to be evidence-based and patient-centered. Value Stream Mapping, a method of Lean management, aims to increase the patients' value and quality of care by a visualization and quantification of the care process. The aim of this research is to examine the effectiveness of Value Stream Mapping on structure, process, and outcome quality in care facilities. A systematic review is conducted. PubMed, EBSCOhost, including Business Source Complete, Academic Search Complete, PSYCInfo, PSYNDX, SocINDEX with Full Text, Web of Knowledge, and EMBASE ScienceDirect are searched in February 2016. All peer-reviewed papers evaluating Value Stream Mapping and published in English or German from January 2000 are included. For data synthesis, all study results are categorized into Donabedian's model of structure, process, and outcome quality. To assess and interpret the effectiveness of Value Stream Mapping, the frequencies of the statistically examined results are considered. Of the 903 articles retrieved, 22 studies fulfill the inclusion criteria. Of these, 11 studies are used to answer the research question. Value Stream Mapping has positive effects on the time dimension of process and outcome quality. It seems to reduce non-value-added time (e.g., waiting time) and length of stay. All study designs are before-and-after studies without controls, and methodologically sophisticated studies are missing. For a final conclusion about Value Stream Mapping's effectiveness, more research with improved methodology is needed. Despite this lack of evidence, Value Stream Mapping has the potential to improve quality of care on the time dimension. The contextual influence has to be investigated to make conclusions about the relationship between different quality domains when applying Value Stream Mapping. However, for using this review's conclusion, the limitation of including heterogeneous and potentially biased results

  1. High Gradient Accelerator Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temkin, Richard

    The goal of the MIT program of research on high gradient acceleration is the development of advanced acceleration concepts that lead to a practical and affordable next generation linear collider at the TeV energy level. Other applications, which are more near-term, include accelerators for materials processing; medicine; defense; mining; security; and inspection. The specific goals of the MIT program are: • Pioneering theoretical research on advanced structures for high gradient acceleration, including photonic structures and metamaterial structures; evaluation of the wakefields in these advanced structures • Experimental research to demonstrate the properties of advanced structures both in low-power microwave cold test and high-power, high-gradient test at megawatt power levels • Experimental research on microwave breakdown at high gradient including studies of breakdown phenomena induced by RF electric fields and RF magnetic fields; development of new diagnostics of the breakdown process • Theoretical research on the physics and engineering features of RF vacuum breakdown • Maintaining and improving the Haimson / MIT 17 GHz accelerator, the highest frequency operational accelerator in the world, a unique facility for accelerator research • Providing the Haimson / MIT 17 GHz accelerator facility as a facility for outside users • Active participation in the US DOE program of High Gradient Collaboration, including joint work with SLAC and with Los Alamos National Laboratory; participation of MIT students in research at the national laboratories • Training the next generation of Ph.D. students in the field of accelerator physics.

  2. Mapping a Process of Negotiated Identity among Incarcerated Male Juvenile Offenders

    ERIC Educational Resources Information Center

    Abrams, Laura S.; Hyun, Anna

    2009-01-01

    Building on theories of youth identity transitions, this study maps a process of negotiated identity among incarcerated young men. Data are drawn from an ethnographic study of three juvenile correctional institutions and longitudinal semistructured interviews with facility residents. Cross-case analysis of 10 cases finds that youth offenders adapted…

  3. Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, J.H.; Michelotti, M.D.; Riemer, N.

    2016-10-01

    Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition in atmospherically relevant conditions, we demonstrate about a 50-fold increase in algorithm efficiency.
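
    A toy Python rendition of the binning idea as we read the abstract: one binomial draw per size bin using an upper-bound removal probability, followed by per-candidate rejection so the per-particle statistics stay exact. It assumes the removal rate is non-decreasing with diameter (so a bin's right edge bounds the rate inside it); all names are ours, and the published algorithm differs in detail.

      import numpy as np

      def binned_removal_step(diam, rate, dt, n_bins=32, rng=None):
          # diam: particle diameters; rate: callable removal rate [1/s],
          # assumed non-decreasing in diameter
          rng = np.random.default_rng(0) if rng is None else rng
          edges = np.geomspace(diam.min(), diam.max() * 1.001, n_bins + 1)
          bin_of = np.digitize(diam, edges) - 1
          keep = np.ones(diam.size, bool)
          for b in range(n_bins):
              idx = np.flatnonzero(bin_of == b)
              p_max = 1.0 - np.exp(-rate(edges[b + 1]) * dt)
              if idx.size == 0 or p_max == 0.0:
                  continue
              k = rng.binomial(idx.size, p_max)       # one draw per bin
              cand = rng.choice(idx, size=k, replace=False)
              p = 1.0 - np.exp(-rate(diam[cand]) * dt)
              keep[cand[rng.random(k) < p / p_max]] = False  # rejection
          return diam[keep]

      # e.g. a dry-deposition-like rate that grows with particle size:
      d = np.random.default_rng(1).lognormal(-1.0, 0.5, 100_000)
      print(binned_removal_step(d, lambda x: 0.02 * x, dt=60.0).size)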

  4. A Free Database of Auto-detected Full-sun Coronal Hole Maps

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Downs, C.; Linker, J.

    2016-12-01

    We present a 4-yr (06/10/2010 to 08/18/14 at 6-hr cadence) database of full-sun synchronic EUV and coronal hole (CH) maps made available on a dedicated web site (http://www.predsci.com/chd). The maps are generated using STEREO/EUVI A&B 195Å and SDO/AIA 193Å images through an automated pipeline (Caplan et al. 2016, ApJ 823, 53). Specifically, the original data is preprocessed with PSF-deconvolution, a nonlinear limb-brightening correction, and a nonlinear inter-instrument intensity normalization. Coronal holes are then detected in the preprocessed images using a GPU-accelerated region-growing segmentation algorithm. The final results from all three instruments are then merged and projected to form full-sun sine-latitude maps. All the software used in processing the maps is provided and can easily be adapted for use with other instruments and channels. We describe the data pipeline and show examples from the database. We also detail recent CH-detection validation experiments using synthetic EUV emission images produced from global thermodynamic MHD simulations.
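
    For the detection step, here is a serial Python sketch of two-threshold region growing of the sort used for CH detection (coronal holes are dark in EUV): strict-threshold seeds expand into 4-connected neighbours below a looser threshold. Thresholds and names are illustrative; the database's GPU pipeline is considerably more involved.

      import numpy as np
      from collections import deque

      def grow_regions(img, seed_thr, grow_thr):
          # seed_thr < grow_thr: dark seeds expand into moderately dark pixels
          mask = img < seed_thr
          q = deque(zip(*np.nonzero(mask)))
          while q:
              i, j = q.popleft()
              for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                  if (0 <= a < img.shape[0] and 0 <= b < img.shape[1]
                          and not mask[a, b] and img[a, b] < grow_thr):
                      mask[a, b] = True
                      q.append((a, b))
          return mask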

  5. MapEdit: solution to continuous raster map creation

    NASA Astrophysics Data System (ADS)

    Rančić, Dejan; Djordjevi-Kajan, Slobodanka

    2003-03-01

    The paper describes MapEdit, MS Windows software for georeferencing and rectification of scanned paper maps. The software produces continuous raster maps which can be used as background in geographical information systems. The process of continuous raster map creation using MapEdit's "mosaicking" function is also described, as well as the georeferencing and rectification algorithms used in MapEdit. Our approach to georeferencing and rectification, using four control points and two linear transformations for each scanned map part together with the nearest neighbor resampling method, represents a low-cost, high-speed solution that produces continuous raster maps of satisfactory quality for many purposes (±1 pixel). Quality assessment of several continuous raster maps at different scales created using our software and methodology has been undertaken, and the results are presented in the paper. For the quality control of the produced raster maps we referred to three widely adopted standards: the US Standard for Digital Cartographic Data, the National Standard for Spatial Data Accuracy, and the US National Map Accuracy Standard. The results obtained during the quality assessment process are given in the paper and show that our maps meet all three standards.
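
    To make the georeferencing step concrete, here is a generic Python sketch: a least-squares affine transform fitted to control points plus nearest-neighbour resampling. MapEdit itself uses four control points and two linear transformations per scanned sheet; the single-affine simplification and all names here are ours.

      import numpy as np

      def fit_affine(src, dst):
          # least-squares affine mapping src (N,2) -> dst (N,2), N >= 3
          M = np.c_[np.asarray(src, float), np.ones(len(src))]
          A, *_ = np.linalg.lstsq(M, np.asarray(dst, float), rcond=None)
          return A                                  # shape (3, 2)

      def rectify_nearest(img, geo2px, xs, ys):
          # for each output map cell, invert to a source pixel and copy it
          gx, gy = np.meshgrid(xs, ys)
          M = np.c_[gx.ravel(), gy.ravel(), np.ones(gx.size)]
          cr = np.rint(M @ geo2px).astype(int)      # (col, row) per cell
          h, w = img.shape[:2]
          ok = (cr[:, 0] >= 0) & (cr[:, 0] < w) & (cr[:, 1] >= 0) & (cr[:, 1] < h)
          out = np.zeros((len(ys), len(xs)) + img.shape[2:], img.dtype)
          flat = out.reshape((-1,) + img.shape[2:])
          flat[ok] = img[cr[ok, 1], cr[ok, 0]]
          return out

      # usage: geo2px = fit_affine(geo_pts, px_pts) drives the resampling;
      # px2geo = fit_affine(px_pts, geo_pts) gives the map coordinates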

  6. GateKeeper: a new hardware architecture for accelerating pre-alignment in DNA short read mapping.

    PubMed

    Alser, Mohammed; Hassan, Hasan; Xin, Hongyi; Ergin, Oguz; Mutlu, Onur; Alkan, Can

    2017-11-01

    High throughput DNA sequencing (HTS) technologies generate an excessive number of small DNA segments, called short reads, that cause significant computational burden. To analyze the entire genome, each of the billions of short reads must be mapped to a reference genome based on the similarity between a read and 'candidate' locations in that reference genome. The similarity measurement, called alignment, formulated as an approximate string matching problem, is the computational bottleneck because: (i) it is implemented using quadratic-time dynamic programming algorithms and (ii) the majority of candidate locations in the reference genome do not align with a given read due to high dissimilarity. Calculating the alignment of such incorrect candidate locations consumes an overwhelming majority of a modern read mapper's execution time. Therefore, it is crucial to develop a fast and effective filter that can detect incorrect candidate locations and eliminate them before invoking computationally costly alignment algorithms. We propose GateKeeper, a new hardware accelerator that functions as a pre-alignment step that quickly filters out most incorrect candidate locations. GateKeeper is the first design to accelerate pre-alignment using Field-Programmable Gate Arrays (FPGAs), which can perform pre-alignment much faster than software. When implemented on a single FPGA chip, GateKeeper maintains high accuracy (on average >96%) while providing, on average, 90-fold and 130-fold speedup over the state-of-the-art software pre-alignment techniques, Adjacency Filter and Shifted Hamming Distance (SHD), respectively. The addition of GateKeeper as a pre-alignment step can reduce the verification time of the mrFAST mapper by a factor of 10. https://github.com/BilkentCompGen/GateKeeper. mohammedalser@bilkent.edu.tr or onur.mutlu@inf.ethz.ch or calkan@cs.bilkent.edu.tr. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press
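
    The filtering idea can be illustrated with a toy serial pre-filter: count mismatches and reject a candidate as soon as the error budget is exceeded, so the expensive dynamic-programming alignment only runs on survivors. Unlike SHD or GateKeeper, this sketch does not tolerate shifts caused by indels, and all names are illustrative.

    def passes_prefilter(read, candidate, max_errors):
        """Cheap O(n) mismatch count with early exit; True means 'maybe
        aligns' and the pair goes on to full (quadratic-time) alignment."""
        mismatches = 0
        for a, b in zip(read, candidate):
            if a != b:
                mismatches += 1
                if mismatches > max_errors:
                    return False   # filtered out: skip costly alignment
        return True

    read      = "ACGTTGCAACGTTGCA"
    candidate = "ACGTTGTAACGTTGCA"
    print(passes_prefilter(read, candidate, max_errors=2))   # True -> align

    The FPGA design evaluates many such candidate comparisons in parallel in hardware, which is where the reported 90- to 130-fold speedups come from.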

  7. HiCUP: pipeline for mapping and processing Hi-C data.

    PubMed

    Wingett, Steven; Ewels, Philip; Furlan-Magaril, Mayra; Nagano, Takashi; Schoenfelder, Stefan; Fraser, Peter; Andrews, Simon

    2015-01-01

    HiCUP is a pipeline for processing sequence data generated by Hi-C and Capture Hi-C (CHi-C) experiments, which are techniques used to investigate three-dimensional genomic organisation. The pipeline maps data to a specified reference genome and removes artefacts that would otherwise hinder subsequent analysis. HiCUP also produces an easy-to-interpret yet detailed quality control (QC) report that assists in refining experimental protocols for future studies. The software is freely available and has already been used for processing Hi-C and CHi-C data in several recently published peer-reviewed studies.

  8. Accelerated Brain Aging in Schizophrenia: A Longitudinal Pattern Recognition Study.

    PubMed

    Schnack, Hugo G; van Haren, Neeltje E M; Nieuwenhuis, Mireille; Hulshoff Pol, Hilleke E; Cahn, Wiepke; Kahn, René S

    2016-06-01

    Despite the multitude of longitudinal neuroimaging studies that have been published, a basic question on the progressive brain loss in schizophrenia remains unaddressed: Does it reflect accelerated aging of the brain, or is it caused by a fundamentally different process? The authors used support vector regression, a supervised machine learning technique, to address this question. In a longitudinal sample of 341 schizophrenia patients and 386 healthy subjects with one or more structural MRI scans (1,197 in total), machine learning algorithms were used to build models to predict the age of the brain and the presence of schizophrenia ("schizophrenia score"), based on the gray matter density maps. Age at baseline ranged from 16 to 67 years, and follow-up scans were acquired between 1 and 13 years after the baseline scan. Differences between brain age and chronological age ("brain age gap") and between schizophrenia score and healthy reference score ("schizophrenia gap") were calculated. Accelerated brain aging was calculated from changes in brain age gap between two consecutive measurements. The age prediction model was validated in an independent sample. In schizophrenia patients, brain age was significantly greater than chronological age at baseline (+3.36 years) and progressively increased during follow-up (+1.24 years in addition to the baseline gap). The acceleration of brain aging was not constant: it decreased from 2.5 years/year just after illness onset to about the normal rate (1 year/year) approximately 5 years after illness onset. The schizophrenia gap also increased during follow-up, but more pronounced variability in brain abnormalities at follow-up rendered this increase nonsignificant. The progressive brain loss in schizophrenia appears to reflect two different processes: one relatively homogeneous, reflecting accelerated aging of the brain and related to various measures of outcome, and a more variable one, possibly reflecting individual variation and
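
    A hypothetical sketch of the brain-age approach using scikit-learn's support vector regression: the feature matrices below are random stand-ins for gray matter density maps, so only the shape of the computation (fit on healthy subjects, then brain-age gap = predicted minus chronological age) is meaningful.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(42)
    X_healthy = rng.normal(size=(300, 500))      # stand-in for voxel features
    age_healthy = rng.uniform(16, 67, size=300)

    model = SVR(kernel="linear", C=1.0).fit(X_healthy, age_healthy)

    X_patient = rng.normal(size=(50, 500))
    chron_age = rng.uniform(16, 67, size=50)
    brain_age_gap = model.predict(X_patient) - chron_age    # years
    # Aging acceleration: change in brain-age gap between two consecutive
    # scans divided by the scan interval (years/year).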

  9. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    USGS Publications Warehouse

    Parsons, D.R.; Jackson, P.R.; Czuba, J.A.; Engel, F.L.; Rhoads, B.L.; Oberg, K.A.; Best, J.L.; Mueller, D.S.; Johnson, K.K.; Riley, J.D.

    2013-01-01

    The use of acoustic Doppler current profilers (ADCP) for discharge measurements and three-dimensional flow mapping has increased rapidly in recent years and has been primarily driven by advances in acoustic technology and signal processing. Recent research has developed a variety of methods for processing data obtained from a range of ADCP deployments and this paper builds on this progress by describing new software for processing and visualizing ADCP data collected along transects in rivers or other bodies of water. The new utility, the Velocity Mapping Toolbox (VMT), allows rapid processing (vector rotation, projection, averaging and smoothing), visualization (planform and cross-section vector and contouring), and analysis of a range of ADCP-derived datasets. The paper documents the data processing routines in the toolbox and presents a set of diverse examples that demonstrate its capabilities. The toolbox is applicable to the analysis of ADCP data collected in a wide range of aquatic environments and is made available as open-source code along with this publication.

  10. Developing Process Maps as a Tool for a Surgical Infection Prevention Quality Improvement Initiative in Resource-Constrained Settings.

    PubMed

    Forrester, Jared A; Koritsanszky, Luca A; Amenu, Demisew; Haynes, Alex B; Berry, William R; Alemu, Seifu; Jiru, Fekadu; Weiser, Thomas G

    2018-06-01

    Surgical infections cause substantial morbidity and mortality in low- and middle-income countries (LMICs). To improve adherence to critical perioperative infection prevention standards, we developed Clean Cut, a checklist-based quality improvement program to improve compliance with best practices. We hypothesized that process mapping infection prevention activities can help clinicians identify strategies for improving surgical safety. We introduced Clean Cut at a tertiary hospital in Ethiopia. Infection prevention standards included skin antisepsis, ensuring a sterile field, instrument decontamination/sterilization, prophylactic antibiotic administration, routine swab/gauze counting, and use of a surgical safety checklist. Processes were mapped by a visiting surgical fellow and local operating theater staff to facilitate the development of contextually relevant solutions; processes were reassessed for improvements. Process mapping helped identify barriers to using alcohol-based hand solution due to skin irritation, inconsistent administration of prophylactic antibiotics due to variable delivery outside of the operating theater, inefficiencies in assuring sterility of surgical instruments through lack of confirmatory measures, and occurrences of retained surgical items through inappropriate guidelines, staffing, and training in proper routine gauze counting. Compliance with most processes improved significantly following organizational changes to align tasks with specific process goals. Enumerating the steps involved in surgical infection prevention using a process mapping technique helped identify opportunities for improving adherence and plotting contextually relevant solutions, resulting in superior compliance with antiseptic standards. Simplifying these process maps into an adaptable tool could be a powerful strategy for improving safe surgery delivery in LMICs. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  11. Topological data analysis of contagion maps for examining spreading processes on networks.

    PubMed

    Taylor, Dane; Klimm, Florian; Harrington, Heather A; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A; Mucha, Peter J

    2015-07-21

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges (for example, due to airline transportation or communication media) allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct 'contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.

  12. Topological data analysis of contagion maps for examining spreading processes on networks

    NASA Astrophysics Data System (ADS)

    Taylor, Dane; Klimm, Florian; Harrington, Heather A.; Kramár, Miroslav; Mischaikow, Konstantin; Porter, Mason A.; Mucha, Peter J.

    2015-07-01

    Social and biological contagions are influenced by the spatial embeddedness of networks. Historically, many epidemics spread as a wave across part of the Earth's surface; however, in modern contagions long-range edges (for example, due to airline transportation or communication media) allow clusters of a contagion to appear in distant locations. Here we study the spread of contagions on networks through a methodology grounded in topological data analysis and nonlinear dimension reduction. We construct 'contagion maps' that use multiple contagions on a network to map the nodes as a point cloud. By analysing the topology, geometry and dimensionality of manifold structure in such point clouds, we reveal insights to aid in the modelling, forecast and control of spreading processes. Our approach highlights contagion maps also as a viable tool for inferring low-dimensional structure in networks.
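
    A compact version of the contagion-map construction described above: seed a threshold (Watts-type) contagion at every node, record activation times, and use each node's vector of activation times across seeds as its point-cloud coordinates. The graph, threshold value, and synchronous update rule are assumptions of this sketch.

    import numpy as np
    import networkx as nx

    def activation_times(G, seed, threshold):
        """Synchronous threshold contagion: a node activates once the active
        fraction of its neighbors reaches `threshold`. Returns time per node."""
        t = {v: np.inf for v in G}
        t[seed] = 0
        active = {seed}
        step = 0
        while True:
            step += 1
            newly = {v for v in G if v not in active and G.degree(v) > 0
                     and sum(u in active for u in G.neighbors(v)) / G.degree(v)
                     >= threshold}
            if not newly:
                return t
            for v in newly:
                t[v] = step
            active |= newly

    G = nx.watts_strogatz_graph(n=100, k=6, p=0.05, seed=3)
    nodes = list(G)
    times = {s: activation_times(G, s, threshold=0.3) for s in nodes}
    # Node v's coordinates: its activation time under each seed (np.inf means
    # never reached; cap or drop such entries before any embedding step).
    cloud = np.array([[times[s][v] for s in nodes] for v in nodes])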

  13. Soil mapping and process modeling for sustainable land use management: a brief historical review

    NASA Astrophysics Data System (ADS)

    Brevik, Eric C.; Pereira, Paulo; Muñoz-Rojas, Miriam; Miller, Bradley A.; Cerdà, Artemi; Parras-Alcántara, Luis; Lozano-García, Beatriz

    2017-04-01

    Basic soil management goes back to the earliest days of agricultural practices, approximately 9,000 BCE. Through time humans developed soil management techniques of ever increasing complexity, including plows, contour tillage, terracing, and irrigation. Spatial soil patterns were being recognized as early as 3,000 BCE, but the first soil maps didn't appear until the 1700s and the first soil models finally arrived in the 1880s (Brevik et al., in press). The beginning of the 20th century saw an increase in standardization in many soil science methods and widespread soil mapping in many parts of the world, particularly in developed countries. However, the classification systems used, mapping scale, and national coverage varied considerably from country to country. Major advances were made in pedologic modeling starting in the 1940s, and in erosion modeling starting in the 1950s. In the 1970s and 1980s advances in computing power, remote and proximal sensing, geographic information systems (GIS), global positioning systems (GPS), and statistics and spatial statistics among other numerical techniques significantly enhanced our ability to map and model soils (Brevik et al., 2016). These types of advances positioned soil science to make meaningful contributions to sustainable land use management as we moved into the 21st century. References Brevik, E., Pereira, P., Muñoz-Rojas, M., Miller, B., Cerda, A., Parras-Alcantara, L., Lozano-Garcia, B. Historical perspectives on soil mapping and process modelling for sustainable land use management. In: Pereira, P., Brevik, E., Muñoz-Rojas, M., Miller, B. (eds) Soil mapping and process modelling for sustainable land use management (In press). Brevik, E., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Baumgarten, A., Jordán, A. 2016. Historical perspectives and future needs in soil mapping, classification and pedological modelling, Geoderma, 264, Part B, 256-274.

  14. Characteristics of four SPE groups with different origins and acceleration processes

    NASA Astrophysics Data System (ADS)

    Kim, R.-S.; Cho, K.-S.; Lee, J.; Bong, S.-C.; Joshi, A. D.; Park, Y.-D.

    2015-09-01

    Solar proton events (SPEs) can be categorized into four groups based on their associations with flares or CMEs, inferred from onset timings as well as acceleration patterns in multienergy observations. In this study, we have investigated whether there are any typical characteristics of associated events and acceleration sites in each group using 42 SPEs from 1997 to 2012. We find the following: (i) if the proton acceleration starts from a lower energy, an SPE has a higher chance of being a strong event (>5000 proton flux units (pfu)) even if its associated flare and/or CME are not so strong. The only difference between the SPEs associated with flares and those associated with CMEs is the location of the acceleration site. (ii) For the former (Group A), the sites are very low (~1 Rs) and close to the western limb, while the latter (Group C) have relatively higher (mean = 6.05 Rs) and wider acceleration sites. (iii) When the proton acceleration starts from the higher energy (Group B), an SPE tends to be a relatively weak event (<1000 pfu), although its associated CME is relatively stronger than in the previous groups. (iv) The SPEs characterized by simultaneous acceleration over the whole energy range within 10 min (Group D) tend to show the weakest proton flux (mean = 327 pfu) in spite of strong associated eruptions. Based on these results, we suggest that the different characteristics of SPEs are mainly due to different conditions of magnetic connectivity and particle density, which change with longitude and height as well as with their origin.

  15. Regional alveolar partial pressure of oxygen measurement with parallel accelerated hyperpolarized gas MRI.

    PubMed

    Kadlecek, Stephen; Hamedani, Hooman; Xu, Yinan; Emami, Kiarash; Xin, Yi; Ishii, Masaru; Rizi, Rahim

    2013-10-01

    Alveolar oxygen tension (Pao2) is sensitive to the interplay between local ventilation, perfusion, and alveolar-capillary membrane permeability, and thus reflects physiologic heterogeneity of healthy and diseased lung function. Several hyperpolarized helium ((3)He) magnetic resonance imaging (MRI)-based Pao2 mapping techniques have been reported, and considerable effort has gone toward reducing Pao2 measurement error. We present a new Pao2 imaging scheme, using parallel accelerated MRI, which significantly reduces measurement error. The proposed Pao2 mapping scheme was computer-simulated and was tested on both phantoms and five human subjects. Where possible, correspondence between actual local oxygen concentration and derived values was assessed for both bias (deviation from the true mean) and imaging artifact (deviation from the true spatial distribution). Phantom experiments demonstrated a significantly reduced coefficient of variation using the accelerated scheme. Simulation results support this observation and predict that correspondence between the true spatial distribution and the derived map is always superior using the accelerated scheme, although the improvement becomes less significant as the signal-to-noise ratio increases. Paired measurements in the human subjects, comparing accelerated and fully sampled schemes, show a reduced Pao2 distribution width for 41 of 46 slices. In contrast to proton MRI, acceleration of hyperpolarized imaging has no signal-to-noise penalty; its use in Pao2 measurement is therefore always beneficial. Comparison of multiple schemes shows that the benefit arises from a longer time-base during which oxygen-induced depolarization modifies the signal strength. Demonstration of the accelerated technique in human studies shows the feasibility of the method and suggests that measurement error is reduced here as well, particularly at low signal-to-noise levels. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  16. PREFACE: 3rd International Workshop on Materials Analysis and Processing in Magnetic Fields (MAP3)

    NASA Astrophysics Data System (ADS)

    Sakka, Yoshio; Hirota, Noriyuki; Horii, Shigeru; Ando, Tsutomu

    2009-07-01

    The 3rd International Workshop on Materials Analysis and Processing in Magnetic Fields (MAP3) was held on 14-16 May 2008 at the University of Tokyo, Japan. The first workshop was held in March 2004 at the National High Magnetic Field Laboratory in Tallahassee, USA; two years later the second took place in Grenoble, France. MAP3 was held as a University of Tokyo International Symposium, jointly with the MANA Workshop on Materials Processing by External Stimulation and the JSPS CORE Program of Construction of the World Center on Electromagnetic Processing of Materials. At the end of MAP3 it was decided that the next workshop, MAP4, will be held in Atlanta, USA in 2010. Processing in magnetic fields is a rapidly expanding research area with a wide range of promising applications in materials science. MAP3 focused on the magnetic field interactions involved in the study and processing of materials in all disciplines ranging from physics to chemistry and biology:
    - Magnetic field effects on chemical, physical, and biological phenomena
    - Magnetic field effects on electrochemical phenomena
    - Magnetic field effects on thermodynamic phenomena
    - Magnetic field effects on hydrodynamic phenomena
    - Magnetic field effects on crystal growth
    - Magnetic processing of materials
    - Diamagnetic levitation
    - Magneto-Archimedes effect
    - Spin chemistry
    - Application of magnetic fields to analytical chemistry
    - Magnetic orientation
    - Control of structure by magnetic fields
    - Magnetic separation and purification
    - Magnetic field-induced phase transitions
    - Materials properties in high magnetic fields
    - Development of NMR and MRI
    - Medical application of magnetic fields
    - Novel magnetic phenomena
    - Physical property measurement by magnetic fields
    - High magnetic field generation
    MAP3 consisted of 84 presentations including 16 invited talks. This volume of Journal of Physics: Conference Series contains the proceedings of MAP3, with 34 papers that provide a scientific record of the topics covered by the conference with the special topics (13 papers) in

  17. Baryon acoustic oscillation intensity mapping of dark energy.

    PubMed

    Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B; McDonald, Patrick

    2008-03-07

    The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called "dark energy." To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10^9 individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy.

  18. Phase locked multiple rings in the radiation pressure ion acceleration process

    NASA Astrophysics Data System (ADS)

    Wan, Y.; Hua, J. F.; Pai, C.-H.; Li, F.; Wu, Y. P.; Lu, W.; Zhang, C. J.; Xu, X. L.; Joshi, C.; Mori, W. B.

    2018-04-01

    Laser contrast plays a crucial role in obtaining high quality ion beams in the radiation pressure ion acceleration (RPA) process. Through one- and two-dimensional particle-in-cell (PIC) simulations, we show that a plasma with a bi-peak density profile can be produced from a thin foil under the effect of a picosecond prepulse, and that it can then lead to distinctive modulations in the ion phase space (phase locked double rings) when the main pulse interacts with the target. These fascinating ion dynamics are mainly due to the trapping effect of the ponderomotive potential well of a formed moving standing wave (i.e. the interference between the incoming pulse and the pulse reflected by a slowly moving surface) at its nodes, quite different from the standard RPA process. A theoretical model is derived to explain the underlying mechanism, and good agreement has been achieved with PIC simulations.

  19. Integrated condition monitoring of a fleet of offshore wind turbines with focus on acceleration streaming processing

    NASA Astrophysics Data System (ADS)

    Helsen, Jan; Gioia, Nicoletta; Peeters, Cédric; Jordaens, Pieter-Jan

    2017-05-01

    Particularly offshore, there is a trend to cluster wind turbines in large wind farms and, in the near future, to operate such a farm as an integrated power production plant. Predictability of individual turbine behavior across the entire fleet is key in such a strategy. Failure of turbine subcomponents should be detected well in advance to allow early planning of all necessary maintenance actions, such that they can be performed during periods of low wind and low electricity demand. In order to obtain the insights needed to predict component failure, it is necessary to have an integrated, clean dataset spanning all turbines of the fleet for a sufficiently long period of time. This paper illustrates our big-data approach for achieving this. In addition, advanced failure detection algorithms are necessary to detect failures in this dataset. This paper discusses a multi-level monitoring approach that combines machine learning with advanced physics-based signal-processing techniques. The advantage of combining different data sources to detect system degradation is the higher certainty afforded by multivariable criteria. In order to perform long-term acceleration signal processing at high frequency, a streaming processing approach is necessary: the data are analysed as the sensors generate them. This paper illustrates the streaming concept on 5 kHz acceleration data, from which a continuous spectrogram is generated. Real-life offshore wind turbine data are used. Using this streaming approach to calculate bearing failure features on continuous acceleration data will support the detection of failure propagation.
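
    A minimal sketch of the streaming idea, assuming chunked input and SciPy's spectrogram: each chunk is transformed as it arrives and its columns are appended to a continuous spectrogram, so the full raw history never has to be buffered. Chunk size, window settings, and the synthetic signal are assumptions.

    import numpy as np
    from scipy.signal import spectrogram

    FS = 5_000           # samples per second, as in the paper's example rate
    CHUNK = FS * 10      # ten seconds of samples per processing step

    def stream_chunks(n_chunks, rng=np.random.default_rng(7)):
        """Stand-in for the sensor feed: yields raw acceleration chunks."""
        for _ in range(n_chunks):
            t = np.arange(CHUNK) / FS
            yield np.sin(2 * np.pi * 120 * t) + 0.5 * rng.normal(size=CHUNK)

    columns = []
    for chunk in stream_chunks(6):
        f, t, Sxx = spectrogram(chunk, fs=FS, nperseg=1024, noverlap=512)
        columns.append(Sxx)            # each chunk adds new time columns
    continuous_spec = np.concatenate(columns, axis=1)    # freq x time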

  20. Mapping knowledge translation and innovation processes in Cancer Drug Development: the case of liposomal doxorubicin.

    PubMed

    Fajardo-Ortiz, David; Duran, Luis; Moreno, Laura; Ochoa, Hector; Castaño, Victor M

    2014-09-03

    We explored how the knowledge translation and innovation processes are structured when they result in innovations, as in the case of liposomal doxorubicin research. In order to map the processes, a literature network analysis was made with Cytoscape, and semantic analysis was performed with GOPubmed, which is based on the controlled vocabularies MeSH (Medical Subject Headings) and GO (Gene Ontology). We found clusters related to different stages of technological development (invention, innovation and imitation) and the knowledge translation process (preclinical, translational and clinical research), and we were able to map the historic emergence of Doxil as a paradigmatic nanodrug. This approach could be a powerful methodological tool for decision-making and innovation management in drug delivery research.
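
    A rough analogue of the literature-network step using networkx in place of Cytoscape: build a citation graph and extract modularity communities, which the authors interpret as development stages. The edge list here is entirely invented for illustration.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    G.add_edges_from([
        ("liposome_formulation_1987", "pegylation_method_1990"),
        ("pegylation_method_1990", "pk_study_1991"),
        ("liposome_formulation_1987", "pk_study_1991"),
        ("pk_study_1991", "phase1_trial_1994"),
        ("phase1_trial_1994", "doxil_approval_1995"),
        ("doxil_approval_1995", "generic_imitation_2009"),
    ])
    # Each community is a candidate "stage" (invention, innovation, imitation).
    for i, community in enumerate(greedy_modularity_communities(G)):
        print(f"cluster {i}: {sorted(community)}")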

  1. Multitemporal ALSM change detection, sediment delivery, and process mapping at an active earthflow

    USGS Publications Warehouse

    DeLong, Stephen B.; Prentice, Carol S.; Hilley, George E.; Ebert, Yael

    2012-01-01

    Remote mapping and measurement of surface processes at high spatial resolution is among the frontiers in Earth surface process research. Remote measurements that allow meter-scale mapping of landforms and quantification of landscape change can revolutionize the study of landscape evolution on human timescales. At Mill Gulch in northern California, USA, an active earthflow was surveyed in 2003 and 2007 by airborne laser swath mapping (ALSM), enabling meter-scale quantification of landscape change. We calculate four-year volumetric flux from the earthflow and compare it to long-term catchment average erosion rates from cosmogenic radionuclide inventories from adjacent watersheds. We also present detailed maps of changing features on the earthflow, from which we can derive velocity estimates and infer dominant process. These measurements rely on proper digital elevation model (DEM) generation and a simple surface-matching technique to align the multitemporal data in a manner that eliminates systematic error in either dataset. The mean surface elevation of the earthflow and an opposite slope that was directly influenced by the earthflow decreased 14 ± 1 mm/yr from 2003 to 2007. By making the conservative assumption that these features were the dominant contributor of sediment flux from the entire Mill Gulch drainage basin during this time interval, we calculate a minimum catchment-averaged erosion rate of 0.30 ± 0.02 mm/yr. Analysis of beryllium-10 (10Be) concentrations in fluvial sand from nearby Russian Gulch and the South Fork Gualala River provide catchment averaged erosion rates of 0.21 ± 0.04 and 0.23 ± 0.03 mm/yr respectively. From translated landscape features, we can infer surface velocities ranging from 0.5 m/yr in the wide upper 'source' portion of the flow to 5 m/yr in the narrow middle 'transport' portion of the flow. This study re-affirms the importance of mass wasting processes in the sediment budgets of
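
    The headline numbers follow from simple DEM differencing, sketched below with invented grids: a mean lowering of 56 mm over four years gives 14 mm/yr, and summing the change over the cell area gives a volumetric flux. Grid size, cell area, and noise level are assumptions of this sketch.

    import numpy as np

    rng = np.random.default_rng(5)
    dem_2003 = rng.normal(100.0, 5.0, size=(500, 500))        # meters
    dem_2007 = dem_2003 - 0.056 + rng.normal(0, 0.01, size=(500, 500))

    cell_area = 1.0 * 1.0        # m^2 per DEM cell (assumed 1 m grid)
    years = 4.0
    dz = dem_2007 - dem_2003                        # surface change, meters
    rate_mm_per_yr = dz.mean() / years * 1000.0     # ~ -14 mm/yr lowering
    volume_flux = -dz.sum() * cell_area / years     # m^3/yr exported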

  2. Generating clock signals for a cycle accurate, cycle reproducible FPGA based hardware accelerator

    DOEpatents

    Asaad, Sameth W.; Kapur, Mohit

    2016-01-05

    A method, system and computer program product are disclosed for generating clock signals for a cycle accurate FPGA based hardware accelerator used to simulate operations of a device-under-test (DUT). In one embodiment, the DUT includes multiple device clocks generating multiple device clock signals at multiple frequencies and at a defined frequency ratio; and the FPGA hardware accelerator includes multiple accelerator clocks generating multiple accelerator clock signals to operate the FPGA hardware accelerator to simulate the operations of the DUT. In one embodiment, operations of the DUT are mapped to the FPGA hardware accelerator, and the accelerator clock signals are generated at multiple frequencies and at the defined frequency ratio of the frequencies of the multiple device clocks, to maintain cycle accuracy between the DUT and the FPGA hardware accelerator. In an embodiment, the FPGA hardware accelerator may be used to control the frequencies of the multiple device clocks.

  3. Concept of a spatial data infrastructure for web-mapping, processing and service provision for geo-hazards

    NASA Astrophysics Data System (ADS)

    Weinke, Elisabeth; Hölbling, Daniel; Albrecht, Florian; Friedl, Barbara

    2017-04-01

    Geo-hazards and their effects are distributed geographically over wide regions. The effective mapping and monitoring is essential for hazard assessment and mitigation. It is often best achieved using satellite imagery and new object-based image analysis approaches to identify and delineate geo-hazard objects (landslides, floods, forest fires, storm damages, etc.). At the moment, several local/national databases and platforms provide and publish data of different types of geo-hazards as well as web-based risk maps and decision support systems. Also, the European commission implemented the Copernicus Emergency Management Service (EMS) in 2015 that publishes information about natural and man-made disasters and risks. Currently, no platform for landslides or geo-hazards as such exists that enables the integration of the user in the mapping and monitoring process. In this study we introduce the concept of a spatial data infrastructure for object delineation, web-processing and service provision of landslide information with the focus on user interaction in all processes. A first prototype for the processing and mapping of landslides in Austria and Italy has been developed within the project Land@Slide, funded by the Austrian Research Promotion Agency FFG in the Austrian Space Applications Program ASAP. The spatial data infrastructure and its services for the mapping, processing and analysis of landslides can be extended to other regions and to all types of geo-hazards for analysis and delineation based on Earth Observation (EO) data. The architecture of the first prototypical spatial data infrastructure includes four main areas of technical components. The data tier consists of a file storage system and the spatial data catalogue for the management of EO-data, other geospatial data on geo-hazards, as well as descriptions and protocols for the data processing and analysis. An interface to extend the data integration from external sources (e.g. Sentinel-2 data) is planned

  4. Diffusive shock acceleration - Acceleration rate, magnetic-field direction and the diffusion limit

    NASA Technical Reports Server (NTRS)

    Jokipii, J. R.

    1992-01-01

    This paper reviews the concept of diffusive shock acceleration, showing that the acceleration of charged particles at a collisionless shock is a straightforward consequence of the standard cosmic-ray transport equation, provided that one treats the discontinuity at the shock correctly. This is true for an arbitrary direction of the upstream magnetic field. Within this framework, it is shown that acceleration at perpendicular or quasi-perpendicular shocks is generally much faster than at parallel shocks. Paradoxically, it follows also that, for a simple scattering law, the acceleration is faster for less scattering, that is, for a larger mean free path. Obviously, the mean free path cannot become too large or the diffusion limit becomes inapplicable. Gradient and curvature drifts caused by the magnetic-field change at the shock play a major role in the acceleration process in most cases. Recent observations of the charge state of the anomalous component are shown to require the faster acceleration at the quasi-perpendicular solar-wind termination shock.
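
    The statements above correspond to standard expressions of diffusive shock acceleration theory (textbook forms, not reproduced from this paper): the acceleration timescale with upstream/downstream flow speeds u_1, u_2 and diffusion coefficients kappa_1, kappa_2, and the classical-scattering perpendicular diffusion coefficient in terms of the mean free path lambda and gyroradius r_g:

    t_{\mathrm{acc}} = \frac{3}{u_1 - u_2}\left(\frac{\kappa_1}{u_1} + \frac{\kappa_2}{u_2}\right),
    \qquad \kappa_\parallel = \frac{v\,\lambda}{3},
    \qquad \kappa_\perp \simeq \frac{\kappa_\parallel}{1 + (\lambda/r_g)^2}.

    For \lambda \gg r_g the perpendicular coefficient falls as \kappa_\perp \approx v r_g^2/(3\lambda), so a larger mean free path (less scattering) lowers \kappa_\perp and shortens t_acc at quasi-perpendicular shocks, which is the sense of the paradox noted in the abstract.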

  5. Accelerated construction

    DOT National Transportation Integrated Search

    2004-01-01

    Accelerated Construction Technology Transfer (ACTT) is a strategic process that uses various innovative techniques, strategies, and technologies to minimize actual construction time, while enhancing quality and safety on today's large, complex multip...

  6. Accelerating Project and Process Improvement using Advanced Software Simulation Technology: From the Office to the Enterprise

    DTIC Science & Technology

    2010-04-29

    Smith, Larry. Software Technology Support Center, 517 SMXS/MXDEA, 6022 Fir Avenue, Hill AFB, UT 84056.

  7. A whole body vibration perception map and associated acceleration loads at the lower leg, hip and head.

    PubMed

    Sonza, Anelise; Völkel, Nina; Zaro, Milton A; Achaval, Matilde; Hennig, Ewald M

    2015-07-01

    Whole-body vibration (WBV) training has become popular in recent years. However, WBV may be harmful to the human body. The goal of this study was to determine the acceleration magnitudes at different body segments for different frequencies of WBV. Additionally, vibration sensation ratings by subjects served to create perceived vibration magnitude and discomfort maps of the human body. In the first of two experiments, 65 young adults with a mean (±SD) age of 23 (±3.0) years participated in WBV severity perception ratings, based on a Borg scale. Measurements were performed at 12 different frequencies and two intensities (3 and 5 mm amplitude) of rotational mode WBV. On a separate day, a second experiment (n = 40) included vertical accelerometry of the head, hip and lower leg with the same WBV settings. The highest lower limb vibration magnitude perception based on the Borg scale was extremely intense for the frequencies between 21 and 25 Hz; somewhat hard for the trunk region (11-25 Hz) and fairly light for the head (13-25 Hz). The highest vertical accelerations were found at a frequency of 23 Hz at the tibia, 9 Hz at the hip and 13 Hz at the head. At 5 mm amplitude, 61.5% of the subjects reported discomfort in the foot region (21-25 Hz), 46.2% for the lower back (17, 19 and 21 Hz) and 23% for the abdominal region (9-13 Hz). The range of 3-7 Hz represents the safest frequency range, with magnitudes less than 1 g·s for all studied regions. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  8. MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control

    NASA Astrophysics Data System (ADS)

    Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming

    2017-09-01

    The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, quality-related data of large volume and variety generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce step. Relying on the ability of Bayesian networks to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, Bayesian networks of the factors affecting quality are built from prior probability distributions and updated with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed scales almost linearly with the number of computing nodes. The proposed model is also shown to be feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision making for precision problem solving. The integration of big data analytics and Bayesian network methods offers a whole new perspective on manufacturing quality control.
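
    A minimal map/reduce sketch of the parallel-counting step behind Bayesian network parameter learning, using Python's multiprocessing pool as a stand-in for Hadoop: mappers count (parent state, child state) pairs on their shard, and the reducer merges the counts into a conditional probability table. Data and variable names are invented.

    from collections import Counter
    from multiprocessing import Pool
    import random

    def mapper(shard):
        """Count (parent_state, child_state) pairs on one data shard."""
        return Counter(shard)

    def reducer(counters):
        """Merge shard counts and normalize into P(child | parent)."""
        total = Counter()
        for c in counters:
            total.update(c)
        cpt = {}
        for (parent, child), n in total.items():
            cpt.setdefault(parent, {})[child] = n
        for row in cpt.values():
            s = sum(row.values())
            for child in row:
                row[child] /= s
        return cpt

    if __name__ == "__main__":
        random.seed(0)
        data = [(random.choice("AB"), random.choice("XY")) for _ in range(40_000)]
        shards = [data[i::4] for i in range(4)]          # four "mappers"
        with Pool(4) as pool:
            print(reducer(pool.map(mapper, shards)))

    Because the per-shard counts merge associatively, adding nodes splits the counting work cleanly, which is consistent with the near-linear scaling reported in the case study.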

  9. Accelerated 1H MRSI using randomly undersampled spiral-based k-space trajectories.

    PubMed

    Chatnuntawech, Itthi; Gagoski, Borjan; Bilgic, Berkin; Cauley, Stephen F; Setsompop, Kawin; Adalsteinsson, Elfar

    2014-07-30

    To develop and evaluate the performance of an acquisition and reconstruction method for accelerated MR spectroscopic imaging (MRSI) through undersampling of spiral trajectories. A randomly undersampled spiral acquisition and sensitivity encoding (SENSE) with total variation (TV) regularization, random SENSE+TV, is developed and evaluated on single-slice numerical phantom, in vivo single-slice MRSI, and in vivo three-dimensional (3D)-MRSI at 3 Tesla. Random SENSE+TV was compared with five alternative methods for accelerated MRSI. For the in vivo single-slice MRSI, random SENSE+TV yields up to 2.7 and 2 times reduction in root-mean-square error (RMSE) of reconstructed N-acetyl aspartate (NAA), creatine, and choline maps, compared with the denoised fully sampled and uniformly undersampled SENSE+TV methods with the same acquisition time, respectively. For the in vivo 3D-MRSI, random SENSE+TV yields up to 1.6 times reduction in RMSE, compared with uniform SENSE+TV. Furthermore, by using random SENSE+TV, we have demonstrated on the in vivo single-slice and 3D-MRSI that acceleration factors of 4.5 and 4 are achievable with the same quality as the fully sampled data, as measured by RMSE of reconstructed NAA map, respectively. With the same scan time, random SENSE+TV yields lower RMSEs of metabolite maps than other methods evaluated. Random SENSE+TV achieves up to 4.5-fold acceleration with comparable data quality as the fully sampled acquisition. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
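
    A toy one-dimensional illustration of why random undersampling pairs well with sparsity-regularized reconstruction (here a plain L1/ISTA solver rather than the paper's SENSE+TV pipeline): with the same 4x sampling budget, coherent aliasing makes the uniformly undersampled problem ambiguous, while the randomly undersampled one recovers the sparse signal. Signal, masks, and parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 256
    x = np.zeros(n)
    x[rng.choice(n, size=8, replace=False)] = rng.uniform(1, 2, size=8)
    k_full = np.fft.fft(x)

    uniform = (np.arange(n) % 4 == 0)                 # every 4th k-space sample
    random_ = np.zeros(n, bool)
    random_[rng.choice(n, size=n // 4, replace=False)] = True   # same budget

    def ista(mask, iters=300, lam=0.02):
        """L1-regularized recon via iterative soft thresholding."""
        y = np.where(mask, k_full, 0)
        z = np.zeros(n)
        for _ in range(iters):
            grad = np.real(np.fft.ifft(np.where(mask, np.fft.fft(z), 0) - y))
            z = z - grad
            z = np.sign(z) * np.maximum(np.abs(z) - lam, 0)  # soft threshold
        return z

    for name, mask in [("uniform", uniform), ("random", random_)]:
        err = np.linalg.norm(ista(mask) - x) / np.linalg.norm(x)
        print(name, f"relative error = {err:.3f}")   # random is far lower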

  10. Recommendations for improved and coherent acquisition and processing of backscatter data from seafloor-mapping sonars

    NASA Astrophysics Data System (ADS)

    Lamarche, Geoffroy; Lurton, Xavier

    2018-06-01

    Multibeam echosounders are becoming widespread for the purposes of seafloor bathymetry mapping, but the acquisition and the use of seafloor backscatter measurements, acquired simultaneously with the bathymetric data, are still insufficiently understood, controlled and standardized. This presents an obstacle to well-accepted, standardized analysis and application by end users. The Marine Geological and Biological Habitat Mapping group (Geohab.org) has long recognized the need for better coherence and common agreement on acquisition, processing and interpretation of seafloor backscatter data, and established the Backscatter Working Group (BSWG) in May 2013. This paper presents an overview of this initiative, the mandate, structure and program of the working group, and a synopsis of the BSWG Guidelines and Recommendations to date. The paper includes (1) an overview of the current status in sensors and techniques available in seafloor backscatter data from multibeam sonars; (2) the presentation of the BSWG structure and results; (3) recommendations to operators, end-users, sonar manufacturers, and software developers using sonar backscatter for seafloor-mapping applications, for best practice methods and approaches for data acquisition and processing; and (4) a discussion on the development needs for future systems and data processing. We propose for the first time a nomenclature of backscatter processing levels that affords a means to accurately and efficiently describe the data processing status, and to facilitate comparisons of final products from various origins.

  11. Vacuum Brazing of Accelerator Components

    NASA Astrophysics Data System (ADS)

    Singh, Rajvir; Pant, K. K.; Lal, Shankar; Yadav, D. P.; Garg, S. R.; Raghuvanshi, V. K.; Mundra, G.

    2012-11-01

    Commonly used materials for accelerator components are those which are vacuum compatible and thermally conductive. Stainless steel, aluminum and copper are common among them. Stainless steel is a poor heat conductor and is not commonly used where good thermal conductivity is required. Aluminum and copper and their alloys meet the above requirements and are frequently used for this purpose. Fabricating accelerator components from aluminum and its alloys by welding is now common practice. Copper and its various grades are mandatory in the RF devices required for accelerators. Beam line and front-end components of the accelerators are fabricated from stainless steel and OFHC copper. Fabricating copper components by welding is very difficult and in most cases impossible; fabrication and joining in such cases is possible using brazing, especially under vacuum or an inert gas atmosphere. Several accelerator components have been vacuum brazed for the Indus projects at Raja Ramanna Centre for Advanced Technology (RRCAT), Indore, using the vacuum brazing facility available there. This paper presents details of the development of these high-value, strategic components and assemblies, including the basics of vacuum brazing, the vacuum brazing facility, joint design, fixturing of the jobs, selection of filler alloys, and optimization of brazing parameters to obtain high-quality brazed joints, with brief descriptions of vacuum brazed accelerator components.

  12. Phase locked multiple rings in the radiation pressure ion acceleration process

    DOE PAGES

    Wan, Y.; Hua, J. F.; Pai, C. -H.; ...

    2018-03-05

    Laser contrast plays a crucial role in obtaining high quality ion beams in the radiation pressure ion acceleration (RPA) process. Through one- and two-dimensional particle-in-cell (PIC) simulations, we show that a plasma with a bi-peak density profile can be produced from a thin foil under the effect of a picosecond prepulse, and that it can then lead to distinctive modulations in the ion phase space (phase locked double rings) when the main pulse interacts with the target. These fascinating ion dynamics are mainly due to the trapping effect of the ponderomotive potential well of a formed moving standing wave (i.e. the interference between the incoming pulse and the pulse reflected by a slowly moving surface) at its nodes, quite different from the standard RPA process. Here, a theoretical model is derived to explain the underlying mechanism, and good agreement has been achieved with PIC simulations.

  13. Phase locked multiple rings in the radiation pressure ion acceleration process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Y.; Hua, J. F.; Pai, C. -H.

    Laser contrast plays a crucial role in obtaining high quality ion beams in the radiation pressure ion acceleration (RPA) process. Through one- and two-dimensional particle-in-cell (PIC) simulations, we show that a plasma with a bi-peak density profile can be produced from a thin foil under the effect of a picosecond prepulse, and that it can then lead to distinctive modulations in the ion phase space (phase locked double rings) when the main pulse interacts with the target. These fascinating ion dynamics are mainly due to the trapping effect of the ponderomotive potential well of a formed moving standing wave (i.e. the interference between the incoming pulse and the pulse reflected by a slowly moving surface) at its nodes, quite different from the standard RPA process. Here, a theoretical model is derived to explain the underlying mechanism, and good agreement has been achieved with PIC simulations.

  14. Computing Models for FPGA-Based Accelerators

    PubMed Central

    Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt

    2011-01-01

    Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152

  15. Development of Acceleration Sensor and Acceleration Evaluation System for Super-Low-Range Frequencies

    NASA Astrophysics Data System (ADS)

    Asano, Shogo; Matsumoto, Hideki

    2001-05-01

    This paper describes the development process for acceleration sensors used on automobiles and an acceleration evaluation system designed specifically for acceleration at super-low-range frequencies. The features of the newly developed sensor are as follows. 1) Original piezo-bimorph design based on a disc-center-fixed structure achieves pyroeffect cancelling and stabilization of sensor characteristics and enables the detection of the acceleration of 0.0009 G at the super-low-range-frequency of 0.03 Hz. 2) The addition of a self-diagnostic function utilizing the characteristics of piezoceramics enables constant monitoring of sensor failure. The frequency range of acceleration for accurate vehicle motion control is considered to be from DC to about 50 Hz. However, the measurement of acceleration in the super-low-range frequency near DC has been difficult because of mechanical and electrical noise interruption. This has delayed the development of the acceleration sensor for automotive use. We have succeeded in the development of an acceleration evaluation system for super-low-range frequencies from 0.015 Hz to 2 Hz with detection of the acceleration range from 0.0002 G (0.2 gal) to 1 G, as well as the development of a piezoelectric-type acceleration sensor for automotive use.

  16. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    PubMed

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost of quality. Copyright © 2017 Elsevier Inc. All rights reserved.
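
    For context, the core of a non-accelerated MRF dictionary search can be written as a brute-force normalized-correlation scan; the accelerated methods in the paper compress the dictionary and prune this scan. The dictionary atoms and (T1, T2) labels below are random stand-ins.

    import numpy as np

    rng = np.random.default_rng(11)
    n_t, n_atoms = 500, 3000                  # sequence length, dictionary size
    dictionary = rng.normal(size=(n_atoms, n_t))
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    t1t2 = rng.uniform([200, 20], [2000, 300], size=(n_atoms, 2))  # ms labels

    def match(voxel_signal):
        """Label a voxel with the (T1, T2) of the best-correlated atom."""
        v = voxel_signal / np.linalg.norm(voxel_signal)
        best = np.argmax(np.abs(dictionary @ v))   # brute-force scan
        return t1t2[best]

    signal = dictionary[1234] * 3.7 + 0.05 * rng.normal(size=n_t)
    print(match(signal))   # recovers atom 1234's labels (approximately)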

  17. Baryon Acoustic Oscillation Intensity Mapping of Dark Energy

    NASA Astrophysics Data System (ADS)

    Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B.; McDonald, Patrick

    2008-03-01

    The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called “dark energy.” To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10^9 individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy.

  18. Ground Test of the Urine Processing Assembly for Accelerations and Transfer Functions

    NASA Technical Reports Server (NTRS)

    Houston, Janice; Almond, Deborah F. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of the ground test of the urine processing assembly for accelerations and transfer functions. Details are given on the test setup, test data, data analysis, analytical results, and microgravity assessment. The conclusions of the tests include the following: (1) the single input/multiple output method is useful if the data are acquired by tri-axial accelerometers and the inputs can be considered uncorrelated; (2) tying coherence with the matrix yields higher confidence in results; (3) the WRS#2 rack ORUs need to be isolated; and (4) future work includes a plan for characterizing the performance of isolation materials.

  19. Employing OpenCL to Accelerate Ab Initio Calculations on Graphics Processing Units.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2017-06-13

    We present an extension of our graphics processing units (GPU)-accelerated quantum chemistry package to employ OpenCL compute kernels, which can be executed on a wide range of computing devices like CPUs, Intel Xeon Phi, and AMD GPUs. Here, we focus on the use of AMD GPUs and discuss differences as compared to CUDA-based calculations on NVIDIA GPUs. First illustrative timings are presented for hybrid density functional theory calculations using serial as well as parallel compute environments. The results show that AMD GPUs are as fast or faster than comparable NVIDIA GPUs and provide a viable alternative for quantum chemical applications.
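
    A minimal PyOpenCL example of the portability claim: the same OpenCL source runs on whatever device the selected platform exposes (CPU, AMD or NVIDIA GPU, Xeon Phi). Vector addition stands in for the package's integral kernels; this is not code from the package itself.

    import numpy as np
    import pyopencl as cl

    a = np.arange(1_000_000, dtype=np.float32)
    b = 2.0 * a

    ctx = cl.create_some_context()        # picks an available OpenCL device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    program = cl.Program(ctx, """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    program.vadd(queue, a.shape, None, a_g, b_g, out)
    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out)
    assert np.allclose(result, 3.0 * a)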

  20. Acceleration of a trailing positron bunch in a plasma wakefield accelerator

    DOE PAGES

    Doche, A.; Beekman, C.; Corde, S.; ...

    2017-10-27

    High gradients of energy gain and high energy efficiency are necessary parameters for compact, cost-efficient and high-energy particle colliders. Plasma Wakefield Accelerators (PWFA) offer both, making them attractive candidates for next-generation colliders. In these devices, a charge-density plasma wave is excited by an ultra-relativistic bunch of charged particles (the drive bunch). The energy in the wave can be extracted by a second bunch (the trailing bunch) as this bunch propagates in the wake of the drive bunch. While a trailing electron bunch has been accelerated in a plasma with more than a gigaelectronvolt of energy gain, accelerating a trailing positron bunch in a plasma is much more challenging, as the plasma response can be asymmetric for positrons and electrons. We report the demonstration of energy gain by a distinct trailing positron bunch in a plasma wakefield accelerator, spanning nonlinear to quasi-linear regimes, and unveil the beam loading process underlying the accelerator's energy efficiency. A positron bunch is used to drive the plasma wake in the experiment, though the quasi-linear wake structure could as easily be formed by an electron bunch or a laser driver. These results mark the first acceleration of a distinct positron bunch in plasma-based particle accelerators.

  1. Acceleration of a trailing positron bunch in a plasma wakefield accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doche, A.; Beekman, C.; Corde, S.

    High gradients of energy gain and high energy efficiency are necessary parameters for compact, cost-efficient and high-energy particle colliders. Plasma Wakefield Accelerators (PWFA) offer both, making them attractive candidates for next-generation colliders. In these devices, a charge-density plasma wave is excited by an ultra-relativistic bunch of charged particles (the drive bunch). The energy in the wave can be extracted by a second bunch (the trailing bunch) as this bunch propagates in the wake of the drive bunch. While a trailing electron bunch has been accelerated in a plasma with more than a gigaelectronvolt of energy gain, accelerating a trailing positron bunch in a plasma is much more challenging, as the plasma response can be asymmetric for positrons and electrons. We report the demonstration of energy gain by a distinct trailing positron bunch in a plasma wakefield accelerator, spanning nonlinear to quasi-linear regimes, and unveil the beam loading process underlying the accelerator's energy efficiency. A positron bunch is used to drive the plasma wake in the experiment, though the quasi-linear wake structure could as easily be formed by an electron bunch or a laser driver. These results mark the first acceleration of a distinct positron bunch in plasma-based particle accelerators.

  2. Theory of unfolded cyclotron accelerator

    NASA Astrophysics Data System (ADS)

    Rax, J.-M.; Robiche, J.

    2010-10-01

    An acceleration process based on the interaction between an ion, a tapered periodic magnetic structure, and a circularly polarized oscillating electric field is identified and analyzed, and its potential is evaluated. A Hamiltonian analysis is developed in order to describe the interplay between the cyclotron motion, the electric acceleration, and the magnetic modulation. The parameters of this universal class of magnetic modulation leading to continuous acceleration without Larmor radius increase are expressed analytically. Thus, this study provides the basic scaling of what appears as a compact unfolded cyclotron accelerator.

  3. An Endogenous Accelerator for Viral Gene Expression Confers a Fitness Advantage

    PubMed Central

    Teng, Melissa W.; Bolovan-Fritts, Cynthia; Dar, Roy D.; Womack, Andrew; Simpson, Michael L.; Shenk, Thomas; Weinberger, Leor S.

    2012-01-01

    Many signaling circuits face a fundamental tradeoff between accelerating their response speed while maintaining final levels below a cytotoxic threshold. Here, we describe a transcriptional circuitry that dynamically converts signaling inputs into faster rates without amplifying final equilibrium levels. Using time-lapse microscopy, we find that transcriptional activators accelerate human cytomegalovirus (CMV) gene expression in single cells without amplifying steady-state expression levels, and this acceleration generates a significant replication advantage. We map the accelerator to a highly self-cooperative transcriptional negative-feedback loop (Hill coefficient ~ 7) generated by homo-multimerization of the virus’s essential transactivator protein IE2 at nuclear PML bodies. Eliminating the IE2-accelerator circuit reduces transcriptional strength through mislocalization of incoming viral genomes away from PML bodies and carries a heavy fitness cost. In general, accelerators may provide a mechanism for signal-transduction circuits to respond quickly to external signals without increasing steady-state levels of potentially cytotoxic molecules. PMID:23260143
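
    The accelerator principle can be reproduced qualitatively with a toy ODE: a gene under steep autorepression (Hill coefficient around 7, as mapped for IE2) is driven harder than an unregulated gene tuned to the same steady state, so it reaches that level faster without overshooting. All parameter values below are invented.

    import numpy as np

    def simulate(beta, K=1.0, h=7, gamma=1.0, dt=1e-3, t_end=5.0, feedback=True):
        """Euler-integrate dx/dt = production(x) - gamma * x."""
        n_steps = int(t_end / dt)
        x = np.zeros(n_steps)
        for i in range(1, n_steps):
            # Steep negative feedback: production drops as x approaches K.
            production = beta / (1 + (x[i - 1] / K) ** h) if feedback else beta
            x[i] = x[i - 1] + dt * (production - gamma * x[i - 1])
        return x

    x_fb = simulate(beta=20.0, feedback=True)      # autorepressed "accelerator"
    x_ss = x_fb[-1]                                # its steady-state level
    x_plain = simulate(beta=x_ss, feedback=False)  # no feedback, same final level

    t90 = lambda x: 1e-3 * np.argmax(x >= 0.9 * x_ss)   # time to 90% of level
    print(t90(x_fb), t90(x_plain))   # feedback circuit gets there much sooner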

  4. Pulsed electromagnetic gas acceleration

    NASA Technical Reports Server (NTRS)

    Jahn, R. G.; von Jaskowsky, W. F.; Clark, K. E.

    1974-01-01

    Detailed measurements of the axial velocity profile and electromagnetic structure of a high power, quasi-steady MPD discharge are used to formulate a gasdynamic model of the acceleration process. Conceptually dividing the accelerated plasma into an inner flow and an outer flow, it is found that more than two-thirds of the total power in the plasma is deposited in the inner flow, accelerating it to an exhaust velocity of 12.5 km/sec. The outer flow, which is accelerated to a velocity of only 6.2 km/sec, appears to provide a current conduction path between the inner flow and the anode. Related cathode studies have shown that the critical current for the onset of terminal voltage fluctuations, which was recently shown to be a function of the cathode area, appears to reach an asymptote for cathodes of very large surface area. Detailed floating potential measurements show that the fluctuations are confined to the vicinity of the cathode and hence reflect a cathode emission process rather than a fundamental limit on MPD performance.

  5. Acceleration Modes and Transitions in Pulsed Plasma Accelerators

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Greve, Christine M.

    2018-01-01

    Pulsed plasma accelerators typically operate by storing energy in a capacitor bank and then discharging this energy through a gas, ionizing and accelerating it through the Lorentz body force. Two plasma accelerator types employing this general scheme have typically been studied: the gas-fed pulsed plasma thruster and the quasi-steady magnetoplasmadynamic (MPD) accelerator. The gas-fed pulsed plasma accelerator is generally represented as a completely transient device discharging in approximately 1-10 microseconds. When the capacitor bank is discharged through the gas, a current sheet forms at the breech of the thruster and propagates forward under a j × B body force (current density crossed with magnetic field), entraining propellant it encounters. This process is sometimes referred to as detonation-mode acceleration because the current sheet representation approximates that of a strong shock propagating through the gas. Acceleration of the initial current sheet ceases either when the current sheet reaches the end of the device and is ejected or when the current in the circuit reverses, striking a new current sheet at the breech and depriving the initial sheet of additional acceleration. In the quasi-steady MPD accelerator, the pulse is lengthened to approximately 1 millisecond or longer and maintained at an approximately constant level during the discharge. The time over which the transient phenomena experienced during startup typically occur is short relative to the overall discharge time, which is now long enough for the plasma to assume a relatively steady-state configuration. The ionized gas flows through a stationary current channel in a manner that is sometimes referred to as the deflagration mode of operation. The plasma experiences electromagnetic acceleration as it flows through the current channel towards the exit of the device. A device that had a short pulse length but appeared to operate in a plasma acceleration regime different from the gas-fed pulsed plasma

  6. Cast dielectric composite linear accelerator

    DOEpatents

    Sanders, David M [Livermore, CA; Sampayan, Stephen [Manteca, CA; Slenes, Kirk [Albuquerque, NM; Stoller, H M [Albuquerque, NM

    2009-11-10

    A linear accelerator having cast dielectric composite layers integrally formed with conductor electrodes in a solventless fabrication process, with the cast dielectric composite preferably having a nanoparticle filler in an organic polymer such as a thermosetting resin. By incorporating this cast dielectric composite, the dielectric constant of critical insulating layers of the accelerator's transmission lines is increased while high dielectric strength is simultaneously maintained.

  7. Mapping process and age of Quaternary deposits on Santa Rosa Island, Channel Islands National Park, California

    NASA Astrophysics Data System (ADS)

    Schmidt, K. M.; Minor, S. A.; Bedford, D.

    2016-12-01

    Employing a geomorphic process-age classification scheme, we mapped the Quaternary surficial geology of Santa Rosa Island (SRI) within the Channel Islands National Park. This detailed (1:12,000 scale) map represents upland erosional transport processes and alluvial, fluvial, eolian, beach, marine terrace, mass wasting, and mixed depositional processes. Mapping was motivated through an agreement with the National Park Service and is intended to aid natural resource assessments, including post-grazing disturbance recovery and identification of mass wasting and tectonic hazards. We obtained numerous detailed geologic field observations, fossils for faunal identification as age control, and materials for numeric dating. This GPS-located field information provides ground truth for delineating map units and faults using GIS-based datasets: high-resolution (sub-meter) aerial imagery, LiDAR-based DEMs, and derivative raster products. Mapped geologic units denote surface processes, and mapped Quaternary faults constrain deformation kinematics and rates, which inform models of landscape change. Significant findings include: 1) Flights of older Pleistocene (>120 ka) and possibly Pliocene marine terraces were identified beneath younger alluvial and eolian deposits at elevations as much as 275 m above modern sea level. Such elevated terraces suggest that SRI was a smaller, more submerged island in the late Neogene and (or) early Pleistocene prior to tectonic uplift. 2) Structural and geomorphic observations made along the potentially seismogenic SRI fault indicate a protracted slip history during the late Neogene and Quaternary involving early normal slip, later strike slip, and recent reverse slip. These changes in slip mode explain a marked contrast in island physiography across the fault. 3) Many of the steeper slopes are dramatically stripped of regolith, with exposed bedrock and deeply incised gullies, presumably due to effects of past grazing practices. 4) Surface water presence is

  8. Searching for the missing pieces between the hospital and primary care: mapping the patient process during care transitions.

    PubMed

    Johnson, Julie K; Farnan, Jeanne M; Barach, Paul; Hesselink, Gijs; Wollersheim, Hub; Pijnenborg, Loes; Kalkman, Cor; Arora, Vineet M

    2012-12-01

    Safe patient transitions depend on effective communication and a functioning care coordination process. Evidence suggests that primary care physicians are not satisfied with communication at transition points between inpatient and ambulatory care, and that communication often is not provided in a timely manner, omits essential information, or contains ambiguities that put patients at risk. Our aim was to demonstrate how process mapping can illustrate current handover practices between ambulatory and inpatient care settings, identify existing barriers and facilitators to effective transitions of care, and highlight potential areas for quality improvement. We conducted focus group interviews to facilitate a process mapping exercise with clinical teams in six academic health centres in the USA, Poland, Sweden, Italy, Spain and the Netherlands. At a high level, the process of patient admission to the hospital through the emergency department, inpatient care, and discharge back into the community was comparable across sites. In addition, the process maps highlighted similar barriers: difficulty providing information to primary care physicians, inaccurate or incomplete information on referral and discharge, a lack of time and priority to collaborate with counterpart colleagues, and a lack of feedback to clinicians involved in the handovers. Process mapping is effective in bringing together key stakeholders and makes explicit the mental models that frame their understanding of the clinical process. Exploring the barriers and facilitators to safe and reliable patient transitions highlights opportunities for further improvement work and illustrates ideas for best practices that might be transferable to other settings.

  9. Auroral particle acceleration: An example of a universal plasma process

    NASA Astrophysics Data System (ADS)

    Haerendel, G.

    1980-06-01

    The occurrence of discrete and narrow auroral arcs is attributed to a sudden release of magnetic tensions set up in a magnetospheric-ionospheric current circuit of high strength. At altitudes of several thousand kilometers the frozen-in magnetic field condition can be broken temporarily in thin regions corresponding to the observed width of auroral arcs. This implies magnetic field-aligned potential drops of several kilovolts, supported by certain anomalous transport processes, which can only be maintained in a quasi-stationary fashion if the current density exceeds a critical limit. The region of field-aligned potential drops is structured by two pairs of standing waves, which are generalized Alfvén waves of large amplitude across which the parallel electric field has a finite jump. The waves are emitted from the leading edge of the acceleration region, which propagates slowly into the stressed magnetic field.

  10. Crossed-beam velocity map imaging of collisional autoionization processes

    NASA Astrophysics Data System (ADS)

    Delmdahl, Ralph F.; Bakker, Bernard L. G.; Parker, David H.

    2000-11-01

    Applying the velocity map imaging technique, Penning ion formation as well as the generation of associative ions is observed in autoionizing collisions of metastable neon atoms (Ne* 2p5 3s 3P2,0) with ground-state argon targets in a crossed molecular beam experiment. Metastable neon reactants are obtained by nozzle expansion through a dc discharge ring. The quality of the obtained results clearly demonstrates the suitability of this new, particularly straightforward experimental approach for angle- and kinetic-energy-resolved investigations of Penning processes in crossed-beam studies, which are known to provide the highest level of detail.

  11. Does the process map influence the outcome of quality improvement work? A comparison of a sequential flow diagram and a hierarchical task analysis diagram.

    PubMed

    Colligan, Lacey; Anderson, Janet E; Potts, Henry W W; Berman, Jonathan

    2010-01-07

    Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover, there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map - sequential or hierarchical - affects healthcare practitioners' judgments. A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session, and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counterbalanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram than the sequential diagram, and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In

  12. Revision of Primary Series Maps

    USGS Publications Warehouse

    ,

    2000-01-01

    In 1992, the U.S. Geological Survey (USGS) completed a 50-year effort to provide primary series map coverage of the United States. Many of these maps now need to be updated to reflect the construction of new roads and highways and other changes that have taken place over time. The USGS has formulated a graphic revision plan to help keep the primary series maps current. Primary series maps include 1:20,000-scale quadrangles of Puerto Rico, 1:24,000- or 1:25,000-scale quadrangles of the conterminous United States, Hawaii, and U.S. Territories, and 1:63,360-scale quadrangles of Alaska. The revision of primary series maps from new collection sources is accomplished using a variety of processes. The raster revision process combines the scanned content of paper maps with raster updating technologies. The vector revision process involves the automated plotting of updated vector files. Traditional processes use analog stereoplotters and manual scribing instruments on specially coated map separates. The ability to select from or combine these processes increases the efficiency of the National Mapping Division map revision program.

  13. CUDA-Accelerated Geodesic Ray-Tracing for Fiber Tracking

    PubMed Central

    van Aart, Evert; Sepasian, Neda; Jalba, Andrei; Vilanova, Anna

    2011-01-01

    Diffusion Tensor Imaging (DTI) allows noninvasive measurement of the diffusion of water in fibrous tissue. By reconstructing the fibers from DTI data using a fiber-tracking algorithm, we can deduce the structure of the tissue. In this paper, we outline an approach to accelerating such a fiber-tracking algorithm using a Graphics Processing Unit (GPU). This algorithm, which is based on the calculation of geodesics, has shown promising results for both synthetic and real data, but is limited in its applicability by its high computational requirements. We present a solution which uses the parallelism offered by modern GPUs, in combination with the CUDA platform by NVIDIA, to significantly reduce the execution time of the fiber-tracking algorithm. Compared to a multithreaded CPU implementation of the same algorithm, our GPU mapping achieves a speedup factor of up to 40 times. PMID:21941525
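
    A hedged sketch of the general idea, not the paper's geodesic ray-tracing formulation: a plain streamline tracker that follows the principal eigenvector of the diffusion tensor. Each seed point is independent of the others, which is the property that makes fiber tracking map well onto a GPU; the toy tensor field below is an assumption for the demo.

    ```python
    import numpy as np

    def principal_direction(D):
        """Principal eigenvector (largest eigenvalue) of a 3x3 diffusion tensor."""
        w, v = np.linalg.eigh(D)
        return v[:, np.argmax(w)]

    def track(seed, tensor_at, step=0.5, n_steps=200):
        """Euler streamline integration along the principal diffusion direction."""
        pts = [np.asarray(seed, dtype=float)]
        d_prev = None
        for _ in range(n_steps):
            d = principal_direction(tensor_at(pts[-1]))
            if d_prev is not None and np.dot(d, d_prev) < 0:
                d = -d                      # keep a consistent orientation
            pts.append(pts[-1] + step * d)
            d_prev = d
        return np.array(pts)

    # Toy tensor field: diffusion strongly aligned with the x axis everywhere.
    tensor_at = lambda p: np.diag([3.0, 1.0, 1.0])
    path = track(seed=(0.0, 0.0, 0.0), tensor_at=tensor_at)
    print(path[-1])   # ends ~100 units along the +/- x axis, as expected
    ```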

  14. Demonstration of wetland vegetation mapping in Florida from computer-processed satellite and aircraft multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Butera, M. K.

    1979-01-01

    The success of remotely mapping wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost-effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.

  15. [Effect of pilot UASB-SFSBR-MAP process for the large scale swine wastewater treatment].

    PubMed

    Wang, Liang; Chen, Chong-Jun; Chen, Ying-Xu; Wu, Wei-Xiang

    2013-03-01

    In this paper, a treatment process consisting of UASB, a step-fed sequencing batch reactor (SFSBR), and a magnesium ammonium phosphate precipitation reactor (MAP) was built to treat large-scale swine wastewater, aiming to overcome drawbacks of the conventional anaerobic-aerobic treatment process and the SBR treatment process, such as low denitrification efficiency, high operating costs, and high nutrient losses. Based on this treatment process, a pilot plant was constructed. The experimental results showed that the removal efficiencies of COD, NH4(+)-N and TP reached 95.1%, 92.7% and 88.8%, the recovery rates of NH4(+)-N and TP by the MAP process reached 23.9% and 83.8%, and the effluent quality was superior to the discharge standard of pollutants for livestock and poultry breeding (GB 18596-2001); the mass concentrations of COD, TN, NH4(+)-N, TP and SS were not higher than 135, 116, 43, 7.3 and 50 mg x L(-1), respectively. The process developed was reliable, kept a self-balance of carbon source and alkalinity, and reached high nutrient recovery efficiency, while the operating cost was equal to that of the traditional anaerobic-aerobic treatment process. The treatment process thus offers high application and dissemination value and is fit for the treatment of large-scale swine wastewater in China.

  16. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  17. Web mapping system for complex processing and visualization of environmental geospatial datasets

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons, such as the inherent heterogeneity of environmental datasets, large dataset volumes, the complexity of the data models used, and syntactic and semantic differences that complicate the creation and use of unified terminology, developing services and client applications for environmental geodata access, processing, and visualization turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, and the display of map legends and corresponding metadata information. It should be noted that modern web mapping systems, as integrated geoportal applications, are developed based on the SOA and might be considered complexes of interconnected software tools for working with geospatial data. In the report a complex web mapping system is presented, including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive. The GIS web client comprises three basic tiers: (1) a tier of geospatial metadata retrieved from a central MySQL repository and represented in JSON format; (2) a tier of JavaScript objects implementing methods for handling NetCDF metadata, a Task XML object for configuring user calculations and input and output formats, and OGC WMS/WFS cartographical services; and (3) a graphical user interface (GUI) tier of JavaScript objects realizing the web application business logic. The metadata tier consists of a number of JSON objects containing technical information describing the geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc.). The middleware tier of JavaScript objects implementing methods for handling geospatial
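
    For concreteness, a hypothetical example of what one record in the JSON metadata tier might look like; the field names and values are invented for illustration and are not the system's actual schema.

    ```python
    import json

    # Invented metadata record for one geospatial dataset in the JSON tier.
    record = {
        "dataset_id": "reanalysis_t2m",
        "format": "NetCDF",
        "parameters": ["2m_air_temperature"],
        "spatial_resolution_deg": 0.75,
        "temporal_resolution": "6h",
        "valid_processing_methods": ["mean", "anomaly", "trend"],
    }
    print(json.dumps(record, indent=2))
    ```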

  18. Electron acceleration by turbulent plasmoid reconnection

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Büchner, J.; Widmer, F.; Muñoz, P. A.

    2018-04-01

    In space and astrophysical plasmas, such as planetary magnetospheres like that of Mercury, energetic electrons are often found near current sheets, which hints at electron acceleration by magnetic reconnection. Unfortunately, electron acceleration by reconnection is not yet well understood, in particular acceleration by turbulent plasmoid reconnection. We have investigated electron acceleration by turbulent plasmoid reconnection, described by MHD simulations, via test particle calculations. In order to avoid resolving all relevant turbulence scales down to the dissipation scales, a mean-field turbulence model is used to describe the sub-grid-scale turbulence and its effects via a turbulent electromotive force (EMF). The mean-field model describes the turbulent EMF as a function of the mean values of current density, vorticity, and magnetic field, as well as of the energy, cross-helicity, and residual helicity of the turbulence. We found that strongly enhanced localized EMFs, mainly around X-points of turbulent reconnection, most efficiently accelerated electrons and caused the formation of power-law spectra. Magnetic-field-aligned EMFs caused by the turbulence dominate the electron acceleration process. Scaling the acceleration processes to parameters of the Hermean magnetotail, electron energies up to 60 keV can be reached by turbulent plasmoid reconnection through the thermal plasma.

  19. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    PubMed

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory is currently a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach can still lead to processing that is ~3400 times faster than other GPU-based approaches.
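
    A minimal CPU sketch of the rescaling idea (sometimes called "white Monte Carlo"): one baseline run stores per-photon path lengths, and the reflectance for any absorption coefficient then follows by Beer-Lambert reweighting, an operation that is embarrassingly parallel and hence well suited to a GPU. The exponential path-length distribution below is a synthetic stand-in, not the output of a real transport run.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for one baseline Monte Carlo run: keep each detected photon's
    # total path length inside the tissue (cm). In a real run these come from
    # photon transport with scattering; here they are synthetic.
    path_lengths = rng.exponential(scale=1.0, size=1_000_000)

    def rescaled_reflectance(mu_a, lengths):
        """Reweight the stored run by Beer-Lambert absorption exp(-mu_a * L)."""
        return np.mean(np.exp(-mu_a * lengths))

    for mu_a in (0.01, 0.1, 1.0):          # absorption coefficients, 1/cm
        print(mu_a, rescaled_reflectance(mu_a, path_lengths))
    ```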

  20. Documentation for the 2014 update of the United States national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter M.; Mueller, Charles S.; Haller, Kathleen M.; Frankel, Arthur D.; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen C.; Boyd, Oliver S.; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nico; Wheeler, Russell L.; Williams, Robert A.; Olsen, Anna H.

    2014-01-01

    The national seismic hazard maps for the conterminous United States have been updated to account for new methods, models, and data that have been obtained since the 2008 maps were released (Petersen and others, 2008). The input models are improved from those implemented in 2008 by using new ground motion models that have incorporated about twice as many earthquake strong ground shaking data and by incorporating many additional scientific studies that indicate broader ranges of earthquake source and ground motion models. These time-independent maps are shown for 2-percent and 10-percent probability of exceedance in 50 years for peak horizontal ground acceleration as well as 5-hertz and 1-hertz spectral accelerations with 5-percent damping on a uniform firm rock site condition (760 meters per second shear wave velocity in the upper 30 m, VS30). In this report, the 2014 updated maps are compared with the 2008 version of the maps and indicate changes of plus or minus 20 percent over wide areas, with larger changes locally, caused by the modifications to the seismic source and ground motion inputs.

  1. Stochastic Modeling and Analysis of Multiple Nonlinear Accelerated Degradation Processes through Information Fusion

    PubMed Central

    Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product’s performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner’s ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters. PMID:27509499

  2. Stochastic Modeling and Analysis of Multiple Nonlinear Accelerated Degradation Processes through Information Fusion.

    PubMed

    Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao

    2016-08-06

    Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product's performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner's ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters.
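
    Items 1 and 2 above describe the same study; a compact sketch of the model class follows, under simplifying assumptions: two nonlinear Wiener degradation paths X_k(t) = mu_k*Lambda_k(t) + sigma_k*W(Lambda_k(t)) with Lambda_k(t) = t**b_k, whose increments are coupled through a Gaussian copula (which, with normal marginals, reduces to a bivariate normal; the paper's copula treatment is more general). All numeric values are illustrative, not the tuner case-study estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 100.0, 501)

    b1, b2 = 0.8, 1.2            # nonlinearity exponents of Lambda_k(t) = t**b_k
    mu1, mu2 = 0.05, 0.02        # drift coefficients
    s1, s2 = 0.10, 0.05          # diffusion coefficients
    rho = 0.7                    # copula correlation between the two increments

    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=len(t) - 1)
    dL1, dL2 = np.diff(t ** b1), np.diff(t ** b2)

    x1 = np.concatenate(([0], np.cumsum(mu1 * dL1 + s1 * np.sqrt(dL1) * z[:, 0])))
    x2 = np.concatenate(([0], np.cumsum(mu2 * dL2 + s2 * np.sqrt(dL2) * z[:, 1])))

    # First-passage "failure" times over illustrative thresholds (argmax gives
    # index 0 if a path never crosses within the simulated horizon).
    print("failure times:", t[np.argmax(x1 > 1.5)], t[np.argmax(x2 > 3.0)])
    ```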

  3. Exercise Versus +Gz Acceleration Training

    NASA Technical Reports Server (NTRS)

    Greenleaf, John E.; Simonson, S. R.; Stocks, J. M.; Evans, J. M.; Knapp, C. F.; Dalton, Bonnie P. (Technical Monitor)

    2002-01-01

    Decreased working capacity and "orthostatic" intolerance are two major problems for astronauts during and after landing from spaceflight in a return vehicle. The purpose was to test the hypotheses that (1) supine-passive-acceleration training, supine-interval-exercise plus acceleration training, and supine exercise plus acceleration training will improve orthostatic tolerance (OT) in ambulatory men; and that (2) addition of aerobic exercise conditioning will not influence this enhanced OT from that of passive-acceleration training. Seven untrained men (24-38 yr) underwent 3 training regimens (30 min/d x 5 d/wk x 3 wk on the human-powered centrifuge - HPC): (a) Passive acceleration (alternating +1.0 Gz to 50% Gzmax); (b) Exercise acceleration (alternating 40%-90% VO2max leg cycle exercise plus 50% of HPCmax acceleration); and (c) Combined intermittent exercise-acceleration at 40% to 90% HPCmax. Maximal supine exercise workloads increased (P < 0.05) by 8.3% with Passive, by 12.6% with Exercise, and by 15.4% with Combined; but maximal VO2 and HR were unchanged in all groups. Maximal endurance (time to cessation) was unchanged with Passive, but increased (P < 0.05) with Exercise and Combined. Resting pre-tilt HR was elevated by 12.9% (P < 0.05) only after Passive training, suggesting that exercise training attenuated this HR response. All resting pre-tilt blood pressures (SBP, DBP, MAP) were not different pre- vs. post-training. Post-training tilt-tolerance time and HR were increased (P < 0.05) only with Passive training, by 37.8% and 29.1%, respectively. Thus, addition of exercise training attenuated the increased Passive tilt tolerance. Resting (pre-tilt) and post-tilt cardiac R-R interval, stroke volume, end-diastolic volume, and cardiac output were all uniformly reduced (P < 0.05) while peripheral resistance was uniformly increased (P < 0.05) pre- and post-training for the three regimens, indicating no effect of any training regimen on those cardiovascular

  4. Analysis of Large-Scale Resurfacing Processes on Mercury: Mapping the Derain (H-10) Quadrangle

    NASA Astrophysics Data System (ADS)

    Whitten, J. L.; Ostrach, L. R.; Fassett, C. I.

    2018-05-01

    The Derain (H-10) Quadrangle of Mercury contains a large region of "average" crustal materials, with minimal smooth plains and basin ejecta, allowing the relative contribution of volcanic and impact processes to be assessed through geologic mapping.

  5. A Bayesian and Physics-Based Ground Motion Parameters Map Generation System

    NASA Astrophysics Data System (ADS)

    Ramirez-Guzman, L.; Quiroz, A.; Sandoval, H.; Perez-Yanez, C.; Ruiz, A. L.; Delgado, R.; Macias, M. A.; Alcántara, L.

    2014-12-01

    We present the Ground Motion Parameters Map Generation (GMPMG) system developed by the Institute of Engineering at the National Autonomous University of Mexico (UNAM). The system delivers estimates of information associated with the social impact of earthquakes, engineering ground motion parameters (gmp), and macroseismic intensity maps. The gmp calculated are peak ground acceleration and velocity (pga and pgv) and response spectral acceleration (SA). The GMPMG relies on real-time data received from strong ground motion stations belonging to UNAM's networks throughout Mexico. Data are gathered via satellite and internet service providers, and managed with the data acquisition software Earthworm. The system is self-contained and can perform all calculations required for estimating gmp and intensity maps due to earthquakes, automatically or manually. Initial data processing is performed by baseline-correcting the records and removing those containing glitches or low signal-to-noise ratios. The system then assigns a hypocentral location using first arrivals and a simplified 3D model, followed by a moment tensor inversion, which is performed using a pre-calculated Receiver Green's Tensor (RGT) database for a realistic 3D model of Mexico. A backup system to compute epicentral location and magnitude is in place. Bayesian kriging is employed to combine recorded values with grids of computed gmp. The latter are obtained by using appropriate ground motion prediction equations (for pgv, pga, and SA with T = 0.3, 0.5, 1, and 1.5 s) and numerical simulations performed in real time using the aforementioned RGT database (for SA with T = 2, 2.5, and 3 s). Estimated intensity maps are then computed using SA(T = 2 s) to Modified Mercalli Intensity correlations derived for central Mexico. The maps are made available to the institutions in charge of the disaster prevention systems. In order to analyze the accuracy of the maps, we compare them against observations not considered in the
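
    A stripped-down sketch of the kriging step on a 1-D profile: a ground-motion-prediction prior on grid nodes is updated with sparse station recordings through a Gaussian-process (simple kriging) correction. The covariance model, the GMPE stand-in, and all numbers are assumptions for illustration, not the GMPMG system's actual models.

    ```python
    import numpy as np

    def kernel(a, b, length=30.0, sig=0.05):
        """Squared-exponential covariance of pga residuals (assumed model)."""
        d = np.subtract.outer(a, b)
        return sig ** 2 * np.exp(-0.5 * (d / length) ** 2)

    grid = np.linspace(0.0, 200.0, 201)        # map nodes along a profile, km
    prior = 0.1 * np.exp(-grid / 80.0)         # stand-in GMPE prediction (pga, g)

    stations = np.array([20.0, 90.0, 150.0])   # recording sites, km
    observed = np.array([0.15, 0.05, 0.03])    # recorded pga at those sites, g
    prior_sta = 0.1 * np.exp(-stations / 80.0)

    K_ss = kernel(stations, stations) + 1e-4 * np.eye(3)  # + observation noise
    weights = np.linalg.solve(K_ss, observed - prior_sta)
    posterior = prior + kernel(grid, stations) @ weights  # kriged correction

    print(posterior[[20, 90, 150]])  # close to the observations at the stations
    ```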

  6. Implementing Dementia Care Mapping to develop person-centred care: results of a process evaluation within the Leben-QD II trial.

    PubMed

    Quasdorf, Tina; Riesner, Christine; Dichter, Martin Nikolaus; Dortmann, Olga; Bartholomeyczik, Sabine; Halek, Margareta

    2017-03-01

    To evaluate Dementia Care Mapping implementation in nursing homes. Dementia Care Mapping, an internationally applied method for supporting and enhancing person-centred care for people with dementia, must be successfully implemented into care practice for its effective use. Various factors influence the implementation of complex interventions such as Dementia Care Mapping; few studies have examined the specific factors influencing Dementia Care Mapping implementation. A convergent parallel mixed-methods design embedded in a quasi-experimental trial was used to assess Dementia Care Mapping implementation success and influential factors. From 2011-2013, nine nursing units in nine different nursing homes implemented either Dementia Care Mapping (n = 6) or a periodic quality of life measurement using the dementia-specific instrument QUALIDEM (n = 3). Diverse data (interviews, n = 27; questionnaires, n = 112; resident records, n = 81; and process documents) were collected. Each data set was separately analysed and then merged to comprehensively portray the implementation process. Four nursing units implemented the particular intervention without deviating from the preplanned intervention. Translating Dementia Care Mapping results into practice was challenging. Necessary organisational preconditions for Dementia Care Mapping implementation included well-functioning networks, a dementia-friendly culture and flexible organisational structures. Involved individuals' positive attitudes towards Dementia Care Mapping also facilitated implementation. Precisely planning the intervention and its implementation, recruiting champions who supported Dementia Care Mapping implementation and having well-qualified, experienced project coordinators were essential to the implementation process. For successful Dementia Care Mapping implementation, it must be embedded in a systematic implementation strategy considering the specific setting. Organisational preconditions may need to

  7. India Solar Resource Data: Enhanced Data for Accelerated Deployment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Identifying potential locations for solar photovoltaic (PV) and concentrating solar power (CSP) projects requires an understanding of the underlying solar resource. Under a bilateral partnership between the United States and India - the U.S.-India Energy Dialogue - the National Renewable Energy Laboratory has updated Indian solar data and maps using data provided by the Ministry of New and Renewable Energy (MNRE) and the National Institute for Solar Energy (NISE). This fact sheet gives an overview of the updated maps and data, which help identify high-quality sites for solar energy projects and can thereby help accelerate the deployment of solar energy in India.

  8. Accelerating Exploitation of Low-grade Intelligence through Semantic Text Processing of Social Media

    DTIC Science & Technology

    2013-06-01

    importance as an information source. The brevity of social media content (e.g., 140 characters per tweet) combined with the increasing usage of mobile...platform imports unstructured text from a variety of sources and then maps the text to an existing ontology of frames (FrameNet, https://framenet.icsi.berkeley.edu/fndrupal/) during a process of Semantic Role Labeling (SRL). FrameNet is a structured language model grounded in the theory of Frame

  9. Micro structure processing on plastics by accelerated hydrogen molecular ions

    NASA Astrophysics Data System (ADS)

    Hayashi, H.; Hayakawa, S.; Nishikawa, H.

    2017-08-01

    A proton has 1836 times the mass of an electron and is the lightest nucleus used for accelerator-based material modification, so an accelerator can be set up with the lowest acceleration voltage. This is a preferable characteristic of the Proton Beam Writer (PBW) for industrial applications. By contrast, the proton has the lowest charge among all nuclei, and its potential impact on a material is therefore lowest. The object of this research is to improve the productivity of the PBW for industrial application by focusing on hydrogen molecular ions. These ions are generated in the same ion source by ionizing hydrogen molecules; no special ion source is required, which makes them suitable for industrial use. We demonstrated three-dimensional (3D) multilevel micro structures on polyester-based FPC (Flexible Printed Circuits) using protons, H2+ and H3+. The reactivity of hydrogen molecular ions is much higher than that of protons, consistent with expectations. We can apply this result to make micro devices with 3D multilevel structures on FPC.

  10. Microscopic Processes On Radiation from Accelerated Particles in Relativistic Jets

    NASA Technical Reports Server (NTRS)

    Nishikawa, K.-I.; Hardee, P. E.; Mizuno, Y.; Medvedev, M.; Zhang, B.; Sol, H.; Niemiec, J.; Pohl, M.; Nordlund, A.; Fredriksen, J.; hide

    2009-01-01

    Nonthermal radiation observed from astrophysical systems containing relativistic jets and shocks, e.g., gamma-ray bursts (GRBs), active galactic nuclei (AGNs), and Galactic microquasar systems, usually has power-law emission spectra. Recent PIC simulations of relativistic electron-ion (electron-positron) jets injected into a stationary medium show that particle acceleration occurs within the downstream jet. In the collisionless relativistic shock, plasma waves and their associated instabilities (e.g., the Buneman instability, other two-stream instabilities, and the Weibel (filamentation) instability) created in the shocks are responsible for particle (electron, positron, and ion) acceleration. The simulation results show that the Weibel instability is responsible for generating and amplifying highly nonuniform, small-scale magnetic fields. These magnetic fields contribute to the electron's transverse deflection behind the jet head. The "jitter" radiation from deflected electrons has different properties than synchrotron radiation, which is calculated in a uniform magnetic field. This jitter radiation may be important to understanding the complex time evolution and/or spectral structure in gamma-ray bursts, relativistic jets, and supernova remnants.

  11. Efficient particle acceleration in shocks

    NASA Astrophysics Data System (ADS)

    Heavens, A. F.

    1984-10-01

    A self-consistent non-linear theory of acceleration of particles by shock waves is developed, using an extension of the two-fluid hydrodynamical model by Drury and Völk. The transport of the accelerated particles is governed by a diffusion coefficient which is initially assumed to be independent of particle momentum, to obtain exact solutions for the spectrum. It is found that steady-state shock structures with high acceleration efficiency are only possible for shocks with Mach numbers less than about 12. A more realistic diffusion coefficient is then considered, and this maximum Mach number is reduced to about 6. The efficiency of the acceleration process determines the relative importance of the non-relativistic and relativistic particles in the distribution of accelerated particles, and this determines the effective specific heat ratio.
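
    For orientation, the standard linear (test-particle) result that this nonlinear theory extends: the shock's compression ratio r, set by the Rankine-Hugoniot relation, fixes the power-law index of the accelerated-particle distribution, f(p) ∝ p^(-q) with q = 3r/(r-1). A short computation for a γ = 5/3 gas (not the paper's two-fluid model):

    ```python
    # Test-particle diffusive shock acceleration: compression ratio and the
    # resulting spectral index. A strong shock (M -> inf) gives r -> 4, q -> 4.
    GAMMA = 5.0 / 3.0

    def compression_ratio(mach):
        return (GAMMA + 1) * mach**2 / ((GAMMA - 1) * mach**2 + 2)

    def spectral_index(mach):
        r = compression_ratio(mach)
        return 3 * r / (r - 1)

    for M in (2, 6, 12, 100):
        print(M, round(compression_ratio(M), 2), round(spectral_index(M), 2))
    ```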

  12. A Model of RHIC Using the Unified Accelerator Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilat, F.; Tepikian, S.; Trahern, C. G.

    1998-01-01

    The Unified Accelerator Library (UAL) is an object oriented and modular software environment for accelerator physics which comprises an accelerator object model for the description of the machine (SMF, for Standard Machine Format), a collection of Physics Libraries, and a Perl interface that provides a homogeneous shell for integrating and managing these components. Currently available physics libraries include TEAPOT++, a collection of C++ physics modules conceptually derived from TEAPOT, and DNZLIB, a differential algebra package for map generation. This software environment has been used to build a flat model of RHIC which retains the hierarchical lattice description while assigning specific characteristics to individual elements, such as measured field harmonics. A first application of the model and of the simulation capabilities of UAL has been the study of RHIC stability in the presence of Siberian snakes and spin rotators. The building blocks of RHIC snakes and rotators are helical dipoles, unconventional devices that cannot be modeled by traditional accelerator physics codes and have been implemented in UAL as Taylor maps. Section 2 describes the RHIC data stores, Section 3 the RHIC SMF format, and Section 4 the RHIC-specific Perl interface (RHIC Shell). Section 5 explains how the RHIC SMF and UAL have been used to study the RHIC dynamic behavior and presents detuning and dynamic aperture results. If the reader is not familiar with the motivation and characteristics of UAL, we include in the Appendix a useful overview paper. An example of a complete set of Perl scripts for RHIC simulation can also be found in the Appendix.

  13. Research on control law accelerator of digital signal process chip TMS320F28035 for real-time data acquisition and processing

    NASA Astrophysics Data System (ADS)

    Zhao, Shuangle; Zhang, Xueyi; Sun, Shengli; Wang, Xudong

    2017-08-01

    TI C2000 series digital signal processing (DSP) chips have been widely used in electrical engineering, measurement and control, communications, and other professional fields; the DSP TMS320F28035 is one of the most representative of the family. When programming the DSP, both data acquisition and data processing are required, and if common C or assembly language is used, the program runs sequentially, so the analogue-to-digital (AD) converter cannot acquire in real time and often misses a lot of data. The control law accelerator (CLA) coprocessor can run in parallel with the main central processing unit (CPU), operates at the same frequency as the main CPU, and supports floating-point operations. Therefore, the CLA coprocessor is used in the program: the CLA kernel is responsible for data processing, while the main CPU is responsible for the AD conversion. The advantage of this method is that it reduces data processing time and achieves real-time data acquisition.

  14. Column ratio mapping: a processing technique for atomic resolution high-angle annular dark-field (HAADF) images.

    PubMed

    Robb, Paul D; Craven, Alan J

    2008-12-01

    An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated measurement of atomic column intensity ratios in high-resolution HAADF images. It was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is described for a [110]-oriented zinc-blende-structured AlAs/GaAs superlattice using the 1 Å-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.
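
    A toy sketch of the column ratio mapping idea, assuming the atomic column positions have already been located (e.g., by peak finding): integrate the HAADF intensity in a small disc around each column and form ratios between paired columns. The synthetic two-Gaussian image stands in for real STEM data.

    ```python
    import numpy as np

    def column_intensity(img, cx, cy, radius=3):
        """Integrated intensity in a small disc around one atomic column."""
        y, x = np.mgrid[:img.shape[0], :img.shape[1]]
        mask = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
        return img[mask].sum()

    def ratio_map(img, pairs, radius=3):
        """Intensity ratio for each paired set of column coordinates.
        `pairs` is a list of ((x1, y1), (x2, y2)) pixel positions; locating
        the columns is assumed to have been done beforehand.
        """
        return [column_intensity(img, *a, radius) / column_intensity(img, *b, radius)
                for a, b in pairs]

    # Toy image: two Gaussian "columns" of different brightness.
    yy, xx = np.mgrid[:64, :64]
    img = (np.exp(-((xx - 20) ** 2 + (yy - 32) ** 2) / 8.0) +
           0.6 * np.exp(-((xx - 40) ** 2 + (yy - 32) ** 2) / 8.0))
    print(ratio_map(img, [((20, 32), (40, 32))]))   # ~ [1.67], i.e. 1/0.6
    ```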

  15. Soil mapping and processes models to support climate change mitigation and adaptation strategies: a review

    NASA Astrophysics Data System (ADS)

    Muñoz-Rojas, Miriam; Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Jordan, Antonio

    2017-04-01

    As agreed in Paris in December 2015, the global average temperature increase is to be limited to "well below 2 °C above pre-industrial levels", with efforts made to "limit the temperature increase to 1.5 °C above pre-industrial levels". Thus, reducing greenhouse gas (GHG) emissions in all sectors becomes critical, and appropriate sustainable land management practices need to be adopted (Pereira et al., 2017). Mitigation strategies focus on reducing the rate and magnitude of climate change by reducing its causes. Complementary to mitigation, adaptation strategies aim to minimise impacts and maximise the benefits of new opportunities. The adoption of both practices will require developing system models to integrate and extrapolate anticipated climate changes, such as global climate models (GCMs) and regional climate models (RCMs). Furthermore, integrating climate models driven by socio-economic scenarios into soil process models has allowed the investigation of potential changes and threats to soil characteristics and functions under future climate scenarios. One of the options with the largest potential for climate change mitigation is sequestering carbon in soils. The development of new methods and the use of existing tools for soil carbon monitoring and accounting have therefore become critical in a global change context. For example, soil C maps can help identify potential areas where management practices that promote C sequestration will be productive and can guide the formulation of policies for climate change mitigation and adaptation strategies. Despite extensive efforts to compile soil information and map soil C, many uncertainties remain in the determination of soil C stocks, and the reliability of these estimates depends upon the quality and resolution of the spatial datasets used for their calculation. Thus, better estimates of soil C pools and dynamics are needed to advance understanding of the C balance and the potential of soils for climate change mitigation. Here

  16. Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit

    PubMed Central

    Lawrie, David S.

    2017-01-01

    Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processing Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
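
    A serial numpy sketch of the single-locus Wright–Fisher forward step (selection followed by binomial drift), with independent replicates vectorized across an array; that mutual independence is the "embarrassingly parallel" structure a GPU exploits. Parameters are illustrative, and this is not GO Fish's actual code.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def wright_fisher(N, s, p0, generations, replicates):
        """Single-locus Wright-Fisher forward simulation with genic selection.
        Replicates are mutually independent; numpy vectorizes across them.
        """
        p = np.full(replicates, p0)
        for _ in range(generations):
            w = p * (1 + s) / (p * (1 + s) + (1 - p))    # selection step
            p = rng.binomial(2 * N, w) / (2 * N)         # drift: binomial sampling
        return p

    p = wright_fisher(N=1_000, s=0.01, p0=0.05, generations=500, replicates=10_000)
    print("fixed:", (p == 1.0).mean(), "lost:", (p == 0.0).mean())
    ```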

  17. Charge-Transfer Processes in Warm Dense Matter: Selective Spectral Filtering for Laser-Accelerated Ion Beams

    NASA Astrophysics Data System (ADS)

    Braenzel, J.; Barriga-Carrasco, M. D.; Morales, R.; Schnürer, M.

    2018-05-01

    We investigate, both experimentally and theoretically, how the spectral distribution of laser accelerated carbon ions can be filtered by charge exchange processes in a double foil target setup. Carbon ions at multiple charge states with an initially wide kinetic energy spectrum, from 0.1 to 18 MeV, were detected with a remarkably narrow spectral bandwidth after they had passed through an ultrathin and partially ionized foil. With our theoretical calculations, we demonstrate that this process is a consequence of the evolution of the carbon ion charge states in the second foil. We calculated the resulting spectral distribution separately for each ion species by solving the rate equations for electron loss and capture processes within a collisional radiative model. We determine how the efficiency of charge transfer processes can be manipulated by controlling the ionization degree of the transfer matter.
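
    A minimal sketch of the charge-state rate equations underlying the filtering effect: fractions N_q evolve along the path through the foil via electron-loss (q → q+1) and electron-capture (q → q-1) rates. The rate coefficients here are invented placeholders, not the paper's collisional-radiative values.

    ```python
    import numpy as np

    # Charge states q = 0..6 for carbon ions crossing a foil.
    loss = np.array([2.0, 1.5, 1.0, 0.6, 0.3, 0.1])   # rate for q -> q+1 (q = 0..5)
    capt = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.5])  # rate for q -> q-1 (q = 1..6)

    N = np.zeros(7); N[6] = 1.0       # beam enters fully stripped (C6+)
    dx = 1e-3
    for _ in range(20_000):           # integrate dN/dx with forward Euler
        dN = np.zeros(7)
        dN[:-1] += capt * N[1:];  dN[1:] -= capt * N[1:]   # capture q -> q-1
        dN[1:]  += loss * N[:-1]; dN[:-1] -= loss * N[:-1] # loss    q -> q+1
        N += dx * dN
    print(np.round(N, 3))             # equilibrium charge-state distribution
    ```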

  18. Jupiter radio bursts and particle acceleration

    NASA Technical Reports Server (NTRS)

    Desch, Michael D.

    1994-01-01

    Particle acceleration processes are important in understanding many of the Jovian radio and plasma wave emissions. However, except for the high-energy electrons that generate synchrotron emission following inward diffusion from the outer magnetosphere, acceleration processes in Jupiter's magnetosphere and between Jupiter and Io are poorly understood. We discuss very recent observations from the Ulysses spacecraft of two new Jovian radio and plasma wave emissions in which particle acceleration processes are important and have been addressed directly by complementary investigations. First, radio bursts known as quasi-periodic bursts have been observed in close association with a population of highly energetic electrons. Second, a population of much lower energy (keV range) electrons on auroral field lines can be shown to be responsible for the first observation of a Jovian plasma wave emission known as auroral hiss.

  19. Journey to the Edges: Social Structures and Neural Maps of Intergroup Processes

    PubMed Central

    Fiske, Susan T.

    2013-01-01

    This article explores boundaries of the intellectual map of intergroup processes, going to the macro (social structure) boundary and the micro (neural systems) boundary. Both are illustrated with my own and others’ work on social structures and on neural structures related to intergroup processes. Analyzing the impact of social structures on intergroup processes led to insights about distinct forms of sexism and underlies current work on forms of ageism. The stereotype content model also starts with the social structure of intergroup relations (interdependence and status) and predicts images, emotions, and behaviors. Social structure has much to offer the social psychology of intergroup processes. At the other, less explored boundary, social neuroscience addresses the effects of social contexts on neural systems relevant to intergroup processes. Both social structural and neural analyses circle back to traditional social psychology as converging indicators of intergroup processes. PMID:22435843

  20. Artificial seismic acceleration

    USGS Publications Warehouse

    Felzer, Karen R.; Page, Morgan T.; Michael, Andrew J.

    2015-01-01

    In their 2013 paper, Bouchon, Durand, Marsan, Karabulut, and Schmittbuhl (BDMKS) claim to see significant accelerating seismicity before M 6.5 interplate mainshocks, but not before intraplate mainshocks, reflecting a preparatory process before large events. We concur with the finding of BDMKS that their interplate dataset has significantly more foreshocks than their intraplate dataset; however, we disagree that the foreshocks are predictive of large events in particular. Acceleration in stacked foreshock sequences has been seen before and has been explained by the cascade model, in which earthquakes occasionally trigger aftershocks larger than themselves. In this model, the time lags between the smaller mainshocks and larger aftershocks follow the inverse power law common to all aftershock sequences, creating an apparent acceleration when stacked (see Supplementary Information).
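
    A toy demonstration of the cascade-model point (an assumption-laden sketch, not the authors' analysis): draw foreshock-mainshock lags from an inverse power law p(τ) ∝ 1/τ and stack them; the binned event rate rises toward the mainshock even though the model contains no preparatory process at all.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Lags between a "foreshock" and its larger aftershock (the mainshock),
    # drawn log-uniformly, which is equivalent to a density p(tau) ~ 1/tau.
    tau_min, tau_max = 0.01, 100.0   # days
    u = rng.random(100_000)
    lags = tau_min * (tau_max / tau_min) ** u

    edges = np.array([100, 30, 10, 3, 1, 0.3, 0.1, 0.0])
    for hi, lo in zip(edges[:-1], edges[1:]):
        n = np.sum((lags <= hi) & (lags > lo))
        print(f"{lo:6.1f}-{hi:6.1f} days before mainshock: "
              f"{n / (hi - lo):9.0f} events/day")
    # The per-day rate climbs as t -> 0: an apparent acceleration from stacking.
    ```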

  1. Cloud-based computation for accelerating vegetation mapping and change detection at regional to national scales

    Treesearch

    Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts

    2015-01-01

    Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...

  2. Acceleration modules in linear induction accelerators

    NASA Astrophysics Data System (ADS)

    Wang, Shao-Heng; Deng, Jian-Jun

    2014-05-01

    The Linear Induction Accelerator (LIA) is a unique type of accelerator that is capable of accelerating kilo-Ampere charged particle currents to tens of MeV energy. The present development of the LIA in MHz bursting mode and its successful application to a synchrotron have broadened the LIA's scope of use. Although the transformer model is widely used to explain the acceleration mechanism of LIAs, for many modern LIAs it is not appropriate to consider the induction electric field as the field which accelerates charged particles. We have examined the transition of the magnetic cores' functions during the evolution of LIA acceleration modules, distinguished transformer-type and transmission-line-type LIA acceleration modules, and reconsidered several related issues based on the transmission-line-type LIA acceleration module. This clarified understanding should help in the further development and design of LIA acceleration modules.

  3. Study on acceleration processes of the radiation belt electrons through interaction with sub-packet chorus waves in parallel propagation

    NASA Astrophysics Data System (ADS)

    Hiraga, R.; Omura, Y.

    2017-12-01

    Recent observations show that chorus waves include fine structures such as amplitude fluctuations (i.e., sub-packet structure), and it has not yet been verified in detail how energetic electrons are efficiently accelerated under these wave features. In this study, we first focus on the acceleration process of a single electron: how it experiences an efficient energy increase through interaction with sub-packet chorus waves propagating parallel to the Earth's magnetic field. In order to reproduce the chorus waves as seen in the latest observations by the Van Allen Probes (Foster et al. 2017), the wave amplitude model in our simulation is structured such that when the wave amplitude nonlinearly grows to reach the optimum amplitude, it starts decreasing until crossing the threshold. Once it crosses the threshold, the wave dissipates and a new wave rises to repeat the nonlinear growth and damping in the same manner. The multiple occurrence of this growth-damping cycle forms a sawtooth-like amplitude variation called a sub-packet. This amplitude variation also affects the wave frequency behavior, which is derived from the chorus wave equations as a function of the wave amplitude (Omura et al. 2009). It is also reasonable to assume that when a wave packet diminishes and the next wave rises, it has a random phase independent of the previous wave. This randomness (discontinuity) in phase variation is included in the simulation. The dynamics of energetic electrons interacting with such waves were tracked. As a result, some electrons underwent an efficient acceleration process defined as successive entrapping, in which an electron successfully continues to surf the trapping potential generated by consecutive wave packets. When successive entrapping occurs, an electron trapped and then de-trapped (escaping the trapping potential) by a single wave packet falls into another trapping potential generated by the next wave sub-packet and is continuously accelerated. The occurrence of successive
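
    A rough sketch of the sub-packet amplitude model described above: each packet grows nonlinearly toward an optimum amplitude, damps to a threshold, and is replaced by a new packet with a random phase. The growth law, rates, and levels below are illustrative assumptions, not the simulation's actual equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    dt, n = 1e-3, 40_000
    B_opt, B_thr = 1.0, 0.2          # optimum and threshold amplitudes
    grow, decay = 3.0, 2.0           # nonlinear growth and damping rates

    B = np.empty(n); phases = []
    B[0], rising = B_thr, True
    for i in range(1, n):
        rate = grow * B[i-1] * (1 - B[i-1] / (1.1 * B_opt)) if rising \
               else -decay * B[i-1]
        B[i] = B[i-1] + dt * rate
        if rising and B[i] >= B_opt:              # optimum reached: start damping
            rising = False
        elif not rising and B[i] <= B_thr:        # packet dies; a new one rises
            rising = True
            phases.append(rng.uniform(0, 2 * np.pi))  # random phase discontinuity
    print(len(phases), "sub-packets in", n * dt, "time units")
    ```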

  4. REVIEWS OF TOPICAL PROBLEMS: Acceleration of cosmic rays by shock waves

    NASA Astrophysics Data System (ADS)

    Berezhko, E. G.; Krymskiĭ, G. F.

    1988-01-01

    Theoretical work on various processes by which shock waves accelerate cosmic rays is reviewed. The most efficient of these processes, Fermi acceleration, is singled out for special attention. A linear theory for this process is presented. The results found on the basis of nonlinear models of Fermi acceleration, which incorporate the modification of the shock structure caused by the accelerated particles, are reported. There is a discussion of various possibilities for explaining the generation of high-energy particles observed in interplanetary and interstellar space on the basis of a Fermi acceleration mechanism. The acceleration by shock waves from supernova explosions is discussed as a possible source of galactic cosmic rays. The most important unresolved questions in the theory of acceleration of charged particles by shock waves are pointed out.

  5. Microstructures and Mechanical Properties of Commercially Pure Ti Processed by Rotationally Accelerated Shot Peening

    PubMed Central

    Huang, Zhaowen; Cao, Yang; Nie, Jinfeng; Zhou, Hao; Li, Yusheng

    2018-01-01

    Gradient structured materials possess good combinations of strength and ductility, rendering the materials attractive in industrial applications. In this research, a surface nanocrystallization (SNC) technique, rotationally accelerated shot peening (RASP), was employed to produce a gradient nanostructured pure Ti with a deformation layer that had a thickness of 2000 μm, which is thicker than those processed by conventional SNC techniques. It is possible to fabricate a gradient structured Ti workpiece without delamination. Moreover, based on the microstructural features, the microstructure of the processed sample can be classified into three regions, from the center to the surface of the RASP-processed sample: (1) a twinning-dominated core region; (2) a “twin intersection”-dominated twin transition region; and (3) the nanostructured region, featuring nanograins. A microhardness gradient was detected from the RASP-processed Ti. The surface hardness was more than twice that of the annealed Ti sample. The RASP-processed Ti sample exhibited a good combination of yield strength and uniform elongation, which may be attributed to the high density of deformation twins and a strong back stress effect. PMID:29498631

  6. Correlation of Noise Signature to Pulsed Power Events at the HERMES III Accelerator.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Barbara; Joseph, Nathan Ryan; Salazar, Juan Diego

    2016-11-01

    The HERMES III accelerator, which is located at Sandia National Laboratories' Tech Area IV, is the largest pulsed gamma X-ray source in the world. The accelerator is made up of 20 inductive cavities that are charged to 1 MV each by complex pulsed power circuitry. The firing time of the machine components ranges between the microsecond and nanosecond timescales. This results in a variety of electromagnetic frequencies when the accelerator fires. Testing was done to identify the HERMES electromagnetic noise signal and to map it to the various accelerator trigger events. This report will show the measurement methods used to capture the noise spectrum produced by the machine and correlate this noise signature with machine events.

  7. Lecture Notes on Topics in Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Alex W.

    These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for Nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.

  8. The effect of stochastic re-acceleration on the energy spectrum of shock-accelerated protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afanasiev, Alexandr; Vainio, Rami; Kocharov, Leon

    2014-07-20

    The energy spectra of particles in gradual solar energetic particle (SEP) events do not always have a power-law form attributed to the diffusive shock acceleration mechanism. In particular, the observed spectra in major SEP events can take the form of a broken (double) power law. In this paper, we study the effect of a process that can modify the power-law spectral form produced by the diffusive shock acceleration: the stochastic re-acceleration of energetic protons by enhanced Alfvénic turbulence in the downstream region of a shock wave. There are arguments suggesting that this process can be important when the shock propagates in the corona. We consider a coronal magnetic loop traversed by a shock and perform Monte Carlo simulations of interactions of shock-accelerated protons with Alfvén waves in the loop. The wave-particle interactions are treated self-consistently, so the finiteness of the available turbulent energy is taken into account. The initial energy spectrum of particles is taken to be a power law. The simulations reveal that the stochastic re-acceleration leads either to the formation of a spectrum that is described in a wide energy range by a power law (although the resulting power-law index is different from the initial one) or to a broken power-law spectrum. The resulting spectral form is determined by the ratio of the energy density of shock-accelerated protons to the wave energy density in the shock's downstream region.
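
    A toy Monte Carlo sketch of the re-acceleration step only (hedged: the paper's simulations treat wave-particle interactions self-consistently with a finite turbulent-energy reservoir, which this illustration ignores): protons drawn from an initial power law undergo stochastic momentum diffusion, after which the spectrum can be re-histogrammed. All parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(1)

      n = 200_000
      gamma_s = 2.0                               # assumed initial spectral index
      p = rng.pareto(gamma_s - 1.0, n) + 1.0      # p >= 1, dN/dp ~ p^-gamma_s (asymptotically)

      d_pp = 0.02                                 # dimensionless diffusion strength
      dt, steps = 1.0, 50
      for _ in range(steps):
          # Stochastic kicks for D_pp ~ d_pp * p^2 (drift term neglected for brevity)
          p += np.sqrt(2.0 * d_pp * dt) * p * rng.normal(size=n)
          p = np.abs(p)                           # keep momenta positive

      counts, edges = np.histogram(np.log10(p), bins=60)
      # The slope of log10(counts) vs. log10(p) gives the re-processed index.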

  9. Accelerating Commercial Remote Sensing

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Through the Visiting Investigator Program (VIP) at Stennis Space Center, Community Coffee was able to use satellites to forecast coffee crops in Guatemala. Using satellite imagery, the company can produce detailed maps that separate coffee cropland from wild vegetation and show information on the health of specific crops. The data can help control coffee prices and eventually may be used to optimize the application of fertilizers, pesticides and irrigation. This would result in maximal crop yields, minimal pollution and lower production costs. VIP is a mechanism involving NASA funding designed to accelerate the growth of commercial remote sensing by promoting general awareness and basic training in the technology.

  10. Modeling of a self-healing process in blast furnace slag cement exposed to accelerated carbonation

    NASA Astrophysics Data System (ADS)

    Zemskov, Serguey V.; Ahmad, Bilal; Copuroglu, Oguzhan; Vermolen, Fred J.

    2013-02-01

    In the current research, a mathematical model for the post-damage improvement of carbonated blast furnace slag cement (BFSC) exposed to accelerated carbonation is constructed. The study is embedded within the framework of investigating the effect of using lightweight expanded clay aggregate impregnated with a sodium mono-fluorophosphate (Na-MFP) solution. The model of the self-healing process is built under the assumption that the position of the carbonation front changes in time, while the rate of diffusion of Na-MFP into the carbonated cement matrix and the reaction rates of the free phosphate and fluorophosphate with the components of the cement are comparable to the speed of the carbonation front under accelerated carbonation conditions. The model is based on an initial-boundary value problem for a system of partial differential equations, which is solved using a Galerkin finite element method. The results obtained are discussed and generalized to a three-dimensional case.
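
    A deliberately simplified 1D finite-difference sketch of the model's core coupling (the paper solves the full PDE system with a Galerkin finite element method; all coefficients here are invented placeholders): Na-MFP diffuses in from the impregnated surface, reacts only within the carbonated layer, and the carbonation front advances in time.

      import numpy as np

      L_domain, nx = 1.0, 101          # domain length (cm), grid points
      dx = L_domain / (nx - 1)
      D, k_react = 1e-3, 5e-3          # diffusivity (cm^2/s), reaction rate (1/s)
      v_front = 1e-5                   # carbonation front speed (cm/s)
      dt = 0.4 * dx**2 / D             # explicit stability limit

      c = np.zeros(nx)                 # Na-MFP concentration (normalized)
      c[0] = 1.0                       # impregnated surface held at unit concentration
      front = 0.2                      # initial carbonation depth (cm)

      for step in range(20000):
          lap = np.zeros(nx)
          lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
          active = (np.arange(nx) * dx) <= front   # reaction only where carbonated
          c[1:-1] += dt * (D * lap[1:-1] - k_react * active[1:-1] * c[1:-1])
          c[0] = 1.0
          c[-1] = c[-2]                # zero-flux far boundary
          front = min(front + v_front * dt, L_domain)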

  11. Process for Generating Engine Fuel Consumption Map: Ricardo Cooled EGR Boost 24-bar Standard Car Engine Tier 2 Fuel

    EPA Pesticide Factsheets

    This document summarizes the process followed to utilize the fuel consumption map of a Ricardo modeled engine and vehicle fuel consumption data to generate a full engine fuel consumption map which can be used by EPA's ALPHA vehicle simulations.
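
    A hedged sketch of the general mechanics of such a map (the actual EPA/Ricardo procedure is more involved): scattered (speed, torque) -> fuel-rate points are interpolated onto a full grid, with a nearest-neighbour fallback outside the data hull. All data values are invented.

      import numpy as np
      from scipy.interpolate import griddata

      # (rpm, Nm) operating points with measured fuel flow in g/s (made up)
      pts = np.array([[1000, 50], [1000, 150], [2000, 50],
                      [2000, 150], [3000, 100], [4000, 200]], float)
      fuel = np.array([0.8, 1.9, 1.5, 3.4, 2.9, 6.1])

      rpm, trq = np.meshgrid(np.linspace(1000, 4000, 61),
                             np.linspace(50, 200, 31))
      full_map = griddata(pts, fuel, (rpm, trq), method="linear")
      # Outside the convex hull of the data, fall back to nearest-neighbour:
      nearest = griddata(pts, fuel, (rpm, trq), method="nearest")
      full_map = np.where(np.isnan(full_map), nearest, full_map)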

  12. Breakthrough: Fermilab Accelerator Technology

    ScienceCinema

    None

    2018-02-07

    There are more than 30,000 particle accelerators in operation around the world. At Fermilab, scientists are collaborating with other laboratories and industry to optimize the manufacturing processes for a new type of powerful accelerator that uses superconducting niobium cavities. Experimenting with unique polishing materials, a Fermilab team has now developed an efficient and environmentally friendly way of creating cavities that can propel particles with more than 30 million volts per meter.

  13. Breakthrough: Fermilab Accelerator Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-04-23

    There are more than 30,000 particle accelerators in operation around the world. At Fermilab, scientists are collaborating with other laboratories and industry to optimize the manufacturing processes for a new type of powerful accelerator that uses superconducting niobium cavities. Experimenting with unique polishing materials, a Fermilab team has now developed an efficient and environmentally friendly way of creating cavities that can propel particles with more than 30 million volts per meter.

  14. Acceleration of high-pressure-ratio single-spool turbojet engine as determined from component performance characteristics I : effect of air bleed at compressor outlet

    NASA Technical Reports Server (NTRS)

    Rebeske, John J., Jr.; Rohlik, Harold E.

    1953-01-01

    An analytical investigation was made to determine from component performance characteristics the effect of air bleed at the compressor outlet on the acceleration characteristics of a typical high-pressure-ratio single-spool turbojet engine. Consideration of several operating lines on the compressor performance map with two turbine-inlet temperatures showed that for a minimum acceleration time the turbine-inlet temperature should be the maximum allowable, and the operating line on the compressor map should be as close to the surge region as possible throughout the speed range. Operation along such a line would require a continuously varying bleed area. A relatively simple two-step area bleed gives only a small increase in acceleration time over a corresponding variable-area bleed. For the modes of operation considered, over 84 percent of the total acceleration time was required to accelerate through the low-speed range; therefore, better low-speed compressor performance (higher pressure ratios and efficiencies) would give a significant reduction in acceleration time.

  15. An algorithm for automated layout of process description maps drawn in SBGN.

    PubMed

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Evolving technology has increased the focus on genomics. The combination of today's advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specialize on process description (PD) maps as defined by SBGN. We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes and extensively making use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. An implementation of our algorithm in Java is available within ChiLay library (https://github.com/iVis-at-Bilkent/chilay). ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
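
    A toy spring-embedder sketch of the flavour of constraint described (emphatically not CoSE or the authors' algorithm): plain spring and repulsion forces plus one extra SBGN-style force that pushes substrates above and products below a process node, mimicking the opposite-sides rule. Node indices and all constants are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      edges = [(0, 4), (1, 4), (4, 2), (4, 3)]   # node 4 = process; 0,1 substrates; 2,3 products
      substrates, products, process = {0, 1}, {2, 3}, 4

      pos = rng.normal(size=(5, 2))
      k_spring, k_rep, k_side, L = 0.1, 0.05, 0.05, 1.0

      for _ in range(500):
          force = np.zeros_like(pos)
          for i, j in edges:                      # spring forces along edges
              d = pos[j] - pos[i]
              dist = np.linalg.norm(d) + 1e-9
              f = k_spring * (dist - L) * d / dist
              force[i] += f
              force[j] -= f
          for i in range(len(pos)):               # pairwise repulsion
              for j in range(i + 1, len(pos)):
                  d = pos[j] - pos[i]
                  f = k_rep * d / (d @ d + 1e-9)
                  force[i] -= f
                  force[j] += f
          for i in substrates:                    # SBGN-style side constraint
              force[i, 1] += k_side * max(0.0, pos[process, 1] + L - pos[i, 1])
          for i in products:
              force[i, 1] -= k_side * max(0.0, pos[i, 1] - (pos[process, 1] - L))
          pos += force                            # simple gradient step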

  16. An algorithm for automated layout of process description maps drawn in SBGN

    PubMed Central

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Motivation: Evolving technology has increased the focus on genomics. The combination of today’s advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specialize on process description (PD) maps as defined by SBGN. Results: We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes and extensively making use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. Availability and implementation: An implementation of our algorithm in Java is available within ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26363029

  17. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
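
    A minimal sketch of the core computational kernel, cross-correlation of a station pair (real pipelines add spectral whitening, temporal normalization and stacking over many days, and run on GPUs): two synthetic noise traces with a known propagation delay are correlated and the lag recovered. The sampling rate and delay are invented.

      import numpy as np

      rng = np.random.default_rng(2)
      fs = 20.0                                  # sampling rate (Hz)
      n = int(3600 * fs)                         # one hour of samples

      noise_src = rng.normal(size=n)
      lag_true = int(4.2 * fs)                   # 4.2 s propagation between stations
      sta_a = noise_src + 0.1 * rng.normal(size=n)
      sta_b = np.roll(noise_src, lag_true) + 0.1 * rng.normal(size=n)

      max_lag = int(30 * fs)
      lags = np.arange(-max_lag, max_lag + 1)
      cc = np.array([np.dot(sta_a[max_lag:-max_lag],
                            sta_b[max_lag + l: n - max_lag + l]) for l in lags])
      print("recovered lag:", lags[cc.argmax()] / fs, "s")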

  18. The Louisiana Accelerated Schools Project First Year Evaluation Report.

    ERIC Educational Resources Information Center

    St. John, Edward P.; And Others

    The Louisiana Accelerated Schools Project (LASP) is a statewide network of schools that are changing from the traditional mode of schooling for at-risk students, which stresses remediation, to one of acceleration, which stresses accelerated learning for all students. The accelerated schools process provides a systematic approach to the…

  19. Detailed Modeling of Physical Processes in Electron Sources for Accelerator Applications

    NASA Astrophysics Data System (ADS)

    Chubenko, Oksana; Afanasev, Andrei

    2017-01-01

    At present, electron sources are essential in a wide range of applications - from common technical use to exploring the nature of matter. Depending on the application requirements, different methods and materials are used to generate electrons. State-of-the-art accelerator applications set a number of often-conflicting requirements for electron sources (e.g., quantum efficiency vs. polarization, current density vs. lifetime, etc). Development of advanced electron sources includes modeling and design of cathodes, material growth, fabrication of cathodes, and cathode testing. The detailed simulation and modeling of physical processes is required in order to shed light on the exact mechanisms of electron emission and to develop new-generation electron sources with optimized efficiency. The purpose of the present work is to study physical processes in advanced electron sources and develop scientific tools, which could be used to predict electron emission from novel nano-structured materials. In particular, the area of interest includes bulk/superlattice gallium arsenide (bulk/SL GaAs) photo-emitters and nitrogen-incorporated ultrananocrystalline diamond ((N)UNCD) photo/field-emitters. Work supported by The George Washington University and Euclid TechLabs LLC.

  20. Blocking the association of HDAC4 with MAP1S accelerates autophagy clearance of mutant Huntingtin

    PubMed Central

    Yue, Fei; Li, Wenjiao; Zou, Jing; Chen, Qi; Xu, Guibin; Huang, Hai; Xu, Zhen; Zhang, Sheng; Gallinari, Paola; Wang, Fen; McKeehan, Wallace L.; Liu, Leyuan

    2015-01-01

    Autophagy controls and executes the turnover of abnormally aggregated proteins. MAP1S interacts with the autophagy marker LC3 and positively regulates autophagy flux. HDAC4 associates with the aggregation-prone mutant huntingtin protein (mHTT) that causes Huntington's disease, and colocalizes with it in cytosolic inclusions. A yeast two-hybrid screen had suggested that HDAC4 interacts with MAP1S. Here, we found that MAP1S interacts with HDAC4 via an HDAC4-binding domain (HBD). HDAC4 destabilizes MAP1S, suppresses autophagy flux and promotes the accumulation of mHTT aggregates. This occurs through an increase in the deacetylation of the acetylated MAP1S. Either suppression of HDAC4 with siRNA or overexpression of the MAP1S HBD leads to stabilization of MAP1S, activation of autophagy flux and clearance of mHTT aggregates. Therefore, specific interruption of the HDAC4-MAP1S interaction with short peptides or small molecules to enhance autophagy flux may relieve the toxicity of mHTT associated with Huntington's disease and improve symptoms of HD patients. PMID:26540094

  1. GRIM-Filter: Fast seed location filtering in DNA read mapping using processing-in-memory technologies.

    PubMed

    Kim, Jeremie S; Senol Cali, Damla; Xin, Hongyi; Lee, Donghyuk; Ghose, Saugata; Alser, Mohammed; Hassan, Hasan; Ergin, Oguz; Alkan, Can; Mutlu, Onur

    2018-05-09

    Seed location filtering is critical in DNA read mapping, a process where billions of DNA fragments (reads) sampled from a donor are mapped onto a reference genome to identify genomic variants of the donor. State-of-the-art read mappers 1) quickly generate possible mapping locations for seeds (i.e., smaller segments) within each read, 2) extract reference sequences at each of the mapping locations, and 3) check similarity between each read and its associated reference sequences with a computationally-expensive algorithm (i.e., sequence alignment) to determine the origin of the read. A seed location filter comes into play before alignment, discarding seed locations that alignment would deem a poor match. The ideal seed location filter would discard all poor match locations prior to alignment such that there is no wasted computation on unnecessary alignments. We propose a novel seed location filtering algorithm, GRIM-Filter, optimized to exploit 3D-stacked memory systems that integrate computation within a logic layer stacked under memory layers, to perform processing-in-memory (PIM). GRIM-Filter quickly filters seed locations by 1) introducing a new representation of coarse-grained segments of the reference genome, and 2) using massively-parallel in-memory operations to identify read presence within each coarse-grained segment. Our evaluations show that for a sequence alignment error tolerance of 0.05, GRIM-Filter 1) reduces the false negative rate of filtering by 5.59x-6.41x, and 2) provides an end-to-end read mapper speedup of 1.81x-3.65x, compared to a state-of-the-art read mapper employing the best previous seed location filtering algorithm. GRIM-Filter exploits 3D-stacked memory, which enables the efficient use of processing-in-memory, to overcome the memory bandwidth bottleneck in seed location filtering. We show that GRIM-Filter significantly improves the performance of a state-of-the-art read mapper. GRIM-Filter is a universal seed location filter that can be
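
    A plain-Python sketch of the bin/bitvector idea (a stand-in for the massively parallel in-memory operations; the token length, bin size and hit threshold are invented): each coarse genome bin stores a bitvector over all length-q tokens it contains, and a seed location is kept only if enough tokens of the read are present in that bin.

      import numpy as np
      from itertools import product

      Q = 4
      TOKENS = {"".join(t): i for i, t in enumerate(product("ACGT", repeat=Q))}

      def bin_bitvector(segment):
          # One bit per possible length-Q token, set if the token occurs in the bin.
          bv = np.zeros(len(TOKENS), dtype=bool)
          for i in range(len(segment) - Q + 1):
              bv[TOKENS[segment[i:i + Q]]] = True
          return bv

      def passes_filter(read, bv, min_hits):
          hits = sum(bv[TOKENS[read[i:i + Q]]] for i in range(len(read) - Q + 1))
          return hits >= min_hits     # threshold tied to the error tolerance

      reference = "ACGTACGTGGTTAACCACGTACGT" * 20
      bins = [reference[i:i + 64] for i in range(0, len(reference), 64)]
      bitvectors = [bin_bitvector(b) for b in bins]
      read = "ACGTACGTGGTT"
      candidates = [k for k, bv in enumerate(bitvectors)
                    if passes_filter(read, bv, min_hits=len(read) - Q)]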

  2. St. Louis area earthquake hazards mapping project; seismic and liquefaction hazard maps

    USGS Publications Warehouse

    Cramer, Chris H.; Bauer, Robert A.; Chung, Jae-won; Rogers, David; Pierce, Larry; Voigt, Vicki; Mitchell, Brad; Gaunt, David; Williams, Robert; Hoffman, David; Hempen, Gregory L.; Steckel, Phyllis; Boyd, Oliver; Watkins, Connor M.; Tucker, Kathleen; McCallister, Natasha

    2016-01-01

    We present probabilistic and deterministic seismic and liquefaction hazard maps for the densely populated St. Louis metropolitan area that account for the expected effects of surficial geology on earthquake ground shaking. Hazard calculations were based on a map grid of 0.005°, or about every 500 m, and are thus higher in resolution than any earlier studies. To estimate ground motions at the surface of the model (e.g., site amplification), we used a new detailed near‐surface shear‐wave velocity model in a 1D equivalent‐linear response analysis. When compared with the 2014 U.S. Geological Survey (USGS) National Seismic Hazard Model, which uses a uniform firm‐rock‐site condition, the new probabilistic seismic‐hazard estimates document much more variability. Hazard levels for upland sites (consisting of bedrock and weathered bedrock overlain by loess‐covered till and drift deposits) show up to twice the ground‐motion values for peak ground acceleration (PGA), and similar ground‐motion values for 1.0 s spectral acceleration (SA). Probabilistic ground‐motion levels for lowland alluvial floodplain sites (generally the 20–40‐m‐thick modern Mississippi and Missouri River floodplain deposits overlying bedrock) exhibit up to twice the ground‐motion levels for PGA, and up to three times the ground‐motion levels for 1.0 s SA. Liquefaction probability curves were developed from available standard penetration test data assuming typical lowland and upland water table levels. A simplified liquefaction hazard map was created from the 5%‐in‐50‐year probabilistic ground‐shaking model. The liquefaction hazard ranges from low in the uplands to high (more than 60% of the area expected to liquefy) in the lowlands. Because many transportation routes, power and gas transmission lines, and population centers exist in or on the highly susceptible lowland alluvium, these areas in the St. Louis region are at significant potential risk from seismically induced liquefaction and associated

  3. Mapping racism.

    PubMed

    Moss, Donald B

    2006-01-01

    The author uses the metaphor of mapping to illuminate a structural feature of racist thought, locating the degraded object along vertical and horizontal axes. These axes establish coordinates of hierarchy and of distance. With the coordinates in place, racist thought begins to seem grounded in natural processes. The other's identity becomes consolidated, and parochialism results. The use of this kind of mapping is illustrated via two patient vignettes. The author presents Freud's (1905, 1927) views in relation to such a "mapping" process, as well as Adorno's (1951) and Baldwin's (1965). Finally, the author conceptualizes the crucial status of primitivity in the workings of racist thought.

  4. Particle Acceleration in Active Galactic Nuclei

    NASA Technical Reports Server (NTRS)

    Miller, James A.

    1997-01-01

    The high efficiency of energy generation inferred from radio observations of quasars and X-ray observations of Seyfert active galactic nuclei (AGNs) is apparently achieved only by the gravitational conversion of the rest mass energy of accreting matter onto supermassive black holes. Evidence for the acceleration of particles to high energies by a central engine is also inferred from observations of apparent superluminal motion in flat spectrum, core-dominated radio sources. This phenomenon is widely attributed to the ejection of relativistic bulk plasma from the nuclei of active galaxies, and accounts for the existence of large scale radio jets and lobes at large distances from the central regions of radio galaxies. Reports of radio jets and superluminal motion from galactic black hole candidate X-ray sources indicate that similar processes are operating in these sources. Observations of luminous, rapidly variable high-energy radiation from active galactic nuclei (AGNs) with the Compton Gamma Ray Observatory show directly that particles are accelerated to high energies in a compact environment. The mechanisms which transform the gravitational potential energy of the infalling matter into nonthermal particle energy in galactic black hole candidates and AGNs are not conclusively identified, although several have been proposed. These include direct acceleration by static electric fields (resulting from, for example, magnetic reconnection), shock acceleration, and energy extraction from the rotational energy of Kerr black holes. The dominant acceleration mechanism(s) operating in the black hole environment can only be determined, of course, by a comparison of model predictions with observations. The purpose of the work proposed for this grant was to investigate stochastic particle acceleration through resonant interactions with plasma waves that populate the magnetosphere surrounding an accreting black hole. Stochastic acceleration has been successfully applied to the

  5. Accelerating the commercialization of university technologies for military healthcare applications: the role of the proof of concept process

    NASA Astrophysics Data System (ADS)

    Ochoa, Rosibel; DeLong, Hal; Kenyon, Jessica; Wilson, Eli

    2011-06-01

    The von Liebig Center for Entrepreneurism and Technology Advancement at UC San Diego (vonliebig.ucsd.edu) is focused on accelerating technology transfer and commercialization through programs and education on entrepreneurism. Technology Acceleration Projects (TAPs) that offer pre-venture grants and extensive mentoring on technology commercialization are a key component of its model, which has been developed over the past ten years with the support of a grant from the von Liebig Foundation. In 2010, the von Liebig Entrepreneurism Center partnered with the U.S. Army Telemedicine and Advanced Technology Research Center (TATRC) to develop a regional model of the Technology Acceleration Program, initially focused on military research, to be deployed across the nation to increase awareness of military medical needs and to accelerate the commercialization of novel technologies to treat the patient. Participants in these challenges are multi-disciplinary teams of graduate students and faculty in engineering, medicine and business representing universities and research institutes in a region, selected via a competitive process, who receive commercialization assistance and funding grants to support translation of their research discoveries into products or services. To validate this model, a pilot program focused on commercialization of wireless healthcare technologies targeting campuses in Southern California has been conducted with the additional support of Qualcomm, Inc. Three projects representing three different universities in Southern California were selected out of forty-five applications from ten different universities and research institutes. Over the next twelve months, these teams will conduct proof of concept studies, technology development and preliminary market research to determine the commercial feasibility of their technologies. This first regional program will help build the needed tools and processes to adapt and replicate this model across other regions in the

  6. Land use/land cover mapping using multi-scale texture processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Wong, S. N.; Sarker, M. L. R.

    2014-02-01

    Land use/land cover (LULC) maps are useful for many purposes, and for a long time remote sensing techniques have been used for LULC mapping using different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS were used to perform land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five different window sizes (from 3×3 to 11×11). Three different classifiers, i.e. Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN) and Support Vector Machine (SVM), were used to classify the texture parameters of different spectral bands individually and all bands together, using the same training and validation samples. Results indicated that texture parameters of all bands together generally showed better performance (overall accuracy = 90.10%) for LULC mapping, whereas a single spectral band could only achieve an overall accuracy of 72.67%. This research also found an improvement in overall accuracy (OA) using a single-texture multi-scale approach (OA = 89.10%) and a single-scale multi-texture approach (OA = 90.10%) compared with all original bands (OA = 84.02%), because of the complementary information from different bands and different texture algorithms. All three classifiers showed high accuracy with the different texture approaches, but SVM generally showed higher accuracy (90.10%) compared to MLC (89.10%) and ANN (89.67%), especially for complex classes such as urban and road.
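
    A compact sketch of the texture-classification chain under stated assumptions: non-overlapping patches instead of sliding windows, synthetic two-class data, and scikit-image's graycomatrix/graycoprops (spelled greycomatrix in older releases) with scikit-learn's SVC as stand-ins for the paper's workflow.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      rng = np.random.default_rng(9)

      def texture_features(patch):
          # GLCM at distance 1 in two directions; three texture measures.
          glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                              levels=64, symmetric=True, normed=True)
          return np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "ASM")])

      # Synthetic stand-ins for two land-cover classes with different textures:
      smooth = [rng.integers(20, 28, (11, 11), dtype=np.uint8) for _ in range(50)]
      rough = [rng.integers(0, 64, (11, 11), dtype=np.uint8) for _ in range(50)]
      X = np.array([texture_features(p) for p in smooth + rough])
      y = np.array([0] * 50 + [1] * 50)

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.score(X, y))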

  7. How to generate a sound-localization map in fish

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely unbiased by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense calcareous stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of the left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where the optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
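
    An illustrative sketch of the phase-coding step (the precise mapping used by the authors is not reproduced; the forms of kappa and mu below are assumptions): spike phases of a hair cell with preferred orientation θj are drawn from a von Mises distribution that tightens, and advances, as the cell's orientation aligns with the acceleration direction θs.

      import numpy as np

      rng = np.random.default_rng(7)

      def spike_phases(theta_j, theta_s, n_spikes=100, kappa_max=20.0):
          # Earlier and tighter phases for better-aligned hair cells.
          alignment = np.cos(theta_j - theta_s)          # in [-1, 1]
          kappa = kappa_max * max(alignment, 0.0)        # concentration (assumed form)
          mu = -0.5 * alignment                          # mean phase advance (assumed form)
          return rng.vonmises(mu, kappa + 1e-6, n_spikes)

      # Population of hair cells with uniformly distributed orientations:
      orientations = np.linspace(0, 2 * np.pi, 36, endpoint=False)
      phases = [spike_phases(th, theta_s=np.pi / 3) for th in orientations]
      # Downstream map neurons could then tune in to the earliest
      # synchronized activity across this population.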

  8. Mode-selective mapping and control of vectorial nonlinear-optical processes in multimode photonic-crystal fibers.

    PubMed

    Hu, Ming-Lie; Wang, Ching-Yue; Song, You-Jian; Li, Yan-Feng; Chai, Lu; Serebryannikov, Evgenii; Zheltikov, Aleksei

    2006-02-06

    We demonstrate an experimental technique that allows a mapping of vectorial nonlinear-optical processes in multimode photonic-crystal fibers (PCFs). Spatial and polarization modes of PCFs are selectively excited in this technique by varying the tilt angle of the input beam and rotating the polarization of the input field. Intensity spectra of the PCF output plotted as a function of the input field power and polarization then yield mode-resolved maps of nonlinear-optical interactions in multimode PCFs, facilitating the analysis and control of nonlinear-optical transformations of ultrashort laser pulses in such fibers.

  9. EIDOSCOPE: particle acceleration at plasma boundaries

    NASA Astrophysics Data System (ADS)

    Vaivads, A.; Andersson, G.; Bale, S. D.; Cully, C. M.; De Keyser, J.; Fujimoto, M.; Grahn, S.; Haaland, S.; Ji, H.; Khotyaintsev, Yu. V.; Lazarian, A.; Lavraud, B.; Mann, I. R.; Nakamura, R.; Nakamura, T. K. M.; Narita, Y.; Retinò, A.; Sahraoui, F.; Schekochihin, A.; Schwartz, S. J.; Shinohara, I.; Sorriso-Valvo, L.

    2012-04-01

    We describe the mission concept of how ESA can make a major contribution to the Japanese Canadian multi-spacecraft mission SCOPE by adding one cost-effective spacecraft EIDO (Electron and Ion Dynamics Observatory), which has a comprehensive and optimized plasma payload to address the physics of particle acceleration. The combined mission EIDOSCOPE will distinguish amongst and quantify the governing processes of particle acceleration at several important plasma boundaries and their associated boundary layers: collisionless shocks, plasma jet fronts, thin current sheets and turbulent boundary layers. Particle acceleration and associated cross-scale coupling is one of the key outstanding topics to be addressed in the Plasma Universe. The very important science questions that only the combined EIDOSCOPE mission will be able to tackle are: 1) Quantitatively, what are the processes and efficiencies with which both electrons and ions are selectively injected and subsequently accelerated by collisionless shocks? 2) How does small-scale electron and ion acceleration at jet fronts due to kinetic processes couple simultaneously to large scale acceleration due to fluid (MHD) mechanisms? 3) How does multi-scale coupling govern acceleration mechanisms at electron, ion and fluid scales in thin current sheets? 4) How do particle acceleration processes inside turbulent boundary layers depend on turbulence properties at ion/electron scales? EIDO particle instruments are capable of resolving full 3D particle distribution functions in both thermal and suprathermal regimes and at high enough temporal resolution to resolve the relevant scales even in very dynamic plasma processes. The EIDO spin axis is designed to be sun-pointing, allowing EIDO to carry out the most sensitive electric field measurements ever accomplished in the outer magnetosphere. Combined with a nearby SCOPE Far Daughter satellite, EIDO will form a second pair (in addition to SCOPE Mother-Near Daughter) of closely

  10. Mapping landscape corridors

    Treesearch

    Peter Vogt; Kurt H. Riitters; Marcin Iwanowski; Christine Estreguil; Jacek Kozak; Pierre Soille

    2007-01-01

    Corridors are important geographic features for biological conservation and biodiversity assessment. The identification and mapping of corridors is usually based on visual interpretations of movement patterns (functional corridors) or habitat maps (structural corridors). We present a method for automated corridor mapping with morphological image processing, and...

  11. Tools for developing a quality management program: proactive tools (process mapping, value stream mapping, fault tree analysis, and failure mode and effects analysis).

    PubMed

    Rath, Frank

    2008-01-01

    This article examines the concepts of quality management (QM) and quality assurance (QA), as well as the current state of QM and QA practices in radiotherapy. A systematic approach incorporating a series of industrial engineering-based tools is proposed, which can be applied in health care organizations proactively to improve process outcomes, reduce risk and/or improve patient safety, improve through-put, and reduce cost. This tool set includes process mapping and process flowcharting, failure modes and effects analysis (FMEA), value stream mapping, and fault tree analysis (FTA). Many health care organizations do not have experience in applying these tools and therefore do not understand how and when to use them. As a result there are many misconceptions about how to use these tools, and they are often incorrectly applied. This article describes these industrial engineering-based tools and also how to use them, when they should be used (and not used), and the intended purposes for their use. In addition the strengths and weaknesses of each of these tools are described, and examples are given to demonstrate the application of these tools in health care settings.
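
    As a concrete example of how FMEA output is commonly prioritized (the conventional risk priority number, RPN = severity × occurrence × detection; the article discusses FMEA generally, and organizations may weight factors differently; the failure modes below are invented radiotherapy-flavoured placeholders):

      # Rank invented failure modes by RPN on the usual 1-10 scales.
      failure_modes = [
          # (description, severity, occurrence, detection)
          ("wrong patient plan loaded",      9, 2, 3),
          ("couch position outside limits",  6, 4, 2),
          ("MLC leaf calibration drift",     7, 3, 6),
      ]

      ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
      for desc, s, o, d in ranked:
          print(f"RPN={s * o * d:4d}  {desc}")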

  12. Exploring the Interactive Patterns of Concept Map-Based Online Discussion: A Sequential Analysis of Users' Operations, Cognitive Processing, and Knowledge Construction

    ERIC Educational Resources Information Center

    Wu, Sheng-Yi; Chen, Sherry Y.; Hou, Huei-Tse

    2016-01-01

    Concept maps can be used as a cognitive tool to assist learners' knowledge construction. However, in a concept map-based online discussion environment, studies that take into consideration learners' manipulative actions of composing concept maps, cognitive process among learners' discussion, and social knowledge construction at the same time are…

  13. Hybrid optical acoustic seafloor mapping

    NASA Astrophysics Data System (ADS)

    Inglis, Gabrielle

    The oceanographic research and industrial communities have a persistent demand for detailed three-dimensional seafloor maps which convey both shape and texture. Such data products are used for archeology, geology, ship inspection, biology, and habitat classification. There are a variety of sensing modalities and processing techniques available to produce these maps, and each has its own potential benefits and related challenges. Multibeam sonar and stereo vision are two such sensors with complementary strengths, making them ideally suited for data fusion. Data fusion approaches, however, have seen only limited application to underwater mapping, and there are no established methods for creating hybrid 3D reconstructions from two underwater sensing modalities. This thesis develops a processing pipeline to synthesize hybrid maps from multi-modal survey data. It is helpful to think of this processing pipeline as having two distinct phases: Navigation Refinement and Map Construction. This thesis extends existing work in underwater navigation refinement by incorporating methods which increase measurement consistency between both multibeam and camera. The result is a self-consistent 3D point cloud comprising camera and multibeam measurements. In the map construction phase, a subset of the multi-modal point cloud retaining the best characteristics of each sensor is selected to be part of the final map. To quantify the desired traits of a map, several characteristics of a useful map are distilled into specific criteria. The different ways that hybrid maps can address these criteria provide justification for producing them as an alternative to current methodologies. The processing pipeline implements multi-modal data fusion and outlier rejection with emphasis on different aspects of map fidelity. The resulting point cloud is evaluated in terms of how well it addresses the map criteria. The final hybrid maps retain the strengths of both sensors and show significant improvement
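
    A toy sketch of the map-construction idea (not the thesis pipeline; the grid size, density criterion and thresholds are invented): fuse camera and multibeam points by keeping, in each grid cell, the sensor better suited there, e.g. camera points where optical coverage is dense and sonar points elsewhere.

      import numpy as np

      rng = np.random.default_rng(4)
      cam = rng.uniform(0, 10, (2000, 3))        # dense optical points (x, y, z)
      sonar = rng.uniform(0, 10, (500, 3))       # sparser acoustic points

      cell = 1.0
      def cell_index(pts):
          return (pts[:, :2] // cell).astype(int)

      cam_cells = cell_index(cam)
      counts = {}
      for c in map(tuple, cam_cells):            # camera points per grid cell
          counts[c] = counts.get(c, 0) + 1

      dense = {c for c, n in counts.items() if n >= 20}
      keep_cam = np.array([tuple(c) in dense for c in cam_cells])
      keep_sonar = np.array([tuple(c) not in dense for c in cell_index(sonar)])

      hybrid = np.vstack([cam[keep_cam], sonar[keep_sonar]])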

  14. Does MRI scan acceleration affect power to track brain change?

    PubMed

    Ching, Christopher R K; Hua, Xue; Hibar, Derrek P; Ward, Chadwick P; Gunter, Jeffrey L; Bernstein, Matt A; Jack, Clifford R; Weiner, Michael W; Thompson, Paul M

    2015-01-01

    The Alzheimer's Disease Neuroimaging Initiative recently implemented accelerated T1-weighted structural imaging to reduce scan times. Faster scans may reduce study costs and patient attrition by accommodating people who cannot tolerate long scan sessions. However, little is known about how scan acceleration affects the power to detect longitudinal brain change. Using tensor-based morphometry, no significant difference was detected in numerical summaries of atrophy rates from accelerated and nonaccelerated scans in subgroups of patients with Alzheimer's disease, early or late mild cognitive impairment, or healthy controls over a 6- and 12-month scan interval. Whole-brain voxelwise mapping analyses revealed some apparent regional differences in 6-month atrophy rates when comparing all subjects irrespective of diagnosis (n = 345). No such whole-brain difference was detected for the 12-month scan interval (n = 156). Effect sizes for structural brain changes were not detectably different in accelerated versus nonaccelerated data. Scan acceleration may influence brain measures but has minimal effects on tensor-based morphometry-derived atrophy measures, at least over the 6- and 12-month intervals examined here. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. EDITORIAL: Laser and plasma accelerators Laser and plasma accelerators

    NASA Astrophysics Data System (ADS)

    Bingham, Robert

    2009-02-01

    This special issue on laser and plasma accelerators illustrates the rapid advancement and diverse applications of laser and plasma accelerators. Plasma is an attractive medium for particle acceleration because of the high electric field it can sustain, with studies of acceleration processes remaining one of the most important areas of research in both laboratory and astrophysical plasmas. The rapid advance in laser and accelerator technology has led to the development of terawatt and petawatt laser systems with ultra-high intensities and short sub-picosecond pulses, which are used to generate wakefields in plasma. Recent successes include the demonstration by several groups in 2004 of quasi-monoenergetic electron beams by wakefields in the bubble regime with the GeV energy barrier being reached in 2006, and the energy doubling of the SLAC high-energy electron beam from 42 to 85 GeV. The electron beams generated by the laser plasma driven wakefields have good spatial quality with energies ranging from MeV to GeV. A unique feature is that they are ultra-short bunches with simulations showing that they can be as short as a few femtoseconds with low-energy spread, making these beams ideal for a variety of applications ranging from novel high-brightness radiation sources for medicine, material science and ultrafast time-resolved radiobiology or chemistry. Laser driven ion acceleration experiments have also made significant advances over the last few years with applications in laser fusion, nuclear physics and medicine. Attention is focused on the possibility of producing quasi-mono-energetic ions with energies ranging from hundreds of MeV to GeV per nucleon. New acceleration mechanisms are being studied, including ion acceleration from ultra-thin foils and direct laser acceleration. The application of wakefields or beat waves in other areas of science such as astrophysics and particle physics is beginning to take off, such as the study of cosmic accelerators considered

  16. Mapping dominant runoff processes: an evaluation of different approaches using similarity measures and synthetic runoff simulations

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Buss, Rahel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-07-01

    The identification of landscapes with similar hydrological behaviour is useful for runoff and flood predictions in small ungauged catchments. An established method for landscape classification is based on the concept of dominant runoff process (DRP). The various DRP-mapping approaches differ with respect to the time and data required for mapping. Manual approaches based on expert knowledge are reliable but time-consuming, whereas automatic GIS-based approaches are easier to implement but rely on simplifications which restrict their application range. To what extent these simplifications are applicable in other catchments is unclear. More information is also needed on how the different complexities of automatic DRP-mapping approaches affect hydrological simulations. In this paper, three automatic approaches were used to map two catchments on the Swiss Plateau. The resulting maps were compared to reference maps obtained with manual mapping. Measures of agreement and association, a class comparison, and a deviation map were derived. The automatically derived DRP maps were used in synthetic runoff simulations with an adapted version of the PREVAH hydrological model, and simulation results compared with those from simulations using the reference maps. The DRP maps derived with the automatic approach with highest complexity and data requirement were the most similar to the reference maps, while those derived with simplified approaches without original soil information differed significantly in terms of both extent and distribution of the DRPs. The runoff simulations derived from the simpler DRP maps were more uncertain due to inaccuracies in the input data and their coarse resolution, but problems were also linked with the use of topography as a proxy for the storage capacity of soils. The perception of the intensity of the DRP classes also seems to vary among the different authors, and a standardised definition of DRPs is still lacking. Furthermore, we argue not to use
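
    One common way to compute the agreement measures mentioned (the paper's exact measures are not reproduced here; Cohen's kappa is a typical choice for cell-by-cell comparison of categorical maps):

      import numpy as np

      def kappa(map_a, map_b, n_classes):
          # Cell-by-cell confusion matrix between two categorical maps.
          a, b = map_a.ravel(), map_b.ravel()
          conf = np.zeros((n_classes, n_classes))
          for i, j in zip(a, b):
              conf[i, j] += 1
          conf /= conf.sum()
          p_o = np.trace(conf)                        # observed agreement
          p_e = conf.sum(1) @ conf.sum(0)             # chance agreement
          return (p_o - p_e) / (1 - p_e)

      rng = np.random.default_rng(0)
      reference = rng.integers(0, 4, size=(100, 100))   # manual (reference) DRP map
      automatic = reference.copy()
      noise = rng.random(reference.shape) < 0.15        # 15% of cells disagree
      automatic[noise] = rng.integers(0, 4, size=noise.sum())
      print("kappa =", round(kappa(reference, automatic, 4), 3))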

  17. High-Resolution Regional Biomass Map of Siberia from Glas, Palsar L-Band Radar and Landsat Vcf Data

    NASA Astrophysics Data System (ADS)

    Sun, G.; Ranson, K.; Montesano, P.; Zhang, Z.; Kharuk, V.

    2015-12-01

    The Arctic-Boreal zone is known to be warming at an accelerated rate relative to other biomes. The taiga or boreal forest covers over 16 × 10⁶ km² of Arctic North America, Scandinavia, and Eurasia. A large part of the northern boreal forests is in Russia's Siberia, an area with recent accelerated climate warming. During the last two decades we have been working on characterization of boreal forests in north-central Siberia using field and satellite measurements. We have published results of circumpolar biomass using field plots, airborne (PALS, ACTM) and spaceborne (GLAS) lidar data with the ASTER DEM, LANDSAT and MODIS land cover classification, MODIS burned area, and WWF's ecoregion map. Researchers from ESA and Russia have also been working on biomass (or growing stock) mapping in Siberia. For example, they developed a pan-boreal growing stock volume map at 1-kilometer scale using hyper-temporal ENVISAT ASAR ScanSAR backscatter data. Using the annual PALSAR mosaics from 2007 to 2010, growing stock volume maps were retrieved based on a supervised random forest regression approach. This method is being used in the ESA/Russia ZAPAS project for Central Siberia biomass mapping. Spatially specific biomass maps of this region at higher resolution are desired for carbon cycle and climate change studies. In this study, our work focused on improving the resolution (50 m) of a biomass map based on PALSAR L-band data and Landsat Vegetation Canopy Fraction products. GLAS data were carefully processed and screened using land cover classification, local slope, and acquisition dates. The biomass at the remaining footprints was estimated using a model developed from field measurements at GLAS footprints. The GLAS biomass samples were then aggregated into 1 Mg/ha bins of biomass, and mean VCF and PALSAR backscatter and textures were calculated for each of these biomass bins. The resulting biomass/signature data were used to train a random forest model for biomass mapping of the entire region from 50°
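
    A hedged sketch of the regression step alone, with synthetic stand-ins for the real features (the authors' GLAS screening, binning and mosaicking are far more involved; feature names and all values are placeholders): a random forest maps Landsat VCF, PALSAR backscatter and a texture measure to biomass.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(5)
      n = 500
      vcf = rng.uniform(0, 80, n)                    # canopy fraction (%)
      hv = -18 + 0.08 * vcf + rng.normal(0, 0.8, n)  # PALSAR HV backscatter (dB), toy link
      texture = rng.uniform(0, 1, n)
      biomass = 2.2 * vcf + 10 * texture + rng.normal(0, 8, n)   # Mg/ha, synthetic

      X = np.column_stack([vcf, hv, texture])
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, biomass)
      # Applied pixel-by-pixel over a 50 m mosaic, this yields the biomass map:
      print(model.predict([[45.0, -14.5, 0.6]]))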

  18. Naval EarthMap Observer: overview and data processing

    NASA Astrophysics Data System (ADS)

    Bowles, Jeffrey H.; Davis, Curtiss O.; Carney, Megan; Clamons, Dean; Gao, Bo-Cai; Gillis, David; Kappus, Mary E.; Lamela, G.; Montes, Marcos J.; Palmadesso, Peter J.; Rhea, J.; Snyder, William A.

    1999-12-01

    We present an overview of the Naval EarthMap Observer (NEMO) spacecraft and then focus on the processing of NEMO data both on-board the spacecraft and on the ground. The NEMO spacecraft provides for Joint Naval needs and demonstrates the use of hyperspectral imagery for the characterization of the littoral environment and for littoral ocean model development. NEMO is being funded jointly by the U.S. government and commercial partners. The Coastal Ocean Imaging Spectrometer (COIS) is the primary instrument on the NEMO and covers the spectral range from 400 to 2500 nm at 10-nm resolution with either 30 or 60 m work GSD. The hyperspectral data is processed on-board the NEMO using NRL's Optical Real-time Automated Spectral Identification System (ORASIS) algorithm that provides for real time analysis, feature extraction and greater than 10:1 data compression. The high compression factor allows for ground coverage of greater than 106 km2/day. Calibration of the sensor is done with a combination of moon imaging, using an onboard light source and vicarious calibration using a number of earth sites being monitored for that purpose. The data will be atmospherically corrected using ATREM. Algorithms will also be available to determine water clarity, bathymetry and bottom type.

  19. Improving particle beam acceleration in plasmas

    NASA Astrophysics Data System (ADS)

    de Sousa, M. C.; Caldas, I. L.

    2018-04-01

    The dynamics of wave-particle interactions in magnetized plasmas restricts the wave amplitude to moderate values for particle beam acceleration from rest energy. We analyze how a perturbing invariant robust barrier modifies the phase space of the system and enlarges the wave amplitude interval for particle acceleration. For low values of the wave amplitude, the acceleration becomes effective for particles with initial energy close to the rest energy. For higher values of the wave amplitude, the robust barrier controls chaos in the system and restores the acceleration process. We also determine the best position for the perturbing barrier in phase space in order to increase the final energy of the particles.

  20. Accelerator system and method of accelerating particles

    NASA Technical Reports Server (NTRS)

    Wirz, Richard E. (Inventor)

    2010-01-01

    An accelerator system and method that utilize dust as the primary mass flux for generating thrust are provided. The accelerator system can include an accelerator capable of operating in a self-neutralizing mode and having a discharge chamber and at least one ionizer capable of charging dust particles. The system can also include a dust particle feeder that is capable of introducing the dust particles into the accelerator. By applying a pulsed positive and negative charge voltage to the accelerator, the charged dust particles can be accelerated thereby generating thrust and neutralizing the accelerator system.

  1. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry-standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speed for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Component Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide a 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing performed by Exelis VIS shows that orthorectification of a 35,000 x 35,000 pixel WorldView-1 image can take as long as two hours; with GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can be put to use by first responders and by scientists making rapid discoveries with near-real-time data, and data centers gain an operational means of quickly processing and disseminating data.
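
    The source describes ENVI/IDL's CUDA framework; as a stand-in, the same host-to-device pattern can be sketched with the CuPy library (an assumption, not the tooling named above; requires a CUDA-capable GPU and an installed cupy package): move a large array to the GPU, apply a data-parallel operation, and move the result back.

      import numpy as np
      import cupy as cp

      image = np.random.rand(8192, 8192).astype(np.float32)  # stand-in scene

      d_img = cp.asarray(image)            # host -> device transfer
      # A radiometric gain/offset, a typical embarrassingly parallel step:
      d_out = 1.8 * d_img + 0.02
      out = cp.asnumpy(d_out)              # device -> host transfer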

  2. How semantics can inform the geological mapping process and support intelligent queries

    NASA Astrophysics Data System (ADS)

    Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario

    2017-04-01

    The geologic mapping process requires the organization of data according to the general knowledge about the objects, namely the geologic units, and to the objectives of a graphic representation of such objects in a map, following an established model of geotectonic evolution. Semantics can greatly help such a process in two respects: on the one hand, the provision of a terminological base to name and classify the objects of the map; on the other, the implementation of a machine-readable encoding of the geologic knowledge base, which supports the application of reasoning mechanisms and the derivation of novel properties and relations about the objects of the map. The OntoGeonous initiative has built a terminological base of geological knowledge in a machine-readable format, following the Semantic Web tenets and the Linked Data paradigm. The major knowledge sources of the OntoGeonous initiative are the GeoScience Markup Language schemata and vocabularies (through its latest version, GeoSciML 4, 2015, published by the IUGS CGI Commission) and the INSPIRE "Data Specification on Geology" directives (an operative simplification of GeoSciML, published by the INSPIRE Thematic Working Group Geology of the European Commission). The Linked Data paradigm has been exploited by linking (without replicating, to avoid inconsistencies) the already existing machine-readable encodings for some specific domains, such as the lithology domain (the Simple Lithology vocabulary) and the geochronologic time scale (the "gts" ontology). Finally, for the upper-level knowledge shared across several geologic domains, we have resorted to the NASA SWEET ontology. The OntoGeonous initiative has also produced a wiki that explains how the geologic knowledge has been encoded from shared geoscience vocabularies (https://www.di.unito.it/wikigeo/). In particular, the sections dedicated to axiomatization will support the construction of an appropriate database schema that can then be filled with the objects of the map. This contribution will discuss

  3. Principles of Induction Accelerators

    NASA Astrophysics Data System (ADS)

    Briggs, Richard J.

    The basic concepts involved in induction accelerators are introduced in this chapter. The objective is to provide a foundation for the more detailed coverage of key technology elements and specific applications in the following chapters. A wide variety of induction accelerators are discussed in the following chapters, from the high current linear electron accelerator configurations that have been the main focus of the original developments, to circular configurations like the ion synchrotrons that are the subject of more recent research. The main focus in the present chapter is on the induction module containing the magnetic core that plays the role of a transformer in coupling the pulsed power from the modulator to the charged particle beam. This is the essential common element in all these induction accelerators, and an understanding of the basic processes involved in its operation is the main objective of this chapter. (See [1] for a useful and complementary presentation of the basic principles in induction linacs.)
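
    A compact statement of the core constraint on the induction module (a standard core-sizing relation, not quoted from the chapter): the accelerating voltage V that an induction cell can sustain for a pulse duration Δt is limited by the usable flux swing of its magnetic core,

      V \, \Delta t \;\le\; A_{\mathrm{core}} \, \Delta B,

    where A_core is the cross-sectional area of the core and ΔB the flux swing of the core material. This is why long-pulse induction machines require physically large cores.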

  4. Interactive remote data processing using Pixelize Wavelet Filtration (PWF-method) and PeriodMap analysis

    NASA Astrophysics Data System (ADS)

    Sych, Robert; Nakariakov, Valery; Anfinogentov, Sergey

    Wavelet analysis is suitable for investigating waves and oscillations in the solar atmosphere, which are limited in both time and frequency. We have developed an algorithm to detect these waves using Pixelize Wavelet Filtration (the PWF method). This method provides information about the presence of propagating and non-propagating waves in the observational data (image cubes) and localizes them precisely in time as well as in space. We tested the algorithm and found that the results of coronal wave detection are consistent with those obtained by visual inspection. For fast exploration of the data cube, we additionally applied the earlier-developed PeriodMap analysis. This method is based on the Fast Fourier Transform and allows, at an initial stage, a quick search for "hot" regions with peak harmonic oscillations and a determination of the spatial distribution at the significant harmonics. We propose splitting the detection procedure for coronal waves into two parts: in the first part, we apply the PeriodMap analysis (fast preparation), and in the second part, we use information about the spatial distribution of oscillation sources to apply the PWF method (slow preparation). There are two possible modes of working with the data: automatic and hands-on operation. First, we use multiple PWF analysis to prepare narrowband maps in frequency subbands spaced by factors of two and/or harmonic PWF analysis for separate harmonics in a spectrum. Second, we manually select the necessary spectral subband and temporal interval and then construct narrowband maps. For practical implementation of the proposed methods, we have developed a remote data processing system at the Institute of Solar-Terrestrial Physics, Irkutsk. The system is based on the data processing server http://pwf.iszf.irk.ru. The main aim of this resource is the calculation, via remote access through a local network and/or the Internet, of narrowband maps of wave sources both in the whole spectral band and at significant harmonics. In addition
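
    A minimal numpy sketch of the PeriodMap stage as described (per-pixel FFT, dominant harmonic per pixel; noise-significance testing is omitted and all dimensions are invented):

      import numpy as np

      rng = np.random.default_rng(11)
      nt, ny, nx = 512, 32, 32
      dt = 1.0                                     # cadence, arbitrary units
      t = np.arange(nt) * dt

      cube = rng.normal(0, 1.0, (nt, ny, nx))      # noisy image cube
      # Embed a 60-unit-period oscillation in one "hot" region:
      cube[:, 10:20, 10:20] += 3 * np.sin(2 * np.pi * t / 60.0)[:, None, None]

      spec = np.abs(np.fft.rfft(cube - cube.mean(0), axis=0)) ** 2
      freqs = np.fft.rfftfreq(nt, dt)
      peak = spec[1:].argmax(axis=0) + 1           # skip the zero-frequency bin
      period_map = 1.0 / freqs[peak]               # dominant period per pixel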

  5. Stability and perturbations of countable Markov maps

    NASA Astrophysics Data System (ADS)

    Jordan, Thomas; Munday, Sara; Sahlsten, Tuomas

    2018-04-01

    Let T and T_ε, ε > 0, be countable Markov maps such that the branches of T_ε converge pointwise to the branches of T as ε → 0. We study the stability of various quantities measuring the singularity (dimension, Hölder exponent, etc.) of the topological conjugacy θ_ε between T_ε and T as ε → 0. This is a well-understood problem for maps with finitely many branches, where the quantities are stable for small ε, that is, they converge to their expected values as ε → 0. For the infinite-branch case their stability might be expected to fail, but we prove that even in the infinite-branch case the dimension is stable under some natural regularity assumptions on T_ε and T (under which, for instance, the Hölder exponent of θ_ε fails to be stable). Our assumptions apply, for example, in the case of the Gauss map, various Lüroth maps and accelerated Manneville-Pomeau maps when varying the parameter α. For the proof we introduce a mass transportation method from the cusp that allows us to exploit thermodynamical ideas from the finite-branch case. Dedicated to the memory of Bernd O Stratmann

  6. Acceleration of runaway electrons and Joule heating in solar flares

    NASA Technical Reports Server (NTRS)

    Holman, G. D.

    1985-01-01

    The electric field acceleration of electrons out of a thermal plasma and the simultaneous Joule heating of the plasma are studied. Acceleration and heating timescales are derived and compared, and upper limits are obtained on the acceleration volume and the rate at which electrons can be accelerated. These upper limits, determined by the maximum magnetic field strength observed in flaring regions, place stringent restrictions upon the acceleration process. The role of the plasma resistivity in these processes is examined, and possible sources of anomalous resistivity are summarized. The implications of these results for the microwave and hard X-ray emission from solar flares are examined.

  7. Acceleration of runaway electrons and Joule heating in solar flares

    NASA Technical Reports Server (NTRS)

    Holman, G. D.

    1984-01-01

    The electric field acceleration of electrons out of a thermal plasma and the simultaneous Joule heating of the plasma are studied. Acceleration and heating timescales are derived and compared, and upper limits are obtained on the acceleration volume and the rate at which electrons can be accelerated. These upper limits, determined by the maximum magnetic field strength observed in flaring regions, place stringent restrictions upon the acceleration process. The role of the plasma resistivity in these processes is examined, and possible sources of anomalous resistivity are summarized. The implications of these results for the microwave and hard X-ray emission from solar flares are examined.

  8. Double dissociation between syntactic gender and picture naming processing: a brain stimulation mapping study.

    PubMed

    Vidorreta, Jose Garbizu; Garcia, Roser; Moritz-Gasser, Sylvie; Duffau, Hugues

    2011-03-01

    Neural foundations of syntactic gender processing remain poorly understood. We used electrostimulation mapping in nine right-handed awake patients during surgery for a glioma within the left hemisphere, to study whether the cortico-subcortical structures involved in naming versus syntactic gender processing are common or distinct. In French, the article determines the grammatical gender. Thus, the patient was asked to perform a picture naming task and to give the appropriate article for each picture, with and without stimulation. Cortical stimulation elicited reproducible syntactic gender disturbances in six patients, in the inferior frontal gyrus (three cases), and in the posterior middle temporal gyrus (three cases). Interestingly, no naming disorders were generated during stimulation of the syntactic sites, while cortical areas inducing naming disturbances never elicited grammatical gender errors when stimulated. Moreover, at the subcortical level, stimulation of the white matter lateral to the caudate nucleus induced gender errors in three patients, with no naming disorders. Using cortico-subcortical electrical mapping in awake patients, we demonstrate for the first time (1) a double dissociation between syntactic gender and naming processing, supporting an independent-network model rather than a serial theory, (2) the involvement of the left inferior frontal gyrus, especially the pars triangularis, and the posterior left middle temporal gyrus in grammatical gender processing, (3) the existence of white matter pathways, likely a sub-part of the left superior longitudinal fasciculus, underlying a large-scale distributed cortico-subcortical circuit which might selectively sub-serve syntactic gender processing, even if interconnected with parallel sub-networks involved in naming (semantic and phonological) processing. Copyright © 2010 Wiley-Liss, Inc.

  9. A New Protocol for Texture Mapping Process and 2D Representation of Rupestrian Architecture

    NASA Astrophysics Data System (ADS)

    Carnevali, L.; Carpiceci, M.; Angelini, A.

    2018-05-01

    The development of survey techniques for architecture and archaeology requires a general review of the methods used for the representation of numerical data. The possibilities offered by data processing make it possible to find new paths for studying issues connected to the drawing discipline. The research project aimed at experimenting with different approaches for the representation of rupestrian architecture and the texture mapping process. The nature of rupestrian architecture does not allow a traditional representation of sections and projections of edges and outlines. The paper presents a method, Equidistant Multiple Sections (EMS), inspired by cartography and based on the use of isohypses generated from different geometric planes. A specific paragraph is dedicated to the texture mapping process for unstructured surface models. One of the main difficulties in image projection is the recognition of homologous points between the image and the point cloud, above all in the areas with the greatest deformation. With the aid of the "virtual scan" tool, a different procedure was developed to improve the correspondences of the image. The results show an appreciable improvement of the entire process, above all for the architectural vaults. A detailed study concerned the unfolding of straight-line (ruled) surfaces; the barrel vault of the analyzed chapel was unfolded in order to observe the paintings in their real shapes, outside the morphological context.

  10. Satellite Data Visualization, Processing and Mapping using VIIRS Imager Data

    NASA Astrophysics Data System (ADS)

    Phyu, A. N.

    2016-12-01

    A satellite is a manmade machine that is launched into space and orbits the Earth. Satellites are used for various purposes, for example: environmental satellites help us monitor and protect our environment; navigation (GPS) satellites provide accurate time and position information; and communication satellites allow us to interact with each other over long distances. Suomi NPP is an environmental satellite in the Joint Polar Satellite System (JPSS) constellation that carries the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument. VIIRS is a scanning radiometer that takes high-resolution images of the Earth, making visible, infrared and radiometric measurements of the land, oceans, atmosphere and cryosphere. These high-resolution images provide information that helps weather prediction and environmental forecasting of extreme events such as forest fires, ice jams, thunderstorms and hurricanes. This project describes how VIIRS instrument data are processed, mapped, and visualized using a variety of software and applications. It focuses on extreme events like Hurricane Sandy and demonstrates how to use the satellite to map the extent of a storm. Data from environmental satellites such as Suomi NPP-VIIRS are important for monitoring climate change, sea level rise, and land surface temperature changes, as well as extreme weather events.

  11. Microzonation Mapping Of The Yanbu Industrial City, Western Saudi Arabia: A Multicriteria Decision Analysis Approach

    NASA Astrophysics Data System (ADS)

    Moustafa, Sayed, Sr.; Alarifi, Nassir S.; Lashin, Aref A.

    2016-04-01

    Urban areas along the western coast of Saudi Arabia are susceptible to natural disasters and environmental damage due to a lack of planning. To produce a site-specific microzonation map of the rapidly growing Yanbu industrial city, the spatial distributions of different hazard entities are assessed using the Analytical Hierarchy Process (AHP) together with a Geographical Information System (GIS). For this purpose six hazard parameter layers are considered, namely: fundamental frequency, site amplification, soil strength in terms of effective shear-wave velocity, overburden sediment thickness, seismic vulnerability index, and peak ground acceleration. The weight and rank values determined during the AHP are assigned to each layer and its corresponding classes, respectively. An integrated seismic microzonation map was then derived using the GIS platform. Based on the derived map, the study area is classified into five hazard categories: very low, low, moderate, high, and very high. The western and central parts of the study area are categorized as high-hazard zones compared to the surrounding areas. The produced microzonation map is envisaged as a first-level assessment of site-specific hazards in the Yanbu city area, which can be used as a platform by different stakeholders in future land-use planning and environmental hazard management.

  12. Modeling of Particle Acceleration at Multiple Shocks via Diffusive Shock Acceleration: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Parker, L. Neergaard; Zank, G. P.

    2013-01-01

    Successful forecasting of energetic particle events in space weather models requires algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models, in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles out of the box. We adiabatically decompress the accelerated particle distribution between each shock either by the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000), where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (E_max) appropriate for quasi-parallel and quasi-perpendicular shocks and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
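
    A schematic sketch (not the authors' code) of the multi-shock box-model iteration: each shock applies the standard test-particle diffusive-shock-acceleration integral transform, f_out(p) = q p^(-q) ∫ p'^(q-1) f_in(p') dp' with q = 3r/(r-1) for compression ratio r, and between shocks the distribution is adiabatically decompressed by scaling momenta down. The compression ratio, grids and injected population below are invented for illustration.

        import numpy as np

        def dsa_shock(p, f_in, r=4.0):
            # Standard test-particle DSA transform of an incident distribution.
            q = 3.0 * r / (r - 1.0)
            integrand = p ** (q - 1.0) * f_in
            integral = np.concatenate(
                ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))))
            return q * p ** (-q) * integral

        def decompress(p, f, r=4.0):
            # Full decompression scales each momentum by r^(-1/3); by Liouville,
            # f_new(p) = f_old(p * r^(1/3)), evaluated here by interpolation.
            return np.interp(p, p / r ** (1.0 / 3.0), f)

        p = np.logspace(0, 4, 400)               # momentum grid (injection at p = 1)
        f = np.exp(-(np.log10(p) / 0.05) ** 2)   # cold injected population near p = 1
        for _ in range(5):                       # five successive shocks
            f = decompress(p, dsa_shock(p, f))
        # After many shocks the spectrum hardens toward the well-known p^-3 limit.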

  13. Planck 2015 results. VIII. High Frequency Instrument data processing: Calibration and maps

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bertincourt, B.; Bielewicz, P.; Bock, J. J.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leahy, J. P.; Lellouch, E.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Moreno, R.; Morgante, G.; Mortlock, D.; Moss, A.; Mottet, S.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vibert, L.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    This paper describes the processing applied to the cleaned, time-ordered information obtained from the Planck High Frequency Instrument (HFI) with the aim of producing photometrically calibrated maps in temperature and (for the first time) in polarization. The data from the entire 2.5-year HFI mission include almost five full-sky surveys. HFI observes the sky over a broad range of frequencies, from 100 to 857 GHz. To obtain the best accuracy on the calibration over such a large range, two different photometric calibration schemes have been used. The 545 and 857 GHz data are calibrated using models of planetary atmospheric emission. The lower frequencies (from 100 to 353 GHz) are calibrated using the time-variable cosmic microwave background dipole, which we call the orbital dipole. This source of calibration only depends on the satellite velocity with respect to the solar system. Using a CMB temperature of T_CMB = 2.7255 ± 0.0006 K, it permits an independent measurement of the amplitude of the CMB solar dipole (3364.3 ± 1.5 μK), which is approximately 1σ higher than the WMAP measurement, with a direction that is consistent between the two experiments. We describe the pipeline used to produce the maps of intensity and linear polarization from the HFI timelines, and the scheme used to set the zero level of the maps a posteriori. We also summarize the noise characteristics of the HFI maps in the 2015 Planck data release and present some null tests to assess their quality. Finally, we discuss the major systematic effects and in particular the leakage induced by flux mismatch between the detectors that leads to a spurious polarization signal.
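
    The dipole numbers quoted above can be sanity-checked with one line of arithmetic: for the non-relativistic Doppler relation ΔT ≈ T_CMB · v/c, the quoted amplitude and CMB temperature imply the Sun's velocity with respect to the CMB rest frame. A minimal check:

        T_CMB = 2.7255            # K, CMB monopole temperature quoted above
        dT_dipole = 3364.3e-6     # K, measured CMB solar dipole amplitude
        c = 299_792.458           # km/s, speed of light
        v_sun = c * dT_dipole / T_CMB
        print(f"implied solar velocity ~ {v_sun:.0f} km/s")   # ~370 km/s, the standard value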

  14. Iconicity as structure mapping

    PubMed Central

    Emmorey, Karen

    2014-01-01

    Linguistic and psycholinguistic evidence is presented to support the use of structure-mapping theory as a framework for understanding effects of iconicity on sign language grammar and processing. The existence of structured mappings between phonological form and semantic mental representations has been shown to explain the nature of metaphor and pronominal anaphora in sign languages. With respect to processing, it is argued that psycholinguistic effects of iconicity may only be observed when the task specifically taps into such structured mappings. In addition, language acquisition effects may only be observed when the relevant cognitive abilities are in place (e.g. the ability to make structural comparisons) and when the relevant conceptual knowledge has been acquired (i.e. information key to processing the iconic mapping). Finally, it is suggested that iconicity is better understood as a structured mapping between two mental representations than as a link between linguistic form and human experience. PMID:25092669

  15. Future HEP Accelerators: The US Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha; Shiltsev, Vladimir

    2015-11-02

    Accelerator technology has advanced tremendously since the introduction of accelerators in the 1930s, and particle accelerators have become indispensable instruments in high energy physics (HEP) research to probe Nature at smaller and smaller distances. At present, accelerator facilities can be classified into Energy Frontier colliders that enable direct discoveries and studies of high mass scale particles and Intensity Frontier accelerators for exploration of extremely rare processes, usually at relatively low energies. The near term strategies of the global energy frontier particle physics community are centered on fully exploiting the physics potential of the Large Hadron Collider (LHC) at CERN through its high-luminosity upgrade (HL-LHC), while the intensity frontier HEP research is focused on studies of neutrinos at the MW-scale beam power accelerator facilities, such as Fermilab Main Injector with the planned PIP-II SRF linac project. A number of next generation accelerator facilities have been proposed and are currently under consideration for the medium- and long-term future programs of accelerator-based HEP research. In this paper, we briefly review the post-LHC energy frontier options, both for lepton and hadron colliders in various regions of the world, as well as possible future intensity frontier accelerator facilities.

  16. Engaging Stakeholders through Participatory Mapping and Spatial Analysis in a Scenarios Process for Alaska's North Slope

    NASA Astrophysics Data System (ADS)

    Fradkin, B.; Vargas, J. C.; Lee, O. A.; Emperador, S.

    2016-12-01

    A scenarios process was conducted for Alaska's North Slope to consider the wide range of drivers of change and uncertainties that could contribute to shifts in research and monitoring needs over the next 25 years. The project team, consisting of specialists in participatory scenarios and academic researchers, developed an interactive approach that helped facilitate the exploration of a range of plausible changes in the region. Over two years, the team designed and executed a series of workshops to capitalize on the collective expertise of researchers, resource managers, industry representatives, and traditional and local knowledge holders on the North Slope. The goal of this process was to evaluate three energy and resource development scenarios, which incorporated biophysical and socioeconomic drivers, to assess the implications of development on high-priority biophysical resources and the subsistence lifestyle and well-being of the region's Inupiat residents. Due to the diversity of the stakeholders engaged in the process, the workshop materials and activities had to be carefully designed and executed in order to provide an adequate platform for discussion of each scenario component, as well as to generate products that would provide management-relevant information to the NSSI and its member entities. Each workshop implemented a participatory mapping component, which relied on the best available geospatial datasets to generate informational maps that enabled participants to effectively consider a wide range of variables and outcomes for each of the selected scenarios. In addition, the map sketches produced in each workshop were digitized and incorporated into a spatial analysis that evaluated the level of agreement between stakeholder groups, as well as the geographic overlap of development features and anticipated implications with terrestrial and marine habitats, subsistence hunting zones, and sensitive landscape elements such as permafrost. This presentation

  17. Tuning maps for setpoint changes and load disturbance upsets in a three capacity process under multivariable control

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Smith, Ira C.

    1991-01-01

    Tuning maps are an aid in the controller tuning process because they provide a convenient way for the plant operator to determine the consequences of adjusting different controller parameters. In this application the maps provide a graphical representation of the effect of varying the gains in the state feedback matrix on startup and load disturbance transients for a three capacity process. Nominally, the three tank system, represented in diagonal form, has a Proportional-Integral control on each loop. Cross coupling is then introduced between the loops by using non-zero off-diagonal proportional parameters. Changes in transient behavior due to setpoint and load changes are examined by varying the gains of the cross coupling terms.
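
    As a toy illustration of what such a tuning map explores, the sketch below simulates a hypothetical diagonal three-tank plant under per-loop PI control, reintroduces cross-coupling through an off-diagonal proportional gain, and records how a setpoint transient changes as that gain is swept. All plant and controller numbers are invented for illustration, not taken from the paper.

        import numpy as np

        def simulate(Kp, Ki, setpoint, tau=np.array([1.0, 2.0, 3.0]), dt=0.01, t_end=20.0):
            """Euler simulation; Kp is a 3x3 gain matrix, Ki a length-3 vector."""
            n = int(t_end / dt)
            x = np.zeros(3)                      # tank levels
            integ = np.zeros(3)                  # integrator states
            history = np.empty((n, 3))
            for k in range(n):
                err = setpoint - x
                integ += err * dt
                u = Kp @ err + Ki * integ        # off-diagonal Kp terms couple the loops
                x += dt * (-x / tau + u)         # diagonal first-order plant
                history[k] = x
            return history

        # Tuning-map style sweep: vary one cross-coupling gain, record loop-1 overshoot.
        for k12 in (0.0, 0.5, 1.0):
            Kp = np.diag([2.0, 2.0, 2.0]); Kp[0, 1] = k12
            y = simulate(Kp, Ki=np.array([1.0, 1.0, 1.0]), setpoint=np.ones(3))
            print(f"k12={k12:.1f}  overshoot={y[:, 0].max() - 1.0:.3f}")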

  18. Concept Mapping

    ERIC Educational Resources Information Center

    Technology & Learning, 2005

    2005-01-01

    Concept maps are graphical ways of working with ideas and presenting information. They reveal patterns and relationships and help students to clarify their thinking, and to process, organize and prioritize. Displaying information visually--in concept maps, word webs, or diagrams--stimulates creativity. Being able to think logically teaches…

  19. New Targets for New Accelerators

    NASA Astrophysics Data System (ADS)

    Frentz, Bryce; Manukyan, Khachatur; Aprahamian, Ani

    2013-10-01

    New accelerators, such as the 5 MV Sta Ana accelerator at the University of Notre Dame, will produce more powerful beams with currents up to hundreds of microamps. These accelerators require a complete rethinking of target preparation, since the high intensity of such beams would melt conventional targets. Traditionally, accelerator targets are made with a tantalum backing because of its high atomic mass. However, tantalum is brittle, a poor conductor, and, if produced commercially, often contains impurities (e.g. fluorine) that produce undesirable background and reaction products. Tungsten, despite its brittle structure and poor conductivity, has a high atomic mass and lacks such impurities, making it a more desirable backing. Complementing tungsten's properties, copper is robust and a far superior thermal conductor. We describe a new method of reactive joining that we developed for creating targets that use the advantageous properties of both tungsten and copper. This process involves placing a reactive mixture between tungsten and copper and applying a load force. The mixture is then ignited and, while under pressure, the system produces the conditions needed to join the materials. We present our investigation to optimize the reactive joining process, as well as some of the final targets' properties. This work was supported by the National Science Foundation under Grant PHY-1068192.

  20. Factors and processes causing accelerated decomposition in human cadavers - An overview.

    PubMed

    Zhou, Chong; Byard, Roger W

    2011-01-01

    Artefactually enhanced putrefactive and autolytic changes may be misinterpreted as indicating a prolonged postmortem interval and throw doubt on the veracity of witness statements. A review of files from Forensic Science SA and the literature revealed a number of external and internal factors that may be responsible for accelerating these processes. Exogenous factors included exposure to elevated environmental temperatures, both outdoors and indoors, exacerbated by increased humidity or fires. Indoor situations involved exposure to central heating, hot water, saunas and electric blankets. Deaths within motor vehicles were also characterized by enhanced decomposition. Failure to quickly or adequately refrigerate bodies may also lead to early decomposition. Endogenous factors included fever, infections, illicit and prescription drugs, obesity and insulin-dependent diabetes mellitus. When these factors or conditions are identified at autopsy, less significance should therefore be attached to changes of decomposition as markers of time since death. Copyright © 2010 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  1. Gaussian process regression to accelerate geometry optimizations relying on numerical differentiation

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Christiansen, Ove

    2018-06-01

    We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian Process Regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results at a low computational level (HF or MP2) with the GPR-calculated gradient of the difference between the low-level method and the target method, which in this study is a variant of explicitly correlated Coupled Cluster Singles and Doubles with perturbative Triples correction, CCSD(F12*)(T). Overall convergence is achieved when both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.
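
    A conceptual sketch of the delta-learning idea described above, using scikit-learn's GaussianProcessRegressor on a one-dimensional model surface. The two energy functions are cheap stand-ins (not HF/MP2 or CCSD(F12*)(T)), and the optimizer minimizes the cheap energy plus the GPR-interpolated correction; this is a sketch of the concept, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def cheap_energy(x):    # stand-in for a low-level method on a 1-D surface
            return (x - 1.0) ** 2

        def target_energy(x):   # stand-in for the expensive target method
            return (x - 1.2) ** 2 + 0.05 * np.sin(5.0 * x)

        # Fit the GPR to the *difference* between target and cheap energies.
        X = np.linspace(0.0, 2.5, 8).reshape(-1, 1)   # points already computed
        delta = np.array([target_energy(x[0]) - cheap_energy(x[0]) for x in X])
        gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                       normalize_y=True).fit(X, delta)

        # Optimize on the surrogate: cheap surface + GPR correction.
        surrogate = lambda x: cheap_energy(x[0]) + gpr.predict(np.array([x]))[0]
        res = minimize(surrogate, x0=np.array([0.8]), method="Nelder-Mead")
        print("surrogate minimum:", res.x)   # close to the target minimum near 1.2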

  2. Centimeter-Level Robust GNSS-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades, mobile mapping systems have performed their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective, the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights, evaluated against a DGNSS network reference, are presented which show that the post-processed Centerpoint RTX solution agrees with

  3. A Graphics Processing Unit Accelerated Motion Correction Algorithm and Modular System for Real-time fMRI

    PubMed Central

    Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R. Todd; Papademetris, Xenophon

    2013-01-01

    Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project (www.bioimagesuite.org). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences. PMID:23319241

  4. A graphics processing unit accelerated motion correction algorithm and modular system for real-time fMRI.

    PubMed

    Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon

    2013-07-01

    Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project ( www.bioimagesuite.org ). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.

  5. Beamlets from stochastic acceleration

    NASA Astrophysics Data System (ADS)

    Perri, Silvia; Carbone, Vincenzo

    2008-09-01

    We investigate the dynamics of a realization of the stochastic Fermi acceleration mechanism. The model consists of test particles moving between two oscillating magnetic clouds and differs from the usual Fermi-Ulam model in two ways. (i) Particles can penetrate inside clouds before being reflected. (ii) Particles can radiate a fraction of their energy during the process. Since the Fermi mechanism is at work, particles are stochastically accelerated, even in the presence of the radiated energy. Furthermore, due to a kind of resonance between particles and oscillating clouds, the probability density function of particles is strongly modified, thus generating beams of accelerated particles rather than a translation of the whole distribution function to higher energy. This simple mechanism could account for the presence of beamlets in some space plasma physics situations.

  6. GPU-Accelerated Voxelwise Hepatic Perfusion Quantification

    PubMed Central

    Wang, H; Cao, Y

    2012-01-01

    Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to the assessment of liver function in response to radiation therapy. However, the efficiency of estimating hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation, while maintaining the same accuracy as the conventional method. Using CUDA-GPU, the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, non-linear least squares fitting of the time series of the liver DCE data to the compartmental model is distributed to multiple threads in a block, and the computations of different time points are performed simultaneously and synchronously. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with those by the CPU using simulated DCE data and experimental DCE MR images from patients. The computation speed is improved by 30 times using an NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU processor. To obtain liver perfusion maps with 626,400 voxels in a patient's liver, the GPU-accelerated voxelwise computation takes 0.9 min, compared to 110 min with the CPU, while the perfusion parameters obtained by the two methods differ by less than 10^-6. The method will be useful for generating liver perfusion images in clinical settings. PMID:22892645
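
    The pattern being accelerated here is embarrassingly parallel per-voxel nonlinear least-squares fitting. The sketch below shows that structure with a simplified stand-in kinetic model (not the dual-input single-compartment model itself); on a GPU, the two voxel loops are what gets distributed across blocks. All shapes and parameters are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def uptake(t, k_in, k_out):          # placeholder kinetic model
            return k_in / k_out * (1.0 - np.exp(-k_out * t))

        t = np.linspace(0.0, 5.0, 60)                            # DCE time points (min)
        truth = uptake(t, 1.2, 0.8)
        cube = truth + 0.02 * np.random.randn(16, 16, t.size)    # tiny "liver" of voxels

        params = np.empty(cube.shape[:2] + (2,))
        for i in range(cube.shape[0]):       # each voxel is independent; on a GPU,
            for j in range(cube.shape[1]):   # one block/thread group handles each fit
                params[i, j], _ = curve_fit(uptake, t, cube[i, j], p0=(1.0, 1.0))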

  7. Process for Generating Engine Fuel Consumption Map: Future Atkinson Engine with Cooled EGR and Cylinder Deactivation

    EPA Pesticide Factsheets

    This document summarizes the process followed to utilize GT-POWER modeled engine and laboratory engine dyno test data to generate a full engine fuel consumption map which can be used by EPA's ALPHA vehicle simulations.

  8. Two Step Acceleration Process of Electrons in the Outer Van Allen Radiation Belt by Time Domain Electric Field Bursts and Large Amplitude Chorus Waves

    NASA Astrophysics Data System (ADS)

    Agapitov, O. V.; Mozer, F.; Artemyev, A.; Krasnoselskikh, V.; Lejosne, S.

    2014-12-01

    A huge number of different non-linear structures (double layers, electron holes, non-linear whistlers, etc.) have been observed by the electric field experiment on the Van Allen Probes in conjunction with relativistic electron acceleration in the Earth's outer radiation belt. These structures, found as short-duration (~0.1 ms) quasi-periodic bursts of electric field in the high-time-resolution electric field waveform, have been called Time Domain Structures (TDS). They can interact quite effectively with radiation belt electrons. Due to the trapping of electrons into these non-linear structures, the electrons are accelerated up to ~10 keV and their pitch angles are changed, especially at low energies (~1 keV). Large-amplitude electric field perturbations cause non-linear resonant trapping of electrons into the effective potential of the TDS, and these electrons are then accelerated in the non-homogeneous magnetic field. These locally accelerated electrons create the "seed population" of several-keV electrons that can be accelerated by coherent, large-amplitude, upper-band whistler waves to MeV energies in this two-step acceleration process. All the elements of this chain acceleration mechanism have been observed by the Van Allen Probes.

  9. India Solar Resource Data: Enhanced Data for Accelerated Deployment (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Identifying potential locations for solar photovoltaic (PV) and concentrating solar power (CSP) projects requires an understanding of the underlying solar resource. Under a bilateral partnership between the United States and India - the U.S.-India Energy Dialogue - the National Renewable Energy Laboratory has updated Indian solar data and maps using data provided by the Ministry of New and Renewable Energy (MNRE) and the National Institute for Solar Energy (NISE). This fact sheet overviews the updated maps and data, which help identify high-quality solar energy projects. This can help accelerate the deployment of solar energy in India.

  10. Accelerated T1ρ acquisition for knee cartilage quantification using compressed sensing and data-driven parallel imaging: A feasibility study.

    PubMed

    Pandit, Prachi; Rivoire, Julien; King, Kevin; Li, Xiaojuan

    2016-03-01

    Quantitative T1ρ imaging is beneficial for early detection of osteoarthritis but has seen limited clinical use due to long scan times. In this study, we evaluated the feasibility of accelerated T1ρ mapping for knee cartilage quantification using a combination of compressed sensing (CS) and data-driven parallel imaging (ARC, Autocalibrating Reconstruction for Cartesian sampling). A sequential combination of ARC and CS, both during data acquisition and reconstruction, was used to accelerate the acquisition of T1ρ maps. Phantom, ex vivo (porcine knee), and in vivo (human knee) imaging was performed on a GE 3T MR750 scanner. T1ρ quantification after CS-accelerated acquisition was compared with non-CS-accelerated acquisition for various cartilage compartments. Accelerating image acquisition using CS did not introduce major deviations in quantification. The coefficient of variation of the root mean squared error increased with increasing acceleration, but for in vivo measurements it stayed under 5% for a net acceleration factor up to 2, where the acquisition was 25% faster than the reference (ARC only). To the best of our knowledge, this is the first implementation of CS for in vivo T1ρ quantification. These early results show that this technique holds great promise in making quantitative imaging techniques more accessible for clinical applications. © 2015 Wiley Periodicals, Inc.

  11. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
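
    A minimal NumPy rendering of the computation the abstract describes: the angiogram is the squared difference of two sequential structural frames, and bulk tissue motion is compensated by searching integer axial and lateral pixel shifts for the one minimizing the summed difference. The shift range and function names are assumptions for illustration; the GPU parallelization itself is not reproduced here.

        import numpy as np

        def btm_corrected_angiogram(frame_a, frame_b, max_shift=3):
            """Squared-difference angiogram with bulk-tissue-motion correction."""
            best, best_score = None, np.inf
            for dz in range(-max_shift, max_shift + 1):        # axial shifts
                for dx in range(-max_shift, max_shift + 1):    # lateral shifts
                    shifted = np.roll(np.roll(frame_b, dz, axis=0), dx, axis=1)
                    diff = (frame_a - shifted) ** 2
                    score = diff.sum()                         # bulk-motion residual
                    if score < best_score:
                        best, best_score = diff, score
            return best                                        # motion-minimized angiogram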

  12. An adaptive spatio-temporal Gaussian filter for processing cardiac optical mapping data.

    PubMed

    Pollnow, S; Pilia, N; Schwaderlapp, G; Loewe, A; Dössel, O; Lenis, G

    2018-06-04

    Optical mapping is widely used as a tool to investigate cardiac electrophysiology in ex vivo preparations. Digital filtering of fluorescence-optical data is an important requirement for robust subsequent data analysis and is still a challenge when processing data acquired from thin mammalian myocardium. We therefore propose and investigate the use of an adaptive spatio-temporal Gaussian filter for processing optical mapping signals from these kinds of tissue, which usually have a low signal-to-noise ratio (SNR). We demonstrate how the filtering parameters can be chosen automatically without additional user input. For a systematic comparison of this filter with standard filtering methods from the literature, we generated synthetic signals representing optical recordings from the atrial myocardium of a rat heart with varying SNR. Furthermore, all filter methods were applied to experimental data from an ex vivo setup. Our filter outperformed the other filter methods regarding local activation time detection at SNRs below 3 dB, which are the noise ratios typically expected in these signals. At higher SNRs, the proposed filter performed slightly worse than the methods from the literature. In conclusion, the proposed adaptive spatio-temporal Gaussian filter is an appropriate tool for investigating fluorescence-optical data with low SNR. The spatio-temporal filter parameters were adapted automatically, in contrast to the other investigated filters. Copyright © 2018 Elsevier Ltd. All rights reserved.
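
    A simplified sketch of spatio-temporal Gaussian filtering of an optical mapping movie with SciPy. The paper's actual contribution, the automatic SNR-driven choice of the filter widths, is only gestured at here with an explicitly hypothetical rule; function names and numbers are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def smooth_optical_mapping(cube, sigma_t, sigma_xy):
            """cube: (nt, ny, nx) fluorescence movie; separate temporal/spatial widths."""
            return gaussian_filter(cube, sigma=(sigma_t, sigma_xy, sigma_xy))

        # Crude stand-in for the adaptive rule: smooth more aggressively at lower SNR.
        def pick_sigmas(snr_db):
            strength = max(0.5, 3.0 - 0.25 * snr_db)   # hypothetical mapping, not the paper's
            return strength, strength

        cube = np.random.rand(200, 32, 32)
        filtered = smooth_optical_mapping(cube, *pick_sigmas(snr_db=2.0))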

  13. An intra-specific consensus genetic map of pigeonpea [Cajanus cajan (L.) Millspaugh] derived from six mapping populations.

    PubMed

    Bohra, Abhishek; Saxena, Rachit K; Gnanesh, B N; Saxena, Kulbhushan; Byregowda, M; Rathore, Abhishek; Kavikishor, P B; Cook, Douglas R; Varshney, Rajeev K

    2012-10-01

    Pigeonpea (Cajanus cajan L.) is an important food legume crop of rainfed agriculture. Owing to the exposure of the crop to a number of biotic and abiotic stresses, crop productivity has remained stagnant for almost five decades at ca. 750 kg/ha. The availability of a cytoplasmic male sterility (CMS) system has facilitated the development and release of hybrids, which are expected to enhance the productivity of pigeonpea. Recent advances in genomics and molecular breeding such as marker-assisted selection (MAS) offer the possibility to accelerate hybrid breeding. Molecular markers and genetic maps are pre-requisites for deploying MAS in breeding. However, in the case of pigeonpea, only one inter- and two intra-specific genetic maps have been available so far. Here, four new intra-specific genetic maps comprising 59-140 simple sequence repeat (SSR) loci with map lengths ranging from 586.9 to 881.6 cM have been constructed. Using these four genetic maps together with two recently published intra-specific genetic maps, a consensus map was constructed, comprising 339 SSR loci spanning a distance of 1,059 cM. Furthermore, quantitative trait locus (QTL) analysis for fertility restoration (Rf) conducted in three mapping populations identified four major QTLs explaining up to 24% of the phenotypic variance. To the best of our knowledge, this is the first report on the construction of a consensus genetic map in pigeonpea and on the identification of QTLs for fertility restoration. The developed consensus genetic map should serve as a reference for developing new genetic maps as well as for correlation with the physical map of pigeonpea to be developed in the near future. The availability of more informative markers in the bins harbouring QTLs for sterility mosaic disease (SMD) and Rf will facilitate the selection of the most suitable markers for genetic analysis and molecular breeding applications in pigeonpea.

  14. Mapping biological process relationships and disease perturbations within a pathway network.

    PubMed

    Stoney, Ruth; Robertson, David L; Nenadic, Goran; Schwartz, Jean-Marc

    2018-01-01

    Molecular interaction networks are routinely used to map the organization of cellular function. Edges represent interactions between genes, proteins, or metabolites. However, in living cells, molecular interactions are dynamic, necessitating context-dependent models. Contextual information can be integrated into molecular interaction networks through the inclusion of additional molecular data, but there are concerns about completeness and relevance of this data. We developed an approach for representing the organization of human cellular processes using pathways as the nodes in a network. Pathways represent spatial and temporal sets of context-dependent interactions, generating a high-level network when linked together, which incorporates contextual information without the need for molecular interaction data. Analysis of the pathway network revealed linked communities representing functional relationships, comparable to those found in molecular networks, including metabolism, signaling, immunity, and the cell cycle. We mapped a range of diseases onto this network and find that pathways associated with diseases tend to be functionally connected, highlighting the perturbed functions that result in disease phenotypes. We demonstrated that disease pathways cluster within the network. We then examined the distribution of cancer pathways and showed that cancer pathways tend to localize within the signaling, DNA processes and immune modules, although some cancer-associated nodes are found in other network regions. Altogether, we generated a high-confidence functional network, which avoids some of the shortcomings faced by conventional molecular models. Our representation provides an intuitive functional interpretation of cellular organization, which relies only on high-quality pathway and Gene Ontology data. The network is available at https://data.mendeley.com/datasets/3pbwkxjxg9/1.

  15. Sodium hyaluronate accelerates the healing process in tooth sockets of rats.

    PubMed

    Mendes, Renato M; Silva, Gerluza A B; Lima, Miguel F; Calliari, Marcelo V; Almeida, Alvair P; Alves, José B; Ferreira, Anderson J

    2008-12-01

    In this study we evaluated the effects of sodium hyaluronate (HY) on the healing process of tooth sockets in rats. Immediately after the extraction of the upper first molars of male Holtzman rats, right sockets were treated with 1% HY gel (approximately 0.1 ml), while left sockets were used as controls (blood clot). The animals were sacrificed at 2, 7, and 21 days after tooth extraction and the upper maxillae processed for histological and morphometric analysis of the apical and middle thirds of the sockets. Carbopol, an inert gel, was used to evaluate the mechanical effect of gel injection into sockets. Expression of bone morphogenetic protein-2 (BMP-2) and osteopontin (OPN) was determined by immunohistochemistry at 1, 2, 3, 4, 5, and 7 days after tooth extraction. Histological analysis showed that HY treatment induced earlier trabecular bone deposition, resulting in a more organized bone matrix at 7 and 21 days after tooth extraction. HY also elicited a significant increase in the amount of bone trabeculae at 7 and 21 days after tooth extraction (percentage of trabecular bone area at 7 days: 13.21 ± 4.66% vs. 2.58 ± 1.36% in the apical third of control sockets) and in the vessel count at 7 days. Conversely, the number of cell nuclei was decreased in HY-treated sockets. Additionally, expression of BMP-2 and OPN was enhanced in HY-treated sockets compared with control sockets. These findings suggest that HY accelerates the healing process in tooth sockets of rats by stimulating the expression of osteogenic proteins.

  16. Prospects for Accelerator Technology

    NASA Astrophysics Data System (ADS)

    Todd, Alan

    2011-02-01

    Accelerator technology today is a greater than US$5 billion per annum business. Development of higher-performance technology with improved reliability that delivers reduced system size and life cycle cost is expected to significantly increase the total accelerator technology market and open up new application sales. Potential future directions are identified and pitfalls in new market penetration are considered. Both of the present big market segments, medical radiation therapy units and semiconductor ion implanters, are approaching the "maturity" phase of their product cycles, where incremental development rather than paradigm shifts is the norm, but they should continue to dominate commercial sales for some time. It is anticipated that large discovery-science accelerators will continue to provide a specialty market beset by the unpredictable cycles resulting from the scale of the projects themselves, coupled with external political and economic drivers. Although fraught with differing market entry difficulties, the security and environmental markets, together with new, as yet unrealized, industrial material processing applications, are expected to provide the bulk of future commercial accelerator technology growth.

  17. Trends for Electron Beam Accelerator Applications in Industry

    NASA Astrophysics Data System (ADS)

    Machi, Sueo

    2011-02-01

    Electron beam (EB) accelerators are major pieces of industrial equipment used for many commercial radiation processing applications. The industrial use of EB accelerators has a history of more than 50 years and is still growing in terms of both its economic scale and new applications. Major applications involve the modification of polymeric materials to create value-added products, such as heat-resistant wires, heat-shrinkable sheets, automobile tires, foamed plastics, battery separators and hydrogel wound dressings. The surface curing of coatings and printing inks is a growing application for low-energy electron accelerators, resulting in an environmentally friendly and energy-saving process. Recently, EB accelerators have been accepted in lieu of the radioactive isotope cobalt-60 as a source for sterilizing disposable medical products. Environmental protection by the use of EB accelerators is a new and important field of application. A commercial plant for cleaning flue gases from a coal-burning power plant is in operation in Poland, employing high-power EB accelerators. In Korea, a commercial plant uses EB to clean waste water from a dye factory.

  18. Use of an Annular Silicon Drift Detector (SDD) Versus a Conventional SDD Makes Phase Mapping a Practical Solution for Rare Earth Mineral Characterization.

    PubMed

    Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald

    2018-06-04

    A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most of them are limited to global analysis due to their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at a low accelerating voltage identify phases at the micron scale. In comparisons with a conventional SDD, the aSDD operated at optimized conditions makes phase mapping a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.

  19. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units

    PubMed Central

    2013-01-01

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling. PMID:24634618

  20. Mapping interference resolution across task domains: A shared control process in left inferior frontal gyrus

    PubMed Central

    Nelson, James K.; Reuter-Lorenz, Patricia A.; Persson, Jonas; Sylvester, Ching-Yune C.; Jonides, John

    2009-01-01

    Work in functional neuroimaging has mapped interference resolution processing onto left inferior frontal regions for both verbal working memory and a variety of semantic processing tasks. The proximity of the identified regions from these different tasks suggests the existence of a common, domain-general interference resolution mechanism. The current research specifically tests this idea in a within-subject design using fMRI to assess the activation associated with variable selection requirements in a semantic retrieval task (verb generation) and a verbal working memory task with a trial-specific proactive interference manipulation (recent-probes). High interference trials on both tasks were associated with activity in the midventrolateral region of the left inferior frontal gyrus, and the regions activated in each task strongly overlapped. The results indicate that an elemental component of executive control associated with interference resolution during retrieval from working memory and from semantic memory can be mapped to a common portion of the left inferior frontal gyrus. PMID:19111526

  1. Ant Colony Optimization for Mapping, Scheduling and Placing in Reconfigurable Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrandi, Fabrizio; Lanzi, Pier Luca; Pilato, Christian

    Modern heterogeneous embedded platforms, composed of several digital signal, application-specific and general-purpose processors, also include reconfigurable devices supporting partial dynamic reconfiguration. These devices can change the behavior of some of their parts during execution, allowing hardware acceleration of more sections of the applications. Nevertheless, partial dynamic reconfiguration imposes severe overheads in terms of latency. For such systems, a critical part of the design phase is deciding on which processing elements (mapping) and when (scheduling) to execute a task, but also how to place tasks on the reconfigurable device to guarantee the most efficient reuse of the programmable logic. In this paper we propose an algorithm based on Ant Colony Optimization (ACO) that simultaneously executes the scheduling, mapping and linear placing of tasks, hiding reconfiguration overheads through prefetching. Our heuristic gradually constructs solutions and then searches around the best ones, cutting out non-promising areas of the design space. We show how to consider the partial dynamic reconfiguration constraints in the scheduling, placing and mapping problems and compare our formulation to other heuristics that address the same problems. We demonstrate that our proposal is more general and robust, and finds better solutions (16.5% on average) than competing solutions.
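
    As a very reduced illustration of the ACO machinery, the toy sketch below handles only the mapping subproblem (task-to-processing-element assignment by pheromone-biased sampling); the scheduling, linear placement and reconfiguration-prefetch overheads that the paper optimizes jointly are omitted, and all problem numbers are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n_tasks, n_pes = 8, 3
        exec_time = rng.uniform(1.0, 5.0, size=(n_tasks, n_pes))  # hypothetical cost table
        tau = np.ones((n_tasks, n_pes))                           # pheromone trails

        def makespan(mapping):
            # Load of a PE is the total execution time of the tasks mapped onto it.
            return max(exec_time[np.where(mapping == pe)[0], pe].sum() for pe in range(n_pes))

        best_map, best_cost = None, np.inf
        for _ in range(200):                                      # ants
            probs = tau / tau.sum(axis=1, keepdims=True)          # pheromone-biased choice
            mapping = np.array([rng.choice(n_pes, p=probs[t]) for t in range(n_tasks)])
            cost = makespan(mapping)
            if cost < best_cost:
                best_map, best_cost = mapping, cost
            tau *= 0.95                                           # evaporation
            tau[np.arange(n_tasks), mapping] += 1.0 / cost        # reinforce good assignments
        print("best mapping:", best_map, "makespan:", best_cost)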

  2. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.
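
    For orientation, the bidomain model in its usual parabolic-elliptic form (the standard formulation solved by CARP-class codes; notation assumed here) couples the extracellular potential φ_e and transmembrane voltage V_m:

    $$
    \nabla\!\cdot\!\bigl((\sigma_i+\sigma_e)\nabla\phi_e\bigr)=-\nabla\!\cdot\!(\sigma_i\nabla V_m),\qquad
    \nabla\!\cdot\!(\sigma_i\nabla V_m)+\nabla\!\cdot\!(\sigma_i\nabla\phi_e)=\beta\Bigl(C_m\frac{\partial V_m}{\partial t}+I_{\mathrm{ion}}\Bigr).
    $$

    The elliptic equation for φ_e is the large sparse linear system whose parallel solution dominates the runtime discussed above.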

  3. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  4. Using a local low rank plus sparse reconstruction to accelerate dynamic hyperpolarized 13C imaging using the bSSFP sequence

    NASA Astrophysics Data System (ADS)

    Milshteyn, Eugene; von Morze, Cornelius; Reed, Galen D.; Shang, Hong; Shin, Peter J.; Larson, Peder E. Z.; Vigneron, Daniel B.

    2018-05-01

    Acceleration of dynamic 2D (T2 Mapping) and 3D hyperpolarized 13C MRI acquisitions using the balanced steady-state free precession sequence was achieved with a specialized reconstruction method, based on the combination of low rank plus sparse and local low rank reconstructions. Methods were validated using both retrospectively and prospectively undersampled in vivo data from normal rats and tumor-bearing mice. Four-fold acceleration of 1-2 mm isotropic 3D dynamic acquisitions with 2-5 s temporal resolution and two-fold acceleration of 0.25-1 mm² 2D dynamic acquisitions was achieved. This enabled visualization of the biodistribution of [2-13C]pyruvate, [1-13C]lactate, [13C, 15N2]urea, and HP001 within heart, kidneys, vasculature, and tumor, as well as calculation of high resolution T2 maps.
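
    Combined low-rank-plus-sparse reconstructions of this kind are commonly posed as the following optimization (an Otazo-style formulation; the exact operators and weights used by these authors may differ):

    $$
    \min_{L,S}\ \tfrac{1}{2}\bigl\|E(L+S)-y\bigr\|_2^2+\lambda_L\|L\|_{*}+\lambda_S\|TS\|_{1},
    $$

    where y is the undersampled k-space data, E the encoding (sampling, Fourier, coil) operator, the nuclear norm ‖·‖_* enforces temporal low rank, and T is a sparsifying transform; the "local" variant applies the nuclear norm patch-wise rather than to the whole image series.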

  5. CrowdMapping: A Crowdsourcing-Based Terminology Mapping Method for Medical Data Standardization.

    PubMed

    Mao, Huajian; Chi, Chenyang; Huang, Boyu; Meng, Haibin; Yu, Jinghui; Zhao, Dongsheng

    2017-01-01

    Standardized terminology is the prerequisite of data exchange in analysis of clinical processes. However, data from different electronic health record systems are based on idiosyncratic terminology systems, especially when the data is from different hospitals and healthcare organizations. Terminology standardization is necessary for medical data analysis. We propose a crowdsourcing-based terminology mapping method, CrowdMapping, to standardize the terminology in medical data. CrowdMapping uses a confidential model to determine how terminologies are mapped to a standard system, like ICD-10. The model uses mappings from different health care organizations and evaluates the diversity of the mapping to determine a more sophisticated mapping rule. Further, the CrowdMapping model enables users to rate the mapping result and interact with the model evaluation. CrowdMapping is a work-in-progress system; we present initial results of mapping terminologies.

  6. Development and comparison of processing maps of Mg-3Sn-1Ca alloy from data obtained in tension versus compression

    NASA Astrophysics Data System (ADS)

    Rao, K. P.; Suresh, K.; Prasad, Y. V. R. K.; Hort, N.

    2018-01-01

    The hot workability of extruded Mg-3Sn-1Ca alloy has been evaluated by developing processing maps with flow stress data from compression and tensile tests, with a view to finding the effect of the applied state of stress. The processing maps developed at a strain of 0.2 are essentially similar irrespective of the mode of deformation (compression or tension) and exhibit three domains in the temperature ranges (1) 350-425 °C, (2) 450-550 °C, and (3) 400-500 °C, the first two occurring at lower strain rates and the third at higher strain rates. In all three domains, dynamic recrystallization occurs; it is caused by non-basal slip and is controlled by lattice self-diffusion in the first and second domains and by grain boundary self-diffusion in the third. The state of stress imposed on the specimen (compression or tension) does not have any significant effect on the processing maps.
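
    Processing maps of this type are generally constructed from the strain-rate sensitivity m of the flow stress via the dynamic materials model; assuming the standard Prasad formulation (this abstract does not restate it), the mapped quantities are

    $$
    m=\frac{\partial\ln\sigma}{\partial\ln\dot{\varepsilon}},\qquad
    \eta=\frac{2m}{m+1},\qquad
    \xi(\dot{\varepsilon})=\frac{\partial\ln\!\bigl(m/(m+1)\bigr)}{\partial\ln\dot{\varepsilon}}+m<0,
    $$

    where contours of the power dissipation efficiency η over temperature and strain rate delineate the workability domains, and ξ < 0 flags flow instability.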

  7. AMS implications of charge-changing during acceleration

    NASA Astrophysics Data System (ADS)

    Knies, D. L.; Grabowski, K. S.; Cetina, C.; Demoranville, L. T.; Dougherty, M. R.; Mignerey, A. C.; Taylor, C. L.

    2007-08-01

    The NRL Accelerator Mass Spectrometer facility was recently reconfigured to incorporate a modified Cameca IMS 6f Secondary Ion Mass Spectrometer as a high-performance ion source. The NRL accelerator facility supplants the mass spectrometer portion of the IMS 6f instrument. As part of the initial testing of the combined instrument, charge-state scans were performed under various conditions. These provided the basis for studying the effects of terminal gas pressure on the process of charge-changing during acceleration. A combined system of transmission-micro-channel plate and energy detector was found to remove ghost beams produced from Pd charge-changing events in the accelerator tube.

  8. Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations

    DOE PAGES

    Toth, Alex; Ellis, J. Austin; Evans, Tom; ...

    2017-10-26

    Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.
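
    A minimal sketch of the underlying Anderson acceleration iteration (in the Walker-Ni least-squares form), without the inexact-evaluation error models the paper analyzes; all names here are illustrative:

```python
import numpy as np

def anderson(g, x0, m=3, iters=100, tol=1e-10):
    """Anderson acceleration for the fixed-point problem x = g(x).
    The last m residuals r_j = g(x_j) - x_j are combined by least
    squares to extrapolate the next iterate."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    xs, gs = [x], [g(x)]
    for _ in range(iters):
        rs = [gj - xj for gj, xj in zip(gs, xs)]
        if len(rs) == 1:
            x_new = gs[-1]                      # plain Picard step to start
        else:
            # Differences of residuals / g-values against the newest ones.
            dR = np.column_stack([rs[-1] - r for r in rs[:-1]])
            dG = np.column_stack([gs[-1] - gj for gj in gs[:-1]])
            gamma, *_ = np.linalg.lstsq(dR, rs[-1], rcond=None)
            x_new = gs[-1] - dG @ gamma         # accelerated iterate
        if np.linalg.norm(x_new - xs[-1]) < tol:
            return x_new
        xs.append(x_new); gs.append(g(x_new))
        xs, gs = xs[-(m + 1):], gs[-(m + 1):]   # keep a depth-m window
    return xs[-1]

# Toy usage: x = cos(x) has fixed point ~0.739085.
print(anderson(np.cos, 0.5))
```

    When g(x) is corrupted by noise, the iteration behaves as above only down to the noise floor, which is the stagnation phenomenon the local improvement results describe.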

  9. Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Ellis, J. Austin; Evans, Tom

    Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.

  10. Geological Mapping of Fortuna Tessera (V-2): Venus and Earth's Archean Process Comparisons

    NASA Technical Reports Server (NTRS)

    Head, James W.; Hurwitz, D. M.; Ivanov, M. A.; Basilevsky, A. T.; Kumar, P. Senthil

    2008-01-01

    The geological features, structures, thermal conditions, interpreted processes, and outstanding questions of the Earth's Archean and of Venus share many similarities, and we are using a problem-oriented approach to Venus mapping, guided by insight from the Earth's Archean record, to gain new insight into the evolution of both. The Earth's preserved and well-documented Archean record provides important insight into high heat-flux tectonic and magmatic environments and structures, while the surface of Venus reveals the current configuration and recent geological record of analogous high-temperature environments, unmodified by the subsequent several billion years of segmentation and overprinting that affected the Earth. Elsewhere we have addressed the nature of the Earth's Archean, its similarities to and differences from Venus, and the specific Venus and Earth-Archean problems on which progress might be made through comparison. Here we present the major goals of the Venus-Archean comparison and show how preliminary mapping of the geology of the V-2 Fortuna Tessera quadrangle is providing insight on these problems. We have identified five key themes and questions common to both the Archean and Venus, the assessment of which could provide important new insights into the history and processes of both planets.

  11. Dissociable effects of reward and expectancy during evaluative feedback processing revealed by topographic ERP mapping analysis.

    PubMed

    Gheza, Davide; Paul, Katharina; Pourtois, Gilles

    2017-11-24

    Evaluative feedback provided during performance monitoring (PM) elicits either a positive or negative deflection ~250-300 ms after its onset in the event-related potential (ERP), depending on whether the outcome is reward-related or not, as well as expected or not. However, it remains currently unclear whether these two deflections reflect a unitary process, or rather dissociable effects arising from non-overlapping brain networks. To address this question, we recorded 64-channel EEG in healthy adult participants performing a standard gambling task where valence and expectancy were manipulated in a factorial design. We analyzed the feedback-locked ERP data using a conventional ERP analysis, as well as an advanced topographic ERP mapping analysis supplemented with distributed source localization. Results reveal two main topographies showing opposing valence effects that are differently modulated by expectancy. The first one was short-lived and sensitive to no-reward irrespective of expectancy. Source estimation associated with this topographic map comprised mainly regions of the dorsal anterior cingulate cortex. The second one was primarily driven by reward, had a prolonged time-course, and was monotonically influenced by expectancy. Moreover, this reward-related topographical map was best accounted for by intracranial generators estimated in the posterior cingulate cortex. These new findings suggest the existence of dissociable brain systems depending on feedback valence and expectancy. More generally, they demonstrate the added value of using topographic ERP mapping methods, besides conventional ERP measurements, to characterize qualitative changes occurring in the spatio-temporal dynamics of reward processing during PM. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95%), and progress is being made towards identifying the mapped spectral classes.
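
    A minimal sketch of the maximum likelihood step described above, assuming Gaussian class statistics estimated from the clustered training fields (names and the toy data are illustrative, not the LARS implementation):

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Assign each pixel vector to the spectral class with the highest
    Gaussian log-likelihood (equal priors assumed)."""
    scores = []
    for mu, cov in zip(means, covs):
        _, logdet = np.linalg.slogdet(cov)
        diff = pixels - mu
        mahal = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        scores.append(-0.5 * (logdet + mahal))
    return np.argmax(np.column_stack(scores), axis=1)

# Toy usage: 4-band pixels, two hypothetical spectral classes.
rng = np.random.default_rng(0)
means = [np.zeros(4), np.full(4, 3.0)]
covs = [np.eye(4), 2.0 * np.eye(4)]
pixels = rng.normal(size=(10, 4)) + means[1]   # drawn near class 1
print(ml_classify(pixels, means, covs))        # mostly 1s
```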

  13. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95 percent), and progress is being made towards identifying the mapped spectral classes.

  14. Intervention mapping: a process for developing theory- and evidence-based health education programs.

    PubMed

    Bartholomew, L K; Parcel, G S; Kok, G

    1998-10-01

    The practice of health education involves three major program-planning activities: needs assessment, program development, and evaluation. Over the past 20 years, significant enhancements have been made to the conceptual base and practice of health education. Models that outline explicit procedures and detailed conceptualization of community assessment and evaluation have been developed. Other advancements include the application of theory to health education and promotion program development and implementation. However, there remains a need for more explicit specification of the processes by which one uses theory and empirical findings to develop interventions. This article presents the origins, purpose, and description of Intervention Mapping, a framework for health education intervention development. Intervention Mapping is composed of five steps: (1) creating a matrix of proximal program objectives, (2) selecting theory-based intervention methods and practical strategies, (3) designing and organizing a program, (4) specifying adoption and implementation plans, and (5) generating program evaluation plans.

  15. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    PubMed

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

    Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from 4DCT scans were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed
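
    A minimal sketch of the Map/Reduce split described above, with toy stand-ins for the real filtering and geometry-aware backprojection (ramp_filter and backproject here are simplified placeholders, not Hadoop tasks or the authors' code):

```python
import numpy as np
from functools import reduce

def ramp_filter(proj):
    """1-D ramp filter along detector rows (frequency-domain |f| weighting)."""
    f = np.fft.rfftfreq(proj.shape[-1])
    return np.fft.irfft(np.fft.rfft(proj, axis=-1) * f, n=proj.shape[-1], axis=-1)

def backproject(proj, volume_shape):
    """Placeholder backprojector: real FDK maps detector pixels to voxels
    along source rays; here we just smear the filtered projection's mean."""
    return np.broadcast_to(proj.mean(), volume_shape).copy()

def map_fn(subset, volume_shape):
    # Map task: filter + backproject one subset of projections.
    out = np.zeros(volume_shape)
    for proj in subset:
        out += backproject(ramp_filter(proj), volume_shape)
    return out

def reduce_fn(partials):
    # Reduce task: sum the partial backprojections into the final volume.
    return reduce(np.add, partials)

# Toy run: 8 map tasks over 80 projections of a 64x64 detector.
projs = np.random.rand(80, 64, 64)
parts = [map_fn(projs[i::8], (64, 64, 64)) for i in range(8)]
volume = reduce_fn(parts)
```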

  16. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment

    PubMed Central

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-01-01

    Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldkamp–Davis–Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from 4DCT scans were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: Speedup of reconstruction time was found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10−7. Our study also proved that cloud computing with MapReduce is fault tolerant: the

  17. Challenges in making a seismic hazard map for Alaska and the Aleutians

    USGS Publications Warehouse

    Wesson, R.L.; Boyd, O.S.; Mueller, C.S.; Frankel, A.D.; Freymueller, J.T.

    2008-01-01

    We present a summary of the data and analyses leading to the revision of the time-independent probabilistic seismic hazard maps of Alaska and the Aleutians. These maps represent a revision of existing maps based on newly obtained data, and reflect best current judgments about methodology and approach. They have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States, and will be proposed for adoption in future revisions to the International Building Code. We present example maps for peak ground acceleration, 0.2 s spectral amplitude (SA), and 1.0 s SA at a probability level of 2% in 50 years (annual probability of 0.000404). In this summary, we emphasize issues encountered in preparation of the maps that motivate or require future investigation and research.
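
    The annual probability quoted in parentheses follows from the 2%-in-50-years hazard level under the usual assumption of independent years; as a check:

    $$
    p_{1\,\mathrm{yr}}=1-(1-0.02)^{1/50}\approx 4.04\times10^{-4},\qquad
    T=\frac{1}{p_{1\,\mathrm{yr}}}\approx 2475\ \text{years},
    $$

    i.e., the familiar ~2475-year return period used for these design maps.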

  18. Depletion of CD4 T Lymphocytes at the time of infection with M. avium subsp. paratuberculosis does not accelerate disease progression

    USDA-ARS?s Scientific Manuscript database

    A calf model was used to determine if the depletion of CD4 T cells prior to inoculation of Mycobacterium avium subsp. paratuberculosis (Map) would delay development of an immune response to Map and accelerate disease progression. Ileal cannulas were surgically implanted in 5 bull calves at two month...

  19. Accelerated West Antarctic ice mass loss continues to outpace East Antarctic gains

    NASA Astrophysics Data System (ADS)

    Harig, Christopher; Simons, Frederik J.

    2015-04-01

    While multiple data sources have confirmed that Antarctica is losing ice at an accelerating rate, different measurement techniques estimate the details of its geographically highly variable mass balance with different levels of accuracy, spatio-temporal resolution, and coverage. Some scope remains for methodological improvements using a single data type. In this study we report our progress in increasing the accuracy and spatial resolution of time-variable gravimetry from the Gravity Recovery and Climate Experiment (GRACE). We determine the geographic pattern of ice mass change in Antarctica between January 2003 and June 2014, accounting for glacio-isostatic adjustment (GIA) using the IJ05_R2 model. Expressing the unknown signal in a sparse Slepian basis constructed by optimization to prevent leakage out of the regions of interest, we use robust signal processing and statistical estimation methods. Applying those to the latest time series of monthly GRACE solutions we map Antarctica's mass loss in space and time as well as can be recovered from satellite gravity alone. Ignoring GIA model uncertainty, over the period 2003-2014, West Antarctica has been losing ice mass at a rate of −121 ± 8 Gt/yr and has experienced large acceleration of ice mass losses along the Amundsen Sea coast of −18 ± 5 Gt/yr², doubling the mass loss rate in the past six years. The Antarctic Peninsula shows slightly accelerating ice mass loss, with larger accelerated losses in the southern half of the Peninsula. Ice mass gains due to snowfall in Dronning Maud Land have continued to add about half the amount of West Antarctica's loss back onto the continent over the last decade. We estimate the overall mass losses from Antarctica since January 2003 at −92 ± 10 Gt/yr.

  20. Laser acceleration

    NASA Astrophysics Data System (ADS)

    Tajima, T.; Nakajima, K.; Mourou, G.

    2017-02-01

    The fundamental idea of Laser Wakefield Acceleration (LWFA) is reviewed. An ultrafast intense laser pulse drives a coherent wakefield with a relativistic amplitude robustly supported by the plasma. While the large amplitude of wakefields involves collective resonant oscillations of the eigenmode of the entire plasma electrons, the wake phase velocity ≈ c and the ultrafastness of the laser pulse introduce wake stability and rigidity. A large number of worldwide experiments show rapid progress of this concept's realization toward both the high-energy accelerator prospect and broad applications. The strong interest in this has been spurring and stimulating novel laser technologies, including Chirped Pulse Amplification, Thin Film Compression, the Coherent Amplification Network, and Relativistic Mirror Compression. These in turn have created a conglomerate of novel science and technology with LWFA to form a new genre of high field science, with many parameters of merit in this field increasing exponentially lately. This science has triggered a number of worldwide research centers and initiatives. Associated physics of ion acceleration, X-ray generation, and astrophysical processes of ultrahigh energy cosmic rays are reviewed. Applications such as X-ray free electron lasers, cancer therapy, and radioisotope production, etc., are considered. A new avenue of LWFA using nanomaterials is also emerging.

  1. Astrophysical particle acceleration mechanisms in colliding magnetized laser-produced plasmas

    DOE PAGES

    Fox, W.; Park, J.; Deng, W.; ...

    2017-08-11

    Significant particle energization is observed to occur in numerous astrophysical environments, and in the standard models, this acceleration occurs alongside energy conversion processes including collisionless shocks or magnetic reconnection. Recent platforms for laboratory experiments using magnetized laser-produced plasmas have opened opportunities to study these particle acceleration processes in the laboratory. Through fully kinetic particle-in-cell simulations, we investigate acceleration mechanisms in experiments with colliding magnetized laser-produced plasmas, with geometry and parameters matched to recent high-Mach number reconnection experiments with externally controlled magnetic fields. 2-D simulations demonstrate significant particle acceleration with three phases of energization: first, a “direct” Fermi acceleration driven by approaching magnetized plumes; second, x-line acceleration during magnetic reconnection of anti-parallel fields; and finally, an additional Fermi energization of particles trapped in contracting and relaxing magnetic islands produced by reconnection. Furthermore, the relative effectiveness of these mechanisms depends on plasma and magnetic field parameters of the experiments.

  2. Geologic Mapping, Volcanic Stages and Magmatic Processes in Hawaiian Volcanoes

    NASA Astrophysics Data System (ADS)

    Sinton, J. M.

    2005-12-01

    rise to various Hawaiian lithologies. This analysis indicates that the important magmatic process that links geologic mapping to volcanic stage is thermal state of the volcano, as manifest by depth of magma evolution. The only criterion for rejuvenation volcanism is the presence of a significant time break (more than several hundred thousand years) preceding eruption.

  3. SWIMRT: A graphical user interface using the sliding window algorithm to construct a fluence map machine file

    PubMed Central

    Chow, James C.L.; Grigorov, Grigor N.; Yazdani, Nuri

    2006-01-01

    A custom‐made computer program, SWIMRT, was developed using MATLAB® and the sliding window algorithm to construct “multileaf collimator (MLC) machine” files for intensity‐modulated radiotherapy (IMRT) fluence maps. The user can either import a fluence map with a graphical file format created by an external treatment‐planning system such as Pinnacle3 or create his or her own fluence map using the matrix editor in the program. Through comprehensive calibrations of the dose and the dimension of the imported fluence field, the user can use associated image‐processing tools such as field resizing and edge trimming to modify the imported map. When the processed fluence map is suitable, an “MLC machine” file is generated for our Varian 21 EX linear accelerator with a 120‐leaf Millennium MLC. This machine file is transferred to the MLC console of the LINAC to control the continuous motions of the leaves during beam irradiation. An IMRT field is then irradiated with the 2D intensity profiles, and the irradiated profiles are compared to the imported or modified fluence map. This program was verified and tested using film dosimetry to address the following uncertainties: (1) the mechanical limitation due to the leaf width and maximum traveling speed, and (2) the dosimetric limitation due to the leaf leakage/transmission and penumbra effect. Because the fluence map can be edited, resized, and processed according to the requirements of a study, SWIMRT is essential in studying and investigating the IMRT technique using the sliding window algorithm. Using this program, future work on the algorithm may include redistributing the time space between segmental fields to enhance the fluence resolution, and readjusting the timing of each leaf during delivery to avoid small fields. Possible clinical utilities and examples for SWIMRT are given in this paper. PACS numbers: 87.53.Kn, 87.53.St, 87.53.Uv PMID:17533330
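
    SWIMRT's machine files encode leaf trajectories. The core sliding-window idea, in the Spirou-Chui sense and ignoring the dose-rate limits, leaf-speed limits, transmission, and penumbra that the program calibrates for, can be sketched for one leaf pair as:

```python
import numpy as np

def sliding_window_trajectories(fluence):
    """Unidirectional sweep: both leaves move left to right; the fluence at
    each point equals the time between the leading leaf opening it and the
    trailing leaf closing it. Decreases in fluence are absorbed by delaying
    the leading leaf; unit dose rate assumed."""
    dphi = np.diff(fluence, prepend=0.0)
    t_lead = np.cumsum(np.maximum(0.0, -dphi))   # crossing times, leading leaf
    t_trail = t_lead + fluence                   # open window = fluence
    return t_lead, t_trail

phi = np.array([0, 1, 3, 2, 2, 4, 1, 0], dtype=float)
t_lead, t_trail = sliding_window_trajectories(phi)
assert np.allclose(t_trail - t_lead, phi)        # delivered fluence matches
```

    Both returned time profiles are non-decreasing, so the leaves never reverse, which is the defining constraint of the sliding-window delivery.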

  4. The acceleration of particles at propagating interplanetary shocks

    NASA Astrophysics Data System (ADS)

    Prinsloo, P. L.; Strauss, R. D. T.

    2017-12-01

    Enhancements of charged energetic particles are often observed at Earth following the eruption of coronal mass ejections (CMEs) on the Sun. These enhancements are thought to arise from the acceleration of those particles at interplanetary shocks forming ahead of CMEs, propagating into the heliosphere. In this study, we model the acceleration of these energetic particles by solving a set of stochastic differential equations formulated to describe their transport and including the effects of diffusive shock acceleration. The study focuses on how acceleration at halo-CME-driven shocks alter the energy spectra of non-thermal particles, while illustrating how this acceleration process depends on various shock and transport parameters. We finally attempt to establish the relative contributions of different seed populations of energetic particles in the inner heliosphere to observed intensities during selected acceleration events.

  5. SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, J; Kim, Y

    Purpose: To apply a risk-based assessment and analysis technique (AAPM TG 100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of the personnel involved in eye plaque brachytherapy are defined for the retinal specialist, radiation oncologist, nurse, and medical physicist. The entire procedure was examined carefully: first, major processes were identified, and then the details of each major process were followed. Results: Seventy-one potential failure modes were identified. The eight major processes (with the corresponding number of detailed modes) are patient consultation (2 modes), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3). Half of the total modes (36) are related to the physicist, although the physicist is not involved in processes such as the actual suturing and removal of the plaque. Conclusion: Failure modes can arise not only in physicist-related procedures such as treatment planning and source activity calibration, but also in the more clinical procedures performed by other medical staff. Improving the accuracy of communication for non-physicist-related clinical procedures could be one approach to preventing human errors, and more rigorous physics double checks would reduce errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effects analysis (FMEA) will identify the top tiers of modes by ranking all possible modes with a risk priority number (RPN). For those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.
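
    For the planned FMEA step, each mode's risk priority number is the product of its occurrence, severity, and detectability scores, conventionally rated 1-10 each (the numbers below are illustrative only):

    $$
    \mathrm{RPN}=O\times S\times D,\qquad\text{e.g. } O=4,\ S=8,\ D=6\ \Rightarrow\ \mathrm{RPN}=192.
    $$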

  6. Using a local low rank plus sparse reconstruction to accelerate dynamic hyperpolarized 13C imaging using the bSSFP sequence.

    PubMed

    Milshteyn, Eugene; von Morze, Cornelius; Reed, Galen D; Shang, Hong; Shin, Peter J; Larson, Peder E Z; Vigneron, Daniel B

    2018-05-01

    Acceleration of dynamic 2D (T2 Mapping) and 3D hyperpolarized 13C MRI acquisitions using the balanced steady-state free precession sequence was achieved with a specialized reconstruction method, based on the combination of low rank plus sparse and local low rank reconstructions. Methods were validated using both retrospectively and prospectively undersampled in vivo data from normal rats and tumor-bearing mice. Four-fold acceleration of 1-2 mm isotropic 3D dynamic acquisitions with 2-5 s temporal resolution and two-fold acceleration of 0.25-1 mm² 2D dynamic acquisitions was achieved. This enabled visualization of the biodistribution of [2-13C]pyruvate, [1-13C]lactate, [13C,15N2]urea, and HP001 within heart, kidneys, vasculature, and tumor, as well as calculation of high resolution T2 maps. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Accelerating 3D Hall MHD Magnetosphere Simulations with Graphics Processing Units

    NASA Astrophysics Data System (ADS)

    Bard, C.; Dorelli, J.

    2017-12-01

    The resolution required to simulate planetary magnetospheres with Hall magnetohydrodynamics results in program sizes approaching several hundred million grid cells. These would take years to run on a single computational core and require hundreds or thousands of computational cores to complete in a reasonable time, which in turn requires access to the largest supercomputers. Graphics processing units (GPUs) provide a viable alternative: one GPU can do the work of roughly 100 cores, bringing Hall MHD simulations of Ganymede within reach of modest GPU clusters (~8 GPUs). We report our progress in developing a GPU-accelerated, three-dimensional Hall magnetohydrodynamic code and present Hall MHD simulation results for both Ganymede (run on 8 GPUs) and Mercury (56 GPUs). We benchmark our Ganymede simulation against previous results for the Galileo G8 flyby, namely that adding the Hall term to ideal MHD simulations changes the global convection pattern within the magnetosphere. Additionally, we present new results for the G1 flyby as well as initial results from Hall MHD simulations of Mercury and compare them with the corresponding ideal MHD runs.
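
    For context, the "Hall term" referred to is the J × B contribution in the generalized Ohm's law (electron pressure and resistive terms omitted here); retaining it lets the magnetic field decouple from the ion flow at scales below the ion inertial length:

    $$
    \mathbf{E}=-\mathbf{v}\times\mathbf{B}+\frac{\mathbf{J}\times\mathbf{B}}{ne}.
    $$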

  8. Probabilistic seismic hazard maps for Sinai Peninsula, Egypt

    NASA Astrophysics Data System (ADS)

    Deif, A.; Abou Elenean, K.; El Hadidy, M.; Tealeb, A.; Mohamed, A.

    2009-09-01

    Sinai experienced the largest Egyptian earthquake, with moment magnitude (Mw) 7.2, in 1995 in the Gulf of Aqaba, 350 km from Cairo. It is characterized by the presence of many tourist projects in addition to different natural resources. The aim of the current study is to present, for the first time, probabilistic spectral hazard maps for Sinai. Revised earthquake catalogues for Sinai and its surroundings, from 112 BC to 2006 AD with magnitude equal to or greater than 3.0, are used to calculate seismic hazard in the region of interest between 27°N and 31.5°N and 32°E and 36°E. We declustered these catalogues to include only independent events. The catalogues were tested for the completeness of different magnitude ranges. 28 seismic source zones are used to define the seismicity. The recurrence rates and the maximum earthquakes across these zones were also determined from these modified catalogues. Strong ground motion relations for rock are used to produce 5% damped spectral acceleration values for four different periods (0.2, 0.5, 1.0 and 2.0 s) to define the uniform response spectra at each site (a grid of 0.2° × 0.2° over the whole area). Maps showing spectral acceleration values at 0.2, 0.5, 1.0 and 2.0 s periods as well as peak ground acceleration (PGA) for the return period of 475 years (equivalent to 90% probability of non-exceedance in 50 years) are presented. In addition, Uniform Hazard Spectra (UHS) at 25 different periods for the four main cities (Hurghada, Sharm El-Sheikh, Nuweibaa and Suez) are graphed. The highest hazard is found in the Gulf of Aqaba, with a maximum spectral acceleration of 356 cm s⁻² at a period of 0.22 s for a return period of 475 years.

  9. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    NASA Astrophysics Data System (ADS)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, many cloud storage services package a large number of documents only after all packets have been received; in this store-then-forward procedure from the local transmitter to the server, packing and unpacking consume a lot of time, and transmission efficiency is low. A new parallel processing algorithm is proposed to optimize the transmission mode. Following the MapReduce model of computation, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. In simulation experiments on the Hadoop cloud computing platform, this algorithm not only accelerates the file transfer rate but also shortens the waiting time of the Reducer mechanism. It breaks through the traditional sequential transmission constraints and reduces storage coupling to improve transmission efficiency.
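
    A rough Python analogue of the proposed overlap between transmission and processing (the paper uses MPI; multiprocessing stands in here, and the per-chunk work is a placeholder):

```python
from multiprocessing import Pool

def mapper(chunk):
    # Per-chunk work (stand-in for packing/transmitting one packet group);
    # it starts as soon as the chunk is available, not after the full file.
    return len(chunk)

def transfer(chunks, workers=4):
    total = 0
    with Pool(workers) as pool:
        # imap streams results back while later chunks are still being mapped,
        # so the Reduce step overlaps with transmission instead of waiting.
        for partial in pool.imap(mapper, chunks):
            total += partial          # incremental Reduce
    return total

if __name__ == "__main__":
    print(transfer([b"ab", b"cde", b"f"]))   # -> 6
```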

  10. A high density physical map of chromosome 1BL supports evolutionary studies, map-based cloning and sequencing in wheat

    PubMed Central

    2013-01-01

    Background As for other major crops, achieving a complete wheat genome sequence is essential for the application of genomics to breeding new and improved varieties. To overcome the complexities of the large, highly repetitive and hexaploid wheat genome, the International Wheat Genome Sequencing Consortium established a chromosome-based strategy that was validated by the construction of the physical map of chromosome 3B. Here, we present improved strategies for the construction of highly integrated and ordered wheat physical maps, using chromosome 1BL as a template, and illustrate their potential for evolutionary studies and map-based cloning. Results Using a combination of novel high throughput marker assays and an assembly program, we developed a high quality physical map representing 93% of wheat chromosome 1BL, anchored and ordered with 5,489 markers including 1,161 genes. Analysis of the gene space organization and evolution revealed that gene distribution and conservation along the chromosome results from the superimposition of the ancestral grass and recent wheat evolutionary patterns, leading to a peak of synteny in the central part of the chromosome arm and an increased density of non-collinear genes towards the telomere. With a density of about 11 markers per Mb, the 1BL physical map provides 916 markers, including 193 genes, for fine mapping the 40 QTLs mapped on this chromosome. Conclusions Here, we demonstrate that high marker density physical maps can be developed in complex genomes such as wheat to accelerate map-based cloning, gain new insights into genome evolution, and provide a foundation for reference sequencing. PMID:23800011

  11. Modeling of Particle Acceleration at Multiple Shocks Via Diffusive Shock Acceleration: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Parker, L. N.; Zank, G. P.

    2013-12-01

    Successful forecasting of energetic particle events in space weather models require algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000) where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
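
    For reference, steady-state diffusive shock acceleration at a single shock of compression ratio r yields a power-law distribution

    $$
    f(p)\propto p^{-q},\qquad q=\frac{3r}{r-1},
    $$

    so a strong shock (r = 4) gives q = 4; repeated acceleration cycles across multiple shocks, with the decompression between shocks described above, progressively harden the spectrum toward the q → 3 limit.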

  12. Evansville Area Earthquake Hazards Mapping Project (EAEHMP) - Progress Report, 2008

    USGS Publications Warehouse

    Boyd, Oliver S.; Haase, Jennifer L.; Moore, David W.

    2009-01-01

    Maps of surficial geology, deterministic and probabilistic seismic hazard, and liquefaction potential index have been prepared by various members of the Evansville Area Earthquake Hazard Mapping Project for seven quadrangles in the Evansville, Indiana, and Henderson, Kentucky, metropolitan areas. The surficial geologic maps feature 23 types of surficial geologic deposits, artificial fill, and undifferentiated bedrock outcrop and include alluvial and lake deposits of the Ohio River valley. Probabilistic and deterministic seismic hazard and liquefaction hazard mapping is made possible by drawing on a wealth of information including surficial geologic maps, water well logs, and in-situ testing profiles using the cone penetration test, standard penetration test, down-hole shear wave velocity tests, and seismic refraction tests. These data were compiled and collected with contributions from the Indiana Geological Survey, Kentucky Geological Survey, Illinois State Geological Survey, United States Geological Survey, and Purdue University. Hazard map products are in progress and are expected to be completed by the end of 2009, with a public roll out in early 2010. Preliminary results suggest that there is a 2 percent probability that peak ground accelerations of about 0.3 g will be exceeded in much of the study area within 50 years, which is similar to the 2002 USGS National Seismic Hazard Maps for a firm rock site value. Accelerations as high as 0.4-0.5 g may be exceeded along the edge of the Ohio River basin. Most of the region outside of the river basin has a low liquefaction potential index (LPI), where the probability that LPI is greater than 5 (that is, there is a high potential for liquefaction) for a M7.7 New Madrid type event is only 20-30 percent. Within the river basin, most of the region has high LPI, where the probability that LPI is greater than 5 for a New Madrid type event is 80-100 percent.

  13. Temporal mapping and analysis

    NASA Technical Reports Server (NTRS)

    O'Hara, Charles G. (Inventor); Shrestha, Bijay (Inventor); Vijayaraj, Veeraraghavan (Inventor); Mali, Preeti (Inventor)

    2011-01-01

    A compositing process for selecting spatial data collected over a period of time, creating temporal data cubes from the spatial data, and processing and/or analyzing the data using temporal mapping algebra functions. In some embodiments, the process creates a masked cube from the temporal data cubes and computes a composite from the masked cube by using temporal mapping algebra.
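
    An illustrative numpy rendering of the masked-cube compositing step (array shapes and the quality mask are invented for the example):

```python
import numpy as np

# Temporal map algebra sketch: stack co-registered rasters into a
# (time, y, x) cube, mask out bad samples, and composite over time.
cube = np.random.rand(5, 4, 4)                   # five dates of a 4x4 grid
good = cube > 0.2                                # hypothetical quality mask
masked = np.ma.masked_array(cube, mask=~good)    # the "masked cube"
composite = masked.max(axis=0).filled(np.nan)    # per-pixel temporal maximum
print(composite)
```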

  14. Combinatorial materials synthesis and high-throughput screening: an integrated materials chip approach to mapping phase diagrams and discovery and optimization of functional materials.

    PubMed

    Xiang, X D

    Combinatorial materials synthesis methods and high-throughput evaluation techniques have been developed to accelerate the process of materials discovery and optimization and phase-diagram mapping. Analogous to integrated circuit chips, integrated materials chips containing thousands of discrete different compositions or continuous phase diagrams, often in the form of high-quality epitaxial thin films, can be fabricated and screened for interesting properties. A microspot x-ray method, various optical measurement techniques, and a novel evanescent microwave microscope have been used to characterize the structural, optical, magnetic, and electrical properties of samples on the materials chips. These techniques are routinely used to discover/optimize and map phase diagrams of ferroelectric, dielectric, optical, magnetic, and superconducting materials.

  15. Accelerators as Authentic Training Experiences for Nascent Entrepreneurs

    ERIC Educational Resources Information Center

    Miles, Morgan P.; de Vries, Huibert; Harrison, Geoff; Bliemel, Martin; de Klerk, Saskia; Kasouf, Chick J.

    2017-01-01

    Purpose: The purpose of this paper is to address the role of accelerators as authentic learning-based entrepreneurial training programs. Accelerators facilitate the development and assessment of entrepreneurial competencies in nascent entrepreneurs through the process of creating a start-up venture. Design/methodology/approach: Survey data from…

  16. Advancing precision cosmology with 21 cm intensity mapping

    NASA Astrophysics Data System (ADS)

    Masui, Kiyoshi Wesley

    In this thesis we make progress toward establishing the observational method of 21 cm intensity mapping as a sensitive and efficient method for mapping the large-scale structure of the Universe. In Part I we undertake theoretical studies to better understand the potential of intensity mapping. This includes forecasting the ability of intensity mapping experiments to constrain alternative explanations to dark energy for the Universe's accelerated expansion. We also considered how 21 cm observations of the neutral gas in the early Universe (after recombination but before reionization) could be used to detect primordial gravity waves, thus providing a window into cosmological inflation. Finally we showed that scientifically interesting measurements could in principle be performed using intensity mapping in the near term, using existing telescopes in pilot surveys or prototypes for larger dedicated surveys. Part II describes observational efforts to perform some of the first measurements using 21 cm intensity mapping. We develop a general data analysis pipeline for analyzing intensity mapping data from single dish radio telescopes. We then apply the pipeline to observations using the Green Bank Telescope. By cross-correlating the intensity mapping survey with a traditional galaxy redshift survey we put a lower bound on the amplitude of the 21 cm signal. The auto-correlation provides an upper bound on the signal amplitude and we thus constrain the signal from both above and below. This pilot survey represents a pioneering effort in establishing 21 cm intensity mapping as a probe of the Universe.

  17. Genetic mapping and identification of QTL for earliness in the globe artichoke/cultivated cardoon complex.

    PubMed

    Portis, Ezio; Scaglione, Davide; Acquadro, Alberto; Mauromicale, Giovanni; Mauro, Rosario; Knapp, Steven J; Lanteri, Sergio

    2012-05-23

    The Asteraceae species Cynara cardunculus (2n = 2x = 34) includes the two fully cross-compatible domesticated taxa globe artichoke (var. scolymus L.) and cultivated cardoon (var. altilis DC). As both are out-pollinators and suffer from marked inbreeding depression, linkage analysis has focussed on the use of a two way pseudo-test cross approach. A set of 172 microsatellite (SSR) loci derived from expressed sequence tag DNA sequence were integrated into the reference C. cardunculus genetic maps, based on segregation among the F1 progeny of a cross between a globe artichoke and a cultivated cardoon. The resulting maps each detected 17 major linkage groups, corresponding to the species' haploid chromosome number. A consensus map based on 66 co-dominant shared loci (64 SSRs and two SNPs) assembled 694 loci, with a mean inter-marker spacing of 2.5 cM. When the maps were used to elucidate the pattern of inheritance of head production earliness, a key commercial trait, seven regions were shown to harbour relevant quantitative trait loci (QTL). Together, these QTL accounted for up to 74% of the overall phenotypic variance. The newly developed consensus as well as the parental genetic maps can accelerate the process of tagging and eventually isolating the genes underlying earliness in both the domesticated C. cardunculus forms. The largest single effect mapped to the same linkage group in each parental map, and explained about one half of the phenotypic variance, thus representing a good candidate for marker assisted selection.

  18. Genetic mapping and identification of QTL for earliness in the globe artichoke/cultivated cardoon complex

    PubMed Central

    2012-01-01

    Background The Asteraceae species Cynara cardunculus (2n = 2x = 34) includes the two fully cross-compatible domesticated taxa globe artichoke (var. scolymus L.) and cultivated cardoon (var. altilis DC). As both are out-pollinators and suffer from marked inbreeding depression, linkage analysis has focussed on the use of a two way pseudo-test cross approach. Results A set of 172 microsatellite (SSR) loci derived from expressed sequence tag DNA sequence were integrated into the reference C. cardunculus genetic maps, based on segregation among the F1 progeny of a cross between a globe artichoke and a cultivated cardoon. The resulting maps each detected 17 major linkage groups, corresponding to the species’ haploid chromosome number. A consensus map based on 66 co-dominant shared loci (64 SSRs and two SNPs) assembled 694 loci, with a mean inter-marker spacing of 2.5 cM. When the maps were used to elucidate the pattern of inheritance of head production earliness, a key commercial trait, seven regions were shown to harbour relevant quantitative trait loci (QTL). Together, these QTL accounted for up to 74% of the overall phenotypic variance. Conclusion The newly developed consensus as well as the parental genetic maps can accelerate the process of tagging and eventually isolating the genes underlying earliness in both the domesticated C. cardunculus forms. The largest single effect mapped to the same linkage group in each parental map, and explained about one half of the phenotypic variance, thus representing a good candidate for marker assisted selection. PMID:22621324

  19. A Statistical Perspective on Highly Accelerated Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Edward V.

    Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning
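
    The zero-failure "demonstration" being critiqued typically rests on the classical success-run bound: if n units survive the test with no failures, the lower 100C% confidence limit on reliability at the tested stress is

    $$
    R_L=(1-C)^{1/n},\qquad\text{e.g. } n=22,\ C=0.90\ \Rightarrow\ R_L\approx 0.90.
    $$

    Extrapolating R_L from the high-stress condition to normal use additionally assumes a known stress-failure (acceleration) relationship, which is precisely the lack of knowledge at issue here.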

  20. Updated Colombian Seismic Hazard Map

    NASA Astrophysics Data System (ADS)

    Eraso, J.; Arcila, M.; Romero, J.; Dimate, C.; Bermúdez, M. L.; Alvarado, C.

    2013-05-01

    possible to determine environments and scenarios where the seismic hazard is a function of distance and magnitude, and also the principal seismic sources that contribute to the seismic hazard at each site (disaggregation). This project was conducted by the Servicio Geológico Colombiano (Colombian Geological Survey) and the Universidad Nacional de Colombia (National University of Colombia), with the collaboration of national and foreign experts and the National System of Prevention and Attention of Disaster (SNPAD). It is worth pointing out that this new seismic hazard map was used in the updated national building code (NSR-10). A new process is ongoing to improve and present the seismic hazard map in terms of intensity. This requires new knowledge of site effects at both local and regional scales, as well as checking existing acceleration-to-intensity relationships and developing new ones, in order to obtain results that are more understandable and useful for a wider range of users: not only the engineering field, but also risk assessment and management institutions, researchers, and the general community.

  1. Acceleration of linear stationary iterative processes in multiprocessor computers. II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romm, Ya.E.

    1982-05-01

    For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982); English translation in Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n×n matrix and b is a real vector, with the usual Euclidean norm. Existence and uniqueness of the solution are assumed, i.e., det(E − A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = Fx^(k), k = 0, 1, 2, ..., where the operator F maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant, and various values of that number are investigated; it is assumed in addition that the processors perform elementary binary arithmetic operations of addition and multiplication, and that time estimates include only the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k + 1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k + 1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP in which, in the modification sought, the reduced number of steps runs in a time comparable to the switching time of logical elements. 6 references.
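
    One standard way to cut the number of sequential steps, plausibly in the spirit of the acceleration discussed (though not necessarily Romm's exact scheme), is to square the iteration operator, trading sequential depth for parallelizable matrix products:

```python
import numpy as np

def iterate_pow2(A, b, x0, steps):
    """Collapse steps = 2**s applications of x <- A x + b into s squarings:
    if x_(m) = A_m x_(0) + b_m after m steps, then A_(2m) = A_m @ A_m and
    b_(2m) = A_m @ b_m + b_m. Sequential depth drops from `steps` to
    log2(steps); each squaring is a matrix product that parallelizes well."""
    Ak, bk = A.copy(), b.copy()
    for _ in range(int(np.log2(steps))):
        bk = Ak @ bk + bk
        Ak = Ak @ Ak
    return Ak @ x0 + bk

# Check against the plain sequential iteration.
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((4, 4))   # contraction, so the IP converges
b = rng.standard_normal(4)
x = rng.standard_normal(4)
x_seq = x.copy()
for _ in range(8):
    x_seq = A @ x_seq + b
assert np.allclose(iterate_pow2(A, b, x, 8), x_seq)
```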

  2. 42 CFR 484.245 - Accelerated payments for home health agencies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Recovery of payment. Recovery of the accelerated payment is made by recoupment as HHA bills are processed... 42 Public Health 5 2013-10-01 2013-10-01 false Accelerated payments for home health agencies. 484... for Home Health Agencies § 484.245 Accelerated payments for home health agencies. (a) General rule...

  3. 42 CFR 484.245 - Accelerated payments for home health agencies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Recovery of payment. Recovery of the accelerated payment is made by recoupment as HHA bills are processed... 42 Public Health 5 2014-10-01 2014-10-01 false Accelerated payments for home health agencies. 484... for Home Health Agencies § 484.245 Accelerated payments for home health agencies. (a) General rule...

  4. [Utility of conceptual schemes and mental maps on the teaching-learning process of residents in pediatrics].

    PubMed

    Cruza, Norberto Sotelo; Fierros, Luis E

    2006-01-01

    The present study was done at the internal medicine service of the Hospital Infantil in the State of Sonora, Mexico. We tried to address the question of the use of conceptual schemes and mind maps and their impact on the teaching-learning-evaluation process among medical residents. The objective was to analyze the effects of conceptual schemes and mind maps as teaching and evaluation tools and compare them with multiple-choice exams among pediatric residents. Twenty-two residents (RI, RII, RIII) on service rotation during six months were assessed initially, followed by a lecture on a medical subject. Conceptual schemes and mind maps were then introduced as a teaching-learning-evaluation instrument. Comprehension impact was measured and compared with a standard multiple-choice evaluation. The statistical package (JMP version 5, SAS Inst. 2004) was used. We noted that when we used conceptual schemes and mind mapping, learning improvement was noticeable among the three groups of residents (P < 0.001), and that these tools constitute a better evaluation instrument when compared with multiple-choice exams (P < 0.0005). Based on our experience we recommend the use of this educational technique for medical residents in training.

  5. Modeling of Particle Acceleration at Multiple Shocks Via Diffusive Shock Acceleration: Preliminary Results

    NASA Technical Reports Server (NTRS)

    Parker, Linda Neergaard; Zank, Gary P.

    2013-01-01

    We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000) where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
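
    A rough numerical sketch of the box-model ingredients described above (power-law acceleration at each shock followed by adiabatic decompression), assuming identical test-particle shocks with compression ratio r; none of the paper's loss terms or injection physics are included, and the injection spectrum is an assumption.

        import numpy as np

        def dsa_shock(p, f, r):
            """Test-particle DSA at one shock with compression ratio r:
            f_out(p) = q p^-q * integral of f_in(p') p'^(q-1) dp', q = 3r/(r-1)."""
            q = 3.0 * r / (r - 1.0)
            g = f * p ** (q - 1.0)
            cum = np.concatenate(([0.0],
                                  np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(p))))
            return q * p ** (-q) * cum

        def decompress(p, f, factor):
            """Adiabatic decompression between shocks: every momentum drops by
            `factor`; re-sample the distribution on the fixed grid (sketch only,
            phase-space normalization factors ignored)."""
            return np.interp(p, p * factor, f, left=0.0, right=0.0)

        p = np.logspace(0, 4, 400)              # momentum in injection units
        f = np.exp(-np.log(p) ** 2 / 0.01)      # narrow injection near p ~ 1
        for _ in range(5):                      # five identical shocks
            f = dsa_shock(p, f, r=4.0)
            f = decompress(p, f, factor=4.0 ** (-1.0 / 3.0))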

  6. Electron acceleration via magnetic island coalescence

    NASA Astrophysics Data System (ADS)

    Shinohara, I.; Yumura, T.; Tanaka, K. G.; Fujimoto, M.

    2009-06-01

    Electron acceleration via fast magnetic island coalescence that occurs as quick magnetic reconnection triggering (QMRT) proceeds has been studied. We have carried out a three-dimensional full kinetic simulation of the Harris current sheet with a simulation box large enough for two magnetic islands to coalesce. Due to the strong inductive electric field associated with the non-linear evolution of the lower-hybrid-drift instability and the magnetic island coalescence process observed in the non-linear stage of the collisionless tearing mode, electrons are significantly accelerated around the neutral sheet and the subsequent X-line. The accelerated meandering electrons generated by the non-linear evolution of the lower-hybrid-drift instability result in QMRT, and QMRT leads to fast magnetic island coalescence. As a whole, the reconnection triggering and its transition to large-scale structure work as an effective electron accelerator.

  7. Harvesting geographic features from heterogeneous raster maps

    NASA Astrophysics Data System (ADS)

    Chiang, Yao-Yi

    2010-11-01

    Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected-objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial
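
    The "elongated linear objects" versus "small connected objects" heuristic in the automatic technique can be pictured with plain connected-component analysis; the thresholds and the crude bounding-box elongation measure below are illustrative assumptions, not the thesis's actual features.

        import numpy as np
        from scipy import ndimage

        def split_text_and_roads(foreground, max_text_area=200, min_elongation=3.0):
            """Separate small compact components (text candidates) from large or
            elongated ones (road-line candidates) in a binarized map layer."""
            labels, n = ndimage.label(foreground)
            text = np.zeros_like(foreground, dtype=bool)
            roads = np.zeros_like(foreground, dtype=bool)
            for idx, s in enumerate(ndimage.find_objects(labels), start=1):
                comp = labels[s] == idx
                h, w = comp.shape                      # bounding-box extents
                elongation = max(h, w) / max(1, min(h, w))
                if comp.sum() <= max_text_area and elongation < min_elongation:
                    text[s] |= comp
                else:
                    roads[s] |= comp
            return text, roads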

  8. Accelerated Adaptive MGS Phase Retrieval

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang

    2011-01-01

    The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of applications which apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited to this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, by performing the matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.

  9. Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xuanhua; Luo, Xuan; Liang, Junling

    GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate the iterative convergence. Unfortunately, consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, a coloring algorithm is adopted to separate the vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because of the large number of colors generally required for processing a large-scale graph with billions of vertices. We propose a light-weight asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on the Pareto principle (or 80-20 rule) about coloring algorithms, as we observed through masses of real-world graph coloring cases. We find that a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated in a very high degree of parallelism without violating the sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (CuSha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC). On all the tested applications
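
    The color-based separation can be pictured with a greedy coloring followed by grouping; this is a minimal sketch of the general idea, not Frog's actual GPU implementation, and the tiny example graph is an assumption.

        from collections import defaultdict

        def greedy_coloring(adj):
            """Greedy coloring; adj maps each vertex to its neighbours."""
            color = {}
            for v in adj:
                used = {color[u] for u in adj[v] if u in color}
                c = 0
                while c in used:
                    c += 1
                color[v] = c
            return color

        def partition_by_color(color):
            """Group vertices by color, largest groups first. In a Frog-style
            scheme the few large groups (~80% of vertices) are processed
            lock-free, one color per pass; the long tail is handled last."""
            groups = defaultdict(list)
            for v, c in color.items():
                groups[c].append(v)
            return [groups[c] for c in sorted(groups, key=lambda c: -len(groups[c]))]

        adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        partitions = partition_by_color(greedy_coloring(adj))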

  10. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering.

    PubMed

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially in metagenomic analysis, owing to the large number of queries and the database size. To accelerate such analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. We therefore mapped the time-consuming steps of GHOSTZ, a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. In an evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads.

  11. GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering

    PubMed Central

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2016-01-01

    Sequence homology searches are used in various fields and require large amounts of computation time, especially in metagenomic analysis, owing to the large number of queries and the database size. To accelerate such analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. We therefore mapped the time-consuming steps of GHOSTZ, a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. In an evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads. PMID:27482905

  12. Parallel definition of tear film maps on distributed-memory clusters for the support of dry eye diagnosis.

    PubMed

    González-Domínguez, Jorge; Remeseiro, Beatriz; Martín, María J

    2017-02-01

    The analysis of the interference patterns on the tear film lipid layer is a useful clinical test to diagnose dry eye syndrome. This task can be automated with a high degree of accuracy by means of the use of tear film maps. However, the time required by the existing applications to generate them prevents a wider acceptance of this method by medical experts. Multithreading has been previously successfully employed by the authors to accelerate the tear film map definition on multicore single-node machines. In this work, we propose a hybrid message-passing and multithreading parallel approach that further accelerates the generation of tear film maps by exploiting the computational capabilities of distributed-memory systems such as multicore clusters and supercomputers. The algorithm for drawing tear film maps is parallelized using Message Passing Interface (MPI) for inter-node communications and the multithreading support available in the C++11 standard for intra-node parallelization. The original algorithm is modified to reduce the communications and increase the scalability. The hybrid method has been tested on 32 nodes of an Intel cluster (with two 12-core Haswell 2680v3 processors per node) using 50 representative images. Results show that maximum runtime is reduced from almost two minutes using the previous only-multithreaded approach to less than ten seconds using the hybrid method. The hybrid MPI/multithreaded implementation can be used by medical experts to obtain tear film maps in only a few seconds, which will significantly accelerate and facilitate the diagnosis of the dry eye syndrome. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
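
    The hybrid decomposition (MPI across nodes, threads within a node) has the general shape sketched below. The paper's implementation uses C++11 threads; this Python/mpi4py sketch only shows the work distribution, `define_tear_film_map` is a hypothetical stand-in for the per-image computation, and mpi4py is assumed to be installed (note that Python threads would not give CPU parallelism for real numeric work because of the GIL).

        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI

        def define_tear_film_map(image_name):
            return image_name.upper()           # placeholder per-image work

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        images = [f"image_{i:02d}.png" for i in range(50)]
        my_images = images[rank::size]          # round-robin across MPI ranks

        with ThreadPoolExecutor(max_workers=24) as pool:  # threads within a node
            my_maps = list(pool.map(define_tear_film_map, my_images))

        all_maps = comm.gather(my_maps, root=0) # collect results on rank 0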

  13. Regional Landslide Mapping Aided by Automated Classification of SqueeSAR™ Time Series (Northern Apennines, Italy)

    NASA Astrophysics Data System (ADS)

    Iannacone, J.; Berti, M.; Allievi, J.; Del Conte, S.; Corsini, A.

    2013-12-01

    , were processed. The time coverage spans April 2003 to November 2012, with an average temporal frequency of 1 scene/month. Radar interpretation has been carried out by considering average annual velocities as well as acceleration/deceleration trends evidenced by PSTime. Altogether, this approach allowed the detection of 115 and 112 potential landslides (from ascending and descending geometries, respectively) on the basis of average displacement rate, and 77 and 79 landslides on the basis of acceleration trends. In conclusion, time series analysis proved to be very valuable for landslide mapping. In particular, it highlighted areas with marked acceleration in a specific period of time while still being affected by low average annual velocity over the entire analysis period. On the other hand, even in areas with high average annual velocity, time series analysis was of primary importance to characterize the slope dynamics in terms of acceleration events.

  14. 77 FR 21880 - Federal Housing Administration (FHA): Multifamily Accelerated Processing-Enhancing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... applications for FHA multifamily mortgage insurance, which generally involve the refinance, purchase, new... to date through direct instructions to FHA-approved lenders under a MAP Guide. Given its experience... mortgage insurance programs. Based on HUD's experience to date with MAP, this proposed rule strives not...

  15. Seismic hazard maps of Mexico, the Caribbean, and Central and South America

    USGS Publications Warehouse

    Tanner, J.G.; Shedlock, K.M.

    2004-01-01

    The growth of megacities in seismically active regions around the world often includes the construction of seismically unsafe buildings and infrastructures due to an insufficient knowledge of existing seismic hazard and/or economic constraints. Minimization of the loss of life, property damage, and social and economic disruption due to earthquakes depends on reliable estimates of seismic hazard. We have produced a suite of seismic hazard estimates for Mexico, the Caribbean, and Central and South America. One of the preliminary maps in this suite served as the basis for the Caribbean and Central and South America portion of the Global Seismic Hazard Map (GSHM) published in 1999, which depicted peak ground acceleration (pga) with a 10% chance of exceedance in 50 years for rock sites. Herein we present maps depicting pga and 0.2 and 1.0 s spectral accelerations (SA) with 50%, 10%, and 2% chances of exceedance in 50 years for rock sites. The seismicity catalog used in the generation of these maps adds 3 more years of data to those used to calculate the GSH Map. Different attenuation functions (consistent with those used to calculate the U.S. and Canadian maps) were used as well. These nine maps are designed to assist in global risk mitigation by providing a general seismic hazard framework and serving as a resource for any national or regional agency to help focus further detailed studies required for regional/local needs. The largest seismic hazard values in Mexico, the Caribbean, and Central and South America generally occur in areas that have been, or are likely to be, the sites of the largest plate boundary earthquakes. High hazard values occur in areas where shallow-to-intermediate seismicity occurs frequently. © 2004 Elsevier B.V. All rights reserved.
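
    The three exceedance levels quoted above map to return periods through the usual Poisson occurrence assumption; a minimal sketch of that conversion (the model assumption, not anything specific to these maps):

        import math

        def return_period(p_exceed, t_years):
            """Return period implied by a p_exceed chance of exceedance in
            t_years, assuming Poisson (memoryless) earthquake occurrence."""
            return -t_years / math.log(1.0 - p_exceed)

        for p in (0.50, 0.10, 0.02):
            print(f"{p:.0%} in 50 yr -> ~{return_period(p, 50):.0f}-yr return period")
        # prints roughly 72, 475 and 2475 years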

  16. PARTICLE ACCELERATOR

    DOEpatents

    Teng, L.C.

    1960-01-19

    A combination of two accelerators, a cyclotron and a ring-shaped accelerator which has a portion disposed tangentially to the cyclotron, is described. Means are provided to transfer particles from the cyclotron to the ring accelerator including a magnetic deflector within the cyclotron, a magnetic shield between the ring accelerator and the cyclotron, and a magnetic inflector within the ring accelerator.

  17. Vacuum Plasma Spray Forming of Tungsten Lorentz Force Accelerator Components

    NASA Technical Reports Server (NTRS)

    Zimmerman, Frank R.

    2001-01-01

    The Vacuum Plasma Spray (VPS) Laboratory at NASA's Marshall Space Flight Center has developed and demonstrated a fabrication technique using the VPS process to form anode sections for a Lorentz force accelerator from tungsten. Lorentz force accelerators are an attractive form of electric propulsion that provides continuous, high-efficiency propulsion at useful power levels for such applications as orbit transfers or deep space missions. The VPS process is used to deposit refractory metals such as tungsten onto a graphite mandrel of the desired shape. Because tungsten is reactive at high temperatures, it is thermally sprayed in an inert environment where the plasma gun melts and accelerates the metal powder onto the mandrel. A three-axis robot inside the chamber controls the motion of the plasma spray torch. A graphite mandrel acts as a male mold, forming the required contour and dimensions of the inside surface of the anode. This paper describes the processing techniques, design considerations, and process development associated with the VPS forming of the Lorentz force accelerator.

  18. Accelerator controls at CERN: Some converging trends

    NASA Astrophysics Data System (ADS)

    Kuiper, B.

    1990-08-01

    CERN's growing services to the high-energy physics community with frozen resources have led to the implementation of "Technical Boards", mandated to assist the management by making recommendations for rationalizations in various technological domains. The Board on Process Control and Electronics for Accelerators, TEBOCO, has emphasized four main lines which might yield economy in resources. First, a common architecture for accelerator controls has been agreed between the three accelerator divisions. Second, a common hardware/software kit has been defined, from which the large majority of future process interfacing may be composed. A support service for this kit is an essential part of the plan. Third, high-level protocols have been developed for standardizing access to process devices. They derive from agreed standard models of the devices and involve a standard control message. This should ease application development and the mobility of equipment. Fourth, a common software engineering methodology and a commercial package of application development tools have been adopted. Some rationalization in the field of the man-machine interface and in matters of synchronization is also under way.

  19. ActionMap: A web-based software that automates loci assignments to framework maps.

    PubMed

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-07-01

    Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/).

  20. ActionMap: a web-based software that automates loci assignments to framework maps

    PubMed Central

    Albini, Guillaume; Falque, Matthieu; Joets, Johann

    2003-01-01

    Genetic linkage computation may be a repetitive and time consuming task, especially when numerous loci are assigned to a framework map. We thus developed ActionMap, a web-based software that automates genetic mapping on a fixed framework map without adding the new markers to the map. Using this tool, hundreds of loci may be automatically assigned to the framework in a single process. ActionMap was initially developed to map numerous ESTs with a small plant mapping population and is limited to inbred lines and backcrosses. ActionMap is highly configurable and consists of Perl and PHP scripts that automate command steps for the MapMaker program. A set of web forms were designed for data import and mapping settings. Results of automatic mapping can be displayed as tables or drawings of maps and may be exported. The user may create personal access-restricted projects to store raw data, settings and mapping results. All data may be edited, updated or deleted. ActionMap may be used either online or downloaded for free (http://moulon.inra.fr/~bioinfo/). PMID:12824426

  1. Hot deformation behavior of uniform fine-grained GH4720Li alloy based on its processing map

    NASA Astrophysics Data System (ADS)

    Yu, Qiu-ying; Yao, Zhi-hao; Dong, Jian-xin

    2016-01-01

    The hot deformation behavior of uniform fine-grained GH4720Li alloy was studied in the temperature range from 1040 to 1130°C and the strain-rate range from 0.005 to 0.5 s⁻¹ using hot compression testing. Processing maps were constructed on the basis of compression data and a dynamic materials model. Considerable flow softening associated with superplasticity was observed at strain rates of 0.01 s⁻¹ or lower. According to the processing map and observations of the microstructure, the uniform fine-grained microstructure remains intact at 1100°C or lower because of easily activated dynamic recrystallization (DRX), whereas obvious grain growth is observed at 1130°C. Metallurgical instabilities in the form of non-uniform microstructures under higher and lower Zener-Hollomon parameters are induced by local plastic flow and faster local dissolution of primary γ', respectively. The optimum processing conditions at all of the investigated strains are proposed as 1090-1130°C with 0.08-0.5 s⁻¹ or 0.005-0.008 s⁻¹, and 1040-1085°C with 0.005-0.06 s⁻¹.
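
    For reference, the Zener-Hollomon parameter invoked above combines strain rate and temperature as Z = strain rate × exp(Q/RT); the sketch below evaluates it at the corners of the tested window, with an activation energy that is an assumed illustrative value, not one from the paper.

        import math

        R = 8.314  # gas constant, J/(mol K)

        def zener_hollomon(strain_rate, temp_c, q_act):
            """Z = strain_rate * exp(Q / (R T)); high Z = cold and/or fast."""
            return strain_rate * math.exp(q_act / (R * (temp_c + 273.15)))

        Q = 4.0e5  # J/mol, assumed for illustration
        for rate, temp in [(0.005, 1130.0), (0.5, 1040.0)]:
            print(f"{rate} 1/s at {temp} C -> Z = {zener_hollomon(rate, temp, Q):.2e}")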

  2. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To enable fast reconstruction of quantitative susceptibility maps with a Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering, and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
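
    The variable-splitting idea (closed-form iterations via soft thresholding and FFTs) can be sketched as an ADMM-style loop. For brevity this sketch penalizes the ℓ1 norm of the image itself rather than of its gradients (the paper's Total Variation case), so it is a simplified stand-in; `dipole_k` is assumed to be the precomputed dipole kernel sampled in k-space.

        import numpy as np

        def soft_threshold(x, t):
            """Closed-form proximal step for an l1 penalty."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def l1_dipole_inversion(phase, dipole_k, lam=1e-2, rho=1.0, n_iter=50):
            """min_x 0.5*||F^-1 D F x - phase||^2 + lam*||x||_1 via splitting;
            the quadratic x-update is diagonal in k-space, hence FFT-only."""
            x = np.zeros_like(phase)
            z = np.zeros_like(phase)
            u = np.zeros_like(phase)
            denom = np.abs(dipole_k) ** 2 + rho
            data_k = np.conj(dipole_k) * np.fft.fftn(phase)
            for _ in range(n_iter):
                x = np.real(np.fft.ifftn((data_k + rho * np.fft.fftn(z - u)) / denom))
                z = soft_threshold(x + u, lam / rho)   # soft-thresholding step
                u += x - z                             # dual variable update
            return x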

  3. Learning from Nature - Mapping of Complex Hydrological and Geomorphological Process Systems for More Realistic Modelling of Hazard-related Maps

    NASA Astrophysics Data System (ADS)

    Chifflard, Peter; Tilch, Nils

    2010-05-01

    Introduction: Hydrological or geomorphological processes in nature are often very diverse and complex. This is partly due to regional characteristics which vary over time and space, as well as changeable process-initiating and -controlling factors. Despite awareness of this complexity, such aspects are usually neglected in the modelling of hazard-related maps for several reasons. But particularly when it comes to creating more realistic maps, this would be an essential component to consider. The first important step towards solving this problem would be to collect data on regional conditions which vary over time and geographical location, along with indicators of complex processes. Data should be acquired promptly during and after events, and subsequently digitally combined and analysed. Study area: In June 2009, considerable damage occurred in the residential area of Klingfurth (Lower Austria) as a result of great pre-event wetness and repeatedly heavy rainfall, leading to flooding, debris flow deposits and gravitational mass movements. One of the causes is the fact that the meso-scale watershed (16 km²) of the Klingfurth stream is characterised by adverse geological and hydrological conditions. Additionally, the river network with its discharge concentration within the residential zone contributes considerably to flooding, particularly during excessive rainfall across the entire region, as the flood peaks from different parts of the catchment area are superposed. First results of mapping: Hydro(geo)logical surveys across the entire catchment area have shown that over 600 gravitational mass movements of various types and stages have occurred. Of those, 516 have acted as a bed load source, while 325 mass movements have not yet reached the final stage and could thus supply bed load in the future. It should be noted that large mass movements in the initial or intermediate stage were predominantly found in clayey-silty areas and weathered material

  4. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, R.M.; Zander, M.E.; Brown, S.K.

    1992-09-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.
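
    The profile extraction itself reduces to background subtraction and projection of the beam-gas light image onto the two transverse axes; a minimal sketch under that reading (the real imagetool algorithms and calibration are not reproduced here):

        import numpy as np

        def beam_profiles(frame, background):
            """Background-subtract a video frame of beam-gas light and project
            it onto both axes to obtain horizontal and vertical profiles."""
            img = np.clip(frame.astype(float) - background, 0.0, None)
            return img.sum(axis=0), img.sum(axis=1)

        def centroid_and_rms(profile):
            """Centroid and rms width of a 1-D beam profile."""
            pos = np.arange(profile.size)
            total = profile.sum()
            mean = (pos * profile).sum() / total
            rms = np.sqrt(((pos - mean) ** 2 * profile).sum() / total)
            return mean, rms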

  5. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, R.M.; Zander, M.E.; Brown, S.K.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  6. Reduction of Effective Acceleration to Microgravity Levels

    NASA Technical Reports Server (NTRS)

    Downey, James P.

    2000-01-01

    Acceleration due to earth's gravity causes buoyancy-driven convection and sedimentation in solutions. In addition, pressure gradients occur as a function of the height within a liquid column. Hence gravity affects both equilibrium conditions and phase transitions as a result of hydrostatic pressure gradients. The effect of gravity on the rate of heat and mass transfer in solutal processes can be particularly important in polymer processing due to the high sensitivity of polymeric materials to processing conditions. The term microgravity has been coined to describe an environment in which the effects of gravitational acceleration are greatly reduced. It may seem odd to talk in terms of reducing the effects of gravitational acceleration, since gravitational attraction is a basic property of matter. However, the effect of gravity on in situ processing or measurements can be negated by achieving conditions in which the laboratory, or more specifically the container of the experimental materials, is subjected to the same acceleration as the materials themselves. With regard to the laboratory reference frame, there is then virtually no force on the experimental solutions. This is difficult to achieve but can be done. A short review of Newtonian physics provides an explanation of both how processes are affected by gravity and how microgravity conditions are achieved. The fact that fluids deform when subject to a force but solids do not indicates that solids have a structure able to exert an opposing force that negates an externally applied force. Liquids deform when a force is applied, indicating that a liquid's structure cannot completely negate an applied force. How easily a liquid resists deformation is related to its viscosity. Spaceflight provides an environment in which the laboratory reference frame, i.e., the spacecraft and all the equipment therein, experiences virtually identical forces. There is no solid foundation underneath such a laboratory, so the laboratory

  7. Methods of geometrical integration in accelerator physics

    NASA Astrophysics Data System (ADS)

    Andrianov, S. N.

    2016-12-01

    In this paper we consider a method of geometric integration for the long-term evolution of particle beams in cyclic accelerators, based on the matrix representation of the particle-evolution operator. This method allows us to calculate the corresponding beam evolution in terms of two-dimensional matrices, including nonlinear effects. The ideology of geometric integration introduces into the computational algorithms the amendments necessary for preserving the qualitative properties of the maps, which are presented in the form of truncated series generated by the evolution operator. This formalism extends to both polarized and intense beams. Examples of practical applications are described.
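
    The two-dimensional matrix representation mentioned above is, in its linear core, ordinary transfer-matrix tracking; a minimal sketch for one transverse plane of a FODO-like cell (the element strengths are illustrative assumptions, and the geometric-integration amendments the paper discusses are not shown):

        import numpy as np

        def drift(length):
            """2x2 transfer matrix of a field-free drift."""
            return np.array([[1.0, length], [0.0, 1.0]])

        def thin_quad(focal_length):
            """Thin-lens quadrupole; negative focal length defocuses."""
            return np.array([[1.0, 0.0], [-1.0 / focal_length, 1.0]])

        # one-cell matrix; |trace| < 2 means stable transverse motion
        cell = thin_quad(2.0) @ drift(1.0) @ thin_quad(-2.0) @ drift(1.0)
        x = np.array([1e-3, 0.0])         # initial (position, angle)
        for _ in range(1000):
            x = cell @ x                  # long evolution = repeated linear map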

  8. 7 CFR 3201.64 - Compost activators and accelerators.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... PROCUREMENT Designated Items § 3201.64 Compost activators and accelerators. (a) Definition. Products in liquid or powder form designed to be applied to compost piles to aid in speeding up the composting process... 7 Agriculture 15 2014-01-01 2014-01-01 false Compost activators and accelerators. 3201.64 Section...

  9. 7 CFR 3201.64 - Compost activators and accelerators.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... PROCUREMENT Designated Items § 3201.64 Compost activators and accelerators. (a) Definition. Products in liquid or powder form designed to be applied to compost piles to aid in speeding up the composting process... 7 Agriculture 15 2013-01-01 2013-01-01 false Compost activators and accelerators. 3201.64 Section...

  10. 7 CFR 3201.64 - Compost activators and accelerators.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PROCUREMENT Designated Items § 3201.64 Compost activators and accelerators. (a) Definition. Products in liquid or powder form designed to be applied to compost piles to aid in speeding up the composting process... 7 Agriculture 15 2012-01-01 2012-01-01 false Compost activators and accelerators. 3201.64 Section...

  11. Mapping geomorphic process domains to predict hillslope sediment size distribution using remotely-sensed data and field sampling, Inyo Creek, California

    NASA Astrophysics Data System (ADS)

    Leclere, S.; Sklar, L. S.; Genetti, J. R.

    2014-12-01

    The size distribution of sediments produced on hillslopes and supplied to channels depends on the geomorphic processes that weather, detach and transport rock fragments down slopes. Little in the way of theory or data is available to predict patterns in hillslope size distributions at the catchment scale from topographic and geologic maps. Here we use aerial imagery and a variety of remote sensing techniques to map and categorize geomorphic landscape units (GLUs) by inferred sediment production process regime, across the steep mountain catchment of Inyo Creek, eastern Sierra Nevada, California. We also use field measurements of particle size and local geomorphic attributes to test and refine GLU determinations. Across the 2 km of relief in this catchment, landcover varies from bare bedrock cliffs at higher elevations to vegetated, regolith-covered convex slopes at lower elevations. Hillslope gradient could provide a simple index of sediment production process, from rock spallation and landsliding at highest slopes, to tree-throw and other disturbance-driven soil production processes at lowest slopes. However, many other attributes are needed for a more robust predictive model, including elevation, curvature, aspect, drainage area, and color. We combine tools from ArcGIS, ERDAS Imagine and Envi with groundtruthing field work to find an optimal combination of attributes for defining sediment production GLUs. Key challenges include distinguishing: weathered from freshly eroded bedrock, boulders from intact bedrock, and landslide deposits from talus slopes. We take advantage of emerging technologies that provide new ways of conducting fieldwork and comparing field data to mapping solutions. In particular, cellphone GPS is approaching the accuracy of dedicated GPS systems and the ability to geo-reference photos simplifies field notes and increases accuracy of later map creation. However, the predictive power of the GLU mapping approach is limited by inherent uncertainty

  12. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    PubMed

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card, using the improved method.

  13. Three-grid accelerator system for an ion propulsion engine

    NASA Technical Reports Server (NTRS)

    Brophy, John R. (Inventor)

    1994-01-01

    An apparatus is presented for an ion engine comprising a three-grid accelerator system with the decelerator grid biased negative of the beam plasma. This arrangement substantially reduces the charge-exchange ion current reaching the accelerator grid at high tank pressures, which minimizes erosion of the accelerator grid due to charge exchange ion sputtering, known to be the major accelerator grid wear mechanism. An improved method for life testing ion engines is also provided using the disclosed apparatus. In addition, the invention can also be applied in materials processing.

  14. Plasma Radiation and Acceleration Effectiveness of CME-driven Shocks

    NASA Astrophysics Data System (ADS)

    Gopalswamy, N.; Schmidt, J. M.

    2008-05-01

    CME-driven shocks are effective radio radiation generators and accelerators for Solar Energetic Particles (SEPs). We present simulated 3-D time-dependent radio maps of second order plasma radiation generated by CME-driven shocks. The CME with its shock is simulated with the 3-D BATS-R-US CME model developed at the University of Michigan. The radiation is simulated using a kinetic plasma model that includes shock drift acceleration of electrons and stochastic growth theory of Langmuir waves. We find that in a realistic 3-D environment of magnetic field and solar wind outflow of the Sun the CME-driven shock shows a detailed spatial structure of the density, which is responsible for the fine structure of type II radio bursts. We also show realistic 3-D reconstructions of the magnetic cloud field of the CME, which is accelerated outward by magnetic buoyancy forces in the diverging magnetic field of the Sun. The CME-driven shock is reconstructed by tomography using the maximum jump in the gradient of the entropy. In the vicinity of the shock we determine the Alfven speed of the plasma. This speed profile controls how steep the shock can grow and how stable the shock remains while propagating away from the Sun. Only a steep shock can provide for an effective particle acceleration.

  15. Plasma radiation and acceleration effectiveness of CME-driven shocks

    NASA Astrophysics Data System (ADS)

    Schmidt, Joachim

    CME-driven shocks are effective radio radiation generators and accelerators for Solar Energetic Particles (SEPs). We present simulated 3-D time-dependent radio maps of second order plasma radiation generated by CME-driven shocks. The CME with its shock is simulated with the 3-D BATS-R-US CME model developed at the University of Michigan. The radiation is simulated using a kinetic plasma model that includes shock drift acceleration of electrons and stochastic growth theory of Langmuir waves. We find that in a realistic 3-D environment of magnetic field and solar wind outflow of the Sun the CME-driven shock shows a detailed spatial structure of the density, which is responsible for the fine structure of type II radio bursts. We also show realistic 3-D reconstructions of the magnetic cloud field of the CME, which is accelerated outward by magnetic buoyancy forces in the diverging magnetic field of the Sun. The CME-driven shock is reconstructed by tomography using the maximum jump in the gradient of the entropy. In the vicinity of the shock we determine the Alfven speed of the plasma. This speed profile controls how steep the shock can grow and how stable the shock remains while propagating away from the Sun. Only a steep shock can provide for an effective particle acceleration.

  16. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870

  17. Deuterated methanol map towards L1544

    NASA Astrophysics Data System (ADS)

    Chacón-Tanarro, A.; Caselli, P.; Bizzocchi, L.; Pineda, J. E.; Spezzano, S.; Giuliano, B. M.; Lattanzi, V.; Punanova, A.

    Pre-stellar cores are self-gravitating starless dense cores with clear signs of contraction and chemical evolution (Crapsi et al. 2005), considered to represent the initial conditions in the process of star formation (Caselli & Ceccarelli 2012). Theoretical studies predict that CO is one of the precursors of complex organic molecules (COMs) during this cold and dense phase (Tielens et al. 1982; Watanabe et al. 2002). Moreover, when CO starts to deplete onto dust grains (at densities of a few 10^4 cm^-3), the formation of deuterated species is enhanced, as CO accelerates the destruction of important precursors of deuterated molecules (Dalgarno & Lepp 1984). Here, we present the CH_2DOH/CH_3OH column density map toward the pre-stellar core L1544 (Chacón-Tanarro et al., in prep.), taken with the IRAM 30 m antenna. The results are compared with the C17O (1-0) distribution across L1544. As methanol is formed on dust grains via hydrogenation of frozen-out CO, this work allows us to measure the deuteration on surfaces and compare it with gas-phase deuteration, as well as with CO freeze-out and dust properties. This is important to shed light on the basic chemical processes just before the formation of a stellar system.

  18. Absolute acceleration measurements on STS-50 from the Orbital Acceleration Research Experiment (OARE)

    NASA Technical Reports Server (NTRS)

    Blanchard, Robert C.; Nicholson, John Y.; Ritter, James R.

    1994-01-01

    Orbital Acceleration Research Experiment (OARE) data on Space Transportation System (STS)-50 have been examined in detail during a 2-day time period. Absolute acceleration levels have been derived at the OARE location, the orbiter center-of-gravity, and at the STS-50 spacelab Crystal Growth Facility. During the interval, the tri-axial OARE raw telemetered acceleration measurements have been filtered using a sliding trimmed mean filter in order to remove large acceleration spikes (e.g., thrusters) and reduce the noise. Twelve OARE measured biases in each acceleration channel during the 2-day interval have been analyzed and applied to the filtered data. Similarly, the in situ measured x-axis scale factors in the sensor's most sensitive range were also analyzed and applied to the data. Due to equipment problem(s) on this flight, both y- and z-axis sensitive range scale factors were determined in a separate process using orbiter maneuvers and subsequently applied to the data. All known significant low-frequency corrections at the OARE location (i.e., both vertical and horizontal gravity-gradient, and rotational effects) were removed from the filtered data in order to produce the acceleration components at the orbiter center-of-gravity, which are the aerodynamic signals along each body axis. Results indicate that there is a force being applied to the Orbiter in addition to the aerodynamic forces. The OARE instrument and all known gravitational and electromagnetic forces have been reexamined, but none produces the observed effect. Thus, it is tentatively concluded that the orbiter is creating the environment observed. At least part of this force is thought to be due to the Flash Evaporator System.
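
    The sliding trimmed-mean filter used to suppress thruster spikes can be sketched directly; the window length and trim fraction below are illustrative assumptions, not the mission's actual settings.

        import numpy as np

        def sliding_trimmed_mean(signal, window=25, trim_frac=0.2):
            """For each sample, sort the surrounding window, drop the lowest and
            highest trim_frac of values (spikes), and average the remainder."""
            half = window // 2
            k = int(window * trim_frac)
            out = np.empty(signal.size, dtype=float)
            for i in range(signal.size):
                w = np.sort(signal[max(0, i - half): i + half + 1])
                core = w[k: w.size - k] if w.size > 2 * k else w
                out[i] = core.mean()
            return out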

  19. Topological visual mapping in robotics.

    PubMed

    Romero, Anna; Cazorla, Miguel

    2012-08-01

    A key problem in robotics is the construction of a map of the robot's environment. This map can be used in different tasks, such as localization, recognition, and obstacle avoidance. In addition, the simultaneous localization and mapping (SLAM) problem has attracted a lot of interest in the robotics community. This paper presents a new method for visual mapping, using topological instead of metric information. For that purpose, we propose prior image segmentation into regions in order to group the extracted invariant features in a graph, so that each graph defines a single region of the image. Although other methods have been proposed for visual SLAM, our method is complete in the sense that it covers the whole process: it presents a new method for image matching; it defines a way to build the topological map; and it also defines a matching criterion for loop-closing. The matching process takes into account visual features and their structure using the graph transformation matching (GTM) algorithm, which allows us to perform the matching and to remove the outliers. Then, using this image comparison method, we propose an algorithm for constructing topological maps. During the experimentation phase, we test the robustness of the method and its ability to construct topological maps. We have also introduced a new hysteresis behavior in order to solve some problems found when building the graph.
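
    The GTM step (keeping only matches whose local graph structure agrees in both images) can be sketched as iterative removal of the most structurally inconsistent correspondence; this is a simplified variant for illustration (the published algorithm builds median k-NN graphs), and k is an assumed parameter.

        import numpy as np

        def knn_adjacency(points, k):
            """Boolean k-nearest-neighbour adjacency for an (N, 2) point array."""
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            adj = np.zeros(d.shape, dtype=bool)
            rows = np.arange(len(points))[:, None]
            adj[rows, np.argsort(d, axis=1)[:, :k]] = True
            return adj

        def gtm_filter(pts_a, pts_b, k=4):
            """Drop, one at a time, the correspondence whose k-NN neighbourhood
            disagrees most between the two images, until the graphs coincide."""
            keep = np.arange(len(pts_a))
            while len(keep) > k + 1:
                diff = knn_adjacency(pts_a[keep], k) != knn_adjacency(pts_b[keep], k)
                disagreement = diff.sum(axis=0) + diff.sum(axis=1)
                if disagreement.max() == 0:
                    break                  # structurally consistent matching
                keep = np.delete(keep, disagreement.argmax())
            return keep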

  20. Recent high-resolution Antarctic ice velocity maps reveal increased mass loss in Wilkes Land, East Antarctica.

    PubMed

    Shen, Qiang; Wang, Hansheng; Shum, C K; Jiang, Liming; Hsu, Hou Tse; Dong, Jinglong

    2018-03-14

    We constructed Antarctic ice velocity maps from Landsat 8 images for the years 2014 and 2015 at a high spatial resolution (100 m). These maps were assembled from 10,690 scenes of displacement vectors inferred from more than 10,000 optical images acquired from December 2013 through March 2016. We estimated the mass discharge of the Antarctic ice sheet in 2008, 2014, and 2015 using the Landsat ice velocity maps, interferometric synthetic aperture radar (InSAR)-derived ice velocity maps (~2008) available from prior studies, and ice thickness data. An increased mass discharge (53 ± 14 Gt yr⁻¹) was found in the East Indian Ocean sector since 2008 due to unexpected widespread glacial acceleration in Wilkes Land, East Antarctica, while the other five oceanic sectors did not exhibit significant changes. However, present-day increased mass loss was found by previous studies predominantly in west Antarctica and the Antarctic Peninsula. The newly discovered increased mass loss in Wilkes Land suggests that the ocean heat flux may already be influencing ice dynamics in the marine-based sector of the East Antarctic ice sheet (EAIS). The marine-based sector could be adversely impacted by ongoing warming in the Southern Ocean, and this process may be conducive to destabilization.
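
    The discharge estimate combines velocity and thickness through a flux-gate sum; a minimal sketch of that combination, where the ice density and the gate segmentation are standard textbook assumptions rather than the paper's exact procedure:

        import numpy as np

        RHO_ICE = 917.0  # kg/m^3, assumed column-average ice density

        def gate_discharge(speeds_m_per_yr, thicknesses_m, widths_m):
            """Mass discharge through a flux gate in Gt/yr: sum of v*h*w*rho
            over the gate segments (1 Gt = 1e12 kg)."""
            flux_kg = RHO_ICE * np.sum(np.asarray(speeds_m_per_yr)
                                       * np.asarray(thicknesses_m)
                                       * np.asarray(widths_m))
            return flux_kg / 1e12

        # e.g., three gate segments of a hypothetical outlet glacier
        print(gate_discharge([800.0, 950.0, 700.0],     # m/yr
                             [1200.0, 1400.0, 1100.0],  # m
                             [2000.0, 2000.0, 2000.0])) # m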

  1. Collective acceleration of ions in a system with an insulated anode

    NASA Astrophysics Data System (ADS)

    Bystritskii, V. M.; Didenko, A. N.; Krasik, Ya. E.; Lopatin, V. S.; Podkatov, V. I.

    1980-11-01

    An investigation was made of the processes of collective acceleration of protons in vacuum in a system with an insulated anode and trans-anode electrodes, which were insulated or grounded, in high-current Tonus and Vera electron accelerators. The influence of external conditions and parameters of the electron beam on the efficiency of acceleration processes was investigated. Experiments were carried out in which protons were accelerated in a system with trans-anode electrodes. A study was made of the influence of a charge prepulse and of the number of trans-anode electrodes on the energy of the accelerated electrons. A system with a single anode produced Np=1014 protons of 2Ee < Ep < 3Ee energy. Suppression of a charge prepulse increased the proton energy to (6 8)Ee and the yield was then 1013. The maximum proton energy of 14Ee was obtained in a system with three trans-anode electrodes. A possible mechanism of proton acceleration was analyzed. The results obtained were compared with those of other investigations. Ways of increasing the efficiency of this acceleration method were considered.

  2. The use of concept maps during knowledge elicitation in ontology development processes – the nutrigenomics use case

    PubMed Central

    Castro, Alexander Garcia; Rocca-Serra, Philippe; Stevens, Robert; Taylor, Chris; Nashar, Karim; Ragan, Mark A; Sansone, Susanna-Assunta

    2006-01-01

    Background Incorporation of ontologies into annotations has enabled 'semantic integration' of complex data, making explicit the knowledge within a certain field. One of the major bottlenecks in developing bio-ontologies is the lack of a unified methodology. Different methodologies have been proposed for different scenarios, but there is no agreed-upon standard methodology for building ontologies. The involvement of geographically distributed domain experts, the need for domain experts to lead the design process, the application of the ontologies and the life cycles of bio-ontologies are amongst the features not considered by previously proposed methodologies. Results Here, we present a methodology for developing ontologies within the biological domain. We describe our scenario, competency questions, results and milestones for each methodological stage. We introduce the use of concept maps during knowledge acquisition phases as a feasible transition between domain expert and knowledge engineer. Conclusion The contributions of this paper are the thorough description of the steps we suggest when building an ontology, example use of concept maps, consideration of applicability to the development of lower-level ontologies and application to decentralised environments. We have found that within our scenario conceptual maps played an important role in the development process. PMID:16725019

  3. Flow Stress and Processing Map of a PM 8009Al/SiC Particle Reinforced Composite During Hot Compression

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Teng, Jie; Chen, Shuang; Wang, Yu; Zhang, Hui

    2017-10-01

    Hot compression tests of 8009Al alloy reinforced with 15% SiC particles (8009Al/15%SiCp composites) prepared by powder metallurgy (direct hot extrusion method) were performed on a Gleeble-3500 system in the temperature range of 400-550 °C and strain-rate range of 0.001-1 s⁻¹. The processing map based on the dynamic material model was established to evaluate the flow instability regime and optimize processing parameters; the associated microstructural changes were studied by optical metallography and scanning electron microscopy. The results showed that the flow stress increased initially and reached a plateau after the peak stress value with increasing strain. The peak stress increased as the strain rate increased and the deformation temperature decreased. The optimum parameters were identified to be a deformation temperature range of 500-550 °C and strain-rate range of 0.001-0.02 s⁻¹ by combining the processing map with microstructural observation.
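
    The dynamic material model behind such processing maps rates each (temperature, strain-rate) point by the power-dissipation efficiency η = 2m/(m+1), with m the strain-rate sensitivity; a minimal sketch with made-up flow-stress numbers (a real map interpolates η over the full test matrix and adds an instability criterion):

        import numpy as np

        def dmm_efficiency(log_rate, log_stress):
            """eta = 2m/(m+1), where m is the slope of log(stress) versus
            log(strain rate) at fixed temperature and strain."""
            m = np.gradient(log_stress, log_rate)
            return 2.0 * m / (m + 1.0)

        rates = np.array([0.001, 0.01, 0.1, 1.0])      # s^-1
        stress = np.array([45.0, 60.0, 82.0, 115.0])   # MPa, assumed values
        eta = dmm_efficiency(np.log10(rates), np.log10(stress))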

  4. Iterative framework radiation hybrid mapping

    USDA-ARS?s Scientific Manuscript database

    Building comprehensive radiation hybrid maps for large sets of markers is a computationally expensive process, since the basic mapping problem is equivalent to the traveling salesman problem. The mapping problem is also susceptible to noise, and as a result, it is often beneficial to remove markers ...

  5. The Mapping X-ray Fluorescence Spectrometer (MapX)

    NASA Astrophysics Data System (ADS)

    Sarrazin, P.; Blake, D. F.; Marchis, F.; Bristow, T.; Thompson, K.

    2017-12-01

    Many planetary surface processes leave traces of their actions as features in the size range of 10s to 100s of microns. The Mapping X-ray Fluorescence Spectrometer (MapX) will provide elemental imaging at 100 micron spatial resolution, yielding elemental chemistry at a scale where many relict physical, chemical, or biological features can be imaged and interpreted in ancient rocks on planetary bodies and planetesimals. MapX is an arm-based instrument positioned on a rock or regolith with touch sensors. During an analysis, an X-ray source (tube or radioisotope) bombards the sample with X-rays or alpha-particles / gamma-rays, resulting in sample X-ray fluorescence (XRF). X-rays emitted in the direction of an X-ray sensitive CCD imager pass through a 1:1 focusing lens (an X-ray micro-pore optic (MPO)) that projects a spatially resolved image of the X-rays onto the CCD. The CCD is operated in single photon counting mode so that the energies and positions of individual X-ray photons are recorded. In a single analysis, several thousand frames are both stored and processed in real time. Higher level data products include single-element maps with a lateral spatial resolution of 100 microns and quantitative XRF spectra from ground- or instrument-selected Regions of Interest (ROI). XRF spectra from ROI are compared with known rock and mineral compositions to extrapolate the data to rock types and putative mineralogies. When applied to airless bodies and implemented with an appropriate radioisotope source for alpha-particle excitation, MapX will be able to analyze the biogenic elements C, N, O, P and S, in addition to the cations of the rock-forming elements heavier than Na that are accessible with either X-ray or gamma-ray excitation. The MapX concept has been demonstrated with a series of lab-based prototypes and is currently under refinement and TRL maturation.
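
    Because the CCD records individual photon energies and positions, a single-element map is essentially a 2D histogram of events inside an energy window around one fluorescence line. The sketch below assumes a hypothetical (x, y, energy) event format; the detector size, line energy, and synthetic events are all illustrative.

    ```python
    import numpy as np

    def elemental_map(events, energy_window, shape=(256, 256)):
        """Bin photon events into a single-element image.

        events: rows of (x_pixel, y_pixel, energy_keV), the kind of list a
        single-photon-counting CCD yields frame by frame (format assumed).
        energy_window: (lo, hi) in keV bracketing one fluorescence line."""
        x, y, e = events[:, 0], events[:, 1], events[:, 2]
        keep = (e >= energy_window[0]) & (e <= energy_window[1])
        img, _, _ = np.histogram2d(y[keep], x[keep], bins=shape,
                                   range=[[0, shape[0]], [0, shape[1]]])
        return img

    # Synthetic events clustered around the Fe K-alpha line (~6.40 keV).
    rng = np.random.default_rng(1)
    events = np.column_stack([rng.uniform(0, 256, 10_000),
                              rng.uniform(0, 256, 10_000),
                              rng.normal(6.40, 0.10, 10_000)])
    fe_map = elemental_map(events, (6.2, 6.6))
    print(int(fe_map.sum()), "photons in the Fe window")
    ```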

  6. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and the different Green's functions representing the flux distribution in the medium are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries inherited from the MC simulation, the GPU cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
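
    One common way to assemble a Jacobian from precomputed Green's functions is the adjoint (Born-type) construction, in which the sensitivity of a source-detector pair to a voxel is the product of the source field and the reciprocal detector field at that voxel. The sketch below shows that generic construction with hypothetical array shapes; the paper's exact Fréchet-derivative formulation may differ.

    ```python
    import numpy as np

    def assemble_jacobian(g_src, g_det):
        """Born-type Jacobian assembly from Green's functions (a sketch).

        g_src: (n_sources, n_voxels) fields from each source position;
        g_det: (n_detectors, n_voxels) fields from each detector position,
        obtained by reciprocity. The sensitivity of pair (s, d) to a voxel
        is the elementwise product of the two fields."""
        n_src, n_vox = g_src.shape
        n_det = g_det.shape[0]
        jac = np.empty((n_src * n_det, n_vox))
        for s in range(n_src):
            jac[s * n_det:(s + 1) * n_det] = g_src[s] * g_det  # broadcast
        return jac

    # Hypothetical sizes: 4 sources, 8 detectors, 1000 voxels.
    rng = np.random.default_rng(2)
    J = assemble_jacobian(rng.random((4, 1000)), rng.random((8, 1000)))
    print(J.shape)   # (32, 1000)
    ```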

  7. Musical Maps as Narrative Inquiry

    ERIC Educational Resources Information Center

    Blair, Deborah V.

    2007-01-01

    This study explores the metaphorical relationship between the process of narrative inquiry and the process of "musical mapping." The creation of musical maps was used as a classroom tool for enabling students' musical understanding while listening to music. As teacher-researcher, I studied my fifth-grade music students as they interacted with…

  8. Smart "geomorphological" map browsing - a tale about geomorphological maps and the internet

    NASA Astrophysics Data System (ADS)

    Geilhausen, M.; Otto, J.-C.

    2012-04-01

    With the digital production of geomorphological maps, the dissemination of research outputs now extends beyond simple paper products. Internet technologies can contribute both to the dissemination of geomorphological maps and to access to geomorphological data, and can help make geomorphological knowledge available to a wider public. Indeed, many national geological surveys employ end-to-end digital workflows from data capture in the field to final map production and dissemination. This paper deals with the potential of web mapping applications and interactive, portable, georeferenced PDF maps for the distribution of geomorphological information. Web mapping applications such as Google Maps have become very popular and widespread and have increased interest in, and access to, mapping. They link the Internet with GIS technology and are a common way of presenting dynamic maps online. The GIS processing is performed online and the maps are visualised in interactive web viewers characterised by different capabilities, such as zooming, panning or adding further thematic layers, with the map refreshed after each task. Depending on the system architecture and the components used, advanced symbology, map overlays from different applications and sources, and their integration into a desktop GIS are possible. This interoperability is achieved through the use of international open standards that include mechanisms for the integration and visualisation of information from multiple sources. The portable document format (PDF) is commonly used for printing and is a standard format that can be processed by many graphics programs and printers without loss of information. A GeoPDF enables the sharing of geospatial maps and data in PDF documents. Multiple, independent map frames with individual spatial reference systems are possible within a GeoPDF, for example for map overlays or insets. The geospatial functionality of a GeoPDF includes scalable map display, layer visibility control, and access to attribute data.

  9. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and afterwards using real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
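
    With the sensitivity matrix assembled analytically, a non-iterative reconstruction reduces to a single regularized linear solve. The sketch below uses plain Tikhonov regularization on synthetic arrays; the matrix sizes, regularization weight, and phantom are hypothetical stand-ins, not the paper's algorithm verbatim.

    ```python
    import numpy as np

    def reconstruct(jacobian, d_temp, alpha=1e-3):
        """Single Tikhonov-regularised solve: minimize
        ||J x - dT||^2 + alpha ||x||^2, i.e. one non-iterative update
        once J and the MRT temperature change dT are in hand."""
        jtj = jacobian.T @ jacobian
        rhs = jacobian.T @ d_temp
        return np.linalg.solve(jtj + alpha * np.eye(jtj.shape[0]), rhs)

    rng = np.random.default_rng(3)
    J = rng.random((200, 50))                 # measurements x voxels
    x_true = np.zeros(50)
    x_true[20:25] = 1.0                       # synthetic absorption target
    x_hat = reconstruct(J, J @ x_true)
    print(np.round(x_hat[18:27], 2))
    ```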

  10. From conceptual modeling to a map

    NASA Astrophysics Data System (ADS)

    Gotlib, Dariusz; Olszewski, Robert

    2018-05-01

    Nowadays almost every map is a component of an information system. The design and production of maps requires the use of specific rules for modeling information systems: conceptual, application and data modelling. While analyzing the various stages of cartographic modeling, the authors ask: at what stage of this process does a map occur? Can we say that the "life of the map" begins even before someone defines its form of presentation? This question is particularly important at a time when the number of new geoinformation products is increasing exponentially. Analyzing the theory of cartography and the relations of the discipline to other fields of knowledge, the authors attempt to define a few properties of cartographic modeling which distinguish the process from other methods of spatial modeling. Assuming that the map is a model of reality (created in the process of cartographic modeling supported by domain modeling), the article proposes an analogy between the process of cartographic modeling and the scheme of conceptual modeling presented in the ISO 19101 standard.

  11. Using a national archive of patient experience narratives to promote local patient-centered quality improvement: an ethnographic process evaluation of 'accelerated' experience-based co-design.

    PubMed

    Locock, Louise; Robert, Glenn; Boaz, Annette; Vougioukalou, Sonia; Shuldham, Caroline; Fielden, Jonathan; Ziebland, Sue; Gager, Melanie; Tollyfield, Ruth; Pearcey, John

    2014-10-01

    To evaluate an accelerated form of experience-based co-design (EBCD), a type of participatory action research in which patients and staff work together to improve quality; to observe how acceleration affected the process and outcomes of the intervention. An ethnographic process evaluation of an adapted form of EBCD was conducted, including observations, interviews, questionnaires and documentary analysis. Whilst retaining all components of EBCD, the adapted approach replaced local patient interviews with secondary analysis of a national archive of patient experience narratives to create national trigger films; shortened the timeframe; and employed local improvement facilitators. It was tested in intensive care and lung cancer in two English National Health Service (NHS) hospitals. A total of 96 clinical staff (primarily nursing and medical), and 63 patients and family members participated in co-design activities. The accelerated approach proved acceptable to staff and patients; using films of national rather than local narratives did not adversely affect local NHS staff engagement, and may have made the process less threatening or challenging. Local patients felt the national films generally reflected important themes although a minority felt they were more negative than their own experience. However, they served their purpose of 'triggering' discussion between patients and staff, and the resulting 48 co-design (improvement) activities across the four pathways were similar to those in EBCD, but achieved more quickly and at lower cost. Accelerated EBCD offers a rigorous and relatively cost-effective patient-centered quality improvement approach. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  12. Effects of Spatial Gradients on Electron Runaway Acceleration

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Ljepojevic, N. N.

    1996-01-01

    The runaway process is known to accelerate electrons in many laboratory plasmas and has been suggested as an acceleration mechanism in some astrophysical plasmas, including solar flares. Current calculations of the electron velocity distributions resulting from the runaway process are greatly restricted because they impose spatial homogeneity on the distribution. We have computed runaway distributions which include consistent development of spatial gradients in the energetic tail. Our solution for the electron velocity distribution is presented as a function of distance along a finite-length acceleration region, and is compared with the equivalent distribution for the infinitely long homogeneous system (i.e., no spatial gradients), as considered in the existing literature. All these results are for the weak field regime. We also discuss the severe restrictiveness of this weak field assumption.

  13. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
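
    As a rough illustration of the two acceleration ingredients, the sketch below pairs a standard Gillespie-style KMC step with a naive rescaling that damps the rate constants of reaction channels observed to be quasi-equilibrated. The equilibration test, thresholds, and rates are illustrative assumptions, not the statistical criteria developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def kmc_step(rates, t):
        """One Gillespie-style KMC step: advance time by an exponential
        waiting time and pick an event with probability proportional
        to its rate."""
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(len(rates), p=rates / total)
        return event, t

    def rescale_fast_reactions(rates, fwd, rev, n_eq=100, delta=0.1, factor=0.1):
        """Damp rate constants of reversible channels that have fired many
        times in both directions, i.e. appear quasi-equilibrated. The test
        and thresholds are naive illustrations, not the paper's criteria."""
        scaled = rates.copy()
        for i, (f, r) in enumerate(zip(fwd, rev)):
            if f + r > n_eq and abs(f - r) / (f + r) < delta:
                scaled[i] *= factor
        return scaled

    rates = np.array([1e6, 1e6, 1.0])          # two fast channels, one slow
    event, t = kmc_step(rates, 0.0)
    print("event", event, "at t =", t)
    print(rescale_fast_reactions(rates, fwd=[500, 480, 3], rev=[495, 470, 0]))
    ```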

  14. Development of the first consensus genetic map of intermediate wheatgrass (Thinopyrum intermedium) using genotyping-by-sequencing.

    PubMed

    Kantarski, Traci; Larson, Steve; Zhang, Xiaofei; DeHaan, Lee; Borevitz, Justin; Anderson, James; Poland, Jesse

    2017-01-01

    Development of the first consensus genetic map of intermediate wheatgrass gives insight into the genome and provides tools for molecular breeding. Intermediate wheatgrass (Thinopyrum intermedium) has been identified as a candidate for domestication and improvement as a perennial grain, forage, and biofuel crop and is actively being improved by several breeding programs. To accelerate this process using genomics-assisted breeding, efficient genotyping methods and genetic marker reference maps are needed. We present here the first consensus genetic map for intermediate wheatgrass (IWG), which confirms the species' allohexaploid nature (2n = 6x = 42) and its homology to Triticeae genomes. Genotyping-by-sequencing was used to identify markers that fit expected segregation ratios and to construct genetic maps for 13 heterogeneous parents of seven full-sib families. These maps were then integrated using a linear programming method to produce a consensus map with 21 linkage groups containing 10,029 markers, 3,601 of which were present in at least two populations. Each of the 21 linkage groups contained between 237 and 683 markers, cumulatively covering 5,061 cM (2,891 cM, Kosambi) with an average distance of 0.5 cM between each pair of markers. By mapping the sequence tags to the diploid (2n = 2x = 14) barley reference genome, we observed high colinearity and synteny between these genomes, with three homoeologous IWG chromosomes corresponding to each of the seven barley chromosomes, and mapped translocations that are known in the Triticeae. The consensus map is a valuable tool for wheat breeders seeking to map important disease-resistance genes within intermediate wheatgrass. These genomic tools can help lead to rapid improvement of IWG and the development of high-yielding cultivars of this perennial grain, facilitating the sustainable intensification of agricultural systems.
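
    The marker-filtering step mentioned here (keeping markers that fit expected segregation ratios) is commonly implemented as a per-marker chi-square goodness-of-fit test. A minimal sketch, assuming a 1:1 expected ratio and an illustrative significance threshold:

    ```python
    from scipy.stats import chisquare

    def fits_segregation(n_a, n_b, expected=(0.5, 0.5), alpha=0.01):
        """Chi-square goodness-of-fit of an observed allele split against an
        expected segregation ratio (1:1 shown, as for a marker heterozygous
        in one parent of a full-sib family). The threshold is illustrative."""
        total = n_a + n_b
        stat, p_value = chisquare([n_a, n_b], f_exp=[p * total for p in expected])
        return p_value > alpha

    print(fits_segregation(98, 102))   # close to 1:1 -> True (keep marker)
    print(fits_segregation(150, 50))   # distorted    -> False (drop marker)
    ```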

  15. Warm Temperature Deformation Behavior and Processing Maps of 5182 and 7075 Aluminum Alloy Sheets with Fine Grains

    NASA Astrophysics Data System (ADS)

    Jang, D. H.; Kim, W. J.

    2018-05-01

    The tensile deformation behavior and processing maps of commercial 5182 and 7075 aluminum alloy sheets with similarly fine grain sizes (about 8 μm) were examined and compared over the temperature range of 423-723 K. The 5182 aluminum alloy, with equiaxed grains, exhibited larger strain rate sensitivity exponent (m) values than the 7075 aluminum alloy, with elongated grains, under most of the testing conditions. The fracture strain behaviors of the two alloys as a function of strain rate and temperature followed the trend in their m values. In the processing maps, the power dissipation parameter values of the 5182 aluminum alloy were larger than those of the 7075 aluminum alloy and the instability domains of the 5182 aluminum alloy were smaller than those of the 7075 aluminum alloy, implying that the 5182 aluminum alloy had better hot workability than the 7075 aluminum alloy.
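
    The strain rate sensitivity exponent m discussed here is the slope of ln(flow stress) versus ln(strain rate) at fixed temperature and strain, so a least-squares estimate is a one-liner. The flow-stress values below are illustrative, not the paper's measurements.

    ```python
    import numpy as np

    def strain_rate_sensitivity(strain_rates, flow_stresses):
        """m = slope of ln(flow stress) vs ln(strain rate), least squares."""
        slope, _intercept = np.polyfit(np.log(strain_rates),
                                       np.log(flow_stresses), 1)
        return slope

    rates = np.array([1e-4, 1e-3, 1e-2, 1e-1])    # s^-1
    sigma = np.array([18.0, 26.0, 38.0, 55.0])    # MPa, illustrative values
    print(f"m = {strain_rate_sensitivity(rates, sigma):.2f}")   # ~0.16
    ```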

  16. Gamma-ray emission and electron acceleration in solar flares

    NASA Technical Reports Server (NTRS)

    Petrosian, Vahe; Mctiernan, James M.; Marschhauser, Holger

    1994-01-01

    Recent observations have extended the spectra of the impulsive phase of flares to the GeV range. Such high-energy photons can be produced either by electron bremsstrahlung or by the decay of pions produced by accelerated protons. In this paper we investigate the effects of processes which become important at high energies. We examine the effects of synchrotron losses during the transport of electrons as they travel from the acceleration region in the corona to the gamma-ray emission sites deep in the chromosphere and photosphere, and the effects of scattering and absorption of gamma rays on their way from the photosphere to space instruments. These results are compared with the spectra of so-called electron-dominated flares, observed by the GRS on the Solar Maximum Mission, which show negligible or no detectable contribution from accelerated protons. The spectra of these flares show a distinct steepening at energies below 100 keV and a rapid falloff at energies above 50 MeV. Following our earlier results based on lower-energy gamma-ray flare emission, we have modeled these spectra. We show that neither the radiative transfer effects, which are expected to become important at higher energies, nor the transport effects (Coulomb collisions, synchrotron losses, or magnetic field convergence) can explain such sharp spectral deviations from a simple power law. These spectral deviations from a power law are therefore attributed to the acceleration process. In a stochastic acceleration model, the low-energy steepening can be attributed to Coulomb collisions and the rapid high-energy steepening can result from synchrotron losses during the acceleration process.

  17. The beat in laser-accelerated ion beams

    NASA Astrophysics Data System (ADS)

    Schnürer, M.; Andreev, A. A.; Abicht, F.; Bränzel, J.; Koschitzki, Ch.; Platonov, K. Yu.; Priebe, G.; Sandner, W.

    2013-10-01

    Regular modulation in the ion velocity distribution becomes detectable if intense femtosecond laser pulses with very high temporal contrast are used for target normal sheath acceleration of ions. Analytical and numerical analysis of the experimental observations associates the modulation with the half-cycle of the driving laser field period. In processes like ion acceleration, the collective, laser-frequency-determined electron dynamics creates strong fields in the plasma that accelerate the ions. Even the oscillatory motion of the electrons and its influence on the acceleration field can dominate over smoothing effects in the plasma if the driving laser pulse has sufficiently high temporal contrast. Acceleration parameters can be inferred directly from the experimentally observed modulation period in the ion velocity spectra. The appearance of the phenomenon at a temporal contrast of ten orders of magnitude between the pulse peak intensity and the amplified spontaneous emission background, as well as at remaining intensity wings on a picosecond time-scale, might trigger further parameter studies with even higher contrast.

  18. Mapping the Stacks: Sustainability and User Experience of Animated Maps in Library Discovery Interfaces

    ERIC Educational Resources Information Center

    McMillin, Bill; Gibson, Sally; MacDonald, Jean

    2016-01-01

    Animated maps of the library stacks were integrated into the catalog interface at Pratt Institute and into the EBSCO Discovery Service interface at Illinois State University. The mapping feature was developed for optimal automation of the update process to enable a range of library personnel to update maps and call-number ranges. The development…

  19. Coaching versus Direct Service Models for University Training to Accelerated Schools.

    ERIC Educational Resources Information Center

    Kirby, Peggy C.; Meza, James, Jr.

    This paper examines the changing roles and relationships of schools, central offices, and university facilitators at 11 schools that implemented the nationally recognized Accelerated Schools process. The schools joined the Louisiana Accelerated Schools Network in the summer of 1994. The paper begins with an overview of the Accelerated Schools…

  20. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large scale crop mapping requires the processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a “Big Data” problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (~28,100 km² with 1.0 Mha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex workflows of satellite data processing required by large scale applications such as crop mapping. The study discusses the strengths and weaknesses of the classifiers, assesses the accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark classifier based on a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (north of Ukraine) in 2013. We found that GEE provides very good performance in terms of enabling access to remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
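
    A hedged sketch of the kind of GEE workflow the study describes, using the Earth Engine Python API: build a multi-temporal composite over a region, sample it at labelled points, and classify with a random forest. The asset path, region, dates, and parameters are hypothetical, and the script assumes an authenticated Earth Engine account.

    ```python
    import ee
    ee.Initialize()   # assumes Earth Engine authentication is set up

    # Region and labelled training points are hypothetical placeholders.
    region = ee.Geometry.Rectangle([30.0, 49.5, 31.5, 50.5])      # ~Kyiv area
    points = ee.FeatureCollection('users/example/crop_training')  # 'class' prop

    # Multi-temporal Landsat-8 composite over the 2013 growing season.
    composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
                 .filterBounds(region)
                 .filterDate('2013-04-01', '2013-10-31')
                 .median())

    training = composite.sampleRegions(collection=points,
                                       properties=['class'], scale=30)
    classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
                  .train(features=training, classProperty='class',
                         inputProperties=composite.bandNames()))
    crop_map = composite.classify(classifier)   # 30 m classification image
    ```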

  1. MapFactory - Towards a mapping design pattern for big geospatial data

    NASA Astrophysics Data System (ADS)

    Rautenbach, Victoria; Coetzee, Serena

    2018-05-01

    With big geospatial data emerging, cartographers and geographic information scientists have to find new ways of dealing with the volume, variety, velocity, and veracity (4Vs) of the data. This requires the development of tools that allow processing, filtering, analysing, and visualising of big data through multidisciplinary collaboration. In this paper, we present the MapFactory design pattern that will be used for the creation of different maps according to the (input) design specification for big geospatial data. The design specification is based on elements from ISO19115-1:2014 Geographic information - Metadata - Part 1: Fundamentals that would guide the design and development of the map or set of maps to be produced. The results of the exploratory research suggest that the MapFactory design pattern will help with software reuse and communication. The MapFactory design pattern will aid software developers to build the tools that are required to automate map making with big geospatial data. The resulting maps would assist cartographers and others to make sense of big geospatial data.
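
    The paper does not include code, but the pattern it names is the classic Factory: a creator that maps a design specification to a concrete map product. A minimal Python sketch with hypothetical map classes and a specification reduced to a single 'type' field (the paper bases its specifications on ISO 19115-1 metadata elements):

    ```python
    from abc import ABC, abstractmethod

    class Map(ABC):
        """A map product; the subclasses and spec format are hypothetical."""
        @abstractmethod
        def render(self, features):
            ...

    class ChoroplethMap(Map):
        def render(self, features):
            return f"choropleth map of {len(features)} features"

    class HeatMap(Map):
        def render(self, features):
            return f"heat map of {len(features)} features"

    class MapFactory:
        """Create the right map product from a design specification."""
        _registry = {"choropleth": ChoroplethMap, "heat": HeatMap}

        @classmethod
        def create(cls, spec):
            return cls._registry[spec["type"]]()

    the_map = MapFactory.create({"type": "heat"})
    print(the_map.render(range(1000)))
    ```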

  2. Accelerated Application Development: The ORNL Titan Experience

    DOE PAGES

    Joubert, Wayne; Archibald, Richard K.; Berrill, Mark A.; ...

    2015-05-09

    The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.

  3. Accelerated application development: The ORNL Titan experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joubert, Wayne; Archibald, Rick; Berrill, Mark

    2015-08-01

    The use of computational accelerators such as NVIDIA GPUs and Intel Xeon Phi processors is now widespread in the high performance computing community, with many applications delivering impressive performance gains. However, programming these systems for high performance, performance portability and software maintainability has been a challenge. In this paper we discuss experiences porting applications to the Titan system. Titan, which began planning in 2009 and was deployed for general use in 2013, was the first multi-petaflop system based on accelerator hardware. To ready applications for accelerated computing, a preparedness effort was undertaken prior to delivery of Titan. In this paper we report experiences and lessons learned from this process and describe how users are currently making use of computational accelerators on Titan.

  4. Covariant Uniform Acceleration

    NASA Astrophysics Data System (ADS)

    Friedman, Yaakov; Scarr, Tzvi

    2013-04-01

    We derive a 4D covariant Relativistic Dynamics Equation. This equation canonically extends the 3D relativistic dynamics equation dp/dt = F, where F is the 3D force and p = m0γv is the 3D relativistic momentum. The standard 4D equation is only partially covariant. To achieve full Lorentz covariance, we replace the four-force F by a rank 2 antisymmetric tensor acting on the four-velocity. By taking this tensor to be constant, we obtain a covariant definition of uniformly accelerated motion. This solves a problem of Einstein and Planck. We compute explicit solutions for uniformly accelerated motion. The solutions are divided into four Lorentz-invariant types: null, linear, rotational, and general. For null acceleration, the worldline is cubic in the time. Linear acceleration covariantly extends 1D hyperbolic motion, while rotational acceleration covariantly extends pure rotational motion. We use Generalized Fermi-Walker transport to construct a uniformly accelerated family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the Weak Hypothesis of Locality, we obtain local spacetime transformations from a uniformly accelerated frame K' to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. We obtain velocity and acceleration transformations from a uniformly accelerated system K' to an inertial frame K. We introduce the 4D velocity, an adaptation of Horwitz and Piron's notion of "off-shell." We derive the general formula for the time dilation between accelerated clocks. We obtain a formula for the angular velocity of a uniformly accelerated object. Every rest point of K' is uniformly accelerated, and
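
    For reference, the 1D hyperbolic motion that the "linear" type covariantly extends has the standard textbook worldline below (proper acceleration a, proper time τ); this is background material, not a formula quoted from the paper.

    ```latex
    % Worldline of 1D hyperbolic motion with constant proper acceleration a:
    \begin{align}
      ct(\tau) &= \frac{c^{2}}{a}\,\sinh\!\left(\frac{a\tau}{c}\right), &
      x(\tau)  &= \frac{c^{2}}{a}\,\cosh\!\left(\frac{a\tau}{c}\right),
    \end{align}
    % so that x^{2} - (ct)^{2} = (c^{2}/a)^{2} for all \tau.
    ```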

  5. Landslide susceptibility mapping by combining the three methods Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process in Dozain basin

    NASA Astrophysics Data System (ADS)

    Tazik, E.; Jahantab, Z.; Bakhtiari, M.; Rezaei, A.; Kazem Alavipanah, S.

    2014-10-01

    Landslides are among the most important natural hazards that lead to modification of the environment, so the study of this phenomenon is important in many areas. Given the climatic, geologic, and geomorphologic characteristics of the region, the purpose of this study was landslide hazard assessment using Fuzzy Logic, the frequency ratio, and the Analytical Hierarchy Process (AHP) method in the Dozein basin, Iran. First, landslides that had occurred in the Dozein basin were identified using aerial photos and field studies. The landslide-influencing parameters used in this study, including slope, aspect, elevation, lithology, precipitation, land cover, distance from faults, distance from roads and distance from rivers, were obtained from different sources and maps. Using these factors and the identified landslides, the fuzzy membership values were calculated by the frequency ratio. Then, to account for the importance of each factor in landslide susceptibility, weights for each factor were determined based on questionnaires and the AHP method. Finally, the fuzzy map of each factor was multiplied by its AHP-derived weight. To compute prediction accuracy, the produced map was verified by comparison with existing landslide locations. The results indicate that combining the three methods Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process yields a relatively good estimator of landslide susceptibility in the study area: about 51% of the observed landslides fall into the high and very high susceptibility zones of the map, while approximately 26% are located in the low and very low susceptibility zones.
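
    The combination step reduces to a weighted overlay of fuzzy membership rasters. A minimal sketch with three of the nine factors, illustrative 3x3 grids, and hypothetical AHP weights:

    ```python
    import numpy as np

    # Fuzzy membership rasters (0-1) for three of the nine factors, as
    # produced by the frequency ratio step (3x3 grids, illustrative only).
    slope_fz = np.array([[0.2, 0.5, 0.9], [0.1, 0.4, 0.8], [0.1, 0.3, 0.7]])
    litho_fz = np.array([[0.6, 0.6, 0.9], [0.3, 0.5, 0.8], [0.2, 0.4, 0.6]])
    rain_fz  = np.array([[0.4, 0.7, 0.8], [0.3, 0.6, 0.7], [0.2, 0.5, 0.6]])

    # AHP weights (hypothetical; a real set covers all factors, summing to 1).
    w = {"slope": 0.5, "lithology": 0.3, "rainfall": 0.2}

    susceptibility = (w["slope"] * slope_fz + w["lithology"] * litho_fz
                      + w["rainfall"] * rain_fz)

    # Classify into five zones, very low (0) to very high (4).
    zones = np.digitize(susceptibility, bins=[0.2, 0.4, 0.6, 0.8])
    print(zones)
    ```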

  6. Analyzing radial acceleration with a smartphone acceleration sensor

    NASA Astrophysics Data System (ADS)

    Vogt, Patrik; Kuhn, Jochen

    2013-03-01

    This paper continues the sequence of experiments in this column that use the acceleration sensor of smartphones (for a description of the function and use of the acceleration sensor, see Ref. 1), in this case to analyze radial acceleration.
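
    A quick worked example of the quantity being measured: radial (centripetal) acceleration a_r = omega^2 r. The turntable numbers are illustrative, not values from the paper.

    ```python
    import math

    # Radial (centripetal) acceleration: a_r = omega^2 * r = (2*pi*f)^2 * r.
    # Illustrative numbers: a phone fixed 0.30 m from the centre of a
    # turntable rotating at 45 rpm.
    r = 0.30                 # m
    f = 45 / 60              # revolutions per second
    a_r = (2 * math.pi * f) ** 2 * r
    print(f"a_r = {a_r:.1f} m/s^2")   # ~6.7 m/s^2
    ```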

  7. Cloud Computing and Validated Learning for Accelerating Innovation in IoT

    ERIC Educational Resources Information Center

    Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus

    2015-01-01

    Innovation in the Internet of Things (IoT) requires more than just the creation of technology and the use of cloud computing or big data platforms. It requires accelerated commercialization, aptly called the go-to-market process. To accelerate successfully, companies need a new type of product development, the so-called validated learning process.…

  8. Vacuum Plasma Spray Forming of Tungsten Lorentz Force Accelerator Components

    NASA Technical Reports Server (NTRS)

    Zimmerman, Frank R.

    2004-01-01

    The Vacuum Plasma Spray (VPS) Laboratory at NASA's Marshall Space Flight Center, working with the Jet Propulsion Laboratory, has developed and demonstrated a fabrication technique using the VPS process to form anode and cathode sections for a Lorentz force accelerator made from tungsten. Lorentz force accelerators are an attractive form of electric propulsion that provides continuous, high-efficiency propulsion at useful power levels for such applications as orbit transfers or deep space missions. The VPS process is used to deposit refractory metals such as tungsten onto a graphite mandrel of the desired shape. Because tungsten is reactive at high temperatures, it is thermally sprayed in an inert environment where the plasma gun melts and deposits the molten metal powder onto a mandrel. A three-axis robot inside the chamber controls the motion of the plasma spray torch. A graphite mandrel acts as a male mold, forming the required contour and dimensions for the inside surface of the anode or cathode of the accelerator. This paper describes the processing techniques, design considerations, and process development associated with the VPS forming of Lorentz force accelerator components.

  9. Mapping of Arithmetic Processing by Navigated Repetitive Transcranial Magnetic Stimulation in Patients with Parietal Brain Tumors and Correlation with Postoperative Outcome.

    PubMed

    Ille, Sebastian; Drummer, Katharina; Giglhuber, Katrin; Conway, Neal; Maurer, Stefanie; Meyer, Bernhard; Krieg, Sandro M

    2018-06-01

    Preserving functionality is important during neurosurgical resection of brain tumors. Specialized centers also map further brain functions apart from motor and language functions, such as arithmetic processing (AP). The mapping of AP by navigated repetitive transcranial magnetic stimulation (nrTMS) in healthy volunteers has been reported previously. The present study aimed to correlate the results of mapping AP with functional patient outcomes. We included 26 patients with parietal brain tumors. Because of preoperative impairment of AP, mapping was not possible in 8 patients (31%). We stimulated 52 cortical sites by nrTMS while patients performed a calculation task. Preoperatively and postoperatively, patients underwent a standardized number-processing and calculation test (NPCT). Tumor resection was performed blinded to the nrTMS results, and the change in NPCT performance was correlated with resected AP-positive spots as identified by nrTMS. The resection of AP-positive sites correlated with a worsening of the postoperative NPCT result in 12 cases. In 3 cases, no AP-positive sites were resected and the postoperative NPCT result was similar to or better than the preoperative one. Also in 3 cases, the postoperative NPCT result was better than the preoperative one although AP-positive sites were resected. Although we present only a few cases, nrTMS might be a useful tool for preoperative mapping of AP. However, the reliability of the present results has to be evaluated in a larger series and against intraoperative mapping data. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Mapping the connectivity underlying multimodal (verbal and non-verbal) semantic processing: a brain electrostimulation study.

    PubMed

    Moritz-Gasser, Sylvie; Herbet, Guillaume; Duffau, Hugues

    2013-08-01

    Accessing the meaning of words, objects, people and facts is a human ability, made possible by semantic processing. Although studies of its cortical organization are plentiful, the subcortical connectivity underlying this semantic network has received less attention. We used intraoperative direct electrostimulation, which mimics a transient virtual lesion, during brain surgery for glioma in eight awake patients to map the anatomical white matter substrate subserving the semantic system. Patients performed a picture naming task and a non-verbal semantic association test during the electrical mapping. Direct electrostimulation of the inferior fronto-occipital fascicle, a poorly known ventral association pathway which runs throughout the brain, induced semantic disturbances in all cases. These transient disorders were highly reproducible and concerned verbal as well as non-verbal output. Our results highlight for the first time the essential role of the left inferior fronto-occipital fascicle in multimodal (and not only verbal) semantic processing. On the basis of these original findings, and in the light of phylogenetic considerations regarding this fascicle, we suggest its possible implication in the monitoring of the human level of consciousness related to semantic memory, namely noetic consciousness. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Mapping topographic plant location properties using a dense matching approach

    NASA Astrophysics Data System (ADS)

    Niederheiser, Robert; Rutzinger, Martin; Lamprecht, Andrea; Bardy-Durchhalter, Manfred; Pauli, Harald; Winkler, Manuela

    2017-04-01

    Within the project MEDIALPS (Disentangling anthropogenic drivers of climate change impacts on alpine plant species: Alps vs. Mediterranean mountains), six regions in the Alps and in Mediterranean mountains are investigated to assess how plant species respond to climate change. The project is embedded in the Global Observation Research Initiative in Alpine Environments (GLORIA), a well-established global monitoring initiative for systematic observation of changes in plant species composition and soil temperature on mountain summits worldwide, intended to discern accelerating climate change pressures on these fragile alpine ecosystems. Close-range sensing techniques such as terrestrial photogrammetry are well suited for mapping the terrain topography of small areas with high resolution. Lightweight equipment, flexible positioning for image acquisition in the field, and independence from weather conditions (e.g. wind) make this a feasible method for in-situ data collection. New developments in dense matching approaches allow high quality 3D terrain mapping with fewer requirements for the field set-up. However, challenges arise in post-processing and the required data storage if many sites have to be mapped. Within MEDIALPS, dense matching is used to map high resolution topography for 284 plots of 3 x 3 m, deriving information on vegetation coverage, roughness, slope, aspect and modelled solar radiation. This information helps identify types of topography-dependent ecological growing conditions and evaluate the potential of existing refugial locations for specific plant species under climate change. This research is conducted within the project MEDIALPS - Disentangling anthropogenic drivers of climate change impacts on alpine plant species: Alps vs. Mediterranean mountains - funded by the Earth System Sciences Programme of the Austrian Academy of Sciences.
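
    Slope and aspect of the plot surfaces can be derived from the dense-matching elevation grid by finite differences. A minimal sketch; the elevations, cell size, and aspect convention are illustrative assumptions.

    ```python
    import numpy as np

    def slope_aspect(dem, cell_size):
        """Slope (degrees) and aspect (degrees, one common convention)
        from a gridded elevation model via finite differences."""
        dz_dy, dz_dx = np.gradient(dem, cell_size)
        slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
        aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
        return slope, aspect

    # Hypothetical 5 cm elevation grid for part of a 3 x 3 m plot.
    dem = np.array([[10.00, 10.02, 10.05],
                    [10.01, 10.04, 10.08],
                    [10.03, 10.07, 10.12]])   # metres
    slope, aspect = slope_aspect(dem, cell_size=0.05)
    print(slope.round(1))
    ```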

  12. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., the IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems; and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  13. Modeling of thermalization phenomena in coaxial plasma accelerators

    NASA Astrophysics Data System (ADS)

    Subramaniam, Vivek; Panneerchelvam, Premkumar; Raja, Laxminarayan L.

    2018-05-01

    Coaxial plasma accelerators are electromagnetic acceleration devices that employ a self-induced Lorentz force to produce collimated plasma jets with velocities of ~50 km s^-1. The accelerator operation is characterized by the formation of an ionization/thermalization zone near the gas inlet of the device that continually processes the incoming neutral gas into a highly ionized thermal plasma. In this paper, we present a 1D non-equilibrium plasma model to resolve the plasma formation and the electron-heavy-species thermalization phenomena that take place in the thermalization zone. The non-equilibrium model is based on a self-consistent multi-species continuum description of the plasma with finite-rate chemistry. The thermalization zone is modelled by tracking a 1D gas-bit as it convects down the device, with an initial gas pressure of 1 atm. The thermalization process occurs in two stages. The first is a plasma production stage, associated with a rapid increase in the charged species number densities, facilitated by cathode surface electron emission and volumetric production processes. The production stage results in the formation of a two-temperature plasma with electron energies of ~2.5 eV in a low temperature background gas at ~300 K. The second, a temperature equilibration stage, is characterized by the energy transfer between the electrons and the heavy species. The characteristic length scale for thermalization is found to be comparable to the axial length of the accelerator, thus calling into question the equilibrium magnetohydrodynamics assumption used in modeling coaxial accelerators.
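
    The second, equilibration stage is essentially electron-heavy energy exchange. A deliberately simplified relaxation sketch follows; the constant equilibration time and heat-capacity ratio are assumptions of this toy, whereas the paper's model uses finite-rate collision physics.

    ```python
    # Toy electron-heavy temperature relaxation, dT/dt ~ -(Te - Tg)/tau.
    # A constant equilibration time and energy-uptake ratio are assumed
    # here purely for illustration.
    tau_eq = 1e-7                     # s, illustrative equilibration time
    uptake_ratio = 0.01               # heavy-species energy uptake, assumed
    dt, steps = 1e-9, 500
    Te, Tg = 2.5 * 11604.0, 300.0     # ~2.5 eV electrons, 300 K gas

    for _ in range(steps):
        flux = (Te - Tg) / tau_eq
        Te -= flux * dt
        Tg += flux * dt * uptake_ratio
    print(f"Te = {Te:.0f} K, Tg = {Tg:.0f} K after {steps * dt:.1e} s")
    ```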

  14. Acceleration Environment of the International Space Station

    NASA Technical Reports Server (NTRS)

    McPherson, Kevin; Kelly, Eric; Keller, Jennifer

    2009-01-01

    Measurement of the microgravity acceleration environment on the International Space Station has been accomplished by two accelerometer systems since 2001. The Microgravity Acceleration Measurement System records the quasi-steady microgravity environment, including the influences of aerodynamic drag, vehicle rotation, and venting effects. Measurement of the vibratory/transient regime, comprising vehicle, crew, and equipment disturbances, has been accomplished by the Space Acceleration Measurement System-II. Until the arrival of the Columbus Orbital Facility and the Japanese Experiment Module, the location of these sensors, and therefore the measurement of the microgravity acceleration environment, was limited to the United States Laboratory. The Japanese Aerospace Exploration Agency has developed a vibratory acceleration measurement system called the Microgravity Measurement Apparatus, which will be deployed within the Japanese Experiment Module to make distributed measurements of that module's vibratory acceleration environment. Two Space Acceleration Measurement System sensors from the United States Laboratory will be redeployed to support vibratory acceleration measurements within the Columbus Orbital Facility. The additional measurement opportunities resulting from the arrival of these new laboratories allow Principal Investigators with facilities located in these International Space Station research laboratories to obtain microgravity acceleration data in support of their sensitive experiments. The Principal Investigator Microgravity Services (PIMS) project at NASA Glenn Research Center in Cleveland, Ohio, has supported acceleration measurement systems and the microgravity scientific community through the processing, characterization, distribution, and archival of the microgravity acceleration data obtained from the International Space Station acceleration measurement systems. This paper summarizes the PIMS capabilities available

  15. Consistent global structures of complex RNA states through multidimensional chemical mapping

    PubMed Central

    Cheng, Clarence Yu; Chou, Fang-Chieh; Kladwang, Wipapat; Tian, Siqi; Cordero, Pablo; Das, Rhiju

    2015-01-01

    Accelerating discoveries of non-coding RNA (ncRNA) in myriad biological processes pose major challenges to structural and functional analysis. Despite progress in secondary structure modeling, high-throughput methods have generally failed to determine ncRNA tertiary structures, even at the 1-nm resolution that enables visualization of how helices and functional motifs are positioned in three dimensions. We report that integrating a new method called MOHCA-seq (Multiplexed •OH Cleavage Analysis with paired-end sequencing) with mutate-and-map secondary structure inference guides Rosetta 3D modeling to consistent 1-nm accuracy for intricately folded ncRNAs with lengths up to 188 nucleotides, including a blind RNA-puzzle challenge, the lariat-capping ribozyme. This multidimensional chemical mapping (MCM) pipeline resolves unexpected tertiary proximities for cyclic-di-GMP, glycine, and adenosylcobalamin riboswitch aptamers without their ligands and a loose structure for the recently discovered human HoxA9D internal ribosome entry site regulon. MCM offers a sequencing-based route to uncovering ncRNA 3D structure, applicable to functionally important but potentially heterogeneous states. DOI: http://dx.doi.org/10.7554/eLife.07600.001 PMID:26035425

  16. Evaluation of asymmetric quadrupoles for a non-scaling fixed field alternating gradient accelerator

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Hun; Park, Sae-Hoon; Kim, Yu-Seok

    2017-12-01

    A non-scaling fixed field alternating gradient (NS-FFAG) accelerator employing conventional quadrupoles was constructed. A possible drawback is beam instability caused by the variable focusing strength as the orbit radius of the beam changes. To overcome this instability, it was suggested to use asymmetric quadrupoles with different currents in each coil. The magnetic field of the asymmetric quadrupole was found to be closer to the field required for an FFAG accelerator than that of the constructed NS-FFAG accelerator. In this study, a beam dynamics simulation was carried out with the SIMION program to evaluate the improvement in beam stability for the NS-FFAG accelerator. The simulation used the 'hard edge' model, which ignores the fringe field at the ends of the magnets. The magnetic field map of the suggested magnet was created using the SIMION program, the lattices for the simulation were assembled from the suggested magnets, and the beam stability of the magnets in these lattices was evaluated with SIMION.
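
    In the hard-edge model named here, each quadrupole acts on the transverse phase space through the standard focusing/defocusing transfer matrices, and a lattice cell is stable when the one-cell matrix satisfies |trace| < 2. A generic sketch; the strengths and lengths are illustrative, not the paper's lattice.

    ```python
    import numpy as np

    def quad_matrix(k, length):
        """Hard-edge quadrupole transfer matrix in one transverse plane
        (k in m^-2; k > 0 focusing, k < 0 defocusing in that plane)."""
        w = np.sqrt(abs(k))
        if k > 0:
            return np.array([[np.cos(w * length), np.sin(w * length) / w],
                             [-w * np.sin(w * length), np.cos(w * length)]])
        return np.array([[np.cosh(w * length), np.sinh(w * length) / w],
                         [w * np.sinh(w * length), np.cosh(w * length)]])

    def drift(length):
        return np.array([[1.0, length], [0.0, 1.0]])

    # Illustrative FODO-like cell; stable when |trace| < 2.
    cell = (quad_matrix(4.0, 0.2) @ drift(0.5)
            @ quad_matrix(-4.0, 0.2) @ drift(0.5))
    print("stable:", abs(np.trace(cell)) < 2.0)
    ```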

  17. Sci—Thur AM: YIS - 08: Constructing an Attenuation map for a PET/MR Breast coil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patrick, John C.; Imaging, Lawson Health Research Institute, Knoxville, TN; London Regional Cancer Program, Knoxville, TN

    2014-08-15

    In 2013, around 23,000 Canadian women and 200 Canadian men were diagnosed with breast cancer. An estimated 5,100 women and 55 men died from the disease. Combining the sensitivity of MRI with the selectivity of PET, PET/MRI provides anatomical and functional information within the same scan and could help with early detection in high-risk patients. MRI requires radiofrequency coils for transmitting energy and receiving signal, but the breast coil attenuates the PET signal. To correct for this PET attenuation, a 3-dimensional map of linear attenuation coefficients (μ-map) of the breast coil must be created and incorporated into the PET reconstruction process. Several approaches have been proposed for building hardware μ-maps, some of which include the use of conventional kVCT and dual-energy CT. These methods can produce high resolution images based on the electron densities of materials that can be converted into μ-maps. However, imaging hardware containing metal components with photons in the kV range is susceptible to metal artifacts. These artifacts can compromise the accuracy of the resulting μ-map and PET reconstruction; therefore high-Z components should be removed. We propose a method for calculating μ-maps without removing coil components, based on megavoltage (MV) imaging with a linear accelerator that has been detuned for imaging at 1.0 MeV. Containers of known geometry filled with F-18 were placed in the breast coil for imaging. A comparison was made between reconstructions based on the different μ-map construction methods. PET reconstructions with our method show a maximum difference of 6% relative to the existing kVCT-based reconstructions.

  18. Formation Mechanisms, Structure, and Properties of HVOF-Sprayed WC-CoCr Coatings: An Approach Toward Process Maps

    NASA Astrophysics Data System (ADS)

    Varis, T.; Suhonen, T.; Ghabchi, A.; Valarezo, A.; Sampath, S.; Liu, X.; Hannula, S.-P.

    2014-08-01

    Our study focuses on understanding the damage tolerance and performance reliability of WC-CoCr coatings. In this paper, the formation of HVOF-sprayed tungsten carbide-based cermet coatings is studied through an integrated strategy: first-order process maps are created by using online diagnostics to assess particle states in relation to process conditions. Coating properties such as hardness, wear resistance, elastic modulus, residual stress, and fracture toughness are discussed with the goal of establishing a linkage between properties and particle characteristics via second-order process maps. A strong influence of particle state on the mechanical properties, wear resistance, and residual stress state of the coating was observed. Within the processing window used (particle temperatures from 1687 to 1831 °C and particle velocities from 577 to 621 m/s), the coating hardness varied from 1021 to 1507 HV and the modulus from 257 to 322 GPa. The variation in the coating's mechanical state is suggested to relate to microstructural changes arising from carbide dissolution, which affects the properties of the matrix and, in turn, the cohesive properties of the lamellae. Complete tracking of the coating particle state and linking it to mechanical properties and residual stresses enables coating design with the desired properties.

  19. Menopause accelerates biological aging

    PubMed Central

    Levine, Morgan E.; Lu, Ake T.; Chen, Brian H.; Hernandez, Dena G.; Singleton, Andrew B.; Ferrucci, Luigi; Bandinelli, Stefania; Salfati, Elias; Manson, JoAnn E.; Quach, Austin; Kusters, Cynthia D. J.; Kuh, Diana; Wong, Andrew; Teschendorff, Andrew E.; Widschwendter, Martin; Ritz, Beate R.; Absher, Devin; Assimes, Themistocles L.; Horvath, Steve

    2016-01-01

    Although epigenetic processes have been linked to aging and disease in other systems, it is not yet known whether they relate to reproductive aging. Recently, we developed a highly accurate epigenetic biomarker of age (known as the “epigenetic clock”), which is based on DNA methylation levels. Here we carry out an epigenetic clock analysis of blood, saliva, and buccal epithelium using data from four large studies: the Women's Health Initiative (n = 1,864); Invecchiare nel Chianti (n = 200); Parkinson's disease, Environment, and Genes (n = 256); and the United Kingdom Medical Research Council National Survey of Health and Development (n = 790). We find that increased epigenetic age acceleration in blood is significantly associated with earlier menopause (P = 0.00091), bilateral oophorectomy (P = 0.0018), and a longer time since menopause (P = 0.017). Conversely, epigenetic age acceleration in buccal epithelium and saliva does not relate to age at menopause; however, a higher epigenetic age in saliva is exhibited in women who undergo bilateral oophorectomy (P = 0.0079), while a lower epigenetic age in buccal epithelium was found for women who underwent menopausal hormone therapy (P = 0.00078). Using genetic data, we find evidence of coheritability between age at menopause and epigenetic age acceleration in blood. Using Mendelian randomization analysis, we find that two SNPs that are highly associated with age at menopause exhibit a significant association with epigenetic age acceleration. Overall, our Mendelian randomization approach and other lines of evidence suggest that menopause accelerates epigenetic aging of blood, but mechanistic studies will be needed to dissect cause-and-effect relationships further. PMID:27457926
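
    The "epigenetic age acceleration" used throughout this record is commonly defined as the residual from regressing epigenetic (DNAm) age on chronological age. A minimal sketch with illustrative ages; the clock itself, a fixed weighted sum over CpG methylation levels, is omitted here.

    ```python
    import numpy as np

    def age_acceleration(dnam_age, chron_age):
        """Age acceleration as the residual from regressing DNAm age on
        chronological age (one common definition)."""
        coeffs = np.polyfit(chron_age, dnam_age, 1)
        return dnam_age - np.polyval(coeffs, chron_age)

    chron = np.array([45.0, 50.0, 55.0, 60.0, 65.0])
    dnam = np.array([47.0, 49.5, 58.0, 61.0, 63.5])   # illustrative DNAm ages
    print(age_acceleration(dnam, chron).round(2))      # + = accelerated aging
    ```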

  20. Ground motion models used in the 2014 U.S. National Seismic Hazard Maps

    USGS Publications Warehouse

    Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.

    2015-01-01

    The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard computed using the new suite of ground motion models (GMMs) with hazard computed using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used in both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA) and 0.2-s and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard of 5% to 20% in the WUS; decreases of 5% to 20% in the CEUS; changes of 5% to 15% for subduction interface earthquakes; and changes ranging from decreases of up to 50% to increases of up to 30% for deep intraslab earthquakes at most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.

  1. Scalability of the LEU-Modified Cintichem Process: 3-MeV Van de Graaff and 35-MeV Electron Linear Accelerator Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rotsch, David A.; Brossard, Tom; Roussin, Ethan

    Molybdenum-99, the parent of Tc-99m, can be produced by fission of U-235 in nuclear reactors and purified from fission products by the Cintichem process, later modified for low-enriched uranium (LEU) targets. The key step in this process is the precipitation of Mo with α-benzoin oxime (ABO). The stability of this complex to radiation has been examined. Molybdenum-ABO was irradiated with 3 MeV electrons produced by a Van de Graaff generator and 35 MeV electrons produced by a 50 MeV/25 kW electron linear accelerator. Dose equivalents of 1.7-31.2 kCi of Mo-99 were administered to freshly prepared Mo-ABO. Irradiated samples of Mo-ABO were processed according to the LEU-Modified Cintichem process. The Van de Graaff data indicated good radiation stability of the Mo-ABO complex up to ~15 kCi dose equivalents of Mo-99 and nearly complete destruction at doses >24 kCi Mo-99. The linear accelerator data indicate that even at a dose equivalent of 6.2 kCi of Mo-99, the sample lost ~20% of its Mo-99. The 20% loss of Mo-99 at this low dose may be attributed to thermal decomposition of the product from the heat deposited in the sample during irradiation.

  2. USGS National Seismic Hazard Maps

    USGS Publications Warehouse

    Frankel, A.D.; Mueller, C.S.; Barnhard, T.P.; Leyendecker, E.V.; Wesson, R.L.; Harmsen, S.C.; Klein, F.W.; Perkins, D.M.; Dickman, N.C.; Hanson, S.L.; Hopper, M.G.

    2000-01-01

    The U.S. Geological Survey (USGS) recently completed new probabilistic seismic hazard maps for the United States, including Alaska and Hawaii. These hazard maps form the basis of the probabilistic component of the design maps used in the 1997 edition of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, prepared by the Building Seismic Safety Council and published by FEMA. The hazard maps depict peak horizontal ground acceleration and spectral response at 0.2, 0.3, and 1.0 sec periods, with 10%, 5%, and 2% probabilities of exceedance in 50 years, corresponding to return times of about 500, 1000, and 2500 years, respectively. In this paper we outline the methodology used to construct the hazard maps. There are three basic components to the maps. First, we use spatially smoothed historic seismicity as one portion of the hazard calculation. In this model, we apply the general observation that moderate and large earthquakes tend to occur near areas of previous small or moderate events, with some notable exceptions. Second, we consider large background source zones based on broad geologic criteria to quantify hazard in areas with little or no historic seismicity but with the potential for generating large events. Third, we include the hazard from specific fault sources. We use about 450 faults in the western United States (WUS) and derive recurrence times from either geologic slip rates or the dating of prehistoric earthquakes from trenching of faults or other paleoseismic methods. Recurrence estimates for large earthquakes in New Madrid and Charleston, South Carolina, were taken from recent paleoliquefaction studies. We used logic trees to incorporate different seismicity models, fault recurrence models, Cascadia great earthquake scenarios, and ground-motion attenuation relations. We present disaggregation plots showing the contribution to hazard at four cities from potential earthquakes with various magnitudes and
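
    The correspondence between exceedance probabilities and return times quoted here follows from the Poisson assumption P = 1 - exp(-t/T); a quick check:

    ```python
    import math

    # Poisson relation between exceedance probability P in t years and
    # return period T: P = 1 - exp(-t / T)  =>  T = -t / ln(1 - P).
    for p in (0.10, 0.05, 0.02):
        T = -50 / math.log(1 - p)
        print(f"{p:.0%} in 50 yr -> return period ~ {T:.0f} yr")
    # prints ~475, ~975, ~2475 yr, i.e. "about 500, 1000, and 2500 years"
    ```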

  3. Creation of a full color geologic map by computer: A case history from the Port Moller project resource assessment, Alaska Peninsula: A section in Geologic studies in Alaska by the U.S. Geological Survey, 1988

    USGS Publications Warehouse

    Wilson, Frederic H.

    1989-01-01

Graphics programs on computers can facilitate the compilation and production of geologic maps, including full color maps of publication quality. This paper describes the application of two different programs, GSMAP and ARC/INFO, to the production of a geologic map of the Port Moller and adjacent 1:250,000-scale quadrangles on the Alaska Peninsula. GSMAP was used at first because of easy digitizing on inexpensive computer hardware. Limitations in its editing capability led to transfer of the digital data to ARC/INFO, a Geographic Information System, which has better editing and added data analysis capability. Although these improved capabilities are accompanied by increased complexity, the availability of ARC/INFO's data analysis capability provides unanticipated advantages: it allows digital map data to be processed as one of multiple data layers for mineral resource assessment. As a result of the development of both software packages, each is now easier to apply to geologic map production. Both systems accelerate the drafting and revision of maps and enhance the compilation process. Additionally, ARC/INFO's analysis capability enhances the geologist's ability to answer questions of interest that were previously difficult or impossible to address.

  4. Global Mapping Project - Applications and Development of Version 2 Dataset

    NASA Astrophysics Data System (ADS)

    Ubukawa, T.; Nakamura, T.; Otsuka, T.; Iimura, T.; Kishimoto, N.; Nakaminami, K.; Motojima, Y.; Suga, M.; Yatabe, Y.; Koarai, M.; Okatani, T.

    2012-07-01

The Global Mapping Project aims to develop basic geospatial information covering the whole land area of the globe, named the Global Map, through the cooperation of National Mapping Organizations (NMOs) around the world. The Global Map can serve as a base for global geospatial infrastructure and is composed of eight layers: Boundaries, Drainage, Transportation, Population Centers, Elevation, Land Use, Land Cover and Vegetation. Version 1 of the Global Map was released in 2008, and Version 2 will be released in 2013, as the data are to be updated every five years. In 2009, the International Steering Committee for Global Mapping (ISCGM) adopted new Specifications for Global Map Version 2, changing its format to be compatible with the international standards ISO 19136 and ISO 19115. With the support of the secretariat of ISCGM, the participating countries are accelerating their data development toward completion of global coverage in 2013, while some countries have already released their Global Map Version 2 datasets since 2010. Global Map data are available from the Internet free of charge for non-commercial purposes and, combined with other spatial data, can be used to predict, assess, prepare for and cope with global issues. There are many Global Map applications in various fields, and further utilization is expected. This paper summarises the activities toward the development of the Global Map Version 2 as well as some examples of Global Map applications in various fields.

  5. New seismic hazard maps for Puerto Rico and the U.S. Virgin Islands

    USGS Publications Warehouse

    Mueller, C.; Frankel, A.; Petersen, M.; Leyendecker, E.

    2010-01-01

The probabilistic methodology developed by the U.S. Geological Survey is applied to a new seismic hazard assessment for Puerto Rico and the U.S. Virgin Islands. Modeled seismic sources include gridded historical seismicity, subduction-interface and strike-slip faults with known slip rates, and two broad zones of crustal extension with seismicity rates constrained by GPS geodesy. We use attenuation relations from western North American and worldwide data, as well as a Caribbean-specific relation. Results are presented as maps of peak ground acceleration and 0.2- and 1.0-second spectral response acceleration for 2% and 10% probabilities of exceedance in 50 years (return periods of about 2,500 and 500 years, respectively). This paper describes the hazard model and maps that were balloted by the Building Seismic Safety Council and recommended for the 2003 NEHRP Provisions and the 2006 International Building Code. © 2010, Earthquake Engineering Research Institute.

  6. Acceleration during magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beresnyak, Andrey; Li, Hui

    2015-07-16

The presentation begins with colorful depictions of solar X-ray flares and references to pulsar phenomena. Plasma reconnection is complex: it can be X-point dominated or turbulent, and field lines can break due to either resistivity or non-ideal effects such as electron pressure anisotropy. Electron acceleration is sometimes observed and sometimes not. One way to study this complex problem is to gather many examples of the process (reconnection) and compare them; the other is to simplify until something robust emerges. Ideal MHD turbulence (E = 0 in the fluid frame) driven by magnetic energy is assumed, and first-order acceleration is sought. It is found that dissipation in big (length >100 ion skin depths) current sheets is universal and independent of microscopic resistivity and the mean imposed field; particles are regularly accelerated while experiencing curvature drift in flows driven by magnetic tension. One example of such a flow is spontaneous reconnection. This explains hot electrons with a power-law tail in solar flares, as well as ultrashort time variability in some astrophysical sources.
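
    For orientation, the textbook guiding-center expressions behind the curvature-drift mechanism described above (standard plasma physics, not reproduced from the presentation) are

        \mathbf{v}_c = \frac{m v_\parallel^2}{q B^2} \, \frac{\mathbf{R}_c \times \mathbf{B}}{R_c^2}, \qquad \frac{d\varepsilon}{dt} = q \, \mathbf{E} \cdot \mathbf{v}_c,

    where R_c is the field-line curvature radius vector and E = -u x B is the motional electric field of the MHD flow: a particle gains energy wherever its curvature drift has a component along E, i.e. in flows driven by magnetic tension.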

  7. Implementation of the analytical hierarchy process with VBA in ArcGIS

    NASA Astrophysics Data System (ADS)

    Marinoni, Oswald

    2004-07-01

Decisions on land use have become progressively more difficult in recent decades. The main reasons lie in a growing population, an increasing demand for new land and resources, and a growing consciousness of sustainable land and resource use. The steady reduction of valuable land leads to more conflicts in land-use decision-making processes, since more interests are affected and therefore more stakeholders with different land-use interests and different valuation criteria become involved in the decision-making process. In the course of such a decision process, all identified criteria are weighted according to their relative importance. But assigning weights to the relevant criteria quickly becomes difficult when a greater number of criteria are considered. Especially with regard to land-use decisions, where decision makers expect some kind of mapped result, it is therefore useful to use procedures that not only help derive criteria weights but also accelerate the visualisation and mapping of land-use assessment results. Both aspects can easily be facilitated in a GIS. This paper focuses on the development of an ArcGIS VBA macro which enables the user to derive criteria weights with the analytical hierarchy process and which allows mapping of land-use assessment results by a weighted summation of GIS raster data sets. A dynamic link library for the calculation of the eigenvalues and eigenvectors of a square matrix is provided.
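
    A minimal sketch of the two steps the abstract combines, assuming nothing about the actual VBA implementation: criteria weights from the principal eigenvector of a pairwise comparison matrix (the AHP step), then a weighted summation of raster layers. Written in Python with numpy rather than VBA; the 3x3 matrix, the random-index table entries, and the toy raster layers are illustrative placeholders.

        import numpy as np

        def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
            """AHP weights = normalized principal eigenvector of the comparison matrix."""
            eigvals, eigvecs = np.linalg.eig(pairwise)
            k = np.argmax(eigvals.real)                     # principal eigenvalue
            w = np.abs(eigvecs[:, k].real)
            return w / w.sum()

        def consistency_ratio(pairwise: np.ndarray) -> float:
            """Saaty's consistency ratio; values below ~0.1 are conventionally acceptable."""
            n = pairwise.shape[0]
            random_index = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's tabulated values
            lam = np.max(np.linalg.eigvals(pairwise).real)
            return ((lam - n) / (n - 1)) / random_index

        # Hypothetical 3-criterion comparison (e.g. slope vs. land cover vs. road access).
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        w = ahp_weights(A)
        assert consistency_ratio(A) < 0.1                   # judgments are acceptably consistent

        # Weighted summation of co-registered raster layers (toy 2x2 grids).
        layers = np.stack([np.random.rand(2, 2) for _ in range(3)])
        suitability = np.tensordot(w, layers, axes=1)       # per-cell weighted sum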

  8. Report of the Fourth International Workshop on human X chromosome mapping 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlessinger, D.; Mandel, J.L.; Monaco, A.P.

    1993-12-31

Vigorous interactive efforts by the X chromosome community have led to accelerated mapping in the last six months. Seventy-five participants from 12 countries around the globe contributed progress reports to the Fourth International X Chromosome Workshop, at St. Louis, MO, May 9-12, 1993. It became clear that well over half the chromosome is now covered by YAC contigs that are being extended, verified, and aligned by their content of STSs and other markers placed by cytogenetic or linkage mapping techniques. The major aim of the workshop was to assemble the consensus map that appears in this report, summarizing both consensus order and YAC contig information.

  9. Accelerators, Beams And Physical Review Special Topics - Accelerators And Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Siemann, R.H. (SLAC)

Accelerator science and technology have evolved as accelerators became larger and more important to a broad range of science. Physical Review Special Topics - Accelerators and Beams was established to serve the accelerator community as a timely, widely circulated, international journal covering the full breadth of accelerators and beams. The history of the journal and the innovations associated with it are reviewed.

  10. Turbulence, Magnetic Reconnection in Turbulent Fluids and Energetic Particle Acceleration

    NASA Astrophysics Data System (ADS)

    Lazarian, A.; Vlahos, L.; Kowal, G.; Yan, H.; Beresnyak, A.; de Gouveia Dal Pino, E. M.

    2012-11-01

Turbulence is ubiquitous in astrophysics. It radically changes many astrophysical phenomena, in particular the propagation and acceleration of cosmic rays. We present the modern understanding of compressible magnetohydrodynamic (MHD) turbulence, in particular its decomposition into Alfvén, slow and fast modes, and discuss the density structure of turbulent subsonic and supersonic media, as well as other relevant regimes of astrophysical turbulence. All this information is essential for understanding the energetic particle acceleration that we discuss further in the review. For instance, we show how fast and slow modes accelerate energetic particles through second-order Fermi acceleration, while density fluctuations generate magnetic fields in pre-shock regions, enabling first-order Fermi acceleration of high-energy cosmic rays. Very importantly, however, first-order Fermi cosmic ray acceleration is also possible in sites of magnetic reconnection. In the presence of turbulence this reconnection becomes fast, and we present numerical evidence supporting the predictions of the Lazarian and Vishniac (Astrophys. J. 517:700-718, 1999) model of fast reconnection. The efficiency of this process suggests that magnetic reconnection can release substantial amounts of energy in short periods of time. As particle-tracing numerical simulations show that particles can be efficiently accelerated during reconnection, we argue that the process of magnetic reconnection may be much more important for particle acceleration than is currently accepted. In particular, we discuss acceleration arising from reconnection as a possible origin of the anomalous cosmic rays measured by the Voyagers, as well as of the cosmic ray excess in the direction of the Heliotail.
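
    For context, the textbook per-encounter energy-gain scalings that separate the two mechanisms named above (standard results, not findings of this review) are

        \left\langle \Delta E / E \right\rangle_{\mathrm{II}} \sim (u/c)^2, \qquad \left\langle \Delta E / E \right\rangle_{\mathrm{I}} \sim u/c,

    where u is the speed of the scattering centers or of the converging flow; first-order acceleration is faster precisely because the gain is linear rather than quadratic in u/c, which is why fast reconnection, supplying converging inflows, matters for particle acceleration.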

  11. Uniform, optimal signal processing of mapped deep-sequencing data.

    PubMed

    Kumar, Vibhor; Muratani, Masafumi; Rayan, Nirmala Arul; Kraus, Petra; Lufkin, Thomas; Ng, Huck Hui; Prabhakar, Shyam

    2013-07-01

    Despite their apparent diversity, many problems in the analysis of high-throughput sequencing data are merely special cases of two general problems, signal detection and signal estimation. Here we adapt formally optimal solutions from signal processing theory to analyze signals of DNA sequence reads mapped to a genome. We describe DFilter, a detection algorithm that identifies regulatory features in ChIP-seq, DNase-seq and FAIRE-seq data more accurately than assay-specific algorithms. We also describe EFilter, an estimation algorithm that accurately predicts mRNA levels from as few as 1-2 histone profiles (R ∼0.9). Notably, the presence of regulatory motifs in promoters correlates more with histone modifications than with mRNA levels, suggesting that histone profiles are more predictive of cis-regulatory mechanisms. We show by applying DFilter and EFilter to embryonic forebrain ChIP-seq data that regulatory protein identification and functional annotation are feasible despite tissue heterogeneity. The mathematical formalism underlying our tools facilitates integrative analysis of data from virtually any sequencing-based functional profile.
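
    For a concrete sense of the signal-detection framing, here is a minimal sketch, not the published DFilter algorithm: a unit-norm linear kernel is correlated along a read-coverage track, and positions whose score clears a z-score threshold are called as candidate features. The coverage vector, kernel shape, and threshold are all invented for illustration.

        import numpy as np

        def matched_filter_scan(coverage: np.ndarray, kernel: np.ndarray) -> np.ndarray:
            """Correlate a zero-mean, unit-norm kernel along the coverage track."""
            k = kernel - kernel.mean()
            k /= np.linalg.norm(k)
            return np.correlate(coverage, k, mode="same")

        def call_peaks(score: np.ndarray, z: float = 4.0) -> np.ndarray:
            """Indices whose filter score exceeds a z-score threshold."""
            mu, sd = score.mean(), score.std()
            return np.flatnonzero(score > mu + z * sd)

        rng = np.random.default_rng(0)
        coverage = rng.poisson(5, 10_000).astype(float)   # background read counts
        coverage[4_000:4_200] += 30                       # one enriched (signal) region
        kernel = np.hanning(201)                          # smooth peak-shaped template
        peaks = call_peaks(matched_filter_scan(coverage, kernel))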

  12. Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.

    PubMed

    Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J

    2017-01-01

There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is achieving high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost materials and equipment. Results show that sample preparation and the handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images there is a trade-off between processing time and the errors contained in the database. Scaling up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient error-detection algorithms for more reliable replacement of manual interventions.

  13. Comprehensive process maps for synthesizing high density aluminum oxide-carbon nanotube coatings by plasma spraying for improved mechanical and wear properties

    NASA Astrophysics Data System (ADS)

    Keshri, Anup Kumar

Plasma sprayed aluminum oxide ceramic coating is widely used due to its outstanding wear, corrosion, and thermal shock resistance. But porosity is an integral feature of plasma sprayed coatings that severely degrades their properties. In this study, process maps were developed to obtain Al2O3-CNT composite coatings with the highest density (i.e., lowest porosity) and improved mechanical and wear properties. A process map is defined as a set of relationships that correlates a large number of plasma processing parameters to the coating properties. Carbon nanotubes (CNTs) were added as reinforcement to the Al2O3 coating to improve fracture toughness and wear resistance. Two novel powder processing approaches, viz. spray drying and chemical vapor growth, were adopted to disperse CNTs in Al2O3 powder. The degree of CNT dispersion via chemical vapor deposition (CVD) was superior to spray drying, but CVD could not synthesize powder in large amounts. Hence optimization of plasma processing parameters and process map development was limited to spray dried Al2O3 powder containing 0, 4 and 8 wt.% CNTs. An empirical model using a Pareto diagram was developed to link plasma processing parameters with the porosity of the coating. Splat morphology as a function of plasma processing parameters was also studied to understand its effect on mechanical properties. Addition of a mere 1.5 wt.% CNTs via the CVD technique showed ~27% and ~24% increases in elastic modulus and fracture toughness, respectively. Improved toughness was attributed to the combined effect of lower porosity and uniform dispersion of CNTs, which promoted toughening by CNT bridging, crack deflection and a strong CNT/Al2O3 interface. The Al2O3-8 wt.% CNT coating synthesized using spray dried powder showed a 73% improvement in fracture toughness when porosity was reduced from 4.7% to 3.0%. Wear resistance of all coatings at room and elevated temperatures (573 K, 873 K) improved with CNT addition and decreased porosity.

  14. Using Medical Text Extraction, Reasoning and Mapping System (MTERMS) to Process Medication Information in Outpatient Clinical Notes

    PubMed Central

    Zhou, Li; Plasek, Joseph M; Mahoney, Lisa M; Karipineni, Neelima; Chang, Frank; Yan, Xuemin; Chang, Fenny; Dimaggio, Dana; Goldman, Debora S.; Rocha, Roberto A.

    2011-01-01

Clinical information is often coded using different terminologies and is therefore not interoperable. Our goal is to develop a general natural language processing (NLP) system, called the Medical Text Extraction, Reasoning and Mapping System (MTERMS), which encodes clinical text using different terminologies and simultaneously establishes dynamic mappings between them. MTERMS applies a modular pipeline approach, flowing from a preprocessor through a semantic tagger, terminology mapper, context analyzer, and parser to structure inputted clinical notes. Evaluators manually reviewed 30 free-text and 10 structured outpatient clinical notes and compared them to MTERMS output. MTERMS achieved overall F-measures of 90.6 and 94.0 for free-text and structured notes, respectively, for medication and temporal information. The local medication terminology had 83.0% coverage, compared to RxNorm's 98.0% coverage, for free-text notes. 61.6% of mappings between the terminologies are exact matches. Capture of duration was significantly improved (91.7% vs. 52.5%) over systems in the third i2b2 challenge. PMID:22195230
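
    A minimal sketch of a modular NLP pipeline in the shape the abstract describes (preprocessor, semantic tagger, terminology mapper, and so on, each stage enriching a shared document object). The stage names follow the abstract, but the stub logic, the example note, and the single local-term-to-RxNorm mapping are placeholders, not MTERMS internals.

        from typing import Callable

        Stage = Callable[[dict], dict]

        def preprocessor(doc: dict) -> dict:
            doc["tokens"] = doc["text"].split()              # trivial tokenization stub
            return doc

        def semantic_tagger(doc: dict) -> dict:
            # Tag any token found in a tiny hand-made lexicon as a MEDICATION mention.
            lexicon = {"aspirin"}
            doc["tags"] = [(t, "MEDICATION") for t in doc["tokens"] if t in lexicon]
            return doc

        def terminology_mapper(doc: dict) -> dict:
            # Map local terms to a standard terminology (illustrative single entry).
            local_to_rxnorm = {"aspirin": "RxNorm:1191"}
            doc["codes"] = [local_to_rxnorm.get(t) for t, _ in doc["tags"]]
            return doc

        def run_pipeline(text: str, stages: list[Stage]) -> dict:
            doc = {"text": text}
            for stage in stages:                             # each stage enriches the doc
                doc = stage(doc)
            return doc

        result = run_pipeline("took aspirin daily",
                              [preprocessor, semantic_tagger, terminology_mapper])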

  15. Journey Mapping the User Experience

    ERIC Educational Resources Information Center

    Samson, Sue; Granath, Kim; Alger, Adrienne

    2017-01-01

    This journey-mapping pilot study was designed to determine whether journey mapping is an effective method to enhance the student experience of using the library by assessing our services from their point of view. Journey mapping plots a process or service to produce a visual representation of a library transaction--from the point at which the…

  16. Particle acceleration at shocks in the inner heliosphere

    NASA Astrophysics Data System (ADS)

    Parker, Linda Neergaard

This dissertation describes a study of particle acceleration at shocks via the diffusive shock acceleration mechanism. Results for particle acceleration at both quasi-parallel and quasi-perpendicular shocks are presented to address the question of whether there are sufficient particles in the solar wind thermal core, modeled as either a Maxwellian or kappa-distribution, to account for the observed accelerated spectrum. Results of accelerating the theoretical upstream distribution are compared to energetic observations at 1 AU. It is shown that the particle distribution in the solar wind thermal core is sufficient to explain the accelerated particle spectrum downstream of the shock, although the shape of the downstream distribution in some cases does not completely follow the theory of diffusive shock acceleration, indicating possible additional processes at work in the shock for these cases. Results show good to excellent agreement between the theoretical and observed spectral index for one third to one half of both the quasi-parallel and quasi-perpendicular shocks studied herein. Coronal mass ejections occurring during periods of high solar activity surrounding solar maximum can produce 3-8 or more shocks per day. During solar minimum, diffusive shock acceleration at shocks can generally be understood on the basis of single, independent shocks, and no other shock necessarily influences the diffusive shock acceleration mechanism. In this sense, diffusive shock acceleration during solar minimum may be regarded as Markovian. By contrast, diffusive shock acceleration at periods of high solar activity (e.g., solar maximum) involves frequent, closely spaced shocks, and particle acceleration at a given shock is influenced by preceding and following shocks. Therefore, diffusive shock acceleration of particles at solar maximum cannot be modeled as a single, independent shock, and the process is essentially non-Markovian.
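
    For reference, the standard diffusive-shock-acceleration prediction against which the observed spectral indices are compared (a textbook result, not a finding of the dissertation) is

        f(p) \propto p^{-q}, \qquad q = \frac{3r}{r - 1},

    where r = u_1/u_2 is the shock compression ratio; a strong gas shock (r = 4) gives q = 4, and weaker shocks give steeper spectra, so each observed index corresponds to an inferred compression ratio.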

  17. Higher Education Planning for a Strategic Goal with a Concept Mapping Process at a Small Private College

    ERIC Educational Resources Information Center

    Driscoll, Deborah P.

    2010-01-01

    Faculty, staff, and administrators at a small independent college determined that planning with a Concept Mapping process efficiently produced strategic thinking and action plans for the accomplishment of a strategic goal to expand experiential learning within the curriculum. One year into a new strategic plan, the college enjoyed enrollment…

  18. Consistency of different tropospheric models and mapping functions for precise GNSS processing

    NASA Astrophysics Data System (ADS)

    Graffigna, Victoria; Hernández-Pajares, Manuel; García-Rigo, Alberto; Gende, Mauricio

    2017-04-01

The TOmographic Model of the IONospheric electron content (TOMION) software implements simultaneous precise geodetic and ionospheric modeling, which can be used to test new approaches for real-time precise GNSS modeling (positioning, ionospheric and tropospheric delays, clock errors, among others). In this work, the software is used to estimate the Zenith Tropospheric Delay (ZTD) emulating real time, and its performance is evaluated through a comparative analysis with a built-in GIPSY estimation and the IGS final troposphere product, exemplified in a two-day experiment performed in East Australia. Furthermore, the troposphere mapping function was upgraded from the Niell to the Vienna approach. In a first scenario, only forward processing was activated and the coordinates of the wide-area GNSS network were loosely constrained, without fixing the carrier-phase ambiguities, for both reference and rover receivers. In a second one, precise point positioning (PPP) was implemented, iterating with a fixed coordinate set for the second day. Comparisons between TOMION, IGS, and GIPSY estimates have been performed; for the first, IGS clocks and orbits were considered. The agreement with GIPSY results appears to be 10 times better than with the IGS final ZTD product, despite IGS products having been used in the computations. Hence, the subsequent analysis was carried out with respect to the GIPSY computations. The estimates show a typical bias of 2 cm for the first strategy and of 7 mm for PPP in the worst cases. Moreover, the Vienna mapping function showed in general somewhat better agreement than the Niell one for both strategies. The RMS values were found to be around 1 cm for all studied situations, with slightly better performance for the Niell one. Further improvement could be achieved by computing the Vienna mapping function coefficients from ray tracing and by integrating comparative meteorological parameters.
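
    Both the Niell and Vienna mapping functions use the same continued-fraction form (Marini/Herring) and differ in how the a, b, c coefficients are obtained: Niell's depend on site latitude, height and day of year, Vienna's on numerical weather model data. A minimal sketch of that shared form, with placeholder coefficient values rather than published ones:

        import math

        def mapping_function(elev_rad: float, a: float, b: float, c: float) -> float:
            """Continued-fraction m(e), normalized so that m = 1 at zenith."""
            s = math.sin(elev_rad)
            num = 1 + a / (1 + b / (1 + c))
            den = s + a / (s + b / (s + c))
            return num / den

        # Slant delay = zenith delay x mapping function (hydrostatic part shown).
        ztd_m = 2.3                          # typical hydrostatic zenith delay, metres
        m15 = mapping_function(math.radians(15.0), a=1.2e-3, b=2.9e-3, c=62.6e-3)
        slant_delay_m = ztd_m * m15          # roughly 3.8x the zenith value at 15 degrees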

  19. Mapping QTL for Omega-3 Content in Hybrid Saline Tilapia.

    PubMed

    Lin, Grace; Wang, Le; Ngoh, Si Te; Ji, Lianghui; Orbán, Laszlo; Yue, Gen Hua

    2018-02-01

Tilapia is one of the most important foodfish species. The low omega-3 to omega-6 fatty acid ratio in freshwater tilapia meat is disadvantageous for human health. Increasing omega-3 content is an important breeding goal to raise the nutritional value of tilapia. However, conventional breeding to increase omega-3 content is difficult and slow. To accelerate the increase of omega-3 through marker-assisted selection (MAS), we conducted QTL mapping for fatty acid contents and profiles in an F2 family of saline tilapia generated by crossing red tilapia and Mozambique tilapia. The total omega-3 content in F2 hybrid tilapia was 2.5 ± 1.0 mg/g, higher than that (2.00 mg/g) in freshwater tilapia. Genotyping by sequencing (GBS) technology was used to discover and genotype SNP markers, and microsatellites were also genotyped. We constructed a linkage map with 784 markers (151 microsatellites and 633 SNPs). The linkage map was 2076.7 cM long and consisted of 22 linkage groups. Significant and suggestive QTL for total lipid content were mapped on six linkage groups (LG3, -4, -6, -8, -13, and -15) and explained 5.8-8.3% of the phenotypic variance. QTL for omega-3 fatty acids were located on four LGs (LG11, -18, -19, and -20) and explained 5.0 to 7.5% of the phenotypic variance. Our data suggest that total lipid and omega-3 fatty acid content are determined by multiple genes in tilapia. The markers flanking the QTL for omega-3 fatty acids can be used in MAS to accelerate the genetic improvement of these traits in salt-tolerant tilapia.
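
    To make the mapping step concrete, here is a minimal sketch of a single-marker QTL scan: the phenotype (say, omega-3 content) is regressed on the genotype code at each marker and the fit is expressed as a LOD score. The simulated genotypes, marker count, and planted QTL are invented; this is not the study's actual linkage analysis.

        import numpy as np

        def lod_scan(genotypes: np.ndarray, phenotype: np.ndarray) -> np.ndarray:
            """genotypes: (n_individuals, n_markers) coded 0/1/2; returns one LOD per marker."""
            n = len(phenotype)
            ss_tot = ((phenotype - phenotype.mean()) ** 2).sum()   # null-model RSS
            lods = np.empty(genotypes.shape[1])
            for j in range(genotypes.shape[1]):
                g = genotypes[:, j]
                slope, intercept = np.polyfit(g, phenotype, 1)     # marker regression
                resid = phenotype - (slope * g + intercept)
                ss_res = (resid ** 2).sum()
                lods[j] = (n / 2) * np.log10(ss_tot / ss_res)      # LOD from RSS ratio
            return lods

        rng = np.random.default_rng(1)
        geno = rng.integers(0, 3, size=(150, 200))                 # 150 F2 fish, 200 markers
        pheno = 2.5 + 0.4 * geno[:, 57] + rng.normal(0, 1.0, 150)  # planted QTL at marker 57
        lod = lod_scan(geno, pheno)                                # peaks near marker 57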

  20. The IBA Rhodotron: an industrial high-voltage high-powered electron beam accelerator for polymers radiation processing

    NASA Astrophysics Data System (ADS)

    Van Lancker, Marc; Herer, Arnold; Cleland, Marshall R.; Jongen, Yves; Abs, Michel

    1999-05-01

The Rhodotron is a high-voltage, high-power electron beam accelerator based on a design concept first proposed in 1989 by J. Pottier of the French atomic energy agency, Commissariat à l'Energie Atomique (CEA). In December 1991, the Belgian particle accelerator manufacturer Ion Beam Applications s.a. (IBA) entered into an exclusive agreement with the CEA to develop and industrialize the Rhodotron. Electron beams have long been used as the preferred method to cross-link a variety of polymers, either in their bulk state or in their final form. Used extensively in the wire and cable industry to toughen insulating jackets, electron beam-treated plastics can demonstrate improved tensile and impact strength, greater abrasion resistance, increased temperature resistance and dramatically improved fire retardation. Electron beams are used to selectively cross-link or degrade a wide range of polymers in resin pellet form. Electron beams are also used for rapid curing of advanced composites, for cross-linking of floor-heating and sanitary pipes, and for cross-linking of formed plastic parts. Other applications include in-house and contract medical device sterilization, food irradiation in both electron and X-ray modes, pulp processing, electron beam doping of semiconductors, gemstone coloration and general irradiation research. IBA currently markets three models of the Rhodotron, all capable of 10 MeV and alternate beam energies from 3 MeV upwards. The Rhodotron models TT100, TT200 and TT300 are typically specified with guaranteed beam powers of 35, 80 and 150 kW, respectively. Founded in 1986 as a spin-off of the Cyclotron Research Center at the University of Louvain (UCL) in Belgium, IBA is a pioneer in accelerator design for industrial-scale production.