Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-08
... Information Collection: Comment Request; The Multifamily Accelerated Processing Guide AGENCY: Office of the... also lists the following information: Title of Proposal: Multifamily Accelerated Processing Guide (MAP...-0541. Description of the need for the information and proposed use: Multifamily Accelerated Processing...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... Administration (FHA): Multifamily Accelerated Processing (MAP)--Lender and Underwriter Eligibility Criteria and....gov . FOR FURTHER INFORMATION CONTACT: Terry W. Clark, Office of Multifamily Development, Office of... qualifications could underwrite loans involving more complex multifamily housing programs and transactions. II...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, A; Rangaraj, D; Perez-Andujar, A
2016-06-15
Purpose: This work’s objective is to determine the overlap of processes, in terms of sub-processes and time, between acceptance testing and commissioning of a conventional medical linear accelerator and to evaluate the time saved by consolidating the two processes. Method: A process map for acceptance testing of medical linear accelerators was created from vendor documentation (Varian and Elekta). Using AAPM TG-106 and in-house commissioning procedures, a process map was created for commissioning of said accelerators. The time to complete each sub-process in each process map was evaluated. Redundancies in the processes were found and the time spent on each was calculated. Results: Mechanical testing significantly overlaps between the two processes - redundant work here amounts to 9.5 hours. Many non-scanning beam dosimetry tests overlap, resulting in another 6 hours of overlap. Beam scanning overlaps somewhat - acceptance tests include evaluating PDDs and multiple profiles but for only one field size, while commissioning beam scanning includes multiple field sizes and depths of profiles. This overlap results in another 6 hours of rework. Absolute dosimetry, field outputs, and end-to-end tests are not done at all in acceptance testing. Finally, all imaging tests done in acceptance are repeated in commissioning, resulting in about 8 hours of rework. The total time overlap between the two processes is about 30 hours. Conclusion: The process mapping done in this study shows that there are no tests done in acceptance testing that are not also recommended for commissioning. This results in about 30 hours of redundant work when preparing a conventional linear accelerator for clinical use. Considering these findings in the context of the 5000 linacs in the United States, consolidating acceptance testing and commissioning would have allowed for the treatment of an additional 25000 patients using no additional resources.
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria
2016-04-01
The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of the stratigraphic seismic response at different periods, grid-solving the calibrated Emul-spectra model. In addition, the spectral topographic amplification is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of numerical simulations related to isolated reliefs using GIS morphometric data. In this way, different sets of seismic response maps are developed, on which maps of design acceleration response spectra are also defined by means of an enveloping technique.
cudaMap: a GPU accelerated program for gene expression connectivity mapping.
McArt, Darragh G; Bankhead, Peter; Dunne, Philip D; Salto-Tellez, Manuel; Hamilton, Peter; Zhang, Shu-Dong
2013-10-11
Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.
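The core computation that cudaMap parallelizes is a rank-based connection score between a query gene signature and each reference expression profile. A minimal Python sketch of one common form of that statistic follows; the function name and the normalization are illustrative assumptions, not cudaMap's actual implementation.

```python
def connection_score(reference_ranks, signature):
    """Rank-based connection score between one reference profile and a
    query gene signature, in the spirit of the sscMap statistic.

    reference_ranks: dict gene -> signed rank in the reference profile
                     (most up-regulated = +N, most down-regulated = -N).
    signature:       dict gene -> +1 (up) or -1 (down) in the query.
    """
    raw = sum(sign * reference_ranks.get(gene, 0.0)
              for gene, sign in signature.items())
    # Normalize by the maximum attainable score so results lie in [-1, 1].
    n = len(reference_ranks)
    max_score = sum(range(n, n - len(signature), -1))
    return raw / max_score

# Toy example: 5-gene reference profile, 2-gene signature.
ranks = {"A": 5, "B": -4, "C": 3, "D": -2, "E": 1}
sig = {"A": +1, "B": -1}              # expect a strong positive connection
print(connection_score(ranks, sig))   # (5*1 + (-4)*(-1)) / (5+4) = 1.0
```

A GPU implementation evaluates this kind of score for thousands of reference profiles, and for the many random signatures needed for significance testing, in parallel; that is where the reported speedups come from.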
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Probation. 200.1510 Section 200.1510 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Multifamily Accelerated Processing (MAP): MAP Lender Quality...
Managing mapping data using commercial data base management software.
Elassal, A.A.
1985-01-01
Electronic computers are involved in almost every aspect of the map making process. This involvement has become so thorough that it is practically impossible to find a recently developed process or device in the mapping field which does not employ digital processing in some form or another. This trend, which has been evolving over two decades, is accelerated by the significant improvements in capability, reliability, and cost-effectiveness of electronic devices. Computerized mapping processes and devices share a common need for machine readable data. Integrating groups of these components into automated mapping systems requires careful planning for data flow amongst them. Exploring the utility of commercial data base management software to assist in this task is the subject of this paper. -Author
Geophysical Interpretation of Venus Gravity Data
NASA Technical Reports Server (NTRS)
Reasenberg, R. D.
1985-01-01
The subsurface mass distribution of Venus was investigated through the analysis of data from the Pioneer Venus Orbiter (PVO). In particular, the Doppler tracking data were used to map the gravitational potential. These were compared to the topographic data from the PVO radar (ORAD). In order to obtain an unbiased comparison, the topography data obtained from PVO-ORAD were filtered to introduce distortions which are the same as those of the gravity models. Both the gravity and filtered topography maps are derived by two-stage processes with a common second stage. In the first stage, the topography was used to calculate a corresponding spacecraft acceleration under the assumptions that the topography has a uniform given density and no compensation. In the second stage, the acceleration measurements found in the first stage were passed through a linear inverter to yield maps of gravity and topography. Because these maps are the result of the same inversion process, they contain the same distortion; a comparison between them is unbiased to first order.
NASA Technical Reports Server (NTRS)
Hakimzadeh, Roshanak; McPherson, Kevin M.; Matisak, Brian P.; Wagar, William O.
1997-01-01
A knowledge of the quasi-steady acceleration environment on the NASA Space Shuttle Orbiter is of particular importance for materials processing experiments which are limited by slow diffusive processes. The quasi-steady (less than 1 Hz) acceleration environment on STS-73 (USML-2) was measured using the Orbital Acceleration Research Experiment (OARE) accelerometer. One of the facilities flown on USML-2 was the Crystal Growth Furnace (CGF), which was used by several Principal Investigators (PIs) to grow crystals. In this paper the OARE data mapped to the sample melt location within this furnace are presented. The ratio of the axial to radial components of the quasi-steady acceleration at the melt site is also presented. Effects of Orbiter attitude on the acceleration data are discussed.
NASA Technical Reports Server (NTRS)
Wobber, F. J. (Principal Investigator); Martin, K. R.; Amato, R. V.; Leshendok, T.
1973-01-01
The author has identified the following significant results. The applications of ERTS-1 imagery for geological fracture mapping regardless of season have been repeatedly confirmed. The enhancement provided by a differential cover of snow increases the number and length of fracture-lineaments which can be detected with ERTS-1 data and accelerates the fracture mapping process for a variety of practical applications. The geological mapping benefits of the program will be realized in geographic areas where data are most needed - complex glaciated terrain and areas of deep residual soils. ERTS-1-derived fracture-lineament maps which provide detail well in excess of existing geological maps are now available in the Massachusetts-Connecticut area. The large quantity of new data provided by ERTS-1 may accelerate and improve field mapping now in progress in the area. Numerous other user groups have requested data on the techniques. This represents a major change in operating philosophy for groups who to date judged that snow obscured geological detail.
GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries
NASA Astrophysics Data System (ADS)
Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh
2018-04-01
Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures such as porosity from x-ray CT images. Generating the depth map along a particular direction for the given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems using a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators as well as handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely-tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (>60 frames per second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
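As a rough illustration of the last stage described above, the sketch below computes a depth map from a voxelized solid by summing occupied voxels along axis-aligned rays; the GPU version assigns one thread per ray and handles arbitrary view directions, which this numpy sketch does not attempt.

```python
import numpy as np

def depth_map(voxels, voxel_size):
    """Material thickness seen by each axis-aligned ray through the part.

    voxels:     3-D boolean array, True where the CAD model is solid.
    voxel_size: voxel edge length (same unit as the returned depths).

    For rays parallel to the last axis, the ray-casting integral reduces
    to a sum over that axis.
    """
    return voxels.sum(axis=-1) * voxel_size

# Toy solid: a 64^3 grid containing a centered sphere of radius 20 voxels.
n = 64
i, j, k = np.mgrid[:n, :n, :n]
sphere = (i - n / 2) ** 2 + (j - n / 2) ** 2 + (k - n / 2) ** 2 < 20 ** 2
depth = depth_map(sphere, voxel_size=0.5)
print(depth.shape, depth.max())   # (64, 64), about 20.0 (40 voxels * 0.5)
```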
Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin
2015-01-15
Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin
2014-01-01
Background Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s) To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... applications for FHA multifamily mortgage insurance, which generally involve the refinance, purchase, new... to date through direct instructions to FHA-approved lenders under a MAP Guide. Given its experience... mortgage insurance programs. Based on HUD's experience to date with MAP, this proposed rule strives not...
cudaMap: a GPU accelerated program for gene expression connectivity mapping
2013-01-01
Background Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take > 2h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) to greatly reduce processing times for connectivity mapping. Results cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating candidate therapeutics discovery with high throughput. We are able to demonstrate dramatic speed differentials between GPU assisted performance and CPU executions as the computational load increases for high accuracy evaluation of statistical significance. Conclusion Emerging ‘omics’ technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution in the discovery of candidate therapeutics by enabling speedy execution of heavy duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap. PMID:24112435
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-22
... Proposed Information Collection to OMB; Lender Qualifications for Multifamily Accelerated Processing (MAP) AGENCY: Office of the Chief Information Officer, HUD. ACTION: Notice. SUMMARY: The proposed information...-processing plan that will take substantially less processing time than traditional processing. DATES...
NASA Astrophysics Data System (ADS)
Widyaningrum, E.; Gorte, B. G. H.
2017-05-01
LiDAR data acquisition is recognized as one of the fastest solutions for providing the basis data of large-scale topographic base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme for accelerating the provision of large-scale topographic base maps by the Geospatial Information Agency in Indonesia. As a progressively advancing technology, Geographic Information Systems (GIS) open possibilities for automatic processing and analysis of geospatial data. Considering further needs for spatial data sharing and integration, one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed in terms of completeness, correctness, and quality measures derived from a confusion matrix.
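The completeness, correctness, and quality measures are standard functions of the confusion matrix between extracted and reference map features; a minimal sketch with the usual definitions (assumed here, since the abstract does not spell them out):

```python
def map_quality(tp, fp, fn):
    """Per-feature quality measures for an automatically extracted map,
    from confusion-matrix counts: tp = reference features correctly
    extracted, fp = spurious extractions, fn = missed features."""
    completeness = tp / (tp + fn)   # share of reference features found
    correctness = tp / (tp + fp)    # share of extractions that are real
    quality = tp / (tp + fp + fn)   # combined measure
    return completeness, correctness, quality

print(map_quality(tp=90, fp=10, fn=20))   # (0.818..., 0.9, 0.75)
```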
NASA Technical Reports Server (NTRS)
Strangeway, R. J.; Crawford, G. K.
1995-01-01
Plasma waves observed in the VLF range upstream of planetary bow shocks not only modify the particle distributions, but also provide important information about the acceleration processes that occur at the bow shock. Electron plasma oscillations observed near the tangent field line in the electron foreshock are generated by electrons reflected at the bow shock through a process that has been referred to as Fast Fermi acceleration. Fast Fermi acceleration is the same as shock-drift acceleration, which is one of the mechanisms by which ions are energized at the shock. We have generated maps of the VLF emissions upstream of the Venus bow shock, using these maps to infer properties of the shock energization processes. We find that the plasma oscillations extend along the field line up to a distance that appears to be controlled by the shock scale size, implying that shock curvature restricts the flux and energy of reflected electrons. We also find that ion acoustic waves are observed in the ion foreshock, but at Venus these emissions are not detected near the ULF foreshock boundary. Through analogy with terrestrial ion observations, this implies that the ion acoustic waves are not generated by ion beams, but are instead generated by diffuse ion distributions found deep within the ion foreshock. However, since the shock is much smaller at Venus, and there is no magnetosphere, we might expect ion distributions within the ion foreshock to be different than at the Earth. Mapping studies of the terrestrial foreshock similar to those carried out at Venus appear to be necessary to determine if the inferences drawn from Venus data are applicable to other foreshocks.
BowMapCL: Burrows-Wheeler Mapping on Multiple Heterogeneous Accelerators.
Nogueira, David; Tomas, Pedro; Roma, Nuno
2016-01-01
The computational demand of exact-search procedures has pressed the exploitation of parallel processing accelerators to reduce the execution time of many applications. However, this often imposes strict restrictions in terms of the problem size and implementation efforts, mainly due to their possibly distinct architectures. To circumvent this limitation, a new exact-search alignment tool (BowMapCL) based on the Burrows-Wheeler Transform and FM-Index is presented. Contrasting to other alternatives, BowMapCL is based on a unified implementation using OpenCL, allowing the exploitation of multiple and possibly different devices (e.g., NVIDIA, AMD/ATI, and Intel GPUs/APUs). Furthermore, to efficiently exploit such heterogeneous architectures, BowMapCL incorporates several techniques to promote its performance and scalability, including multiple buffering, work-queue task-distribution, and dynamic load-balancing, together with index partitioning, bit-encoding, and sampling. When compared with state-of-the-art tools, the attained results showed that BowMapCL (using a single GPU) is 2 × to 7.5 × faster than mainstream multi-threaded CPU BWT-based aligners, like Bowtie, BWA, and SOAP2; and up to 4 × faster than the best performing state-of-the-art GPU implementations (namely, SOAP3 and HPG-BWT). When multiple and completely distinct devices are considered, BowMapCL efficiently scales the offered throughput, ensuring a convenient load-balance of the involved processing in the several distinct devices.
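The exact-search core of BWT/FM-index aligners such as BowMapCL is backward search. A minimal CPU-only Python sketch follows; the naive index construction and rank queries are for clarity only, whereas real tools use the sampled, bit-encoded structures the abstract mentions.

```python
def bwt_index(text):
    """Burrows-Wheeler transform plus the C table needed for FM-index
    backward search. O(n^2 log n) construction: fine for a sketch,
    unusable for genomes."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    bwt = "".join(rot[-1] for rot in rotations)
    # C[c] = number of characters in the text strictly smaller than c.
    C, total = {}, 0
    for c in sorted(set(bwt)):
        C[c] = total
        total += bwt.count(c)
    return bwt, C

def occ(bwt, c, i):
    """Occurrences of c in bwt[:i] (a real index precomputes samples)."""
    return bwt[:i].count(c)

def backward_search(bwt, C, pattern):
    """Count exact occurrences of pattern in the indexed text."""
    lo, hi = 0, len(bwt)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ(bwt, c, lo)
        hi = C[c] + occ(bwt, c, hi)
        if lo >= hi:
            return 0
    return hi - lo

bwt, C = bwt_index("GATTACAGATTACA")
print(backward_search(bwt, C, "ATTA"))   # 2
```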
GPU-BSM: A GPU-Based Tool to Map Bisulfite-Treated Reads
Manconi, Andrea; Orro, Alessandro; Manca, Emanuele; Armano, Giuliano; Milanesi, Luciano
2014-01-01
Cytosine DNA methylation is an epigenetic mark implicated in several biological processes. Bisulfite treatment of DNA is acknowledged as the gold standard technique to study methylation. This technique introduces changes in the genomic DNA by converting cytosines to uracils while 5-methylcytosines remain nonreactive. During PCR amplification 5-methylcytosines are amplified as cytosine, whereas uracils and thymines as thymine. To detect the methylation levels, reads treated with the bisulfite must be aligned against a reference genome. Mapping these reads to a reference genome represents a significant computational challenge mainly due to the increased search space and the loss of information introduced by the treatment. To deal with this computational challenge we devised GPU-BSM, a tool based on modern Graphics Processing Units. Graphics Processing Units are hardware accelerators that are increasingly being used successfully to accelerate general-purpose scientific applications. GPU-BSM is a tool able to map bisulfite-treated reads from whole genome bisulfite sequencing and reduced representation bisulfite sequencing, and to estimate methylation levels, with the goal of detecting methylation. Due to the massive parallelization obtained by exploiting graphics cards, GPU-BSM aligns bisulfite-treated reads faster than other cutting-edge solutions, while outperforming most of them in terms of unique mapped reads. PMID:24842718
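The loss of information mentioned above comes from in-silico C-to-T conversion: reads and reference are both converted so that a standard index can align them, and methylation is called afterwards by comparing the originals. A toy sketch (function names are illustrative, not GPU-BSM's API):

```python
def convert_c_to_t(read, reference):
    """In-silico conversion used by bisulfite aligners: align a C->T
    converted read against a C->T converted reference (a G->A pair
    handles the opposite strand)."""
    return read.replace("C", "T"), reference.replace("C", "T")

def call_methylation(read, ref_window):
    """After alignment: a reference C still read as C was protected
    (methylated); a reference C read as T was converted (unmethylated)."""
    return ["methylated" if r == "C" else "unmethylated"
            for r, g in zip(read, ref_window) if g == "C"]

read = "TTGCGT"   # sequenced, bisulfite-treated read
ref  = "TTGCGC"   # matching genomic window
print(convert_c_to_t(read, ref))    # ('TTGTGT', 'TTGTGT') -> exact match
print(call_methylation(read, ref))  # ['methylated', 'unmethylated']
```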
Asymmetric neighborhood functions accelerate ordering process of self-organizing maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ota, Kaiichiro; Aoki, Takaaki; Kurata, Koji
2011-02-15
A self-organizing map (SOM) algorithm can generate a topographic map from a high-dimensional stimulus space to a low-dimensional array of units. Because a topographic map preserves neighborhood relationships between the stimuli, the SOM can be applied to certain types of information processing such as data visualization. During the learning process, however, topological defects frequently emerge in the map. The presence of defects tends to drastically slow down the formation of a globally ordered topographic map. To remove such topological defects, it has been reported that an asymmetric neighborhood function is effective, but only in the simple case of mapping one-dimensional stimuli to a chain of units. In this paper, we demonstrate that even when high-dimensional stimuli are used, the asymmetric neighborhood function is effective for both artificial and real-world data. Our results suggest that applying the asymmetric neighborhood function to the SOM algorithm improves the reliability of the algorithm. In addition, it enables processing of complicated, high-dimensional data by using this algorithm.
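A minimal sketch of an online SOM update on a 1-D chain of units with an asymmetric neighborhood, in the spirit of the method described; the stretched-Gaussian kernel and the parameter names below are assumptions for illustration, not the exact form used in the paper.

```python
import numpy as np

def asymmetric_neighborhood(dist, sigma, beta, direction):
    """Gaussian neighborhood stretched on one side of the winner: units on
    the 'direction' side see a kernel wider by a factor beta, which helps
    push topological defects out of the map. beta = 1 recovers the
    symmetric SOM kernel."""
    width = np.where(np.sign(dist) == direction, sigma * beta, sigma)
    return np.exp(-dist ** 2 / (2.0 * width ** 2))

def som_step(weights, x, lr, sigma, beta=3.0, direction=+1):
    """One online update of a chain of units (weights: n_units x dim)."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    dist = np.arange(len(weights)) - winner      # signed array distance
    h = asymmetric_neighborhood(dist, sigma, beta, direction)
    weights += lr * h[:, None] * (x - weights)
    return weights

rng = np.random.default_rng(0)
w = rng.random((20, 2))               # 20-unit chain, 2-D stimulus space
for x in rng.random((2000, 2)):       # uniformly distributed stimuli
    w = som_step(w, x, lr=0.1, sigma=2.0)
print(w[:3])                          # ordered weights after training
```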
Direct and accelerated parameter mapping using the unscented Kalman filter.
Zhao, Li; Feng, Xue; Meyer, Craig H
2016-05-01
To accelerate parameter mapping using a new paradigm that combines image reconstruction and model regression as a parameter state-tracking problem. In T2 mapping, the T2 map is first encoded in parameter space by multi-TE measurements and then encoded by Fourier transformation with readout/phase encoding gradients. Using a state transition function and a measurement function, the unscented Kalman filter can describe T2 mapping as a dynamic system and directly estimate the T2 map from the k-space data. The proposed method was validated with a numerical brain phantom and volunteer experiments with a multiple-contrast spin echo sequence. Its performance was compared with a conjugate-gradient nonlinear inversion method at undersampling factors of 2 to 8. An accelerated pulse sequence was developed based on this method to achieve prospective undersampling. Compared with the nonlinear inversion reconstruction, the proposed method had higher precision, improved structural similarity and reduced normalized root mean squared error, with acceleration factors up to 8 in numerical phantom and volunteer studies. This work describes a new perspective on parameter mapping by state tracking. The unscented Kalman filter provides a highly accelerated and efficient paradigm for T2 mapping. © 2015 Wiley Periodicals, Inc.
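As a simplified, image-space analog of the idea (the paper's measurement function operates on undersampled k-space data), the sketch below tracks the state x = [M0, T2] of a single voxel across multi-TE measurements with a generic unscented Kalman filter. The use of the filterpy library and all tuning values are assumptions for illustration.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

TEs = np.arange(10.0, 110.0, 10.0)          # echo times [ms]
rng = np.random.default_rng(1)
true_M0, true_T2 = 1.0, 80.0
signal = true_M0 * np.exp(-TEs / true_T2) + 0.01 * rng.standard_normal(len(TEs))

def fx(x, dt):
    return x                                 # M0 and T2 are static states

current_te = {"val": TEs[0]}                 # measurement model varies per echo
def hx(x):
    M0, T2 = x
    return np.array([M0 * np.exp(-current_te["val"] / max(T2, 1e-3))])

points = MerweScaledSigmaPoints(n=2, alpha=0.3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=1.0, hx=hx, fx=fx, points=points)
ukf.x = np.array([0.5, 50.0])                # rough initial guess
ukf.P = np.diag([0.5, 1000.0])               # large uncertainty on T2
ukf.R = np.array([[1e-4]])
ukf.Q = np.eye(2) * 1e-8

for te, z in zip(TEs, signal):
    current_te["val"] = te
    ukf.predict()
    ukf.update(np.array([z]))
print(ukf.x)   # estimate of [M0, T2]; should approach [1.0, 80.0]
```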
Phase quality map based on local multi-unwrapped results for two-dimensional phase unwrapping.
Zhong, Heping; Tang, Jinsong; Zhang, Sen
2015-02-01
The efficiency of a phase unwrapping algorithm and the reliability of the corresponding unwrapped result are two key problems in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) or interferometric synthetic aperture sonar (InSAS) data. In this paper, a new phase quality map is designed and implemented in a graphics processing unit (GPU) environment, which greatly accelerates the unwrapping process of the quality-guided algorithm and enhances the correctness of the unwrapped result. In a local wrapped phase window, the center point is selected as the reference point, and then two unwrapped results are computed by integrating in two different simple ways. After the two local unwrapped results are computed, the total difference between the two unwrapped results is regarded as the phase quality value of the center point. In order to accelerate the computing process of the newly proposed quality map, we have implemented it in a GPU environment. The wrapped phase data are first uploaded to device memory, and then the kernel function is called in the device to compute the phase quality in parallel by blocks of threads. Unwrapping tests performed on simulated and real InSAS data confirm the accuracy and efficiency of the proposed method.
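A serial sketch of the proposed quality value: unwrap a small window around each pixel along two different simple integration paths from the center, and take their total disagreement (small values mark reliable pixels, which the quality-guided algorithm visits first; the GPU version evaluates windows in parallel blocks of threads):

```python
import numpy as np

def wrap(d):
    """Wrap phase differences into (-pi, pi]."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def unwrap_window(win, row_first=True):
    """Unwrap a square window from its center pixel by summing wrapped
    differences, either row-then-column or column-then-row."""
    if not row_first:
        return unwrap_window(win.T, row_first=True).T
    h = win.shape[0] // 2
    out = np.zeros_like(win)
    out[h, h] = win[h, h]
    for j in range(h + 1, win.shape[1]):        # center row, rightwards
        out[h, j] = out[h, j - 1] + wrap(win[h, j] - win[h, j - 1])
    for j in range(h - 1, -1, -1):              # center row, leftwards
        out[h, j] = out[h, j + 1] + wrap(win[h, j] - win[h, j + 1])
    for j in range(win.shape[1]):               # columns, away from center row
        for i in range(h + 1, win.shape[0]):
            out[i, j] = out[i - 1, j] + wrap(win[i, j] - win[i - 1, j])
        for i in range(h - 1, -1, -1):
            out[i, j] = out[i + 1, j] + wrap(win[i, j] - win[i + 1, j])
    return out

def quality_value(win):
    """Total disagreement of the two local unwrapped results."""
    return np.abs(unwrap_window(win, True) - unwrap_window(win, False)).sum()

# Smooth 5x5 window: the two paths agree, so the quality value is ~0.
i, j = np.mgrid[:5, :5]
print(quality_value(wrap(0.5 * i + 0.3 * j)))   # ~0 -> reliable pixel
```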
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, G.; Bonito, L.; Lampasi, A.; Revellino, P.; Guerriero, L.; Sappa, G.; Guadagno, F. M.
2015-06-01
SiSeRHMap is a computerized methodology capable of drawing up prediction maps of seismic response. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code-architecture composed of five interdependent modules. A GIS (Geographic Information System) Cubic Model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A metamodeling process confers a hybrid nature to the methodology. In this process, the one-dimensional linear equivalent analysis produces acceleration response spectra of shear wave velocity-thickness profiles, defined as trainers, which are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated Evolutionary Algorithm (EA) and the Levenberg-Marquardt Algorithm (LMA) as the final optimizer. In the final step, the GCM Maps Executor module produces a serial map-set of a stratigraphic seismic response at different periods, grid-solving the calibrated Spectra model. In addition, the spectral topographic amplification is also computed by means of a numerical prediction model. This latter is built to match the results of the numerical simulations related to isolated reliefs using GIS topographic attributes. In this way, different sets of seismic response maps are developed, on which maps of seismic design response spectra are also defined by means of an enveloping technique.
A hybrid short read mapping accelerator
2013-01-01
Background The rapid growth of short read datasets poses a new challenge to the short read mapping problem in terms of sensitivity and execution speed. Existing methods often use a restrictive error model for computing the alignments to improve speed, whereas more flexible error models are generally too slow for large-scale applications. A number of short read mapping software tools have been proposed. However, designs based on hardware are relatively rare. Field programmable gate arrays (FPGAs) have been successfully used in a number of specific application areas, such as the DSP and communications domains due to their outstanding parallel data processing capabilities, making them a competitive platform to solve problems that are “inherently parallel”. Results We present a hybrid system for short read mapping utilizing both FPGA-based hardware and CPU-based software. The computation intensive alignment and the seed generation operations are mapped onto an FPGA. We present a computationally efficient, parallel block-wise alignment structure (Align Core) to approximate the conventional dynamic programming algorithm. The performance is compared to the multi-threaded CPU-based GASSST and BWA software implementations. For single-end alignment, our hybrid system achieves faster processing speed than GASSST (with a similar sensitivity) and BWA (with a higher sensitivity); for pair-end alignment, our design achieves a slightly worse sensitivity than that of BWA but has a higher processing speed. Conclusions This paper shows that our hybrid system can effectively accelerate the mapping of short reads to a reference genome based on the seed-and-extend approach. The performance comparison to the GASSST and BWA software implementations under different conditions shows that our hybrid design achieves a high degree of sensitivity and requires less overall execution time with only modest FPGA resource utilization. Our hybrid system design also shows that the performance bottleneck for the short read mapping problem can be changed from the alignment stage to the seed generation stage, which provides an additional requirement for the future development of short read aligners. PMID:23441908
Accelerated transport and growth with symmetrized dynamics
NASA Astrophysics Data System (ADS)
Merikoski, Juha
2013-12-01
In this paper we consider a model of accelerated dynamics with the rules modified from those of the recently proposed [Dong et al., Phys. Rev. Lett. 109, 130602 (2012), 10.1103/PhysRevLett.109.130602] accelerated exclusion process (AEP) such that particle-vacancy symmetry is restored to facilitate a mapping to a solid-on-solid growth model in 1+1 dimensions. In addition to kicking a particle ahead of the moving particle, as in the AEP, in our model another particle from behind is drawn, provided it is within the "distance of interaction" denoted by ℓmax. We call our model the doubly accelerated exclusion process (DAEP). We observe accelerated transport and interface growth and widening of the cluster size distribution for cluster sizes above ℓmax, when compared with the ordinary totally asymmetric exclusion process (TASEP). We also characterize the difference between the TASEP, AEP, and DAEP by computing a "staggered" order parameter, which reveals the local order in the steady state. This order in part explains the behavior of the particle current as a function of density. The differences of the steady states are also reflected by the behavior of the temporal and spatial correlation functions in the interface picture.
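A toy Monte Carlo sketch of such a doubly accelerated exclusion process on a ring, under one plausible reading of the rules summarized above (exactly how the kicked and drawn particles are selected within ℓmax is an assumption here):

```python
import numpy as np

rng = np.random.default_rng(0)

def daep_step(occ, l_max):
    """One attempted move: an ordinary TASEP hop, then the nearest particle
    ahead of the mover is kicked one site forward and the nearest particle
    behind (within l_max) is drawn one site forward, exclusion permitting."""
    L = len(occ)
    i = rng.integers(L)
    if not (occ[i] and not occ[(i + 1) % L]):
        return                                  # no hop possible
    occ[i], occ[(i + 1) % L] = 0, 1             # ordinary TASEP hop
    for direction in (+1, -1):                  # +1: kick ahead, -1: draw behind
        start = (i + 1) % L if direction == +1 else i
        for d in range(1, l_max + 1):
            k = (start + direction * d) % L
            if occ[k]:
                t = (k + 1) % L                 # secondary particle hops forward
                if not occ[t]:
                    occ[k], occ[t] = 0, 1
                break                           # only the nearest one moves

occ = np.zeros(100, dtype=int)
occ[rng.choice(100, size=30, replace=False)] = 1
for _ in range(100_000):
    daep_step(occ, l_max=3)
print(occ.sum())   # particle number is conserved: 30
```

The particle current and the "staggered" order parameter can then be measured on the steady state reached by such runs.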
Experimental Results from a Resonant Dielectric Laser Accelerator
NASA Astrophysics Data System (ADS)
Yoder, Rodney; McNeur, Joshua; Sozer, Esin; Travish, Gil; Hazra, Kiran Shankar; Matthews, Brian; England, Joel; Peralta, Edgar; Wu, Ziran
2015-04-01
Laser-powered accelerators have the potential to operate with very large accelerating gradients (~ GV/m) and represent a path toward extremely compact colliders and accelerator technology. Optical-scale laser-powered devices based on field-shaping structures (known as dielectric laser accelerators, or DLAs) have been described and demonstrated recently. Here we report on the first experimental results from the Micro-Accelerator Platform (MAP), a DLA based on a slab-symmetric resonant optical-scale structure. As a resonant (rather than near-field) device, the MAP is distinct from other DLAs. Its cavity resonance enhances its accelerating field relative to the incoming laser fields, which are coupled efficiently through a diffractive optic on the upper face of the device. The MAP demonstrated modest accelerating gradients in recent experiments, in which it was powered by a Ti:Sapphire laser well below its breakdown limit. More detailed results and some implications for future developments will be discussed. Supported in part by the U.S. Defense Threat Reduction Agency (UCLA); U.S. Dept of Energy (SLAC); and DARPA (SLAC).
Access to Data Accelerates Innovation and Adoption of Geothermal Technologies
NREL News
2018-05-18
Development of Maximum Considered Earthquake Ground Motion Maps
Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.
2000-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.
NASA Astrophysics Data System (ADS)
Bruhwiler, D. L.; Cary, J. R.; Shasharina, S.
1998-04-01
The MAPA accelerator modeling code symplectically advances the full nonlinear map, tangent map and tangent map derivative through all accelerator elements. The tangent map and its derivative are nonlinear generalizations of Brown's first- and second-order matrices (K. Brown, SLAC-75, Rev. 4 (1982), pp. 107-118), and they are valid even near the edges of the dynamic aperture, which may be beyond the radius of convergence for a truncated Taylor series. In order to avoid truncation of the map and its derivatives, the Hamiltonian is split into pieces for which the map can be obtained analytically. Yoshida's method (H. Yoshida, Phys. Lett. A 150 (1990), pp. 262-268) is then used to obtain a symplectic approximation to the map, while the tangent map and its derivative are appropriately composed at each step to obtain them with equal accuracy. We discuss our splitting of the quadrupole and combined-function dipole Hamiltonians and show that typically only a few steps are required for a high-energy accelerator.
Evaluating secular acceleration in geomagnetic field model GRIMM-3
NASA Astrophysics Data System (ADS)
Lesur, V.; Wardinski, I.
2012-12-01
Secular acceleration of the magnetic field is the rate of change of its secular variation. One of the main results of studying magnetic data collected by the German survey satellite CHAMP was the mapping of field acceleration and its evolution in time. Questions remain about the accuracy of the modeled acceleration and the effect of the applied regularization processes. We have evaluated to what extent the regularization affects the temporal variability of the Gauss coefficients. We also obtained results of temporal variability of the Gauss coefficients where alternative approaches to the usual smoothing norms have been applied for regularization. Except for the dipole term, the secular acceleration of the Gauss coefficients is fairly well described up to spherical harmonic degree 5 or 6. There is no clear evidence from observatory data that the spectrum of this acceleration is underestimated at the Earth surface. Assuming a resistive mantle, the observed acceleration supports a characteristic time scale for the secular variation of the order of 11 years.
Torque-based optimal acceleration control for electric vehicle
NASA Astrophysics Data System (ADS)
Lu, Dongbin; Ouyang, Minggao
2014-03-01
The existing research on acceleration control mainly focuses on optimization of the velocity trajectory with respect to a criterion formulation that weights acceleration time and fuel consumption. The minimum-fuel acceleration problem in conventional vehicles has been solved by Pontryagin's maximum principle and by dynamic programming, respectively. Acceleration control with minimum energy consumption for a battery electric vehicle (EV) has not been reported. In this paper, the permanent magnet synchronous motor (PMSM) is controlled by the field oriented control (FOC) method, and the electric drive system of the EV (including the PMSM, the inverter and the battery) is modeled to obtain a detailed energy-consumption map. An analytical algorithm is proposed to analyze the optimal acceleration control, and the optimal torque-versus-speed curve in the acceleration process is obtained. Considering the acceleration time, a penalty function is introduced to realize fast vehicle speed tracking. The optimal acceleration control is also addressed with dynamic programming (DP). This method can solve the optimal acceleration problem with a precise time constraint, but it consumes a large amount of computation time. The EV used in simulation and experiment is a four-wheel hub-motor-drive electric vehicle. The simulation and experimental results show that the required battery energy differs little between the acceleration control solved by the analytical algorithm and that solved by DP, and is greatly reduced compared with constant-pedal-opening acceleration. The proposed analytical and DP algorithms can minimize the energy consumption in the EV's acceleration process, and the analytical algorithm is easy to implement in real-time control.
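To make the DP formulation concrete, here is a toy backward recursion over a discretized speed grid with a time-penalty term. The loss model is a crude placeholder rather than the PMSM/inverter/battery consumption map used in the paper, and the exact-time-constraint variant would additionally carry elapsed time in the state.

```python
import numpy as np

m, r = 1200.0, 0.3                          # vehicle mass [kg], wheel radius [m]
v_grid = np.linspace(0.0, 20.0, 201)        # speed grid [m/s]
torques = np.linspace(20.0, 200.0, 19)      # admissible motor torque [Nm]
dv = v_grid[1] - v_grid[0]
lam = 500.0                                 # time penalty [J/s]

def stage_cost(v, T):
    """Energy plus time penalty to climb one speed step at torque T."""
    a = T / (r * m)                         # acceleration from wheel force
    dt = dv / a                             # time to gain one speed step
    power = (T / r) * v + 0.02 * T ** 2     # traction power + crude copper loss
    return power * dt + lam * dt

# Backward DP: cost_to_go[i] = minimal cost from v_grid[i] to top speed,
# assuming speed rises one grid step per stage.
cost_to_go = np.zeros(len(v_grid))
best_T = np.zeros(len(v_grid))
for i in range(len(v_grid) - 2, -1, -1):
    costs = [stage_cost(v_grid[i], T) for T in torques]
    k = int(np.argmin(costs))
    cost_to_go[i] = costs[k] + cost_to_go[i + 1]
    best_T[i] = torques[k]
print(cost_to_go[0], best_T[:5])            # total cost and low-speed torques
```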
Are supernova remnants quasi-parallel or quasi-perpendicular accelerators
NASA Technical Reports Server (NTRS)
Spangler, S. R.; Leckband, J. A.; Cairns, I. H.
1989-01-01
Observations of shock waves in the solar system which show a pronounced difference in the plasma wave and particle environment depending on whether the shock is propagating along or perpendicular to the interplanetary magnetic field are discussed. Theories for particle acceleration developed for quasi-parallel and quasi-perpendicular shocks, when extended to the interstellar medium suggest that the relativistic electrons in radio supernova remnants are accelerated by either the Q parallel or Q perpendicular mechanisms. A model for the galactic magnetic field and published maps of supernova remnants were used to search for a dependence of structure on the angle Phi. Results show no tendency for the remnants as a whole to favor the relationship expected for either mechanism, although individual sources resemble model remnants of one or the other acceleration process.
Standard map in magnetized relativistic systems: fixed points and regular acceleration.
de Sousa, M C; Steffens, F M; Pakter, R; Rizzato, F B
2010-08-01
We investigate the concept of a standard map for the interaction of relativistic particles and electrostatic waves of arbitrary amplitudes, under the action of external magnetic fields. The map is adequate for physical settings where waves and particles interact impulsively, and allows a series of analytical results to be obtained exactly. Unlike the traditional form of the standard map, the present map is nonlinear in the wave amplitude and displays a series of peculiar properties. Among these properties we discuss the relation involving fixed points of the map and accelerator regimes.
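For orientation, the classical Chirikov standard map that this map generalizes can be iterated in a few lines; the relativistic magnetized version is nonlinear in the wave amplitude K, but the fixed-point analysis proceeds along the same pattern.

```python
import numpy as np

def standard_map(theta, p, K, n_steps):
    """Iterate p' = p + K sin(theta), theta' = theta + p' (both mod 2*pi).
    Period-1 fixed points sit at p = 0 with sin(theta) = 0: (pi, 0) is
    elliptic (stable) for 0 < K < 4, while (0, 0) is hyperbolic."""
    out = np.empty((n_steps, 2))
    for t in range(n_steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        out[t] = theta, p
    return out

# An orbit started near the elliptic fixed point (pi, 0) stays trapped on a
# regular island. Wrapping p hides accelerator modes (orbits whose momentum
# grows by 2*pi per step, first appearing for K >= 2*pi), which would
# otherwise show up as unbounded momentum growth.
traj = standard_map(theta=np.pi + 0.1, p=0.01, K=0.9, n_steps=1000)
print(traj[-1])
```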
USDA-ARS?s Scientific Manuscript database
Intermediate wheatgrass (Thinopyrum intermedium) has been identified as a candidate for domestication and improvement as a perennial grain, forage, and biofuel crop by several active breeding programs. To accelerate this process using genomics-assisted breeding, efficient genotyping methods and gen...
Laboratory Astrophysics Prize: Laboratory Astrophysics with Nuclei
NASA Astrophysics Data System (ADS)
Wiescher, Michael
2018-06-01
Nuclear astrophysics is concerned with nuclear reaction and decay processes from the Big Bang to the present star generation, controlling the chemical evolution of our universe. Such nuclear reactions maintain stellar life, determine stellar evolution, and finally drive stellar explosions in the circle of stellar life. Laboratory nuclear astrophysics seeks to simulate and understand the underlying processes using a broad portfolio of nuclear instrumentation, from reactor to accelerator and from stable to radioactive beams, to map the broad spectrum of nucleosynthesis processes. This talk focuses on only two aspects of the broad field: the need for deep underground accelerator facilities in cosmic-ray-free environments in order to understand the nucleosynthesis in stars, and the need for high intensity radioactive beam facilities to recreate the conditions found in stellar explosions. Both concepts represent the two main frontiers of the field, which are being pursued in the US with the CASPAR accelerator at the Sanford Underground Research Facility in South Dakota and the FRIB facility at Michigan State University.
Seismic design parameters - A user guide
Leyendecker, E.V.; Frankel, A.D.; Rukstales, K.S.
2001-01-01
The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings (1997 NEHRP Provisions) introduced a seismic design procedure that is based on the explicit use of spectral response acceleration rather than the traditional peak ground acceleration and/or peak ground velocity or zone factors. The spectral response accelerations are obtained from spectral response acceleration maps accompanying the report. Maps are available for the United States and a number of U.S. territories. Since 1997 additional codes and standards have also adopted seismic design approaches based on the same procedure used in the NEHRP Provisions and the accompanying maps. The design documents using the 1997 NEHRP Provisions procedure may be divided into three categories - (1) Design of New Construction, (2) Design and Evaluation of Existing Construction, and (3) Design of Residential Construction. A CD-ROM has been prepared for use in conjunction with the design documents in each of these three categories. The spectral accelerations obtained using the software on the CD are the same as those that would be obtained by using the maps accompanying the design documents. The software has been prepared to operate on a personal computer using a Windows (Microsoft Corporation) operating environment and a point-and-click type of interface. The user can obtain the spectral acceleration values that would be obtained by use of the maps accompanying the design documents, include site factors appropriate for the Site Class provided by the user, calculate a response spectrum that includes the site factor, and plot a response spectrum. Sites may be located by providing the latitude-longitude or zip code for all areas covered by the maps. All of the maps used in the various documents are also included on the CD-ROM.
Characteristics of Vibration that Alter Cardiovascular Parameters in Mice
Li, Yao; Rabey, Karyne N; Schmitt, Daniel; Norton, John N; Reynolds, Randall P
2015-01-01
We hypothesized that short-term exposure of mice to vibration within a frequency range thought to be near the resonant frequency range of mouse tissue and at an acceleration of 0 to 1 m/s2 would alter heart rate (HR) and mean arterial pressure (MAP). We used radiotelemetry to evaluate the cardiovascular response to vibration in C57BL/6 and CD1 male mice exposed to vertical vibration of various frequencies and accelerations. MAP was consistently increased above baseline values at an acceleration near 1 m/s2 and a frequency of 90 Hz in both strains, and HR was increased also in C57BL/6 mice. In addition, MAP increased at 80 Hz in individual mice of both strains. When both strains were analyzed together, mean MAP and HR were increased at 90 Hz at 1 m/s2, and HR was increased at 80 Hz at 1 m/s2. No consistent change in MAP or HR occurred when mice were exposed to frequencies below 80 Hz or above 90 Hz. The increase in MAP and HR occurred only when the mice had conscious awareness of the vibration, given that these changes did not occur when anesthetized mice were exposed to vibration. Tested vibration acceleration levels lower than 0.75 m/s2 did not increase MAP or HR at 80 or 90 Hz, suggesting that a relatively high level of vibration is necessary to increase these parameters. These data are important to establish the harmful frequencies and accelerations of environmental vibration that should be minimized or avoided in mouse facilities. PMID:26224436
A remote sensing research agenda for mapping and monitoring biodiversity
NASA Technical Reports Server (NTRS)
Stoms, D. M.; Estes, J. E.
1993-01-01
A remote sensing research agenda designed to expand the knowledge of the spatial distribution of species richness and its ecological determinants and to predict its response to global change is proposed. Emphasis is placed on current methods of mapping species richness of both plants and animals, hypotheses concerning the biophysical factors believed to determine patterns of species richness, and anthropogenic processes causing the accelerating rate of extinctions. It is concluded that biodiversity should be incorporated more prominently into the global change and earth system science paradigms.
Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity
Clayton, C. E.; Adli, E.; Allen, J.; ...
2016-08-16
The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not been yet measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m−1 to a similar degree of accuracy. Lastly, these results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity.
Self-mapping the longitudinal field structure of a nonlinear plasma accelerator cavity
Clayton, C. E.; Adli, E.; Allen, J.; An, W.; Clarke, C. I.; Corde, S.; Frederico, J.; Gessner, S.; Green, S. Z.; Hogan, M. J.; Joshi, C.; Litos, M.; Lu, W.; Marsh, K. A.; Mori, W. B.; Vafaei-Najafabadi, N.; Xu, X.; Yakimenko, V.
2016-01-01
The preservation of emittance of the accelerating beam is the next challenge for plasma-based accelerators envisioned for future light sources and colliders. The field structure of a highly nonlinear plasma wake is potentially suitable for this purpose but has not been yet measured. Here we show that the longitudinal variation of the fields in a nonlinear plasma wakefield accelerator cavity produced by a relativistic electron bunch can be mapped using the bunch itself as a probe. We find that, for much of the cavity that is devoid of plasma electrons, the transverse force is constant longitudinally to within ±3% (r.m.s.). Moreover, comparison of experimental data and simulations has resulted in mapping of the longitudinal electric field of the unloaded wake up to 83 GV m−1 to a similar degree of accuracy. These results bode well for high-gradient, high-efficiency acceleration of electron bunches while preserving their emittance in such a cavity. PMID:27527569
First USGS urban seismic hazard maps predict the effects of soils
Cramer, C.H.; Gomberg, J.S.; Schweig, E.S.; Waldron, B.A.; Tucker, K.
2006-01-01
Probabilistic and scenario urban seismic hazard maps have been produced for Memphis, Shelby County, Tennessee covering a six-quadrangle area of the city. The nine probabilistic maps are for peak ground acceleration and 0.2 s and 1.0 s spectral acceleration and for 10%, 5%, and 2% probability of being exceeded in 50 years. Six scenario maps for these three ground motions have also been generated for both an M7.7 and M6.2 on the southwest arm of the New Madrid seismic zone ending at Marked Tree, Arkansas. All maps include the effect of local geology. Relative to the national seismic hazard maps, the effect of the thick sediments beneath Memphis is to decrease 0.2 s probabilistic ground motions by 0-30% and increase 1.0 s probabilistic ground motions by ~100%. Probabilistic peak ground accelerations remain at levels similar to the national maps, although the ground motion gradient across Shelby County is reduced and ground motions are more uniform within the county. The M7.7 scenario maps show ground motions similar to the 5%-in-50-year probabilistic maps. As an effect of local geology, both M7.7 and M6.2 scenario maps show a more uniform seismic ground-motion hazard across Shelby County than scenario maps with constant site conditions (i.e., NEHRP B/C boundary).
Accelerating Exploitation of Low-grade Intelligence through Semantic Text Processing of Social Media
2013-06-01
importance as an information source. The brevity of social media content (e.g., 140 characters per tweet) combined with the increasing usage of mobile... platform imports unstructured text from a variety of sources and then maps the text to an existing ontology of frames (FrameNet, https://framenet.icsi.berkeley.edu/fndrupal/) during a process of Semantic Role Labeling (SRL). FrameNet is a structured language model grounded in the theory of Frame
NASA Technical Reports Server (NTRS)
Kamhawi, Hani; Haag, Thomas; Huang, Wensheng; Shastry, Rohit; Pinero, Luis; Peterson, Todd; Mathers, Alex
2012-01-01
NASA Science Mission Directorate's In-Space Propulsion Technology Program is sponsoring the development of a 3.5 kW-class engineering development unit Hall thruster for implementation in NASA science and exploration missions. NASA Glenn and Aerojet are developing a high fidelity high voltage Hall accelerator that can achieve specific impulse magnitudes greater than 2,700 seconds and xenon throughput capability in excess of 300 kilograms. Performance, plume mapping, thermal characterization, and vibration tests of the high voltage Hall accelerator engineering development unit have been performed. Performance test results indicated that at 3.9 kW the thruster achieved a total thrust efficiency and specific impulse of 58% and 2,700 sec, respectively. Thermal characterization tests indicated that the thruster component temperatures were within the prescribed material maximum operating temperature limits during full power thruster operation. Additionally, thruster vibration tests indicated that the thruster survived the 3-axis qualification full-level random vibration test series. Pre- and post-vibration test performance mappings indicated almost identical thruster performance. Finally, an update on the development progress of a power processing unit and a xenon feed system is provided.
NASA Astrophysics Data System (ADS)
Hasyim, Fuad; Subagio, Habib; Darmawan, Mulyanto
2016-06-01
The preparation of spatial planning documents requires basic geospatial information and accurate thematic maps. These issues have recently become important because spatial planning maps are an integral attachment of the regional act draft on spatial planning (PERDA). The geospatial information needed in the preparation of spatial planning maps can be divided into two major groups: (i) basic geospatial information (IGD), consisting of Indonesian topographic maps (RBI), coastal and marine environmental maps (LPI), and the geodetic control network; and (ii) thematic geospatial information (IGT). Currently, most local governments in Indonesia have not finished their regulation drafts on spatial planning, owing in part to technical constraints. Constraints on mapping for spatial planning include the availability of large-scale basic geospatial information, the availability of mapping guidelines, and human resources. The ideal conditions to be achieved for spatial planning maps are: (i) the availability of updated geospatial information at the scales needed for spatial planning maps, (ii) mapping guidelines for spatial planning that support local governments in completing their PERDA, and (iii) capacity building of local government human resources to complete spatial planning maps. The OMP strategies formulated to achieve these conditions are: (i) accelerating the provision of IGD at scales of 1:50,000, 1:25,000 and 1:5,000; (ii) accelerating the mapping and integration of thematic geospatial information (IGT) through stocktaking of availability and mapping guidelines; (iii) developing mapping guidelines and disseminating spatial utilization; and (iv) training human resources in mapping technology.
Wilson, Frederic H.
1989-01-01
Graphics programs on computers can facilitate the compilation and production of geologic maps, including full color maps of publication quality. This paper describes the application of two different programs, GSMAP and ARC/INFO, to the production of a geologic map of the Port Moller and adjacent 1:250,000-scale quadrangles on the Alaska Peninsula. GSMAP was used at first because of easy digitizing on inexpensive computer hardware. Limitations in its editing capability led to transfer of the digital data to ARC/INFO, a Geographic Information System, which has better editing and also added data analysis capability. Although these improved capabilities are accompanied by increased complexity, the availability of ARC/INFO's data analysis capability provides unanticipated advantages. It allows digital map data to be processed as one of multiple data layers for mineral resource assessment. As a result of the development of both software packages, it is now easier to apply them to geologic map production. Both systems accelerate the drafting and revision of maps and enhance the compilation process. Additionally, ARC/INFO's analysis capability enhances the geologist's ability to develop answers to questions of interest that were previously difficult or impossible to obtain.
Borcherdt, R.D.; Mark, R.K.
1995-01-01
The Hanshin-Awaji earthquake (also known as the Hyogo-ken Nanbu and the Great Hanshin earthquake) provided an unprecedented set of measurements of strong ground shaking. The measurements constitute the most comprehensive set of strong-motion recordings yet obtained for sites underlain by soft soil deposits of Holocene age within a few kilometers of the crustal rupture zone. The recordings, obtained on or near many important structures, provide an important new empirical data set for evaluating input ground motion levels and site amplification factors for codes and site-specific design procedures worldwide. This report describes the data used to prepare a preliminary map summarizing the strong motion data in relation to seismicity and underlying geology (Wentworth, Borcherdt, and Mark, 1995; Figure 1, hereafter referred to as Figure 1/I). The map shows station locations, peak acceleration values, and generalized acceleration contours superimposed on pertinent seismicity and the geologic map of Japan. The map (Figure 1/I) indicates a zone of high acceleration with ground motions throughout the zone greater than 400 gal and locally greater than 800 gal. This zone encompasses the area of most intense damage mapped as JMA intensity level 7, which extends through Kobe City. The zone of most intense damage is parallel to, but displaced slightly from, the surface projection of the crustal rupture zone implied by aftershock locations. The zone is underlain by soft-soil deposits of Holocene age.
USDA-ARS?s Scientific Manuscript database
The challenge posed by rapidly changing wheat rust pathogens, both in virulence and in environmental adaptation, calls for the development and application of new techniques to accelerate the process of breeding for durable resistance. To expand the wheat resistance gene pool available for germplasm ...
Hoppe, Elisabeth; Körzdörfer, Gregor; Würfl, Tobias; Wetzl, Jens; Lugauer, Felix; Pfeuffer, Josef; Maier, Andreas
2017-01-01
The purpose of this work is to evaluate methods from deep learning for application to Magnetic Resonance Fingerprinting (MRF). MRF is a recently proposed measurement technique for generating quantitative parameter maps. In MRF, a non-steady-state signal is generated by a pseudo-random excitation pattern. A comparison of the measured signal in each voxel with the physical model yields quantitative parameter maps. Currently, the comparison is done by matching a dictionary of simulated signals to the acquired signals. To accelerate the computation of quantitative maps, we train a Convolutional Neural Network (CNN) on simulated dictionary data. As a proof of principle, we show that the neural network implicitly encodes the dictionary and can replace the matching process.
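To make the idea concrete, here is a minimal sketch, assuming PyTorch, of a 1-D CNN that regresses two quantitative parameters (labeled T1 and T2) directly from a simulated fingerprint. The architecture, signal length, and stand-in training data are illustrative assumptions, not the network from the abstract.

```python
# Minimal sketch (assumptions: PyTorch; fingerprint length 1000; the network
# regresses two normalized parameters, T1 and T2, per voxel signal).
import torch
import torch.nn as nn

class FingerprintCNN(nn.Module):
    """Maps an MRF signal evolution to quantitative parameters.

    The dictionary is encoded implicitly in the learned weights, so the
    exhaustive dictionary search is replaced by a single forward pass.
    """
    def __init__(self, signal_len=1000, n_params=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=21, stride=2), nn.ReLU(),  # 2 ch: real/imag
            nn.Conv1d(16, 32, kernel_size=21, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(32 * 8, n_params)  # -> (T1, T2)

    def forward(self, x):            # x: (batch, 2, signal_len)
        z = self.features(x)
        return self.head(z.flatten(1))

# One training step on simulated dictionary entries (signals -> known T1/T2):
model = FingerprintCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
signals = torch.randn(64, 2, 1000)   # stand-in for simulated fingerprints
params = torch.rand(64, 2)           # stand-in for normalized T1/T2 labels
opt.zero_grad()
loss = nn.functional.mse_loss(model(signals), params)
loss.backward(); opt.step()
```

Once trained, inference is a single batched forward pass per image, which is what removes the per-voxel dictionary search from the reconstruction.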
NASA Astrophysics Data System (ADS)
Schwadron, N.
2017-12-01
Our piece of cosmic real estate, the heliosphere, is the domain of all human existence - an astrophysical case history of the successful evolution of life in a habitable system. The Interstellar Boundary Explorer (IBEX) was the first mission to explore the global heliosphere and in concert with Voyager 1 and Voyager 2 is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (~5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The global structure of the heliosphere is highly complex and influenced by competing factors ranging from the local interstellar magnetic field, suprathermal populations both within and beyond the heliopause, and the detailed flow properties of the LISM. Global heliospheric structure and microphysics in turn influences the acceleration of energetic particles and creates feedbacks that modify the interstellar interaction as a whole. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics and probe the acceleration of suprathermal and higher energy particles at a time when the space environment is rapidly evolving. IMAP ultimately connects the acceleration processes observed directly at 1 AU with unprecedented sensitivity and temporal resolution with the global structure of our heliosphere. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. IMAP, like ACE before it, will be a keystone of the Heliophysics System Observatory by providing comprehensive energetic particle, pickup ion, suprathermal ion, neutral atom, solar wind, solar wind heavy ion, and magnetic field observations to diagnose the changing space environment, to discover the fundamental origins of particle acceleration, while discerning the physical processes that control our global heliosphere's interactions with the local interstellar medium.
GRAIL Gravity Map of Orientale Basin
2016-10-27
This color-coded map shows the strength of surface gravity around Orientale basin on Earth's moon, derived from data obtained by NASA's GRAIL mission. The GRAIL mission produced a very high-resolution map of gravity over the surface of the entire moon. This plot is zoomed in on the part of that map that features Orientale basin, where the two GRAIL spacecraft flew extremely low near the end of their mission. Their close proximity to the basin made the probes' measurements particularly sensitive to the gravitational acceleration there (due to the inverse-square law). The color scale plots the gravitational acceleration in units of "gals," where 1 gal is one centimeter per second squared, or about 1/1000th of the gravitational acceleration at Earth's surface. (The unit was devised in honor of the astronomer Galileo.) Labels on the x and y axes represent latitude and longitude. http://photojournal.jpl.nasa.gov/catalog/PIA21050
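A quick arithmetic check of the caption's unit statement (plain Python; the values are the standard definitions of the gal and Earth's surface gravity):

```python
# Unit check for the map's color scale: 1 gal = 1 cm/s^2.
GAL = 0.01            # m/s^2
g_earth = 9.81        # m/s^2, standard surface gravity
print(GAL / g_earth)  # ~0.00102, i.e. about 1/1000 of Earth surface gravity
```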
Seismic hazard assessment for Guam and the Northern Mariana Islands
Mueller, Charles S.; Haller, Kathleen M.; Luco, Nicholas; Petersen, Mark D.; Frankel, Arthur D.
2012-01-01
We present the results of a new probabilistic seismic hazard assessment for Guam and the Northern Mariana Islands. The Mariana island arc has formed in response to northwestward subduction of the Pacific plate beneath the Philippine Sea plate, and this process controls seismic activity in the region. Historical seismicity, the Mariana megathrust, and two crustal faults on Guam were modeled as seismic sources, and ground motions were estimated by using published relations for a firm-rock site condition. Maps of peak ground acceleration, 0.2-second spectral acceleration for 5 percent critical damping, and 1.0-second spectral acceleration for 5 percent critical damping were computed for exceedance probabilities of 2 percent and 10 percent in 50 years. For 2 percent probability of exceedance in 50 years, probabilistic peak ground acceleration is 0.94 gravitational acceleration at Guam and 0.57 gravitational acceleration at Saipan, 0.2-second spectral acceleration is 2.86 gravitational acceleration at Guam and 1.75 gravitational acceleration at Saipan, and 1.0-second spectral acceleration is 0.61 gravitational acceleration at Guam and 0.37 gravitational acceleration at Saipan. For 10 percent probability of exceedance in 50 years, probabilistic peak ground acceleration is 0.49 gravitational acceleration at Guam and 0.29 gravitational acceleration at Saipan, 0.2-second spectral acceleration is 1.43 gravitational acceleration at Guam and 0.83 gravitational acceleration at Saipan, and 1.0-second spectral acceleration is 0.30 gravitational acceleration at Guam and 0.18 gravitational acceleration at Saipan. The dominant hazard source at the islands is upper Benioff-zone seismicity (depth 40–160 kilometers). The large probabilistic ground motions reflect the strong concentrations of this activity below the arc, especially near Guam.
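For reference, exceedance probabilities of the kind quoted above correspond to mean return periods under the usual assumption of Poisson-distributed earthquake occurrences; a short sketch of the arithmetic:

```python
# Converting "P probability of exceedance in T years" to an annual rate and
# mean return period, assuming the usual Poisson occurrence model.
import math

def return_period(p_exceed, t_years):
    annual_rate = -math.log(1.0 - p_exceed) / t_years
    return 1.0 / annual_rate

print(return_period(0.02, 50))  # ~2475 years for the 2%-in-50-yr maps
print(return_period(0.10, 50))  # ~475 years for the 10%-in-50-yr maps
```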
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images using these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and afterwards using real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.
Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You
2017-12-01
The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.
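The following is a much-simplified, single-delay sketch of the synchronization likelihood recipe (after Stam & van Dijk, 2002), without the paper's 2-dimensional time-shifting over delay and onset times. It is written with numpy; swapping the import for cupy would push the pairwise-distance computations onto the GPU, which is the spirit of the acceleration described. All embedding and threshold parameters are illustrative assumptions.

```python
# Simplified synchronization likelihood (SL): embed each channel, mark
# "recurrences" where embedded states are closer than a percentile threshold,
# and measure how often recurrences in x coincide with recurrences in y.
import numpy as np   # replace with `import cupy as np` for GPU execution

def embed(x, dim=5, lag=2):
    n = len(x) - (dim - 1) * lag
    return np.stack([x[i * lag : i * lag + n] for i in range(dim)], axis=1)

def sync_likelihood(x, y, p_ref=0.05, dim=5, lag=2):
    X, Y = embed(x, dim, lag), embed(y, dim, lag)
    dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    rx = dx < np.quantile(dx, p_ref)                       # recurrence masks
    ry = dy < np.quantile(dy, p_ref)
    np.fill_diagonal(rx, False); np.fill_diagonal(ry, False)
    return (rx & ry).sum() / max(rx.sum(), 1)  # P(y recurs | x recurs)

rng = np.random.default_rng(0)
common = np.sin(np.linspace(0, 20, 500))
sl = sync_likelihood(common + 0.1 * rng.standard_normal(500),
                     common + 0.1 * rng.standard_normal(500))
print(sl)  # close to 1 for strongly synchronized signals
```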
Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald
2018-06-04
A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most of them are limited to a global analysis due to their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at a low accelerating voltage identify phases at the micron scale. In comparisons with a conventional SDD, the aSDD operated at optimized conditions makes phase mapping a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.
USDA-ARS?s Scientific Manuscript database
A calf model was used to determine if the depletion of CD4 T cells prior to inoculation of Mycobacterium avium subsp. paratuberculosis (Map) would delay development of an immune response to Map and accelerate disease progression. Ileal cannulas were surgically implanted in 5 bull calves at two month...
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
Increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches in the circumstances of dynamic production. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volumes of varied quality-related data generated during the manufacturing process can be dealt with. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and the parallel computing power of MapReduce, Bayesian networks of impact factors on quality are built based on prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the number of computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.
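As a rough illustration of the counting step behind Bayesian-network parameter learning in a map/reduce setting: mappers tally (parent state, node state) observations from their data shard, and the reducer merges the tallies into a conditional probability table. The paper runs this on Hadoop; in this hedged sketch, Python's multiprocessing stands in, and all variable names and the +1 smoothing are illustrative.

```python
# Map phase: per-shard counts. Reduce phase: merge counts into a CPT,
# i.e. P(node_state | parent_state) with +1 (Laplace) smoothing.
from collections import Counter
from multiprocessing import Pool

def mapper(shard):
    # shard: list of (parent_state, node_state) observations
    return Counter(shard)

def reducer(counters):
    total = Counter()
    for c in counters:
        total.update(c)
    parents = {p for (p, _) in total}
    states = {s for (_, s) in total}
    return {p: {s: (total[(p, s)] + 1) /
                   (sum(total[(p, s2)] for s2 in states) + len(states))
                for s in states}
            for p in parents}

if __name__ == "__main__":
    shards = [[("hot", "defect"), ("hot", "ok")], [("cold", "ok"), ("hot", "ok")]]
    with Pool(2) as pool:
        print(reducer(pool.map(mapper, shards)))
```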
Xiang, X D
Combinatorial materials synthesis methods and high-throughput evaluation techniques have been developed to accelerate the process of materials discovery and optimization and phase-diagram mapping. Analogous to integrated circuit chips, integrated materials chips containing thousands of discrete different compositions or continuous phase diagrams, often in the form of high-quality epitaxial thin films, can be fabricated and screened for interesting properties. A microspot x-ray method, various optical measurement techniques, and a novel evanescent microwave microscope have been used to characterize the structural, optical, magnetic, and electrical properties of samples on the materials chips. These techniques are routinely used to discover/optimize and map phase diagrams of ferroelectric, dielectric, optical, magnetic, and superconducting materials.
Mapping and energization in the magnetotail. II - Particle acceleration
NASA Technical Reports Server (NTRS)
Kaufmann, Richard L.; Larson, Douglas J.; Lu, Chen
1993-01-01
Mapping with the Tsyganenko (1989) or T89 magnetosphere model has been examined previously. In the present work, an attempt is made to evaluate quantitatively what the selection of T89 implies for steady-state particle energization. The Heppner and Maynard (1987) or HM87 electric field model is mapped from the ionosphere to the equatorial plane, and the electric currents associated with T89 are evaluated. Consideration is also given to the nature of the acceleration that occurs when cross-tail current is suddenly diverted to the ionosphere.
NASA Astrophysics Data System (ADS)
Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.
1998-04-01
The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
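As a rough illustration of the hierarchy described (the actual MAPA libraries are C++; this Python sketch and all class names are hypothetical), a general dynamical-system base class stores named parameters in a hash table and advances state through an arbitrary mapping, and an accelerator class specializes that mapping to a string of elements:

```python
# Hypothetical sketch of the described object hierarchy.
class DynamicalSystem:
    def __init__(self, **params):
        self.params = dict(params)      # hash table of named parameters

    def mapping(self, state):
        raise NotImplementedError

    def advance(self, state, turns=1):  # advance state through the map
        for _ in range(turns):
            state = self.mapping(state)
        return state

class Drift(DynamicalSystem):
    def mapping(self, state):           # (x, x') through a field-free drift
        x, xp = state
        return (x + self.params["length"] * xp, xp)

class ThinQuad(DynamicalSystem):
    def mapping(self, state):           # thin-lens quadrupole kick
        x, xp = state
        return (x, xp - x / self.params["focal_length"])

class Accelerator(DynamicalSystem):
    def __init__(self, elements):
        super().__init__()
        self.elements = elements        # beamline as a list of elements

    def mapping(self, state):           # overload: push through each element
        for element in self.elements:
            state = element.mapping(state)
        return state

ring = Accelerator([Drift(length=1.0), ThinQuad(focal_length=2.0)])
print(ring.advance((1e-3, 0.0), turns=100))  # track a particle for 100 turns
```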
Mapping GRACE Accelerometer Error
NASA Astrophysics Data System (ADS)
Sakumura, C.; Harvey, N.; McCullough, C. M.; Bandikova, T.; Kruizinga, G. L. H.
2017-12-01
After more than fifteen years in orbit, instrument noise, and accelerometer noise in particular, remains one of the limiting error sources for the NASA/DLR Gravity Recovery and Climate Experiment mission. The recent V03 Level-1 reprocessing campaign used a Kalman filter approach to produce a high fidelity, smooth attitude solution fusing star camera and angular acceleration data. This process provided an unprecedented method for analysis and error estimation of each instrument. The accelerometer exhibited signal aliasing, differential scale factors between electrode plates, and magnetic effects. By applying the noise model developed for the angular acceleration data to the linear measurements, we explore the magnitude and geophysical pattern of gravity field error due to the electrostatic accelerometer.
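A minimal one-axis sketch of the fusion idea described above: propagate an [angle, rate] state with the measured angular acceleration, then correct with star-camera angle samples. All noise levels and signals here are illustrative assumptions, not GRAIL/GRACE values.

```python
# Toy Kalman filter fusing star-camera angles with angular acceleration.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [angle, rate]
B = np.array([0.5 * dt**2, dt])         # control input: angular acceleration
H = np.array([[1.0, 0.0]])              # star camera observes angle only
Q = 1e-8 * np.eye(2)                    # process noise (accelerometer error)
R = np.array([[1e-4]])                  # star camera measurement noise

x, P = np.zeros(2), np.eye(2)
for k in range(100):
    alpha = 1e-5 * np.sin(0.1 * k)      # measured angular acceleration
    x = F @ x + B * alpha               # predict
    P = F @ P @ F.T + Q
    z = np.array([x[0] + 0.01 * np.random.randn()])   # star camera sample
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    x = x + K @ (z - H @ x)             # correct
    P = (np.eye(2) - K @ H) @ P
print(x)  # fused attitude estimate [angle, rate]
```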
NASA Astrophysics Data System (ADS)
Desai, M. I.; McComas, D. J.; Christian, E. R.; Mewaldt, R. A.; Schwadron, N.
2014-12-01
Solar energetic particles or SEPs from suprathermal (a few keV) up to relativistic (~a few GeV) speeds are accelerated near the Sun in at least two ways, namely, (1) by magnetic reconnection-driven processes during solar flares resulting in impulsive SEPs and (2) at fast coronal-mass-ejection-driven shock waves that produce large gradual SEP events. Large gradual SEP events are of particular interest because the accompanying high-energy (tens of MeV) protons pose serious radiation threats to human explorers living and working outside low-Earth orbit and to technological assets such as communications and scientific satellites in space. However, a complete understanding of SEP events has eluded us primarily because their properties, as observed near Earth orbit, are smeared due to mixing and contributions from many important physical effects. Thus, despite being studied for decades, several key questions regarding SEP events remain unanswered. These include (1) What are the contributions of co-temporal flares, jets, and CME shocks to impulsive and gradual SEP events?; (2) Do flares contribute to large SEP events directly by providing high-energy particles and/or by providing the suprathermal seed population?; (3) What are the roles of ambient turbulence/waves and self-generated waves?; (4) What are the origins of the source populations and how do their temporal and spatial variations affect SEP properties?; and (5) How do diffusion and scattering during acceleration and propagation through the interplanetary medium affect SEP properties observed out in the heliosphere? This talk describes how during the next decade, inner heliospheric measurements from the Solar Probe Plus and Solar Orbiter in conjunction with high sensitivity measurements from the Interstellar Mapping and Acceleration Probe will provide the ground-truth for various models of particle acceleration and transport and address these questions.
Stadlbauer, Andreas; van der Riet, Wilma; Crelier, Gerard; Salomonowitz, Erich
2010-07-01
To assess the feasibility and potential limitations of the acceleration techniques SENSE and k-t BLAST for time-resolved three-dimensional (3D) velocity mapping of aortic blood flow, and, furthermore, to quantify differences in peak velocity versus heart phase curves. Time-resolved 3D blood flow patterns were investigated in eleven volunteers and two patients suffering from aortic diseases with accelerated PC-MR sequences in combination with either SENSE (R=2) or k-t BLAST (6-fold). Both sequences showed similar data acquisition times and hence acceleration efficiency. Flow-field streamlines were calculated and visualized using the GTFlow software tool in order to reconstruct 3D aortic blood flow patterns. Differences between the peak velocities from single-slice PC-MRI experiments using SENSE 2 and k-t BLAST 6 were calculated for the whole cardiac cycle and averaged over all volunteers. Reconstruction of 3D flow patterns in volunteers revealed attenuations in blood flow dynamics for k-t BLAST 6 compared to SENSE 2, in terms of 3D streamlines showing fewer and less distinct vortices and a reduction in peak velocity, which is caused by temporal blurring. Only time-resolved 3D MR velocity mapping in combination with SENSE detected pathologic blood flow patterns in patients with aortic diseases. For volunteers, we found a broadening and flattening of the peak velocity versus heart phase diagram between the two acceleration techniques, which is evidence of the temporal blurring of the k-t BLAST approach. We demonstrated the feasibility of SENSE and detected potential limitations of k-t BLAST when used for time-resolved 3D velocity mapping. The effects of higher k-t BLAST acceleration factors have to be considered for application in 3D velocity mapping. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jangi, Mehdi; Lucchini, Tommaso; Gong, Cheng; Bai, Xue-Song
2015-09-01
An Eulerian stochastic fields (ESF) method accelerated with the chemistry coordinate mapping (CCM) approach for modelling spray combustion is formulated and applied to model diesel combustion in a constant volume vessel. In ESF-CCM, the thermodynamic states of the discretised stochastic fields are mapped into a low-dimensional phase space. Integration of the stiff chemical ODEs is performed in the phase space and the results are mapped back to the physical domain. After validating the ESF-CCM, the method is used to investigate the effects of fuel cetane number on the structure of diesel spray combustion. It is shown that, depending on the fuel cetane number, the liftoff length varies, which can lead to a change in combustion mode from classical diesel spray combustion to fuel-lean premixed combustion. Spray combustion with a shorter liftoff length exhibits the characteristics of the classical conceptual diesel combustion model proposed by Dec in 1997 (http://dx.doi.org/10.4271/970873), whereas in a case with a lower cetane number the liftoff length is much larger and the spray combustion probably occurs in a fuel-lean premixed mode of combustion. Nevertheless, the transport budget at the liftoff location shows that stabilisation at all cetane numbers is governed primarily by the auto-ignition process.
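A schematic of the CCM idea under stated assumptions: rather than integrating the stiff chemistry in every stochastic field/cell, cells are binned in a low-dimensional phase space (here just temperature and mixture fraction), one representative per occupied bin is integrated, and the result is mapped back to all members of the bin. The single-step rate below is a placeholder, not a real diesel mechanism, and the bin widths are illustrative.

```python
# Toy chemistry coordinate mapping: bin fields, react one representative
# per bin, map the temperature change back to all bin members.
import numpy as np

def react(T, Z, dt):
    # Placeholder for an expensive stiff-ODE integration over dt
    return T + dt * 1e3 * Z * np.exp(-2000.0 / T)

def ccm_step(T, Z, dt, dT=25.0, dZ=0.05):
    keys = np.stack([np.floor(T / dT), np.floor(Z / dZ)], axis=1)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    T_new = np.empty_like(T)
    for b in np.unique(inverse):
        members = inverse == b
        rep = react(T[members].mean(), Z[members].mean(), dt)   # one call
        T_new[members] = T[members] + (rep - T[members].mean()) # map back
    return T_new

rng = np.random.default_rng(1)
T = 900.0 + 200.0 * rng.random(10000)   # stochastic-field temperatures
Z = rng.random(10000)                   # mixture fractions
print(ccm_step(T, Z, dt=1e-4)[:5])      # far fewer react() calls than fields
```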
Petersen, Mark D.; Harmsen, Stephen C.; Rukstales, Kenneth S.; Mueller, Charles S.; McNamara, Daniel E.; Luco, Nicolas; Walling, Melanie
2012-01-01
American Samoa and the neighboring islands of the South Pacific lie near active tectonic-plate boundaries that host many large earthquakes which can result in strong earthquake shaking and tsunamis. To mitigate earthquake risks from future ground shaking, the Federal Emergency Management Agency requested that the U.S. Geological Survey prepare seismic hazard maps that can be applied in building-design criteria. This Open-File Report describes the data, methods, and parameters used to calculate the seismic shaking hazard as well as the output hazard maps, curves, and deaggregation (disaggregation) information needed for building design. Spectral acceleration hazard having a 2-percent probability of exceedance on a firm-rock site condition (Vs30 = 760 meters per second) is 0.12 acceleration of gravity (1 second, 1 Hertz) and 0.32 acceleration of gravity (0.2 seconds, 5 Hertz) on American Samoa, 0.72 acceleration of gravity (1 Hertz) and 2.54 acceleration of gravity (5 Hertz) on Tonga, 0.15 acceleration of gravity (1 Hertz) and 0.55 acceleration of gravity (5 Hertz) on Fiji, and 0.89 acceleration of gravity (1 Hertz) and 2.77 acceleration of gravity (5 Hertz) on the Vanuatu Islands.
On the safety of ITER accelerators.
Li, Ge
2013-01-01
Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate -1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.
Symplectic maps and chromatic optics in particle accelerators
Cai, Yunhai
2015-07-06
Here, we have applied the nonlinear map method to comprehensively characterize the chromatic optics in particle accelerators. Our approach is built on the foundation of symplectic transfer maps of magnetic elements. The chromatic lattice parameters can be transported from one element to another by the maps. We also introduce a Jacobian operator that provides an intrinsic linkage between the maps and the matrix with parameter dependence. The link allows us to directly apply the formulation of the linear optics to compute the chromatic lattice parameters. As an illustration, we analyze an alternating-gradient cell with nonlinear sextupoles, octupoles, and decapoles and derive analytically their settings for the local chromatic compensation. Finally, the cell becomes nearly perfect up to the third order of the momentum deviation.
Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans
2014-08-01
Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, the knowledge of the complex-valued B1 transmit-sensitivities of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving, and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows mapping a complete set of eight volumetric field maps of the human head in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on the Fourier transform as well as against a B1-map interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster, with a potential loss of accuracy of about 5%.
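Bare-bones ordinary kriging in one dimension, to illustrate the interpolation step: known samples on acquired slice positions are used to estimate the field on skipped slices. The Gaussian covariance model and its parameters are illustrative assumptions, not the paper's variogram.

```python
# Ordinary kriging sketch: solve for weights that honor the covariance
# model and sum to one, then estimate at unsampled positions.
import numpy as np

def ordinary_kriging(x_obs, y_obs, x_new, sill=1.0, rng_len=3.0):
    cov = lambda h: sill * np.exp(-(h / rng_len) ** 2)  # Gaussian covariance
    n = len(x_obs)
    A = np.ones((n + 1, n + 1))            # augmented system: weights sum to 1
    A[-1, -1] = 0.0
    A[:n, :n] = cov(np.abs(x_obs[:, None] - x_obs[None, :]))
    out = np.empty(len(x_new))
    for i, x in enumerate(x_new):
        b = np.append(cov(np.abs(x_obs - x)), 1.0)
        w = np.linalg.solve(A, b)[:n]      # kriging weights
        out[i] = w @ y_obs
    return out

slices = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # acquired slice positions
b1 = np.array([0.8, 1.0, 1.1, 0.9, 0.7])       # measured |B1| there
print(ordinary_kriging(slices, b1, np.array([1.0, 3.0, 5.0])))
```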
ERIC Educational Resources Information Center
Wang, Kening; Mulvenon, Sean W.; Stegman, Charles; Anderson, Travis
2008-01-01
Google Maps API (Application Programming Interface), released in late June 2005 by Google, is an amazing technology that allows users to embed Google Maps in their own Web pages with JavaScript. Google Maps API has accelerated the development of new Google Maps based applications. This article reports a Web-based interactive mapping system…
Problem of Auroral Oval Mapping and Multiscale Auroral Structures
NASA Astrophysics Data System (ADS)
Antonova, Elizaveta; Stepanova, Marina; Kirpichev, Igor; Vovchenko, Vadim; Vorobjev, Viachislav; Yagodkina, Oksana
The problem of mapping the auroral oval to the equatorial plane is reanalyzed taking into account the latest results of the analysis of plasma pressure distribution at low altitudes and at the equatorial plane. Statistical pictures of the pressure distribution at low altitudes are obtained using data from DMSP observations. We obtain the statistical pictures of the pressure distribution at the equatorial plane using data from the THEMIS mission. Results of THEMIS observations demonstrate the existence of a plasma ring surrounding the Earth at geocentric distances from ~6 to ~12 Re. Plasma pressure in the ring is nearly isotropic and its averaged values are larger than 0.2 nPa. We take into account that isotropic plasma pressure is constant along the field line and that the existence of field-aligned potential drops in the region of the acceleration of auroral electrons leads to a pressure decrease at low altitudes. We show that most of the quiet-time auroral oval does not map to the real plasma sheet; it maps to the plasma ring surrounding the Earth. We also show that transverse currents in the plasma ring are closed inside the magnetosphere, forming the high-latitude continuation of the ordinary ring current. The obtained results are used to explain the ring-like form of the auroral oval. We also analyze the processes of the formation of multiscale auroral structures, including thin auroral arcs, and discuss the difficulties of the theories of Alfvénic acceleration of auroral electrons.
High-Resolution Regional Biomass Map of Siberia from Glas, Palsar L-Band Radar and Landsat Vcf Data
NASA Astrophysics Data System (ADS)
Sun, G.; Ranson, K.; Montesano, P.; Zhang, Z.; Kharuk, V.
2015-12-01
The Arctic-Boreal zone is known to be warming at an accelerated rate relative to other biomes. The taiga or boreal forest covers over 16 × 10^6 km^2 of Arctic North America, Scandinavia, and Eurasia. A large part of the northern boreal forests are in Russia's Siberia, an area of recent accelerated climate warming. During the last two decades we have been working on characterization of boreal forests in north-central Siberia using field and satellite measurements. We have published results of circumpolar biomass using field plots, airborne (PALS, ACTM) and spaceborne (GLAS) lidar data with the ASTER DEM, LANDSAT and MODIS land cover classifications, MODIS burned area, and WWF's ecoregion map. Researchers from ESA and Russia have also been working on biomass (or growing stock) mapping in Siberia. For example, they developed a pan-boreal growing stock volume map at 1-kilometer scale using hyper-temporal ENVISAT ASAR ScanSAR backscatter data. Using the annual PALSAR mosaics from 2007 to 2010, growing stock volume maps were retrieved based on a supervised random forest regression approach. This method is being used in the ESA/Russia ZAPAS project for Central Siberia biomass mapping. Spatially specific biomass maps of this region at higher resolution are desired for carbon cycle and climate change studies. In this study, our work focused on improving the resolution (~50 m) of a biomass map based on PALSAR L-band data and Landsat Vegetation Canopy Fraction products. GLAS data were carefully processed and screened using land cover classification, local slope, and acquisition dates. The biomass at the remaining footprints was estimated using a model developed from field measurements at GLAS footprints. The GLAS biomass samples were then aggregated into 1 Mg/ha bins of biomass, and mean VCF and PALSAR backscatter and textures were calculated for each of these biomass bins. The resulting biomass/signature data were used to train a random forest model for biomass mapping of the entire region from 50°N to 75°N and 80°E to 145°E. The spatial patterns of the new biomass map are much better than those of the previous maps due to spatially specific mapping at high resolution. The uncertainties of the field/GLAS and GLAS/imagery models were investigated using a bootstrap procedure, and the final biomass map was compared with previous maps.
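A sketch of the GLAS-to-imagery step, assuming scikit-learn: a random forest regression is trained on (backscatter, texture, VCF) signatures against GLAS-derived biomass and then applied wall-to-wall. The feature names and all values are synthetic stand-ins, not the study's data.

```python
# Random forest regression from imagery signatures to biomass (Mg/ha).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(-8, 2, n),     # PALSAR HV backscatter (dB), synthetic
    rng.normal(0.5, 0.1, n),  # a texture measure, synthetic
    rng.uniform(0, 80, n),    # Landsat VCF (%), synthetic
])
biomass = 2.0 * X[:, 2] + 5.0 * (X[:, 0] + 8) + rng.normal(0, 10, n)

model = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
model.fit(X, biomass)
print(model.oob_score_)      # out-of-bag R^2 as a quick sanity check
print(model.predict(X[:3]))  # wall-to-wall mapping applies this per pixel
```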
NASA Technical Reports Server (NTRS)
Rebeske, John J., Jr.; Rohlik, Harold E.
1953-01-01
An analytical investigation was made to determine from component performance characteristics the effect of air bleed at the compressor outlet on the acceleration characteristics of a typical high-pressure-ratio single-spool turbojet engine. Consideration of several operating lines on the compressor performance map with two turbine-inlet temperatures showed that for a minimum acceleration time the turbine-inlet temperature should be the maximum allowable, and the operating line on the compressor map should be as close to the surge region as possible throughout the speed range. Operation along such a line would require a continuously varying bleed area. A relatively simple two-step area bleed gives only a small increase in acceleration time over a corresponding variable-area bleed. For the modes of operation considered, over 84 percent of the total acceleration time was required to accelerate through the low-speed range; therefore, better low-speed compressor performance (higher pressure ratios and efficiencies) would give a significant reduction in acceleration time.
NASA Astrophysics Data System (ADS)
Qi, Wenke; Jiang, Pan; Lin, Dan; Chi, Xiaoping; Cheng, Min; Du, Yikui; Zhu, Qihe
2018-01-01
A mini time-sliced ion velocity map imaging photofragment translational spectrometer using low voltage acceleration has been constructed. The innovation of this apparatus is the adoption of a relatively low voltage (30-150 V) in place of the traditional high voltage (650-4000 V) to accelerate and focus the fragment ions. The overall length of the flight path is merely 12 cm. This instrument has many advantages, such as a compact structure, less interference, and ease of operation and control. Low voltage acceleration gives a longer turn-around time to the photofragment ions, forming a thicker Newton sphere, which provides sufficient time for slicing. Ion trajectory simulations have been performed to determine the structure dimensions and the operating voltages. The photodissociation and multiphoton ionization of O2 at 224.999 nm is used to calibrate the ion images and examine the overall performance of the new spectrometer. The velocity resolution (Δν/ν) of this spectrometer from O2 photodissociation is about 0.8%, which is better than most previous results using high acceleration voltages. For the case of CF3I dissociation at 277.38 nm, many CF3 vibrational states have been resolved, and the anisotropy parameter has been measured. The application of low voltage acceleration has shown its advantages in the ion velocity map imaging (VMI) apparatus. The miniaturization of VMI instruments can be realized on the premise of high resolution.
Accelerating MR Parameter Mapping Using Sparsity-Promoting Regularization in Parametric Dimension
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
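A toy version of the underlying idea: reconstruct an undersampled parametric series by demanding that each voxel's signal evolution is compressible along the parametric dimension. In this hedged sketch the smoothness prior is approximated by soft-thresholding DCT coefficients along that dimension inside a simple data-consistency loop; this is a stand-in chosen for brevity, not the paper's p-CS formulation, and all sizes and thresholds are illustrative.

```python
# POCS-style toy: alternate k-space data consistency with sparsification
# of each voxel's evolution along the parametric dimension (axis 1).
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(3)
n_vox, n_par = 64, 16
t = np.linspace(0.1, 2.0, n_par)
truth = np.exp(-np.outer(rng.uniform(0.5, 3.0, n_vox), t))  # smooth decays

mask = rng.random((n_vox, n_par)) < 0.4   # 40% of k-space kept, per frame
y = np.where(mask, np.fft.fft(truth, axis=0), 0)

x = np.zeros_like(truth)
for _ in range(100):
    k = np.fft.fft(x, axis=0)
    k[mask] = y[mask]                     # enforce sampled k-space data
    x = np.fft.ifft(k, axis=0).real
    c = dct(x, axis=1, norm="ortho")      # parametric-dimension transform
    x = idct(np.sign(c) * np.maximum(np.abs(c) - 0.01, 0),
             axis=1, norm="ortho")        # soft-threshold (compressibility)

print(np.linalg.norm(x - truth) / np.linalg.norm(truth))  # relative error
```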
Particle acceleration on a chip: A laser-driven micro-accelerator for research and industry
NASA Astrophysics Data System (ADS)
Yoder, R. B.; Travish, G.
2013-03-01
Particle accelerators are conventionally built from radio-frequency metal cavities, but this technology limits the maximum energy available and prevents miniaturization. In the past decade, laser-powered acceleration has been intensively studied as an alternative technology promising much higher accelerating fields in a smaller footprint and taking advantage of recent advances in photonics. Among the more promising approaches are those based on dielectric field-shaping structures. These "dielectric laser accelerators" (DLAs) scale with the laser wavelength employed and can be many orders of magnitude smaller than conventional accelerators; DLAs may enable the production of high-intensity, ultra-short relativistic electron bunches in a chip-scale device. When combined with a high-Z target or an optical-period undulator, these systems could produce high-brilliance x-rays from a breadbox-sized device having multiple applications in imaging, medicine, and homeland security. In our research program we have developed one such DLA, the Micro-Accelerator Platform (MAP). We describe the fundamental physics, our fabrication and testing program, and experimental results to date, along with future prospects for MAP-based light sources and some remaining challenges. Supported in part by the Defense Threat Reduction Agency and National Nuclear Security Administration.
DNA Probe Pooling for Rapid Delineation of Chromosomal Breakpoints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Chun-Mei; Kwan, Johnson; Baumgartner, Adolf
2009-01-30
Structural chromosome aberrations are hallmarks of many human genetic diseases. The precise mapping of translocation breakpoints in tumors is important for identification of genes with altered levels of expression, prediction of tumor progression, therapy response, or length of disease-free survival, as well as the preparation of probes for detection of tumor cells in peripheral blood. Similarly, in vitro fertilization (IVF) and preimplantation genetic diagnosis (PGD) for carriers of balanced, reciprocal translocations benefit from accurate breakpoint maps in the preparation of patient-specific DNA probes followed by a selection of normal or balanced oocytes or embryos. We expedited the process of breakpoint mapping and preparation of case-specific probes by utilizing physically mapped bacterial artificial chromosome (BAC) clones. Historically, breakpoint mapping is based on the definition of the smallest interval between proximal and distal probes. Thus, many of the DNA probes prepared for multi-clone and multi-color mapping experiments do not generate additional information. Our pooling protocol described here, with examples from thyroid cancer research and PGD, accelerates the delineation of translocation breakpoints without sacrificing resolution. The turnaround time from clone selection to mapping results using tumor or IVF patient samples can be as short as three to four days.
UNFOLD-SENSE: a parallel MRI method with self-calibration and artifact suppression.
Madore, Bruno
2004-08-01
This work aims at improving the performance of parallel imaging by using it with our "unaliasing by Fourier-encoding the overlaps in the temporal dimension" (UNFOLD) temporal strategy. A self-calibration method called "self, hybrid referencing with UNFOLD and GRAPPA" (SHRUG) is presented. SHRUG combines the UNFOLD-based sensitivity mapping strategy introduced in the TSENSE method by Kellman et al. (5) with the strategy introduced in the GRAPPA method by Griswold et al. (10). SHRUG merges the two approaches to alleviate their respective limitations, and provides fast self-calibration at any given acceleration factor. UNFOLD-SENSE further includes an UNFOLD artifact suppression scheme to significantly suppress artifacts and amplified noise produced by parallel imaging. This suppression scheme, which was published previously (4), is related to another method that was presented independently as part of TSENSE. While the two are equivalent at accelerations ≤2.0, the present approach is shown here to be significantly superior at accelerations >2.0, with up to double the artifact suppression at high accelerations. Furthermore, a slight modification of Cartesian SENSE is introduced, which allows departures from purely Cartesian sampling grids. This technique, termed variable-density SENSE (vdSENSE), allows the variable-density data required by SHRUG to be reconstructed with the simplicity and fast processing of Cartesian SENSE. UNFOLD-SENSE is given by the combination of SHRUG for sensitivity mapping, vdSENSE for reconstruction, and UNFOLD for artifact/amplified-noise suppression. The method was implemented, with online reconstruction, on both an SSFP and a myocardium-perfusion sequence. The results from six patients scanned with UNFOLD-SENSE are presented.
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-08
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward covering the programming model, HDFS configuration, and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy yields an improvement of about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.
Arikan, Murat; Preiner, Reinhold; Wimmer, Michael
2016-02-01
With the enormous advances in acquisition technology over recent years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud, and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method for texturing a set of depth maps in a preprocessing step and stitching them at runtime has been proposed to represent large scenes. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xuanhua; Luo, Xuan; Liang, Junling
GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate the iterative convergence. Unfortunately, consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, a coloring algorithm is adopted to separate the vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because of the large number of colors generally required for processing a large-scale graph with billions of vertices. We propose a light-weight asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on the Pareto principle (or 80-20 rule) about coloring algorithms, as we observed through masses of real-world graph coloring cases. We find that a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated in a very high degree of parallelism without violating the sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (CuSha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC). On all the tested applications and datasets, Frog is able to significantly outperform existing GPU-based graph processing systems except Gunrock and MapGraph. MapGraph gets better performance than Frog when running BFS on RoadNet-CA. The comparison between Gunrock and Frog is inconclusive. Frog can outperform Gunrock by more than 1.04X when running PageRank and SSSP, while the advantage of Frog is not obvious when running BFS and CC on some datasets, especially RoadNet-CA.
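A small sketch of the coloring observation behind Frog: a greedy coloring partitions the vertices into classes whose members share no edges, so each class can be updated in parallel without conflicts, and in practice the class sizes are skewed toward a few large colors. The tiny graph below is illustrative.

```python
# Greedy coloring: color classes are conflict-free parallel batches.
from collections import Counter

def greedy_coloring(adj):
    color = {}
    for v in adj:                           # visit vertices in fixed order
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
color = greedy_coloring(adj)
print(color)                    # vertex -> color class
print(Counter(color.values()))  # typically skewed: few colors hold most vertices
```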
NASA Astrophysics Data System (ADS)
Iannacone, J.; Berti, M.; Allievi, J.; Del Conte, S.; Corsini, A.
2013-12-01
Spaceborne InSAR has proven to be very valuable for landslide detection. In particular, extremely slow landslides (Cruden and Varnes, 1996) can now be clearly identified, thanks to the millimetric precision reached by recent multi-interferometric algorithms. The typical approach in radar interpretation for landslide mapping is based on the average annual velocity of the deformation, calculated over the entire time series. The Hotspot and Cluster Analysis (Lu et al., 2012) and the PSI-based matrix approach (Cigna et al., 2013) are examples of landslide mapping techniques based on average annual velocities. However, slope movements can be affected by non-linear deformation trends (e.g., reactivation of dormant landslides, deceleration due to natural or man-made slope stabilization, seasonal activity, etc.). Therefore, analyzing deformation time series is crucial in order to fully characterize slope dynamics. While this is relatively simple to carry out manually when dealing with small datasets, time series analysis over regional-scale datasets requires automated classification procedures. Berti et al. (2013) developed an automatic procedure for the analysis of InSAR time series based on a sequence of statistical tests. The analysis classifies the time series into six distinctive target trends (0 = uncorrelated; 1 = linear; 2 = quadratic; 3 = bilinear; 4 = discontinuous without constant velocity; 5 = discontinuous with change in velocity) which are likely to represent different slope processes. The analysis also provides a series of descriptive parameters which can be used to characterize the temporal changes of ground motion. All the classification algorithms were integrated into a Graphical User Interface called PSTime. We investigated an area of about 2000 km2 in the Northern Apennines of Italy using the SqueeSAR™ algorithm (Ferretti et al., 2011). Two Radarsat-1 data stacks, comprising 112 scenes in descending orbit and 124 scenes in ascending orbit, were processed. The time coverage spans April 2003 to November 2012, with an average temporal frequency of 1 scene/month. Radar interpretation was carried out by considering average annual velocities as well as acceleration/deceleration trends evidenced by PSTime. Altogether, from the ascending and descending geometries respectively, this approach allowed the detection of 115 and 112 potential landslides on the basis of average displacement rate, and 77 and 79 landslides on the basis of acceleration trends. In conclusion, time series analysis proved very valuable for landslide mapping. In particular, it highlighted areas with marked acceleration in a specific period while still showing a low average annual velocity over the entire analysis period. On the other hand, even in areas with high average annual velocity, time series analysis was of primary importance to characterize the slope dynamics in terms of acceleration events.
Hayes, Kathryn J; Eljiz, Kathy; Dadich, Ann; Fitzgerald, Janna-Anneke; Sloan, Terry
2015-01-01
The purpose of this paper is to provide a retrospective analysis of computer simulation's role in accelerating individual innovation adoption decisions. The process innovation examined is Lean Systems Thinking, and the organizational context is the imaging department of an Australian public hospital. Intrinsic case study methods, including observation, interviews with radiology and emergency personnel about scheduling procedures, mapping of patient appointment processes, and document analysis, were used over three years and then complemented with retrospective interviews with key hospital staff. The multiple data sources and methods were combined in a pragmatic and reflexive manner to explore an extreme case that provides potential to act as an instructive template for effective change. Computer simulation of process change ideas offered by staff to improve patient flow accelerated the adoption of the process changes, largely because animated computer simulation permitted experimentation (trialability), provided observable predictions of change results (observability), and minimized perceived risk. The difficulty of making accurate comparisons between time periods in a health care setting is acknowledged. This work has implications for policy, practice and theory, particularly for inducing the rapid diffusion of process innovations to address challenges facing health service organizations and national health systems. The research demonstrates the value of animated computer simulation in presenting the need for change, identifying options, and predicting change outcomes, and is the first work to indicate the importance of trialability, observability and risk reduction in individual adoption decisions in health services.
A Free Database of Auto-detected Full-sun Coronal Hole Maps
NASA Astrophysics Data System (ADS)
Caplan, R. M.; Downs, C.; Linker, J.
2016-12-01
We present a 4-yr (06/10/2010 to 08/18/14 at 6-hr cadence) database of full-sun synchronic EUV and coronal hole (CH) maps made available on a dedicated web site (http://www.predsci.com/chd). The maps are generated using STEREO/EUVI A&B 195Å and SDO/AIA 193Å images through an automated pipeline (Caplan et al. (2016), ApJ 823, 53). Specifically, the original data are preprocessed with PSF deconvolution, a nonlinear limb-brightening correction, and a nonlinear inter-instrument intensity normalization. Coronal holes are then detected in the preprocessed images using a GPU-accelerated region-growing segmentation algorithm. The final results from all three instruments are then merged and projected to form full-sun sine-latitude maps. All the software used in processing the maps is provided and can easily be adapted for use with other instruments and channels. We describe the data pipeline and show examples from the database. We also detail recent CH-detection validation experiments using synthetic EUV emission images produced from global thermodynamic MHD simulations.
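A minimal CPU sketch of the region-growing step on a 2-D intensity map: grow from strongly dark seed pixels while neighbors stay below a looser threshold (two-threshold growing). The thresholds and synthetic image are illustrative, not the pipeline's parameters.

```python
# Two-threshold region growing for dark (coronal-hole-like) regions.
import numpy as np

def region_grow(img, seed_thresh, grow_thresh):
    mask = np.zeros(img.shape, dtype=bool)
    stack = list(zip(*np.where(img < seed_thresh)))   # seed pixels
    while stack:
        i, j = stack.pop()
        if mask[i, j]:
            continue
        mask[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                    and not mask[ni, nj] and img[ni, nj] < grow_thresh):
                stack.append((ni, nj))
    return mask

rng = np.random.default_rng(7)
euv = rng.uniform(50, 300, (64, 64))                  # synthetic quiet sun
euv[20:30, 20:35] = rng.uniform(5, 40, (10, 15))      # a synthetic dark hole
print(region_grow(euv, seed_thresh=20, grow_thresh=45).sum())
```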
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunz, Josiah; Snopok, Pavel; Berz, Martin
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map--Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels, which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
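A toy illustration of the hybrid scheme: a deterministic transfer map carries (x, x') through a whole cell in one application, then a Monte-Carlo kick adds the stochastic multiple-scattering contribution. The map coefficients and scattering angle below are illustrative, not COSY Infinity output.

```python
# Hybrid transfer-map + Monte-Carlo step for an ensemble of particles.
import numpy as np

rng = np.random.default_rng(11)

def transfer_map(state):
    # Stand-in for a high-order map of an entire cell (linear + weak cubic;
    # the 0.8 factor mimics damping from energy loss in the absorber)
    x, xp = state[:, 0], state[:, 1]
    return np.column_stack([x + 0.5 * xp, -0.3 * x + 0.8 * xp - 1e-2 * x**3])

def scatter_kick(state, theta_rms=2e-3):
    # Monte-Carlo correction for stochastic scattering/straggling in matter
    state[:, 1] += theta_rms * rng.standard_normal(len(state))
    return state

beam = rng.normal(0.0, 1e-2, (10000, 2))   # (x [m], x' [rad]) ensemble
for _ in range(50):                        # 50 cooling-cell passes
    beam = scatter_kick(transfer_map(beam))
print(beam.std(axis=0))                    # emittance-like spread check
```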
STS-107 Microgravity Environment Summary Report
NASA Technical Reports Server (NTRS)
Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; Reckhart, Timothy
2005-01-01
This summary report presents the results of the processed acceleration data measured aboard the Columbia orbiter during the STS-107 microgravity mission from January 16 to February 1, 2003. Two accelerometer systems were used to measure the acceleration levels due to vehicle and science operations activities that took place during the 16-day mission. Due to the lack of precise timeline information regarding some payloads' operations, not all of the activities were analyzed for this report. However, a general characterization of the microgravity environment of the Columbia Space Shuttle during the 16-day mission is presented, followed by a more specific characterization of the environment for some designated payloads during their operations. Some specific quasi-steady and vibratory microgravity environment characterization analyses were performed for the following payloads: Structure of Flame Balls at Low Lewis-number-2, Laminar Soot Processes-2, Mechanics of Granular Materials-3, and Water Mist Fire-Suppression Experiment. The Physical Science Division of the National Aeronautics and Space Administration sponsors the Orbital Acceleration Research Experiment and the Space Acceleration Measurement System for Free Flyer to support microgravity science experiments that require microgravity acceleration measurements. On January 16, 2003, both the Orbital Acceleration Research Experiment and the Space Acceleration Measurement System for Free Flyer accelerometer systems were launched on the Columbia Space Transportation System-107 from the Kennedy Space Center. The Orbital Acceleration Research Experiment supported science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System for Free Flyer unit supported experiments requiring vibratory acceleration measurement. The Columbia reduced gravity environment analysis presented in this report uses acceleration data collected by these two sets of accelerometer systems: the Orbital Acceleration Research Experiment is a low-frequency sensor, which measures acceleration up to 1 Hz, but the 1 Hz acceleration data are trimmean filtered to yield much lower frequency acceleration data up to 0.01 Hz. This filtered data can be mapped to other locations for characterizing the quasi-steady environment for payloads and the vehicle. The Space Acceleration Measurement System for Free Flyer measures vibratory acceleration in the range of 0.01 to 200 Hz at multiple measurement locations. The vibratory acceleration data measured by this system are used to assess the local vibratory environment for payloads as well as to measure the disturbances caused by vehicle systems, crew exercise devices, and payload operations. This summary report presents analysis of selected quasi-steady and vibratory activities measured by these two accelerometers during the Columbia 16-day microgravity mission from January 16 to February 1, 2003.
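[Editorial sketch] The report's trimmean filter is not specified in detail in the abstract; the sketch below shows the general idea of a sliding trimmed-mean, which discards extreme samples in each window before averaging so that transient spikes do not contaminate the quasi-steady estimate. The window length and 50% trim fraction are assumptions for illustration.

```python
import numpy as np

def trimmean_filter(accel, window, trim_frac=0.5):
    """Sliding trimmed-mean of a 1-D acceleration series.

    Within each window, the highest and lowest trim_frac/2 fractions of
    samples are discarded and the remainder averaged, suppressing
    transients so the quasi-steady component remains.
    """
    half = window // 2
    k = int(window * trim_frac / 2)          # samples trimmed per tail
    out = np.empty(len(accel), dtype=float)
    for i in range(len(accel)):
        seg = np.sort(accel[max(0, i - half):i + half + 1])
        if len(seg) > 2 * k:
            seg = seg[k:len(seg) - k]
        out[i] = seg.mean()
    return out
```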
Frankel, Arthur; Harmsen, Stephen; Mueller, Charles; Calais, Eric; Haase, Jennifer
2011-01-01
We have produced probabilistic seismic hazard maps of Haiti for peak ground acceleration and response spectral accelerations that include the hazard from the major crustal faults, subduction zones, and background earthquakes. The hazard from the Enriquillo-Plantain Garden, Septentrional, and Matheux-Neiba fault zones was estimated using fault slip rates determined from GPS measurements. The hazard from the subduction zones along the northern and southeastern coasts of Hispaniola was calculated from slip rates derived from GPS data and the overall plate motion. Hazard maps were made for a firm-rock site condition and for a grid of shallow shear-wave velocities estimated from topographic slope. The maps show substantial hazard throughout Haiti, with the highest hazard along the Enriquillo-Plantain Garden and Septentrional fault zones. The Matheux-Neiba Fault exhibits high hazard in the maps for 2% probability of exceedance in 50 years, although its slip rate is poorly constrained.
Detecting chaos in particle accelerators through the frequency map analysis method.
Papaphilippou, Yannis
2014-06-01
The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role not only during the accelerator design phase but also during operation. After describing the nature of non-linear effects and their impact on performance parameters of different particle accelerator categories, the theory of non-linear particle motion is outlined. The recent developments on the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and guide the correction of non-linear effects is demonstrated both in particle tracking simulations and in experimental data.
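[Editorial sketch] Frequency map analysis detects chaos from the variation of the betatron tunes over time. A minimal Python sketch of that indicator, using a plain FFT peak as a crude tune estimator (production codes use refined estimators such as NAFF; the half-split of the tracking data and the log-scale diffusion index follow Laskar's criterion, but all numerical details here are assumptions):

```python
import numpy as np

def tune(x):
    """Dominant fractional tune of a turn-by-turn signal via FFT peak."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    return np.argmax(spec) / len(x)          # bin k corresponds to k/N

def diffusion_index(x, y):
    """Frequency-map diffusion indicator from turn-by-turn (x, y) data.

    Tunes are computed on the first and second halves of the tracking;
    a large tune change signals chaotic motion.
    """
    n = len(x) // 2
    dnu_x = tune(x[n:]) - tune(x[:n])
    dnu_y = tune(y[n:]) - tune(y[:n])
    return np.log10(np.hypot(dnu_x, dnu_y) + 1e-16)
```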
Acceleration of planes segmentation using normals from previous frame
NASA Astrophysics Data System (ADS)
Gritsenko, Pavel; Gritsenko, Igor; Seidakhmet, Askar; Abduraimov, Azizbek
2017-12-01
One of the major problems in the integration of robots is making them able to function in a human environment. In terms of computer vision, the major feature of human-made rooms is the presence of planes [1, 2, 20, 21, 23]. In this article, we present an algorithm dedicated to increasing the speed of plane segmentation. The algorithm uses information about the location of a plane and its normal vector to speed up the segmentation process in the next frame. In conjunction with this, we address such aspects of ICP SLAM as performance and map representation.
NASA Technical Reports Server (NTRS)
Martin, Gary L.; Baugher, Charles R.; Delombard, Richard
1990-01-01
In order to define the acceleration requirements for future Shuttle and Space Station Freedom payloads, methods and hardware characterizing accelerations on microgravity experiment carriers are discussed. The different aspects of the acceleration environment and the acceptable disturbance levels are identified. The space acceleration measurement system features an adjustable bandwidth, wide dynamic range, data storage, and ability to be easily reconfigured and is expected to fly on the Spacelab Life Sciences-1. The acceleration characterization and analysis project describes the Shuttle acceleration environment and disturbance mechanisms, and facilitates the implementation of the microgravity research program.
The role of partial knowledge in statistical word learning
Fricker, Damian C.; Yu, Chen; Smith, Linda B.
2013-01-01
A critical question about the nature of human learning is whether it is an all-or-none or a gradual, accumulative process. Associative and statistical theories of word learning rely critically on the latter assumption: that the process of learning a word's meaning unfolds over time. That is, learning the correct referent for a word involves the accumulation of partial knowledge across multiple instances. Some theories also make an even stronger claim: partial knowledge of one word–object mapping can speed up the acquisition of other word–object mappings. We present three experiments that test and verify these claims by exposing learners to two consecutive blocks of cross-situational learning, in which half of the words and objects in the second block were those that participants failed to learn in Block 1. In line with an accumulative account, re-exposure to these mis-mapped items accelerated the acquisition of both previously experienced mappings and wholly new word–object mappings. But how does partial knowledge of some words speed the acquisition of others? We consider two hypotheses. First, partial knowledge of a word could reduce the amount of information required for it to reach threshold, and the supra-threshold mapping could subsequently aid in the acquisition of new mappings. Alternatively, partial knowledge of a word's meaning could be useful for disambiguating the meanings of other words even before the threshold of learning is reached. We construct and compare computational models embodying each of these hypotheses and show that the latter provides a better explanation of the empirical data. PMID:23702980
Revision of Time-Independent Probabilistic Seismic Hazard Maps for Alaska
Wesson, Robert L.; Boyd, Oliver S.; Mueller, Charles S.; Bufe, Charles G.; Frankel, Arthur D.; Petersen, Mark D.
2007-01-01
We present here time-independent probabilistic seismic hazard maps of Alaska and the Aleutians for peak ground acceleration (PGA) and 0.1, 0.2, 0.3, 0.5, 1.0 and 2.0 second spectral acceleration at probability levels of 2 percent in 50 years (annual probability of 0.000404), 5 percent in 50 years (annual probability of 0.001026) and 10 percent in 50 years (annual probability of 0.0021). These maps represent a revision of existing maps based on newly obtained data and assumptions reflecting best current judgments about methodology and approach. These maps have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States. A significant improvement relative to the 2002 methodology is the ability to include variable slip rate along a fault where appropriate. These maps incorporate new data, the responses to comments received at workshops held in Fairbanks and Anchorage, Alaska, in May, 2005, and comments received after draft maps were posted on the National Seismic Hazard Mapping Web Site. These maps will be proposed for adoption in future revisions to the International Building Code. In this documentation we describe the maps and in particular explain and justify changes that have been made relative to the 1999 maps. We are also preparing a series of experimental maps of time-dependent hazard that will be described in future documents.
Interstellar Mapping and Acceleration Probe (IMAP)
NASA Astrophysics Data System (ADS)
Schwadron, Nathan
2016-04-01
Our piece of cosmic real estate, the heliosphere, is the domain of all human existence - an astrophysical case-history of the successful evolution of life in a habitable system. By exploring our global heliosphere and its myriad interactions, we develop key physical knowledge of the interstellar interactions that influence exoplanetary habitability as well as the distant history and destiny of our solar system and world. IBEX was the first mission to explore the global heliosphere and in concert with Voyager 1 and Voyager 2 is discovering a fundamentally new and uncharted physical domain of the outer heliosphere. In parallel, Cassini/INCA maps the global heliosphere at energies (~5-55 keV) above those measured by IBEX. The enigmatic IBEX ribbon and the INCA belt were unanticipated discoveries demonstrating that much of what we know or think we understand about the outer heliosphere needs to be revised. The next quantum leap enabled by IMAP will open new windows on the frontier of Heliophysics at a time when the space environment is rapidly evolving. IMAP, with 100 times the combined resolution and sensitivity of IBEX and INCA, will discover the substructure of the IBEX ribbon and will reveal in unprecedented resolution global maps of our heliosphere. The remarkable synergy between IMAP, Voyager 1 and Voyager 2 will remain for at least the next decade as Voyager 1 pushes further into the interstellar domain and Voyager 2 moves through the heliosheath. The "A" in IMAP refers to acceleration of energetic particles. With its combination of highly sensitive pickup and suprathermal ion sensors, IMAP will provide the species and spectral coverage as well as unprecedented temporal resolution to associate emerging suprathermal tails with interplanetary structures and discover underlying physical acceleration processes. These key measurements will provide what has been a critical missing piece of suprathermal seed particles in our understanding of particle acceleration to high energies in the solar-heliospheric system and by extension to other planetary and astrophysical paradigms. IMAP, like ACE before it, will be a keystone of the Heliophysics System Observatory by providing comprehensive cosmic ray, energetic particle, pickup ion, suprathermal ion, neutral atom, solar wind, solar wind heavy ion, and magnetic field observations to diagnose the changing space environment and understand the fundamental origins of particle acceleration.
Kadlecek, Stephen; Hamedani, Hooman; Xu, Yinan; Emami, Kiarash; Xin, Yi; Ishii, Masaru; Rizi, Rahim
2013-10-01
Alveolar oxygen tension (Pao2) is sensitive to the interplay between local ventilation, perfusion, and alveolar-capillary membrane permeability, and thus reflects physiologic heterogeneity of healthy and diseased lung function. Several hyperpolarized helium (3He) magnetic resonance imaging (MRI)-based Pao2 mapping techniques have been reported, and considerable effort has gone toward reducing Pao2 measurement error. We present a new Pao2 imaging scheme, using parallel accelerated MRI, which significantly reduces measurement error. The proposed Pao2 mapping scheme was computer-simulated and was tested on both phantoms and five human subjects. Where possible, correspondence between actual local oxygen concentration and derived values was assessed for both bias (deviation from the true mean) and imaging artifact (deviation from the true spatial distribution). Phantom experiments demonstrated a significantly reduced coefficient of variation using the accelerated scheme. Simulation results support this observation and predict that correspondence between the true spatial distribution and the derived map is always superior using the accelerated scheme, although the improvement becomes less significant as the signal-to-noise ratio increases. Paired measurements in the human subjects, comparing accelerated and fully sampled schemes, show a reduced Pao2 distribution width for 41 of 46 slices. In contrast to proton MRI, acceleration of hyperpolarized imaging has no signal-to-noise penalty; its use in Pao2 measurement is therefore always beneficial. Comparison of multiple schemes shows that the benefit arises from a longer time-base during which oxygen-induced depolarization modifies the signal strength. Demonstration of the accelerated technique in human studies shows the feasibility of the method and suggests that measurement error is reduced here as well, particularly at low signal-to-noise levels. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
Chow, James C.L.; Grigorov, Grigor N.; Yazdani, Nuri
2006-01-01
A custom-made computer program, SWIMRT, to construct a "multileaf collimator (MLC) machine" file for intensity-modulated radiotherapy (IMRT) fluence maps was developed using MATLAB® and the sliding window algorithm. The user can either import a fluence map with a graphical file format created by an external treatment-planning system such as Pinnacle3 or create his or her own fluence map using the matrix editor in the program. Through comprehensive calibrations of the dose and the dimension of the imported fluence field, the user can use associated image-processing tools such as field resizing and edge trimming to modify the imported map. When the processed fluence map is suitable, an "MLC machine" file is generated for our Varian 21 EX linear accelerator with a 120-leaf Millennium MLC. This machine file is transferred to the MLC console of the LINAC to control the continuous motions of the leaves during beam irradiation. An IMRT field is then irradiated with the 2D intensity profiles, and the irradiated profiles are compared to the imported or modified fluence map. This program was verified and tested using film dosimetry to address the following uncertainties: (1) the mechanical limitation due to the leaf width and maximum traveling speed, and (2) the dosimetric limitation due to the leaf leakage/transmission and penumbra effect. Because the fluence map can be edited, resized, and processed according to the requirement of a study, SWIMRT is essential in studying and investigating the IMRT technique using the sliding window algorithm. Using this program, future work on the algorithm may include redistributing the time space between segmental fields to enhance the fluence resolution, and readjusting the timing of each leaf during delivery to avoid small fields. Possible clinical utilities and examples for SWIMRT are given in this paper. PACS numbers: 87.53.Kn, 87.53.St, 87.53.Uv PMID:17533330
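[Editorial sketch] SWIMRT itself is a MATLAB program; the sketch below only illustrates the textbook sliding-window leaf-sequencing sweep (Spirou-Chui style) for a single leaf pair in Python, not SWIMRT's actual implementation. Monitor units and leaf-travel time are treated interchangeably by assuming a constant dose rate; the step size and speed limit are illustrative.

```python
def sliding_window(fluence, dx=0.5, vmax=2.0):
    """Leading/trailing leaf arrival 'times' (in MU) for one leaf pair.

    fluence: desired 1-D profile in monitor units along leaf travel.
    Both leaves sweep left to right; a point's exposure equals the MU
    elapsed between the leading leaf opening it and the trailing leaf
    closing it, so t_close[i] - t_open[i] == fluence[i] by construction.
    """
    t_open, t_close = [0.0], [fluence[0]]
    for i in range(1, len(fluence)):
        step = dx / vmax                  # minimum MU to traverse dx
        df = fluence[i] - fluence[i - 1]
        if df >= 0:                       # rising profile: leading leaf races
            t_open.append(t_open[-1] + step)
            t_close.append(t_close[-1] + step + df)
        else:                             # falling profile: trailing leaf races
            t_open.append(t_open[-1] + step - df)
            t_close.append(t_close[-1] + step)
    return t_open, t_close
```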
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loebman, Sarah R.; Ivezic, Zeljko; Quinn, Thomas R.
2012-10-10
We search for evidence of dark matter in the Milky Way by utilizing the stellar number density distribution and kinematics measured by the Sloan Digital Sky Survey (SDSS) to heliocentric distances exceeding ~10 kpc. We employ the cylindrically symmetric form of Jeans equations and focus on the morphology of the resulting acceleration maps, rather than the normalization of the total mass as done in previous, mostly local, studies. Jeans equations are first applied to a mock catalog based on a cosmologically derived N-body+SPH simulation, and the known acceleration (gradient of gravitational potential) is successfully recovered. The same simulation is also used to quantify the impact of dark matter on the total acceleration. We use Galfast, a code designed to quantitatively reproduce SDSS measurements and selection effects, to generate a synthetic stellar catalog. We apply Jeans equations to this catalog and produce two-dimensional maps of stellar acceleration. These maps reveal that in a Newtonian framework, the implied gravitational potential cannot be explained by visible matter alone. The acceleration experienced by stars at galactocentric distances of ~20 kpc is three times larger than what can be explained by purely visible matter. The application of an analytic method for estimating the dark matter halo axis ratio to SDSS data implies an oblate halo with q_DM = 0.47 ± 0.14 within the same distance range. These techniques can be used to map the dark matter halo to much larger distances from the Galactic center using upcoming deep optical surveys, such as LSST.
A Hybrid CPU-GPU Accelerated Framework for Fast Mapping of High-Resolution Human Brain Connectome
Ren, Ling; Xu, Mo; Xie, Teng; Gong, Gaolang; Xu, Ningyi; Yang, Huazhong; He, Yong
2013-01-01
Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, there is a very large amount of brain imaging data that have been collected, and there are very high requirements for the computational capabilities used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's correlation coefficient between all pairs of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58,000 nodes and a sparsity range from 0.02% to 0.17%. Next, graphic properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network took 80-150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and has the potential to accelerate the mapping of the human brain connectome in normal and disease states. PMID:23675425
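[Editorial sketch] The core computation is a voxelwise correlation matrix thresholded to a target sparsity. A toy CPU sketch in Python follows; at 58,000 nodes this very matrix is what the paper offloads to GPUs, so real use would require blockwise or GPU evaluation. The thresholding rule shown (keep the strongest absolute correlations) is an assumption for illustration.

```python
import numpy as np

def functional_network(ts, sparsity=0.001):
    """Build an unweighted brain graph from voxel time series.

    ts: (n_voxels, n_timepoints) array of BOLD signals. Keeps the
    strongest correlations so the fraction of retained edges equals
    `sparsity`, then returns the boolean adjacency matrix.
    """
    r = np.corrcoef(ts)                          # Pearson correlation matrix
    np.fill_diagonal(r, 0.0)                     # no self-loops
    n = len(r)
    n_edges = max(int(sparsity * n * (n - 1) / 2), 1)
    upper = np.abs(r[np.triu_indices_from(r, 1)])
    thresh = np.sort(upper)[-n_edges]            # edge-count threshold
    return np.abs(r) >= thresh
```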
NASA Astrophysics Data System (ADS)
Milshteyn, Eugene; von Morze, Cornelius; Reed, Galen D.; Shang, Hong; Shin, Peter J.; Larson, Peder E. Z.; Vigneron, Daniel B.
2018-05-01
Acceleration of dynamic 2D (T2 Mapping) and 3D hyperpolarized 13C MRI acquisitions using the balanced steady-state free precession sequence was achieved with a specialized reconstruction method, based on the combination of low rank plus sparse and local low rank reconstructions. Methods were validated using both retrospectively and prospectively undersampled in vivo data from normal rats and tumor-bearing mice. Four-fold acceleration of 1-2 mm isotropic 3D dynamic acquisitions with 2-5 s temporal resolution and two-fold acceleration of 0.25-1 mm2 2D dynamic acquisitions was achieved. This enabled visualization of the biodistribution of [2-13C]pyruvate, [1-13C]lactate, [13C, 15N2]urea, and HP001 within heart, kidneys, vasculature, and tumor, as well as calculation of high resolution T2 maps.
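[Editorial sketch] The paper's reconstruction combines low rank plus sparse (L+S) with local low rank; its exact algorithm is not reproduced here. As a minimal illustration of the L+S mechanism alone, the following Python sketch alternates singular-value thresholding for the low-rank part with soft-thresholding for the sparse part, on an already-gridded space-time matrix (a full CS reconstruction would also enforce k-space data consistency; the regularization weights are assumptions).

```python
import numpy as np

def soft(x, t):
    """Complex-safe soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(X, t):
    """Singular value thresholding: shrink singular values by t."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft(s, t)) @ Vt

def lps_decompose(M, lam_l=1.0, lam_s=0.05, n_iter=50):
    """Split a (pixels x frames) matrix M into L + S.

    L captures the slowly varying background (low rank across frames);
    S captures dynamic signal changes (sparse residual).
    """
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, lam_l)
        S = soft(M - L, lam_s)
    return L, S
```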
The status and road map of Turkish Accelerator Center (TAC)
NASA Astrophysics Data System (ADS)
Yavaş, Ö.
2012-02-01
Turkish Accelerator Center (TAC) project is supported by the State Planning Organization (SPO) of Turkey and coordinated by Ankara University. After having completed the Feasibility Report (FR) in 2000 and the Conceptual Design Report (CDR) in 2005, the third phase of the project started in 2006 as an inter-university project including ten Turkish universities with the support of SPO. The third phase of the project has two main scientific goals: to prepare the Technical Design Report (TDR) of TAC and, as a first step, to establish an Infrared Free Electron Laser (IR FEL) facility named the Turkish Accelerator and Radiation Laboratory at Ankara (TARLA). The facility is planned to be completed in 2015 and will be based on a 15-40 MeV superconducting linac. In this paper, the main aims, national and regional importance, main parts, main parameters, status, and road map of the Turkish Accelerator Center will be presented.
Monitoring oil displacement processes with k-t accelerated spin echo SPI.
Li, Ming; Xiao, Dan; Romero-Zerón, Laura; Balcom, Bruce J
2016-03-01
Magnetic resonance imaging (MRI) is a robust tool to monitor oil displacement processes in porous media. Conventional MRI measurement times can be lengthy, which hinders monitoring time-dependent displacements. Knowledge of the oil and water microscopic distribution is important because their pore scale behavior reflects the oil trapping mechanisms. The oil and water pore scale distribution is reflected in the magnetic resonance T2 signal lifetime distribution. In this work, a pure phase-encoding MRI technique, spin echo SPI (SE-SPI), was employed to monitor oil displacement during water flooding and polymer flooding. A k-t acceleration method, with low-rank matrix completion, was employed to improve the temporal resolution of the SE-SPI MRI measurements. Comparison to conventional SE-SPI T2 mapping measurements revealed that the k-t accelerated measurement was more sensitive and provided higher-quality results. It was demonstrated that the k-t acceleration decreased the average measurement time from 66.7 to 20.3 min in this work. A perfluorinated oil, containing no 1H, and H2O brine were employed to distinguish oil and water phases in model flooding experiments. High-quality 1D water saturation profiles were acquired from the k-t accelerated SE-SPI measurements. Spatially and temporally resolved T2 distributions were extracted from the profile data. The shift in the 1H T2 distribution of water in the pore space to longer lifetimes during water flooding and polymer flooding is consistent with increased water content in the pore space. Copyright © 2015 John Wiley & Sons, Ltd.
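[Editorial sketch] The low-rank matrix completion behind k-t acceleration can be illustrated with the basic singular-value-thresholding iteration below (Python): shrink the singular values of the current estimate toward low rank, then restore the measured k-t samples. The threshold and iteration count are assumptions, and the paper's actual algorithm may differ in detail.

```python
import numpy as np

def kt_complete(Y, mask, tau=1.0, n_iter=100):
    """Low-rank completion of an undersampled k-t data matrix.

    Y: (k_points, time) measurements, zero where unsampled.
    mask: boolean array, True where Y was actually acquired.
    Alternates singular-value shrinkage with data consistency.
    """
    X = Y.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink toward low rank
        X[mask] = Y[mask]                         # keep measured samples
    return X
```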
Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts
2015-01-01
Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...
A Web-based Visualization System for Three Dimensional Geological Model using Open GIS
NASA Astrophysics Data System (ADS)
Nemoto, T.; Masumoto, S.; Nonogaki, S.
2017-12-01
A three-dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer plays the role of mapping horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions has been enabled using PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the WMS (Web Map Service) and WPS OGC standards. Horizontal cross sections are overlaid on the topographic map. A vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram. The user can visualize them from various angles by mouse operation. WebGL is utilized for 3D visualization. WebGL is a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software. The geological boundary surfaces can be downloaded so that the geologic structure can be incorporated into CAD designs and models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
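[Editorial sketch] A client can fetch such WMS-delivered cross sections programmatically. The Python sketch below uses the OWSLib library's WMS client; the endpoint URL, layer name, and bounding box are hypothetical placeholders, not the system's actual service.

```python
from owslib.wms import WebMapService

# Hypothetical endpoint and layer names, for illustration only.
wms = WebMapService("https://example.org/cgi-bin/mapserv?map=geo3d.map",
                    version="1.1.1")
img = wms.getmap(layers=["horizontal_section"],   # one depth slice
                 srs="EPSG:4326",
                 bbox=(139.0, 35.0, 140.0, 36.0),
                 size=(512, 512),
                 format="image/png",
                 transparent=True)
with open("section.png", "wb") as f:
    f.write(img.read())
```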
Accelerating EPI distortion correction by utilizing a modern GPU-based parallel computation.
Yang, Yao-Hao; Huang, Teng-Yi; Wang, Fu-Nien; Chuang, Tzu-Chao; Chen, Nan-Kuei
2013-04-01
The combination of phase demodulation and field mapping is a practical method to correct echo planar imaging (EPI) geometric distortion. However, since phase dispersion accumulates in each phase-encoding step, the calculation complexity of phase demodulation is Ny-fold higher than conventional image reconstructions. Thus, correcting EPI images via phase demodulation is generally a time-consuming task. Parallel computing employing general-purpose calculations on graphics processing units (GPU) can accelerate scientific computing if the algorithm is parallelized. This study proposes a method that incorporates the GPU-based technique into phase demodulation calculations to reduce computation time. The proposed parallel algorithm was applied to a PROPELLER-EPI diffusion tensor data set. The GPU-based phase demodulation method correctly reduced the EPI distortion and accelerated the computation. The total reconstruction time of the 16-slice PROPELLER-EPI diffusion tensor images with matrix size of 128 × 128 was reduced from 1,754 seconds to 101 seconds by utilizing the parallelized 4-GPU program. GPU computing is a promising method to accelerate EPI geometric correction. The resulting reduction in computation time of phase demodulation should accelerate postprocessing for studies performed with EPI, and should help make the PROPELLER-EPI technique practical for clinical use. Copyright © 2011 by the American Society of Neuroimaging.
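[Editorial sketch] The Ny-fold cost arises because each phase-encode line must be demodulated with the off-resonance phase accrued at its own echo time. A CPU sketch of such a conjugate-phase style reconstruction in Python follows; the line-ordering and sign conventions are assumptions (real EPI trajectories vary), and the per-line loop is exactly what a GPU implementation would parallelize.

```python
import numpy as np

def conjugate_phase_recon(kspace, fieldmap, esp):
    """Distortion-corrected EPI reconstruction by phase demodulation.

    kspace:   (Ny, Nx) EPI k-space data for one slice.
    fieldmap: (Ny, Nx) off-resonance map in Hz.
    esp:      echo spacing in seconds between phase-encode lines.
    Assumes lines are acquired in fftfreq order at times m * esp.
    """
    Ny, Nx = kspace.shape
    ky = np.fft.fftfreq(Ny)                  # cycles per pixel along y
    hybrid = np.fft.ifft(kspace, axis=1)     # inverse FFT along x only
    y = np.arange(Ny)
    img = np.zeros((Ny, Nx), dtype=complex)
    for m in range(Ny):                      # one demodulation per ky line
        t_m = m * esp                        # acquisition time of line m
        phase = np.exp(2j * np.pi * (ky[m] * y[:, None] + fieldmap * t_m))
        img += hybrid[m][None, :] * phase
    return img / Ny
```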
NASA Astrophysics Data System (ADS)
Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon
2017-01-01
With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on the time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration achieved by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
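[Editorial sketch] For readers unfamiliar with the kernel being parallelized, here is a single-threaded ordinary kriging sketch in Python with a spherical variogram. The variogram parameters are illustrative assumptions; the per-grid-point solve in the loop is the hot spot that MPI, MapReduce, or GPGPU implementations distribute.

```python
import numpy as np

def ordinary_kriging(xy, z, grid_xy, sill=1.0, rng_=500.0, nugget=0.0):
    """Ordinary kriging predictions at grid_xy from samples (xy, z).

    xy: (n, 2) sample coordinates; z: (n,) values; grid_xy: (m, 2).
    Uses a spherical semivariogram with assumed parameters.
    """
    def variogram(h):
        h = np.minimum(h / rng_, 1.0)                 # clamp beyond range
        return nugget + (sill - nugget) * (1.5 * h - 0.5 * h ** 3)

    n = len(xy)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.ones((n + 1, n + 1))                       # kriging system matrix
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                                     # Lagrange multiplier
    out = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):                   # parallelizable loop
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy - p, axis=1))
        w = np.linalg.solve(A, b)
        out[i] = w[:n] @ z
    return out
```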
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance acceleration by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) have enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating cell boundary formation by Delta-Notch signaling on a CUDA-enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
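[Editorial sketch] To fix ideas about what "handles both discrete and continuous entities" means, here is a toy hybrid Petri net update step in Python. A real HFPN engine adds delays, thresholds, and per-arc weights; the data layout and rate convention below are assumptions for illustration only.

```python
def hfpn_step(marking, dt, cont_rates, disc_transitions):
    """One Euler step of a toy hybrid functional Petri net.

    marking: dict place -> amount (continuous) or count (discrete).
    cont_rates: list of (inputs, outputs, rate_fn) continuous transitions.
    disc_transitions: list of (inputs, outputs), firing when all inputs >= 1.
    """
    new = dict(marking)
    for ins, outs, rate_fn in cont_rates:            # continuous flow
        flow = rate_fn(marking) * dt
        if ins:                                      # cannot drain below zero
            flow = min(flow, *(marking[p] for p in ins))
        for p in ins:
            new[p] -= flow
        for p in outs:
            new[p] += flow
    for ins, outs in disc_transitions:               # discrete firing
        if all(marking[p] >= 1 for p in ins):
            for p in ins:
                new[p] -= 1
            for p in outs:
                new[p] += 1
    return new
```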
The effects of SENSE on PROPELLER imaging.
Chang, Yuchou; Pipe, James G; Karis, John P; Gibbs, Wende N; Zwart, Nicholas R; Schär, Michael
2015-12-01
To study how sensitivity encoding (SENSE) impacts periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) image quality, including signal-to-noise ratio (SNR), robustness to motion, precision of motion estimation, and image quality. Five volunteers were imaged with three sets of scans. A rapid method for generating the g-factor map was proposed and validated via Monte Carlo simulations. Sensitivity maps were extrapolated to increase the area over which SENSE can be performed and therefore enhance the robustness to head motion. The precision of motion estimation of PROPELLER blades that are unfolded with these sensitivity maps was investigated. An interleaved R-factor PROPELLER sequence was used to acquire data with similar amounts of motion with and without SENSE acceleration. Two neuroradiologists independently and blindly compared 214 image pairs. The proposed method of g-factor calculation produced results similar to those provided by the Monte Carlo methods. Extrapolation and rotation of the sensitivity maps allowed for continued robustness of SENSE unfolding in the presence of motion. SENSE-widened blades improved the precision of rotation and translation estimation. PROPELLER images with a SENSE factor of 3 outperformed the traditional PROPELLER images when reconstructing the same number of blades. SENSE not only accelerates PROPELLER but can also improve robustness and precision of head motion correction, which improves overall image quality even when SNR is lost due to acceleration. The reduction of SNR, as a penalty of acceleration, is characterized by the proposed g-factor method. © 2014 Wiley Periodicals, Inc.
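[Editorial sketch] The paper's rapid g-factor method is its own contribution and is not reproduced here; for context, the Python sketch below evaluates the standard analytic SENSE g-factor that Monte Carlo simulations are typically checked against, assuming noise-whitened coil sensitivities (identity noise covariance) and uniform undersampling along y.

```python
import numpy as np

def sense_gfactor(sens, R):
    """Analytic SENSE g-factor map for uniform undersampling by R.

    sens: (ncoils, Ny, Nx) complex coil sensitivities, noise whitened.
    Requires Ny divisible by R; aliasing pixels are Ny/R apart in y.
    g_i = sqrt([(S^H S)^-1]_ii * [S^H S]_ii) per aliasing set.
    """
    ncoils, Ny, Nx = sens.shape
    step = Ny // R
    g = np.zeros((Ny, Nx))
    for y in range(step):
        for x in range(Nx):
            S = sens[:, y::step, x]          # ncoils x R aliasing set
            StS = S.conj().T @ S
            inv = np.linalg.pinv(StS)
            g[y::step, x] = np.sqrt(np.abs(np.diag(inv) * np.diag(StS)))
    return g
```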
A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing
Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui
2017-01-01
Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, an access pattern that greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration. PMID:28075343
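[Editorial sketch] The irregular accumulation fits the MapReduce model naturally: mappers emit (bin, contribution) pairs per simulated target, and reducers sum per bin. A pure-Python toy of that dataflow follows; the "physics" in the mapper is a placeholder, and real implementations run on Hadoop rather than functools.

```python
from collections import defaultdict
from functools import reduce

def mapper(target):
    """Emit (range_bin, complex echo contribution) for one target."""
    # Placeholder physics: real simulators integrate chirp/geometry here.
    yield target["bin"], target["amplitude"]

def reducer(acc, pair):
    key, value = pair
    acc[key] += value          # irregular accumulation per range bin
    return acc

targets = [{"bin": 3, "amplitude": 1.0 + 0.5j},
           {"bin": 3, "amplitude": 0.2 - 0.1j},
           {"bin": 7, "amplitude": 0.9 + 0.0j}]

pairs = (p for t in targets for p in mapper(t))
raw = reduce(reducer, pairs, defaultdict(complex))
print(dict(raw))               # {3: (1.2+0.4j), 7: (0.9+0j)}
```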
Pandit, Prachi; Rivoire, Julien; King, Kevin; Li, Xiaojuan
2016-03-01
Quantitative T1ρ imaging is beneficial for early detection of osteoarthritis but has seen limited clinical use due to long scan times. In this study, we evaluated the feasibility of accelerated T1ρ mapping for knee cartilage quantification using a combination of compressed sensing (CS) and data-driven parallel imaging (ARC: Autocalibrating Reconstruction for Cartesian sampling). A sequential combination of ARC and CS, both during data acquisition and reconstruction, was used to accelerate the acquisition of T1ρ maps. Phantom, ex vivo (porcine knee), and in vivo (human knee) imaging was performed on a GE 3T MR750 scanner. T1ρ quantification after CS-accelerated acquisition was compared with non-CS-accelerated acquisition for various cartilage compartments. Accelerating image acquisition using CS did not introduce major deviations in quantification. The coefficient of variation for the root mean squared error increased with increasing acceleration, but for in vivo measurements, it stayed under 5% for a net acceleration factor up to 2, where the acquisition was 25% faster than the reference (only ARC). To the best of our knowledge, this is the first implementation of CS for in vivo T1ρ quantification. These early results show that this technique holds great promise in making quantitative imaging techniques more accessible for clinical applications. © 2015 Wiley Periodicals, Inc.
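[Editorial sketch] T1ρ quantification itself is a voxelwise monoexponential fit over spin-lock times and is unchanged by how the images were acquired (ARC, CS, or both). A minimal Python sketch, with an assumed ~40 ms initial guess:

```python
import numpy as np
from scipy.optimize import curve_fit

def t1rho_decay(tsl, s0, t1rho):
    """Monoexponential spin-lock decay model."""
    return s0 * np.exp(-tsl / t1rho)

def fit_t1rho(tsl, signals):
    """Voxelwise T1rho from images at several spin-lock times.

    tsl: (n,) spin-lock times in ms; signals: (n, n_vox) magnitudes.
    """
    maps = np.full(signals.shape[1], np.nan)
    for v in range(signals.shape[1]):
        try:
            popt, _ = curve_fit(t1rho_decay, tsl, signals[:, v],
                                p0=(signals[0, v], 40.0))
            maps[v] = popt[1]
        except RuntimeError:
            pass                      # leave NaN where the fit fails
    return maps
```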
Circulation patterns in active lava lakes
NASA Astrophysics Data System (ADS)
Redmond, T. C.; Lev, E.
2014-12-01
Active lava lakes provide a unique window into magmatic conduit processes. We investigated circulation patterns of four active lava lakes: Kilauea's Halemaumau crater, Mount Erebus, Erta Ale and Nyiragongo, and an artificial "lava lake" constructed at the Syracuse University Lava Lab. We employed visual and thermal video recordings collected at these volcanoes and used computer vision techniques to extract time-dependent, two-dimensional surface velocity maps. The large amount of data available from Halemaumau enabled us to identify several characteristic circulation patterns. One such pattern is a rapid acceleration followed by rapid deceleration, often to a level lower than the pre-acceleration level, and then a slow recovery. Another pattern is periodic asymmetric peaks of gradual acceleration and rapid deceleration, or vice versa, previously explained by gas pistoning. Using spectral analysis, we find that the dominant period of circulation cycles is approximately 30 minutes, 3 times longer than the dominant period identified previously for Mount Erebus. Measuring a complete surface velocity field allowed us to map and follow locations of divergence and convergence, and therefore upwelling and downwelling, thus connecting the surface flow with that at depth. At Nyiragongo, the location of main upwelling shifts gradually, yet is usually at the interior of the lake; for Erebus it is usually along the perimeter, yet often there is catastrophic downwelling at the interior; for Halemaumau, the upwelling/downwelling position is almost always on the perimeter. In addition to velocity fields, we developed an automated tool for counting crustal plates at the surface of the lava lakes, and found a correlation, and a lag time, between changes in circulation vigor and the average size of crustal plates. Circulation in the artificial basaltic lava "lake" was limited by its size and degree of foaming, yet we measured surface velocities and identified patterns. Maximum surface velocity showed symmetrical peaks of acceleration and deceleration. In summary, extended observations at lava lakes reveal patterns of circulation at different time scales, yielding insight into the different processes controlling the exchange of gas and fluids between the magma chamber and conduit, and the surface and atmosphere.
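[Editorial sketch] The abstract does not name the specific computer vision technique; as one common option, dense optical flow can recover such surface velocity and divergence fields from consecutive video frames. A Python sketch using OpenCV's Farneback flow (frames assumed to be single-channel 8-bit grayscale; the pixel scale and frame interval are inputs):

```python
import cv2
import numpy as np

def surface_velocity(prev_frame, next_frame, dt, m_per_px):
    """Dense surface velocity from consecutive grayscale lake images.

    Returns vx, vy in m/s plus the divergence map whose positive and
    negative extremes mark candidate upwelling and downwelling sites.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vx = flow[..., 0] * m_per_px / dt
    vy = flow[..., 1] * m_per_px / dt
    div = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)
    return vx, vy, div
```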
Documentation for the 2014 update of the United States national seismic hazard maps
Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter M.; Mueller, Charles S.; Haller, Kathleen M.; Frankel, Arthur D.; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen C.; Boyd, Oliver S.; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nico; Wheeler, Russell L.; Williams, Robert A.; Olsen, Anna H.
2014-01-01
The national seismic hazard maps for the conterminous United States have been updated to account for new methods, models, and data that have been obtained since the 2008 maps were released (Petersen and others, 2008). The input models are improved from those implemented in 2008 by using new ground motion models that have incorporated about twice as many earthquake strong ground shaking data and by incorporating many additional scientific studies that indicate broader ranges of earthquake source and ground motion models. These time-independent maps are shown for 2-percent and 10-percent probability of exceedance in 50 years for peak horizontal ground acceleration as well as 5-hertz and 1-hertz spectral accelerations with 5-percent damping on a uniform firm rock site condition (760 meters per second shear wave velocity in the upper 30 m, VS30). In this report, the 2014 updated maps are compared with the 2008 version of the maps and indicate changes of plus or minus 20 percent over wide areas, with larger changes locally, caused by the modifications to the seismic source and ground motion inputs.
Implementation of the analytical hierarchy process with VBA in ArcGIS
NASA Astrophysics Data System (ADS)
Marinoni, Oswald
2004-07-01
Decisions on land use have become progressively more difficult in recent decades. The main reasons for this development lie in the increasing population, combined with an increasing demand for new land and resources, and in the growing consciousness of sustainable land and resource use. The steady reduction of valuable land leads to an increase of conflicts in land use decision-making processes, since more interests are affected and therefore more stakeholders with different land use interests and different valuation criteria are involved in the decision-making process. In the course of such a decision process, all identified criteria are weighted according to their relative importance. But assigning weights to the relevant criteria quickly becomes a difficult task when a greater number of criteria are being considered. Especially with regard to land use decisions, where decision makers expect some kind of mapped result, it is therefore useful to use procedures that not only help to derive criteria weights but also accelerate the visualisation and mapping of land use assessment results. Both aspects can easily be facilitated in a GIS. This paper focuses on the development of an ArcGIS VBA macro which enables the user to derive criteria weights with the analytical hierarchy process and which allows a mapping of the land use assessment results by a weighted summation of GIS raster data sets. A dynamic link library for the calculation of the eigenvalues and eigenvectors of a square matrix is provided.
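[Editorial sketch] The paper's implementation is VBA; for illustration only, here is the same AHP core in Python: the criteria weights are the normalized principal eigenvector of the pairwise comparison matrix, and the consistency ratio (CR < 0.1 by Saaty's usual rule) flags incoherent judgments. The example matrix is hypothetical. The weighted summation the macro performs then amounts to sum(w[i] * raster[i]) over the criterion layers.

```python
import numpy as np

# Saaty's random-index values for consistency checking, n = 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Criteria weights and consistency ratio from a reciprocal matrix."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                 # principal eigenpair
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                             # normalize to weights
    n = len(A)
    ci = (vals[k].real - n) / (n - 1)        # consistency index
    cr = ci / RI[n] if RI.get(n, 0) else 0.0
    return w, cr

# Example: criterion 1 twice as important as 2, four times as 3.
w, cr = ahp_weights([[1, 2, 4], [1 / 2, 1, 2], [1 / 4, 1 / 2, 1]])
```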
Stochastic Ion Heating by the Lower-Hybrid Waves
NASA Technical Reports Server (NTRS)
Khazanov, G.; Tel'nikhin, A.; Krotov, A.
2011-01-01
The resonance lower-hybrid wave-ion interaction is described by a group (differentiable map) of transformations of phase space of the system. All solutions to the map belong to a strange attractor, and chaotic motion of the attractor manifests itself in a number of macroscopic effects, such as the energy spectrum and particle heating. The applicability of the model to the problem of ion heating by waves at the front of collisionless shock as well as ion acceleration by a spectrum of waves is discussed. Keywords: plasma; ion-cyclotron heating; shocks; beat-wave accelerator.
Baryon acoustic oscillation intensity mapping of dark energy.
Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B; McDonald, Patrick
2008-03-07
The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called "dark energy." To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10⁹ individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy.
A Bayesian and Physics-Based Ground Motion Parameters Map Generation System
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Quiroz, A.; Sandoval, H.; Perez-Yanez, C.; Ruiz, A. L.; Delgado, R.; Macias, M. A.; Alcántara, L.
2014-12-01
We present the Ground Motion Parameters Map Generation (GMPMG) system developed by the Institute of Engineering at the National Autonomous University of Mexico (UNAM). The system delivers estimates of information associated with the social impact of earthquakes, engineering ground motion parameters (gmp), and macroseismic intensity maps. The gmp calculated are peak ground acceleration and velocity (pga and pgv) and response spectral acceleration (SA). The GMPMG relies on real-time data received from strong ground motion stations belonging to UNAM's networks throughout Mexico. Data are gathered via satellite and internet service providers, and managed with the data acquisition software Earthworm. The system is self-contained and can perform all calculations required for estimating gmp and intensity maps due to earthquakes, automatically or manually. Initial data processing is performed by baseline correcting the records and removing those containing glitches or low signal-to-noise ratios. The system then assigns a hypocentral location using first arrivals and a simplified 3D model, followed by a moment tensor inversion, which is performed using a pre-calculated Receiver Green's Tensor (RGT) database for a realistic 3D model of Mexico. A backup system to compute epicentral location and magnitude is in place. Bayesian Kriging is employed to combine recorded values with grids of computed gmp. The latter are obtained by using appropriate ground motion prediction equations (for pgv, pga, and SA with T=0.3, 0.5, 1 and 1.5 s) and numerical simulations performed in real time, using the aforementioned RGT database (for SA with T=2, 2.5 and 3 s). Estimated intensity maps are then computed using SA(T=2 s) to Modified Mercalli Intensity correlations derived for central Mexico. The maps are made available to the institutions in charge of the disaster prevention systems. In order to analyze the accuracy of the maps, we compare them against observations not considered in the computations, and present some examples of recent earthquakes. We conclude that the system provides information with a fair goodness-of-fit to observations. This project is partially supported by DGAPA-PAPIIT (UNAM) project TB100313-RR170313.
Rioux, James A; Beyea, Steven D; Bowen, Chris V
2017-02-01
Purely phase-encoded techniques such as single point imaging (SPI) are generally unsuitable for in vivo imaging due to lengthy acquisition times. Reconstruction of highly undersampled data using compressed sensing allows SPI data to be quickly obtained from animal models, enabling applications in preclinical cellular and molecular imaging. TurboSPI is a multi-echo single point technique that acquires hundreds of images with microsecond spacing, enabling high temporal resolution relaxometry of large-R2* systems such as iron-loaded cells. TurboSPI acquisitions can be pseudo-randomly undersampled in all three dimensions to increase artifact incoherence, and can provide prior information to improve reconstruction. We evaluated the performance of CS-TurboSPI in phantoms, a rat ex vivo, and a mouse in vivo. An algorithm for iterative reconstruction of TurboSPI relaxometry time courses does not affect image quality or R2* mapping in vitro at acceleration factors up to 10. Imaging ex vivo is possible at similar acceleration factors, and in vivo imaging is demonstrated at an acceleration factor of 8, such that acquisition time is under 1 h. Accelerated TurboSPI enables preclinical R2* mapping without loss of data quality, and may show increased specificity to iron oxide compared to other sequences.
International Space Station Increment-4/5 Microgravity Environment Summary Report
NASA Technical Reports Server (NTRS)
Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; McPherson, Kevin; Reckart, Timothy
2003-01-01
This summary report presents the results of some of the processed acceleration data measured aboard the International Space Station during the period of December 2001 to December 2002. Unlike the past two ISS Increment reports, which were increment specific, this summary report covers two increments: Increments 4 and 5, hereafter referred to as Increment-4/5. Two accelerometer systems were used to measure the acceleration levels for the activities that took place during Increment-4/5. Due to time constraints and a lack of precise timeline information regarding some payload operations and station activities, not all of the activities were analyzed for this report. The National Aeronautics and Space Administration sponsors the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System to support microgravity science experiments which require microgravity acceleration measurements. On April 19, 2001, both the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System units were launched on STS-100 from the Kennedy Space Center for installation on the International Space Station. The Microgravity Acceleration Measurement System supports science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System unit supports experiments requiring vibratory acceleration measurement. The International Space Station Increment-4/5 reduced gravity environment analysis presented in this report uses acceleration data collected by both sets of accelerometer systems: The Microgravity Acceleration Measurement System, which consists of two sensors: the low-frequency Orbital Acceleration Research Experiment Sensor Subsystem and the higher frequency High Resolution Accelerometer Package. The low frequency sensor measures up to 1 Hz, but is routinely trimmean filtered to yield much lower frequency acceleration data up to 0.01 Hz. This filtered data can be mapped to arbitrary locations for characterizing the quasi-steady environment for payloads and the vehicle. The high frequency sensor is used to characterize the vibratory environment up to 100 Hz at a single measurement location. The Space Acceleration Measurement System, which deploys high frequency sensors, measures vibratory acceleration data in the range of 0.01 to 400 Hz at multiple measurement locations. This summary report presents analysis of some selected quasi-steady and vibratory activities measured by these accelerometers during Increment-4/5 from December 2001 to December 2002.
NASA Astrophysics Data System (ADS)
Muñoz-Andrade, Juan D.
2013-12-01
Through systematic study, a mapping of the polycrystalline flow of sheet 304 austenitic stainless steel (ASS) during tension testing at constant crosshead velocity at room temperature was obtained. The main results establish that the trajectories of crystals in the polycrystalline spatially extended system (PCSES) during the irreversible deformation process obey hyperbolic motion: the ratio between the expansion velocity of the field and the velocity of the field source is not constant, and the field lines of the crystal trajectories become curved; such accelerated motion is called hyperbolic motion. This behavior is assisted by dislocation dynamics and the self-accommodation process between crystals in the PCSES. Furthermore, by applying the quantum mechanical and relativistic model proposed by Muñoz-Andrade, the activation energy for polycrystalline flow during the tension test of 304 ASS was calculated for each instant in a global form. In conclusion, it was established that the mapping of polycrystalline flow is fundamental to describing in an integral way the phenomenology and mechanics of irreversible deformation processes.
NASA Astrophysics Data System (ADS)
Moustafa, Sayed, Sr.; Alarifi, Nassir S.; Lashin, Aref A.
2016-04-01
Urban areas along the western coast of Saudi Arabia are susceptible to natural disasters and environmental damage due to lack of planning. To produce a site-specific microzonation map of the rapidly growing Yanbu industrial city, the spatial distributions of different hazard entities are assessed using the Analytical Hierarchy Process (AHP) together with a Geographical Information System (GIS). For this purpose six hazard parameter layers are considered, namely: fundamental frequency, site amplification, soil strength in terms of effective shear-wave velocity, overburden sediment thickness, seismic vulnerability index, and peak ground acceleration. The weight and rank values determined during the AHP are assigned to each layer and its corresponding classes, respectively. An integrated seismic microzonation map was derived using a GIS platform. Based on the derived map, the study area is classified into five hazard categories: very low, low, moderate, high, and very high. The western and central parts of the study area, as indicated by the derived microzonation map, are categorized as a high hazard zone compared to other surrounding places. The produced microzonation map of the current study is envisaged as a first-level assessment of the site-specific hazards in the Yanbu city area, which can be used as a platform by different stakeholders in any future land-use planning and environmental hazard management.
Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Kokaly, Raymond F.; Sutley, Steve J.; Dalton, J. Brad; McDougal, Robert R.; Gent, Carol A.
2003-01-01
Imaging spectroscopy is a tool that can be used to spectrally identify and spatially map materials based on their specific chemical bonds. Spectroscopic analysis requires significantly more sophistication than has been employed in conventional broadband remote sensing analysis. We describe a new system that is effective at material identification and mapping: a set of algorithms within an expert system decision‐making framework that we call Tetracorder. The expertise in the system has been derived from scientific knowledge of spectral identification. The expert system rules are implemented in a decision tree where multiple algorithms are applied to spectral analysis, additional expert rules and algorithms can be applied based on initial results, and more decisions are made until spectral analysis is complete. Because certain spectral features are indicative of specific chemical bonds in materials, the system can accurately identify and map those materials. In this paper we describe the framework of the decision making process used for spectral identification, describe specific spectral feature analysis algorithms, and give examples of what analyses and types of maps are possible with imaging spectroscopy data. We also present the expert system rules that describe which diagnostic spectral features are used in the decision making process for a set of spectra of minerals and other common materials. We demonstrate the applications of Tetracorder to identify and map surface minerals, to detect sources of acid rock drainage, and to map vegetation species, ice, melting snow, water, and water pollution, all with one set of expert system rules. Mineral mapping can aid in geologic mapping and fault detection and can provide a better understanding of weathering, mineralization, hydrothermal alteration, and other geologic processes. Environmental site assessment, such as mapping source areas of acid mine drainage, has resulted in the acceleration of site cleanup, saving millions of dollars and years in cleanup time. Imaging spectroscopy data and Tetracorder analysis can be used to study both terrestrial and planetary science problems. Imaging spectroscopy can be used to probe planetary systems, including their atmospheres, oceans, and land surfaces.
CUDA-Accelerated Geodesic Ray-Tracing for Fiber Tracking
van Aart, Evert; Sepasian, Neda; Jalba, Andrei; Vilanova, Anna
2011-01-01
Diffusion Tensor Imaging (DTI) allows noninvasive measurement of the diffusion of water in fibrous tissue. By reconstructing the fibers from DTI data using a fiber-tracking algorithm, we can deduce the structure of the tissue. In this paper, we outline an approach to accelerating such a fiber-tracking algorithm using a Graphics Processing Unit (GPU). This algorithm, which is based on the calculation of geodesics, has shown promising results for both synthetic and real data, but is limited in its applicability by its high computational requirements. We present a solution which uses the parallelism offered by modern GPUs, in combination with the CUDA platform by NVIDIA, to significantly reduce the execution time of the fiber-tracking algorithm. Compared to a multithreaded CPU implementation of the same algorithm, our GPU mapping achieves a speedup factor of up to 40 times. PMID:21941525
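[Editorial sketch] The paper's method integrates geodesics in a metric derived from the inverse tensor; that is not reproduced here. For orientation, the Python sketch below shows the simpler deterministic streamline approach (follow the principal eigenvector), which shares the property that makes GPU mapping attractive: every streamline is independent. Step size and the FA stopping threshold are assumptions.

```python
import numpy as np

def track_fiber(tensors, seed, step=0.5, max_steps=2000, fa_min=0.15):
    """Deterministic streamline through a DTI volume (CPU toy version).

    tensors: (X, Y, Z, 3, 3) diffusion tensors; seed: start voxel (3,).
    Uses nearest-neighbor tensor lookup and stops at low anisotropy.
    """
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    prev_dir = np.zeros(3)
    bounds = np.array(tensors.shape[:3]) - 1
    for _ in range(max_steps):
        i, j, k = np.clip(pos.round().astype(int), 0, bounds)
        vals, vecs = np.linalg.eigh(tensors[i, j, k])
        d = vecs[:, -1]                       # principal eigenvector
        if d @ prev_dir < 0:
            d = -d                            # keep a consistent heading
        md = vals.mean()                      # fractional anisotropy test
        fa = np.sqrt(1.5 * ((vals - md) ** 2).sum()
                     / ((vals ** 2).sum() + 1e-12))
        if fa < fa_min:
            break
        pos += step * d
        prev_dir = d
        path.append(pos.copy())
    return np.array(path)
```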
Accelerated Brain Aging in Schizophrenia: A Longitudinal Pattern Recognition Study.
Schnack, Hugo G; van Haren, Neeltje E M; Nieuwenhuis, Mireille; Hulshoff Pol, Hilleke E; Cahn, Wiepke; Kahn, René S
2016-06-01
Despite the multitude of longitudinal neuroimaging studies that have been published, a basic question on the progressive brain loss in schizophrenia remains unaddressed: Does it reflect accelerated aging of the brain, or is it caused by a fundamentally different process? The authors used support vector regression, a supervised machine learning technique, to address this question. In a longitudinal sample of 341 schizophrenia patients and 386 healthy subjects with one or more structural MRI scans (1,197 in total), machine learning algorithms were used to build models to predict the age of the brain and the presence of schizophrenia ("schizophrenia score"), based on the gray matter density maps. Age at baseline ranged from 16 to 67 years, and follow-up scans were acquired between 1 and 13 years after the baseline scan. Differences between brain age and chronological age ("brain age gap") and between schizophrenia score and healthy reference score ("schizophrenia gap") were calculated. Accelerated brain aging was calculated from changes in brain age gap between two consecutive measurements. The age prediction model was validated in an independent sample. In schizophrenia patients, brain age was significantly greater than chronological age at baseline (+3.36 years) and progressively increased during follow-up (+1.24 years in addition to the baseline gap). The acceleration of brain aging was not constant: it decreased from 2.5 years/year just after illness onset to about the normal rate (1 year/year) approximately 5 years after illness onset. The schizophrenia gap also increased during follow-up, but more pronounced variability in brain abnormalities at follow-up rendered this increase nonsignificant. The progressive brain loss in schizophrenia appears to reflect two different processes: one relatively homogeneous, reflecting accelerated aging of the brain and related to various measures of outcome, and a more variable one, possibly reflecting individual variation and medication use. Differentiating between these two processes may not only elucidate the various factors influencing brain loss in schizophrenia, but also assist in individualizing treatment.
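[Editorial sketch] The brain-age-gap idea is easy to express with standard tools. The Python sketch below follows the spirit of the study's approach using scikit-learn's SVR; the kernel choice, scaling step, and the assumption that gray matter maps arrive as flattened feature vectors (in practice dimensionality reduction usually precedes the regression) are all illustrative, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def brain_age_gap(train_maps, train_ages, test_maps, test_ages):
    """Predicted minus chronological age per test subject.

    *_maps: (n_subjects, n_features) flattened gray matter density
    maps; *_ages: chronological ages in years.
    """
    model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
    model.fit(train_maps, train_ages)
    return model.predict(test_maps) - np.asarray(test_ages)
```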
Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam
2014-01-01
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed in the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the process of curve fitting in order to obtain the pharmacokinetic parameters. The results obtained show that using the frequency-domain approach, the process of curve fitting is computationally more efficient compared to the time-domain approach.
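The speed-up rests on the convolution theorem: the standard Tofts model is a convolution of the arterial input function with an exponential kernel, so each model evaluation inside the fit can be an FFT product instead of a time-domain integral. A sketch under that assumption (variable names are ours, not the authors'; uniform time sampling is assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard Tofts: Ct(t) = Ktrans * integral Cp(tau) exp(-kep (t - tau)) dtau,
# evaluated here via an FFT product, O(N log N) per evaluation.
def tofts_fft(t, ktrans, kep, cp):
    dt = t[1] - t[0]                        # uniform sampling assumed
    n = 2 * len(t)                          # zero-pad to avoid wrap-around
    H = np.fft.rfft(np.exp(-kep * t), n)    # impulse response e^{-kep t}
    CP = np.fft.rfft(cp, n)
    ct = np.fft.irfft(CP * H, n)[:len(t)] * dt
    return ktrans * ct

def fit_voxel(t, ct_measured, cp):
    model = lambda t_, ktrans, kep: tofts_fft(t_, ktrans, kep, cp)
    (ktrans, kep), _ = curve_fit(model, t, ct_measured, p0=(0.1, 0.5),
                                 bounds=(0, np.inf))
    return ktrans, kep
```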
Milshteyn, Eugene; von Morze, Cornelius; Reed, Galen D; Shang, Hong; Shin, Peter J; Larson, Peder E Z; Vigneron, Daniel B
2018-05-01
Acceleration of dynamic 2D (T2 Mapping) and 3D hyperpolarized 13C MRI acquisitions using the balanced steady-state free precession sequence was achieved with a specialized reconstruction method, based on the combination of low rank plus sparse and local low rank reconstructions. Methods were validated using both retrospectively and prospectively undersampled in vivo data from normal rats and tumor-bearing mice. Four-fold acceleration of 1-2 mm isotropic 3D dynamic acquisitions with 2-5 s temporal resolution and two-fold acceleration of 0.25-1 mm² 2D dynamic acquisitions was achieved. This enabled visualization of the biodistribution of [2-13C]pyruvate, [1-13C]lactate, [13C,15N2]urea, and HP001 within heart, kidneys, vasculature, and tumor, as well as calculation of high resolution T2 maps. Copyright © 2018 Elsevier Inc. All rights reserved.
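As a rough illustration of the low-rank-plus-sparse idea underlying such reconstructions (heavily simplified: no sampling or coil operators, real-valued data, hypothetical thresholds — not the authors' method):

```python
import numpy as np

# X is the dynamic series as a (space x time) matrix, one column per frame.
def svt(M, tau):
    """Singular value thresholding -> low-rank component."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(M, lam):
    """Entrywise soft threshold -> sparse component."""
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0)

def lps_decompose(X, tau=1.0, lam=0.05, n_iter=50):
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        L = svt(X - S, tau)       # slowly varying background across frames
        S = soft(X - L, lam)      # dynamic residual (e.g., metabolite inflow)
    return L, S
```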
The focusing optics x-ray solar imager (FOXSI): instrument and first flight
NASA Astrophysics Data System (ADS)
Krucker, Säm.; Christe, Steven; Glesener, Lindsay; Ishikawa, Shinnosuke; Ramsey, Brian; Gubarev, Mikhail; Saito, Shinya; Takahashi, Tadayuki; Watanabe, Shin; Tajima, Hiroyasu; Tanaka, Takaaki; Turin, Paul; Glaser, David; Fermin, Jose; Lin, Robert P.
2013-09-01
Solar flares accelerate particles up to high energies (MeV and GeV scales for electrons and ions, respectively) through efficient acceleration processes that are not currently understood. Hard X-rays (HXRs) are the most direct diagnostic of flare-accelerated electrons. However, past and current solar HXR observers lack the necessary sensitivity and imaging dynamic range to make detailed studies of faint HXR sources in the solar corona (where particle acceleration is thought to occur); these limitations are mainly due to the indirect Fourier imaging techniques used by these observers. With greater sensitivity and dynamic range, electron acceleration sites could be systematically studied in detail. Both these capabilities can be advanced by the use of direct focusing optics. The recently flown Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload demonstrates the unique diagnostic power of focusing optics for observations of solar HXRs. FOXSI features grazing-incidence replicated nickel optics with 5 arcsecond resolution and fine-pitch silicon strip detectors with a 7.7 arcsecond strip pitch. FOXSI flew successfully on 2012 November 2, producing images and spectra of a microflare and performing a search for non-thermal emission (4-15 keV) from nanoflares occurring outside active regions in the quiet Sun. A future spacecraft version of FOXSI, featuring similar optics and detectors, could make detailed observations of HXRs from flare-accelerated electrons, identifying and characterizing particle acceleration sites and mapping out paths of energetic electrons as they leave these sites and propagate throughout the solar corona. This paper will describe the FOXSI instrument and present images from the first flight.
Pankin, Artem; Campoli, Chiara; Dong, Xue; Kilian, Benjamin; Sharma, Rajiv; Himmelbach, Axel; Saini, Reena; Davis, Seth J; Stein, Nils; Schneeberger, Korbinian; von Korff, Maria
2014-01-01
Phytochromes play an important role in light signaling and photoperiodic control of flowering time in plants. Here we propose that the red/far-red light photoreceptor HvPHYTOCHROME C (HvPHYC), carrying a mutation in a conserved region of the GAF domain, is a candidate underlying the early maturity 5 locus in barley (Hordeum vulgare L.). We fine mapped the gene using a mapping-by-sequencing approach applied on the whole-exome capture data from bulked early flowering segregants derived from a backcross of the Bowman(eam5) introgression line. We demonstrate that eam5 disrupts circadian expression of clock genes. Moreover, it interacts with the major photoperiod response gene Ppd-H1 to accelerate flowering under noninductive short days. Our results suggest that HvPHYC participates in transmission of light signals to the circadian clock and thus modulates light-dependent processes such as photoperiodic regulation of flowering. PMID:24996910
Compressed sensing for high-resolution nonlipid suppressed 1H FID MRSI of the human brain at 9.4T.
Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke
2018-04-29
The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE 1H FID MRSI sequence at 9.4T. An x-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed 1H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data; rather, a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.
Jupiter's X-ray Auroral Pulsations and Spectra During Juno Perijove 7
NASA Astrophysics Data System (ADS)
Dunn, W.; Branduardi-Raymont, G.; Ray, L. C.; Jackman, C. M.; Kraft, R.; Gladstone, R.; Yao, Z.; Rae, J.; Gray, R.; Elsner, R.; Grodent, D. C.; Nichols, J. D.; Ford, P. G.; Ness, J. U.; Kammer, J.; Rodriguez, P.
2017-12-01
Jupiter's X-ray aurora is concentrated into a bright and dynamic hot spot that is produced by precipitating 10 MeV ions [Gladstone et al. 2002; Elsner et al. 2005; Branduardi-Raymont et al. 2007]. These highly energetic emissions exhibit pulsations over timescales of 10s of minutes and change morphology, intensity and precipitating particle populations from observation to observation and pole to pole [e.g. Dunn et al. 2016; in-press]. The acceleration process(es) that allow Jupiter to produce these high-energy ion charge exchange emissions are not well understood, but the emissions are concentrated in the most poleward regions of the aurora, where field lines map to the outer magnetosphere and possibly beyond [Vogt et al. 2015; Kimura et al. 2016]. On July 11th 2017, NASA's Juno spacecraft conducted its 7th perijove flyby of Jupiter and is predicted to have flown directly through field lines that map to the Northern and Southern X-ray hot spots. During this unique flight, the XMM-Newton observatory conducted 40 hours of continuous time-tagged X-ray observations. We present the results from these X-ray observations, showing that Jupiter's X-ray aurora varies significantly from one planetary rotation to the next and that the spectral signatures, indicative of the precipitating ion and electron populations producing the emission, also vary. We measure the Doppler broadening of the spectral lines to calculate the ion energies at the point when they impact the ionosphere, so that these might be compared with in-situ data to constrain Jovian auroral acceleration processes. Finally, we compare X-ray signatures from the last decade of observations with UV polar emissions at similar times to further enrich multi-wavelength connections and deepen our understanding of how Jupiter is able to generate its highly energetic polar auroral precipitations.
Baryon Acoustic Oscillation Intensity Mapping of Dark Energy
NASA Astrophysics Data System (ADS)
Chang, Tzu-Ching; Pen, Ue-Li; Peterson, Jeffrey B.; McDonald, Patrick
2008-03-01
The expansion of the Universe appears to be accelerating, and the mysterious antigravity agent of this acceleration has been called “dark energy.” To measure the dynamics of dark energy, baryon acoustic oscillations (BAO) can be used. Previous discussions of the BAO dark energy test have focused on direct measurements of redshifts of as many as 10⁹ individual galaxies, by observing the 21 cm line or by detecting optical emission. Here we show how the study of acoustic oscillation in the 21 cm brightness can be accomplished by economical three-dimensional intensity mapping. If our estimates gain acceptance they may be the starting point for a new class of dark energy experiments dedicated to large angular scale mapping of the radio sky, shedding light on dark energy.
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-12-01
Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate the partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10⁻⁷. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
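The decomposition works because FDK is linear in the projections: each Map task can filter and backproject its own subset into a partial volume, and Reduce simply sums partial volumes. A schematic of that split in plain Python (not the authors' Hadoop code; `filter_backproject` stands in for the geometry-aware FDK kernel, which is not reproduced here):

```python
import numpy as np
from functools import reduce

def fdk_mapreduce(projection_chunks, filter_backproject, volume_shape):
    def map_fn(chunk):
        partial = np.zeros(volume_shape)
        for proj in chunk:
            partial += filter_backproject(proj)   # one filtered backprojection
        return partial

    # Reduce: sum the partial volumes into the final reconstruction.
    return reduce(lambda a, b: a + b,
                  (map_fn(c) for c in projection_chunks))

# Toy demonstration with a stand-in kernel (real FDK weighting and ramp
# filtering omitted):
if __name__ == "__main__":
    vol_shape = (8, 8, 8)
    chunks = [[np.random.rand(8, 8) for _ in range(5)] for _ in range(4)]
    toy_kernel = lambda proj: np.broadcast_to(proj, vol_shape).copy()
    volume = fdk_mapreduce(chunks, toy_kernel, vol_shape)
```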
India Solar Resource Data: Enhanced Data for Accelerated Deployment (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Identifying potential locations for solar photovoltaic (PV) and concentrating solar power (CSP) projects requires an understanding of the underlying solar resource. Under a bilateral partnership between the United States and India - the U.S.-India Energy Dialogue - the National Renewable Energy Laboratory has updated Indian solar data and maps using data provided by the Ministry of New and Renewable Energy (MNRE) and the National Institute for Solar Energy (NISE). This fact sheet overviews the updated maps and data, which help identify high-quality solar energy projects. This can help accelerate the deployment of solar energy in India.
Model and algorithm based on accurate realization of dwell time in magnetorheological finishing.
Song, Ci; Dai, Yifan; Peng, Xiaoqiang
2010-07-01
Classically, a dwell-time map is created with a method such as deconvolution or numerical optimization, with the input being a surface error map and influence function. This dwell-time map is the numerical optimum for minimizing residual form error, but it takes no account of machine dynamics limitations. The map is then reinterpreted as machine speeds and accelerations or decelerations in a separate operation. In this paper we consider combining the two methods in a single optimization by the use of a constrained nonlinear optimization model, which regards both the two-norm of the surface residual error and the dwell-time gradient as an objective function. This enables machine dynamic limitations to be properly considered within the scope of the optimization, reducing both residual surface error and polishing times. Further simulations are introduced to demonstrate the feasibility of the model, and the velocity map is reinterpreted from the dwell time, meeting the requirement of velocity and the limitations of accelerations or decelerations. Indeed, the model and algorithm can also apply to other computer-controlled subaperture methods.
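A toy version of the combined formulation — squared residual form error plus a dwell-time-gradient penalty under non-negativity constraints — can be written with scipy. The influence matrix A and the smoothness weight beta below are hypothetical stand-ins for the paper's setup, not its actual model.

```python
import numpy as np
from scipy.optimize import minimize

# A: hypothetical (surface points x dwell positions) influence matrix;
# error_map: flattened surface error to be removed; beta: smoothness weight.
def solve_dwell_time(A, error_map, beta=0.1):
    n = A.shape[1]

    def objective(d):
        residual = A @ d - error_map        # two-norm term: leftover form error
        grad_d = np.diff(d)                 # gradient term: accel/decel demand
        return residual @ residual + beta * (grad_d @ grad_d)

    res = minimize(objective, x0=np.ones(n), bounds=[(0, None)] * n,
                   method="L-BFGS-B")      # dwell times constrained >= 0
    return res.x

# The feed-rate map then follows as (point spacing) / (dwell time); the
# gradient penalty keeps successive speeds within what the machine can track.
```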
Accelerated 1H MRSI using randomly undersampled spiral-based k-space trajectories.
Chatnuntawech, Itthi; Gagoski, Borjan; Bilgic, Berkin; Cauley, Stephen F; Setsompop, Kawin; Adalsteinsson, Elfar
2014-07-30
To develop and evaluate the performance of an acquisition and reconstruction method for accelerated MR spectroscopic imaging (MRSI) through undersampling of spiral trajectories. A randomly undersampled spiral acquisition and sensitivity encoding (SENSE) with total variation (TV) regularization, random SENSE+TV, is developed and evaluated on a single-slice numerical phantom, in vivo single-slice MRSI, and in vivo three-dimensional (3D)-MRSI at 3 Tesla. Random SENSE+TV was compared with five alternative methods for accelerated MRSI. For the in vivo single-slice MRSI, random SENSE+TV yields up to 2.7 and 2 times reduction in root-mean-square error (RMSE) of reconstructed N-acetyl aspartate (NAA), creatine, and choline maps, compared with the denoised fully sampled and uniformly undersampled SENSE+TV methods with the same acquisition time, respectively. For the in vivo 3D-MRSI, random SENSE+TV yields up to 1.6 times reduction in RMSE, compared with uniform SENSE+TV. Furthermore, by using random SENSE+TV, we have demonstrated on the in vivo single-slice and 3D-MRSI that acceleration factors of 4.5 and 4 are achievable with the same quality as the fully sampled data, as measured by RMSE of reconstructed NAA map, respectively. With the same scan time, random SENSE+TV yields lower RMSEs of metabolite maps than other methods evaluated. Random SENSE+TV achieves up to 4.5-fold acceleration with comparable data quality as the fully sampled acquisition. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
Accelerating artificial intelligence with reconfigurable computing
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw
Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing the computationally intense portions of an algorithm into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible, and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Artificial intelligence is one such field, offering many different algorithms that can be accelerated. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.
Generating clock signals for a cycle accurate, cycle reproducible FPGA based hardware accelerator
Asaad, Sameth W.; Kapur, Mohit
2016-01-05
A method, system and computer program product are disclosed for generating clock signals for a cycle accurate FPGA based hardware accelerator used to simulate operations of a device-under-test (DUT). In one embodiment, the DUT includes multiple device clocks generating multiple device clock signals at multiple frequencies and at a defined frequency ratio; and the FPGA hardware accelerator includes multiple accelerator clocks generating multiple accelerator clock signals to operate the FPGA hardware accelerator to simulate the operations of the DUT. In one embodiment, operations of the DUT are mapped to the FPGA hardware accelerator, and the accelerator clock signals are generated at multiple frequencies and at the defined frequency ratio of the frequencies of the multiple device clocks, to maintain cycle accuracy between the DUT and the FPGA hardware accelerator. In an embodiment, the FPGA hardware accelerator may be used to control the frequencies of the multiple device clocks.
Portis, Ezio; Scaglione, Davide; Acquadro, Alberto; Mauromicale, Giovanni; Mauro, Rosario; Knapp, Steven J; Lanteri, Sergio
2012-05-23
The Asteraceae species Cynara cardunculus (2n = 2x = 34) includes the two fully cross-compatible domesticated taxa globe artichoke (var. scolymus L.) and cultivated cardoon (var. altilis DC). As both are out-pollinators and suffer from marked inbreeding depression, linkage analysis has focussed on the use of a two way pseudo-test cross approach. A set of 172 microsatellite (SSR) loci derived from expressed sequence tag DNA sequence were integrated into the reference C. cardunculus genetic maps, based on segregation among the F1 progeny of a cross between a globe artichoke and a cultivated cardoon. The resulting maps each detected 17 major linkage groups, corresponding to the species' haploid chromosome number. A consensus map based on 66 co-dominant shared loci (64 SSRs and two SNPs) assembled 694 loci, with a mean inter-marker spacing of 2.5 cM. When the maps were used to elucidate the pattern of inheritance of head production earliness, a key commercial trait, seven regions were shown to harbour relevant quantitative trait loci (QTL). Together, these QTL accounted for up to 74% of the overall phenotypic variance. The newly developed consensus as well as the parental genetic maps can accelerate the process of tagging and eventually isolating the genes underlying earliness in both the domesticated C. cardunculus forms. The largest single effect mapped to the same linkage group in each parental map, and explained about one half of the phenotypic variance, thus representing a good candidate for marker assisted selection.
González-Domínguez, Jorge; Remeseiro, Beatriz; Martín, María J
2017-02-01
The analysis of the interference patterns on the tear film lipid layer is a useful clinical test to diagnose dry eye syndrome. This task can be automated with a high degree of accuracy by means of the use of tear film maps. However, the time required by the existing applications to generate them prevents a wider acceptance of this method by medical experts. Multithreading has been previously successfully employed by the authors to accelerate the tear film map definition on multicore single-node machines. In this work, we propose a hybrid message-passing and multithreading parallel approach that further accelerates the generation of tear film maps by exploiting the computational capabilities of distributed-memory systems such as multicore clusters and supercomputers. The algorithm for drawing tear film maps is parallelized using Message Passing Interface (MPI) for inter-node communications and the multithreading support available in the C++11 standard for intra-node parallelization. The original algorithm is modified to reduce the communications and increase the scalability. The hybrid method has been tested on 32 nodes of an Intel cluster (with two 12-core Haswell 2680v3 processors per node) using 50 representative images. Results show that maximum runtime is reduced from almost two minutes using the previous multithreading-only approach to less than ten seconds using the hybrid method. The hybrid MPI/multithreaded implementation can be used by medical experts to obtain tear film maps in only a few seconds, which will significantly accelerate and facilitate the diagnosis of the dry eye syndrome. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
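A skeletal version of such a hybrid scheme can be sketched in Python, assuming mpi4py is available: MPI distributes whole images across nodes, and a thread pool stands in for the C++11 threads within each node. The per-image analysis below is a placeholder, not the authors' tear-film algorithm.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

def define_tear_film_map(image):
    return image.mean()                      # placeholder per-image result

def process_images(images, threads_per_node=24):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    mine = images[rank::size]                # round-robin inter-node split
    with ThreadPoolExecutor(max_workers=threads_per_node) as pool:
        local = list(pool.map(define_tear_film_map, mine))
    return comm.gather(local, root=0)        # collect results on the root node

# Launch with e.g. `mpirun -n 32 python tearfilm.py`; the paper additionally
# splits the work for a single image across threads, which we do not show.
if __name__ == "__main__":
    imgs = [np.random.rand(256, 256) for _ in range(50)]
    results = process_images(imgs)
```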
Geomorphic processes active in the Southwestern Louisiana Canal, Lafourche Parish, Louisiana
NASA Technical Reports Server (NTRS)
Doiron, L. N.; Whitehurst, C. A.
1974-01-01
The geomorphological changes causing the destruction of the banks of the Southwestern Louisiana Canal are studied by means of field work, laboratory analyses, and infrared color imagery interpretation. Turbulence and flow patterns are mapped and related to erosion and sediment deposition processes. The accelerated erosion rate of the last decade is discussed, with two causative factors cited: (1) development of faster boats, increasing bank and bottom erosion, and (2) a subsequently larger tidal influx, with greater erosive ability. The physical properties of the canal bank materials are also analyzed. It is concluded that channel erosion progressively increases, with no indications of stabilization, until the channels merge with other waterways and become indistinguishable from natural water bodies.
NASA Technical Reports Server (NTRS)
Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei
2016-01-01
Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in its original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection application deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
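The essence of such an index is a small mapping from logical array coordinates to physical byte ranges and hosting nodes, so a query reads only the blocks it needs and tasks can be scheduled where the data live. A schematic follows; the field names are illustrative, not the paper's implementation.

```python
from collections import namedtuple

# (variable, time_step, block_id) -> where that block physically lives.
Entry = namedtuple("Entry", "path offset nbytes hosts")

index = {}

def register(variable, time_step, block_id, path, offset, nbytes, hosts):
    index[(variable, time_step, block_id)] = Entry(path, offset, nbytes, hosts)

def query(variable, time_range, block_ids):
    """Return the physical locations touched by a spatiotemporal query."""
    return [index[(variable, t, b)]
            for t in time_range for b in block_ids
            if (variable, t, b) in index]

# A data-partitioning step can then group the returned entries by .hosts so
# each MapReduce task reads blocks stored on its own node (data locality).
```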
Malleable architecture generator for FPGA computing
NASA Astrophysics Data System (ADS)
Gokhale, Maya; Kaba, James; Marks, Aaron; Kim, Jang
1996-10-01
The malleable architecture generator (MARGE) is a tool set that translates high-level parallel C to configuration bit streams for field-programmable logic based computing systems. MARGE creates an application-specific instruction set and generates the custom hardware components required to perform exactly those computations specified by the C program. In contrast to traditional fixed-instruction processors, MARGE's dynamic instruction set creation provides for efficient use of hardware resources. MARGE processes intermediate code in which each operation is annotated by the bit lengths of the operands. Each basic block (sequence of straight line code) is mapped into a single custom instruction which contains all the operations and logic inherent in the block. A synthesis phase maps the operations comprising the instructions into register transfer level structural components and control logic which have been optimized to exploit functional parallelism and function unit reuse. As a final stage, commercial technology-specific tools are used to generate configuration bit streams for the desired target hardware. Technology- specific pre-placed, pre-routed macro blocks are utilized to implement as much of the hardware as possible. MARGE currently supports the Xilinx-based Splash-2 reconfigurable accelerator and National Semiconductor's CLAy-based parallel accelerator, MAPA. The MARGE approach has been demonstrated on systolic applications such as DNA sequence comparison.
Shakal, A.; Graizer, V.; Huang, M.; Borcherdt, R.; Haddadi, H.; Lin, K.-W.; Stephens, C.; Roffers, P.
2005-01-01
The Parkfield 2004 earthquake yielded the most extensive set of strong-motion data in the near-source region of a magnitude 6 earthquake yet obtained. The recordings of acceleration and volumetric strain provide an unprecedented document of the near-source seismic radiation for a moderate earthquake. The spatial density of the measurements along the fault zone and in the linear arrays perpendicular to the fault is expected to provide an exceptional opportunity to develop improved models of the rupture process. The closely spaced measurements should help infer the temporal and spatial distribution of the rupture process at much higher resolution than previously possible. Preliminary analyses of the peak acceleration data presented herein show that the motions vary significantly along the rupture zone, from 0.13 g to more than 2.5 g, with a map of the values showing that the larger values are concentrated in three areas. Particle motions at the near-fault stations are consistent with bilateral rupture. Fault-normal pulses similar to those observed in recent strike-slip earthquakes are apparent at several of the stations. The attenuation of peak ground acceleration with distance is more rapid than that indicated by some standard relationships but adequately fits others. Evidence for directivity in the peak acceleration data is not strong. Several stations very near, or over, the rupturing fault recorded relatively low accelerations. These recordings may provide a quantitative basis to understand observations of low near-fault shaking damage that have been reported in other large strike-slip earthquakes.
GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering.
Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka
2016-01-01
Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate computing analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. Therefore, we mapped the time-consuming steps involved in GHOSTZ, which is a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. In an evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads.
Rotation number of integrable symplectic mappings of the plane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zolkin, Timofey; Nagaitsev, Sergei; Danilov, Viatcheslav
2017-04-11
Symplectic mappings are discrete-time analogs of Hamiltonian systems. They appear in many areas of physics, including, for example, accelerators, plasma, and fluids. Integrable mappings, a subclass of symplectic mappings, are equivalent to a Twist map, with a rotation number, constant along the phase trajectory. In this letter, we propose a succinct expression to determine the rotation number and present two examples. Similar to the period of the bounded motion in Hamiltonian systems, the rotation number is the most fundamental property of integrable maps and it provides a way to analyze the phase-space dynamics.
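The letter's succinct closed-form expression is not reproduced here; as a numerical illustration, the rotation number of an integrable map can be estimated as the average phase advance per iteration around the fixed point. A Python sketch with a linear twist map as the test case:

```python
import numpy as np

def rotation_number(mapping, x0, p0, n_iter=100_000):
    """Estimate the average fraction of a turn per iteration of `mapping`."""
    x, p = x0, p0
    total_angle = 0.0
    prev = np.arctan2(p, x)
    for _ in range(n_iter):
        x, p = mapping(x, p)
        ang = np.arctan2(p, x)
        d = ang - prev
        d = (d + np.pi) % (2 * np.pi) - np.pi   # unwrap to (-pi, pi]
        total_angle += d
        prev = ang
    return abs(total_angle) / (2 * np.pi * n_iter)

# Sanity check: a linear symplectic rotation by angle 2*pi*nu has rotation
# number nu, constant on every phase trajectory.
nu = 0.31
c, s = np.cos(2 * np.pi * nu), np.sin(2 * np.pi * nu)
twist = lambda x, p: (c * x + s * p, -s * x + c * p)
print(rotation_number(twist, 1.0, 0.0))          # ~0.31
```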
ERIC Educational Resources Information Center
Stringfield, Suzanne Griggs; Luscre, Deanna; Gast, David L.
2011-01-01
In this study, three elementary-aged boys with high-functioning autism (HFA) were taught to use a graphic organizer called a Story Map as a postreading tool during language arts instruction. Students learned to accurately complete the Story Map. The effect of the intervention on story recall was assessed within the context of a multiple-baseline…
Minati, Ludovico; Cercignani, Mara; Chan, Dennis
2013-10-01
Graph theory-based analyses of brain network topology can be used to model the spatiotemporal correlations in neural activity detected through fMRI, and such approaches have wide-ranging potential, from detection of alterations in preclinical Alzheimer's disease through to command identification in brain-machine interfaces. However, due to prohibitive computational costs, graph-based analyses to date have principally focused on measuring connection density rather than mapping the topological architecture in full by exhaustive shortest-path determination. This paper outlines a solution to this problem through parallel implementation of Dijkstra's algorithm in programmable logic. The processor design is optimized for large, sparse graphs and provided in full as synthesizable VHDL code. An acceleration factor between 15 and 18 is obtained on a representative resting-state fMRI dataset, and maps of Euclidean path length reveal the anticipated heterogeneous cortical involvement in long-range integrative processing. These results enable high-resolution geodesic connectivity mapping for resting-state fMRI in patient populations and real-time geodesic mapping to support identification of imagined actions for fMRI-based brain-machine interfaces. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
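The computation being moved into programmable logic is, per source vertex, ordinary Dijkstra on a sparse weighted graph; the FPGA design runs many such searches in parallel. A software reference in Python for comparison:

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, w), ...]} with w > 0. Returns distances from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Exhaustive (all-pairs) mapping of a functional connectivity graph means
# running dijkstra() from every node; edge weights are typically a decreasing
# function of correlation strength, so strongly coupled regions are "close".
```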
Shen, Qiang; Wang, Hansheng; Shum, C K; Jiang, Liming; Hsu, Hou Tse; Dong, Jinglong
2018-03-14
We constructed Antarctic ice velocity maps from Landsat 8 images for the years 2014 and 2015 at a high spatial resolution (100 m). These maps were assembled from 10,690 scenes of displacement vectors inferred from more than 10,000 optical images acquired from December 2013 through March 2016. We estimated the mass discharge of the Antarctic ice sheet in 2008, 2014, and 2015 using the Landsat ice velocity maps, interferometric synthetic aperture radar (InSAR)-derived ice velocity maps (~2008) available from prior studies, and ice thickness data. An increased mass discharge (53 ± 14 Gt yr⁻¹) was found in the East Indian Ocean sector since 2008 due to unexpected widespread glacial acceleration in Wilkes Land, East Antarctica, while the other five oceanic sectors did not exhibit significant changes. However, present-day increased mass loss was found by previous studies predominantly in west Antarctica and the Antarctic Peninsula. The newly discovered increased mass loss in Wilkes Land suggests that the ocean heat flux may already be influencing ice dynamics in the marine-based sector of the East Antarctic ice sheet (EAIS). The marine-based sector could be adversely impacted by ongoing warming in the Southern Ocean, and this process may be conducive to destabilization.
Ionosphere-magnetosphere coupling
NASA Technical Reports Server (NTRS)
Kaufmann, Richard L.
1994-01-01
Principal results are presented for the four papers that were supported by this grant. These papers include: 'Mapping and Energization in the Magnetotail. 1. Magnetospheric Boundaries'; 'Mapping and Energization in the Magnetotail. 2. Particle Acceleration'; 'Cross-Tail Current: Resonant Orbits'; and 'Cross-Tail Current, Field-Aligned Current, and B(sub y)'.
Hierarchical algorithms for modeling the ocean on hierarchical architectures
NASA Astrophysics Data System (ADS)
Hill, C. N.
2012-12-01
This presentation will describe an approach to using accelerator/co-processor technology that maps hierarchical, multi-scale modeling techniques to an underlying hierarchical hardware architecture. The focus of this work is on making effective use of both CPU and accelerator/co-processor parts of a system, for large scale ocean modeling. In the work, a lower resolution basin scale ocean model is locally coupled to multiple, "embedded", limited area higher resolution sub-models. The higher resolution models execute on co-processor/accelerator hardware and do not interact directly with other sub-models. The lower resolution basin scale model executes on the system CPU(s). The result is a multi-scale algorithm that aligns with hardware designs in the co-processor/accelerator space. We demonstrate this approach being used to substitute explicit process models for standard parameterizations. Code for our sub-models is implemented through a generic abstraction layer, so that we can target multiple accelerator architectures with different programming environments. We will present two application and implementation examples. One uses the CUDA programming environment and targets GPU hardware. This example employs a simple non-hydrostatic two dimensional sub-model to represent vertical motion more accurately. The second example uses a highly threaded three-dimensional model at high resolution. This targets a MIC/Xeon Phi like environment and uses sub-models as a way to explicitly compute sub-mesoscale terms. In both cases the accelerator/co-processor capability provides extra compute cycles that allow improved model fidelity for little or no extra wall-clock time cost.
2012-01-01
Background Cucurbita pepo is a member of the Cucurbitaceae family, the second-most important horticultural family in terms of economic importance after Solanaceae. The "summer squash" types, including Zucchini and Scallop, rank among the highest-valued vegetables worldwide. There are few genomic tools available for this species. The first Cucurbita transcriptome, along with a large collection of Single Nucleotide Polymorphisms (SNP), was recently generated using massive sequencing. A set of 384 SNP was selected to generate an Illumina GoldenGate assay in order to construct the first SNP-based genetic map of Cucurbita and map quantitative trait loci (QTL). Results We herein present the construction of the first SNP-based genetic map of Cucurbita pepo using a population derived from the cross of two varieties with contrasting phenotypes, representing the main cultivar groups of the species' two subspecies: Zucchini (subsp. pepo) × Scallop (subsp. ovifera). The mapping population was genotyped with 384 SNP, a set of selected EST-SNP identified in silico after massive sequencing of the transcriptomes of both parents, using the Illumina GoldenGate platform. The global success rate of the assay was higher than 85%. In total, 304 SNP were mapped, along with 11 SSR from a previous map, giving a map density of 5.56 cM/marker. This map was used to infer syntenic relationships between C. pepo and cucumber and to successfully map QTL that control plant, flowering and fruit traits that are of benefit to squash breeding. The QTL effects were validated in backcross populations. Conclusion Our results show that massive sequencing in different genotypes is an excellent tool for SNP discovery, and that the Illumina GoldenGate platform can be successfully applied to constructing genetic maps and performing QTL analysis in Cucurbita. This is the first SNP-based genetic map in the Cucurbita genus and is an invaluable new tool for biological research, especially considering that most of these markers are located in the coding regions of genes involved in different physiological processes. The platform will also be useful for future mapping and diversity studies, and will be essential in order to accelerate the process of breeding new and better-adapted squash varieties. PMID:22356647
USDA-ARS?s Scientific Manuscript database
Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker d...
New seismic hazard maps for Puerto Rico and the U.S. Virgin Islands
Mueller, C.; Frankel, A.; Petersen, M.; Leyendecker, E.
2010-01-01
The probabilistic methodology developed by the U.S. Geological Survey is applied to a new seismic hazard assessment for Puerto Rico and the U.S. Virgin Islands. Modeled seismic sources include gridded historical seismicity, subduction-interface and strike-slip faults with known slip rates, and two broad zones of crustal extension with seismicity rates constrained by GPS geodesy. We use attenuation relations from western North American and worldwide data, as well as a Caribbean-specific relation. Results are presented as maps of peak ground acceleration and 0.2- and 1.0-second spectral response acceleration for 2% and 10% probabilities of exceedance in 50 years (return periods of about 2,500 and 500 years, respectively). This paper describes the hazard model and maps that were balloted by the Building Seismic Safety Council and recommended for the 2003 NEHRP Provisions and the 2006 International Building Code. © 2010, Earthquake Engineering Research Institute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muñoz-Andrade, Juan D., E-mail: jdma@correo.azc.uam.mx
2013-12-16
Through systematic study, a mapping of the polycrystalline flow of sheet 304 austenitic stainless steel (ASS) during tension testing at constant crosshead velocity at room temperature was obtained. The main results establish that the trajectories of crystals in the polycrystalline spatially extended system (PCSES) during the irreversible deformation process obey hyperbolic motion: the ratio between the expansion velocity of the field and the velocity of the field source is not constant, and the field lines of the crystal trajectories become curved; such accelerated motion is called hyperbolic motion. This behavior is assisted by dislocation dynamics and the self-accommodation process between crystals in the PCSES. Furthermore, by applying the quantum mechanical and relativistic model proposed by Muñoz-Andrade, the activation energy for polycrystalline flow during the tension test of 304 ASS was calculated for each instant in a global form. In conclusion, it was established that mapping the polycrystalline flow is fundamental to describing in an integral way the phenomenology and mechanics of irreversible deformation processes.
Cartography for lunar exploration: 2008 status and mission plans
Kirk, R.L.; Archinal, B.A.; Gaddis, L.R.; Rosiek, M.R.; Chen, Jun; Jiang, Jie; Nayak, Shailesh
2008-01-01
The initial spacecraft exploration of the Moon in the 1960s-70s yielded extensive data, primarily in the form of film and television images, which were used to produce a large number of hardcopy maps by conventional techniques. A second era of exploration, beginning in the early 1990s, has produced digital data including global multispectral imagery and altimetry, from which a new generation of digital map products tied to a rapidly evolving global control network has been made. Efforts are also underway to scan the earlier hardcopy maps for online distribution and to digitize the film images so that modern processing techniques can be used to make high-resolution digital terrain models (DTMs) and image mosaics consistent with the current global control. The pace of lunar exploration is accelerating dramatically, with as many as eight new missions already launched or planned for the current decade. These missions, of which the most important for cartography are SMART-1 (Europe), Kaguya/SELENE (Japan), Chang'e-1 (China), Chandrayaan-1 (India), and Lunar Reconnaissance Orbiter (USA), will return a volume of data exceeding that of all previous lunar and planetary missions combined. Framing and scanner camera images, including multispectral and stereo data, hyperspectral images, synthetic aperture radar (SAR) images, and laser altimetry will all be collected, including, in most cases, multiple data sets of each type. Substantial advances in international standardization and cooperation, development of new and more efficient data processing methods, and availability of resources for processing and archiving will all be needed if the next generation of missions are to fulfill their potential for high-precision mapping of the Moon in support of subsequent exploration and scientific investigation.
USDA-ARS?s Scientific Manuscript database
The American cranberry (Vaccinium macrocarpon Ait.) is a recently domesticated, but economically important, fruit crop with limited molecular resources. New genetic resources could accelerate genetic gain in cranberry through characterization of its genomic structure and by enabling molecular-assist...
Accelerating DNA analysis applications on GPU clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste
DNA analysis is an emerging application of high-performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphic Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search and on the number of matches, and poses significant challenges to current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. Load balancing also plays a crucial role when considering the limited bandwidth among the nodes of these systems. In this paper we present an efficient implementation of the Aho-Corasick algorithm for high performance clusters accelerated with GPUs. We discuss how we partitioned and adapted the algorithm to fit the Tesla C1060 GPU and then present an MPI based implementation for a heterogeneous high performance cluster. We compare this implementation to MPI and MPI-with-pthreads based implementations for a homogeneous cluster of x86 processors, discussing the stability vs. the performance and the scaling of the solutions, taking into consideration aspects such as the bandwidth among the different nodes.
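For concreteness, a compact pure-Python Aho-Corasick (goto/fail automaton) makes the data dependence tangible: every input symbol advances the automaton and may chase fail links, so throughput varies with pattern count and match density. This is our reference sketch, not the paper's GPU/MPI implementation.

```python
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                       # 1) trie over all patterns
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())            # 2) breadth-first fail links
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]             # inherit matches via fail link
    return goto, fail, out

def search(text, goto, fail, out):
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]                        # fall back on a mismatch
        s = goto[s].get(ch, 0)
        hits.extend((i - len(p) + 1, p) for p in out[s])
    return hits

# Example: search("ushers", *build_automaton(["he", "she", "his", "hers"]))
# reports "she", "he", and "hers" with their start offsets.
```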
Hardware Architectures for Data-Intensive Computing Problems: A Case Study for String Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
DNA analysis is an emerging application of high-performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data, which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphic Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search and on the number of matches, and poses significant challenges to current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. In this paper, we discuss the implementation of the Aho-Corasick algorithm for GPU-accelerated high performance systems. We present an optimized implementation of Aho-Corasick for GPUs and discuss its tradeoffs on the Tesla T10 and the new Tesla T20 (codename Fermi) GPUs. We then integrate the optimized GPU code, respectively, in an MPI-based and in a pthreads-based load balancer to enable execution of the algorithm on clusters and large shared-memory multiprocessors (SMPs) accelerated with multiple GPUs.
Venus spherical harmonic gravity model to degree and order 60
NASA Technical Reports Server (NTRS)
Konopliv, Alex S.; Sjogren, William L.
1994-01-01
The Magellan and Pioneer Venus Orbiter radiometric tracking data sets have been combined to produce a 60th degree and order spherical harmonic gravity field. The Magellan data include the high-precision X-band gravity tracking from September 1992 to May 1993 and post-aerobraking data up to January 5, 1994. Gravity models are presented from the application of Kaula's power rule for Venus and an alternative a priori method using surface accelerations. Results are given as vertical gravity acceleration at the reference surface, geoid, vertical Bouguer, and vertical isostatic maps with errors for the vertical gravity and geoid maps included. Correlation of the gravity with topography for the different models is also discussed.
Parallel processing optimization strategy based on MapReduce model in cloud storage environment
NASA Astrophysics Data System (ADS)
Cui, Jianming; Liu, Jiayi; Li, Qiuyan
2017-05-01
Currently, many cloud storage workflows package files only after all packets have been received; the packing and unpacking performed while transferring from the local client to the server consume considerable time, and transmission efficiency is correspondingly low. A new parallel processing algorithm is proposed to optimize this transmission mode. Following the MapReduce model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. In simulation experiments on a Hadoop cloud computing platform, the algorithm not only accelerates the file transfer rate but also shortens the waiting time of the Reducer mechanism. It breaks through the traditional sequential transmission constraint and reduces storage coupling, thereby improving transmission efficiency.
Cloud GPU-based simulations for SQUAREMR.
Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H
2017-01-01
Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential of improving the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs) by taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels. The aim of this study was to develop a cloud-based cluster in order to distribute the computational load to several GPU-enabled nodes and accelerate SQUAREMR. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates compared to a non-optimized parameter space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms. The developed cloud-based cluster and optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database, thus bringing SQUAREMR's applicability within time frames that would be likely acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.
Correlation of Noise Signature to Pulsed Power Events at the HERMES III Accelerator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Barbara; Joseph, Nathan Ryan; Salazar, Juan Diego
2016-11-01
The HERMES III accelerator, which is located at Sandia National Laboratories' Tech Area IV, is the largest pulsed gamma X-ray source in the world. The accelerator is made up of 20 inductive cavities that are charged to 1 MV each by complex pulsed power circuitry. The firing time of the machine components ranges between the microsecond and nanosecond timescales. This results in a variety of electromagnetic frequencies when the accelerator fires. Testing was done to identify the HERMES electromagnetic noise signal and to map it to the various accelerator trigger events. This report will show the measurement methods used to capture the noise spectrum produced by the machine and correlate this noise signature with machine events.
Checkpointing for a hybrid computing node
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cher, Chen-Yong
2016-03-08
According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
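A schematic Python sketch of the claimed flow (not the patent's implementation): snapshot the task state in "local memory", resume the task immediately, and copy the snapshot to host memory concurrently. All names and the host-store dictionary are illustrative stand-ins.

    import copy
    import threading

    def checkpoint_and_continue(task_state, host_store, key):
        local_ckpt = copy.deepcopy(task_state)   # checkpoint in "local memory"

        def transfer():
            # Overlaps with continued task execution on the accelerator.
            host_store[key] = local_ckpt

        t = threading.Thread(target=transfer)
        t.start()                                # task resumes right away
        return t                                 # join later, before next ckpt

    host = {}
    state = {"step": 41, "weights": [0.1, 0.2]}
    worker = checkpoint_and_continue(state, host, "ckpt0")
    state["step"] += 1                           # task keeps executing
    worker.join()
    print(host["ckpt0"]["step"])                 # 41: restartable snapshot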
High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas
2017-04-01
Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition, responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks; (2) a high-performance noise source mapping application, responsible for generation of source maps using cross-correlation of seismic records; (3) back-end infrastructure for the coordination of various tasks and computations; (4) a front-end Web interface providing the service to the end-users; and (5) a data repository. The noise mapping application is composed of four principal modules: (1) pre-processing of raw data, (2) massive cross-correlation, (3) post-processing of correlation data based on computation of the logarithmic energy ratio, and (4) generation of source maps from post-processed data. Implementation of the solution posed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and engineering an appropriate data storage solution. The present pilot version of the service implements noise source maps for Switzerland. Extension of the solution to Central Europe is planned for the next project phase.
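The computational core is the pairwise cross-correlation of noise records. A minimal FFT-based Python sketch, with arbitrary window length, sampling rate, and imposed shift, is:

    # Minimal sketch of the dominant kernel: FFT-based cross-correlation of
    # two noise records. All sizes are illustrative.
    import numpy as np

    def noise_cross_correlation(a, b, max_lag):
        n = len(a) + len(b) - 1
        nfft = 1 << (n - 1).bit_length()             # zero-pad to power of two
        cc = np.fft.irfft(np.fft.rfft(a, nfft) *
                          np.conj(np.fft.rfft(b, nfft)), nfft)
        return np.roll(cc, max_lag)[: 2 * max_lag + 1]  # lags -max_lag..+max_lag

    fs = 25.0                                         # samples per second
    rng = np.random.default_rng(0)
    sta1 = rng.standard_normal(int(3600 * fs))        # one hour of "noise"
    sta2 = np.roll(sta1, 50) + 0.5 * rng.standard_normal(int(3600 * fs))
    cc = noise_cross_correlation(sta1, sta2, max_lag=int(10 * fs))
    print(np.argmax(cc) - 10 * fs)   # prints -50.0: sta2 lags sta1 by 50 samples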
Schooling in Times of Acceleration
ERIC Educational Resources Information Center
Buddeberg, Magdalena; Hornberg, Sabine
2017-01-01
Modern societies are characterised by forms of acceleration, which influence social processes. Sociologist Hartmut Rosa has systematised temporal structures by focusing on three categories of social acceleration: technical acceleration, acceleration of social change, and acceleration of the pace of life. All three processes of acceleration are…
Intelligent seismic risk mitigation system on structure building
NASA Astrophysics Data System (ADS)
Suryanita, R.; Maizir, H.; Yuniorto, E.; Jingga, H.
2018-01-01
Indonesia, located on the Pacific Ring of Fire, is one of the highest-risk seismic zones in the world. Strong ground motion might cause catastrophic collapse of buildings, leading to casualties and property damage. Therefore, it is imperative to properly design the structural response of buildings against seismic hazard. The seismic-resistant building design process requires structural analysis to obtain the necessary building responses. However, structural analysis can be very difficult and time consuming. This study aims to predict the structural responses, including displacement, velocity, and acceleration, of a multi-storey building with a fixed floor plan using the Artificial Neural Network (ANN) method based on the 2010 Indonesian seismic hazard map. By varying the building height, soil condition, and seismic location over 47 cities in Indonesia, 6345 data sets were obtained and fed into the ANN model for the learning process. The trained ANN can predict the displacement, velocity, and acceleration responses with a prediction rate of up to 96%. The trained ANN architecture and weight factors were later used to build a simple tool in the Visual Basic program, which provides the structural response predictions mentioned previously.
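A hedged sketch of the approach in Python: the synthetic inputs, soil-class coding, and target formula below are stand-ins for the paper's 6345 response-analysis data sets, shown only to make the regression setup concrete.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = np.column_stack([
        rng.uniform(10, 60, 2000),      # building height [m] (assumed range)
        rng.integers(1, 4, 2000),       # soil class (hypothetical coding)
        rng.uniform(0.1, 0.9, 2000),    # mapped peak ground acceleration [g]
    ])
    # Hypothetical target: roof displacement grows with height and hazard.
    y = 0.002 * X[:, 0] * X[:, 2] * (1 + 0.2 * X[:, 1]) \
        + rng.normal(0, 0.001, 2000)

    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
    model.fit(X[:1500], y[:1500])
    print("R^2 on held-out data:", model.score(X[1500:], y[1500:]))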
Multilevel Summation of Electrostatic Potentials Using Graphics Processing Units*
Hardy, David J.; Stone, John E.; Schulten, Klaus
2009-01-01
Physical and engineering practicalities involved in microprocessor design have resulted in flat performance growth for traditional single-core microprocessors. The urgent need for continuing increases in the performance of scientific applications requires the use of many-core processors and accelerators such as graphics processing units (GPUs). This paper discusses GPU acceleration of the multilevel summation method for computing electrostatic potentials and forces for a system of charged atoms, which is a problem of paramount importance in biomolecular modeling applications. We present and test a new GPU algorithm for the long-range part of the potentials that computes a cutoff pair potential between lattice points, essentially convolving a fixed 3-D lattice of “weights” over all sub-cubes of a much larger lattice. The implementation exploits the different memory subsystems provided on the GPU to stream optimally sized data sets through the multiprocessors. We demonstrate for the full multilevel summation calculation speedups of up to 26 using a single GPU and 46 using multiple GPUs, enabling the computation of a high-resolution map of the electrostatic potential for a system of 1.5 million atoms in under 12 seconds. PMID:20161132
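The long-range lattice step can be pictured as a small 3-D convolution. The Python sketch below uses an illustrative softened 1/r stencil rather than the paper's actual kernel, purely to show the "fixed lattice of weights convolved over a larger lattice" structure.

    import numpy as np
    from scipy.ndimage import convolve

    h, cutoff = 1.0, 4                     # lattice spacing and stencil radius
    ax = np.arange(-cutoff, cutoff + 1) * h
    dx, dy, dz = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    # Softened 1/r weights (illustrative, not the multilevel summation kernel):
    weights = np.where(r > 0, 1.0 / np.maximum(r, h), 2.0 / h)

    charges = np.zeros((32, 32, 32))
    charges[16, 16, 16] = 1.0              # a single unit charge on the lattice
    potential = convolve(charges, weights, mode="constant")
    print(potential[20, 16, 16])           # 0.25: 1/r at 4 lattice spacings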
cisTEM, user-friendly software for single-particle image processing.
Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus
2018-03-07
We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200k-300k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org.
Seismic hazard in the Istanbul metropolitan area: A preliminary re-evaluation
Kalkan, E.; Gulkan, Polat; Ozturk, N.Y.; Celebi, M.
2008-01-01
In 1999, two destructive earthquakes (M7.4 Kocaeli and M7.2 Duzce) occurred in the northwest of Turkey and resulted in major stress-drops on the western segment of the North Anatolian Fault system where it continues under the Marmara Sea. These undersea fault segments were recently explored using bathymetric and reflection surveys. These recent findings helped to reshape the seismotectonic environment of the Marmara basin, which is a perplexing tectonic domain. Based on the newly collected information, the seismic hazard of the Marmara region, particularly the Istanbul Metropolitan Area and its vicinity, was re-examined using a probabilistic approach. Two seismic source and alternate recurrence models combined with various indigenous and foreign attenuation relationships were adapted within a logic tree formulation to quantify and project the regional exposure on a set of hazard maps. The hazard maps show the peak horizontal ground acceleration and spectral acceleration at 1.0 s. These acceleration levels were computed for 2% and 10% probabilities of exceedance in 50 years.
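The quoted hazard levels translate into mean return periods by simple Poisson arithmetic, as in this short worked example:

    # A P% chance of exceedance in T years corresponds, under a Poisson
    # model, to an annual rate lam = -ln(1 - P)/T and return period 1/lam.
    import math

    for p, t in [(0.10, 50), (0.02, 50)]:
        lam = -math.log(1 - p) / t
        print(f"{p:.0%} in {t} yr -> return period ~ {1 / lam:.0f} years")
    # 10% in 50 yr -> ~475 years; 2% in 50 yr -> ~2475 years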
NASA Astrophysics Data System (ADS)
Mukherjee, Biswaroop; Peter, Christine; Kremer, Kurt
2017-09-01
Understanding the connections between the characteristic dynamical time scales associated with a coarse-grained (CG) and a detailed representation is central to the applicability of coarse-graining methods to understand molecular processes. The process of coarse graining leads to accelerated dynamics, owing to the smoothing of the underlying free-energy landscapes. Often a single time-mapping factor is used to relate the time scales associated with the two representations. We critically examine this idea using a model system ideally suited for this purpose. Single molecular transport properties are studied via molecular dynamics simulations of the CG and atomistic representations of a liquid crystalline, azobenzene containing mesogen, simulated in the smectic and the isotropic phases. The out-of-plane dynamics in the smectic phase occurs via molecular hops from one smectic layer to the next. Hopping can occur via two mechanisms, with and without significant reorientation. The out-of-plane transport can be understood as a superposition of two (one associated with each mode of transport) independent continuous time random walks, for which a single time-mapping factor would be rather inadequate. A comparison of the free-energy surfaces, relevant to the out-of-plane transport, qualitatively supports the above observations. Thus, this work underlines the need for building CG models that exhibit both structural and dynamical consistency with the underlying atomistic model.
USDA-ARS?s Scientific Manuscript database
Single nucleotide polymorphism was employed in the construction of a high-resolution, expressed sequence tag (EST) map of Aegilops tauschii, the diploid source of the wheat D genome. Comparison of the map with the rice and sorghum genome sequences revealed 50 inversions and translocations; 2, 8, and...
Ant Colony Optimization for Mapping, Scheduling and Placing in Reconfigurable Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrandi, Fabrizio; Lanzi, Pier Luca; Pilato, Christian
Modern heterogeneous embedded platforms, composed of several digital signal, application-specific, and general-purpose processors, also include reconfigurable devices supporting partial dynamic reconfiguration. These devices can change the behavior of some of their parts during execution, allowing hardware acceleration of more sections of the applications. Nevertheless, partial dynamic reconfiguration imposes severe overheads in terms of latency. For such systems, a critical part of the design phase is deciding on which processing elements (mapping) and when (scheduling) to execute a task, but also how to place tasks on the reconfigurable device to guarantee the most efficient reuse of the programmable logic. In this paper we propose an algorithm based on Ant Colony Optimization (ACO) that simultaneously executes the scheduling, the mapping, and the linear placing of tasks, hiding reconfiguration overheads through prefetching. Our heuristic gradually constructs solutions and then searches around the best ones, cutting out non-promising areas of the design space. We show how to consider the partial dynamic reconfiguration constraints in the scheduling, placing, and mapping problems and compare our formulation to other heuristics that address the same problems. We demonstrate that our proposal is more general and robust, and finds better solutions (16.5% on average) with respect to competing solutions.
Evaluation of Seismicity West of Block-Lut for Deterministic Seismic Hazard Assessment of Shahdad, Iran
NASA Astrophysics Data System (ADS)
Ney, B.; Askari, M.
2009-04-01
Seismic hazard assessment has been carried out for the city of Shahdad, Iran, and four maps (Kerman, Bam, Nakhil Ab, and Allah Abad) have been prepared to indicate the deterministic estimate of peak ground acceleration (PGA) in this area. Deterministic seismic hazard assessment was performed for a region in eastern Iran (Shahdad) based on the available geological, seismological, and geophysical information, and a seismic zoning map of the region was constructed. First, a seismotectonic map of the study region within a radius of 100 km was prepared using geological maps, the distribution of historical and instrumental earthquake data, and focal mechanism solutions; it was used as the base map for delineation of potential seismic sources. Then the minimum distance from every seismic source to the site (Shahdad) and the maximum magnitude for each source were determined. According to the results, the peak ground acceleration in Shahdad, estimated using the Fukushima and Tanaka (1990) attenuation relationship, is 0.58 g, associated with movement of the Nayband fault at a distance of 2.4 km from the site and a maximum magnitude of Ms = 7.5.
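For illustration, a deterministic PGA estimate of this kind reduces to evaluating an attenuation relationship at the controlling scenario. The functional form and coefficients in the sketch below are generic placeholders (not Fukushima and Tanaka's published values), chosen only so the toy formula lands near the reported 0.58 g:

    import math

    def pga_g(magnitude, distance_km, a=0.41, b=-1.0, c=0.0034, d=1.3, e=2.74):
        """Generic form: log10(PGA[g]) = a*M + b*log10(R + d) - c*R - e.
        All coefficients are illustrative placeholders."""
        return 10 ** (a * magnitude + b * math.log10(distance_km + d)
                      - c * distance_km - e)

    # Controlling scenario from the abstract: Ms 7.5 at 2.4 km.
    print(f"PGA ~ {pga_g(7.5, 2.4):.2f} g")   # ~0.57 g with these toy values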
How to generate a sound-localization map in fish
NASA Astrophysics Data System (ADS)
van Hemmen, J. Leo
2015-03-01
How sound localization is represented in the fish brain is a research field largely unbiased by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense, calcareous stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of the left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
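A toy Python sketch of the proposed temporal code (the kappa scale and phase statistics are assumed values, not the authors' fitted parameters): the mismatch between a hair cell's preferred orientation θj and the acceleration direction θs sets the concentration of a von Mises phase distribution, so well-tuned cells fire with the tightest synchrony.

    import numpy as np

    rng = np.random.default_rng(2)

    def spike_phases(theta_j, theta_s, n_spikes=200, kappa_max=8.0):
        tuning = np.cos(theta_j - theta_s)       # 1 = optimal orientation
        kappa = kappa_max * max(tuning, 0.0)     # sharper locking when tuned
        return rng.vonmises(0.0, kappa + 1e-9, n_spikes)

    theta_s = np.deg2rad(30)                     # acceleration direction
    for theta_j in np.deg2rad([30, 60, 90]):
        phases = spike_phases(theta_j, theta_s)
        # Circular concentration |<e^{i*phase}>|: near 1 = tight synchrony.
        print(round(abs(np.exp(1j * phases).mean()), 2))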
Sci—Thur AM: YIS - 08: Constructing an Attenuation map for a PET/MR Breast coil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrick, John C.; Lawson Health Research Institute; London Regional Cancer Program
2014-08-15
In 2013, around 23,000 Canadian women and 200 Canadian men were diagnosed with breast cancer. An estimated 5100 women and 55 men died from the disease. Using the sensitivity of MRI with the selectivity of PET, PET/MRI combines anatomical and functional information within the same scan and could help with early detection in high-risk patients. MRI requires radiofrequency coils for transmitting energy and receiving signal, but the breast coil attenuates the PET signal. To correct for this PET attenuation, a 3-dimensional map of linear attenuation coefficients (μ-map) of the breast coil must be created and incorporated into the PET reconstruction process. Several approaches have been proposed for building hardware μ-maps, some of which include the use of conventional kVCT and dual-energy CT. These methods can produce high-resolution images based on the electron densities of materials that can be converted into μ-maps. However, imaging hardware containing metal components with photons in the kV range is susceptible to metal artifacts. These artifacts can compromise the accuracy of the resulting μ-map and PET reconstruction; therefore, high-Z components should be removed. We propose a method for calculating μ-maps without removing coil components, based on megavoltage (MV) imaging with a linear accelerator that has been detuned for imaging at 1.0 MeV. Containers of known geometry filled with 18F were placed in the breast coil for imaging. A comparison between reconstructions based on the different μ-map construction methods was made. PET reconstructions with our method show a maximum of 6% difference relative to the existing kVCT-based reconstructions.
GPU-Acceleration of Sequence Homology Searches with Database Subsequence Clustering
Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka
2016-01-01
Sequence homology searches are used in various fields and require large amounts of computation time, especially for metagenomic analysis, owing to the large number of queries and the database size. To accelerate computing analyses, graphics processing units (GPUs) are widely used as a low-cost, high-performance computing platform. Therefore, we mapped the time-consuming steps involved in GHOSTZ, which is a state-of-the-art homology search algorithm for protein sequences, onto a GPU and implemented it as GHOSTZ-GPU. In addition, we optimized memory access for GPU calculations and for communication between the CPU and GPU. According to the results of an evaluation test involving metagenomic data, GHOSTZ-GPU with 12 CPU threads and 1 GPU was approximately 3.0- to 4.1-fold faster than GHOSTZ with 12 CPU threads. Moreover, GHOSTZ-GPU with 12 CPU threads and 3 GPUs was approximately 5.8- to 7.7-fold faster than GHOSTZ with 12 CPU threads. PMID:27482905
Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations
Toth, Alex; Ellis, J. Austin; Evans, Tom; ...
2017-10-26
Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.
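A minimal sketch of the underlying iteration, with a small noise term standing in for the inaccurate function evaluations analyzed above. This is the textbook Walker-Ni formulation of Anderson acceleration, not the authors' neutronics code; the contraction map, noise level, and depth m are illustrative.

    import numpy as np

    def anderson(g, x0, m=3, iters=30):
        """Anderson acceleration for the fixed point x = g(x)."""
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        G, F = [], []                       # histories of g(x_k) and residuals
        for _ in range(iters):
            gx = g(x)
            f = gx - x
            G.append(gx); F.append(f)
            if len(F) > m + 1:
                G.pop(0); F.pop(0)
            if len(F) == 1:
                x = gx                      # plain fixed-point step
            else:
                dF = np.column_stack([F[i+1] - F[i] for i in range(len(F) - 1)])
                dG = np.column_stack([G[i+1] - G[i] for i in range(len(G) - 1)])
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                x = gx - dG @ gamma
        return x

    # Toy contraction with evaluation noise, echoing the setting above:
    rng = np.random.default_rng(3)
    g_noisy = lambda x: np.cos(x) + 1e-6 * rng.standard_normal(x.shape)
    print(anderson(g_noisy, np.array([1.0])))   # ~0.739, up to the noise floor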
Computing Models for FPGA-Based Accelerators
Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt
2011-01-01
Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152
The value of ERTS-1 imagery in resource inventorization on a national scale in South Africa
NASA Technical Reports Server (NTRS)
Malan, O. G.; Macvicar, C. N.; Edwards, D.; Temperley, B. N.; Claassen, L.
1974-01-01
It has been shown that ERTS imagery, particularly in the form of 1:500,000 scale false color photolithographic prints, can contribute very significantly towards facilitating and accelerating (dramatically, in some cases) resource surveys and geologic mapping. Fire mapping on a national scale becomes feasible; numerous new geologic features, particularly lineaments, have been discovered; land use can be mapped efficiently on a regional scale and degraded areas identified. The first detailed tectonic and geomorphological maps of the Republic of South Africa will be published mainly owing to the availability of ERTS images.
Kantarski, Traci; Larson, Steve; Zhang, Xiaofei; DeHaan, Lee; Borevitz, Justin; Anderson, James; Poland, Jesse
2017-01-01
Development of the first consensus genetic map of intermediate wheatgrass gives insight into the genome and tools for molecular breeding. Intermediate wheatgrass (Thinopyrum intermedium) has been identified as a candidate for domestication and improvement as a perennial grain, forage, and biofuel crop and is actively being improved by several breeding programs. To accelerate this process using genomics-assisted breeding, efficient genotyping methods and genetic marker reference maps are needed. We present here the first consensus genetic map for intermediate wheatgrass (IWG), which confirms the species' allohexaploid nature (2n = 6x = 42) and homology to Triticeae genomes. Genotyping-by-sequencing was used to identify markers that fit expected segregation ratios and construct genetic maps for 13 heterogeneous parents of seven full-sib families. These maps were then integrated using a linear programming method to produce a consensus map with 21 linkage groups containing 10,029 markers, 3601 of which were present in at least two populations. Each of the 21 linkage groups contained between 237 and 683 markers, cumulatively covering 5061 cM (2891 cM--Kosambi) with an average distance of 0.5 cM between each pair of markers. Through mapping the sequence tags to the diploid (2n = 2x = 14) barley reference genome, we observed high colinearity and synteny between these genomes, with three homoeologous IWG chromosomes corresponding to each of the seven barley chromosomes, and mapped translocations that are known in the Triticeae. The consensus map is a valuable tool for wheat breeders to map important disease-resistance genes within intermediate wheatgrass. These genomic tools can help lead to rapid improvement of IWG and development of high-yielding cultivars of this perennial grain that would facilitate the sustainable intensification of agricultural systems.
Real-time photo-magnetic imaging.
Nouizi, Farouk; Erkol, Hakan; Luk, Alex; Unlu, Mehmet B; Gulsen, Gultekin
2016-10-01
We previously introduced a new high-resolution diffuse optical imaging modality termed photo-magnetic imaging (PMI). PMI irradiates the object under investigation with near-infrared light and monitors the variations of temperature using magnetic resonance thermometry (MRT). In this paper, we present a real-time PMI image reconstruction algorithm that uses analytic methods to solve the forward problem and assemble the Jacobian matrix much faster. The new algorithm is validated using real MRT-measured temperature maps. In fact, it accelerates the reconstruction process by more than 250 times compared to a single iteration of the FEM-based algorithm, which opens the possibility of real-time PMI.
The Role of the Auroral Processes in the Formation of the Outer Electron Radiation Belt
NASA Astrophysics Data System (ADS)
Stepanova, M. V.; Antonova, E. E.; Pinto, V. A.; Moya, P. S.; Riazantseva, M.; Ovchinnikov, I.
2016-12-01
The role of auroral processes in the formation of the outer electron radiation belt during storms is analyzed using data from the RBSP mission, low-orbiting satellites, and ground-based observations. We analyze fluxes of low-energy precipitating ions using data from the Defense Meteorological Satellite Program (DMSP). The location of the auroral electrojet is obtained from the IMAGE magnetometer network, and the electron distribution in the outer radiation belt from the RBSP mission. We take into account the latest results on auroral oval mapping, according to which the greater part of the auroral oval maps not to the plasma sheet but to the plasma ring surrounding the Earth, in which transverse currents are closed inside the magnetosphere. Such currents constitute the high-latitude continuation of the ordinary ring current. The development of the ring current and its high-latitude continuation generates strong distortion of the Earth's magnetic field and a corresponding adiabatic variation of the relativistic electron fluxes. This adiabatic variation should be considered in the analysis of the processes of acceleration of relativistic electrons and formation of the outer radiation belt. We also analyze plasma pressure profiles during storms and demonstrate the formation of a sharp plasma pressure peak at the equatorial boundary of the auroral oval. It is shown that this observed peak is directly connected to the creation of the seed population of relativistic electrons. We discuss the possibility of predicting the position of the new radiation belt during the recovery phase of a magnetic storm using data from low-orbiting and ground-based observations.
H. T. Schreuder; M. S. Williams
2000-01-01
In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
Cramer, C.H.; Mays, T.W.; ,
2005-01-01
The damaging 1886 moment magnitude ~7 Charleston, South Carolina earthquake is indicative of the moderately likely earthquake activity along this portion of the Atlantic Coast. A recurrence of such an earthquake today would have serious consequences for the nation. The national seismic hazard maps produced by the U.S. Geological Survey (USGS) provide a picture of the levels of seismic hazard across the nation based on the best and most current scientific information. The USGS national maps were updated in 2002 and will become part of the International Codes in 2006. In the past decade, improvements have occurred in the scientific understanding of the nature and character of earthquake activity and expected ground motions in the central and eastern U.S. This paper summarizes the new knowledge of expected earthquake locations, magnitudes, recurrence, and ground-motion decay with distance. New estimates of peak ground acceleration and 0.2 s and 1.0 s spectral acceleration are compared with those displayed in the 1996 national maps. The 2002 maps show increased seismic hazard in much of the coastal plain of South Carolina, but a decrease in long-period (1 s and greater) hazard by up to 20% at distances of over 50 km from the Charleston earthquake zone. Although the national maps do not account for the effects of local or regional sediments, deep coastal-plain sediments can significantly alter expected ground shaking, particularly at long periods, where motions can be 100% higher than in the national maps.
An Endogenous Accelerator for Viral Gene Expression Confers a Fitness Advantage
Teng, Melissa W.; Bolovan-Fritts, Cynthia; Dar, Roy D.; Womack, Andrew; Simpson, Michael L.; Shenk, Thomas; Weinberger, Leor S.
2012-01-01
Many signaling circuits face a fundamental tradeoff between accelerating their response speed while maintaining final levels below a cytotoxic threshold. Here, we describe a transcriptional circuitry that dynamically converts signaling inputs into faster rates without amplifying final equilibrium levels. Using time-lapse microscopy, we find that transcriptional activators accelerate human cytomegalovirus (CMV) gene expression in single cells without amplifying steady-state expression levels, and this acceleration generates a significant replication advantage. We map the accelerator to a highly self-cooperative transcriptional negative-feedback loop (Hill coefficient ~ 7) generated by homo-multimerization of the virus’s essential transactivator protein IE2 at nuclear PML bodies. Eliminating the IE2-accelerator circuit reduces transcriptional strength through mislocalization of incoming viral genomes away from PML bodies and carries a heavy fitness cost. In general, accelerators may provide a mechanism for signal-transduction circuits to respond quickly to external signals without increasing steady-state levels of potentially cytotoxic molecules. PMID:23260143
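The circuit logic can be reproduced in a few lines with illustrative parameters (not fitted CMV values): strong self-cooperative negative feedback permits a high maximal rate, so the response rises faster while the feedback clamps the steady state to the same level as a weaker unregulated gene.

    import numpy as np
    from scipy.integrate import odeint

    delta, K, h = 1.0, 1.0, 7.0                   # assumed toy parameters
    x_ss = 1.0                                    # common steady-state target

    beta_unreg = delta * x_ss                     # unregulated: dx/dt = b - d*x
    beta_fb = delta * x_ss * (1 + (x_ss / K)**h)  # feedback: same fixed point

    t = np.linspace(0, 5, 500)
    x_unreg = odeint(lambda x, t: beta_unreg - delta * x, 0.0, t).ravel()
    x_fb = odeint(lambda x, t: beta_fb / (1 + (x / K)**h) - delta * x,
                  0.0, t).ravel()

    half = lambda x: t[np.searchsorted(x, 0.5 * x_ss)]
    print(f"time to half-max: unregulated {half(x_unreg):.2f}, "
          f"feedback {half(x_fb):.2f}")
    # Feedback reaches half-maximum sooner, with the same final level.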
Seismic hazard maps of Mexico, the Caribbean, and Central and South America
Tanner, J.G.; Shedlock, K.M.
2004-01-01
The growth of megacities in seismically active regions around the world often includes the construction of seismically unsafe buildings and infrastructures, due to an insufficient knowledge of existing seismic hazard and/or economic constraints. Minimization of the loss of life, property damage, and social and economic disruption due to earthquakes depends on reliable estimates of seismic hazard. We have produced a suite of seismic hazard estimates for Mexico, the Caribbean, and Central and South America. One of the preliminary maps in this suite served as the basis for the Caribbean and Central and South America portion of the Global Seismic Hazard Map (GSHM) published in 1999, which depicted peak ground acceleration (pga) with a 10% chance of exceedance in 50 years for rock sites. Herein we present maps depicting pga and 0.2 and 1.0 s spectral accelerations (SA) with 50%, 10%, and 2% chances of exceedance in 50 years for rock sites. The seismicity catalog used in the generation of these maps adds 3 more years of data to those used to calculate the GSHM. Different attenuation functions (consistent with those used to calculate the U.S. and Canadian maps) were used as well. These nine maps are designed to assist in global risk mitigation by providing a general seismic hazard framework and serving as a resource for any national or regional agency to help focus further detailed studies required for regional/local needs. The largest seismic hazard values in Mexico, the Caribbean, and Central and South America generally occur in areas that have been, or are likely to be, the sites of the largest plate boundary earthquakes. High hazard values occur in areas where shallow-to-intermediate seismicity occurs frequently.
In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.
Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart
2015-03-01
Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1-) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both the magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra-high-field MRI.
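For context, the SENSE unfolding that such sensitivity maps feed into is, at reduction factor R = 2, a small least-squares solve per aliased pixel pair. The sketch below shows that textbook step with random synthetic sensitivities, not the rotating-SENSE library method itself:

    import numpy as np

    rng = np.random.default_rng(4)
    n_coils = 4
    # Synthetic complex coil sensitivities at the two overlapped pixels:
    S = rng.standard_normal((n_coils, 2)) + 1j * rng.standard_normal((n_coils, 2))
    x_true = np.array([1.0 + 0.5j, -0.3 + 0.2j])      # the two true pixels

    y = S @ x_true + 0.01 * rng.standard_normal(n_coils)  # aliased coil signals
    x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)          # SENSE unfold
    print(np.round(x_hat, 2))                              # ~ x_true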
TAC Proton Accelerator Facility: The Status and Road Map
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algin, E.; Akkus, B.; Caliskan, A.
2011-06-28
The Proton Accelerator (PA) Project is under development, working towards a Technical Design Report under the umbrella of the larger-scale Turkish Accelerator Center (TAC) Project. The project is supported by the Turkish State Planning Organization. The PA facility will be constructed in a series of stages, including a 3 MeV test stand, a 55 MeV linac which can be extended to 100+ MeV, and then a full 1-3 GeV proton synchrotron or superconducting linac. This article gives an overview of the PA Project, its science applications, and its current status.
Operation regimes of a dielectric laser accelerator
NASA Astrophysics Data System (ADS)
Hanuka, Adi; Schächter, Levi
2018-04-01
We investigate three operation regimes in dielectric laser driven accelerators: maximum efficiency, maximum charge, and maximum loaded gradient. We demonstrate, using a self-consistent approach, that loaded gradients of the order of 1 to 6 [GV/m], efficiencies of 20% to 80%, and an electron flux of 10^14 [el/s] are feasible, without significant concerns regarding damage threshold fluence. The latter imposes that the total charge per squared wavelength is constant (a total of 10^6 per μm^2). We conceive this configuration as a zero-order design that should be considered for the road map of future accelerators.
Consistent global structures of complex RNA states through multidimensional chemical mapping
Cheng, Clarence Yu; Chou, Fang-Chieh; Kladwang, Wipapat; Tian, Siqi; Cordero, Pablo; Das, Rhiju
2015-01-01
Accelerating discoveries of non-coding RNA (ncRNA) in myriad biological processes pose major challenges to structural and functional analysis. Despite progress in secondary structure modeling, high-throughput methods have generally failed to determine ncRNA tertiary structures, even at the 1-nm resolution that enables visualization of how helices and functional motifs are positioned in three dimensions. We report that integrating a new method called MOHCA-seq (Multiplexed •OH Cleavage Analysis with paired-end sequencing) with mutate-and-map secondary structure inference guides Rosetta 3D modeling to consistent 1-nm accuracy for intricately folded ncRNAs with lengths up to 188 nucleotides, including a blind RNA-puzzle challenge, the lariat-capping ribozyme. This multidimensional chemical mapping (MCM) pipeline resolves unexpected tertiary proximities for cyclic-di-GMP, glycine, and adenosylcobalamin riboswitch aptamers without their ligands and a loose structure for the recently discovered human HoxA9D internal ribosome entry site regulon. MCM offers a sequencing-based route to uncovering ncRNA 3D structure, applicable to functionally important but potentially heterogeneous states. DOI: http://dx.doi.org/10.7554/eLife.07600.001 PMID:26035425
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose: To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering, and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
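The variable-splitting idea can be illustrated in reduced form: an l1-penalized (rather than Total Variation) dipole inversion in which every subproblem is closed form, with FFTs for the data term and soft thresholding for the penalty. Grid size, regularization weights, and iteration count below are assumed toy values, and the result is only a qualitative demonstration:

    import numpy as np

    def dipole_kernel(shape):
        kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape],
                                 indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        with np.errstate(divide="ignore", invalid="ignore"):
            D = 1.0 / 3.0 - kz**2 / k2
        D[0, 0, 0] = 0.0
        return D

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def admm_qsm(phase, lam=1e-3, mu=1e-2, iters=50):
        D = dipole_kernel(phase.shape)
        Fy = np.fft.fftn(phase)
        x = np.zeros(phase.shape); z = np.zeros_like(x); u = np.zeros_like(x)
        for _ in range(iters):
            # x-update: (D^2 + mu) X = D*F(y) + mu*F(z - u), closed form via FFTs
            x = np.real(np.fft.ifftn((D * Fy + mu * np.fft.fftn(z - u))
                                     / (D**2 + mu)))
            z = soft(x + u, lam / mu)       # z-update: soft thresholding
            u += x - z                      # dual update
        return x

    chi = np.zeros((32, 32, 32)); chi[12:20, 12:20, 12:20] = 1.0  # toy source
    field = np.real(np.fft.ifftn(dipole_kernel(chi.shape) * np.fft.fftn(chi)))
    print(np.round(admm_qsm(field).max(), 2))   # close to the unit cube value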
Communicating Flood Risk with Street-Level Data
NASA Astrophysics Data System (ADS)
Sanders, B. F.; Matthew, R.; Houston, D.; Cheung, W. H.; Karlin, B.; Schubert, J.; Gallien, T.; Luke, A.; Contreras, S.; Goodrich, K.; Feldman, D.; Basolo, V.; Serrano, K.; Reyes, A.
2015-12-01
Coastal communities around the world face significant and growing flood risks that require an accelerating adaptation response, and fine-resolution urban flood models could serve a pivotal role in enabling communities to meet this need. Such models depict impacts at the level of individual buildings and land parcels or "street level" - the same spatial scale at which individuals are best able to process flood risk information - constituting a powerful tool to help communities build better understandings of flood vulnerabilities and identify cost-effective interventions. To measure understanding of flood risk within a community and the potential impact of street-level models, we carried out a household survey of flood risk awareness in Newport Beach, California, a highly urbanized coastal lowland that presently experiences nuisance flooding from high tides, waves and rainfall and is expected to experience a significant increase in flood frequency and intensity with climate change. Interviews were completed with the aid of a wireless-enabled tablet device that respondents could use to identify areas they understood to be at risk of flooding and to view either a Federal Emergency Management Agency (FEMA) flood map or a more detailed map prepared with a hydrodynamic urban coastal flood model (UCI map) built with grid cells as fine as 3 m resolution and validated with historical flood data. Results indicate differences in the effectiveness of the UCI and FEMA maps at communicating the spatial distribution of flood risk, gender differences in how the maps affect flood understanding, and spatial biases in the perception of flood vulnerabilities.
Hanker, J; Giammara, B
1993-01-01
Recent studies in our laboratories have shown how microwave (MW) irradiation can accelerate a number of tissue-processing techniques, especially staining, to aid in the preparation of single specimens on glass microscope slides or coverslips for examination by light microscopy (and electron microscopy, if required) for diagnostic purposes. Techniques have been developed, which give permanently stained preparations, that can be studied initially by light microscopy, their areas of interest mapped, and computer-automated image analysis performed to obtain quantitative information. This is readily performed after MW-accelerated staining with silver methenamine by the Giammara-Hanker PATS or PATS-TS reaction. This variation of the PAS reaction gives excellent markers for specific infectious agents such as lipopolysaccharides for gram-negative bacteria or mannans for fungi. It is also an excellent stain for glycogen and basement membranes and an excellent marker for type III collagen or reticulin in the endoneurium or perineurium of peripheral nerve or in the capillary walls. Our improved MW-accelerated Feulgen reaction with silver methenamine for nuclear DNA is useful to show the nuclei of bacteria and fungi as well as of cells they are infecting. Improved coating and penetration of tissue surfaces by thiocarbohydrazide bridging of ruthenium red, applied under MW-acceleration, render biologic specimens sufficiently conductive for SEM so that sputter coating with gold is unnecessary. The specimens treated with these highly visible electron-opaque stains can be screened with the light microscope after mounting in polyethylene glycol (PEG) and the structures or areas selected for EM study are mapped with a Micro-Locator slide. After removal of the water soluble PEG the specimens are remounted in the usual EM media for scanning electron microscopy (SEM) or transmission electron microscopy (TEM) study of the mapped areas. By comparing duplicate smears from areas of infection, such as two coverslips of buffy coat smears of blood from a patient with septicemia, the microorganisms responsible can occasionally be classified for antimicrobial therapy long before culture results are available; gram-negative bacteria are positive with the Giammara-Hanker PATS-TS stain, and gram-positive bacteria are positive with the SIGMA HT40 Gram stain. The gram-positive as well as gram-negative bacteria are both initially stained by the crystal violet component of the Gram stain. The crystal violet stain is readily removed from the gram-negative (but not the gram-positive) bacteria when the specimens are rinsed with alcohol/acetone. If this rinse step is omitted, the crystal violet remains attached to both gram-negative and gram-positive bacteria. It can then be rendered insoluble, electron-opaque, and conductive by treatment with silver methenamine solution under MW-irradiation. This metallized crystal violet is a more effective silver stain than the PATS-TS stain for a number of gram-negative spirochetes such as Treponema pallidum, the microbe that causes syphilis.
Highly Efficient Proteolysis Accelerated by Electromagnetic Waves for Peptide Mapping
Chen, Qiwen; Liu, Ting; Chen, Gang
2011-01-01
Proteomics will contribute greatly to the understanding of gene functions in the post-genomic era. In proteome research, protein digestion is a key procedure prior to mass spectrometry identification. During the past decade, a variety of electromagnetic waves have been employed to accelerate proteolysis. This review focuses on the recent advances and the key strategies of these novel proteolysis approaches for digesting and identifying proteins. The subjects covered include microwave-accelerated protein digestion, infrared-assisted proteolysis, ultraviolet-enhanced protein digestion, laser-assisted proteolysis, and future prospects. It is expected that these novel proteolysis strategies accelerated by various electromagnetic waves will become powerful tools in proteome research and will find wide applications in high throughput protein digestion and identification. PMID:22379392
AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.
Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S
2017-09-01
Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost of quality.
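The dictionary search being accelerated is, at its core, a maximum-correlation match. A toy Python sketch with a mono-exponential stand-in for a real fingerprint sequence (times, grid, and true T1 are assumed values):

    import numpy as np

    t = np.linspace(0.1, 3.0, 50)                       # readout times [s]
    T1s = np.linspace(0.3, 2.0, 200)                    # dictionary T1 grid [s]
    D = 1.0 - 2.0 * np.exp(-t[None, :] / T1s[:, None])  # toy recovery atoms
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)   # normalize atoms

    rng = np.random.default_rng(5)
    sig = 1.0 - 2.0 * np.exp(-t / 1.234) + 0.02 * rng.standard_normal(t.size)

    best = np.argmax(np.abs(Dn @ sig))                  # brute-force search
    print(f"estimated T1 ~ {T1s[best]:.3f} s")          # close to the true 1.234 s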
Suram, Santosh K.; Xue, Yexiang; Bai, Junwen; ...
2016-11-21
Rapid construction of phase diagrams is a central tenet of combinatorial materials science, with accelerated materials discovery efforts often hampered by challenges in interpreting combinatorial X-ray diffraction data sets, which we address by developing AgileFD, an artificial intelligence algorithm that enables rapid phase mapping from a combinatorial library of X-ray diffraction patterns. AgileFD models alloying-based peak shifting through a novel expansion of convolutional nonnegative matrix factorization, which not only improves the identification of constituent phases but also maps their concentration and lattice parameter as a function of composition. By incorporating Gibbs' phase rule into the algorithm, physically meaningful phase maps are obtained with unsupervised operation, and more refined solutions are attained by injecting expert knowledge of the system. The algorithm is demonstrated through investigation of the V–Mn–Nb oxide system, where decomposition of eight oxide phases, including two with substantial alloying, provides the first phase map for this pseudoternary system. This phase map enables interpretation of high-throughput band gap data, leading to the discovery of new solar light absorbers and the alloying-based tuning of the direct-allowed band gap energy of MnV2O6. Lastly, the open-source family of AgileFD algorithms can be implemented into a broad range of high-throughput workflows to accelerate materials discovery.
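AgileFD extends convolutional nonnegative matrix factorization with peak shifting and Gibbs' phase rule; the reduced sketch below factors synthetic diffraction-like patterns with plain NMF, only to show the basic decomposition into phase patterns and per-sample concentrations (all peak positions and compositions are invented):

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(6)
    two_theta = np.linspace(10, 80, 400)
    peak = lambda c, w=0.4: np.exp(-0.5 * ((two_theta - c) / w) ** 2)

    phase_a = peak(25) + 0.6 * peak(44)                 # two synthetic phases
    phase_b = peak(31) + 0.8 * peak(57)
    conc = rng.uniform(0, 1, (60, 2))                   # 60 library samples
    X = conc @ np.vstack([phase_a, phase_b]) + 0.01 * rng.uniform(size=(60, 400))

    model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)                          # concentration map
    H = model.components_                               # recovered phase patterns
    print(W.shape, H.shape)                             # (60, 2) and (2, 400)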
Ground motion models used in the 2014 U.S. National Seismic Hazard Maps
Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.
2015-01-01
The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.
Spin dynamics modeling in the AGS based on a stepwise ray-tracing method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutheil, Yann
The AGS provides a polarized proton beam to RHIC. The beam is accelerated in the AGS from Gγ = 4.5 to Gγ = 45.5, and the polarization transmission is critical to the RHIC spin program. In recent years, various systems were implemented to improve the AGS polarization transmission. These upgrades include the double partial snake configuration and the tune jump system. However, 100% polarization transmission through the AGS acceleration cycle is not yet reached. The current efficiency of the polarization transmission is estimated to be around 85% in typical running conditions. Understanding the sources of depolarization in the AGS is critical to improve the AGS polarized proton performance. The complexity of beam and spin dynamics, in part due to the specialized Siberian snake magnets, drove strong interest in original simulation methods. For that, the Zgoubi code, capable of direct particle and spin tracking through field maps, was used here to model the AGS. A model of the AGS using the Zgoubi code was developed and interfaced with the current system through a simple command: the AgsFromSnapRampCmd. Interfacing with the machine control system allows for fast modeling using actual machine parameters. These developments allowed the model to realistically reproduce the optics of the AGS along the acceleration ramp. Additional developments on the Zgoubi code, as well as on post-processing and pre-processing tools, granted long-term multiturn beam tracking capabilities: the tracking of realistic beams along the complete AGS acceleration cycle. Beam multiturn tracking simulations in the AGS, using realistic beam and machine parameters, provided a unique insight into the mechanisms behind the evolution of the beam emittance and polarization during the acceleration cycle. Post-processing software was developed to allow the representation of the relevant quantities from the Zgoubi simulation data. The Zgoubi simulations proved particularly useful for better understanding the polarization losses through horizontal intrinsic spin resonances. The Zgoubi model as well as the tools developed were also used for some direct applications. For instance, beam experiment simulations allowed an accurate estimation of the expected polarization gains from machine changes. In particular, the simulations that involved the tune jump system provided an accurate estimation of polarization gains and the optimum settings that would improve the performance of the AGS.
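Tracking studies of this kind are commonly checked against the standard Froissart-Stora estimate for the polarization surviving an isolated spin resonance crossing. A short companion calculation, with illustrative resonance strengths and crossing speed (not AGS measurements):

    import math

    def froissart_stora(eps, alpha):
        """P_f/P_i after crossing an isolated resonance: eps = resonance
        strength, alpha = crossing speed per radian of orbit angle."""
        return 2.0 * math.exp(-math.pi * abs(eps) ** 2 / (2.0 * alpha)) - 1.0

    for eps in (1e-4, 5e-4, 2e-3):
        print(eps, round(froissart_stora(eps, alpha=5e-6), 3))
    # Weak resonances preserve polarization (+1); strong ones flip it (-1);
    # intermediate strengths depolarize, motivating tune jumps and snakes.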
Self-accelerated development of salt karst during flash floods along the Dead Sea Coast, Israel
NASA Astrophysics Data System (ADS)
Avni, Yoav; Lensky, Nadav; Dente, Elad; Shviro, Maayan; Arav, Reuma; Gavrieli, Ittai; Yechieli, Yoseph; Abelson, Meir; Lutzky, Hallel; Filin, Sagi; Haviv, Itai; Baer, Gidon
2016-01-01
We document and analyze the rapid, real-time development of a karst system within the subsurface salt layers of the Ze'elim Fan, Dead Sea, Israel, through a multidisciplinary study that combines interferometric synthetic aperture radar and light detection and ranging measurements, sinkhole mapping, time-lapse camera monitoring, groundwater level measurements, and chemical and isotopic analyses of surface runoff and groundwater. The >1 m/yr drop in Dead Sea water level and the subsequent change in the adjacent groundwater system since the 1960s have resulted in flushing of the coastal aquifer by fresh groundwater, subsurface salt dissolution, gradual land subsidence, and the formation of sinkholes. Since 2010 this process has accelerated dramatically as flash floods at the Ze'elim Fan were drained by newly formed sinkholes. During and immediately after these flood events the dissolution rate of the subsurface salt layer increased dramatically, the overlying ground surface subsided, a large number of sinkholes developed over short time periods (hours to days), and salt-saturated water resurged downstream. Groundwater flow velocities increased by more than two orders of magnitude compared to previously measured velocities along the Dead Sea. The process is self-accelerating, as salt dissolution enhances subsidence and sinkhole formation, which in turn increase the ponding areas of flood water and generate additional draining conduits to the subsurface. The rapid terrain response is predominantly due to the highly soluble salt; it is enhanced by the shallow depth of the salt layer, the low competence of the newly exposed unconsolidated overburden, and the moderate topographic gradients of the Ze'elim Fan.
To assess the value of satellite imagery in resource evaluation on a national scale
NASA Technical Reports Server (NTRS)
Malan, O. G. (Principal Investigator)
1973-01-01
The author has identified the following significant results. ERTS-1 imagery of South Africa, mainly in the form of 1:1,000,000 scale black-and-white prints of MSS bands, was evaluated for its information content with respect to: (1) soil and terrain mapping; (2) plant ecological mapping; (3) geological mapping; and (4) urban and regional land use mapping at scales below 1:250,000. It was concluded that ERTS-1 imagery can contribute significantly to accelerating such surveys and lowering their cost. Production of 1:500,000 color composites will remove some of the limitations encountered.
NASA Astrophysics Data System (ADS)
Sturner, A. P.; Eriksson, S.; Gershman, D. J.; Plaschke, F.; Burch, J.
2017-12-01
Magnetopause current sheets have been fertile ground for understanding kinetic-scale physics of magnetic reconnection, but can also be used to study more macroscopic scale phenomena statistically. Post-reconnection, magnetic flux and plasma are accelerated away from the x-line into exhaust regions. As the exhausting plasma exits the electron diffusion region, electrons become remagnetized and are accelerated by the magnetic field into an E x B jet while the ions remain unmagnetized. Further along the exhaust, at the edge of the ion diffusion region, the ions become frozen into the magnetic field, and are accelerated to join the electrons in the exhaust jet. By assuming a constant reconnection rate of 0.1, we can infer the distance to the x-line from the normal width of the exhaust. We present a statistical study using the Magnetospheric Multiscale Mission (MMS) to map out the electron and ion remagnetization distances that define the edge of the electron and ion diffusion regions for magnetopause reconnection, and explore the effects of a guide magnetic field.
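As a minimal sketch of the inference described above: assuming the exhaust widens linearly away from the x-line with a half-opening angle set by the normalized reconnection rate R ≈ 0.1 (whether R is taken as the half- or full-opening angle changes the answer by a factor of 2, so this is order-of-magnitude only; the numbers below are hypothetical, not from the study):

```python
# Rough geometric estimate of the distance to the reconnection x-line
# from the measured exhaust width, assuming a constant normalized
# reconnection rate R ~ 0.1 acting as the exhaust half-opening angle.
def distance_to_xline(exhaust_width_km, rate=0.1):
    return exhaust_width_km / (2.0 * rate)  # width spans both sides of the jet

print(distance_to_xline(500.0))  # a 500 km wide exhaust -> ~2500 km from the x-line
```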
The NIH Common Fund Human Biomolecular Atlas Program (HuBMAP) aims to develop a framework for functional mapping of the human body with cellular resolution, to enhance our understanding of the relationship between cellular organization and function. HuBMAP will accelerate the development of the next generation of tools and techniques to generate 3D tissue maps using validated high-content, high-throughput imaging and omics assays, and will establish an open data platform for integrating and visualizing data to build multi-dimensional maps.
Accelerated West Antarctic ice mass loss continues to outpace East Antarctic gains
NASA Astrophysics Data System (ADS)
Harig, Christopher; Simons, Frederik J.
2015-04-01
While multiple data sources have confirmed that Antarctica is losing ice at an accelerating rate, different measurement techniques estimate the details of its geographically highly variable mass balance with different levels of accuracy, spatio-temporal resolution, and coverage. Some scope remains for methodological improvements using a single data type. In this study we report our progress in increasing the accuracy and spatial resolution of time-variable gravimetry from the Gravity Recovery and Climate Experiment (GRACE). We determine the geographic pattern of ice mass change in Antarctica between January 2003 and June 2014, accounting for glacio-isostatic adjustment (GIA) using the IJ05_R2 model. Expressing the unknown signal in a sparse Slepian basis constructed by optimization to prevent leakage out of the regions of interest, we use robust signal processing and statistical estimation methods. Applying those to the latest time series of monthly GRACE solutions, we map Antarctica's mass loss in space and time as well as can be recovered from satellite gravity alone. Ignoring GIA model uncertainty, over the period 2003-2014, West Antarctica has been losing ice mass at a rate of −121 ± 8 Gt/yr and has experienced a large acceleration of ice mass losses along the Amundsen Sea coast of −18 ± 5 Gt/yr², doubling the mass loss rate in the past six years. The Antarctic Peninsula shows slightly accelerating ice mass loss, with larger accelerated losses in the southern half of the Peninsula. Ice mass gains due to snowfall in Dronning Maud Land have continued to add about half the amount of West Antarctica's loss back onto the continent over the last decade. We estimate the overall mass losses from Antarctica since January 2003 at −92 ± 10 Gt/yr.
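A back-of-envelope check of the quoted doubling, using only the numbers above (a consistency check, not a result from the paper):

\[
\Delta \dot{M} \;\approx\; \ddot{M}\,\Delta t \;=\; \left(-18\ \mathrm{Gt\,yr^{-2}}\right)\times 6\ \mathrm{yr} \;\approx\; -108\ \mathrm{Gt\,yr^{-1}},
\]

which is comparable in magnitude to the mean rate of −121 Gt/yr, consistent with a roughly twofold increase in the West Antarctic loss rate over six years.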
On the distortion of elevation dependent warming signals by quantile mapping
NASA Astrophysics Data System (ADS)
Jury, Martin W.; Mendlik, Thomas; Maraun, Douglas
2017-04-01
Elevation dependent warming (EDW), the amplification of warming with elevation under climate change, is likely to accelerate changes in, e.g., cryospheric and hydrological systems. EDW results from a mixture of processes including the snow-albedo feedback, cloud formation, and the location of aerosols, and the degree to which these processes are incorporated varies across state-of-the-art climate models. In a recent study we prepared bias-corrected model output of CMIP5 GCMs and CORDEX RCMs over the Himalayan region for the glacier modelling community. In a first attempt we used quantile mapping (QM) to generate these data. A prior model evaluation showed that more than two-thirds of the 49 included climate models were able to reproduce positive trend differences between areas of higher and lower elevation in winter, clearly visible in all five of the observational datasets used. However, we noticed that the height-dependent trend signals of the models were distorted by the bias correction, most often in the direction of less EDW, sometimes even reversing EDW signals present in the models beforehand. As a consequence, we refrained from using quantile mapping for this task, since EDW is an important factor influencing the climate of high altitudes in the nearer and more distant future, and used a climate-change-signal-preserving bias correction approach instead. Here we present our findings on the distortion of the EDW temperature change signal by QM and discuss the influence of QM on different statistical properties as well as their modifications.
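To make the distortion mechanism concrete, here is a minimal empirical quantile-mapping sketch on synthetic Gaussian data (not the CMIP5/CORDEX fields used in the study): when the model is more variable than the observations, QM rescales a uniform +3 K model trend down to roughly +1.5 K.

```python
import numpy as np

rng = np.random.default_rng(0)
obs      = rng.normal(0.0, 1.0, 100_000)  # observations, calibration period
mod_hist = rng.normal(1.0, 2.0, 100_000)  # model, calibration period (biased, too variable)
mod_fut  = mod_hist + 3.0                 # model future: uniform +3 K warming signal

def quantile_map(x, mod_ref, obs_ref):
    """Empirical QM: move x to the observed value at the model's CDF position."""
    cdf = np.searchsorted(np.sort(mod_ref), x) / len(mod_ref)
    return np.quantile(obs_ref, np.clip(cdf, 0.0, 1.0))

raw_signal = mod_fut.mean() - mod_hist.mean()                  # 3.0 K
qm_signal  = (quantile_map(mod_fut, mod_hist, obs).mean()
              - quantile_map(mod_hist, mod_hist, obs).mean())  # ~1.5 K
print(raw_signal, qm_signal)  # the trend is scaled by sigma_obs / sigma_mod
```

For Gaussian data, QM reduces to a linear rescaling by the ratio of observed to model standard deviation, which is exactly why a height-dependent trend can be damped or even reversed after correction.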
Wu, Tongbo; Yang, Yufei; Chen, Wei; Wang, Jiayu; Yang, Ziyu; Wang, Shenlin; Xiao, Xianjin; Li, Mengyuan; Zhao, Meiping
2018-04-06
Lambda exonuclease (λ exo) plays an important role in the resection of DNA ends for DNA repair. Currently, it is also a widely used enzymatic tool in genetic engineering, DNA-binding protein mapping, nanopore sequencing and biosensing. Herein, we disclose two noncanonical properties of this enzyme and suggest a previously undescribed hydrophobic interaction model between λ exo and DNA substrates. We demonstrate that the length of the free portion of the substrate strand in the dsDNA plays an essential role in the initiation of digestion reactions by λ exo. A dsDNA with a 5' non-phosphorylated, two-nucleotide-protruding end can be digested by λ exo with very high efficiency. Moreover, we show that when a conjugated structure is covalently attached to an internal base of the dsDNA, the presence of a single mismatched base pair at the 5' side of the modified base may significantly accelerate the process of digestion by λ exo. A detailed comparison study revealed additional π-π stacking interactions between the attached label and the amino acid residues of the enzyme. These new findings not only broaden our knowledge of the enzyme but will also be very useful for research on DNA repair and in vitro processing of nucleic acids.
Song, Mengfei; Wei, Qingzhen; Wang, Jing; Fu, Wenyuan; Qin, Xiaodong; Lu, Xiumei; Cheng, Feng; Yang, Kang; Zhang, Lu; Yu, Xiaqing; Li, Ji; Chen, Jinfeng; Lou, Qunfeng
2018-01-01
Leaf color mutants in higher plants are ideal materials for investigating the structure and function of the photosynthetic system. In this study, we identified a cucumber vyl (virescent-yellow leaf) mutant in the mutant library, which exhibited reduced pigment contents and a delayed chloroplast development process. F2 and BC1 populations were constructed from the cross between the vyl mutant and the cucumber inbred line ‘Hazerd’, showing that the vyl trait is controlled by a single recessive gene designated CsVYL. The CsVYL gene was mapped to a 3.8 cM interval on chromosome 4 using 80 F2 individuals and a BSA (bulked segregant analysis) approach. Fine genetic mapping with 1,542 F2 plants narrowed the vyl locus down to an 86.3 kb genomic region containing a total of 11 genes. Sequence alignment between the wild type (WT) and vyl identified only one single-nucleotide mutation (C→T), in the first exon of the gene Csa4G637110, which encodes a DnaJ-like zinc finger protein. Gene expression analysis confirmed the difference in transcription level of Csa4G637110 between wild-type and mutant plants. Map-based cloning of the CsVYL gene could accelerate the study of chloroplast development and chlorophyll synthesis in cucumber. PMID:29681911
Developmental Programming of Branching Morphogenesis in the Kidney.
Sampogna, Rosemary V; Schneider, Laura; Al-Awqati, Qais
2015-10-01
The kidney developmental program encodes the intricate branching and organization of approximately 1 million functional units (nephrons). Branching regulation is poorly understood, as is the source of a 10-fold variation in nephron number. Notably, low nephron count increases the risk for developing hypertension and renal failure. To better understand the source of this variation, we analyzed the complete gestational trajectory of mouse kidney development. We constructed a computerized architectural map of the branching process throughout fetal life and found that organogenesis is composed of two distinct developmental phases, each with stage-specific rate and morphologic parameters. The early phase is characterized by a rapid acceleration in branching rate and by branching divisions that repeat with relatively reproducible morphology. The latter phase, however, is notable for a significantly decreased yet constant branching rate and the presence of nonstereotyped branching events that generate progressive variability in tree morphology until birth. Our map identifies and quantitates the contribution of four developmental mechanisms that guide organogenesis: growth, patterning, branching rate, and nephron induction. When applied to organs that developed under conditions of malnutrition or in the setting of growth factor mutation, our normative map provided an essential link between kidney architecture and the fundamental morphogenetic mechanisms that guide development. This morphogenetic map is expected to find widespread applications and help identify modifiable targets to prevent developmental programming of common diseases. Copyright © 2015 by the American Society of Nephrology.
Monitoring Subsidence in California with InSAR
NASA Astrophysics Data System (ADS)
Farr, T. G.; Jones, C. E.; Liu, Z.; Neff, K. L.; Gurrola, E. M.; Manipon, G.
2016-12-01
Subsidence caused by groundwater pumping in the rich agricultural area of California's Central Valley has been a problem for decades. Over the last few years, interferometric synthetic aperture radar (InSAR) observations from satellite and aircraft platforms have been used to produce maps of subsidence with centimeter accuracy. We are continuing work reported previously, using ESA's Sentinel-1 to extend our maps of subsidence in time and space, in order to eventually cover all of California. The amount of data to be processed has expanded exponentially in the course of our work, and we are now transitioning to the ARIA project at JPL to produce the time series. ARIA processing employs large Amazon cloud instances to process single or multiple frames each, scaling from one to many (20+) instances working in parallel to meet the demand (700 GB of InSAR products within 3 hours). The data are stored in Amazon long-term storage, and an HTTP view of the products is available for users of the ARIA system to download them. Higher resolution InSAR data were also acquired along the California Aqueduct by the NASA UAVSAR from 2013 to 2016. Using multiple scenes acquired by these systems, we are able to produce time series of subsidence at selected locations, and transects showing how subsidence varies both spatially and temporally. The maps show that subsidence is continuing in areas with a history of subsidence and that the rates and areas affected have increased due to increased groundwater extraction during the extended western U.S. drought. Our maps also identify and quantify new, localized areas of accelerated subsidence. The California Department of Water Resources (DWR) funded this work to provide the background and an update on subsidence in the Central Valley to support future policy. Geographic Information System (GIS) files are being furnished to DWR for further analysis of the four-dimensional subsidence time-series maps. Part of this work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
Introduction to Particle Acceleration in the Cosmos
NASA Technical Reports Server (NTRS)
Gallagher, D. L.; Horwitz, J. L.; Perez, J.; Quenby, J.
2005-01-01
Accelerated charged particles have been used on Earth since 1930 to explore the very essence of matter, for industrial applications, and for medical treatments. Throughout the universe, nature employs a dizzying array of acceleration processes to produce particles spanning twenty orders of magnitude in energy while shaping our cosmic environment. Here, we introduce and review the basic physical processes causing particle acceleration in astrophysical plasmas, from geospace to the outer reaches of the cosmos. These processes are chiefly divided into four categories: adiabatic and other forms of non-stochastic acceleration; magnetic energy storage and stochastic acceleration; shock acceleration; and plasma wave and turbulent acceleration. The purpose of this introduction is to set the stage and context for the individual papers comprising this monograph.
Bridging data models and terminologies to support adverse drug event reporting using EHR data.
Declerck, G; Hussain, S; Daniel, C; Yuksel, M; Laleci, G B; Twagirumukiza, M; Jaulent, M-C
2015-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". The SALUS project aims at building an interoperability platform and a dedicated toolkit to enable secondary use of electronic health record (EHR) data for post-marketing drug surveillance. An important component of this toolkit is a drug-related adverse event (AE) reporting system designed to facilitate and accelerate the reporting process using automatic prepopulation mechanisms. The objective is to demonstrate the SALUS approach for establishing syntactic and semantic interoperability for AE reporting. Standard (e.g., HL7 CDA-CCD) and proprietary EHR data models are mapped to the E2B(R2) data model via the SALUS Common Information Model. Terminology mapping and terminology reasoning services are designed to ensure the automatic conversion of source EHR terminologies (e.g., ICD-9-CM, ICD-10, LOINC or SNOMED-CT) to the target terminology MedDRA, which is expected in AE reporting forms. A validated set of terminology mappings is used to ensure the reliability of the reasoning mechanisms. The percentage of data elements of a standard E2B report that can be completed automatically has been estimated for two pilot sites. In the best scenario (i.e., the available fields in the EHR have actually been filled), only 36% (pilot site 1) and 38% (pilot site 2) of E2B data elements remain to be filled manually, and most of these data elements need not be filled in every report. The SALUS platform's interoperability solutions enable partial automation of the AE reporting process, which could help improve current spontaneous reporting practices and reduce under-reporting, currently one of the major obstacles in the acquisition of pharmacovigilance data.
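A minimal sketch of the kind of terminology lookup such a service performs. The code pairs below are placeholders invented for illustration, not validated SALUS mappings, and the real services add reasoning over full ICD-9-CM/ICD-10/LOINC/SNOMED-CT value sets:

```python
# Hypothetical source-code -> MedDRA preferred-term table (placeholder pairs).
ICD10_TO_MEDDRA_PT = {
    "R51":   "Headache",          # illustrative mapping only
    "T78.4": "Hypersensitivity",  # illustrative mapping only
}

def to_meddra(icd10_code):
    """Return the MedDRA preferred term for an ICD-10 code, or None if unmapped.
    Unmapped fields correspond to the E2B elements left for manual completion."""
    return ICD10_TO_MEDDRA_PT.get(icd10_code)

print(to_meddra("R51"))  # -> "Headache"
```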
A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data
NASA Astrophysics Data System (ADS)
XU, R.; Jia, G.
2012-12-01
Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as chief drivers of, and potential solutions to, climate, biogeochemical, and hydrological processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential in East Asian cities, which are characterized by smaller patches, greater fragmentation, and a lower fraction of natural land cover within the urban landscape than in the West. Segmentation of urban land from other land-cover types using remote sensing imagery can be done by standard classification processes as well as by a logic-rule calculation based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold, enabling fully automatic mapping, are highly worthwhile. Existing automatic methods are reviewed, and a proposed approach is introduced, including the calculation of a new index and an improved logic rule. Following this, the existing automatic methods and the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately in large-, medium-, and small-scale cities in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the more complex East Asian cities. Key words: urban extraction; automatic method; logic rule; LANDSAT images; East Asia. [Figure: the proposed approach applied to extraction of urban built-up areas in Guangzhou, China]
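The abstract does not specify the new index, so as a generic illustration of a threshold-free, index-based logic rule, one can compare a built-up index against a vegetation index pixel by pixel (NDBI vs. NDVI here; this is a common textbook rule, not the paper's proposed one):

```python
import numpy as np

def urban_mask(red, nir, swir):
    """Flag a pixel as built-up when NDBI exceeds NDVI (no absolute threshold).
    Illustrative only; the proposed approach defines its own index and rule."""
    ndvi = (nir - red) / (nir + red + 1e-9)    # vegetation index
    ndbi = (swir - nir) / (swir + nir + 1e-9)  # built-up index
    return ndbi > ndvi

bands = np.random.rand(3, 64, 64)   # stand-in for LANDSAT red / NIR / SWIR bands
print(urban_mask(*bands).mean())    # fraction of pixels flagged as urban
```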
Ou-Yang, Si-sheng; Lu, Jun-yan; Kong, Xiang-qian; Liang, Zhong-jie; Luo, Cheng; Jiang, Hualiang
2012-01-01
Computational drug discovery is an effective strategy for accelerating and economizing the drug discovery and development process. Because of the dramatic increase in the availability of biological macromolecule and small-molecule information, the applicability of computational drug discovery has been extended and broadly applied to nearly every stage in the drug discovery and development workflow, including target identification and validation, lead discovery and optimization, and preclinical tests. Over the past decades, computational drug discovery methods such as molecular docking, pharmacophore modeling and mapping, de novo design, molecular similarity calculation, and sequence-based virtual screening have been greatly improved. In this review, we present an overview of these important computational methods, platforms, and successful applications in this field. PMID:22922346
Stratigraphic evolution of Chandeleur Islands, Louisiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, J.R.; Penland, S.; Williams, S.J.
1988-09-01
Analyses of over 3000 km of high-resolution seismic profiles, supplemented by vibracores and soil borings, illustrate the evolution of the Chandeleur Islands through transgressive processes associated with the abandonment of the St. Bernard complex of the Mississippi delta some 1500 years ago. Historical maps show that the system has been eroding, migrating landward, and losing area for the last 100 years. At current rates, the subaerial integrity of the islands will be terminated in about 200 years. Hurricane impacts accelerate erosion and segment the islands, followed by limited recovery during fair-weather periods. Relative sea level rise, from both subsidence and possible eustatic factors, contributes to the loss of island area.
Laser interferometric measurement of ion electrode shape and charge exchange erosion
NASA Technical Reports Server (NTRS)
Macrae, Gregory S.; Mercer, Carolyn R.
1991-01-01
A projected fringe profilometry system was applied to surface contour measurements of an accelerator electrode from an ion thruster. The system permitted noncontact, nondestructive evaluation of the fine and gross structure of the electrode. A 3-D surface map of a dished electrode was generated without altering the electrode surface. The same system was used to examine charge exchange erosion pits near the periphery of the electrode to determine the depth, location, and volume of material lost. This electro-optical measurement system allowed rapid, nondestructive, digital data acquisition coupled with automated computer data processing. In addition, variable sensitivity allowed both coarse and fine measurements of objects having various surface finishes.
Challenges in making a seismic hazard map for Alaska and the Aleutians
Wesson, R.L.; Boyd, O.S.; Mueller, C.S.; Frankel, A.D.; Freymueller, J.T.
2008-01-01
We present a summary of the data and analyses leading to the revision of the time-independent probabilistic seismic hazard maps of Alaska and the Aleutians. These maps represent a revision of existing maps based on newly obtained data, and reflect best current judgments about methodology and approach. They have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States, and will be proposed for adoption in future revisions to the International Building Code. We present example maps for peak ground acceleration, 0.2 s spectral amplitude (SA), and 1.0 s SA at a probability level of 2% in 50 years (annual probability of 0.000404). In this summary, we emphasize issues encountered in preparation of the maps that motivate or require future investigation and research.
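The parenthetical annual probability follows from the standard Poisson-rate conversion (shown here for the reader's convenience):

\[
\lambda \;=\; -\frac{\ln(1-P)}{T} \;=\; -\frac{\ln(1-0.02)}{50\ \mathrm{yr}} \;\approx\; 0.000404\ \mathrm{yr^{-1}}.
\]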
NASA Astrophysics Data System (ADS)
Chang, Tsui-Yu; Cotton, Fabrice; Angelier, Jacques; Shin, Tzay-Chyn
2001-07-01
Attenuation laws are widely used to estimate the peak ground acceleration that may occur at a given locality during an earthquake, for hazard evaluation purposes. However, these simplified laws should be regarded as acceptable only to a first approximation, because numerous significant parameters at the local and regional scales are often ignored. We examined the relationship between distance and peak acceleration based on examples from the dense accelerometric network of Taiwan, specifically for the destructive Chi-Chi earthquake. We observed significant discrepancies between the predicted and observed accelerations, resulting from (1) near-field saturation, (2) amplification in sedimentary basins, and (3) the hanging wall effect. We mapped the residual accelerations (the difference between observed and predicted peak ground accelerations). This highlights the role of the regional structure, independently revealed by the geological analysis, as a significant factor controlling the transmission of seismic accelerations.
Self-organizing map (SOM) of space acceleration measurement system (SAMS) data.
Sinha, A; Smith, A D
1999-01-01
In this paper, space acceleration measurement system (SAMS) data have been classified using self-organizing map (SOM) networks without any supervision; i.e., no a priori knowledge is assumed regarding input patterns belonging to a certain class. Input patterns are created on the basis of power spectral densities of SAMS data. Results for SAMS data from the STS-50 and STS-57 missions are presented. The following issues are discussed in detail: the impact of the number of neurons, global ordering of SOM weight vectors, the effectiveness of a SOM in data classification, and the effects of shifting time windows in the generation of input patterns. The concept of a 'cascade of SOM networks' is also developed and tested. It has been found that a SOM network can successfully classify SAMS data obtained during the STS-50 and STS-57 missions.
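A minimal sketch of this style of unsupervised SOM classification, using the third-party MiniSom package and synthetic stand-ins for the PSD input patterns (the actual study used STS-50/STS-57 SAMS data and its own network implementation):

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(1)
psd_patterns = np.abs(rng.normal(size=(200, 64)))  # 200 patterns, 64 PSD bins each

som = MiniSom(8, 8, input_len=64, sigma=1.5, learning_rate=0.5)  # 8x8 neuron grid
som.train_random(psd_patterns, num_iteration=5000)               # unsupervised training

# Each pattern maps to its best-matching unit; after global ordering, nearby
# units on the grid represent similar acceleration environments.
clusters = [som.winner(p) for p in psd_patterns]
print(clusters[:5])
```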
Source of the dayside cusp aurora.
Mende, S B; Frey, H U; Angelopoulos, V
2016-08-01
Monochromatic all-sky imagers at South Pole and other Antarctic stations of the Automatic Geophysical Observatory chain recorded the aurora in the region where the Time History of Events and Macroscale Interactions during Substorms (THEMIS) satellites crossed the dayside magnetopause. In several cases the magnetic field lines threading the satellites, when mapped to the atmosphere, were inside the imagers' field of view. From the THEMIS magnetic field and plasma density measurements, we were able to locate the position of the magnetopause crossings and map it to the ionosphere using the Tsyganenko-96 field model. Field line mapping is reasonably accurate in the dayside subsolar region, where the field is strong and almost dipolar even though compressed. From these coordinated observations, we were able to prove that the dayside cusp aurora of high 630-nm brightness is on open field lines, and it is therefore direct precipitation from the magnetosheath. The cusp aurora contained significant, highly structured N2+ 427.8-nm emission. THEMIS measurements of the magnetosheath particle energy and density taken just outside the magnetopause, compared with the intensity of the structured N2+ 427.8-nm emission, showed that the precipitating magnetosheath particles had to be accelerated. The most likely electron acceleration mechanism is by dispersive Alfvén waves propagating along the field line. Wave-accelerated suprathermal electrons were seen by FAST and DMSP. The 427.8-nm wavelength channel also shows the presence of a lower-latitude hard-electron precipitation zone originating inside the magnetosphere.
Fast 3D magnetic resonance fingerprinting for a whole-brain coverage.
Ma, Dan; Jiang, Yun; Chen, Yong; McGivney, Debra; Mehta, Bhairav; Gulani, Vikas; Griswold, Mark
2018-04-01
The purpose of this study was to accelerate the acquisition and reconstruction time of 3D magnetic resonance fingerprinting scans. A 3D magnetic resonance fingerprinting scan was accelerated by using a single-shot spiral trajectory with an undersampling factor of 48 in the x-y plane, and an interleaved sampling pattern with an undersampling factor of 3 through plane. Further acceleration came from reducing the waiting time between neighboring partitions. The reconstruction time was accelerated by applying singular value decomposition compression in k-space. Finally, a 3D premeasured B1 map was used to correct for the B1 inhomogeneity. The T1 and T2 values of the International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology MRI phantom showed good agreement with the standard values, with an average concordance correlation coefficient of 0.99 and a coefficient of variation of 7% in the repeatability scans. The results from in vivo scans also showed high image quality in both transverse and coronal views. This study applied a fast acquisition scheme for a fully quantitative 3D magnetic resonance fingerprinting scan with a total acceleration factor of 144 compared with the Nyquist rate (48 in-plane × 3 through-plane), such that 3D T1, T2, and proton density maps can be acquired with whole-brain coverage at clinical resolution in less than 5 min. Magn Reson Med 79:2190-2197, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
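A minimal numpy sketch of the SVD compression idea used to speed up reconstruction (dictionary size and retained rank are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((1000, 20000))  # dictionary: 1000 time points x 20000 fingerprints

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 25                                  # retained rank (illustrative choice)
Uk = U[:, :k]                           # compression basis for the time dimension

# Projecting the time dimension onto the rank-k subspace means gridding,
# reconstruction, and template matching operate on k coefficient images
# instead of 1000 time frames.
D_compressed = Uk.T @ D                 # shape: k x 20000
print(D_compressed.shape)
```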
Laser-powered dielectric-structures for the production of high-brightness electron and x-ray beams
NASA Astrophysics Data System (ADS)
Travish, Gil; Yoder, Rodney B.
2011-05-01
Laser-powered accelerators have been under intensive study for the past decade due to their promise of high gradients and their leveraging of rapid technological progress in photonics. Of the various acceleration schemes under examination, those based on dielectric structures may enable the production of relativistic electron beams in breadbox-sized systems. When combined with undulators having optical-wavelength periods, these systems could produce high-brilliance x-rays, which find application in, for instance, medical and industrial imaging. These beams may also open the way for table-top attosecond science. Development and testing of these dielectric structures face a number of challenges, including complex beam dynamics, new demands on lasers and optical coupling, beam injection schemes, and fabrication. We describe one approach being pursued at UCLA: the Micro Accelerator Platform (MAP). A structure similar to the MAP has also been designed that produces periodic deflections and acts as an undulator for radiation production, and the prospects for this device will be considered. The lessons learned from the multi-year effort to realize these devices will be presented. Challenges remain with acceleration of sub-relativistic beams, focusing, beam phase stability, and extension of these devices to higher beam energies. Our progress in addressing these hurdles will be summarized. Finally, the demands on laser technology and optical coupling will be detailed.
Report of the Fourth International Workshop on human X chromosome mapping 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlessinger, D.; Mandel, J.L.; Monaco, A.P.
1993-12-31
Vigorous interactive efforts by the X chromosome community have led to accelerated mapping in the last six months. Seventy-five participants from 12 countries around the globe contributed progress reports to the Fourth International X Chromosome Workshop, at St. Louis, MO, May 9-12, 1993. It became clear that well over half the chromosome is now covered by YAC contigs that are being extended, verified, and aligned by their content of STSs and other markers placed by cytogenetic or linkage mapping techniques. The major aim of the workshop was to assemble the consensus map that appears in this report, summarizing both consensus order and YAC contig information.
To assess the value of satellite imagery in resource evaluation on a national scale. [South Africa
NASA Technical Reports Server (NTRS)
Malan, O. G. (Principal Investigator)
1973-01-01
The author has identified the following significant results. It has been shown that ERTS imagery, particularly in the form of 1:500,000 scale false-color photolithographic prints, can contribute very significantly towards facilitating and accelerating (dramatically, in the case of vegetation) resource surveys and geologic mapping. Fire mapping on a national scale becomes feasible; numerous new geologic features, particularly lineaments, have been discovered; land use can be mapped efficiently on a regional scale; and degraded areas can be identified. The first detailed tectonic and geomorphological maps of the Republic of South Africa will be published in the near future, mainly owing to the availability of ERTS-1 imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Englbrecht, F; Lindner, F; Bin, J
2016-06-15
Purpose: To measure and simulate well-defined electron spectra using a linear accelerator and a permanent-magnet wide-angle spectrometer, to test the performance of a novel reconstruction algorithm for retrieval of unknown electron sources, in view of application to diagnostics of laser-driven particle acceleration. Methods: Six electron energies (6, 9, 12, 15, 18 and 21 MeV; 40 cm × 40 cm field size) delivered by a Siemens Oncor linear accelerator were recorded using a permanent-magnet wide-angle electron spectrometer (150 mT) with a one-dimensional slit (0.2 mm × 5 cm). Two-dimensional maps representing beam energy and entrance position along the slit were measured using different scintillating screens, read by an online CMOS detector of high resolution (0.048 mm × 0.048 mm pixels) and large field of view (5 cm × 10 cm). Measured energy-slit position maps were compared to forward FLUKA simulations of electron transport through the spectrometer, starting from IAEA phase spaces of the accelerator. The latter were validated against measured depth-dose and lateral profiles in water. Agreement of forward simulation and measurement was quantified in terms of position and shape of the signal distribution on the detector. Results: Measured depth-dose distributions and lateral profiles in the water phantom showed good agreement with forward simulations of the IAEA phase spaces, thus supporting the use of this simulation source in the study. Measured energy-slit position maps and those obtained by forward Monte Carlo simulations showed satisfactory agreement in shape and position. Conclusion: Well-defined electron beams of known energy and shape will provide an ideal scenario for studying the performance of a novel reconstruction algorithm using measured and simulated signals. Future work will increase the stability and convergence of the reconstruction algorithm for unknown electron sources, towards the final application to the electrons that drive the interaction of TW-class laser pulses with nanometer-thin target foils to accelerate protons and ions to multi-MeV kinetic energies. Funded by the Cluster of Excellence of the German Research Foundation (DFG) "Munich-Centre for Advanced Photonics".
Agent-Based Modeling of China's Rural-Urban Migration and Social Network Structure.
Fu, Zhaohao; Hao, Lingxin
2018-01-15
We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents, drawn from census microdata, are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of origin with a parameter space calibrated from census, survey, and aggregate data and sampled using a stepwise Latin hypercube sampling method. At monthly intervals, some agents migrate, and these migratory acts change the social network by turning within-nonmigrant connections into between-migrant-nonmigrant connections, turning local connections into nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates the migratory propensities of well-connected nonmigrants, who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map them onto the migration acceleration patterns. We conclude that network structural changes are essential for explaining the migration acceleration observed in China during the 1995-2000 period.
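A minimal sketch of a k-core-based core-periphery split on a toy graph (the study develops its own method on top of the k-core decomposition; the graph and split rule here are illustrative):

```python
import networkx as nx

G = nx.karate_club_graph()            # toy stand-in for the migrant social network

core_number = nx.core_number(G)       # k-core index of every node
k_max = max(core_number.values())
core = {n for n, k in core_number.items() if k == k_max}  # densest subgraph
periphery = set(G) - core

# Re-running this split at each simulated month and tracking nodes that move
# between core and periphery quantifies structural change in the network.
print(len(core), len(periphery))
```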
Heavy ion composition in the inner heliosphere: Predictions for Solar Orbiter
NASA Astrophysics Data System (ADS)
Lepri, S. T.; Livi, S. A.; Galvin, A. B.; Kistler, L. M.; Raines, J. M.; Allegrini, F.; Collier, M. R.; Zurbuchen, T.
2014-12-01
The Heavy Ion Sensor (HIS) on Solar Orbiter (SO), with its high time resolution, will provide the first-ever solar wind and suprathermal heavy ion composition and 3D velocity distribution function measurements inside the orbit of Mercury. These measurements will give us the most in-depth examination yet of the origin, structure, and evolution of the solar wind. The near-corotation phases of the orbit will enable the most accurate mapping of in-situ structures back to their solar sources. Measurements of solar wind composition and heavy ion kinetic properties enable characterization of the sources, transport mechanisms, and acceleration processes of the solar wind. This presentation will focus on the current state of in-situ studies of heavy ions in the solar wind and their implications for the sources of the solar wind, the nature of structure and variability in the solar wind, and the acceleration of particles. Additionally, we will discuss opportunities for coordinated measurements across the payloads of Solar Orbiter and Solar Probe in order to answer key outstanding science questions of central interest to the solar and heliospheric physics communities.
NASA Astrophysics Data System (ADS)
Kleinwaechter, Tobias; Goldberg, Lars; Palmer, Charlotte; Schaper, Lucas; Schwinkendorf, Jan-Patrick; Osterhoff, Jens
2012-10-01
Laser-driven wakefield acceleration within capillary discharge waveguides has been used to generate high-quality electron bunches with GeV-scale energies. However, owing to fluctuations in laser and plasma conditions, in combination with a difficult-to-control self-injection mechanism in the nonlinear wakefield regime, these bunches are often not reproducible and can feature large energy spreads. Specialized plasma targets with tailored density profiles offer the possibility of overcoming these issues by controlling the injection and acceleration processes. This requires precise manipulation of the longitudinal density profile. Our target concept is therefore based on a capillary structure with multiple gas inlets and outlets. Potential target designs are simulated using the fluid code OpenFOAM, and those meeting the specified criteria are fabricated by femtosecond-laser machining of structures into sapphire plates. Density profiles are measured over a range of inlet pressures utilizing gas-density profilometry via Raman scattering and pressure calibration with longitudinal interferometry; in combination, these allow absolute density mapping. Here we report preliminary results.
REVIEWS OF TOPICAL PROBLEMS: Acceleration of cosmic rays by shock waves
NASA Astrophysics Data System (ADS)
Berezhko, E. G.; Krymskiĭ, G. F.
1988-01-01
Theoretical work on various processes by which shock waves accelerate cosmic rays is reviewed. The most efficient of these processes, Fermi acceleration, is singled out for special attention, and a linear theory for it is presented. Results found on the basis of nonlinear models of Fermi acceleration, which incorporate the modification of the shock structure caused by the accelerated particles, are reported. There is a discussion of various possibilities for explaining the generation of high-energy particles observed in interplanetary and interstellar space on the basis of a Fermi acceleration mechanism. Acceleration by shock waves from supernova explosions is discussed as a possible source of galactic cosmic rays. The most important unresolved questions in the theory of acceleration of charged particles by shock waves are pointed out.
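For reference, the standard test-particle results behind the linear theory of first-order Fermi (diffusive shock) acceleration, with upstream and downstream flow speeds \(u_1\) and \(u_2\) and particle speed \(v\):

\[
\left\langle \frac{\Delta E}{E} \right\rangle \approx \frac{4}{3}\,\frac{u_1-u_2}{v}\ \ \text{per crossing cycle},
\qquad
f(p) \propto p^{-q},\quad q=\frac{3r}{r-1},\quad r=\frac{u_1}{u_2},
\]

so a strong shock with compression ratio \(r=4\) yields \(q=4\), i.e. \(N(E)\propto E^{-2}\). The nonlinear models discussed in the review modify these results once the accelerated particles react back on the shock structure.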
NASA Astrophysics Data System (ADS)
Varisco, M. M.
2017-12-01
How do we live with nature? This simple question began a 10-year art-science journey into the dynamic and endangered wetlands of southeast Louisiana and its accelerating coastal decline. Since the 1930s, nearly 1,900 square miles of Louisiana's coast have been lost. How might artworks, informed by science, convey the seriousness and urgency of this loss to a wider public? Artist Michel Varisco engaged in dialogue with environmental scientist Doug Meffert and Dan Etheridge (of Meffert + Etheridge and The Center for Bioenvironmental Research at Tulane and Xavier) about the hydrological changes that have accelerated or mitigated Louisiana's land losses. She was also inspired by the unique underwater studies of biologist Suzanne Fredericq on pollutants in the Gulf from the BP oil spill, and of marine ecologist Nancy Rabalais, who assesses hypoxia dynamics and their impact on "dead zones." The artwork that emerged includes Shifting and Fluid States, as well as the current projects Below Sea Level and Turning: prayer wheels for the Mississippi River, an art commission awarded by the City of New Orleans, on view during the Prospect.4 art biennial and AGU. Shifting is a series of large-scale photographs shot from the air and water that observe the dynamic movement of the Louisiana coastline over the course of a short but powerful geologic timeline and explore the consequences of human alterations to those lands and waters through land loss and sea level rise. Turning is based on Kate Orff's maps from Petrochemical America and the 1944 maps of Harold Fisk. Fisk pioneered an understanding of the alluvial and sedimentological processes of the Mississippi Valley, while Orff's maps describe the Mississippi River from Baton Rouge to New Orleans during three different eras: the wild, un-leveed, land-building era; the plantation and slavery era; and the petrochemical era of present-day land loss. Shifting has been exhibited around the world, and Turning has already been seen by 50,000 people.
Use of incentives to encourage ITS deployment.
DOT National Transportation Integrated Search
2014-07-01
Moving Ahead for Progress in the 21st Century Act (MAP-21) identifies Intelligent Transportation Systems (ITS) as part of the solution to the Nation's transportation needs and provides mechanisms for accelerating deployment of innovative technology...
Han, Yucui; Lv, Peng; Hou, Shenglin; Li, Suying; Ji, Guisu; Ma, Xue; Du, Ruiheng; Liu, Guoqing
2015-01-01
Sorghum is one of the most promising bioenergy crops. Stem juice yield, together with stem sugar concentration, determines sugar yield in sweet sorghum. Bulked segregant analysis (BSA) is a gene mapping technique for identifying genomic regions containing genetic loci affecting a trait of interest; combined with deep sequencing, it can effectively accelerate the gene mapping process. In this study, a dry-stem sorghum landrace was characterized, and the stem water controlling locus qSW6 was fine-mapped using QTL analysis and combined BSA and deep sequencing. Results showed that: (i) in the sorghum variety Jiliang 2, stem water content was around 80% before the flowering stage and dropped to 75% during grain filling, with little difference between internodes; in the landrace G21, stem water content kept dropping after the flag leaf stage, from 71% at flowering to 60% at grain filling, with large differences between internodes and a minimum (51%) at the 7th and 8th internodes at the dough stage. (ii) A quantitative trait locus (QTL) controlling stem water content, mapped on chromosome 6 between SSR markers Ch6-2 and gpsb069, explained about 34.7-56.9% of the phenotypic variation for the 5th to 10th internodes. (iii) BSA and deep sequencing analysis narrowed the associated region to 339 kb containing 38 putative genes. These results could help reveal the molecular mechanisms underlying juice yield in sorghum and thus improve total sugar yield.
Mapping topographic plant location properties using a dense matching approach
NASA Astrophysics Data System (ADS)
Niederheiser, Robert; Rutzinger, Martin; Lamprecht, Andrea; Bardy-Durchhalter, Manfred; Pauli, Harald; Winkler, Manuela
2017-04-01
Within the project MEDIALPS (Disentangling anthropogenic drivers of climate change impacts on alpine plant species: Alps vs. Mediterranean mountains), six regions in Alpine and Mediterranean mountain ranges are investigated to assess how plant species respond to climate change. The project is embedded in the Global Observation Research Initiative in Alpine Environments (GLORIA), a well-established global monitoring initiative for systematic observation of changes in plant species composition and soil temperature on mountain summits worldwide, aimed at discerning accelerating climate change pressures on these fragile alpine ecosystems. Close-range sensing techniques such as terrestrial photogrammetry are well suited for mapping the terrain topography of small areas at high resolution. Lightweight equipment, flexible positioning for image acquisition in the field, and independence from weather conditions (i.e., wind) make this a feasible method for in-situ data collection. New developments in dense matching approaches allow high-quality 3D terrain mapping with fewer requirements for field set-up. However, challenges arise in post-processing and in the required data storage when many sites must be mapped. Within MEDIALPS, dense matching is used to map high-resolution topography for 284 3 m × 3 m plots, deriving information on vegetation coverage, roughness, slope, aspect, and modelled solar radiation. This information helps identify types of topography-dependent ecological growing conditions and evaluate the potential for refugial locations for specific plant species under climate change. This research is conducted within the MEDIALPS project, funded by the Earth System Sciences Programme of the Austrian Academy of Sciences.
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next-generation semiconductor technologies like high-density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiments (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high-density O2 plasma system in three case studies. In the first case study, we demonstrate that RODEo predicts etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study, we show experimentally how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
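RODEo itself combines physics models with Bayesian statistics and its internals are not given in the abstract; as a generic sketch of the underlying idea of replacing a factorial DOE with sequential, model-guided experiments, here is a Bayesian optimization loop using scikit-optimize (the response surface and parameter ranges are made up):

```python
from skopt import gp_minimize
from skopt.space import Real

def run_etch_experiment(params):
    """Stand-in for one plasma etch run; returns negative etch rate so the
    optimizer's minimum corresponds to the maximum etch rate."""
    pressure, power, flow = params
    return -(power * flow / (1.0 + pressure))  # placeholder response surface

space = [Real(5.0, 50.0, name="pressure_mTorr"),
         Real(100.0, 800.0, name="power_W"),
         Real(10.0, 100.0, name="o2_flow_sccm")]

# 20 sequential experiments guided by a Gaussian-process surrogate, versus
# the 27 runs a 3-level full factorial over three factors would require.
result = gp_minimize(run_etch_experiment, space, n_calls=20, random_state=0)
print(result.x, -result.fun)  # best settings and the corresponding etch rate
```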
Evansville Area Earthquake Hazards Mapping Project (EAEHMP) - Progress Report, 2008
Boyd, Oliver S.; Haase, Jennifer L.; Moore, David W.
2009-01-01
Maps of surficial geology, deterministic and probabilistic seismic hazard, and liquefaction potential index have been prepared by members of the Evansville Area Earthquake Hazards Mapping Project for seven quadrangles in the Evansville, Indiana, and Henderson, Kentucky, metropolitan areas. The surficial geologic maps feature 23 types of surficial geologic deposits, artificial fill, and undifferentiated bedrock outcrop, and include alluvial and lake deposits of the Ohio River valley. Probabilistic and deterministic seismic hazard and liquefaction hazard mapping is made possible by drawing on a wealth of information, including surficial geologic maps, water well logs, and in-situ testing profiles from the cone penetration test, standard penetration test, down-hole shear wave velocity tests, and seismic refraction tests. These data were compiled and collected with contributions from the Indiana Geological Survey, Kentucky Geological Survey, Illinois State Geological Survey, United States Geological Survey, and Purdue University. Hazard map products are in progress and are expected to be completed by the end of 2009, with a public rollout in early 2010. Preliminary results suggest that there is a 2 percent probability that peak ground accelerations of about 0.3 g will be exceeded in much of the study area within 50 years, similar to the 2002 USGS National Seismic Hazard Maps value for a firm-rock site. Accelerations as high as 0.4-0.5 g may be exceeded along the edge of the Ohio River basin. Most of the region outside the river basin has a low liquefaction potential index (LPI): the probability that LPI is greater than 5 (that is, a high potential for liquefaction) for a M7.7 New Madrid-type event is only 20-30 percent. Within the river basin, most of the region has a high LPI, with the probability that LPI is greater than 5 for a New Madrid-type event being 80-100 percent.
Deuterated methanol map towards L1544
NASA Astrophysics Data System (ADS)
Chacón-Tanarro, A.; Caselli, P.; Bizzocchi, L.; Pineda, J. E.; Spezzano, S.; Giuliano, B. M.; Lattanzi, V.; Punanova, A.
Pre-stellar cores are self-gravitating starless dense cores with clear signs of contraction and chemical evolution (Crapsi et al. 2005), considered to represent the initial conditions in the process of star formation (Caselli & Ceccarelli 2012). Theoretical studies predict that CO is one of the precursors of complex organic molecules (COMs) during this cold and dense phase (Tielens et al. 1982; Watanabe et al. 2002). Moreover, when CO starts to deplete onto dust grains (at densities of a few 10^4 cm^-3), the formation of deuterated species is enhanced, as gas-phase CO accelerates the destruction of important precursors of deuterated molecules (Dalgarno & Lepp 1984). Here, we present the CH2DOH/CH3OH column density map toward the pre-stellar core L1544 (Chacón-Tanarro et al., in prep.), taken with the IRAM 30 m antenna. The results are compared with the C17O (1-0) distribution across L1544. As methanol is formed on dust grains via hydrogenation of frozen-out CO, this work allows us to measure deuteration on grain surfaces and compare it with gas-phase deuteration, as well as with CO freeze-out and dust properties. This is important for shedding light on the basic chemical processes just before the formation of a stellar system.
Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long
2018-01-01
With the development of satellite payload technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to their computing features. Linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and a two-part bandwidth balance method is used to realize the linear part. Thus, floating-point SAR imaging processing can be integrated into a single field-programmable gate array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using the Xilinx xc6vlx315t FPGA. The weight and volume of a single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637
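The decomposition described above separates the CS algorithm's nonlinear part (generating phase functions) from its linear part (FFTs and element-wise phase multiplies). The skeleton below mirrors that split; the phase functions are placeholders rather than a full CS derivation, and the array size is scaled down from the paper's 16,384 × 16,384 granularity:

```python
# Schematic of the chirp scaling (CS) stages, mirroring the paper's split into
# a nonlinear part (phase-function generation) and a linear part (FFT pipeline).
import numpy as np

def phase_functions(shape):
    """Nonlinear part: generate the three CS phase functions.
    Placeholders with the correct shapes; real CS phases depend on slant
    range, Doppler frequency, platform velocity, and chirp rate."""
    ph1 = np.exp(1j * np.zeros(shape))  # chirp scaling (differential RCMC)
    ph2 = np.exp(1j * np.zeros(shape))  # range compression + bulk RCMC
    ph3 = np.exp(1j * np.zeros(shape))  # azimuth compression + phase correction
    return ph1, ph2, ph3

def chirp_scaling_focus(raw):
    """Linear part: the FFT/multiply pipeline of the CS algorithm."""
    ph1, ph2, ph3 = phase_functions(raw.shape)
    s = np.fft.fft(raw, axis=0) * ph1   # azimuth FFT -> range-Doppler domain
    s = np.fft.fft(s, axis=1) * ph2     # range FFT -> 2-D frequency domain
    s = np.fft.ifft(s, axis=1) * ph3    # range IFFT -> range-Doppler domain
    return np.fft.ifft(s, axis=0)       # azimuth IFFT -> focused image

image = chirp_scaling_focus(np.zeros((256, 256), dtype=complex))
```

The linear part is dominated by predictable, streaming FFT work (well suited to bandwidth balancing across FPGA memory), while the nonlinear part is the trigonometric phase generation that the paper maps onto a reconfigurable structure.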
A Thematic Analysis of Theoretical Models for Translational Science in Nursing: Mapping the Field
Mitchell, Sandra A.; Fisher, Cheryl A.; Hastings, Clare E.; Silverman, Leanne B.; Wallen, Gwenyth R.
2010-01-01
Background The quantity and diversity of conceptual models in translational science may complicate rather than advance the use of theory. Purpose This paper offers a comparative thematic analysis of the models available to inform knowledge development, transfer, and utilization. Method Literature searches identified 47 models for knowledge translation. Four thematic areas emerged: (1) evidence-based practice and knowledge transformation processes; (2) strategic change to promote adoption of new knowledge; (3) knowledge exchange and synthesis for application and inquiry; (4) designing and interpreting dissemination research. Discussion This analysis distinguishes the contributions made by leaders and researchers at each phase in the process of discovery, development, and service delivery. It also informs the selection of models to guide activities in knowledge translation. Conclusions A flexible theoretical stance is essential to simultaneously develop new knowledge and accelerate the translation of that knowledge into practice behaviors and programs of care that support optimal patient outcomes. PMID:21074646
Beach, Myra Jo; Sions, Jacqueline A
2011-02-01
In 2007, a steering committee at West Virginia University Hospitals, Morgantown, began a three-year, accelerated computer implementation project to design and institute an automated perioperative record. The process included budgeting, selecting a vendor, designing and building the system, educating perioperative staff members, implementing the system, and re-evaluating the system for upgrades. Important steps in designing and building the system included mapping patient care and documentation processes, assessing software and hardware needs, and creating a new preference card system and surgical scheduling system. Staff members were educated to use the new computer applications via contests, inservice programs, hands-on learning modules, and a preimplementation rehearsal. Role-based security ensures that staff members are granted access to the computer applications they need to perform the work defined by their scope of practice. Planning ensures that the computer system will be maintained and enhanced over time. Copyright © 2011 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Research on stratified evolution of composite materials under four-point bending loading
NASA Astrophysics Data System (ADS)
Hao, M. J.; You, Q. J.; Zheng, J. C.; Yue, Z.; Xie, Z. P.
2017-12-01
To explore the effect of delamination evolution on the load capacity and service life of composite materials under four-point bending, artificial delamination defects were introduced at different positions in the laminate. Four-point bending tests were carried out with the whole process recorded by acoustic emission, and the damage degree of the composite layers was assessed from the cumulative impact count-time-amplitude history, the load-time-relative energy history, and the acoustic emission source location map. The results show that delamination defects near the surface of the specimen accelerate failure initiation and damage growth, and that the position of the delamination largely determines the bending performance of the composite: the closer the defect is to the surface, the greater the damage and the poorer the load-bearing capacity of the specimen.
Tempest: Accelerated MS/MS Database Search Software for Heterogeneous Computing Platforms.
Adamo, Mark E; Gerber, Scott A
2016-09-07
MS/MS database search algorithms derive a set of candidate peptide sequences from in silico digest of a protein sequence database, and compute theoretical fragmentation patterns to match these candidates against observed MS/MS spectra. The original Tempest publication described these operations mapped to a CPU-GPU model, in which the CPU (central processing unit) generates peptide candidates that are asynchronously sent to a discrete GPU (graphics processing unit) to be scored against experimental spectra in parallel. The current version of Tempest expands this model, incorporating OpenCL to offer seamless parallelization across multicore CPUs, GPUs, integrated graphics chips, and general-purpose coprocessors. Three protocols describe how to configure and run a Tempest search, including discussion of how to leverage Tempest's unique feature set to produce optimal results. © 2016 by John Wiley & Sons, Inc. Copyright © 2016 John Wiley & Sons, Inc.
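A minimal sketch of the producer/consumer pattern described above, with a toy digest and scoring function. Tempest itself dispatches OpenCL kernels to the selected device; here a Python thread pool merely stands in to show the data flow, and the digest rule, scoring function, and sequences are illustrative:

```python
# Toy CPU-produces / device-scores pipeline in the spirit of the model above.
from concurrent.futures import ThreadPoolExecutor

def generate_candidates(protein_db):
    """In silico digest (toy tryptic rule: cut after K or R)."""
    for protein in protein_db:
        peptide = ""
        for aa in protein:
            peptide += aa
            if aa in "KR":
                yield peptide
                peptide = ""
        if peptide:
            yield peptide

def score(peptide, spectrum):
    """Placeholder for matching a theoretical fragmentation pattern to a spectrum."""
    return sum(spectrum.get(aa, 0.0) for aa in peptide)

spectrum = {"A": 1.0, "K": 0.5, "R": 0.25}          # toy observed-intensity lookup
db = ["MAKWVTFISLLFLFSSAYSR", "GVFRRDAHK"]           # toy protein database

with ThreadPoolExecutor(max_workers=4) as pool:      # stands in for the GPU/coprocessor
    futures = {pool.submit(score, p, spectrum): p for p in generate_candidates(db)}
    best = max(futures, key=lambda f: f.result())
    print("best candidate:", futures[best], "score:", best.result())
```

The point of the architecture is that candidate generation (sequential, branchy, CPU-friendly) and scoring (data-parallel, device-friendly) overlap in time, so neither side idles while the other works.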
Discovery of novel drugs for promising targets.
Martell, Robert E; Brooks, David G; Wang, Yan; Wilcoxen, Keith
2013-09-01
Once a promising drug target is identified, the steps to actually discover and optimize a drug are diverse and challenging. The goal of this study was to provide a road map for navigating drug discovery. We review the general steps of drug discovery and provide illustrative references. A number of approaches are available to enhance and accelerate target identification and validation. Consideration of a variety of potential mechanisms of action of candidate drugs can guide discovery efforts. The hit-to-lead stage may involve techniques such as high-throughput screening, fragment-based screening, and structure-based design, with informatics playing an ever-increasing role. Biologically relevant screening models are discussed, including cell lines, 3-dimensional culture, and in vivo screening. The process of enabling human studies for an investigational drug is also discussed. Drug discovery is a complex process that has significantly evolved in recent years. © 2013 Elsevier HS Journals, Inc. All rights reserved.
Arcidiacono, Judith A; Bauer, Steven R; Kaplan, David S; Allocca, Clare M; Sarkar, Sumona; Lin-Gibson, Sheng
2018-06-01
The development of standards for the field of regenerative medicine has been noted as a high priority by several road-mapping activities. Additionally, the U.S. Congress recognizes the importance of standards in the 21st Century Cures Act. Standards will help to accelerate and streamline cell and gene therapy product development, ensure the quality and consistency of processes and products, and facilitate their regulatory approval. Although there is general agreement on the need for additional standards for regenerative medicine products, a shared understanding of standards is required for real progress toward the development of standards that advance regenerative medicine. Here, we describe the roles of standards in regenerative medicine, the process for standards development, and the interactions of different entities in the standards development process. Highlighted are recent coordinated efforts between the U.S. Food and Drug Administration and the National Institute of Standards and Technology to facilitate standards development and foster the science that underpins standards development. Published by Elsevier Inc.
Particle acceleration areas in two radio galaxies.
NASA Astrophysics Data System (ADS)
Andernach, H.
1989-04-01
Two edge-darkened, tailed radio galaxies (PKS 0123-01 and PKS 2247+11) were mapped with the VLA at 1.4 and 5 GHz at sub-arcmin resolution as well as with the Effelsberg 100-m telescope at 2.7, 5 and 10.7 GHz at arcmin resolution. With additional use of existing low-frequency maps the shape of the radio spectrum is analyzed point by point across the source extent. The shape is found to be concave (i.e. having high-frequency excess) over major parts of the source extent, in the case of 2247+11 even for a region in the far radio tail. Possible mechanisms causing this feature are proposed. Using a subset of maps at higher angular resolution most of the regions with spectral flattening turn out to coincide with bends and wiggles of the radio jets and/or tails. Polarization data are available at four frequencies and some problems in their interpretation are discussed.
NASA Astrophysics Data System (ADS)
Amri, Khairul; Nugraha, Loparedo; Barchia, Muhammad Faiz
2017-11-01
Land use changes in the Manna watershed have caused degradation of the watershed's functions. As water infiltration declines, more water runs off into the Manna River and causes flooding downstream. The aim of this study is to analyze the environmentally degraded condition of the Manna watershed. The critical level of the Manna catchment areas was determined by overlaying digital maps following the procedure applied by the Ministry of Forestry, Republic of Indonesia (P.32/MENHUT-II/2009). Assessing the critical level of the catchment also required maps of natural and actual infiltration, and the analysis and interpretation were carried out in ArcGIS 10.1. Based on the spatial analysis overlaying maps of slope, soils, and rainfall, the natural infiltration rate in the Manna watershed is categorized as high (44.1%). The critical-level classification places about 64.5% of the watershed in good condition and about 35.5% in a state of incipient degradation. These environmental degradation conditions indicate that land use changes in the Manna watershed could deteriorate infiltration rates, and cultivated agricultural activity that neglects conservation rules could accelerate the growth of critical catchment areas in the Manna watershed.
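A toy raster-overlay sketch of the criticality scoring workflow described above. The class values, weights, and threshold are illustrative placeholders, not the values prescribed by P.32/MENHUT-II/2009:

```python
# Toy weighted map-overlay: classified slope, soil, and rainfall grids are
# weighted, summed, and thresholded into a "degraded" mask.
import numpy as np

rng = np.random.default_rng(1)
slope_class = rng.integers(1, 6, size=(100, 100))  # 1 (flat) .. 5 (very steep)
soil_class = rng.integers(1, 6, size=(100, 100))   # 1 (low runoff) .. 5 (high)
rain_class = rng.integers(1, 6, size=(100, 100))   # 1 (low) .. 5 (high intensity)

# Illustrative weights; a real assessment uses the ministry's prescribed scheme.
score = 0.4 * slope_class + 0.3 * soil_class + 0.3 * rain_class
degraded = score > 3.5                              # illustrative threshold
print(f"degraded share of catchment: {degraded.mean():.1%}")
```

The GIS workflow in the study is the same idea at scale: each thematic layer is classified, weighted, and combined cell by cell, and the resulting score is binned into criticality classes.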
Intraspecific scaling of arterial blood pressure in the Burmese python.
Enok, Sanne; Slay, Christopher; Abe, Augusto S; Hicks, James W; Wang, Tobias
2014-07-01
Interspecific allometric analyses indicate that mean arterial blood pressure (MAP) increases with body mass in snakes and mammals. In snakes, MAP increases in proportion to the increased distance between the heart and the head, when the heart-head vertical distance is expressed as ρgh (where ρ is the density of blood, g is the acceleration due to gravity, and h is the vertical distance above the heart), and the rise in MAP is associated with a larger heart to normalize wall stress in the ventricular wall. Based on measurements of MAP in Burmese pythons ranging from 0.9 to 3.7 m in length (0.20-27 kg), we demonstrate that although MAP increases with body mass, the rise in MAP is merely half of that predicted by heart-head distance. Scaling relationships within individual species, therefore, may not be accurately predicted by existing interspecific analyses. © 2014. Published by The Company of Biologists Ltd.
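For a sense of scale of the ρgh term, here is a worked example with assumed, typical values (blood density ρ ≈ 1050 kg m⁻³ and g = 9.81 m s⁻²; the numbers are ours, not the paper's):

```latex
\[
\Delta P = \rho g h
         \approx 1050\ \mathrm{kg\,m^{-3}} \times 9.81\ \mathrm{m\,s^{-2}} \times 1\ \mathrm{m}
         \approx 1.03 \times 10^{4}\ \mathrm{Pa}
         \approx 77\ \mathrm{mmHg}.
\]
```

That is, a 1 m heart-head column carries a hydrostatic head comparable in magnitude to arterial blood pressure itself (1 mmHg ≈ 133.3 Pa), which is why heart-head distance is expected to drive MAP scaling in long-bodied animals.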
Using PAT to accelerate the transition to continuous API manufacturing.
Gouveia, Francisca F; Rahbek, Jesper P; Mortensen, Asmus R; Pedersen, Mette T; Felizardo, Pedro M; Bro, Rasmus; Mealy, Michael J
2017-01-01
Significant improvements can be realized by converting conventional batch processes into continuous ones. The main drivers include reduction of cost and waste, increased safety, and simpler scale-up and tech transfer activities. Re-designing the process layout offers the opportunity to incorporate a set of process analytical technologies (PAT) embraced in the Quality-by-Design (QbD) framework. These tools are used for process state estimation, providing enhanced understanding of the underlying variability in the process impacting quality and yield. This work describes a road map for identifying the best technology to speed up the development of continuous processes while providing the basis for developing analytical methods for monitoring and controlling the continuous full-scale reaction. The suitability of in-line Raman, FT-infrared (FT-IR), and near-infrared (NIR) spectroscopy for real-time process monitoring was investigated in the production of 1-bromo-2-iodobenzene. The synthesis consists of three consecutive reaction steps, including the formation of an unstable diazonium salt intermediate, which is critical to secure high yield and avoid formation of by-products. All spectroscopic methods were able to capture critical information related to the accumulation of the intermediate with very similar accuracy. NIR spectroscopy proved to be satisfactory in terms of performance, ease of installation, full-scale transferability, and stability under very adverse process conditions. As such, in-line NIR was selected to monitor the continuous full-scale production. The quantitative method was developed against theoretical concentration values of the intermediate, since representative sampling for off-line reference analysis cannot be achieved. The rapid and reliable analytical system allowed speeding up the design of the continuous process and gave a better understanding of the manufacturing requirements needed to ensure optimal yield and avoid unreacted raw materials and by-products in the continuous reactor effluent.
Reassessment of probabilistic seismic hazard in the Marmara region
Kalkan, Erol; Gulkan, Polat; Yilmaz, Nazan; Çelebi, Mehmet
2009-01-01
In 1999, the eastern coastline of the Marmara region (Turkey) witnessed increased seismic activity on the North Anatolian fault (NAF) system with two damaging earthquakes (M 7.4 Kocaeli and M 7.2 Düzce) that occurred almost three months apart. These events have reduced stress on the western segment of the NAF where it continues under the Marmara Sea. The undersea fault segments have recently been explored using bathymetric and reflection surveys. These findings have helped scientists to understand the seismotectonic environment of the Marmara basin, which has remained a perplexing tectonic domain. On the basis of the newly collected data, the seismic hazard of the Marmara region is reassessed using a probabilistic approach. Two different earthquake source models, (1) a smoothed-gridded seismicity model and (2) a fault model with alternate magnitude-frequency relations (Gutenberg-Richter and characteristic), were used with local and imported ground-motion-prediction equations. Regional exposure is computed and quantified on a set of hazard maps that provide peak horizontal ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 sec for a uniform firm-rock site condition (760 m/sec average shear wave velocity in the upper 30 m). These acceleration levels were computed for ground motions having 2% and 10% probabilities of exceedance in 50 yr, corresponding to return periods of about 2475 and 475 yr, respectively. The maximum PGA computed (at rock sites) is 1.5 g along the fault segments of the NAF zone extending into the Marmara Sea. The new maps generally show a 10% to 15% increase in PGA and 0.2 and 1.0 sec spectral acceleration values across much of Marmara compared to previous regional hazard maps. Hazard curves and smooth design spectra for three site conditions, rock, soil, and soft soil, are provided for the Istanbul metropolitan area as possible tools in future risk estimates.
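For readers checking the return-period figures quoted here (and in the Sinai study below), the standard Poisson-occurrence relation connects a probability of exceedance P over an exposure time t to a return period T; the algebra is generic, not specific to this study:

```latex
\[
P = 1 - e^{-t/T}
\;\Longrightarrow\;
T = \frac{-t}{\ln(1-P)},
\qquad
T(2\%,\,50\ \mathrm{yr}) = \frac{-50}{\ln 0.98} \approx 2475\ \mathrm{yr},
\qquad
T(10\%,\,50\ \mathrm{yr}) = \frac{-50}{\ln 0.90} \approx 475\ \mathrm{yr}.
\]
```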
Probabilistic seismic hazard maps for Sinai Peninsula, Egypt
NASA Astrophysics Data System (ADS)
Deif, A.; Abou Elenean, K.; El Hadidy, M.; Tealeb, A.; Mohamed, A.
2009-09-01
Sinai experienced the largest Egyptian earthquake, with moment magnitude (Mw) 7.2, in 1995 in the Gulf of Aqaba, 350 km from Cairo. The peninsula hosts many tourist projects in addition to various natural resources. The aim of the current study is to present, for the first time, probabilistic spectral hazard maps for Sinai. Revised earthquake catalogues for Sinai and its surroundings, from 112 BC to 2006 AD with magnitude equal to or greater than 3.0, are used to calculate seismic hazard in the region of interest between 27°N and 31.5°N and 32°E and 36°E. We declustered these catalogues to include only independent events and tested them for the completeness of different magnitude ranges. 28 seismic source zones are used to define the seismicity, and the recurrence rates and maximum earthquakes across these zones were determined from the modified catalogues. Strong ground motion relations for rock are used to produce 5% damped spectral acceleration values at four different periods (0.2, 0.5, 1.0 and 2.0 s) to define the uniform response spectra at each site (a grid of 0.2° × 0.2° over the whole area). Maps showing spectral acceleration values at 0.2, 0.5, 1.0 and 2.0 s periods as well as peak ground acceleration (PGA) for a return period of 475 years (equivalent to a 90% probability of non-exceedance in 50 years) are presented. In addition, uniform hazard spectra (UHS) at 25 different periods are graphed for the four main cities (Hurghada, Sharm El-Sheikh, Nuweibaa and Suez). The highest hazard is found in the Gulf of Aqaba, with a maximum spectral acceleration of 356 cm s⁻² at a period of 0.22 s for a return period of 475 years.
Fuselage shell and cavity response measurements on a DC-9 test section
NASA Technical Reports Server (NTRS)
Simpson, M. A.; Mathur, G. P.; Cannon, M. R.; Tran, B. N.; Burge, P. L.
1991-01-01
A series of fuselage shell and cavity response measurements conducted on a DC-9 aircraft test section are described. The objectives of these measurements were to define the shell and cavity modal characteristics of the fuselage, understand the structural-acoustic coupling characteristics of the fuselage, and measure the response of the fuselage to different types of acoustic and vibration excitation. The fuselage was excited with several combinations of acoustic and mechanical sources using interior and exterior loudspeakers and shakers, and the response to these inputs was measured with arrays of microphones and accelerometers. The data were analyzed to generate spatial plots of the shell acceleration and cabin acoustic pressure field, and corresponding acceleration and pressure wavenumber maps. Analysis and interpretation of the spatial plots and wavenumber maps provided the required information on modal characteristics, structural-acoustic coupling, and fuselage response.
Effect of increased surface tension and assisted ventilation on /sup 99m/Tc-DTPA clearance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jefferies, A.L.; Kawano, T.; Mori, S.
1988-02-01
Experiments were performed to determine the effects of conventional mechanical ventilation (CMV) and high-frequency oscillation (HFO) on the clearance of technetium-99m-labeled diethylenetriamine pentaacetate (99mTc-DTPA) from lungs with altered surface tension properties. A submicronic aerosol of 99mTc-DTPA was insufflated into the lungs of anesthetized, tracheotomized rabbits before and 1 h after the administration of the aerosolized detergent dioctyl sodium sulfosuccinate (OT). Rabbits were ventilated by one of four methods: 1) spontaneous breathing; 2) CMV at 12 cmH2O mean airway pressure (MAP); 3) HFO at 12 cmH2O MAP; 4) HFO at 16 cmH2O MAP. Administration of OT resulted in decreased arterial PO2 (PaO2), increased lung wet-to-dry weight ratios, and abnormal lung pressure-volume relationships, compatible with increased surface tension. 99mTc-DTPA clearance was accelerated after OT in all groups. The post-OT rate of clearance (k) was significantly faster (P less than 0.05) in the CMV at 12 cmH2O MAP (k = 7.57 +/- 0.71%/min (SE)) and HFO at 16 cmH2O MAP (k = 6.92 +/- 0.61%/min) groups than in the spontaneously breathing (k = 4.32 +/- 0.55%/min) and HFO at 12 cmH2O MAP (k = 4.68 +/- 0.63%/min) groups. The clearance curves were biexponential in the former two groups. We conclude that pulmonary clearance of 99mTc-DTPA is accelerated in high surface tension pulmonary edema, and this effect is enhanced by both conventional ventilation and HFO at high mean airway pressure.
Modelling element distributions in the atmospheres of magnetic Ap stars
NASA Astrophysics Data System (ADS)
Alecian, G.; Stift, M. J.
2007-11-01
Context: In recent papers convincing evidence has been presented for chemical stratification in Ap star atmospheres, and surface abundance maps have been shown to correlate with the magnetic field direction. Radiatively driven diffusion, which is known to be sensitive to the magnetic field strength and direction, is among the processes responsible for these inhomogeneities. Aims: Here we explore the hypothesis that equilibrium stratifications - such that the diffusive particle flux is close to zero throughout the atmosphere - can, in a number of cases, explain the observed abundance maps and vertical distributions of the various elements. Methods: An iterative scheme adjusts the abundances in such a way as to achieve either zero particle flux or zero effective acceleration throughout the atmosphere, taking strength and direction of the magnetic field into account. Results: The investigation of equilibrium stratifications in stellar atmospheres with temperatures from 8500 to 12 000 K and fields up to 10 kG reveals considerable variations in the vertical distribution of the 5 elements studied (Mg, Si, Ca, Ti, Fe), often with zones of large over- or under-abundances and with indications of other competing processes (such as mass loss). Horizontal magnetic fields can be very efficient in helping the accumulation of elements in higher layers. Conclusions: A comparison between our calculations and the vertical abundance profiles and surface maps derived by magnetic Doppler imaging reveals that equilibrium stratifications are in a number of cases consistent with the main trends inferred from observed spectra. However, it is not clear whether such equilibrium solutions will ever be reached during the evolution of an Ap star.
Transient aerodynamic characteristics of vans during the accelerated overtaking process
NASA Astrophysics Data System (ADS)
Liu, Li-ning; Wang, Xing-shen; Du, Guang-sheng; Liu, Zheng-gang; Lei, Li
2018-04-01
This paper studies the influence of the accelerated overtaking process on vehicles' transient aerodynamic characteristics through 3-D numerical simulations with dynamic meshes and a sliding interface technique. Numerical accuracy is verified against experimental results. The aerodynamic characteristics of vehicles in the uniform overtaking process and the accelerated overtaking process are compared. It is shown that the speed variation of the overtaking van influences the aerodynamic characteristics of both vans, with greater influence on the overtaken van than on the overtaking van. Simulations of three different accelerated overtaking processes show that the greater the acceleration of the overtaking van, the larger the aerodynamic coefficients of the overtaken van. When the acceleration of the overtaking van increases by 1 m/s², the maximum drag force, side force, and yawing moment coefficients of the overtaken van all increase by more than 6%, which seriously affects the power performance and stability of the vehicles. Analysis of the pressure fields under different accelerated conditions reveals the cause of the variations in the aerodynamic characteristics of the vehicles.
Making Basic Science Studies in Glaucoma More Clinically Relevant: The Need for a Consensus.
Toris, Carol B; Gelfman, Claire; Whitlock, Andy; Sponsel, William E; Rowe-Rendleman, Cheryl L
2017-09-01
Glaucoma is a chronic, progressive, and debilitating optic neuropathy that causes retinal damage and visual defects. The pathophysiologic mechanisms of glaucoma remain ill-defined, and there is an indisputable need for contributions from basic science researchers in defining pathways for translational research. However, glaucoma researchers today face significant challenges due to the lack of a map of integrated pathways from bench to bedside and the lack of consensus statements to guide in choosing the right research questions, techniques, and model systems. Here, we present the case for the development of such maps and consensus statements, which are critical for faster development of the most efficacious glaucoma therapy. We underscore that interrogating the preclinical path of both successful and unsuccessful clinical programs is essential to defining future research. One aspect of this is evaluation of available preclinical research tools. To begin this process, we highlight the utility of currently available animal models for glaucoma and emphasize that there is a particular need for models of glaucoma with normal intraocular pressure. In addition, we outline a series of discoveries from cell-based, animal, and translational research that begin to reveal a map of glaucoma from cell biology to physiology to disease pathology. Completion of these maps requires input and consensus from the global glaucoma research community. This article sets the stage by outlining various approaches to such a consensus. Together, these efforts will help accelerate basic science research, leading to discoveries with significant clinical impact for people with glaucoma.
NASA Astrophysics Data System (ADS)
Dolei, S.; Susino, R.; Sasso, C.; Bemporad, A.; Andretta, V.; Spadaro, D.; Ventura, R.; Antonucci, E.; Abbo, L.; Da Deppo, V.; Fineschi, S.; Focardi, M.; Frassetto, F.; Giordano, S.; Landini, F.; Naletto, G.; Nicolini, G.; Nicolosi, P.; Pancrazzi, M.; Romoli, M.; Telloni, D.
2018-05-01
We investigated the capability of mapping the solar wind outflow velocity of neutral hydrogen atoms by using synergistic visible-light and ultraviolet observations. We used polarised brightness images acquired by the LASCO/SOHO and Mk3/MLSO coronagraphs, and synoptic Lyα line observations of the UVCS/SOHO spectrometer to obtain daily maps of solar wind H I outflow velocity between 1.5 and 4.0 R⊙ on the SOHO plane of the sky during a complete solar rotation (from 1997 June 1 to 1997 June 28). The 28-day data sequence allows us to construct coronal off-limb Carrington maps of the resulting velocities at different heliocentric distances to investigate the space and time evolution of the outflowing solar plasma. In addition, we performed a parameter space exploration in order to study the dependence of the derived outflow velocities on the physical quantities characterising the Lyα emitting process in the corona. Our results are important in anticipation of the future science with the Metis instrument, selected to be part of the Solar Orbiter scientific payload. It was conceived to carry out near-Sun coronagraphy, performing for the first time simultaneous imaging in polarised visible light and the ultraviolet H I Lyα line, thus providing an unprecedented view of the solar wind acceleration region in the inner corona. The movie (see Sect. 4.2) is available at https://www.aanda.org
Does MRI scan acceleration affect power to track brain change?
Ching, Christopher R K; Hua, Xue; Hibar, Derrek P; Ward, Chadwick P; Gunter, Jeffrey L; Bernstein, Matt A; Jack, Clifford R; Weiner, Michael W; Thompson, Paul M
2015-01-01
The Alzheimer's Disease Neuroimaging Initiative recently implemented accelerated T1-weighted structural imaging to reduce scan times. Faster scans may reduce study costs and patient attrition by accommodating people who cannot tolerate long scan sessions. However, little is known about how scan acceleration affects the power to detect longitudinal brain change. Using tensor-based morphometry, no significant difference was detected in numerical summaries of atrophy rates from accelerated and nonaccelerated scans in subgroups of patients with Alzheimer's disease, early or late mild cognitive impairment, or healthy controls over a 6- and 12-month scan interval. Whole-brain voxelwise mapping analyses revealed some apparent regional differences in 6-month atrophy rates when comparing all subjects irrespective of diagnosis (n = 345). No such whole-brain difference was detected for the 12-month scan interval (n = 156). Effect sizes for structural brain changes were not detectably different in accelerated versus nonaccelerated data. Scan acceleration may influence brain measures but has minimal effects on tensor-based morphometry-derived atrophy measures, at least over the 6- and 12-month intervals examined here. Copyright © 2015 Elsevier Inc. All rights reserved.
Weights of Evidence Method for Landslide Susceptibility Mapping in Takengon, Central Aceh, Indonesia
NASA Astrophysics Data System (ADS)
Pamela; Sadisun, Imam A.; Arifianti, Yukni
2018-02-01
Takengon is an area prone to earthquakes and landslides. On July 2, 2013, the Central Aceh earthquake induced large numbers of landslides in the Takengon area, resulting in 39 casualties. This location was chosen to assess the landslide susceptibility of Takengon using a statistical method referred to as weights of evidence (WoE). The WoE model was applied to identify the main factors influencing landslide-susceptible areas and to derive a landslide susceptibility map of Takengon. The 251 landslides were randomly divided into a modeling/training data set (70%) and a validation/test data set (30%). Twelve thematic evidence maps (slope degree, slope aspect, lithology, land cover, elevation, rainfall, lineament, peak ground acceleration, curvature, flow direction, distance to rivers, and distance to roads) were used as landslide causative factors. According to the AUC, the significant factors controlling landslides are, in decreasing order, slope, slope aspect, peak ground acceleration, elevation, lithology, flow direction, lineament, and rainfall. Verification with the landslide test data gives an AUC prediction rate of 0.819, and the AUC success rate with all landslide data included is 0.879. These results show that the selected factors and the WoE method provide a good model for assessing landslide susceptibility. The landslide susceptibility map of Takengon shows probabilities that represent relative degrees of susceptibility to landsliding in the Takengon area.
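To make the WoE computation concrete, here is a minimal sketch of the weight calculation for a single binary evidence layer. The counts and the steep-slope example are illustrative, not from the study (~176 is simply 70% of the 251 mapped landslides):

```python
# Weights-of-evidence for one binary evidence layer (e.g., "slope > 25 degrees")
# against a landslide inventory, from the 2x2 pixel counts.
import math

def weights_of_evidence(n_evid_slide, n_evid_stable, n_noevid_slide, n_noevid_stable):
    """Return (W+, W-, contrast C) from counts of pixels:
    evidence present/absent x landslide present/absent."""
    p_b_given_d = n_evid_slide / (n_evid_slide + n_noevid_slide)      # P(B | slide)
    p_b_given_nd = n_evid_stable / (n_evid_stable + n_noevid_stable)  # P(B | stable)
    w_plus = math.log(p_b_given_d / p_b_given_nd)
    w_minus = math.log((1 - p_b_given_d) / (1 - p_b_given_nd))
    return w_plus, w_minus, w_plus - w_minus

# Illustrative counts: 120 of 176 training landslides fall on steep slopes,
# which cover a smaller share of the stable area.
wp, wm, c = weights_of_evidence(120, 3000, 56, 7000)
print(f"W+ = {wp:.2f}, W- = {wm:.2f}, contrast = {c:.2f}")
```

Positive W+ (and a large contrast C = W+ − W−) marks an evidence class that concentrates landslides; summing the weights of all evidence layers per cell yields the susceptibility score that is then binned into the final map.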
Experimental evaluation of a neural-oscillator-driven active mass damper system
NASA Astrophysics Data System (ADS)
Iba, Daisuke; Hongu, Junichi
2014-03-01
This paper proposes a new active dynamic absorber control system for high-rise buildings that uses a neural oscillator together with a map estimating the amplitude level of the oscillator, and reports experimental results from an apparatus implementing the proposed control algorithm. The proposed system decides the travel distance and direction of the auxiliary mass of the dynamic absorber from the output of the oscillator, which filters the structural acceleration response through the oscillator's intrinsic dynamics, and from an Amplitude-Phase map (AP-map) that estimates the structural response at a specific frequency within the synchronization region; the auxiliary mass is then driven to the computed location by a position controller. The developed active dynamic absorber was mounted on top of an experimental single-degree-of-freedom structure representing a high-rise building and consists of the auxiliary mass, a DC motor, a ball screw, a microcomputer, a laser displacement sensor, and an acceleration sensor. The proposed AP-map and the algorithm determining the travel direction of the mass from the oscillator output are embedded in the microcomputer. The paper first clarifies the relations among the subsystems of the proposed system with reference to a block diagram, and then presents experimental responses of the whole system under earthquake excitation to confirm the validity of the proposed system.
A Model of RHIC Using the Unified Accelerator Libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilat, F.; Tepikian, S.; Trahern, C. G.
1998-01-01
The Unified Accelerator Library (UAL) is an object oriented and modular software environment for accelerator physics which comprises an accelerator object model for the description of the machine (SMF, for Standard Machine Format), a collection of Physics Libraries, and a Perl interface that provides a homogeneous shell for integrating and managing these components. Currently available physics libraries include TEAPOT++, a collection of C++ physics modules conceptually derived from TEAPOT, and DNZLIB, a differential algebra package for map generation. This software environment has been used to build a flat model of RHIC which retains the hierarchical lattice description while assigning specific characteristics to individual elements, such as measured field harmonics. A first application of the model and of the simulation capabilities of UAL has been the study of RHIC stability in the presence of Siberian snakes and spin rotators. The building blocks of RHIC snakes and rotators are helical dipoles, unconventional devices that cannot be modeled by traditional accelerator physics codes and have been implemented in UAL as Taylor maps. Section 2 describes the RHIC data stores, Section 3 the RHIC SMF format, and Section 4 the RHIC-specific Perl interface (RHIC Shell). Section 5 explains how the RHIC SMF and UAL have been used to study the RHIC dynamic behavior and presents detuning and dynamic aperture results. If the reader is not familiar with the motivation and characteristics of UAL, a useful overview paper is included in the Appendix. An example of a complete set of Perl scripts for RHIC simulation can also be found in the Appendix.
Vibration measurement by temporal Fourier analyses of a digital hologram sequence.
Fu, Yu; Pedrini, Giancarlo; Osten, Wolfgang
2007-08-10
A method for whole-field noncontact measurement of displacement, velocity, and acceleration of a vibrating object based on image-plane digital holography is presented. A series of digital holograms of a vibrating object are captured by use of a high-speed CCD camera. The result of the reconstruction is a three-dimensional complex-valued matrix with noise. We apply Fourier analysis and windowed Fourier analysis in both the spatial and the temporal domains to extract the displacement, the velocity, and the acceleration. The instantaneous displacement is obtained by temporal unwrapping of the filtered phase map, whereas the velocity and acceleration are evaluated by Fourier analysis and by windowed Fourier analysis along the time axis. The combination of digital holography and temporal Fourier analyses allows for evaluation of the vibration, without a phase ambiguity problem, and smooth spatial distribution of instantaneous displacement, velocity, and acceleration of each instant are obtained. The comparison of Fourier analysis and windowed Fourier analysis in velocity and acceleration measurements is also presented.
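A minimal numpy sketch of the temporal analysis chain described above, for a single pixel of the reconstructed complex field. The wavelength, frame rate, and vibration parameters are illustrative, and the method's spatial and windowed Fourier filtering steps are omitted:

```python
# From a time sequence of complex reconstructions: displacement by temporal
# phase unwrapping, then velocity and acceleration by differentiation.
import numpy as np

wavelength = 532e-9                     # m, illustrative laser wavelength
fs = 2000.0                             # Hz, illustrative camera frame rate
t = np.arange(1024) / fs
disp_true = 200e-9 * np.sin(2 * np.pi * 50 * t)   # 50 Hz, 200 nm vibration

# One pixel of the reconstruction: phase ~ (4*pi/lambda) * out-of-plane displacement
field = np.exp(1j * (4 * np.pi / wavelength) * disp_true)

phase = np.unwrap(np.angle(field))                # temporal phase unwrapping
disp = phase * wavelength / (4 * np.pi)           # instantaneous displacement
vel = np.gradient(disp, t)                        # velocity
acc = np.gradient(vel, t)                         # acceleration
print(f"peak displacement {disp.max():.2e} m, peak acceleration {acc.max():.2e} m/s^2")
```

Because the phase is unwrapped along time rather than space, the usual spatial phase-ambiguity problem does not arise, which is the advantage the abstract highlights.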
Optical manipulation for optogenetics: otoliths manipulation in zebrafish (Conference Presentation)
NASA Astrophysics Data System (ADS)
Favre-Bulle, Itia A.; Scott, Ethan; Rubinsztein-Dunlop, Halina
2016-03-01
Otoliths play an important role in Zebrafish in terms of hearing and sense of balance. Many studies have been conducted to understand its structure and function, however the encoding of its movement in the brain remains unknown. Here we developed a noninvasive system capable of manipulating the otolith using optical trapping while we image its behavioral response and brain activity. We'll also present our tools for behavioral response detection and brain activity mapping. Acceleration is sensed through movements of the otoliths in the inner ear. Because experimental manipulations involve movements, electrophysiology and fluorescence microscopy are difficult. As a result, the neural codes underlying acceleration sensation are poorly understood. We have developed a technique for optically trapping otoliths, allowing us to simulate acceleration in stationary larval zebrafish. By applying forces to the otoliths, we can elicit behavioral responses consistent with compensation for perceived acceleration. Since the animal is stationary, we can use calcium imaging in these animals' brains to identify the functional circuits responsible for mediating responses to acceleration in natural settings.
Marshak Lectureship: The Turkish Accelerator Center, TAC
NASA Astrophysics Data System (ADS)
Yavas, Omer
2012-02-01
The Turkish Accelerator Center (TAC) project comprises five different electron and proton accelerator complexes, to be built over 15 years with a phased approach. The Turkish Government funds the project, and 23 universities in Turkey are currently associated with it. The currently funded project, which runs until 2013, aims:
* To establish a superconducting linac based infra-red free electron laser and Bremsstrahlung facility (TARLA) at the Golbasi Campus of Ankara University,
* To establish the Institute of Accelerator Technologies at Ankara University, and
* To complete the Technical Design Report of TAC.
The proposed facilities are a 3rd generation synchrotron radiation facility, a SASE-FEL facility, a GeV-scale proton accelerator facility, and an electron-positron collider serving as a super charm factory. In this talk, an overview of the general status and road map of the TAC project will be given. The national and regional importance of TAC will be described, and the structure of national and international collaborations will be explained.
Polarization imaging of imperfect m-plane GaN surfaces
NASA Astrophysics Data System (ADS)
Sakai, Yuji; Kawayama, Iwao; Nakanishi, Hidetoshi; Tonouchi, Masayoshi
2017-04-01
Surface polar states in m-plane GaN wafers were studied using a laser terahertz (THz) emission microscope (LTEM). Femtosecond laser illumination excites THz waves from the surface due to photocarrier acceleration by local spontaneous polarization and/or the surface built-in electric field. The m-plane, in general, has a large number of unfavorable defects and unintentional polarization inversion created during the regrowth process. The LTEM images can visualize surface domains with different polarizations, some of which are hard to visualize with photoluminescence mapping, i.e., non-radiative defect areas. The present study demonstrates that the LTEM provides rich information about the surface polar states of GaN, which is crucial to improve the performance of GaN-based optoelectronic and power devices.
Towards industrial ultrafast laser microwelding: SiO2 and BK7 to aluminum alloy.
Carter, Richard M; Troughton, Michael; Chen, Jianyong; Elder, Ian; Thomson, Robert R; Daniel Esser, M J; Lamb, Robert A; Hand, Duncan P
2017-06-01
We report systematic analysis and comparison of ps-laser microwelding of industry-relevant Al6082 parts to SiO2 and BK7. Parameter mapping of the effect of pulse energy and focal depth on weld strength is presented. The welding process was found to be strongly dependent on the focal plane but has a large tolerance to variation in pulse energy. Accelerated lifetime tests by thermal cycling from -50 °C to +90 °C are presented. Welds in Al6082-BK7 parts survive over the full temperature range, where the ratio of thermal expansion coefficients is 3.4:1. Welds in Al6082-SiO2 parts (ratio 47.1:1) survive only a limited temperature range.
Yang, Yufei; Chen, Wei; Wang, Jiayu; Yang, Ziyu; Wang, Shenlin; Xiao, Xianjin; Li, Mengyuan
2018-01-01
Lambda exonuclease (λ exo) plays an important role in the resection of DNA ends for DNA repair. Currently, it is also a widely used enzymatic tool in genetic engineering, DNA-binding protein mapping, nanopore sequencing and biosensing. Herein, we disclose two noncanonical properties of this enzyme and suggest a previously undescribed hydrophobic interaction model between λ exo and DNA substrates. We demonstrate that the length of the free portion of the substrate strand in the dsDNA plays an essential role in the initiation of digestion reactions by λ exo. A dsDNA with a 5′ non-phosphorylated, two-nucleotide-protruding end can be digested by λ exo with very high efficiency. Moreover, we show that when a conjugated structure is covalently attached to an internal base of the dsDNA, the presence of a single mismatched base pair at the 5′ side of the modified base may significantly accelerate the process of digestion by λ exo. A detailed comparison study revealed additional π–π stacking interactions between the attached label and the amino acid residues of the enzyme. These new findings not only broaden our knowledge of the enzyme but will also be very useful for research on DNA repair and in vitro processing of nucleic acids. PMID:29490081
Ultrasonic cavitation erosion of 316L steel weld joint in liquid Pb-Bi eutectic alloy at 550°C.
Lei, Yucheng; Chang, Hongxia; Guo, Xiaokai; Li, Tianqing; Xiao, Longren
2017-11-01
Liquid lead-bismuth eutectic alloy (LBE) is used in Accelerator Driven transmutation Systems (ADS) as a high-power spallation neutron target and coolant. A 19.2 kHz ultrasonic device was deployed in liquid LBE at 550°C to induce short- and long-period cavitation erosion damage on the surface of a weld joint. SEM and atomic force microscopy (AFM) were used to map the surface properties, and energy dispersive spectrometry (EDS) was applied for qualitative and quantitative analysis of elements in micro regions of the surface. An erosion mechanism describing how the cavitation erosion evolved was proposed from the observed element changes, morphology evolution, surface hardness, and roughness evolution. The results showed that pits, craters, and cracks appeared gradually on the eroded surface after a period of cavitation, and the surface roughness increased with exposure time. Work hardening by bubble impacts during the incubation stage efficiently strengthened the cavitation resistance. Dissolution and oxidation corrosion occurring simultaneously with cavitation erosion in liquid LBE accelerated the corrosion-erosion process, and the two processes combined to cause more serious damage to the material surface. In contrast to the weld metal, the base metal exhibited much better cavitation resistance. Copyright © 2017. Published by Elsevier B.V.
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, to the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
Otazo, Ricardo; Tsai, Shang-Yueh; Lin, Fa-Hsuan; Posse, Stefan
2007-12-01
MR spectroscopic imaging (MRSI) with whole brain coverage in clinically feasible acquisition times still remains a major challenge. A combination of MRSI with parallel imaging has shown promise to reduce the long encoding times and 2D acceleration with a large array coil is expected to provide high acceleration capability. In this work a very high-speed method for 3D-MRSI based on the combination of proton echo planar spectroscopic imaging (PEPSI) with regularized 2D-SENSE reconstruction is developed. Regularization was performed by constraining the singular value decomposition of the encoding matrix to reduce the effect of low-value and overlapped coil sensitivities. The effects of spectral heterogeneity and discontinuities in coil sensitivity across the spectroscopic voxels were minimized by unaliasing the point spread function. As a result the contamination from extracranial lipids was reduced 1.6-fold on average compared to standard SENSE. We show that the acquisition of short-TE (15 ms) 3D-PEPSI at 3 T with a 32 x 32 x 8 spatial matrix using a 32-channel array coil can be accelerated 8-fold (R = 4 x 2) along y-z to achieve a minimum acquisition time of 1 min. Maps of the concentrations of N-acetyl-aspartate, creatine, choline, and glutamate were obtained with moderate reduction in spatial-spectral quality. The short acquisition time makes the method suitable for volumetric metabolite mapping in clinical studies. (c) 2007 Wiley-Liss, Inc.
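A toy illustration of the SVD-constrained reconstruction idea described above, for a single folded voxel group. The sizes match the study's 32-channel, R = 4 × 2 setting, but the random sensitivities and truncation threshold are ours; the actual reconstruction also handles the spectral dimension and the point-spread-function unaliasing:

```python
# SVD-truncated SENSE unfolding for one aliased voxel group: solve E x = y
# while zeroing small singular values of the encoding matrix E so that low or
# overlapped coil sensitivities do not amplify noise.
import numpy as np

rng = np.random.default_rng(2)
n_coils, R = 32, 8                      # 32-channel array, R = 4 x 2 acceleration
E = rng.standard_normal((n_coils, R)) + 1j * rng.standard_normal((n_coils, R))
x_true = rng.standard_normal(R)         # true voxel values folded into one sample
y = E @ x_true + 0.01 * rng.standard_normal(n_coils)

U, s, Vh = np.linalg.svd(E, full_matrices=False)
keep = s > 0.1 * s[0]                   # illustrative truncation threshold
s_inv = np.where(keep, 1.0 / s, 0.0)    # drop small singular values instead of inverting them
x_hat = Vh.conj().T @ (s_inv * (U.conj().T @ y))
print("unfolding error:", np.linalg.norm(x_hat - x_true))
```

The design trade-off is the usual one for regularized parallel imaging: truncating the SVD bounds the noise amplification (g-factor) at the cost of a small systematic bias in the unfolded voxels.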
Modelling soil erosion in a head catchment of Jemma Basin on the Ethiopian highlands
NASA Astrophysics Data System (ADS)
Cama, Mariaelena; Schillaci, Calogero; Kropáček, Jan; Hochschild, Volker; Maerker, Michael
2017-04-01
Soil erosion represents one of the most important global issues, with serious effects on agriculture and water quality, especially in developing countries such as Ethiopia, where rapid population growth and climatic changes affect wide mountainous areas. The Andit-Tid catchment is a head catchment of the Jemma Basin draining to the Blue Nile (Central Ethiopia). It lies in extremely variable topography and is exposed to high degradation dynamics, especially in its lower part. Increasing agricultural activity and grazing lead to intense use of the steep slopes, which has altered the soil structure. As a consequence, water erosion processes have accelerated, leading to sheet erosion, gullies, and badlands. This study is aimed at a geomorphological assessment of soil erosion susceptibility. First, a geomorphological map was generated using a high-resolution digital elevation model (DEM) derived from high-resolution stereoscopic satellite data and multispectral imagery from the RapidEye satellite system, and was then validated by a detailed field survey. The final map contains three inventories of landforms: (i) sheet erosion, (ii) gully erosion, and (iii) badlands. Water erosion susceptibility was calculated with a maximum entropy approach: three models were built using the three inventories as dependent variables and a set of spatial attributes describing lithology, terrain, vegetation, and land cover from remote sensing data and DEMs as independent variables. The susceptibility maps for sheet erosion, gully erosion, and badlands showed good to excellent predictive performance. Moreover, we reveal and discuss the differing importance of the variable sets among the three models. To explore the mutual overlap of the three susceptibility maps, we generated a combined map as a color composite in which each color represents one component of water erosion. The combined map yields useful information for land use managers and planning purposes.
Accelerated deforestation in the humid tropics from the 1990s to the 2000s
NASA Astrophysics Data System (ADS)
Kim, Do-Hyung; Sexton, Joseph O.; Townshend, John R.
2015-05-01
Using a consistent, 20 year series of high- (30 m) resolution, satellite-based maps of forest cover, we estimate forest area and its changes from 1990 to 2010 in 34 tropical countries that account for the majority of the global area of humid tropical forests. Our estimates indicate a 62% acceleration in net deforestation in the humid tropics from the 1990s to the 2000s, contradicting a 25% reduction reported by the United Nations Food and Agriculture Organization Forest Resource Assessment. Net loss of forest cover peaked from 2000 to 2005. Gross gains accelerated slowly and uniformly between 1990-2000, 2000-2005, and 2005-2010. However, the gains were overwhelmed by gross losses, which peaked from 2000 to 2005 and decelerated afterward. The acceleration of humid tropical deforestation we report contradicts the assertion that losses decelerated from the 1990s to the 2000s.
Assessing the impact of graphical quality on automatic text recognition in digital maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang
2016-08-01
Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.
The Improved Locating Algorithm of Particle Filter Based on ROS Robot
NASA Astrophysics Data System (ADS)
Fang, Xun; Fu, Xiaoyang; Sun, Ming
2018-03-01
This paper analyzes the basic theory and primary algorithms of real-time localization and SLAM for robots based on ROS. It proposes an improved particle filter localization algorithm that effectively reduces the time needed to match laser radar scans against the map; in addition, ultra-wideband ranging directly improves the overall efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, resampling is reduced by roughly five sixths, which directly eliminates the associated matching overhead in the algorithm.
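For context, here is a minimal particle filter in one dimension showing the predict/weight/resample cycle that such localization algorithms build on. All models and numbers are toy values, not the paper's; the effective-sample-size gate at the end illustrates one standard way resampling frequency can be cut:

```python
# Minimal 1-D particle filter: predict with a noisy motion model, weight
# against a range-like measurement, and resample only when needed.
import numpy as np

rng = np.random.default_rng(3)
n_particles = 500
particles = rng.uniform(0, 10, n_particles)          # initial pose hypotheses
weights = np.full(n_particles, 1.0 / n_particles)

true_pose = 2.0
for step in range(20):
    true_pose += 0.3                                     # robot moves
    particles += 0.3 + rng.normal(0, 0.05, n_particles)  # predict with motion noise
    z = true_pose + rng.normal(0, 0.2)                   # noisy measurement
    weights = np.exp(-0.5 * ((z - particles) / 0.2) ** 2)
    weights /= weights.sum()
    # Resample only when the effective sample size collapses; reducing
    # resampling frequency is one of the savings the paper reports.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print(f"estimate {np.average(particles, weights=weights):.2f}, truth {true_pose:.2f}")
```

An external ranging source such as ultra-wideband plays the role of the measurement update here: a tight, globally referenced observation concentrates the weights quickly, so the filter can skip expensive global scan-to-map matching.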
Advancing research and applications with lightning detection and mapping systems
NASA Astrophysics Data System (ADS)
MacGorman, Donald R.; Goodman, Steven J.
2011-11-01
Southern Thunder 2011 Workshop; Norman, Oklahoma, 11-14 July 2011 The Southern Thunder 2011 (ST11) Workshop was the fourth in a series intended to accelerate research and operational applications made possible by the expanding availability of ground-based and satellite systems that detect and map all types of lightning (in-cloud and cloud-to-ground). This community workshop, first held in 2004, brings together lightning data providers, algorithm developers, and operational users in government, academia, and industry.
Emitting electron spectra and acceleration processes in the jet of PKS 0447-439
NASA Astrophysics Data System (ADS)
Zhou, Yao; Yan, Dahai; Dai, Benzhong; Zhang, Li
2014-02-01
We investigate the electron energy distributions (EEDs) and the corresponding acceleration processes in the jet of PKS 0447-439, and estimate its redshift through modeling its observed spectral energy distribution (SED) in the frame of a one-zone synchrotron-self Compton (SSC) model. Three EEDs formed in different acceleration scenarios are assumed: the power-law with exponential cut-off (PLC) EED (shock-acceleration scenario or the case of the EED approaching equilibrium in the stochastic-acceleration scenario), the log-parabolic (LP) EED (stochastic-acceleration scenario and the acceleration dominating), and the broken power-law (BPL) EED (no acceleration scenario). The corresponding fluxes of both synchrotron and SSC are then calculated. The model is applied to PKS 0447-439, and modeled SEDs are compared to the observed SED of this object by using the Markov Chain Monte Carlo method. The results show that the PLC model fails to fit the observed SED well, while the LP and BPL models give comparably good fits for the observed SED. The results indicate that it is possible that a stochastic acceleration process acts in the emitting region of PKS 0447-439 and the EED is far from equilibrium (acceleration dominating) or no acceleration process works (in the emitting region). The redshift of PKS 0447-439 is also estimated in our fitting: z = 0.16 ± 0.05 for the LP case and z = 0.17 ± 0.04 for BPL case.
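For reference, the three assumed EED shapes are commonly parameterized as follows (generic forms with our symbol choices: p, p₁, p₂, a, b are spectral parameters and γ_c, γ₀, γ_b are characteristic Lorentz factors; the paper's exact conventions may differ):

```latex
\[
N_{\rm PLC}(\gamma) \propto \gamma^{-p}\, e^{-\gamma/\gamma_c},
\qquad
N_{\rm LP}(\gamma) \propto \left(\frac{\gamma}{\gamma_0}\right)^{-\left[a + b\log(\gamma/\gamma_0)\right]},
\qquad
N_{\rm BPL}(\gamma) \propto
\begin{cases}
\gamma^{-p_1}, & \gamma \le \gamma_b,\\[2pt]
\gamma_b^{\,p_2-p_1}\,\gamma^{-p_2}, & \gamma > \gamma_b.
\end{cases}
\]
```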
Electron Energization and Structure of the Diffusion Region During Asymmetric Reconnection
NASA Technical Reports Server (NTRS)
Chen, Li-Jen; Hesse, Michael; Wang, Shan; Bessho, Naoki; Daughton, William
2016-01-01
Results from particle-in-cell simulations of reconnection with asymmetric upstream conditions are reported to elucidate electron energization and structure of the electron diffusion region (EDR). Acceleration of unmagnetized electrons results in discrete structures in the distribution functions and supports the intense current and perpendicular heating in the EDR. The accelerated electrons are cyclotron turned by the reconnected magnetic field to produce the outflow jets, and as such, the acceleration by the reconnection electric field is limited, leading to resistivity without particle-particle or particle-wave collisions. A map of electron distributions is constructed, and its spatial evolution is compared with quantities previously proposed to be EDR identifiers to enable effective identifications of the EDR in terrestrial magnetopause reconnection.
Modeling Blazar Spectra by Solving an Electron Transport Equation
NASA Astrophysics Data System (ADS)
Lewis, Tiffany; Finke, Justin; Becker, Peter A.
2018-01-01
Blazars are luminous active galaxies across the entire electromagnetic spectrum, but the spectral formation mechanisms, especially the particle acceleration, in these sources are not well understood. We develop a new theoretical model for simulating blazar spectra using a self-consistent electron number distribution. Specifically, we solve the particle transport equation considering shock acceleration, adiabatic expansion, stochastic acceleration due to MHD waves, Bohm diffusive particle escape, synchrotron radiation, and Compton radiation, where we implement the full Compton cross-section for seed photons from the accretion disk, the dust torus, and 26 individual broad lines. We used a modified Runge-Kutta method to solve the second-order equation, including development of a new mathematical method for normalizing stiff steady-state ordinary differential equations. We show that our self-consistent, transport-based blazar model can qualitatively fit the IR through Fermi γ-ray data for 3C 279 with a single-zone, leptonic configuration. We use the solution for the electron distribution to calculate multiwavelength SEDs for 3C 279. We calculate the particle and magnetic field energy densities, which suggest that the emitting region is not always in equipartition (a common assumption) but is sometimes matter dominated. The stratified broad line region (based on ratios in quasar reverberation mapping, and thus adding no free parameters) improves our estimate of the location of the emitting region, increasing it by ~5x. Our model provides a novel view into the physics at play in blazar jets, especially the relative strength of the shock and stochastic acceleration, where our model is well suited to distinguish between these processes, and we find that the latter tends to dominate.
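A schematic of the kind of Fokker-Planck transport equation described here, in our notation (the paper's coefficients and terms differ in detail): D(γ) is the momentum-diffusion coefficient for stochastic acceleration, a_sh γ the first-order (shock) gain, γ̇_ad the adiabatic loss rate, b_rad γ² the synchrotron plus Compton losses, t_esc the (Bohm) escape time, and Q_e the injection term:

```latex
\[
\frac{\partial N_e(\gamma,t)}{\partial t}
= \frac{\partial}{\partial \gamma}\!\left[ D(\gamma)\,\frac{\partial N_e}{\partial \gamma} \right]
- \frac{\partial}{\partial \gamma}\!\Big[ \left( a_{\rm sh}\,\gamma - \dot{\gamma}_{\rm ad} - b_{\rm rad}\,\gamma^{2} \right) N_e \Big]
- \frac{N_e}{t_{\rm esc}(\gamma)}
+ Q_e(\gamma,t).
\]
```

In steady state the left-hand side vanishes, and the balance between the diffusion term and the gain/loss terms sets the shape of the emitting EED, which is why fits of this kind can separate shock-dominated from stochastic-dominated acceleration.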
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Laura A.; Grefenstette, Brian W.; Harrison, Fiona A.
2015-12-01
We report results from deep observations (∼750 ks) of Tycho's supernova remnant (SNR) with NuSTAR. Using these data, we produce narrow-band images over several energy bands to identify the regions producing the hardest X-rays and to search for radioactive decay line emission from ⁴⁴Ti. We find that the hardest (>10 keV) X-rays are concentrated in the southwest of Tycho, where recent Chandra observations have revealed high emissivity “stripes” associated with particles accelerated to the knee of the cosmic-ray spectrum. We do not find evidence of ⁴⁴Ti, and we set limits on its presence and distribution within the SNR. These limits correspond to an upper-limit ⁴⁴Ti mass of M₄₄ < 2.4 × 10⁻⁴ M_⊙ for a distance of 2.3 kpc. We perform a spatially resolved spectroscopic analysis of 66 regions across Tycho. We map the best-fit rolloff frequency of the hard X-ray spectra, and we compare these results to measurements of the shock expansion and ambient density. We find that the highest energy electrons are accelerated at the lowest densities and in the fastest shocks, with a steep dependence of the rolloff frequency on shock velocity. Such a dependence is predicted by models where the maximum energy of accelerated electrons is limited by the age of the SNR rather than by synchrotron losses, but this scenario requires far lower magnetic field strengths than those derived from observations in Tycho. One way to reconcile these discrepant findings is through shock obliquity effects, and future observational work is necessary to explore the role of obliquity in the particle acceleration process.
NASA Astrophysics Data System (ADS)
Radchenko, Andro
River bridge scour is an erosion process in which flowing water removes sediment materials (such as sand and rocks) from a bridge foundation, river beds, and banks. As a result, the level of the river bed near a bridge pier is lowered such that the bridge foundation stability can be compromised, and the bridge can collapse. Scour is a dynamic process, which can accelerate rapidly during a flood event. Thus, regular monitoring of scour progress is necessary at most river bridges. Present techniques are usually expensive, require large man-hour efforts, and often lack real-time monitoring capabilities. In this dissertation a new method, the 'Smart Rocks Network' for bridge scour monitoring, is introduced. The method is based on distributed wireless sensors embedded in the ground underwater near the bridge pillars. The sensor nodes are unconstrained in movement and are equipped with batteries lasting for years and intelligent custom-designed electronics that minimize power consumption during operation and communication. The electronic part consists of a microcontroller, communication interfaces, orientation and environment sensors (such as accelerometers, magnetometers, and temperature and pressure sensors), and supporting power supplies and circuitry. Embedded in the soil near a bridge pillar, the Smart Rocks can move/drift together with the sediments and act as free-agent probes transmitting unique signature signals to the base-station monitors. Individual movement of a Smart Rock can be remotely detected by processing the orientation sensor readings. This can give an indication of on-going scour progress and set a flag for on-site inspection. The map of the deployed Smart Rocks Network can be obtained utilizing a custom-developed in-network communication protocol with received signal strength (RSSI) analysis. Particle Swarm Optimization (PSO) is applied for map reconstruction. Analysis of the map can provide detailed insight into the scour progress and topology. Smart Rocks Network wireless communication is based on a magnetoinductive (MI) link at a low frequency (125 kHz), allowing the signal to penetrate through water, rocks, and the bridge structure. The dissertation describes the Smart Rocks Network implementation, its electronic design, and the electromagnetic/computational intelligence techniques used for the network mapping.
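The localization step lends itself to a compact illustration. Below is a minimal C sketch of Particle Swarm Optimization estimating a single node's 2-D position from anchor distances; the anchor layout, the distance values (as if already derived from RSSI via a path-loss model), the fitness form, and the PSO coefficients are illustrative assumptions, not the dissertation's implementation.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_ANCHORS 4
#define N_PART 30
#define N_ITER 200

/* Known anchor positions and hypothetical RSSI-derived distances
   (consistent with a node near (3, 4) in a 10 m x 10 m field). */
static const double ax[N_ANCHORS] = {0.0, 10.0, 0.0, 10.0};
static const double ay[N_ANCHORS] = {0.0, 0.0, 10.0, 10.0};
static const double d[N_ANCHORS]  = {5.0, 8.1, 7.2, 9.2};

/* Fitness: squared mismatch between candidate-to-anchor distances
   and the RSSI-derived ones. */
static double fitness(double x, double y) {
    double err = 0.0;
    for (int i = 0; i < N_ANCHORS; i++) {
        double di = hypot(x - ax[i], y - ay[i]);
        err += (di - d[i]) * (di - d[i]);
    }
    return err;
}

static double frand(double lo, double hi) {
    return lo + (hi - lo) * rand() / (double)RAND_MAX;
}

int main(void) {
    double px[N_PART], py[N_PART], vx[N_PART], vy[N_PART];
    double bx[N_PART], by[N_PART], bf[N_PART];  /* per-particle bests */
    double gx = 0, gy = 0, gf = 1e30;           /* global best */

    for (int i = 0; i < N_PART; i++) {
        px[i] = bx[i] = frand(0, 10);
        py[i] = by[i] = frand(0, 10);
        vx[i] = vy[i] = 0.0;
        bf[i] = fitness(px[i], py[i]);
        if (bf[i] < gf) { gf = bf[i]; gx = bx[i]; gy = by[i]; }
    }
    const double w = 0.7, c1 = 1.5, c2 = 1.5;   /* common PSO coefficients */
    for (int it = 0; it < N_ITER; it++) {
        for (int i = 0; i < N_PART; i++) {
            /* Velocity update: inertia + pull toward personal and global bests. */
            vx[i] = w*vx[i] + c1*frand(0,1)*(bx[i]-px[i]) + c2*frand(0,1)*(gx-px[i]);
            vy[i] = w*vy[i] + c1*frand(0,1)*(by[i]-py[i]) + c2*frand(0,1)*(gy-py[i]);
            px[i] += vx[i]; py[i] += vy[i];
            double f = fitness(px[i], py[i]);
            if (f < bf[i]) { bf[i] = f; bx[i] = px[i]; by[i] = py[i]; }
            if (f < gf)    { gf = f;  gx = px[i];  gy = py[i]; }
        }
    }
    printf("estimated position: (%.2f, %.2f), residual %.4f\n", gx, gy, gf);
    return 0;
}
```

In the full network-mapping problem the same idea applies with one position pair per node and a fitness summing mismatches over all measured node-to-node links.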
NASA Astrophysics Data System (ADS)
Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.
2015-12-01
Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, several data sources can be used to determine intensity level, and they fall into two main categories. The first category comprises social data sources, in which intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions with the Modified Mercalli scale. The second category comprises observations from physical sensors installed with the specific purpose of obtaining an instrumentally derived intensity level, usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing, separately and jointly, from both social and physical sensors in order to produce near-real-time intensity maps, and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 South Napa, CA earthquake of August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computing of a combined intensity level and production of combined intensity maps in near-real time. The results compare three types of intensity maps created from physical, social, and combined data sources. Here we correlate the count and density of tweets with intensity level and show the importance of processing combined data sources at the earliest stages after an earthquake occurs. This method can supplement existing approaches to intensity level detection, especially in regions with a high number of Twitter users and a low density of seismic networks.
Boundary layer polarization and voltage in the 14 MLT region
NASA Astrophysics Data System (ADS)
Lundin, R.; Yamauchi, M.; Woch, J.; Marklund, G.
1995-05-01
Viking midlatitude observations of ions and electrons in the postnoon auroral region show that field-aligned acceleration of electrons and ions to energies of up to a few kiloelectron volts takes place. The characteristics of the upgoing ion beams and the local transverse electric field observed by Viking indicate that parallel ion acceleration is primarily due to a quasi-electrostatic field-aligned acceleration process below Viking altitudes, i.e., below 10,000-13,500 km. A good correlation is found between the maximum upgoing ion beam energy and the depth of the local potential well determined by the Viking electric field experiment within dayside 'ion inverted Vs'. The total transverse potential throughout the entire region near the ion inverted Vs is generally much higher than the field-aligned potential and may reach well above 10 kV. However, the detailed mapping of the transverse potential out to the boundary layer, a fundamental issue which remains controversial, was not attempted here. An important finding in this study is the strong correlation between the maximum upgoing ion beam energy of dayside ion inverted Vs and the solar wind velocity. This suggests a direct coupling of the solar wind plasma dynamo/voltage generator to the region of field-aligned particle acceleration. The fact that the centers of dayside ion inverted Vs coincide with convection reversals/flow stagnation and upward Birkeland currents on what appear to be closed field lines (Woch et al., 1993) suggests that field-aligned potential structures connect to the inner part of an MHD dynamo in the low-latitude boundary layer. Thus the Viking observations substantiate the idea of a solar wind induced boundary layer polarization in which negatively charged perturbations in the postnoon sector persistently develop along the magnetic field lines, establishing accelerating potential drops along the geomagnetic field lines in the 0.5-10 kV range.
Mapping the Solar Wind from its Source Region into the Outer Corona
NASA Technical Reports Server (NTRS)
Esser, Ruth
1997-01-01
Knowledge of the radial variation of the plasma conditions in the coronal source region of the solar wind is essential to exploring coronal heating and solar wind acceleration mechanisms. The goal of the proposal was to determine as many plasma parameters as possible in the solar wind acceleration region and beyond by coordinating different observational techniques, such as interplanetary scintillation observations, spectral line intensity observations, polarization brightness measurements, and X-ray observations. The inferred plasma parameters were then used to constrain solar wind models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazkoz, Ruth; Escamilla-Rivera, Celia; Salzano, Vincenzo
Cosmography provides a model-independent way to map the expansion history of the Universe. In this paper we simulate a Euclid-like survey and explore cosmographic constraints from future Baryonic Acoustic Oscillations (BAO) observations. We derive general expressions for the BAO transverse and radial modes and discuss the optimal order of the cosmographic expansion that provides reliable cosmological constraints. Through constraints on the deceleration and jerk parameters, we show that future BAO data have the potential to provide a model-independent check of the cosmic acceleration as well as a discrimination between the standard ΛCDM model and alternative mechanisms of cosmic acceleration.
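For reference, the standard cosmographic expansion about the present epoch, in terms of the Hubble, deceleration, and jerk parameters referenced above:

```latex
a(t) = a_0\left[1 + H_0\,\Delta t - \frac{q_0}{2}H_0^2\,\Delta t^2 + \frac{j_0}{6}H_0^3\,\Delta t^3 + \cdots\right],
\qquad q \equiv -\frac{\ddot a}{aH^2},\quad j \equiv \frac{\dddot a}{aH^3},
```

where Δt = t − t₀. An accelerating universe has q₀ < 0, and flat ΛCDM predicts j₀ = 1, so a measured departure of the jerk from unity is a model-independent pointer to an alternative acceleration mechanism.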
Maffei, Vincenzo; Mazzarella, Elisabetta; Piras, Fabrizio; Spalletta, Gianfranco; Caltagirone, Carlo; Lacquaniti, Francesco; Daprati, Elena
2016-05-01
Rich behavioral evidence indicates that the brain estimates the visual direction and acceleration of gravity quite accurately, and the underlying mechanisms have begun to be unraveled. While the neuroanatomical substrates of gravity direction processing have been studied extensively in brain-damaged patients, to our knowledge no such study exists for the processing of visual gravitational motion. Here we asked 31 stroke patients to intercept a virtual ball moving along the vertical under either natural gravity or artificial reversed gravity. Twenty-seven of them also aligned a luminous bar to the vertical direction (subjective visual vertical, SVV). Using voxel-based lesion-symptom mapping as well as lesion subtraction analysis, we found that lesions mainly centered on the posterior insula are associated with greater deviations of SVV, consistent with several previous studies. Instead, lesions mainly centered on the parietal operculum decrease the ability to discriminate natural from unnatural gravitational acceleration with a timed motor response in the interception task. Both the posterior insula and the parietal operculum belong to the vestibular cortex, and presumably receive multisensory information about the gravity vector. We speculate that an internal model estimating the effects of gravity on visual objects is constructed by transforming the vestibular estimates of mechanical gravity, which are computed in the brainstem and cerebellum, into internalized estimates of virtual gravity, which are stored in the cortical vestibular network. The present lesion data suggest a specific role for the parietal operculum in detecting the mismatch between predictive signals from the internal model and the online visual signals.
NASA Astrophysics Data System (ADS)
Hulslander, D.
2011-12-01
As a global phenomenon, climate change produces global effects. However, many of these effects are more intense in coastal and high-latitude regions. Current longer periods of ice-free conditions, in combination with a rising sea level and thawing permafrost, can result in accelerated Arctic Ocean coastline change and erosion. Areas dominantly composed of ice-cemented peats and silt-rich permafrost have proven to be especially susceptible to rapid erosion. Anderson et al. (2009; Geology News) measured erosion rates of 15 m per year at sites along the Alaskan Arctic Ocean coast. The continental scope of these changes, as well as the remote and inhospitable nature of the study area, make geologic remote sensing techniques particularly well suited for studying coastal erosion along the 45,000 km of Arctic Ocean coastline. While it is valuable to determine current patterns of erosion, it is equally important to map historic rates in order to determine whether coastal erosion is accelerating, whether it is in a new behavioral regime, whether there are areas of emergent erosion patterns, or whether what is currently measured is only a single instance in a complex and constantly shifting pattern of an overall balance of erosion and deposition at high latitudes. Even in relatively stable conditions, coastline processes are dynamic and complex, making it especially important to ensure the best possible accuracy in a study of this kind. Remote sensing solutions in the earth sciences have often run into obstacles concerning a lack of historic data and baselines, as well as issues in the systematization of accurate feature mapping. Using object-based image analysis techniques on Landsat archive data allows for the possibility of a multi-decadal map of Arctic Ocean coastline changes. Landsat data (from sensors MSS 1-3 and TM/ETM 4, 5, and 7) provide imagery as frequently as every 16 days since July 1972, are well calibrated both radiometrically and geometrically, and are freely available from the USGS EROS Data Center archive. Hand-digitization of Arctic Ocean coastline changes over several decades would require an impractical amount of time and expense and would introduce additional error due to analyst differences in image feature interpretation. Object-based image analysis techniques have been shown (Hulslander et al., 2008; GEOBIA 2008 Proceedings) to produce results similar to, but more consistent than, those from groups of human analysts. Earlier work (Hulslander, 2010; AGU Fall Meeting) showed that object-based analysis of Landsat archive data can map Arctic Ocean coastline change within a Landsat scene. Here, results show that this approach can be extended and automated to stably map Arctic Ocean coastline change in Landsat datasets distributed both geographically and temporally. Furthermore, these preliminary results indicate the possibility of producing a pan-Arctic Ocean coastline map on a roughly triennial basis for the past 30-plus years.
Early Experiences Writing Performance Portable OpenMP 4 Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joubert, Wayne; Hernandez, Oscar R
In this paper, we evaluate the recently available directives in OpenMP 4 to parallelize a computational kernel using both the traditional shared-memory approach and the newer accelerator-targeting capabilities. In addition, we explore various transformations that attempt to increase application performance portability, and examine the expressiveness and performance implications of using these approaches. For example, we want to understand whether the target map directives in OpenMP 4 improve data locality when mapped to a shared-memory system, as opposed to the traditional first-touch policy approach in traditional OpenMP. To that end, we use recent Cray and Intel compilers to measure the performance variations of a simple application kernel when executed on the OLCF's Titan supercomputer with NVIDIA GPUs and the Beacon system with Intel Xeon Phi accelerators attached. To better understand these trade-offs, we compare our results from traditional OpenMP shared-memory implementations to the newer accelerator programming model when it is used to target both the CPU and an attached heterogeneous device. We believe the results and lessons learned presented in this paper will be useful to the larger user community by providing guidelines that can assist programmers in the development of performance portable code.
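As a concrete illustration of the two approaches discussed above, a minimal C sketch of the same kernel expressed with traditional shared-memory OpenMP and with the OpenMP 4 target/map accelerator directives; the vector-add kernel is a hypothetical stand-in for the paper's application kernel.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Traditional shared-memory OpenMP: threads on the host CPU;
       data placement follows the first-touch policy. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    /* OpenMP 4 accelerator targeting: the map clauses declare data
       movement between host and device (to: copy in, from: copy out).
       On a shared-memory system the same construct can be mapped back
       onto the CPU, which is the locality question the paper raises. */
    #pragma omp target map(to: a[0:N], b[0:N]) map(from: c[0:N])
    #pragma omp teams distribute parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```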
Plasma inverse transition acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Ming
It can be proved fundamentally from the reciprocity theorem with which electromagnetism is endowed that corresponding to each spontaneous process of radiation by a charged particle there is an inverse process which defines a unique acceleration mechanism: from Cherenkov radiation to inverse Cherenkov acceleration (ICA) [1], from Smith-Purcell radiation to inverse Smith-Purcell acceleration (ISPA) [2], and from undulator radiation to inverse undulator acceleration (IUA) [3]. There is no exception. Yet, for nearly 30 years after each of the aforementioned inverse processes was clarified for laser acceleration, inverse transition acceleration (ITA), despite speculation [4], has remained the least understood, and above all, no practical implementation of ITA had been found, until now. Unlike all its counterparts, in which phase synchronism is established one way or another such that a particle can continuously gain energy from an acceleration wave, the ITA discussed here, termed plasma inverse transition acceleration (PITA), operates under a fundamentally different principle. As a result, the discovery of PITA was delayed for decades, waiting for a conceptual breakthrough in accelerator physics: the principle of alternating gradient acceleration [5, 6, 7, 8, 9, 10]. In fact, PITA was invented [7, 8] as one of several realizations of the new principle.
Andersson, Leif; Archibald, Alan L; Bottema, Cynthia D; Brauning, Rudiger; Burgess, Shane C; Burt, Dave W; Casas, Eduardo; Cheng, Hans H; Clarke, Laura; Couldrey, Christine; Dalrymple, Brian P; Elsik, Christine G; Foissac, Sylvain; Giuffra, Elisabetta; Groenen, Martien A; Hayes, Ben J; Huang, LuSheng S; Khatib, Hassan; Kijas, James W; Kim, Heebal; Lunney, Joan K; McCarthy, Fiona M; McEwan, John C; Moore, Stephen; Nanduri, Bindu; Notredame, Cedric; Palti, Yniv; Plastow, Graham S; Reecy, James M; Rohrer, Gary A; Sarropoulou, Elena; Schmidt, Carl J; Silverstein, Jeffrey; Tellam, Ross L; Tixier-Boichard, Michele; Tosser-Klopp, Gwenola; Tuggle, Christopher K; Vilkki, Johanna; White, Stephen N; Zhao, Shuhong; Zhou, Huaijun
2015-03-25
We describe the organization of a nascent international effort, the Functional Annotation of Animal Genomes (FAANG) project, whose aim is to produce comprehensive maps of functional elements in the genomes of domesticated animal species.
Visualization and analysis of pulsed ion beam energy density profile with infrared imaging
NASA Astrophysics Data System (ADS)
Isakova, Y. I.; Pushkarev, A. I.
2018-03-01
Infrared imaging was used as a surface temperature-mapping tool to characterize the energy density distribution of intense pulsed ion beams on a thin metal target. The technique enables measurement of the total ion beam energy and of the energy density distribution along the cross section, and allows one to optimize the operation of an ion diode and control the target irradiation mode. The diagnostic was tested on the TEMP-4M accelerator at TPU, Tomsk, Russia and on the TEMP-6 accelerator at DUT, Dalian, China. It was applied in studies of the dynamics of target cooling in vacuum after irradiation and in experiments with target ablation. Errors caused by target ablation and target cooling during measurements have been analyzed. For Fluke Ti10 and Fluke Ti400 infrared cameras, the technique can achieve a surface energy density sensitivity of 0.05 J/cm² and a spatial resolution of 1-2 mm. The thermal imaging diagnostic does not require expensive consumable materials. The measurement time does not exceed 0.1 s; therefore, this diagnostic can be used for prompt evaluation of the energy density distribution of a pulsed ion beam and for automation of the irradiation process.
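The conversion from the measured temperature map to energy density follows standard thin-foil calorimetry; as a sketch, assuming uniform heating through the foil thickness and negligible losses during the exposure:

```latex
W = \rho\, c_p\, \delta\, \Delta T,
```

where W is the surface energy density (J/cm²), ρ the target density, c_p its specific heat, δ the foil thickness, and ΔT the temperature rise recorded by the infrared camera at each pixel.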
The delineation and interpretation of the Earth's gravity field
NASA Technical Reports Server (NTRS)
Marsh, B. D.
1983-01-01
The observed changes in velocity with time are reduced relative to the well-determined low degree and order GEM field model and accelerations are found by analytical differentiation of the range rates. This new map is essentially identical to the first map and we have produced a composite map by combining all 90 passes of SST data. The resolution of the map is at worst about 5 deg and much better in most places. A comparison of this map with conventional GEM models shows very good agreement. A reduction of the SEASAT altimeter data has also been carried out for an additional comparison. Although the SEASAT geoid contains much more high frequency information, it agrees very well with both the SST and GEM fields. The maps are dominated (especially in the east) by a pattern of roughly east-west anomalies with a transverse wavelength of about 2000 km. A further comparison with regional bathymetric data shows a remarkably close correlation with plate age.
Stability and perturbations of countable Markov maps
NASA Astrophysics Data System (ADS)
Jordan, Thomas; Munday, Sara; Sahlsten, Tuomas
2018-04-01
Let T and T_ε, ε > 0, be countable Markov maps such that the branches of T_ε converge pointwise to the branches of T as ε → 0. We study the stability of various quantities measuring the singularity (dimension, Hölder exponent, etc.) of the topological conjugacy between T_ε and T when ε → 0. This is a well-understood problem for maps with finitely many branches, where the quantities are stable for small ε, that is, they converge to their expected values as ε → 0. For the infinite-branch case their stability might be expected to fail, but we prove that even in the infinite-branch case the quantity is stable under some natural regularity assumptions on T_ε and T (under which, for instance, the Hölder exponent of the conjugacy fails to be stable). Our assumptions apply, for example, in the case of the Gauss map, various Lüroth maps and accelerated Manneville-Pomeau maps when varying the parameter α. For the proof we introduce a mass transportation method from the cusp that allows us to exploit thermodynamical ideas from the finite-branch case. Dedicated to the memory of Bernd O Stratmann.
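For concreteness, the Gauss map named above is the prototypical countable-branch Markov map:

```latex
T(x) = \frac{1}{x} - \left\lfloor \frac{1}{x} \right\rfloor, \qquad x \in (0,1],
```

with countably many full branches, one on each interval (1/(n+1), 1/n] for n = 1, 2, ..., each mapping onto (0, 1).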
NASA Astrophysics Data System (ADS)
Glæsner, Nadia; Leue, Marin; Magid, Jacob; Gerke, Horst H.
2016-04-01
Understanding the heterogeneous nature of soil, i.e. the properties and processes occurring at local scales, is essential for best managing our soil resources for agricultural production. Examination of intact soil structures, which would provide an increased understanding of how soil systems operate from small to large scales, remains a large gap within soil science research. Dissolved chemicals, nutrients, and particles are transported through the disturbed plow layer of agricultural soil, after which flow through the lower soil layers occurs preferentially via macropores. Rapid movement of water through macropores limits the contact between the preferentially moving water and the surrounding soil matrix, so contact and exchange of solutes in the water is largely restricted to the surface area of the macropores. Surfaces coated with organomineral complexes control the sorption and exchange properties of solutes, as well as the availability of essential nutrients to plant roots and to the preferentially flowing water. DRIFT (Diffuse Reflectance Infrared Fourier Transform) mapping has been developed to examine the composition of organic matter coatings on macropores. In this study, macropore surface structures from a long-term field experiment on waste application to agricultural soil (CRUCIAL, close to Copenhagen, Denmark) will be characterized for organic matter composition using DRIFT. Parcels with 5 treatments (accelerated household waste, accelerated sewage sludge, accelerated cattle manure, NPK, and unfertilized) will be examined in order to study whether agricultural management has an impact on the organic matter composition of intact structures.
Jiao, Bin-Bin; Wang, Jian-Jun; Zhu, Xu-Dong; Zeng, Long-Jun; Li, Qun; He, Zu-Hua
2012-01-01
Leaf senescence, a type of programmed cell death (PCD) characterized by chlorophyll degradation, is important to plant growth and crop productivity. Emerging evidence indicates that autophagy is involved in chloroplast degradation during leaf senescence. However, the molecular mechanism(s) involved in the process are not well understood. In this study, the genetic and physiological characteristics of the rice rls1 (rapid leaf senescence 1) mutant were identified. The rls1 mutant developed small, yellow-brown, disease-like lesions scattered over the whole leaf surface, and its leaves displayed earlier senescence than those of wild-type plants. The rapid loss of chlorophyll during senescence was the main cause of accelerated leaf senescence in rls1. Microscopic observation indicated that PCD was misregulated, probably resulting in the accelerated degradation of chloroplasts in rls1 leaves. Map-based cloning of the RLS1 gene revealed that it encodes a previously uncharacterized NB (nucleotide-binding site)-containing protein with an ARM (armadillo) domain at the carboxyl terminus. Consistent with its involvement in leaf senescence, RLS1 was up-regulated during dark-induced leaf senescence and down-regulated by cytokinin. Intriguingly, constitutive expression of RLS1 also slightly accelerated leaf senescence, with decreased chlorophyll content, in transgenic rice plants. Our study identified a previously uncharacterized NB-ARM protein involved in PCD during plant growth and development, providing a unique tool for dissecting possible autophagy-mediated PCD during senescence in plants.
A probabilistic estimate of maximum acceleration in rock in the contiguous United States
Algermissen, Sylvester Theodore; Perkins, David M.
1976-01-01
This paper presents a probabilistic estimate of the maximum ground acceleration to be expected from earthquakes occurring in the contiguous United States. It is based primarily upon the historic seismic record, which ranges from very incomplete before 1930 to moderately complete after 1960. Geologic data, primarily the distribution of faults, have been employed only to a minor extent, because most such data have not been interpreted yet with earthquake hazard evaluation in mind. The map provides a preliminary estimate of the relative hazard in various parts of the country. The report provides a method for evaluating the relative importance of the many parameters and assumptions in hazard analysis. The map and methods of evaluation described reflect the current state of understanding and are intended to be useful for engineering purposes in reducing the effects of earthquakes on buildings and other structures. Studies are underway on improved methods for evaluating the relative earthquake hazard of different regions. Comments on this paper are invited to help guide future research and revisions of the accompanying map. The earthquake hazard in the United States has been estimated in a variety of ways since the initial effort by Ulrich (see Roberts and Ulrich, 1950). In general, the earlier maps provided an estimate of the severity of ground shaking or damage, but the frequency of occurrence of the shaking or damage was not given. Ulrich's map showed the distribution of expected damage in terms of no damage (zone 0), minor damage (zone 1), moderate damage (zone 2), and major damage (zone 3). The zones were not defined further and the frequency of occurrence of damage was not suggested. Richter (1959) and Algermissen (1969) estimated the ground motion in terms of maximum Modified Mercalli intensity. Richter used the terms "occasional" and "frequent" to characterize intensity IX shaking, and Algermissen included recurrence curves for various parts of the country in the paper accompanying his map. The first probabilistic hazard maps covering portions of the United States were by Milne and Davenport (1969a). Recently, Wiggins, Hirshberg and Bronowicki (1974) prepared a probabilistic map of maximum particle velocity and Modified Mercalli intensity for the entire United States. The maps are based on an analysis of the historical seismicity. In general, geological data were not incorporated into the development of the maps.
Blocking the association of HDAC4 with MAP1S accelerates autophagy clearance of mutant Huntingtin
Yue, Fei; Li, Wenjiao; Zou, Jing; Chen, Qi; Xu, Guibin; Huang, Hai; Xu, Zhen; Zhang, Sheng; Gallinari, Paola; Wang, Fen; McKeehan, Wallace L.; Liu, Leyuan
2015-01-01
Autophagy controls and executes the turnover of abnormally aggregated proteins. MAP1S interacts with the autophagy marker LC3 and positively regulates autophagy flux. HDAC4 associates with the aggregation-prone mutant huntingtin protein (mHTT) that causes Huntington's disease, and colocalizes with it in cytosolic inclusions. HDAC4 was suggested to interact with MAP1S in a yeast two-hybrid screen. Here, we found that MAP1S interacts with HDAC4 via an HDAC4-binding domain (HBD). HDAC4 destabilizes MAP1S, suppresses autophagy flux, and promotes the accumulation of mHTT aggregates. This occurs through increased deacetylation of acetylated MAP1S. Either suppression of HDAC4 with siRNA or overexpression of the MAP1S HBD leads to stabilization of MAP1S, activation of autophagy flux, and clearance of mHTT aggregates. Therefore, specific interruption of the HDAC4-MAP1S interaction with short peptides or small molecules to enhance autophagy flux may relieve the toxicity of mHTT associated with Huntington's disease and improve symptoms of HD patients. PMID:26540094
Using sketch-map coordinates to analyze and bias molecular dynamics simulations
Tribello, Gareth A.; Ceriotti, Michele; Parrinello, Michele
2012-01-01
When examining complex problems, such as the folding of proteins, coarse grained descriptions of the system drive our investigation and help us to rationalize the results. Oftentimes collective variables (CVs), derived through some chemical intuition about the process of interest, serve this purpose. Because finding these CVs is the most difficult part of any investigation, we recently developed a dimensionality reduction algorithm, sketch-map, that can be used to build a low-dimensional map of a phase space of high-dimensionality. In this paper we discuss how these machine-generated CVs can be used to accelerate the exploration of phase space and to reconstruct free-energy landscapes. To do so, we develop a formalism in which high-dimensional configurations are no longer represented by low-dimensional position vectors. Instead, for each configuration we calculate a probability distribution, which has a domain that encompasses the entirety of the low-dimensional space. To construct a biasing potential, we exploit an analogy with metadynamics and use the trajectory to adaptively construct a repulsive, history-dependent bias from the distributions that correspond to the previously visited configurations. This potential forces the system to explore more of phase space by making it desirable to adopt configurations whose distributions do not overlap with the bias. We apply this algorithm to a small model protein and succeed in reproducing the free-energy surface that we obtain from a parallel tempering calculation. PMID:22427357
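By analogy with metadynamics, the history-dependent bias described above accumulates from previously visited points; as a sketch of the standard point-Gaussian form (in the sketch-map variant each visited configuration instead deposits its whole low-dimensional probability distribution):

```latex
V(\mathbf{s},t) = \sum_{t'<t} w\,\exp\!\left(-\frac{\lVert \mathbf{s}-\mathbf{s}(t')\rVert^2}{2\sigma^2}\right),
```

with deposition weight w and kernel width σ. Configurations whose distributions overlap the accumulated bias become energetically unfavorable, which is what drives the exploration of new regions of phase space.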
Acceleration of short and long DNA read mapping without loss of accuracy using suffix array.
Tárraga, Joaquín; Arnau, Vicente; Martínez, Héctor; Moreno, Raul; Cazorla, Diego; Salavert-Torres, José; Blanquer-Espert, Ignacio; Dopazo, Joaquín; Medina, Ignacio
2014-12-01
HPG Aligner applies suffix arrays for DNA read mapping. This implementation produces a highly sensitive and extremely fast mapping of DNA reads that scales up almost linearly with read length. The approach presented here is faster (over 20× for long reads) and more sensitive (over 98% in a wide range of read lengths) than the current state-of-the-art mappers. HPG Aligner is not only an optimal alternative for current sequencers but also the only solution available to cope with longer reads and growing throughputs produced by forthcoming sequencing technologies. https://github.com/opencb/hpg-aligner. © The Author 2014. Published by Oxford University Press.
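To illustrate the core data structure (a sketch, not HPG Aligner's implementation): in a suffix array, all suffixes of the reference that begin with a given read seed occupy one contiguous interval, which binary search locates; the naive construction below is for brevity only, since production aligners build the array in (near-)linear time.

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Global reference so the qsort comparator can see it. */
static const char *ref;

static int suf_cmp(const void *a, const void *b) {
    return strcmp(ref + *(const int *)a, ref + *(const int *)b);
}

/* Smallest index i in sa[0..n) whose suffix is >= seed on its first len chars. */
static int lower_bound(const int *sa, int n, const char *seed, int len) {
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (strncmp(ref + sa[mid], seed, len) < 0) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

int main(void) {
    ref = "ACGTACGTGACG";                 /* toy reference genome */
    int n = (int)strlen(ref);
    int *sa = malloc(n * sizeof *sa);     /* naive O(n^2 log n) construction */
    for (int i = 0; i < n; i++) sa[i] = i;
    qsort(sa, n, sizeof *sa, suf_cmp);

    const char *seed = "ACG";             /* a read seed to locate */
    int len = (int)strlen(seed);

    /* Matching suffixes occupy [lo, hi); a short forward scan closes the
       interval (a second binary search would do the same in O(log n)). */
    int lo = lower_bound(sa, n, seed, len), hi = lo;
    while (hi < n && strncmp(ref + sa[hi], seed, len) == 0) hi++;

    for (int i = lo; i < hi; i++)
        printf("seed hit at reference position %d\n", sa[i]);
    free(sa);
    return 0;
}
```

The per-seed cost grows only logarithmically with genome size, which is one reason suffix-array lookup scales well as reads get longer.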
CARHTA GENE: multipopulation integrated genetic and radiation hybrid mapping.
de Givry, Simon; Bouchez, Martin; Chabrier, Patrick; Milan, Denis; Schiex, Thomas
2005-04-15
CarthaGene is an integrated genetic and radiation hybrid (RH) mapping tool which can deal with multiple populations, including mixtures of genetic and RH data. CarthaGene performs multipoint maximum likelihood estimations with accelerated expectation-maximization algorithms for some pedigrees and has sophisticated algorithms for marker ordering. Dedicated heuristics for framework mapping are also included. CarthaGene can be used as a C++ library, through a shell command, and through a graphical interface. XML output for companion tools is integrated. The program is available free of charge from www.inra.fr/bia/T/CarthaGene for Linux, Windows and Solaris machines (with open source). tschiex@toulouse.inra.fr.
A fast optimization approach for treatment planning of volumetric modulated arc therapy.
Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong
2018-05-30
Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. Initial, equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf-sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than that of the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total number of segments and monitor units is reduced for VMAT plans. For the three clinical cases, VMAT optimization takes less than 5 min using the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
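A minimal C sketch of the progressive sampling loop described above; the beam counts, angles, and doubling schedule are illustrative assumptions, and the fluence-map optimization and leaf-sequencing steps are stubbed out.

```c
#include <stdio.h>

/* Stub standing in for one round of fluence-map optimization and
   leaf sequencing over the current beam set. */
static void optimize_round(int round, int n_beams) {
    printf("round %d: optimizing %d beams\n", round, n_beams);
}

int main(void) {
    const int total_beams = 32;     /* assumed final beam count over a full arc */
    int n = 4, round = 0;           /* start coarse: 4 equally spaced beams */

    while (n <= total_beams) {
        /* Beams at equal angular spacing for the current resolution. */
        for (int i = 0; i < n; i++) {
            double angle = 360.0 * i / n;
            printf("  beam %2d at %6.1f deg\n", i, angle);
        }
        optimize_round(round++, n);
        /* Here the optimizer's coefficients would be adjusted from the
           fluence maps already solved, before the resolution doubles. */
        n *= 2;
    }
    return 0;
}
```

The point of the schedule is that each round starts from a warm state informed by the coarser solution, which is what removes the need for a stochastic search.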
Neubauer, W
2001-01-01
To understand the development of prehistoric cultural and economic activities, archaeologists try to obtain as much relevant information as possible. For this purpose, large numbers of similar sites must be identified, usually by non-destructive prospection methods such as aerial photography and geophysical prospection. Aerial archaeology is most effective in locating sites, and the use of digital photogrammetry provides maps with high accuracy. For geophysical prospection, mainly geomagnetic and geoelectrical methods or the ground-penetrating radar method are used. Near-surface measurements of the respective contrasts in the physical properties of the archaeological structures and the surrounding material allow detailed mapping of the inner structures of the sites investigated. Using specially developed wheeled instrumentation, high-resolution magnetic surveys can be carried out in a standard raster of 0.125 x 0.5 m, covering up to 5 ha per day. Measurements of ground resistivity or radar surveys, in rasters of 0.5 m or 0.5 x 0.05 m respectively, are used to gain information on archaeological structures and on the main stratigraphic sequence of sites, covering up to 0.5 ha per day. Data on intensities of the Earth's magnetic field, apparent resistivities of the ground, or amplitude information of radar reflections are processed using digital image processing techniques to visualize the otherwise invisible archaeological structures or monuments buried in the ground. Archaeological interpretation, in the sense of detecting, mapping and describing the archaeological structures, is done using GIS technology by combining all relevant prospection data. As most of the Middle European archaeological heritage is under a massive threat of destruction, dramatically accelerated by intensive agriculture and industrial transformation of the landscape, the prospection techniques presented here represent an approach towards an efficient documentation of the disappearing remains of our ancestors.
Mapping QTL for Omega-3 Content in Hybrid Saline Tilapia.
Lin, Grace; Wang, Le; Ngoh, Si Te; Ji, Lianghui; Orbán, Laszlo; Yue, Gen Hua
2018-02-01
Tilapia is one of the most important foodfish species. The low omega-3 to omega-6 fatty acid ratio in freshwater tilapia meat is disadvantageous for human health. Increasing omega-3 content is an important breeding task to increase the nutritional value of tilapia. However, conventional breeding to increase omega-3 content is difficult and slow. To accelerate the increase of omega-3 through marker-assisted selection (MAS), we conducted QTL mapping for fatty acid contents and profiles in an F2 family of saline tilapia generated by crossing red tilapia and Mozambique tilapia. The total omega-3 content in F2 hybrid tilapia was 2.5 ± 1.0 mg/g, higher than that (2.00 mg/g) in freshwater tilapia. Genotyping by sequencing (GBS) technology was used to discover and genotype SNP markers, and microsatellites were also genotyped. We constructed a linkage map with 784 markers (151 microsatellites and 633 SNPs). The linkage map was 2076.7 cM long and consisted of 22 linkage groups. Significant and suggestive QTL for total lipid content were mapped on six linkage groups (LG3, -4, -6, -8, -13, and -15) and explained 5.8-8.3% of the phenotypic variance. QTL for omega-3 fatty acids were located on four LGs (LG11, -18, -19, and -20) and explained 5.0-7.5% of the phenotypic variance. Our data suggest that total lipid and omega-3 fatty acid content are determined by multiple genes in tilapia. The markers flanking the QTL for omega-3 fatty acids can be used in MAS to accelerate the genetic improvement of these traits in salt-tolerant tilapia.
Evaluation of asymmetric quadrupoles for a non-scaling fixed field alternating gradient accelerator
NASA Astrophysics Data System (ADS)
Lee, Sang-Hun; Park, Sae-Hoon; Kim, Yu-Seok
2017-12-01
A non-scaling fixed field alternating gradient (NS-FFAG) accelerator was constructed employing conventional quadrupoles. A possible demerit is beam instability caused by the varying focusing strength when the orbit radius of the beam changes. To overcome this instability, an asymmetric quadrupole with different current flows in each coil was suggested. The magnetic field of the asymmetric quadrupole was found to be more similar to the field required for an FFAG accelerator than that of the constructed NS-FFAG accelerator. In this study, a beam dynamics simulation was carried out with the SIMION program to evaluate the improvement in beam stability for the NS-FFAG accelerator. The simulation used the 'hard edge' model, ignoring the fringe field at the ends of the magnets. The magnetic field map of the suggested magnet was created using the SIMION program, and lattices combining the suggested magnets were evaluated for beam stability in the same program.
SGA-WZ: A New Strapdown Airborne Gravimeter
Huang, Yangming; Olesen, Arne Vestergaard; Wu, Meiping; Zhang, Kaidong
2012-01-01
Inertial navigation systems and gravimeters are now routinely used to map regional gravitational quantities from an aircraft with mGal accuracy and a spatial resolution of a few kilometers. However, airborne gravimeters of this kind are limited by inaccuracies in inertial sensor performance, the integrated navigation technique, and the kinematic acceleration determination. As GPS techniques have developed, vehicle acceleration determination is no longer the limiting factor in airborne gravimetry, owing to the cancellation of common-mode accelerations in differential mode. A new airborne gravimeter taking full advantage of the inertial navigation system is described, with improved mechanical design, high-precision time synchronization, better thermal control, and optimized sensor modeling. Apart from its general usage, the Global Positioning System (GPS), after differentiation, is integrated with the inertial navigation system, providing not only more precise altitude information along with the navigation aiding, but also an effective way to calculate the vehicle acceleration. Design description and test results on the performance of the gyroscopes and accelerometers are emphasized. Analysis and discussion of the airborne field test results are also given. PMID:23012545
Analysis of Vehicle-Following Heterogeneity Using Self-Organizing Feature Maps
Cheu, Ruey Long; Guo, Xiucheng; Romo, Alicia
2014-01-01
A self-organizing feature map (SOM) was used to represent vehicle-following and to analyze heterogeneities in vehicle-following behavior. The SOM was constructed in such a way that the prototype vectors represented vehicle-following stimuli (the follower's velocity, relative velocity, and gap) while the output signals represented the response (the follower's acceleration). Vehicle trajectories collected at a northbound segment of the Interstate 80 Freeway at Emeryville, CA, were used to train the SOM. The trajectory information of two selected pairs of passenger cars was then fed into the trained SOM to identify similar stimuli experienced by the followers. The observed responses, when the stimuli were classified by the SOM into the same category, were compared to discover interdriver heterogeneity. The acceleration profile of another passenger car was analyzed in the same fashion to observe intradriver heterogeneity. The distribution of responses derived from data sets of car-following-car and car-following-truck, respectively, was compared to ascertain inter-vehicle-type heterogeneity. PMID:25538767
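For illustration, a minimal C sketch of the SOM training rule implied above: each stimulus vector pulls its best-matching prototype, and that prototype's lattice neighborhood, toward itself, with learning rate and neighborhood radius decaying over time. The map size, the toy random stimuli, and the decay schedule are assumptions for the sketch, not the study's configuration.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define GRID 8          /* 8x8 map of prototype vectors */
#define DIM  3          /* stimulus: velocity, relative velocity, gap */

static double w[GRID][GRID][DIM];

/* Squared Euclidean distance between a prototype and an input. */
static double dist2(const double *p, const double *x) {
    double s = 0;
    for (int k = 0; k < DIM; k++) s += (p[k] - x[k]) * (p[k] - x[k]);
    return s;
}

static void train(const double *x, double eta, double radius) {
    int bi = 0, bj = 0;
    double best = 1e300;
    /* Find the best-matching unit (BMU). */
    for (int i = 0; i < GRID; i++)
        for (int j = 0; j < GRID; j++) {
            double dd = dist2(w[i][j], x);
            if (dd < best) { best = dd; bi = i; bj = j; }
        }
    /* Move the BMU and its lattice neighborhood toward the input. */
    for (int i = 0; i < GRID; i++)
        for (int j = 0; j < GRID; j++) {
            double g = exp(-((i-bi)*(i-bi) + (j-bj)*(j-bj))
                           / (2.0 * radius * radius));
            for (int k = 0; k < DIM; k++)
                w[i][j][k] += eta * g * (x[k] - w[i][j][k]);
        }
}

int main(void) {
    srand(1);
    for (int i = 0; i < GRID; i++)          /* random initialization */
        for (int j = 0; j < GRID; j++)
            for (int k = 0; k < DIM; k++)
                w[i][j][k] = rand() / (double)RAND_MAX;

    for (int t = 0; t < 10000; t++) {       /* toy stimuli in [0,1)^3 */
        double x[DIM] = { rand()/(double)RAND_MAX,
                          rand()/(double)RAND_MAX,
                          rand()/(double)RAND_MAX };
        double frac = t / 10000.0;          /* decay learning rate and radius */
        train(x, 0.5 * (1.0 - frac), 3.0 * (1.0 - frac) + 0.5);
    }
    printf("prototype (0,0): %.3f %.3f %.3f\n",
           w[0][0][0], w[0][0][1], w[0][0][2]);
    return 0;
}
```

After training, feeding a stimulus to the map returns the BMU index, and comparing the observed accelerations that fall in the same BMU cell is what exposes the heterogeneities described above.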
Acceleration of runaway electrons and Joule heating in solar flares
NASA Technical Reports Server (NTRS)
Holman, G. D.
1985-01-01
The electric field acceleration of electrons out of a thermal plasma and the simultaneous Joule heating of the plasma are studied. Acceleration and heating timescales are derived and compared, and upper limits are obtained on the acceleration volume and the rate at which electrons can be accelerated. These upper limits, determined by the maximum magnetic field strength observed in flaring regions, place stringent restrictions upon the acceleration process. The role of the plasma resistivity in these processes is examined, and possible sources of anomalous resistivity are summarized. The implications of these results for the microwave and hard X-ray emission from solar flares are examined.
Satellite mapping of Nile Delta coastal changes
NASA Technical Reports Server (NTRS)
Blodget, H. W.; Taylor, P. T.; Roark, J. H.
1989-01-01
Multitemporal, multispectral scanner (MSS) Landsat data have been used to monitor erosion and sedimentation along the Rosetta Promontory of the Nile Delta. These processes have accelerated significantly since the completion of the Aswan High Dam in 1964. Digital differencing of four MSS data sets, using standard algorithms, shows that changes observed over a single-year period generally occur as strings of single mixed pixels along the coast. Therefore, these can only be used qualitatively to indicate areas where changes occur. Areas of change recorded over a multi-year period are generally larger and thus identified by clusters of pixels; this reduces errors introduced by mixed pixels. Satellites provide a synoptic perspective utilizing data acquired at frequent time intervals. This permits multiple-year monitoring of delta evolution on a regional scale.
Seismic Hazard Management in Mexico City
NASA Astrophysics Data System (ADS)
Wintergerst, L.
2007-05-01
Mexico City is one of the largest cities in the world. More than 8.5 million residents and a floating population of 4.5 million are in the city itself, but with the surrounding suburbs the number of people that could be affected by natural and man-made hazards rises to approximately 20 million. The main risk to the city as a whole is a large-magnitude earthquake. Since there is reason to prepare for a credible seismic scenario of Mw = 8.2, which would exceed the damage caused by the 1985 earthquake (Mw = 8.1), we founded the Metropolitan Geologic Service (MGS) in 1998. The MGS has developed geologic and seismic hazard maps for the city (http://www.proteccioncivil.df.gob.mx). The maps include three separate risk maps for low-height (< 3 stories), medium-height (< 10 stories), and tall (> 10 stories) buildings. The maps were prepared using the maximum horizontal accelerations documented during the 1985 earthquake, and wave propagation modeling for buildings of different resonant periods (T = 0.0, 1.0 and 2.0 sec). In all cases, the risk zones were adjusted to include documented damage during the 1957, 1979 and 1985 earthquakes. All three maps show a high-risk zone in the north-central portion of the city, elongated in a N-S direction, which corresponds with a narrow graben where the thickness of alluvial sediments is particularly large and where wave amplification is accentuated. Preparation of these maps, and of others used for planning, has been facilitated by the ongoing elaboration of a Dynamic Geographical Information System, which is based on geo-scientific information, includes all types of risks, and incorporates vulnerability models. From the risk-management standpoint, we have elaborated the Permanent Contingency Plan for Mexico City, which in its Earthquakes chapter includes plans for coordination and for organizing attention to the population in the event of a seismic disaster. This Permanent Plan follows the philosophy of Descartes' Method, has 11 processes (6 main and 5 support processes), and is coordinated by a Center of Control of Operations under the overall direction of the Head of Government of the Federal District. We are also working on the definition of the Basic Elements for a New Paradigm in the Prevention of Disasters, to investigate the origins, causes and effects of disaster phenomena, and to plan and implement a suitable response from the Government to better protect the population.
Lecture Notes on Topics in Accelerator Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Alex W.
These are lecture notes that cover a selection of topics, some of them under current research, in accelerator physics. I try to derive the results from first principles, although the students are assumed to have an introductory knowledge of the basics. The topics covered are: (1) Panofsky-Wenzel and Planar Wake Theorems; (2) Echo Effect; (3) Crystalline Beam; (4) Fast Ion Instability; (5) Lawson-Woodward Theorem and Laser Acceleration in Free Space; (6) Spin Dynamics and Siberian Snakes; (7) Symplectic Approximation of Maps; (8) Truncated Power Series Algebra; and (9) Lie Algebra Technique for Nonlinear Dynamics. The purpose of these lectures is not to elaborate, but to prepare the students so that they can do their own research. Each topic can be read independently of the others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deadrick, F.J.; Griffith, L.V.
1990-08-17
Flux line alignment of the solenoidal focus magnets used on the ETA-II linear induction accelerator is a key element leading to a reduction of beam corkscrew motion. Two techniques have been used on the ETA-II accelerator to measure and establish magnet alignment. A low-energy electron beam has been used to directly map magnetic field lines, and recent work has utilized a pulsed stretched-wire technique to measure magnet tilts and offsets with respect to a reference axis. This paper reports on the techniques used in the ETA-II accelerator alignment, and presents results from those measurements which show that the accelerator is magnetically aligned to within approximately ±200 microns. 3 refs., 8 figs.
Re-evaluation and updating of the seismic hazard of Lebanon
NASA Astrophysics Data System (ADS)
Huijer, Carla; Harajli, Mohamed; Sadek, Salah
2016-01-01
This paper presents the results of a study undertaken to evaluate the implications of the newly mapped offshore Mount Lebanon Thrust (MLT) fault system for the seismic hazard of Lebanon and for the current seismic zoning and design parameters used by the local engineering community. This re-evaluation is critical, given that the MLT is located in close proximity to the major cities and economic centers of the country. The updated seismic hazard was assessed using probabilistic methods of analysis. The potential sources of seismic activity that affect Lebanon, together with their newly established characteristics, were integrated into an updated database which includes the newly mapped fault system. The earthquake recurrence relationships of these sources were developed from instrumental seismology data, historical records, and earlier studies undertaken to evaluate the seismic hazard of neighboring countries. Maps of peak ground acceleration contours, based on 10% probability of exceedance in 50 years (as per Uniform Building Code (UBC) 1997), as well as 0.2 s and 1 s peak spectral acceleration contours, based on 2% probability of exceedance in 50 years (as per International Building Code (IBC) 2012), were also developed. Finally, spectral charts for the main coastal cities of Beirut, Tripoli, Jounieh, Byblos, Saida, and Tyre are provided for use by designers.
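For context, these two exceedance criteria map onto return periods through the standard Poisson occurrence model:

```latex
P = 1 - e^{-\lambda t}, \qquad T_R = \frac{1}{\lambda} = -\frac{t}{\ln(1-P)},
```

so 10% probability of exceedance in 50 years corresponds to a return period of about 475 years (the UBC 1997 basis), and 2% in 50 years to about 2475 years (the IBC 2012 basis).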
Chidori, Kazuhiro; Yamamoto, Yuji
2017-01-01
The aim of this study was to evaluate the effects of the lateral amplitude and regularity of upper body fluctuation on step time variability. Return map analysis was used to clarify the relationship between step time variability and a history of falling. Eleven healthy, community-dwelling older adults and twelve younger adults participated in the study. All of the subjects walked 25 m at a comfortable speed. Trunk acceleration was measured using triaxial accelerometers attached to the third lumbar vertebrae (L3) and the seventh cervical vertebrae (C7). The normalized average magnitude of acceleration, the coefficient of determination ($R^2$) of the return map, and the step time variabilities, were calculated. Cluster analysis using the average fluctuation and the regularity of C7 fluctuation identified four walking patterns in the mediolateral (ML) direction. The participants with higher fluctuation and lower regularity showed significantly greater step time variability compared with the others. Additionally, elderly participants who had fallen in the past year had higher amplitude and a lower regularity of fluctuation during walking. In conclusion, by focusing on the time evolution of each step, it is possible to understand the cause of stride and/or step time variability that is associated with a risk of falls.
Budavari, Tamas; Langmead, Ben; Wheelan, Sarah J.; Salzberg, Steven L.; Szalay, Alexander S.
2015-01-01
When computing alignments of DNA sequences to a large genome, a key element in achieving high processing throughput is to prioritize locations in the genome where high-scoring mappings might be expected. We formulated this task as a series of list-processing operations that can be efficiently performed on graphics processing unit (GPU) hardware. We followed this approach in implementing a read aligner called Arioc that uses GPU-based parallel sort and reduction techniques to identify high-priority locations where potential alignments may be found. We then carried out a read-by-read comparison of Arioc’s reported alignments with the alignments found by several leading read aligners. With simulated reads, Arioc has comparable or better accuracy than the other read aligners we tested. With human sequencing reads, Arioc demonstrates significantly greater throughput than the other aligners we evaluated across a wide range of sensitivity settings. The Arioc software is available at https://github.com/RWilton/Arioc. It is released under a BSD open-source license. PMID:25780763
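The sort-and-reduce prioritization described above can be illustrated on the CPU with NumPy; this is a toy sketch of the idea (hypothetical seed hits, not Arioc's actual GPU kernels):

    import numpy as np

    # Candidate genome positions returned by seed (k-mer) lookups for one read
    hits = np.array([10452, 88310, 10452, 10460, 52077, 10452, 88310])

    # A parallel sort groups identical loci together (the GPU-friendly step);
    # a segmented reduction then counts the hits at each locus.
    loci, counts = np.unique(np.sort(hits), return_counts=True)

    # Loci with the most seed hits are scored first by the full aligner.
    priority = loci[np.argsort(counts)[::-1]]
    print(priority)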
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spentzouris, Panagiotis; Cary, John
The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.
Relationship between Alfvén Wave and Quasi-Static Acceleration in Earth's Auroral Zone
NASA Astrophysics Data System (ADS)
Mottez, Fabrice
2016-02-01
There are two main categories of acceleration processes in the Earth's auroral zone: those based on quasi-static structures, and those based on Alfvén waves (AWs). AWs play a nonnegligible role in the global energy budget of the plasma surrounding the Earth because they participate in auroral acceleration, and because auroral acceleration conveys a large portion of the energy flux across the magnetosphere. Acceleration events by double layers (DLs) and by AWs have mostly been investigated separately, but many studies cited in this chapter show that they are not independent: these processes can occur simultaneously, and one process can be the cause of the other. The quasi-simultaneous occurrences of acceleration by AWs and by quasi-static structures have been observed predominantly at the polar cap boundary of auroral arc systems, where new bright arcs often develop or intensify.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Supinski, B.; Caliga, D.
2017-09-28
The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA")-based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.
BAIT: Organizing genomes and mapping rearrangements in single cells.
Hills, Mark; O'Neill, Kieran; Falconer, Ester; Brinkman, Ryan; Lansdorp, Peter M
2013-01-01
Strand-seq is a single-cell sequencing technique to finely map sister chromatid exchanges (SCEs) and other rearrangements. To analyze these data, we introduce BAIT, software which assigns templates and identifies and localizes SCEs. We demonstrate that BAIT can refine completed reference assemblies, identifying approximately 21 Mb of incorrectly oriented fragments and placing over half (2.6 Mb) of the orphan fragments in mm10/GRCm38. BAIT also stratifies scaffold-stage assemblies, potentially accelerating the assembly and finishing of reference genomes. BAIT is available at http://sourceforge.net/projects/bait/.
McMurray, Bob; Horst, Jessica S; Samuelson, Larissa K
2012-10-01
Classic approaches to word learning emphasize referential ambiguity: in naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative in which referent selection is an online process and independent of long-term learning. We illustrate this theoretical approach with a dynamic associative model in which referent selection emerges from real-time competition between referents and learning is associative (Hebbian). This model accounts for a range of findings including the differences in expressive and receptive vocabulary, cross-situational learning under high degrees of ambiguity, accelerating (vocabulary explosion) and decelerating (power law) learning, fast mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between speed of processing and learning. Together it suggests that (a) association learning buttressed by dynamic competition can account for much of the literature; (b) familiar word recognition is subserved by the same processes that identify the referents of novel words (fast mapping); (c) online competition may allow children to leverage information available in the task to augment performance despite slow learning; (d) in complex systems, associative learning is highly multifaceted; and (e) learning and referent selection, though logically distinct, can be subtly related. It suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development. PsycINFO Database Record (c) 2012 APA, all rights reserved.
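A minimal sketch of the two-process idea, separating online referent competition from slow Hebbian association (a toy illustration under stated assumptions, not the authors' model):

    import numpy as np

    rng = np.random.default_rng(1)
    n_words = n_objects = 5
    W = 0.01 * rng.random((n_words, n_objects))   # associative strengths

    def select_referent(word, visible, W, temp=0.2):
        """Online referent selection: noisy competition among the objects
        visible right now, independent of long-term learning."""
        act = W[word, visible] + temp * rng.random(len(visible))
        return visible[np.argmax(act)]

    # Cross-situational exposure: each trial pairs a word with 3 objects,
    # always including the true referent (object index == word index here).
    for trial in range(300):
        word = int(rng.integers(n_words))
        visible = rng.choice(n_objects, size=3, replace=False)
        if word not in visible:
            visible[0] = word
        chosen = select_referent(word, visible, W)
        W[word, chosen] += 0.05      # slow Hebbian strengthening of the winner

    print(np.argmax(W, axis=1))      # typically recovers the word-object map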
Risk intelligence: making profit from uncertainty in data processing system.
Zheng, Si; Liao, Xiangke; Liu, Xiaodong
2014-01-01
In extreme-scale data processing systems, fault tolerance is an essential and indispensable part. Proactive fault tolerance schemes (such as speculative execution in the MapReduce framework) are introduced to dramatically improve the response time of job executions when failure becomes the norm rather than the exception. Efficient proactive fault tolerance schemes require precise knowledge of task executions, which has been an open challenge for decades. To address this issue, in this paper we design and implement RiskI, a profile-based prediction algorithm in conjunction with a risk-aware task assignment algorithm, to accelerate task executions while taking the uncertain nature of tasks into account. Our design demonstrates that this inherent uncertainty brings not only great challenges but also new opportunities: with a careful design, we can benefit from such uncertainties. We implemented the idea in Hadoop 0.21.0 and the experimental results show that, compared with the traditional LATE algorithm, response time can be improved by 46% with the same system throughput.
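A toy sketch of profile-based, risk-aware speculation (illustrative only; RiskI's actual prediction and assignment algorithms are more involved):

    import numpy as np

    # Historical task profiles: (input_size_MB, runtime_s) of completed tasks
    history = np.array([(100, 52.0), (120, 60.0), (95, 980.0), (110, 57.0)])

    def predict_runtime(input_size, k=2):
        """Profile-based prediction: statistics of the k most similar tasks."""
        nearest = history[np.argsort(np.abs(history[:, 0] - input_size))[:k], 1]
        return nearest.mean(), nearest.std()

    def should_speculate(elapsed, input_size, copy_cost=60.0):
        """Launch a backup copy only if the expected saving beats its cost;
        the uncertainty term makes the decision risk-aware."""
        est, sigma = predict_runtime(input_size)
        remaining = max(est - elapsed, 0.0) + sigma
        return remaining > copy_cost

    print(should_speculate(elapsed=40.0, input_size=105))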
Cortical plasticity associated with Braille learning.
Hamilton, R H; Pascual-Leone, A
1998-05-01
Blind subjects who learn to read Braille must acquire the ability to extract spatial information from subtle tactile stimuli. In order to accomplish this, neuroplastic changes appear to take place. During Braille learning, the sensorimotor cortical area devoted to the representation of the reading finger enlarges. This enlargement follows a two-step process that can be demonstrated with transcranial magnetic stimulation mapping and suggests initial unmasking of existing connections and eventual establishment of more stable structural changes. In addition, Braille learning appears to be associated with the recruitment of parts of the occipital, formerly 'visual', cortex (V1 and V2) for tactile information processing. In blind, proficient Braille readers, the occipital cortex can be shown not only to be associated with tactile Braille reading but also to be critical for reading accuracy. Recent studies suggest the possibility of applying non-invasive neurophysiological techniques to guide and improve functional outcomes of these plastic changes. Such interventions might provide a means of accelerating functional adjustment to blindness.
Empowerment model of biomass in west java
NASA Astrophysics Data System (ADS)
Mulyana, C.; Fitriani, N. I.; Saad, A.; Yuliah, Y.
2017-06-01
Scarcity of fossil energy accelerates the search for renewable energy sources as substitutes. In West Java, biomass has the potential to be developed into bio-briquettes because the resources are abundant. The objectives of this research are to map the potential of biomass for bio-briquettes in West Java and to build a model for empowering that potential, involving five fundamental steps: raw material, pre-processing, conversion mechanism, products, and end user. The model focuses on three product forms (solid, liquid, and gas) and involves the community as biomass owners, district governments, academic and research communities, related industries as biomass users, the central government as policy holder, and investors as funders. The model describes their respective roles and their mutual relationships so that bio-briquettes can be realized as a substitute for fossil fuels. Applying this model will provide benefits in energy renewability, the environment, socioeconomic development, and energy security.
Accelerating Commercial Remote Sensing
NASA Technical Reports Server (NTRS)
1995-01-01
Through the Visiting Investigator Program (VIP) at Stennis Space Center, Community Coffee was able to use satellites to forecast coffee crops in Guatemala. Using satellite imagery, the company can produce detailed maps that separate coffee cropland from wild vegetation and show information on the health of specific crops. The data can inform coffee pricing and eventually may be used to optimize application of fertilizers, pesticides and irrigation. This would result in maximal crop yields, minimal pollution and lower production costs. VIP is a NASA-funded mechanism designed to accelerate the growth of commercial remote sensing by promoting general awareness and basic training in the technology.
Methods of geometrical integration in accelerator physics
NASA Astrophysics Data System (ADS)
Andrianov, S. N.
2016-12-01
In this paper we consider a method of geometric integration for the long-term evolution of particle beams in cyclic accelerators, based on a matrix representation of the particle evolution operator. This method allows us to compute the beam evolution in terms of two-dimensional matrices, including nonlinear effects. The ideology of geometric integration introduces into the computational algorithms the corrections necessary to preserve the qualitative properties of maps represented as truncated series generated by the evolution operator. The formalism extends to both polarized and intense beams. Examples of practical applications are described.
Design of the central region in the Gustaf Werner cyclotron at the Uppsala university
NASA Astrophysics Data System (ADS)
Toprek, Dragan; Reistad, Dag; Lundstrom, Bengt; Wessman, Dan
2002-07-01
This paper describes the design of the central region in the Gustaf Werner cyclotron for h=1, 2 and 3 modes of acceleration. The electric field distribution in the inflector and in the four acceleration gaps has been numerically calculated from an electric potential map produced by the program RELAX3D. The geometry of the central region has been tested with the computations of orbits carried out by means of the computer code CYCLONE. The optical properties of the spiral inflector and the central region were studied by using the programs CASINO and CYCLONE, respectively.
The Origin of Cosmic Rays: What can GLAST Say?
NASA Technical Reports Server (NTRS)
Ormes, Jonathan F.; Digel, Seith; Moskalenko, Igor V.; Moiseev, Alexander; Williamson, Roger
2000-01-01
Gamma rays in the band from 30 MeV to 300 GeV, used in combination with direct measurements and with data from radio and X-ray bands, provide a powerful tool for studying the origin of Galactic cosmic rays. The Gamma-ray Large Area Space Telescope (GLAST), with its fine 10-20 arcmin angular resolution, will be able to map the sites of acceleration of cosmic rays and their interactions with interstellar matter. It will provide information that is necessary to study the acceleration of energetic particles in supernova shocks, their transport in the interstellar medium and penetration into molecular clouds.
Cloud Computing and Validated Learning for Accelerating Innovation in IoT
ERIC Educational Resources Information Center
Suciu, George; Todoran, Gyorgy; Vulpe, Alexandru; Suciu, Victor; Bulca, Cristina; Cheveresan, Romulus
2015-01-01
Innovation in Internet of Things (IoT) requires more than just creation of technology and use of cloud computing or big data platforms. It requires accelerated commercialization or aptly called go-to-market processes. To successfully accelerate, companies need a new type of product development, the so-called validated learning process.…
Ziegler, G; Ridgway, G R; Dahnke, R; Gaser, C
2014-08-15
Structural imaging based on MRI is an integral component of the clinical assessment of patients with potential dementia. We here propose an individualized Gaussian process-based inference scheme for clinical decision support in healthy and pathologically aging elderly subjects using MRI. The approach aims at quantitative and transparent support for clinicians who aim to detect structural abnormalities in patients at risk of Alzheimer's disease or other types of dementia. Firstly, we introduce a generative model incorporating our knowledge about normative decline of local and global gray matter volume across the brain in the elderly. By supposing smooth structural trajectories the models account for the general course of age-related structural decline as well as late-life accelerated loss. Considering healthy subjects' demography and global brain parameters as informative about normal brain aging variability affords individualized predictions in single cases. Using Gaussian process models as a normative reference, we predict new subjects' brain scans and quantify the local gray matter abnormalities in terms of Normative Probability Maps (NPM) and global z-scores. By integrating the observed expectation error and the predictive uncertainty, the local maps and global scores exploit the advantages of Bayesian inference for clinical decisions and provide a valuable extension of diagnostic information about pathological aging. We validate the approach in simulated data and real MRI data. We train the GP framework using 1238 healthy subjects with ages 18-94 years, and predict in 415 independent test subjects diagnosed as healthy controls, Mild Cognitive Impairment and Alzheimer's disease. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
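A minimal sketch of the normative-modelling step on synthetic data, regressing a global gray-matter measure on age with a scikit-learn GP and converting a new observation into a z-score (the study's framework additionally conditions on demography and global brain parameters and works voxel-wise):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(2)
    age = rng.uniform(18, 94, 300)
    # Synthetic volumes: smooth decline plus late-life accelerated loss
    gm = 0.8 - 0.002 * (age - 18) - 1e-5 * np.maximum(age - 65, 0) ** 2
    gm += 0.02 * rng.standard_normal(age.size)

    gp = GaussianProcessRegressor(RBF(20.0) + WhiteKernel(1e-4))
    gp.fit(age[:, None], gm)

    # z-score for a new 75-year-old subject with observed volume 0.60
    mu, sd = gp.predict([[75.0]], return_std=True)
    print((0.60 - mu[0]) / sd[0])    # strongly negative: abnormal loss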
TPL-2 restricts Ccl24-dependent immunity to Heligmosomoides polygyrus
Kannan, Yashaswini; Entwistle, Lewis J.; Pelly, Victoria S.; Perez-Lloret, Jimena; Ley, Steven C.
2017-01-01
TPL-2 (COT, MAP3K8) kinase activates the MEK1/2-ERK1/2 MAPK signaling pathway in innate immune responses following TLR, TNFR1 and IL-1R stimulation. TPL-2 contributes to type-1/Th17-mediated autoimmunity and control of intracellular pathogens. We recently demonstrated TPL-2 reduces severe airway allergy to house dust mite by negatively regulating type-2 responses. In the present study, we found that TPL-2 deficiency resulted in resistance to Heligmosomoides polygyrus infection, with accelerated worm expulsion, reduced fecal egg burden and reduced worm fitness. Using co-housing experiments, we found resistance to infection in TPL-2 deficient mice (Map3k8–/–) was independent of microbiota alterations in H. polygyrus infected WT and Map3k8–/– mice. Additionally, our data demonstrated immunity to H. polygyrus infection in TPL-2 deficient mice was not due to dysregulated type-2 immune responses. Genome-wide analysis of intestinal tissue from infected TPL-2-deficient mice identified elevated expression of genes involved in chemotaxis and homing of leukocytes and cells, including Ccl24 and alternatively activated genes. Indeed, Map3k8–/– mice had a significant influx of eosinophils, neutrophils, monocytes and Il4GFP+ T cells. Conditional knockout experiments demonstrated that specific deletion of TPL-2 in CD11c+ cells, but not Villin+ epithelial cells, LysM+ myeloid cells or CD4+ T cells, led to accelerated resistance to H. polygyrus. In line with a central role of CD11c+ cells, CD11c+ CD11b+ cells isolated from TPL-2-deficient mice had elevated Ccl24. Finally, Ccl24 neutralization in TPL-2 deficient mice significantly decreased the expression of Arg1, Retnla, Chil3 and Ear11, correlating with a loss of resistance to H. polygyrus. These observations suggest that TPL-2-regulated Ccl24 in CD11c+CD11b+ cells prevents accelerated type-2 mediated immunity to H. polygyrus. Collectively, this study identifies a previously unappreciated role for TPL-2 controlling immune responses to H. polygyrus infection by restricting Ccl24 production. PMID:28759611
Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng
2017-08-01
To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with a total generalized variation penalty in the quantitative pharmacokinetic analysis of clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4: (1) a golden-ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant Ktrans in the extended Tofts model and the blood flow FB and blood volume VB from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding values calculated from the fully sampled data. To quantify each parameter's accuracy, the error in volume mean, the total relative error, and the cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to those generated from the original fully sampled data. Within the region of interest, most error-in-volume-mean values were about 5% or lower, and the average error in volume mean over all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error of all parameter maps in the region of interest was about 0.115, and the average cross-correlation of all parameter maps in the region of interest was about 0.962. None of the investigated pharmacokinetic parameters differed significantly between the original data and the reduced-sampling data. With sparsely sampled k-space data simulating acquisition accelerated by a factor of 4, the investigated total generalized variation-based iterative reconstruction method can accurately estimate the dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters for reliable clinical application.
On Entropy Production in the Madelung Fluid and the Role of Bohm's Potential in Classical Diffusion
NASA Astrophysics Data System (ADS)
Heifetz, Eyal; Tsekov, Roumen; Cohen, Eliahu; Nussinov, Zohar
2016-07-01
The Madelung equations map the non-relativistic time-dependent Schrödinger equation into hydrodynamic equations of a virtual fluid. While the von Neumann entropy remains constant, we demonstrate that an increase of the Shannon entropy, associated with this Madelung fluid, is proportional to the expectation value of its velocity divergence. Hence, the Shannon entropy may grow (or decrease) due to an expansion (or compression) of the Madelung fluid. These effects result from the interference between solutions of the Schrödinger equation. Growth of the Shannon entropy due to expansion is common in diffusive processes. However, in the latter the process is irreversible, while the processes in the Madelung fluid are always reversible. The relations between interference, compressibility and variation of the Shannon entropy are then examined in several simple examples. Furthermore, we demonstrate that for classical diffusive processes, the "force" accelerating diffusion has the form of the positive gradient of the quantum Bohm potential. Expressing the diffusion coefficient in terms of the Planck constant then reveals the lower bound given by the Heisenberg uncertainty principle for the product of the gas mean free path and the Brownian momentum.
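The entropy-divergence relation stated above follows in one line from the continuity equation of the Madelung fluid (with ρ = |ψ|² the density and u the fluid velocity; integration by parts, boundary terms dropped):

    S = -∫ ρ ln ρ d³x,    ∂ρ/∂t + ∇·(ρu) = 0
    ⇒  dS/dt = ∫ ρ (∇·u) d³x = ⟨∇·u⟩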
The Louisiana Accelerated Schools Project First Year Evaluation Report.
ERIC Educational Resources Information Center
St. John, Edward P.; And Others
The Louisiana Accelerated Schools Project (LASP) is a statewide network of schools that are changing from the traditional mode of schooling for at-risk students, which stresses remediation, to one of acceleration, which stresses accelerated learning for all students. The accelerated schools process provides a systematic approach to the…
Understanding of Object Detection Based on CNN Family and YOLO
NASA Astrophysics Data System (ADS)
Du, Juan
2018-04-01
As a key application of image processing, object detection has boomed along with the unprecedented advancement of the Convolutional Neural Network (CNN) and its variants since 2012. By the time the CNN series developed into Faster Region-based CNN (Faster R-CNN), the mean Average Precision (mAP) had reached 76.4, but the frame rate of Faster R-CNN remained at 5 to 18 Frames Per Second (FPS), far below real-time requirements. The most urgent need in object detection is therefore speed. After a general introduction to the background and to CNNs as the core solution, this paper presents one of the best CNN representatives, You Only Look Once (YOLO), which breaks with the CNN family's tradition and solves object detection in a completely new, simple and highly efficient way. Its fastest configuration achieves an unparalleled 155 FPS, and its mAP reaches up to 78.6, both greatly surpassing the performance of Faster R-CNN. Additionally, compared with the latest and most advanced solutions, YOLOv2 achieves an excellent tradeoff between speed and accuracy, as well as an object detector with strong generalization ability to represent the whole image.
Patterns and comparisons of human-induced changes in river flood impacts in cities
NASA Astrophysics Data System (ADS)
Clark, Stephanie; Sharma, Ashish; Sisson, Scott A.
2018-03-01
In this study, information extracted from the first global urban fluvial flood risk data set (Aqueduct) is investigated and visualized to explore current and projected city-level flood impacts driven by urbanization and climate change. We use a novel adaptation of the self-organizing map (SOM) method, an artificial neural network proficient at clustering, pattern extraction, and visualization of large, multi-dimensional data sets. Prevalent patterns of current relationships and anticipated changes over time in the nonlinearly-related environmental and social variables are presented, relating urban river flood impacts to socioeconomic development and changing hydrologic conditions. Comparisons are provided between 98 individual cities. Output visualizations compare baseline and changing trends of city-specific exposures of population and property to river flooding, revealing relationships between the cities based on their relative map placements. Cities experiencing high (or low) baseline flood impacts on population and/or property that are expected to improve (or worsen), as a result of anticipated climate change and development, are identified and compared. This paper condenses and conveys large amounts of information through visual communication to accelerate the understanding of relationships between local urban conditions and global processes.
An incremental anomaly detection model for virtual machines.
Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu
2017-01-01
The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organization and automatic anomaly prediction. However, because the algorithm is initialized randomly, training a detection model takes a long time. Moreover, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic, resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate detection by taking into account the large scale and highly dynamic features of virtual machines on cloud platforms. To demonstrate the effectiveness, experiments have been performed on the common KDD Cup benchmark dataset and a real dataset. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on cloud platforms.
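A minimal sketch of the Weighted Euclidean Distance in the best-matching-unit search (map size, feature weights, and the sample are illustrative assumptions):

    import numpy as np

    def best_matching_unit(sample, som, feature_w):
        """BMU search with a Weighted Euclidean Distance: features with a
        larger weight contribute more to the match, as in the WED of IISOM."""
        d = np.sqrt((((som - sample) ** 2) * feature_w).sum(axis=-1))
        return np.unravel_index(np.argmin(d), d.shape)

    rng = np.random.default_rng(3)
    som = rng.random((10, 10, 4))                # 10x10 map, 4 VM metrics
    feature_w = np.array([0.4, 0.3, 0.2, 0.1])   # e.g. CPU weighted over I/O
    x = np.array([0.9, 0.8, 0.1, 0.2])           # one virtual-machine sample
    print(best_matching_unit(x, som, feature_w))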
PRISM software—Processing and review interface for strong-motion data
Jones, Jeanne M.; Kalkan, Erol; Stephens, Christopher D.; Ng, Peter
2017-11-28
Rapidly available and accurate ground-motion acceleration time series (seismic recordings) and derived data products are essential to quickly providing scientific and engineering analysis and advice after an earthquake. To meet this need, the U.S. Geological Survey National Strong Motion Project has developed a software package called PRISM (Processing and Review Interface for Strong-Motion data). PRISM automatically processes strong-motion acceleration records, producing compatible acceleration, velocity, and displacement time series; acceleration, velocity, and displacement response spectra; Fourier amplitude spectra; and standard earthquake-intensity measures. PRISM is intended to be used by strong-motion seismic networks, as well as by earthquake engineers and seismologists.
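The basic processing chain, detrending and high-pass filtering the acceleration and then integrating twice, can be sketched with SciPy (an illustration of the standard steps, not PRISM's actual code, which also handles baseline correction and quality control):

    import numpy as np
    from scipy import integrate, signal

    def process_record(acc, dt, corner_hz=0.05):
        """Detrend and high-pass the acceleration record, then integrate
        to compatible velocity and displacement time series."""
        acc = signal.detrend(acc)
        b, a = signal.butter(4, corner_hz * 2 * dt, btype="highpass")
        acc = signal.filtfilt(b, a, acc)
        vel = integrate.cumulative_trapezoid(acc, dx=dt, initial=0.0)
        disp = integrate.cumulative_trapezoid(vel, dx=dt, initial=0.0)
        return acc, vel, disp

    dt = 0.01                                    # 100 samples per second
    t = np.arange(0.0, 20.0, dt)
    toy = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)   # toy record
    acc, vel, disp = process_record(toy, dt)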
Global Mapping Project - Applications and Development of Version 2 Dataset
NASA Astrophysics Data System (ADS)
Ubukawa, T.; Nakamura, T.; Otsuka, T.; Iimura, T.; Kishimoto, N.; Nakaminami, K.; Motojima, Y.; Suga, M.; Yatabe, Y.; Koarai, M.; Okatani, T.
2012-07-01
The Global Mapping Project aims to develop basic geospatial information of the whole land area of the globe, named Global Map, through the cooperation of National Mapping Organizations (NMOs) around the world. The Global Map data can be a base of global geospatial infrastructure and is composed of eight layers: Boundaries, Drainage, Transportation, Population Centers, Elevation, Land Use, Land Cover and Vegetation. The Global Map Version 1 was released in 2008, and the Version 2 will be released in 2013 as the data are to be updated every five years. In 2009, the International Steering Committee for Global Mapping (ISCGM) adopted new Specifications to develop the Global Map Version 2 with a change of its format so that it is compatible with the international standards, namely ISO 19136 and ISO 19115. With the support of the secretariat of ISCGM, the project participating countries are accelerating their data development toward the completion of the global coverage in 2013, while some countries have already released their Global Map version 2 datasets since 2010. Global Map data are available from the Internet free of charge for non-commercial purposes, which can be used to predict, assess, prepare for and cope with global issues by combining with other spatial data. There are a lot of Global Map applications in various fields, and further utilization of Global Map is expected. This paper summarises the activities toward the development of the Global Map Version 2 as well as some examples of the Global Map applications in various fields.
NASA Astrophysics Data System (ADS)
Gunn, J. P.; Petržílka, V.; Fuchs, V.; Ekedahl, A.; Goniche, M.; Hillaret, J.; Kočan, M.; Saint-Laurent, F.
2009-11-01
According to theory, Landau damping transfers the power carried by the high n// > 50 components of the lower hybrid (LH) wave to thermal SOL electrons and stochastically accelerates them up to a few keV [1]. What amounts to a few percent of the injected LH power is thus transported along field lines and strikes plasma facing components, leading to the formation of well known "LH hot spots." We report on the first measurements of both the energy from 0 to 1 keV and the radial-poloidal distributions of the accelerated electrons using a retarding field analyzer. Two distinct electron populations are present: a cold, thermal population with temperatures between 10 and 30 eV, and a suprathermal component. Only partial attenuation of the electron flux was achieved at maximum applied voltage, indicating energies greater than 1 keV. Detailed 2D mapping of the hot spots was obtained by varying the safety factor stepwise during a single discharge. The radial width of the suprathermal electron beam at full power is rather large, at least about 5-6 cm, in contrast to Landau damping theory of the launched wave that predicts the radial width of the hot spots should not exceed a few millimetres [2]. The electron flux far from the grill is intermittent, with a typical burst rate of the order of 10 kHz.
Kole, J S; Beekman, F J
2006-02-21
Statistical reconstruction methods offer possibilities to improve image quality as compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward and back-projection operations, in order to reduce reconstruction times. In the past, wide application of this texture hardware mapping approach was hampered owing to limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of graphics hardware acceleration for statistical reconstruction on the reconstructed image accuracy and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that, with reconstructed image accuracy almost preserved, speed-ups by a factor of 40 to 222 can be achieved, compared with the unaccelerated algorithm, depending on the phantom and detector sizes. Reconstruction from physical phantom data reconfirms the usability of the accelerated algorithm for practical cases.
NASA Astrophysics Data System (ADS)
Ayu Rahmalia, Diah; Nilamprasasti, Hesti
2017-04-01
We have analyzed earthquake data for West Sumatra province to determine peak ground acceleration values. The peak ground acceleration is a parameter that describes the strength of ground shaking that has occurred. This paper compares peak ground acceleration values computed with the b-values from before and after the 2009 Padang earthquake. The research was carried out in stages: taking the earthquake data for West Sumatra province within the boundary coordinates 0.923° N to 2.811° S and 97.075° E to 102.261° E, before and after the 2009 Padang earthquake, with magnitude ≥ 3 and depth ≤ 300 km; calculating the b-value; and creating peak ground acceleration maps based on the McGuire empirical formula using Excel and Surfer software. Based on earthquake data from 2002 until just before the 2009 Padang earthquake, the b-value is 0.874, while the b-value after the 2009 Padang earthquake through 2016 is 0.891. Given these b-values, the peak ground acceleration before and after the 2009 Padang earthquake can be expected to differ. Based on the seismic data before 2009, the peak ground acceleration of West Sumatra province ranges from 7.002 to 308.875 gal. This is compared with the peak ground acceleration after the 2009 Padang earthquake, which ranges from 7.946 to 372.736 gal.
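The two computations involved can be sketched compactly: Aki's maximum-likelihood estimator for the b-value, and a McGuire-type attenuation relation for PGA (the attenuation coefficients below are illustrative assumptions, not the values used in the paper):

    import numpy as np

    def b_value(mags, mc):
        """Aki (1965) maximum-likelihood b-value for magnitudes >= Mc."""
        m = mags[mags >= mc]
        return np.log10(np.e) / (m.mean() - mc)

    def pga_mcguire(M, R_km, b1=472.0, b2=0.278, b3=1.301):
        """McGuire-type attenuation: PGA = b1 * 10^(b2*M) * (R+25)^-b3, in gal.
        Coefficients assumed for illustration."""
        return b1 * 10 ** (b2 * M) * (R_km + 25.0) ** (-b3)

    mags = np.array([3.1, 3.4, 3.2, 4.0, 3.6, 5.1, 3.3, 4.4])
    print(b_value(mags, mc=3.0))
    print(pga_mcguire(M=7.6, R_km=60.0))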
Gulick, Sean P S; Jaeger, John M; Mix, Alan C; Asahi, Hirofumi; Bahlburg, Heinrich; Belanger, Christina L; Berbel, Glaucia B B; Childress, Laurel; Cowan, Ellen; Drab, Laureen; Forwick, Matthias; Fukumura, Akemi; Ge, Shulan; Gupta, Shyam; Kioka, Arata; Konno, Susumu; LeVay, Leah J; März, Christian; Matsuzaki, Kenji M; McClymont, Erin L; Moy, Chris; Müller, Juliane; Nakamura, Atsunori; Ojima, Takanori; Ribeiro, Fabiana R; Ridgway, Kenneth D; Romero, Oscar E; Slagle, Angela L; Stoner, Joseph S; St-Onge, Guillaume; Suto, Itsuki; Walczak, Maureen D; Worthington, Lindsay L; Bailey, Ian; Enkelmann, Eva; Reece, Robert; Swartz, John M
2015-12-08
Erosion, sediment production, and routing on a tectonically active continental margin reflect both tectonic and climatic processes; partitioning the relative importance of these processes remains controversial. The Gulf of Alaska contains a preserved sedimentary record of the Yakutat Terrane collision with North America. Because tectonic convergence in the coastal St. Elias orogen has been roughly constant for 6 My, variations in its eroded sediments preserved in the offshore Surveyor Fan constrain a budget of tectonic material influx, erosion, and sediment output. Seismically imaged sediment volumes calibrated with chronologies derived from Integrated Ocean Drilling Program boreholes show that erosion accelerated in response to Northern Hemisphere glacial intensification (∼ 2.7 Ma) and that the 900-km-long Surveyor Channel inception appears to correlate with this event. However, tectonic influx exceeded integrated sediment efflux over the interval 2.8-1.2 Ma. Volumetric erosion accelerated following the onset of quasi-periodic (∼ 100-ky) glacial cycles in the mid-Pleistocene climate transition (1.2-0.7 Ma). Since then, erosion and transport of material out of the orogen has outpaced tectonic influx by 50-80%. Such a rapid net mass loss explains apparent increases in exhumation rates inferred onshore from exposure dates and mapped out-of-sequence fault patterns. The 1.2-My mass budget imbalance must relax back toward equilibrium in balance with tectonic influx over the timescale of orogenic wedge response (millions of years). The St. Elias Range provides a key example of how active orogenic systems respond to transient mass fluxes, and of the possible influence of climate-driven erosive processes that diverge from equilibrium on the million-year scale.
Masini, Laura; Donis, Laura; Loi, Gianfranco; Mones, Eleonora; Molina, Elisa; Bolchini, Cesare; Krengli, Marco
2014-01-01
The aim of this study was to analyze the application of the failure modes and effects analysis (FMEA) to intracranial stereotactic radiation surgery (SRS) by linear accelerator in order to identify the potential failure modes in the process tree and adopt appropriate safety measures to prevent adverse events (AEs) and near-misses, thus improving the process quality. A working group was set up to perform FMEA for intracranial SRS in the framework of a quality assurance program. FMEA was performed in 4 consecutive tasks: (1) creation of a visual map of the process; (2) identification of possible failure modes; (3) assignment of a risk priority number (RPN) to each failure mode based on tabulated scores of severity, frequency of occurrence and detectability; and (4) identification of preventive measures to minimize the risk of occurrence. The whole SRS procedure was subdivided into 73 single steps; 116 total possible failure modes were identified and a score of severity, occurrence, and detectability was assigned to each. Based on these scores, the RPN was calculated for each failure mode, yielding values from 1 to 180. In our analysis, 112/116 (96.6%) RPN values were <60, 2 (1.7%) were between 60 and 125 (63, 70), and 2 (1.7%) were >125 (135, 180). The 2 highest RPN scores were assigned to the risk of using the wrong collimator size and incorrect coordinates on the laser target localizer frame. Failure modes and effects analysis is a simple and practical proactive tool for the systematic analysis of risks in radiation therapy. In our experience of SRS, FMEA led to the adoption of major changes in various steps of the SRS procedure.
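The RPN itself is just the product of the three scores; a one-line illustration (the component scores below are hypothetical, chosen so that their product matches the study's top value of 180):

    def rpn(severity, occurrence, detectability):
        """FMEA risk priority number: the product of the three scores."""
        return severity * occurrence * detectability

    print(rpn(9, 4, 5))   # -> 180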
Istanbul Earthquake Early Warning and Rapid Response System
NASA Astrophysics Data System (ADS)
Erdik, M. O.; Fahjan, Y.; Ozel, O.; Alcik, H.; Aydin, M.; Gul, M.
2003-12-01
As part of the preparations for a future earthquake in Istanbul, a Rapid Response and Early Warning system is in operation in the metropolitan area. For the Early Warning system, ten strong-motion stations were installed as close as possible to the fault zone. Continuous online data from these stations via digital radio modem provide early warning for potentially disastrous earthquakes. Considering the complexity of fault rupture and the short fault distances involved, a simple and robust Early Warning algorithm, based on the exceedance of specified threshold time-domain amplitude levels, is implemented. The band-pass filtered accelerations and the cumulative absolute velocity (CAV) are compared with specified threshold levels. When any acceleration or CAV (on any channel) at a given station exceeds its threshold value, it is considered a vote. Whenever two station votes occur within a selectable time interval after the first vote, the first alarm is declared. To specify the appropriate threshold levels, a data set of near-field strong ground motion records from Turkey and around the world has been analyzed, and correlations between these thresholds, the epicentral distance and the magnitude of the earthquake have been studied. The encrypted early warning signals will be communicated to the respective end users by UHF systems through a "service provider" company. The users of the early warning signal will be power and gas companies, nuclear research facilities, critical chemical factories, the subway system and several high-rise buildings. Depending on the location of the earthquake (initiation of fault rupture) and the recipient facility, the alarm time can be as long as about 8 s. For the rapid response system, one hundred 18-bit-resolution strong-motion accelerometers were placed at quasi-free-field locations (basements of small buildings) in the populated areas of the city, within an area of approximately 50 x 30 km, to constitute a network that will enable early damage assessment and rapid response information after a damaging earthquake. Early response information is achieved through fast acquisition and analysis of processed data obtained from the network. The stations are routinely interrogated on a regular basis by the main data center. After being triggered by an earthquake, each station processes the streaming strong-motion data to yield the spectral accelerations at specific periods and the 12 Hz filtered PGA and PGV, and sends these parameters in the form of SMS messages every 20 s directly to the main data center through a designated GSM network and through a microwave system. A shake map and a damage distribution map (using aggregate building inventories and fragility curves) will be automatically generated using the algorithm developed for this purpose. Loss assessment studies are complemented by a large citywide digital database on the topography, geology, soil conditions, building, infrastructure and lifeline inventory. The shake and damage maps will be conveyed to the governor's and mayor's offices and to fire, police and army headquarters within 3 minutes using radio modem and GPRS communication. An additional forty strong-motion recorders were placed on important structures in several interconnected clusters to monitor the health of these structures after a damaging earthquake.
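A minimal sketch of the threshold-and-vote logic described above (threshold values and the voting window are illustrative assumptions):

    import numpy as np

    def cav(acc, dt):
        """Cumulative absolute velocity: the integral of |a(t)| dt."""
        return np.sum(np.abs(acc)) * dt

    def station_vote(acc, dt, pga_thresh, cav_thresh):
        """A station votes when its band-passed acceleration or CAV
        exceeds the specified threshold level."""
        return np.max(np.abs(acc)) > pga_thresh or cav(acc, dt) > cav_thresh

    def first_alarm(vote_times, window_s=5.0):
        """Declare the alarm at the second vote within the time window."""
        vote_times = sorted(vote_times)
        for t0, t1 in zip(vote_times, vote_times[1:]):
            if t1 - t0 <= window_s:
                return t1
        return None

    print(first_alarm([12.3, 12.9, 40.0]))   # -> 12.9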
NASA Astrophysics Data System (ADS)
Pashaei, Ali; Piella, Gemma; Planes, Xavier; Duchateau, Nicolas; de Caralt, Teresa M.; Sitges, Marta; Frangi, Alejandro F.
2013-03-01
It has been demonstrated that the acceleration signal has potential to monitor heart function and adaptively optimize Cardiac Resynchronization Therapy (CRT) systems. In this paper, we propose a non-invasive method for computing myocardial acceleration from 3D echocardiographic sequences. Displacement of the myocardium was estimated using a two-step approach: (1) 3D automatic segmentation of the myocardium at end-diastole using 3D Active Shape Models (ASM); (2) propagation of this segmentation along the sequence using non-rigid 3D+t image registration (temporal diffeomorphic free-form deformation, TDFFD). Acceleration was obtained locally at each point of the myocardium from the local displacement. The framework has been tested on images from a realistic physical heart phantom (DHP-01, Shelley Medical Imaging Technologies, London, ON, CA) in which the displacement of some control regions was known. Good correlation has been demonstrated between the displacement estimated by the algorithms and the phantom setup. Due to the limited temporal resolution, the acceleration signals are sparse and highly noisy. The study suggests a non-invasive technique for measuring cardiac acceleration that may be used to improve the monitoring of cardiac mechanics and the optimization of CRT.
The use of process mapping in healthcare quality improvement projects.
Antonacci, Grazia; Reed, Julie E; Lennox, Laura; Barlow, James
2018-05-01
Introduction Process mapping provides insight into systems and processes in which improvement interventions are introduced and is seen as useful in healthcare quality improvement projects. There is little empirical evidence on the use of process mapping in healthcare practice. This study advances understanding of the benefits and success factors of process mapping within quality improvement projects. Methods Eight quality improvement projects were purposively selected from different healthcare settings within the UK's National Health Service. Data were gathered from multiple data sources, including interviews exploring participants' experience of using process mapping in their projects and perceptions of benefits and challenges related to its use. These were analysed using inductive analysis. Results Eight key benefits related to process mapping use were reported by participants (gathering a shared understanding of the reality; identifying improvement opportunities; engaging stakeholders in the project; defining the project's objectives; monitoring project progress; learning; increased empathy; simplicity of the method) and five factors related to successful process mapping exercises (simple and appropriate visual representation, information gathered from multiple stakeholders, facilitator's experience and soft skills, basic training, iterative use of process mapping throughout the project). Conclusions Findings highlight the benefits and versatility of process mapping and provide practical suggestions to improve its use in practice.
Electro-optic spatial decoding on the spherical-wavefront Coulomb fields of plasma electron sources.
Huang, K; Esirkepov, T; Koga, J K; Kotaki, H; Mori, M; Hayashi, Y; Nakanii, N; Bulanov, S V; Kando, M
2018-02-13
Detection of the pulse duration and arrival timing of relativistic electron beams is an important issue in accelerator physics. Electro-optic diagnostics based on the Coulomb fields of electron beams have the advantages of being single-shot and non-destructive. We present a study introducing the electro-optic spatial decoding technique to laser wakefield acceleration. By placing an electro-optic crystal very close to a gas target, we discovered that the Coulomb field of the electron beam possessed a spherical wavefront, inconsistent with the previously widely used model. The field structure was demonstrated by experimental measurement, analytic calculation and simulation. A general temporal mapping relationship was derived for the geometry in which the signals have spherical wavefronts. This study could be helpful for the application of electro-optic diagnostics in laser plasma acceleration experiments.
Evaluation of Horizontal Seismic Hazard of Shahrekord, Iran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, G. Ghodrati; Dehkordi, M. Raeisi; Amrei, S. A. Razavian
2008-07-08
This paper presents a probabilistic horizontal seismic hazard assessment of Shahrekord, Iran. It displays the probabilistic estimate of Peak Ground Horizontal Acceleration (PGHA) for return periods of 75, 225, 475 and 2475 years. The output of the probabilistic seismic hazard analysis is based on peak ground acceleration (PGA), which is the most common criterion in the design of buildings. A catalogue of seismic events that includes both historical and instrumental events was developed and covers the period from 840 to 2007. The seismic sources that affect the hazard in Shahrekord were identified within a radius of 150 km and the recurrence relationships of these sources were generated. Finally, four maps have been prepared to indicate the earthquake hazard of Shahrekord in the form of iso-acceleration contour lines for different hazard levels using SEISRISK III software.
Ye, Huihui; Cauley, Stephen F; Gagoski, Borjan; Bilgic, Berkin; Ma, Dan; Jiang, Yun; Du, Yiping P; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin
2017-05-01
To develop a reconstruction method to improve SMS-MRF, in which slice acceleration is used in conjunction with highly undersampled in-plane acceleration to speed up MRF acquisition. In this work, two methods are employed to efficiently perform the simultaneous multislice magnetic resonance fingerprinting (SMS-MRF) data acquisition and the direct-spiral slice-GRAPPA (ds-SG) reconstruction. First, the lengthy training data acquisition is shortened by employing the through-time/through-k-space approach, in which similar k-space locations within and across spiral interleaves are grouped and associated with a single kernel set. Second, inversion recovery preparation (IR-prepped), variable flip angle (FA), and variable repetition time (TR) are used for the acquisition of the training data, to increase signal variation and to improve the conditioning of the kernel fitting. The grouping of k-space locations enables a large reduction in the number of kernels required, and the IR-prepped training data with variable FA and TR provide improved ds-SG kernels and reconstruction performance. With direct-spiral slice-GRAPPA, tissue parameter maps comparable to those of conventional MRF were obtained at multiband (MB) = 3 acceleration using a t-blipped SMS-MRF acquisition with a 32-channel head coil at 3 Tesla (T). The proposed reconstruction scheme allows MB = 3 accelerated SMS-MRF imaging with high-quality T1, T2, and off-resonance maps, and can be used to significantly shorten MRF acquisition and aid in its adoption in neuro-scientific and clinical settings. Magn Reson Med 77:1966-1974, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Zhou, Tong; Chen, Dong; Liu, Weining
2018-03-01
Based on the full velocity difference and acceleration car-following model, an extended car-following model is proposed by considering the derivative of the vehicle's acceleration. The stability condition is given by applying control theory. Considering some typical traffic environments, the results of theoretical analysis and numerical simulation show that the extended model reproduces the acceleration of a string of vehicles more realistically than previous models during starting, stopping, and sudden braking. Meanwhile, traffic jams occur more easily as the coefficient of the vehicle's acceleration derivative increases, as shown by the space-time evolution of the flow. The results confirm that the vehicle's acceleration derivative plays an important role in the traffic jamming transition and the evolution of traffic congestion.
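For readers unfamiliar with this model family, a full-velocity-difference-style car-following law with an added acceleration-derivative (jerk) feedback term can be integrated numerically as below. This is a generic sketch under assumed parameter values, not the authors' exact equations; `c_jerk` stands in for the acceleration-derivative coefficient discussed above.

```python
import numpy as np

def V_opt(dx, v_max=2.0, h_c=4.0):
    """Optimal velocity function (a common tanh form)."""
    return 0.5 * v_max * (np.tanh(dx - h_c) + np.tanh(h_c))

def simulate(n_cars=10, steps=2000, dt=0.1, kappa=0.4, lam=0.5, c_jerk=0.2):
    """FVD-style car following with an acceleration-derivative feedback term."""
    x = np.arange(n_cars)[::-1] * 5.0   # car 0 leads; 5 m initial headways
    v = np.zeros(n_cars)
    a_prev = np.zeros(n_cars)
    for _ in range(steps):
        dx = np.full(n_cars, 100.0)     # leader sees open road
        dv = np.zeros(n_cars)
        dx[1:] = x[:-1] - x[1:]         # headway to the car ahead
        dv[1:] = v[:-1] - v[1:]         # relative velocity
        a = kappa * (V_opt(dx) - v) + lam * dv
        a = a + c_jerk * (a - a_prev) / dt   # jerk feedback (the extension)
        v = np.maximum(v + a * dt, 0.0)
        x = x + v * dt
        a_prev = a
    return x, v

x, v = simulate()
print(v.round(2))   # the platoon should approach the free-flow speed
```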
2012-01-01
Background Cotton is the world's most important natural textile fiber and a significant oilseed crop. Decoding cotton genomes will provide the ultimate reference and resource for research and utilization of the species. Integration of high-density genetic maps with genomic sequence information will greatly accelerate the process of whole-genome assembly in cotton. Results In this paper, we update a high-density interspecific genetic linkage map of allotetraploid cultivated cotton. An additional 1,167 marker loci have been added to our previously published map of 2,247 loci. Three new marker types, InDel (insertion-deletion) and SNP (single nucleotide polymorphism) developed from gene information, and REMAP (retrotransposon-microsatellite amplified polymorphism), were used to increase map density. The updated map consists of 3,414 loci in 26 linkage groups covering 3,667.62 cM with an average inter-locus distance of 1.08 cM. Furthermore, genome-wide sequence analysis was finished using 3,324 informative sequence-based markers and publicly available Gossypium DNA sequence information. A total of 413,113 EST and 195 BAC sequences were physically anchored and clustered by 3,324 sequence-based markers. Of these, 14,243 ESTs and 188 BACs from different species of Gossypium were clustered and specifically anchored to the high-density genetic map. A total of 2,748 candidate unigenes from 2,111 EST clusters and 63 BACs were mined for functional annotation and classification. The 337 ESTs/genes related to fiber quality traits were integrated with 132 previously reported cotton fiber quality quantitative trait loci, demonstrating the important roles of these genes in fiber quality. Higher-level sequence conservation between different cotton species and between the A- and D-subgenomes in tetraploid cotton was found, indicating a common evolutionary origin for orthologous and paralogous loci in Gossypium. Conclusion This study will serve as a valuable genomic resource for tetraploid cotton genome assembly, for cloning genes related to superior agronomic traits, and for further comparative genomic analyses in Gossypium. PMID:23046547
Choosing order of operations to accelerate strip structure analysis in parameter range
NASA Astrophysics Data System (ADS)
Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.
2018-05-01
The paper considers the use of iterative methods for solving the sequence of linear algebraic systems obtained in the quasistatic analysis of strip structures with the method of moments. Through the analysis of four strip structures, the authors show that an additional speedup (up to 2.21 times) of the iterative process can be obtained when solving the linear systems repeatedly, by choosing a proper order of operations and a suitable preconditioner. The obtained results can be used to accelerate the computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is simple and universal, and could be applied not only to strip structure analysis but also to a wide range of computational problems.
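The kind of reuse described — fixing an operation order and a preconditioner once and applying them across the whole sequence of systems produced by a parameter sweep — can be sketched with SciPy's sparse iterative solvers. This is illustrative only: the moment-method matrices and the paper's specific preconditioner are not reproduced, and `solve_sequence` is a hypothetical helper.

```python
import scipy.sparse.linalg as spla

def solve_sequence(matrices, rhs_list):
    """Solve A_k x = b_k over a parameter sweep, reusing one ILU preconditioner."""
    ilu = spla.spilu(matrices[0].tocsc(), drop_tol=1e-4)  # factor once
    M = spla.LinearOperator(matrices[0].shape, ilu.solve)
    xs = []
    for A, b in zip(matrices, rhs_list):
        # SciPy >= 1.12 names the relative tolerance 'rtol'
        x, info = spla.gmres(A, b, M=M, rtol=1e-8)
        if info != 0:  # convergence degraded: refresh the preconditioner
            ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
            M = spla.LinearOperator(A.shape, ilu.solve)
            x, info = spla.gmres(A, b, M=M, rtol=1e-8)
        xs.append(x)
    return xs
```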
NASA Astrophysics Data System (ADS)
Dubinov, Alexander E.; Ochkina, Elena I.
2018-05-01
State-of-the-art compact recirculating electron accelerators operating at intermediate energies (tens of MeV) are reviewed. The acceleration schemes implemented in the rhodotron, ridgetron, fantron, and cylindertron machines are discussed. Major accelerator components such as the electron guns, accelerating cavities, and bending magnets are described. The parameters of currently operating recirculating accelerators are tabulated, and applications of these accelerators in different processes of irradiation are exemplified.
Vacuum Plasma Spray Forming of Tungsten Lorentz Force Accelerator Components
NASA Technical Reports Server (NTRS)
Zimmerman, Frank R.
2001-01-01
The Vacuum Plasma Spray (VPS) Laboratory at NASA's Marshall Space Flight Center has developed and demonstrated a fabrication technique using the VPS process to form anode sections for a Lorentz force accelerator from tungsten. Lorentz force accelerators are an attractive form of electric propulsion that provides continuous, high-efficiency propulsion at useful power levels for such applications as orbit transfers or deep space missions. The VPS process is used to deposit refractory metals such as tungsten onto a graphite mandrel of the desired shape. Because tungsten is reactive at high temperatures, it is thermally sprayed in an inert environment where the plasma gun melts and accelerates the metal powder onto the mandrel. A three-axis robot inside the chamber controls the motion of the plasma spray torch. A graphite mandrel acts as a male mold, forming the required contour and dimensions of the inside surface of the anode. This paper describes the processing techniques, design considerations, and process development associated with the VPS forming of the Lorentz force accelerator.
Results of the NFIRAOS RTC trade study
NASA Astrophysics Data System (ADS)
Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent L.; Gilles, Luc; Herriot, Glen; Kerley, Daniel A.; Ljusic, Zoran; McVeigh, Eric A.; Prior, Robert; Smith, Malcolm; Wang, Lianqi
2014-07-01
With two large deformable mirrors carrying a total of more than 7000 actuators that need to be driven from the measurements of six 60x60 LGS WFSs (1.23 Mpixels in total) at 800 Hz with a latency of less than one frame, NFIRAOS presents an interesting real-time computing challenge. This paper reports on a recent trade study to evaluate which current technology could meet this challenge, with the plan to select a baseline architecture by the beginning of NFIRAOS construction in 2014. We have evaluated a number of architectures, ranging from very specialized layouts with custom boards to more generic architectures made from commercial off-the-shelf units (CPUs with or without accelerator boards). For each architecture, we have found the most suitable algorithm, mapped it onto the hardware and evaluated the performance through benchmarking whenever possible. We have evaluated a large number of criteria, including cost, power consumption, reliability and flexibility, and proceeded to score each architecture against these criteria. We have found that, with today's technology, the NFIRAOS requirements are well within reach of off-the-shelf commercial hardware running a parallel implementation of the straightforward matrix-vector multiply (MVM) algorithm for wavefront reconstruction. Even accelerators such as GPUs and Xeon Phis are no longer necessary. Indeed, we have found that the entire NFIRAOS RTC can be handled by seven 2U high-end PC servers using 10GbE connectivity. Accelerators are only required for the off-line process of updating the control matrix every ~10 s, as observing conditions change.
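For a sense of scale, the baseline MVM reconstruction is one large matrix-vector product per frame. A back-of-the-envelope sketch using the figures quoted in the abstract (the slope count assumes two slopes per subaperture, which is our assumption rather than an NFIRAOS specification):

```python
import numpy as np

n_act    = 7000                # total DM actuators (two mirrors)
n_slopes = 6 * 60 * 60 * 2     # six 60x60 WFSs, x and y slopes (assumed)
rate_hz  = 800

R = np.zeros((n_act, n_slopes), dtype=np.float32)  # control matrix
s = np.zeros(n_slopes, dtype=np.float32)           # one frame of slopes
a = R @ s                                          # actuator commands

flops = 2 * n_act * n_slopes * rate_hz
print(f"~{flops / 1e12:.2f} TFLOP/s sustained for the real-time MVM")  # ~0.48
```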
On the upscaling of process-based models in deltaic applications
NASA Astrophysics Data System (ADS)
Li, L.; Storms, J. E. A.; Walstra, D. J. R.
2018-03-01
Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing morphological indicators, such as distributary channel networking and delta volumes, derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations designed to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm of added noise. The overall results show that the variability of the accelerated models falls within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) increases computational efficiency by 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
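Morphological acceleration methods of this general kind advance the bed by a multiple of the change computed over one hydrodynamic step, so that a short hydrodynamic run represents a much longer morphological period. A schematic of that bookkeeping with an Exner-type update (the paper's Time-scale compression method itself is not reproduced here, and the function and parameter names are illustrative):

```python
def accelerated_bed_update(bed, qs_in, qs_out, cell_area, dt_hydro,
                           morfac=75.0, porosity=0.4):
    """Exner-type bed update scaled by a morphological acceleration factor.

    One hydrodynamic step of length dt_hydro then stands in for
    morfac * dt_hydro of morphological time."""
    dz = (qs_in - qs_out) * dt_hydro / (cell_area * (1.0 - porosity))
    return bed + morfac * dz
```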
NASA Technical Reports Server (NTRS)
Hung, R. J.
1995-01-01
A set of mathematical formulations is adopted to study vapor deposition from source materials driven by a heat transfer process under normal and oblique directions of gravitational acceleration, in an extremely low-pressure environment of 10^-2 mm Hg. A series of time animations of the initiation and development of flow and temperature profiles during the course of vapor deposition has been obtained through numerical computation. Computations show that vapor deposition is accomplished by the transfer of vapor through a fairly complicated recirculating flow pattern under normal-direction gravitational acceleration. It is clear that homogeneous thin crystalline films with fine grains cannot be produced under such a complicated recirculating flow pattern with a non-uniform temperature distribution under normal-direction gravitational acceleration. There is no vapor deposition for reverse normal-direction gravitational acceleration, due to a stably stratified medium without convection. Vapor deposition under oblique-direction gravitational acceleration introduces a reduced gravitational acceleration in the vertical direction, which is favorable for producing homogeneous thin crystalline films. However, oblique-direction gravitational acceleration also induces an unfavorable gravitational acceleration along the horizontal direction, which is responsible for initiating a complicated recirculating flow pattern. In other words, it is necessary to carry out vapor deposition under reduced gravity, in future space shuttle experiments with an extremely low pressure environment, to produce homogeneous crystalline films with fine grains. Fluid mechanics simulation can be used as a tool to suggest the most promising experimental setup to achieve the goal of processing the best nonlinear optical materials.
Waste Information Management System: One Year After Web Deployment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoffner, P.A.; Geisler, T.J.; Upadhyay, H.
2008-07-01
The implementation of the Department of Energy (DOE) mandated accelerated cleanup program created significant potential technical impediments. The schedule compression required close coordination and a comprehensive review and prioritization of the barriers that impeded treatment and disposition of the waste streams at each site. Many issues related to site waste treatment and disposal were potential critical path issues under the accelerated schedules. In order to facilitate accelerated cleanup initiatives, waste managers at DOE field sites and at DOE Headquarters in Washington, D.C., needed timely waste forecast information regarding the volumes and types of waste that would be generated by DOE sites over the next 30 years. Each local DOE site has historically collected, organized, and displayed site waste forecast information in separate and unique systems. However, waste information from all sites needed a common application to allow interested parties to understand and view the complete complex-wide picture. A common application allows identification of total waste volumes, material classes, disposition sites, choke points, and technological or regulatory barriers to treatment and disposal. The Applied Research Center (ARC) at Florida International University (FIU) in Miami, Florida, has completed the deployment of this fully operational, web-based forecast system. New functional modules and annual waste forecast data updates have been added to ensure the long-term viability and value of this system. In conclusion: WIMS continues to successfully accomplish the goals and objectives set forth by DOE for this project. WIMS has replaced the historic process of each DOE site gathering, organizing, and reporting their waste forecast information utilizing different database and display technologies. In addition, WIMS meets DOE's objective to have the complex-wide waste forecast information available to all stakeholders and the public in one easy-to-navigate system. The enhancements to WIMS made over the year since its web deployment include the addition of new DOE sites, an updated data set, and the ability to easily print the forecast data tables, the disposition maps, and the GIS maps. Future enhancements will include a high-level waste summary, a display of waste forecast by mode of transportation, and a user help module. The waste summary display module will provide a high-level summary view of the waste forecast data based on the selection of sites, facilities, material types, and forecast years. The waste summary report module will allow users to build custom filtered reports in a variety of formats, such as MS Excel, MS Word, and PDF. The user help module will provide a step-by-step explanation of various modules, using screen shots and general tutorials. The help module will also provide instructions for printing and margin/layout settings to assist users in using their local printers to print maps and reports. (authors)
The spatial variation of the infrared-to-radio ratio in spiral galaxies
NASA Technical Reports Server (NTRS)
Marsh, K. A.; Helou, G.
1995-01-01
We have produced two-dimensional maps of the intensity ratio, Q(sub 60), of 60 micron infrared to 20 cm radio continuum emission, for a set of 25 nearby galaxies, mostly spirals. The ratio maps were obtained from infrared images made using IRAS data with the maximum correlation method, and radio images made using VLA data. Before taking the ratio, the radio images were processed so as to have the same resolution properties as the infrared images; the final spatial resolution in all cases is approximately 1 arcmin, corresponding to 1 - 2 kpc for most galaxies. This resolution represents a significant improvement over previous studies. Our new high-resolution maps confirm the slow decrease of Q(sub 60) with increasing radial distance from the nucleus, but show additional structure which is probably associated with separate sites of active star formation in the spiral arms. The maps show Q(sub 60) to be more closely related to infrared surface brightness than to the radial distance r in the galaxy disk. We note also that the Q(sub 60) gradients are absent (or at least reduced) for the edge-on galaxies, a property which can be attributed to the dilution of contrast due to the averaging of the additional structure along the line of sight. The results are all in qualitative agreement with the suggestion that the radio image represents a smeared version of the infrared image, as would be expected on the basis of current models in which the infrared-radio correlation is driven by the formation of massive stars, and the intensity distribution of radio emission is smeared as a result of the propagation of energetic electrons accelerated during the supernova phase.
Slope Instability Risk Analysis of the Municipality of Comala, Colima, Mexico
NASA Astrophysics Data System (ADS)
Ramirez-Ruiz, J. J.
2017-12-01
Every year during the rainy season, mass landslides occur in some areas of the community of Comala, Colima, Mexico. Slope instability is studied in this volcanic region, which is located in the southern part of the Volcan de Fuego de Colima. It occurs due to the combination of different factors existing in this area: precipitation, topographic contrast, the type and mechanical properties of the deposits that constitute the rocks and soils of the region, and erosion due to the removal of vegetation cover as urban areas develop and grow. To these geological factors we can add the tectonic activity of western Mexico, which originates high seismicity through the interaction of the Cocos and North America plates, forming the Colima Graben region in which this area is located. Here we present a zonation and determination of slope instability risk maps, with rain and seismicity as accelerating factors. This study is part of a project to reduce the risk of this phenomenon; it was carried out as part of the National Risk Map of Mexico and analyzed using the CENAPRED methodology to zone the risk areas. The instability of slopes is determined, both in its origin and in its development, by different mechanisms, such that this instability process can be grouped into four main categories: falls or landslides, flows, slips, and lateral expansions or landslides. We present the risk analysis for this volcanic area covering the municipality of Comala in the State of Colima, Mexico, using the susceptibility map, risk map, and risk analysis of the municipality.
NASA Astrophysics Data System (ADS)
Crake, Calum; Meral, F. Can; Burgess, Mark T.; Papademetriou, Iason T.; McDannold, Nathan J.; Porter, Tyrone M.
2017-08-01
Focused ultrasound (FUS) has the potential to enable precise, image-guided noninvasive surgery for the treatment of cancer in which tumors are identified and destroyed in a single integrated procedure. However, success of the method in highly vascular organs has been limited due to heat losses to perfusion, requiring development of techniques to locally enhance energy absorption and heating. In addition, FUS procedures are conventionally monitored using MRI, which provides excellent anatomical images and can map temperature, but is not capable of capturing the full gamut of available data such as the acoustic emissions generated during this inherently acoustically-driven procedure. Here, we employed phase-shift nanoemulsions (PSNE) embedded in tissue phantoms to promote cavitation and hence temperature rise induced by FUS. In addition, we incorporated passive acoustic mapping (PAM) alongside simultaneous MR thermometry in order to visualize both acoustic emissions and temperature rise, within the bore of a full scale clinical MRI scanner. Focal cavitation of PSNE could be resolved using PAM and resulted in accelerated heating and increased the maximum elevated temperature measured via MR thermometry compared to experiments without nanoemulsions. Over time, the simultaneously acquired acoustic and temperature maps show translation of the focus of activity towards the FUS transducer, and the magnitude of the increase in cavitation and focal shift both increased with nanoemulsion concentration. PAM results were well correlated with MRI thermometry and demonstrated greater sensitivity, with the ability to detect cavitation before enhanced heating was observed. The results suggest that PSNE could be beneficial for enhancement of thermal focused ultrasound therapies and that PAM could be a critical tool for monitoring this process.
Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation
NASA Astrophysics Data System (ADS)
Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco
2017-11-01
Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveforms modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better fit the need of producing realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant pertinent progresses in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.
Sokolov, V.; Wald, D.J.
2002-01-01
We compare two methods of seismic-intensity estimation from ground-motion records for two recent strong earthquakes: the 1999 (M 7.1) Hector Mine, California, and the 1999 (M 7.6) Chi-Chi, Taiwan, events. The first technique utilizes the peak ground acceleration (PGA) and velocity (PGV), and it is used for rapid generation of the instrumental intensity map in California. The other method is based on revised relationships between intensity and the Fourier amplitude spectrum (FAS). The results of the two methods are compared with independently observed data and with each other. For the Hector Mine earthquake, the calculated intensities generally agree with the observed values. For the Chi-Chi earthquake, the areas of maximum calculated intensity correspond to the areas of the greatest damage and the highest number of fatalities. However, the FAS method produces higher intensity values than the peak-amplitude method. The specific features of ground-motion excitation during a large, shallow, thrust earthquake may be the reason for the discrepancy. The use of PGA and PGV is simple; however, the use of FAS provides a natural consideration of site amplification by means of generalized or site-specific spectral ratios. Because the calculation of seismic-intensity maps requires rapid processing of data from a large network, it is practical to generate a "first-order" map from the recorded peak motions. Then, a "second-order" map may be compiled using the amplitude-spectra method on the basis of available records and numerical modeling of the site-dependent spectra for regions of sparse station spacing.
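The peak-amplitude method referred to above maps recorded PGA and PGV to an instrumental intensity through empirical regressions. A minimal sketch using the Wald et al. (1999) California relations (coefficients vary by region and by vintage of the calibration):

```python
import math

def mmi_from_pga(pga_cm_s2):
    """Instrumental intensity from PGA (cm/s^2); Wald et al. (1999), upper branch."""
    return 3.66 * math.log10(pga_cm_s2) - 1.66

def mmi_from_pgv(pgv_cm_s):
    """Instrumental intensity from PGV (cm/s); Wald et al. (1999), upper branch."""
    return 3.47 * math.log10(pgv_cm_s) + 2.35

print(round(mmi_from_pga(0.2 * 981), 1))   # ~0.2 g  -> MMI ~6.7
print(round(mmi_from_pgv(20.0), 1))        # 20 cm/s -> MMI ~6.9
```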
NASA Astrophysics Data System (ADS)
Burns, Jack
Galaxy clusters are assembled through large and small mergers, which are the most energetic events ("bangs") since the Big Bang. Cluster mergers stir the ICM, creating shocks and turbulence which are illuminated by Mpc-sized radio features called relics and halos. These shocks heat the ICM and are detected in x-rays via thermal emission. Disturbed morphologies in x-ray surface brightness and temperature are direct evidence for cluster mergers. In the radio, relics (in the outskirts of the clusters) and halos (located near the cluster core) are clear signposts of recent mergers. Our recent cosmological simulations suggest that around a merger event, radio emission peaks very sharply (and briefly) while the x-ray emission rises and decays slowly. Hence, galaxy clusters that show both luminous x-ray emission and radio relics/halos are clear candidates for very recent mergers. We propose to analyze a unique sample of 48 galaxy clusters with (i) known radio relics and/or halos and (ii) significant archival x-ray observations (≥50 ksec) from Chandra and/or XMM. We will use a new x-ray data analysis pipeline, implemented on a parallel-processor supercomputer, to create x-ray surface brightness, high-fidelity temperature, and pressure maps of these clusters in order to study merging activity. In addition, we will use a control sample of clusters from the HIFLUGCS catalog which do not show radio relics/halos or any significant x-ray surface brightness substructure, and are thus devoid of recent mergers. The temperature maps will be made using 3 different map-making techniques: Weighted Voronoi Tessellation, Adaptive Circular Binning, and Contour Binning. We also plan to use archival Suzaku data for 22 clusters in our sample and study the x-ray temperatures at the outskirts of the clusters. All 48 clusters have archival radio data at ≤1.4 GHz which will be re-analyzed using advanced algorithms in NRAO's CASA software. We also have new radio data on a subset of these clusters and have proposed to observe more of them with the increased sensitivity of the JVLA and GMRT at 0.25-1.4 GHz. Using the systematically analyzed x-ray and radio data, we propose to pursue the detailed link between cluster mergers and the formation of radio relics/halos. (a) How do radio relics form? Radio relics are believed to be created via re-acceleration of cosmic ray electrons through diffusive shock acceleration, a 1st-order Fermi mechanism. Hence, there should be a correlation between shocks detected in the x-ray and radio. We plan to use our newly developed 2-D shock-finder using jumps within x-ray temperature maps, and complement the results with radio Mach numbers derived from radio spectral indices. Shocks detected in our simulations using a 3-D shock-finder will be used to understand the effects of projection in observations. (b) How do radio halos form? It is not clear if the formation of radio halos is due to turbulent acceleration (a 2nd-order Fermi process) or to a more efficient 1st-order Fermi mechanism via distributed small-scale shocks. Since radio halos reside in merging clusters, the x-ray temperature structure should show the un-relaxed nature of the cluster. We will study this through temperature asymmetry and power ratios (between two multipoles). We also propose to use pressure maps to derive a 2-D power spectrum of pressure fluctuations and deduce the turbulent velocity field. We will then derive the associated radio power and spectral indices to compare with the radio observations.
We will test our results using clusters with and without radio halos. We will make these high fidelity temperature, surface brightness, pressure and entropy maps available to the astronomical community via the National Virtual Observatory. We will also make our x-ray temperature map-making scripts implemented on parallel supercomputers available for community use.
Ye, Huihui; Ma, Dan; Jiang, Yun; Cauley, Stephen F.; Du, Yiping; Wald, Lawrence L.; Griswold, Mark A.; Setsompop, Kawin
2015-01-01
Purpose We incorporate Simultaneous Multi-Slice (SMS) acquisition into MR Fingerprinting (MRF) to accelerate the MRF acquisition. Methods The t-Blipped SMS-MRF method is achieved by adding a Gz blip before each data acquisition window and balancing it with a Gz blip of opposing polarity at the end of each TR. Thus the signals from the different simultaneously excited slices are encoded with different phases without disturbing the signal evolution. Further, by varying the Gz blip area and/or polarity as a function of TR, the slices' differential phase can also be made to vary as a function of time. For reconstruction of t-Blipped SMS-MRF data, we demonstrate a combined slice-direction SENSE and modified dictionary matching method. Results In Monte Carlo simulation, the parameter mapping from Multi-band factor (MB)=2 t-Blipped SMS-MRF shows good accuracy and precision when compared to results from reference conventional MRF data with concordance correlation coefficients (CCC) of 0.96 for T1 estimates and 0.90 for T2 estimates. For in vivo experiments, T1 and T2 maps from MB=2 t-Blipped SMS-MRF have a high agreement with ones from conventional MRF. Conclusions The MB=2 t-Blipped SMS-MRF acquisition/reconstruction method has been demonstrated and validated to provide more rapid parameter mapping in the MRF framework. PMID:26059430
Ye, Huihui; Ma, Dan; Jiang, Yun; Cauley, Stephen F; Du, Yiping; Wald, Lawrence L; Griswold, Mark A; Setsompop, Kawin
2016-05-01
We incorporate simultaneous multislice (SMS) acquisition into MR fingerprinting (MRF) to accelerate the MRF acquisition. The t-Blipped SMS-MRF method is achieved by adding a Gz blip before each data acquisition window and balancing it with a Gz blip of opposing polarity at the end of each TR. Thus the signals from the different simultaneously excited slices are encoded with different phases without disturbing the signal evolution. Furthermore, by varying the Gz blip area and/or polarity as a function of repetition time, the slices' differential phase can also be made to vary as a function of time. For reconstruction of t-Blipped SMS-MRF data, we demonstrate a combined slice-direction SENSE and modified dictionary matching method. In Monte Carlo simulation, the parameter mapping from multiband factor (MB) = 2 t-Blipped SMS-MRF shows good accuracy and precision when compared with results from reference conventional MRF data with concordance correlation coefficients (CCC) of 0.96 for T1 estimates and 0.90 for T2 estimates. For in vivo experiments, T1 and T2 maps from MB=2 t-Blipped SMS-MRF have a high agreement with ones from conventional MRF. The MB=2 t-Blipped SMS-MRF acquisition/reconstruction method has been demonstrated and validated to provide more rapid parameter mapping in the MRF framework. © 2015 Wiley Periodicals, Inc.
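The dictionary-matching step shared by these MRF reconstructions reduces, per voxel, to a maximum-correlation search against a precomputed set of simulated fingerprints. A compact sketch of that search (the dictionary is assumed to be given, e.g. from a Bloch simulation; it is not generated here):

```python
import numpy as np

def match_fingerprints(signals, dictionary, t1_grid, t2_grid):
    """Per-voxel dictionary matching by maximum complex correlation.

    signals:    (n_voxels, n_timepoints) measured evolutions
    dictionary: (n_entries, n_timepoints) simulated fingerprints
    """
    D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    corr = np.abs(signals @ D.conj().T)   # (n_voxels, n_entries)
    best = corr.argmax(axis=1)            # best-matching entry per voxel
    return t1_grid[best], t2_grid[best]
```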
Molecular mapping and genomics of soybean seed protein: a review and perspective for the future.
Patil, Gunvant; Mian, Rouf; Vuong, Tri; Pantalone, Vince; Song, Qijian; Chen, Pengyin; Shannon, Grover J; Carter, Tommy C; Nguyen, Henry T
2017-10-01
Genetic improvement of soybean protein meal is a complex process because of negative correlation with oil, yield, and temperature. This review describes the progress in mapping and genomics, identifies knowledge gaps, and highlights the need for integrated approaches. Meal protein derived from soybean [Glycine max (L) Merr.] seed is the primary source of protein in poultry and livestock feed. Protein is a key factor that determines the nutritional and economical value of soybean. Genetic improvement of soybean seed protein content is highly desirable, and major quantitative trait loci (QTL) for soybean protein have been detected and repeatedly mapped on chromosomes (Chr.) 20 (LG-I), and 15 (LG-E). However, practical breeding progress is challenging because of seed protein content's negative genetic correlation with seed yield, other seed components such as oil and sucrose, and interaction with environmental effects such as temperature during seed development. In this review, we discuss rate-limiting factors related to soybean protein content and nutritional quality, and potential control factors regulating seed storage protein. In addition, we describe advances in next-generation sequencing technologies for precise detection of natural variants and their integration with conventional and high-throughput genotyping technologies. A syntenic analysis of QTL on Chr. 15 and 20 was performed. Finally, we discuss comprehensive approaches for integrating protein and amino acid QTL, genome-wide association studies, whole-genome resequencing, and transcriptome data to accelerate identification of genomic hot spots for allele introgression and soybean meal protein improvement.
A General Accelerated Degradation Model Based on the Wiener Process.
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-12-06
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
A General Accelerated Degradation Model Based on the Wiener Process
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-01-01
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses. PMID:28774107
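For intuition, a degradation path of the general kind modeled here — a Wiener process whose drift is accelerated by stress, for instance through an Arrhenius factor — can be simulated directly. A generic sketch under assumed parameter values, not the authors' estimated model:

```python
import numpy as np

def simulate_wiener_adt(t_end=1000.0, dt=1.0, mu_use=1e-3, sigma=0.05,
                        ea_over_k=4000.0, t_use=298.0, t_stress=348.0, seed=0):
    """One degradation path X(t) with stress-accelerated drift.

    Drift at the stress temperature is mu_use times an Arrhenius
    acceleration factor; diffusion is an ordinary Wiener term."""
    rng = np.random.default_rng(seed)
    af = np.exp(ea_over_k * (1.0 / t_use - 1.0 / t_stress))  # acceleration factor
    n = int(t_end / dt)
    steps = mu_use * af * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return np.concatenate([[0.0], np.cumsum(steps)])

path = simulate_wiener_adt()
print(path[-1])   # degradation level reached at t_end under stress
```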
R&D Toward a Neutrino Factory and Muon Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zisman, Michael S
2011-03-20
Significant progress has been made in recent years in R&D towards a neutrino factory and muon collider. The U.S. Muon Accelerator Program (MAP) has been formed recently to expedite the R&D efforts. This paper will review the U.S. MAP R&D programs for a neutrino factory and muon collider. Muon ionization cooling research is the key element of the program. The first muon ionization cooling demonstration experiment, MICE (Muon Ionization Cooling Experiment), is under construction now at RAL (Rutherford Appleton Laboratory) in the UK. The current status of MICE will be described.
Global Seismic Hazard Assessment Program (GSHAP) in continental Asia
Zhang, Peizhen; Yang, Zhi-xian; Gupta, Harsh K.; Bhatia, Satish C.; Shedlock, Kaye M.
1999-01-01
The regional hazard mapping for the whole of Eastern Asia was coordinated by the SSB Regional Centre in Beijing, originating from the expansion of the test area initially established in the border region of China-India-Nepal-Myanmar-Bangladesh, in coordination with the other Regional Centres (JIPE, Moscow, and AGSO, Canberra) and with the direct assistance of the USGS. All Eastern Asian countries have participated directly in this regional effort, with the addition of Japan, for which an existing national hazard map was incorporated. The regional hazard map depicts the expected peak ground acceleration with 10% exceedance probability in 50 years.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS
NASA Astrophysics Data System (ADS)
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.
2017-06-01
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with varying numbers of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for the two representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, of about 4 times, was confirmed by the adoption of the tetrahedral mesh over the traditional voxel geometry.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L; Bolch, Wesley E
2017-06-21
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with varying numbers of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for the two representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, of about 4 times, was confirmed by the adoption of the tetrahedral mesh over the traditional voxel geometry.
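The decomposition-map idea described above — pre-binning tetrahedra into a coarse grid over the container box so each transport step only tests nearby elements — can be sketched as follows. This is an illustrative structure, not the PHITS implementation:

```python
import numpy as np
from collections import defaultdict

def build_decomposition_map(vertices, tets, n_cells=32):
    """Bin tetrahedra (by bounding box) into a coarse grid over the container."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    scale = n_cells / (hi - lo)
    grid = defaultdict(list)
    for i, tet in enumerate(tets):
        pts = vertices[tet]
        cmin = np.clip(np.floor((pts.min(0) - lo) * scale).astype(int), 0, n_cells - 1)
        cmax = np.clip(np.floor((pts.max(0) - lo) * scale).astype(int), 0, n_cells - 1)
        for ix in range(cmin[0], cmax[0] + 1):
            for iy in range(cmin[1], cmax[1] + 1):
                for iz in range(cmin[2], cmax[2] + 1):
                    grid[(ix, iy, iz)].append(i)   # tet i overlaps this cell
    return grid, lo, scale

def candidate_tets(grid, lo, scale, point, n_cells=32):
    """Tetrahedra worth testing for the cell containing `point`."""
    cell = tuple(np.clip(np.floor((point - lo) * scale), 0, n_cells - 1).astype(int))
    return grid.get(cell, [])
```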
Probabilistic seismic hazard zonation for the Cuban building code update
NASA Astrophysics Data System (ADS)
Garcia, J.; Llanes-Buron, C.
2013-05-01
A probabilistic seismic hazard assessment has been performed in response to a revision and update of the Cuban building code (NC-46-99) for earthquake-resistant building construction. The hazard assessment has been done according to the standard probabilistic approach (Cornell, 1968), importing the procedures adopted by other nations dealing with the problem of revising and updating their national building codes. Problems of earthquake catalogue treatment, attenuation of peak and spectral ground acceleration, and seismic source definition have been rigorously analyzed, and a logic-tree approach was used to represent the inevitable uncertainties encountered throughout the seismic hazard estimation process. The seismic zonation proposed here consists of a map reflecting the behaviour of spectral acceleration values for short (0.2 s) and long (1.0 s) periods on rock conditions with a 1642-year return period, which is considered the maximum credible earthquake (ASCE 07-05). In addition, three other design levels are proposed (severe earthquake: 808-year return period; ordinary earthquake: 475-year return period; minimum earthquake: 225-year return period). The seismic zonation proposed here complies with international standards (IBC-ICC) as well as world trends in this field.
Wetland Loss Patterns and Inundation-Productivity ...
Tidal salt marsh is a key defense against, yet is especially vulnerable to, the effects of accelerated sea level rise. To determine whether salt marshes in southern New England will be stable given increasing inundation over the coming decades, we examined current loss patterns, inundation-productivity feedbacks, and sustaining processes. A multi-decadal analysis of salt marsh aerial extent using historic imagery and maps revealed that salt marsh vegetation loss is both widespread and accelerating, with vegetation loss rates over the past four decades summing to 17.3 %. Landward retreat of the marsh edge, widening and headward expansion of tidal channel networks, loss of marsh islands, and the development and enlargement of interior depressions found on the marsh platform contributed to vegetation loss. Inundation due to sea level rise is strongly suggested as a primary driver: vegetation loss rates were significantly negatively correlated with marsh elevation (r2 = 0.96; p = 0.0038), with marshes situated below mean high water (MHW) experiencing greater declines than marshes sitting well above MHW. Growth experiments with Spartina alterniflora, the Atlantic salt marsh ecosystem dominant, across a range of elevations and inundation regimes further established that greater inundation decreases belowground biomass production of S. alterniflora and, thus, negatively impacts organic matter accumulation. These results suggest that southern New England salt ma
High Voltage Hall Accelerator Propulsion System Development for NASA Science Missions
NASA Technical Reports Server (NTRS)
Kamhawi, Hani; Haag, Thomas; Huang, Wensheng; Shastry, Rohit; Pinero, Luis; Peterson, Todd; Dankanich, John; Mathers, Alex
2013-01-01
NASA Science Mission Directorate's In-Space Propulsion Technology Program is sponsoring the development of a 3.8 kW-class engineering development unit Hall thruster for implementation in NASA science and exploration missions. NASA Glenn Research Center and Aerojet are developing a high-fidelity high voltage Hall accelerator (HiVHAc) thruster that can achieve specific impulse magnitudes greater than 2,700 seconds and xenon throughput capability in excess of 300 kilograms. Performance, plume mapping, thermal characterization, and vibration tests of the HiVHAc engineering development unit thruster have been performed. In addition, the HiVHAc project is also pursuing the development of a power processing unit (PPU) and xenon feed system (XFS) for integration with the HiVHAc engineering development unit thruster. Colorado Power Electronics and NASA Glenn Research Center have tested a brassboard PPU for more than 1,500 hours in a vacuum environment, and new brassboard and engineering model PPU units are under development. VACCO Industries developed a xenon flow control module which has undergone qualification testing and will be integrated with the HiVHAc thruster extended duration tests. Finally, recent mission studies have shown that the HiVHAc propulsion system has sufficient performance for four Discovery- and two New Frontiers-class NASA design reference missions.
Dash, Debasis; Mukerji, Mitali
2014-01-01
Admixture mapping has been enormously resourceful in identifying genetic variations linked to phenotypes, adaptation, and diseases. In this study through analysis of copy number variable regions (CNVRs), we report extensive restructuring in the genomes of the recently admixed African-Indian population (OG-W-IP) that inhabits a highly saline environment in Western India. The study included subjects from OG-W-IP (OG), five different Indian and three HapMap populations that were genotyped using Affymetrix version 6.0 arrays. Copy number variations (CNVs) detected using Birdsuite were used to define CNVRs. Population structure with respect to CNVRs was delineated using random forest approach. OG genomes have a surprising excess of CNVs in comparison to other studied populations. Individual ancestry proportions computed using STRUCTURE also reveals a unique genetic component in OGs. Population structure analysis with CNV genotypes indicates OG to be distant from both the African and Indian ancestral populations. Interestingly, it shows genetic proximity with respect to CNVs to only one Indian population IE-W-LP4, which also happens to reside in the same geographical region. We also observe a significant enrichment of molecular processes related to ion binding and receptor activity in genes encompassing OG-specific CNVRs. Our results suggest that retention of CNVRs from ancestral natives and de novo acquisition of CNVRs could accelerate the process of adaptation especially in an extreme environment. Additionally, this population would be enormously useful for dissecting genes and delineating the involvement of CNVs in salt adaptation. PMID:25398783
A beam current density monitor for intense electron beams
NASA Astrophysics Data System (ADS)
Fiorito, R. B.; Raleigh, M.; Seltzer, S. M.
1983-12-01
The authors describe a new type of electric probe for mapping the radial current density profile of high-energy, high-current electron beams. The idea of developing an electrically sensitive probe for these conditions was originally suggested to one of the authors during a year's visit to the Lawrence Livermore National Laboratory. The resulting probe is intended for use on the Experimental Test Accelerator (ETA) and the Advanced Test Accelerator at that laboratory. This report discusses in detail the mechanical design, the electrical response, and temperature effects as they pertain to the electric probe, and describes the first experimental results obtained using this probe on ETA.
Central State University: Phase I Report
ERIC Educational Resources Information Center
Ohio Board of Regents, 2012
2012-01-01
In December of 2011, a team of eight consultants authored a report to the Ohio Board of Regents and Central State University titled "Accentuating Strengths/Accelerating Progress (AS/AP)." AS/AP provided a road map for the administration, faculty, and staff of CSU to achieve the excellence it has sought under the leadership of President…
Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator
NASA Astrophysics Data System (ADS)
Chitarin, G.; Agostinetti, P.; Gallo, A.; Marconato, N.; Nakano, H.; Serianni, G.; Takeiri, Y.; Tsumori, K.
2011-09-01
For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.
Experimental Mapping and Benchmarking of Magnetic Field Codes on the LHD Ion Accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chitarin, G.; University of Padova, Dept. of Management and Engineering, strad. S. Nicola, 36100 Vicenza; Agostinetti, P.
2011-09-26
For the validation of the numerical models used for the design of the Neutral Beam Test Facility for ITER in Padua [1], an experimental benchmark against a full-size device has been sought. The LHD BL2 injector [2] has been chosen as a first benchmark, because the BL2 Negative Ion Source and Beam Accelerator are geometrically similar to SPIDER, even though BL2 does not include current bars and ferromagnetic materials. A comprehensive 3D magnetic field model of the LHD BL2 device has been developed based on the same assumptions used for SPIDER. In parallel, a detailed experimental magnetic map of the BL2 device has been obtained using a suitably designed 3D adjustable structure for the fine positioning of the magnetic sensors inside 27 of the 770 beamlet apertures. The calculated values have been compared to the experimental data. The work has confirmed the quality of the numerical model, and has also provided useful information on the magnetic non-uniformities due to the edge effects and to the tolerance on permanent magnet remanence.
NASA Astrophysics Data System (ADS)
Dombeck, J. P.; Cattell, C. A.; Prasad, N.; Sakher, A.; Hanson, E.; McFadden, J. P.; Strangeway, R. J.
2016-12-01
Field-aligned currents (FACs) provide a fundamental driver and means of Magnetosphere-Ionosphere (M-I) coupling. These currents must be supported by local physics along the entire field line, generally through quasi-static potential structures, but also through the time evolution of those structures and currents, which produces Alfvén waves and Alfvénic electron acceleration. In regions of upward current, precipitating auroral electrons are accelerated earthward. These processes can result in ion outflow and changes in ionospheric conductivity, and affect the particle distributions on the field line, thereby affecting the M-I coupling processes supporting the individual FACs and potentially the entire FAC system. The FAST mission was well suited to studying both the FACs and the auroral electron acceleration processes. We present the results of comparisons between meso- and small-scale FACs determined from FAST using the method of Peria et al. (2000) and our FAST auroral acceleration mechanism study, when such identification is possible, for the entire ~13-year FAST mission. We also present the latest results of the electron energy (and number) flux input to the ionosphere based on acceleration mechanism (and FAC characteristics) from our FAST auroral acceleration mechanism study.
Neural processing of gravity information
NASA Technical Reports Server (NTRS)
Schor, Robert H.
1992-01-01
The goal of this project was to use the linear acceleration capabilities of the NASA Vestibular Research Facility (VRF) at Ames Research Center to directly examine encoding of linear accelerations in the vestibular system of the cat. Most previous studies, including my own, have utilized tilt stimuli, which at very low frequencies (e.g., 'static tilt') can be considered a reasonably pure linear acceleration (e.g., 'down'); however, higher frequencies of tilt, necessary for understanding the dynamic processing of linear acceleration information, necessarily involves rotations which can stimulate the semicircular canals. The VRF, particularly the Long Linear Sled, has promise to provide controlled pure linear accelerations at a variety of stimulus frequencies, with no confounding angular motion.
Enhancements to Demilitarization Process Maps Program (ProMap)
2016-10-14
The process map tool, ProMap, was improved by implementing new features and sharing data with the MIDAS and AMDIT databases. Specifically, process efficiency was improved by 1) providing access to APE information contained in the AMDIT database directly from inside ProMap when constructing a process map, 2) ... what equipment can be efficiently used to demil a particular munition. Associated with this task was the upgrade of the AMDIT database so that
NASA Technical Reports Server (NTRS)
Fowler, John W.; Aumann, H. H.
1994-01-01
The High-Resolution image construction program (HiRes) used at IPAC is based on the Maximum Correlation Method. After HiRes intensity images are constructed from IRAS data, additional images are needed to aid in scientific interpretation. Some of the images that are available for this purpose show the fitting noise, estimates of the achieved resolution, and detector track maps. Two methods have been developed for creating color maps without discarding any more spatial information than absolutely necessary: the 'cross-band simulation' and 'prior-knowledge' methods. These maps are demonstrated using the survey observations of a 2 x 2 degree field centered on M31. Prior knowledge may also be used to achieve super-resolution and to suppress ringing around bright point sources observed against background emission. Tools to suppress noise spikes and for accelerating convergence are also described.
Five-centimeter diameter ion thruster development
NASA Technical Reports Server (NTRS)
Weigand, A. J.
1972-01-01
All system components were tested for endurance in steady-state and cyclic operation. The following results were obtained: acceleration system (electrostatic type), 3100 hours continuous running; acceleration system (translation type), 2026 hours continuous running; cathode-isolator-vaporizer assembly, 5000 hours continuous operation and 190 restart cycles with 1750 hours operation; mercury expulsion system, 5000 hours continuous running; and neutralizer, 5100 hours continuous operation. The results of component optimization studies, such as neutralizer position, neutralizer keeper hole, and screen grid geometry, are included. Extensive mapping of the magnetic field within and immediately outside the thruster is shown. A technique of electroplating the molybdenum accelerator grid with copper to study erosion patterns is described. Results of tests being conducted to more fully understand the operation of the hollow cathode are also given. This type of 5-cm thruster will be space tested on the Communication Technology Satellite in 1975.
Generation of three-dimensional optical cusp beams with ultrathin metasurfaces.
Liu, Weiwei; Zhang, Yuchao; Gao, Jie; Yang, Xiaodong
2018-06-22
Cusp beams are one type of complex structured beam with unique multiple self-accelerating channels and needle-like field structures, offering great potential to advance applications such as particle micromanipulation and super-resolution imaging. The traditional method to generate optical catastrophes is based on cumbersome reflective diffractive optical elements, which makes the optical system complicated and hinders nanophotonic integration. Here we design geometric-phase-based ultrathin plasmonic metasurfaces made of nanoslit antennas to produce three-dimensional (3D) optical cusp beams with variable numbers of self-accelerating channels in a broadband wavelength range. The entire propagation profiles of the cusp beams generated from the metasurfaces are mapped theoretically and experimentally. The special self-accelerating behavior and caustic concentration property of the cusp beams are also demonstrated. Our results offer great potential for promoting metasurface-enabled compact photonic devices for wide applications in light-matter interactions.
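The geometric-phase mechanism underlying such metasurfaces imprints a phase of 2θ on the cross-polarized circular component, where θ is the local nanoslit orientation, so the design step reduces to halving a target phase map. A schematic of that step (the target cusp phase itself would come from catastrophe theory and is assumed given here):

```python
import numpy as np

def slit_orientation_map(target_phase):
    """Nanoslit rotation angles realizing a target geometric-phase profile.

    A slit rotated by theta imprints phase 2*theta on the cross-polarized
    circular component, so theta = (phase mod 2*pi) / 2."""
    return np.mod(target_phase, 2.0 * np.pi) / 2.0
```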
Plasma Measurements in an Integrated-System FARAD Thruster
NASA Technical Reports Server (NTRS)
Polzin, K. A.; Rose, M. F.; Miller, R.; Best, S.
2007-01-01
Pulsed inductive plasma accelerators are spacecraft propulsion devices in which energy is stored in a capacitor and then discharged through an inductive coil. The device is electrodeless, inducing a current sheet in a plasma located near the face of the coil. The propellant is accelerated and expelled at a high exhaust velocity (on the order of 10 km/s) through the interaction of the plasma current and the induced magnetic field. The Faraday Accelerator with RF-Assisted Discharge (FARAD) thruster [1,2] is a type of pulsed inductive plasma accelerator in which the plasma is preionized by a mechanism separate from that used to form the current sheet and accelerate the gas. Employing a separate preionization mechanism allows for the formation of an inductive current sheet at much lower discharge energies and voltages than those used in previous pulsed inductive accelerators like the Pulsed Inductive Thruster (PIT). A benchtop FARAD thruster was designed following the guidelines and similarity performance parameters presented in Refs. [3,4]; this design is described in detail in Ref. [5]. In this paper, we present temporally and spatially resolved measurements of the preionized plasma and inductively accelerated current sheet in the FARAD thruster, operating with a Vector Inversion Generator (VIG) to preionize the gas and a Bernardes and Merryman circuit topology to provide inductive acceleration. The acceleration stage operates on the order of 100 J/pulse. Fast-framing photography is used to produce a time-resolved, global view of the evolving current sheet. Local diagnostics include a fast ionization gauge capable of mapping the gas distribution prior to plasma initiation, direct measurement of the induced magnetic field using B-dot probes, induced azimuthal current measurement using a mini-Rogowski coil, and direct probing of the number density and electron temperature using triple probes.
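As a worked note on the B-dot diagnostic mentioned above: the field follows from time-integrating the probe voltage, B(t) = -(1/NA) * integral of V dt. A minimal sketch with assumed probe parameters (5 turns, 3 mm loop):

import numpy as np
from scipy.integrate import cumulative_trapezoid

def bdot_to_field(t, v, turns, area):
    # V(t) = -N*A*dB/dt  =>  B(t) = -(1/(N*A)) * integral of V dt
    return -cumulative_trapezoid(v, t, initial=0.0) / (turns * area)

t = np.linspace(0, 10e-6, 2000)                    # 10 us record
B_true = 0.1 * np.sin(2 * np.pi * 2e5 * t)         # 0.1 T at 200 kHz
N, A = 5, np.pi * (1.5e-3) ** 2
v = -N * A * np.gradient(B_true, t)                # simulated probe voltage
print(np.max(np.abs(bdot_to_field(t, v, N, A) - B_true)))   # small residual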
Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko
2015-06-01
Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
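The two accuracy metrics reported above are easy to state in code; a minimal sketch with toy masks (not the study's data):

import numpy as np
from scipy.spatial.distance import cdist

def dice(a, b):
    # Dice similarity coefficient between two boolean masks
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    # symmetric 95% Hausdorff distance between two (N x D) point sets
    d = cdist(pts_a, pts_b)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(dice(a, b), hd95(np.argwhere(a), np.argwhere(b)))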
Relativistic Electrons in Ground-Level Enhanced (GLE) Solar Particle Events
NASA Astrophysics Data System (ADS)
Tylka, Allan J.; Dietrich, William; Novikova, Elena I.
Ground-level enhanced (GLE) solar particle events are one of the most spectacular manifestations of solar activity, with protons accelerated to multi-GeV energies in minutes. Although GLEs have been observed for more than sixty years, the processes by which the particle acceleration takes place remain controversial. Relativistic electrons provide another means of investigating the nature of the particle accelerator, since some processes that can efficiently accelerate protons and ions are less attractive candidates for electron acceleration. We report on observations of relativistic electrons, at ~0.5-5 MeV, during GLEs of 1976-2005, using data from the University of Chicago's Cosmic Ray Nuclei Experiment (CRNE) on IMP-8, whose electron response has recently been calibrated using GEANT-4 simulations (Novikova et al. 2010). In particular, we examine onset times, temporal structure, fluences, and spectra of electrons in GLEs and compare them with comparable quantities for relativistic protons derived from neutron monitors. We discuss the implications of these comparisons for the nature of the particle acceleration process.
Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch
2016-07-01
We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
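A generic sketch of the greedy Empirical Interpolation Method named above, assuming the nonaffine function has been sampled on a grid for a set of training parameters (columns of F); this illustrates the algorithm, not the paper's code.

import numpy as np

def eim(F, tol=1e-8, max_basis=20):
    # F: (n_grid x n_train) snapshot matrix of a nonaffine parametric function
    j = np.argmax(np.max(np.abs(F), axis=0))          # snapshot with largest entry
    i = np.argmax(np.abs(F[:, j]))
    Q = (F[:, j] / F[i, j])[:, None]                  # first basis, unit at point i
    idx = [i]
    for _ in range(max_basis - 1):
        coef = np.linalg.solve(Q[idx, :], F[idx, :])  # interpolate all snapshots
        R = F - Q @ coef                              # grid residuals
        j = np.argmax(np.max(np.abs(R), axis=0))
        i = np.argmax(np.abs(R[:, j]))
        if abs(R[i, j]) < tol:
            break
        Q = np.hstack([Q, (R[:, j] / R[i, j])[:, None]])
        idx.append(i)
    return Q, idx                                     # basis and interpolation points

# Toy: Gaussian bumps exp(-((x - mu)/0.3)^2) on 200 grid points, 50 parameters.
x = np.linspace(-1, 1, 200)
mus = np.linspace(-0.8, 0.8, 50)
Q, pts = eim(np.exp(-((x[:, None] - mus[None, :]) / 0.3) ** 2), max_basis=8)
print(Q.shape, pts)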
St. Louis area earthquake hazards mapping project; seismic and liquefaction hazard maps
Cramer, Chris H.; Bauer, Robert A.; Chung, Jae-won; Rogers, David; Pierce, Larry; Voigt, Vicki; Mitchell, Brad; Gaunt, David; Williams, Robert; Hoffman, David; Hempen, Gregory L.; Steckel, Phyllis; Boyd, Oliver; Watkins, Connor M.; Tucker, Kathleen; McCallister, Natasha
2016-01-01
We present probabilistic and deterministic seismic and liquefaction hazard maps for the densely populated St. Louis metropolitan area that account for the expected effects of surficial geology on earthquake ground shaking. Hazard calculations were based on a map grid of 0.005°, or about every 500 m, and are thus higher in resolution than any earlier studies. To estimate ground motions at the surface of the model (e.g., site amplification), we used a new detailed near-surface shear-wave velocity model in a 1D equivalent-linear response analysis. When compared with the 2014 U.S. Geological Survey (USGS) National Seismic Hazard Model, which uses a uniform firm-rock site condition, the new probabilistic seismic-hazard estimates document much more variability. Hazard levels for upland sites (consisting of bedrock and weathered bedrock overlain by loess-covered till and drift deposits) show up to twice the ground-motion values for peak ground acceleration (PGA), and similar ground-motion values for 1.0 s spectral acceleration (SA). Probabilistic ground-motion levels for lowland alluvial floodplain sites (generally the 20-40-m-thick modern Mississippi and Missouri River floodplain deposits overlying bedrock) exhibit up to twice the ground-motion levels for PGA, and up to three times the ground-motion levels for 1.0 s SA. Liquefaction probability curves were developed from available standard penetration test data assuming typical lowland and upland water table levels. A simplified liquefaction hazard map was created from the 5%-in-50-year probabilistic ground-shaking model. The liquefaction hazard ranges from low in the uplands to high (more than 60% of the area expected to liquefy) in the lowlands. Because many transportation routes, power and gas transmission lines, and population centers exist in or on the highly susceptible lowland alluvium, these areas in the St. Louis region are at significant potential risk from seismically induced liquefaction and associated ground deformation.
Vacuum Plasma Spray Forming of Tungsten Lorentz Force Accelerator Components
NASA Technical Reports Server (NTRS)
Zimmerman, Frank R.
2004-01-01
The Vacuum Plasma Spray (VPS) Laboratory at NASA's Marshall Space Flight Center, working with the Jet Propulsion Laboratory, has developed and demonstrated a fabrication technique using the VPS process to form anode and cathode sections for a Lorentz force accelerator made from tungsten. Lorentz force accelerators are an attractive form of electric propulsion that provides continuous, high-efficiency propulsion at useful power levels for such applications as orbit transfers or deep space missions. The VPS process is used to deposit refractory metals such as tungsten onto a graphite mandrel of the desired shape. Because tungsten is reactive at high temperatures, it is thermally sprayed in an inert environment where the plasma gun melts and deposits the molten metal powder onto a mandrel. A three-axis robot inside the chamber controls the motion of the plasma spray torch. A graphite mandrel acts as a male mold, forming the required contour and dimensions for the inside surface of the anode or cathode of the accelerator. This paper describes the processing techniques, design considerations, and process development associated with the VPS forming of Lorentz force accelerator components.
Probing the Earth's core with magnetic field observations from Swarm
NASA Astrophysics Data System (ADS)
Finlay, Christopher; Olsen, Nils; Kotsiaros, Stavros; Gillet, Nicolas; Tøffner-Clausen, Lars
2016-07-01
By far the largest part of the Earth's magnetic field is generated by motions taking place within our planet's liquid metal outer core. Variations of this core-generated field thus provide a unique means of probing the dynamics taking place in the deepest reaches of the Earth. In this contribution we present a new high-resolution model of the core-generated magnetic field, and its recent time changes, derived from a dataset that includes more than two years of observations from the Swarm mission. Resulting inferences regarding the underlying core flow, its dynamics, and the nature of the geodynamo process are discussed. The CHAOS-6 geomagnetic field model, covering the interval 1999-2016, is derived from magnetic data collected by the three Swarm satellites, as well as the earlier CHAMP and Oersted satellites, and monthly mean data collected from 160 ground observatories. Advantage is taken of the constellation aspect of the Swarm mission by ingesting both scalar and vector field differences along-track and across-track between the lower pair of Swarm satellites. The internal part of the model consists of a spherical harmonic (SH) expansion, time-dependent for degrees 20 and below. The model coefficients are estimated using a regularized, iteratively reweighted least-squares scheme involving Huber weights. At Earth's surface, CHAOS-6 shows evidence for positive acceleration of the field intensity in 2015 over a broad area around longitude 90°E that is also seen at ground observatories such as Novosibirsk. At the core surface, we are able to map the secular variation (linear trend in the magnetic field) up to SH degree 16. The radial field acceleration at the core surface in 2015 is found to be largest at low latitudes under the India-Southeast Asia region and under northern South America, as well as at high northern latitudes under Alaska and Siberia. Surprisingly, there is also evidence for some acceleration in the central Pacific region, for example near Hawaii, where radial field secular acceleration is observed on either side of a jerk event in 2014. On the other hand, little activity has occurred over the past 17 years in the southern polar region. Maps of the underlying core flow can be derived assuming that field changes result from advective processes, and taking into account the organizing influence of the Coriolis force. The dominant large-scale flow feature is found to be a planetary-scale, anti-cyclonic gyre centered on the Atlantic hemisphere. In addition to this gyre we find evidence for time-dependent eddies at mid-latitudes and oscillating, non-axisymmetric jets in the azimuthal direction at low latitudes.
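As a small illustration of how secular variation (SV) and secular acceleration (SA) relate to a Gauss-coefficient time series, both are time derivatives that can be estimated by finite differences; the g10 values below are invented for the sketch, not CHAOS-6 output.

import numpy as np

years = np.arange(1999.0, 2017.0)
g10 = -29600.0 + 10.0 * (years - 1999.0) + 0.3 * (years - 1999.0) ** 2   # nT (made up)
sv = np.gradient(g10, years)    # secular variation, nT/yr
sa = np.gradient(sv, years)     # secular acceleration, nT/yr^2
print(sv[-1], sa[-1])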
Yu, Yang; Zhang, Xiaojun; Yuan, Jianbo; Li, Fuhua; Chen, Xiaohan; Zhao, Yongzhen; Huang, Long; Zheng, Hongkun; Xiang, Jianhai
2015-01-01
The Pacific white shrimp Litopenaeus vannamei is the dominant crustacean species in global seafood mariculture. Understanding the genome and genetic architecture is useful for deciphering complex traits and accelerating the breeding program in shrimp. In this study, a genome survey was conducted and a high-density linkage map was constructed using a next-generation sequencing approach. The genome survey was used to identify preliminary genome characteristics and to generate a rough reference for linkage map construction. De novo SNP discovery resulted in 25,140 polymorphic markers. A total of 6,359 high-quality markers were selected for linkage map construction based on marker coverage among individuals and read depths. For the linkage map, a total of 6,146 markers spanning 4,271.43 cM were mapped to 44 sex-averaged linkage groups, with an average marker distance of 0.7 cM. An integration analysis linked 5,885 genome scaffolds and 1,504 BAC clones to the linkage map. Based on the high-density linkage map, several QTLs for body weight and body length were detected. This high-density genetic linkage map reveals basic genomic architecture and will be useful for comparative genomics research, genome assembly and genetic improvement of L. vannamei and other penaeid shrimp species. PMID:26503227
Electron injection by whistler waves in non-relativistic shocks
NASA Astrophysics Data System (ADS)
Riquelme, Mario A.; Spitkovsky, Anatoly
2012-04-01
Radio and X-ray observations of shocks in young supernova remnants (SNRs) reveal electron acceleration to non-thermal, ultra-relativistic energies (~10-100 TeV). This acceleration is usually assumed to happen via the diffusive shock acceleration (DSA) mechanism. However, the way in which electrons are initially energized or 'injected' into this acceleration process is an open question and the main focus of this work. We present our study of electron acceleration in nonrelativistic shocks using 2D and 3D particle-in-cell (PIC) plasma simulations. Our simulations show that significant non-thermal acceleration happens due to the growth of oblique whistler waves in the foot of quasi-perpendicular shocks. The obtained electron energy distributions show power law tails with spectral indices up to α ~ 3-4. Also, the maximum energies of the accelerated particles are consistent with the electron Larmor radii being comparable to that of the ions, indicating potential injection into the subsequent DSA process. This injection mechanism requires the shock waves to have fairly low Alfvénic Mach numbers, MA < 20, which is consistent with the theoretical conditions for the growth of whistler waves in the shock foot (MA < (mi/me)^(1/2)). Thus, if this mechanism is the only robust electron injection process at work in SNR shocks, then SNRs that display non-thermal emission must have significantly amplified upstream magnetic fields. Such field amplification is likely achieved by accelerated ions in these environments, so electron and ion acceleration in SNR shocks must be interconnected.
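A one-line check of the quoted whistler-growth bound: the ion-to-electron mass ratio gives (mi/me)^(1/2) of about 43, so shocks with MA < 20 sit comfortably below it.

import math
print(math.sqrt(1836.15))   # ~42.8; M_A < 20 satisfies M_A < (m_i/m_e)^(1/2)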
ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings
NASA Astrophysics Data System (ADS)
Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.
2002-12-01
We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.
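The track-through-nodes architecture described above can be caricatured in a few lines. The toy below is Python (ORBIT itself is C++ with a scripting front end), with invented node classes and parameters; it only mirrors the structural pattern.

import numpy as np

class Node:
    def track(self, bunch):                 # bunch: dict of coordinate arrays
        raise NotImplementedError

class Drift(Node):
    def __init__(self, length): self.L = length
    def track(self, bunch): bunch["x"] += self.L * bunch["xp"]   # linear drift map

class RFGap(Node):
    def __init__(self, dE): self.dE = dE
    def track(self, bunch): bunch["E"] += self.dE                # uniform energy kick (toy)

class Diagnostic(Node):
    def track(self, bunch): print("rms x = %.3e m" % np.std(bunch["x"]))

rng = np.random.default_rng(1)
bunch = {"x": rng.normal(0, 1e-3, 10000), "xp": rng.normal(0, 1e-4, 10000),
         "E": np.full(10000, 1e9)}
for node in [Drift(1.0), RFGap(1e3), Drift(1.0), Diagnostic()]:
    node.track(bunch)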
New seismic sources parameterization in El Salvador. Implications to seismic hazard.
NASA Astrophysics Data System (ADS)
Alonso-Henar, Jorge; Staller, Alejandra; Jesús Martínez-Díaz, José; Benito, Belén; Álvarez-Gómez, José Antonio; Canora, Carolina
2014-05-01
El Salvador is located on the Pacific active margin of Central America, where the subduction of the Cocos Plate under the Caribbean Plate at a rate of ~80 mm/yr is the main seismic source, although the seismic sources located in the Central American Volcanic Arc have been responsible for some of the most damaging earthquakes in El Salvador. The El Salvador Fault Zone (ESFZ) is the main geological structure in El Salvador and accommodates 14 mm/yr of horizontal displacement between the Caribbean Plate and the forearc sliver. The ESFZ is a right-lateral strike-slip fault zone c. 150 km long and 20 km wide. This shear band distributes the deformation among strike-slip faults trending N90º-100ºE and secondary normal faults trending N120º-N170º. The ESFZ is relieved westward by the Jalpatagua Fault and becomes less clear eastward, disappearing at the Golfo de Fonseca. Five sections have been proposed for the whole fault zone. These fault sections are (from west to east): the ESFZ Western Section, San Vicente Section, Lempa Section, Berlin Section and San Miguel Section. Paleoseismic studies carried out in the Berlin and San Vicente sections reveal an important amount of Quaternary deformation and paleoearthquakes up to Mw 7.6. In this study we present 45 capable seismic sources in El Salvador and their preliminary slip rates from geological and GPS data. The detailed GPS results are presented by Staller et al., 2014 in a complementary communication. The calculated preliminary slip rates range from 0.5 to 8 mm/yr for individualized faults within the ESFZ. We calculated maximum magnitudes from the mapped lengths and paleoseismic observations. We propose different earthquake scenarios, including the potential combined rupture of different fault sections of the ESFZ, resulting in maximum earthquake magnitudes of Mw 7.6. We used deterministic models to calculate the acceleration distributions related to the maximum earthquakes of the different proposed scenarios. The spatial distributions of seismic accelerations are compared and calibrated using the February 13, 2001 earthquake as a control earthquake. To explore the sources of historical earthquakes we compare synthetic acceleration maps with the historical earthquakes of March 6, 1719 and June 8, 1917.
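One standard way to turn mapped fault lengths into maximum magnitudes is an empirical rupture-length regression; the sketch below uses Wells and Coppersmith (1994) strike-slip coefficients quoted from memory, so verify against the original paper before real use. A ~150 km rupture of the full ESFZ gives roughly the Mw 7.6 cited above.

import math

def mw_from_srl(length_km, a=5.16, b=1.12):
    # Wells & Coppersmith (1994), strike-slip surface rupture length regression:
    # Mw = a + b*log10(SRL in km); coefficients quoted from memory.
    return a + b * math.log10(length_km)

print(round(mw_from_srl(150.0), 1))   # ~7.6, whole-ESFZ rupture
print(round(mw_from_srl(20.0), 1))    # ~6.6, a single ~20 km section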
González, Ana M.; Yuste-Lisbona, Fernando J.; Saburido, Soledad; Bretones, Sandra; De Ron, Antonio M.; Lozano, Rafael; Santalla, Marta
2016-01-01
Determinacy growth habit and accelerated flowering traits were selected during or after domestication in common bean. Both processes affect several presumed adaptive traits such as the rate of plant production. There is a close association between flowering initiation and vegetative growth; however, interactions among these two crucial developmental processes and their genetic bases remain unexplored. In this study, with the aim to establish the genetic relationships between these complex processes, a multi-environment quantitative trait locus (QTL) mapping approach was performed in two recombinant inbred line populations derived from inter-gene pool crosses between determinate and indeterminate genotypes. Additive and epistatic QTLs were found to regulate flowering time, vegetative growth, and rate of plant production. Moreover, the pleiotropic patterns of the identified QTLs evidenced that regions controlling time to flowering traits, directly or indirectly, are also involved in the regulation of plant production traits. Further QTL analysis highlighted one QTL, on the lower arm of the linkage group Pv01, harboring the Phvul.001G189200 gene, homologous to the Arabidopsis thaliana TERMINAL FLOWER1 (TFL1) gene, which explained up to 32% of phenotypic variation for time to flowering, 66% for vegetative growth, and 19% for rate of plant production. This finding was consistent with previous results, which have also suggested Phvul.001G189200 (PvTFL1y) as a candidate gene for determinacy locus. The information here reported can also be applied in breeding programs seeking to optimize key agronomic traits, such as time to flowering, plant height and an improved reproductive biomass, pods, and seed size, as well as yield. PMID:28082996
Swisher, Jascha D; Sexton, John A; Gatenby, J Christopher; Gore, John C; Tong, Frank
2012-01-01
High-resolution functional MRI is a leading application for very high field (7 Tesla) human MR imaging. Though higher field strengths promise improvements in signal-to-noise ratios (SNR) and BOLD contrast relative to fMRI at 3 Tesla, these benefits may be partially offset by accompanying increases in geometric distortion and other off-resonance effects. Such effects may be especially pronounced with the single-shot EPI pulse sequences typically used for fMRI at standard field strengths. As an alternative, one might consider multishot pulse sequences, which may lead to somewhat lower temporal SNR than standard EPI, but which are also often substantially less susceptible to off-resonance effects. Here we consider retinotopic mapping of human visual cortex as a practical test case by which to compare examples of these sequence types for high-resolution fMRI at 7 Tesla. We performed polar angle retinotopic mapping at each of 3 isotropic resolutions (2.0, 1.7, and 1.1 mm) using both accelerated single-shot 2D EPI and accelerated multishot 3D gradient-echo pulse sequences. We found that single-shot EPI indeed led to greater temporal SNR and contrast-to-noise ratios (CNR) than the multishot sequences. However, additional distortion correction in postprocessing was required in order to fully realize these advantages, particularly at higher resolutions. The retinotopic maps produced by both sequence types were qualitatively comparable, and showed equivalent test/retest reliability. Thus, when surface-based analyses are planned, or in other circumstances where geometric distortion is of particular concern, multishot pulse sequences could provide a viable alternative to single-shot EPI.
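Temporal SNR, the comparison metric above, is simply the voxel-wise temporal mean divided by the temporal standard deviation; a minimal sketch with synthetic data:

import numpy as np

def temporal_snr(timeseries):
    # timeseries: 4-D fMRI array (x, y, z, t)
    m = timeseries.mean(axis=-1)
    s = timeseries.std(axis=-1)
    return np.where(s > 0, m / s, 0.0)

# 100-volume series with 1% fluctuations -> tSNR of about 100
rng = np.random.default_rng(0)
data = 1000.0 + 10.0 * rng.standard_normal((16, 16, 8, 100))
print(temporal_snr(data).mean())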
Evidence for an elastic projection mechanism in the chameleon tongue.
de Groot, Jurriaan H.; van Leeuwen, Johan L.
2004-01-01
To capture prey, chameleons ballistically project their tongues as far as 1.5 body lengths with accelerations of up to 500 m s(-2). At the core of a chameleon's tongue is a cylindrical tongue skeleton surrounded by the accelerator muscle. Previously, the cylindrical accelerator muscle was assumed to power tongue projection directly during the actual fast projection of the tongue. However, high-speed recordings of Chamaeleo melleri and C. pardalis reveal that peak powers of 3000 W kg(-1) are necessary to generate the observed accelerations, which exceed the accelerator muscle's capacity by at least five- to 10-fold. Extrinsic structures might power projection via the tongue skeleton. High-speed fluoroscopy suggests that they contribute less than 10% of the required peak instantaneous power. Thus, the projection power must be generated predominantly within the tongue, and an energy-storage-and-release mechanism must be at work. The key structure in the projection mechanism is probably a cylindrical connective-tissue layer, which surrounds the entoglossal process and was previously suggested to act as lubricating tissue. This tissue layer comprises at least 10 sheaths that envelop the entoglossal process. The outer portion connects anteriorly to the accelerator muscle and the inner portion to the retractor structures. The sheaths contain helical arrays of collagen fibres. Prior to projection, the sheaths are longitudinally loaded by the combined radial contraction and hydrostatic lengthening of the accelerator muscle, at an estimated mean power of 144 W kg(-1) in C. melleri. Tongue projection is triggered as the accelerator muscle and the loaded portions of the sheaths start to slide over the tip of the entoglossal process. The springs relax radially while pushing off the rounded tip of the entoglossal process, making the elastic energy stored in the helical fibres available for a simultaneous forward acceleration of the tongue pad, accelerator muscle and retractor structures. The energy release continues as the multilayered spring slides over the tip of the smooth and lubricated entoglossal process. This sliding-spring theory predicts that the sheaths deliver most of the instantaneous power required for tongue projection. The release power of the sliding tubular springs exceeds the work rate of the accelerator muscle by at least a factor of 10 because the elastic-energy release occurs much faster than the loading process. Thus, we have identified a unique catapult mechanism that is very different from standard engineering designs. Our morphological and kinematic observations, as well as the available literature data, are consistent with the proposed mechanism of tongue projection, although experimental tests of the sheath strain and the lubrication of the entoglossal process are currently beyond our technical scope. PMID:15209111
Boatwright, J.; Bundock, H.; Seekins, L.C.
2006-01-01
We derive and test relations between the Modified Mercalli Intensity (MMI) and the pseudo-acceleration response spectra at 1.0 and 0.3 s - SA(1.0 s) and SA(0.3 s) - in order to map response spectral ordinates for the 1906 San Francisco earthquake. Recent analyses of intensity have shown that MMI ≥ 6 correlates both with peak ground velocity and with response spectra for periods from 0.5 to 3.0 s. We use these recent results to derive a linear relation between MMI and log SA(1.0 s), and we refine this relation by comparing the SA(1.0 s) estimated from Boatwright and Bundock's (2005) MMI map for the 1906 earthquake to the SA(1.0 s) calculated from recordings of the 1989 Loma Prieta earthquake. South of San Jose, the intensity distributions for the 1906 and 1989 earthquakes are remarkably similar, despite the difference in magnitude and rupture extent between the two events. We use recent strong motion regressions to derive a relation between SA(1.0 s) and SA(0.3 s) for a M7.8 strike-slip earthquake that depends on soil type, acceleration level, and source distance. We test this relation by comparing SA(0.3 s) estimated for the 1906 earthquake to SA(0.3 s) calculated from recordings of both the 1989 Loma Prieta and 1994 Northridge earthquakes, as functions of distance from the fault. © 2006, Earthquake Engineering Research Institute.
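The relation used here is linear in log SA; a hedged sketch of fitting and inverting such a relation with np.polyfit (the sample pairs, and hence the coefficients, are invented, not the paper's regression):

import numpy as np

mmi = np.array([6.0, 6.5, 7.0, 7.5, 8.0, 8.5])
sa = np.array([0.08, 0.13, 0.22, 0.35, 0.60, 1.00])   # SA(1.0 s) in g, hypothetical
b, a = np.polyfit(np.log10(sa), mmi, 1)               # MMI = a + b*log10(SA)
sa_from_mmi = 10.0 ** ((mmi - a) / b)                 # inverted to map MMI -> SA
print(a, b, sa_from_mmi)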
Up-to-date Probabilistic Earthquake Hazard Maps for Egypt
NASA Astrophysics Data System (ADS)
Gaber, Hanan; El-Hadidy, Mahmoud; Badawy, Ahmed
2018-04-01
An up-to-date earthquake hazard analysis has been performed for Egypt using a probabilistic seismic hazard approach. For the current study, we use a complete and homogeneous earthquake catalog covering the time period between 2200 BC and 2015 AD. Three seismotectonic models representing the seismic activity in and around Egypt are used. A logic-tree framework is applied to allow for epistemic uncertainty in the declustering parameters, minimum magnitude, seismotectonic setting and ground-motion prediction equations. The hazard analysis is performed on a grid of 0.5° × 0.5° for rock site conditions, for peak ground acceleration (PGA) and spectral accelerations at 0.2-, 0.5-, 1.0- and 2.0-s periods. The hazard is estimated for three return periods (72, 475 and 2475 years), corresponding to 50, 10 and 2% probabilities of exceedance in 50 years. Uniform hazard spectra for the cities of Cairo, Alexandria, Aswan and Nuwbia are constructed. The hazard maps show that the highest ground acceleration values are expected in the northeastern part of Egypt around the Gulf of Aqaba (PGA up to 0.4 g for a return period of 475 years) and in south Egypt around the city of Aswan (PGA up to 0.2 g for a return period of 475 years). The Western Desert of Egypt is characterized by the lowest level of hazard (PGA lower than 0.1 g for a return period of 475 years).
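The stated pairing of return periods with exceedance probabilities follows from the Poisson occurrence model, T = -t/ln(1 - P); a quick check reproduces the 72/475/2475-year values:

import math

def return_period(p_exceed, t_years=50.0):
    return -t_years / math.log1p(-p_exceed)   # T = -t / ln(1 - p)

for p in (0.50, 0.10, 0.02):
    print(p, round(return_period(p)))         # ~72, ~475, ~2475 years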
NASA Astrophysics Data System (ADS)
Legleiter, Justin; Park, Matthew; Cusick, Brian; Kowalewski, Tomasz
2006-03-01
One of the major thrusts in proximal probe techniques is combination of imaging capabilities with simultaneous measurements of physical properties. In tapping mode atomic force microscopy (TMAFM), the most straightforward way to accomplish this goal is to reconstruct the time-resolved force interaction between the tip and surface. These tip-sample forces can be used to detect interactions (e.g., binding sites) and map material properties with nanoscale spatial resolution. Here, we describe a previously unreported approach, which we refer to as scanning probe acceleration microscopy (SPAM), in which the TMAFM cantilever acts as an accelerometer to extract tip-sample forces during imaging. This method utilizes the second derivative of the deflection signal to recover the tip acceleration trajectory. The challenge in such an approach is that with real, noisy data, the second derivative of the signal is strongly dominated by the noise. This problem is solved by taking advantage of the fact that most of the information about the deflection trajectory is contained in the higher harmonics, making it possible to filter the signal by “comb” filtering, i.e., by taking its Fourier transform and inverting it while selectively retaining only the intensities at integer harmonic frequencies. Such a comb filtering method works particularly well in fluid TMAFM because of the highly distorted character of the deflection signal. Numerical simulations and in situ TMAFM experiments on supported lipid bilayer patches on mica are reported to demonstrate the validity of this approach.
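A hedged sketch of the comb-filtering step described above: keep only spectral content at integer harmonics of the drive frequency, then differentiate twice to estimate tip acceleration. The harmonic count, window width, and waveform are illustrative choices.

import numpy as np

def comb_filtered_acceleration(deflection, drive_freq, fs, n_harmonics=20, width=1):
    n = len(deflection)
    spec = np.fft.rfft(deflection)
    keep = np.zeros_like(spec)
    df = fs / n                                        # frequency-bin spacing
    for k in range(1, n_harmonics + 1):
        i = int(round(k * drive_freq / df))            # bin of k-th harmonic
        if i < len(spec):
            keep[max(i - width, 0):i + width + 1] = spec[max(i - width, 0):i + width + 1]
    clean = np.fft.irfft(keep, n)
    return np.gradient(np.gradient(clean, 1.0 / fs), 1.0 / fs)   # d^2 z / dt^2

fs, f0 = 5e6, 50e3
t = np.arange(5000) / fs                               # 1 ms record, 50 drive cycles
z = 1e-9 * np.sin(2 * np.pi * f0 * t) + 2e-10 * np.sin(2 * np.pi * 3 * f0 * t)
z += 1e-11 * np.random.default_rng(0).standard_normal(t.size)   # broadband noise
print(comb_filtered_acceleration(z, f0, fs).std())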
Jiang, Yun; Ma, Dan; Bhat, Himanshu; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L; Setsompop, Kawin; Griswold, Mark A
2017-11-01
The purpose of this study is to accelerate an MR fingerprinting (MRF) acquisition by using a simultaneous multislice method. A multiband radiofrequency (RF) pulse was designed to excite two slices with different flip angles and phases. The signals of the two slices were driven to be as orthogonal as possible. The mixed and undersampled MRF signal was matched to two dictionaries to retrieve T1 and T2 maps of each slice. Quantitative results from the proposed method were validated against gold-standard spin echo methods in a phantom. T1 and T2 maps of in vivo human brain from two simultaneously acquired slices were also compared to the results of the fast imaging with steady-state precession based MRF method (MRF-FISP) with a single-band RF excitation. The phantom results showed that the simultaneous multislice MRF-FISP method quantified the relaxation properties accurately compared to the gold-standard spin echo methods. T1 and T2 values of in vivo brain from the proposed method also matched the results from the normal MRF-FISP acquisition. T1 and T2 values can be quantified at a multiband acceleration factor of two using our proposed acquisition, even with a single-channel receive coil. Further acceleration could be achieved by combining this method with parallel imaging or iterative reconstruction. Magn Reson Med 78:1870-1876, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
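The dictionary-matching step at the heart of MRF can be sketched as a maximum normalized inner product search; in the multislice case each slice is matched against its own dictionary. Array names and shapes here are assumptions.

import numpy as np

def mrf_match(signals, dictionary, t1s, t2s):
    # signals: (n_vox, n_t) measured fingerprints; dictionary: (n_atoms, n_t)
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(np.abs(s @ d.conj().T), axis=1)   # complex-safe correlation
    return t1s[best], t2s[best]

# Toy: three atoms, matched from noisy copies of themselves.
rng = np.random.default_rng(0)
D = rng.standard_normal((3, 500)) + 1j * rng.standard_normal((3, 500))
S = D + 0.1 * rng.standard_normal((3, 500))
print(mrf_match(S, D, np.array([800, 1000, 1200]), np.array([50, 80, 110])))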
Processing and analysis of cardiac optical mapping data obtained with potentiometric dyes
Laughner, Jacob I.; Ng, Fu Siong; Sulkin, Matthew S.; Arthur, R. Martin
2012-01-01
Optical mapping has become an increasingly important tool to study cardiac electrophysiology in the past 20 years. Multiple methods are used to process and analyze cardiac optical mapping data, and no consensus currently exists regarding the optimum methods. The specific methods chosen to process optical mapping data are important because inappropriate data processing can affect the content of the data and thus alter the conclusions of the studies. Details of the different steps in processing optical imaging data, including image segmentation, spatial filtering, temporal filtering, and baseline drift removal, are provided in this review. We also provide descriptions of the common analyses performed on data obtained from cardiac optical imaging, including activation mapping, action potential duration mapping, repolarization mapping, conduction velocity measurements, and optical action potential upstroke analysis. Optical mapping is often used to study complex arrhythmias, and we also discuss dominant frequency analysis and phase mapping techniques used for the analysis of cardiac fibrillation. PMID:22821993
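A minimal sketch of the processing chain reviewed above (spatial filtering, temporal filtering, drift removal, then activation mapping from the maximum upstroke derivative); filter types, orders, and cutoffs vary between labs and are illustrative here.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt, detrend

def preprocess(stack, fs, spatial_sigma=1.0, f_cut=100.0):
    # stack: optical-mapping movie (t, y, x)
    sm = gaussian_filter(stack, sigma=(0, spatial_sigma, spatial_sigma))
    b, a = butter(3, f_cut / (fs / 2.0))      # temporal low-pass
    lp = filtfilt(b, a, sm, axis=0)
    return detrend(lp, axis=0)                # remove linear baseline drift

def activation_map(stack, fs):
    # activation time per pixel from the maximum temporal derivative (dF/dt)
    return np.argmax(np.diff(stack, axis=0), axis=0) / fs   # seconds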
Image processing for optical mapping.
Ravindran, Prabu; Gupta, Aditya
2015-01-01
Optical Mapping is an established single-molecule, whole-genome analysis system that has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single-molecule restriction maps from image datasets of immobilized, restriction-digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.
NASA Technical Reports Server (NTRS)
Foster, John E.
2004-01-01
A plasma accelerator has been conceived for both material-processing and spacecraft-propulsion applications. This accelerator generates and accelerates ions within a very small volume. Because of its compactness, this accelerator could be nearly ideal for primary or station-keeping propulsion for spacecraft having masses between 1 and 20 kg. Because this accelerator is designed to generate beams of ions having energies between 50 and 200 eV, it could also be used for surface modification or activation of thin films.
Accelerating Drug Development: Antiviral Therapies for Emerging Viruses as a Model.
Everts, Maaike; Cihlar, Tomas; Bostwick, J Robert; Whitley, Richard J
2017-01-06
Drug discovery and development is a lengthy and expensive process. Although no single, simple solution can significantly accelerate this process, steps can be taken to avoid unnecessary delays. Using the development of antiviral therapies as a model, we describe options for acceleration that cover target selection, assay development and high-throughput screening, hit confirmation, lead identification and development, animal model evaluations, toxicity studies, regulatory issues, and the general drug discovery and development infrastructure. Together, these steps could result in accelerated timelines for bringing antiviral therapies to market so they can treat emerging infections and reduce human suffering.
Revision of Primary Series Maps
2000-01-01
In 1992, the U.S. Geological Survey (USGS) completed a 50-year effort to provide primary series map coverage of the United States. Many of these maps now need to be updated to reflect the construction of new roads and highways and other changes that have taken place over time. The USGS has formulated a graphic revision plan to help keep the primary series maps current. Primary series maps include 1:20,000-scale quadrangles of Puerto Rico, 1:24,000- or 1:25,000-scale quadrangles of the conterminous United States, Hawaii, and U.S. Territories, and 1:63,360-scale quadrangles of Alaska. The revision of primary series maps from new collection sources is accomplished using a variety of processes. The raster revision process combines the scanned content of paper maps with raster updating technologies. The vector revision process involves the automated plotting of updated vector files. Traditional processes use analog stereoplotters and manual scribing instruments on specially coated map separates. The ability to select from or combine these processes increases the efficiency of the National Mapping Division map revision program.
Susceptibility of materials processing experiments to low-level accelerations
NASA Technical Reports Server (NTRS)
Naumann, R. J.
1981-01-01
The types of materials processing experiments being considered for shuttle can be grouped into four categories: (1) contained solidification experiments; (2) quasi-containerless experiments; (3) containerless experiments; and (4) fluids experiments. Low-level steady accelerations, compensated and uncompensated transient accelerations, and rotation-induced flows are factors that must be considered in the acceleration environment of a space vehicle; their importance depends on the type of experiment being performed. Some control of these factors may be exercised through the location and orientation of the experiment relative to the shuttle and through the orbital vehicle attitude chosen for the mission. The effects of the various residual accelerations can have serious consequences for the control of the experiment and must be factored into the design and operation of the apparatus.
2014-12-11
Cassava (Manihot esculenta Crantz) is a major staple crop in Africa, Asia, and South America, and its starchy roots provide nourishment for 800 million people worldwide. Although native to South America, cassava was brought to Africa 400-500 years ago and is now widely cultivated across sub-Saharan Africa, but it is subject to biotic and abiotic stresses. To assist in the rapid identification of markers for pathogen resistance and crop traits, and to accelerate breeding programs, we generated a framework map for M. esculenta Crantz from reduced representation sequencing [genotyping-by-sequencing (GBS)]. The composite 2412-cM map integrates 10 biparental maps (comprising 3480 meioses) and organizes 22,403 genetic markers on 18 chromosomes, in agreement with the observed karyotype. We used the map to anchor 71.9% of the draft genome assembly and 90.7% of the predicted protein-coding genes. The chromosome-anchored genome sequence will be useful for breeding improvement by assisting in the rapid identification of markers linked to important traits, and in providing a framework for genomic selection-enhanced breeding of this important crop. Copyright © 2015 International Cassava Genetic Map Consortium (ICGMC).
Demonstration Of Fast, Single-Shot Photocathode QE Mapping Method Using Mla Pattern Beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wisniewski, E. E.; Conde, M.; Doran, D. S.
Quantum efficiency (QE) is the chief figure of merit in the characterization of photocathodes. Semiconductor photocathodes, especially when used in high rep-rate photoinjectors, are known to show QE degradation over time and must be replaced. The total QE is the basic diagnostic, which is widely used and easy to obtain. However, a QE map indicating variations of QE across the cathode surface has greater utility; it can quickly diagnose problems of QE inhomogeneity. Most QE mapping techniques require hours to complete and are thus disruptive to a user facility schedule. A fast, single-shot method has been proposed using a micro-lens array (MLA) generated QE map. In this paper we report the implementation of the method at the Argonne Wakefield Accelerator facility. A micro-lens array is used to project an array of beamlets onto the photocathode. The resulting photoelectron beam, in the form of an array of electron beamlets, is imaged at a YAG screen. Four synchronized measurements are made and the results used to produce a QE map of the photocathode.
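Per-spot QE is just emitted electrons per incident photon; a small worked example (the charge, pulse energy, and wavelength are assumed values, not AWA measurements):

import scipy.constants as const

def quantum_efficiency(charge_C, laser_energy_J, wavelength_m):
    n_electrons = charge_C / const.e
    n_photons = laser_energy_J / (const.h * const.c / wavelength_m)
    return n_electrons / n_photons

# 1 nC from 10 uJ of 248 nm UV light -> QE of about 5e-4
print(quantum_efficiency(1e-9, 10e-6, 248e-9))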
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
The purpose of this study is to propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole-field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
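A hedged sketch of the LLS idea: integrating the dual-input single-compartment ODE, dC/dt = ka*Ca + kp*Cp - k2*C, makes the model linear in (ka, kp, k2), so each voxel reduces to an ordinary least-squares solve. Delay compensation is omitted and all names are illustrative, not the paper's implementation.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def fit_dual_input_lls(t, C, Ca, Cp):
    # integral form: C(t) = ka*int(Ca) + kp*int(Cp) - k2*int(C)
    Ia = cumulative_trapezoid(Ca, t, initial=0.0)
    Ip = cumulative_trapezoid(Cp, t, initial=0.0)
    params = np.empty((C.shape[0], 3))
    for v in range(C.shape[0]):                     # C: (n_vox, n_t) tissue curves
        Ic = cumulative_trapezoid(C[v], t, initial=0.0)
        A = np.column_stack([Ia, Ip, -Ic])
        params[v], *_ = np.linalg.lstsq(A, C[v], rcond=None)
    return params                                   # columns: ka, kp, k2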
NASA Technical Reports Server (NTRS)
Hick, P.; Jackson, B. V.; Schwenn, R.
1992-01-01
We display the electron Thomson scattering intensity of the inner heliosphere as observed by the zodiacal light photometers on board the Helios spacecraft in the form of synoptic maps. The technique extrapolates the brightness information from each photometer sector near the Sun and constructs a latitude/longitude map at a given solar height. These data are unique in that they give a determination of heliospheric structures out of the ecliptic, above the primary region of solar wind acceleration. The spatial extent of bright, co-rotating heliospheric structures is readily observed in the data north and south of the ecliptic plane, where the Helios photometer coverage is most complete. Because the technique has been applied to the complete Helios data set from 1974 to 1985, we can observe the change in our synoptic maps with solar cycle. Bright structures are concentrated near the heliospheric equator at solar minimum, while at solar maximum bright structures are found at far higher heliographic latitudes. A comparison of these maps with other forms of synoptic data is shown for two available intervals.
In-Storage Embedded Accelerator for Sparse Pattern Processing
2016-08-13
Jun, Sang-Woo; Nguyen, Huy T.; Gadepally, Vijay; Arvind (MIT Lincoln Laboratory)
...performance of RAM disk. Since this configuration offloads most of the processing onto the FPGA, the host software consists of only two threads... [Fig. 13: Document Processed vs CPU Threads] ...BlueDBM efficiency comes from our in-store processing paradigm that uses the FPGA...
Coaching versus Direct Service Models for University Training to Accelerated Schools.
ERIC Educational Resources Information Center
Kirby, Peggy C.; Meza, James, Jr.
This paper examines the changing roles and relationships of schools, central offices, and university facilitators at 11 schools that implemented the nationally recognized Accelerated Schools process. The schools joined the Louisiana Accelerated Schools Network in the summer of 1994. The paper begins with an overview of the Accelerated Schools…
42 CFR 484.245 - Accelerated payments for home health agencies.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Recovery of payment. Recovery of the accelerated payment is made by recoupment as HHA bills are processed... § 484.245 Accelerated payments for home health agencies. (a) General rule...
42 CFR 484.245 - Accelerated payments for home health agencies.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Recovery of payment. Recovery of the accelerated payment is made by recoupment as HHA bills are processed... § 484.245 Accelerated payments for home health agencies. (a) General rule...
Collective acceleration of ions in a system with an insulated anode
NASA Astrophysics Data System (ADS)
Bystritskii, V. M.; Didenko, A. N.; Krasik, Ya. E.; Lopatin, V. S.; Podkatov, V. I.
1980-11-01
An investigation was made of the processes of collective acceleration of protons in vacuum in a system with an insulated anode and trans-anode electrodes, which were insulated or grounded, in the high-current Tonus and Vera electron accelerators. The influence of external conditions and of the parameters of the electron beam on the efficiency of the acceleration processes was investigated. Experiments were carried out in which protons were accelerated in a system with trans-anode electrodes. A study was made of the influence of a charge prepulse and of the number of trans-anode electrodes on the energy of the accelerated protons. A system with a single anode produced Np = 10^14 protons of energy 2Ee < Ep < 3Ee. Suppression of the charge prepulse increased the proton energy to (6-8)Ee, and the yield was then 10^13. The maximum proton energy of 14Ee was obtained in a system with three trans-anode electrodes. A possible mechanism of proton acceleration was analyzed. The results obtained were compared with those of other investigations. Ways of increasing the efficiency of this acceleration method were considered.
Astrophysical particle acceleration mechanisms in colliding magnetized laser-produced plasmas
Fox, W.; Park, J.; Deng, W.; ...
2017-08-11
Significant particle energization is observed to occur in numerous astrophysical environments, and in the standard models, this acceleration occurs alongside energy conversion processes including collisionless shocks or magnetic reconnection. Recent platforms for laboratory experiments using magnetized laser-produced plasmas have opened opportunities to study these particle acceleration processes in the laboratory. Through fully kinetic particle-in-cell simulations, we investigate acceleration mechanisms in experiments with colliding magnetized laser-produced plasmas, with geometry and parameters matched to recent high-Mach number reconnection experiments with externally controlled magnetic fields. 2-D simulations demonstrate significant particle acceleration with three phases of energization: first, a “direct” Fermi acceleration driven by approaching magnetized plumes; second, x-line acceleration during magnetic reconnection of anti-parallel fields; and finally, an additional Fermi energization of particles trapped in contracting and relaxing magnetic islands produced by reconnection. Furthermore, the relative effectiveness of these mechanisms depends on plasma and magnetic field parameters of the experiments.
Venus - Ishtar gravity anomaly
NASA Technical Reports Server (NTRS)
Sjogren, W. L.; Bills, B. G.; Mottinger, N. A.
1984-01-01
The gravity anomaly associated with Ishtar Terra on Venus is characterized, comparing line-of-sight acceleration profiles derived by differentiating Pioneer Venus Orbiter Doppler residual profiles with an Airy-compensated topographic model. The results are presented in graphs and maps, confirming the preliminary findings of Phillips et al. (1979). The isostatic compensation depth is found to be 150 ± 30 km.
ERIC Educational Resources Information Center
Smith, Regina O.
2014-01-01
Research into the best practices for basic skills education, national bridge programs, the new GED® assessment, and accelerated developmental education indicated that contextualized instruction was most effective when preparing adult literacy students for college and work. Nevertheless, "remedial pedagogy" with a sole focus on the…
Jupiter radio bursts and particle acceleration
NASA Technical Reports Server (NTRS)
Desch, Michael D.
1994-01-01
Particle acceleration processes are important in understanding many of the Jovian radio and plasma wave emissions. However, except for the high-energy electrons that generate synchrotron emission following inward diffusion from the outer magnetosphere, acceleration processes in Jupiter's magnetosphere and between Jupiter and Io are poorly understood. We discuss very recent observations from the Ulysses spacecraft of two new Jovian radio and plasma wave emissions in which particle acceleration processes are important and have been addressed directly by complementary investigations. First, radio bursts known as quasi-periodic bursts have been observed in close association with a population of highly energetic electrons. Second, a population of much lower energy (keV range) electrons on auroral field lines can be shown to be responsible for the first observation of a Jovian plasma wave emission known as auroral hiss.
Kujur, Alice; Upadhyaya, Hari D.; Shree, Tanima; Bajaj, Deepak; Das, Shouvik; Saxena, Maneesha S.; Badoni, Saurabh; Kumar, Vinod; Tripathi, Shailesh; Gowda, C. L. L.; Sharma, Shivali; Singh, Sube; Tyagi, Akhilesh K.; Parida, Swarup K.
2015-01-01
We discovered 26,785 and 16,573 high-quality SNPs differentiating the two parental genotypes of a RIL mapping population using a reference desi and kabuli genome-based GBS assay. Of these, 3625 and 2177 SNPs were integrated into the eight desi and kabuli chromosomes, respectively, to construct ultra-high-density (0.20-0.37 cM) intra-specific chickpea genetic linkage maps. One of these constructed high-resolution genetic maps enabled identification of 33 major genomic regions harbouring 35 robust QTLs (PVE: 17.9-39.7%) associated with three agronomic traits, which were mapped within <1 cM mean marker intervals on desi chromosomes. The extended LD (linkage disequilibrium) decay (~15 cM) in the chromosomes of these genetic maps encouraged us to use a rapid integrated approach (comparative QTL mapping, QTL-region-specific haplotype/LD-based trait association analysis, expression profiling and gene haplotype-based association mapping) rather than a traditional QTL map-based cloning method to narrow down one major robust seed weight (SW) QTL region. This delineated favourable natural allelic variants and a superior haplotype containing one seed-specific candidate embryo-defective gene regulating SW in chickpea. The ultra-high-resolution genetic maps, the QTL/gene and allele/haplotype-related genomic information generated, and the integrated strategy developed for rapid QTL/gene identification have the potential to expedite genomics-assisted breeding applications in crop plants, including chickpea, for their genetic enhancement. PMID:25942004
Advancing precision cosmology with 21 cm intensity mapping
NASA Astrophysics Data System (ADS)
Masui, Kiyoshi Wesley
In this thesis we make progress toward establishing the observational method of 21 cm intensity mapping as a sensitive and efficient means of mapping the large-scale structure of the Universe. In Part I we undertake theoretical studies to better understand the potential of intensity mapping. This includes forecasting the ability of intensity mapping experiments to constrain alternative explanations to dark energy for the Universe's accelerated expansion. We also consider how 21 cm observations of the neutral gas in the early Universe (after recombination but before reionization) could be used to detect primordial gravitational waves, thus providing a window into cosmological inflation. Finally, we show that scientifically interesting measurements could in principle be performed in the near term using intensity mapping with existing telescopes, in pilot surveys or prototypes for larger dedicated surveys. Part II describes observational efforts to perform some of the first measurements using 21 cm intensity mapping. We develop a general data analysis pipeline for analyzing intensity mapping data from single-dish radio telescopes. We then apply the pipeline to observations using the Green Bank Telescope. By cross-correlating the intensity mapping survey with a traditional galaxy redshift survey we put a lower bound on the amplitude of the 21 cm signal. The auto-correlation provides an upper bound on the signal amplitude, and we thus constrain the signal from both above and below. This pilot survey represents a pioneering effort in establishing 21 cm intensity mapping as a probe of the Universe.
Nunes, Rita G; Hajnal, Joseph V
2018-06-01
Point spread function (PSF) mapping enables estimation of the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed, requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data were acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to decrease as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.
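The Fourier shift theorem underlying the phase-slope idea is easy to demonstrate outside the MRI setting: a displaced PSF acquires a linear spectral phase whose slope encodes the displacement. A minimal NumPy sketch on a synthetic Gaussian PSF (the paper's sampling patterns, units, and peak-search details are not reproduced here):

```python
import numpy as np

def shift_from_phase_slope(psf, dx=1.0):
    # Shift theorem: a PSF displaced by s has a spectrum with phase -2*pi*k*s,
    # so the displacement is the slope of the unwrapped spectral phase.
    n = psf.size
    spec = np.fft.fft(psf)
    k = np.fft.fftfreq(n, d=dx)
    m = n // 4                          # low frequencies only, where SNR is high
    phase = np.unwrap(np.angle(spec[1:m]))
    slope = np.polyfit(k[1:m], phase, 1)[0]
    return -slope / (2 * np.pi)

# Synthetic check: Gaussian PSF centred 10.25 samples from the origin.
x = np.arange(128.0)
psf = np.exp(-0.5 * ((x - 10.25) / 2.0) ** 2)
print(shift_from_phase_slope(psf))      # ~10.25
```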
Developing a Hadoop-based Middleware for Handling Multi-dimensional NetCDF
NASA Astrophysics Data System (ADS)
Li, Z.; Yang, C. P.; Schnase, J. L.; Duffy, D.; Lee, T. J.
2014-12-01
Climate observations and model simulations are generating vast amounts of climate data, and these data are accumulating at a rapid pace. Effectively managing and analyzing these data is essential for climate change studies. Hadoop, a distributed storage and processing framework for large data sets, has attracted increasing attention for dealing with the Big Data challenge. The maturity of Infrastructure as a Service (IaaS) cloud computing further accelerates the adoption of Hadoop for solving Big Data problems. However, Hadoop is designed to process unstructured data such as texts, documents and web pages, and cannot effectively handle scientific data formats such as array-based NetCDF files and other binary formats. In this paper, we propose to build a Hadoop-based middleware for transparently handling big NetCDF data by 1) designing a distributed climate data storage mechanism based on a POSIX-enabled parallel file system to enable parallel big data processing with MapReduce, as well as to support data access by other systems; 2) modifying the Hadoop framework to transparently process NetCDF data in parallel without sequencing or converting the data into other file formats or loading them into HDFS; and 3) seamlessly integrating Hadoop, cloud computing and climate data in a highly scalable and fault-tolerant framework.
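Hadoop itself is not easily excerpted, but the map-reduce pattern the middleware applies to array data can be sketched in a few lines. In the toy version below, Python's multiprocessing stands in for the Hadoop runtime; the file and variable names are hypothetical, and only the netCDF4 `Dataset` API is real:

```python
# Toy stand-in for the middleware's MapReduce pattern: split a NetCDF
# variable along time into chunks, "map" a per-chunk reduction in parallel,
# then "reduce" the partial results into a global statistic.
from multiprocessing import Pool
import numpy as np
from netCDF4 import Dataset  # pip install netCDF4

PATH, VAR, NCHUNKS = "climate.nc", "air_temperature", 8   # hypothetical names

def chunk_mean(bounds):
    t0, t1 = bounds
    with Dataset(PATH) as nc:             # each worker opens its own handle
        block = nc.variables[VAR][t0:t1]  # reads only this time slice
    return block.sum(), block.size

if __name__ == "__main__":
    with Dataset(PATH) as nc:
        nt = nc.variables[VAR].shape[0]
    edges = np.linspace(0, nt, NCHUNKS + 1, dtype=int)
    with Pool() as pool:                                   # "map" phase
        parts = pool.map(chunk_mean, list(zip(edges[:-1], edges[1:])))
    total, count = map(sum, zip(*parts))                   # "reduce" phase
    print("global mean:", total / count)
```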
Investigation of the aerothermodynamics of hypervelocity reacting flows in the ram accelerator
NASA Technical Reports Server (NTRS)
Hertzberg, A.; Bruckner, A. P.; Mattick, A. T.; Knowlen, C.
1992-01-01
New diagnostic techniques for measuring the high-pressure flow fields associated with high-velocity ram accelerator propulsive modes were experimentally investigated. Individual propulsive modes are distinguished by their operating Mach number range and the manner in which the combustion process is initiated and stabilized. Operation of the thermally choked ram accelerator mode begins by injecting the projectile into the accelerator tube at a prescribed entrance velocity by means of a conventional light gas gun. A specially designed obturator, which is used to seal the bore of the gun, plays a key role in the ignition of the propellant gases in the subsonic combustion mode of the ram accelerator. Once ignited, the combustion process travels with the projectile and releases enough heat to thermally choke the flow within several tube diameters behind it, thereby stabilizing a high-pressure zone at the rear of the projectile. When the accelerating projectile approaches the Chapman-Jouguet detonation speed of the propellant mixture, the combustion region is observed to move up onto the afterbody of the projectile as the pressure field evolves into a distinctively different form that implies the presence of supersonic combustion processes. Eventually, a high enough Mach number is reached that the ram effect is sufficient to cause the combustion process to occur entirely on the body. Propulsive cycles utilizing on-body heat release can be established either by continuously accelerating the projectile in a single propellant mixture from low initial in-tube Mach numbers (M < 4) or by injecting the projectile at a speed above the propellant's Chapman-Jouguet detonation speed. The results of experimental and theoretical explorations of ram accelerator gas dynamic phenomena and the effectiveness of the new diagnostic techniques are presented in this report.
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Johnson, Seth R.; Remec, Igor
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep-penetration shielding and an unshielded beamline, and a desire to obtain well-converged, nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight-window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance-function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
Plasma Radiation and Acceleration Effectiveness of CME-driven Shocks
NASA Astrophysics Data System (ADS)
Gopalswamy, N.; Schmidt, J. M.
2008-05-01
CME-driven shocks are effective radio radiation generators and accelerators of Solar Energetic Particles (SEPs). We present simulated 3-D time-dependent radio maps of second-order plasma radiation generated by CME-driven shocks. The CME with its shock is simulated with the 3-D BATS-R-US CME model developed at the University of Michigan. The radiation is simulated using a kinetic plasma model that includes shock drift acceleration of electrons and stochastic growth theory of Langmuir waves. We find that in a realistic 3-D environment of magnetic field and solar wind outflow of the Sun, the CME-driven shock shows a detailed spatial structure of the density, which is responsible for the fine structure of type II radio bursts. We also show realistic 3-D reconstructions of the magnetic cloud field of the CME, which is accelerated outward by magnetic buoyancy forces in the diverging magnetic field of the Sun. The CME-driven shock is reconstructed by tomography using the maximum jump in the gradient of the entropy. In the vicinity of the shock we determine the Alfven speed of the plasma. This speed profile controls how steep the shock can grow and how stable the shock remains while propagating away from the Sun. Only a steep shock can provide effective particle acceleration.
Superconducting gravity gradiometer and a test of inverse square law
NASA Technical Reports Server (NTRS)
Moody, M. V.; Paik, Ho Jung
1989-01-01
The equivalence principle prohibits the distinction of gravity from acceleration by a local measurement. However, by making a differential measurement of acceleration over a baseline, platform accelerations can be cancelled and gravity gradients detected. In an in-line superconducting gravity gradiometer, this differencing is accomplished with two spring-mass accelerometers in which the proof masses are confined to motion in a single degree of freedom and are coupled together by superconducting circuits. Platform motions appear as common-mode accelerations and are cancelled by adjusting the ratio of two persistent currents in the sensing circuit. The sensing circuit is connected to a commercial SQUID amplifier to sense changes in the persistent currents generated by differential accelerations, i.e., gravity gradients. A three-axis gravity gradiometer is formed by mounting six accelerometers on the faces of a precision cube, with the accelerometers on opposite faces of the cube forming one of three in-line gradiometers. A dedicated satellite mission for mapping the earth's gravity field is an important goal. Additional scientific goals are a test of the inverse square law to a part in 10^10 at 100 km, and a test of the Lense-Thirring effect by detecting the relativistic gravitomagnetic terms in the gravity gradient tensor of the earth.
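The differencing principle can be stated in a few lines of arithmetic: both accelerometers see the same platform acceleration, but gravity differs across the baseline, so subtraction cancels the common mode. A toy numerical illustration (the gradient value approximates Earth's vertical gravity gradient; everything else is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 0.2                                   # m, accelerometer baseline
Gamma = 3.1e-6                            # 1/s^2, ~Earth's vertical gravity gradient
a_platform = 1e-3 * rng.standard_normal(100_000)  # common-mode platform shaking
a1 = a_platform                           # accelerometer 1
a2 = a_platform + Gamma * L               # accelerometer 2, a baseline L away
print((a2 - a1).mean() / L)               # recovers Gamma; the shaking cancels
```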
Searching for a Link Between Suprathermal Ions and Solar Wind Parameters During Quiet Times.
NASA Astrophysics Data System (ADS)
Nickell, J.; Desai, M. I.; Dayeh, M. A.
2017-12-01
The acceleration processes that suprathermal particles undergo remain largely ambiguous. The two prevailing classes of acceleration process are: 1) continuous acceleration in interplanetary (IP) space due to i) bulk velocity fluctuations (e.g., Fahr et al. 2012), ii) magnetic compressions (e.g., Fisk and Gloeckler 2012), iii) magnetic field waves and turbulence (e.g., Zhang and Lee 2013), and iv) reconnection between magnetic islands (e.g., Drake et al. 2014); and 2) discrete acceleration occurring in discrete solar events such as CIRs, CME-driven shocks, and flares (e.g., Reames 1999, Desai et al. 2008). Using data from ACE/ULEIS during solar cycles 23 and 24 (1997-present), we examine the solar wind and magnetic field parameters during quiet times (e.g., Dayeh et al. 2017) in an attempt to gain insight into the acceleration processes of the suprathermal particle population. In particular, we look for compression regions by performing comparative studies between solar wind and magnetic field parameters during quiet times in interplanetary space.
NASA Astrophysics Data System (ADS)
De Becker, Michaël; Blomme, Ronny; Micela, Giusi; Pittard, Julian M.; Rauw, Gregor; Romero, Gustavo E.; Sana, Hugues; Stevens, Ian R.
2009-05-01
Several colliding-wind massive binaries are known to be non-thermal emitters in the radio domain. This constitutes strong evidence that an efficient particle acceleration process is at work in these objects. The acceleration mechanism is most probably the Diffusive Shock Acceleration (DSA) process in the presence of strong hydrodynamic shocks produced by the colliding winds. In order to investigate the physics of this particle acceleration, we initiated a multiwavelength campaign covering a large part of the electromagnetic spectrum. In this context, the detailed study of the hard X-ray emission from these sources in the SIMBOL-X bandpass constitutes a crucial element for probing this still poorly known topic of astrophysics. It should be noted that colliding-wind massive binaries should be considered very valuable targets for the investigation of particle acceleration in a similar way to supernova remnants, but in a different region of the parameter space.
NASA Technical Reports Server (NTRS)
Thompson, J. M.; Russell, J. W.; Blanchard, R. C.
1987-01-01
This report presents a process for extracting the aerodynamic accelerations of the Shuttle Orbiter Vehicle from the High Resolution Accelerometer Package (HiRAP) flight data during reentry. The methods for obtaining low-level aerodynamic accelerations, principally in the rarefied flow regime, are applied to 10 Orbiter flights. The extraction process is presented using data obtained from Space Transportation System Flight 32 (Mission 61-C) as a typical example. This process involves correcting the HiRAP measurements for the effects of temperature bias and instrument offset from the Orbiter center of gravity, and removing acceleration data during times they are affected by thruster firings. The corrected data are then made continuous and smooth and are further enhanced by refining the temperature bias correction and removing effects of the auxiliary power unit actuation. The resulting data are the current best estimate of the Orbiter aerodynamic accelerations during reentry and will be used for further analyses of the Orbiter aerodynamics and the upper atmosphere characteristics.
A New Network-Based Approach for the Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Alessandro, C.; Zollo, A.; Colombelli, S.; Elia, L.
2017-12-01
Here we propose a new method which allows an early warning to be issued based upon real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The system includes techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. For stations providing high-quality data, the characteristic P-wave period (τc) and the P-wave displacement, velocity and acceleration amplitudes (Pd, Pv and Pa) are jointly measured on a progressively expanded P-wave time window. The evolutionary estimates of these parameters at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (IMM), and by interpolating the measured and predicted P-wave amplitudes on a dense spatial grid that includes the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and for the spatial variability of strong ground motion related to crustal wave propagation and site amplification. We have tested this system by retrospective analysis of three earthquakes: the 2016 Italy Mw 6.5, the 2008 Iwate-Miyagi Mw 6.9 and the 2011 Tohoku Mw 9.0 events. Source parameter characterization is stable and reliable, and the intensity maps show extended-source effects consistent with kinematic fracture models of the events.
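As a concrete rendering of the parameters named above, the sketch below computes a characteristic period and peak displacement from the first seconds of a P-wave velocity record. It follows a common EEW formulation, tau_c = 2*pi*sqrt(integral of u^2 / integral of u'^2); the system's exact filtering, units, and regression coefficients are not given in the abstract and are not reproduced:

```python
import numpy as np

def tau_c_and_pd(vel, dt, window=3.0):
    """Characteristic period tau_c and peak displacement Pd from the first
    `window` seconds of P-wave velocity data (simplified: mean removal in
    place of the causal band-pass filtering used in practice)."""
    n = int(window / dt)
    v = vel[:n] - vel[:n].mean()
    u = np.cumsum(v) * dt                       # displacement by integration
    tau_c = 2 * np.pi * np.sqrt((u**2).sum() / (v**2).sum())
    return tau_c, np.abs(u).max()

# Synthetic onset with a 1 s dominant period, sampled at 100 Hz:
dt = 0.01
t = np.arange(0.0, 3.0, dt)
vel = 0.02 * np.sin(2 * np.pi * t) * np.exp(-t)  # m/s
tau_c, pd = tau_c_and_pd(vel, dt)
print(f"tau_c ~ {tau_c:.2f} s, Pd ~ {pd * 100:.2f} cm")
```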
An open repository of earthquake-triggered ground-failure inventories
Schmitt, Robert G.; Tanyas, Hakan; Nowicki Jessee, M. Anna; Zhu, Jing; Biegel, Katherine M.; Allstadt, Kate E.; Jibson, Randall W.; Thompson, Eric M.; van Westen, Cees J.; Sato, Hiroshi P.; Wald, David J.; Godt, Jonathan W.; Gorum, Tolga; Xu, Chong; Rathje, Ellen M.; Knudsen, Keith L.
2017-12-20
Earthquake-triggered ground failure, such as landsliding and liquefaction, can contribute significantly to losses, but our current ability to accurately include them in earthquake-hazard analyses is limited. The development of robust and widely applicable models requires access to numerous inventories of ground failures triggered by earthquakes that span a broad range of terrains, shaking characteristics, and climates. We present an openly accessible, centralized earthquake-triggered ground-failure inventory repository in the form of a ScienceBase Community to provide open access to these data with the goal of accelerating research progress. The ScienceBase Community hosts digital inventories created by both U.S. Geological Survey (USGS) and non-USGS authors. We present the original digital inventory files (when available) as well as an integrated database with uniform attributes. We also summarize the mapping methodology and level of completeness as reported by the original author(s) for each inventory. This document describes the steps taken to collect, process, and compile the inventories and the process for adding additional ground-failure inventories to the ScienceBase Community in the future.
Precision gravity measurement utilizing Accelerex vibrating beam accelerometer technology
NASA Astrophysics Data System (ADS)
Norling, Brian L.
Tests run using Sundstrand vibrating beam accelerometers to sense microgravity are described. Lunar-solar tidal effects were used as a highly predictable signal which varies by approximately 200 billionths of the full-scale gravitation level. Test runs of 48-h duration were used to evaluate stability, resolution, and noise. Test results on the Accelerex accelerometer show accuracies suitable for precision applications such as gravity mapping and gravity density logging. The test results indicate that Accelerex technology, even with an instrument design and signal processing approach not optimized for microgravity measurement, can achieve 48-nano-g (1 sigma) or better accuracy over a 48-h period. This value includes contributions from instrument noise and random walk, combined bias and scale factor drift, and thermal modeling errors as well as external contributions from sampling noise, test equipment inaccuracies, electrical noise, and cultural noise induced acceleration.
Programming languages for synthetic biology.
Umesh, P; Naveen, F; Rao, Chanchala Uma Maheswara; Nair, Achuthsankar S
2010-12-01
Against the backdrop of accelerated efforts to create synthetic organisms, the nature and scope of an ideal programming language for scripting synthetic organisms in silico has been receiving increasing attention. A few programming languages for synthetic biology capable of defining, constructing, networking, editing and delivering genome-scale models of cellular processes have recently been attempted. All of these represent important points in a spectrum of possibilities. This paper introduces Kera, a state-of-the-art programming language for synthetic biology which is arguably ahead of similar languages or tools such as GEC, Antimony and GenoCAD. Kera is a full-fledged object-oriented programming language supported by a biopart rule library named Samhita, which captures knowledge regarding the interaction of genome components and catalytic molecules. Prominent features of the language are demonstrated through a toy example, and a road map for the future development of Kera is also presented.
Imaging of nonlocal hot-electron energy dissipation via shot noise.
Weng, Qianchun; Komiyama, Susumu; Yang, Le; An, Zhenghua; Chen, Pingping; Biehs, Svend-Age; Kajihara, Yusuke; Lu, Wei
2018-05-18
In modern microelectronic devices, hot electrons accelerate, scatter, and dissipate energy in nanoscale dimensions. Despite recent progress in nanothermometry, direct real-space mapping of hot-electron energy dissipation is challenging because existing techniques are restricted to probing the lattice rather than the electrons. We realize electronic nanothermometry by measuring local current fluctuations, or shot noise, associated with ultrafast hot-electron kinetic processes (~21 terahertz). Exploiting a scanning and contact-free tungsten tip as a local noise probe, we directly visualize hot-electron distributions before their thermal equilibration with the host gallium arsenide/aluminium gallium arsenide crystal lattice. With nanoconstriction devices, we reveal unexpected nonlocal energy dissipation at room temperature, which is reminiscent of the ballistic transport of low-temperature quantum conductors.
Landslide early warning system prototype with GIS analysis indicates by soil movement and rainfall
NASA Astrophysics Data System (ADS)
Artha, Y.; Julian, E. S.
2018-01-01
The aim of this paper is the development and testing of a landslide early warning system. The system uses accelerometers as ground-movement and tilt-sensing devices, together with a water flow sensor. A microcontroller is used to process the input signals and activate the alarm, and an LCD displays the acceleration along the x, y and z axes. When the soil moved or shifted and rainfall reached 100 mm/day, the alarm rang and a signal was sent to the monitoring center via a telemetry system. Data-logging information and GIS spatial data can be monitored remotely as tables and graphics, as well as in the form of a geographical map, with the help of a web-GIS interface. The system was tested at Kampung Gerendong, Desa Putat Nutug, Kecamatan Ciseeng, Kabupaten Bogor. This area has a cumulative score of 3.15, which means it is vulnerable to landslides. The results show that the early warning system worked as planned.
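The alarm logic described can be condensed into a few lines. The sketch below is a schematic stand-in for the microcontroller firmware: the sensor-reading and alert functions are hypothetical hooks, the rainfall threshold is the 100 mm/day figure from the paper, and the tilt threshold is an assumption:

```python
import math
import time

TILT_THRESHOLD_DEG = 5.0   # assumed tilt change indicating soil movement
RAIN_THRESHOLD = 100.0     # mm/day, from the paper

def tilt_deg(ax, ay, az):
    """Tilt of the z-axis from vertical, from static accelerometer output."""
    return math.degrees(math.acos(az / math.sqrt(ax * ax + ay * ay + az * az)))

def monitor(read_accel, read_rain_mm_day, send_alert, baseline_deg):
    # Raise the alarm only when ground movement and heavy rain coincide.
    while True:
        ax, ay, az = read_accel()
        moved = abs(tilt_deg(ax, ay, az) - baseline_deg) > TILT_THRESHOLD_DEG
        if moved and read_rain_mm_day() >= RAIN_THRESHOLD:
            send_alert("Landslide warning: ground movement plus heavy rain")
        time.sleep(1.0)
```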
CUDA Fortran acceleration for the finite-difference time-domain method
NASA Astrophysics Data System (ADS)
Hadi, Mohammed F.; Esmaeili, Seyed A.
2013-05-01
A detailed description of programming the three-dimensional finite-difference time-domain (FDTD) method to run on graphical processing units (GPUs) using CUDA Fortran is presented. Two FDTD-to-CUDA thread-block mapping designs are investigated and their performances compared. Comparative assessment of trade-offs between GPU's shared memory and L1 cache is also discussed. This presentation is for the benefit of FDTD programmers who work exclusively with Fortran and are reluctant to port their codes to C in order to utilize GPU computing. The derived CUDA Fortran code is compared with an optimized CPU version that runs on a workstation-class CPU to present a realistic GPU to CPU run time comparison and thus help in making better informed investment decisions on FDTD code redesigns and equipment upgrades. All analyses are mirrored with CUDA C simulations to put in perspective the present state of CUDA Fortran development.
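The paper's CUDA Fortran kernels are not reproduced here, but the stencil each GPU thread evaluates is the standard Yee update. A one-dimensional NumPy rendering of that update loop (illustrative only; the paper treats the full three-dimensional case, and the thread-block mapping question is how cells are assigned to threads):

```python
import numpy as np

nz, nsteps = 200, 400
c, dz = 299_792_458.0, 1e-3
dt = 0.99 * dz / c                        # Courant-stable time step
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
Ex, Hy = np.zeros(nz), np.zeros(nz - 1)   # staggered Yee grid

for n in range(nsteps):
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])           # H-field update
    Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])    # E-field update
    Ex[nz // 2] += np.exp(-((n - 40) / 12.0) ** 2)       # soft Gaussian source

print("peak |Ex| =", np.abs(Ex).max())
```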
Colligan, Lacey; Anderson, Janet E; Potts, Henry W W; Berman, Jonathan
2010-01-07
Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover, there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map, sequential or hierarchical, affects healthcare practitioners' judgments. A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counter-balanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram compared with the sequential diagram, and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In quality improvement work it is important to carefully consider the type of process map to be used, and to consider using more than one map to ensure that different aspects of the process are captured.
Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andonov, Rumen; Djidjev, Hristo Nikolov; Klau, Gunnar W.
In this paper, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein representations. Having a metric in that space allows one to avoid pairwise comparisons on the entire database and, thus, to significantly accelerate exploring the protein space compared to no-metric spaces. We show on a gold standard superfamily classification benchmark set of 6759 proteins that our exact k-nearest neighbor (k-NN) scheme classifies up to 224 out of 236 queries correctly, and on a larger, extended version of the benchmark with 60,850 additional structures, up to 1361 out of 1369 queries. Our k-NN classification thus provides a promising approach for the automatic classification of protein structures based on flexible contact map overlap alignments.
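What the metric property buys in practice is triangle-inequality pruning during nearest-neighbor search. A generic sketch (the `dist` argument stands in for the expensive max-CMO computation; a real index would precompute pivot distances once per database rather than per query):

```python
def nearest(query, database, dist, pivot):
    """1-NN search pruned by the triangle inequality:
    |d(q, pivot) - d(pivot, x)| <= d(q, x), so any candidate whose lower
    bound already exceeds the current best distance is skipped unexamined."""
    dq_p = dist(query, pivot)
    piv = [dist(pivot, x) for x in database]   # precomputable per database
    best_i, best_d = None, float("inf")
    for i, x in enumerate(database):
        if abs(dq_p - piv[i]) >= best_d:
            continue                           # pruned: no exact distance needed
        d = dist(query, x)                     # the expensive comparison
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

# Toy example on the real line; the last point is pruned, never compared:
print(nearest(4.1, [0.0, 2.0, 5.0, 9.0], lambda a, b: abs(a - b), pivot=0.0))
```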
NASA Astrophysics Data System (ADS)
Verkhodanov, O. V.; Verkhodanova, N. V.; Ulakhovich, O. S.; Solovyov, D. I.; Khabibullina, M. L.
2018-01-01
Based on data from the Westerbork Northern Sky Survey, performed at a frequency of 325 MHz in the range of right ascensions 0h ≤ α < 2h and declinations 29° < δ < 78°, and using multi-frequency Planck maps, we selected candidate objects exhibiting the Sunyaev-Zeldovich effect. The list of the most probable candidates includes 381 sources. It is shown that the search for such objects can be accelerated by using a priori information on the negative level of fluctuations, in the direction of the radio sources, of the CMB map with low multipoles removed.
Computational tools and lattice design for the PEP-II B-Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Irwin, J.; Nosochkov, Y.
1997-02-01
Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate the performance of the lattices over the entire tune plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO which integrates and expands upon TRACY and DESPOT. © 1997 American Institute of Physics.
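The speedup of one-turn-map tracking comes from replacing thousands of element-by-element pushes with a single composed map per turn. A toy linear version (a 2x2 matrix with an assumed tune and beta function; the actual PEP-II maps are nonlinear and higher-dimensional):

```python
import numpy as np

mu = 2 * np.pi * 0.28          # assumed fractional tune of 0.28
beta = 10.0                    # assumed beta function at the observation point (m)
M = np.array([[np.cos(mu), beta * np.sin(mu)],
              [-np.sin(mu) / beta, np.cos(mu)]])   # linear one-turn map

z = np.array([1e-3, 0.0])      # initial (x [m], x' [rad])
xs = []
for turn in range(1024):
    z = M @ z                  # one matrix-vector product replaces a full ring pass
    xs.append(z[0])

# FFT of the turn-by-turn position recovers the tune:
peak = np.argmax(np.abs(np.fft.rfft(xs))[1:]) + 1
print("measured tune ~", peak / 1024)   # ~0.28
```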
Jupiter's Auroras Acceleration Processes
2017-09-06
This image, created with data from Juno's Ultraviolet Imaging Spectrometer (UVS), marks the path of Juno's readings of Jupiter's auroras, highlighting the electron measurements that show the discovery of the so-called discrete auroral acceleration processes indicated by the "inverted Vs" in the lower panel (Figure 1). This signature points to powerful magnetic-field-aligned electric potentials that accelerate electrons toward the atmosphere to energies that are far greater than what drive the most intense aurora at Earth. Scientists are looking into why the same processes are not the main factor in Jupiter's most powerful auroras. https://photojournal.jpl.nasa.gov/catalog/PIA21937
MO-FG-CAMPUS-TeP1-03: Pre-Treatment Surface Imaging Based Collision Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiant, D; Maurer, J; Liu, H
2016-06-15
Purpose: Modern radiotherapy increasingly employs large immobilization devices, gantry attachments, and couch rotations for treatments, all of which raise the risk of collisions between the patient and the gantry/couch. Collision detection is often achieved by manually checking each couch position in the treatment room, and sometimes results in extraneous imaging if collisions are detected after image-based setup has begun. In the interest of improving efficiency and avoiding extra imaging, we explore the use of a surface-imaging-based collision detection model. Methods: Surfaces acquired from AlignRT (VisionRT, London, UK) were transferred in wavefront format to a custom Matlab (Mathworks, Natick, MA) software package (CCHECK). Computed tomography (CT) scans acquired at the same time were sent to CCHECK in DICOM format. In CCHECK, binary maps of the surfaces were created and overlaid on the CT images based on the fixed relationship of the AlignRT and CT coordinate systems. Isocenters were added through a graphical user interface (GUI). CCHECK then compares the inputted surfaces to a model of the linear accelerator (linac) to check for collisions at defined gantry and couch positions. Note, CCHECK may be used with or without a CT. Results: The nominal surface image field of view is 650 mm × 900 mm, with variance based on patient position and size. The accuracy of collision detection is primarily based on the linac model and the surface mapping process. The current linac model and mapping process yield detection accuracies on the order of 5 mm, assuming no change in patient posture between surface acquisition and treatment. Conclusions: CCHECK provides a non-ionizing method to check for collisions without the patient in the treatment room. Collision detection accuracy may be improved with more robust linac modeling. Additional gantry attachments (e.g. conical collimators) can be easily added to the model.
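The geometric test at the core of such a tool can be sketched compactly: represent the patient/couch surface as a point cloud, apply the couch rotation, and flag any point that falls outside the clearance radius swept by the rotating gantry. The cylinder model and all dimensions below are assumptions for illustration, not the CCHECK linac model:

```python
import numpy as np

R_CLEAR = 0.40  # m, assumed clearance radius swept by the rotating gantry head

def collides(surface_pts, isocenter, couch_deg):
    """surface_pts: (N, 3) patient/couch surface in room coordinates (m),
    with z vertical and the gantry rotating about the y-axis through isocenter."""
    th = np.radians(couch_deg)
    rot = np.array([[np.cos(th), -np.sin(th), 0.0],
                    [np.sin(th),  np.cos(th), 0.0],
                    [0.0, 0.0, 1.0]])                 # couch kick about vertical axis
    p = (surface_pts - isocenter) @ rot.T
    radial = np.hypot(p[:, 0], p[:, 2])               # distance from gantry axis
    return bool((radial > R_CLEAR).any())             # outside clearance -> collision

pts = np.array([[0.1, 0.5, 0.2], [0.3, 0.2, 0.35]])  # made-up surface points
print(collides(pts, isocenter=np.zeros(3), couch_deg=15.0))
```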
Business logic for geoprocessing of distributed geodata
NASA Astrophysics Data System (ADS)
Kiehle, Christian
2006-12-01
This paper describes the development of a business-logic component for the geoprocessing of distributed geodata. The business logic acts as a mediator between the data and the user, therefore playing a central role in any spatial information system. The component is used in service-oriented architectures to foster the reuse of existing geodata inventories. Based on a geoscientific case study of groundwater vulnerability assessment and mapping, the demands for such architectures are identified with special regard to software engineering tasks. Methods are derived from the field of applied Geosciences (Hydrogeology), Geoinformatics, and Software Engineering. In addition to the development of a business logic component, a forthcoming Open Geospatial Consortium (OGC) specification is introduced: the OGC Web Processing Service (WPS) specification. A sample application is introduced to demonstrate the potential of WPS for future information systems. The sample application Geoservice Groundwater Vulnerability is described in detail to provide insight into the business logic component, and demonstrate how information can be generated out of distributed geodata. This has the potential to significantly accelerate the assessment and mapping of groundwater vulnerability. The presented concept is easily transferable to other geoscientific use cases dealing with distributed data inventories. Potential application fields include web-based geoinformation systems operating on distributed data (e.g. environmental planning systems, cadastral information systems, and others).
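For orientation, invoking a WPS process is a plain HTTP exchange. The sketch below issues a key-value-pair Execute request in the style of the WPS 1.0.0 specification; the endpoint, process identifier, and inputs are hypothetical:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

endpoint = "https://example.org/wps"                      # hypothetical endpoint
params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "GroundwaterVulnerability",             # hypothetical process
    "datainputs": "region=DEMO_AQUIFER;method=DRASTIC",   # hypothetical inputs
}

# The service answers with an XML ExecuteResponse document.
with urlopen(endpoint + "?" + urlencode(params)) as resp:
    print(resp.read()[:200])
```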
Collado, Elena; Venzke Klug, Tâmmila; Martínez-Sánchez, Ascensión; Artés-Hernandez, Francisco; Aguayo, Encarna; Artés, Francisco; Fernández, Juan A; Gómez, Perla A
2017-10-01
Appropriate sanitation is a priority for extending the shelf life and promoting the consumption of immature pea seeds, as processing accelerates quality deterioration and microbial growth. The combined effect of disinfection with acidified sodium chlorite (ASC) or sodium hypochlorite (SH) and packaging under a passive modified atmosphere (MAP) at 1 or 4 °C on quality was analysed. After 14 days, greenness and vitamin C had decreased, especially in the SH-disinfected samples. Total phenols and antioxidant capacity were not affected by disinfection. Proteins levels fell by around 27%, regardless of the sanitizer and storage temperature. Compared with the initial microbial load, samples stored at 1 °C showed an increase of 1 log CFU g -1 in psychrophiles when treated with SH, whereas no increase of note occurred with ASC. In general, microbial counts were always below 3 log CFU g -1 for all the treatments. Immature pea seeds could be stored for 14 days at 1-4 °C under MAP with only minor quality changes. Disinfection with ASC resulted in better sensory quality, higher content of vitamin C and lower psychrophile counts. More research is needed to analyse the effect of these treatments on other quality parameters. © 2017 Society of Chemical Industry. © 2017 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Begg, John; Brackley, Hannah; Irwin, Marion; Grant, Helen; Berryman, Kelvin; Dellow, Grant; Scott, David; Jones, Katie; Barrell, David; Lee, Julie; Townsend, Dougal; Jacka, Mike; Harwood, Nick; McCahon, Ian; Christensen, Steve
2013-04-01
Following the damaging 4 Sept 2010 Mw 7.1 Darfield Earthquake, the 22 Feb 2011 Christchurch Earthquake and subsequent damaging aftershocks, we completed a liquefaction hazard evaluation for c. 2700 km² of the coastal Canterbury region. Its purpose was to distinguish at a regional scale areas of land that, in the event of strong ground shaking, may be susceptible to damaging liquefaction from areas where damaging liquefaction is unlikely. This information will be used by local government for defining liquefaction-related geotechnical investigation requirements for consent applications. Following a review of historic records of liquefaction and existing liquefaction assessment maps, we undertook comprehensive new work that included: a geologic context from existing geologic maps; geomorphic mapping using LiDAR and integrating existing soil map data; compilation of lithological data for the surficial 10 m from an extensive drillhole database; and modelling of depth to unconfined groundwater from existing subsurface and surface water data. Integrating and honouring all these sources of information, we distinguished areas underlain by materials susceptible to liquefaction (liquefaction-prone lithologies present, or likely, in the near-surface, with shallow unconfined groundwater) from areas unlikely to suffer widespread liquefaction damage. Comparison of this work with more detailed liquefaction susceptibility assessment based on closely spaced geotechnical probes in Christchurch City provides a level of confidence in these results. We tested our susceptibility map by assigning a matrix of liquefaction susceptibility rankings to lithologies recorded in drillhole logs and local groundwater depths, then applying peak ground accelerations for four earthquake scenarios from the regional probabilistic seismic hazard model (25 year return = 0.13 g; 100 year return = 0.22 g; 500 year return = 0.38 g; and 2500 year return = 0.6 g). Our mapped boundary between liquefaction-prone areas and areas unlikely to sustain heavy damage proved sound. In addition, we compared mapped liquefaction extents (derived from post-earthquake aerial photographs) from the 4 Sept 2010 Mw 7.1 and 22 Feb 2011 Mw 6.2 earthquakes with our liquefaction susceptibility map. The overall area of liquefaction for these two earthquakes was similar, and statistics show that for the first (large regional) earthquake, c. 93% of mapped liquefaction fell within the liquefaction-prone area, and for the second (local, high peak ground acceleration) earthquake, almost 99% fell within the liquefaction-prone area. We conclude that basic geological and groundwater data, when coupled with LiDAR data, can usefully delineate areas susceptible to liquefaction from those unlikely to suffer damaging liquefaction. We believe that these techniques can be used successfully in many other cities around the world.
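The scenario test described above reduces to a small decision table: a susceptibility rank from near-surface lithology and unconfined groundwater depth, compared against scenario peak ground accelerations. The sketch below is schematic; the rank rules and trigger accelerations are illustrative assumptions, not the study's calibrated matrix, though the four scenario PGAs are the return-period values quoted:

```python
# Illustrative trigger PGA (g) by susceptibility rank (3 = high, 1 = low).
TRIGGER_PGA = {3: 0.13, 2: 0.22, 1: 0.60}

def susceptibility(lithology, gw_depth_m):
    prone = lithology in {"loose sand", "silt", "young alluvium"}
    if prone and gw_depth_m < 3:
        return 3
    if prone and gw_depth_m < 10:
        return 2
    return 1

SCENARIOS = {"25 yr": 0.13, "100 yr": 0.22, "500 yr": 0.38, "2500 yr": 0.60}
rank = susceptibility("loose sand", 5.0)        # rank 2 under these toy rules
for name, pga in SCENARIOS.items():
    flag = "liquefaction expected" if pga >= TRIGGER_PGA[rank] else "unlikely"
    print(f"{name} ({pga:.2f} g): {flag}")
```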
Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings
NASA Astrophysics Data System (ADS)
Hussein Maibed, Zena
2018-05-01
In this paper we introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive and of general extended mappings. The existence of common fixed points for these processes in Hilbert spaces is also studied.
Proposal for an astronaut mass measurement device for the Space Shuttle
NASA Technical Reports Server (NTRS)
Beyer, Neil; Lomme, Jon; Mccollough, Holly; Price, Bradford; Weber, Heidi
1994-01-01
For medical reasons, astronauts in space need to have their mass measured. Currently, this measurement is performed using a mass-spring system. The current system is large, inaccurate, and uncomfortable for the astronauts. NASA is looking for new, different, and preferably better ways to perform this measurement process. After careful analysis, our design team decided on a linear acceleration process. Within the process, four possible concept variants are put forth. Among these four variants, one is suggested over the others: a motor-winch system to linearly accelerate the astronaut. From acceleration and force measurements of the process, combined with Newton's second law, the mass of an astronaut can be calculated.
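The measurement principle reduces to Newton's second law: towing the astronaut with a known force while measuring acceleration gives m = F/a. A trivial numerical illustration (all sensor values invented):

```python
import numpy as np

force = np.array([152.0, 149.5, 151.2, 150.3])   # N, load-cell samples during the pull
accel = np.array([2.01, 1.98, 2.00, 1.99])       # m/s^2, accelerometer samples
mass = (force / accel).mean()                    # m = F / a, averaged over samples
print(f"estimated astronaut mass: {mass:.1f} kg")  # ~75.6 kg
```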
GPU-Accelerated Voxelwise Hepatic Perfusion Quantification
Wang, H; Cao, Y
2012-01-01
Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to assessment of liver function in response to radiation therapy. However, the efficiency of estimating hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation while maintaining the same accuracy as the conventional method. Using CUDA-GPU, the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, non-linear least-squares fitting of the time series of the liver DCE data to the compartmental model is distributed to multiple threads in a block, and the computations of different time points are performed simultaneously and synchronously. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with those by the CPU using the simulated DCE data and the experimental DCE MR images from patients. The computation speed is improved by 30 times using a NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU processor. To obtain liver perfusion maps with 626,400 voxels in a patient's liver, it takes 0.9 min with the GPU-accelerated voxelwise computation, compared to 110 min with the CPU, while both methods yield perfusion parameter differences of less than 10⁻⁶. The method will be useful for generating liver perfusion images in clinical settings. PMID:22892645
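The per-voxel computation being parallelized is a nonlinear least-squares fit of a dual-input single-compartment model. A single-voxel CPU sketch with SciPy (toy input functions and rate constants; the clinical pipeline's model details and units may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

# Model: dC/dt = k1a*Ca + k1p*Cp - k2*C, solved by convolution with exp(-k2*t).
dt = 1.0
t = np.arange(0.0, 120.0, dt)
Ca = np.exp(-((t - 20) / 8.0) ** 2)    # toy arterial input function
Cp = np.exp(-((t - 30) / 12.0) ** 2)   # toy portal-venous input function

def model(t, k1a, k1p, k2):
    kern = np.exp(-k2 * t)
    return dt * (k1a * np.convolve(Ca, kern)[:t.size] +
                 k1p * np.convolve(Cp, kern)[:t.size])

rng = np.random.default_rng(0)
true = (0.02, 0.15, 0.05)
y = model(t, *true) + 0.002 * rng.standard_normal(t.size)  # noisy voxel curve
fit, _ = curve_fit(model, t, y, p0=(0.01, 0.1, 0.1), bounds=(0.0, 1.0))
print(fit)   # close to (0.02, 0.15, 0.05); the GPU runs this fit per voxel
```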
Wald, David J.; Lin, Kuo-wan; Kircher, C.A.; Jaiswal, Kishor; Luco, Nicolas; Turner, L.; Slosky, Daniel
2017-01-01
The ShakeCast system is an openly available, near real-time post-earthquake information management system. ShakeCast is widely used by public and private emergency planners and responders, lifeline utility operators and transportation engineers to automatically receive and process ShakeMap products for situational awareness, inspection priority, or damage assessment of their own infrastructure or building portfolios. The success of ShakeCast to date and its broad, critical user base mandate improved software usability and functionality, including improved engineering-based damage and loss functions. In order to make the software more accessible to novice users, while still utilizing advanced users' technical and engineering background, we have developed a "ShakeCast Workbook", a well-documented, Excel spreadsheet-based user interface that allows users to input notification and inventory data and export the XML files requisite for operating the ShakeCast system. Users are able to select structures based on a minimum set of user-specified facility attributes (building location, size, height, use, construction age, etc.). "Expert" users are able to import user-modified structural response properties into the facility inventory associated with the HAZUS Advanced Engineering Building Modules (AEBM). The goal of the ShakeCast system is to provide simplified real-time potential impact and inspection metrics (i.e., green, yellow, orange and red priority ratings) that allow users to institute customized earthquake response protocols. Previously, fragilities were approximated using individual ShakeMap intensity measures (IMs, specifically PGA and 0.3 and 1 s spectral accelerations) for each facility, but we are now performing capacity-spectrum damage-state calculations using a more robust characterization of spectral demand. We are also developing methods for the direct import of ShakeMap's multi-period spectra in lieu of the assumed three-domain design spectrum (at 0.3 s for constant acceleration; 1 s or 3 s for constant velocity; and constant displacement at very long response periods). As part of ongoing ShakeCast research and development, we will also explore the use of ShakeMap IM uncertainty estimates and evaluate the assumption of employing multiple response spectral damping values rather than the single 5%-damped value currently employed. Developing and incorporating advanced fragility assignments into the ShakeCast Workbook requires related software modifications and database improvements; these enhancements are part of an extensive rewrite of the ShakeCast application.
Faults in parts of north-central and western Houston metropolitan area, Texas
Verbeek, Earl R.; Ratzlaff, Karl W.; Clanton, Uel S.
1979-01-01
Hundreds of residential, commercial, and industrial structures in the Houston metropolitan area have sustained moderate to severe damage owing to their locations on or near active faults. Paved roads have been offset by faults at hundreds of locations, buried pipelines have been distorted by fault movements, and fault-induced gradient changes in drainage lines have raised concern among flood control engineers. Over 150 faults, many of them moving at rates of 0.5 to 2 cm/yr, have been mapped in the Houston area; the number of faults probably far exceeds this figure.

This report includes a map of eight faults, in north-central and western Houston, at a scale useful for land-use planning. Seven of the faults are known to be active and have caused considerable damage to structures built on or near them. If the eighth fault is active, it may be of concern to new developments on the west side of Houston. A ninth feature shown on the map is regarded only as a possible fault, as an origin by faulting has not been firmly established.

Seismic and drill-hole data for some 40 faults, studied in detail by various investigators, have verified connections between scarps at the land surface and growth faults in the shallow subsurface. Some scarps, then, are known to be the surface manifestations of faults that have geologically long histories of movement. The degree to which natural geologic processes contribute to current fault movement, however, is unclear, for some of man's activities may play a role in faulting as well.

Evidence that current rates of fault movement far exceed average prehistoric rates, and that most offset of the land surface in the Houston area has occurred only within the last 50 years, indirectly suggests that fluid withdrawal may be accelerating or reinitiating movement on pre-existing faults. This conclusion, however, is based only on a coincidence in time between increased fault activity and increased rates of withdrawal of water, oil, and gas from subsurface sediments; no cause-and-effect relationship has been demonstrated. An alternative hypothesis is that natural fault movements are characterized by short-term episodicity and that Houston is experiencing the effects of a brief period of accelerated natural fault movement. Available data from monitored faults are insufficient to weigh the relative importance of natural vs. induced fault movements.
NASA Astrophysics Data System (ADS)
Ramirez-Ruiz, J. J.
2016-12-01
Slope instability occurs each year in the mountain region of Colima State, Mexico. It results from the combination of several factors present in this area: precipitation, topographic contrast, the type and mechanical properties of the deposits that constitute the rocks and soils of the region, and erosion following removal of the vegetation cover as urban areas develop and grow. To these geological factors we can add the tectonic activity of western Mexico, where the interaction of the Cocos and North America plates, which forms the Colima Graben region in which our study area is located, generates high seismicity. Here we present a zonation and the derivation of susceptibility maps for slope instability, with rainfall and seismicity as accelerating factors. The northern part of Colima State is covered by deposits of the Volcán de Colima, which has an elevation of 3860 masl. It is the area of greatest annual precipitation, with more than 1200 mm compared with the state average of about 900 mm. Using a GIS and the mapping of more than 30 sites, we carried out a zonation and risk analysis using a methodology developed by CENAPRED. The susceptibility map developed for this area, in combination with erosion factors, allows us to determine an approximation of the risk, within limitations that are discussed in this study.
First muon acceleration using a radio-frequency accelerator
NASA Astrophysics Data System (ADS)
Bae, S.; Choi, H.; Choi, S.; Fukao, Y.; Futatsukawa, K.; Hasegawa, K.; Iijima, T.; Iinuma, H.; Ishida, K.; Kawamura, N.; Kim, B.; Kitamura, R.; Ko, H. S.; Kondo, Y.; Li, S.; Mibe, T.; Miyake, Y.; Morishita, T.; Nakazawa, Y.; Otani, M.; Razuvaev, G. P.; Saito, N.; Shimomura, K.; Sue, Y.; Won, E.; Yamazaki, T.
2018-05-01
Muons have been accelerated by using a radio-frequency accelerator for the first time. Negative muonium atoms (Mu-), which are bound states of a positive muon (μ+) and two electrons, are generated from μ+'s through the electron capture process in an aluminum degrader. The generated Mu-'s are initially accelerated electrostatically and injected into a radio-frequency quadrupole linac (RFQ). In the RFQ, the Mu-'s are accelerated to 89 keV. The accelerated Mu-'s are identified by momentum measurement and time of flight. This compact muon linac opens the door to various muon accelerator applications including particle physics measurements and the construction of a transmission muon microscope.
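As a rough consistency check on the reported energy, a back-of-envelope kinematics calculation is sketched below; the drift length is an assumed value for illustration, not a parameter taken from the experiment.

```python
import math

# Mu- rest energy: one muon plus two electrons, in eV.
mc2 = 105.658e6 + 2 * 0.511e6
T = 89e3                          # kinetic energy after the RFQ, eV

gamma = 1.0 + T / mc2             # relativistic gamma (barely above 1 here)
beta = math.sqrt(1.0 - 1.0 / gamma**2)
v = beta * 2.998e8                # speed, m/s

L = 1.0                           # assumed drift length, m (illustrative)
print(f"beta = {beta:.4f}, TOF over {L} m = {L / v * 1e9:.1f} ns")
# ~0.041 c, i.e. roughly 80 ns per metre -- comfortably resolvable by TOF.
```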
NASA Astrophysics Data System (ADS)
Hull, A. J.; Chaston, C. C.; Fillingim, M. O.; Frey, H. U.; Goldstein, M. L.; Bonnell, J. W.; Mozer, F.
2015-12-01
The auroral acceleration region is an integral link in the chain of events that transpire during substorms, when the currents, plasma, and electric fields undergo significant changes driven by complex dynamical processes deep in the magnetotail. The acceleration processes that occur therein accelerate and heat the plasma that ultimately produces some of the most intense global substorm auroral displays. Though this region has garnered considerable attention, the temporal evolution of field-aligned current systems, associated acceleration processes, and resultant changes in the plasma constituents during key stages of substorm development remain unclear. In this study we present a survey of Cluster traversals within and just above the auroral acceleration region (≤3 Re altitude) during substorms. Particular emphasis is on the spatial morphology and developmental sequence of auroral acceleration current systems, potentials, and plasma constituents, with the aim of identifying controlling factors and assessing the consequences for auroral emission. Exploiting multi-point measurements from Cluster in combination with auroral imaging, we reveal the injection-powered, Alfvenic nature of both the substorm onset and the expansion of auroral particle acceleration. We show evidence indicating that substorm onsets are characterized by the gross intensification of pre-existing large-scale current systems and their filamentation/striation into smaller, dispersive-scale Alfven waves. Such an evolutionary sequence has been suggested by theoretical models and single-spacecraft data, but it had not been demonstrated or characterized in multi-spacecraft observations until now. It is also shown how the Alfvenic variations may dissipate over time to form the large-scale inverted-V structures characteristic of the quasi-static aurora. These findings suggest that, in addition to playing active roles in driving substorm aurora, inverted-V and Alfvenic acceleration processes are causally linked. Key elements of substorm current spatial structure and temporal development, relationships to electric fields/potentials, plasma moment and distribution features, causal linkages to auroral emission features, and other properties will be discussed.
Integration of process diagnostics and three dimensional simulations in thermal spraying
NASA Astrophysics Data System (ADS)
Zhang, Wei
Thermal spraying is a group of processes in which metallic or ceramic materials are deposited in a molten or semi-molten state on a prepared substrate. In the atmospheric plasma spray process, a thermal plasma jet is used to heat and accelerate the injected particles. The process is inherently complex owing to its deviation from equilibrium conditions, its three-dimensional nature, the multitude of interrelated variables involved, and stochastic variability at different stages. This dissertation aims at understanding the in-flight particle state and plasma plume characteristics in the atmospheric plasma spray process through the integration of process diagnostics and three-dimensional simulation. Effects of injection angle and carrier gas flow rate on in-flight particle characteristics are studied experimentally and interpreted through numerical simulation. Perturbation of the plasma jet by particle injection angle, carrier gas, and particle loading is also identified. The maximum particle average temperature and velocity at any given spray distance are systematically quantified. The optimum plasma plume position for particle injection observed in experiments is verified numerically, along with a description of the physical mechanisms. The correlation of spray distance with in-flight particle behavior for various materials is revealed. A new strategy for visualization and representation of particle diagnostic results for thermal spray processes is presented. Specifically, first-order process maps (process-particle interactions) are addressed by converting the temperature-velocity data of particles obtained via diagnostics into non-dimensional group parameters [Melting Index-Reynolds number]. This approach provides an improved description of the thermal and kinetic energy of particles and allows cross-comparison of diagnostic data within a given process for different materials, comparison of a single material across different thermal spray processes, and detailed assessment of melting behavior through analysis of the distributions. An additional group parameter, the Oxidation Index, is applied to track the relative oxidation extent of metallic particles under different operating conditions. The new mapping strategies are also proposed for circumstances where only ensemble particle diagnostics are available. Through the integration of process diagnostics and numerical simulation, key issues concerning the in-flight particle state as well as the controlling physical mechanisms are analyzed, and a scientific strategy for universal description of particle characteristics is developed.
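As an illustration of the group-parameter idea, a minimal sketch follows. The particle Reynolds number is standard; the melting-index expression here is a simplified stand-in (published definitions differ, so treat it as an assumption), and all numerical values are invented plasma-like magnitudes.

```python
def reynolds(rho_g, v_rel, d_p, mu_g):
    """Particle Reynolds number from the gas-particle relative velocity:
    Re = rho_g * |v_g - v_p| * d_p / mu_g."""
    return rho_g * v_rel * d_p / mu_g

def melting_index(T_p, T_melt, t_dwell, t_melt):
    """A simplified melting index: dwell time over a characteristic melting
    time, weighted by how far particle temperature exceeds the melting point.
    (Illustrative form only, not the dissertation's exact definition.)"""
    return (T_p / T_melt) * (t_dwell / t_melt)

# Illustrative values: hot, low-density plasma gas and a 40-micron particle.
Re = reynolds(rho_g=0.06, v_rel=150.0, d_p=40e-6, mu_g=2e-4)
MI = melting_index(T_p=2800.0, T_melt=2320.0, t_dwell=5e-4, t_melt=3e-4)
print(f"Re = {Re:.2f}, MI = {MI:.2f}")
```

Plotting diagnostic ensembles in the (MI, Re) plane rather than raw (T, V) is what allows the cross-process and cross-material comparisons described above.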
Recent Advances in Understanding Particle Acceleration Processes in Solar Flares
NASA Astrophysics Data System (ADS)
Zharkova, V. V.; Arzner, K.; Benz, A. O.; Browning, P.; Dauphin, C.; Emslie, A. G.; Fletcher, L.; Kontar, E. P.; Mann, G.; Onofri, M.; Petrosian, V.; Turkmani, R.; Vilmer, N.; Vlahos, L.
2011-09-01
We review basic theoretical concepts in particle acceleration, with particular emphasis on processes likely to occur in regions of magnetic reconnection. Several new developments are discussed, including detailed studies of reconnection in three-dimensional magnetic field configurations (e.g., current sheets, collapsing traps, separatrix regions) and stochastic acceleration in a turbulent environment. Fluid, test-particle, and particle-in-cell approaches are used and results compared. While these studies show considerable promise in accounting for the various observational manifestations of solar flares, they are limited by a number of factors, mostly relating to available computational power. Not the least of these issues is the need to explicitly incorporate the electrodynamic feedback of the accelerated particles themselves on the environment in which they are accelerated. A brief prognosis for future advancement is offered.
Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations
NASA Astrophysics Data System (ADS)
Hause, Benjamin; Parker, Scott
2012-10-01
We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with its GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor, even though the Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional, general-geometry GEM code.
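For readers unfamiliar with the structure of such codes, the sketch below shows the kind of per-particle push loop that typically dominates runtime and is offloaded via accelerator directives; it is a toy Python illustration, not the GEM code or its Fortran directives, and the model field is an assumption.

```python
import numpy as np

def push(x, v, E_of_x, qm, dt):
    """Leapfrog-style update of particle positions and velocities in a
    prescribed 1-D electric field; in a directive-based Fortran code this
    loop over particles is the natural GPU kernel."""
    v = v + qm * E_of_x(x) * dt      # accelerate in the local field
    x = x + v * dt                   # advance positions
    return x, v

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1_000_000)   # particle positions
v = rng.normal(0.0, 1.0, 1_000_000)    # particle velocities
E = lambda x: np.sin(2 * np.pi * x)    # assumed model field, not GEM's
x, v = push(x, v, E, qm=1.0, dt=1e-3)
```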
Microwave processes in the SPD-ATON stationary plasma thruster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirdyashev, K. P., E-mail: kpk@ms.ire.rssi.ru
2016-09-15
Results of experimental studies of microwave processes accompanying plasma acceleration in the SPD-ATON stationary plasma thruster are presented. Specific features of the generation of microwave oscillations in both the acceleration channel and the plasma flow outgoing from the thruster are analyzed on the basis of local measurements of the spectra of the plasma wave fields. Mechanisms for generation of microwave oscillations are considered with allowance for the inhomogeneity of the electron density and magnetic field behind the edge of the acceleration channel. The effect of microwave oscillations on the electron transport and the formation of the discharge current in the acceleration channel is discussed.
NASA Astrophysics Data System (ADS)
Trachtenberg, I.
How a reliability model might be developed with new data from accelerated stress testing, failure mechanisms, process control monitoring, and test structure evaluations is illustrated. The effect of temperature acceleration on operating life is discussed. Test structures that further accelerate the failure rate are discussed. Corrosion testing is addressed: the uncoated structure is encapsulated in a variety of mold compounds and subjected to pressure-cooker testing.
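The standard Arrhenius model is the usual way temperature acceleration is folded into such operating-life estimates; a minimal sketch, with an assumed activation energy (the value is illustrative, not from this report):

```python
import math

k = 8.617e-5   # Boltzmann constant, eV/K

def accel_factor(Ea, T_use, T_stress):
    """Arrhenius acceleration factor between use and stress temperatures:
    AF = exp[(Ea / k) * (1/T_use - 1/T_stress)], temperatures in kelvin."""
    return math.exp((Ea / k) * (1.0 / T_use - 1.0 / T_stress))

# Assumed Ea = 0.7 eV; 25 C use vs 125 C stress.
AF = accel_factor(Ea=0.7, T_use=298.15, T_stress=398.15)
print(f"~{AF:.0f}x acceleration at 125 C relative to 25 C")
```

One hour at the stress temperature then stands in for roughly AF hours of operating life, which is the basic accounting behind accelerated stress testing.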
Selfsimilar time dependent shock structures
NASA Astrophysics Data System (ADS)
Beck, R.; Drury, L. O.
1985-08-01
Diffusive shock acceleration as an astrophysical mechanism for accelerating charged particles has the advantage of being highly efficient. This means however that the theory is of necessity nonlinear; the reaction of the accelerated particles on the shock structure and the acceleration process must be self-consistently included in any attempt to develop a complete theory of diffusive shock acceleration. Considerable effort has been invested in attempting, at least partially, to do this and it has become clear that in general either the maximum particle energy must be restricted by introducing additional loss processes into the problem or the acceleration must be treated as a time dependent problem (Drury, 1984). It is concluded that stationary modified shock structures can only exist for strong shocks if additional loss processes limit the maximum energy a particle can attain. This is certainly possible and if it occurs the energy loss from the shock will lead to much greater shock compressions. It is however equally possible that no such processes exist and we must then ask what sort of nonstationary shock structure develops. The same argument which excludes stationary structures also rules out periodic solutions and indeed any solution where the width of the shock remains bounded. It follows that the width of the shock must increase secularly with time and it is natural to examine the possibility of selfsimilar time dependent solutions.
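For orientation, the efficiency claim can be anchored to the textbook test-particle result that this nonlinear theory generalizes (a standard relation, not a result of the paper): the downstream particle spectrum is a power law whose index depends only on the shock compression ratio $r$,

$$ f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1}, $$

so a strong unmodified shock with $r = 4$ gives $q = 4$. Efficient acceleration itself modifies $r$, which is precisely why the reaction of the particles must be included self-consistently.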
Lateralized Motor Control Processes Determine Asymmetry of Interlimb Transfer
Sainburg, Robert L.; Schaefer, Sydney Y.; Yadav, Vivek
2016-01-01
This experiment tested the hypothesis that interlimb transfer of motor performance depends on recruitment of motor control processes that are specialized to the hemisphere contralateral to the arm that is initially trained. Right-handed participants performed a single-joint task, in which reaches were targeted to 4 different distances. While speed and accuracy were similar for both hands, the underlying control mechanisms used to vary movement speed with distance were systematically different between the arms: the amplitude of the initial acceleration profile scaled more strongly with movement speed for the right-dominant arm, while the duration of the initial acceleration profile scaled more strongly with movement speed for the left-non-dominant arm. These two processes were previously shown to be differentially disrupted by left and right hemisphere damage, respectively. We now hypothesize that task practice with the right arm might reinforce left-hemisphere mechanisms that vary acceleration amplitude with distance, while practice with the left arm might reinforce right-hemisphere mechanisms that vary acceleration duration with distance. We thus predict that following right arm practice, the left arm should show increased contributions of acceleration amplitude to peak velocities, and following left arm practice, the right arm should show increased contributions of acceleration duration to peak velocities. Our findings support these predictions, indicating that asymmetry in interlimb transfer of motor performance, at least in the task used here, depends on recruitment of lateralized motor control processes. PMID:27491479
Li, Xuehui; Wei, Yanling; Acharya, Ananta; Jiang, Qingzhen; Kang, Junmei; Brummer, E. Charles
2014-01-01
A genetic linkage map is a valuable tool for quantitative trait locus mapping, map-based gene cloning, comparative mapping, and whole-genome assembly. Alfalfa, one of the most important forage crops in the world, is autotetraploid, allogamous, and highly heterozygous, characteristics that have impeded the construction of a high-density linkage map using traditional genetic marker systems. Using genotyping-by-sequencing (GBS), we constructed low-cost, reasonably high-density linkage maps for both maternal and paternal parental genomes of an autotetraploid alfalfa F1 population. The resulting maps contain 3591 single-nucleotide polymorphism markers on 64 linkage groups across both parents, with an average density of one marker per 1.5 and 1.0 cM for the maternal and paternal haplotype maps, respectively. Chromosome assignments were made based on homology of markers to the M. truncatula genome. Four linkage groups representing the four haplotypes of each alfalfa chromosome were assigned to each of the eight Medicago chromosomes in both the maternal and paternal parents. The alfalfa linkage groups were highly syntenous with M. truncatula, and clearly identified the known translocation between Chromosomes 4 and 8. In addition, a small inversion on Chromosome 1 was identified between M. truncatula and M. sativa. GBS enabled us to develop a saturated linkage map for alfalfa that greatly improved genome coverage relative to previous maps and that will facilitate investigation of genome structure. GBS could be used in breeding populations to accelerate molecular breeding in alfalfa. PMID:25147192
2013-01-01
Background: As for other major crops, achieving a complete wheat genome sequence is essential for the application of genomics to breeding new and improved varieties. To overcome the complexities of the large, highly repetitive and hexaploid wheat genome, the International Wheat Genome Sequencing Consortium established a chromosome-based strategy that was validated by the construction of the physical map of chromosome 3B. Here, we present improved strategies for the construction of highly integrated and ordered wheat physical maps, using chromosome 1BL as a template, and illustrate their potential for evolutionary studies and map-based cloning.
Results: Using a combination of novel high-throughput marker assays and an assembly program, we developed a high-quality physical map representing 93% of wheat chromosome 1BL, anchored and ordered with 5,489 markers including 1,161 genes. Analysis of the gene space organization and evolution revealed that gene distribution and conservation along the chromosome result from the superimposition of the ancestral grass and recent wheat evolutionary patterns, leading to a peak of synteny in the central part of the chromosome arm and an increased density of non-collinear genes towards the telomere. With a density of about 11 markers per Mb, the 1BL physical map provides 916 markers, including 193 genes, for fine mapping the 40 QTLs mapped on this chromosome.
Conclusions: We demonstrate that high marker density physical maps can be developed in complex genomes such as wheat to accelerate map-based cloning, gain new insights into genome evolution, and provide a foundation for reference sequencing. PMID:23800011
Updated Colombian Seismic Hazard Map
NASA Astrophysics Data System (ADS)
Eraso, J.; Arcila, M.; Romero, J.; Dimate, C.; Bermúdez, M. L.; Alvarado, C.
2013-05-01
The Colombian seismic hazard map used by the National Building Code (NSR-98), in effect until 2009, was developed in 1996. Since then, the National Seismological Network of Colombia has improved in both coverage and technology, providing fifteen years of additional seismic records. These improvements have allowed a better understanding of the regional geology and tectonics, which, together with seismic activity with destructive effects in Colombia, has motivated the interest in and need for a new seismic hazard assessment in the country. Taking advantage of new instrumental information sources, such as the new broadband stations of the National Seismological Network, new historical seismicity data, the availability of standardized global databases and, in general, advances in models and techniques, a new Colombian seismic hazard map was developed. A PSHA model was applied because it incorporates the effects of all seismic sources that may affect a particular site, addressing the uncertainties in the parameters and assumptions defined in this kind of study. First, the seismic source geometries and a complete, homogeneous seismic catalog were defined, and the occurrence-rate parameters of each seismic source were calculated, establishing a national seismotectonic model. Several attenuation-distance relationships were selected depending on the type of seismicity considered. The seismic hazard was estimated using the CRISIS2007 software created by the Engineering Institute of the Universidad Nacional Autónoma de México (UNAM, National Autonomous University of Mexico). A uniform grid with 0.1° spacing was used to calculate the peak ground acceleration (PGA) and response spectral values at 0.1, 0.2, 0.3, 0.5, 0.75, 1, 1.5, 2, 2.5 and 3.0 seconds with return periods of 75, 225, 475, 975 and 2475 years. For each site, a uniform hazard spectrum and exceedance rate curves were calculated. With these results, it is possible to determine the environments and scenarios in which the seismic hazard is a function of distance and magnitude, and also the principal seismic sources that contribute to the hazard at each site (disaggregation). The project was conducted by the Servicio Geológico Colombiano (Colombian Geological Survey) and the Universidad Nacional de Colombia (National University of Colombia), with the collaboration of national and foreign experts and the National System of Prevention and Attention of Disasters (SNPAD). It is important to point out that this new seismic hazard map was used in the updated national building code (NSR-10). A new process is ongoing to improve and present the seismic hazard map in terms of intensity. This requires new knowledge of site effects at both local and regional scales, as well as checking existing acceleration-to-intensity relationships and developing new ones, in order to obtain results that are more understandable and useful for a wider range of users, not only in the engineering field but also for risk assessment and management institutions, researchers, and the general community.
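To illustrate the PSHA machinery described above, here is a minimal single-source sketch: a Gutenberg-Richter magnitude distribution, a fixed source-to-site distance, and a lognormal ground-motion model are combined into an annual exceedance-rate curve. Every number, including the hypothetical GMPE, is invented for illustration and bears no relation to the Colombian model or CRISIS2007.

```python
import numpy as np
from scipy.stats import norm

nu = 0.2                           # assumed annual rate of M >= 5 events
b = 1.0                            # assumed Gutenberg-Richter b-value
m = np.arange(5.0, 8.0, 0.1)       # magnitude bins
pm = 10.0 ** (-b * m)
pm /= pm.sum()                     # truncated G-R probability mass

r = 50.0                           # assumed source-to-site distance, km
# Hypothetical GMPE: ln PGA(g) = -3.5 + 0.8*M - 1.1*ln(r), sigma = 0.6
mu = -3.5 + 0.8 * m - 1.1 * np.log(r)
sigma = 0.6

pga = np.logspace(-2, 0, 50)       # 0.01 g to 1 g
# Annual exceedance rate: sum over magnitudes of rate * P(PGA > a | m, r).
lam = np.array([(nu * pm * (1 - norm.cdf((np.log(a) - mu) / sigma))).sum()
                for a in pga])

# PGA with a 475-year return period (10% exceedance probability in 50 yr).
pga475 = np.interp(1.0 / 475, lam[::-1], pga[::-1])
print(f"PGA(475 yr) ~ {pga475:.3f} g")
```

A real assessment repeats this integration over many sources and grid sites, which is how the uniform hazard spectra and disaggregation results mentioned above are obtained.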
Fast Mapping Across Time: Memory Processes Support Children's Retention of Learned Words.
Vlach, Haley A; Sandhofer, Catherine M
2012-01-01
Children's remarkable ability to map linguistic labels to referents in the world is commonly called fast mapping. The current study examined children's (N = 216) and adults' (N = 54) retention of fast-mapped words over time (immediately, after a 1-week delay, and after a 1-month delay). The fast mapping literature often characterizes children's retention of words as consistently high across timescales. However, the current study demonstrates that learners forget word mappings at a rapid rate. Moreover, these patterns of forgetting parallel forgetting functions of domain-general memory processes. Memory processes are critical to children's word learning and the role of one such process, forgetting, is discussed in detail - forgetting supports extended mapping by promoting the memory and generalization of words and categories.
Adventures in the microlensing cloud: Large datasets, eResearch tools, and GPUs
NASA Astrophysics Data System (ADS)
Vernardos, G.; Fluke, C. J.
2014-10-01
As astronomy enters the petascale data era, astronomers are faced with new challenges relating to storage, access and management of data. A shift from the traditional approach of combining data and analysis at the desktop to the use of remote services, pushing the computation to the data, is now underway. In the field of cosmological gravitational microlensing, future synoptic all-sky surveys are expected to bring the number of multiply imaged quasars from the few tens that are currently known to a few thousands. This inflow of observational data, together with computationally demanding theoretical modeling via the production of microlensing magnification maps, requires a new approach. We present our technical solutions to supporting the GPU-Enabled, High Resolution cosmological MicroLensing parameter survey (GERLUMPH). This extensive dataset for cosmological microlensing modeling comprises over 70,000 individual magnification maps and ~10^6 related results. We describe our approaches to hosting, organizing, and serving ~30 TB of data and metadata products. We present a set of online analysis tools developed with PHP, JavaScript and WebGL to support access and analysis of GERLUMPH data in a Web browser. We discuss our use of graphics processing units (GPUs) to accelerate data production, and we release the core of the GPU-D direct inverse ray-shooting code (Thompson et al., 2010, 2014) used to generate the magnification maps. All of the GERLUMPH data and tools are available online from http://gerlumph.swin.edu.au. This project made use of gSTAR, the GPU Supercomputer for Theoretical Astrophysical Research.
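For readers new to the technique, a minimal inverse ray-shooting sketch follows: rays are deflected by point-mass lenses and binned on the source plane, where counts per pixel approximate the magnification. It is a toy illustration with assumed parameters (equal unit masses, no external shear or smooth matter), not the released GPU-D code.

```python
import numpy as np

rng = np.random.default_rng(42)
n_lens, n_ray, half = 100, 1_000_000, 10.0      # assumed toy parameters
lenses = rng.uniform(-half, half, (n_lens, 2))  # lens positions (Einstein radii)

x = rng.uniform(-half, half, (n_ray, 2))        # image-plane ray positions
defl = np.zeros_like(x)
for xl in lenses:                               # accumulate point-mass deflections
    d = x - xl
    defl += d / (d**2).sum(axis=1, keepdims=True)
y = x - defl                                    # lens equation, normalized units

# Bin source-plane positions; the inner region avoids edge effects.
counts, _, _ = np.histogram2d(y[:, 0], y[:, 1], bins=200,
                              range=[[-half / 2, half / 2]] * 2)
mag_map = counts / counts.mean()                # normalized magnification map
```

The per-ray deflection sums are embarrassingly parallel, which is why this computation maps so naturally onto GPUs.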
NASA Astrophysics Data System (ADS)
Watkins, Hannah; Bond, Clare; Butler, Rob
2016-04-01
Geological mapping techniques have advanced significantly in recent years, from paper fieldslips to Toughbook, smartphone and tablet mapping; but how do the methods used to create a geological map affect the thought processes that result in the final map interpretation? Geological maps have many key roles in the geosciences, including understanding geological processes and geometries in 3D, interpreting geological histories, and understanding stratigraphic relationships in 2D and 3D. Here we consider the impact of the methods used to create a map on the thought processes that result in the final interpretation. Traditional geological mapping is undertaken using paper fieldslips, pencils and compass clinometers, and the map interpretation evolves as data are collected. This interpretive process is often supported by recording observations, ideas and alternative geological models in a field notebook, explored with the use of sketches and evolutionary diagrams. In combination, the field map and notebook can be used to challenge the map interpretation and consider its uncertainties. These uncertainties, and the balance of data to interpretation, are often lost in the creation of published 'fair copy' geological maps. The advent of Toughbooks, smartphones and tablets has changed the process of map creation. Digital data collection, particularly through the use of built-in gyrometers in phones and tablets, has turned smartphones into geological mapping tools with which large amounts of geological data can be collected quickly. With GPS functionality, these data are also geospatially located, assuming good GPS connectivity, and can be linked to georeferenced in-field photography. In contrast, line drawing, for example for lithological boundary interpretation and sketching, has yet to find the digital fluency achieved with pencil on a notebook page or map; free-form integrated sketching and notebook functionality in geological mapping software packages is in its infancy. The result is a tendency for digital geological mapping to focus on the ease of data collection rather than on the thoughts and careful observations that come from notebook sketching and interpreting boundaries on a map in the field. The final digital geological map can be assessed for when and where data were recorded, but the thought processes of the mapper are less easily assessed, and the use of observations and sketching to generate ideas and interpretations may be inhibited by reliance on digital mapping methods. All mapping methods have their own distinct advantages and disadvantages, and the more recent technologies bring both hardware and software issues. We present field examples of conventional fieldslip mapping and compare them with more advanced technologies to highlight the main advantages and disadvantages of each method, and we discuss where geological mapping may be going in the future.
A new image enhancement algorithm with applications to forestry stand mapping
NASA Technical Reports Server (NTRS)
Kan, E. P. F. (Principal Investigator); Lo, J. K.
1975-01-01
The author has identified the following significant results. The new algorithm produced cleaner classification maps in which holes of small, predesignated sizes were eliminated while significant boundary information was preserved. These cleaner post-processed maps better resemble true-to-life timber stand maps and are thus more usable products than the maps prior to post-processing. Compared to an accepted neighbor-checking post-processing technique, the new algorithm is more appropriate for timber stand mapping.
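A plausible modern rendition of the described cleanup, removing small regions while preserving class boundaries, might look like the sketch below; it is a simplified illustration in the spirit of the algorithm, not its exact implementation, and assumes non-negative integer class labels.

```python
import numpy as np
from scipy import ndimage

def clean_class_map(classmap, min_size):
    """Reassign connected regions smaller than min_size pixels to the most
    common neighboring class (simplified stand-map cleanup)."""
    out = classmap.copy()
    for cls in np.unique(classmap):
        labels, n = ndimage.label(out == cls)
        for i in range(1, n + 1):
            region = labels == i
            if region.sum() < min_size:
                # Dilate to find the bordering pixels, then adopt their
                # most common class value.
                ring = ndimage.binary_dilation(region) & ~region
                vals = out[ring].astype(int)
                if vals.size:
                    out[region] = np.bincount(vals).argmax()
    return out

# Example: a 6-pixel "hole" of class 1 inside class 0 gets absorbed.
toy = np.zeros((50, 50), dtype=int)
toy[20:23, 20:22] = 1
cleaned = clean_class_map(toy, min_size=10)
```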
End-to-end workflow for finite element analysis of tumor treating fields in glioblastomas
NASA Astrophysics Data System (ADS)
Timmons, Joshua J.; Lok, Edwin; San, Pyay; Bui, Kevin; Wong, Eric T.
2017-11-01
Tumor Treating Fields (TTFields) therapy is an approved modality of treatment for glioblastoma. Patient anatomy-based finite element analysis (FEA) has the potential to reveal not only how these fields affect tumor control but also how to improve efficacy. While automated segmentation tools speed up the generation of FEA models, multi-step manual corrections are required, including removal of disconnected voxels, incorporation of unsegmented structures, and the addition of 36 electrodes plus gel layers matching the TTFields transducers. Existing approaches are also not scalable for high-throughput analysis of large patient volumes. A semi-automated workflow was therefore developed to prepare FEA models for TTFields mapping in the human brain. Magnetic resonance imaging (MRI) pre-processing, segmentation, electrode and gel placement, and post-processing were all automated. The material properties of each tissue were applied to the corresponding mask in silico using COMSOL Multiphysics (COMSOL, Burlington, MA, USA). The fidelity of the segmentations with and without post-processing was compared against the full semi-automated segmentation workflow using Dice coefficient analysis. The average relative differences of the electric fields generated by COMSOL were calculated, in addition to observed differences in electric field-volume histograms. Furthermore, the MPHTXT and NASTRAN mesh file formats were compared using differences in the electric field-volume histogram. The Dice coefficient was lower for auto-segmentation without post-processing than with it, indicating convergence on the manually corrected model. A marginal but nonzero relative difference between electric field maps from models with and without manual correction was identified, and a clear advantage of the NASTRAN mesh file format was found. The software and workflow outlined in this article may be used to accelerate the investigation of TTFields in glioblastoma patients by facilitating the creation of FEA models derived from patient MRI datasets.
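The Dice coefficient used to compare segmentations is straightforward to compute; a minimal sketch for binary masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Example: two overlapping toy masks.
m1 = np.zeros((10, 10), dtype=bool); m1[2:6, 2:6] = True
m2 = np.zeros((10, 10), dtype=bool); m2[3:7, 3:7] = True
print(f"Dice = {dice(m1, m2):.2f}")
```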
ChalkBoard: Mapping Functions to Polygons
NASA Astrophysics Data System (ADS)
Matlage, Kevin; Gill, Andy
ChalkBoard is a domain specific language for describing images. The ChalkBoard language is uncompromisingly functional and encourages the use of modern functional idioms. ChalkBoard uses off-the-shelf graphics cards to speed up rendering of functional descriptions. In this paper, we describe the design of the core ChalkBoard language, and the architecture of our static image generation accelerator.
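The core functional idiom, treating an image as a function from coordinates to values and shapes as composable functions, can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not the ChalkBoard API (ChalkBoard itself is a functional DSL embedded in a functional language).

```python
import numpy as np

def circle(cx, cy, r):
    """A region as a predicate: (x, y) -> inside?"""
    return lambda x, y: (x - cx)**2 + (y - cy)**2 <= r**2

def union(f, g):
    """Compose two regions pointwise."""
    return lambda x, y: f(x, y) | g(x, y)

def render(img, n=256, extent=1.0):
    """Rasterize a functional description by sampling it on a grid;
    a GPU-backed implementation would evaluate this per-fragment."""
    ax = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(ax, ax)
    return img(X, Y)

picture = union(circle(-0.3, 0.0, 0.4), circle(0.3, 0.0, 0.4))
raster = render(picture)   # boolean array, ready to map to colors
```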
Joseph E. Jakes; Christopher G. Hunt; Daniel J. Yelle; Linda Lorenz; Kolby Hirth; Sophie-Charlotte Gleber; Stefan Vogt; Warren Grigsby; Charles R. Frihart
2015-01-01
Understanding and controlling molecular-scale interactions between adhesives and wood polymers are critical to accelerate the development of improved adhesives for advanced wood-based materials. The submicrometer resolution of synchrotron-based X-ray fluorescence microscopy (XFM) was found capable of mapping and quantifying infiltration of Br-labeled phenol−...
USDA-ARS?s Scientific Manuscript database
Date palm is one of the few crop species that thrive in arid environments and are the most significant fruit crop in the Middle East and North Africa, but lacks genomic resources that can accelerate breeding efforts. Here, we present the first comprehensive catalogue of ~12 million common single nuc...
Vacuum Brazing of Accelerator Components
NASA Astrophysics Data System (ADS)
Singh, Rajvir; Pant, K. K.; Lal, Shankar; Yadav, D. P.; Garg, S. R.; Raghuvanshi, V. K.; Mundra, G.
2012-11-01
Commonly used materials for accelerator components are those which are vacuum compatible and thermally conductive. Stainless steel, aluminum and copper are common among them. Stainless steel is a poor heat conductor and is not commonly used where good thermal conductivity is required. Aluminum, copper and their alloys meet the above requirements and are frequently used for this purpose. Fabricating accelerator components from aluminum and its alloys by welding is now common practice. Copper and its various grades are mandatory for the RF devices required in accelerators. Beam line and front-end components of accelerators are fabricated from stainless steel and OFHC copper. Welding copper components is very difficult and in most cases impossible; in such cases, fabrication and joining are possible by brazing, especially under vacuum or an inert-gas atmosphere. Several accelerator components have been vacuum brazed for the Indus projects using the vacuum brazing facility at the Raja Ramanna Centre for Advanced Technology (RRCAT), Indore. This paper presents details of the development of these high-value, strategic components and assemblies, including the basics of vacuum brazing, the vacuum brazing facility, joint design, fixturing of the jobs, selection of filler alloys, optimization of brazing parameters to obtain high-quality brazed joints, and brief descriptions of vacuum-brazed accelerator components.