Wire-chamber radiation detector with discharge control
Perez-Mendez, V.; Mulera, T.A.
1982-03-29
A wire chamber radiation detector has spaced apart parallel electrodes and grids defining an ignition region in which charged particles or other ionizing radiations initiate brief localized avalanche discharges and defining an adjacent memory region in which sustained glow discharges are initiated by the primary discharges. Conductors of the grids at each side of the memory section extend in orthogonal directions enabling readout of the X-Y coordinates of locations at which charged particles were detected by sequentially transmitting pulses to the conductors of one grid while detecting transmissions of the pulses to the orthogonal conductors of the other grid through glow discharges. One of the grids bounding the memory region is defined by an array of conductive elements each of which is connected to the associated readout conductor through a separate resistance. The wire chamber avoids ambiguities and imprecisions in the readout of coordinates when large numbers of simultaneous or near simultaneous charged particles have been detected. Down time between detection periods and the generation of radio frequency noise are also reduced.
Wire chamber radiation detector with discharge control
Perez-Mendez, Victor; Mulera, Terrence A.
1984-01-01
A wire chamber radiation detector (11) has spaced apart parallel electrodes (16) and grids (17, 18, 19) defining an ignition region (21) in which charged particles (12) or other ionizing radiations initiate brief localized avalanche discharges (93) and defining an adjacent memory region (22) in which sustained glow discharges (94) are initiated by the primary discharges (93). Conductors (29, 32) of the grids (18, 19) at each side of the memory section (22) extend in orthogonal directions enabling readout of the X-Y coordinates of locations at which charged particles (12) were detected by sequentially transmitting pulses to the conductors (29) of one grid (18) while detecting transmissions of the pulses to the orthogonal conductors (36) of the other grid (19) through glow discharges (94). One of the grids (19) bounding the memory region (22) is defined by an array of conductive elements (32) each of which is connected to the associated readout conductor (36) through a separate resistance (37). The wire chamber (11) avoids ambiguities and imprecisions in the readout of coordinates when large numbers of simultaneous or near simultaneous charged particles (12) have been detected. Down time between detection periods and the generation of radio frequency noise are also reduced.
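The scan-based X-Y readout described in the two patent abstracts above can be illustrated with a minimal sketch. This is not the patent's circuitry, just an illustration of why sequential pulsing resolves simultaneous hits: each glow discharge at (x, y) couples a pulse on one x-conductor to one y-conductor, so scanning the x-conductors one at a time recovers every coordinate pair without ambiguity. All names here are hypothetical.

```python
def read_out(glow_discharges, n_x, n_y):
    """Return the (x, y) coordinates of all active glow discharges.

    glow_discharges: set of (x, y) pairs where a sustained glow exists.
    Pulsing x-conductors sequentially means two simultaneous hits such as
    (2, 5) and (4, 7) can never be confused with (2, 7) and (4, 5).
    """
    hits = []
    for x in range(n_x):  # transmit a pulse on one x-conductor at a time
        # y-conductors that pass the pulse through a glow discharge
        conducting = {y for (gx, y) in glow_discharges if gx == x}
        for y in sorted(conducting):
            hits.append((x, y))
    return hits

print(read_out({(2, 5), (2, 7), (4, 5)}, n_x=8, n_y=8))
```

With a single parallel pulse instead of a scan, the three hits above would light x-conductors {2, 4} and y-conductors {5, 7}, which is also consistent with the spurious hit (4, 7); the sequential scan removes that ambiguity.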
NASA Astrophysics Data System (ADS)
Nakhostin, M.; Baba, M.
2014-06-01
Parallel-plate avalanche counters have long been recognized as timing detectors for heavily ionizing particles. However, these detectors suffer from a poor pulse-height resolution which limits their capability to discriminate between different ionizing particles. In this paper, a new approach for discriminating between charged particles of different specific energy-loss with avalanche counters is demonstrated. We show that the effect of the self-induced space-charge in parallel-plate avalanche counters leads to a strong correlation between the shape of output current pulses and the amount of primary ionization created by the incident charged particles. The correlation is then exploited for the discrimination of charged particles with different energy-losses in the detector. The experimental results obtained with α-particles from an ²⁴¹Am α-source demonstrate a discrimination capability far beyond that achievable with the standard pulse-height discrimination method.
NASA Astrophysics Data System (ADS)
Sheridan, M. F.; Stinton, A. J.; Patra, A.; Pitman, E. B.; Bauer, A.; Nichita, C. C.
2005-01-01
The Titan2D geophysical mass-flow model is evaluated by comparing its simulation results and those obtained from another flow model, FLOW3D, with published data on the 1963 Little Tahoma Peak avalanches on Mount Rainier, Washington. The avalanches, totaling approximately 10×10⁶ m³ of broken lava blocks and other debris, traveled 6.8 km horizontally and fell 1.8 km vertically (H/L = 0.246). Velocities calculated from runup range from 24 to 42 m/s and may have been as high as 130 m/s while the avalanches passed over Emmons Glacier. Titan2D is a code for an incompressible Coulomb continuum; it is a depth-averaged, 'shallow-water', granular-flow model. The conservation equations for mass and momentum are solved with a Coulomb-type friction term at the basal interface. The governing equations are solved on multiple processors using a parallel, adaptive mesh, Godunov scheme. Adaptive gridding dynamically concentrates computing power in regions of special interest; mesh refinement and coarsening key on the perimeter of the moving avalanche. The model flow initiates as a pile defined as an ellipsoid by a height (z) and an elliptical base defined by radii in the x and y planes. Flow parameters are the internal friction angle and bed friction angle. Results from the model are similar in terms of velocity history, lateral spreading, location of runup areas, and final distribution of the Little Tahoma Peak deposit. The avalanches passed over the Emmons Glacier along their upper flow paths, but lower in the valley they traversed stream gravels and glacial outwash deposits. This presents difficulty in assigning an appropriate bed friction angle for the entire deposit. Incorporation of variable bed friction angles into the model using GIS will help to resolve this issue.
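The runup-derived velocities quoted above can be reproduced with the standard energy-conservation estimate v = sqrt(2·g·h), where h is the runup height. This is a generic back-of-the-envelope formula, not necessarily the exact method the authors used; the runup heights below are illustrative values chosen to bracket the quoted 24-42 m/s range.

```python
import math

def velocity_from_runup(runup_height_m, g=9.81):
    """Energy-conservation estimate of flow velocity from runup height:
    kinetic energy (v^2 / 2) converts to potential energy (g * h)."""
    return math.sqrt(2.0 * g * runup_height_m)

# Runup heights of roughly 30-90 m reproduce the 24-42 m/s range
for h in (30.0, 90.0):
    print(f"h = {h:5.1f} m  ->  v = {velocity_from_runup(h):.1f} m/s")
```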
Parallel adaptive discontinuous Galerkin approximation for thin layer avalanche modeling
NASA Astrophysics Data System (ADS)
Patra, A. K.; Nichita, C. C.; Bauer, A. C.; Pitman, E. B.; Bursik, M.; Sheridan, M. F.
2006-08-01
This paper describes the development of highly accurate adaptive discontinuous Galerkin schemes for the solution of the equations arising from a thin layer type model of debris flows. Such flows have wide applicability in the analysis of avalanches induced by many natural calamities, e.g. volcanoes, earthquakes, etc. These schemes are coupled with special parallel solution methodologies to produce a simulation tool capable of very high-order numerical accuracy. The methodology successfully replicates cold rock avalanches at Mount Rainier, Washington and hot volcanic particulate flows at Colima Volcano, Mexico.
Age of Palos Verdes submarine debris avalanche, southern California
Normark, W.R.; McGann, M.; Sliter, R.
2004-01-01
The Palos Verdes debris avalanche is the largest, by volume, late Quaternary mass-wasted deposit recognized from the inner California Borderland basins. Early workers speculated that the sediment failure giving rise to the deposit is young, taking place well after sea level reached its present position. A newly acquired, closely-spaced grid of high-resolution, deep-tow boomer profiles of the debris avalanche shows that the Palos Verdes debris avalanche fills a turbidite leveed channel that extends seaward from San Pedro Sea Valley, with the bulk of the avalanche deposit appearing to result from a single failure on the adjacent slope. Radiocarbon dates from piston-cored sediment samples acquired near the distal edge of the avalanche deposit indicate that the main failure took place about 7500 yr BP. © 2003 Elsevier B.V. All rights reserved.
Spatial Variability of Snowpack Properties On Small Slopes
NASA Astrophysics Data System (ADS)
Pielmeier, C.; Kronholm, K.; Schneebeli, M.; Schweizer, J.
The spatial variability of alpine snowpacks is created by a variety of parameters like deposition, wind erosion, sublimation, melting, temperature, radiation and metamorphism of the snow. Spatial variability is thought to strongly control the avalanche initiation and failure propagation processes. Local snowpack measurements are currently the basis for avalanche warning services, and there exist contradicting hypotheses about the spatial continuity of avalanche-active snow layers and interfaces. Very little is known so far about the spatial variability of the snowpack; therefore we have developed a systematic and objective method to measure the spatial variability of snowpack properties and layering and its relation to stability. For complete coverage, the analysis of the spatial variability has to entail all scales from mm to km. In this study the small to medium scale spatial variability is investigated, i.e. the range from centimeters to tens of meters. During the winter 2000/2001 we took systematic measurements in lines and grids on a flat snow test field with grid distances from 5 cm to 0.5 m. Furthermore, we measured systematic grids with grid distances between 0.5 m and 2 m in undisturbed flat fields and on small slopes above the tree line at the Choerbschhorn, in the region of Davos, Switzerland. On 13 days we measured the spatial pattern of the snowpack stratigraphy with more than 110 snow micro-penetrometer measurements at slopes and flat fields. Within this measuring grid we placed 1 rutschblock and 12 stuffblock tests to measure the stability of the snowpack. With the large number of measurements we are able to use geostatistical methods to analyse the spatial variability of the snowpack. Typical correlation lengths are calculated from semivariograms. Discerning the systematic trends from random spatial variability is analysed using statistical models. Scale dependencies are shown and recurring scaling patterns are outlined.
The importance of the small and medium scale spatial variability for the larger (kilometer) scale spatial variability, as well as for avalanche formation, is discussed. Finally, an outlook on spatial models for the snowpack variability is given.
Lumped transmission line avalanche pulser
Booth, R.
1995-07-18
A lumped linear avalanche transistor pulse generator utilizes stacked transistors in parallel within a stage and couples a plurality of said stages, in series with increasing zener diode limited voltages per stage and decreasing balanced capacitance load per stage to yield a high voltage, high and constant current, very short pulse. 8 figs.
Lumped transmission line avalanche pulser
Booth, Rex
1995-01-01
A lumped linear avalanche transistor pulse generator utilizes stacked transistors in parallel within a stage and couples a plurality of said stages, in series with increasing zener diode limited voltages per stage and decreasing balanced capacitance load per stage to yield a high voltage, high and constant current, very short pulse.
NASA Astrophysics Data System (ADS)
Chisolm, Rachel E.; McKinney, Daene C.
2018-05-01
This paper studies the lake dynamics for avalanche-triggered glacial lake outburst floods (GLOFs) in the Cordillera Blanca mountain range in Ancash, Peru. As new glacial lakes emerge and existing lakes continue to grow, they pose an increasing threat of GLOFs that can be catastrophic to the communities living downstream. In this work, the dynamics of displacement waves produced from avalanches are studied through three-dimensional hydrodynamic simulations of Lake Palcacocha, Peru, with an emphasis on the sensitivity of the lake model to input parameters and boundary conditions. This type of avalanche-generated wave is an important link in the GLOF process chain because there is a high potential for overtopping and erosion of the lake-damming moraine. The lake model was evaluated for sensitivity to turbulence model and grid resolution, and the uncertainty due to these model parameters is significantly less than that due to avalanche boundary condition characteristics. Wave generation from avalanche impact was simulated using two different boundary condition methods. Representation of an avalanche as water flowing into the lake generally resulted in higher peak flows and overtopping volumes than simulating the avalanche impact as mass-momentum inflow at the lake boundary. Three different scenarios of avalanche size were simulated for the current lake conditions, and all resulted in significant overtopping of the lake-damming moraine. Although the lake model introduces significant uncertainty, the avalanche portion of the GLOF process chain is likely to be the greatest source of uncertainty. To aid in evaluation of hazard mitigation alternatives, two scenarios of lake lowering were investigated. While large avalanches produced significant overtopping waves for all lake-lowering scenarios, simulations suggest that it may be possible to contain waves generated from smaller avalanches if the surface of the lake is lowered.
NASA Astrophysics Data System (ADS)
Hua, Weizhuo; Koji, Fukagata
2017-11-01
A numerical study has been conducted to understand the streamer formation and propagation of a nanosecond pulsed surface dielectric barrier discharge of positive polarity. First, we compared results for different grid configurations to investigate the influence of the x- and y-direction grid spacing on streamer propagation. The streamer propagation is sensitive to the y-direction grid spacing, especially at the dielectric surface. The streamer propagation velocity can reach 0.2 cm/ns when the voltage magnitude is 12 kV. A narrow gap was found between the streamer and the dielectric barrier, where the plasma density is several orders of magnitude smaller than in the streamer region. Analyses of the ion transport in the gap and streamer regions show different ion transport mechanisms in the two regions. In the gap region, the diffusion of electrons toward the dielectric layer depletes the seed electrons at the beginning of the voltage pulse, so that an ionization avalanche does not occur. The streamer region is not significantly affected by the diffusion flux toward the dielectric layer, so an ionization avalanche takes place and leads to a dramatic increase of the plasma density.
Bae, Jun Woo; Kim, Hee Reyoung
2018-01-01
Anti-scattering grids have been used to improve image quality. However, a commonly used linear or parallel grid causes image distortion, while a focusing grid requires precise and expensive fabrication technology. The aim was to investigate and analyze whether a CO2 laser micromachining-based PMMA anti-scattering grid can improve grid performance at lower cost; improved grid performance would in turn improve image quality. The cross-section of CO2 laser-machined PMMA resembles the letter 'V'. Performance was characterized by the contrast improvement factor (CIF) and the Bucky factor. Four grid types were tested: thin parallel, thick parallel, 'V'-type, and 'inverse V'-type. For a Bucky factor of 2.1, both the 'V' and inverse 'V' grids had a CIF of 1.53, while the thin and thick parallel types had values of 1.43 and 1.65, respectively. The 'V'-shaped grid manufactured by CO2 laser micromachining showed a higher CIF than the parallel grid with the same shielding-material channel width. The 'V'-shaped grid could therefore replace the conventional parallel grid where a high-aspect-ratio grid is hard to fabricate.
A VO-Driven Astronomical Data Grid in China
NASA Astrophysics Data System (ADS)
Cui, C.; He, B.; Yang, Y.; Zhao, Y.
2010-12-01
With the implementation of many ambitious observation projects, including LAMOST, FAST, and the Antarctic observatory at Dome A, observational astronomy in China is stepping into a brand new era with an emerging data avalanche. In the era of e-Science, both these cutting-edge projects and traditional astronomy research need much more powerful data management, sharing and interoperability. Based on the data-grid concept and taking advantage of IVOA interoperability technologies, China-VO is developing a VO-driven astronomical data grid environment to enable multi-wavelength science and large database science. In this paper, the latest progress and data flow of LAMOST, the architecture of the data grid, and its support for the VO are discussed.
Assessing the importance of terrain parameters on glide avalanche release
NASA Astrophysics Data System (ADS)
Peitzsch, E.; Hendrikx, J.; Fagre, D. B.
2013-12-01
Glide snow avalanches are dangerous and difficult to predict. Despite recent research there is still a lack of understanding regarding the controls of glide avalanche release. Glide avalanches often occur in similar terrain or the same locations annually, and observations suggest that topography may be critical. Thus, to gain an understanding of the terrain component of these types of avalanches we examined terrain parameters associated with glide avalanche release as well as areas of consistent glide crack formation but no subsequent avalanches. Glide avalanche occurrences visible from the Going-to-the-Sun Road corridor in Glacier National Park, Montana from 2003-2013 were investigated using an avalanche database derived from daily observations each year from April 1 to June 15. This yielded 192 glide avalanches in 53 distinct avalanche paths. Each avalanche occurrence was digitized in a GIS using satellite, oblique, and aerial imagery as reference. Topographical parameters such as area, slope, aspect, elevation, and curvature were then derived for the entire dataset utilizing GIS tools and a 10 m DEM. Land surface substrate and surface geology were derived from National Park Service Inventory and Monitoring maps and U.S. Geological Survey surface geology maps, respectively. Surface roughness and glide factor were calculated using a four-level classification index. Then, each avalanche occurrence was aggregated to general avalanche release zones and the frequencies were compared. For this study, glide avalanches released at elevations ranging from 1300 to 2700 m with a mean aspect of 98 degrees (east) and a mean slope angle of 38 degrees. The mean profile curvature for all glide avalanches was 0.15 and the mean plan curvature was -0.01, suggesting a fairly linear surface (i.e. neither convex nor concave). The glide avalanches occurred mostly on bedrock made up of dolomite and limestone slabs and talus deposits, with very few occurring in alpine meadows.
However, not all glide avalanches failed as cohesive slabs on this bedrock surface. Consequently, surface roughness proved to be a useful descriptive variable to discriminate between slopes that avalanched and those that did not. Annual 'repeat offender' glide avalanche paths were characterized by smooth outcropping rock plates with stratification planes parallel to the slope. Combined with aspect, these repeat offenders were also members of the highest glide category. Using this understanding of the role of topographic parameters on glide avalanche activity, a spatial terrain-based model was developed to identify other areas with high glide avalanche potential outside of our immediate observation area.
Assessing the importance of terrain parameters on glide avalanche release
Peitzsch, Erich H.; Hendrikx, Jordy; Fagre, Daniel B.
2014-01-01
Glide snow avalanches are dangerous and difficult to predict. Despite recent research there is still a lack of understanding regarding the controls of glide avalanche release. Glide avalanches often occur in similar terrain or the same locations annually, and observations suggest that topography may be critical. Thus, to gain an understanding of the terrain component of these types of avalanches we examined terrain parameters associated with glide avalanche release as well as areas of consistent glide crack formation but no subsequent avalanches. Glide avalanche occurrences visible from the Going-to-the-Sun Road corridor in Glacier National Park, Montana from 2003-2013 were investigated using an avalanche database derived from daily observations each year from April 1 to June 15. This yielded 192 glide avalanches in 53 distinct avalanche paths. Each avalanche occurrence was digitized in a GIS using satellite, oblique, and aerial imagery as reference. Topographical parameters such as area, slope, aspect, elevation, and curvature were then derived for the entire dataset utilizing GIS tools and a 10 m DEM. Land surface substrate and surface geology were derived from National Park Service Inventory and Monitoring maps and U.S. Geological Survey surface geology maps, respectively. Surface roughness and glide factor were calculated using a four-level classification index. Then, each avalanche occurrence was aggregated to general avalanche release zones and the frequencies were compared. For this study, glide avalanches released at elevations ranging from 1300 to 2700 m with a mean aspect of 98 degrees (east) and a mean slope angle of 38 degrees. The mean profile curvature for all glide avalanches was 0.15 and the mean plan curvature was -0.01, suggesting a fairly linear surface (i.e. neither convex nor concave). The glide avalanches occurred mostly on bedrock made up of dolomite and limestone slabs and talus deposits, with very few occurring in alpine meadows.
However, not all glide avalanches failed as cohesive slabs on this bedrock surface. Consequently, surface roughness proved to be a useful descriptive variable to discriminate between slopes that avalanched and those that did not. Annual 'repeat offender' glide avalanche paths were characterized by smooth outcropping rock plates with stratification planes parallel to the slope. Combined with aspect, these repeat offenders were also members of the highest glide category. Using this understanding of the role of topographic parameters on glide avalanche activity, a spatial terrain-based model was developed to identify other areas with high glide avalanche potential outside of our immediate observation area.
Mobility of large rock avalanches: evidence from Valles Marineris, Mars
McEwen, A.S.
1989-01-01
Measurements of H/L (height of drop/length of runout) vs. volume for landslides in Valles Marineris on Mars show a trend of decreasing H/L with increasing volume. This trend, which is linear on a log-log plot, is parallel to but lies above the trend for terrestrial dry rock avalanches. This result and estimates of 10⁴ to 10⁵ Pa yield strength suggest that the landslides were not water saturated, as suggested by previous workers. The offset between the H/L vs. volume trends shows that a typical Martian avalanche must be nearly two orders of magnitude more voluminous than a typical terrestrial avalanche in order to achieve the same mobility. This offset might be explained by the effects of gravity on flows with high yield strengths. These results should prove useful to future efforts to resolve the controversy over the mechanics of long-runout avalanches.
Forecasting of wet snow avalanche activity: Proof of concept and operational implementation
NASA Astrophysics Data System (ADS)
Gobiet, Andreas; Jöbstl, Lisa; Rieder, Hannes; Bellaire, Sascha; Mitterer, Christoph
2017-04-01
State-of-the-art tools for the operational assessment of avalanche danger include field observations, recordings from automatic weather stations, meteorological analyses and forecasts, and recently also indices derived from snowpack models. In particular, an index for identifying the onset of wet-snow avalanche cycles (LWCindex) has been demonstrated to be useful. However, its value for operational avalanche forecasting is currently limited, since detailed, physically based snowpack models are usually driven by meteorological data from automatic weather stations only and therefore have no prognostic ability. Since avalanche risk management relies heavily on timely information and early warnings, many avalanche services in Europe nowadays issue forecasts for the following days instead of the traditional assessment of the current avalanche danger. In this context, the prognostic operation of detailed snowpack models has recently been the objective of extensive research. In this study a new, observationally constrained setup for forecasting the onset of wet-snow avalanche cycles with the detailed snow cover model SNOWPACK is presented and evaluated. Based on data from weather stations and different numerical weather prediction models, we demonstrate that forecasts of the LWCindex as an indicator of wet-snow avalanche cycles can be useful for operational warning services, but are so far not reliable enough to be used as a single warning tool without considering other factors. Therefore, further development currently focuses on improving the forecasts by applying ensemble techniques and suitable post-processing approaches to the output of numerical weather prediction models. In parallel, the prognostic meteo-snow model chain has been used operationally by two regional avalanche warning services in Austria since winter 2016/2017, the first season of such use. Experiences from the first operational season and first results from current model developments will be reported.
Hu, Long; Su, Jiancang; Ding, Zhenjie; Hao, Qingsong; Fan, Yajun; Liu, Chunliang
2016-08-01
An all solid-state, high-repetition-rate, sub-nanosecond risetime pulse generator featuring low-energy-triggered bulk gallium arsenide (GaAs) avalanche semiconductor switches and a step-type transmission line is presented. The step-type transmission line with two stages is charged to a potential of 5.0 kV, which also biases the switches. The bulk GaAs avalanche semiconductor switch closes within the sub-nanosecond range when illuminated with approximately 87 nJ of laser energy at 905 nm in a single pulse. An asymmetric dipolar pulse with a peak-to-peak amplitude of 9.6 kV and a risetime of 0.65 ns is produced on a resistive load of 50 Ω. A technique that allows repetition-rate multiplication of pulse trains was experimentally demonstrated, in which the parallel-connected bulk GaAs avalanche semiconductor switches are triggered in sequence. The highest repetition rate is determined by the recovery time of the bulk GaAs avalanche semiconductor switch, and operation of the generator at 100 kHz is discussed.
Simulation of Tip-Sample Interaction in the Atomic Force Microscope
NASA Technical Reports Server (NTRS)
Good, Brian S.; Banerjea, Amitava
1994-01-01
Recent simulations of the interaction between planar surfaces and model Atomic Force Microscope (AFM) tips have suggested that there are conditions under which the tip may become unstable and 'avalanche' toward the sample surface. Here we investigate via computer simulation the stability of a variety of model AFM tip configurations with respect to the avalanche transition for a number of fcc metals. We perform Monte-Carlo simulations at room temperature using the Equivalent Crystal Theory (ECT) of Smith and Banerjea. Results are compared with recent experimental results as well as with our earlier work on the avalanche of parallel planar surfaces. Our results on a model single-atom tip are in excellent agreement with recent experiments on tunneling through mechanically-controlled break junctions.
A Debugger for Computational Grid Applications
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation gives an overview of a debugger for computational grid applications. Details are given on NAS parallel tools groups (including parallelization support tools, evaluation of various parallelization strategies, and distributed and aggregated computing), debugger dependencies, scalability, initial implementation, the process grid, and information on Globus.
Multicore runup simulation by under water avalanche using two-layer 1D shallow water equations
NASA Astrophysics Data System (ADS)
Bagustara, B. A. R. H.; Simanjuntak, C. A.; Gunawan, P. H.
2018-03-01
Increasing the number of layers in the shallow water equations (SWE) produces a more dynamic model than the one-layer SWE model. The two-layer 1D SWE model has a different density for each layer. This makes the model more dynamic and natural: in the ocean, for instance, the density of water decreases from the bottom to the surface. Here, the source-centered hydrostatic reconstruction (SCHR) numerical scheme is used to approximate the solution of the two-layer 1D SWE model, since this scheme has been proved to satisfy the required mathematical properties for the shallow water equations. Additionally, in this paper the SCHR algorithm is adapted to a multicore architecture. The simulation of runup by an underwater avalanche is elaborated here. The results show that the runup depends on the density ratio of the layers. Moreover, using a grid size of Nx = 8000, a speedup of 1.74779 and an efficiency of 87.3896 % are obtained with 2 threads, while with 4 threads the speedup and efficiency are 2.93132 and 73.2830 %, respectively, for the same grid size.
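The speedup and efficiency figures quoted in this abstract follow the standard definitions: speedup is serial time divided by parallel time, and efficiency is speedup divided by thread count. A quick sketch confirms the quoted numbers are mutually consistent:

```python
def speedup(t_serial, t_parallel):
    """Parallel speedup: how many times faster than the serial run."""
    return t_serial / t_parallel

def efficiency(speedup_value, n_threads, percent=True):
    """Fraction of ideal linear scaling achieved by n_threads."""
    e = speedup_value / n_threads
    return e * 100.0 if percent else e

# The abstract's figures: 1.74779x on 2 threads, 2.93132x on 4 threads
print(f"{efficiency(1.74779, 2):.4f} %")   # ~87.39 %
print(f"{efficiency(2.93132, 4):.4f} %")   # ~73.28 %
```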
The implementation of an aeronautical CFD flow code onto distributed memory parallel systems
NASA Astrophysics Data System (ADS)
Ierotheou, C. S.; Forsey, C. R.; Leatham, M.
2000-04-01
The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed, including implicit residual smoothing and a multigrid full approximation storage (FAS) scheme. Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable execution in parallel using a single program multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial-scale aeronautical simulations.
GridPix detectors: Production and beam test results
NASA Astrophysics Data System (ADS)
Koppert, W. J. C.; van Bakel, N.; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; van der Graaf, H.; Hartjes, F.; Hessey, N. P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.
2013-12-01
The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. Using wafer post-processing techniques, an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron-induced avalanches which are detected by the pixels. The time-to-digital converter (TDC) records the drift time, enabling the reconstruction of high-precision 3D track segments. Recently, GridPixes were produced at full wafer scale to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test, the contributions of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap were studied in detail. In addition, long-term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip.
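The 3D reconstruction described above combines the pixel position (x, y) with a z coordinate derived from the TDC drift time. A minimal sketch under stated assumptions: the 55 μm pitch matches the Timepix-1 chip, but the drift velocity value and the function name are illustrative only, and a real detector would apply time-walk and diffusion corrections.

```python
PIXEL_PITCH_UM = 55.0  # Timepix-1 pixel pitch in micrometres

def reconstruct_hit(col, row, drift_time_ns, v_drift_um_per_ns):
    """Map a (column, row, TDC drift time) hit to a 3D point in micrometres,
    assuming a constant drift velocity (no time-walk correction)."""
    x = col * PIXEL_PITCH_UM
    y = row * PIXEL_PITCH_UM
    z = drift_time_ns * v_drift_um_per_ns
    return (x, y, z)

# e.g. a hit on pixel (10, 20) arriving 25 ns after the trigger,
# with an assumed drift velocity of 40 um/ns
print(reconstruct_hit(10, 20, 25.0, 40.0))
```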
GSRP/David Marshall: Fully Automated Cartesian Grid CFD Application for MDO in High Speed Flows
NASA Technical Reports Server (NTRS)
2003-01-01
With the renewed interest in Cartesian gridding methodologies for the ease and speed of gridding complex geometries, in addition to the simplicity of the control volumes used in the computations, it has become important to investigate ways of extending the existing Cartesian grid solver functionalities. This includes developing methods of modeling viscous effects in order to utilize Cartesian grid solvers for accurate drag predictions, and addressing the issues related to the distributed memory parallelization of Cartesian solvers. This research presents advances in two areas of interest in Cartesian grid solvers: viscous effects modeling and MPI parallelization. The development of viscous effects modeling using solely Cartesian grids has been hampered by the widely varying control volume sizes associated with mesh refinement and the cut cells associated with the solid surface. This problem is being addressed by using physically based modeling techniques to update the state vectors of the cut cells and removing them from the finite volume integration scheme. This work is performed on a new Cartesian grid solver, NASCART-GT, with modifications to its cut cell functionality. The development of MPI parallelization addresses issues associated with utilizing Cartesian solvers on distributed memory parallel environments. This work is performed on an existing Cartesian grid solver, CART3D, with modifications to its parallelization methodology.
Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver
NASA Technical Reports Server (NTRS)
Parikh, Paresh
2001-01-01
A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.
Wald, Ingo; Ize, Santiago
2015-07-28
Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
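The two-phase scheme described above can be sketched in Python. The 1-D intervals, slab-shaped grid portions, and thread pool below are illustrative stand-ins, not the patent's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def populate_grid_parallel(objects, domain, n):
    """Two-phase population: (1) each worker classifies its share of
    objects by the grid portions they overlap; (2) each worker then
    gathers the objects bound to its own portion."""
    d_lo, d_hi = domain
    width = (d_hi - d_lo) / n

    def portions_overlapped(obj):
        lo, hi = obj
        first = max(0, min(n - 1, int((lo - d_lo) // width)))
        last = max(0, min(n - 1, int((hi - d_lo) // width)))
        return obj, set(range(first, last + 1))

    with ThreadPoolExecutor(max_workers=n) as pool:   # phase 1
        classified = list(pool.map(portions_overlapped, objects))

    def populate(p):
        return [obj for obj, ps in classified if p in ps]

    with ThreadPoolExecutor(max_workers=n) as pool:   # phase 2
        return list(pool.map(populate, range(n)))

# An object spanning two portions is reported in both, as in the patent.
grid = populate_grid_parallel([(0.5, 1.5), (2.2, 2.8)], (0.0, 4.0), 4)
```

Note that both phases are embarrassingly parallel: classification touches only an object's own extent, and population touches only the precomputed classification list.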
Parallel Grid Manipulations in Earth Science Calculations
NASA Technical Reports Server (NTRS)
Sawyer, W.; Lucchesi, R.; daSilva, A.; Takacs, L. L.
1999-01-01
The National Aeronautics and Space Administration (NASA) Data Assimilation Office (DAO) at the Goddard Space Flight Center is moving its data assimilation system to massively parallel computing platforms. This parallel implementation of GEOS DAS will be used in the DAO's normal activities, which include reanalysis of data and operational support for flight missions. Key components of GEOS DAS, including the gridpoint-based general circulation model and a data analysis system, are currently being parallelized. The parallelization of GEOS DAS is also one of the HPCC Grand Challenge Projects. The GEOS-DAS software employs several distinct grids. Some examples are: an observation grid, an unstructured grid of points with which observed or measured physical quantities from instruments or satellites are associated; a highly structured latitude-longitude grid of points spanning the earth at given latitude-longitude coordinates, at which prognostic quantities are determined; and a computational lat-lon grid in which the pole has been moved to a different location to avoid computational instabilities. Each of these grids has a different structure and number of constituent points. In spite of that, there are numerous interactions between the grids, e.g., values on one grid must be interpolated to another, or, in other cases, grids need to be redistributed on the underlying parallel platform. The DAO has designed a parallel integrated library for grid manipulations (PILGRIM) to support the needed grid interactions with maximum efficiency. It offers a flexible interface to generate new grids, define transformations between grids, and apply them. Basic communication is currently MPI; however, the interfaces defined here could conceivably be implemented with other message-passing libraries, e.g., Cray SHMEM, or with shared-memory constructs. The library is written in Fortran 90.
First performance results indicate that even difficult problems, such as the above-mentioned pole rotation (a sparse interpolation with little data locality between the physical lat-lon grid and a pole-rotated computational grid), can be solved efficiently and at the GFlop/s rates needed to solve tomorrow's high-resolution earth science models. In the subsequent presentation we will discuss the design and implementation of PILGRIM as well as a number of the problems it is required to solve. Some conclusions will be drawn about the potential performance of the overall earth science models on the supercomputer platforms foreseen for these problems.
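The grid-transform idea (precompute a mapping between two grids once, then apply it to many fields) can be illustrated with a minimal nearest-neighbour sketch in Python. The class name and API are invented for illustration and do not reflect PILGRIM's Fortran 90 interface:

```python
import math

class GridTransform:
    """Precompute a mapping from each target point to its nearest
    source point, then apply it to any field on the source grid.
    (Illustrative only; PILGRIM's actual interface differs.)"""

    def __init__(self, src_points, dst_points):
        self.index = []
        for dx, dy in dst_points:
            best = min(range(len(src_points)),
                       key=lambda k: (src_points[k][0] - dx) ** 2
                                   + (src_points[k][1] - dy) ** 2)
            self.index.append(best)

    def apply(self, field):
        # One value per source point in -> values on the target grid out.
        return [field[k] for k in self.index]

# Source: points every 10 degrees along the equator; target: the same
# points shifted by 1 degree, so each maps back to its source neighbour.
src = [(lon, 0.0) for lon in range(0, 360, 10)]
dst = [(lon + 1.0, 0.0) for lon in range(0, 360, 10)]
transform = GridTransform(src, dst)
field = [math.sin(math.radians(lon)) for lon, _ in src]
remapped = transform.apply(field)
```

Separating the (expensive) construction of the transform from its (cheap) repeated application is the design point: the same mapping is reused every time step.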
Aerodynamic simulation on massively parallel systems
NASA Technical Reports Server (NTRS)
Haeuser, Jochem; Simon, Horst D.
1992-01-01
This paper briefly addresses the computational requirements for the analysis of complete configurations of aircraft and spacecraft currently under design for advanced transportation in commercial applications as well as in space flight. The discussion clearly shows that massively parallel systems are the only alternative that is both cost effective and able to provide the TeraFlops needed to satisfy the narrow design margins of modern vehicles. It is assumed that the solution of the governing physical equations, i.e., the Navier-Stokes equations, which may be complemented by chemistry and turbulence models, is done on multiblock grids. This technique is situated between the fully structured approach of classical boundary-fitted grids and the fully unstructured tetrahedral grids. A fully structured grid best represents the flow physics, while the unstructured grid gives the best geometrical flexibility. The multiblock grid employed is structured within a block, but completely unstructured on the block level. While a completely unstructured grid is not straightforward to parallelize, the above-mentioned multiblock grid is inherently parallel, in particular for multiple instruction multiple datastream (MIMD) machines. In this paper guidelines are provided for setting up or modifying an existing sequential code so that a direct parallelization on a massively parallel system is possible. Results are presented for three parallel systems, namely the Intel hypercube, the Ncube hypercube, and the FPS 500 system. Some preliminary results for an 8K CM2 machine will also be mentioned. The code run is the two-dimensional grid generation module of Grid, which is a general two-dimensional and three-dimensional grid generation code for complex geometries. A system of nonlinear Poisson equations is solved. This code is also a good test case for complex fluid dynamics codes, since the same data structures are used.
All systems provided good speedups, but message-passing MIMD systems seem to be best suited for large multiblock applications.
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though the grids constructed may have no regular global structure, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
Allegrini, Paolo; Paradisi, Paolo; Menicucci, Danilo; Laurino, Marco; Piarulli, Andrea; Gemignani, Angelo
2015-09-01
Criticality reportedly describes brain dynamics. The main critical feature is the presence of scale-free neural avalanches, whose auto-organization is determined by a critical branching ratio of neural-excitation spreading. Other features, directly associated with second-order phase transitions, are: (i) scale-free-network topology of functional connectivity, stemming from suprathreshold pairwise correlations, superimposable, in waking brain activity, with that of ferromagnets at the Curie temperature; (ii) temporal long-range memory associated with renewal intermittency driven by abrupt fluctuations in the order parameters, detectable in the human brain via spatially distributed phase or amplitude changes in EEG activity. Herein we study intermittent events, extracted from 29 night EEG recordings, including presleep wakefulness and all phases of sleep, where different levels of mentation and consciousness are present. We show that while critical avalanching is unchanged, at least qualitatively, intermittency and functional connectivity, present during conscious phases (wakefulness and REM sleep), break down during both shallow and deep non-REM sleep. We provide a theory for fragmentation-induced intermittency breakdown and suggest that the main difference between conscious and unconscious states resides in the backwards causation, namely in the constraints that the emerging properties at large scale impose on the lower scales. In particular, while in conscious states this backwards causation induces a critical slowing down, preserving spatiotemporal correlations, in dreamless sleep we see a self-organized maintenance of moduli working in parallel. Critical avalanches are still present, and establish transient auto-organization, whose enhanced fluctuations are able to trigger sleep-protecting mechanisms that reinstate parallel activity. The plausible role of critical avalanches in dreamless sleep is to provide a rapid recovery of consciousness, if stimuli are highly arousing.
Advances in Parallelization for Large Scale Oct-Tree Mesh Generation
NASA Technical Reports Server (NTRS)
O'Connell, Matthew; Karman, Steve L.
2015-01-01
Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
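A minimal "top down" oct-tree refinement, one ingredient of the hybrid approach described above, can be sketched as follows. The signed-distance cut test, the sphere geometry, and the parameters are illustrative assumptions, not the paper's algorithm:

```python
def refine_octree(center, half, surface_dist, min_half):
    """Top-down refinement: split any cell that the surface may cut
    (conservative signed-distance test) until min_half is reached."""
    d = surface_dist(center)
    if abs(d) > 3 ** 0.5 * half or half <= min_half:
        return [(center, half)]           # leaf: uncut, or fully refined
    cells = []
    for sx in (-0.5, 0.5):
        for sy in (-0.5, 0.5):
            for sz in (-0.5, 0.5):
                child = (center[0] + sx * half,
                         center[1] + sy * half,
                         center[2] + sz * half)
                cells += refine_octree(child, half / 2, surface_dist, min_half)
    return cells

# Refine a [-2, 2]^3 box around the unit sphere: cells near the surface
# end up small, cells away from it stay coarse.
sphere = lambda p: (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - 1.0
leaves = refine_octree((0.0, 0.0, 0.0), 2.0, sphere, 0.25)
```

In a parallel setting, each of the eight subtrees can be refined by a different processor, which is roughly how off-body refinement distributes across a cluster.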
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
Parallel grid generation algorithm for distributed memory computers
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Moitra, Anutosh
1994-01-01
A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple instruction multiple data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
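The algebraic, homotopy-based idea can be illustrated with a small Python sketch that blends two boundary curves; this is the generic linear-homotopy construction, not the paper's exact formulation, and the circular boundaries are invented for the example:

```python
import math

def algebraic_grid(inner, outer, ni, nj):
    """Blend two boundary curves by a linear homotopy: row j lies at
    homotopy parameter t = j/(nj-1) between inner (t=0) and outer (t=1)."""
    grid = []
    for j in range(nj):
        t = j / (nj - 1)
        row = []
        for i in range(ni):
            s = i / (ni - 1)
            xi, yi = inner(s)
            xo, yo = outer(s)
            row.append(((1 - t) * xi + t * xo, (1 - t) * yi + t * yo))
        grid.append(row)
    return grid

# O-type grid between a unit circle and a concentric radius-3 circle.
inner = lambda s: (math.cos(2 * math.pi * s), math.sin(2 * math.pi * s))
outer = lambda s: (3 * math.cos(2 * math.pi * s), 3 * math.sin(2 * math.pi * s))
g = algebraic_grid(inner, outer, ni=33, nj=9)
```

Each grid point depends only on the two boundary curves, never on neighbouring points, so rows (or any blocks of points) can be generated independently on different processors with no communication; this independence is the inherent parallelism the abstract exploits.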
Implicit schemes and parallel computing in unstructured grid CFD
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.
1995-01-01
The development of implicit schemes for obtaining steady-state solutions to the Euler and Navier-Stokes equations on unstructured grids is outlined. Applications are presented that compare the convergence characteristics of various implicit methods. The development of explicit and implicit schemes to compute unsteady flows on unstructured grids is then discussed, followed by the issues involved in parallelizing finite volume schemes on unstructured meshes in an MIMD (multiple instruction/multiple data stream) fashion. Techniques for partitioning unstructured grids among processors and for extracting parallelism in explicit and implicit solvers are discussed. Finally, some dynamic load balancing ideas, which are useful in adaptive transient computations, are presented.
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crop, management schedule, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the distributed parallel computing framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file for parallel computing, the EPIC simulation is divided into jobs across the user-defined number of CPU threads. Using the EPIC input data formatters, the raw database is formatted into EPIC input data, which moves into the EPIC simulation jobs. Then 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and passed to the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data.
For example, serial processing for the Iringa test case would require 113 hours, while using the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
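The master/worker structure described (scenario generation, parallel model runs, output analysis) can be sketched in Python. The toy yield response and the thread pool below stand in for the real EPIC executable and the 28-thread desktop; none of these names are from the actual framework:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_cell(job):
    """Stand-in for one EPIC run: a real framework would write the
    model's input files, invoke the executable, and parse its output."""
    cell, slope, fert = job
    return {"cell": cell, "slope": slope, "fert": fert,
            "yield": 1.0 + 0.01 * fert - 0.05 * slope}   # toy response

def run_gridded(cells, slopes, ferts, workers=4):
    # Master: cross grid cells with scenarios, then farm the jobs out.
    jobs = list(product(cells, slopes, ferts))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_cell, jobs))
    # Output analyzer: pick the best-yield scenario per cell.
    best = {}
    for r in results:
        if r["cell"] not in best or r["yield"] > best[r["cell"]]["yield"]:
            best[r["cell"]] = r
    return results, best

results, best = run_gridded(range(3), [0, 1], [0, 50, 100])
```

Because each cell-scenario job is independent, throughput scales with the worker count until I/O dominates, which is consistent with the roughly 19x reduction reported above.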
Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.; Zagaris, George
2009-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of the domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Parallelization of elliptic solver for solving 1D Boussinesq model
NASA Astrophysics Data System (ADS)
Tarwidi, D.; Adytia, D.
2018-03-01
In this paper, a parallel implementation of an elliptic solver for solving a 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by implementing a staggered grid scheme for the continuity, momentum, and elliptic equations of the Boussinesq model. The tridiagonal system emerging from the numerical scheme of the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared memory architectures using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two test cases of numerical experiment, i.e., propagation of a solitary and a standing wave, are proposed to evaluate the parallel program. The numerical results are verified against analytical solutions of the solitary and standing waves. The best speedup of the solitary and standing wave test cases is about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both executed using 8 threads. Moreover, the best efficiency of the parallel program is 76.2% and 73.5% for the solitary and standing wave test cases, respectively.
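Cyclic reduction halves the number of unknowns at each level, and the eliminations within one level are mutually independent, which is what an OpenMP version can run in parallel. A serial Python sketch for a system of size 2^k - 1 (illustrative, not the paper's code):

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system by cyclic reduction; the size must be
    2**k - 1.  a, b, c are the sub-, main, and super-diagonals and d the
    right-hand side; a[0] and c[-1] address boundary values taken as zero."""
    n = len(b)
    a, b, c, d = list(a), list(b), list(c), list(d)
    stride = 1
    # Forward phase: each level eliminates every second unknown.
    # The updates within one level do not depend on each other.
    while 2 * stride <= n:
        for i in range(2 * stride - 1, n, 2 * stride):
            lo, hi = i - stride, i + stride
            al = -a[i] / b[lo]
            ar = -c[i] / b[hi] if hi < n else 0.0
            d[i] += al * d[lo] + (ar * d[hi] if hi < n else 0.0)
            b[i] += al * c[lo] + (ar * a[hi] if hi < n else 0.0)
            a[i] = al * a[lo]
            c[i] = ar * c[hi] if hi < n else 0.0
        stride *= 2
    # Backward phase: substitute from the middle equation outwards.
    x = [0.0] * n
    while stride >= 1:
        for i in range(stride - 1, n, 2 * stride):
            xl = x[i - stride] if i - stride >= 0 else 0.0
            xr = x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
        stride //= 2
    return x

# 1-D Poisson system (-1, 2, -1) of size 7; the exact solution is [1..7].
x = cyclic_reduction([-1.0] * 7, [2.0] * 7, [-1.0] * 7,
                     [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 8.0])
```

The log2(n) levels are inherently sequential, which caps the attainable speedup and is consistent with the modest 2.07 best speedup reported on 8 threads.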
Dynamic overset grid communication on distributed memory parallel processors
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Weeratunga, Sisira K.; Meakin, Robert L.
1993-01-01
A parallel distributed memory implementation of intergrid communication for dynamic overset grids is presented. Included are discussions of various options considered during development. Results are presented comparing an Intel iPSC/860 to a single processor Cray Y-MP. Results for grids in relative motion show the iPSC/860 implementation to be faster than the Cray implementation.
Schnek: A C++ library for the development of parallel simulation codes on regular grids
NASA Astrophysics Data System (ADS)
Schmitz, Holger
2018-05-01
A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
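The ghost-cell exchange pattern can be sketched in pure Python for a 1-D periodic field split into two subdomains. This mimics the structure of the library's MPI exchange but uses plain lists and no actual message passing, and all names are invented for the example:

```python
def exchange_ghosts(subdomains):
    """Fill the one-cell ghost layer of every subdomain from its
    periodic neighbours (stand-in for an MPI halo exchange)."""
    n = len(subdomains)
    for k, sub in enumerate(subdomains):
        sub[0] = subdomains[(k - 1) % n][-2]   # last interior cell on the left
        sub[-1] = subdomains[(k + 1) % n][1]   # first interior cell on the right

def diffuse(sub):
    """One averaging update on interior cells; ghosts must be filled first."""
    return [0.5 * (sub[i - 1] + sub[i + 1]) for i in range(1, len(sub) - 1)]

# Global periodic field of 8 cells, split into two 4-cell subdomains,
# each padded with one ghost cell on either side.
g = [float(v) for v in range(8)]
subs = [[0.0] + g[:4] + [0.0], [0.0] + g[4:] + [0.0]]
exchange_ghosts(subs)
parallel = diffuse(subs[0]) + diffuse(subs[1])
serial = [0.5 * (g[(i - 1) % 8] + g[(i + 1) % 8]) for i in range(8)]
```

After one exchange, each subdomain can update its interior independently and still reproduce the serial result exactly, which is the point of the ghost-cell abstraction.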
Spike avalanches in vivo suggest a driven, slightly subcritical brain state
Priesemann, Viola; Wibral, Michael; Valderrama, Mario; Pröpper, Robert; Le Van Quyen, Michel; Geisel, Theo; Triesch, Jochen; Nikolić, Danko; Munk, Matthias H. J.
2014-01-01
In self-organized critical (SOC) systems avalanche size distributions follow power-laws. Power-laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model, and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (this way eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly sub-critical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size, and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, and not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly sub-critical regime without a separation of time scales. Potential advantages of this regime may be faster information processing, and a safety margin from super-criticality, which has been linked to epilepsy. PMID:25009473
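The "slightly sub-critical" principle (one spike triggers on average a little less than one spike in the next step) can be illustrated with a toy branching process in Python. The Bernoulli offspring rule and all parameters are illustrative, not the paper's branching model:

```python
import random

def avalanche_size(sigma, rng, max_size=10_000):
    """Total spikes triggered by one seed spike when each spike
    triggers a follow-up spike with probability sigma (< 1)."""
    size, active = 1, 1
    while active and size < max_size:
        active = sum(1 for _ in range(active) if rng.random() < sigma)
        size += active
    return size

# Slightly sub-critical regime: the mean avalanche size should
# approach 1 / (1 - sigma) = 10 for sigma = 0.9.
rng = random.Random(42)
sizes = [avalanche_size(0.9, rng) for _ in range(2000)]
mean_size = sum(sizes) / len(sizes)
```

With sigma below 1 every avalanche terminates with a finite mean size; at sigma = 1 the mean diverges, which is the qualitative boundary between the sub-critical regime argued for here and the critical one.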
NASA Astrophysics Data System (ADS)
Edwards, Nathaniel S.; Conley, Jerrod C.; Reichenberger, Michael A.; Nelson, Kyle A.; Tiner, Christopher N.; Hinson, Niklas J.; Ugorowski, Philip B.; Fronk, Ryan G.; McGregor, Douglas S.
2018-06-01
The propagation of electrons through several linear pore densities of reticulated vitreous carbon (RVC) foam was studied using a Frisch-grid parallel-plate ionization chamber pressurized to 1 psig of P-10 proportional gas. The operating voltages of the electrodes contained within the Frisch-grid parallel-plate ionization chamber were defined by measuring counting curves using a collimated ²⁴¹Am alpha-particle source with and without a Frisch grid. RVC foam samples with linear pore densities of 5, 10, 20, 30, 45, 80, and 100 pores per linear inch were separately positioned between the cathode and anode. Pulse-height spectra and count rates from a collimated ²⁴¹Am alpha-particle source positioned between the cathode and each RVC foam sample were measured and compared to a measurement without an RVC foam sample. The Frisch grid was positioned in between the RVC foam sample and the anode. The measured pulse-height spectra were indiscernible from background and resulted in negligible net count rates for all RVC foam samples. The Frisch-grid parallel-plate ionization chamber measurement results indicate that electrons do not traverse the bulk of RVC foam and consequently do not produce a pulse.
Grid computing in large pharmaceutical molecular modeling.
Claus, Brian L; Johnson, Stephen R
2008-07-01
Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.
GLAD: a system for developing and deploying large-scale bioinformatics grid.
Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong
2005-03-01
Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware, which exploits task-based parallelism. Two bioinformatics benchmark applications, namely distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
NASA Astrophysics Data System (ADS)
Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.
2011-05-01
This paper reports on the development of a SPAD device, fabricated in a UMC 0.18 μm CMOS process, and its subsequent use in an actively quenched single-photon counting imaging system. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel-output SPAD array, which comprises an actively quenched SPAD circuit in each pixel, with the current value set by an external resistor RRef = 300 kΩ. The SPAD I-V response ID was found to increase slowly until VBD was reached at an excess bias voltage Ve = 11.03 V, and then to increase rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count was found to be approximately 13 kHz for most of the 16 SPAD pixels, and the dead time was estimated to be 40 ns.
Parallelization Issues and Particle-In-Cell Codes.
NASA Astrophysics Data System (ADS)
Elster, Anne Cathrine
1994-01-01
"Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. 
A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.
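The cache-friendly reindexing idea can be illustrated with Morton (Z-order) indexing, which interleaves the bits of the grid coordinates so that small neighbourhoods map to contiguous addresses. This is one standard way to realise such a reordering, not necessarily the thesis's exact scheme:

```python
def morton_index(i, j, bits=8):
    """Interleave the bits of (i, j): spatially nearby grid points get
    nearby memory addresses, so a small stencil neighbourhood tends to
    fall within the same cache line."""
    z = 0
    for b in range(bits):
        z |= ((i >> b) & 1) << (2 * b)
        z |= ((j >> b) & 1) << (2 * b + 1)
    return z

# An aligned 2-by-2 block maps to four consecutive addresses, whereas
# row-major indexing on a 256-wide grid scatters it across two rows.
block = sorted(morton_index(i, j) for i in (2, 3) for j in (2, 3))
row_major = sorted(i * 256 + j for i in (2, 3) for j in (2, 3))
```

With row-major indexing the same four points span 258 addresses; under Morton indexing they are adjacent, which is the locality gain the 25% cache-hit savings points at.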
Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A
2010-01-01
In the life-science and health-care sectors especially, the IT requirements are immense due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters with very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing, and thus combines the benefits of dedicated super-computing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) interaction with users, ii) job management, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but more importantly can also increase the efficiency of IT resource providers. Consequently, the mere "yes we can" becomes a real opportunity for sectors such as life science and health care, as well as for grid infrastructures, by reaching a higher level of resource efficiency.
NASA Astrophysics Data System (ADS)
Gan, Chee Kwan; Challacombe, Matt
2003-05-01
Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. 
For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
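The equal time partitioning idea can be sketched in miniature. A hedged illustration, not the paper's implementation: assuming the primitive sub-volumes are already linearly ordered and carry timings measured in the previous self-consistent-field iteration, a greedy sweep cuts the ordered list into per-processor chunks of roughly equal total time. All names are invented here.

```python
def equal_time_partition(times, nproc):
    """Split an ordered list of per-sub-volume timings into nproc
    contiguous chunks whose total times are approximately equal.
    Greedy sweep: place a cut whenever the running sum reaches the
    next multiple of the ideal per-processor share."""
    total = sum(times)
    target = total / nproc
    cuts, acc = [], 0.0
    for k, t in enumerate(times):
        acc += t
        if acc >= target * (len(cuts) + 1) and len(cuts) < nproc - 1:
            cuts.append(k + 1)
    bounds = [0] + cuts + [len(times)]
    return [(bounds[p], bounds[p + 1]) for p in range(nproc)]
```

Re-running this after every iteration is what lets the scheme exploit the smooth variation of density and grid between iterations: last iteration's timings are a good predictor of the next.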
Parallel-wire grid assembly with method and apparatus for construction thereof
Lewandowski, E.F.; Vrabec, J.
1981-10-26
Disclosed is a parallel wire grid and an apparatus and method for making the same. The grid consists of a generally coplanar array of parallel spaced-apart wires secured between metallic frame members by an electrically conductive epoxy. The method consists of continuously winding a wire about a novel winding apparatus comprising a plurality of spaced-apart generally parallel spindles. Each spindle is threaded with a number of predeterminedly spaced-apart grooves which receive and accurately position the wire at predetermined positions along the spindle. Overlying frame members coated with electrically conductive epoxy are then placed on either side of the wire array and are drawn together. After the epoxy hardens, portions of the wire array lying outside the frame members are trimmed away.
Parallel-wire grid assembly with method and apparatus for construction thereof
Lewandowski, Edward F.; Vrabec, John
1984-01-01
Disclosed is a parallel wire grid and an apparatus and method for making the same. The grid consists of a generally coplanar array of parallel spaced-apart wires secured between metallic frame members by an electrically conductive epoxy. The method consists of continuously winding a wire about a novel winding apparatus comprising a plurality of spaced-apart generally parallel spindles. Each spindle is threaded with a number of predeterminedly spaced-apart grooves which receive and accurately position the wire at predetermined positions along the spindle. Overlying frame members coated with electrically conductive epoxy are then placed on either side of the wire array and are drawn together. After the epoxy hardens, portions of the wire array lying outside the frame members are trimmed away.
PCTDSE: A parallel Cartesian-grid-based TDSE solver for modeling laser-atom interactions
NASA Astrophysics Data System (ADS)
Fu, Yongsheng; Zeng, Jiaolong; Yuan, Jianmin
2017-01-01
We present a parallel Cartesian-grid-based time-dependent Schrödinger equation (TDSE) solver for modeling laser-atom interactions. It can simulate the single-electron dynamics of atoms in arbitrary time-dependent vector potentials. We use a split-operator method combined with fast Fourier transforms (FFT), on a three-dimensional (3D) Cartesian grid. Parallelization is realized using a 2D decomposition strategy based on the Message Passing Interface (MPI) library, which results in a good parallel scaling on modern supercomputers. We give simple applications for the hydrogen atom using the benchmark problems coming from the references and obtain repeatable results. The extensions to other laser-atom systems are straightforward with minimal modifications of the source code.
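The 2D decomposition strategy can be sketched as pure index arithmetic, independent of MPI itself. A hedged illustration (the names are mine, not PCTDSE's): each rank in a pr-by-pc process grid owns a "pencil" of the 3D Cartesian grid, holding the full extent of one axis for its (y, z) patch.

```python
def pencil_bounds(rank, pr, pc, ny, nz):
    """For a pr x pc process grid, return the (y, z) half-open index
    ranges owned by `rank` in a 2D pencil decomposition. Remainder
    elements go to the low-index ranks so sizes differ by at most 1."""
    ry, rz = divmod(rank, pc)          # rank's coordinates in the process grid
    def share(n, p, r):
        base, rem = divmod(n, p)
        lo = r * base + min(r, rem)
        return lo, lo + base + (1 if r < rem else 0)
    return share(ny, pr, ry), share(nz, pc, rz)
```

A 2D (pencil) decomposition scales to more processes than a 1D slab split, since the process count is limited by ny*nz rather than by a single grid dimension; this is a common reason such solvers report good scaling on modern supercomputers.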
Establishment of key grid-connected performance index system for integrated PV-ES system
NASA Astrophysics Data System (ADS)
Li, Q.; Yuan, X. D.; Qi, Q.; Liu, H. M.
2016-08-01
In order to further promote integrated optimization operation of distributed new energy, energy storage and active load, this paper studies the integrated photovoltaic-energy storage (PV-ES) system connected with the distribution network, and analyzes typical structures and configuration selection for the integrated PV-ES generation system. By combining practical grid-connected characteristic requirements with the technology standard specifications of photovoltaic generation systems, this paper takes full account of the energy storage system and then proposes several new grid-connected performance indexes such as paralleled current sharing characteristic, parallel response consistency, adjusting characteristic, virtual moment of inertia characteristic, on-grid/off-grid switch characteristic, and so on. A comprehensive and feasible grid-connected performance index system is then established to support grid-connected performance testing of integrated PV-ES systems.
Geospatial Applications on Different Parallel and Distributed Systems in enviroGRIDS Project
NASA Astrophysics Data System (ADS)
Rodila, D.; Bacu, V.; Gorgan, D.
2012-04-01
The execution of Earth Science applications and services on parallel and distributed systems has become a necessity, especially due to the large amounts of Geospatial data these applications require and the large geographical areas they cover. The parallelization of these applications addresses important performance issues and can range from task parallelism to data parallelism. Parallel and distributed architectures such as Grid, Cloud, Multicore, etc. seem to offer the necessary functionalities to solve important problems in the Earth Science domain: storing, distribution, management, processing and security of Geospatial data, execution of complex processing through task and data parallelism, etc. A main goal of the FP7-funded project enviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is the development of a Spatial Data Infrastructure targeting this catchment region, but also the development of standardized and specialized tools for storing, analyzing, processing and visualizing the Geospatial data concerning this area. For achieving these objectives, enviroGRIDS deals with the execution of different Earth Science applications, such as hydrological models and Geospatial Web services standardized by the Open Geospatial Consortium (OGC), on parallel and distributed architectures to maximize the obtained performance. This presentation analyses the integration and execution of Geospatial applications on different parallel and distributed architectures and the possibility of choosing among these architectures, based on application characteristics and user requirements, through a specialized component. Versions of the proposed platform have been used in the enviroGRIDS project in different use cases, such as the execution of Geospatial Web services on both Web and Grid infrastructures [2] and the execution of SWAT hydrological models on both Grid and Multicore architectures [3]. 
The current focus is to integrate the Cloud infrastructure into the proposed platform; Cloud computing remains a paradigm with critical problems to be solved despite great efforts and investments. Cloud computing comes as a new way of delivering resources while using a large set of old as well as new technologies and tools to provide the necessary functionalities. The main challenges in Cloud computing, most of them also identified in the Open Cloud Manifesto (2009), address resource management and monitoring, data and application interoperability and portability, security, scalability, software licensing, etc. We propose a platform able to execute different Geospatial applications on different parallel and distributed architectures such as Grid, Cloud, Multicore, etc., with the possibility of choosing among these architectures based on application characteristics and complexity, user requirements, necessary performance, cost, etc. The execution redirection to a selected architecture is realized through a specialized component and offers a flexible way of achieving the best performance under the existing restrictions.
Load Balancing Strategies for Multi-Block Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)
2002-01-01
The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
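One plausible load balancing strategy of the kind discussed can be sketched as follows. This is a hedged illustration under my own assumptions, not a reproduction of the paper's three strategies: given per-grid cost estimates, a greedy largest-first pass always hands the next most expensive grid to the currently least-loaded processor.

```python
import heapq

def largest_first_assign(grid_costs, nproc):
    """Greedy largest-first assignment of overset-grid cost estimates
    to processors: sort grids by decreasing cost, then repeatedly give
    the next grid to the least-loaded processor (min-heap of loads).
    Returns (load, processor, assigned_grid_ids) per processor."""
    heap = [(0.0, p, []) for p in range(nproc)]
    heapq.heapify(heap)
    for g, cost in sorted(enumerate(grid_costs), key=lambda x: -x[1]):
        load, p, grids = heapq.heappop(heap)
        grids.append(g)
        heapq.heappush(heap, (load + cost, p, grids))
    return sorted(heap, key=lambda x: x[1])
```

The quality of such an assignment is bounded by the single largest grid, which is one motivation for the grid-splitting strategies studied in the related overset-grid papers.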
Parallel Proximity Detection for Computer Simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)
1997-01-01
The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
Parallel Proximity Detection for Computer Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)
1998-01-01
The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
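The fuzzy-grid idea, that movers and sensor coverages check into cells padded by a fuzzy resolution parameter so exact grid crossings need not be computed, can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; all names are hypothetical.

```python
def fuzzy_cells(x, y, radius, cell, fuzz):
    """Cells of a uniform grid that a circular sensor coverage at
    (x, y) checks into. The coverage radius is padded by the fuzzy
    resolution parameter `fuzz`, so small movements near a cell
    boundary do not change the cell set and force no exact-crossing
    computation."""
    r = radius + fuzz
    i0, i1 = int((x - r) // cell), int((x + r) // cell)
    j0, j1 = int((y - r) // cell), int((y + r) // cell)
    return {(i, j) for i in range(i0, i1 + 1) for j in range(j0, j1 + 1)}
```

A mover then only needs to be tested against sensors registered in its own cell set, which is the core of grid-based proximity detection.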
GSHR-Tree: a spatial index tree based on dynamic spatial slot and hash table in grid environments
NASA Astrophysics Data System (ADS)
Chen, Zhanlong; Wu, Xin-cai; Wu, Liang
2008-12-01
Computation Grids enable the coordinated sharing of large-scale distributed heterogeneous computing resources that can be used to solve computationally intensive problems in science, engineering, and commerce. Grid spatial applications are made possible by high-speed networks and a new generation of Grid middleware that resides between networks and traditional GIS applications. The integration of multi-source, heterogeneous spatial information, the management of distributed spatial resources, and the sharing and cooperative use of spatial data and Grid services are the key problems to resolve in the development of Grid GIS. The spatial index mechanism is a key technology of Grid GIS and spatial databases, and its performance affects the holistic performance of a GIS in Grid environments. In order to improve the efficiency of parallel processing of massive spatial data in a distributed parallel computing grid environment, this paper presents GSHR-Tree, a new grid-slot-hash parallel spatial index structure for the parallel spatial indexing mechanism. Based on a hash table and dynamic spatial slots, this paper improves the structure of the classical parallel R-tree index. The GSHR-Tree index makes full use of the good qualities of the R-tree and the hash data structure. We have constructed a new parallel spatial index that can meet the needs of parallel grid computing on massive spatial data in a distributed network. The algorithm splits space into multiple slots by repeated subdivision and merging and maps these slots to sites in the distributed parallel system. Each site organizes the spatial objects in its spatial slot into an R-tree. On the basis of this tree structure, the index data is distributed among multiple nodes in the grid network by using the large-node R-tree method. Imbalance during processing can be quickly corrected by means of a dynamic adjustment algorithm. 
The tree structure accounts for the distribution, replication, and transfer operations of a spatial index in the grid environment. The design of GSHR-Tree ensures load balance in parallel computation, and the structure is well suited to parallel processing of spatial information in distributed network environments. Instead of the recursive comparison of spatial objects used in the original R-tree, the algorithm builds the spatial index by applying binary-code operations, which computers execute more efficiently, together with an extended dynamic hash code for bit comparison. In GSHR-Tree, a new server is assigned to the network whenever a split of a full node is required. We describe a more flexible allocation protocol which copes with a temporary shortage of storage resources. It uses a distributed balanced binary spatial tree that scales with insertions to potentially any number of storage servers through splits of the overloaded ones. The application manipulates the GSHR-Tree structure from a node in the grid environment. The node addresses the tree through its image, which splits can make outdated. This may generate addressing errors, solved by forwarding among the servers. In this paper, a spatial index data distribution algorithm that limits the number of servers is proposed. We improve storage utilization at the cost of additional messages. The scheme of this grid spatial index should fit the needs of new applications using ever larger sets of spatial data. Our proposal constitutes a flexible storage allocation method for a distributed spatial index. The insertion policy can be tuned dynamically to cope with periods of storage shortage. In such cases storage balancing should be favored for better space utilization, at the price of extra message exchanges between servers. 
This structure makes a compromise between updating the duplicated index and transferring the spatial index data. Meeting the needs of grid computing, GSHR-Tree has a flexible structure designed to satisfy new needs in the future. The GSHR-Tree provides R-tree capabilities for large spatial datasets stored over interconnected servers. The analysis, including the experiments, confirmed the efficiency of our design choices. The scheme should fit the needs of new applications of spatial data using ever larger datasets. Using the system response time of the parallel spatial range-query algorithm as the performance evaluation factor, the results of the simulation experiments show that the proposed indexing structure is reasonably designed and performs well.
A computer program for converting rectangular coordinates to latitude-longitude coordinates
Rutledge, A.T.
1989-01-01
A computer program was developed for converting the coordinates of any rectangular grid on a map to coordinates on a grid that is parallel to lines of equal latitude and longitude. Using this program in conjunction with groundwater flow models, the user can extract data and results from models with varying grid orientations and place these data into a grid structure that is oriented parallel to lines of equal latitude and longitude. All cells in the rectangular grid must have equal dimensions, and all cells in the latitude-longitude grid measure one minute by one minute. This program is applicable if the map used shows lines of equal latitude as arcs and lines of equal longitude as straight lines, and it assumes that the Earth's surface can be approximated as a sphere. The program user enters the row number, column number, and latitude and longitude of the midpoint of the cell for three test cells on the rectangular grid. The latitude and longitude of the boundaries of the rectangular grid also are entered. By solving sets of simultaneous linear equations, the program calculates coefficients that are used for making the conversion. As an option in the program, the user may build a groundwater model file based on a grid that is parallel to lines of equal latitude and longitude. The program reads a data file based on the rectangular coordinates and automatically forms the new data file. (USGS)
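The coefficient-fitting step, solving simultaneous linear equations from three test cells, can be sketched as follows. This is a simplified illustration assuming a purely affine relation between grid indices and geographic coordinates (the actual program additionally handles latitude lines drawn as arcs); all names are hypothetical.

```python
def fit_affine(cells):
    """Given three test cells as (row, col, lat, lon) tuples, solve the
    two 3x3 linear systems for the coefficients of
        lat = a + b*row + c*col   and   lon = d + e*row + f*col
    by Cramer's rule (stdlib only). Returns ([a, b, c], [d, e, f])."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    A = [[1.0, r, c] for r, c, _, _ in cells]
    d = det3(A)
    def solve(rhs):
        out = []
        for k in range(3):
            Ak = [row[:] for row in A]
            for i in range(3):
                Ak[i][k] = rhs[i]
            out.append(det3(Ak) / d)
        return out
    return solve([la for _, _, la, _ in cells]), solve([lo for _, _, _, lo in cells])
```

Once the six coefficients are known, any (row, col) midpoint can be converted, which is how model output on an arbitrarily oriented grid gets resampled onto the latitude-longitude grid.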
Fast adaptive composite grid methods on distributed parallel architectures
NASA Technical Reports Server (NTRS)
Lemke, Max; Quinlan, Daniel
1992-01-01
The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite (AFAC) method under a variety of conditions, including vectorization and parallelization. Results are given for distributed memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment is a property of the algorithm and not dependent on peculiarities of any machine.
Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
The overset grid methodology has significantly reduced the time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
Junction-side illuminated silicon detector arrays
Iwanczyk, Jan S.; Patt, Bradley E.; Tull, Carolyn
2004-03-30
A junction-side illuminated detector array of pixelated detectors is constructed on a silicon wafer. A junction contact on the front-side may cover the whole detector array, and may be used as an entrance window for light, x-ray, gamma ray and/or other particles. The back-side has an array of individual ohmic contact pixels. Each of the ohmic contact pixels on the back-side may be surrounded by a grid or a ring of junction separation implants. Effective pixel size may be changed by separately biasing different sections of the grid. A scintillator may be coupled directly to the entrance window while readout electronics may be coupled directly to the ohmic contact pixels. The detector array may be used as a radiation hardened detector for high-energy physics research or as avalanche imaging arrays.
NASA Astrophysics Data System (ADS)
Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.
2016-05-01
This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated by using a 2D horizontal block domain decomposition and interprocessor data communication techniques with message passing interface. The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution to give all processors an equal amount of elements. In this algorithm, two parallel successive over-relaxation methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory and a block method using reordering index following similar ideas of the partitioning for structured grids. In all cases, the parallel algorithms are implemented with a combination of an acceleration iterative solver. This solver is based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performances of the methods showing speedups better than linear.
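The multi-colour ordering idea can be illustrated with the classic two-colour (red-black) SOR sweep on a structured grid: points of one colour depend only on points of the other colour, so each half-sweep has no internal data dependence and can be updated in parallel. A minimal serial sketch of the numerics (not the paper's unstructured-grid code; names are mine):

```python
def redblack_sor(u, f, h, omega, sweeps):
    """Red-black (two-colour) SOR for the 2D Poisson equation
    -lap(u) = f on a square grid with spacing h; boundary rows and
    columns of u are held fixed. Within each colour, the updates are
    mutually independent, which is what a parallel version exploits."""
    n = len(u)
    for _ in range(sweeps):
        for color in (0, 1):
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 != color:
                        continue
                    gs = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1]
                                 + u[i][j+1] + h * h * f[i][j])
                    u[i][j] += omega * (gs - u[i][j])  # over-relaxed update
    return u
```

Generalizing from two colours on a structured grid to a multi-colour ordering on an unstructured grid changes only how the independent sets are found, not the update itself.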
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library that permits efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is reduced primarily to specifying the serial single-grid application; the parallel, self-adaptive mesh refinement code is then obtained with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
NASA Astrophysics Data System (ADS)
Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon
2017-01-01
With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
An Analysis of Performance Enhancement Techniques for Overset Grid Applications
NASA Technical Reports Server (NTRS)
Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)
2002-01-01
The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. The resulting computational cost limits applications of very high resolution large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed on one CPU with a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
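The pattern being parallelized, a loop over HRUs that are mutually independent within one time step, can be sketched as follows. This is not SWATG code: the per-HRU routine below is a toy SCS-curve-number-style runoff formula chosen only for illustration, and a Python thread pool stands in for the OpenMP `parallel for`.

```python
from concurrent.futures import ThreadPoolExecutor

def runoff(hru):
    """Hypothetical per-HRU water balance for one time step, using a
    simplified curve-number runoff formula (illustrative only)."""
    precip, cn = hru                      # rainfall (mm), curve number
    s = 25400.0 / cn - 254.0              # retention parameter (mm)
    if precip <= 0.2 * s:                 # below initial abstraction
        return 0.0
    return (precip - 0.2 * s) ** 2 / (precip + 0.8 * s)

def step_parallel(hrus, nthreads=4):
    """Loop-level parallelism over independent HRUs, the same pattern
    an OpenMP `parallel for` applies to the gridded HRU loop; the HRUs
    share no state within a time step, so any execution order gives
    the same aggregated result."""
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        return sum(pool.map(runoff, hrus))
```

Because each HRU computation is independent, the parallel aggregate matches the sequential one exactly, which is the property that makes this loop a safe parallelization target.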
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
The 3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
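The SJ/SGS trade-off can be reproduced in miniature on a 1D Poisson problem: Gauss-Seidel uses freshly updated neighbors within a sweep and so converges in roughly half as many sweeps as Jacobi, but Jacobi's sweep reads only old values and is therefore trivially data parallel. A hedged stdlib sketch (not the paper's MATLAB code; names are mine):

```python
def iterations_to_converge(method, n=20, tol=1e-6):
    """Count sweeps until the residual of the 1D Poisson system
    u[i-1] - 2u[i] + u[i+1] + h^2 = 0 (f = 1, zero boundaries)
    drops below tol, for method in {"jacobi", "gauss-seidel"}."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)
    for sweep in range(100000):
        if method == "jacobi":
            old = u[:]                      # reads only old values -> parallel
            for i in range(1, n + 1):
                u[i] = 0.5 * (old[i-1] + old[i+1] + h * h)
        else:                               # gauss-seidel: uses fresh u[i-1]
            for i in range(1, n + 1):
                u[i] = 0.5 * (u[i-1] + u[i+1] + h * h)
        res = max(abs(u[i-1] - 2 * u[i] + u[i+1] + h * h)
                  for i in range(1, n + 1))
        if res < tol:
            return sweep + 1
    return -1
```

Counting sweeps rather than wall-clock time isolates the algorithmic difference; whether parallel Jacobi beats sequential Gauss-Seidel in practice then depends on how much the extra sweeps are offset by per-sweep parallelism, which is the question the study examines.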
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
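The block-tree structure described above can be sketched in a few lines; this is an illustrative 2-D quadtree of logically Cartesian blocks, not PARAMESH's actual Fortran 90 data structures:

```python
class Block:
    # One node of a PARAMESH-style tree: a logically Cartesian sub-grid
    # block that either carries solution data (a leaf) or holds four
    # refined children (the 2-D, quad-tree case).
    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []

    def refine(self):
        # Split this block into four half-size children.
        h = self.size / 2
        self.children = [Block(self.x0 + i * h, self.y0 + j * h, h,
                               self.level + 1)
                         for j in (0, 1) for i in (0, 1)]

    def leaves(self):
        # The leaf blocks are where the solution actually lives.
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

root = Block(0.0, 0.0, 1.0)
root.refine()              # refine the whole domain once
root.children[0].refine()  # refine one corner again
assert len(root.leaves()) == 7  # 3 coarse blocks + 4 fine ones
```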
Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.
Parallel Processing of Images in Mobile Devices using BOINC
NASA Astrophysics Data System (ADS)
Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo
2018-04-01
Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images and obtaining adequate performance when compared to desktop computers grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) the execution of programs in mobile devices required to modify the code to insert calls to the BOINC API, and b) the division of the image among the mobile devices as well as its merging required additional code in some BOINC components. This article presents answers to these four challenges.
Enabling Object Storage via shims for Grid Middleware
NASA Astrophysics Data System (ADS)
Cadellin Skipsey, Samuel; De Witt, Shaun; Dewhurst, Alastair; Britton, David; Roy, Gareth; Crooks, David
2015-12-01
The Object Store model has quickly become the basis of most commercially successful mass storage infrastructure, backing so-called "Cloud" storage such as Amazon S3, but also underlying the implementation of most parallel distributed storage systems. Many of the assumptions in Object Store design are similar, but not identical, to concepts in the design of Grid Storage Elements, although the requirement for "POSIX-like" filesystem structures on top of SEs makes the disjunction seem larger. As modern Object Stores provide many features that most Grid SEs do not (block-level striping, parallel access, automatic file repair, etc.), it is of interest to see how easily we can provide interfaces to typical Object Stores via plugins and shims for Grid tools, and how well experiments can adapt their data models to them. We present an evaluation of, and first-deployment experiences with, Xrootd-Ceph interfaces for direct object-store access, as part of an initiative within GridPP[1] hosted at RAL. Additionally, we discuss the tradeoffs and experience of developing plugins for the currently popular Ceph parallel distributed filesystem for the GFAL2 access layer, at Glasgow.
NASA Technical Reports Server (NTRS)
Zhang, Jun; Ge, Lixin; Kouatchou, Jules
2000-01-01
A new fourth order compact difference scheme for the three dimensional convection diffusion equation with variable coefficients is presented. The novelty of this new difference scheme is that it only requires 15 grid points and that it can be decoupled with two colors. The entire computational grid can be updated in two parallel subsweeps with the Gauss-Seidel type iterative method. This is compared with the known 19 point fourth order compact difference scheme, which requires four colors to decouple the computational grid. Numerical results, with multigrid methods implemented on a shared memory parallel computer, are presented to compare the 15 point and the 19 point fourth order compact schemes.
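The two-color idea is easiest to see with the classic red-black ordering for the 5-point 2-D Laplacian (a stand-in for the paper's 15-point 3-D stencil): points of one color depend only on points of the other color, so each half-sweep parallelizes.

```python
def red_black_sweep(u, f, h):
    # Two-color (red-black) Gauss-Seidel for the 5-point Laplacian: points
    # with (i + j) even depend only on points with (i + j) odd, and vice
    # versa, so each colored half-sweep is fully parallel. The paper's
    # 15-point scheme achieves the same two-color decoupling in 3-D.
    n = len(u)
    for color in (0, 1):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if (i + j) % 2 == color:
                    u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                      + u[i][j - 1] + u[i][j + 1]
                                      - h * h * f[i][j])
    return u

# Laplace equation on a square, boundary value 1 along one side.
n, h = 6, 0.2
f = [[0.0] * n for _ in range(n)]
u = [[0.0] * n for _ in range(n)]
for j in range(n):
    u[0][j] = 1.0
for _ in range(200):
    u = red_black_sweep(u, f, h)
assert 0.0 < u[2][2] < 1.0
```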
Integrated electronics for time-resolved array of single-photon avalanche diodes
NASA Astrophysics Data System (ADS)
Acconcia, G.; Crotti, M.; Rech, I.; Ghioni, M.
2013-12-01
The Time Correlated Single Photon Counting (TCSPC) technique has reached a prominent position among analytical methods employed in a great variety of fields, from medicine and biology (fluorescence spectroscopy) to telemetry (laser ranging) and communication (quantum cryptography). Nevertheless, the development of TCSPC acquisition systems featuring both a high number of parallel channels and very high performance is still an open challenge: to satisfy the tight requirements set by the applications, a fully parallel acquisition system requires not only high efficiency single photon detectors but also read-out electronics specifically designed to obtain the highest performance in conjunction with these sensors. To this aim three main blocks have been designed: a gigahertz bandwidth front-end stage to directly read the custom technology SPAD array avalanche current, a reconfigurable logic to route the detector output signals to the acquisition chain, and an array of time measurement circuits capable of recording the photon arrival times with picosecond time resolution and very high linearity. An innovative architecture based on these three circuits will feature a very high number of detectors, to perform truly parallel spatial or spectral analysis, and a smaller number of high-performance time-to-amplitude converters offering a very high conversion frequency while limiting area occupation and power dissipation. The routing logic will make the dynamic connection between the two arrays possible in order to guarantee that no information gets lost.
Unstructured grids on SIMD torus machines
NASA Technical Reports Server (NTRS)
Bjorstad, Petter E.; Schreiber, Robert
1994-01-01
Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. Here, we consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance.
A highly parallel multigrid-like method for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Tuminaro, Ray S.
1989-01-01
We consider a highly parallel multigrid-like method for the solution of the two dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid) while the oscillatory component is used for a fine grid subproblem. The primary advantage in the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations, and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.
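The residual-splitting step at the heart of the filtering method can be sketched with a toy low-pass filter (the paper's actual filter is not reproduced here):

```python
def smooth_part(r):
    # Crude low-pass filter: a local three-point average extracts a smooth
    # component of the residual. This is only a stand-in for the paper's
    # filtering operator.
    n = len(r)
    return [(r[max(i - 1, 0)] + r[i] + r[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def split_residual(r):
    # Filtering multigrid splits r into a smooth part (sent to the coarse
    # grid) and an oscillatory part (kept as a concurrent fine-grid
    # subproblem). By construction the two parts sum back to r.
    s = smooth_part(r)
    o = [ri - si for ri, si in zip(r, s)]
    return s, o

r = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
s, o = split_residual(r)
assert all(abs(si + oi - ri) < 1e-12 for si, oi, ri in zip(s, o, r))
```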
A communication library for the parallelization of air quality models on structured grids
NASA Astrophysics Data System (ADS)
Miehe, Philipp; Sandu, Adrian; Carmichael, Gregory R.; Tang, Youhua; Dăescu, Dacian
PAQMSG is an MPI-based, Fortran 90 communication library for the parallelization of air quality models (AQMs) on structured grids. It consists of distribution, gathering and repartitioning routines for different domain decompositions implementing a master-worker strategy. The library is architecture and application independent and includes optimization strategies for different architectures. This paper presents the library from a user perspective. Results are shown from the parallelization of STEM-III on Beowulf clusters. The PAQMSG library is available on the web. The communication routines are easy to use, and should allow for an immediate parallelization of existing AQMs. PAQMSG can also be used for constructing new models.
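A master-worker distribution of a structured grid, of the kind PAQMSG's routines implement, reduces at its simplest to slab decomposition; a sketch (the library's actual MPI distribution and gathering routines are far richer):

```python
def decompose(ncols, nworkers):
    # Master-worker column decomposition: the master assigns each worker a
    # contiguous slab of the structured grid, spreading any remainder over
    # the first workers so the load stays balanced.
    base, extra = divmod(ncols, nworkers)
    slabs, start = [], 0
    for w in range(nworkers):
        width = base + (1 if w < extra else 0)
        slabs.append((start, start + width))
        start += width
    return slabs

slabs = decompose(10, 3)
assert slabs == [(0, 4), (4, 7), (7, 10)]
```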
Recent Progress on the Parallel Implementation of Moving-Body Overset Grid Schemes
NASA Technical Reports Server (NTRS)
Wissink, Andrew; Allen, Edwin (Technical Monitor)
1998-01-01
Viscous calculations about geometrically complex bodies in which there is relative motion between component parts is one of the most computationally demanding problems facing CFD researchers today. This presentation documents results from the first two years of a CHSSI-funded effort within the U.S. Army AFDD to develop scalable dynamic overset grid methods for unsteady viscous calculations with moving-body problems. The first part of the presentation will focus on results from OVERFLOW-D1, a parallelized moving-body overset grid scheme that employs traditional Chimera methodology. The two processes that dominate the cost of such problems are the flow solution on each component and the intergrid connectivity solution. Parallel implementations of the OVERFLOW flow solver and DCF3D connectivity software are coupled with a proposed two-part static-dynamic load balancing scheme and tested on the IBM SP and Cray T3E multi-processors. The second part of the presentation will cover some recent results from OVERFLOW-D2, a new flow solver that employs Cartesian grids with various levels of refinement, facilitating solution adaption. A study of the parallel performance of the scheme on large distributed-memory multiprocessor computer architectures will be reported.
The island dynamics model on parallel quadtree grids
NASA Astrophysics Data System (ADS)
Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic
2018-05-01
We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.
Large-Scale Parallel Viscous Flow Computations using an Unstructured Multigrid Algorithm
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1999-01-01
The development and testing of a parallel unstructured agglomeration multigrid algorithm for steady-state aerodynamic flows is discussed. The agglomeration multigrid strategy uses a graph algorithm to construct the coarse multigrid levels from the given fine grid, similar to an algebraic multigrid approach, but operates directly on the non-linear system using the FAS (Full Approximation Scheme) approach. The scalability and convergence rate of the multigrid algorithm are examined on the SGI Origin 2000 and the Cray T3E. An argument is given which indicates that the asymptotic scalability of the multigrid algorithm should be similar to that of its underlying single grid smoothing scheme. For medium size problems involving several million grid points, near perfect scalability is obtained for the single grid algorithm, while only a slight drop-off in parallel efficiency is observed for the multigrid V- and W-cycles, using up to 128 processors on the SGI Origin 2000, and up to 512 processors on the Cray T3E. For a large problem using 25 million grid points, good scalability is observed for the multigrid algorithm using up to 1450 processors on a Cray T3E, even when the coarsest grid level contains fewer points than the total number of processors.
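The graph-based coarsening can be illustrated with a greedy aggregation pass over an adjacency list; this is a generic sketch, not the solver's actual agglomeration heuristic:

```python
def agglomerate(adjacency):
    # Greedy aggregation: visit vertices in order, and merge each unassigned
    # vertex with its still-unassigned neighbours into one coarse aggregate.
    # This mimics, very loosely, how agglomeration multigrid builds a coarse
    # level directly from the fine-grid graph.
    coarse, next_id = {}, 0
    for v in sorted(adjacency):
        if v in coarse:
            continue
        coarse[v] = next_id
        for nb in adjacency[v]:
            if nb not in coarse:
                coarse[nb] = next_id
        next_id += 1
    return coarse

# A chain of 5 vertices collapses into aggregates {0,1}, {2,3} and {4}.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
groups = agglomerate(chain)
assert len(set(groups.values())) == 3
```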
JPARSS: A Java Parallel Network Package for Grid Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jie; Akers, Walter; Chen, Ying
2002-03-01
The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, owing to the tuning of the TCP window size needed to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments are presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition, a simple architecture using Web services
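The core idea, striping one payload across several concurrent streams and reassembling it on the far side, can be sketched as follows (threads and an identity `send` stand in for JPARSS's sockets and security layer):

```python
from concurrent.futures import ThreadPoolExecutor

def split(data, nstreams):
    # Stripe the payload across nstreams chunks, the way JPARSS fans data
    # out over several parallel sockets.
    return [data[i::nstreams] for i in range(nstreams)]

def send(chunk):
    # Placeholder for a socket write/read round trip; here it just returns
    # the chunk unchanged.
    return chunk

def transfer(data, nstreams=4):
    parts = split(data, nstreams)
    with ThreadPoolExecutor(max_workers=nstreams) as pool:
        received = list(pool.map(send, parts))
    # Reassemble by interleaving the stripes back together.
    out = bytearray(len(data))
    for i, part in enumerate(received):
        out[i::nstreams] = part
    return bytes(out)

payload = bytes(range(100))
assert transfer(payload) == payload
```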
NASA Technical Reports Server (NTRS)
Reinsch, K. G. (Editor); Schmidt, W. (Editor); Ecer, A. (Editor); Haeuser, Jochem (Editor); Periaux, J. (Editor)
1992-01-01
A conference was held on parallel computational fluid dynamics and produced related papers. Topics discussed in these papers include: parallel implicit and explicit solvers for compressible flow, parallel computational techniques for the Euler and Navier-Stokes equations, grid generation techniques for parallel computers, and aerodynamic simulation on massively parallel systems.
NASA Astrophysics Data System (ADS)
Hua, Weizhuo; Fukagata, Koji
2018-04-01
Two-dimensional numerical simulation of a surface dielectric barrier discharge (SDBD) plasma actuator, driven by a nanosecond voltage pulse, is conducted. A special focus is laid upon the influence of grid resolution on the computational result. It is found that the computational result is not very sensitive to the streamwise grid spacing, whereas the wall-normal grid spacing has a critical influence. In particular, the computed propagation velocity changes discontinuously around a wall-normal grid spacing of about 2 μm due to a qualitative change of the discharge structure. The present result suggests that a computational grid finer than those used in most previous studies is required to correctly capture the structure and dynamics of the streamer: when a positive nanosecond voltage pulse is applied to the upper electrode, a streamer forms in the vicinity of the upper electrode and propagates along the dielectric surface with a maximum propagation velocity of 2 × 10^8 cm/s, and a gap with low electron and ion density (i.e., a plasma sheath) exists between the streamer and the dielectric surface. The difference between the results obtained using the finer and the coarser grid is discussed in detail in terms of the electron transport at a position near the surface. When the finer grid is used, the low electron density near the surface is caused by the absence of an ionization avalanche: in that region, the electrons generated by ionization are compensated by the drift-diffusion flux. In contrast, when the coarser grid is used, the underestimated drift-diffusion flux cannot compensate the electrons generated by ionization, which leads to an incorrect increase of the electron density.
Anderson, J.B.
1960-01-01
A reactor is described which comprises a tank, a plurality of coaxial steel sleeves in the tank, a mass of water in the tank, and wire grids in abutting relationship within a plurality of elongated parallel channels within the steel sleeves, the wire being provided with a plurality of bends in the same plane forming adjacent parallel sections between bends, and the sections of adjacent grids being normally disposed relative to each other.
Vision-Based Navigation and Parallel Computing
1990-08-01
5.8. Behzad Kamgar-Parsi and Behrooz Kamgar-Parsi, "On Problem Solving with Hopfield Neural Networks", CAR-TR-462, CS-TR... Second, the hypercube connections support logarithmic implementations of fundamental parallel algorithms, such as grid permutations and scan... the pose space. It also uses a set of virtual processors to represent an orthogonal projection grid, and projections of the six-dimensional pose space
Silicon photon-counting avalanche diodes for single-molecule fluorescence spectroscopy
Michalet, Xavier; Ingargiola, Antonino; Colyer, Ryan A.; Scalia, Giuseppe; Weiss, Shimon; Maccagnani, Piera; Gulinatti, Angelo; Rech, Ivan; Ghioni, Massimo
2014-01-01
Solution-based single-molecule fluorescence spectroscopy is a powerful experimental tool with applications in cell biology, biochemistry and biophysics. The basic feature of this technique is to excite and collect light from a very small volume and work in a low concentration regime resulting in rare burst-like events corresponding to the transit of a single molecule. Detecting photon bursts is a challenging task: the small number of emitted photons in each burst calls for high detector sensitivity. Bursts are very brief, requiring detectors with fast response time and capable of sustaining high count rates. Finally, many bursts need to be accumulated to achieve proper statistical accuracy, resulting in long measurement time unless parallelization strategies are implemented to speed up data acquisition. In this paper we will show that silicon single-photon avalanche diodes (SPADs) best meet the needs of single-molecule detection. We will review the key SPAD parameters and highlight the issues to be addressed in their design, fabrication and operation. After surveying the state-of-the-art SPAD technologies, we will describe our recent progress towards increasing the throughput of single-molecule fluorescence spectroscopy in solution using parallel arrays of SPADs. The potential of this approach is illustrated with single-molecule Förster resonance energy transfer measurements. PMID:25309114
Algorithms for parallel flow solvers on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementation of fluid flow solvers -- possibly coupled with structures and heat equation solvers -- on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer. A coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning every processor receives responsibility for exactly one block of grid points instead of several. This necessitates fine-grain pipelined program execution in order to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique that has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for the concentration on improving the performance of pipeline methods is their applicability in other types of flow solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these, the optimal first-processor retardation leading to the shortest total completion time for the pipeline process can be determined. Theoretical predictions of pipeline performance with and without optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms.
If grid blocks at boundaries are not at least as large in the wall-normal direction as those immediately adjacent to them, then the first processor in the pipeline will receive a computational load that is less than that of subsequent processors, magnifying the pipeline slowdown effect. Extra compensation is needed for grid boundary effects, even if all grid blocks are equally sized.
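A toy timing model makes the fill/drain cost of pipelined execution concrete (the paper's analytical model, including first-processor retardation, is richer than this sketch):

```python
def serial_time(nstages, nitems, t):
    # One processor applies all pipeline stages to every item.
    return nstages * nitems * t

def pipeline_time(nstages, nitems, t):
    # Ideal fine-grain pipeline: after a fill phase of (nstages - 1)
    # item-times, one item completes per item-time, so the pipeline drains
    # after (nitems + nstages - 1) item-times in total.
    return (nitems + nstages - 1) * t

nstages, nitems, t = 8, 100, 1.0
speedup = serial_time(nstages, nitems, t) / pipeline_time(nstages, nitems, t)
assert speedup < nstages  # fill/drain overhead keeps speedup below ideal
```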
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
A fast pulse design for parallel excitation with gridding conjugate gradient.
Feng, Shuo; Ji, Jim
2013-01-01
Parallel excitation (pTx) is recognized as a crucial technique in high-field MRI for addressing the transmit field inhomogeneity problem. However, designing pTx pulses can be undesirably time consuming. In this work, we propose a pulse design with gridding conjugate gradient (CG) based on the small-tip-angle approximation. The two major time-consuming matrix-vector multiplications are substituted by two operators that involve only FFT and gridding. Simulation results show that the proposed method is 3 times faster than the conventional method and that the memory cost is reduced by a factor of 1000.
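Conjugate gradient written against an abstract operator, rather than an explicit matrix, is the structural idea behind the gridding CG: the solver only ever needs `apply_A(x)`, so a fast FFT/gridding routine can be dropped in for the dense product. A generic sketch with a hand-coded 2×2 SPD operator (the paper's FFT/gridding operators are not reproduced):

```python
def cg(apply_A, b, iters=50, tol=1e-10):
    # Conjugate gradient against an abstract SPD operator apply_A(x).
    # tol is a threshold on the squared residual norm.
    x = [0.0] * len(b)
    r = b[:]
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD test system A = [[4, 1], [1, 3]], b = [1, 2]; solution (1/11, 7/11).
def apply_A(v):
    return [4 * v[0] + 1 * v[1], 1 * v[0] + 3 * v[1]]

x = cg(apply_A, [1.0, 2.0])
assert abs(x[0] - 1 / 11) < 1e-6 and abs(x[1] - 7 / 11) < 1e-6
```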
Cost-effective GPU-grid for genome-wide epistasis calculations.
Pütz, B; Kam-Thong, T; Karbalai, N; Altmann, A; Müller-Myhsok, B
2013-01-01
Until recently, genotype studies were limited to the investigation of single SNP effects due to the computational burden incurred when studying pairwise interactions of SNPs. However, some genetic effects as simple as coloring (in plants and animals) cannot be ascribed to a single locus but are only understood when epistasis is taken into account [1]. It is expected that such effects are also found in complex diseases where many genes contribute to the clinical outcome of affected individuals. Only recently have such problems become computationally feasible. The inherently parallel structure of the problem makes it a perfect candidate for massive parallelization on either grid or cloud architectures. Since we are also dealing with confidential patient data, we were not able to consider a cloud-based solution but had to find a way to process the data in-house, and aimed to build a local GPU-based grid structure. Sequential epistasis calculations were ported to GPU using CUDA at various levels. Parallelization on the CPU was compared to corresponding GPU counterparts with regard to performance and cost. A cost-effective solution was created by combining custom-built nodes equipped with relatively inexpensive consumer-level graphics cards containing highly parallel GPUs in a local grid. The GPU method outperforms current cluster-based systems on a price/performance criterion, as a single GPU delivers performance comparable to up to 200 CPU cores. The outlined approach will work for problems that easily lend themselves to massive parallelization. Code for various tasks has been made available, and ongoing development of tools will further ease the transition from sequential to parallel algorithms.
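The "inherently parallel structure" is simply that every SNP pair can be scored independently; a sketch with a toy pairwise statistic (a hypothetical `interaction_score`, not the authors' actual test) mapped over a worker pool:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def interaction_score(pair):
    # Toy pairwise statistic (a dot product of the two genotype vectors),
    # standing in for the real epistasis test run per SNP pair on the GPU.
    a, b = pair
    return sum(x * y for x, y in zip(a, b))

def scan_pairs(snps, workers=4):
    # Every SNP pair is scored independently -- the embarrassingly parallel
    # structure the authors exploit on a GPU grid.
    pairs = list(combinations(snps, 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(interaction_score, pairs))

snps = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
scores = scan_pairs(snps)
assert len(scores) == 3  # C(3, 2) pairs scored
```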
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods, called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid cell by grid cell in order of decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expense but shorter wall time due to the nearly perfect parallelization scheme.
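The cell-by-cell exploration in decreasing-likelihood order maps naturally onto a priority queue; a 1-D sketch under the assumption that neighbours of a high-likelihood cell are worth probing:

```python
import heapq
import math

def explore(like, start, threshold):
    # Snake-style sweep: always expand the unvisited grid cell with the
    # highest likelihood, pushing its neighbours onto a priority queue, and
    # stop once the best remaining cell drops below the threshold. Cells
    # with negligible likelihood are never evaluated further.
    visited, heap = {}, [(-like(start), start)]
    while heap:
        negL, cell = heapq.heappop(heap)
        if cell in visited:
            continue
        if -negL < threshold:
            break
        visited[cell] = -negL
        for d in (-1, 1):
            if cell + d not in visited:
                heapq.heappush(heap, (-like(cell + d), cell + d))
    return visited

# 1-D Gaussian-shaped likelihood on an integer grid centred at 0.
cells = explore(lambda c: math.exp(-0.5 * c * c), 0, threshold=0.01)
assert sorted(cells) == [-3, -2, -1, 0, 1, 2, 3]
```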
MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data
NASA Astrophysics Data System (ADS)
Key, Kerry
2016-10-01
This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. 
Data-balancing normalization weights for the joint inversion of two or more data sets encourage the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real-world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.
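The bounded-parameter idea above can be illustrated with a simple log-barrier transform that maps a bounded interval to the whole real line while staying close to the identity inside the bounds (an illustrative stand-in, not necessarily the exact transform used in MARE2DEM; the function names and the steepness constant `c` are assumptions):

```python
import math

def to_unbound(x, a, b, c=5.0):
    """Map x in (a, b) to an unbounded parameter. For large c the barrier
    term is small away from the bounds, so the transformed value stays
    close to x, reducing non-linear smoothing effects."""
    return x + math.log((x - a) / (b - x)) / c

def to_bound(u, a, b, c=5.0, iters=200):
    """Invert to_unbound by bisection (the map is strictly increasing)."""
    lo, hi = a, b
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if to_unbound(mid, a, b, c) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Within the bounds the transform perturbs the parameter only by an O(1/c) logarithmic term, while the inverse always returns a value strictly inside (a, b), however large the unbounded step taken by the optimizer.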
Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR
NASA Astrophysics Data System (ADS)
Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen
2013-03-01
Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Einstein, Daniel R.; Kuprat, Andrew P.; Jiao, Xiangmin
2013-01-01
Geometries for organ scale and multiscale simulations of organ function are now routinely derived from imaging data. However, medical images may also contain spatially heterogeneous information other than geometry that is relevant to such simulations, either as initial conditions or in the form of model parameters. In this manuscript, we present an algorithm for the efficient and robust mapping of such data to imaging-based unstructured polyhedral grids in parallel. We then illustrate the application of our mapping algorithm to three different mapping problems: 1) the mapping of MRI diffusion tensor data to an unstructured ventricular grid; 2) the mapping of serial cryo-section histology data to an unstructured mouse brain grid; and 3) the mapping of CT-derived volumetric strain data to an unstructured multiscale lung grid. Execution times and parallel performance are reported for each case.
An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Zhou, Ning
With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.
Domain Decomposition By the Advancing-Partition Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.
Li, Jiangtao; Zhao, Zheng; Sun, Yi; Liu, Yuhao; Ren, Ziyuan; He, Jiaxin; Cao, Hui; Zheng, Minjun
2017-03-01
Numerous applications driven by pulsed voltage require pulses with high amplitude, high repetition frequency, and narrow width, which can be satisfied by utilizing avalanche transistors. The output improvement is severely limited by the power capacities of the transistors. Pulse combining is an effective approach to increase the output amplitude while still adopting conventional pulse generating modules. However, traditional topologies have drawbacks, including the saturation tendency of the combining efficiency and waveform oscillation. In this paper, a hybrid pulse combining topology was adopted, utilizing the combination of modularized avalanche transistor Marx circuits, direct pulse adding, and a transmission line transformer. The factors affecting the combining efficiency were determined, including the output time synchronization of the Marx circuits and the quantity and position of the magnetic cores. The numbers of parallel modules and stages were determined by the output characteristics of each combining method. Experimental results illustrated the ability to generate pulses with 2-14 kV amplitude, 7-11 ns width, and a maximum 10 kHz repetition rate on a matched 50-300 Ω resistive load. The hybrid topology is a convincing pulse-combining method for similar nanosecond pulse generators based on solid-state switches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Purba, Victor; Jafarpour, Saber
Given that next-generation infrastructures will contain large numbers of grid-connected inverters and these interfaces will be satisfying a growing fraction of system load, it is imperative to analyze the impacts of power electronics on such systems. However, since each inverter model has a relatively large number of dynamic states, it would be impractical to execute complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. That is, we show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as an individual inverter in the paralleled system. Numerical simulations validate the reduced-order models.
NASA Technical Reports Server (NTRS)
Voellmer, G. M.; Chuss, D. T.; Jackson, M.; Krejny, M.; Moseley, S. H.; Novak, G.; Wollack, E. J.
2008-01-01
We describe the design of the linear motion stage for a Variable-delay Polarization Modulator (VPM) and of a grid flattener that has been built and integrated into the Hertz ground-based, submillimeter polarimeter. VPMs allow the modulation of a polarized source by controlling the phase difference between two linear, orthogonal polarizations. The size of the gap between a mirror and a very flat polarizing grid determines the amount of the phase difference. This gap must be parallel to better than 1% of the wavelength. A novel, kinematic, flexure-based mechanism is described that passively maintains the parallelism of the mirror and the grid to 1.5 μm over a 150 mm diameter, with a 400 μm throw. A single piezoceramic actuator is used to modulate the gap, and a capacitive sensor provides position feedback for closed-loop control. A simple device that ensures the planarity of the polarizing grid is also described. Engineering results from the deployment of this device in the Hertz instrument in April 2006 at the Submillimeter Telescope Observatory (SMTO) in Arizona are presented.
Multiple grid problems on concurrent-processing computers
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.
1986-01-01
Three computer codes were studied which make use of concurrent processing computer architectures in computational fluid dynamics (CFD). The three parallel codes were tested on a two processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center, and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming, implicit, approximate factored algorithm. This study demonstrates the parallelism found in a well-known scheme and it achieved speedups exceeding 1.9 on the two processor MIMD test facility. The second code studied made use of an embedded grid scheme which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate factorization techniques onto MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.
Parallel grid library for rapid and flexible simulation development
NASA Astrophysics Data System (ADS)
Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.
2013-04-01
We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time, allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously, allowing one to overlap computation and communication. This enables excellent scalability at least up to 32k cores in magnetohydrodynamic tests, depending on the problem and hardware. In the version of dccrg presented here, part of the mesh metadata is replicated between MPI processes, reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg.
Catalogue identifier: AEOM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Lesser General Public License version 3
No. of lines in distributed program, including test data, etc.: 54975
No. of bytes in distributed program, including test data, etc.: 974015
Distribution format: tar.gz
Programming language: C++
Computer: PC, cluster, supercomputer
Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes
RAM: 10 MB-10 GB per process
Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20
External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4]
Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and load balancing.
Solution method: The simulation grid is represented by an adjacency list (graph) with vertices stored in a hash table and edges in contiguous arrays. The Message Passing Interface standard is used for parallelization. Cell data is given as a template parameter when instantiating the grid.
Restrictions: Logically cartesian grid.
Running time: Running time depends on the hardware, problem and the solution method. Small problems can be solved in under a minute and very large problems can take weeks. The examples and tests provided with the package take less than about one minute using default options. In the version of dccrg presented here the speed of adaptive mesh refinement is at most of the order of 10^6 total created cells per second.
[1] http://www.mpi-forum.org/
[2] http://www.boost.org/
[3] K. Devine, E. Boman, R. Heaphy, B. Hendrickson, C. Vaughan, Zoltan data management services for parallel dynamic applications, Comput. Sci. Eng. 4 (2002) 90-97. http://dx.doi.org/10.1109/5992.988653
[4] https://gitorious.org/sfc++
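The cell-ownership and transparent remote-neighbor-update idea behind such a library can be sketched as follows (a Python toy, not dccrg's actual C++ API; the class and function names are invented for illustration, and a list of `Grid` objects stands in for MPI ranks):

```python
class Grid:
    """Toy distributed grid: cells live in a hash table (dict), each cell
    has one owning rank, and other ranks hold ghost copies of it."""
    def __init__(self, rank):
        self.rank = rank
        self.cells = {}      # cell id -> arbitrary user data
        self.owner = {}      # cell id -> owning rank
        self.neighbors = {}  # cell id -> neighbor cell ids

    def add_cell(self, cid, data, owner, nbrs):
        self.cells[cid] = data
        self.owner[cid] = owner
        self.neighbors[cid] = list(nbrs)

def exchange_ghosts(grids):
    """Refresh every ghost cell from its owner's copy (a serial stand-in
    for the asynchronous MPI neighbor-data transfer)."""
    for g in grids:
        for cid, own in g.owner.items():
            if own != g.rank:           # ghost copy: pull from the owner
                g.cells[cid] = grids[own].cells[cid]
```

Because the cell data is an arbitrary object, the same exchange step serves fluid and particle codes alike; in the real library the data type is fixed at compile time as a template parameter.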
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For the experimental performance study, we have considered both a realistic mesh problem from NASA and synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
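The mapping objective, minimizing the maximum execution time across processors of differing speeds, can be sketched with a greedy heuristic (a simplified stand-in for the published MiniMax algorithm; it ignores the communication network and uses only relative processor speeds):

```python
def map_partitions(work, speed):
    """Greedily assign partition workloads to heterogeneous processors,
    aiming to minimize the maximum estimated execution time (makespan)."""
    finish = [0.0] * len(speed)       # current finish time per processor
    assign = [[] for _ in speed]      # partition indices per processor
    # place the largest partitions first (longest-processing-time order)
    for i in sorted(range(len(work)), key=lambda i: -work[i]):
        # pick the processor whose finish time grows the least
        p = min(range(len(speed)), key=lambda p: finish[p] + work[i] / speed[p])
        finish[p] += work[i] / speed[p]
        assign[p].append(i)
    return assign, max(finish)
```

For example, mapping workloads [4, 3, 2, 1] onto processors with speeds [2, 1] sends the largest partition to the fast processor and balances the rest, giving a makespan of 3.5 time units.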
Influence of bed surface changes on snow avalanche simulation
NASA Astrophysics Data System (ADS)
Fischer, Jan-Thomas; Issler, Dieter
2014-05-01
Gravitational flows, such as snow avalanches, are often modeled employing the shallowness assumption. The driving gravitational force has a first order effect on the dynamics of the flow, especially in complex terrain. Under suitable conditions, erosion and deposition during passage of the flow may change the bed surface by a similar amount as the flow depth itself. The accompanying changes of local slope angle and curvature are particularly significant at the side margins of the flow, where they may induce self-channeling and levée formation. Generally, one ought to expect visible effects wherever the flow depth and velocity are small, e.g., in deposition zones. Most current numerical models in practical use neglect this effect. In order to study the importance of these effects in typical applications, we modified the quasi-3D (depth-averaged) code MoT-Voellmy, which implements the well-known Voellmy friction law that is traditionally used in hazard mapping: the bed shear stress is given by

τ_iz(h, u) = -(u_i / ‖u‖)(μ g h cos θ + k u²),   (1)

with μ = O(0.1...0.5) and k = O(10⁻³...10⁻²) the dimensionless friction and drag coefficients, respectively. The leading curvature effects, i.e., extra friction due to centrifugal normal forces, are taken into account. The mass and momentum balances are solved by the (simplified) method of transport on a grid whose cells are squares when projected onto the horizontal plane. The direction of depth-averaging is everywhere perpendicular to the topographic surface. A simple erosion model is used. The erosion formula is based on the assumption that the snow cover behaves as a perfectly brittle solid with shear strength τ_c, above which it instantaneously fails. The erosion rate is derived from the balance of momentum across the interface between bed and flow, where there is a discontinuity of the shear stress, which is given by equation (1) just above the interface and by τ_c just below it according to the assumptions.
This immediately leads to the formula

q_e = [(μ g h cos θ + k u² − τ_c/ρ_f) / ‖u‖] · Θ(μ g h cos θ + k u² − τ_c/ρ_f),   (2)

where Θ denotes the Heaviside step function. We present numerical simulations with static and dynamic beds in two different cases. First, an avalanche simulation on an inclined plane allows us to study the occurring effects in their most immediate form and the influence of the spatial resolution of the computational grid. Second, we back-calculate a typical mid-size avalanche that was measured and documented in 1993 at the Norwegian test site Ryggfonn. This case study serves to test the relevance of including bed surface changes under conditions typical of real-world applications.
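The erosion-rate formula (equation 2) amounts to a thresholded shear-stress excess: the bed erodes only where the driving stress exceeds the snow-cover strength. A minimal sketch, with all parameter values chosen purely for illustration:

```python
import math

def erosion_rate(h, u, slope_deg, mu=0.3, k=0.005,
                 tau_c=1000.0, rho_f=200.0, g=9.81):
    """Erosion rate q_e: nonzero only where the driving shear stress
    (per unit flow density) exceeds the snow-cover strength tau_c/rho_f.
    The Heaviside factor is realized as the if-branch below."""
    if u <= 0.0:            # no flow, no erosion
        return 0.0
    theta = math.radians(slope_deg)
    excess = mu * g * h * math.cos(theta) + k * u * u - tau_c / rho_f
    return excess / u if excess > 0.0 else 0.0
```

A fast, deep flow on a steep slope erodes, while a shallow, slow flow leaves the bed untouched, which is exactly the spatial switching behavior the abstract attributes to the deposition zones and side margins.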
NASA Astrophysics Data System (ADS)
Roh, Y. H.; Yoon, Y.; Kim, K.; Kim, J.; Kim, J.; Morishita, J.
2016-10-01
Scattered radiation is the main reason for the degradation of image quality and the increased patient exposure dose in diagnostic radiology. In an effort to reduce scattered radiation, a novel structure of an indirect flat panel detector has been proposed. In this study, a performance evaluation of the novel system in terms of image contrast, as well as an estimation of the number of photons incident on the detector and the grid exposure factor, was conducted using Monte Carlo simulations. The image contrast of the proposed system was superior to that of the no-grid system but slightly inferior to that of the parallel-grid system. The number of photons incident on the detector and the grid exposure factor of the novel system were higher than those of the parallel-grid system but lower than those of the no-grid system. The proposed system exhibited the potential for reduced exposure dose without image quality degradation; additionally, it can be further improved by a structural optimization considering the manufacturer's specifications of its lead content.
Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Purba, Victor; Jafarpour, Saber
Next-generation power networks will contain large numbers of grid-connected inverters satisfying a significant fraction of system load. Since each inverter model has a relatively large number of dynamic states, it is impractical to analyze complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model with lumped parameters for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. We show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as any individual inverter in the system. Numerical simulations validate the reduced-order model.
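The linear-scaling aggregation can be sketched with simple circuit rules: when n identical inverters are paralleled, series filter elements combine in parallel (divide by n) and shunt elements add (multiply by n). The parameter keys below are hypothetical, not taken from the paper:

```python
def aggregate_inverters(n, inv):
    """Collapse n identical parallel-connected inverters (whose filter
    components and gains scale linearly with power rating) into one
    equivalent lumped unit. Hypothetical keys: L1/L2 = LCL filter
    inductors, R1/R2 = their series resistances, C = filter capacitor,
    P = power rating."""
    return {
        'L1': inv['L1'] / n,  # n parallel inductive branches
        'L2': inv['L2'] / n,
        'R1': inv['R1'] / n,  # series resistances likewise divide by n
        'R2': inv['R2'] / n,
        'C':  inv['C'] * n,   # shunt capacitances add
        'P':  inv['P'] * n,   # aggregate power rating
    }
```

The aggregate keeps the LCL structure and the same number of dynamic states as a single unit, which is the "structure-preserving" property the abstract emphasizes; the controller gains would rescale analogously.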
A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations
NASA Technical Reports Server (NTRS)
Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos
2009-01-01
A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building-blocks of the framework consist of: (1) the Advancing-Partition (AP) method used for domain decomposition and (2) the Advancing Front (AF) method used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of domain decomposition naturally maps to a divide-and-conquer algorithm which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results by this approach are presented and discussed in detail as well as future work and improvements.
A Data Parallel Multizone Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1995-01-01
We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.
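The "chimera" transfer step described above, where flow data on one zone is interpolated onto another in the region of overlap, can be sketched in one dimension (an illustrative reduction; real overset solvers perform donor-cell search and trilinear interpolation in 3-D):

```python
def interpolate_overlap(donor_x, donor_q, receptor_x):
    """Transfer donor-zone data q(x) onto receptor points by linear
    interpolation (1-D chimera sketch). donor_x must be ascending and
    every receptor point must lie inside [donor_x[0], donor_x[-1]]."""
    out = []
    for x in receptor_x:
        # locate the donor interval [x_j, x_{j+1}] containing x
        j = max(i for i in range(len(donor_x) - 1) if donor_x[i] <= x)
        t = (x - donor_x[j]) / (donor_x[j + 1] - donor_x[j])
        out.append((1.0 - t) * donor_q[j] + t * donor_q[j + 1])
    return out
```

In the multizone solver this transfer runs once per time step for each overlap region, using a distributed data structure so donor and receptor zones may live on different processors.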
NASA Astrophysics Data System (ADS)
Mandai, Shingo; Jain, Vishwas; Charbon, Edoardo
2014-02-01
This paper presents a digital silicon photomultiplier (SiPM) partitioned into columns, where each column is connected to a column-parallel time-to-digital converter (TDC), in order to improve the timing resolution of single-photon detection. By reducing the number of pixels per TDC using a sharing scheme with three TDCs per column, the pixel-to-pixel skew is reduced. We report the basic characterization of the SiPM, comprising 416 single-photon avalanche diodes (SPADs); the characterization includes photon detection probability, dark count rate, afterpulsing, and crosstalk. We achieved 264-ps full-width at half-maximum timing resolution of single-photon detection using a 48-fold column-parallel TDC with a temporal resolution of 51.8 ps (least significant bit), fully integrated in standard complementary metal-oxide semiconductor technology.
Multi-LED parallel transmission for long distance underwater VLC system with one SPAD receiver
NASA Astrophysics Data System (ADS)
Wang, Chao; Yu, Hong-Yi; Zhu, Yi-Jun; Wang, Tao; Ji, Ya-Wei
2018-03-01
In this paper, a multiple light emitting diode (LED) chip parallel transmission (Multi-LED-PT) scheme for an underwater visible light communication system with one photon-counting single photon avalanche diode (SPAD) receiver is proposed. Since a lamp typically consists of multiple LED chips, the data rate can be improved by driving these chips in parallel using the interleave-division-multiplexing technique. For each chip, on-off-keying modulation is used to reduce the influence of clipping. Then a serial successive interference cancellation detection algorithm, based on an ideal Poisson photon-counting channel at the SPAD, is proposed. Finally, compared to the SPAD-based direct current-biased optical orthogonal frequency division multiplexing system, the proposed Multi-LED-PT system significantly improves the error-rate and anti-nonlinearity performance under the combined effects of absorption, scattering and weak turbulence-induced channel fading.
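The successive interference cancellation idea can be sketched with a toy per-symbol decision: peel off LED chips from strongest to weakest, subtracting each detected contribution from the residual count. This is a strong simplification of the scheme in the paper (the actual algorithm separates chips by interleaving over many slots and accounts for Poisson statistics; the gains and names below are invented):

```python
def sic_detect(count, gains, background):
    """Toy serial successive interference cancellation for OOK chips on a
    noiseless photon-counting channel: threshold the residual mean count
    at half of each chip's contribution, strongest chip first."""
    bits = {}
    residual = count - background
    for led in sorted(gains, key=gains.get, reverse=True):
        bit = 1 if residual >= gains[led] / 2.0 else 0
        bits[led] = bit
        residual -= bit * gains[led]   # cancel the detected contribution
    return bits
```

With well-separated chip intensities the strongest chip is decided first and its photons are removed, so the weaker chips are no longer masked, which is the essence of the cancellation step.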
Efficient parallel resolution of the simplified transport equations in mixed-dual formulation
NASA Astrophysics Data System (ADS)
Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.
2011-03-01
A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult to treat with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain a high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by benefiting from the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order). However, it can be significantly optimized for the matching grid case. The good behavior of the new parallelization scheme is demonstrated for the matching grid case on several hundreds of nodes for computations based on a pin-by-pin discretization.
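The inverse power algorithm mentioned above solves a linear system with one of the two operators at every iteration. A minimal dense sketch for the generalized problem F·φ = k·M·φ (toy matrices only, not a transport discretization):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (adequate for the small dense systems of this sketch)."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]       # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]                    # pivot row swap
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back substitution
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def inverse_power(M, F, iters=100):
    """Highest eigenvalue k of F phi = k M phi: apply F, solve with M
    (the 'inverse' part), normalize, and repeat."""
    n = len(M)
    phi, k = [1.0] * n, 0.0
    for _ in range(iters):
        rhs = [sum(F[i][j] * phi[j] for j in range(n)) for i in range(n)]
        psi = solve(M, rhs)
        k = max(abs(v) for v in psi)   # eigenvalue estimate (max norm)
        phi = [v / k for v in psi]
    return k, phi
```

Each iteration's linear solve is exactly the expensive step that the paper's domain decomposition distributes over subdomains, which is why an increase in the number of power iterations directly degrades parallel efficiency.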
Progress Toward Overset-Grid Moving Body Capability for USM3D Unstructured Flow Solver
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Frink, Neal T.; Noack, Ralph W.
2005-01-01
A static and dynamic Chimera overset-grid capability is added to an established NASA tetrahedral unstructured parallel Navier-Stokes flow solver, USM3D. Modifications to the solver primarily consist of a few strategic calls to the Donor interpolation Receptor Transaction library (DiRTlib) to facilitate communication of solution information between various grids. The assembly of multiple overlapping grids into a single-zone composite grid is performed by the Structured, Unstructured and Generalized Grid AssembleR (SUGGAR) code. Several test cases are presented to verify the implementation, assess overset-grid solution accuracy and convergence relative to single-grid solutions, and demonstrate the prescribed relative grid motion capability.
NASA Astrophysics Data System (ADS)
Popov, Igor; Sukov, Sergey
2018-02-01
A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on a one-stage time approximation and is adapted to the calculation of gas-dynamics problems on unstructured grids with an arbitrary type of grid elements. The proposed numerical method has simplified logic, better performance and better parallel efficiency compared to the implementation of the original AAV method. Computer experiments demonstrate the robustness of the method and its convergence to the difference solution.
NASA Astrophysics Data System (ADS)
Voellmer, G. M.; Chuss, D. T.; Jackson, M.; Krejny, M.; Moseley, S. H.; Novak, G.; Wollack, E. J.
2006-06-01
We describe the design and construction of a Variable-delay Polarization Modulator (VPM) that has been built and integrated into the Hertz ground-based, submillimeter polarimeter at the SMTO on Mt. Graham in Arizona. VPMs allow polarization modulation by controlling the phase difference between two linear, orthogonal polarizations. This is accomplished by utilizing a grid-mirror pair with a controlled separation. The size of the gap between the mirror and the polarizing grid determines the amount of the phase difference. This gap must be parallel to better than 1% of the wavelength. The necessity of controlling the phase of the radiation across this device drives the two novel features of the VPM. First, a novel kinematic flexure is employed that passively maintains the parallelism of the mirror and the grid to 1.5 μm over a 150 mm diameter, with a 400 μm throw. A single piezoceramic actuator is used to modulate the gap, and a capacitive sensor provides position feedback for closed-loop control. Second, the VPM uses a grid flattener that highly constrains the planarity of the polarizing grid. In doing so, the phase error across the device is minimized. Engineering results from the deployment of this device in the Hertz instrument in April 2006 at the Submillimeter Telescope Observatory (SMTO) in Arizona are presented.
Parallel adaptive wavelet collocation method for PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored in a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For dynamic domain partitioning, trees are treated as the minimum quanta of data to be migrated between processes. This allows fully automated and efficient handling of non-simply connected partitionings of the computational domain. Dynamic load balancing is achieved by repartitioning the domain during the grid adaptation step and reassigning trees to processes so that each process holds approximately the same number of grid points. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.
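The tree-reassignment step can be illustrated with a toy balancer: treat each tree as an indivisible quantum carrying a grid-point count and assign trees to processes so the loads roughly equalize. This is a generic greedy heuristic (largest tree first, always to the lightest process), not the paper's actual repartitioning algorithm.

```python
import heapq

def assign_trees(tree_points, nprocs):
    # Greedily assign trees (tree -> grid-point count) to processes,
    # largest first, each going to the currently lightest process.
    heap = [(0, p, []) for p in range(nprocs)]
    heapq.heapify(heap)
    for tree, pts in sorted(tree_points.items(), key=lambda kv: -kv[1]):
        load, p, trees = heapq.heappop(heap)
        trees.append(tree)
        heapq.heappush(heap, (load + pts, p, trees))
    return {p: (load, trees) for load, p, trees in heap}
```

In the real method the assignment would also account for migration cost, since moving a tree means transferring its data between processes.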
Research in Parallel Algorithms and Software for Computational Aerosciences
DOT National Transportation Integrated Search
1996-04-01
Phase I is complete for the development of a Computational Fluid Dynamics capability with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed...
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically changing nonuniform grid. Rebalancing requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Modularized Parallel Neutron Instrument Simulation on the TeraGrid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Meili; Cobb, John W; Hagen, Mark E
2007-01-01
In order to build a bridge between the TeraGrid (TG), a national-scale cyberinfrastructure resource, and neutron science, the Neutron Science TeraGrid Gateway (NSTG) is focused on introducing productive HPC usage to the neutron science community, primarily the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Monte Carlo simulations are used as a powerful tool for instrument design and optimization at the SNS. One of the successful efforts of a collaboration team composed of NSTG HPC experts and SNS instrument scientists is the development of a software facility named PSoNI, Parallelizing Simulations of Neutron Instruments. By parallelizing the traditional serial instrument simulation on TeraGrid resources, PSoNI quickly computes full instrument simulations at statistical levels sufficient for instrument design. Upon successful commissioning of the SNS, by the end of 2007, three of the five commissioned instruments in the SNS target station will be available to initial users. Advanced instrument study, proposal feasibility evaluation, and experiment planning are on the immediate schedule of the SNS, which poses further requirements, such as flexibility and high runtime efficiency, on fast instrument simulation. PSoNI has been redesigned to meet the new challenges and a preliminary version has been developed on TeraGrid. This paper explores the motivation and goals of the new design, and the improved software structure. Further, it describes the realized new features as seen from MPI-parallelized McStas running high-resolution design simulations of the SEQUOIA and BSS instruments at the SNS. A discussion regarding future work, which targets fast simulation for automated experiment adjustment and comparison of models to data in analysis, is also presented.
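Monte Carlo instrument simulation parallelizes naturally because neutron histories are independent: batches run with distinct random seeds and their statistics are merged afterwards, exactly the pattern that lets batches be farmed out to separate grid nodes. The sketch below is a toy stand-in, not the McStas or PSoNI API.

```python
import random

def run_batch(n_histories, seed):
    # Toy stand-in for a serial instrument simulation: count neutrons
    # passing a hypothetical 30%-transmission filter. Each batch has its
    # own RNG seed, so batches are independent and can run anywhere.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_histories) if rng.random() < 0.3)
    return hits, n_histories

def merge(results):
    # Combine per-batch counts into one transmission estimate.
    hits, total = 0, 0
    for h, n in results:
        hits += h
        total += n
    return hits / total

# The batches could be dispatched to grid nodes; here they run in a loop.
estimate = merge(run_batch(10_000, seed) for seed in range(8))
```

Doubling the number of batches halves the statistical error's variance at no extra wall time when nodes are available, which is the core appeal of this embarrassingly parallel scheme.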
NASA Astrophysics Data System (ADS)
Grieco, F.; Capra, L.; Groppelli, G.; Norini, G.
2007-05-01
The present study concerns the numerical modeling of debris avalanches on the Nevado de Toluca Volcano (Mexico) using TITAN2D simulation software, and its application to create hazard maps. Nevado de Toluca is an andesitic to dacitic stratovolcano of Late Pliocene-Holocene age, located in central México near to the cities of Toluca and México City; its past activity has endangered an area with more than 25 million inhabitants today. The present work is based upon the data collected during extensive field work finalized to the realization of the geological map of Nevado de Toluca at 1:25,000 scale. The activity of the volcano has developed from 2.6 Ma until 10.5 ka with both effusive and explosive events; the Nevado de Toluca has presented long phases of inactivity characterized by erosion and emplacement of debris flow and debris avalanche deposits on its flanks. The largest epiclastic events in the history of the volcano are wide debris flows and debris avalanches, occurred between 1 Ma and 50 ka, during a prolonged hiatus in eruptive activity. Other minor events happened mainly during the most recent volcanic activity (less than 50 ka), characterized by magmatic and tectonic-induced instability of the summit dome complex. According to the most recent tectonic analysis, the active transtensive kinematics of the E-W Tenango Fault System had a strong influence on the preferential directions of the last three documented lateral collapses, which generated the Arroyo Grande and Zaguàn debris avalanche deposits towards E and Nopal debris avalanche deposit towards W. The analysis of the data collected during the field work permitted to create a detailed GIS database of the spatial and temporal distribution of debris avalanche deposits on the volcano. Flow models, that have been performed with the software TITAN2D, developed by GMFG at Buffalo, were entirely based upon the information stored in the geological database. 
The modeling software is built upon equations solved by a parallel and adaptive mesh, that can concentrate computing power in region of special interest. First of all, simulations of known past events, were compared with the geological data validating the effectiveness of the method. Afterwards, numerous simulations have been executed varying input parameters as friction angles, starting point and initial volume, in order to obtain a global perspective over the possible expected debris avalanche scenarios. The input parameters were selected considering the geological, structural and topographic factors controlling instability of the volcanic cone, especially in case of renewed eruptive activity. The interoperability between TITAN2D and GIS softwares permitted to draw a semi-quantitative hazard map by crossing simulation outputs with the distribution of deposits generated by past episodes of instability, mapped during the field work.
A template-based approach for parallel hexahedral two-refinement
Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.
2016-10-17
Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel-consistent, locally refined mesh requiring minimal communication, where the minimum mesh quality is greater than a scaled Jacobian of 0.3 prior to smoothing.
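The uniform-refinement kernel can be sketched for the simplest case: splitting an axis-aligned hex (a box) into its eight children at the midpoints. The paper's templates operate on general hexes and marked nodes; this simplified version only illustrates the 8-child subdivision.

```python
import itertools

def subdivide_box(lo, hi):
    # Split an axis-aligned box [lo, hi] into its 8 octant children.
    # General hexes would use trilinear midpoints of the 8 corners.
    mid = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    children = []
    for octant in itertools.product((0, 1), repeat=3):
        clo = tuple((lo[k], mid[k])[octant[k]] for k in range(3))
        chi = tuple((mid[k], hi[k])[octant[k]] for k in range(3))
        children.append((clo, chi))
    return children
```

A useful sanity check is that the eight child volumes sum exactly to the parent volume.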
Far infrared polarizing grids for use at cryogenic temperatures
NASA Technical Reports Server (NTRS)
Novak, Giles; Sundwall, Jeffrey L.; Pernic, Robert J.
1989-01-01
A technique is proposed for the construction of free-standing wire grids for use as far-IR polarizers. The method involves wrapping a strand of wire around a single cylinder rather than around a pair of parallel rods, thus simplifying the problem of maintaining constant wire tension. The cylinder is composed of three separate pieces which are disassembled at a later stage in the grid-making process. Grids have been constructed using 8-micron-diameter stainless steel wire and a grid spacing of 25 microns. The grids are shown to be reliable under repeated cycling between room temperature and 1.5 K.
Recent improvements of the JET lithium beam diagnostic
NASA Astrophysics Data System (ADS)
Brix, M.; Dodt, D.; Dunai, D.; Lupelli, I.; Marsen, S.; Melson, T. F.; Meszaros, B.; Morgan, P.; Petravich, G.; Refy, D. I.; Silva, C.; Stamp, M.; Szabolics, T.; Zastrow, K.-D.; Zoletnik, S.; JET-EFDA Contributors
2012-10-01
A 60 kV neutral lithium diagnostic beam probes the edge plasma of JET for the measurement of electron density profiles. This paper describes recent enhancements of the diagnostic setup, new procedures for calibration, and protection measures for the lithium ion gun during massive gas puffs for disruption mitigation. New light-splitting optics allow beam emission measurements in parallel with a new double-entrance-slit CCD spectrometer (spectrally resolved) and a new interference-filter avalanche photodiode camera (fast density and fluctuation studies).
Accessing and visualizing scientific spatiotemporal data
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Bergou, Attila; Berriman, G. Bruce; Block, Gary L.; Collier, Jim; Curkendall, David W.; Good, John; Husman, Laura; Jacob, Joseph C.; Laity, Anastasia;
2004-01-01
This paper discusses work done by JPL's Parallel Applications Technologies Group in helping scientists access and visualize very large data sets through the use of multiple computing resources, such as parallel supercomputers, clusters, and grids.
Convergence issues in domain decomposition parallel computation of hovering rotor
NASA Astrophysics Data System (ADS)
Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong
2018-05-01
The implicit LU-SGS time integration algorithm has been widely used in parallel computation despite its lack of information from adjacent domains. When applied to parallel computation of hovering rotor flows in a rotating frame, it leads to convergence issues. To remedy the problem, three LU-factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms; it shows that the LU-SGS algorithm introduces errors at boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate over the rotation, and ultimately lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which makes them desirable in domain decomposition parallel computations.
Progress in Unsteady Turbopump Flow Simulations Using Overset Grid Systems
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Chan, William; Kwak, Dochan
2002-01-01
This viewgraph presentation provides information on unsteady flow simulations for the Second Generation RLV (Reusable Launch Vehicle) baseline turbopump. Three impeller rotations were simulated using a 34.3 million grid point model. MPI/OpenMP hybrid parallelism and MLP shared memory parallelism have been implemented and benchmarked in INS3D, an incompressible Navier-Stokes solver. For the RLV turbopump simulations a speedup of more than 30 times has been obtained. Moving boundary capability is obtained by using the DCF module. Scripting capability from CAD geometry to solution has been developed. Unsteady flow simulations for the advanced consortium impeller/diffuser using a 39 million grid point model are currently underway; 1.2 impeller rotations are completed. The fluid/structure coupling is initiated.
Application Note: Power Grid Modeling With Xyce.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sholander, Peter E.
This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce™ Parallel Electronic Simulator developed at Sandia National Labs. This application note provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and the assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.
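The steady-state power-flow problem mentioned above can be sketched with the standard linearized (DC) approximation, B·θ = P, where B is built from branch susceptances and the slack bus angle is fixed at zero. This is generic textbook power flow in Python, not Xyce netlist syntax.

```python
import numpy as np

def dc_power_flow(lines, injections, slack=0):
    # Linearized (DC) power flow: solve B*theta = P for bus angles,
    # slack bus angle pinned to zero. lines = [(i, j, susceptance)],
    # injections[k] = net power injected at bus k (per unit).
    n = len(injections)
    B = np.zeros((n, n))
    for i, j, b in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n) if k != slack]
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.array(injections, float)[keep])
    flows = {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}
    return theta, flows
```

For a symmetric 3-bus ring with the slack bus supplying two equal loads, the two branches leaving the slack each carry half the generation, which makes a convenient hand-checkable case.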
Peitzsch, Erich H.; Hendrikx, Jordy; Fagre, Daniel B.; Reardon, Blase
2010-01-01
Wet slab and glide slab snow avalanches are dangerous and yet can be particularly difficult to predict. Both wet slab and glide slab avalanches are thought to depend upon free water moving through the snowpack but are driven by different processes. In Glacier National Park, Montana, both types of avalanches can occur in the same year and affect the Going-to-the-Sun Road (GTSR). Both wet slab and glide slab avalanches along the GTSR from 2003-2010 are investigated. Meteorological data from two high-elevation weather stations and one SNOTEL site are used in conjunction with an avalanche database and snowpit profiles. These data were used to characterize years when only glide slab avalanches occurred and those years when both glide slab and wet slab avalanches occurred. Results of 168 glide slab and 57 wet slab avalanches along the GTSR suggest both types of avalanche occurrence depend on sustained warming periods with intense solar radiation (or rain on snow) to produce free water in the snowpack. Differences in temperature and net radiation metrics between wet slab and glide slab avalanches emerge as one moves from one day to seven days prior to avalanche occurrence. On average, a more rapid warming precedes wet slab avalanche occurrence. Glide slab and wet slab avalanches require a similar amount of net radiation. Wet slab avalanches do not occur every year, while glide slab avalanches occur annually. These results aim to enhance understanding of the required meteorological conditions for wet slab and glide slab avalanches and aid in improved wet snow avalanche forecasting.
Avalanche ecology and large magnitude avalanche events: Glacier National Park, Montana, USA
Fagre, Daniel B.; Peitzsch, Erich H.
2010-01-01
Large magnitude snow avalanches play an important role ecologically in terms of wildlife habitat, vegetation diversity, and sediment transport within a watershed. Ecological effects from these infrequent avalanches can last for decades. Understanding the frequency of such large magnitude avalanches is also critical to avalanche forecasting for the Going-to-the-Sun Road (GTSR). In January 2009, a large magnitude avalanche cycle occurred in and around Glacier National Park, Montana. The study site is the Little Granite avalanche path located along the GTSR. The study is designed to quantify change in vegetative cover immediately after a large magnitude event and document ecological response over a multi-year period. GPS field mapping was completed to determine the redefined perimeter of the avalanche path. Vegetation was inventoried using modified U.S. Forest Service Forest Inventory and Analysis plots, cross sections were taken from over 100 dead trees throughout the avalanche path, and an avalanche chronology was developed. Initial results indicate that the perimeter of this path was expanded by 30%. The avalanche travelled approximately 1200 vertical meters and 3 linear kilometers. Stands of large conifers as old as 150 years were decimated by the avalanche, causing a shift in dominant vegetation types in many parts of the avalanche path. Woody debris is a major ground cover up to 3 m in depth on lower portions of the avalanche path and will likely affect tree regrowth. Monitoring and measuring the post-avalanche vegetation recovery of this particular avalanche path provides a unique dataset for determining the ecological role of avalanches in mountain landscapes.
NASA Astrophysics Data System (ADS)
Nagai, Hiroto; Watanabe, Manabu; Tomii, Naoya
2016-04-01
A major earthquake of magnitude 7.8 (Mw) occurred on April 25, 2015, in Lamjung district, central Nepal, causing more than 9,000 deaths and 23,000 injuries. During the event, termed the 2015 Gorkha earthquake, the most catastrophic collapse of the mountainside was reported in the Langtang Valley, located 60 km north of Kathmandu. In this collapse, a huge boulder-rich avalanche and a sudden air pressure wave traveled from a steep south-facing slope to the bottom of a U-shaped valley, resulting in more than 170 deaths. Accurate in-situ surveys are necessary to investigate such events and to find ways to avoid similar catastrophes in the future. Geospatial information obtained from multiple satellite observations is invaluable for such surveys in remote mountain regions. In this study, we (1) identify the collapsed sediment using synthetic aperture radar, (2) conduct detailed mapping using high-resolution optical imagery, and (3) estimate sediment volumes from digital surface models in order to quantify the immediate situation of the avalanched sediment. (1) Visual interpretation and coherence calculations using Phased Array type L-band Synthetic Aperture Radar-2 (PALSAR-2) images give a consistent area of sediment cover. An emergency observation was carried out the day after the earthquake using the PALSAR-2 onboard the Advanced Land Observing Satellite-2 (ALOS-2, "DAICHI-2"). Visual interpretation of orthorectified backscatter amplitude images revealed completely altered surface features, over which the identifiable sediment cover extended for 0.73 km^2 (28°13'N, 85°30'E). Additionally, measuring the decrease in normalized coherence quantifies the similarity between the pre- and post-event surface features, after the removal of numerous noise patches by focal statistics. Calculations within the study area revealed high-value areas corresponding to the visually identified sediment area.
Visual interpretation of the amplitude images and the coherence calculations thus produce similar extractions of collapse sediment. (2) Visual interpretation of high-resolution satellite imagery suggests multiple layers of sediment with different physical properties. A DigitalGlobe satellite, WorldView-3, observed the Langtang Valley on May 8, 2015, using a panchromatic sensor with a spatial resolution of 0.3 m. Identification and mapping of avalanche-induced surface features were performed manually. The surface features were classified into 15 segments on the basis of sediment features, including darkness, the dominance of scattering or flowing features, and the recognition of boulders. Together, these characteristics suggest various combinations of physical properties, such as viscosity, density, and ice and snow content. (3) Altitude differences between the pre- and post-quake digital surface models (DSM) suggest the deposition of 5.2×10^5 m^3 of sediment, mainly along the river bed. A 5 m-grid pre-event DSM was generated from PRISM stereo-pair images acquired on October 12, 2008. A 2 m-grid post-event DSM was generated from WorldView-3 images acquired on May 8, 2015. Comparing the two DSMs, a vertical difference of up to 22±13 m is observed, mainly along the river bed. Estimates of the total avalanched volume reach 5.2×10^5 m^3, with a possible range of 3.7×10^5 to 10.7×10^5 m^3.
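The DSM-differencing volume estimate in step (3) amounts to summing positive elevation changes over the grid and multiplying by the cell area. A minimal sketch, with the function name and threshold as illustrative choices:

```python
import numpy as np

def deposit_volume(dsm_pre, dsm_post, cell_area_m2, threshold=0.0):
    # Deposited volume from co-registered pre/post-event DSMs:
    # sum elevation gains above `threshold`, times the cell area.
    dz = np.asarray(dsm_post, float) - np.asarray(dsm_pre, float)
    gain = np.where(dz > threshold, dz, 0.0)
    return gain.sum() * cell_area_m2
```

In practice the threshold would be set from the DSMs' vertical uncertainty (the ±13 m above), which is what widens the reported volume range.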
Near-Body Grid Adaption for Overset Grids
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2016-01-01
A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
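The refined-point insertion can be sketched with the classical four-point cubic rule (a parametric cubic evaluated at the midpoint of the middle interval). OVERFLOW additionally applies one-sided biasing based on curvature and stretching ratio, which this generic sketch omits.

```python
def cubic_midpoint(p0, p1, p2, p3):
    # Value at the midpoint between p1 and p2 from the cubic through
    # four equally spaced samples (weights -1/16, 9/16, 9/16, -1/16).
    return (-p0 + 9.0*p1 + 9.0*p2 - p3) / 16.0

def refine_line(points):
    # Double the resolution of a grid line: insert a cubic-interpolated
    # point between each adjacent pair (neighbors clamped at the ends).
    out = [points[0]]
    for i in range(len(points) - 1):
        lo, hi = max(i - 1, 0), min(i + 2, len(points) - 1)
        out.append(cubic_midpoint(points[lo], points[i], points[i + 1], points[hi]))
        out.append(points[i + 1])
    return out
```

On smooth data the interior midpoints are third-order accurate; only the clamped end intervals fall back to lower accuracy.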
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... research, we designed a pilot study utilizing large-scale parallel Grid computing, harnessing nationwide infrastructure for medical image analysis. Also...
Spherical ion oscillations in a positive polarity gridded inertial-electrostatic confinement device
NASA Astrophysics Data System (ADS)
Bandara, R.; Khachan, J.
2013-07-01
A pulsed, positive polarity gridded inertial electrostatic confinement device has been investigated experimentally, using a differential emissive probe and potential traces as primary diagnostics. Large amplitude oscillations in the plasma current and plasma potential were observed within a microsecond of the discharge onset, which are indicative of coherent ion oscillations about a temporarily confined excess of recirculating electron space charge. The magnitude of the depth of the potential well in the established virtual cathode was determined using a differential emissive Langmuir probe, which correlated well to the potential well inferred from the ion oscillation frequency for both hydrogen and argon experiments. It was found that the timescale for ion oscillation dispersion is strongly dependent on the neutral gas density, and weakly dependent on the peak anode voltage. The cessation of the oscillations was found to be due to charge exchange processes converting ions to high velocity neutrals, causing the abrupt de-coherence of the oscillations through an avalanche dispersion in phase space.
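The inference of well depth from oscillation frequency can be sketched under the simplest assumption of a parabolic potential well of depth U0 and radius r, for which an ion oscillates harmonically with ω² = 2qU0/(mr²). This is a hedged textbook model, not the paper's probe analysis; the constants below are standard physical constants.

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
M_PROTON = 1.67262192369e-27  # proton mass, kg

def well_depth_from_frequency(f_hz, radius_m, ion_mass_kg, charge_c=E_CHARGE):
    # Depth (volts) of an assumed parabolic potential well of radius r
    # that yields harmonic ion oscillation at frequency f:
    # U0 = m * (r * omega)^2 / (2 q), with omega = 2*pi*f.
    omega = 2.0 * math.pi * f_hz
    return ion_mass_kg * (radius_m * omega) ** 2 / (2.0 * charge_c)
```

Because the depth scales as f², comparing hydrogen and argon runs at their respective frequencies provides the cross-check between species mentioned in the abstract.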
Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.
Yang, Tzuhsiung; Berry, John F
2018-06-04
The computation of nuclear second derivatives of the energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom, catalytically relevant, open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a forerunner of future implementations.
Dip and anisotropy effects on flow using a vertically skewed model grid.
Hoaglund, John R; Pollard, David
2003-01-01
Darcy flow equations relating vertical and bedding-parallel flow to vertical and bedding-parallel gradient components are derived for a skewed Cartesian grid in a vertical plane, correcting for structural dip given the principal hydraulic conductivities in the bedding-parallel and bedding-orthogonal directions. Incorrect-minus-correct flow error results are presented for ranges of structural dip (0° ≤ θ ≤ 90°) and gradient directions (0° ≤ φ ≤ 360°). The equations can be coded into ground water models (e.g., MODFLOW) that can use a skewed Cartesian coordinate system to simulate flow in structural terrain with deformed bedding planes. Models modified with these equations will require input arrays of strike and dip, and a solver that can handle off-diagonal hydraulic conductivity terms.
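The dip correction amounts to rotating the principal conductivity tensor into the model's (horizontal, vertical) frame, which is exactly what produces the off-diagonal conductivity terms the abstract mentions. A 2-D sketch under the standard tensor-rotation rule K = R·diag(K∥, K⊥)·Rᵀ:

```python
import numpy as np

def conductivity_tensor(k_parallel, k_normal, dip_deg):
    # Hydraulic-conductivity tensor in (horizontal, vertical) coordinates
    # for bedding dipping at dip_deg from horizontal: K = R Kp R^T.
    t = np.radians(dip_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    Kp = np.diag([k_parallel, k_normal])
    return R @ Kp @ R.T

def darcy_flux(K, grad_h):
    # Darcy's law: q = -K * grad(h).
    return -K @ np.asarray(grad_h, float)
```

At 0° dip the tensor is diagonal; at 45° the off-diagonal term reaches its maximum of (K∥ - K⊥)/2, so a purely horizontal gradient then drives some vertical flow.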
Spacer grid assembly and locking mechanism
Snyder, Jr., Harold J.; Veca, Anthony R.; Donck, Harry A.
1982-01-01
A spacer grid assembly is disclosed for retaining a plurality of fuel rods in substantially parallel spaced relation, the spacer grids being formed with rhombic openings defining contact means for engaging from one to four fuel rods arranged in each opening, the spacer grids being of symmetric configuration with their rhombic openings being asymmetrically offset to permit inversion and relative rotation of the similar spacer grids for improved support of the fuel rods. An improved locking mechanism includes tie bars having chordal surfaces to facilitate their installation in slotted circular openings of the spacer grids, the tie rods being rotatable into locking engagement with the slotted openings.
Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu
1995-01-01
As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
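The image-composition step can be sketched with the "over" operator applied to premultiplied-alpha partial images ordered back to front. Real implementations overlap this with the local ray casting and use tree-structured or binary-swap communication; this serial sketch shows only the compositing math.

```python
import numpy as np

def composite_over(partials):
    # 'Over'-composite per-processor partial images, each a
    # (premultiplied_rgb, alpha) pair, ordered back to front:
    # every later (nearer) layer is painted over the accumulated result.
    h, w = partials[0][0].shape[:2]
    rgb = np.zeros((h, w, 3))
    alpha = np.zeros((h, w))
    for prgb, pa in partials:
        rgb = prgb + (1.0 - pa)[..., None] * rgb
        alpha = pa + (1.0 - pa) * alpha
    return rgb, alpha
```

Because "over" is associative, the partial images can be combined pairwise in any bracketing, which is what allows the log-depth parallel compositing trees used in practice.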
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three-dimensional numerical modeling has become a viable tool for understanding wave propagation in real media. Poroelastic media can describe the phenomena of hydrocarbon reservoirs better than acoustic and elastic media. However, numerical modeling in 3D poroelastic media demands significantly more computational capacity, in both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. Parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using the Message Passing Interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency and that 3D domain decomposition is the most efficient. We also analyze the numerical dispersion and the stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a realistic description of wave propagation.
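The staggered-grid idea is easiest to see in a toy 1-D acoustic analogue: pressure lives on integer grid points, velocity on half-points, and the two fields leapfrog in time. The paper's 3-D poroelastic scheme and its MPI domain decomposition follow the same pattern with more fields and halo exchanges.

```python
import numpy as np

def staggered_1d(nx=200, nt=300, c=1.0, dx=1.0):
    # Toy 1-D acoustic staggered-grid leapfrog at CFL number 0.5
    # (stable): pressure p on integer points, velocity v on half-points,
    # fixed (p = 0) boundaries.
    dt = 0.5 * dx / c
    p = np.zeros(nx)
    v = np.zeros(nx - 1)
    p[nx // 2] = 1.0                      # initial pressure pulse
    for _ in range(nt):
        v += (dt / dx) * (p[1:] - p[:-1])             # v_t = p_x
        p[1:-1] += (c**2) * (dt / dx) * (v[1:] - v[:-1])  # p_t = c^2 v_x
    return p
```

The half-point placement gives second-order spatial accuracy with the compact two-point difference, which is the main appeal of staggered grids for wave equations.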
Characterizing the nature and variability of avalanche hazard in western Canada
NASA Astrophysics Data System (ADS)
Shandro, Bret; Haegeli, Pascal
2018-04-01
The snow and avalanche climate types maritime, continental and transitional are well established and have been used extensively to characterize the general nature of avalanche hazard at a location, to study inter-seasonal and large-scale spatial variability, and to provide context for the design of avalanche safety operations. While researchers and practitioners have an experience-based understanding of the avalanche hazard associated with the three climate types, no studies have described the hazard character of an avalanche climate in detail. Since the 2009/2010 winter, the consistent use of the Statham et al. (2017) conceptual model of avalanche hazard in public avalanche bulletins in Canada has created a new quantitative record of avalanche hazard that offers novel opportunities for addressing this knowledge gap. We identified typical daily avalanche hazard situations using self-organizing maps (SOMs) and then calculated seasonal prevalence values of these situations. This approach produces a concise characterization that is conducive to statistical analyses, yet still provides a comprehensive picture that is informative for avalanche risk management due to its link to avalanche problem types. Hazard situation prevalence values for individual seasons, elevation bands and forecast regions provide unprecedented insight into the inter-seasonal and spatial variability of avalanche hazard in western Canada.
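The SOM step can be sketched with a minimal, generic self-organizing map (a textbook SOM, not the authors' configuration): each training step finds the best-matching unit (BMU) for a sampled record and pulls it and its grid neighbors toward the sample, with a shrinking learning rate and neighborhood.

```python
import numpy as np

def train_som(data, m, n, iters=2000, seed=0):
    # Minimal SOM on an m x n node grid. w holds one weight vector per
    # node; the Gaussian factor h spreads each update over grid neighbors.
    rng = np.random.default_rng(seed)
    w = rng.random((m * n, data.shape[1]))
    grid = np.array([(i, j) for i in range(m) for j in range(n)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        best = int(np.argmin(((w - x) ** 2).sum(axis=1)))
        frac = 1.0 - t / iters
        lr, sigma = 0.5 * frac, max(1.5 * frac, 0.3)
        h = np.exp(-((grid - grid[best]) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w

def bmu(w, x):
    # Index of the node whose weights best match x.
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))
```

After training, each daily hazard record maps to a BMU; counting how often each node is hit per season gives prevalence values of the kind described above.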
Robust and efficient overset grid assembly for partitioned unstructured meshes
NASA Astrophysics Data System (ADS)
Roget, Beatrice; Sitaraman, Jayanarayanan
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
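The elementary check behind donor search is point containment in a cell; for unstructured cells this reduces to tests on (sub-)triangles. A hedged sketch — the barycentric test is standard, but the linear scan is illustrative only, since production assemblers accelerate the search with spatial trees:

```python
def contains(tri, p, eps=1e-12):
    """Barycentric point-in-triangle test: the elementary check behind
    donor search on (sub-triangulated) unstructured cells."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    d = (y2 - y3)*(x1 - x3) + (x3 - x2)*(y1 - y3)
    a = ((y2 - y3)*(px - x3) + (x3 - x2)*(py - y3)) / d
    b = ((y3 - y1)*(px - x3) + (x1 - x3)*(py - y3)) / d
    c = 1.0 - a - b
    return min(a, b, c) >= -eps

def find_donor(cells, p):
    """Linear-scan donor search (real assemblers use trees and inverse maps)."""
    for i, cell in enumerate(cells):
        if contains(cell, p):
            return i
    return None  # candidate hole or orphan point

# Two triangles tiling the unit square.
cells = [[(0, 0), (1, 0), (0, 1)], [(1, 0), (1, 1), (0, 1)]]
print(find_donor(cells, (0.75, 0.75)))   # -> 1
```

Points with no donor anywhere in the overlapping system become hole points (or orphans needing special treatment), which is exactly the classification task described above.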
The Feasibility of Adaptive Unstructured Computations On Petaflops Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Heber, Gerd; Gao, Guang; Saini, Subhash (Technical Monitor)
1999-01-01
This viewgraph presentation covers the advantages of mesh adaptation, unstructured grids, and dynamic load balancing. It illustrates parallel adaptive communications, and explains PLUM (Parallel dynamic load balancing for adaptive unstructured meshes), and PSAW (Proper Self Avoiding Walks).
NASA Astrophysics Data System (ADS)
Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.
2013-10-01
The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results for the interaction of shock waves with a low-density bubble and for gas flow under gravitational forces are presented. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are solved numerically on structured or unstructured grids. Improving the accuracy of the calculations requires a sufficiently fine grid (with a small cell size), which leads to a substantial increase in computation time. Therefore, for complex problems it is reasonable to use the method of Adaptive Mesh Refinement (AMR): the grid is refined only in the areas of interest, where, e.g., shock waves are generated, or where complex geometry or other such features exist. The computing time is thus greatly reduced. In addition, execution on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method, which can significantly improve the resolution of the difference grid in areas of high interest while accelerating the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models are realized for calculations on graphics processors using CUDA technology [1].
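The core AMR idea, refining only where the solution demands it, can be sketched in a few lines. A toy 1D version in which cells are flagged where the neighbour-to-neighbour jump is large (e.g. near a shock); the jump criterion and names are illustrative, not those of the paper:

```python
def flag_for_refinement(u, threshold):
    """Mark intervals whose solution jump exceeds a threshold (e.g. near
    shocks). A toy stand-in for an AMR refinement criterion."""
    return [abs(u[i+1] - u[i]) > threshold for i in range(len(u) - 1)]

def refine(x, flags):
    """Insert a midpoint in every flagged interval of grid x."""
    out = []
    for i, f in enumerate(flags):
        out.append(x[i])
        if f:
            out.append(0.5*(x[i] + x[i+1]))
    out.append(x[-1])
    return out

u = [1.0, 1.0, 1.0, 0.1, 0.1]     # discontinuity between cells 2 and 3
x = [0.0, 1.0, 2.0, 3.0, 4.0]
flags = flag_for_refinement(u, 0.5)
print(refine(x, flags))           # -> [0.0, 1.0, 2.0, 2.5, 3.0, 4.0]
```

Applying the same test recursively on the refined grid yields the nested, successively finer grid hierarchy the abstract describes.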
GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid
NASA Astrophysics Data System (ADS)
Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua
2016-10-01
A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. Results obtained on GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes achieve a 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that running the cell-based AMR method in parallel on GPU is feasible and efficient. Our results also indicate that newer GPU architectures significantly benefit fluid dynamics computing.
Layer detection and snowpack stratigraphy characterisation from digital penetrometer signals
NASA Astrophysics Data System (ADS)
Floyer, James Antony
Forecasting for slab avalanches benefits from precise measurements of snow stratigraphy. Snow penetrometers offer the possibility of providing detailed information about snowpack structure; however, their use has yet to be adopted by avalanche forecasting operations in Canada. A manually driven, variable-rate force-resistance penetrometer is tested for its ability to measure snowpack information suitable for avalanche forecasting and for spatial variability studies of snowpack properties. Following modifications, weak layers as thin as 5 mm are reliably detected from the penetrometer signals. Rate effects are investigated and found to be insignificant for push velocities between 0.5 and 100 cm s-1 in dry snow. An analysis of snow deformation below the penetrometer tip is presented using particle image velocimetry, and two zones associated with particle deflection are identified. The compacted zone is a region of densified snow that is pushed ahead of the penetrometer tip; the deformation zone is a broader zone surrounding the compacted zone, where deformation is in compression and in shear. Initial formation of the compacted zone is responsible for pronounced force spikes in the penetrometer signal. A layer tracing algorithm for tracing weak layers, crusts and interfaces across transects or grids of penetrometer profiles is presented. This algorithm uses Wiener spiking deconvolution to trace a portion of the signal, manually identified as a layer in one profile, across to an adjacent profile. Layer tracing is found to be most effective for crusts and prominent weak layers, although weak layers close to crusts were not well traced. A framework for extending this method to detect weak layers with no prior knowledge of their existence is also presented. A study relating penetrometer signals to the fracture character of layers identified in compression tests is presented. 
A multivariate model is presented that distinguishes between sudden and other fracture characters 80% of the time. Transects of penetrometer profiles are presented over several alpine terrain features commonly associated with spatial variability of snowpack properties. Physical processes relating to the variability of certain snowpack properties revealed in the transects are discussed. The importance of characteristic signatures for training avalanche practitioners to recognise potentially unstable terrain is also discussed.
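The layer-tracing step can be approximated with plain cross-correlation: slide a manually picked layer signature along an adjacent profile and keep the best-matching depth offset. The thesis uses Wiener spiking deconvolution, which additionally whitens the signal before matching; the sketch below, with made-up force values, shows only the matching idea:

```python
def best_lag(signature, profile):
    """Depth offset at which a layer signature best matches an adjacent
    penetrometer profile (plain cross-correlation; a simplification of the
    Wiener spiking deconvolution used in the thesis)."""
    n, m = len(signature), len(profile)
    scores = [sum(signature[j] * profile[i + j] for j in range(n))
              for i in range(m - n + 1)]
    return scores.index(max(scores))

layer = [0.2, 1.0, 0.2]                      # force signature of a weak layer
neighbour = [0.1, 0.1, 0.2, 1.1, 0.3, 0.1]   # same layer, one sample deeper
print(best_lag(layer, neighbour))            # -> 2
```

Chaining this profile-to-profile matching across a transect is what lets a layer picked once be followed over an entire grid of measurements.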
Peitzsch, Erich H.; Fagre, Daniel B.; Dundas, Mark
2010-01-01
Snow avalanche paths are key geomorphologic features in Glacier National Park, Montana, and an important component of mountain ecosystems: they are isolated within a larger ecosystem, they are continuously disturbed, and they contain unique physical characteristics (Malanson and Butler, 1984). Avalanches impact subalpine forest structure and function, as well as overall biodiversity (Bebi et al., 2009). Because avalanches are dynamic phenomena, avalanche path geometry and spatial extent depend upon climatic regimes. The USGS/GNP Avalanche Program formally began in 2003 as an avalanche forecasting program for the spring opening of the ever-popular Going-to-the-Sun Road (GTSR), which crosses through 37 identified avalanche paths. Avalanche safety and forecasting is a necessary part of the GTSR spring opening procedures. An avalanche atlas detailing topographic parameters and oblique photographs was completed for the GTSR corridor in response to a request from GNP personnel for planning and resource management. Using ArcMap 9.2 GIS software, polygons were created for every avalanche path affecting the GTSR using aerial imagery, field-based observations, and GPS measurements of sub-meter accuracy. Spatial attributes for each path were derived within the GIS. Resulting products include an avalanche atlas book for operational use, a geoPDF of the atlas, and a Google Earth flyover illustrating each path and associated photographs. The avalanche atlas aids park management in worker safety, infrastructure planning, and natural resource protection by identifying avalanche path patterns and location. The atlas was created for operational and planning purposes and is also used as a foundation for research such as avalanche ecology projects and avalanche path runout modeling.
Slope failures in Northern Vermont, USA
Lee, F.T.; Odum, J.K.; Lee, J.D.
1997-01-01
Rockfalls and debris avalanches from steep hillslopes in northern Vermont are a continuing hazard for motorists, mountain climbers, and hikers. Huge blocks of massive schist and gneiss can reach the valley floor intact, whereas others may trigger debris avalanches on their downward travel. Block movement is facilitated by major joints both parallel and perpendicular to the glacially over-steepened valley walls. The slope failures occur most frequently in early spring, accompanying freeze/thaw cycles, and in the summer, following heavy rains. The study reported here began in August 1986 and ended in June 1989. Manual and automated measurements of temperature and displacement were made at two locations on opposing valley walls. Both cyclic-reversible and permanent displacements occurred during the 13-month monitoring period. The measurements indicate that freeze/thaw mechanisms produce small irreversible incremental movements, averaging 0.53 mm/yr, that displace massive blocks and produce rockfalls. The initial freeze/thaw weakening of the rock mass also makes slopes more susceptible to attrition by water, and heavy rains have triggered rockfalls and consequent debris flows and avalanches. Temperature changes on the rock surface produced time-dependent cyclic displacements of the rock blocks that were not instantaneous but lagged behind the temperature changes. Statistical analyses of the data were used to produce models of cyclic time-dependent rock block behavior. Predictions based solely on temperature changes gave poor results. A model using time and temperature and incorporating the lag effect predicts block displacement more accurately.
A perspective on unstructured grid flow solvers
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.
1995-01-01
This survey paper assesses the status of compressible Euler and Navier-Stokes solvers on unstructured grids. Different spatial and temporal discretization options for steady and unsteady flows are discussed. The integration of these components into an overall framework to solve practical problems is addressed. Issues such as grid adaptation, higher order methods, hybrid discretizations and parallel computing are briefly discussed. Finally, some outstanding issues and future research directions are presented.
Grid Computing Environment using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Alanis, Fransisco; Mahmood, Akhtar
2003-10-01
Custom-made Beowulf clusters using PCs are currently replacing expensive supercomputers to carry out complex scientific computations. At the University of Texas - Pan American, we built an 8 Gflops Beowulf Cluster for doing HEP research using RedHat Linux 7.3 and the LAM-MPI middleware. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes that were compiled in C on the cluster using the LAM-XMPI graphical user environment. We will demonstrate a "simple" prototype grid environment, where we will submit and run parallel jobs remotely across multiple cluster nodes over the internet from the presentation room at Texas Tech University. The Sphinx Beowulf Cluster will be used for Monte Carlo grid test-bed studies for the LHC-ATLAS high energy physics experiment. Grid is a new IT concept for the next generation of the "Super Internet" for high-performance computing. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in High Energy Physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
NAS Grid Benchmarks: A Tool for Grid Space Exploration
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
Medical applications for high-performance computers in SKIF-GRID network.
Zhuchkov, Alexey; Tverdokhlebov, Nikolay
2009-01-01
The paper presents a set of software services for massive mammography image processing by using high-performance parallel computers of SKIF-family which are linked into a service-oriented grid-network. An experience of a prototype system implementation in two medical institutions is also described.
Target intersection probabilities for parallel-line and continuous-grid types of search
McCammon, R.B.
1977-01-01
The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. 
The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
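Generalization (1) above — intersection probability roughly proportional to the target's greatest dimension over the line spacing — can be checked numerically for the simplest case. The sketch below is the classic Buffon's-needle setup for a linear target on a parallel-line pattern, not McCammon's charts; for a target of length L < spacing D with uniform random orientation, the analytic answer is 2L/(πD):

```python
import math
import random

def intersect_prob(length, spacing, trials=200_000, seed=1):
    """Monte Carlo estimate of the chance that a randomly placed and oriented
    linear target crosses a parallel-line search pattern (Buffon's needle)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0, spacing)               # target centre within one strip
        theta = rng.uniform(0, math.pi)           # target orientation
        half = 0.5 * length * abs(math.sin(theta))  # half-extent across the lines
        if y < half or y > spacing - half:
            hits += 1
    return hits / trials

L, D = 1.0, 2.0
print(intersect_prob(L, D), 2*L / (math.pi*D))    # estimate vs analytic 2L/(pi*D)
```

The estimate scales linearly in L/D, consistent with generalization (1), and halving the spacing doubles the probability as long as L remains below D.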
Application of LANDSAT data to delimitation of avalanche hazards in Montane Colorado
NASA Technical Reports Server (NTRS)
Knepper, D. H., Jr. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Many avalanche hazard zones can be identified on LANDSAT imagery, but not consistently over a large region. Therefore, regional avalanche hazard mapping, using LANDSAT imagery, must draw on additional sources of information. A method was devised that depicts three levels of avalanche hazards according to three corresponding levels of certainty that active avalanches occur. The lowest level, potential avalanche hazards, was defined by delineating slopes steep enough to support avalanches at elevations where snowfall was likely to be sufficient to produce a thick snowpack. The intermediate level of avalanche hazard was interpreted as avalanche hazard zones. These zones have direct and indirect indicators of active avalanche activity and were interpreted from LANDSAT imagery. The highest level of known or active avalanche hazards was compiled from existing maps. Some landslides in Colorado were identified and, to a degree, delimited on LANDSAT imagery, but the conditions of their identification were highly variable. Because of local topographic, geologic, structural, and vegetational variations, there was no unique landslide spectral appearance.
NASA Astrophysics Data System (ADS)
Esposito, C.; Bianchi-Fasani, G.; Martino, S.; Scarascia-Mugnozza, G.
2013-10-01
This paper focuses on a study aimed at defining the role of geological-structural setting and Quaternary morpho-structural evolution on the onset and development of a deep-seated gravitational slope deformation which affects the western slope of Mt. Genzana ridge (Central Apennines, Italy). This case history is particularly significant as it comprises several aspects of such gravitational processes both in general terms and with particular reference to the Apennines. In fact: i) the morpho-structural setting is representative of widespread conditions in Central Apennines; ii) the deforming slope partially evolved in a large rockslide-avalanche; iii) the deformational process provides evidence of an ongoing state of activity; iv) the rockslide-avalanche debris formed a stable natural dam, thus implying significant variations in the morphologic, hydraulic and hydrogeological setting; v) the gravitational deformation as well as the rockslide-avalanche reveal a strong structural control. The main study activities were addressed to define a detailed geological model of the gravity-driven process, by means of geological, structural, geomorphological and geomechanical surveys. As a result, a robust hypothesis about the kinematics of the process was possible, with particular reference to the identification of geological-structural constraints. The process, in fact, involves a specific section of the slope exactly where a dextral transtensional structure is present, thus implying local structural conditions that favor sliding processes: the rock mass is intensively jointed by high angle discontinuity sets and the bedding attitude is quite parallel to the slope angle. Within this frame the gravitational process can be classified as a structurally constrained translational slide, locally evolved into a rockslide-avalanche. 
The activation of such a deformation can in turn be related to the Quaternary morphological evolution of the area, which was affected by a significant topographic stress increase, as attested by stratigraphic and morphologic evidence.
High Performance Fortran for Aerospace Applications
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.
Meteorological variables associated with deep slab avalanches on persistent weak layers
Marienthal, Alex; Hendrikx, Jordy; Birkeland, Karl; Irvine, Kathryn M.
2014-01-01
Deep slab avalanches are a particularly challenging avalanche forecasting problem. These avalanches are typically difficult to trigger, yet when they are triggered they tend to propagate far and result in large and destructive avalanches. For this work we define deep slab avalanches as those that fail on persistent weak layers deeper than 0.9m (3 feet), and that occur after February 1st. We utilized a 44-year record of avalanche control and meteorological data from Bridger Bowl Ski Area to test the usefulness of meteorological variables for predicting deep slab avalanches. As in previous studies, we used data from the days preceding deep slab cycles, but we also considered meteorological metrics over the early months of the season. We utilized classification trees for our analyses. Our results showed warmer temperatures in the prior twenty-four hours and more loading over the seven days before days with deep slab avalanches on persistent weak layers. In line with previous research, extended periods of above freezing temperatures led to days with deep wet slab avalanches on persistent weak layers. Seasons with either dry or wet avalanches on deep persistent weak layers typically had drier early months, and often had some significant snow depth prior to those dry months. This paper provides insights for ski patrollers, guides, and avalanche forecasters who struggle to forecast deep slab avalanches on persistent weak layers late in the season.
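A classification tree grows by repeatedly choosing the best single-variable threshold split; one such split is a decision stump. A minimal sketch on made-up meteorological data (the feature names, values, and the perfect separation are illustrative only, not the study's 44-year record):

```python
def best_stump(samples, labels):
    """Exhaustively fit a one-split decision stump (threshold on one variable),
    the building block of classification trees. Data below are invented."""
    n_features = len(samples[0])
    best = None
    for f in range(n_features):
        for s in samples:
            t = s[f]
            pred = [x[f] > t for x in samples]
            acc = sum(p == y for p, y in zip(pred, labels)) / len(labels)
            if best is None or acc > best[2]:
                best = (f, t, acc)
    return best  # (feature index, threshold, training accuracy)

# columns: 24 h max temperature (C), 7-day loading (mm SWE) -- hypothetical
samples = [(-8, 10), (-2, 35), (1, 40), (-10, 5), (0, 30), (-6, 12)]
labels  = [False, True, True, False, True, False]  # deep slab avalanche day?
f, t, acc = best_stump(samples, labels)
print(f, t, acc)   # -> 0 -6 1.0
```

A full tree would recurse on each side of the chosen split; here a single temperature threshold already separates the toy data, echoing the finding that warmer prior-24-hour temperatures discriminate deep slab days.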
Multiscale Simulations of Magnetic Island Coalescence
NASA Technical Reports Server (NTRS)
Dorelli, John C.
2010-01-01
We describe a new interactive parallel Adaptive Mesh Refinement (AMR) framework written in the Python programming language. This new framework, PyAMR, hides the details of parallel AMR data structures and algorithms (e.g., domain decomposition, grid partition, and inter-process communication), allowing the user to focus on the development of algorithms for advancing the solution of a systems of partial differential equations on a single uniform mesh. We demonstrate the use of PyAMR by simulating the pairwise coalescence of magnetic islands using the resistive Hall MHD equations. Techniques for coupling different physics models on different levels of the AMR grid hierarchy are discussed.
Full 3D Analysis of the GE90 Turbofan Primary Flowpath
NASA Technical Reports Server (NTRS)
Turner, Mark G.
2000-01-01
The multistage simulations of the GE90 turbofan primary flowpath components have been performed. The multistage CFD code, APNASA, has been used to analyze the fan, fan OGV and booster, the 10-stage high-pressure compressor and the entire turbine system of the GE90 turbofan engine. The code has two levels of parallelism and, for the 18-blade-row full turbine simulation, achieves 87.3 percent parallel efficiency with 121 processors on an SGI Origin. Grid generation is accomplished with the multistage Average Passage Grid Generator, APG. Results for each component are shown which compare favorably with test data.
Automated identification of potential snow avalanche release areas based on digital elevation models
NASA Astrophysics Data System (ADS)
Bühler, Y.; Kumar, S.; Veitinger, J.; Christen, M.; Stoffel, A.; Snehmani
2013-05-01
The identification of snow avalanche release areas is a very difficult task. The release mechanism of snow avalanches depends on many different terrain, meteorological, snowpack and triggering parameters and their interactions, which are very difficult to assess. In many alpine regions such as the Indian Himalaya, nearly no information on avalanche release areas exists, mainly due to the very rough and poorly accessible terrain, the vast size of the region and the lack of avalanche records. However, avalanche release information is urgently required for numerical simulation of avalanche events to plan mitigation measures, for hazard mapping and to secure important roads. The Rohtang tunnel access road near Manali, Himachal Pradesh, India, is such an example. By far the most reliable way to identify avalanche release areas is to use historic avalanche records and field investigations carried out by avalanche experts in the formation zones. Yet neither method is feasible for this area, owing to the rough terrain, its vast extent and lack of time. Therefore, we develop an operational, easy-to-use automated potential release area (PRA) detection tool in Python/ArcGIS which uses high spatial resolution digital elevation models (DEMs) and forest cover information derived from airborne remote sensing instruments as input. Such instruments can acquire spatially continuous data even over inaccessible terrain and cover large areas. We validate our tool using a database of historic avalanches acquired over 56 yr in the neighborhood of Davos, Switzerland, and apply the method to the avalanche tracks along the Rohtang tunnel access road. This tool, used by avalanche experts, delivers valuable input to identify focus areas for more detailed investigations of avalanche release areas in remote regions such as the Indian Himalaya and is a precondition for large-scale avalanche hazard mapping.
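A DEM-based PRA tool starts from a slope criterion. A toy sketch that computes slope by central differences and flags cells in a typical release-area band; the 30-60 degree window is a common rule of thumb, not necessarily the published tool's exact rule, and the tiny DEM is invented:

```python
import math

def slope_deg(dem, i, j, cell=25.0):
    """Steepest-slope angle (degrees) at an interior DEM cell using central
    differences; `cell` is the grid spacing in metres."""
    dzdx = (dem[i][j+1] - dem[i][j-1]) / (2*cell)
    dzdy = (dem[i+1][j] - dem[i-1][j]) / (2*cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

def potential_release(dem, lo=30.0, hi=60.0, cell=25.0):
    """Flag interior cells whose slope lies in the typical release-area band
    (30-60 degrees here as a rule of thumb; real tools add snowpack, curvature
    and forest-cover criteria)."""
    rows, cols = len(dem), len(dem[0])
    return {(i, j)
            for i in range(1, rows - 1) for j in range(1, cols - 1)
            if lo <= slope_deg(dem, i, j, cell) <= hi}

dem = [[100, 100, 100],
       [100, 120, 140],
       [100, 140, 180]]            # elevations in metres, steepening to lower-right
print(potential_release(dem))      # -> {(1, 1)}
```

Masking the flagged cells with forest cover and grouping them into contiguous polygons would give release-area candidates of the kind the tool delivers.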
High energy collimating fine grids for HESP program
NASA Technical Reports Server (NTRS)
Eberhard, Carol D.; Frazier, Edward
1993-01-01
There is a need to develop fine pitch x-ray collimator grids as an enabling technology for planned future missions. The grids consist of an array of thin parallel strips of x-ray absorbing material, such as tungsten, with pitches ranging from 34 microns to 2.036 millimeters. The grids are the key components of a new class of spaceborne instruments known as 'x-ray modulation collimators.' These instruments are the first to produce images of celestial sources in the hard x-ray and gamma-ray spectral regions.
NASA Technical Reports Server (NTRS)
Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick;
2001-01-01
A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.
Wide-range radioactive-gas-concentration detector
Anderson, D.F.
1981-11-16
A wide-range radioactive-gas-concentration detector and monitor capable of measuring radioactive-gas concentrations over a range of eight orders of magnitude is described. The device is designed to have an ionization chamber sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel-plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel-plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization-chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica
2017-06-29
The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. 
The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store operations. For this reason, the island model is more suitable for PGAs than the global and grid models, and it is also preferable in terms of cost when executed on a commercial cloud provider.
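The island model the study found most effective can be illustrated with a minimal single-machine sketch: sub-populations evolve independently, and each island periodically migrates its best individual to the next island in a ring. This is plain Python on a toy OneMax problem, not the paper's Hadoop MapReduce setup; the population sizes, rates, and fitness function are all invented for illustration.

```python
import random

def evolve(pop, fitness, n_gen, mut_rate=0.05):
    """Evolve one island for n_gen generations: binary tournament selection
    followed by per-bit mutation."""
    for _ in range(n_gen):
        new_pop = []
        for _ in range(len(pop)):
            a, b = random.sample(pop, 2)
            parent = max(a, b, key=fitness)
            new_pop.append([bit ^ (random.random() < mut_rate) for bit in parent])
        pop = new_pop
    return pop

def island_ga(n_islands=4, pop_size=20, n_bits=32, epochs=10, gens_per_epoch=5, seed=0):
    """Island-model GA on OneMax: islands evolve independently, then each
    island's best individual migrates to the next island in a ring,
    replacing that island's worst individual."""
    random.seed(seed)
    fitness = sum  # OneMax: number of 1-bits
    islands = [[[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
               for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve(pop, fitness, gens_per_epoch) for pop in islands]
        best = [max(pop, key=fitness) for pop in islands]
        for i in range(n_islands):  # ring migration
            neighbour = islands[(i + 1) % n_islands]
            neighbour.remove(min(neighbour, key=fitness))
            neighbour.append(list(best[i]))
    return max((max(pop, key=fitness) for pop in islands), key=fitness)

best = island_ga()
print(sum(best), "of 32 bits set")
```

In a MapReduce realization, each `evolve` call would run as an independent task and the migration step would be the point of inter-task communication, which is why infrequent migration keeps the overhead low.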
Geiger mode avalanche photodiodes for microarray systems
NASA Astrophysics Data System (ADS)
Phelan, Don; Jackson, Carl; Redfern, R. Michael; Morrison, Alan P.; Mathewson, Alan
2002-06-01
New Geiger Mode Avalanche Photodiodes (GM-APD) have been designed and characterized specifically for use in microarray systems. Critical parameters such as excess reverse bias voltage, hold-off time and optimum operating temperature have been experimentally determined for these photon-counting devices. The photon detection probability, dark count rate and afterpulsing probability have been measured under different operating conditions. An active-quench circuit (AQC) is presented for operating these GM-APDs. This circuit is relatively simple, robust and has such benefits as reducing average power dissipation and afterpulsing. Arrays of these GM-APDs have already been designed and together with AQCs open up the possibility of having a solid-state microarray detector that enables parallel analysis on a single chip. Another advantage of these GM-APDs over current technology is their low-voltage CMOS compatibility, which could allow for the fabrication of an AQC on the same device. Small-area detectors have already been employed in the time-resolved detection of fluorescence from labeled proteins. It is envisaged that operating these new GM-APDs with this active-quench circuit will have numerous applications for the detection of fluorescence in microarray systems.
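The hold-off time of an actively quenched GM-APD limits the achievable count rate much like a non-paralyzable dead time. A minimal sketch of the standard correction (the formula is textbook detector physics, not taken from this paper, and the example rates are invented):

```python
def true_count_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: each detected photon blinds the
    detector for `dead_time` seconds, so the true incident rate exceeds the
    measured one."""
    return measured_rate / (1.0 - measured_rate * dead_time)

# A 100 ns hold-off at a measured 1 Mcps implies a true rate ~11% higher.
rate = true_count_rate(1.0e6, 100e-9)
print(rate)
```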
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
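The block-structured flavour of AMR lends itself to a simple sketch: flag cells by a local gradient criterion, cluster the flags into blocks, and hand the blocks to ranks as independent units of work. The criterion, the block padding, and the round-robin assignment below are illustrative simplifications, not Quirk's algorithm.

```python
import numpy as np

def flag_cells(u, tol):
    """Flag cells whose undivided gradient exceeds tol (a simple refinement criterion)."""
    return np.abs(np.diff(u, append=u[-1])) > tol

def cluster_blocks(flags, min_width=4):
    """Group flagged cells into contiguous blocks, padded to min_width cells,
    mimicking the block-structured hierarchical grids used for shock problems."""
    blocks, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            blocks.append((start, max(i, start + min_width)))
            start = None
    if start is not None:
        blocks.append((start, max(len(flags), start + min_width)))
    return blocks

def assign_round_robin(blocks, n_ranks):
    """Hand refinement blocks to MIMD ranks as independent units of work."""
    return {r: blocks[r::n_ranks] for r in range(n_ranks)}

# A step profile needs refinement only near the discontinuity.
x = np.linspace(0.0, 1.0, 64)
u = np.where(x < 0.5, 1.0, 0.0)
blocks = cluster_blocks(flag_cells(u, 0.5))
print(blocks, assign_round_robin(blocks, 2))
```

Because each block is self-contained, the message-passing layer only needs a few primitives to exchange block boundary data, which is what allows the serial AMR code to remain virtually intact.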
Parallel Tensor Compression for Large-Scale Scientific Data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan
As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. To operate on such massive data, we present the first-ever distributed-memory parallel implementation of the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
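The key computation, a Tucker decomposition, can be sketched in a few lines of NumPy via the truncated higher-order SVD. This is a serial toy, not the paper's distributed-memory implementation; the synthetic tensor and the ranks are assumptions chosen so the compression is exact.

```python
import numpy as np

def unfold(T, mode):
    """Matricize T: put `mode` first, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along `mode`."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: the leading left singular vectors of each
    unfolding give the factor matrices; projecting T onto them gives the core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_multiply(core, U.T, m)
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_multiply(T, U, m)
    return T

# A 20x20x20 tensor with exact multilinear rank (2, 2, 2).
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((20, 2)) for _ in range(3)]
T = reconstruct(G, Us)

core, factors = hosvd(T, (2, 2, 2))
err = np.linalg.norm(reconstruct(core, factors) - T) / np.linalg.norm(T)
ratio = T.size / (core.size + sum(U.size for U in factors))
print(err, ratio)
```

The distributed version replaces each SVD and mode multiplication with a parallel linear-algebra kernel over a fixed data layout; the serial structure above is unchanged.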
Progress in Unsteady Turbopump Flow Simulations
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Chan, William; Kwak, Dochan; Williams, Robert
2002-01-01
This viewgraph presentation discusses unsteady flow simulations for a turbopump intended for a reusable launch vehicle (RLV). The simulation process makes use of computational grids and parallel processing. The architecture of the parallel computers used is discussed, as is the scripting of turbopump simulations.
Avalanche Accidents Causing Fatalities: Are They Any Different in the Summer?
Pasquier, Mathieu; Hugli, Olivier; Kottmann, Alexandre; Techel, Frank
2017-03-01
Pasquier, Mathieu, Olivier Hugli, Alexandre Kottmann, and Frank Techel. Avalanche accidents causing fatalities: are they any different in the summer? High Alt Med Biol. 18:67-72, 2017. This retrospective study investigated the epidemiology of summer avalanche accidents that occurred in Switzerland and caused at least one fatality between 1984 and 2014. Summer avalanche accidents were defined as those that occurred between June 1st and October 31st. Summer avalanches caused 21 (4%) of the 482 avalanches with at least one fatality occurring during the study period, and 40 (6%) of the 655 fatalities. The number of completely buried victims per avalanche and the proportion of complete burials among trapped people were lower in summer than in winter. Nevertheless, the mean number of fatalities per avalanche was higher in summer than in winter: 1.9 ± 1.2 (standard deviation; range 1-6) versus 1.3 ± 0.9 (range 1-7; p < 0.001). Trauma was the presumed cause of death in 94% (33 of 35) of summer avalanche accident fatalities. Sixty-five percent of completely buried victims were found thanks to visual clues at the snow surface. Fatal summer avalanche accidents caused a higher mean number of fatalities per avalanche than winter avalanches, and those deaths resulted mostly from trauma. Rescue teams should anticipate managing polytrauma rather than hypothermia or asphyxia in victims of summer avalanche accidents; they should be trained in prehospital trauma life support and equipped accordingly to ensure efficient patient care.
Analysis of turbine-grid interaction of grid-connected wind turbine using HHT
NASA Astrophysics Data System (ADS)
Chen, A.; Wu, W.; Miao, J.; Xie, D.
2018-05-01
This paper processes the output power of a grid-connected wind turbine with a denoising and extraction method based on the Hilbert-Huang transform (HHT) to study the turbine-grid interaction. First, the Empirical Mode Decomposition (EMD) and the Hilbert Transform (HT) are introduced in detail. Then, after decomposing the output power of the grid-connected wind turbine into a series of Intrinsic Mode Functions (IMFs), the energy ratio and power volatility are calculated to detect unessential components. Combined with the vibration function of the turbine-grid interaction, data fitting of the instantaneous amplitude and phase of each IMF is implemented to extract the characteristic parameters of the different interactions. Finally, using measured data from parallel-operated wind turbines in China, this work accurately obtains the characteristic parameters of the turbine-grid interaction of a grid-connected wind turbine.
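The HT half of the HHT pipeline can be sketched with SciPy: given one IMF, the analytic signal yields the instantaneous amplitude and phase whose fits characterize each interaction. The 1.5 Hz mode and its amplitude below are invented, and the EMD step is omitted.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic component standing in for one IMF of the turbine output power:
# a 0.8 (arbitrary units) oscillation at 1.5 Hz, sampled at 100 Hz for 10 s.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
imf = 0.8 * np.cos(2 * np.pi * 1.5 * t)

analytic = hilbert(imf)                        # analytic signal via FFT
amplitude = np.abs(analytic)                   # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))          # instantaneous phase
inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency, Hz

mid = len(t) // 2
print(amplitude[mid], inst_freq[mid])
```

Away from the window edges, the recovered amplitude and frequency match the component's true values, which is what makes the fitted parameters physically interpretable.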
Distributed data mining on grids: services, tools, and applications.
Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo
2004-12-01
Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.
On the Internal Structure of Mobile Barchan Sand Dunes due to Granular Processes
NASA Astrophysics Data System (ADS)
Vriend, N. M.; Arran, M.; Louge, M. Y.; Hay, A. G.; Valance, A.
2017-12-01
In this work, we visualize the internal structure of mobile barchan desert dunes at the avalanche scale. We reveal an intriguing history of dune building using a novel combination of local sand sampling and advanced geophysical techniques resulting in high resolution measurements of individual avalanche events. Due to progressive rebuilding, granular avalanching, erosional and depositional processes, these marching barchan dunes are reworked every few years and a characteristic zebra-pattern (figure 1a), orientated parallel to the slipface at the angle of repose, appears at regular intervals. We present scientific data on the structure obtained from several mobile barchan dunes of different sizes during recent desert field campaigns (2014, 2015, 2017) in a mobile barchan dune field in Qatar (25.01°N, 51.34°E in the AlWakrah municipality). The site has been equipped with a weather station and has been regularly visited by a multidisciplinary research team in recent years (e.g. [1]). By applying high-frequency (1200 MHz) ground penetrating radar (GPR) transects across the midline (figure 1b) we map the continuous evolution of this cross-bedding at high resolution deep within the dune. The GPR reveals a slope reduction of the slipface near the base of the dune; evidence of irregular wind reversals; and the presence of a harder aeolian cap around the crest and extending to the brink. The data is supplemented with granulometry from layers stabilized by dyed water injection and uncovered by excavating vertical walls perpendicular to old buried avalanches. We attribute visible differences in water penetration between adjacent layers to fine particle segregation processes in granular avalanches. This work was made possible by the support of NPRP grant 6-059-2-023 from the Qatar National Research Fund to MYL and AGH, and a Royal Society Dorothy Hodgkin Research Fellowship to NMV. We thank Jean-Luc Métayer for performing detailed particle size distribution measurements. 
References: [1] Louge, M. Y., A. Valance, A. Ould el-Moctar, J. Xu, A. G. Hay, and R. Richer, Temperature and humidity within a mobile barchan sand dune, implications for microbial survival, J. Geophys. Res. 118, doi:10.1002/2013JF002839 (2013).
Coe, Jeffrey A.; Baum, Rex L.; Allstadt, Kate E.; Kochevar, Bernard; Schmitt, Robert G.; Morgan, Matthew L.; White, Jonathan L.; Stratton, Benjamin T.; Hayashi, Timothy A.; Kean, Jason W.
2016-01-01
On 25 May 2014, a rain-on-snow–induced rock avalanche occurred in the West Salt Creek valley on the northern flank of Grand Mesa in western Colorado (United States). The avalanche mobilized from a preexisting rock slide in the Green River Formation and traveled 4.6 km down the confined valley, killing three people. The avalanche was rare for the contiguous United States because of its large size (54.5 Mm3) and high mobility (height/length = 0.14). To understand the avalanche failure sequence, mechanisms, and mobility, we conducted a forensic analysis using large-scale (1:1000) structural mapping and seismic data. We used high-resolution, unmanned aircraft system imagery as a base for field mapping, and analyzed seismic data from 22 broadband stations (distances < 656 km from the rock-slide source area) and one short-period network. We inverted broadband data to derive a time series of forces that the avalanche exerted on the earth and tracked these forces using curves in the avalanche path. Our results revealed that the rock avalanche was a cascade of landslide events, rather than a single massive failure. The sequence began with an early morning landslide/debris flow that started ∼10 h before the main avalanche. The main avalanche lasted ∼3.5 min and traveled at average velocities ranging from 15 to 36 m/s. For at least two hours after the avalanche ceased movement, a central, hummock-rich core continued to move slowly. Since 25 May 2014, numerous shallow landslides, rock slides, and rock falls have created new structures and modified avalanche topography. Mobility of the main avalanche and central core was likely enhanced by valley floor material that liquefied from undrained loading by the overriding avalanche. 
Although the base was likely at least partially liquefied, our mapping indicates that the overriding avalanche internally deformed predominantly by sliding along discrete shear surfaces in material that was nearly dry and had substantial frictional strength. These results indicate that the West Salt Creek avalanche, and probably other long-traveled avalanches, could be modeled as two layers: a thin, liquefied basal layer, and a thicker and stronger overriding layer.
Statistical analysis and trends of wet snow avalanches in the French Alps over the period 1959-2010
NASA Astrophysics Data System (ADS)
Naaim, Mohamed
2017-04-01
When an avalanche contains a significant proportion of wet snow, its characteristics and behavior change significantly: the flow becomes heterogeneous and polydisperse. Even on steep slopes, wet snow avalanches are slow, yet they can flow over gentle slopes and reach the same extensions as dry avalanches. To highlight the link between climate warming and the proliferation of wet snow avalanches, we crossed two well-documented avalanche databases: the permanent avalanche chronicle (EPA) and meteorological re-analyses. For each avalanche referenced in the EPA, a moisture index I is built, representing the ratio of the thickness of the wet snow layer to the total snow thickness on the concerned massif at 2400 m a.s.l. on the date of the avalanche. The daily and annual proportions of avalanches exceeding a given threshold of I are calculated for each massif of the French Alps. The statistical distribution of wet avalanches per massif is calculated over the period 1959-2009. The statistical quantities are also calculated over two successive periods of equal duration, 1959-1984 and 1984-2009, and the annual evolution of the proportion of wet avalanches is studied using time-series tools to detect potential change points or trends. This study showed that about 77% of avalanches in the French Alps mobilize dry snow. The probability of having an avalanche with a moisture index greater than 10% in a given year is 0.2; this value varies from one massif to another. The analysis of the two successive periods showed a significant growth of wet avalanches on 20 massifs and a decrease on 3 massifs. The study of the time series confirmed these trends, which are on the order of the inter-annual variability.
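The two core computations, the annual proportion of wet events above a moisture threshold and a trend test on that proportion, can be sketched as follows. The 10% threshold matches the abstract, but the annual series is invented and the Mann-Kendall S statistic is a generic trend tool, not necessarily the author's method.

```python
import numpy as np

def wet_proportion(I, threshold=0.10):
    """Proportion of events whose moisture index I = h_wet / h_total exceeds
    the threshold (the study uses a 10% threshold)."""
    return float(np.mean(np.asarray(I) > threshold))

def mann_kendall_s(x):
    """Mann-Kendall S statistic: S > 0 suggests an increasing trend."""
    x = np.asarray(x, dtype=float)
    return int(sum(np.sign(x[j] - x[i])
                   for i in range(len(x)) for j in range(i + 1, len(x))))

# Invented annual series of the wet-avalanche proportion on one massif.
annual = [0.12, 0.15, 0.11, 0.18, 0.20, 0.19, 0.24]
print(mann_kendall_s(annual))  # positive: the wet proportion is growing
```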
MrGrid: A Portable Grid Based Molecular Replacement Pipeline
Reboul, Cyril F.; Androulakis, Steve G.; Phan, Jennifer M. N.; Whisstock, James C.; Goscinski, Wojtek J.; Abramson, David; Buckle, Ashley M.
2010-01-01
Background: The crystallographic determination of protein structures can be computationally demanding and for difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. Methodology/Principal Findings: MrGrid is a portable web-based application written in Java/JSP and Ruby, taking advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. Conclusions: MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success. PMID:20386612
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
The design and implementation of a parallel unstructured Euler solver using software primitives
NASA Technical Reports Server (NTRS)
Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.
1992-01-01
This paper is concerned with the implementation of a three-dimensional unstructured-grid Euler solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented in order to accelerate the parallel communication rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured grid problems are solved on the Intel iPSC/860 hypercube and Intel Touchstone Delta machine. The quantitative effects of the various optimizations are demonstrated, and we show that their combined effect leads to roughly a factor of three performance improvement. The overall solution efficiency is compared with that obtained on the CRAY Y-MP vector supercomputer.
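The edge-based data structure at the heart of such solvers can be sketched with a scalar stand-in for the Euler fluxes: each edge is visited once and contributes equal and opposite fluxes to its two endpoint nodes. The tiny mesh and central flux are illustrative assumptions; in the parallel solver, edges cut by the partition would additionally trigger off-processor accumulation.

```python
import numpy as np

# A tiny unstructured mesh given purely by its edge list; a scalar central
# flux stands in for the Euler fluxes of the real solver.
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [1, 3]])
u = np.array([1.0, 2.0, 4.0, 8.0])

def edge_residual(u, edges):
    """Accumulate each edge's flux into its two endpoint nodes with opposite
    signs -- the core loop of an edge-based unstructured solver."""
    res = np.zeros_like(u)
    for a, b in edges:
        flux = 0.5 * (u[a] + u[b])
        res[a] += flux
        res[b] -= flux
    return res

r = edge_residual(u, edges)
print(r, r.sum())  # contributions cancel pairwise, so the total is zero
```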
IFKIS - a basis for managing avalanche risk in settlements and on roads in Switzerland
NASA Astrophysics Data System (ADS)
Bründl, M.; Etter, H.-J.; Steiniger, M.; Klingler, Ch.; Rhyner, J.; Ammann, W. J.
2004-04-01
After the avalanche winter of 1999 in Switzerland, which caused 17 deaths and over CHF 600 million in damage to buildings and roads, the IFKIS project was initiated, aimed at improving the basics of organizational measures (closure of roads, evacuation, etc.) in avalanche risk management. The three main parts of the project were the development of a compulsory checklist for avalanche safety services, a modular education and training course program, and an information system for safety services. The information system was developed to improve both the information flow between the national centre for avalanche forecasting, the Swiss Federal Institute for Snow and Avalanche Research SLF, and the local safety services on the one hand, and the communication between avalanche safety services in the communities on the other. The results of this project make a valuable contribution to strengthening organizational measures in avalanche risk management and to closing the gaps that became apparent during the avalanche winter of 1999. They are not restricted to snow avalanches but can also be adapted for dealing with other natural hazard processes and catastrophes.
NASA Astrophysics Data System (ADS)
Teich, M.; Feistl, T.; Fischer, J.; Bartelt, P.; Bebi, P.; Christen, M.; Grêt-Regamey, A.
2013-12-01
Two-dimensional avalanche simulation software operating in three-dimensional terrain is widely used for hazard zoning and engineering to predict runout distances and impact pressures of snow avalanche events. Mountain forests are an effective biological protection measure; however, the protective capacity of forests to decelerate or even stop avalanches that start within forested areas or directly above the treeline is seldom considered in this context. In particular, runout distances of small- to medium-scale avalanches are strongly influenced by the structural conditions of forests in the avalanche path. This varying decelerating effect has rarely been addressed or implemented in avalanche simulation. We present an evaluation and operationalization of a novel forest-detrainment modeling approach implemented in the avalanche simulation software RAMMS. The new approach accounts for the effect of forests in the avalanche path by detraining mass, which leads to a deceleration and runout shortening of avalanches. The avalanche mass caught behind trees stops immediately and is therefore instantly subtracted from the flow, and the momentum of the stopped mass is removed from the total momentum of the avalanche. This relationship is parameterized by the empirical detrainment coefficient K [Pa], which accounts for the braking power of different forest types per unit area. To define K as a function of specific forest characteristics, we simulated 40 well-documented small- to medium-scale avalanches that released in and ran through forests, using varying K values. However, manually comparing two-dimensional simulation results with one-dimensional field observations for a high number of avalanche events and simulations is time-consuming and rather subjective.
In order to process simulation results in a comprehensive and standardized way, we used a recently developed automatic evaluation and comparison method that defines runout distances based on a pressure-based runout indicator in an avalanche-path-dependent coordinate system. Statistically analyzing and comparing observed and simulated runout distances revealed values of K suitable for simulating the combined influence of four forest characteristics on avalanche runout: forest type, crown coverage, vertical structure and surface roughness. For example, values of K were higher for dense spruce and mixed spruce-beech forests than for open larch forests at the upper treeline. Considering forest structural conditions within avalanche simulation will considerably improve current applications of avalanche simulation tools in mountain forest and natural hazard management. Furthermore, we show that an objective and standardized evaluation of two-dimensional simulation results is essential for a successful evaluation and further calibration of avalanche models in general.
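The mass-budget side of the detrainment rule can be sketched in a few lines. The rate law dm/dt = K*area/u, and every numerical value below, are assumptions chosen only so that the momentum removed per unit time equals a braking force K*area (consistent with K having units of Pa); this is not the RAMMS implementation.

```python
def detrain_step(m, u, K, area, dt):
    """One explicit step of the mass budget: trees stop dm of flowing mass,
    and that mass (with its momentum dm*u) leaves the flow. The rate law
    dm/dt = K*area/u is an assumption chosen so the momentum removed per
    unit time equals a braking force K*area."""
    if u <= 0.0 or m <= 0.0:
        return m, u
    dm = min(m, K * area * dt / u)
    m_new = m - dm
    # Removing mass dm moving at u removes momentum dm*u, so the remaining
    # flow keeps velocity u in this 0-D sketch; in the full depth-averaged
    # model the mass loss feeds back on the momentum balance and decelerates
    # the avalanche.
    return m_new, (u if m_new > 0.0 else 0.0)

m, u = 1.0e6, 20.0               # kg and m/s (invented)
K, area, dt = 30.0, 5000.0, 0.1  # Pa, m^2 of forested flow area, s (invented)
for _ in range(100):             # 10 s of flow through forest
    m, u = detrain_step(m, u, K, area, dt)
print(m)  # 750 kg detrained per step -> 925000.0 kg remain
```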
Experimental Avalanches in a Rotating Drum
NASA Astrophysics Data System (ADS)
Hubard, Aline; O'Hern, Corey; Shattuck, Mark
We address the question of universality in granular avalanches and the system size effects on it. We set up an experiment made from a quasi-two-dimensional rotating drum half-filled with a monolayer of stainless-steel spheres. We measure the size of the avalanches created by the increased gravitational stress on the pile as we quasi-statically rotate the drum. We find two kinds of avalanches determined by the drum size. The size and duration distributions of the avalanches that do not span the whole system follow a power law and the avalanche shapes are self-similar and nearly parabolic. The distributions of the avalanches that span the whole system are limited by the maximal amount of potential energy stored in the system at the moment of the avalanche. NSF CMMI-1462439, CMMI-1463455.
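Testing the reported power-law statistics typically means recovering an exponent from sampled avalanche sizes. A sketch with synthetic Pareto-distributed sizes; the exponent 1.5 and the fit-to-the-survival-function choice are illustrative, not the experiment's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw avalanche sizes from P(S >= s) = s**-(tau - 1) with tau = 1.5 via
# inverse-transform sampling, then recover the exponent from a log-log fit
# of the empirical survival function.
tau, n = 1.5, 20000
s = (1.0 - rng.uniform(size=n)) ** (-1.0 / (tau - 1.0))

s_sorted = np.sort(s)
survival = 1.0 - np.arange(n) / n                # empirical P(S >= s)
mask = s_sorted < np.quantile(s_sorted, 0.99)    # drop the noisy extreme tail
slope, _ = np.polyfit(np.log(s_sorted[mask]), np.log(survival[mask]), 1)
tau_hat = 1.0 - slope                            # survival exponent is tau - 1
print(tau_hat)
```

Fitting the survival function rather than a binned histogram avoids binning artifacts; the system-spanning avalanches described above would show up as a bump that departs from this straight line.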
Forensic Analysis of the May 2014 West Salt Creek Rock Avalanche in Western Colorado
NASA Astrophysics Data System (ADS)
Coe, J. A.; Baum, R. L.; Allstadt, K.; Kochevar, B. F.; Schmitt, R. G.; Morgan, M. L.; White, J. L.; Stratton, B. T.; Hayashi, T. A.; Kean, J. W.
2015-12-01
The rain-on-snow induced West Salt Creek rock avalanche occurred on May 25, 2014 on the northern flank of Grand Mesa. The avalanche was rare for the contiguous U.S. because of its large size (59 M m3) and high mobility (Length/Height=7.2). To understand the avalanche failure sequence, mechanisms, and mobility, we conducted a forensic analysis using large-scale (1:1000) structural mapping and seismic data. We used high-resolution, Unmanned Aircraft System (UAS) imagery as a base for our field mapping and analyzed seismic data from 22 broadband stations (distances <656 km) and one short-period network. We inverted broadband data to derive a time series of forces that the avalanche exerted on the earth and tracked these forces using curves in the avalanche path. Our results revealed that the rock avalanche was a cascade of landslide events, rather than a single massive failure. The sequence began with a landslide/debris flow that started about 10 hours before the main avalanche. The main avalanche lasted just over 3 minutes and traveled at average velocities ranging from 15 to 36 m/s. For at least two hours after the avalanche ceased movement, a central, hummock-rich, strike-slip bound core continued to move slowly. Following movement of the core, numerous shallow landslides, rock slides, and rock falls created new structures and modified topography. Mobility of the main avalanche and central core were likely enhanced by valley floor material that liquefied from undrained loading by the overriding avalanche. Although the base was likely at least partially liquefied, our mapping indicates that the overriding avalanche internally deformed predominantly by sliding along discrete shear surfaces in material that was nearly dry and had substantial frictional strength. These results indicate that the West Salt Creek avalanche, and probably other long-traveled avalanches, could be modeled as two layers: a liquefied basal layer; and a thicker and stronger overriding layer.
Pederson, Gregory T.; Reardon, Blase; Caruso, C.J.; Fagre, Daniel B.
2006-01-01
Effective design of avalanche hazard mitigation measures requires long-term records of natural avalanche frequency and extent. Such records are also vital for determining whether natural avalanche frequency and extent vary over time due to climatic or biophysical changes. Where historic records are lacking, an accepted substitute is a chronology developed from tree-ring responses to avalanche-induced damage. This study evaluates a method for using tree-ring chronologies to provide spatially explicit differentiations of avalanche frequency and temporally explicit records of avalanche extent that are often lacking. The study area, part of John F. Stevens Canyon on the southern border of Glacier National Park, is within a heavily used railroad and highway corridor with two dozen active avalanche paths. Using a spatially geo-referenced network of avalanche-damaged trees (n=109) from a single path, we reconstructed a 96-year tree-ring-based chronology of avalanche extent and frequency. Comparison of the chronology with historic records revealed that trees recorded all known events as well as the same number of previously unidentified events. Kriging methods provided spatially explicit estimates of avalanche return periods. Estimated return periods for the entire avalanche path averaged 3.2 years. Within this path, return intervals ranged from ~2.3 yrs in the lower track to ~9-11 yrs and ~12 to >25 yrs in the runout zone, where the railroad and highway are located. For avalanche professionals, engineers, and transportation managers this technique proves to be a powerful tool in landscape risk assessment and decision making.
Aaron, Jordan; McDougall, Scott; Moore, Jeffrey R.; Coe, Jeffrey A.; Hungr, Oldrich
2017-01-01
Background: Rock avalanches are flow-like landslides that can travel at extremely rapid velocities and impact surprisingly large areas. The mechanisms that lead to the unexpected mobility of these flows are unknown and debated. Mechanisms proposed in the literature can be broadly classified into those that rely on intrinsic characteristics of the rock avalanche material, and those that rely on extrinsic factors such as path material. In this work a calibration-based numerical model is used to back-analyze three rock avalanche case histories. The results of these back-analyses are then used to infer factors that govern rock avalanche motion. Results: Our study has revealed two key insights that must be considered when analyzing rock avalanches. Results from two of the case histories demonstrate the importance of accounting for the initially coherent phase of rock avalanche motion. Additionally, the back-analyzed basal resistance parameters, as well as the best-fit rheology, are different for each case history. This suggests that the governing mechanisms controlling rock avalanche motion are unlikely to be intrinsic. The back-analyzed strength parameters correspond well to those that would be expected by considering the path material that the rock avalanches overran. Conclusion: Our results show that accurate simulation of rock avalanche motion must account for the initially coherent phase of movement, and that the mechanisms governing rock avalanche motion are unlikely to be intrinsic to the failed material. Interaction of rock avalanche debris with path materials is the likely mechanism that governs the motion of many rock avalanches.
Parallel computation of transverse wakes in linear colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Xiaowei; Ko, Kwok
1996-11-01
SLAC has proposed the detuned structure (DS) as one possible design to control the emittance growth of long bunch trains due to transverse wakefields in the Next Linear Collider (NLC). The DS consists of 206 cells with tapering from cell to cell of the order of a few microns to provide Gaussian detuning of the dipole modes. The decoherence of these modes leads to a two-order-of-magnitude reduction in the wakefield experienced by the trailing bunch. To model such a large heterogeneous structure realistically is impractical with finite-difference codes using structured grids. The authors have calculated the wakefield in the DS on a parallel computer with a finite-element code using an unstructured grid. The parallel implementation issues are presented along with simulation results that include contributions from higher dipole bands and wall dissipation.
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching on a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Parallel discontinuous Galerkin FEM for computing hyperbolic conservation law on unstructured grids
NASA Astrophysics Data System (ADS)
Ma, Xinrong; Duan, Zhijian
2018-04-01
High-order Discontinuous Galerkin finite element methods (DGFEM) are known to be well suited for solving the Euler and Navier-Stokes equations on unstructured grids, but they demand substantial computational resources. An efficient parallel algorithm is presented for solving the compressible Euler equations. Moreover, a multigrid strategy based on a three-stage, third-order TVD Runge-Kutta scheme is used to improve the computational efficiency of DGFEM and to accelerate convergence of the solution of the unsteady compressible Euler equations. To keep the load balanced across processors, the domain decomposition method is employed. Numerical experiments were performed for inviscid transonic flows around the NACA0012 airfoil and the M6 wing. The results indicate that the parallel algorithm improves speedup and efficiency significantly and is suitable for computing complex flows.
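As a concrete illustration of the time integrator named in this abstract, a minimal sketch of a three-stage, third-order TVD (SSP) Runge-Kutta step is shown below; the scalar decay problem used to exercise it is illustrative, not the paper's DG solver.

```python
import math

def ssp_rk3(u, L, dt):
    """One TVD-RK3 step: convex combinations of three forward-Euler stages."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Toy usage: exponential decay du/dt = -u, exact solution exp(-t) at t = 1.
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = ssp_rk3(u, lambda v: -v, dt)
    t += dt
err = abs(u - math.exp(-1.0))   # third-order scheme: err is well below 1e-6
```

Each stage is a forward-Euler step combined convexly with earlier stages, which is what gives the scheme its TVD (strong-stability-preserving) property.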
3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.
2016-03-15
We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
A new web-based system to improve the monitoring of snow avalanche hazard in France
NASA Astrophysics Data System (ADS)
Bourova, Ekaterina; Maldonado, Eric; Leroy, Jean-Baptiste; Alouani, Rachid; Eckert, Nicolas; Bonnefoy-Demongeot, Mylene; Deschatres, Michael
2016-05-01
Snow avalanche data in the French Alps and Pyrenees have been recorded for more than 100 years in several databases. The increasing amount of observed data required a more integrative and automated service. Here we report the comprehensive web-based Snow Avalanche Information System newly developed to this end for three important data sets: an avalanche chronicle (Enquête Permanente sur les Avalanches, EPA), an avalanche map (Carte de Localisation des Phénomènes d'Avalanche, CLPA) and a compilation of hazard and vulnerability data recorded on selected paths endangering human settlements (Sites Habités Sensibles aux Avalanches, SSA). These data sets are now integrated into a common database, enabling full interoperability between all different types of snow avalanche records: digitized geographic data, avalanche descriptive parameters, eyewitness reports, photographs, hazard and risk levels, etc. The new information system is implemented through modular components using Java-based web technologies with Spring and Hibernate frameworks. It automates the manual data entry and improves the process of information collection and sharing, enhancing user experience and data quality, and offering new outlooks to explore and exploit the huge amount of snow avalanche data available for fundamental research and more applied risk assessment.
NASA Astrophysics Data System (ADS)
Marchetti, E.; Ripepe, M.; Ulivieri, G.; Kogelnig, A.
2015-11-01
Avalanche risk management is strongly related to the ability to identify and promptly report the occurrence of snow avalanches. Infrasound has been applied to avalanche research and monitoring for the last 20 years, but it has never turned into an operational tool for identifying clear signals related to avalanches. We present here a method, based on the analysis of infrasound signals recorded by a small-aperture array in Ischgl (Austria), which provides a significant improvement toward overcoming this limit. The method is based on array-derived wave parameters, such as back azimuth and apparent velocity, and defines threshold criteria for automatic avalanche identification by treating avalanches as moving sources of infrasound. We validate the efficiency of the automatic infrasound detection against continuous Doppler radar observations, and we show how the velocity of a snow avalanche on any given path around the array can be efficiently derived. Our results indicate that proper infrasound array analysis allows robust, real-time, remote detection of snow avalanches, providing the number and time of occurrence of avalanches all around the array: key information for validating avalanche forecast models and for risk management in a given area.
NASA Astrophysics Data System (ADS)
Zhan, Weiwei; Fan, Xuanmei; Huang, Runqiu; Pei, Xiangjun; Xu, Qiang; Li, Weile
2017-06-01
Rock avalanches are extremely rapid, massive flow-like movements of fragmented rock. The travel path of a rock avalanche may in some cases be confined by channels; these events are referred to as channelized rock avalanches. Channelized rock avalanches are potentially dangerous due to their difficult-to-predict travel distance. In this study, we constructed a dataset with detailed characteristic parameters of 38 channelized rock avalanches triggered by the 2008 Wenchuan earthquake, using visual interpretation of remote sensing imagery, field investigation and literature review. Based on this dataset, we assessed the influence of different factors on the runout distance and developed prediction models for channelized rock avalanches using the multivariate regression method. The results suggest that the movement of channelized rock avalanches is dominated by the landslide volume, total relief and channel gradient. The performance of both models was then tested with an independent validation dataset of eight rock avalanches induced by the 2008 Wenchuan earthquake, the Ms 7.0 Lushan earthquake and heavy rainfall in 2013, showing acceptably good prediction results. Therefore, the travel-distance prediction models for channelized rock avalanches constructed in this study are applicable and reliable for predicting the runout of similar rock avalanches in other regions.
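The kind of multivariate regression model described in this abstract can be sketched as a log-space least-squares fit; the predictors, coefficients and toy data below are illustrative stand-ins, not the paper's fitted model.

```python
# Hedged sketch of a multivariate power-law runout regression:
# log10(runout) regressed on log10(volume) and log10(relief).

def lstsq(X, y):
    """Solve the normal equations X^T X b = X^T y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):                       # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * p
    for i in reversed(range(p)):             # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, p))) / A[i][i]
    return coef

# Toy rows: [1, log10(volume m^3), log10(relief m)] -> log10(runout m).
# The toy targets follow 0.1 + 0.4*logV + 0.2*logH exactly, so the fit recovers it.
X = [[1.0, 5.0, 2.5], [1.0, 6.0, 2.8], [1.0, 6.5, 3.0], [1.0, 7.0, 3.1], [1.0, 5.5, 2.6]]
y = [2.6, 3.06, 3.3, 3.52, 2.82]
coef = lstsq(X, y)
pred = coef[0] + coef[1] * 6.2 + coef[2] * 2.9   # predicted log10(runout)
```

Fitting in log space is what turns a power-law relation between volume, relief and runout into a linear regression problem.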
NASA Astrophysics Data System (ADS)
Machalek, P.; Kim, S. M.; Berry, R. D.; Liang, A.; Small, T.; Brevdo, E.; Kuznetsova, A.
2012-12-01
We describe how the Climate Corporation uses Python and Clojure, a language implemented on top of the Java virtual machine, to generate climatological forecasts for precipitation based on the Advanced Hydrologic Prediction Service (AHPS) radar-based daily precipitation measurements. A 2-year-long forecast is generated on each of the ~650,000 CONUS land-based 4-km AHPS grids by constructing 10,000 ensembles sampled from a 30-year reconstructed AHPS history for each grid. The spatial and temporal correlations between neighboring AHPS grids and the sampling of the analogues are handled by Python. The parallelization across all 650,000 CONUS grids is achieved by utilizing the MapReduce framework (http://code.google.com/edu/parallel/mapreduce-tutorial.html). Each full-scale computational run requires hundreds of nodes with up to 8 processors each on the Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) distributed computing service, resulting in 3-terabyte datasets. We further describe how we have productionized a monthly run of the simulation process at the full scale of the 4-km AHPS grids, and how the resulting terabyte-sized datasets are handled.
Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
Polarization-dependent thin-film wire-grid reflectarray for terahertz waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tiaoming; School of Information Science and Engineering, Lanzhou University, Lanzhou 730000; Upadhyay, Aditi
2015-07-20
A thin-film polarization-dependent reflectarray based on patterned metallic wire grids is realized at 1 THz. Unlike conventional reflectarrays with resonant elements and a solid metal ground, parallel narrow metal strips with uniform spacing are employed in this design to construct both the radiation elements and the ground plane. For each radiation element, a certain number of thin strips with an identical length are grouped to effectively form a patch resonator with equivalent performance. The ground plane is made of continuous metallic strips, similar to conventional wire-grid polarizers. The structure can deflect incident waves with the polarization parallel to the strips into a designed direction and transmit the orthogonal polarization component. Measured radiation patterns show reasonable deflection efficiency and high polarization discrimination. Utilizing this flexible device approach, similar reflectarray designs can be realized for conformal mounting onto surfaces of cylindrical or spherical devices for terahertz imaging and communications.
Wide range radioactive gas concentration detector
Anderson, David F.
1984-01-01
A wide range radioactive gas concentration detector and monitor which is capable of measuring radioactive gas concentrations over a range of eight orders of magnitude. The device of the present invention is designed to have an ionization chamber which is sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
Parallel Unsteady Turbopump Simulations for Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Kiris, Cetin C.; Kwak, Dochan; Chan, William
2000-01-01
This paper reports the progress being made towards complete turbo-pump simulation capability for liquid rocket engines. Space Shuttle Main Engine (SSME) turbo-pump impeller is used as a test case for the performance evaluation of the MPI and hybrid MPI/Open-MP versions of the INS3D code. Then, a computational model of a turbo-pump has been developed for the shuttle upgrade program. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Time-accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for SSME turbo-pump, which contains 136 zones with 35 Million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from time-accurate simulations with moving boundary capability, and the performance of the parallel versions of the code will be presented in the final paper.
Unweighted least squares phase unwrapping by means of multigrid techniques
NASA Astrophysics Data System (ADS)
Pritt, Mark D.
1995-11-01
We present a multigrid algorithm for unweighted least squares phase unwrapping. This algorithm applies Gauss-Seidel relaxation schemes to solve the Poisson equation on smaller, coarser grids and transfers the intermediate results to the finer grids. This approach forms the basis of our multigrid algorithm for weighted least squares phase unwrapping, which is described in a separate paper. The key idea of our multigrid approach is to maintain the partial derivatives of the phase data in separate arrays and to correct these derivatives at the boundaries of the coarser grids. This maintains the boundary conditions necessary for rapid convergence to the correct solution. Although the multigrid algorithm is an iterative algorithm, we demonstrate that it is nearly as fast as the direct Fourier-based method. We also describe how to parallelize the algorithm for execution on a distributed-memory parallel processor computer or a network-cluster of workstations.
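The Gauss-Seidel relaxation applied on each grid level of such a multigrid solver can be sketched for the discrete Poisson equation as follows; the constant right-hand side is a toy stand-in for the phase-derivative source term the unwrapper actually builds.

```python
# Lexicographic Gauss-Seidel relaxation for -Laplacian(phi) = rho on a
# square grid with zero Dirichlet boundaries (illustrative coarse-grid solve).

def gauss_seidel(phi, rho, h, sweeps):
    """In-place Gauss-Seidel sweeps over interior points (5-point stencil)."""
    n = len(phi)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                phi[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                                    + phi[i][j - 1] + phi[i][j + 1]
                                    + h * h * rho[i][j])
    return phi

n, h = 17, 1.0 / 16
phi = [[0.0] * n for _ in range(n)]
rho = [[1.0] * n for _ in range(n)]      # toy constant source term
phi = gauss_seidel(phi, rho, h, 500)
# Discrete residual of -Laplacian(phi) = rho over the interior.
residual = max(abs(4 * phi[i][j] - phi[i - 1][j] - phi[i + 1][j]
                   - phi[i][j - 1] - phi[i][j + 1] - h * h * rho[i][j])
               for i in range(1, n - 1) for j in range(1, n - 1))
```

In a multigrid cycle only a few such sweeps are run per level, with the remaining (smooth) error corrected on coarser grids rather than iterated away.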
Efficient relaxed-Jacobi smoothers for multigrid on parallel computers
NASA Astrophysics Data System (ADS)
Yang, Xiang; Mittal, Rajat
2017-03-01
In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers, measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers, and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic multigrid method on unstructured grids. The tests demonstrate that, unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, they outperform lexicographic Gauss-Seidel by factors that increase with domain partition count.
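The idea of successive relaxed Jacobi sweeps can be sketched on a 1D Poisson model problem; the two relaxation factors below are an illustrative pair, not the optimized scheduled-relaxation values derived in the paper.

```python
import math

def relaxed_jacobi(u, f, h, omega):
    """One weighted Jacobi sweep for -u'' = f with fixed Dirichlet ends."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        jac = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])   # plain Jacobi value
        new[i] = (1.0 - omega) * u[i] + omega * jac         # relaxed update
    return new

n = 65
h = 1.0 / (n - 1)
k = n - 2                                   # highest oscillatory interior mode
u = [math.sin(math.pi * k * i * h) for i in range(n)]
f = [0.0] * n                               # homogeneous problem: u is pure error
for omega in (0.9, 0.5):                    # two-sweep schedule (illustrative)
    u = relaxed_jacobi(u, f, h, omega)
peak = max(abs(v) for v in u)               # the rough mode is damped by ~2000x
```

Because each sweep depends only on the previous iterate, relaxed Jacobi gives identical results regardless of how the domain is partitioned, which is the property the note exploits on parallel machines.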
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks: a gridding approach accelerates the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are obtained in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203
Parallel Processing of Adaptive Meshes with Load Balancing
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
Measuring neuronal avalanches in disordered systems with absorbing states
NASA Astrophysics Data System (ADS)
Girardi-Schappo, M.; Tragtenberg, M. H. R.
2018-04-01
Power-law-shaped avalanche-size distributions are widely used to probe for critical behavior in many different systems, particularly in neural networks. The definition of an avalanche, however, is ambiguous. Usually, theoretical avalanches are defined as the activity between a stimulus and the relaxation to an inactive absorbing state. On the other hand, experimental neuronal avalanches are defined by the activity between consecutive silent states. We claim that the latter definition may be extended to some theoretical models to characterize their power-law avalanches and critical behavior. We study a system in which the separation of driving and relaxation time scales emerges from its structure. We apply both definitions of avalanche to our model. Both yield power-law-distributed avalanches that scale with system size at the critical point, as expected. Nevertheless, within the experimental procedure we find restricted power-law-distributed avalanches outside of the critical region, which is not expected under the standard theoretical definition. We remark that these results depend on the model details.
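The experimental avalanche definition discussed in this abstract (activity between consecutive silent states) can be sketched directly; the activity trace below is a toy example.

```python
# An avalanche is the summed activity enclosed between consecutive
# zero-activity (silent) time bins of a recorded activity trace.

def avalanches_between_silences(activity):
    """Return the list of avalanche sizes delimited by silent bins."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a
        elif current > 0:          # a silent bin closes the running avalanche
            sizes.append(current)
            current = 0
    if current > 0:                # the trace ended mid-avalanche
        sizes.append(current)
    return sizes

trace = [0, 2, 5, 1, 0, 0, 3, 0, 1, 1, 4, 0, 7]
print(avalanches_between_silences(trace))  # [8, 3, 6, 7]
```

The theoretical definition would instead require an explicit stimulus and a relaxation to an absorbing state, which is why the two procedures can disagree away from criticality.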
NASA Astrophysics Data System (ADS)
Blahut, Jan; Klimes, Jan; Balek, Jan; Taborik, Petr; Juras, Roman; Pavlasek, Jiri
2015-04-01
Run-out modelling of snow avalanches is widely applied in high mountain areas worldwide. This study presents an application of snow avalanche run-out calculation to mid-mountain ranges - the Krkonose, Jeseniky and Kralicky Sneznik Mountains. All of these mountain ranges lie in the northern part of Czechia, close to the border with Poland. Their highest peak reaches only 1602 m a.s.l. However, climatic conditions and regular snowpack presence mean that these mountain ranges experience considerable snow avalanche activity every year, sometimes resulting in injuries or even fatalities. As part of an applied project dealing with snow avalanche hazard prediction, a re-assessment of permanent snow avalanche paths has been performed, based on extensive statistics covering the period from 1961/62 to the present. On each avalanche path, avalanches with different return periods were modelled using the RAMMS code. As a result, an up-to-date snow avalanche hazard map was prepared.
Meteorological variables to aid forecasting deep slab avalanches on persistent weak layers
Marienthal, Alex; Hendrikx, Jordy; Birkeland, Karl; Irvine, Kathryn M.
2015-01-01
Deep slab avalanches are particularly challenging to forecast. These avalanches are difficult to trigger, yet when they release they tend to propagate far and can result in large and destructive avalanches. We utilized a 44-year record of avalanche control and meteorological data from Bridger Bowl ski area in southwest Montana to test the usefulness of meteorological variables for predicting seasons and days with deep slab avalanches. We defined deep slab avalanches as those that failed on persistent weak layers deeper than 0.9 m, and that occurred after February 1st. Previous studies often used meteorological variables from days prior to avalanches, but we also considered meteorological variables over the early months of the season. We used classification trees and random forests for our analyses. Our results showed seasons with either dry or wet deep slabs on persistent weak layers typically had less precipitation from November through January than seasons without deep slabs on persistent weak layers. Days with deep slab avalanches on persistent weak layers often had warmer minimum 24-hour air temperatures, and more precipitation over the prior seven days, than days without deep slabs on persistent weak layers. Days with deep wet slab avalanches on persistent weak layers were typically preceded by three days of above freezing air temperatures. Seasonal and daily meteorological variables were found useful to aid forecasting dry and wet deep slab avalanches on persistent weak layers, and should be used in combination with continuous observation of the snowpack and avalanche activity.
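The threshold-splitting step at the heart of the classification trees used in such analyses can be sketched as follows; the variable names and toy data are illustrative, not the study's fitted trees.

```python
# Choose the threshold on one predictor that best separates deep-slab days
# from non-avalanche days, by minimizing weighted Gini impurity.

def gini(labels):
    """Gini impurity of a binary label list (0 for empty or pure sets)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_split(x, y):
    """Return (threshold, weighted impurity) of the best split x <= t."""
    best = (None, float("inf"))
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: 7-day precipitation totals (mm) vs deep-slab day (1) or not (0).
precip7 = [5, 8, 12, 30, 42, 55, 61, 70]
deep_slab = [0, 0, 0, 0, 1, 1, 1, 1]
threshold, score = best_split(precip7, deep_slab)
```

A random forest repeats this greedy splitting on bootstrap samples and random feature subsets, then averages the resulting trees.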
Dealing with the white death: avalanche risk management for traffic routes.
Rheinberger, Christoph M; Bründl, Michael; Rhyner, Jakob
2009-01-01
This article discusses mitigation strategies to protect traffic routes from snow avalanches. Up to now, mitigation of snow avalanches on many roads and railways in the Alps has relied on avalanche sheds, which require large initial investments resulting in high opportunity costs. Therefore, avalanche risk managers have increasingly adopted organizational mitigation measures such as warning systems and closure policies instead. The effectiveness of these measures is, however, greatly dependent on human decisions. In this article, we present a method for optimizing avalanche mitigation for traffic routes in terms of both their risk reduction impact and their net benefit to society. First, we introduce a generic framework for assessing avalanche risk and for quantifying the impact of mitigation. This allows for sound cost-benefit comparisons between alternative mitigation strategies. Second, we illustrate the framework with a case study from Switzerland. Our findings suggest that site-specific characteristics of avalanche paths, as well as the economic importance of a traffic route, are decisive for the choice of optimal mitigation strategies. On routes endangered by few avalanche paths with frequent avalanche occurrences, structural measures are most efficient, whereas reliance on organizational mitigation is often the most appropriate strategy on routes endangered by many paths with infrequent or fuzzy avalanche risk. Finally, keeping a traffic route open may be very important for tourism or the transport industry. Hence, local economic value may promote the use of a hybrid strategy that combines organizational and structural measures to optimize the resource allocation of avalanche risk mitigation.
NASA Technical Reports Server (NTRS)
Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak
2003-01-01
In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
Massive parallel 3D PIC simulation of negative ion extraction
NASA Astrophysics Data System (ADS)
Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu
2017-09-01
The 3D PIC-MCC code ONIX is dedicated to modeling Negative hydrogen/deuterium Ion (NI) extraction and co-extraction of electrons from radio-frequency driven, low pressure plasma sources. It provides valuable insight on the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, implying numerical electron heating. Important steps have been achieved in terms of computation performance and parallelization efficiency allowing successful massive parallel calculations (4096 cores), imperative to resolve the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment, but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found well-adapted to simulate the sheath in front of the plasma grid.
Avalanches and Criticality in Driven Magnetic Skyrmions
NASA Astrophysics Data System (ADS)
Díaz, S. A.; Reichhardt, C.; Arovas, D. P.; Saxena, A.; Reichhardt, C. J. O.
2018-03-01
We show using numerical simulations that slowly driven Skyrmions interacting with random pinning move via correlated jumps or avalanches. The avalanches exhibit power-law distributions in their duration and size, and the average avalanche shape for different avalanche durations can be scaled to a universal function, in agreement with theoretical predictions for systems in a nonequilibrium critical state. A distinctive feature of Skyrmions is the influence of the nondissipative Magnus term. When we increase the ratio of the Magnus term to the damping term, a change in the universality class of the behavior occurs, the average avalanche shape becomes increasingly asymmetric, and individual avalanches exhibit motion in the direction perpendicular to their own density gradient.
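The average-shape scaling test mentioned in this abstract can be sketched as follows; the synthetic parabolic profiles with gamma = 2 are a stand-in for measured avalanche shapes.

```python
# At criticality, mean avalanche profiles of duration T are expected to obey
# s(t, T) = T**(gamma - 1) * f(t / T); rescaling collapses them onto one
# universal curve f.  Synthetic data with gamma = 2 illustrates the collapse.

gamma = 2.0

def shape(T):
    """Synthetic mean avalanche profile of duration T (parabolic f)."""
    return [T ** (gamma - 1.0) * (t / T) * (1.0 - t / T) for t in range(T + 1)]

def collapse(profile):
    """Rescale a duration-T profile onto (t/T, s / T**(gamma - 1))."""
    T = len(profile) - 1
    return [s / T ** (gamma - 1.0) for s in profile]

# Profiles of very different durations collapse onto the same parabola f.
c16, c64 = collapse(shape(16)), collapse(shape(64))
mid_16, mid_64 = c16[8], c64[32]          # both equal f(1/2) = 0.25
```

For real data, gamma is fitted so that the rescaled curves overlap; a growing asymmetry of f with the Magnus-to-damping ratio is the signature reported here.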
Time-Dependent Simulation of Incompressible Flow in a Turbopump Using Overset Grid Approach
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan
2001-01-01
This paper reports the progress being made towards complete unsteady turbopump simulation capability by using overset grid systems. A computational model of a turbo-pump impeller is used as a test case for the performance evaluation of the MPI, hybrid MPI/Open-MP, and MLP versions of the INS3D code. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Unsteady computations for a turbo-pump, which contains 114 zones with 34.3 Million grid points, are performed on Origin 2000 systems at NASA Ames Research Center. The approach taken for these simulations, and the performance of the parallel versions of the code are presented.
ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallberg, J.; Stagg, A.
2000-10-01
A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
Historic avalanches in the northern front range and the central and northern mountains of Colorado
M. Martinelli; Charles F. Leaf
1999-01-01
Newspaper accounts of avalanche accidents from the 1860s through 1950 have been compiled, summarized, and discussed. Many of the avalanches that caused fatalities came down rather small, innocuous-looking paths. Land use planners can use historical avalanche information as a reminder of the power of snow avalanches and to assure rational development in the future....
NASA Astrophysics Data System (ADS)
Lato, M. J.; Frauenfelder, R.; Bühler, Y.
2012-09-01
Snow avalanches in mountainous areas pose a significant threat to infrastructure (roads, railways, energy transmission corridors), personal property (homes) and recreational areas, as well as to the lives of people living and moving in alpine terrain. The impacts of snow avalanches range from delays and financial loss through road and railway closures, destruction of property and infrastructure, to loss of life. Avalanche warnings today are mainly based on meteorological information, snow pack information, field observations, historically recorded avalanche events, as well as experience and expert knowledge. The ability to automatically identify snow avalanches using Very High Resolution (VHR) optical remote sensing imagery has the potential to assist in the development of accurate, spatially widespread, detailed maps of zones prone to avalanches, as well as to build up databases of past avalanche events in poorly accessible regions. This would provide decision makers with improved knowledge of the frequency and size distributions of avalanches in such areas. We used an object-oriented image interpretation approach, which employs segmentation and classification methodologies, to detect recent snow avalanche deposits within VHR panchromatic optical remote sensing imagery. This produces avalanche deposit maps, which can be integrated with other spatial mapping and terrain data. The object-oriented approach has been tested and validated against manually generated maps in which avalanches are visually recognized and digitized. The accuracies (both user's and producer's) are over 0.9, with errors of commission less than 0.05. Future research is directed toward widespread testing of the algorithm on data generated by various sensors, improvement of the algorithm in high-noise regions, and the mapping of avalanche paths alongside their deposits.
NASA Astrophysics Data System (ADS)
Cooray, Vernon; Cooray, Gerald; Marshall, Thomas; Arabshahi, Shahab; Dwyer, Joseph; Rassoul, Hamid
2014-11-01
In the present study, electromagnetic fields of accelerating charges were utilized to evaluate the electromagnetic fields generated by a relativistic electron avalanche. In the analysis it is assumed that all the electrons in the avalanche are moving with the same speed; in other words, the growth or decay of the number of electrons takes place only at the head of the avalanche. It is shown that the radiation emanates only from the head of the avalanche, where electrons are being accelerated. It is also shown that an analytical expression for the radiation field of the avalanche at any distance can be written directly in terms of the e-folding length of the avalanche. This model of the avalanche was utilized to test the idea that the source of the lightning signatures known as narrow bipolar pulses (NBPs) could be relativistic avalanches. The idea was tested by using the simultaneously measured electric fields of narrow bipolar pulses at two distances, one measured far away from the source and the other in the near vicinity. The avalanche parameters were extracted from the distant field and used to evaluate the close field. The results show that the source of the NBPs can be modeled either as a single or a multiple burst of relativistic avalanches with avalanche speeds in the range of 2-3 × 10⁸ m/s. The multiple avalanche model agrees better with the experimental data in that it can also generate the correct signature of the time derivatives and the HF and VHF radiation bursts of NBPs.
During running in place, grid cells integrate elapsed time and distance run
Kraus, Benjamin J.; Brandon, Mark P.; Robinson, Robert J.; Connerney, Michael A.; Hasselmo, Michael E.; Eichenbaum, Howard
2015-01-01
The spatial scale of grid cells may be provided by self-generated motion information or by external sensory information from environmental cues. To determine whether grid cell activity reflects distance traveled or elapsed time independent of external information, we recorded grid cells as animals ran in place on a treadmill. Grid cell activity was only weakly influenced by location, but most grid cells and other neurons recorded from the same electrodes strongly signaled a combination of distance and time, with some signaling only distance or time. Grid cells were more sharply tuned to time and distance than non-grid cells. Many grid cells exhibited multiple firing fields during treadmill running, paralleling the periodic firing fields observed in open fields and suggesting a common mode of information processing. These observations indicate that, in the absence of external dynamic cues, grid cells integrate self-generated distance and time information to encode a representation of experience. PMID:26539893
Nano-multiplication region avalanche photodiodes and arrays
NASA Technical Reports Server (NTRS)
Zheng, Xinyu (Inventor); Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor)
2011-01-01
An avalanche photodiode with a nano-scale reach-through structure comprising n-doped and p-doped regions, formed on a silicon island on an insulator, so that the avalanche photodiode may be electrically isolated from other circuitry on other silicon islands on the same silicon chip as the avalanche photodiode. For some embodiments, multiplied holes generated by an avalanche reduce the electric field in the depletion region of the n-doped and p-doped regions to bring about self-quenching of the avalanche photodiode. Other embodiments are described and claimed.
Dynamic magnification factors for tree blow-down by powder snow avalanche air blasts
NASA Astrophysics Data System (ADS)
Bartelt, Perry; Bebi, Peter; Feistl, Thomas; Buser, Othmar; Caviezel, Andrin
2018-03-01
We study how short duration powder avalanche blasts can break and overturn tall trees. Tree blow-down is often used to back-calculate avalanche pressure and therefore constrain avalanche flow velocity and motion. We find that tall trees are susceptible to avalanche air blasts because the duration of the air blast is near to the period of vibration of tall trees, both in bending and root-plate overturning. Dynamic magnification factors for bending and overturning failures should therefore be considered when back-calculating avalanche impact pressures.
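The dynamic magnification factor above compares the peak response of a tree, idealized as an undamped single-degree-of-freedom oscillator, with the static response under the same load. A minimal numerical sketch (the rectangular pulse shape, unit static load, and the integration scheme are assumptions for illustration, not the authors' model):

```python
import math

def magnification_factor(t_pulse, period, dt=1e-4):
    """Peak dynamic displacement / static displacement for an undamped
    single-degree-of-freedom oscillator (u'' + w^2 u = w^2 f(t), unit
    static response) hit by a rectangular pulse of duration t_pulse.
    Integrated with semi-implicit Euler for stability."""
    w = 2.0 * math.pi / period
    t_end = t_pulse + 2.0 * period   # watch the free vibration afterwards
    u = v = t = peak = 0.0
    while t < t_end:
        load = 1.0 if t < t_pulse else 0.0
        v += w * w * (load - u) * dt
        u += v * dt
        peak = max(peak, abs(u))
        t += dt
    return peak
```

A pulse much longer than the oscillator period recovers the classical factor of about 2, while a pulse one tenth of the period gives roughly 2·sin(0.1π) ≈ 0.62: when the blast duration is close to the tree's vibration period, the response is strongly magnified, which is the vulnerability mechanism the abstract describes.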
Silicon-fiber blanket solar-cell array concept
NASA Technical Reports Server (NTRS)
Eliason, J. T.
1973-01-01
Proposed economical manufacture of solar-cell arrays involves parallel, planar weaving of filaments made of doped silicon fibers with diffused radial junction. Each filament is a solar cell connected either in series or parallel with others to form a blanket of deposited grids or attached electrode wire mesh screens.
High performance x-ray anti-scatter grid
Logan, Clinton M.
1995-01-01
An x-ray anti-scatter grid for x-ray imaging, particularly for screening mammography, and a method for fabricating the same. X-rays incident along a direct path pass through a grid composed of a plurality of parallel or crossed openings, microchannels, grooves, or slots etched in a substrate, such as silicon, with the walls of the microchannels or slots coated with a high-opacity material, such as gold, while x-rays incident at angles with respect to the slots of the grid, arising from scatter, are blocked. The thickness of the substrate depends on the specific application of the grid, whereby a substrate of a grid for mammography would be thinner than one for chest radiology. Instead of coating the walls of the slots, they could be filled with an appropriate liquid, such as mercury.
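The rejection of off-axis scatter is purely geometric: a ray tilted from the slot axis drifts laterally while crossing the slot depth and is absorbed by a wall if the drift exceeds the slot width. A toy sketch of that acceptance calculation (a single slot with perfectly opaque walls is a simplifying assumption):

```python
import math

def slot_transmission(angle_deg, width_um, depth_um):
    """Geometric transmission of one grid slot with opaque walls: a ray
    tilted by angle_deg from the slot axis drifts laterally by
    depth * tan(angle) while crossing the slot, so the fraction of
    entry positions that clear both walls is max(0, 1 - drift/width)."""
    drift = depth_um * math.tan(math.radians(angle_deg))
    return max(0.0, 1.0 - drift / width_um)
```

For a slot 50 µm wide and 1000 µm deep, on-axis primaries pass unattenuated while scatter arriving 5° off-axis is fully blocked.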
Solving optimization problems on computational grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, S. J.; Mathematics and Computer Science
2001-05-01
Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms have become popular in many settings. A somewhat different type of parallel computing platform known as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, were among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups.
Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software infrastructure needed to solve these problems on computational grids. This article describes some of the results we have obtained during the first three years of the metaneos project. Our efforts have led to development of the runtime support library MW for implementing algorithms with master-worker control structure on Condor platforms. This work is discussed here, along with work on algorithms and codes for integer linear programming, the quadratic assignment problem, and stochastic linear programming. Our experiences in the metaneos project have shown that cheap, powerful computational grids can be used to tackle large optimization problems of various types. In an industrial or commercial setting, the results demonstrate that one may not have to buy powerful computational servers to solve many of the large problems arising in areas such as scheduling, portfolio optimization, or logistics; the idle time on employee workstations (or, at worst, an investment in a modest cluster of PCs) may do the job. For the optimization research community, our results motivate further work on parallel, grid-enabled algorithms for solving very large problems of other types. The fact that very large problems can be solved cheaply allows researchers to better understand issues of 'practical' complexity and of the role of heuristics.
Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Pirzadeh, S.
1999-01-01
A complete "geometry to drag-polar" analysis capability for three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a workstation, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.
Parallel computation of three-dimensional aeroelastic fluid-structure interaction
NASA Astrophysics Data System (ADS)
Sadeghi, Mani
This dissertation presents a numerical method for the parallel computation of aeroelasticity (ParCAE). A flow solver is coupled to a structural solver by use of a fluid-structure interface method. The integration of the three-dimensional unsteady Navier-Stokes equations is performed in the time domain, simultaneously to the integration of a modal three-dimensional structural model. The flow solution is accelerated by using a multigrid method and a parallel multiblock approach. Fluid-structure coupling is achieved by subiteration. A grid-deformation algorithm is developed to interpolate the deformation of the structural boundaries onto the flow grid. The code is formulated to allow application to general, three-dimensional, complex configurations with multiple independent structures. Computational results are presented for various configurations, such as turbomachinery blade rows and aircraft wings. Investigations are performed on vortex-induced vibrations, effects of cascade mistuning on flutter, and cases of nonlinear cascade and wing flutter.
LSPRAY-III: A Lagrangian Spray Module
NASA Technical Reports Server (NTRS)
Raju, M. S.
2008-01-01
LSPRAY-III is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray because of its importance in aerospace application. The manual provides the user with an understanding of various models involved in the spray formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. With the development of LSPRAY-III, we have advanced the state-of-the-art in spray computations in several important ways.
LSPRAY-II: A Lagrangian Spray Module
NASA Technical Reports Server (NTRS)
Raju, M. S.
2004-01-01
LSPRAY-II is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray because of its importance in aerospace application. The manual provides the user with an understanding of various models involved in the spray formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. With the development of LSPRAY-II, we have advanced the state-of-the-art in spray computations in several important ways.
Real time avalanche detection for high risk areas.
DOT National Transportation Integrated Search
2014-12-01
Avalanches routinely occur on State Highway 21 (SH21) between Lowman and Stanley, Idaho each winter. The avalanches pose a threat to the safety of maintenance workers and the traveling public. A real-time avalanche detection system will allow the ...
Fundamentals of undervoltage breakdown through the Townsend mechanism
NASA Astrophysics Data System (ADS)
Cooley, James E.
The conditions under which an externally supplied pulse of electrons will induce breakdown in an undervoltaged, low-gain, DC discharge gap are experimentally and theoretically explored. The phenomenon is relevant to fundamental understanding of breakdown physics, to switching applications such as triggered spark gaps and discharge initiation in pulsed-plasma thrusters, and to gas-avalanche particle counters. A dimensionless theoretical description of the phenomenon is formulated and solved numerically. It is found that a significant fraction of the charge on the plates must be injected for breakdown to be achieved at low avalanche-ionization gain, when an electron undergoes fewer than approximately 10 ionizing collisions during one gap transit. It is also found that fewer injected electrons are required as the gain due to electron-impact ionization (alpha process) is increased, or as the sensitivity of the alpha process to electric field is enhanced by decreasing the reduced electric field (electric field divided by pressure, E/p). A predicted insensitivity to ion mobility implies that breakdown is determined during the first electron avalanche when space charge distortion is greatest. A dimensionless, theoretical study of the development of this avalanche reveals a critical value of the reduced electric field to be the value at the Paschen curve minimum divided by 1.6. Below this value, the net result of the electric field distortion is to increase ionization for subsequent avalanches, making undervoltage breakdown possible. Above this value, ionization for subsequent avalanches will be suppressed and undervoltage breakdown is not possible. Using an experimental apparatus in which ultraviolet laser pulses are directed onto a photo-emissive cathode of a parallel-plate discharge gap, it is found that undervoltage breakdown can occur through a Townsend-like mechanism through the buildup of successively larger avalanche generations. 
The minimum number of injected electrons required to achieve breakdown is measured in argon at pd values of 3-10 Torr-m. The required electron pulse magnitude was found to scale inversely with pressure and voltage in this parameter range. When higher-power infrared laser pulses were used to heat the cathode surface, a faster, streamer-like breakdown mechanism was occasionally observed. As an example application, an investigation into the requirements for initiating discharges in Gas-fed Pulsed Plasma Thrusters (GFPPTs) is conducted. Theoretical investigations based on order-of-magnitude characterizations of previous GFPPT designs reveal that high-conductivity arc discharges are required for critically-damped matching of circuit components, and that relatively fast streamer breakdown is preferable to minimize delay between triggering and current sheet formation. The faster breakdown mechanism observed in the experiments demonstrates that such a discharge process can occur. However, in the parameter space occupied by most thrusters, achieving the phenomenon by way of a space charge distortion caused purely by an electron pulse should not be possible. Either a transient change in the distribution of gas density, through ablation or desorption, or a thruster design that occupies a different parameter space, such as one that uses higher mass bits, higher voltages, or smaller electrode spacing, is required for undervoltage breakdown to occur.
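The Townsend alpha-process underlying these experiments gives a single-transit avalanche gain M = exp(α·d), with α/p an empirical function of the reduced field E/p. A back-of-the-envelope sketch (the constants A and B below are representative textbook-style values for an argon-like gas, chosen for illustration and not fitted to this experiment):

```python
import math

# Townsend first ionization coefficient: alpha/p = A * exp(-B / (E/p)).
# A and B are illustrative, argon-like values -- an assumption, not data
# from the experiments described above.
A = 12.0    # ionizations / (cm * Torr)
B = 180.0   # V / (cm * Torr)

def gain(voltage, pressure, gap_cm):
    """Single-transit avalanche gain M = exp(alpha * d) in a uniform
    field E = V/d. 'Low gain' in the text corresponds to alpha * d
    below roughly 10 ionizing collisions per transit."""
    e_over_p = voltage / (gap_cm * pressure)
    alpha = pressure * A * math.exp(-B / e_over_p)
    return math.exp(alpha * gap_cm)
```

Because M grows doubly exponentially in the applied voltage, an undervoltaged gap sits at modest gain, which is why a substantial injected charge, and the space-charge distortion it creates, is needed to tip the gap into breakdown.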
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
Parallel simulation of tsunami inundation on a large-scale supercomputer
NASA Astrophysics Data System (ADS)
Oishi, Y.; Imamura, F.; Sugawara, D.
2013-12-01
An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. 
In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the finite difference calculation, (2) communication between adjacent layers for the calculations to connect each layer, and (3) global communication to obtain the time step which satisfies the CFL condition in the whole domain. A preliminary test on the K computer showed the parallel efficiency on 1024 cores was 57% relative to 64 cores. We estimate that the parallel efficiency will be considerably improved by applying a 2-D domain decomposition instead of the present 1-D domain decomposition in future work. The present parallel tsunami model was applied to the 2011 Great Tohoku tsunami. The coarsest resolution layer covers a 758 km × 1155 km region with a 405 m grid spacing. A nesting of five layers was used with the resolution ratio of 1/3 between nested layers. The finest resolution region has 5 m resolution and covers most of the coastal region of Sendai city. To complete 2 hours of simulation time, the serial (non-parallel) computation took approximately 4 days on a workstation. To complete the same simulation on 1024 cores of the K computer, it took 45 minutes which is more than two times faster than real-time. This presentation discusses the updated parallel computational performance and the efficient use of the K computer when considering the characteristics of the tsunami inundation simulation model in relation to the characteristics and capabilities of the K computer.
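The load-balancing recipe described above (CPUs assigned to each nested layer in proportion to its grid-point count, followed by a 1-D row decomposition within each layer) can be sketched directly; the rounding rules here are assumptions, since the abstract does not spell them out:

```python
def allocate_cpus(layer_points, total_cpus):
    """Give each nested layer a CPU share proportional to its number of
    grid points (at least one per layer), then fix up rounding so the
    shares sum exactly to total_cpus."""
    total = float(sum(layer_points))
    shares = [max(1, int(total_cpus * n / total)) for n in layer_points]
    while sum(shares) < total_cpus:  # hand leftovers to the most under-served layer
        deficits = [total_cpus * n / total - s for n, s in zip(layer_points, shares)]
        shares[deficits.index(max(deficits))] += 1
    while sum(shares) > total_cpus:  # reclaim from the largest share if we overshot
        shares[shares.index(max(shares))] -= 1
    return shares

def decompose_rows(n_rows, n_cpus):
    """1-D domain decomposition: split n_rows grid rows among n_cpus as
    evenly as possible (the first n_rows % n_cpus ranks get one extra row)."""
    base, extra = divmod(n_rows, n_cpus)
    return [base + (1 if r < extra else 0) for r in range(n_cpus)]
```

A 2-D decomposition, which the authors identify as future work, would apply the same even split along both horizontal axes, shrinking the halo-to-interior ratio of each subdomain.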
Bessette-Kirton, Erin; Coe, Jeffrey A.; Zhou, Wendy
2018-01-01
The use of preevent and postevent digital elevation models (DEMs) to estimate the volume of rock avalanches on glaciers is complicated by ablation of ice before and after the rock avalanche, scour of material during rock avalanche emplacement, and postevent ablation and compaction of the rock avalanche deposit. We present a model to account for these processes in volume estimates of rock avalanches on glaciers. We applied our model by calculating the volume of the 28 June 2016 Lamplugh rock avalanche in Glacier Bay National Park, Alaska. We derived preevent and postevent 2-m resolution DEMs from WorldView satellite stereo imagery. Using data from DEM differencing, we reconstructed the rock avalanche and adjacent surfaces at the time of occurrence by accounting for elevation changes due to ablation and scour of the ice surface, and postevent deposit changes. We accounted for uncertainties in our DEMs through precise coregistration and an assessment of relative elevation accuracy in bedrock control areas. The rock avalanche initially displaced 51.7 ± 1.5 Mm³ of intact rock and then scoured and entrained 13.2 ± 2.2 Mm³ of snow and ice during emplacement. We calculated the total deposit volume to be 69.9 ± 7.9 Mm³. Volume estimates that did not account for topographic changes due to ablation, scour, and compaction underestimated the deposit volume by 31.0-46.8 Mm³. Our model provides an improved framework for estimating uncertainties affecting rock avalanche volume measurements in glacial environments. These improvements can contribute to advances in the understanding of rock avalanche hazards and dynamics.
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
NASA Astrophysics Data System (ADS)
Bi, Xunqiang
1997-12-01
In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputation are also discussed.
The Research of the Parallel Computing Development from the Angle of Cloud Computing
NASA Astrophysics Data System (ADS)
Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun
2017-10-01
Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing brings parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
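The MapReduce model discussed above reduces to two user-supplied phases: a map function that emits key-value pairs from each input split, and a reduce function applied after the pairs are grouped by key. The word-count example below is the standard illustration of the model, not code from the paper:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (word, 1) key-value pairs for one input split."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Shuffle + reduce: group the emitted pairs by key and sum the counts."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["the grid the cloud", "the grid"]
word_counts = reduce_phase(chain.from_iterable(map(map_phase, docs)))
# word_counts == {'the': 3, 'grid': 2, 'cloud': 1}
```

In MPI or OpenMP the programmer manages communication and shared state explicitly; in MapReduce the runtime handles partitioning, shuffling and fault tolerance, which is the trade-off the paper examines.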
I/O Parallelization for the Goddard Earth Observing System Data Assimilation System (GEOS DAS)
NASA Technical Reports Server (NTRS)
Lucchesi, Rob; Sawyer, W.; Takacs, L. L.; Lyster, P.; Zero, J.
1998-01-01
The National Aeronautics and Space Administration (NASA) Data Assimilation Office (DAO) at the Goddard Space Flight Center (GSFC) has developed the GEOS DAS, a data assimilation system that provides production support for NASA missions and will support NASA's Earth Observing System (EOS) in the coming years. The GEOS DAS will be used to provide background fields of meteorological quantities to EOS satellite instrument teams for use in their data algorithms as well as providing assimilated data sets for climate studies on decadal time scales. The DAO has been involved in prototyping parallel implementations of the GEOS DAS for a number of years and is now embarking on an effort to convert the production version from shared-memory parallelism to distributed-memory parallelism using the portable Message-Passing Interface (MPI). The GEOS DAS consists of two main components, an atmospheric General Circulation Model (GCM) and a Physical-space Statistical Analysis System (PSAS). The GCM operates on data that are stored on a regular grid while PSAS works with observational data that are scattered irregularly throughout the atmosphere. As a result, the two components have different data decompositions. The GCM is decomposed horizontally as a checkerboard with all vertical levels of each box existing on the same processing element (PE). The dynamical core of the GCM can also operate on a rotated grid, which requires communication-intensive grid transformations during GCM integration. PSAS groups observations on PEs in a more irregular and dynamic fashion.
Domain decomposition by the advancing-partition method for parallel unstructured grid generation
NASA Technical Reports Server (NTRS)
Banihashemi, legal representative, Soheila (Inventor); Pirzadeh, Shahyar Z. (Inventor)
2012-01-01
In a method for domain decomposition for generating unstructured grids, a surface mesh is generated for a spatial domain. A location of a partition plane dividing the domain into two sections is determined. Triangular faces on the surface mesh that intersect the partition plane are identified. A partition grid of tetrahedral cells, dividing the domain into two sub-domains, is generated using a marching process in which a front comprises only faces of new cells which intersect the partition plane. The partition grid is generated until no active faces remain on the front. Triangular faces on each side of the partition plane are collected into two separate subsets. Each subset of triangular faces is renumbered locally and a local/global mapping is created for each sub-domain. A volume grid is generated for each sub-domain. The partition grid and volume grids are then merged using the local-global mapping.
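The first step of the advancing-partition method, identifying surface triangles that intersect the partition plane, reduces to a sign test on vertex coordinates. A simplified sketch (an axis-aligned plane, with a vertex exactly on the plane counted with the positive side; all names are assumptions):

```python
def faces_crossing_plane(nodes, faces, x_cut):
    """Return indices of triangular faces whose vertices lie on both
    sides of the partition plane x = x_cut. `nodes` is a list of
    (x, y, z) coordinates and `faces` a list of vertex-index triples."""
    crossing = []
    for f, tri in enumerate(faces):
        sides = {(-1 if nodes[v][0] < x_cut else 1) for v in tri}
        if len(sides) == 2:  # vertices straddle the plane
            crossing.append(f)
    return crossing
```

These crossing faces would seed the front of the marching process that grows the partition grid, while the remaining faces fall cleanly into one sub-domain or the other.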
A field-shaping multi-well avalanche detector for direct conversion amorphous selenium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldan, A. H.; Zhao, W.
2013-01-15
Purpose: A practical detector structure is proposed to achieve stable avalanche multiplication gain in direct-conversion amorphous selenium radiation detectors. Methods: The detector structure is referred to as a field-shaping multi-well avalanche detector. Stable avalanche multiplication gain is achieved by eliminating field hot spots using high-density avalanche wells with insulated walls and field-shaping inside each well. Results: The authors demonstrate the impact of high-density insulated wells and field-shaping to eliminate the formation of both field hot spots in the avalanche region and high fields at the metal-semiconductor interface. Results show a semi-Gaussian field distribution inside each well using the field-shaping electrodes, and the electric field at the metal-semiconductor interface can be one order of magnitude lower than the peak value where avalanche occurs. Conclusions: This is the first attempt to design a practical direct-conversion amorphous selenium detector with avalanche gain.
Development of Parallel Code for the Alaska Tsunami Forecast Model
NASA Astrophysics Data System (ADS)
Bahng, B.; Knight, W. R.; Whitmore, P.
2014-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids, with two-way communication between the domains of each parent-child pair as waves get closer to coastal waters. Even with pre-computation, the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest-resolution Digital Elevation Models (DEMs) used by ATFM are 1/3 arc-second. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results, with the long-term aim of producing tsunami forecasts from source to high-resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs and will make possible future inclusion of new physics, such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry needs accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured-grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single-grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. The grid generation can be tedious, and one needs to pay special attention to the numerics to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
NASA Astrophysics Data System (ADS)
Matthews, John A.; Owen, Geraint; McEwen, Lindsey J.; Shakesby, Richard A.; Hill, Jennifer L.; Vater, Amber E.; Ratcliffe, Anna C.
2017-11-01
This regional inventory and study of a globally uncommon landform type reveals similarities in form and process between craters produced by snow-avalanche and meteorite impacts. Fifty-two snow-avalanche impact craters (mean diameter 85 m, range 10-185 m) were investigated through field research, aerial photographic interpretation and analysis of topographic maps. The craters are sited on valley bottoms or lake margins at the foot of steep avalanche paths (α = 28-59°), generally with an easterly aspect, where the slope of the final 200 m of the avalanche path (β) typically exceeds 15°. Crater diameter correlates with the area of the avalanche start zone, which points to snow-avalanche volume as the main control on crater size. Proximal erosional scars ('blast zones') up to 40 m high indicate up-range ejection of material from the crater, assisted by air-launch of the avalanches and impulse waves generated by their impact into water-filled craters. Formation of distal mounds up to 12 m high of variable shape is favoured by more dispersed down-range deposition of ejecta. Key to the development of snow-avalanche impact craters is the repeated occurrence of topographically-focused snow avalanches that impact with a steep angle on unconsolidated sediment. Secondary craters or pits, a few metres in diameter, are attributed to the impact of individual boulders or smaller bodies of snow ejected from the main avalanche. The process of crater formation by low-density, low-velocity, large-volume snow flows occurring as multiple events is broadly comparable with cratering by single-event, high-density, high-velocity, small-volume projectiles such as small meteorites. 
Simple comparative modelling of snow-avalanche events associated with a crater of average size (diameter 85 m) indicates that the kinetic energy of a single snow-avalanche impact event is two orders of magnitude less than that of a single meteorite-impact event capable of producing a crater of similar size, which is consistent with the incremental development of snow-avalanche impact craters through the Holocene.
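The order-of-magnitude comparison above rests on simple kinetic-energy arithmetic, E = (1/2) rho V v^2. The sketch below illustrates the reasoning; every numeric value (densities, volumes, speeds) is an illustrative assumption, not a figure from the study.

```python
# Back-of-envelope comparison: a large, low-density, low-velocity snow
# avalanche versus a small, dense, fast meteorite. All inputs are assumed.

def kinetic_energy(density, volume, speed):
    return 0.5 * density * volume * speed ** 2   # E = 1/2 m v^2, m = rho * V

snow = kinetic_energy(density=300.0, volume=50_000.0, speed=40.0)
meteorite = kinetic_energy(density=3000.0, volume=50.0, speed=15_000.0)
ratio = meteorite / snow
# The single-event meteorite energy dwarfs a single avalanche impact,
# consistent with similar-sized avalanche craters forming incrementally
# over many events rather than in one impact.
```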
Timing of wet snow avalanche activity: An analysis from Glacier National Park, Montana, USA.
Peitzsch, Erich H.; Hendrikx, Jordy; Fagre, Daniel B.
2012-01-01
Wet snow avalanches pose a problem for annual spring road opening operations along the Going-to-the-Sun Road (GTSR) in Glacier National Park, Montana, USA. A suite of meteorological metrics and snow observations has been used to forecast for wet slab and glide avalanche activity. However, the timing of spring wet slab and glide avalanches is a difficult process to forecast and requires new capabilities. For the 2011 and 2012 spring seasons we tested a previously developed classification tree model which had been trained on data from 2003-2010. For 2011, this model yielded a 91% predictive rate for avalanche days. For 2012, the model failed to capture any of the avalanche days observed. We then investigated these misclassified avalanche days in the 2012 season by comparing them to the misclassified days from the original dataset from which the model was trained. Results showed no significant difference in air temperature variables between this year and the original training data set for these misclassified days. This indicates that 2012 was characterized by avalanche days most similar to those that the model struggled with in the original training data. The original classification tree model showed air temperature to be a significant variable in wet avalanche activity which implies that subsequent movement of meltwater through the snowpack is also important. To further understand the timing of water flow we installed two lysimeters in fall 2011 before snow accumulation. Water flow showed a moderate correlation with air temperature later in the season and no synchronous pattern associated with wet slab and glide avalanche activity. We also characterized snowpack structure as the snowpack transitioned from a dry to a wet snowpack throughout the spring. This helped to assess potential failure layers of wet snow avalanches and the timing of avalanches compared to water moving through the snowpack. 
These tools (classification tree model and lysimeter data), combined with standard meteorological and avalanche observations, proved useful to forecasters regarding the timing of wet snow avalanche activity along the GTSR.
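The "91% predictive rate for avalanche days" quoted above is essentially a hit rate: the fraction of observed avalanche days the model flagged. A minimal scoring sketch, with synthetic day labels that are not data from the GTSR study:

```python
# Score a wet-avalanche day classifier by the fraction of observed
# avalanche days it correctly flagged. Dates below are synthetic.

def avalanche_day_hit_rate(predicted, observed):
    """predicted/observed: sets of dates flagged / confirmed as avalanche days."""
    if not observed:
        return None
    return len(predicted & observed) / len(observed)

observed = {"2011-04-02", "2011-04-05", "2011-04-11"}
predicted = {"2011-04-02", "2011-04-05", "2011-04-09"}
rate = avalanche_day_hit_rate(predicted, observed)   # 2 of 3 observed days hit
```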
NASA Astrophysics Data System (ADS)
Marchetti, Emanuele; van Herwijnen, Alec; Ripepe, Maurizio
2017-04-01
While flowing downhill, a snow avalanche is coupled to both the ground and the atmosphere and radiates seismic and infrasonic waves. Infrasound waves are mostly generated by the powder cloud of the avalanche, while seismic waves are mostly generated by the dense snow mass flowing on the ground, resulting in different energy partitioning between seismic and infrasound for different kinds of avalanches. This results in a general uncertainty on the efficiency of seismic and infrasound monitoring, in terms of the size and source-to-receiver distance of detectable events. Nevertheless, both seismic and infrasound systems have been used for the remote detection of snow avalanches, as reliable detection is of crucial importance to better understand triggering mechanisms, identify possible precursors, and improve avalanche forecasting. We present infrasonic and seismic array data collected during the winters of 2015-2016 and 2016-2017 in the Dischma valley above Davos, Switzerland, where a five-element infrasound array and a seven-element seismic array had been deployed at short distance from each other, with several avalanche paths nearby. Avalanche observation in the area is performed through automatic cameras providing additional information on the location, type (dry or wet), size and occurrence time of the avalanches released. The use of arrays instead of single sensors increases the signal-to-noise ratio and allows events to be characterized in terms of back-azimuth and apparent velocity of the wave-field, thus providing an indication of the source position of the recorded signal. For selected snow avalanches captured with automatic cameras, we therefore perform seismic and infrasound array processing to constrain the avalanche path and dynamics and investigate the partitioning of seismic and infrasound energy for the different portions of the avalanche path.
Moreover, we compare results of seismic and infrasound array processing for the whole 2015-2016 winter season in order to investigate the ability of the two monitoring systems to identify and characterize snow avalanches and the benefit of combined seismo-acoustic analysis.
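The back-azimuth and apparent-velocity estimation mentioned above can be illustrated with the plane-wave model: inter-sensor delays determine the slowness vector, whose direction gives the back-azimuth and whose inverse magnitude gives the apparent velocity. The array geometry and delays below are synthetic assumptions, not Dischma data.

```python
import math

# Plane-wave array processing sketch: solve x_i*sx + y_i*sy = t_i for the
# slowness vector from delays at two sensors relative to a reference sensor.

def slowness_from_delays(positions, delays):
    """Exact solve from two sensor pairs (positions/delays vs. a reference)."""
    (x1, y1), (x2, y2) = positions
    t1, t2 = delays
    det = x1 * y2 - x2 * y1
    sx = (t1 * y2 - t2 * y1) / det
    sy = (x1 * t2 - x2 * t1) / det
    return sx, sy

# Synthetic infrasound wave: 330 m/s, arriving from a back-azimuth of 45 deg.
c, baz_true = 330.0, 45.0
px, py = -math.sin(math.radians(baz_true)), -math.cos(math.radians(baz_true))
positions = [(100.0, 0.0), (0.0, 100.0)]          # metres east/north of reference
delays = [(px * x + py * y) / c for x, y in positions]

sx, sy = slowness_from_delays(positions, delays)
back_azimuth = math.degrees(math.atan2(-sx, -sy)) % 360.0   # direction to source
apparent_velocity = 1.0 / math.hypot(sx, sy)
```

Real array processing estimates the delays by cross-correlation or beamforming over many sensors; the two-pair solve here just exposes the geometry.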
Feng, Shuo
2014-01-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier-domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient based method, without reducing the accuracy of the desired excitation patterns. PMID:24834420
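The conjugate-gradient step at the heart of such pulse design solves a least-squares system (the regularized normal equations) for the pulse samples that best produce the target excitation. A minimal sketch, assuming a tiny real symmetric positive-definite matrix in place of the actual gridded Bloch system operator:

```python
# Conjugate gradient on a small SPD system standing in for A^H A b = A^H d,
# where b are pulse samples and d the target excitation. Values illustrative.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def cg(apply_A, b, n, iters=50, tol=1e-12):
    """Conjugate gradient for a symmetric positive-definite operator."""
    x = [0.0] * n
    r = b[:]                      # residual; x0 = 0 so r0 = b
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

AtA = [[4.0, 1.0], [1.0, 3.0]]    # stand-in for the normal-equations matrix
Atd = [1.0, 2.0]
pulse = cg(lambda v: matvec(AtA, v), Atd, 2)
```

The paper's speedup comes from applying the operator via Fourier-domain gridding rather than dense matrices; the CG iteration itself is unchanged.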
Load Balancing Unstructured Adaptive Grids for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1996-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
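The accept/reject logic described above can be sketched as a simple cost-benefit check: repartition only if the imbalance warrants it, and accept the remap only if the time saved before the next adaption exceeds the redistribution cost. The cost model and numbers below are illustrative assumptions, not the paper's.

```python
# Decide whether to accept a candidate repartitioning after mesh adaption.

def imbalance(loads):
    """Max processor load over average load (1.0 = perfectly balanced)."""
    return max(loads) / (sum(loads) / len(loads))

def should_remap(old_loads, new_loads, redistribution_cost, steps_until_next_adaption):
    # Per-step time is set by the most loaded processor; the remap pays off
    # only if the saving over the coming steps exceeds the one-time cost.
    saved = (max(old_loads) - max(new_loads)) * steps_until_next_adaption
    return saved > redistribution_cost

old = [120.0, 80.0, 60.0, 140.0]     # work units per processor after adaption
new = [100.0, 101.0, 99.0, 100.0]    # candidate repartitioning
remap = should_remap(old, new, redistribution_cost=200.0,
                     steps_until_next_adaption=10)
```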
Multitasking for flows about multiple body configurations using the chimera grid scheme
NASA Technical Reports Server (NTRS)
Dougherty, F. C.; Morgan, R. L.
1987-01-01
The multitasking of a finite-difference scheme using multiple overset meshes is described. In this chimera, or multiple overset mesh, approach, a multiple-body configuration is mapped using a major grid about the main component of the configuration, with minor overset meshes used to map each additional component. This type of code is well suited to multitasking. Both steady and unsteady two-dimensional computations are run on parallel processors of a Cray X-MP/48, usually with one mesh per processor. Flow field results are compared with single-processor results to demonstrate the feasibility of running multiple-mesh codes on parallel processors and to show the increase in efficiency.
Parallel solution of high-order numerical schemes for solving incompressible flows
NASA Technical Reports Server (NTRS)
Milner, Edward J.; Lin, Avi; Liou, May-Fun; Blech, Richard A.
1993-01-01
A new parallel numerical scheme for solving incompressible steady-state flows is presented. The algorithm uses a finite-difference approach to solving the Navier-Stokes equations. The algorithm is scalable and expandable: it may be used with as few as two processors or with as many as are available, and the code is general enough to accommodate any grid size. Four processors of the NASA LeRC Hypercluster were used to solve for steady-state flow in a driven square cavity. The Hypercluster was configured in a distributed-memory, hypercube-like architecture. By using a 50-by-50 finite-difference solution grid, an efficiency of 74 percent (a speedup of 2.96) was obtained.
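The reported figures are internally consistent: parallel efficiency is speedup divided by processor count, and 2.96 / 4 = 0.74, i.e. 74 percent. A two-line check:

```python
# Parallel efficiency = speedup / number of processors.

def parallel_efficiency(speedup, nprocs):
    return speedup / nprocs

eff = parallel_efficiency(speedup=2.96, nprocs=4)   # 0.74, i.e. 74 percent
```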
Fast, Massively Parallel Data Processors
NASA Technical Reports Server (NTRS)
Heaton, Robert A.; Blevins, Donald W.; Davis, ED
1994-01-01
The proposed fast, massively parallel data processor contains an 8x16 array of processing elements with an efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on an "X" interconnection grid and with external memory via a high-capacity input/output bus. This approach to conditional operation nearly doubles the speed of various arithmetic operations.
Magnetic Field Would Reduce Electron Backstreaming in Ion Thrusters
NASA Technical Reports Server (NTRS)
Foster, John E.
2003-01-01
The imposition of a magnetic field has been proposed as a means of reducing the electron backstreaming problem in ion thrusters. Electron backstreaming refers to the backflow of electrons into the ion thruster. Backstreaming electrons are accelerated by the large potential difference that exists between the ion-thruster acceleration electrodes, which otherwise accelerates positive ions out of the engine to develop thrust. The energetic beam formed by the backstreaming electrons can damage the discharge cathode, as well as other discharge surfaces upstream of the acceleration electrodes. The electron-backstreaming condition occurs when the center potential of the ion accelerator grid is no longer sufficiently negative to prevent electron diffusion back into the ion thruster. This typically occurs over extended periods of operation as accelerator-grid apertures enlarge due to erosion. As a result, ion thrusters are required to operate at increasingly negative accelerator-grid voltages in order to prevent electron backstreaming. These larger negative voltages give rise to higher accelerator-grid erosion rates, which in turn accelerate aperture enlargement. Electron backstreaming due to accelerator-grid hole enlargement has been identified as a failure mechanism that will limit ion-thruster service lifetime. The proposed method would make it possible not only to reduce the electron backstreaming current at and below the backstreaming voltage limit, but also to reduce the backstreaming voltage limit itself. This reduction in the voltage at which electron backstreaming occurs provides operating margin and thereby reduces the magnitude of negative voltage that must be placed on the accelerator grid. Such a reduction lowers accelerator-grid erosion rates. The basic idea behind the proposed method is to impose a spatially uniform magnetic field downstream of the accelerator electrode that is oriented transverse to the thruster axis.
The magnetic field must be sufficiently strong to impede backstreaming electrons, but not so strong as to significantly perturb ion trajectories. An electromagnet or permanent magnetic circuit can be used to impose the transverse magnetic field downstream of the accelerator-grid electrode. For example, in the case of an accelerator grid containing straight, parallel rows of apertures, one can apply nearly uniform magnetic fields across all the apertures by the use of permanent magnets of alternating polarity connected to pole pieces laid out parallel to the rows, as shown in the left part of the figure. For low-temperature operation, the pole pieces can be replaced with bar magnets of alternating polarity. Alternatively, for the same accelerator grid, one could use an electromagnet in the form of current-carrying rods laid out parallel to the rows.
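The requirement that the field impede electrons but not ions follows from the gyroradius r = mv/(qB): at comparable energies the electron's radius is orders of magnitude smaller than a heavy ion's. A back-of-envelope sketch; the field strength and particle energies are illustrative assumptions, not values from the proposal.

```python
import math

# Gyroradius comparison for a transverse field downstream of the grid.
Q = 1.602e-19        # elementary charge, C
M_E = 9.109e-31      # electron mass, kg
M_XE = 2.18e-25      # xenon ion mass, kg (a common ion-thruster propellant)

def gyroradius(mass, energy_ev, b_tesla):
    v = math.sqrt(2.0 * energy_ev * Q / mass)   # speed from kinetic energy
    return mass * v / (Q * b_tesla)             # r = m v / (q B)

B = 0.002                                # assumed 20-gauss transverse field
r_electron = gyroradius(M_E, 100.0, B)   # ~100 eV backstreaming electron
r_ion = gyroradius(M_XE, 1000.0, B)      # ~1 keV beam ion
# r_electron is centimetre-scale, so electrons are turned back near the grid;
# r_ion is tens of metres, so beam-ion trajectories stay essentially straight.
```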
Crackling to periodic transition in a granular stick-slip experiment
NASA Astrophysics Data System (ADS)
Abed Zadeh, Aghil; Barés, Jonathan; Behringer, Robert
We perform a stick-slip experiment to characterize avalanches in time and space for granular materials. In our experiment, a constant speed stage pulls a slider which rests on a vertical bed of circular photo-elastic particles in a 2D system. The stage is connected to the slider by a spring. We measure the force on the spring by a force sensor attached to the spring. We study the avalanche size statistics, and other seismicity laws of slip avalanches. Using the power spectrum of the force signal and avalanche statistics, we analyze the effect of the loading speed and of the spring stiffness and we capture a transition from crackling to periodic regime by changing these parameters. From a more local point of view and by using a high speed camera and the photo-elastic properties of our particles, we characterize the local stress change and flow of particles during slip avalanches. By image processing, we detect the local avalanches as connected components in space and time, and we study the avalanche size probability density functions (PDF). The PDF of avalanches obey power laws both at global and local scales, but with different exponents. We try to understand the correlation of local avalanches in space and the way they coarse grain to the global avalanches. NSF Grant DMR-1206351, NASA Grant NNX15AD38G, and the William M. Keck Foundation.
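The power-law exponents quoted for the avalanche-size PDFs are commonly estimated by maximum likelihood (the Hill estimator), alpha = 1 + n / sum(ln(x_i / x_min)). A sketch on a synthetic sample, not the experiment's data:

```python
import math

# Maximum-likelihood (Hill) estimate of a power-law PDF exponent.

def power_law_alpha(sizes, x_min):
    tail = [s for s in sizes if s >= x_min]
    return 1.0 + len(tail) / sum(math.log(s / x_min) for s in tail)

# Deterministic sample from P(x) ~ x^(-2) via the inverse CDF x = x_min/(1-u),
# with u taken on a uniform midpoint grid.
x_min = 1.0
sample = [x_min / (1.0 - (i + 0.5) / 1000.0) for i in range(1000)]
alpha = power_law_alpha(sample, x_min)   # close to the true exponent of 2
```

Comparing such exponents for local versus global avalanche definitions is exactly the kind of analysis the abstract describes.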
NASA Astrophysics Data System (ADS)
Vickers, H.; Eckerstorfer, M.; Malnes, E.; Larsen, Y.; Hindberg, H.
2016-11-01
Avalanches are a natural hazard that occur in mountainous regions of Troms County in northern Norway during winter and can cause loss of human life and damage to infrastructure. Knowledge of when and where they occur especially in remote, high mountain areas is often lacking due to difficult access. However, complete, spatiotemporal avalanche activity data sets are important for accurate avalanche forecasting, as well as for deeper understanding of the link between avalanche occurrences and the triggering snowpack and meteorological factors. It is therefore desirable to develop a technique that enables active mapping and monitoring of avalanches over an entire winter. Avalanche debris can be observed remotely over large spatial areas, under all weather and light conditions by synthetic aperture radar (SAR) satellites. The recently launched Sentinel-1A satellite acquires SAR images covering the entire Troms County with frequent updates. By focusing on a case study from New Year 2015 we use Sentinel-1A images to develop an automated avalanche debris detection algorithm that utilizes change detection and unsupervised object classification methods. We compare our results with manually identified avalanche debris and field-based images to quantify the algorithm accuracy. Our results indicate that a correct detection rate of over 60% can be achieved, which is sensitive to several algorithm parameters that may need revising. With further development and refinement of the algorithm, we believe that this method could play an effective role in future operational monitoring of avalanches within Troms and has potential application in avalanche forecasting areas worldwide.
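The detection chain described above (change detection followed by object grouping) can be sketched as: difference two backscatter grids, threshold the increase, and collect flagged pixels into connected components as candidate debris objects. The tiny grids and threshold are synthetic assumptions, not Sentinel-1 data or the authors' parameters.

```python
# Toy SAR change detection: threshold the backscatter increase between two
# acquisitions, then flood-fill 4-connected components as debris candidates.

def detect_debris(before, after, threshold):
    rows, cols = len(before), len(before[0])
    mask = [[after[r][c] - before[r][c] > threshold for c in range(cols)]
            for r in range(rows)]
    seen, components = set(), []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                stack, comp = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                components.append(comp)
    return components

before = [[0.1, 0.1, 0.1, 0.1],
          [0.1, 0.1, 0.1, 0.1],
          [0.1, 0.1, 0.1, 0.1]]
after  = [[0.9, 0.9, 0.1, 0.1],
          [0.1, 0.9, 0.1, 0.1],
          [0.1, 0.1, 0.1, 0.8]]
debris = detect_debris(before, after, threshold=0.5)   # 2 candidate objects
```

A production algorithm would add radiometric calibration, speckle filtering, and object classification on top of this core idea.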
High performance x-ray anti-scatter grid
Logan, C.M.
1995-05-23
Disclosed are an x-ray anti-scatter grid for x-ray imaging, particularly for screening mammography, and a method for fabricating same. X-rays incident along a direct path pass through a grid composed of a plurality of parallel or crossed openings, microchannels, grooves, or slots etched in a substrate, such as silicon, having the walls of the microchannels or slots coated with a high-opacity material, such as gold, while x-rays incident at angles with respect to the slots of the grid, arising from scatter, are blocked. The thickness of the substrate is dependent on the specific application of the grid, whereby a substrate of the grid for mammography would be thinner than one for chest radiology. Instead of coating the walls of the slots, they could be filled with an appropriate liquid, such as mercury. 4 Figs.
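The selectivity of such a grid is geometric: a ray fully traverses a slot only if its angle off the slot axis stays below roughly atan(slot width / slot depth). A small sketch; the dimensions are illustrative assumptions, not values from the patent.

```python
import math

# Geometric cutoff angle of an anti-scatter slot: rays steeper than this
# strike the opaque (gold-coated) walls and are absorbed.

def cutoff_angle_deg(slot_width_um, slot_depth_um):
    return math.degrees(math.atan(slot_width_um / slot_depth_um))

cutoff = cutoff_angle_deg(slot_width_um=25.0, slot_depth_um=500.0)  # ~2.9 deg
direct_ray_passes = 0.0 < cutoff          # normal-incidence primary x-rays pass
scattered_ray_blocked = 30.0 > cutoff     # a 30-deg scattered ray hits a wall
```

This also shows why substrate thickness depends on the application: a deeper slot (thicker substrate) narrows the acceptance angle.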
NASA Astrophysics Data System (ADS)
Yugsi Molina, F. X.; Hermanns, R. L.; Crosta, G. B.; Dehls, J.; Sosio, R.; Sepúlveda, S. A.
2012-04-01
Iquique is a city of about 215,000 inhabitants (Chilean national census 2002) settled on one of the seismic gaps in the South American subduction zone, where an M > 8 earthquake, with a return period of ca. 100 yr now overdue, is expected in the near future. The city has only two access roads, coming from the east and south. The road to the east comes down along the escarpment that connects the Coastal Cordillera to the Coastal Plain. The road has been blocked by small-magnitude earthquake-triggered landslides at least once in recent years. The second road, coming from the south, crosses along the Coastal Plain and connects the city to the airport, where at least ten ancient debris deposits related to rock avalanches are found. These facts show the importance of determining the effects of a future high-magnitude earthquake on the stability of the slopes in the area and the impact of possible slope failures on people, infrastructure and emergency management. The present work covers an area of approximately 130 km2 parallel to the coastline to the south of Iquique, divided into the two main morphological units briefly mentioned above. The eastern part corresponds to the Coastal Cordillera, a set of smoothed hills and shallow valleys that reaches up to 1200 m asl. This sector is limited to the west by a steep escarpment followed by the Coastal Plain and a narrow emerged marine plateau (1-3 km wide) locally overlain by deposits of recent rock avalanches. Rock avalanche events have recurrently occurred at two sites to the north and center of the study area on the Coastal Cordillera escarpment. Another major single event has been mapped to the south. Marls, red and black shales, and shallow-marine glauconitic deposits of Jurassic age constitute the source rock for the rock avalanches at all sites. Clusters of deposits are found at the first two sites (retrogressive advance), with younger events running shorter distances and partially overlying the older ones.
Multiple lobes have been mapped, characterized by well-defined lateral levees and clear internal morphological features (ridges and furrows, hummocks). Rock avalanche run-out simulations have been carried out to back-analyze the sites using DAN 3D and a 3 m pixel resolution digital elevation model (DEM) obtained from stereoscopic GeoEye-1 images, to assess the parameters that controlled the propagation mechanism and impact area extent of the events. The older lobes were dated by radiocarbon methods. Results indicate ages greater than 40,000 yr BP for the northern site. The second site could only be dated relative to an underlying terrace, which proved older than the age limit of radiocarbon dating (43,500 yr BP). All the deposits are positioned well above (40-70 m) present sea level and, at the reported uplift rates for the area, could be associated with events older than some hundreds of thousands of years. A more complete record of the failure history of the sites will be obtained when results of cosmogenic nuclide (CN) and luminescence dating become available later this year. Several other smaller rock avalanches have been mapped in the study area. Satellite-based radar interferometry (InSAR) was performed using ERS-1 and ERS-2 scenes from 1995-2000 as well as ENVISAT ASAR scenes from 2004-2010. Both datasets show only small deformation in the area. This deformation includes sliding of small surficial slope deposits and subsidence apparently due to local groundwater withdrawal. No deformation of bedrock along the escarpment edge is observed. Results show that only major rock avalanches could reach the main access roads to Iquique, and currently no large slope segments show signs of large displacement rates. Moreover, there is no strong correlation between M > 8 earthquake return periods and the ages of the dated deposits, which implies that large rock avalanches could have been triggered by other factors.
Hence, from a hazard and risk perspective, it is unlikely that large rock avalanches, that could block the access roads to the city, would occur in the near future. Results from CN and luminescence dating will help to get a better understanding of the conditioning and triggering of past events.
Generalizing the TRAPRG and TRAPAX finite elements
NASA Technical Reports Server (NTRS)
Hurwitz, M. M.
1983-01-01
The NASTRAN TRAPRG and TRAPAX finite elements are very restrictive as to shape and grid point numbering. The elements must be trapezoidal with two sides parallel to the radial axis. In addition, the ordering of the grid points on the element connection card must follow strict rules. The paper describes the generalization of these elements so that these restrictions no longer apply.
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image analysis parameters. Additionally, we investigated the use of cutting-edge data-analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
Reconfigurable Parallel Computer Architectures for Space Applications
2012-08-07
2.2.6 Cellular Wiring Grid Convention... The panel is a pegboard-like structure, which does not articulate specific sockets, but rather provides a continuous grid of contact pads and... platforms (such as spacecraft). We envision that this might be achieved by assembling a number of tile-like panels, each a "smart substrate"
PEGASUS 5: An Automated Pre-Processor for Overset-Grid CFD
NASA Technical Reports Server (NTRS)
Suhs, Norman E.; Rogers, Stuart E.; Dietz, William E.; Kwak, Dochan (Technical Monitor)
2002-01-01
An all new, automated version of the PEGASUS software has been developed and tested. PEGASUS provides the hole-cutting and connectivity information between overlapping grids, and is used as the final part of the grid generation process for overset-grid computational fluid dynamics approaches. The new PEGASUS code (Version 5) has many new features: automated hole cutting; a projection scheme for fixing gaps in overset surfaces; more efficient interpolation search methods using an alternating digital tree; hole-size optimization based on adding additional layers of fringe points; and an automatic restart capability. The new code has also been parallelized using the Message Passing Interface standard. The parallelization performance provides efficient speed-up of the execution time by an order of magnitude, and up to a factor of 30 for very large problems. The results of three example cases are presented: a three-element high-lift airfoil, a generic business jet configuration, and a complete Boeing 777-200 aircraft in a high-lift landing configuration. Comparisons of the computed flow fields for the airfoil and 777 test cases between the old and new versions of the PEGASUS codes show excellent agreement with each other and with experimental results.
Parallel hyperbolic PDE simulation on clusters: Cell versus GPU
NASA Astrophysics Data System (ADS)
Rostrup, Scott; De Sterck, Hans
2010-12-01
Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. 
Program summary
Program title: SWsolver
Catalogue identifier: AEGY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v3
No. of lines in distributed program, including test data, etc.: 59 168
No. of bytes in distributed program, including test data, etc.: 453 409
Distribution format: tar.gz
Programming language: C, CUDA
Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator.
Operating system: Linux
Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs.
RAM: Tested on problems requiring up to 4 GB per compute node.
Classification: 12
External routines: MPI, CUDA, IBM Cell SDK
Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA.
Solution method: SWsolver provides three implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster.
Additional comments: Sub-program numdiff is used for the test run.
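As a rough illustration of the coarsest level of parallelism described above (MPI dividing a regular Cartesian grid across cluster nodes), the following sketch computes a one-dimensional block decomposition with halo-exchange neighbors. The function name and layout are hypothetical, not taken from SWsolver.

```python
def decompose(n_cells, n_ranks, rank):
    """Split n_cells grid cells along one axis among n_ranks MPI ranks.

    Returns (start, stop) of this rank's owned cells and the ranks of
    its left/right halo-exchange neighbors (None at domain boundaries).
    """
    base, extra = divmod(n_cells, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    left = rank - 1 if rank > 0 else None
    right = rank + 1 if rank < n_ranks - 1 else None
    return start, stop, left, right

# Example: 100 cells over 8 ranks; rank 0 owns cells [0, 13).
print(decompose(100, 8, 0))
```

In a real MPI code each rank would allocate its owned cells plus one ghost cell per neighbor and exchange the ghost values every time step.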
1983-01-01
The resolution of the computational grid is thereby defined according to the actual requirements of ... and also leads to an expression for "dz". ... Accuracy and computational economy are achieved simultaneously by redistributing the computational grid points according to the physical requirements of the problem ... redistributing the computational Eulerian grid points according to the physical requirements of the nonlinear ... implemented using a two-dimensional time-dependent finite ...
Breaking CFD Bottlenecks in Gas-Turbine Flow-Path Design
NASA Technical Reports Server (NTRS)
Davis, Roger L.; Dannenhoffer, John F., III; Clark, John P.
2010-01-01
New ideas are forthcoming to break existing bottlenecks in using CFD during design: CAD-based automated grid generation; multi-disciplinary use of embedded, overset grids to eliminate complex gridding problems; use of time-averaged detached-eddy simulations as the norm instead of "steady" RANS to include effects of self-excited unsteadiness; and combined GPU/core parallel computing to provide over an order-of-magnitude increase in performance/price ratio. Gas-turbine applications are shown here, but these ideas can be used for other Air Force, Navy, and NASA applications.
Modeling and Scaling of the Distribution of Trade Avalanches in a STOCK Market
NASA Astrophysics Data System (ADS)
Kim, Hyun-Joo
We study the trading activity in the Korea Stock Exchange by considering trade avalanches. A series of successive trades with small inter-trade time intervals is regarded as a trade avalanche, whose size s is defined as the number of trades in the series. We measure the distribution P(s) of trade avalanche sizes and find that it follows the power-law behavior P(s) ~ s^(-α) with the exponent α ≈ 2 for the two stocks with the largest number of trades. A simple stochastic model describing the power-law behavior of the trade avalanche size distribution is introduced. In the model it is assumed that some trades induce accompanying trades, which results in trade avalanches, and we find that the distribution of trade avalanche sizes also follows a power law with the exponent α ≈ 2.
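The cascade mechanism described above (some trades inducing accompanying trades) can be illustrated with a simple branching simulation. The Poisson offspring choice and all parameter values here are assumptions for illustration, not the authors' model.

```python
import math
import random

def poisson(mu, rng):
    # Knuth's method for Poisson sampling, adequate for small mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def avalanche_size(mu, rng, cap=10**5):
    """Size of one trade avalanche: an initial trade plus the cascade of
    follow-on trades it induces (branching process with mean offspring mu)."""
    size, active = 1, 1
    while active and size < cap:
        # Each active trade independently induces a random number of
        # accompanying trades.
        children = sum(poisson(mu, rng) for _ in range(active))
        size += children
        active = children
    return size

rng = random.Random(1)
sizes = [avalanche_size(0.95, rng) for _ in range(2000)]
print(min(sizes), max(sizes))
```

Near the critical mean offspring of 1, the sampled sizes become heavy-tailed, which is the qualitative behavior the abstract reports.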
Negative feedback avalanche diode
NASA Technical Reports Server (NTRS)
Itzler, Mark Allen (Inventor)
2010-01-01
A single-photon avalanche detector is disclosed that is operable at wavelengths greater than 1000 nm and at operating speeds greater than 10 MHz. The single-photon avalanche detector comprises a thin-film resistor and avalanche photodiode that are monolithically integrated such that little or no additional capacitance is associated with the addition of the resistor.
DOT National Transportation Integrated Search
2009-04-01
The 151 Avalanche, near Jackson, Wyoming, has historically run onto the road below 1.5 to 2 times a year. The road, US 89/191, is four lanes and carries an estimated 8,000 vehicles per day in the winter months. The starting zone of the 151 Avala...
Reardon, Blase; Lundy, Chris
2004-01-01
The annual spring opening of the Going-to-the-Sun Road in Glacier National Park presents a unique avalanche forecasting challenge. The highway traverses dozens of avalanche paths mid-track in a 23-kilometer section that crosses the Continental Divide. Workers removing seasonal snow and avalanche debris are exposed to paths that can produce avalanches of destructive class 4. The starting zones for most slide paths are within proposed Wilderness, and explosive testing or control are not currently used. Spring weather along the Divide is highly variable; rain-on-snow events are common, storms can bring several feet of new snow as late as June, and temperature swings can be dramatic. Natural avalanches - dry and wet slab, dry and wet loose, and glide avalanches - present a wide range of hazards and forecasting issues. This paper summarizes the forecasting program instituted in 2002 for the annual snow removal operations. It focuses on tools and techniques for forecasting natural wet snow avalanches by incorporating two case studies, including a widespread climax wet slab cycle in 2003. We examine weather and snowpack conditions conducive to wet snow avalanches, indicators for instability, and suggest a conceptual model for wet snow stability in a northern intermountain snow climate.
Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows
NASA Astrophysics Data System (ADS)
Xiao, Xudong
Three LES/RANS hybrid schemes have been proposed for the prediction of high-speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25-degree compression-expansion ramp and a Mach 2.79 flow over a 20-degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for the IBM SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent RANS regions from appearing inside the LES region. In the 25-degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid-refinement studies demonstrated the importance of using a grid-independent blending function, which further improved agreement with experiment in the recovery region. In the 20-degree ramp case, with a relatively finer grid, the hybrid scheme with the grid-independent blending function predicted the flow field well in both the separation and recovery regions. Therefore, with an appropriately fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
NASA Astrophysics Data System (ADS)
Xue, Peng; Fu, Guicui
2017-03-01
Dynamic avalanche has a strong impact on the switching robustness of the carrier-stored trench bipolar transistor (CSTBT). The purpose of this work is to investigate the CSTBT's dynamic avalanche mechanism during the clamped inductive turn-off transient. First, using a Mitsubishi 600 V/150 A CSTBT and an Infineon 600 V/200 A field-stop insulated gate bipolar transistor (FS-IGBT), the clamped inductive turn-off characteristics are obtained by double-pulse testing. The unclamped inductive switching (UIS) test is also used to identify the CSTBT's clamping voltage under dynamic avalanche conditions. Analysis of the test data shows that the CSTBT's dynamic avalanche is abnormal and can be triggered under much looser conditions than in the conventional buffer-layer IGBT. The comparison between the FS-IGBT's and CSTBT's experimental results implies that the CSTBT's abnormal dynamic avalanche phenomenon may be induced by the carrier storage (CS) layer. Based on semiconductor physics, the electric field distribution and dynamic avalanche generation in the depletion region are analyzed. The analysis confirms that the CS layer is the root cause of the CSTBT's abnormal dynamic avalanche mechanism. Moreover, the CSTBT's negative gate-capacitance effect is also investigated to clarify the underlying mechanism of the gate voltage bump observed in the test. Finally, mixed-mode numerical simulation is used to reproduce the CSTBT's dynamic avalanche behavior. The simulation results validate the proposed dynamic avalanche mechanisms.
Execution of a parallel edge-based Navier-Stokes solver on commodity graphics processor units
NASA Astrophysics Data System (ADS)
Corral, Roque; Gisbert, Fernando; Pueblas, Jesus
2017-02-01
The implementation of an edge-based three-dimensional Reynolds-averaged Navier-Stokes solver for unstructured grids able to run on multiple graphics processing units (GPUs) is presented. Loops over edges, which are the most time-consuming part of the solver, have been written to exploit the massively parallel capabilities of GPUs. Non-blocking communications between parallel processes and between the GPU and the central processing unit (CPU) have been used to enhance code scalability. The code is written in a mixture of C++ and OpenCL, allowing execution of the source code on GPUs. The Message Passing Interface (MPI) library is used to allow the parallel execution of the solver on multiple GPUs. A comparative study of the solver's parallel performance is carried out using a cluster of CPUs and another of GPUs. It is shown that a single GPU is up to 64 times faster than a single CPU core. The parallel scalability of the solver is degraded mainly by the loss of computing efficiency of the GPU as the case size decreases; for large enough grid sizes, however, the scalability improves markedly. A cluster featuring commodity GPUs and a high-bandwidth network is ten times less costly and consumes 33% less energy than a CPU-based cluster of equivalent computational power.
Multilevel Parallelization of AutoDock 4.2.
Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P
2011-04-28
Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multilevel parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.
Regional snow-avalanche detection using object-based image analysis of near-infrared aerial imagery
NASA Astrophysics Data System (ADS)
Korzeniowska, Karolina; Bühler, Yves; Marty, Mauro; Korup, Oliver
2017-10-01
Snow avalanches are destructive mass movements in mountain regions that continue to claim lives and cause infrastructural damage and traffic detours. Given that avalanches often occur in remote and poorly accessible steep terrain, their detection and mapping is laborious and time-consuming. Nonetheless, systematic avalanche detection over large areas could help to generate more complete and up-to-date inventories (cadastres) necessary for validating avalanche forecasting and hazard mapping. In this study, we focused on automatically detecting avalanches and classifying them into release zones, tracks, and run-out zones based on 0.25 m near-infrared (NIR) ADS80-SH92 aerial imagery, using an object-based image analysis (OBIA) approach. Our algorithm takes into account the brightness, the normalised difference vegetation index (NDVI), the normalised difference water index (NDWI), and its standard deviation (SDNDWI) to distinguish avalanches from other land-surface elements. Using normalised parameters allows the method to be applied across large areas. We trained the method by analysing the properties of snow avalanches in three 4 km² areas near Davos, Switzerland. We compared the results with manually mapped avalanche polygons and obtained a user's accuracy of > 0.9 and a Cohen's kappa of 0.79-0.85. Testing the method for a larger area of 226.3 km², we estimated producer's and user's accuracies of 0.61 and 0.78, respectively, with a Cohen's kappa of 0.67. Detected avalanches that overlapped with reference data by > 80% occurred randomly throughout the testing area, showing that our method avoids overfitting. Our method has potential for large-scale avalanche mapping, although further investigations into other regions are desirable to verify the robustness of the selected thresholds and the transferability of the method.
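The band indices that drive the OBIA classification described above can be sketched per pixel as follows. The threshold values and the function name are illustrative placeholders, not the trained values from the study.

```python
def ndvi(nir, red):
    """Normalised difference vegetation index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndwi(green, nir):
    """Normalised difference water index (green/NIR formulation)."""
    return (green - nir) / (green + nir) if (green + nir) else 0.0

def looks_like_avalanche_snow(nir, red, green, brightness,
                              max_ndvi=0.1, min_ndwi=0.2,
                              min_brightness=0.6):
    """Crude per-pixel pre-classification: bright, non-vegetated,
    snow-like pixels are avalanche candidates. All thresholds here are
    illustrative, not the paper's trained values."""
    return (brightness >= min_brightness
            and ndvi(nir, red) <= max_ndvi
            and ndwi(green, nir) >= min_ndwi)

# Bright snow-like pixel versus a dark vegetated pixel.
print(looks_like_avalanche_snow(0.5, 0.55, 0.9, 0.8))
print(looks_like_avalanche_snow(0.7, 0.2, 0.3, 0.4))
```

The actual method operates on image objects (segments) rather than single pixels, and additionally uses the standard deviation of NDWI within each object.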
Automatic detection of snow avalanches in continuous seismic data using hidden Markov models
NASA Astrophysics Data System (ADS)
Heck, Matthias; Hammer, Conny; van Herwijnen, Alec; Schweizer, Jürg; Fäh, Donat
2018-01-01
Snow avalanches generate seismic signals as many other mass movements. Detection of avalanches by seismic monitoring is highly relevant to assess avalanche danger. In contrast to other seismic events, signals generated by avalanches do not have a characteristic first arrival nor is it possible to detect different wave phases. In addition, the moving source character of avalanches increases the intricacy of the signals. Although it is possible to visually detect seismic signals produced by avalanches, reliable automatic detection methods for all types of avalanches do not exist yet. We therefore evaluate whether hidden Markov models (HMMs) are suitable for the automatic detection of avalanches in continuous seismic data. We analyzed data recorded during the winter season 2010 by a seismic array deployed in an avalanche starting zone above Davos, Switzerland. We re-evaluated a reference catalogue containing 385 events by grouping the events in seven probability classes. Since most of the data consist of noise, we first applied a simple amplitude threshold to reduce the amount of data. As first classification results were unsatisfying, we analyzed the temporal behavior of the seismic signals for the whole data set and found that there is a high variability in the seismic signals. We therefore applied further post-processing steps to reduce the number of false alarms by defining a minimal duration for the detected event, implementing a voting-based approach and analyzing the coherence of the detected events. We obtained the best classification results for events detected by at least five sensors and with a minimal duration of 12 s. These processing steps allowed identifying two periods of high avalanche activity, suggesting that HMMs are suitable for the automatic detection of avalanches in seismic data. 
However, our results also showed that more sensitive sensors and more appropriate sensor locations are needed to improve the signal-to-noise ratio of the signals and therefore the classification.
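The post-processing steps described above (a minimal event duration plus a voting requirement across sensors) can be sketched as follows. The interval representation and the overlap rule are assumptions for illustration.

```python
def confirmed_events(detections, min_sensors=5, min_duration=12.0):
    """Post-process per-sensor detections by voting and duration.

    detections: {sensor_id: [(t_start, t_end), ...]} in seconds.
    An event is kept if it lasts at least `min_duration` seconds and at
    least `min_sensors` sensors report an interval covering its midpoint.
    """
    events = []
    for intervals in detections.values():
        for t0, t1 in intervals:
            if t1 - t0 < min_duration:
                continue
            mid = 0.5 * (t0 + t1)
            votes = sum(
                any(a <= mid <= b for a, b in other)
                for other in detections.values())
            if votes >= min_sensors:
                events.append((t0, t1))
    # Deduplicate identical events reported by several sensors.
    return sorted(set(events))

dets = {s: [(100.0, 115.0)] for s in range(5)}
dets[0].append((200.0, 205.0))      # too short: dropped by duration filter
print(confirmed_events(dets))
```

With five sensors agreeing on the 15 s event, only that event survives; the 5 s detection and any event seen by fewer than five sensors are discarded.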
NASA Astrophysics Data System (ADS)
Techel, F.; Zweifel, B.; Winkler, K.
2015-09-01
Recreational activities in snow-covered mountainous backcountry terrain account for the vast majority of avalanche accidents. Studies analyzing avalanche risk mostly rely on accident statistics without considering exposure (or the elements at risk), i.e., how many, when, and where people are recreating, as data on recreational activity in the winter mountains are scarce. To fill this gap, we explored volunteered geographic information on two social-media mountaineering websites, bergportal.ch and camptocamp.org. Based on these data, we present a spatiotemporal pattern of winter backcountry touring activity in the Swiss Alps and compare it with accident statistics. Geographically, activity was concentrated in Alpine regions relatively close to the main Swiss population centers in the west and north. In contrast, accidents occurred equally often in the less-frequented inner-alpine regions. Weekends, weather, and avalanche conditions influenced the number of recreationists, while the odds of being involved in a severe avalanche accident did not depend on weekends or weather conditions. However, the likelihood of being involved in an accident increased with increasing avalanche danger level, but also with a more unfavorable snowpack containing persistent weak layers (also referred to as an old-snow problem). In fact, the most critical situations for backcountry recreationists and professionals occurred on days and in regions where the avalanche danger was critical and the snowpack contained persistent weak layers. The frequently occurring geographical pattern of a more unfavorable snowpack structure also explains the relatively high proportion of accidents in the less-frequented inner-alpine regions. These results have practical implications: avalanche forecasters should clearly communicate the avalanche danger and the avalanche problem to backcountry users, particularly if persistent weak layers are of concern.
Professionals and recreationists, on the other hand, require the expertise to adjust the planning of a tour and their backcountry travel behavior depending on the avalanche danger and the avalanche problem.
Risk analysis for dry snow slab avalanche release by skier triggering
NASA Astrophysics Data System (ADS)
McClung, David
2013-04-01
Risk analysis is of primary importance for skier triggering of avalanches, since human triggering is responsible for about 90% of deaths from slab avalanches in Europe and North America. Two key measurable quantities about dry slab avalanche release prior to initiation are the depth D to the weak layer and the slope angle. Both are important in risk analysis. As the slope angle increases, the probability of avalanche release increases dramatically. As the slab depth increases, the consequences increase if an avalanche releases. Among the simplest risk definitions is (Vick, 2002): Risk = (Probability of failure) x (Consequences of failure). Here, these two components of risk are the probability or chance of avalanche release and the consequences given avalanche release. In this paper, for the first time, skier-triggered avalanches were analyzed from probability theory and its relation to risk for both D and the slope angle. The data consisted of pairs of these two quantities (slope angle, D) taken from avalanche fracture-line profiles after an avalanche had taken place. Two data sets from accidentally skier-triggered avalanches were considered: (1) 718 values of the slope angle and (2) a set of 1242 values of D, which represent average values along the fracture line. The values of D were estimated (about 2/3) and measured (about 1/3) by ski guides from Canadian Mountain Holidays (CMH). I also analyzed 1231 accidentally skier-triggered avalanches reported by CMH ski guides for avalanche size (representing destructive potential) on the Canadian scale. The size analysis provided a second analysis of consequences to verify the results obtained using D. The results showed that there is an intermediate range of both D and the slope angle with highest risk. For D, the risk (product of consequences and probability of occurrence) is highest for D in the approximate range 0.6-1.0 m. The consequences are low for lower values of D and the chance of release is low for higher values of D; thus, the highest product is in the intermediate range.
For slope angles, the risk analysis showed there are two ranges, below about 32° and above about 46°, for which risk is lowest. In this case, both the range of slab depth and the consequences vary by about a factor of two, so the probability of release dominates the risk analysis, yielding low risk at the tails of the slope-angle distribution and highest risk in the middle (33°-45°) of the expected range (25°-55°).
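The risk definition quoted above, Risk = (Probability of failure) x (Consequences of failure), can be illustrated numerically. The functional forms below are assumptions chosen only to show why an interior maximum appears at intermediate slab depths; they are not McClung's fitted distributions.

```python
import math

def release_probability(depth_m):
    """Chance of skier triggering, assumed to decay with slab depth
    (exponential form chosen purely for illustration)."""
    return math.exp(-2.0 * depth_m)

def consequence(depth_m):
    """Destructive potential, assumed to grow linearly with slab depth."""
    return depth_m

def risk(depth_m):
    # Risk = (probability of failure) x (consequences of failure)
    return release_probability(depth_m) * consequence(depth_m)

depths = [0.2 * k for k in range(1, 11)]   # 0.2 m .. 2.0 m
best = max(depths, key=risk)
print(round(best, 1))
```

Because the probability falls while the consequence rises, their product peaks in between, which is the qualitative mechanism behind the highest-risk depth range reported above.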
Modeling of snow avalanches for protection measures designing
NASA Astrophysics Data System (ADS)
Turchaninova, Alla; Lazarev, Anton; Loginova, Ekaterina; Seliverstov, Yuri; Glazovskaya, Tatiana; Komarov, Anton
2017-04-01
Avalanche protection structures such as dams have to be designed using well-known standard engineering procedures that differ between countries. Our intent is to conduct research on the design of structural avalanche protection measures and the assessment of their reliability during operation using numerical modeling. In the Khibini Mountains, Russia, several avalanche dams have been constructed at different times to protect settlements and mining operations. Compared with other mitigation structures, dams are often less expensive to construct in mining regions. The main goal of our investigation was to test the capability of the Swiss avalanche dynamics model RAMMS, and of Russian methods, to simulate the interaction of avalanches with mitigation structures such as catching and reflecting dams, and to reproduce the observed runout distances after the transition over a dam. We present the RAMMS back-calculation results for an artificially triggered and well-documented catastrophic avalanche that occurred in the town of Kirovsk, Khibini Mountains, in February 2016, which unexpectedly passed through a system of two catching dams and took the lives of three victims. The estimated volume of the avalanche was approximately 120,000 m3. For the calculation we used a 5 m DEM, including the catching dams, generated from field measurements in summer 2015. We simulated this avalanche (which occurred below 1000 m a.s.l.) in RAMMS, taking the friction parameters (µ and ζ) for the upper altitude limit (above 1500 m a.s.l.) from the table recommended for Switzerland (implemented in RAMMS), in accordance with the results of our previous research. RAMMS reproduced the observed avalanche behavior and runout distance. No information is available concerning the flow velocity; however, the calculated values correspond in general to values measured in this avalanche track before.
We applied RAMMS using the option of adding structures to the DEM (including a dam in GIS) in order to test other operating catching dams in the Khibini Mountains under different avalanche scenarios, and we discuss the technical procedure and the obtained results. The RAMMS results were compared with field observation data and with values obtained from well-known Russian one-dimensional avalanche models. In the Caucasus, Russia, new ski resorts are under development, which is impossible without avalanche protection. The choice of avalanche mitigation type has to be made by experts depending on many factors. Within the Arkhyz ski resort, Caucasus, we incorporated RAMMS into the decision-making procedure for the type of structural measures. RAMMS, as well as the well-known Russian one-dimensional models, was used to calculate the key input parameters for structure design. The calculation results were coupled with field observation data and historical records. Finally, we proposed an avalanche protection plan for the area of interest. The interpretation of RAMMS simulations including mitigation structures has been made in order to assess the reliability of the proposed protection.
Avalanche mode of motion - Implications from lunar examples.
NASA Technical Reports Server (NTRS)
Howard, K. A.
1973-01-01
A large avalanche (21 square kilometers) at the Apollo 17 landing site moved out several kilometers over flat ground beyond its source slope. If not triggered by impacts, then it was as 'efficient' as terrestrial avalanches attributed to air-cushion sliding. Evidently lunar avalanches are able to flow despite the lack of lubricating or cushioning fluid.
Avalanche mode of motion: Implications from lunar examples
Howard, K.A.
1973-01-01
A large avalanche (21 square kilometers) at the Apollo 17 landing site moved out several kilometers over flat ground beyond its source slope. If not triggered by impacts, then it was as "efficient" as terrestrial avalanches attributed to air-cushion sliding. Evidently lunar avalanches are able to flow despite the lack of lubricating or cushioning fluid.
Avalanches and scaling collapse in the large-N Kuramoto model
NASA Astrophysics Data System (ADS)
Coleman, J. Patrick; Dahmen, Karin A.; Weaver, Richard L.
2018-04-01
We study avalanches in the Kuramoto model, defined as excursions of the order parameter due to ephemeral episodes of synchronization. We present scaling collapses of the avalanche sizes, durations, heights, and temporal profiles, extracting scaling exponents, exponent relations, and scaling functions that are shown to be consistent with the scaling behavior of the power spectrum, a quantity independent of our particular definition of an avalanche. A comprehensive scaling picture of the noise in the subcritical finite-N Kuramoto model is developed, linking this undriven system to a larger class of driven avalanching systems.
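The definition used above, an avalanche as an excursion of the order parameter above some level, suggests a simple extraction routine. The threshold convention and the size measure (area of the excess above the threshold) are assumptions for illustration.

```python
def avalanches(series, threshold, dt=1.0):
    """Extract avalanches from a time series, defined as excursions
    above a threshold: returns (size, duration) pairs, where size
    integrates the excess over the threshold."""
    events, size, steps = [], 0.0, 0
    for x in series:
        if x > threshold:
            size += (x - threshold) * dt
            steps += 1
        elif steps:
            events.append((size, steps * dt))
            size, steps = 0.0, 0
    if steps:                      # excursion still open at series end
        events.append((size, steps * dt))
    return events

r = [0.1, 0.3, 0.5, 0.3, 0.1, 0.4, 0.1]
print(avalanches(r, 0.2))
```

Collecting many such (size, duration) pairs and binning them is the starting point for the scaling collapses the abstract describes.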
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Y; Park, M; Kim, H
Purpose: This study aims to assess the feasibility of a novel cesium iodide (CsI)-based flat-panel detector (FPD) for removing scatter radiation in diagnostic radiology. Methods: The indirect FPD comprises three layers: a substrate layer, a scintillation layer, and a thin-film-transistor (TFT) layer. The TFT layer has a pixel matrix structure. There are ineffective dimensions on the TFT layer, such as the voltage and data lines; therefore, we devised a new FPD system having net-like lead in the substrate layer, matching the ineffective area, to block the scatter radiation so that only primary X-rays could reach the effective dimension. To evaluate the performance of this new FPD system, we conducted a Monte Carlo simulation using MCNPX 2.6.0 software. Scatter fractions (SFs) were acquired using no grid, a parallel grid (8:1 grid ratio), and the new system, and the performances were compared. Two systems having different lead thicknesses in the substrate layer (10 and 20 μm) were simulated. Additionally, we examined the effects of different pixel sizes (153×153 and 163×163 μm) on the image quality, while keeping the effective area of the pixels constant (143×143 μm). Results: In the case of 10 μm lead, the SFs of the new system (∼11%) were lower than those of the other systems (∼27% with no grid, ∼16% with a parallel grid) at 40 kV. However, as the tube voltage increased, the SF of the new system (∼19%) became higher than that of the parallel grid (∼18%) at 120 kV. In the case of 20 μm lead, the SFs of the new system were lower than those of the other systems over the entire range of tube voltage (40-120 kV). Conclusion: The novel CsI-based FPD system for removing scatter radiation is feasible for improving image contrast but must be optimized with respect to the lead thickness, considering the system's purposes and the tube-voltage ranges used in diagnostic radiology. This study was supported by a grant (K1422651) from the Institute of Health Science, Korea University.
Task Assignment Heuristics for Parallel and Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, Noe; Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper proposes a task graph (TG) model to represent a single discrete step of multi-block overset-grid computational fluid dynamics (CFD) applications. The TG model is then used not only to balance the computational workload across the overset grids but also to reduce inter-grid communication costs. We have developed a set of task assignment heuristics based on the constraints inherent in this class of CFD problems. Two basic assignments, smallest task first (STF) and largest task first (LTF), are first presented. They are then systematically refined to further reduce communication costs. To predict the performance of the proposed task assignment heuristics, extensive performance evaluations are conducted on a synthetic TG with tasks defined in terms of the number of grid points in predetermined overlapping grids. A TG derived from a realistic problem with eight million grid points is also used as a test case.
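One of the basic assignments named above, largest task first (LTF), can be sketched as a greedy placement of each grid block on the currently least-loaded processor. The communication costs that the paper's heuristics also model are omitted here, and the block names and weights are hypothetical.

```python
def largest_task_first(task_weights, n_procs):
    """Greedy LTF sketch: assign each task, heaviest first, to the
    currently least-loaded processor. Returns the assignment and the
    resulting per-processor loads."""
    loads = [0.0] * n_procs
    assignment = {}
    for task, w in sorted(task_weights.items(), key=lambda kv: -kv[1]):
        p = min(range(n_procs), key=loads.__getitem__)
        assignment[task] = p
        loads[p] += w
    return assignment, loads

# Grid-point counts per overset grid block (hypothetical numbers).
blocks = {"g0": 8e6, "g1": 3e6, "g2": 3e6, "g3": 2e6}
assignment, loads = largest_task_first(blocks, 2)
print(assignment, loads)
```

Sorting heaviest-first tends to leave only small tasks for the final balancing moves, which is why LTF usually beats STF on load balance alone.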
NASA Astrophysics Data System (ADS)
Lin, S. T.; Liou, T. S.
2017-12-01
Numerical simulation of groundwater flow in anisotropic aquifers often suffers from inaccurate calculation of the groundwater flux across grid blocks. The conventional two-point flux approximation (TPFA) can only obtain the flux normal to the grid interface and completely neglects the component parallel to it. Furthermore, the hydraulic gradient in a grid block estimated from TPFA only poorly represents the hydraulic conditions near the intersection of grid blocks. These disadvantages are further exacerbated when the principal axes of hydraulic conductivity, the global coordinate system, and the grid boundaries are not parallel to one another. In order to refine the estimation of the in-grid hydraulic gradient, several multiple-point flux approximation (MPFA) methods have been developed for two-dimensional groundwater flow simulations. For example, the MPFA-O method uses the hydraulic head at the junction node as an auxiliary variable, which is then eliminated using the head and flux continuity conditions. In this study, a three-dimensional MPFA method is developed for numerical simulation of groundwater flow in three-dimensional, strongly anisotropic aquifers. This new MPFA method first discretizes the simulation domain into hexahedrons. Each hexahedron is further decomposed into a certain number of tetrahedrons. The 2D MPFA-O method is then extended to these tetrahedrons, using the unknown head at the intersection of hexahedrons as an auxiliary variable along with the head and flux continuity conditions to solve for the head at the center of each hexahedron. Numerical simulations using this new MPFA method have been successfully compared with those obtained from a modified version of TOUGH2.
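For contrast with the MPFA schemes discussed above, the conventional two-point flux approximation across a single grid-block interface can be written in a few lines. The harmonic averaging of half-block transmissibilities is the standard TPFA form; the sketch is illustrative, not taken from the study's code.

```python
def tpfa_flux(h1, h2, k1, k2, d1, d2, area=1.0):
    """Two-point flux approximation of the volumetric flux from block 1
    to block 2 across their shared face. k: hydraulic conductivity
    normal to the face, d: distance from block center to the face."""
    # Harmonic averaging of the two half-block transmissibilities.
    t1, t2 = k1 / d1, k2 / d2
    trans = area * t1 * t2 / (t1 + t2)
    return trans * (h1 - h2)     # Darcy flux, positive from block 1 to 2

# Equal unit blocks, unit conductivity, unit head drop.
print(tpfa_flux(1.0, 0.0, 1.0, 1.0, 1.0, 1.0))
```

Because only the two cell-center heads h1 and h2 enter, any flux component parallel to the face is invisible to the scheme, which is exactly the limitation in anisotropic media that motivates MPFA.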
Fast Particle Methods for Multiscale Phenomena Simulations
NASA Technical Reports Server (NTRS)
Koumoutsakos, P.; Wray, A.; Shariff, K.; Pohorille, Andrew
2000-01-01
We are developing particle methods aimed at improving computational modeling capabilities for multiscale physical phenomena in: (i) high Reynolds number unsteady vortical flows, (ii) particle-laden and interfacial flows, and (iii) molecular dynamics studies of nanoscale droplets and of the structure, functions, and evolution of the earliest living cell. The unifying computational approach involves particle methods implemented on parallel computer architectures. The inherent adaptivity, robustness, and efficiency of particle methods make them a multidisciplinary computational tool capable of bridging the gap between micro-scale and continuum flow simulations. Using efficient tree data structures, multipole expansion algorithms, and improved particle-grid interpolation, particle methods allow for simulations using millions of computational elements, making possible the resolution of a wide range of length and time scales of these important physical phenomena. The current challenges in these simulations are: (i) the proper formulation of particle methods at the molecular and continuum levels for the discretization of the governing equations; (ii) the resolution of the wide range of time and length scales governing the phenomena under investigation; (iii) the minimization of numerical artifacts that may interfere with the physics of the systems under consideration; and (iv) the parallelization of processes such as tree traversal and grid-particle interpolations. We are conducting simulations using vortex methods, molecular dynamics, and smoothed particle hydrodynamics, exploiting their unifying concepts such as the solution of the N-body problem on parallel computers, highly accurate particle-particle and grid-particle interpolations, parallel FFTs, and the formulation of processes such as diffusion in the context of particle methods. This approach enables us to transcend seemingly unrelated areas of research.
Performances of a HgCdTe APD Based Detector with Electric Cooling for 2-μm DIAL/IPDA Applications
NASA Astrophysics Data System (ADS)
Dumas, A.; Rothman, J.; Gibert, F.; Lasfargues, G.; Zanatta, J.-P.; Edouart, D.
2016-06-01
In this work we report on the design and testing of an HgCdTe avalanche photodiode (APD) detector assembly for lidar applications in the short wavelength infrared region (SWIR: 1.5-2 μm). The detector consists of a set of diodes connected in parallel, forming a 200 μm sensitive area, coupled to a custom high-gain transimpedance amplifier (TIA). A commercial four-stage Peltier cooler is used to reach an operating temperature of 185 K. The performances crucial for lidar use are investigated: linearity, dynamic range, spatial homogeneity, noise, and resistance to intense illumination.
Quantum key distribution with 1.25 Gbps clock synchronization.
Bienfang, J; Gross, A; Mink, A; Hershman, B; Nakassis, A; Tang, X; Lu, R; Su, D; Clark, Charles; Williams, Carl; Hagley, E; Wen, Jesse
2004-05-03
We have demonstrated the exchange of sifted quantum cryptographic key over a 730 meter free-space link at rates of up to 1.0 Mbps, two orders of magnitude faster than previously reported results. A classical channel at 1550 nm operates in parallel with a quantum channel at 845 nm. Clock recovery techniques on the classical channel at 1.25 Gbps enable quantum transmission at up to the clock rate. System performance is currently limited by the timing resolution of our silicon avalanche photodiode detectors. With improved detector resolution, our technique will yield another order of magnitude increase in performance, with existing technology.
Disordered artificial spin ices: Avalanches and criticality (invited)
NASA Astrophysics Data System (ADS)
Reichhardt, Cynthia J. Olson; Chern, Gia-Wei; Libál, Andras; Reichhardt, Charles
2015-05-01
We show that square and kagome artificial spin ices with disconnected islands exhibit disorder-induced nonequilibrium phase transitions. The critical point of the transition is characterized by a diverging length scale and the effective spin reconfiguration avalanche sizes are power-law distributed. For weak disorder, the magnetization reversal is dominated by system-spanning avalanche events characteristic of a supercritical regime, while at strong disorder, the avalanche distributions have subcritical behavior and are cut off above a length scale that decreases with increasing disorder. The different types of geometrical frustration in the two lattices produce distinct forms of critical avalanche behavior. Avalanches in the square ice consist of the propagation of locally stable domain walls separating the two polarized ground states, and we find a scaling collapse consistent with an interface depinning mechanism. In the fully frustrated kagome ice, however, the avalanches branch strongly in a manner reminiscent of directed percolation. We also observe an interesting crossover in the power-law scaling of the kagome ice avalanches at low disorder. Our results show that artificial spin ices are ideal systems in which to study a variety of nonequilibrium critical point phenomena as the microscopic degrees of freedom can be accessed directly in experiments.
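Power-law distributed avalanche sizes of the kind reported here are typically quantified with a maximum-likelihood exponent fit; a minimal sketch using the continuous-variable estimator (not the authors' analysis pipeline):

```python
import numpy as np

def powerlaw_exponent_mle(sizes, s_min=1.0):
    """Maximum-likelihood estimate of alpha for a continuous power law
    P(s) ~ s^(-alpha), s >= s_min (Clauset-Shalizi-Newman estimator)."""
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    return 1.0 + s.size / np.log(s / s_min).sum()

# synthetic avalanche sizes with alpha = 1.5, via inverse-transform sampling
rng = np.random.default_rng(1)
sizes = (1.0 - rng.random(50_000)) ** (-1.0 / 0.5)   # CDF inversion for alpha = 1.5
alpha_hat = powerlaw_exponent_mle(sizes)
```

In practice the cutoff above which the distribution decays, reported here as disorder dependent, must be handled separately (e.g. by fitting only below the cutoff).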
Risk assessment in the North Caucasus ski resorts
NASA Astrophysics Data System (ADS)
Komarov, Anton Y.; Seliverstov, Yury G.; Glazovskaya, Tatyana G.; Turchaninova, Alla S.
2016-10-01
Avalanches pose a significant problem in most mountain regions of Russia. The constant growth of economic activity, and therefore of avalanche exposure, in the North Caucasus region creates demand for large-scale avalanche risk assessment methods. Such methods are needed for the selection of appropriate avalanche protection measures as well as for economic assessments. Natural hazard risk assessment is required by the Federal Law of the Russian Federation (Federal Law 21.12.1994 N 68-FZ, 2016). However, the Russian guidelines (SNIP 11-02-96, 2013; SNIP 22-02-2003, 2012) are not explicit about how avalanche risk should be calculated. We therefore discuss these problems by presenting a new avalanche risk assessment approach, using the example of developing but poorly researched ski resort areas. The suggested method includes formulas to calculate collective and individual avalanche risk. The results of the risk analysis are quantitative data that can be used to determine levels of avalanche risk (appropriate, acceptable and inappropriate) and to suggest measures for decreasing the individual risk to an acceptable level or better. The analysis makes it possible to compare quantitative risk data obtained from different regions, analyze them, and evaluate the economic feasibility of protection measures.
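The abstract does not reproduce the paper's formulas; the sketch below shows one widely used form of individual and collective avalanche risk purely to illustrate the kind of calculation involved, with all numbers hypothetical:

```python
def individual_risk(p_event, p_spatial, p_temporal, vulnerability):
    """One common quantitative form of individual avalanche risk:
    annual avalanche probability, times the probability of a person
    being in the path in space and in time, times vulnerability
    (probability of death given impact). The paper's own formulas
    may differ."""
    return p_event * p_spatial * p_temporal * vulnerability

def collective_risk(risks_and_counts):
    """Expected annual fatalities: individual risks weighted by the
    number of people in each exposure group."""
    return sum(r * n for r, n in risks_and_counts)

r_skier = individual_risk(p_event=0.1, p_spatial=0.3,
                          p_temporal=0.05, vulnerability=0.5)
total = collective_risk([(r_skier, 200)])   # 200 exposed skiers
```

Comparing such numbers against acceptability thresholds is what yields the appropriate/acceptable/inappropriate classification mentioned above.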
Disordered artificial spin ices: Avalanches and criticality (invited)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reichhardt, Cynthia J. Olson, E-mail: cjrx@lanl.gov; Chern, Gia-Wei; Reichhardt, Charles
2015-05-07
We show that square and kagome artificial spin ices with disconnected islands exhibit disorder-induced nonequilibrium phase transitions. The critical point of the transition is characterized by a diverging length scale and the effective spin reconfiguration avalanche sizes are power-law distributed. For weak disorder, the magnetization reversal is dominated by system-spanning avalanche events characteristic of a supercritical regime, while at strong disorder, the avalanche distributions have subcritical behavior and are cut off above a length scale that decreases with increasing disorder. The different types of geometrical frustration in the two lattices produce distinct forms of critical avalanche behavior. Avalanches in the square ice consist of the propagation of locally stable domain walls separating the two polarized ground states, and we find a scaling collapse consistent with an interface depinning mechanism. In the fully frustrated kagome ice, however, the avalanches branch strongly in a manner reminiscent of directed percolation. We also observe an interesting crossover in the power-law scaling of the kagome ice avalanches at low disorder. Our results show that artificial spin ices are ideal systems in which to study a variety of nonequilibrium critical point phenomena as the microscopic degrees of freedom can be accessed directly in experiments.
O'Sullivan, G.A.; O'Sullivan, J.A.
1999-07-27
In one embodiment, a power processor which operates in three modes: an inverter mode wherein power is delivered from a battery to an AC power grid or load; a battery charger mode wherein the battery is charged by a generator; and a parallel mode wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources. 31 figs.
O'Sullivan, George A.; O'Sullivan, Joseph A.
1999-01-01
In one embodiment, a power processor which operates in three modes: an inverter mode wherein power is delivered from a battery to an AC power grid or load; a battery charger mode wherein the battery is charged by a generator; and a parallel mode wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources.
Particle simulation of plasmas on the massively parallel processor
NASA Technical Reports Server (NTRS)
Gledhill, I. M. A.; Storey, L. R. O.
1987-01-01
Particle simulations, in which collective phenomena in plasmas are studied by following the self-consistent motions of many discrete particles, involve several highly repetitive sets of calculations that are readily adaptable to SIMD parallel processing. A fully electromagnetic, relativistic plasma simulation for the massively parallel processor is described. The particle motions are followed in 2 1/2 dimensions on a 128 x 128 grid, with periodic boundary conditions. The two dimensional simulation space is mapped directly onto the processor network; a Fast Fourier Transform is used to solve the field equations. Particle data are stored according to an Eulerian scheme, i.e., the information associated with each particle is moved from one local memory to another as the particle moves across the spatial grid. The method is applied to the study of the nonlinear development of the whistler instability in a magnetospheric plasma model, with an anisotropic electron temperature. The wave distribution function is included as a new diagnostic to allow simulation results to be compared with satellite observations.
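The FFT-based field solve mentioned above follows the standard spectral inversion of the Poisson equation on a periodic grid; a minimal 2-D sketch in normalized units (a generic illustration, not the MPP implementation):

```python
import numpy as np

def solve_poisson_fft(rho, dx=1.0):
    """Solve laplacian(phi) = -rho on a periodic 2-D grid with FFTs:
    transform, divide by -(-k^2) = k^2, transform back."""
    n0, n1 = rho.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(n0, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(n1, d=dx)
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    k2[0, 0] = 1.0                      # avoid division by zero at the mean mode
    phi_hat = np.fft.fft2(rho) / k2
    phi_hat[0, 0] = 0.0                 # fix the arbitrary mean of phi to zero
    return np.fft.ifft2(phi_hat).real
```

For a single Fourier mode the inversion is exact, which makes the solver easy to verify against an analytic potential.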
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is given to applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservation properties of the underlying numerics. The effect on fourth- and sixth-order numerical fluxes is explored. Provided the criteria for refinement are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail.
Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enable understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are light-weight enough to not require significant computational time and yield significant reductions in grid size.
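One simple example of the kind of refinement criterion compared in this work is flagging cells where an undivided difference of a flow quantity exceeds a threshold; the field and threshold below are illustrative, not taken from the thesis:

```python
import numpy as np

def flag_for_refinement(field, threshold):
    """Mark cells of a 1-D grid for refinement where the undivided
    difference between neighbouring cells exceeds a threshold; both
    cells adjacent to a flagged face are refined."""
    jump = np.abs(np.diff(field))       # face-based undivided difference
    flags = np.zeros(field.size, dtype=bool)
    flags[:-1] |= jump > threshold      # cell left of a flagged face
    flags[1:] |= jump > threshold       # cell right of a flagged face
    return flags

# a discontinuity between cells 2 and 3 flags both neighbouring cells
flags = flag_for_refinement(np.array([0.0, 0.0, 0.0, 1.0, 1.0]), 0.5)
```

Criteria like this are cheap to evaluate every few time steps, which keeps the adaptation overhead small relative to the flow solve.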
NASA Astrophysics Data System (ADS)
Lucas, Célia; Bühler, Yves; Leinss, Silvan; Hajnsek, Irena
2017-04-01
Wet and full-depth glide snow avalanches can pose considerable danger to people and infrastructure in alpine regions. In Switzerland, avalanche hazard predictions are performed by the Institute for Snow and Avalanche Research SLF. However, these predictions are issued on a regional scale and do not yield information about the current status of particular slopes of interest. To investigate the potential of radar technology for avalanche prediction on the slope scale, we performed the following experiment. During the winter seasons 2015/2016 and 2016/2017, a ground-based Ku-band radar was placed in the vicinity of Davos (GR) to monitor the Dorfberg slope at 4-minute measurement intervals [1]. With differential interferometry [2], line-of-sight movements on the order of a fraction of the radar wavelength (1.7 cm) can be measured. Applying this technique to the Dorfberg scenario, it was possible to detect snowpack displacement of up to 0.4 m over 3 days in the avalanche release area prior to a snow avalanche event. A proof of concept of this approach was previously made by [3-5]. The analysis of the snowpack displacement history of such release areas shows that an avalanche is generally released after several cycles of acceleration and deceleration of a specific area of the snowpack, followed by an abrupt termination of the movement at the moment of the avalanche release. The acceleration and deceleration trends are related to thawing and refreezing of the snowpack induced by daily temperature variations. The proposed method for detecting snowpack displacements as an indication of potential wet and full-depth glide snow avalanches is a promising tool to increase avalanche safety on specific slopes putting infrastructure or people at risk.
The identification of a distinctive signature discriminating the time window immediately prior to release is still under investigation, but the ability to monitor snowpack displacement allows for mapping of zones prone to wet and full-depth glide snow avalanches in the near future. Therefore, in the current winter season, we attempt to automatically detect snowpack displacement and avalanche releases at Dorfberg. Automatic warnings issued by the radar about the presence and amount of displacement, together with information about the location and altitude of creeping regions as well as released avalanches, will be combined with simulated liquid water content (LWC) for the observed area. This slope-specific knowledge will be evaluated for inclusion in the more regional avalanche bulletin issued by SLF. Two cameras capture photographs at 1- and 10-minute intervals, respectively, to reference the opening of optically visible tensile cracks and the triggering of avalanches. [1] C. Lucas, Y. Buehler, A. Marino, I. Hajnsek: Investigation of Snow Avalanches with Ground Based Ku-band Radar, EUSAR 2016; 11th European Conference on Synthetic Aperture Radar; Proceedings of, 2016 [2] R. Bamler, P. Hartl: Synthetic aperture radar interferometry, Inverse Problems, Vol. 14, R1-R54, 1998 [3] Y. Buehler, C. Pielmeier, R. Frauenfelder, C. Jaedicke, G. Bippus, A. Wiesmann and R. Caduff: Improved Alpine Avalanche Forecast Service AAF, Final Report, European Space Agency ESA, 2014 [4] R. Caduff, A. Wiesmann, Y. Buehler, and C. Pielmeier: Continuous monitoring of snowpack displacement at high spatial and temporal resolution with terrestrial radar interferometry, Geophysical Research Letters, vol. 42, no. 3, 2015. [5] R. Caduff, A. Wiesmann, Y. Bühler, C. Bieler, and P. Limpach, "Terrestrial radar interferometry for snow glide activity monitoring and its potential as precursor of wet snow avalanches," in Interpraevent, 2016, pp. 239-248.
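The displacement sensitivity quoted above follows from the standard differential-interferometry relation between phase change and line-of-sight motion; a sketch using the Ku-band wavelength from the abstract (the sign convention, toward or away from the radar, depends on the processing chain):

```python
import math

def los_displacement_m(delta_phase_rad, wavelength_m=0.017):
    """Line-of-sight displacement from an interferometric phase change:
    d = lambda * dphi / (4 * pi). The two-way radar path puts 4*pi,
    rather than 2*pi, in the denominator."""
    return wavelength_m * delta_phase_rad / (4.0 * math.pi)

one_fringe = los_displacement_m(2.0 * math.pi)   # half a wavelength of motion
fringes_for_0p4m = 0.4 / one_fringe              # fringes to track 0.4 m of creep
```

Tracking 0.4 m of displacement means counting roughly 47 full fringes, which is one reason the short 4-minute revisit interval matters for keeping the phase unambiguous.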
NASA Astrophysics Data System (ADS)
Gastis, P.; Perdikakis, G.; Robertson, D.; Almus, R.; Anderson, T.; Bauder, W.; Collon, P.; Lu, W.; Ostdiek, K.; Skulski, M.
2016-04-01
Equilibrium charge state distributions of stable 60Ni, 59Co, and 63Cu beams passing through a 1 μm thick Mo foil were measured at beam energies of 1.84 MeV/u, 2.09 MeV/u, and 2.11 MeV/u, respectively. A 1-D position sensitive Parallel Grid Avalanche Counter (PGAC) detector was used at the exit of a spectrograph magnet, enabling us to measure the intensity of several charge states simultaneously. The number of charge states measured for each beam constituted more than 99% of the total equilibrium charge state distribution for that element. Currently, little experimental data exist for equilibrium charge state distributions of heavy ions with 19 ≲ Zp, Zt ≲ 54 (Zp and Zt are the projectile's and target's atomic numbers, respectively). Hence the success of semi-empirical models in predicting typical characteristics of equilibrium CSDs (mean charge states and distribution widths) has not been thoroughly tested in this energy region of interest. A number of semi-empirical models from the literature were evaluated in this study regarding their ability to reproduce the characteristics of the measured charge state distributions. The evaluated models were selected from the literature based on whether they are suitable for the given range of atomic numbers and on their frequent use by the nuclear physics community. Finally, an attempt was made to combine model predictions for the mean charge state, the distribution width, and the distribution shape to come up with a more reliable model. We discuss this new "combinatorial" prescription and compare its results with our experimental data and with calculations using the other semi-empirical models studied in this work.
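Most semi-empirical CSD prescriptions share a Gaussian shape parameterized by a mean charge state and a width; the sketch below shows that common ansatz normalized over the physical charge states. The mean and width values are illustrative placeholders, not the measured ones or any particular model's predictions:

```python
import math

def gaussian_csd_fractions(z_projectile, q_mean, width):
    """Charge-state fractions from a Gaussian ansatz,
    F(q) ~ exp(-(q - q_mean)^2 / (2 * width^2)),
    normalized over the physical charge states q = 0..Z. In a real
    prescription, q_mean and width come from semi-empirical formulas."""
    w = [math.exp(-((q - q_mean) ** 2) / (2.0 * width ** 2))
         for q in range(z_projectile + 1)]
    total = sum(w)
    return [v / total for v in w]

fractions = gaussian_csd_fractions(28, q_mean=18.4, width=1.6)  # Ni, Z = 28
```

Combining one model's mean with another's width, as the "combinatorial" prescription does, amounts to swapping the two parameters of this shared shape.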
Information processing occurs via critical avalanches in a model of the primary visual cortex
NASA Astrophysics Data System (ADS)
Bortolotto, G. S.; Girardi-Schappo, M.; Gonsalves, J. J.; Pinto, L. T.; Tragtenberg, M. H. R.
2016-01-01
We study a new biologically motivated model of the macaque monkey primary visual cortex which presents power-law avalanches after a visual stimulus. The signal propagates through all the layers of the model via avalanches that depend on the network structure and synaptic parameters. We identify four different avalanche profiles as a function of the excitatory postsynaptic potential. The avalanches follow a size-duration scaling relation and present critical exponents that match experiments. The structure of the network gives rise to a regime of two characteristic spatial scales, one of which vanishes in the thermodynamic limit.
Ekstrom, Philip A.
1981-01-01
A photon detector includes a semiconductor device, such as a Schottky barrier diode, which has an avalanche breakdown characteristic. The diode is cooled to cryogenic temperatures to eliminate thermally generated charge carriers from the device. The diode is then biased to a voltage level exceeding the avalanche breakdown threshold level such that, upon receipt of a photon, avalanche breakdown occurs. This breakdown is detected by appropriate circuitry which thereafter reduces the diode bias potential to a level below the avalanche breakdown threshold level to terminate the avalanche condition. Subsequently, the bias potential is reapplied to the diode in preparation for detection of a subsequently received photon.
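The bias/quench/re-arm cycle described in this abstract can be summarized as a small behavioural model; voltage levels are illustrative, and a real quench circuit acts on nanosecond timescales:

```python
class GeigerModeAPD:
    """Behavioural sketch of a Geiger-mode avalanche diode with active
    quenching: armed means biased above breakdown; a photon then fires
    an avalanche, the circuit pulls the bias below breakdown to quench
    it, and the bias must be reapplied before the next detection."""

    def __init__(self, v_breakdown=30.0, overbias=2.0):
        self.v_breakdown = v_breakdown
        self.bias = v_breakdown + overbias   # armed: biased past breakdown
        self.counts = 0

    def photon(self):
        if self.bias > self.v_breakdown:     # armed: avalanche breakdown occurs
            self.counts += 1
            self.bias = self.v_breakdown - 1.0   # quench: drop below threshold
            return True
        return False                         # dead time: still quenched

    def rearm(self, overbias=2.0):
        self.bias = self.v_breakdown + overbias  # reapply the bias potential
```

The dead time between quench and re-arm is what limits the maximum count rate of such a detector.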
NASA Astrophysics Data System (ADS)
Giacona, Florie; Martin, Brice; David, Pierre-Marie
2010-05-01
To mention avalanche risk in the Vosges generally causes a certain disbelief because of the range's modest height. Moreover, as far as natural risks are concerned, and especially avalanche risk, medium-high mountains are not usually studied. Attention is focused instead on the spectacular and destructive phenomena that occur in higher mountains such as the Alps or the Pyrenees. However, in January and February 2000, fifteen people were victims of avalanches and three of them died. These accidents suddenly drew attention to the fact that avalanche risk is underestimated. Unlike in the Alps and Pyrenees, there is no study or systematic inventory of avalanches in the medium-high mountain ranges. Moreover, the many research and methodological articles dedicated to avalanches in the high mountain ranges unfortunately pay no attention to medium-high mountain ranges. We therefore had to develop a new research method based on handwritten, printed, and oral sources as well as on observations. The results of this historical research exceeded all expectations. About 300 avalanche events have been reported since the end of the 18th century; they happened in about 90 avalanche paths. The spatial and temporal distributions of the avalanche events can be explained by climate, vulnerability, and land use evolution. Vulnerability has evolved since the 18th century: material vulnerability decreased whereas human vulnerability increased due to the expansion of winter sports. Finally, we focus our study on the perception of avalanche risk by winter sports enthusiasts in the Vosges mountains. Indeed, at the beginning of this research, we were directly confronted with a lack of knowledge, or even an ignorance, of avalanche risk. Several factors contribute to this situation, among them the topography. Even though some places in the Vosges mountains resemble alpine topography, most of the summits are rounded.
Furthermore, this mountain range shows annual and seasonal variability in snowfall and snow height. And the summits and slopes that present an avalanche risk can be easily reached in wintertime thanks to car parks close to the summits and the clearing of snow from the roads. A study is therefore being carried out in order to understand the mechanisms of perception and awareness of avalanche risk. This is the first step towards the development of a new prevention method adapted to the recreational public in medium-high mountains.
Sub-Second Parallel State Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.
This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of fast computational speed for power system applications. The test data were provided by BPA: two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data were extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions and/or to apply automatic or manual corrective control actions, which increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance, so its robustness can be enhanced by repeating the execution with adaptive adjustments, including removing bad data and/or adjusting initial conditions, to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits of sub-second SE: the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency.
PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate the effects of severe events on the grid. The power grid continues to grow, and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly complex power grid.
Explosive-residue compounds resulting from snow avalanche control in the Wasatch Mountains of Utah
Naftz, David L.; Kanagy, Leslie K.; Susong, David D.; Wydoski, Duane S.; Kanagy, Christopher J.
2003-01-01
A snow avalanche is a powerful force of nature that can play a significant role in developing mountain landscapes (Perla and Martinelli, 1975). More importantly, loss of life can occur when people are caught in the path of snow avalanches (Grossman, 1999). Increasing winter recreation, including skiing, snowboarding, snowmobiling, snowshoeing, and climbing in mountainous areas, has increased the likelihood of people encountering snow avalanches (fig. 1). Explosives are used by most ski areas and State highway departments throughout the Western United States to control the release of snow avalanches, thus minimizing the loss of human life during winter recreation and highway travel (fig. 2). Common explosives used for snow avalanche control include trinitrotoluene (TNT), pentaerythritol tetranitrate (PETN), cyclotrimethylenetrinitramine (RDX), tetrytol, ammonium nitrate, and nitroglycerin (Perla and Martinelli, 1975). During and after snowfall or wind loading of potential avalanche slopes, ski patrollers and Utah Department of Transportation personnel deliver explosive charges onto predetermined targets to artificially release snow avalanches, thereby rendering the slope safer for winter activities. Explosives can be thrown by hand onto target zones or shot from cannons for more remote delivery of explosive charges. Hand-delivered charges typically contain about 2 pounds of TNT or its equivalent (Perla and Martinelli, 1975). Depending on the size of the ski area, acreage of potential avalanche terrain, and weather conditions, the annual quantity of explosives used during a season of snow avalanche control can be substantial. For example, the three ski areas of Alta, Snowbird, and Brighton, plus the Utah Department of Transportation, may use as many as 11,200 hand charges per year (Wasatch Powderbird Guides, unpub. data, 1999) for snow avalanche control in Big and Little Cottonwood Canyons (fig. 3).
If each charge is assumed to weigh 2 pounds, this equates to about 22,400 pounds of explosive hand charges per year. In addition, 2,240 to 3,160 Avalauncher rounds and 626 to 958 military artillery rounds (explosive mass not specified) are used each year by the three ski areas and the Utah Department of Transportation for snow avalanche control in Big and Little Cottonwood Canyons (Wasatch Powderbird Guides, unpub. data, 1999). The other ski area in Big Cottonwood Canyon, Brighton, uses about 2,000 pounds of explosives per year for snow avalanche control (Michele Weidner, Cirrus Ecological Solutions consultant, written commun., 2001).
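The tonnage figure above is simple bookkeeping on the numbers quoted (the metric conversion is added here for reference):

```python
hand_charges_per_year = 11_200        # combined upper estimate, all operators
pounds_per_charge = 2                 # TNT-equivalent per hand charge
total_pounds = hand_charges_per_year * pounds_per_charge
total_kg = total_pounds * 0.45359237  # avoirdupois pound in kilograms
```

That is roughly ten metric tons of explosive hand charges per season, before counting Avalauncher and artillery rounds.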
Hunt, D C; Tanioka, Kenkichi; Rowlands, J A
2007-12-01
The flat-panel detector (FPD) is the state-of-the-art detector for digital radiography. The FPD can acquire images in real time, has superior spatial resolution, and is free of the problems of x-ray image intensifiers: veiling glare, pin-cushion distortion, and magnetic distortion. However, FPDs suffer from poor signal-to-noise performance at typical fluoroscopic exposure rates, where the quantum noise is reduced to the point that it becomes comparable to the fixed electronic noise. It has been shown previously that avalanche multiplication gain in amorphous selenium (a-Se) can provide the necessary amplification to overcome the electronic noise of the FPD. Avalanche multiplication, however, comes with its own intrinsic contribution to the noise in the form of gain fluctuation noise. In this article a cascaded systems analysis is used to present a modified metric related to the detective quantum efficiency. The modified metric is used to study a diagnostic x-ray imaging system in the presence of intrinsic avalanche multiplication noise independently from other noise sources, such as electronic noise. An indirect conversion imaging system is considered to make the study independent of other avalanche multiplication related noise sources, such as the fluctuations arising from the depth of x-ray absorption. In this case all the avalanche events are initiated at the surface of the avalanche layer, and there are no fluctuations in the depth of absorption. Experiments on an indirect conversion x-ray imaging system using avalanche multiplication in a layer of a-Se are also presented. The cascaded systems analysis shows that the intrinsic noise of avalanche multiplication will not have any deleterious influence on detector performance at zero spatial frequency in x-ray imaging provided the product of conversion gain, coupling efficiency, and optical quantum efficiency is much greater than a factor of 2.
The experimental results show that avalanche multiplication in a-Se behaves as an intrinsic noise free avalanche multiplication, in accordance with our theory. Provided good coupling efficiency and high optical quantum efficiency are maintained, avalanche multiplication in a-Se has the potential to increase the gain and make negligible contribution to the noise, thereby improving the performance of indirect FPDs in fluoroscopy.
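The zero-frequency conclusion can be sketched with the usual cascaded-systems bookkeeping: a gain stage with relative gain variance var/g² contributes a noise factor (1 + var/g²) divided by the mean gain preceding it, so even a worst-case exponential-like gain distribution (var/g² near 1, hence the factor of 2) is harmless once that preceding gain is large. This is a simplified zero-frequency model, not the authors' full spatial-frequency analysis:

```python
def dqe0(eta, g_pre, gain_var_ratio):
    """Zero-frequency DQE of a simple cascade: x-ray absorption
    efficiency eta, a Poisson conversion/coupling stage delivering on
    average g_pre quanta per absorbed x ray, then avalanche
    multiplication with gain_var_ratio = var(g) / mean(g)^2."""
    excess = 1.0 + gain_var_ratio       # noise factor of the avalanche stage
    return eta / (1.0 + excess / g_pre)

noisy = dqe0(eta=0.8, g_pre=1000.0, gain_var_ratio=1.0)   # worst-case avalanche
ideal = dqe0(eta=0.8, g_pre=1000.0, gain_var_ratio=0.0)   # noise-free gain
```

With g_pre in the hundreds or thousands, the two results are nearly identical, which is the condition stated in the abstract.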
Huggel, C.; Caplan-Auerbach, J.; Waythomas, C.F.; Wessels, R.L.
2007-01-01
Iliamna is an andesitic stratovolcano of the Aleutian arc with regular gas and steam emissions, mantled by several large glaciers. Iliamna Volcano exhibits an unusual combination of frequent and large ice-rock avalanches, on the order of 1 × 10⁶ m³ to 3 × 10⁷ m³, with recent return periods of 2-4 years. We have reconstructed an avalanche event record for the past 45 years that indicates Iliamna avalanches occur at higher frequency at a given magnitude than other mass failures in volcanic and alpine environments. Iliamna Volcano is thus an ideal site to study such mass failures and their relation to volcanic activity. In this study, we present different methods that fit into a concept of (1) long-term monitoring, (2) early warning, and (3) event documentation and analysis of ice-rock avalanches on ice-capped active volcanoes. Long-term monitoring methods include seismic signal analysis and space- and airborne observations. Landsat and ASTER satellite data were used to study the extent of hydrothermally altered rocks and surface thermal anomalies at the summit region of Iliamna. Subpixel heat source calculation for the summit regions where avalanches initiate yielded temperatures of 307 to 613 K assuming heat source areas of 1000 to 25 m², respectively, indicating strong convective heat flux processes. Such heat flow causes ice melting conditions and is thus likely to reduce the strength at the base of the glacier. We furthermore demonstrate typical seismic records of Iliamna avalanches with rarely observed precursory signals up to two hours prior to failure, and show how such signals could be used for a multi-stage avalanche warning system in the future. For event analysis and documentation, space- and airborne observations and seismic records in combination with SRTM- and ASTER-derived terrain data allowed us to reconstruct avalanche dynamics and to identify remarkably similar failure and propagation mechanisms of Iliamna avalanches over the past 45 years.
Simple avalanche flow modeling was able to reasonably replicate Iliamna avalanches and can thus be applied for hazard assessments. Hazards at Iliamna Volcano are low due to its remote location; however, we emphasize the transfer potential of the methods presented here to other ice-capped volcanoes with much higher hazards such as those in the Cascades or the Andes. ?? 2007 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, D. C.; Tanioka, Kenkichi; Rowlands, J. A.
2007-12-15
The flat-panel detector (FPD) is the state-of-the-art detector for digital radiography. The FPD can acquire images in real time, has superior spatial resolution, and is free of the problems of x-ray image intensifiers: veiling glare, pin-cushion distortion, and magnetic distortion. However, FPDs suffer from poor signal-to-noise ratio performance at typical fluoroscopic exposure rates, where the quantum noise is reduced to the point that it becomes comparable to the fixed electronic noise. It has been shown previously that avalanche multiplication gain in amorphous selenium (a-Se) can provide the necessary amplification to overcome the electronic noise of the FPD. Avalanche multiplication, however, comes with its own intrinsic contribution to the noise in the form of gain fluctuation noise. In this article a cascaded systems analysis is used to present a modified metric related to the detective quantum efficiency. The modified metric is used to study a diagnostic x-ray imaging system in the presence of intrinsic avalanche multiplication noise independently from other noise sources, such as electronic noise. An indirect conversion imaging system is considered to make the study independent of other avalanche-multiplication-related noise sources, such as the fluctuations arising from the depth of x-ray absorption. In this case all the avalanche events are initiated at the surface of the avalanche layer, and there are no fluctuations in the depth of absorption. Experiments on an indirect conversion x-ray imaging system using avalanche multiplication in a layer of a-Se are also presented. The cascaded systems analysis shows that intrinsic noise of avalanche multiplication will not have any deleterious influence on detector performance at zero spatial frequency in x-ray imaging provided the product of conversion gain, coupling efficiency, and optical quantum efficiency is much greater than a factor of 2.
The experimental results show that avalanche multiplication in a-Se behaves as intrinsic-noise-free avalanche multiplication, in accordance with our theory. Provided good coupling efficiency and high optical quantum efficiency are maintained, avalanche multiplication in a-Se has the potential to increase the gain while making a negligible contribution to the noise, thereby improving the performance of indirect FPDs in fluoroscopy.
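The zero-frequency condition stated above can be illustrated with a toy cascaded-systems calculation. The sketch below is not the authors' model: it assumes (as an illustration only) an exponential avalanche gain distribution, for which the relative gain variance is approximately 1 and the zero-frequency DQE degradation reduces to 1/(1 + 2/m̄), recovering the "much greater than 2" condition on m̄, the product of conversion gain, coupling efficiency, and optical quantum efficiency.

```python
# Hedged sketch: zero-frequency DQE of an indirect detector with avalanche gain.
# Assumption (not from the abstract): exponential avalanche gain distribution,
# so the relative gain variance eps_av = sigma_g^2 / g^2 is about 1.
def dqe_zero_freq(m_bar, eps_av=1.0):
    """m_bar: mean optical quanta coupled per absorbed x-ray
    (conversion gain x coupling efficiency x optical QE)."""
    return 1.0 / (1.0 + (1.0 + eps_av) / m_bar)

# When m_bar >> 2 the avalanche noise penalty vanishes:
for m in (2, 20, 200):
    print(m, round(dqe_zero_freq(m), 3))
```

For m̄ = 2 the DQE is halved; for m̄ = 200 the penalty is about 1%, consistent with the "noise-free" behavior reported in the experiments.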
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shneider, Mikhail N.; Zhang Zhili; Miles, Richard B.
2008-07-15
Resonant enhanced multiphoton ionization (REMPI) and electron avalanche ionization (EAI) are measured simultaneously in Ar:Xe mixtures at different partial pressures of the mixture components. A simple theory for combined REMPI+EAI in a gas mixture is developed. It is shown that the REMPI electrons seed the avalanche process, and thus the avalanche process amplifies the REMPI signal. Possible applications are discussed.
Ballistic Deficits for Ionization Chamber Pulses in Pulse Shaping Amplifiers
NASA Astrophysics Data System (ADS)
Kumar, G. Anil; Sharma, S. L.; Choudhury, R. K.
2007-04-01
In order to understand the dependence of the ballistic deficit on the shape of the rising portion of the voltage pulse at the input of a pulse shaping amplifier, we have estimated the ballistic deficits for pulses from a two-electrode parallel plate ionization chamber as well as for pulses from a gridded parallel plate ionization chamber. These estimates were made by numerical integration for pulses processed through the CR-RC^n (n = 1-6) shaping network as well as through the complex shaping network of the ORTEC Model 472 spectroscopic amplifier. Further, we have made simulations to see the effect of ballistic deficit on the pulse-height spectra under different conditions. We have also measured the ballistic deficits for pulses from both chamber types when processed through the ORTEC 572 linear amplifier, which has a simple CR-RC shaping network. The reasonable match between the simulated and experimental ballistic deficits for the CR-RC shaping network clearly establishes the validity of the simulation technique.
Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance among processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove at a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine IBM SP2.
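The accept/reject decision at the end of the Jove pipeline can be caricatured as a cost-benefit test. The sketch below is purely illustrative — the cost model, numbers, and function name are assumptions, not from the paper: a new partition is adopted only if the predicted per-iteration saving, amortized over a horizon of iterations, outweighs the one-time data-movement cost.

```python
# Hedged sketch of a Jove-style decision step. The imbalance measure, the
# linear cost model, and the horizon are all illustrative assumptions.
def jove_decide(current_imbalance, predicted_imbalance, move_cost, horizon=100):
    """Accept a new partition if the work saved over `horizon` iterations
    exceeds the one-time cost of redistributing the data."""
    saved_per_iter = current_imbalance - predicted_imbalance
    return saved_per_iter * horizon > move_cost

print(jove_decide(0.30, 0.05, move_cost=10.0))   # repartitioning pays off
print(jove_decide(0.30, 0.28, move_cost=10.0))   # gain too small to justify the move
```

The point of such a test is the one made in the abstract: because Jove evaluates the candidate partition while the workers keep computing, a rejected repartition costs almost nothing.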
NASA Technical Reports Server (NTRS)
Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad
1994-01-01
The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y-MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.
Properties of a Variable-Delay Polarization Modulator
NASA Technical Reports Server (NTRS)
Chuss, David T.; Wollack, Edward J.; Henry, Ross; Hui, Howard; Juarez, Aaron J.; Krenjy, Megan; Moseley, Harvey; Novak, Giles
2011-01-01
We investigate the polarization modulation properties of a variable-delay polarization modulator (VPM). The VPM modulates polarization via a variable separation between a polarizing grid and a parallel mirror. We find that in the limit where the wavelength is much larger than the diameter of the metal wires that comprise the grid, the phase delay derived from the geometric separation between the mirror and the grid is sufficient to characterize the device. However, outside of this range, additional parameters describing the polarizing grid geometry must be included to fully characterize the modulator response. In this paper, we report test results of a VPM at wavelengths of 350 micron and 3 mm. Electromagnetic simulations of wire grid polarizers were performed and are summarized using a simple circuit model that incorporates the loss and polarization properties of the device.
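In the long-wavelength limit described above, the modulator is characterized by the geometric phase delay alone. As a hedged illustration — the formula below is the standard normal-incidence geometric estimate, not a result quoted from the paper — the round-trip path difference 2d between the grid and the mirror gives a phase delay of 4πd/λ:

```python
import math

# Hedged sketch: geometric phase delay of a VPM at normal incidence,
# valid only in the limit wavelength >> wire diameter discussed above.
def vpm_phase_delay(separation_m, wavelength_m):
    """Phase delay (radians) from a grid-mirror separation `separation_m`."""
    return 4.0 * math.pi * separation_m / wavelength_m

# A half-wave of delay (phi = pi) at lambda = 3 mm needs d = lambda/4 = 0.75 mm:
print(round(vpm_phase_delay(0.75e-3, 3e-3) / math.pi, 2))
```

Outside this limit, as the abstract notes, the grid geometry itself (wire diameter and pitch) must enter the model.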
Method of constructing dished ion thruster grids to provide hole array spacing compensation
NASA Technical Reports Server (NTRS)
Banks, B. A. (Inventor)
1976-01-01
The center-to-center spacings of a photoresist pattern for an array of holes applied to a thin metal sheet are increased by uniformly stretching the thin metal sheet in all directions along the plane of the sheet. The uniform stretching is provided by securely clamping the periphery of the sheet and applying an annular force against the face of the sheet, within the periphery of the sheet and around the photoresist pattern. The technique is used in the construction of ion thruster grid units where the outer or downstream grid is subjected to uniform stretching prior to convex molding. The technique provides alignment of the holes of grid pairs so as to direct the ion beamlets in a direction parallel to the axis of the grid unit and thereby provide optimization of the available thrust.
IFKIS a basis for organizational measures in avalanche risk management
NASA Astrophysics Data System (ADS)
Bründl, M.; Etter, H.-J.; Klingler, Ch.; Steiniger, M.; Rhyner, J.; Ammann, W.
2003-04-01
The avalanche winter of 1999 in Switzerland showed that the combination of protection measures such as avalanche barriers, hazard zone mapping, artificial avalanche release, and organisational measures (closure of roads, evacuation, etc.) performed well. However, education as well as information and communication between the involved organizations proved to be a weak link in the crisis management. In the first part of the project IFKIS we developed a modular education and training course program for those responsible for the avalanche safety of settlements and roads. In the second part an information system was developed which improves, on the one hand, the information flow between the national center for avalanche forecasting, the Swiss Federal Institute for Snow and Avalanche Research SLF, and the local forecasters and, on the other hand, the communication between the avalanche safety services in the communities. During the last two years an information system based on Internet technology has been developed for this purpose. This system allows the transmission of measured data and observations to a central database at SLF and visualization of the data for different users. It also provides the possibility to exchange information on organizational measures such as closure of roads, artificial avalanche release, etc. on a local and regional scale. This improves the information flow and the coordination of safety measures, because all users, although at different places, are on the same information level. Inconsistent safety measures can be avoided, and information and communication concerning avalanche safety become much more transparent for all persons involved in hazard management. The training program as well as the concept for the information system are important foundations for efficient avalanche risk management, and also for other natural processes and catastrophes.
Natural avalanches and transportation: A case study from Glacier National Park, Montana, USA
Reardon, B.A.; Fagre, Daniel B.; Steiner, R.W.
2004-01-01
In January 2004, two natural avalanches (destructive class 3) derailed a freight train in John F. Stevens Canyon, on the southern boundary of Glacier National Park. The railroad tracks were closed for 29 hours due to cleanup and lingering avalanche hazard, backing up 112 km of trains and shutting down Amtrak's passenger service. The incident marked the fourth time in three winters that natural avalanches had disrupted transportation in the canyon, which is also the route of U.S. Highway 2. It was the latest in a 94-year history of accidents that includes three fatalities and the destruction of a major highway bridge. Despite that history and the presence of over 40 avalanche paths in the 16 km canyon, mitigation is limited to nine railroad snow sheds and occasional highway closures. This case study examines natural avalanche cycles of the past 28 winters using data from field observations, a Natural Resources Conservation Service (NRCS) SNOTEL station, and data collected since 2001 at a high-elevation weather station. The avalanches occurred when storms with sustained snowfall buried a persistent near-surface faceted layer and/or were followed by rain-on-snow or dramatic warming (as much as 21 °C in 30 minutes). Natural avalanche activity peaked when temperatures clustered near freezing (mean of -1.5 °C at 1800 m elevation). Avalanches initiated through rapid loading, rain falling on new snow, and/or temperature-related changes in the mechanical properties of slabs. Lastly, the case study describes how recent incidents have prompted a unique partnership of land management agencies, private corporations, and non-profit organizations to develop an avalanche mitigation program for the transportation corridor.
Statistical analyses support power law distributions found in neuronal avalanches.
Klaus, Andreas; Yu, Shan; Plenz, Dietmar
2011-01-01
The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
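Step (ii) above, estimating the power-law exponent by maximum likelihood, is straightforward to sketch. The snippet below is a generic continuous power-law MLE applied to synthetic data, not the authors' analysis pipeline; it checks that the estimator recovers an exponent of 1.5, the magnitude of the slope reported for neuronal avalanches.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_power_law(n, alpha=1.5, x_min=1.0):
    """Draw n samples from p(x) ~ x^(-alpha) for x >= x_min (inverse-CDF method)."""
    u = rng.random(n)
    return x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def mle_exponent(x, x_min=1.0):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(x / x_min))."""
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

sizes = sample_power_law(50_000, alpha=1.5)
print(round(mle_exponent(sizes), 2))  # should be approximately 1.5
```

In practice, as the abstract emphasizes, the point estimate must be complemented by goodness-of-fit testing (Kolmogorov-Smirnov distance) and likelihood-ratio comparisons against alternative heavy-tailed models.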
NASA Astrophysics Data System (ADS)
Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.
2018-03-01
A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations are determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O(N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 10⁴, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 10⁴ to 10⁵ unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
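The multigrid machinery referred to above — smoothing, restriction, coarse-grid correction, prolongation — can be illustrated on a much simpler model problem. The sketch below is a textbook 1D Poisson V-cycle on a uniform grid, not the authors' variable-density Navier-Stokes solver; the damping factor, smoothing counts, and grid size are illustrative choices.

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Damped-Jacobi smoother for the 1D model problem -u'' = f, u(0)=u(1)=0."""
    for _ in range(iters):
        u[1:-1] += 0.8 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction onto every other grid point."""
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def v_cycle(u, f, h):
    u = smooth(u, f, h)                              # pre-smoothing
    if len(u) > 3:
        ec = v_cycle(np.zeros((len(u) + 1) // 2), restrict(residual(u, f, h)), 2 * h)
        # Linear-interpolation prolongation of the coarse-grid correction:
        u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return smooth(u, f, h)                           # post-smoothing

n = 129
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                   # manufactured RHS; exact u = sin(pi x)
u = np.zeros(n)
for _ in range(15):
    u = v_cycle(u, f, 1.0 / (n - 1))
print(f"max error vs exact solution: {np.max(np.abs(u - np.sin(np.pi * x))):.2e}")
```

The O(N) cost reported in the abstract comes from the same structure: each V-cycle visits a geometric hierarchy of grids whose total size is a constant multiple of the finest grid.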
Apparatus and method for recharging a string of avalanche transistors within a pulse generator
Fulkerson, E. Stephen
2000-01-01
An apparatus and method for recharging a string of avalanche transistors within a pulse generator is disclosed. A plurality of amplification stages are connected in series. Each stage includes an avalanche transistor and a capacitor. A trigger signal causes the apparatus to generate a very high voltage pulse of very brief duration, which discharges the capacitors. Charge resistors inject current into the string of avalanche transistors at various points, recharging the capacitors. The method of the present invention includes the steps of supplying current to charge resistors from a power supply; using the charge resistors to charge capacitors connected to a set of serially connected avalanche transistors; triggering the avalanche transistors; generating a high-voltage pulse from the charge stored in the capacitors; and recharging the capacitors through the charge resistors.
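The recharge path described in the abstract is, per stage, a resistor charging a capacitor, so the recovery time between pulses follows the usual RC exponential. The component values below are illustrative assumptions, not taken from the patent:

```python
import math

# Hedged sketch: time for one stage capacitor to recover through its charge
# resistor after the avalanche discharge. Component values are illustrative.
def recharge_time(r_ohm, c_farad, fraction=0.99):
    """Time for the capacitor to recover the given fraction of full charge:
    t = -R*C*ln(1 - fraction)."""
    return -r_ohm * c_farad * math.log(1.0 - fraction)

# e.g. 100 kOhm into 100 pF (tau = 10 us) recovers to 99% in about 46 us:
print(f"{recharge_time(1e5, 1e-10) * 1e6:.1f} us")
```

This is the trade-off such designs balance: smaller charge resistors shorten the recovery time (raising the maximum repetition rate) but load the stage more heavily during the output pulse.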
A practitioner's tool for assessing glide crack activity
Hendrikx, Jordy; Peitzsch, Erich H.; Fagre, Daniel B.
2010-01-01
Glide cracks can result in full-depth glide avalanche release. Avalanches from glide cracks are notoriously difficult to forecast, but are a recurring problem in a number of different avalanche forecasting programs across a range of snow climates. Despite this, there is no consensus on how best to manage, mitigate, or even observe glide cracks and the potential resultant avalanche activity. It is thought that an increase in the rate of snow gliding occurs prior to full-depth avalanche activity, so frequent measurement of glide crack movement provides an index of instability. Therefore, a comprehensive avalanche program dealing with glide crack avalanche activity should, at the least, undertake some form of direct monitoring of glide crack movement. In this paper we present a simple, cheap, and repeatable method to track glide crack activity using a series of stakes, reflectors, and a laser rangefinder (LaserTech TruPulse360B) linked to a GPS (Trimble Geo XH). We tested the methodology in April 2010 on a glide crack above the Going-to-the-Sun Road in Glacier National Park, Montana, USA. This study suggests a new method to better track the development and movement of glide cracks. It is hoped that by introducing a workable method to easily record glide crack movement, avalanche forecasters will improve their understanding of when, or if, avalanche activity will ensue. Our initial results suggest that these new observations, when combined with local micrometeorological data, will result in improved process understanding and forecasting of these phenomena.
Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis
2013-01-01
We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331
Parallel Implementation of the Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Baggag, Abdalkader; Atkins, Harold; Keyes, David
1999-01-01
This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth unstructured grids and is well suited for time-dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects.
Ostermann, Marc; Sanders, Diethard; Ivy-Ochs, Susan; Alfimov, Vasily; Rockenschaub, Manfred; Römer, Alexander
2012-10-15
In the Obernberg valley, Eastern Alps, landforms recently interpreted as moraines are re-interpreted as rock avalanche deposits. The catastrophic slope failure involved an initial rock volume of about 45 million m³, with a runout of 7.2 km over a total vertical distance of 1330 m (fahrböschung 10°). ³⁶Cl surface-exposure dating of boulders of the avalanche mass indicates an event age of 8.6 ± 0.6 ka. A ¹⁴C age of 7785 ± 190 cal yr BP for a palaeosoil within an alluvial fan downlapping the rock avalanche is consistent with the event age. The distal 2 km of the rock-avalanche deposit is characterized by a highly regular array of transverse ridges that were previously interpreted as Late-Glacial terminal moraines. 'Jigsaw-puzzle structure' of gravel- to boulder-size clasts in the ridges and a matrix of cataclastic gouge indicate a rock avalanche origin. The avalanche deposit is preserved over a wide altitude range, and the event age of the mass-wasting precludes both runout over glacial ice and subsequent glacial overprint. The regularly arrayed transverse ridges thus were formed during freezing of the rock avalanche deposit.
Avalanche risk in backcountry terrain based on usage frequency and accident data
NASA Astrophysics Data System (ADS)
Techel, F.; Zweifel, B.; Winkler, K.
2014-08-01
In Switzerland, the vast majority of avalanche accidents occur during recreational activities. Risk analysis studies mostly rely on accident statistics without considering exposure (or the elements at risk), i.e. how many people are recreating, and where. We compared the accident data (backcountry touring) with reports from two social-media mountaineering networks, bergportal.ch and camptocamp.org. On these websites, users reported more than 15,000 backcountry tours during the five winters 2009/2010 to 2013/2014. We noted similar patterns in avalanche accident data and user data, such as the demographics of recreationists, the distribution across days of the week (weekday vs. weekend), and weather conditions (fine vs. poor weather). However, we also found differences, for example in the avalanche danger conditions on days with activity versus days with accidents, and in the geographic distribution. While backcountry activities are concentrated in proximity to the main population centres in the west and north of the Swiss Alps, a large proportion of the severe avalanche accidents occurred in the inner-alpine, more continental regions with a frequently unfavorable snowpack structure. This suggests that even greater emphasis should be put on the type of avalanche problem in avalanche education and avalanche forecasting to increase the safety of backcountry recreationists.
Modeling the influence of snow cover temperature and water content on wet-snow avalanche runout
NASA Astrophysics Data System (ADS)
Valero, Cesar Vera; Wever, Nander; Christen, Marc; Bartelt, Perry
2018-03-01
Snow avalanche motion is strongly dependent on the temperature and water content of the snow cover. In this paper we use a snow cover model, driven by measured meteorological data, to set the initial and boundary conditions for wet-snow avalanche calculations. The snow cover model provides estimates of snow height, density, temperature and liquid water content. This information is used to prescribe fracture heights and erosion heights for an avalanche dynamics model. We compare simulated runout distances with observed avalanche deposition fields using a contingency table analysis. Our analysis of the simulations reveals a large variability in predicted runout for tracks with flat terraces and gradual slope transitions to the runout zone. Reliable estimates of avalanche mass (height and density) in the release and erosion zones are identified to be more important than an exact specification of temperature and water content. For wet-snow avalanches, this implies that the layers where meltwater accumulates in the release zone must be identified accurately as this defines the height of the fracture slab and therefore the release mass. Advanced thermomechanical models appear to be better suited to simulate wet-snow avalanche inundation areas than existing guideline procedures if and only if accurate snow cover information is available.
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
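The data-flow-graph structure described above can be sketched with a toy executor: each node runs once its predecessors have produced their outputs, which it consumes as initialization data. The node names below reuse NPB kernel abbreviations, but the "task" is a stand-in, not an actual benchmark kernel.

```python
# Hedged sketch of the NGB idea: benchmark tasks as nodes of a data-flow
# graph, each consuming its predecessors' outputs as initialization data.
from graphlib import TopologicalSorter

# Edges map each node to the set of nodes it depends on (illustrative chain):
graph = {"BT": {"SP"}, "SP": {"LU"}, "LU": set()}

def run_task(name, inputs):
    """Stand-in for an NPB kernel; a real node would run BT/SP/LU etc."""
    return sum(inputs, 1)

results = {}
for node in TopologicalSorter(graph).static_order():
    results[node] = run_task(node, [results[dep] for dep in graph[node]])
print(results)
```

As the specification allows, the implementor is free to realize the same graph in any language or grid environment; only the verification values and the reported turnaround time are fixed.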
Integrating Grid Services into the Cray XT4 Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy
2009-05-01
The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment, and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting, and monitoring services through generic grid interfaces that mask the underlying system-specific details for the end user.
The Avalanche Catastrophe of El Teniente-chile: August 8 of 1944.
NASA Astrophysics Data System (ADS)
Vergara, J.; Baros, M.
The avalanche of El Teniente, Chile (~34° S) on August 8, 1944, was the most serious avalanche accident in Chile of the last 100 years. On the night of August 8, 1944, a major avalanche struck Sewell, a workers' village of the El Teniente copper mine; there were 102 fatalities, and 8 buildings, one school, and one bridge were destroyed. The avalanche followed a storm over the central part of Chile in which intense precipitation fell over the Andes mountains for nine days. Historical precipitation records near Sewell show that total rainfall during the storm was 299 mm (La Rufina) and 349 mm (Bullileo), and on the day before the avalanche the 24-hour rain intensity was 93 mm. A Weibull statistical analysis of the monthly snowfall (water equivalent) record in Sewell from 1912-2001 shows that the total August 1944 snowfall (621 mm) was the largest in the historical record, with a return period close to one event in 180 years; the annual snowfall during 1944 was 1140 mm, with a return period of 3.8 years. KEYWORDS: Chile, Avalanches, Andes Mountains, Avalanche Disaster, Historical Snow Records.
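Return periods like those quoted above are commonly obtained from a ranked annual-maximum series. The sketch below uses the Weibull plotting position T = (n + 1)/m, where m is the rank in descending order, applied to synthetic values (the 621 mm figure is included only for illustration, and this is not the Sewell record); a fitted Weibull or Gumbel distribution would then extrapolate beyond the record length, as needed for the ~180-year estimate.

```python
# Hedged sketch: empirical return periods from an annual-maximum series
# using the Weibull plotting position T = (n + 1) / m. Data are synthetic.
def weibull_return_periods(annual_maxima):
    ranked = sorted(annual_maxima, reverse=True)
    n = len(ranked)
    return [(value, (n + 1) / rank) for rank, value in enumerate(ranked, start=1)]

series = [120, 340, 210, 621, 95, 280, 150, 400, 175, 260]  # mm, synthetic
for value, T in weibull_return_periods(series)[:3]:
    print(f"{value} mm  ->  T = {T:.1f} yr")
```

Note that the plotting position alone cannot assign a return period longer than n + 1 years to any observed value; estimates such as "one event in 180 years" from a ~90-year record necessarily come from the fitted distribution, not the empirical ranks.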
Avalanches, plasticity, and ordering in colloidal crystals under compression.
McDermott, D; Reichhardt, C J Olson; Reichhardt, C
2016-06-01
Using numerical simulations we examine colloids with a long-range Coulomb interaction confined in a two-dimensional trough potential undergoing dynamical compression. As the depth of the confining well is increased, the colloids move via elastic distortions interspersed with intermittent bursts or avalanches of plastic motion. In these avalanches, the colloids rearrange to minimize their colloid-colloid repulsive interaction energy by adopting an average lattice constant that is isotropic despite the anisotropic nature of the compression. The avalanches take the form of shear banding events that decrease or increase the structural order of the system. At larger compression, the avalanches are associated with a reduction of the number of rows of colloids that fit within the confining potential, and between avalanches the colloids can exhibit partially crystalline or anisotropic ordering. The colloid velocity distributions during the avalanches have a non-Gaussian form with power-law tails and exponents that are consistent with those found for the velocity distributions of gliding dislocations. We observe similar behavior when we subsequently decompress the system, and find a partially hysteretic response reflecting the irreversibility of the plastic events.
A solid-state amorphous selenium avalanche technology for low photon flux imaging applications
Wronski, M. M.; Zhao, W.; Reznik, A.; Tanioka, K.; DeCrescenzo, G.; Rowlands, J. A.
2010-01-01
Purpose: The feasibility of a practical solid-state technology for low photon flux imaging applications was investigated. The technology is based on an amorphous selenium photoreceptor with a voltage-controlled avalanche multiplication gain. If this photoreceptor can provide sufficient internal gain, it will be useful for an extensive range of diagnostic imaging systems. Methods: The avalanche photoreceptor under investigation is referred to as HARP-DRL. This is a novel concept in which a high-gain avalanche rushing photoconductor (HARP) is integrated with a distributed resistance layer (DRL) and sandwiched between two electrodes. The avalanche gain and leakage current characteristics of this photoreceptor were measured. Results: HARP-DRL has been found to sustain very high electric field strengths without electrical breakdown. It has shown avalanche multiplication gains as high as 10⁴ and a very low leakage current (≤20 pA/mm²). Conclusions: This is the first experimental demonstration of a solid-state amorphous photoreceptor which provides sufficient internal avalanche gain for photon counting and photon-starved imaging applications. PMID:20964217
Capacitively coupled RF diamond-like-carbon reactor
Devlin, David James; Coates, Don Mayo; Archuleta, Thomas Arthur; Barbero, Robert Steven
2000-01-01
A process of coating a non-conductive fiber with diamond-like carbon is provided, comprising passing a non-conductive fiber between a pair of parallel metal grids within a reaction chamber, introducing a hydrocarbon gas into the reaction chamber, and forming a plasma within the reaction chamber for a period sufficient for diamond-like carbon to form upon the non-conductive fiber. Also provided is a reactor chamber for deposition of diamond-like carbon upon a non-conductive fiber, including a vacuum chamber; a cathode assembly including a pair of electrically isolated, opposing parallel metal grids spaced apart at a distance of less than about 1 centimeter; an anode; a means of introducing a hydrocarbon gas into said vacuum chamber; and a means of generating a plasma within said vacuum chamber.
Change Detection of Mobile LIDAR Data Using Cloud Computing
NASA Astrophysics Data System (ADS)
Liu, Kun; Boehm, Jan; Alis, Christian
2016-06-01
Change detection has long been a challenging problem, although much research has been conducted in fields such as remote sensing and photogrammetry, computer vision, and robotics. In this paper, we blend voxel grids and Apache Spark to propose an efficient method that addresses the problem in the context of big data. A voxel grid is a regular geometric representation consisting of voxels of uniform size, which suits parallel computation well. Apache Spark is a popular distributed parallel computing platform that provides fault tolerance and in-memory caching. These features significantly enhance the performance of Apache Spark and result in an efficient and robust implementation. In our experiments, both synthetic and real point cloud data are employed to demonstrate the quality of our method.
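A minimal sketch of the voxel-based comparison step, assuming each scan is a list of (x, y, z) points; the real method distributes this work over Spark partitions keyed by voxel index, which is not shown here:

```python
def voxelize(points, size):
    """Map 3-D points to the integer indices of the voxels containing them."""
    return {(int(x // size), int(y // size), int(z // size))
            for x, y, z in points}

def changed_voxels(scan_a, scan_b, size=1.0):
    """Voxels occupied in exactly one of the two scans (symmetric difference)."""
    return voxelize(scan_a, size) ^ voxelize(scan_b, size)

# Toy example: one point moved between the two epochs.
a = [(0.2, 0.3, 0.1), (5.1, 5.2, 5.3)]
b = [(0.2, 0.3, 0.1), (9.8, 5.2, 5.3)]
print(sorted(changed_voxels(a, b)))  # -> [(5, 5, 5), (9, 5, 5)]
```

Because voxel indices are independent keys, the symmetric difference parallelizes naturally as a keyed join in Spark.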
First approximations in avalanche model validations using seismic information
NASA Astrophysics Data System (ADS)
Roig Lafon, Pere; Suriñach, Emma; Bartelt, Perry; Pérez-Guillén, Cristina; Tapia, Mar; Sovilla, Betty
2017-04-01
Avalanche dynamics modelling is an essential tool for snow hazard management. Scenario-based numerical modelling provides quantitative arguments for decision-making. The software tool RAMMS (WSL Institute for Snow and Avalanche Research SLF) is one such tool, often used by government authorities and geotechnical offices. As avalanche models improve, the quality of the numerical results will depend increasingly on user experience in the specification of input (e.g. release and entrainment volumes, secondary releases, snow temperature and quality). New model developments must continue to be validated against data from real events to improve performance and reliability. The avalanche group of the University of Barcelona (RISKNAT - UB) has studied the seismic signals generated by avalanches since 1994. Presently, the group manages the seismic installation at SLF's Vallée de la Sionne experimental site (VDLS). At VDLS the recorded seismic signals can be correlated with other avalanche measurement techniques, including both advanced remote sensing methods (radars, videogrammetry) and obstacle-based sensors (pressure, capacitance, optical sender-reflector barriers). This comparison between different measurement techniques allows the group to address the question of whether seismic analysis alone can be used, on additional avalanche tracks, to gain insight into and validate numerical avalanche dynamics models in different terrain conditions. In this study, we aim to use the seismic data as an external record of the phenomena, able to validate RAMMS models. Seismic sensors are considerably easier and cheaper to install than other physical measuring tools, and can record data in all atmospheric conditions (bad weather, low light, or freezing conditions render photography and other kinds of sensors unusable). With seismic signals, we record the temporal evolution of the inner and denser parts of the avalanche.
We are able to recognize the approximate position of the flow on the slope and to make observations of the internal flow dynamics, especially flow-regime transitions, which depend on the slope-perpendicular energy fluxes induced by collisions at the basal boundary. The data recorded over several experimental seasons provide a catalogue of seismic data from different types and sizes of avalanches triggered at the VDLS experimental site. These avalanches are also recorded by the SLF instrumentation (FMCW radars, photography, photogrammetry, video, videogrammetry, pressure sensors). We select the best-quality avalanche data to model and compare. All this information allows us to calibrate parameters governing the internal energy fluxes, especially those governing the interaction of the avalanche with the incumbent snow cover. For the comparison between the seismic signal and the RAMMS models, we focus on the temporal evolution of the flow, trying to match the arrival times of the front at the seismic sensor location in the avalanche path. We make direct quantitative comparisons between measurements and model outputs, comparing modelled flow height, normal stress, velocity, and pressure values with the seismic signal, its envelope, and its running spectrogram. In all cases, the first comparisons between the seismic signal and RAMMS outputs are very promising.
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
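One piece of bookkeeping implied by running event generation at this scale is assigning each MPI rank its own contiguous slice of the total event count. A hedged sketch of that split (function name hypothetical; the actual Alpgen port's scheme is not described in this abstract):

```python
def events_for_rank(total_events, n_ranks, rank):
    """Contiguous, nearly even split of total_events over n_ranks.

    The first (total_events % n_ranks) ranks take one extra event.
    Returns (start_index, count) for the given rank.
    """
    base, extra = divmod(total_events, n_ranks)
    count = base + (1 if rank < extra else 0)
    start = rank * base + min(rank, extra)
    return start, count

# 10 events over 4 ranks -> counts 3, 3, 2, 2 covering indices 0..9.
print([events_for_rank(10, 4, r) for r in range(4)])
# -> [(0, 3), (3, 3), (6, 2), (8, 2)]
```

A per-rank start index also gives each rank a distinct random-number stream offset, avoiding duplicated events across the million threads.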
Microwave Power Combiners for Signals of Arbitrary Amplitude
NASA Technical Reports Server (NTRS)
Conroy, Bruce; Hoppe, Daniel
2009-01-01
Schemes for combining power from coherent microwave sources of arbitrary (unequal or equal) amplitude have been proposed. Most prior microwave-power-combining schemes are limited to sources of equal amplitude. The basic principle of the schemes now proposed is to use quasi-optical components to manipulate the polarizations and phases of two arbitrary-amplitude input signals in such a way as to combine them into one output signal having a specified, fixed polarization. To combine power from more than two sources, one could use multiple power-combining stages based on this principle, feeding the outputs of lower-power stages as inputs to higher-power stages. Quasi-optical components suitable for implementing these schemes include grids of parallel wires, vane polarizers, and a variety of waveguide structures. For the sake of brevity, the remainder of this article illustrates the basic principle by focusing on one scheme in which a wire grid and two vane polarizers would be used. Wire grids are the key quasi-optical elements in many prior equal-power combiners. In somewhat oversimplified terms, a wire grid reflects an incident beam having an electric field parallel to the wires and passes an incident beam having an electric field perpendicular to the wires. In a typical prior equal-power combining scheme, one provides for two properly phased, equal-amplitude signals having mutually perpendicular linear polarizations to impinge from two mutually perpendicular directions on a wire grid in a plane oriented at an angle of 45° with respect to both beam axes. The wires in the grid are oriented to pass one of the incident beams straight through onto the output path and to reflect the other incident beam onto the output path along with the first-mentioned beam.
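For the equal- and unequal-amplitude cases alike, the combined field at the grid is the vector sum of the two orthogonal components, so the output stays linearly polarized but tilts toward the stronger input. A small illustrative calculation (in-phase beams assumed, losses ignored; not taken from the article):

```python
import math

def combine_orthogonal(e1, e2):
    """Combine two coherent, in-phase beams with orthogonal linear
    polarizations (field amplitudes e1, e2) on a wire grid.

    The output amplitude is the vector sum of the two components, so
    power is conserved (amplitude**2 = e1**2 + e2**2); the output
    polarization angle depends on the amplitude ratio, which is why
    unequal inputs yield a tilted (but still linear) polarization.
    """
    amplitude = math.hypot(e1, e2)
    angle_deg = math.degrees(math.atan2(e2, e1))
    return amplitude, angle_deg

# Equal amplitudes give the familiar 45-degree combined polarization;
# a 2:1 ratio tilts the output toward the stronger component.
print(combine_orthogonal(1.0, 1.0))  # (1.414..., 45.0)
print(combine_orthogonal(2.0, 1.0))  # (2.236..., 26.56...)
```

In the proposed scheme, the vane polarizers would then rotate this tilted polarization to the specified fixed output polarization.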
Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li
2018-07-01
Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces the computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
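GROG proper shifts each sample to its nearest Cartesian neighbor by applying coil-derived GRAPPA operators raised to the fractional shift. The sketch below shows only the geometric half of that step, computing target grid indices and the residual sub-pixel shifts the operators would have to realize; the operator algebra itself is omitted, and all names are illustrative:

```python
import numpy as np

def nearest_grid_shift(kx, ky, grid_size):
    """Round non-Cartesian k-space coordinates (in grid units, within
    [-grid_size/2, grid_size/2)) to the nearest Cartesian grid indices.

    Returns the target indices and the residual sub-pixel shifts
    (each within [-0.5, 0.5]) that GROG's coil-based shift operators
    would apply to move the data physically onto the grid.
    """
    ix = np.rint(kx).astype(int) + grid_size // 2
    iy = np.rint(ky).astype(int) + grid_size // 2
    dx, dy = kx - np.rint(kx), ky - np.rint(ky)
    return ix, iy, dx, dy

# One radial spoke through the centre of a 128x128 grid.
r = np.linspace(-60, 60, 257)
theta = np.pi / 7
ix, iy, dx, dy = nearest_grid_shift(r * np.cos(theta), r * np.sin(theta), 128)
print(np.abs(dx).max() <= 0.5 and np.abs(dy).max() <= 0.5)  # -> True
```

Because every sample lands on an integer grid location, the subsequent iterative reconstruction can use plain FFTs instead of repeated convolution gridding.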
Practical operational implementation of Teton Pass avalanche monitoring infrasound system.
DOT National Transportation Integrated Search
2008-12-01
Highway snow avalanche forecasting programs typically rely on weather and field observations to make road closure and hazard evaluations. Recently, infrasonic avalanche monitoring technology has been developed for practical use near Teton Pass, WY ...
NASA Technical Reports Server (NTRS)
Blonski, Slawomir
2007-01-01
This Candidate Solution is based on using active and passive microwave measurements acquired from NASA satellites to improve USDA (U.S. Department of Agriculture) Forest Service forecasting of avalanche danger. Regional Avalanche Centers prepare avalanche forecasts using ground measurements of snowpack and mountain weather conditions. In this Solution, the range of the in situ observations is extended by adding remote sensing measurements of snow depth, snow water equivalent, and snowfall rate acquired by satellite missions that include Aqua, CloudSat, the future GPM (Global Precipitation Measurement), and the proposed SCLP (Snow and Cold Land Processes). Measurements of snowpack conditions and their time evolution are improved by combining the in situ and satellite observations with a snow model. Recurring snow observations from NASA satellites increase the accuracy of avalanche forecasting, which helps the public and the managers of public facilities make better avalanche safety decisions.
Avalanche risk assessment in Russia
NASA Astrophysics Data System (ADS)
Komarov, Anton; Seliverstov, Yury; Sokratov, Sergey; Glazovskaya, Tatiana; Turchaniniva, Alla
2017-04-01
The avalanche-prone area covers about 3 million square kilometers, or 18% of the total area of Russia, and poses a significant problem in most mountain regions of the country. The constant growth of economic activity, especially in the North Caucasus region, and the correspondingly increased avalanche hazard create demand for the development of large-scale avalanche risk assessment methods. Such methods are needed for determining appropriate avalanche protection measures as well as for economic assessments during all stages of spatial planning of the territory. The requirement for natural hazard risk assessments is set by the Federal Law of the Russian Federation; however, the Russian Guidelines (SP 11-103-97; SP 47.13330.2012) are not clearly specified concerning avalanche risk assessment calculations. The great size of the territory of Russia, the vast diversity of natural conditions, and large variations in the type and level of economic development of different regions cause significant variations in avalanche risk values. In the first stage of the research, a small-scale avalanche risk assessment was performed in order to identify the most common patterns of risk situations and to calculate the full social risk and individual risk. The full social avalanche risk for the territory of the country was estimated at 91 victims. Territory with individual risk values less than 1×10⁻⁶ covers more than 92% of the mountain areas of the country; within these territories, the safety of the population can be achieved mainly by organizational measures. Approximately 7% of mountain areas have individual risk values of 1×10⁻⁶ to 1×10⁻⁴ and require specific mitigation measures to protect people and infrastructure. Territory with individual risk values of 1×10⁻⁴ and above covers about 0.1% of the territory and includes the most severe and hazardous mountain areas; the whole spectrum of mitigation measures is required in order to minimize risk, and further development of such areas is not recommended.
Case studies of specific territories are performed using large-scale risk assessment methods. Thus, we discuss these problems by presenting an avalanche risk assessment approach using the example of the developing but poorly researched ski resort areas in the North Caucasus. The suggested method includes formulas to calculate collective and individual avalanche risk. The results of the risk analysis are expressed as quantitative data that can be used to determine levels of avalanche risk (acceptable, admissible, and unacceptable) and to suggest methods to decrease the individual risk to an acceptable level or better. This makes it possible to compare quantitative risk data obtained from different mountain regions, analyze them, and evaluate the economic feasibility of protection measures. At present, we are developing methods of avalanche risk assessment in economic terms, considering the cost of objects located in the avalanche-prone area, traffic density values, and the probability of financial loss.
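The individual-risk thresholds quoted above (1×10⁻⁶, 1×10⁻⁴) are annual probabilities of death. A common textbook factorization, shown here only to make the thresholds concrete; the paper's own formulas are not reproduced, and the numbers in the example are hypothetical:

```python
def individual_risk(p_event, p_presence, vulnerability):
    """Annual individual avalanche risk as the product of the annual
    event probability, the fraction of time a person is exposed, and
    the probability of death given impact.

    This is a standard illustrative factorization, not the formula
    from the paper summarized above.
    """
    return p_event * p_presence * vulnerability

# Hypothetical case: a 1-in-30-year avalanche path, 8 h/day presence,
# 50% lethality on impact.
r = individual_risk(1 / 30, 8 / 24, 0.5)
print(f"{r:.1e}")  # -> 5.6e-03, far above the 1e-4 'unacceptable' line
```

Each factor offers a mitigation lever: defense structures reduce p_event, closures reduce p_presence, and reinforced buildings reduce vulnerability.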
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, S.; Labanca, I.; Rech, I.
2014-10-15
Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis, which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA-based correlator able to perform 32 auto/cross-correlations simultaneously over a lag-time range from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high-throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds.
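The quantity such a correlator computes per channel is the normalized intensity autocorrelation g2(tau). An offline reference implementation for a photon-count trace (illustrative only; the FPGA design evaluates log-spaced lags in real time rather than post hoc):

```python
def autocorrelation(counts, lags):
    """Normalized intensity autocorrelation g2(k) of a photon-count trace.

    g2(k) = <n_i * n_{i+k}> / <n>^2, estimated over the overlapping
    part of the trace; k is the lag in units of the counting bin.
    """
    n = len(counts)
    mean = sum(counts) / n
    g2 = {}
    for k in lags:
        pairs = [counts[i] * counts[i + k] for i in range(n - k)]
        g2[k] = (sum(pairs) / len(pairs)) / mean ** 2
    return g2

# A constant trace is perfectly correlated at every lag: g2 = 1.
print(autocorrelation([3] * 100, [1, 2, 10]))  # -> {1: 1.0, 2: 1.0, 10: 1.0}
```

Afterpulsing inflates g2 at short lags on a single channel, which is why cross-correlating two SPAD channels viewing the same spot is a common workaround.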
Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations
NASA Astrophysics Data System (ADS)
Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.
2012-09-01
Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement (AMR) is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs specific software to be visualized, as generic visualization tools work on Cartesian grid data. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer that runs the RAMSES simulation, it also uses MPI and multiprocessing to run some code in parallel. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second a custom ray-tracing technique. Each has its own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI runs. The load-balancing strategy has to be smartly defined in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.
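The octree refinement that RAMSES-style codes work with can be sketched as a recursive cell split driven by a user-supplied criterion, which is what lets resolution follow the local problem complexity. A toy version (names and criterion hypothetical; real AMR refines on density or similar fields):

```python
def refine(cell, needs_refining, max_depth):
    """Recursively split a cubic cell into 8 children wherever the
    user-supplied criterion asks for it; return the leaf cells.

    A cell is (x, y, z, size): its lower corner and edge length.
    """
    x, y, z, s = cell
    if max_depth == 0 or not needs_refining(cell):
        return [cell]
    h = s / 2
    leaves = []
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                leaves += refine((x + dx, y + dy, z + dz, h),
                                 needs_refining, max_depth - 1)
    return leaves

# Refine only near the origin: resolution concentrates in the region
# of interest while the rest of the domain stays coarse.
leaves = refine((0.0, 0.0, 0.0, 1.0),
                lambda c: c[0] == 0 and c[1] == 0 and c[2] == 0,
                max_depth=3)
print(len(leaves))  # -> 22 leaves (7 + 7 + 8 across three levels)
```

A uniform grid at the same finest resolution would need 8³ = 512 cells; the octree reaches it with 22, which is the memory advantage the abstract describes.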
Advanced Avalanche Safety Equipment of Backcountry Users: Current Trends and Perceptions.
Ng, Pearlly; Smith, William R; Wheeler, Albert; McIntosh, Scott E
2015-09-01
Backcountry travelers should carry a standard set of safety gear (transceiver, shovel, and probe) to improve rescue chances and reduce mortality risk. Many backcountry enthusiasts are using other advanced equipment such as an artificial air pocket (eg, the AvaLung) or an avalanche air bag. Our goal was to determine the numbers of backcountry users carrying advanced equipment and their perceptions of mortality and morbidity benefit while carrying this gear. A convenience sample of backcountry skiers, snowboarders, snowshoers, and snowmobilers was surveyed between February and April 2014. Participants of this study were backcountry mountain users recruited at trailheads in the Wasatch and Teton mountain ranges of Utah and Wyoming, respectively. Questions included prior avalanche education, equipment carried, and perceived safety benefit derived from advanced equipment. In all, 193 surveys were collected. Skiers and snowboarders were likely to have taken an avalanche safety course, whereas snowshoers and snowmobilers were less likely to have taken a course. Most backcountry users (149, 77.2%), predominantly skiers and snowboarders, carried standard safety equipment. The AvaLung was carried more often (47 users) than an avalanche air bag (10 users). The avalanche air bag had a more favorable perceived safety benefit. A majority of participants reported cost as the barrier to obtaining advanced equipment. Standard avalanche safety practices, including taking an avalanche safety course and carrying standard equipment, remain the most common safety practices among backcountry users in the Wasatch and Tetons. Snowshoers remain an ideal target for outreach to increase avalanche awareness and safety. Copyright © 2015 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.
Natural glide slab avalanches, Glacier National Park, USA: A unique hazard and forecasting challenge
Reardon, Blase; Fagre, Daniel B.; Dundas, Mark; Lundy, Chris
2006-01-01
In a museum of avalanche phenomena, glide cracks and glide avalanches might be housed in the “strange but true” section. These oddities are uncommon in most snow climates and tend to be isolated to specific terrain features such as bedrock slabs. Many glide cracks never result in avalanches, and when they do, the wide range of time between crack formation and slab failure makes them highly unpredictable. Despite their relative rarity, glide cracks and glide avalanches pose a regular threat and complex forecasting challenge during the annual spring opening of the Going-to-the-Sun Road in Glacier National Park, U.S.A. During the 2006 season, a series of unusual glide cracks delayed snow removal operations by over a week and provided a unique opportunity to record detailed observations of glide avalanches and characterize their occurrence and associated weather conditions. Field observations were from snowpits, crown profiles and, where possible, measurements of slab thickness, bed surface slope angle, substrate, and other physical characteristics. Weather data were recorded at one SNOTEL site and two automated stations located within 0.6-10 km of observed glide slab avalanches. Nearly half (43%) of the 35 glide slab avalanches recorded were Class D2-2.5, with 15% Class D3-D3.5. The time between glide crack opening and failure ranged from 2 days to over six weeks, and the avalanches occurred in cycles associated with loss of snow water equivalent and spikes in temperature and radiation. We conclude with suggestions for further study.
NASA Astrophysics Data System (ADS)
Flores-Marquez, L.; Suriñach-Cornet, E., Sr.
2017-12-01
Seismic signals generated by snow avalanches and other mass movements are analyzed in their spectrogram representation. The spectrogram displays the evolution in time of the frequency content of a signal. The spectrogram of the seismic signal at a station approached by a sliding mass, such as a snow avalanche, exhibits a triangular time/frequency signature. This increase in higher-frequency content over time is a consequence of the attenuation of the waves propagating in the medium. Recognition of characteristic footprints in a spectrogram could help to identify and characterize diverse mass movement events such as landslides or snow avalanches. In order to recognize spectrogram features of seismic signals of Alpine snow avalanches, we propose an algorithm based on the Hough transform. The proposed algorithm is applied to an edge-representation image of the seismic spectrogram, obtained after applying a threshold filter to the spectrogram, which enhances the most interesting frequencies of the seismogram appearing over time. This enables us to identify parameters (slopes) that correspond to the speeds associated with the types of snow avalanches, such as powder, dense, or transitional avalanches. The data analyzed in this work correspond to twenty different seismic signals generated by snow avalanches artificially released at the experimental site of Vallée de la Sionne (VDLS, SLF, Switzerland). The shapes of the signal spectrograms are linked to the flow regimes previously identified. Our findings show that certain ranges of speeds are inherent to the type of avalanche.
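A standard Hough transform, as named in the abstract, accumulates votes over (theta, rho) line parameters for each edge pixel; the winning bin's slope is what gets mapped to a front speed. A minimal sketch on a synthetic edge image (parameter choices are illustrative, not the authors'):

```python
import math

def hough_lines(edge_pixels, n_theta=180, rho_res=1.0, max_rho=200):
    """Accumulate votes in (theta, rho) space for a set of edge pixels.

    Each pixel (x, y) votes for every line x*cos(t) + y*sin(t) = rho
    passing through it; collinear pixels pile up in one accumulator
    bin, whose (theta, rho) indices are returned.
    """
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = {}
    for x, y in edge_pixels:
        for it in range(n_theta):
            t = math.pi * it / n_theta
            rho = x * math.cos(t) + y * math.sin(t)
            ir = int(round((rho + max_rho) / rho_res))
            if 0 <= ir < n_rho:
                acc[(it, ir)] = acc.get((it, ir), 0) + 1
    return max(acc, key=acc.get)

# Pixels on the line y = x vote most strongly at theta = 135 degrees
# (the line's normal direction) and rho = 0.
it, ir = hough_lines([(i, i) for i in range(50)])
print(it, ir)  # -> 135 200 (theta bin 135 of 180; rho bin 200 = rho 0)
```

On a spectrogram edge image, x is the time bin and y the frequency bin, so the detected theta translates directly into the frequency-growth slope that the authors relate to avalanche speed.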
The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations
Mitchell, William F.
1998-01-01
Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355
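The refinement-tree partition exploits the fact that leaves taken in tree-traversal order are spatially close, so cutting that ordered sequence into equal-weight chunks yields compact subdomains. A sketch of the 1-D cutting step only (the tree traversal itself, and the paper's exact algorithm, are not reproduced):

```python
def partition_leaves(leaf_weights, n_parts):
    """Cut a leaf sequence (assumed to be in refinement-tree traversal
    order) into n_parts contiguous chunks of roughly equal total weight.

    Contiguity in traversal order is what keeps each part spatially
    connected on the adaptive grid.
    """
    total = sum(leaf_weights)
    target = total / n_parts
    parts, current, acc = [], [], 0.0
    for i, w in enumerate(leaf_weights):
        current.append(i)
        acc += w
        if acc >= target * (len(parts) + 1) and len(parts) < n_parts - 1:
            parts.append(current)
            current = []
    parts.append(current)
    return parts

# Eight unit-weight leaves over four processors: two leaves each.
print(partition_leaves([1] * 8, 4))  # -> [[0, 1], [2, 3], [4, 5], [6, 7]]
```

When the grid is adaptively refined, leaf weights change and the same cut is recomputed, which is the periodic repartitioning the abstract describes.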
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
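The inexact Newton method mentioned above accepts a Newton step whose linear system is solved only to a loose "forcing term" tolerance, trading per-step accuracy for cheaper linear solves. A 1-D toy sketch, where the loose solve is mimicked by damping the exact step (illustrative only; the simulator pairs this with multigrid-preconditioned iterative solvers on huge systems):

```python
def inexact_newton(f, fprime, x0, eta=0.5, tol=1e-10, max_iter=50):
    """Newton iteration with loosely solved Newton systems.

    An iterative linear solver stopping at relative residual eta
    leaves an error of order eta in the step; in 1-D we model that by
    scaling the exact step by (1 - eta). Convergence degrades from
    quadratic to linear but each iteration gets much cheaper.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x += -(1.0 - eta) * fx / fprime(x)
    return x

# sqrt(2) via f(x) = x^2 - 2; converges despite the inexact solves.
root = inexact_newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(abs(root - 2 ** 0.5) < 1e-6)  # -> True
```

In the simulator, eta can additionally be tightened as the residual shrinks, recovering fast local convergence near the solution.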
Robust snow avalanche detection using machine learning on infrasonic array data
NASA Astrophysics Data System (ADS)
Thüring, Thomas; Schoch, Marcel; van Herwijnen, Alec; Schweizer, Jürg
2014-05-01
Snow avalanches may threaten people and infrastructure in mountain areas. Automated detection of avalanche activity would be highly desirable, in particular during times of poor visibility, to improve hazard assessment, but also to monitor the effectiveness of avalanche control by explosives. In the past, a variety of remote sensing techniques and instruments for the automated detection of avalanche activity have been reported, based on radio waves (radar), seismic signals (geophones), optical signals (imaging sensors), or infrasonic signals (microphones). Optical imagery makes it possible to assess avalanche activity with very high spatial resolution, but it is strongly weather dependent. Radar- and geophone-based detection typically provides robust avalanche detection in all weather conditions, but is very limited in the size of the monitoring area. On the other hand, due to the long propagation distance of infrasound through air, the monitoring area of infrasonic sensors can cover a large territory using a single sensor (or an array). In addition, they are by far more cost-effective than radars or optical imaging systems. Unfortunately, the reliability of infrasonic sensor systems has so far been rather low due to the strong variation of ambient noise (e.g. wind), causing a high false-alarm rate. We analyzed the data collected by a low-cost infrasonic array system consisting of four sensors for the automated detection of avalanche activity at Lavin in the eastern Swiss Alps. A comparatively large array aperture (~350 m) allows highly accurate time-delay estimation for signals which arrive at the sensors at different times, enabling precise source localization. An array of four sensors is sufficient for time-resolved source localization of signals in full 3D space, which is an excellent way to identify true avalanche activity. Robust avalanche detection is then achieved by using machine learning methods such as support vector machines.
The system is initially trained using characteristic data features from known avalanche and non-avalanche events. Data features are obtained from the output signals of the source localization algorithm or from Fourier- or time-domain processing, and support the learning phase of the system. A significantly improved detection rate as well as a reduction of the false alarm rate was achieved compared to previous approaches.
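The final classification stage named above is a support vector machine. A minimal linear SVM trained by stochastic sub-gradient descent on the hinge loss (the Pegasos scheme); the data, features, and hyperparameters below are all hypothetical stand-ins for the infrasound features described in the abstract:

```python
import random

def train_linear_svm(samples, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM (no bias term) via the Pegasos update:
    shrink w by (1 - eta*lam) each step and, on a margin violation
    (y * w.x < 1), add eta * y * x with step size eta = 1 / (lam * t).
    """
    rng = random.Random(seed)
    dim = len(samples[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(samples)), len(samples)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = labels[i] * sum(wj * xj for wj, xj in zip(w, samples[i]))
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * labels[i] * xj for wj, xj in zip(w, samples[i])]
    return w

# Separable toy data: avalanche events (+1) have larger feature values.
xs = [(2.0, 2.0), (2.5, 1.5), (-2.0, -2.0), (-1.5, -2.5)]
ys = [1, 1, -1, -1]
w = train_linear_svm(xs, ys)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else -1 for x in xs]
print(preds == ys)  # -> True
```

In practice a kernel SVM from an established library would be used on real labeled event features; this sketch only shows the hinge-loss principle behind the classifier.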
Transient events in bright debris discs: Collisional avalanches revisited
NASA Astrophysics Data System (ADS)
Thebault, P.; Kral, Q.
2018-01-01
Context. A collisional avalanche is set off by the breakup of a large planetesimal, releasing vast amounts of small unbound grains that enter a debris disc located further away from the star, triggering there a collisional chain reaction that could potentially create detectable transient structures. Aims: We investigate this mechanism using, for the first time, a fully self-consistent code coupling dynamical and collisional evolution. We also quantify, for the first time, the photometric evolution of the system and investigate whether or not avalanches could explain the short-term luminosity variations recently observed in some extremely bright debris discs. Methods: We use the state-of-the-art LIDT-DD code. We consider an avalanche-favoring A6V star and two set-ups: a "cold disc" case, with a dust release at 10 au and an outer disc extending from 50 to 120 au, and a "warm disc" case, with the release at 1 au and a 5-12 au outer disc. We explore, in addition, two key parameters: the density (parameterized by its optical depth τ) of the main outer disc and the amount of dust released by the initial breakup. Results: We find that avalanches could leave detectable structures on resolved images, for both the "cold" and "warm" disc cases, in discs with τ of a few 10⁻³, provided that large dust masses (≳10²⁰-5 × 10²² g) are initially released. The integrated photometric excess due to an avalanche is relatively limited, less than 10% for these released dust masses, peaking in the λ = 10-20 μm domain and becoming insignificant beyond 40-50 μm. Contrary to earlier studies, we do not obtain stronger avalanches when increasing τ to higher values. Likewise, we do not observe a significant luminosity deficit, as compared to the pre-avalanche level, after the passage of the avalanche. Together, these two results make avalanches an unlikely explanation for the sharp luminosity drops observed in some extremely bright debris discs. 
The ideal configuration for observing an avalanche would be a two-belt structure, with an inner belt (at 1 or 10 au for the "warm" and "cold" disc cases, respectively) of fractional luminosity f ≳ 10^-4 where breakups of massive planetesimals occur, and a more massive outer belt, with τ of a few 10^-3, into which the avalanche chain reaction develops and propagates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wronski, M.; Zhao, W.; Tanioka, K.
Purpose: The authors are investigating the feasibility of a new type of solid-state x-ray imaging sensor with programmable avalanche gain: scintillator high-gain avalanche rushing photoconductor active matrix flat panel imager (SHARP-AMFPI). The purpose of the present work is to investigate the inherent x-ray detection properties of SHARP and demonstrate its wide dynamic range through programmable gain. Methods: A distributed resistive layer (DRL) was developed to maintain stable avalanche gain operation in a solid-state HARP. The signal and noise properties of the HARP-DRL for optical photon detection were investigated as a function of avalanche gain both theoretically and experimentally, and the results were compared with the HARP tube (with electron beam readout) used in previous investigations of the zero spatial frequency performance of SHARP. For this new investigation, a solid-state SHARP x-ray image sensor was formed by direct optical coupling of the HARP-DRL with a structured cesium iodide (CsI) scintillator. The x-ray sensitivity of this sensor was measured as a function of avalanche gain and the results were compared with the sensitivity of HARP-DRL measured optically. The dynamic range of HARP-DRL with variable avalanche gain was investigated for the entire exposure range encountered in radiography/fluoroscopy (R/F) applications. Results: The signal from HARP-DRL as a function of electric field showed stable avalanche gain, and the noise associated with the avalanche process agrees well with theory and previous measurements from a HARP tube. This result indicates that when coupled with CsI for x-ray detection, the additional noise associated with avalanche gain in HARP-DRL is negligible. The x-ray sensitivity measurements using the SHARP sensor produced identical avalanche gain dependence on electric field as the optical measurements with HARP-DRL.
Adjusting the avalanche multiplication gain in HARP-DRL enabled a very wide dynamic range which encompassed all clinically relevant medical x-ray exposures. Conclusions: This work demonstrates that the HARP-DRL sensor enables the practical implementation of a SHARP solid-state x-ray sensor capable of quantum noise limited operation throughout the entire range of clinically relevant x-ray exposures. This is an important step toward the realization of a SHARP-AMFPI x-ray flat-panel imager.
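The abstract reports avalanche gain rising steeply with applied field up to multiplication factors of several thousand, but does not state a gain-voltage law. A common empirical description for avalanche devices is Miller's formula; the sketch below uses an illustrative breakdown voltage and exponent, not HARP-DRL's measured values.

```python
def miller_gain(v_reverse, v_breakdown, n=3.0):
    """Miller's empirical avalanche multiplication gain:

        M = 1 / (1 - (V / V_b)^n),  valid for 0 <= V < V_b,

    where V_b is the breakdown voltage and the exponent n
    (typically 2-6) is material and structure dependent."""
    if not 0.0 <= v_reverse < v_breakdown:
        raise ValueError("bias must be non-negative and below breakdown")
    return 1.0 / (1.0 - (v_reverse / v_breakdown) ** n)

# Gain rises steeply as the bias approaches breakdown (V_b = 100 V assumed):
for v in (10.0, 50.0, 90.0, 99.0):
    print(v, round(miller_gain(v, 100.0), 1))
```

The steep rise near breakdown is what makes the gain "programmable": small bias adjustments change the multiplication factor by orders of magnitude.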
Avalanche multiplication in AlGaN-based heterostructures for the ultraviolet spectral range
NASA Astrophysics Data System (ADS)
Hahn, L.; Fuchs, F.; Kirste, L.; Driad, R.; Rutz, F.; Passow, T.; Köhler, K.; Rehm, R.; Ambacher, O.
2018-04-01
AlxGa1-xN based avalanche photodiodes grown on sapphire substrate with Al contents of x = 0.65 and x = 0.60 have been examined under back- and frontside illumination with respect to their avalanche gain properties. The photodetectors, suitable for the solar-blind ultraviolet spectral regime, show avalanche gain for voltages in excess of 30 V reverse bias in the linear gain mode. Devices with a mesa diameter of 100 μm exhibit stable avalanche gain below the breakdown threshold voltage, exceeding a multiplication gain of 5500 at 84 V reverse bias. A dark current below 1 pA is found for reverse voltages up to 60 V.
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2003-07-01
During the ESCOMPTE precampaign (15 June to 10 July 2000), three days of intensive pollution (IOP0) were observed and simulated. The comprehensive RAMS model, version 4.3, coupled online with a chemical module including 29 species, has been used to follow the chemistry of the polluted zone over southern France. This online method can be used because the code is parallelized and the SGI 3800 computer is very powerful. Two runs have been performed: run 1 with one grid and run 2 with two nested grids. The redistribution of simulated chemical species (ozone, carbon monoxide, sulphur dioxide and nitrogen oxides) was compared with aircraft measurements and surface stations. The 2-grid run gave substantially better results than the one-grid run, chiefly because the former takes the outer pollutants into account. This online method helps to explain the dynamics and to retrieve the chemical species redistribution in good agreement with observations.
Simulation of ozone production in a complex circulation region using nested grids
NASA Astrophysics Data System (ADS)
Taghavi, M.; Cautenet, S.; Foret, G.
2004-06-01
During the ESCOMPTE precampaign (summer 2000, over Southern France), a 3-day period of intensive observation (IOP0), associated with ozone peaks, has been simulated. The comprehensive RAMS model, version 4.3, coupled on-line with a chemical module including 29 species, is used to follow the chemistry of the polluted zone. This efficient but time-consuming method can be used because the code is installed on a parallel computer, the SGI 3800. Two runs are performed: run 1 with a single grid and run 2 with two nested grids. The simulated fields of ozone, carbon monoxide, nitrogen oxides and sulfur dioxide are compared with aircraft and surface station measurements. The 2-grid run performs substantially better than the run with one grid because the former takes the outer pollutants into account. This on-line method helps to satisfactorily retrieve the chemical species redistribution and to explain the impact of dynamics on this redistribution.
Grid of Supergiant B[e] Models from HDUST Radiative Transfer
NASA Astrophysics Data System (ADS)
Domiciano de Souza, A.; Carciofi, A. C.
2012-12-01
By using the Monte Carlo radiative transfer code HDUST (developed by A. C. Carciofi and J. E. Bjorkman) we have built a grid of models for stars presenting the B[e] phenomenon and a bimodal outflowing envelope. The models are particularly adapted to the study of B[e] supergiants and FS CMa type stars. The adopted physical parameters of the calculated models make the grid well adapted to interpreting high angular resolution and high spectral resolution observations, in particular spectro-interferometric data from the ESO-VLTI instruments AMBER (near-IR at low and medium spectral resolution) and MIDI (mid-IR at low spectral resolution). The grid models include, for example, a central B star with different effective temperatures and a gas (hydrogen) and silicate dust circumstellar envelope with a bimodal mass loss, presenting dust in the denser equatorial regions. The HDUST grid models were pre-calculated using the high performance parallel computing facility Mésocentre SIGAMM, located at OCA, France.
Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2011-01-01
An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
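The refinement sensor described above, the undivided second difference of a flow variable, can be sketched in one dimension as follows. This is an illustration of the sensor idea only, not the OVERFLOW implementation, and the threshold value is arbitrary.

```python
import numpy as np

def refine_flags(q, threshold):
    """Flag cells whose undivided second difference of a flow
    quantity q exceeds a threshold.  "Undivided" means the raw
    stencil value |q[i-1] - 2 q[i] + q[i+1]| with no division by
    dx**2, so coarse cells with sharp variation are flagged first."""
    d2 = np.abs(q[:-2] - 2.0 * q[1:-1] + q[2:])
    flags = np.zeros(q.shape, dtype=bool)
    flags[1:-1] = d2 > threshold    # endpoints have no full stencil
    return flags

# A sharp layer (mimicking a vortex or shock) centered at x = 0.5:
x = np.linspace(0.0, 1.0, 101)
q = np.tanh((x - 0.5) / 0.02)
flags = refine_flags(q, 0.05)
print(flags.sum(), "cells flagged near x = 0.5")
```

Cells flagged by such a sensor would then be covered by the next finer Cartesian grid level, concentrating resolution where the solution varies rapidly.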
Ostermann, Marc; Sanders, Diethard; Ivy-Ochs, Susan; Alfimov, Vasily; Rockenschaub, Manfred; Römer, Alexander
2012-01-01
In the Obernberg valley, the Eastern Alps, landforms recently interpreted as moraines are re-interpreted as rock avalanche deposits. The catastrophic slope failure involved an initial rock volume of about 45 million m³, with a runout of 7.2 km over a total vertical distance of 1330 m (fahrböschung 10°). 36Cl surface-exposure dating of boulders of the avalanche mass indicates an event age of 8.6 ± 0.6 ka. A 14C age of 7785 ± 190 cal yr BP of a palaeosoil within an alluvial fan downlapping the rock avalanche is consistent with the event age. The distal 2 km of the rock-avalanche deposit is characterized by a highly regular array of transverse ridges that were previously interpreted as terminal moraines of Late-Glacial age. 'Jigsaw-puzzle structure' of gravel- to boulder-size clasts in the ridges and a matrix of cataclastic gouge indicate a rock avalanche origin. The avalanche deposit is preserved over a wide altitude range, and the event age of mass-wasting precludes both runout over glacial ice and subsequent glacial overprint. The regularly arrayed transverse ridges thus were formed during freezing of the rock avalanche deposits. PMID:24966447
Neumann, M; Herten, D P; Dietrich, A; Wolfrum, J; Sauer, M
2000-02-25
The first capillary array scanner for time-resolved fluorescence detection in parallel capillary electrophoresis based on semiconductor technology is described. The system consists essentially of a confocal fluorescence microscope and an x,y microscope scanning stage. Fluorescence of the labelled probe molecules was excited using a short-pulse diode laser emitting at 640 nm with a repetition rate of 50 MHz. Using a single filter system, the fluorescence decays of different labels were detected by an avalanche photodiode in combination with a PC plug-in card for time-correlated single-photon counting (TCSPC). The time-resolved fluorescence signals were analyzed and identified by a maximum likelihood estimator (MLE). The x,y microscope scanning stage allows for discontinuous, bidirectional scanning of up to 16 capillaries in an array, resulting in longer fluorescence collection times per capillary compared to scanners working in a continuous mode. Synchronization of the alignment and measurement processes was developed to allow for data acquisition without overhead. Detection limits in the subzeptomol range for different dye molecules separated in parallel capillaries have been achieved. In addition, we report on parallel time-resolved detection and separation of more than 400 bases of single base extension DNA fragments in capillary array electrophoresis. Using only semiconductor technology, the presented technique represents a low-cost alternative for high-throughput DNA sequencing in parallel capillaries.
Avalanche risk assessment - a multi-temporal approach, results from Galtür, Austria
NASA Astrophysics Data System (ADS)
Keiler, M.; Sailer, R.; Jörg, P.; Weber, C.; Fuchs, S.; Zischg, A.; Sauermoser, S.
2006-07-01
Snow avalanches pose a threat to settlements and infrastructure in alpine environments. Due to the catastrophic events in recent years, the public is more aware of this phenomenon. Alpine settlements have always been confronted with natural hazards, but changes in land use and in dealing with avalanche hazards have led to an altered perception of this threat. In this study, a multi-temporal risk assessment is presented for three avalanche tracks in the municipality of Galtür, Austria. Changes in avalanche risk as well as changes in the risk-influencing factors (process behaviour, values at risk (buildings) and vulnerability) between 1950 and 2000 are quantified. An additional focus is placed on the interconnection between these factors and their influence on the resulting risk. The avalanche processes were calculated using different simulation models (SAMOS as well as ELBA+). For each avalanche track, different scenarios were calculated according to the development of mitigation measures. The focus of the study was on a multi-temporal risk assessment; consequently the models used could be replaced by other snow avalanche models providing the same functionality. The monetary values of buildings were estimated using the volume of the buildings and average prices per cubic meter. The changing size of the buildings over time was inferred from construction plans. The vulnerability of the buildings is understood as a degree of loss to a given element within the area affected by natural hazards. A vulnerability function for different construction types of buildings that depends on avalanche pressure was used to assess the degree of loss. No general risk trend could be determined for the studied avalanche tracks. Due to the high complexity of the variations in risk, small changes in one of several influencing factors can cause considerable differences in the resulting risk.
This multi-temporal approach leads to a better understanding of today's risk by identifying the main changes and the underlying processes. Furthermore, this knowledge can be implemented in strategies for sustainable development in Alpine settlements.
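The risk concept used in the study, combining event probability, monetary value at risk, and vulnerability as a degree of loss, can be written as a minimal sketch. The function name and the numbers below are illustrative, and the study's actual formulation may carry additional terms.

```python
def annual_risk(p_event, building_value, vulnerability):
    """Expected annual loss for one object at risk:

        R = p * A * v

    with avalanche event probability p (per year), monetary value A
    of the building, and vulnerability v in [0, 1] understood as the
    degree of loss at the acting avalanche pressure."""
    if not 0.0 <= vulnerability <= 1.0:
        raise ValueError("vulnerability is a degree of loss in [0, 1]")
    return p_event * building_value * vulnerability

# A building worth 500 000 (volume times price per cubic meter), a
# 1-in-100-year avalanche, 30 % expected damage at the given pressure:
print(annual_risk(0.01, 500_000.0, 0.3))
```

Evaluating this at several points in time, with the value, vulnerability, and scenario probabilities updated for each epoch, is the essence of a multi-temporal risk assessment.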
A debris avalanche at Forest Falls, San Bernardino County, California, July 11, 1999
Morton, Douglas M.; Hauser, Rachel M.
2001-01-01
This publication consists of the online version of a CD-ROM publication, U.S. Geological Survey Open-File Report 01-146. The data for this publication total 557 MB on the CD-ROM. For speed of transfer, the main PDF document has been compressed (with a subsequent loss of image quality) from 145 to 18.1 MB. The community of Forest Falls, California, is frequently subject to relatively slow-moving debris flows. Some 11 debris flow events that were destructive to property were recorded between 1955 and 1998. On July 11 and 13, 1999, debris flows again occurred, produced by high-intensity, short-duration monsoon rains. Unlike previous debris flow events, the July 11 rainfall generated a high-velocity debris avalanche in Snow Creek, one of the several creeks crossing the composite, debris-flow-dominated alluvial fan on which Forest Falls is located. This debris avalanche overshot the bank of the active debris flow channel of Snow Creek, destroying property in the near vicinity and taking a life. The minimum velocity of this avalanche is calculated to have been in the range of 40 to 55 miles per hour. Impact from high-velocity boulders removed trees where the avalanche overshot the channel bank. Further down the fan, the rapidly moving debris fragmented the outer parts of the upslope side of large pine trees and embedded rock fragments into the tree trunks. Unlike the characteristic deposits formed by debris flows, the avalanche spread out down-slope and left no deposit suggestive of a debris avalanche. This summer monsoon-generated debris avalanche is apparently the first recorded for Forest Falls. The best indications of past debris avalanches may be the permanent scars produced by extensive abrasion and splintering of the outer parts of pine trees that were in the path of an avalanche.
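The report's 40-55 mph minimum-velocity figure is consistent with a standard energy-balance estimate from runup height, v = sqrt(2 g h), although the report does not state which method was actually used; the sketch below is an illustration of that estimate only.

```python
import math

G = 9.81               # gravitational acceleration, m/s^2
MPH_PER_MS = 2.23694   # mph per (m/s)

def runup_velocity_mph(runup_height_m):
    """Minimum flow speed needed to overshoot a bank (or run up an
    obstacle) of the given height, from energy conservation:
    (1/2) v^2 = g h  =>  v = sqrt(2 g h).  Friction is neglected,
    so this is a lower bound on the true speed."""
    return math.sqrt(2.0 * G * runup_height_m) * MPH_PER_MS

# Runup heights of roughly 16-31 m bracket the reported 40-55 mph range:
for h in (16.0, 31.0):
    print(f"{h:4.1f} m -> {runup_velocity_mph(h):4.1f} mph")
```

Because friction and momentum losses are ignored, field workers treat such values as minimum velocities, which matches the report's wording.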
Are dragon-king neuronal avalanches dungeons for self-organized brain activity?
NASA Astrophysics Data System (ADS)
de Arcangelis, L.
2012-05-01
Recent experiments have detected a novel form of spontaneous neuronal activity both in vitro and in vivo: neuronal avalanches. The statistical properties of this activity are typical of critical phenomena, with power laws characterizing the distributions of avalanche size and duration. A critical behaviour for the spontaneous brain activity has important consequences on stimulated activity and learning. Very interestingly, these statistical properties can be altered in significant ways in epilepsy and by pharmacological manipulations. In particular, there can be an increase in the number of large events beyond those anticipated by the power law, referred to herein as dragon-king avalanches. This behaviour, as verified by numerical models, can originate from a number of different mechanisms. For instance, it is observed experimentally that the emergence of a critical behaviour depends on the subtle balance between excitatory and inhibitory mechanisms acting in the system. Perturbing this balance, by increasing either synaptic excitation or the incidence of depolarized neuronal up-states, causes frequent dragon-king avalanches. Conversely, an unbalanced GABAergic inhibition or long periods of low activity in the network give rise to sub-critical behaviour. Moreover, the existence of power laws, common to other stochastic processes, like earthquakes or solar flares, suggests that correlations are relevant in these phenomena. The dragon-king avalanches may then also be the expression of pathological correlations leading to frequent avalanches encompassing all neurons. We will review the statistics of neuronal avalanches in experimental systems. We then present numerical simulations of a neuronal network model introducing, within the self-organized criticality framework, ingredients from the physiology of real neurons, such as the refractory period, synaptic plasticity and inhibitory synapses.
The avalanche critical behaviour and the role of dragon-king avalanches will be discussed in relation to different drives, neuronal states and microscopic mechanisms of charge storage and release in neuronal networks.
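The power-law size distributions discussed above are conventionally quantified with a maximum-likelihood exponent estimator. The sketch below uses the standard continuous-tail form of the estimator on synthetic avalanche sizes, not this paper's exact fitting pipeline.

```python
import math
import random

def powerlaw_alpha(sizes, s_min=1.0):
    """Maximum-likelihood exponent for a continuous power-law tail
    P(s) ~ s^-alpha above s_min:

        alpha = 1 + n / sum(ln(s_i / s_min))

    (the standard continuous-form estimator; real avalanche data are
    discrete and often need the discrete variant and a tail cutoff)."""
    tail = [s for s in sizes if s >= s_min]
    if not tail:
        raise ValueError("no data above s_min")
    return 1.0 + len(tail) / sum(math.log(s / s_min) for s in tail)

# Synthetic avalanche sizes drawn with true alpha = 1.5, by inverse-CDF
# sampling: s = u^(-1/(alpha-1)) for uniform u in (0, 1]:
rng = random.Random(42)
samples = [(1.0 - rng.random()) ** (-1.0 / 0.5) for _ in range(20000)]
print(round(powerlaw_alpha(samples), 2))  # close to the true alpha = 1.5
```

A shallower fitted exponent corresponds to a heavier tail of large avalanches, which is how pharmacological shifts of the size distribution are typically reported.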
Extracting functionally feedforward networks from a population of spiking neurons
Vincent, Kathleen; Tauskela, Joseph S.; Thivierge, Jean-Philippe
2012-01-01
Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABAA receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. 
Conversely, FFCs decreased after application of APV/DNQX, consistent with fading feedforward activation. The observed alterations in FFCs provide experimental support for recent theoretical work linking power-law avalanches to the feedforward organization of functional connections in local neuronal circuits. PMID:23091458
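The Schur-decomposition step can be sketched as follows: the strictly triangular part of the Schur form of a connectivity matrix captures its feedforward (non-normal) component, while the diagonal carries the recurrent part. The normalization below is an assumption for illustration, not necessarily the paper's exact measure; `scipy.linalg.schur` performs the decomposition.

```python
import numpy as np
from scipy.linalg import schur

def ffc_strength(W):
    """Relative strength of functionally feedforward structure in a
    weighted connectivity matrix W.  From the real Schur form
    W = Q T Q^T, the strictly upper-triangular part of T is the
    feedforward component; the ratio of its Frobenius norm to that
    of T is 1 for a pure feedforward chain and drops as recurrence
    (nonzero eigenvalues) grows."""
    T, Q = schur(W)
    feedforward = np.triu(T, k=1)
    return float(np.linalg.norm(feedforward) / (np.linalg.norm(T) + 1e-12))

rng = np.random.default_rng(0)
n = 20
chain = np.diag(np.ones(n - 1), k=1)                    # strictly feedforward
noisy = chain + 0.5 * rng.standard_normal((n, n)) / n   # partially rewired
print(round(ffc_strength(chain), 3), round(ffc_strength(noisy), 3))
```

This mirrors the paper's control experiment: gradually rewiring a strictly feedforward network at random should make the measured FFC strength fall monotonically.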
Conditions for Triggering Avalanches in Mn12-acetate.
NASA Astrophysics Data System (ADS)
Suzuki, Yoko; McHugh, S.; Jaafar, R.; Sarachik, M. P.; Myasoedov, Y.; Shtrikman, H.; Zeldov, E.; Bagai, R.; Chakov, N. E.; Christou, G.
2007-03-01
Recent measurements in Mn12-acetate have shown that magnetic avalanches (corresponding to fast magnetization reversal) propagate as a narrow front with a velocity that is roughly two orders of magnitude smaller than the speed of sound. This phenomenon is closely analogous to the propagation of a flame front through a flammable chemical substance (deflagration) [1]. The conditions for nucleation of avalanches triggered in response to a time-varying (swept) magnetic field were studied for different fields and temperatures. In these crystals, avalanches happened only at low temperatures and were found to occur stochastically at fields ranging from 1.0 T to 4.5 T. There is no apparent structure in the distribution of avalanches for fields below 3.5 T; at higher fields we find evidence that the probability is lower at "nonresonant" magnetic fields where tunneling across the anisotropy barrier is suppressed. This provides evidence that lowering the barrier by quantum mechanical tunneling facilitates the ignition of avalanches. Based on these and other measurements, we suggest that avalanches are triggered below 3.5 T by defects with lower energy barriers. [1] Y. Suzuki, et al., Phys. Rev. Lett. 95, 147201 (2005).
NASA Astrophysics Data System (ADS)
Leiva, J. C.; Casteller, A.; Martínez, H. H.; Norte, F. A.; Simonelli, S. C.
2010-03-01
Snow avalanches commonly threaten people and infrastructure in mountainous areas worldwide. Winter precipitation events in the Central Andes are caused by the interaction of the atmospheric general circulation and the steep orography. Almost every winter season, snow storms and winds cause the blockage of routes and lead to the snowpack conditions that generate avalanche events. The amount of winter snow accumulation is highly variable and is one of the most important factors for assessing the impacts of climate change, not only on water availability but also for planning future mitigation measures to reduce the avalanche hazard. The authors have conducted studies on snow avalanches that regularly affect the international route linking Mendoza (Argentina) with Santiago de Chile (Chile), but none of them was done at the Aconcagua Provincial Park. The park is near this route, about 13 km east of the international border, which in this sector of the Andes coincides with the continental divide. On the night of 17 August 2009, seven people were caught by an avalanche that hit the Aconcagua Park rangers' refuge (32° 48' 40'' S, 69° 56' 33'' W; 2950 masl). This paper describes the meteorological and snow precipitation conditions that gave rise to the event. On 14 August, the synoptic surface and upper-air conditions from NCEP reanalysis were those associated with a severe Zonda wind occurrence in the region, that is: a 500 hPa level trough, a deep low-pressure surface system located over the Pacific Ocean close to the Chilean coast, approximately over 48° S and 80° W, and a jet stream at middle upper-air levels. The avalanche event occurred during a new and very heavy snowfall, a little more than two days after these extreme episodes. The topographical characteristics of the avalanche path, the snow storm intensity and the snow accumulation on the avalanche starting zone allowed the authors to simulate the avalanche flow.
Snow storm intensity and snow accumulation data from the Los Penitentes ski resort (about 10 km east of the Park entrance) were used as input data for the avalanche modeling. However, an additional snow mass was considered because the starting zone is on a leeward slope. Vertical aerial photographs (1974), topographic profiles, a DEM generated from ASTER images and the snow accumulation data enabled the authors to simulate the avalanche flow using a two-dimensional and a three-dimensional avalanche dynamics model. Our results indicate that the studied avalanche event resulted from two main factors. Firstly, prior to the studied event, the snowpack had gone through several cycles of high and low temperatures, producing a highly metamorphosed snowpack that facilitated the slide of the new snow. Secondly, the high intensity of the new snow precipitation did not allow it to settle well. This study is the first step towards an avalanche hazard map of Aconcagua Park and will serve as a basis for advising the Park authorities in regards to the location of a new refuge and the necessary building structure requirements to be fulfilled.
Fast Whole-Engine Stirling Analysis
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2006-01-01
This presentation discusses the simulation approach to whole-engine modeling for physical consistency, REV regenerator modeling, grid layering for smoothness and quality, conjugate heat transfer method adjustment, a high-speed, low-cost parallel cluster, and debugging.
High-power microwave generation using optically activated semiconductor switches
NASA Astrophysics Data System (ADS)
Nunnally, William C.
1990-12-01
The two prominent types of optically controlled switches, the optically controlled linear (OCL) switch and the optically initiated avalanche (OIA) switch, are described, and their operating parameters are characterized. Two transmission line approaches, one using a frozen-wave generator and the other using an injected-wave generator, for generation of multiple cycles of high-power microwave energy using optically controlled switches are discussed. The point design performances of the series-switch, frozen-wave generator and the parallel-switch, injected-wave generator are compared. The operating and performance limitations of the optically controlled switch types are discussed, and additional research needed to advance the development of the optically controlled, bulk, semiconductor switches is indicated.
New views of granular mass flows
Iverson, R.M.; Vallance, J.W.
2001-01-01
Concentrated grain-fluid mixtures in rock avalanches, debris flows, and pyroclastic flows do not behave as simple materials with fixed rheologies. Instead, rheology evolves as mixture agitation, grain concentration, and fluid-pressure change during flow initiation, transit, and deposition. Throughout a flow, however, normal forces on planes parallel to the free upper surface approximately balance the weight of the superincumbent mixture, and the Coulomb friction rule describes bulk intergranular shear stresses on such planes. Pore-fluid pressure can temporarily or locally enhance mixture mobility by reducing Coulomb friction and transferring shear stress to the fluid phase. Initial conditions, boundary conditions, and grain comminution and sorting can influence pore-fluid pressures and cause variations in flow dynamics and deposits.
Two-threshold model for scaling laws of noninteracting snow avalanches
Faillettaz, J.; Louchet, F.; Grasso, J.-R.
2004-01-01
A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches, just preceding full failure of the lattice system, were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling for gravity-driven systems.
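A schematic analogue of such a two-threshold cellular automaton is sketched below. It is not the paper's exact rule set: here each cell simply has its own failure threshold drawn between a lower and an upper value, and topples by shedding load to its four neighbours, with open boundaries so avalanches eventually stop.

```python
import random

def avalanche_sizes(n=30, steps=3000, t_low=4.0, t_high=8.0, seed=1):
    """Toy two-threshold 2D automaton.  A cell fails when its load
    exceeds its own threshold, drawn uniformly between t_low and
    t_high, and sheds one unit to each of its 4 neighbours; load
    falling off the open boundary is lost, which guarantees each
    avalanche terminates.  Returns the list of avalanche sizes
    (number of topplings per triggering event)."""
    rng = random.Random(seed)
    load = [[0.0] * n for _ in range(n)]
    thr = [[rng.uniform(t_low, t_high) for _ in range(n)] for _ in range(n)]
    sizes = []
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        load[i][j] += 1.0                      # slow external driving
        unstable, size = [(i, j)], 0
        while unstable:
            x, y = unstable.pop()
            if load[x][y] <= thr[x][y]:
                continue
            load[x][y] -= 4.0                  # topple
            size += 1
            if load[x][y] > thr[x][y]:
                unstable.append((x, y))        # may topple again
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:
                    load[u][v] += 1.0
                    if load[u][v] > thr[u][v]:
                        unstable.append((u, v))
        if size:
            sizes.append(size)
    return sizes

sizes = avalanche_sizes()
print(len(sizes), "avalanches; largest size:", max(sizes))
```

In the paper's framework it is the ratio of the two failure thresholds, not a single random threshold per cell, that is tuned to reproduce the observed range of power-law exponents; the sketch only illustrates the sandpile-style mechanics.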
Influence of snow temperature on avalanche impact pressure
NASA Astrophysics Data System (ADS)
Sovilla, Betty; Koehler, Anselm; Steinkogler, Walter; Fischer, Jan-Thomas
2015-04-01
The properties of the snow entrained by an avalanche during its motion (density, temperature) significantly affect flow dynamics and can determine whether the flowing material forms granules or maintains its original fine-grained structure. In general, a cold and light snow cover typically fluidizes, while warmer and more cohesive snow may form a denser granular layer in a flowing avalanche. This structural difference has a fundamental influence not only on the mobility of the flow but also on the impact pressure of avalanches. Using measurements of impact pressure, velocity, density and snow temperature performed at the Swiss Vallée de la Sionne full-scale test site, we show that impact pressure fundamentally changes with snow temperature. A transition threshold of about -2°C is determined, the same temperature at which snow granulation starts. On the one hand, warm avalanches, characterized by temperatures higher than -2°C, move as a plug and exert impact pressures linearly proportional to the avalanche depth. For Froude numbers larger than 1, an additional contribution proportional to the square of the velocity cannot be neglected. On the other hand, cold avalanches, characterized by temperatures lower than -2°C, move as dense sheared flows or completely dilute powder clouds and exert impact pressures that are mainly proportional to the square of the flow velocity. For these avalanches the impact pressures strongly depend on density variations within the flow. We suggest that the proposed temperature threshold can be used as a criterion to define the transition between the impact pressures exerted by warm and cold avalanches, thus offering a new way to circumvent the notorious difficulty of defining the difference between wet and dry flows.
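The two regimes described above can be written as a minimal sketch: a depth-proportional (gravitational) load for warm flows and a velocity-squared (inertial) load for cold flows. The density and the bare coefficients below are assumed round numbers for illustration, not the paper's fitted values.

```python
RHO_SNOW = 300.0   # assumed flow density, kg/m^3
G = 9.81           # m/s^2
T_THRESHOLD = -2.0 # granulation / regime-transition temperature, deg C

def impact_pressure(depth_m, speed_ms, temp_c):
    """Schematic two-regime impact pressure (Pa).  Warm flows
    (above about -2 C) move as a plug and load an obstacle roughly
    hydrostatically, p ~ rho g h; cold flows load it dynamically,
    p ~ rho v^2.  Dimensionless prefactors are omitted."""
    if temp_c > T_THRESHOLD:
        return RHO_SNOW * G * depth_m          # warm: depth-dominated
    return RHO_SNOW * speed_ms ** 2            # cold: velocity-dominated

# The same 3 m deep, 20 m/s flow on either side of the -2 C threshold:
print(impact_pressure(3.0, 20.0, -1.0))   # warm regime
print(impact_pressure(3.0, 20.0, -5.0))   # cold regime
```

The order-of-magnitude gap between the two outputs for identical depth and speed illustrates why the temperature regime, not just velocity, must enter design loads.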
NASA Astrophysics Data System (ADS)
Chang, Kuo-Jen; Taboada, Alfredo
2009-09-01
We present Contact Dynamics discrete element simulations of the earthquake-triggered Jiufengershan avalanche, which mobilized a 60 m thick, 1.5 km long sedimentary layer, dipping ˜22°SE toward a valley. The dynamic behavior of the avalanche is simulated under different assumptions about rock behavior, water table height, and boundary shear strength. Additionally, seismic shaking is introduced using strong motion records from nearby stations. We assume that seismic shaking generates shearing and frictional heating along the surface of rupture, which, in turn, may induce dynamic weakening and avalanche triggering; a simple "slip-weakening" criterion was adopted to simulate shear strength drop along the rupture surface. We investigate the mechanical processes occurring during triggering and propagation of an avalanche mobilizing shallowly dipping layers. Incipient deformation forms a pop-up structure at the toe of the dip slope. As the avalanche propagates, the pop-up deforms into an overturned fold, which overrides the surface of separation along a décollement. Simultaneously, uphill layers slide at high velocity (125 km/h) and are folded and disrupted as they reach the toe of the dip slope. The avalanche foot forms a wedge that is pushed forward as deformed rocks accrete at its rear. We simulated five cross sections across the Jiufengershan avalanche, which differ in the geometry of the surface of separation. Topographic and simulated surface profiles are similar. The friction coefficient at the surface of separation determined from back analysis is abnormally low (μSS = 0.2), possibly due to lubrication by liquefied soils. The granular deposits of simulated earthquake- and rain-triggered avalanches are similar.
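The "slip-weakening" criterion mentioned above is commonly written as a linear drop of the friction coefficient with accumulated slip. The parameter values here are generic placeholders (the paper's back analysis reports μ ≈ 0.2 on the surface of separation; the critical slip distance is an assumption):

```python
def slip_weakening_friction(slip, mu_static=0.6, mu_dynamic=0.2, d_c=1.0):
    """Linear slip-weakening law: friction drops from its static value to
    its dynamic value over a critical slip distance d_c, then stays at
    the dynamic value. All parameter values are illustrative."""
    if slip >= d_c:
        return mu_dynamic
    return mu_static - (mu_static - mu_dynamic) * slip / d_c
```

Driving such a law with the recorded strong-motion shear gives the dynamic weakening that can trigger the avalanche once slip initiates.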
Moore, Jeffrey R.; Pankow, Kristine L.; Ford, Sean R.; ...
2017-03-01
The 2013 Bingham Canyon Mine rock avalanches represent one of the largest cumulative landslide events in recorded U.S. history and provide a unique opportunity to test remote analysis techniques for landslide characterization. We combine aerial photogrammetry surveying, topographic reconstruction, numerical runout modeling, and analysis of broadband seismic and infrasound data to extract salient details of the dynamics and evolution of the multiphase landslide event. Our results reveal a cumulative intact rock source volume of 52 Mm³, which mobilized in two main rock avalanche phases separated by 1.5 h. We estimate that the first rock avalanche had 1.5–2 times greater volume than the second. Each failure initiated by sliding along a gently dipping (21°), highly persistent basal fault before transitioning to a rock avalanche and spilling into the inner pit. The trajectory and duration of the two rock avalanches were reconstructed using runout modeling and independent force history inversion of intermediate-period (10–50 s) seismic data. Intermediate- and shorter-period (1–50 s) seismic data were sensitive to intervals of mass redirection and constrained finer details of the individual slide dynamics. Back projecting short-period (0.2–1 s) seismic energy, we located the two rock avalanches within 2 and 4 km of the mine. Further analysis of infrasound and seismic data revealed that the cumulative event included an additional 11 smaller landslides (volumes ~10⁴–10⁵ m³) and that a trailing signal following the second rock avalanche may result from an air-coupled Rayleigh wave. These results demonstrate new and refined techniques for detailed remote characterization of the dynamics and evolution of large landslides.
Reevaluation of tsunami formation by debris avalanche at Augustine Volcano, Alaska
Waythomas, C.F.
2000-01-01
Debris avalanches entering the sea at Augustine Volcano, Alaska have been proposed as a mechanism for generating tsunamis. Historical accounts of the 1883 eruption of the volcano describe 6- to 9-meter-high waves that struck the coastline at English Bay (Nanwalek), Alaska about 80 kilometers east of Augustine Island. These accounts are often cited as proof that volcanogenic tsunamis from Augustine Volcano are significant hazards to the coastal zone of lower Cook Inlet. This claim is disputed because deposits of unequivocal tsunami origin are not evident at more than 50 sites along the lower Cook Inlet coastline where they might be preserved. Shallow water (<25 m) around Augustine Island, in the run-out zone for debris avalanches, limits the size of an avalanche-caused wave. If the two most recent debris avalanches, Burr Point (A.D. 1883) and West Island (<500 yr. B.P.), were traveling at velocities in the range of 50 to 100 meters per second, the kinetic energy of the avalanches at the point of impact with the ocean would have been between 10¹⁴ and 10¹⁵ joules. Although some of this energy would be dissipated through boundary interactions and momentum transfer between the avalanche and the sea, the initial wave should have possessed sufficient kinetic energy to do geomorphic work (erosion, sediment transport, formation of wave-cut features) on the coastline of lower Cook Inlet. Because widespread evidence of the effects of large waves cannot be found, it appears that the debris avalanches could not have been traveling very fast when they entered the sea, or they happened during low tide and displaced only small volumes of water. In light of these results, the hazard from volcanogenic tsunamis from Augustine Volcano appears minor, unless a very large debris avalanche occurs at high tide.
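The quoted 10¹⁴–10¹⁵ J range follows directly from E = ½mv² for an avalanche of roughly 0.1 km³ at 50–100 m/s. The bulk density below is an assumed typical value for volcanic debris, not one given in the abstract:

```python
def avalanche_kinetic_energy(volume_m3, velocity_ms, density_kgm3=2000.0):
    """Kinetic energy E = 1/2 m v^2 of a debris avalanche entering the sea.
    The bulk density is an assumed typical value for volcanic debris."""
    mass = density_kgm3 * volume_m3
    return 0.5 * mass * velocity_ms ** 2

# A 0.1 km^3 (1e8 m^3) avalanche at 50-100 m/s spans roughly the
# 1e14-1e15 J range quoted above.
e_slow = avalanche_kinetic_energy(1e8, 50.0)
e_fast = avalanche_kinetic_energy(1e8, 100.0)
```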
Relation of the runaway avalanche threshold to momentum space topology
NASA Astrophysics Data System (ADS)
McDevitt, Christopher J.; Guo, Zehua; Tang, Xian-Zhu
2018-02-01
The underlying physics responsible for the formation of an avalanche instability due to the generation of secondary electrons is studied. A careful examination of the momentum space topology of the runaway electron population is carried out with an eye toward identifying how qualitative changes in the momentum space of the runaway electrons are correlated with the avalanche threshold. It is found that the avalanche threshold is tied to the merger of an O and X point in the momentum space of the primary runaway electron population. Such a change of the momentum space topology is shown to be accurately described by a simple analytic model, thus providing a powerful means of determining the avalanche threshold for a range of model assumptions.
Observation of the avalanche of runaway electrons in air in a strong electric field.
Gurevich, A V; Mesyats, G A; Zybin, K P; Yalandin, M I; Reutova, A G; Shpak, V G; Shunailov, S A
2012-08-24
The generation of an avalanche of runaway electrons is demonstrated for the first time in a laboratory experiment. Two flows of runaway electrons are formed sequentially in an extended air discharge gap at the stage of delay of a pulsed breakdown. The first, picosecond, runaway electron flow is emitted in the cathode region where the field is enhanced. Being accelerated in the gap, this beam generates electrons due to impact ionization. These secondary electrons form a delayed avalanche of runaway electrons if the field is strong enough. The properties of the avalanche correspond to the existing notions about the runaway breakdown in air. The measured avalanche current exceeds the current of the initiating electron beam by up to an order of magnitude.
Observation of the Avalanche of Runaway Electrons in Air in a Strong Electric Field
NASA Astrophysics Data System (ADS)
Gurevich, A. V.; Mesyats, G. A.; Zybin, K. P.; Yalandin, M. I.; Reutova, A. G.; Shpak, V. G.; Shunailov, S. A.
2012-08-01
The generation of an avalanche of runaway electrons is demonstrated for the first time in a laboratory experiment. Two flows of runaway electrons are formed sequentially in an extended air discharge gap at the stage of delay of a pulsed breakdown. The first, picosecond, runaway electron flow is emitted in the cathode region where the field is enhanced. Being accelerated in the gap, this beam generates electrons due to impact ionization. These secondary electrons form a delayed avalanche of runaway electrons if the field is strong enough. The properties of the avalanche correspond to the existing notions about the runaway breakdown in air. The measured avalanche current exceeds the current of the initiating electron beam by up to an order of magnitude.
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
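The space-filling-curve decomposition mentioned above works by ordering cells along a curve that preserves spatial locality and then cutting the 1D ordering into per-rank chunks. A minimal sketch using a Morton (Z-order) key follows; the specific curve, key width, and equal-count cuts are assumptions, not details from the solver:

```python
def morton_key(i, j, k, bits=10):
    """Interleave the bits of integer cell coordinates (i, j, k) into a
    Morton (Z-order) space-filling-curve key. Sorting by this key keeps
    spatially nearby cells nearby in the 1D ordering."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (3 * b)
        key |= ((j >> b) & 1) << (3 * b + 1)
        key |= ((k >> b) & 1) << (3 * b + 2)
    return key

def partition_cells(cells, n_ranks):
    """Domain decomposition sketch: order cells along the curve, then cut
    the ordering into n_ranks contiguous, roughly equal-size chunks."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    chunk = -(-len(ordered) // n_ranks)  # ceiling division
    return [ordered[r * chunk:(r + 1) * chunk] for r in range(n_ranks)]
```

Because the curve preserves locality, each contiguous chunk is a compact region of the mesh, which keeps inter-rank communication low without any explicit graph partitioning.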
LSPRAY-IV: A Lagrangian Spray Module
NASA Technical Reports Server (NTRS)
Raju, M. S.
2012-01-01
LSPRAY-IV is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray. Some important research areas covered as a part of the code development are: (1) the extension of combined CFD/scalar-Monte- Carlo-PDF method to spray modeling, (2) the multi-component liquid spray modeling, and (3) the assessment of various atomization models used in spray calculations. The current version contains the extension to the modeling of superheated sprays. The manual provides the user with an understanding of various models involved in the spray formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers.
Predicting Flows of Rarefied Gases
NASA Technical Reports Server (NTRS)
LeBeau, Gerald J.; Wilmoth, Richard G.
2005-01-01
DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
WE-EF-207-10: Striped Ratio Grids: A New Concept for Scatter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsieh, S
2015-06-15
Purpose: To propose a new method for estimating scatter in x-ray imaging. We propose the “striped ratio grid,” an anti-scatter grid with alternating stripes of high scatter rejection (attained, for example, by high grid ratio) and low scatter rejection. To minimize artifacts, stripes are oriented parallel to the direction of the ramp filter. Signal discontinuities at the boundaries between stripes provide information on local scatter content, although these discontinuities are contaminated by variation in primary radiation. Methods: We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid, and processed them together to mimic a striped ratio grid. Two phantoms were scanned with the emulated striped ratio grid and compared with a conventional anti-scatter grid and a fan-beam acquisition, which served as ground truth. A nonlinear image processing algorithm was developed to mitigate the problem of primary variation. Results: The emulated striped ratio grid reduced scatter more effectively than the conventional grid alone. Contrast is thereby improved in projection imaging. In CT imaging, cupping is markedly reduced. Artifacts introduced by the striped ratio grid appear to be minimal. Conclusion: Striped ratio grids could be a simple and effective evolution of conventional anti-scatter grids. Unlike several other approaches currently under investigation for scatter management, striped ratio grids require minimal computation, little new hardware (at least for systems which already use removable grids) and impose few assumptions on the nature of the object being scanned.
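The core idea above, that a discontinuity between adjacent stripes encodes local scatter, reduces to two linear equations when the primary signal is assumed locally equal under both stripes. The transmission values below are placeholders, and the paper's nonlinear processing for primary variation is not reproduced:

```python
def estimate_scatter(m_high, m_low, t_high=0.1, t_low=0.6):
    """Two-stripe scatter separation sketch. Adjacent stripes measure
    m = P + t * S, where P is the primary signal, S is the scatter
    incident on the detector, and t is the stripe's scatter transmission
    (the high-rejection stripe passes little scatter). Assuming P is
    locally equal under both stripes, the two measurements give both
    unknowns. Transmission values are illustrative assumptions."""
    scatter = (m_low - m_high) / (t_low - t_high)
    primary = m_high - t_high * scatter
    return primary, scatter
```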
NASA Technical Reports Server (NTRS)
Lee-Rausch, Elizabeth M.; Hammond, Dana P.; Nielsen, Eric J.; Pirzadeh, S. Z.; Rumsey, Christopher L.
2010-01-01
FUN3D Navier-Stokes solutions were computed for the 4th AIAA Drag Prediction Workshop grid convergence study, downwash study, and Reynolds number study on a set of node-based mixed-element grids. All of the baseline tetrahedral grids were generated with the VGRID (developmental) advancing-layer and advancing-front grid generation software package following the gridding guidelines developed for the workshop. With maximum grid sizes exceeding 100 million nodes, the grid convergence study was particularly challenging for the node-based unstructured grid generators and flow solvers. At the time of the workshop, the super-fine grid with 105 million nodes and 600 million elements was the largest grid known to have been generated using VGRID. FUN3D Version 11.0 has a completely new pre- and post-processing paradigm that has been incorporated directly into the solver and functions entirely in a parallel, distributed memory environment. This feature allowed for practical pre-processing and solution times on the largest unstructured-grid size requested for the workshop. For the constant-lift grid convergence case, the convergence of total drag is approximately second-order on the finest three grids. The variation in total drag between the finest two grids is only 2 counts. At the finest grid levels, only small variations in wing and tail pressure distributions are seen with grid refinement. Similarly, a small wing side-of-body separation also shows little variation at the finest grid levels. Overall, the FUN3D results compare well with the structured-grid code CFL3D. The FUN3D downwash study and Reynolds number study results compare well with the range of results shown in the workshop presentations.
Transient Finite Element Computations on a Variable Transputer System
NASA Technical Reports Server (NTRS)
Smolinski, Patrick J.; Lapczyk, Ireneusz
1993-01-01
A parallel program to analyze transient finite element problems was written and implemented on a system of transputer processors. The program uses the explicit time integration algorithm which eliminates the need for equation solving, making it more suitable for parallel computations. An interprocessor communication scheme was developed for arbitrary two dimensional grid processor configurations. Several 3-D problems were analyzed on a system with a small number of processors.
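The explicit time integration mentioned above is typically the central-difference scheme: with a diagonal (lumped) mass matrix, the acceleration is computed by simple division, so no system of equations is solved, which is what makes the method attractive on parallel machines. A minimal sketch (the lumped-mass assumption is the standard practice, not a detail stated in the abstract):

```python
def central_difference_step(u_curr, u_prev, accel, dt):
    """One explicit central-difference update,
    u_{n+1} = 2 u_n - u_{n-1} + dt^2 * a_n,
    where a_n = M^{-1}(f - K u_n) is evaluated componentwise for a
    diagonal (lumped) mass matrix, with no equation solving."""
    return [2.0 * uc - up + dt * dt * a
            for uc, up, a in zip(u_curr, u_prev, accel)]
```

Each degree of freedom updates independently from local data, so the grid of processors only needs to exchange boundary values between steps.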
Joint Experiment on Scalable Parallel Processors (JESPP) Parallel Data Management
2006-05-01
management and analysis tool, called Simulation Data Grid (SDG). The design principles driving the design of SDG are: 1) minimize network communication... or SDG. In this report, an initial prototype implementation of this system is described. This project follows on earlier research, primarily... distributed logging system had some limitations. These limitations will be described in this report, and how the SDG addresses these limitations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
Look-ahead dynamic simulation software system incorporates high-performance parallel computing technologies, significantly reduces the solution time for each transient simulation case, and brings dynamic simulation analysis into on-line applications to enable more transparency for better reliability and asset utilization. It takes a snapshot of the current power grid status, performs the system dynamic simulation in parallel, and outputs the transient response of the power system in real time.
Design and characterization of single photon avalanche diodes arrays
NASA Astrophysics Data System (ADS)
Neri, L.; Tudisco, S.; Lanzanò, L.; Musumeci, F.; Privitera, S.; Scordino, A.; Condorelli, G.; Fallica, G.; Mazzillo, M.; Sanfilippo, D.; Valvo, G.
2010-05-01
During the last years, in collaboration with ST-Microelectronics, we developed a new avalanche photo sensor, the single photon avalanche diode (SPAD); see S. Privitera et al., Sensors 8 (2008) 4636 [1] and S. Tudisco et al., IEEE Sensors Journal 8 (2008) 1324 [2].
Bellay, Timothy; Klaus, Andreas; Seshadri, Saurav; Plenz, Dietmar
2015-01-01
Spontaneous fluctuations in neuronal activity emerge at many spatial and temporal scales in cortex. Population measures found these fluctuations to organize as scale-invariant neuronal avalanches, suggesting cortical dynamics to be critical. Macroscopic dynamics, though, depend on physiological states and are ambiguous as to their cellular composition, spatiotemporal origin, and contributions from synaptic input or action potential (AP) output. Here, we study spontaneous firing in pyramidal neurons (PNs) from rat superficial cortical layers in vivo and in vitro using 2-photon imaging. As the animal transitions from the anesthetized to awake state, spontaneous single neuron firing increases in irregularity and assembles into scale-invariant avalanches at the group level. In vitro spike avalanches emerged naturally yet required balanced excitation and inhibition. This demonstrates that neuronal avalanches are linked to the global physiological state of wakefulness and that cortical resting activity organizes as avalanches from firing of local PN groups to global population activity. DOI: http://dx.doi.org/10.7554/eLife.07224.001 PMID:26151674
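The avalanche organization referred to above is computed from binned population activity. A minimal sketch of the usual definition in this literature (the binning and thresholding choices here are generic, not those of the paper):

```python
def avalanche_sizes(binned_counts):
    """Standard neuronal-avalanche definition: an avalanche is a run of
    consecutive time bins with nonzero activity, bracketed by empty
    bins; its size is the total activity within the run."""
    sizes, current = [], 0
    for c in binned_counts:
        if c > 0:
            current += c
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes
```

Scale invariance is then assessed by testing whether the resulting size distribution follows a power law.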
Thermally Driven Inhibition of Superconducting Vortex Avalanches
NASA Astrophysics Data System (ADS)
Lara, Antonio; Aliev, Farkhad G.; Moshchalkov, Victor V.; Galperin, Yuri M.
2017-09-01
Complex systems close to their critical state can exhibit abrupt transitions—avalanches—between their metastable states. It is a challenging task to understand the mechanism of the avalanches and control their behavior. Here, we investigate microwave stimulation of avalanches in the so-called vortex matter of type-II superconductors—a system of interacting Abrikosov vortices close to the critical (Bean) state. Our main finding is that the avalanche incubation strongly depends on the excitation frequency, a completely unexpected behavior observed close to the so-called depinning frequencies. Namely, the triggered vortex avalanches in Pb superconducting films become effectively inhibited approaching the critical temperature or critical magnetic field when the microwave stimulus is close to the vortex depinning frequency. We suggest a simple model explaining the observed counterintuitive behaviors as a manifestation of the strongly nonlinear dependence of the driven vortex core size on the microwave excitation intensity. This paves the way to controlling avalanches in superconductor-based devices through their nonlinear response.
Time-Dependent Simulations of Incompressible Flow in a Turbopump Using Overset Grid Approach
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan
2001-01-01
This viewgraph presentation provides information on mathematical modelling of the SSME (space shuttle main engine). The unsteady SSME-rig1 start-up procedure from the pump at rest has been initiated by using 34.3 million grid points. The computational model for the SSME-rig1 has been completed. Moving boundary capability is obtained by using DCF module in OVERFLOW-D. MPI (Message Passing Interface)/OpenMP hybrid parallel code has been benchmarked.
OpenMP performance for benchmark 2D shallow water equations using LBM
NASA Astrophysics Data System (ADS)
Sabri, Khairul; Rabbani, Hasbi; Gunawan, Putu Harry
2018-03-01
Shallow water equations, commonly referred to as the Saint-Venant equations, are used to model fluid phenomena. These equations can be solved numerically using several methods, such as the Lattice Boltzmann method (LBM), SIMPLE-like methods, the finite difference method, Godunov-type methods, and the finite volume method. In this paper, the shallow water equations are approximated using LBM (known as LABSWE) and simulated in parallel using OpenMP. To evaluate the performance of the 2- and 4-thread parallel algorithm, ten different grid sizes Lx and Ly are examined. The results show that using the OpenMP platform, the computational time for solving LABSWE can be decreased. For instance, using a grid size of 1000 × 500, the computation times with 2 and 4 threads are observed to be 93.54 s and 333.243 s, respectively.
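The standard strong-scaling metrics used in such OpenMP benchmarks are speedup and parallel efficiency; a small helper makes the definitions explicit (generic definitions, not code from the paper):

```python
def speedup_and_efficiency(t_serial, t_parallel, n_threads):
    """Strong-scaling metrics for a threaded benchmark:
    speedup S = T_serial / T_parallel and efficiency E = S / n_threads."""
    s = t_serial / t_parallel
    return s, s / n_threads
```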
ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousbie, Thierry, E-mail: tsousbie@gmail.com; Department of Physics, The University of Tokyo, Tokyo 113-0033; Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033
2016-09-15
Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to best preserve the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a "warm" dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
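The fast-Fourier Poisson solve on a regular grid mentioned above amounts to dividing by −k² in Fourier space. A minimal periodic 2D sketch (the grid convention and normalization are assumptions; ColDICE's deposition of the tessellation onto the grid is the hard part and is not shown):

```python
import numpy as np

def poisson_fft_periodic(rho):
    """Solve nabla^2 phi = rho on a periodic 2D grid with spacing
    2*pi/N by dividing by -k^2 in Fourier space, as in particle-in-cell
    style grid solvers. The k = 0 (mean) mode is set to zero."""
    n0, n1 = rho.shape
    kx = np.fft.fftfreq(n0) * n0          # integer wavenumbers
    ky = np.fft.fftfreq(n1) * n1
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0
    phi_hat[nonzero] = -rho_hat[nonzero] / k2[nonzero]   # -k^2 phi_hat = rho_hat
    return np.fft.ifft2(phi_hat).real
```

For smooth periodic sources the result is spectrally accurate, e.g. rho = sin(x) yields phi = -sin(x) to machine precision.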
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions. However, implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation, for example, is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but its derivation for the Navier-Stokes equations on a staggered grid is challenging. The NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90 degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized with a parallel efficiency of 80-90% on the problems tested.
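The point about analytical Jacobians can be illustrated on a toy nonlinear system: deriving the Jacobian by hand makes each Newton iteration cheap and exact, with quadratic convergence. The system below is invented for illustration and has nothing to do with the Navier-Stokes discretization in the paper:

```python
def newton_analytic(x0, y0, tol=1e-12, max_iter=50):
    """Newton iteration with a hand-derived (analytical) Jacobian on the
    toy system f1 = x^2 + y^2 - 4 = 0, f2 = x - y = 0 (root x = y = sqrt(2)).
    Returns (x, y, iterations used)."""
    x, y = x0, y0
    for i in range(max_iter):
        f1 = x * x + y * y - 4.0
        f2 = x - y
        if abs(f1) + abs(f2) < tol:
            return x, y, i
        # analytical Jacobian J = [[2x, 2y], [1, -1]]; solve J d = -f
        det = -2.0 * x - 2.0 * y
        dx = (f1 + 2.0 * y * f2) / det    # Cramer's rule
        dy = (f1 - 2.0 * x * f2) / det
        x, y = x + dx, y + dy
    return x, y, max_iter
```

A finite-difference or automatic-differentiation Jacobian would give the same iterates here but at extra cost per step, which is the trade-off the abstract describes at scale.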
Low dose digital X-ray imaging with avalanche amorphous selenium
NASA Astrophysics Data System (ADS)
Scheuermann, James R.; Goldan, Amir H.; Tousignant, Olivier; Léveillé, Sébastien; Zhao, Wei
2015-03-01
Active Matrix Flat Panel Imagers (AMFPI) based on an array of thin film transistors (TFT) have become the dominant technology for digital x-ray imaging. In low dose applications, the performance of both direct and indirect conversion detectors are limited by the electronic noise associated with the TFT array. New concepts of direct and indirect detectors have been proposed using avalanche amorphous selenium (a-Se), referred to as high gain avalanche rushing photoconductor (HARP). The indirect detector utilizes a planar layer of HARP to detect light from an x-ray scintillator and amplify the photogenerated charge. The direct detector utilizes separate interaction (non-avalanche) and amplification (avalanche) regions within the a-Se to achieve depth-independent signal gain. Both detectors require the development of large area, solid state HARP. We have previously reported the first avalanche gain in a-Se with deposition techniques scalable to large area detectors. The goal of the present work is to demonstrate the feasibility of large area HARP fabrication in an a-Se deposition facility established for commercial large area AMFPI. We also examine the effect of alternative pixel electrode materials on avalanche gain. The results show that avalanche gain > 50 is achievable in the HARP layers developed in large area coaters, which is sufficient to achieve x-ray quantum noise limited performance down to a single x-ray photon per pixel. Both chromium (Cr) and indium tin oxide (ITO) have been successfully tested as pixel electrodes.
Unusual gravitational failures on lava domes of Tatun Volcanic Group, Northern Taiwan.
NASA Astrophysics Data System (ADS)
Belousov, Alexander; Belousova, Marina; Chen, Chang-Hwa; Zellmer, Georg
2010-05-01
The Tatun Volcanic Group of Northern Taiwan was formed mainly during the Pleistocene - Early Holocene. Most of the volcanoes are represented by andesitic lava domes of moderate sizes: heights up to 400 m (absolute altitudes 800-1100 m a.s.l.), base diameters up to 2 km, and volumes up to 0.3 km³. Many of the domes have broadly opened (0.5-1.0 km across and up to 140° wide), shallow-incised horseshoe-shaped scars formed by gravitational collapses. The failure planes did not intersect the volcanic conduits, and the scars were not filled by younger volcanic edifices: most of the collapses occurred a long time after the eruptions had ceased. The largest collapse, with a volume of 0.1 km³, occurred at the eastern part of the Datun lava dome. A specific feature of the collapse was that the rear slide blocks did not travel far from the source; they stopped high inside the collapse scar, forming multiple narrow toreva blocks descending downslope. The leading slide blocks formed a low-mobility debris avalanche (L~5 km; H~1 km; H/L~0.2). The deposit is composed mainly of block facies. The age of the collapse is older than 24,000 yrs, because the related debris avalanche deposit is covered by a younger debris avalanche deposit of Siaoguanyin volcano having a calibrated 14C age of 22,600-23,780 BP. The Siaoguanyin debris avalanche was formed as a result of collapse of the southern part of a small flank dome. A specific feature of the resulting avalanche was that it was hot during deposition. The deposit contains carbonized wood; andesite boulders within the deposit frequently have radial cooling joints and, in rare cases, "bread-crust" surfaces. The paucity of fine fractions in the deposit can be connected with elutriation of fines into the convective cloud as the hot avalanche travelled downslope. However, in several locations the deposit is represented by typical avalanche blocks surrounded by heterolithologic mixed facies containing abundant clasts of Miocene sandstone (picked up from the substrate).
Thus the deposit bears features of both debris avalanches and lithic-rich block-and-ash flows. The avalanche was rather mobile (L~6 km; H~1 km; H/L~0.16), despite its small volume (0.02 km³). Its speed reached 40 m/s at a distance of 5 km from the source (based on the 80 m high runup of the avalanche). The characteristics of the avalanche deposit indicate that crystallized, degassed, but still hot material of a newly extruded lava dome was involved in the collapse. An unusually low-mobility debris avalanche was formed as a result of the collapse of the western slope of Mt. Cising. A former lava coulee, which was involved in the collapse, underwent only weak disintegration: the debris avalanche deposit is represented by big boulders with little fine-grained matrix. The leading snout of the landslide traveled only 2 km, while the rear slide blocks stopped near the landslide source, forming multiple narrow toreva blocks descending downslope. The collapse volume was 0.05 km³, the maximum drop height 0.5 km, and H/L 0.25. Around the distal snout of the avalanche a "bulldozer facies" is well developed. Dating of vegetation entrained into the deposit gave a calibrated 14C age of 6000-6080 BP. The mobility of the studied debris avalanches was about half the average mobility of volcanic debris avalanches. The relatively small volume of the collapses, the particular type of material involved (massive lava domes), and the fact that the collapses occurred long after the volcanoes stopped erupting may have played a role in the low mobility of the debris avalanches of the Tatun Group.
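Two back-of-the-envelope relations used above, the runup-based speed estimate and the H/L mobility ratio, can be written directly. From energy conservation, a flow running up a slope by height h must have had speed v = sqrt(2gh); the 80 m runup cited above gives about 40 m/s, and the Cising numbers (0.5 km drop, 2 km runout) give H/L = 0.25:

```python
def speed_from_runup(runup_height_m, g=9.81):
    """Energy-conservation speed estimate: a flow that runs up a slope
    by height h must have had speed v = sqrt(2 g h)."""
    return (2.0 * g * runup_height_m) ** 0.5

def mobility_ratio(drop_height_m, runout_length_m):
    """H/L ratio (apparent friction coefficient); lower values indicate
    a more mobile avalanche."""
    return drop_height_m / runout_length_m
```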
NASA Astrophysics Data System (ADS)
King, Nelson E.; Liu, Brent; Zhou, Zheng; Documet, Jorge; Huang, H. K.
2005-04-01
Grid Computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models that can address the problem of fault-tolerant storage for backup and recovery of clinical images. We have researched and developed a novel Data Grid testbed involving several federated PAC systems based on grid architecture. By integrating a grid computing architecture to the DICOM environment, a failed PACS archive can recover its image data from others in the federation in a timely and seamless fashion. The design reflects the five-layer architecture of grid computing: Fabric, Resource, Connectivity, Collective, and Application Layers. The testbed Data Grid architecture representing three federated PAC systems, the Fault-Tolerant PACS archive server at the Image Processing and Informatics Laboratory, Marina del Rey, the clinical PACS at Saint John's Health Center, Santa Monica, and the clinical PACS at the Healthcare Consultation Center II, USC Health Science Campus, will be presented. The successful demonstration of the Data Grid in the testbed will provide an understanding of the Data Grid concept in clinical image data backup, establish benchmarks for performance from future grid technology improvements, and serve as a road map for expanded research into large enterprise- and federation-level data grids to guarantee 99.999% uptime.
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High-order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics, or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the nonconforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. The method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable.
We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2.
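The two-step "intermediate mortar" idea can be illustrated with a small sketch, assuming a tensor-product nodal basis on element faces; the matrices and sizes here are invented for illustration, not taken from the paper. Applying a 2-D-derived projection matrix one direction at a time is algebraically equivalent to the full Kronecker-product projection without ever forming it:

```python
import numpy as np

def two_step_projection(U, P_xi, P_eta):
    """Project face data U (n x n) onto a mortar (m x m) in two steps.

    Step 1 applies the projection matrix P_xi (m x n) along the first
    direction, producing an 'intermediate mortar' of shape (m, n); step 2
    applies P_eta along the second direction. Equivalent to applying
    kron(P_xi, P_eta) to U.ravel(), without forming the large matrix.
    """
    intermediate = P_xi @ U          # (m, n): project the first direction
    return intermediate @ P_eta.T    # (m, m): project the second direction

# toy projection: Lagrange interpolation from a coarse (n=3) to a fine (m=5)
# set of nodes, used in both directions
n, m = 3, 5
x_c = np.linspace(0.0, 1.0, n)
x_f = np.linspace(0.0, 1.0, m)
P = np.array([[np.prod([(xf - x_c[k]) / (x_c[j] - x_c[k])
                        for k in range(n) if k != j])
               for j in range(n)] for xf in x_f])

U = np.outer(x_c, x_c)               # separable test field u(x, y) = x * y
V = two_step_projection(U, P, P)

# the two-step result matches the explicit Kronecker-product projection
V_full = (np.kron(P, P) @ U.ravel()).reshape(m, m)
assert np.allclose(V, V_full)
assert np.allclose(V, np.outer(x_f, x_f))   # exact for this bilinear field
```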
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.
1999-01-01
The results obtained with the variable-resolution stretched-grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with emphasis on the regional down-scaling effects and their dependence on the stretched-grid design and parameters. A variable-resolution SG-GCM and SG-DAS using a global stretched grid, with fine resolution over an area of interest, is a viable new approach to regional and subregional climate studies and applications. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested-grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and 1993 flood, with enhanced regional resolution. Height, low-level jet, precipitation, and other diagnostic patterns are successfully simulated and show efficient down-scaling over the area of interest, the U.S. An imitation of the nested-grid approach is performed using the developed SG-DAS (Data Assimilation System) that incorporates the SG-GCM. The SG-DAS is run with data withheld over the area of interest. The design imitates the nested-grid framework with boundary conditions provided from analyses. No boundary-condition buffer is needed in this case because of the global domain of integration used for the SG-GCM and SG-DAS. Experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5-degree (and higher) regional resolution, are briefly discussed. The major aspects of parallelization of the SG-GCM code are outlined.
The key objectives of the study are: 1) obtaining efficient down-scaling over the area of interest with fine and very fine resolution; 2) providing consistent interactions between regional and global scales, including consistent representation of regional energy and water balances; and 3) providing high computational efficiency for future SG-GCM and SG-DAS versions using parallel codes.
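A 1-D sketch of the stretched-grid idea: uniform fine spacing over the area of interest, with a hypothetical geometric stretching law outside it (the actual GEOS stretching function is not given in the abstract):

```python
import numpy as np

def stretched_grid(lo, hi, fine_lo, fine_hi, d_fine, ratio=1.1):
    """1-D stretched grid: uniform spacing d_fine inside [fine_lo, fine_hi],
    with spacing growing geometrically by `ratio` per cell outside it
    (an illustrative stretching law, not the GEOS formula)."""
    pts = list(np.arange(fine_lo, fine_hi + 0.5 * d_fine, d_fine))
    x, d = pts[-1], d_fine                  # stretch toward the right boundary
    while x < hi:
        d *= ratio
        x += d
        pts.append(min(x, hi))
    x, d = pts[0], d_fine                   # stretch toward the left boundary
    left = []
    while x > lo:
        d *= ratio
        x -= d
        left.append(max(x, lo))
    return np.array(left[::-1] + pts)

# fine 0.5-degree resolution over a longitude band covering the U.S. (rough)
grid = stretched_grid(-180.0, 180.0, -110.0, -70.0, d_fine=0.5)
dx = np.diff(grid)
assert np.all(dx > 0)
assert np.allclose(dx[(grid[:-1] >= -110) & (grid[1:] <= -70)], 0.5)
assert dx.max() > 5 * 0.5               # much coarser far from the region
```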
Challenges in Modeling of the Global Atmosphere
NASA Astrophysics Data System (ADS)
Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko; Black, Tom
2015-04-01
The massively parallel computer architectures require that some widely adopted modeling paradigms be reconsidered in order to utilize the power of parallel processing more productively. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain and exchange only a few rows of halo data with the neighbouring cores. However, this scenario implies that the discretization used in the model is horizontally local. The spherical geometry further complicates the problem. Various grid topologies will be discussed and examples will be shown. The latitude-longitude grid with local-in-space and explicit-in-time differencing was an early choice and has remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of reasonable size. However, the polar filtering requires transpositions involving extra communications. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for a wide application of the spectral representation. With some variations, these techniques are used in most major centers. However, horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on large numbers of cores. In this respect the lat-lon grid with a fast Fourier transform represents a significant step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago.
However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop. Because their scales are comparable to those of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Having in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think. The unified Non-hydrostatic Multi-scale Model (NMMB) that is being developed at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS) will be discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable. The model formulation has been successfully tested on various scales. A global forecasting system based on the NMMB has been run in order to test and tune the model. The skill of the medium-range forecasts produced by the NMMB is comparable to that of other major medium-range models. The computational efficiency of the global NMMB on parallel computers is good.
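The pole problem described above can be quantified directly: on a lat-lon grid the physical zonal spacing is dx = R cos(lat) * dlon, so the stable explicit time step shrinks by the same factor. A small illustration (the 300 m/s wave speed is a rough, assumed figure for fast gravity waves):

```python
import math

R = 6.371e6                  # Earth radius, m
dlam = math.radians(1.0)     # 1-degree longitudinal grid interval
c = 300.0                    # assumed fastest wave speed, m/s

def zonal_spacing(lat_deg):
    """Physical east-west grid distance at a given latitude, dx = R*cos(lat)*dlon."""
    return R * math.cos(math.radians(lat_deg)) * dlam

for lat in (0.0, 60.0, 89.0):
    dx = zonal_spacing(lat)
    dt_max = dx / c          # 1-D CFL limit for explicit differencing (illustrative)
    print(f"lat {lat:5.1f}: dx = {dx / 1000.0:8.2f} km, dt_max ~ {dt_max:6.1f} s")
```

At 89 degrees latitude the zonal spacing, and hence the stable time step, is roughly 57 times smaller than at the equator, which is why polar filtering is applied.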
Denlinger, Roger P.
2012-01-01
The eruption of Mount St. Helens in 1980 produced a debris avalanche that flowed down the upper reaches of the North Fork Toutle River in southwestern Washington, clogging this drainage with sediment. In response to continuous anomalously high sediment flux into the Toutle and Cowlitz Rivers resulting from this avalanche and associated debris flows, the U.S. Army Corps of Engineers completed a Sediment Retention Structure (SRS) on the North Fork Toutle River in May 1989. For one decade, the SRS effectively blocked most of the sediment transport down the Toutle River. In 1999, the sediment level behind the SRS reached the elevation of the spillway base. Since then, a higher percentage of sediment has been passing the SRS and increasing the flood risk in the Cowlitz River. Currently (2012), the dam is filling with sediment at a rate that cannot be sustained for its original design life, and the U.S. Army Corps of Engineers is concerned with the current ability of the SRS to manage floods. This report presents an assessment of the ability of the dam to pass large flows from three types of scenarios (it is assumed that no damage to the spillway will occur). These scenarios are (1) a failure of the debris-avalanche blockage forming Castle Lake that produces a dambreak flood, (2) a debris flow from failure of that blockage, or (3) a debris flow originating in the crater of Mount St. Helens. In each case, the flows are routed down the Toutle River and through the SRS using numerical models on a gridded domain produced from a digital elevation model constructed with existing topography and dam infrastructure. The results of these simulations show that a structurally sound spillway is capable of passing large floods without risk of overtopping the crest of the dam. In addition, large debris flows originating from Castle Lake or the crater of Mount St. Helens never reach the SRS. 
Instead, debris flows fill the braided channels upstream of the dam and reduce its storage capacity.
XeCl Avalanche discharge laser employing Ar as a diluent
Sze, Robert C.
1981-01-01
A XeCl avalanche discharge exciplex laser which uses a gaseous lasing starting mixture of 0.2%-0.4% chlorine donor, 2.5%-10% Xe, and 97.3%-89.6% Ar. The chlorine donor normally comprises HCl but can also comprise CCl4 or BCl3. Use of Ar as a diluent gas reduces operating pressures relative to other rare-gas halide lasers to near atmospheric pressure, increases the output lasing power of the XeCl avalanche discharge laser by 30% to exceed KrF avalanche discharge lasing outputs, and is less expensive to operate.
Application of LANDSAT data to delimitation of avalanche hazards in montane Colorado
NASA Technical Reports Server (NTRS)
Knepper, D. H. (Principal Investigator); Ives, J. D.; Summer, R.
1976-01-01
The author has identified the following significant results. Photointerpretation of individual avalanche paths on single band black and white LANDSAT images is greatly hindered by terrain shadows and the low spatial resolution of the LANDSAT system. Maps produced in this way are biased towards the larger avalanche paths that are under the most favorable illumination conditions during imaging; other large avalanche paths, under less favorable illumination, are often not detectable and the smaller paths, even those defined by sharp trimlines, are only rarely identifiable.
Relation of the runaway avalanche threshold to momentum space topology
McDevitt, Christopher J.; Guo, Zehua; Tang, Xian -Zhu
2018-01-05
Here, the underlying physics responsible for the formation of an avalanche instability due to the generation of secondary electrons is studied. A careful examination of the momentum space topology of the runaway electron population is carried out with an eye toward identifying how qualitative changes in the momentum space of the runaway electrons are correlated with the avalanche threshold. It is found that the avalanche threshold is tied to the merger of an O and an X point in the momentum space of the primary runaway electron population. Such a change of the momentum space topology is shown to be accurately described by a simple analytic model, thus providing a powerful means of determining the avalanche threshold for a range of model assumptions.
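The merger of fixed points at a threshold can be caricatured in one dimension with dp/dt = E - drag(p) and an invented drag law having a minimum; this is only a toy for the saddle-node (root-merger) idea, not the paper's two-dimensional kinetic model:

```python
import math

# dp/dt = E - drag(p) with drag(p) = 1/p**2 + k*p**2 (an invented drag law
# with a minimum). Fixed points satisfy k*p**4 - E*p**2 + 1 = 0, a quadratic
# in p**2, so the two roots merge when the discriminant E**2 - 4*k vanishes.
k = 0.1

def fixed_points(E):
    disc = E * E - 4.0 * k
    if disc < 0:
        return []                          # below threshold: no fixed points
    return sorted(math.sqrt((E + s * math.sqrt(disc)) / (2.0 * k))
                  for s in (-1.0, 1.0))

E_crit = 2.0 * math.sqrt(k)                # threshold field: the roots merge

xo = fixed_points(1.5 * E_crit)            # [X-like unstable, O-like stable]
merged = fixed_points(E_crit + 1e-9)       # just above threshold: nearly merged
assert len(xo) == 2 and xo[0] < xo[1]
assert math.isclose(merged[0], merged[1], rel_tol=1e-3)
assert math.isclose(merged[0], k ** -0.25, rel_tol=1e-3)   # merged root p*
assert fixed_points(0.9 * E_crit) == []
```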
Precursory seismicity associated with frequent, large ice avalanches on Iliamna Volcano, Alaska, USA
Caplan-Auerbach, Jacqueline; Huggel, C.
2007-01-01
Since 1994, at least six major (volume > 10^6 m^3) ice and rock avalanches have occurred on Iliamna volcano, Alaska, USA. Each of the avalanches was preceded by up to 2 hours of seismicity believed to represent the initial stages of failure. Each seismic sequence begins with a series of repeating earthquakes thought to represent slip on an ice-rock interface, or between layers of ice. This stage is followed by a prolonged period of continuous ground-shaking that reflects constant slip accommodated by deformation at the glacier base. Finally the glacier fails in a large avalanche. Some of the events appear to have entrained large amounts of rock, while others comprise mostly snow and ice. Several avalanches initiated from the same source region, suggesting that this part of the volcano is particularly susceptible to failure, possibly due to the presence of nearby fumaroles. Although thermal conditions at the time of failure are not well constrained, it is likely that geothermal energy causes melting at the glacier base, promoting slip and culminating in failure. The frequent nature and predictable failure sequence of Iliamna avalanches make the volcano an excellent laboratory for the study of ice avalanches. The prolonged nature of the seismic signal suggests that warning may one day be given for similar events occurring in populated regions.
Parallel 3D Multi-Stage Simulation of a Turbofan Engine
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Topp, David A.
1998-01-01
A 3D multistage simulation of each component of a modern GE turbofan engine has been made. An axisymmetric view of this engine is presented in the document. This includes a fan, booster rig, high pressure compressor rig, high pressure turbine rig, and a low pressure turbine rig. In the near future, all components will be run in a single calculation for a solution of 49 blade rows. The simulation exploits the use of parallel computations by using two levels of parallelism. Each blade row is run in parallel and each blade row grid is decomposed into several domains and run in parallel. 20 processors are used for the 4 blade row analysis. The average passage approach developed by John Adamczyk at NASA Lewis Research Center has been further developed and parallelized. This is APNASA Version A. It is a Navier-Stokes solver using a 4-stage explicit Runge-Kutta time marching scheme with variable time steps and residual smoothing for convergence acceleration. It has an implicit K-E turbulence model which uses an ADI solver to factor the matrix. Between 50 and 100 explicit time steps are solved before a blade row body force is calculated and exchanged with the other blade rows. This outer iteration has been coined a "flip." Efforts have been made to make the solver linearly scalable with the number of blade rows. Enough flips are run (between 50 and 200) so that the solution in the entire machine is no longer changing. The K-E equations are generally solved every other explicit time step. One of the key requirements in the development of the parallel code was to make the parallel solution exactly (bit for bit) match the serial solution. This has helped isolate many small parallel bugs and guarantee the parallelization was done correctly. The domain decomposition is done only in the axial direction since the number of points axially is much larger than in the other two directions. This code uses MPI for message passing.
The parallel speedup of the solver portion (excluding I/O and the body-force calculation) is reported for a grid with 227 points axially.
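The multi-stage explicit Runge-Kutta time marching mentioned above can be sketched in Jameson's low-storage form; the 1/4, 1/3, 1/2, 1 stage coefficients are the standard textbook choice, not necessarily APNASA's:

```python
import math

# Jameson-style 4-stage explicit Runge-Kutta: each stage restarts from the
# step's initial state, u_k = u_n + alpha_k * dt * R(u_{k-1}). For linear
# problems this reproduces the classical RK4 amplification factor.
ALPHAS = (0.25, 1.0 / 3.0, 0.5, 1.0)

def rk4_stage_step(u, dt, residual):
    """Advance u by one time step of size dt using the 4-stage scheme."""
    u_new = u
    for a in ALPHAS:
        u_new = u + a * dt * residual(u_new)
    return u_new

# model problem du/dt = -u, exact solution exp(-t)
u, dt, t_end = 1.0, 0.01, 1.0
for _ in range(round(t_end / dt)):
    u = rk4_stage_step(u, dt, lambda v: -v)
assert abs(u - math.exp(-1.0)) < 1e-6
```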
Probabilistic Learning by Rodent Grid Cells
Cheung, Allen
2016-01-01
Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition, but their diverse response properties still defy explanation. No plausible model exists that explains stable grids in darkness for twenty minutes or longer, even though this was one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed, such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments is reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from those of rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single cell level with attractor dynamics at the cell ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning.
These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population readout of a set of probabilistic spatial computations. PMID:27792723
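The flavor of such probabilistic spatial computation can be conveyed by a toy histogram (Bayes) filter on a circular 1-D track, combining noisy path integration with an occasional landmark fix; this illustrates the general idea only and is not the paper's grid-cell model:

```python
import numpy as np

N = 100
belief = np.full(N, 1.0 / N)          # start fully uncertain about position

def predict(b, move, eps=0.1):
    """Motion update: shift the belief by `move` cells with +/-1 cell slippage."""
    return ((1 - 2 * eps) * np.roll(b, move)
            + eps * np.roll(b, move - 1) + eps * np.roll(b, move + 1))

def update(b, cell, sigma=2.0):
    """Measurement update: circular Gaussian likelihood around a landmark fix."""
    d = np.abs(np.arange(N) - cell)
    d = np.minimum(d, N - d)
    b = b * np.exp(-0.5 * (d / sigma) ** 2)
    return b / b.sum()

def entropy(b):
    nz = b[b > 0]
    return float(-(nz * np.log(nz)).sum())

true_pos = 0
for _ in range(30):                   # dead reckoning only ("in darkness")
    true_pos = (true_pos + 1) % N
    belief = predict(belief, 1)
h_dark = entropy(belief)

belief = update(belief, true_pos)     # a single landmark observation
h_fix = entropy(belief)
assert h_fix < h_dark                 # the fix collapses the uncertainty
assert abs(int(belief.argmax()) - true_pos) <= 1
```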
Implementation of ADI schemes on MIMD parallel computers
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1993-01-01
In order to simulate the effects of the impingement of hot exhaust jets of High Performance Aircraft on landing surfaces a multi-disciplinary computation coupling flow dynamics to heat conduction in the runway needs to be carried out. Such simulations, which are essentially unsteady, require very large computational power in order to be completed within a reasonable time frame of the order of an hour. Such power can be furnished by the latest generation of massively parallel computers. These remove the bottleneck of ever more congested data paths to one or a few highly specialized central processing units (CPU's) by having many off-the-shelf CPU's work independently on their own data, and exchange information only when needed. During the past year the first phase of this project was completed, in which the optimal strategy for mapping an ADI-algorithm for the three dimensional unsteady heat equation to a MIMD parallel computer was identified. This was done by implementing and comparing three different domain decomposition techniques that define the tasks for the CPU's in the parallel machine. These implementations were done for a Cartesian grid and Dirichlet boundary conditions. The most promising technique was then used to implement the heat equation solver on a general curvilinear grid with a suite of nontrivial boundary conditions. Finally, this technique was also used to implement the Scalar Penta-diagonal (SP) benchmark, which was taken from the NAS Parallel Benchmarks report. All implementations were done in the programming language C on the Intel iPSC/860 computer.
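For reference, one Peaceman-Rachford ADI step for the 2-D heat equation (the textbook form; the project itself treated the 3-D equation on curvilinear grids with more general boundary conditions) looks like this:

```python
import numpy as np

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = nu*(u_xx + u_yy) with u = 0
    on the boundary; r = nu*dt/(2*dx**2). The first half step is implicit
    in x and explicit in y; the second half step swaps the roles."""
    n = u.shape[0] - 2                                   # interior points
    A = ((1 + 2 * r) * np.eye(n)
         - r * np.eye(n, k=1) - r * np.eye(n, k=-1))     # tridiagonal operator

    half = u.copy()
    for j in range(1, n + 1):                            # implicit-x sweeps
        rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        half[1:-1, j] = np.linalg.solve(A, rhs)

    out = half.copy()
    for i in range(1, n + 1):                            # implicit-y sweeps
        rhs = half[i, 1:-1] + r * (half[i - 1, 1:-1] - 2 * half[i, 1:-1]
                                   + half[i + 1, 1:-1])
        out[i, 1:-1] = np.linalg.solve(A, rhs)
    return out

# decay of a hot spot; the scheme is stable even for a large step r = 0.5
u = np.zeros((17, 17))
u[8, 8] = 1.0
for _ in range(20):
    u = adi_step(u, r=0.5)
assert np.isfinite(u).all() and np.allclose(u, u.T)
assert u.max() < 0.05                 # the spot has spread and decayed
```

In a production solver each tridiagonal system would be solved with the Thomas algorithm rather than a dense solve; the domain decomposition question is precisely how these per-line solves are distributed across processors.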
Parallel computing techniques for rotorcraft aerodynamics
NASA Astrophysics Data System (ADS)
Ekici, Kivanc
The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, modifications to the implicit operator, the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) scheme originally used in TURNS, are performed. Second, application of an inexact Newton method, coupled with a Krylov subspace iterative method (Newton-Krylov method), is carried out. Both techniques have been tried previously for the Euler equations mode of the code. In this work, we have extended the methods to the Navier-Stokes mode. Several new implicit operators were tried because of convergence problems of traditional operators with the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results for both Euler and Navier-Stokes cases are presented for these operators. For the efficient implementation of Newton-Krylov methods in the Navier-Stokes mode of TURNS, efficient preconditioners must be used. The parallel implicit operators used in the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g. OVERFLOW).
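The inexact Newton-Krylov idea can be demonstrated on a generic stand-in problem using SciPy's `newton_krylov`, which forms Jacobian-vector products by finite differences inside a Krylov solve; this is not the TURNS code, just the same algorithmic pattern applied to a small Bratu-type boundary-value problem:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Finite-difference Bratu-type problem u'' + lam*exp(u) = 0, u(0) = u(1) = 0.
# newton_krylov never forms the Jacobian; it approximates J*v by finite
# differences of the residual inside each Krylov linear solve.
n, lam = 50, 1.0
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))     # Dirichlet boundary values
    return (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2 + lam * np.exp(u)

sol = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
assert np.max(np.abs(residual(sol))) < 1e-7
assert sol.max() > 0                             # the solution bulges upward
```

The preconditioning question the abstract raises corresponds here to the `inner_M` argument of `newton_krylov`, which accepts a preconditioner for the inner Krylov iteration.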
NASA Technical Reports Server (NTRS)
Swinbank, Richard; Purser, James
2006-01-01
Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, is what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
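A Fibonacci (golden-angle spiral) grid on the unit sphere can be generated in a few lines; this is the standard construction, in which successive points advance by the golden angle in longitude while the latitude steps give equal-area bands, so each point owns an equal area 4*pi/n:

```python
import math

def fibonacci_sphere(n):
    """Return n near-uniformly distributed points on the unit sphere."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))   # ~2.39996 rad
    pts = []
    for i in range(n):
        z = 1.0 - (2.0 * i + 1.0) / n                 # equal-area latitude bands
        r = math.sqrt(1.0 - z * z)
        theta = golden_angle * i
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

pts = fibonacci_sphere(1000)
# every point lies exactly on the unit sphere
assert all(abs(x * x + y * y + z * z - 1.0) < 1e-12 for x, y, z in pts)
# the centroid is near the origin, a quick check of uniform coverage
cx = sum(p[0] for p in pts) / len(pts)
cz = sum(p[2] for p in pts) / len(pts)
assert abs(cx) < 0.01 and abs(cz) < 0.01
```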
Efficient computation of hashes
NASA Astrophysics Data System (ADS)
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.
2014-06-01
The sequential computation of hashes, at the core of many distributed storage systems and found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
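A minimal Merkle hash tree shows why tree modes parallelize where Merkle-Damgård chaining cannot: the leaf hashes are mutually independent. This sketch uses SHA-256 and omits the domain-separation framing that real tree modes built on Keccak add:

```python
import hashlib

def merkle_root(data, chunk_size=4):
    """Hash fixed-size chunks as leaves, then repeatedly hash concatenated
    child pairs up to a single root. The leaf level (and each internal
    level) is embarrassingly parallel, unlike sequential chaining."""
    chunks = [data[i:i + chunk_size]
              for i in range(0, len(data), chunk_size)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]   # parallelizable
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

a = merkle_root(b"grid services need fast hashing")
b = merkle_root(b"grid services need fast hashing")
c = merkle_root(b"grid services need fast hashing!")
assert a == b and a != c and len(a) == 32
```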
Turbulent Boundary Layers - Experiments, Theory and Modelling
1980-01-01
direction (-1 < y < 1), a nonuniform grid spacing is used. The following transformation gives the location of grid points in the vertical direction (Ref. 9) ... a circular cylinder with its axis parallel to the flow; in the downstream region, where the flow is fully developed, the cylinder is set in rotation about ... polar profiles (figures 10 and 11) in the regions very close to the wall. In particular, experiment indicates that the flow cannot
Parallel Unsteady Overset Mesh Methodology for a Multi-Solver Paradigm with Adaptive Cartesian Grids
2008-08-21
... (IV.E and IV.D). Good linear scalability was observed for all three cases up to 12 processors. Beyond that the scalability drops off depending on grid ... Army Research Laboratory for the usage of the SUGGAR module and Yikloon Lee at NAVAIR for the usage of the NAVAIR-IHC code.
Self-Avoiding Walks Over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1999-01-01
Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
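A much simpler stand-in for the paper's construction, a depth-first traversal of the triangle-adjacency (dual) graph of a small structured triangulation, shows what a mesh linearization is; unlike the paper's O(n log n) self-avoiding walk, it only guarantees that tree-edge neighbors in the traversal share an edge:

```python
def triangulate(nx, ny):
    """Split each cell of an nx-by-ny quad grid into two triangles and build
    the edge-sharing (dual-graph) adjacency between triangles."""
    def v(i, j):
        return i + j * (nx + 1)            # vertex index on the (nx+1)x(ny+1) grid
    tris = []
    for j in range(ny):
        for i in range(nx):
            tris.append((v(i, j), v(i + 1, j), v(i + 1, j + 1)))
            tris.append((v(i, j), v(i + 1, j + 1), v(i, j + 1)))
    edge_owners = {}
    for t, (a, b, c) in enumerate(tris):
        for e in ((a, b), (b, c), (c, a)):
            edge_owners.setdefault(frozenset(e), []).append(t)
    adj = {t: set() for t in range(len(tris))}
    for owners in edge_owners.values():
        if len(owners) == 2:               # interior edge shared by two triangles
            adj[owners[0]].add(owners[1])
            adj[owners[1]].add(owners[0])
    return tris, adj

def walk(adj):
    """Depth-first ordering of triangles: visits each exactly once."""
    order, seen, stack = [], set(), [0]
    while stack:
        t = stack.pop()
        if t in seen:
            continue
        seen.add(t)
        order.append(t)
        stack.extend(sorted(adj[t] - seen, reverse=True))
    return order

tris, adj = triangulate(4, 3)
order = walk(adj)
assert sorted(order) == list(range(len(tris)))   # a true linearization
```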
Global Swath and Gridded Data Tiling
NASA Technical Reports Server (NTRS)
Thompson, Charles K.
2012-01-01
This software generates cylindrically projected tiles of swath-based or gridded satellite data, called "tiles," for the purpose of dynamically generating high-resolution global images covering various time periods, scaling ranges, and colors. It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
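The tile bookkeeping for a simple equirectangular (cylindrical) tiling can be sketched as follows; the 10-degree tile layout is an illustrative guess, since the description states only that tile size is configurable:

```python
def tile_index(lat, lon, tile_deg=10.0):
    """Return (row, col) of the tile containing the point: row 0 starts at
    the north pole, col 0 at longitude -180 (assumed layout)."""
    row = int((90.0 - lat) // tile_deg)
    col = int((lon + 180.0) // tile_deg)
    n_rows, n_cols = int(180 / tile_deg), int(360 / tile_deg)
    return min(row, n_rows - 1), min(col, n_cols - 1)   # clamp the far edges

assert tile_index(89.9, -179.9) == (0, 0)
assert tile_index(-90.0, 180.0) == (17, 35)
assert tile_index(0.0, 0.0) == (9, 18)
```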
Charon Message-Passing Toolkit for Scientific Computations
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Yan, Jerry (Technical Monitor)
2000-01-01
Charon is a library, callable from C and Fortran, that aids the conversion of structured-grid legacy codes, such as those used in the numerical computation of fluid flows, into parallel, high-performance codes. Key are functions that define distributed arrays, that map between distributed and non-distributed arrays, and that allow easy specification of common communications on structured grids. The library is based on the widely accepted MPI message passing standard. We present an overview of the functionality of Charon and some representative results.
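The kind of distributed-array bookkeeping such a library must provide can be sketched as the block-decomposition arithmetic mapping a global index to an (owner rank, local index) pair; this is a generic sketch (assuming at least one element per rank), not Charon's actual API:

```python
def owner_and_local(g, n, p):
    """Map global index g of an n-element array distributed over p ranks
    (p <= n) to (owner rank, local index). The first n % p ranks hold one
    extra element so the split is as even as possible."""
    base, extra = divmod(n, p)
    big = base + 1
    if g < big * extra:                 # g falls in one of the larger blocks
        return divmod(g, big)
    r, local = divmod(g - big * extra, base)
    return extra + r, local

# 10 elements over 4 ranks: block sizes 3, 3, 2, 2
sizes = [0] * 4
for g in range(10):
    rank, local = owner_and_local(g, 10, 4)
    sizes[rank] += 1
    assert 0 <= local < 3
assert sizes == [3, 3, 2, 2]
```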
2008-09-01
algorithms that have been proposed to accomplish it fall into three broad categories. Eikonal solvers (e.g., Vidale, 1988, 1990; Podvin and Lecomte, 1991) ... difference eikonal solvers, the FMM algorithm works by following a wavefront as it moves across a volume of grid points, updating the travel times in the grid according to the eikonal differential equation, using a second-order finite-difference scheme. We chose to use FMM for our comparison because
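A first-order fast-marching sketch with unit wave speed (the snippet above describes a second-order variant): grid points are finalized in order of increasing travel time via a heap, and each update solves the local discrete eikonal equation |grad t| = 1:

```python
import heapq
import math
import numpy as np

def local_update(t, a, b, n, h=1.0):
    """Solve the local discrete eikonal equation at (a, b) from the smallest
    finalized neighbor times in each axis direction."""
    tx = min(t[a - 1, b] if a > 0 else math.inf,
             t[a + 1, b] if a + 1 < n else math.inf)
    ty = min(t[a, b - 1] if b > 0 else math.inf,
             t[a, b + 1] if b + 1 < n else math.inf)
    if abs(tx - ty) >= h:
        return min(tx, ty) + h             # one-sided (upwind) update
    return 0.5 * (tx + ty + math.sqrt(2 * h * h - (tx - ty) ** 2))

def fast_march(n, src):
    """Travel times from src on an n x n grid, finalized in increasing order."""
    t = np.full((n, n), math.inf)
    t[src] = 0.0
    done = np.zeros((n, n), dtype=bool)
    heap = [(0.0, src)]
    while heap:
        _, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and not done[a, b]:
                new = local_update(t, a, b, n)
                if new < t[a, b]:
                    t[a, b] = new
                    heapq.heappush(heap, (new, (a, b)))
    return t

t = fast_march(12, (0, 0))
assert np.isfinite(t).all()
assert np.allclose(t[0, :], np.arange(12.0))    # exact along a grid line
assert t[5, 5] >= 5 * math.sqrt(2) - 1e-9       # never below Euclidean distance
```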
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are typically hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine, both of which FLAME overcomes. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.
Coe, Jeffrey A.; Bessette-Kirton, Erin; Geertsema, Marten
2018-01-01
In the USA, climate change is expected to have an adverse impact on slope stability in Alaska. However, to date, there has been limited work done in Alaska to assess if changes in slope stability are occurring. To address this issue, we used 30-m Landsat imagery acquired from 1984 to 2016 to establish an inventory of 24 rock avalanches in a 5000-km2 area of Glacier Bay National Park and Preserve in southeast Alaska. A search of available earthquake catalogs revealed that none of the avalanches were triggered by earthquakes. Analyses of rock-avalanche magnitude, mobility, and frequency reveal a cluster of large (areas ranging from 5.5 to 22.2 km2), highly mobile (height/length < 0.3) rock avalanches that occurred from June 2012 through June 2016 (near the end of the 33-year period of record). These rock avalanches began about 2 years after the long-term trend in mean annual maximum air temperature may have exceeded 0 °C. Possibly more important, most of these rock avalanches occurred during a multiple-year period of record-breaking warm winter and spring air temperatures. These observations suggested to us that rock avalanches in the study area may be becoming larger because of rock-permafrost degradation. However, other factors, such as accumulating elastic strain, glacial thinning, and increased precipitation, may also play an important role in preconditioning slopes for failure during periods of warm temperatures.
Emplacement of rock avalanche material across saturated sediments, Southern Alps, New Zealand
NASA Astrophysics Data System (ADS)
Dufresne, A.; Davies, T. R.; McSaveney, M. J.
2012-04-01
The spreading of material from slope failure events is influenced not only by the volume and nature of the source material and the local topography, but also by the materials encountered in the runout path. In this study, evidence of complex interactions between rock avalanche and sedimentary runout-path material was investigated at the 45 × 10^6 m^3, long-runout (L: 4.8 km) Round Top rock avalanche deposit, New Zealand. It was sourced within mylonitic schists of the active strike-slip Alpine Fault. The narrow source scarp, elongated in the failure direction, is deep-seated, indicating that slope failure was triggered by strong seismic activity. The most striking morphological features of the deposit are longitudinal ridges aligned radially to the source. Trenching and geophysical surveys show bulldozed and sheared substrate material at ridge termini and laterally displaced sedimentary strata. The substrate failed at a minimum depth of 3 m, indicating a ploughing motion of the ridges into the saturated material below. Internal avalanche compression features suggest deceleration behind the bulldozed substrate obstacle. Contorted fabric in material ahead of the ridges documents substrate disruption by the overriding avalanche material, deposited as the next down-motion hummock. Comparison with rock avalanches of similar volume but different emplacement environments places Round Top between longer-runout avalanches emplaced over, e.g., playa lake sediments and those with shorter travel distances, whose runout was apparently retarded by topographic obstacles or entrainment of high-friction debris. These empirical observations indicate the importance of runout-path materials in tentative trends in rock avalanche emplacement dynamics and runout behaviour.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Hixon, Duane; Sankar, L. N.
1993-01-01
During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist, such as transonic small disturbance (TSD) analyses, transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architectures of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
Numerical grid generation in computational field simulations. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soni, B.K.; Thompson, J.F.; Haeuser, J.
1996-12-31
To enhance CFS technology to its next level of applicability (i.e., to create acceptance of CFS in an integrated product and process development involving multidisciplinary optimization), the basic requirements are: rapid turn-around time, reliable and accurate simulation, affordability, and appropriate linkage to other engineering disciplines. In response to this demand, there has been considerable growth in grid-generation-related research activities involving automation, parallel processing, linkage with CAD-CAM systems, CFS with dynamic motion and moving boundaries, and strategies and algorithms associated with multi-block structured, unstructured, hybrid, hexahedral, and Cartesian grids, along with their applicability to various disciplines including biomedical, semiconductor, geophysical, ocean modeling, and multidisciplinary optimization.
Mapping implicit spectral methods to distributed memory architectures
NASA Technical Reports Server (NTRS)
Overman, Andrea L.; Vanrosendale, John
1991-01-01
Spectral methods have proven invaluable in the numerical simulation of PDEs (partial differential equations), but the frequent global communication they require raises a fundamental barrier to their use on highly parallel architectures. To explore this issue, a 3-D implicit spectral method was implemented on an Intel hypercube. Utilization of about 50 percent was achieved on a 32-node iPSC/860 hypercube for a 64 x 64 x 64 Fourier-spectral grid; finer grids yield higher utilizations. Chebyshev-spectral grids are more problematic, since plane-relaxation based multigrid is required. However, by using a semicoarsening multigrid algorithm, and by relaxing all multigrid levels concurrently, relatively high utilizations were also achieved in this harder case.
Positron lifetime spectrometer using a DC positron beam
Xu, Jun; Moxom, Jeremy
2003-10-21
An entrance grid is positioned in the incident beam path of a DC beam positron lifetime spectrometer. The electrical potential difference between the sample and the entrance grid provides simultaneous acceleration of both the primary positrons and the secondary electrons. The result is a reduction in the time spread induced by the energy distribution of the secondary electrons. In addition, the sample, sample holder, entrance grid, and entrance face of the multichannel plate electron detector assembly are made parallel to each other, and are arranged at a tilt angle to the axis of the positron beam to effectively separate the path of the secondary electrons from the path of the incident positrons.
Distributed computations in a dynamic, heterogeneous Grid environment
NASA Astrophysics Data System (ADS)
Dramlitsch, Thomas
2003-06-01
In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing, or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with many problems not occurring in classical parallel computing, for example heterogeneity, authentication, and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we are closing this gap.
In this thesis, we will: show that an execution of classical parallel codes in Grid environments is possible but very slow; analyze this situation of bad performance, nail down bottlenecks in communication, and remove unnecessary overhead and other reasons for low performance; develop new and advanced parallelization algorithms that are aware of a Grid environment, in order to generalize the traditional parallelization schemes; implement and test these new methods, replacing and comparing them with the classical ones; and introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment. The higher the performance one can achieve for a single application by manual tuning for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis as well as in our implementation we tried to keep the balance between high performance and generality. None of our changes directly affect code on the application level, which makes our algorithms applicable to a whole class of real-world applications. The implementation of our work is done within the Cactus framework using the Globus toolkit, since we think that these are the most reliable and advanced programming frameworks for supporting computations in Grid environments. On the other hand, however, we tried to be as general as possible, i.e. all methods and algorithms discussed in this thesis are independent of Cactus or Globus. The ever denser and faster networking of computers and computing centers over high-speed networks enables a new kind of distributed scientific computing, in which geographically far-apart computing capacities can be combined into a single whole. The resulting virtual supercomputer, itself consisting of several large machines, can be used to compute problems for which the individual machines are too small.
The problems that cannot be solved numerically with today's computing capacities span all areas of modern science, from astrophysics, molecular physics, bioinformatics, and meteorology to number theory and fluid dynamics, to name only a few fields. Depending on the type of problem and the solution method, such "meta-computations" are more or less difficult. In general, such computations become harder and less efficient the more communication there is between the individual processes (or processors). This is because the bandwidths and latencies between two processors on the same supercomputer or cluster are, respectively, two to four orders of magnitude higher and lower than between processors hundreds of kilometers apart. Nevertheless, a time is now beginning in which it is possible to carry out computations on such virtual supercomputers even with communication-intensive programs. A large class of communication- and computation-intensive programs is the one concerned with solving differential equations by means of finite differences. It is precisely this class of programs, and its operation on a virtual supercomputer, that is treated in this dissertation. Methods for carrying out such distributed computations more efficiently are developed, analyzed, and implemented. The focus is on analyzing existing, classical parallelization algorithms and extending them so that they use available information about machines and networks (e.g. provided by the Globus Toolkit) for more efficient parallelization. As far as we know, such additional information is hardly used in relevant programs, since the majority of parallelization algorithms were implicitly designed for execution on supercomputers or clusters.
Terrain Classification of Norwegian Slab Avalanche Accidents
ERIC Educational Resources Information Center
Hallandvik, Linda; Aadland, Eivind; Vikene, Odd Lennart
2016-01-01
It is difficult to rely on snow conditions, weather, and human factors when making judgments about avalanche risk because these variables are dynamic and complex; terrain, however, is more easily observed and interpreted. Therefore, this study aimed to investigate (1) the type of terrain in which historical fatal snow avalanche accidents in Norway…
Snow supporting structures for milepost 151 Avalanche, Highway US 89/191, Jackson, Wyoming : plans.
DOT National Transportation Integrated Search
2009-04-01
The 151 Avalanche, near Jackson, Wyoming, has historically avalanched to the road below 1.5 to 2 times a year. The road, US 89/191, is four lanes and carries an estimated 8,000 vehicles per day in the winter months. The starting zone of the 151 Avala...
Rockfalls and Avalanches from Little Tahoma Peak on Mount Rainier, Washington
Crandell, Dwight Raymond; Fahnestock, Robert K.
1965-01-01
In December 1963 rockfalls from Little Tahoma Peak on the east side of Mount Rainier volcano fell onto Emmons Glacier and formed avalanches of rock debris that traveled about 4 miles down the glacier and the White River valley. In this distance, the rock debris descended as much as 6,200 feet in altitude. Minor lithologic differences and crosscutting relations indicate that the rockfalls caused at least seven separate avalanches, having an estimated total volume of 14 million cubic yards. The initial rockfall may have been caused by a small steam explosion near the base of Little Tahoma Peak. During movement, some of the avalanches were deflected from one side of the valley to the other. Calculations based on the height to which the avalanches rose on the valley walls suggest that their velocity reached at least 80 or 90 miles per hour. The unusually long distance some of the avalanches were transported is attributed to a cushion of trapped and compressed air at their base, which buoyed them up amid reduced friction.
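The velocity estimate from runup on the valley walls follows from a frictionless energy balance, v = sqrt(2gh): debris must be moving at least that fast to climb height h against gravity. A minimal sketch, where the 75 m runup height is a hypothetical value chosen only to land in the reported 80-90 mph range (the abstract does not give the measured runup heights):

```python
import math

def runup_speed_ms(runup_height_m, g=9.81):
    """Minimum flow speed needed to climb runup_height_m against gravity,
    from the energy balance v = sqrt(2 g h); friction is neglected, so
    this is a lower bound on the true speed."""
    return math.sqrt(2 * g * runup_height_m)

v = runup_speed_ms(75.0)      # hypothetical 75 m runup on the valley wall
v_mph = v * 2.23694           # convert m/s to miles per hour
```

Because friction is ignored, the true speed at the base of the runup would have been somewhat higher, which is why such estimates are quoted as "at least".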
NASA Astrophysics Data System (ADS)
Zhu, Zhaoxuan; Wiese, Kay Jörg
2017-12-01
In disordered elastic systems driven by displacing a parabolic confining potential adiabatically slowly, all advance of the system occurs in bursts, termed avalanches. Avalanches have a finite extension in time, which is much smaller than the waiting time between them. Avalanches also have a finite extension ℓ in space, i.e., only a part of the interface of size ℓ moves during an avalanche. Here we study their spatial shape ⟨S(x)⟩_ℓ given ℓ, as well as its fluctuations encoded in the second cumulant ⟨S²(x)⟩_ℓ^c. We establish scaling relations governing the behavior close to the boundary. We then give analytic results for the Brownian force model, in which the microscopic disorder for each degree of freedom is a random walk. Finally, we confirm these results with numerical simulations. To do this properly we elucidate the influence of discretization effects, which also confirms the assumptions entering into the scaling ansatz. This allows us to reach the scaling limit already for avalanches of moderate size. We find excellent agreement for the universal shape and its fluctuations, including all amplitudes.
NASA Astrophysics Data System (ADS)
Laitinen, Antti; Kumar, Manohar; Elo, Teemu; Liu, Ying; Abhilash, T. S.; Hakonen, Pertti J.
2018-06-01
We have investigated the crossover from Zener tunneling of single charge carriers to avalanche-type bunched electron transport in a suspended graphene Corbino disk in the zeroth Landau level. At low bias, we find a tunneling current that follows the gyrotropic Zener tunneling behavior. At larger bias, we find an avalanche type of transport whose onset current decreases as the magnetic field increases. The low-frequency noise indicates strong bunching of the electrons in the avalanches. On the basis of the measured low-frequency switching noise power, we deduce the characteristic switching rates of the avalanche sequence. The simultaneous microwave shot noise measurement also reveals intrinsic correlations within the avalanche pulses and indicates a decrease in correlations with increasing bias.
NASA Astrophysics Data System (ADS)
Woodbury, Daniel; Wahlstrand, Jared; Goers, Andy; Feder, Linus; Miao, Bo; Hine, George; Salehi, Fatholah; Milchberg, Howard
2016-10-01
We report on the use of single-shot supercontinuum spectral interferometry (SSSI) to make temporally and spatially resolved measurements of laser-induced avalanche breakdown in ambient air by a 200 ps pulse. By seeding the breakdown using an external 100 fs pulse, we demonstrate control over the timing and spatial characteristics of the avalanche. In addition, we calculate the collisional ionization rates at various laser intensities and demonstrate seeding of the avalanche breakdown both by multiphoton ionization and by photodetaching ions produced from a radioactive source. These observations provide proof-of-concept support for recent proposals to remotely measure radioactivity using laser-induced avalanche breakdown. This work was supported by the DTRA C-WMD Basic Research Program and by the DOE NNSA Stewardship Science Graduate Fellowship, provided under Grant Number DE-NA0002135.
Volcanic mixed avalanches: a distinct eruption-triggered mass-flow process at snow-clad volcanoes
Pierson, T.C.; Janda, R.J.
1994-01-01
A generally unrecognized type of pyroclastic deposit was produced by rapid avalanches of intimately mixed snow and hot pyroclastic debris during eruptions at Mount St. Helens, Nevado del Ruiz, and Redoubt Volcano between 1982 and 1989. These "mixed avalanches" traveled as far as 14 km at velocities up to ~27 m/s, involved as much as 10⁷ m³ of rock and ice, and left unmelted deposits of single flow units as thick as 5 m. During flow downslope, heat transfer from hot rocks to snow produced meltwater that partially saturated the mixtures, apparently giving these mixed avalanches mobilities equal to or greater than those of "dry" debris avalanches of similar volume. After melting and desiccation, the deposits are highly susceptible to erosion and unlikely to be well preserved in the stratigraphic record. -Authors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, P. A., E-mail: Pavel.Ivanov@mail.ioffe.ru; Potapov, A. S.; Samsonova, T. P.
p⁺–n₀–n⁺ 4H-SiC diodes with homogeneous avalanche breakdown at 1860 V are fabricated. The pulse current–voltage characteristics are measured in the avalanche-breakdown mode up to a current density of 4000 A/cm². It is shown that the avalanche-breakdown voltage increases with increasing temperature. The following diode parameters are determined: the avalanche resistance (8.6 × 10⁻² Ω cm²), the electron drift velocity in the n₀ base at electric fields higher than 10⁶ V/cm (7.8 × 10⁶ cm/s), and the relative temperature coefficient of the breakdown voltage (2.1 × 10⁻⁴ K⁻¹).
Intermittency between avalanche regimes on grain piles
NASA Astrophysics Data System (ADS)
Arran, M. I.; Vriend, N. M.
2018-06-01
We experimentally investigate discrete avalanches of grains, driven by a low inflow rate, on an erodible pile in a channel. We observe intermittency between one regime, in which avalanches are quasiperiodic and system spanning, and another, in which they pass at irregular intervals and have a power-law size distribution. Observations are robust to changes of inflow rate and grain type and require no tuning of external parameters. We demonstrate that the state of the pile's surface determines whether avalanche fronts propagate to the end of the channel or stop partway down, and we introduce a toy model for the latter case that reproduces the observed power-law size distribution. We suggest direct applications to avalanches of pharmaceutical and geophysical grains, and the possibility of reconciling the "self-organized criticality" predicted by several authors with the hysteretic behavior described by others.
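The irregular regime with a power-law size distribution is the hallmark of self-organized criticality, classically illustrated by the Bak-Tang-Wiesenfeld sandpile. The sketch below is that standard textbook model, not the authors' toy model (whose details the abstract does not give); grid size, drive length, and seed are arbitrary choices for the example.

```python
import numpy as np

def sandpile_avalanches(n=20, drops=2000, seed=0):
    """Drop grains one at a time onto random sites of an n x n
    Bak-Tang-Wiesenfeld sandpile; any site holding >= 4 grains topples,
    sending one grain to each neighbour (grains fall off the edges).
    Returns the size (total topplings) of each drop's avalanche."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(drops):
        i, j = rng.integers(0, n, size=2)
        z[i, j] += 1
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if unstable.size == 0:
                break
            for i2, j2 in unstable:
                z[i2, j2] -= 4
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, c = i2 + di, j2 + dj
                    if 0 <= a < n and 0 <= c < n:
                        z[a, c] += 1
        sizes.append(size)
    return np.array(sizes)

sizes = sandpile_avalanches()
```

Once the pile self-organizes to its critical slope, the recorded sizes span many decades, which is the behavior a power-law fit would quantify.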
Anderson, H.L.; Kinnison, W.W.; Lillberg, J.W.
1985-04-30
An apparatus and method for electronically reading planar two-dimensional β-ray emitter-labeled gel electrophoretograms. A single, flat rectangular multiwire proportional chamber is placed in close proximity to the gel and the assembly placed in an intense uniform magnetic field disposed in a perpendicular manner to the rectangular face of the proportional chamber. Beta rays emitted in the direction of the proportional chamber are caused to execute helical motions which substantially preserve knowledge of the coordinates of their origin in the gel. Perpendicularly oriented, parallel-wire, parallel-plane cathodes electronically sense the location of the β-rays from ionization generated thereby in a detection gas, coupled with an electron avalanche effect resulting from the action of a parallel-wire anode located therebetween. A scintillator permits the present apparatus to be rendered insensitive when signals are generated from cosmic rays incident on the proportional chamber. Resolution for concentrations of radioactive compounds in the gel exceeds 700 μm. The apparatus and method of the present invention represent a significant improvement over conventional autoradiographic techniques in dynamic range, linearity and sensitivity of data collection. A concentration and position map for gel electrophoretograms having significant concentrations of labeled compounds and/or highly radioactive labeling nuclides can generally be obtained in less than one hour.
Anderson, Herbert L.; Kinnison, W. Wayne; Lillberg, John W.
1987-01-01
Apparatus and method for electronically reading planar two-dimensional β-ray emitter-labeled gel electrophoretograms. A single, flat rectangular multiwire proportional chamber is placed in close proximity to the gel and the assembly placed in an intense uniform magnetic field disposed in a perpendicular manner to the rectangular face of the proportional chamber. Beta rays emitted in the direction of the proportional chamber are caused to execute helical motions which substantially preserve knowledge of the coordinates of their origin in the gel. Perpendicularly oriented, parallel-wire, parallel-plane cathodes electronically sense the location of the β-rays from ionization generated thereby in a detection gas, coupled with an electron avalanche effect resulting from the action of a parallel-wire anode located therebetween. A scintillator permits the present apparatus to be rendered insensitive when signals are generated from cosmic rays incident on the proportional chamber. Resolution for concentrations of radioactive compounds in the gel exceeds 700 μm. The apparatus and method of the present invention represent a significant improvement over conventional autoradiographic techniques in dynamic range, linearity and sensitivity of data collection. A concentration and position map for gel electrophoretograms having significant concentrations of labeled compounds and/or highly radioactive labeling nuclides can generally be obtained in less than one hour.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library.
We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.
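The regularized Gauss-Newton update at the core of such inversions can be sketched in a few lines of dense linear algebra. This is a generic Tikhonov-regularized step, not the MARE3DEM/Occam implementation; the Jacobian, roughness operator, and data here are synthetic, and a real code would choose the regularization weight adaptively.

```python
import numpy as np

def gauss_newton_step(J, r, m, R, mu):
    """One regularized Gauss-Newton model update for
    minimize ||d - f(m)||^2 + mu * ||R m||^2:
    solve (J^T J + mu R^T R) dm = J^T r - mu R^T R m, return m + dm,
    where J is the sensitivity (Jacobian) matrix and r the data residual."""
    A = J.T @ J + mu * (R.T @ R)
    b = J.T @ r - mu * (R.T @ R) @ m
    return m + np.linalg.solve(A, b)

# Tiny synthetic *linear* test problem, d = G m_true, so a single step
# should recover m_true up to a small regularization bias.
rng = np.random.default_rng(1)
G = rng.standard_normal((30, 10))   # stand-in sensitivity matrix
m_true = rng.standard_normal(10)
d = G @ m_true
m0 = np.zeros(10)
R = np.eye(10)                      # simplest roughness operator
m1 = gauss_newton_step(G, d - G @ m0, m0, R, mu=1e-6)
```

For a nonlinear forward operator the step is repeated, recomputing J and the residual at each new model, which is exactly where the adjoint-reciprocity sensitivities and ScaLAPACK routines mentioned above pay off.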
Time lapse photography as an approach to understanding glide avalanche activity
Hendrikx, Jordy; Peitzsch, Erich H.; Fagre, Daniel B.
2012-01-01
Avalanches resulting from glide cracks are notoriously difficult to forecast, but are a recurring problem for numerous avalanche forecasting programs. In some cases glide cracks are observed to open and then melt away in situ. In other cases, they open and then fail catastrophically as large, full-depth avalanches. Our understanding and management of these phenomena are currently limited. It is thought that an increase in the rate of snow gliding occurs prior to full-depth avalanche activity, so frequent observation of glide crack movement can provide an index of instability. During spring 2011 in Glacier National Park, Montana, USA, we began tracking glide avalanche activity using a time-lapse camera focused on a southwest-facing glide crack. This crack melted in situ without failing as a glide avalanche, while other nearby glide cracks on north through southeast aspects failed. In spring 2012, a camera was aimed at a large and productive glide crack adjacent to the Going-to-the-Sun Road. We captured three unique glide events in the field of view. Unfortunately, all of them failed either very quickly or during periods of obscured view, so measurements of glide rate could not be obtained. However, we compared the hourly meteorological variables during the period of glide activity to the same variables prior to glide activity. Air temperature, relative humidity, air pressure, incoming and reflected longwave radiation, SWE, total precipitation, and snow depth were found to be statistically different for the cases examined. We propose that these are some of the potential precursors for glide avalanche activity, but urge caution in their use, due to the simple approach and small data set size. It is hoped that by introducing a workable method to easily record glide crack movement, combined with ongoing analysis of the associated meteorological data, we will improve our understanding of when, or if, glide avalanche activity will ensue.
NASA Astrophysics Data System (ADS)
van Herwijnen, Alec; Failletaz, Jerome; Berhod, Nicole; Mitterer, Christoph
2013-04-01
Glide avalanches occur when the entire snowpack glides over the ground until an avalanche releases. These avalanches are difficult to forecast since the gliding process can take place over a few hours up to several weeks or months. The presence of liquid water at the interface between the snow cover and the ground surface is of primary importance as it reduces frictional support. Glide avalanches are often preceded by the opening of a tensile crack in the snow cover, called a glide crack. Past research has shown that glide crack opening accelerates prior to avalanche release. During the winter of 2012-2013, we monitored glide crack expansion using time-lapse photography in combination with a seismic sensor and two heat flux sensors on a slope with well documented glide avalanche activity in the Eastern Swiss Alps above Davos, Switzerland. To track changes in glide rates, the number of dark pixels in an area around the glide crack is counted in each image. Using this technique, we observed an increase in glide rates prior to avalanche release. Since the field site is located very close to the town of Davos, the seismic data was very noisy. Nevertheless, the accelerated snow gliding observed in the time-lapse images coincided with increased seismic activity. Overall, these results show that a combination of time-lapse photography with seismic monitoring could provide valuable insight into glide avalanche release. Recordings of the heat flux plates show that the energy input from the soil is fairly small and constant throughout the observed period. The results suggest that ground heat flux is a minor contributor to the water production at the snow-soil interface. Instead, the presence of water at the base of the snowpack is probably due to a strong hydraulic pressure gradient at the snow-soil interface.
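The dark-pixel counting technique used to track glide rates in the time-lapse images can be sketched in a few lines of array code. The threshold, region of interest, and synthetic frames below are invented for illustration; the published analysis would use the actual image sequence and a hand-picked region around the crack.

```python
import numpy as np

def dark_pixel_count(frame, roi, threshold=60):
    """Count pixels darker than `threshold` (0-255 grayscale) inside the
    region of interest (row0, row1, col0, col1) enclosing the glide crack.
    A growing count from frame to frame indicates crack expansion."""
    r0, r1, c0, c1 = roi
    patch = frame[r0:r1, c0:c1]
    return int((patch < threshold).sum())

# Synthetic example: a bright snow slope (value 200) with a dark crack
# (value 20) that widens from 3 to 6 pixel columns between two frames.
frame1 = np.full((100, 100), 200, dtype=np.uint8)
frame1[40:60, 50:53] = 20
frame2 = frame1.copy()
frame2[40:60, 50:56] = 20
roi = (30, 70, 40, 70)
growth = dark_pixel_count(frame2, roi) - dark_pixel_count(frame1, roi)
```

Differencing the counts between successive images gives the glide-rate index; an accelerating series of positive differences is the precursor signal described above.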
A method to harness global crowd-sourced data to understand travel behavior in avalanche terrain.
NASA Astrophysics Data System (ADS)
Hendrikx, J.; Johnson, J.
2015-12-01
To date, most studies of the human dimensions of decision making in avalanche terrain have focused on two areas: post-accident analysis using accident reports/interviews, and the development of tools as decision-forcing aids. We present an alternate method, using crowd-sourced citizen science, for understanding decision-making in avalanche terrain. Our project combines real-time GPS tracking via a smartphone application with internet-based surveys of winter backcountry users as a method to describe and quantify travel practices in concert with group decision-making dynamics and demographic data of participants during excursions. Effectively, we use the recorded GPS track taken within the landscape as an expression of the decision-making processes and terrain usage of the group. Preliminary data analysis shows that individual experience levels, gender, avalanche hazard, and group composition all influence the ways in which people travel in avalanche terrain. Our results provide the first analysis of coupled real-time GPS tracking of the crowd while moving in avalanche terrain combined with psychographic and demographic correlates. This research will lead to an improved understanding of real-time decision making in avalanche terrain. In this paper we focus specifically on the presentation of the methods used to solicit, and then harness, the crowd to obtain data in a unique and innovative application of citizen science where the movements within the terrain are the desired output data (Figure 1). Figure 1: Example GPS tracks sourced from backcountry winter users in the Teton Pass area (Wyoming), from the 2014-15 winter season, where tracks in red represent those recorded by self-assessed experts (as per our survey), and tracks in blue represent those recorded by self-assessed intermediates. All tracks shown were obtained under similar avalanche conditions.
Statistical analysis of terrain metrics showed that the experts used steeper terrain than the intermediate users under similar avalanche conditions, demonstrating different terrain choice and use as a function of experience rather than hazard level.
Post-glacial rock avalanches in the Obersee Valley, Glarner Alps, Switzerland
NASA Astrophysics Data System (ADS)
Nagelisen, Jan; Moore, Jeffrey R.; Vockenhuber, Christoph; Ivy-Ochs, Susan
2015-06-01
The geological record of prehistoric rock avalanches provides invaluable data for assessing the hazard posed by these rare but destructive mass movements. Here we investigate two large rock avalanches in the Obersee valley of the Glarner Alps, Switzerland, providing detailed mapping of landslide and related Quaternary phenomena, revised volume estimates for each event, and surface exposure dating of rock avalanche deposits. The Rautispitz rock avalanche originated from the southern flank of the Obersee valley, releasing approximately 91 million m³ of limestone on steeply-dipping bedding planes. Debris had a maximum horizontal travel distance of ~5000 m and a fahrboeschung angle (relating fall height to length) of 18°, and was responsible for the creation of Lake Obersee; deposits are more than 130 m thick in places. The Platten rock avalanche encompassed a source volume of 11 million m³ sliding from the northern flank of the Obersee valley on similar steeply-dipping limestone beds (bedrock forms a syncline under the valley). Debris had a maximum horizontal travel distance of 1600 m with a fahrboeschung angle of 21°, and is more than 80 m thick in places. Deposits of the Platten rock avalanche are superposed atop those from the Rautispitz event at the end of the Obersee valley, where they dam Lake Haslensee. Runout for both events was simulated using the dynamic analysis code DAN3D; results showed an excellent match to mapped deposit extents and thicknesses and helped confirm the hypothesized single-event failure scenarios. ³⁶Cl cosmogenic nuclide surface exposure dating of 13 deposited boulders revealed a Younger Dryas age of 12.6 ± 1.0 ka for the Rautispitz rock avalanche and a mid-Holocene age of 6.1 ± 0.8 ka for the Platten rock avalanche. A seismological trigger is proposed for the former event due to potentially correlated turbidite deposits in nearby Lake Zurich.
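The fahrboeschung angle quoted for each event is simply the arctangent of total fall height over horizontal runout length, with lower angles indicating higher mobility. A quick sketch; the 1600 m fall height is an assumed value (not given in the abstract) chosen to be consistent with the reported ~5000 m runout and 18° angle for the Rautispitz event:

```python
import math

def fahrboeschung_deg(fall_height_m, runout_length_m):
    """Fahrboeschung (travel) angle in degrees: arctan of total fall
    height over horizontal runout length. Lower angle = higher mobility."""
    return math.degrees(math.atan(fall_height_m / runout_length_m))

# Assumed ~1600 m fall height over the reported ~5000 m runout:
angle = fahrboeschung_deg(1600.0, 5000.0)   # close to the reported 18 deg
```

The same ratio (H/L) appears as the mobility index height/length < 0.3 used for the Glacier Bay inventory earlier in this section.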
NASA Astrophysics Data System (ADS)
Tichavsky, R.
2016-12-01
The High Tatras Mountains are permanently affected by hazardous geomorphic processes. Snow avalanches are a common hazard that threatens infrastructure and the people living in and visiting the mountains. So far, the spatio-temporal reconstruction of snow avalanche histories in the High Tatras Mountains has been based only on archival records, orthophoto interpretation and lichenometric dating. Dendrogeomorphic methods allow the intra-seasonal dating of scars on tree stems and branches and have been used widely for dating snow avalanche events all over the world. We extracted increment cores and cross sections from 189 individuals of Pinus mugo var. mugo growing on four talus slopes in the Great Cold Valley and dated all past scars that could correspond to the winter to early spring occurrence of snow avalanches. The dating was supported by visual analysis of three orthophoto images from 2004, 2009 and 2014. In total, nineteen snow avalanche event years (10 certain and 9 probable) were identified since 1959. Historical archives provided evidence for only nine event years since 1987, three of which were confirmed dendrogeomorphically. The geomorphic effect of recent snow avalanches, identified from the spatial distribution of scarred trees in individual years, corresponds to the extent of events visible in the orthophotos. We confirm a higher frequency of snow avalanche events since the 1980s (17 of 19 events) and a significant increase during the last ten years. Expected future changes in temperature and precipitation regimes could significantly influence the frequency of snow avalanches. Our results can therefore serve as a starting point for a more extensive dendrogeomorphic survey in the High Tatras Mountains, aimed at a catalogue of natural hazards for the future prediction and modelling of these phenomena in the context of environmental change.
WE-E-18A-01: Large Area Avalanche Amorphous Selenium Sensors for Low Dose X-Ray Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheuermann, J; Goldan, A; Zhao, W
2014-06-15
Purpose: A large area indirect flat panel imager (FPI) with avalanche gain is being developed to achieve x-ray quantum noise limited low dose imaging. It uses a thin optical sensing layer of amorphous selenium (a-Se), known as High-Gain Avalanche Rushing Photoconductor (HARP), to detect optical photons generated from a high resolution x-ray scintillator. We report initial results in the fabrication of a solid-state HARP structure suitable for a large area FPI. Our objective is to establish the blocking layer structures and defect suppression mechanisms that provide stable and uniform avalanche gain. Methods: Samples were fabricated as follows: (1) ITO signal electrode. (2) Electron blocking layer. (3) A 15 micron layer of intrinsic a-Se. (4) Transparent hole blocking layer. (5) Multiple semitransparent bias electrodes to investigate avalanche gain uniformity over a large area. The sample was exposed to 50 ps optical excitation pulses through the bias electrode. Transient time of flight (TOF) and integrated charge were measured. A charge transport simulation was developed to investigate the effects of varying blocking layer charge carrier mobility on defect suppression, avalanche gain and temporal performance. Results: Avalanche gain of ∼200 was achieved experimentally with our multi-layer HARP samples. Simulations using the experimental sensor structure produced the same magnitude of gain as a function of electric field. The simulation predicted that the high dark current at a point defect can be reduced by two orders of magnitude by blocking layer optimization, which can prevent irreversible damage while normal operation remains unaffected. Conclusion: We presented the first solid-state HARP structure directly scalable to a large area FPI. We have shown reproducible and uniform avalanche gain of 200. By reducing the mobility of the blocking layers we can suppress defects and maintain stable avalanche gain. Future work will optimize the blocking layers to prevent lag and ghosting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Wei; Li Dan; Reznik, Alla
2005-09-15
An indirect flat-panel imager (FPI) with avalanche gain is being investigated for low-dose x-ray imaging. It is made by optically coupling a structured x-ray scintillator, CsI(Tl), to an amorphous selenium (a-Se) avalanche photoconductor called HARP (high-gain avalanche rushing photoconductor). The final electronic image is read out using an active matrix array of thin film transistors (TFT). We call the proposed detector SHARP-AMFPI (scintillator HARP active matrix flat panel imager). The advantage of SHARP-AMFPI is its programmable gain, which can be turned on during low dose fluoroscopy to overcome electronic noise, and turned off during high dose radiography to avoid pixel saturation. The purpose of this paper is to investigate the important design considerations for SHARP-AMFPI, such as avalanche gain, which depends on both the thickness d_Se and the applied electric field E_Se of the HARP layer. To determine the optimal design parameters and operational conditions for HARP, we measured the E_Se dependence of both avalanche gain and optical quantum efficiency of an 8 μm HARP layer. The results were used in a physical model of HARP as well as a linear cascaded model of the FPI to determine the following x-ray imaging properties in both the avalanche and nonavalanche modes as a function of E_Se: (1) total gain (the product of avalanche gain and optical quantum efficiency); (2) linearity; (3) dynamic range; (4) gain nonuniformity resulting from thickness nonuniformity; and (5) effects of direct x-ray interaction in HARP. Our results showed that a HARP layer thickness of 8 μm can provide adequate avalanche gain and sufficient dynamic range for x-ray imaging applications to permit quantum limited operation over the range of exposures needed for radiography and fluoroscopy.
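The total gain defined in this abstract is the product of avalanche gain and optical quantum efficiency, both functions of the applied field. A toy sketch of that dependence, assuming a simple exponential impact-ionization gain model; the coefficients below are illustrative placeholders, not fitted HARP parameters:

```python
import math

def avalanche_gain(e_se_v_per_um, d_se_um, a=2.4e4, b=1.1e3):
    """Toy exponential avalanche model: gain = exp(alpha * d), with a
    field-activated ionization coefficient alpha(E) = a * exp(-b / E).
    The constants a and b are illustrative, not fitted HARP parameters."""
    alpha = a * math.exp(-b / e_se_v_per_um)  # ionizations per micron
    return math.exp(alpha * d_se_um)

def total_gain(e_se_v_per_um, d_se_um, optical_qe):
    # Total gain as defined in the abstract: avalanche gain x optical QE.
    return avalanche_gain(e_se_v_per_um, d_se_um) * optical_qe

# Qualitative behaviour for an 8 um layer: negligible gain well below the
# avalanche threshold field, gain of order 10^2 above it.
low = total_gain(80.0, 8.0, 0.9)
high = total_gain(105.0, 8.0, 0.9)
```

The sharp field dependence of such a model is why gain nonuniformity from thickness nonuniformity (item 4 above) is a design concern.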
Unstructured Adaptive Grid Computations on an Array of SMPs
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Pramanick, Ira; Sohn, Andrew; Simon, Horst D.
1996-01-01
Dynamic load balancing is necessary for parallel adaptive methods to solve unsteady CFD problems on unstructured grids. We have presented such a dynamic load balancing framework, called JOVE, in this paper. Results on a four-POWERnode POWER CHALLENGEarray demonstrated that load balancing gives significant performance improvements over no load balancing for such adaptive computations. The parallel speedup of JOVE, implemented using MPI on the POWER CHALLENGEarray, was significant, being as high as 31 for 32 processors. An implementation of JOVE that exploits the 'array of SMPs' architecture was also studied; this hybrid JOVE outperformed flat JOVE by up to 28% on the meshes and adaption models tested. With large, realistic meshes and actual flow-solver and adaption phases incorporated into JOVE, hybrid JOVE can be expected to yield a significant advantage over flat JOVE, especially as the number of processors is increased, thus demonstrating the scalability of the array-of-SMPs architecture.
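The quoted speedup of 31 on 32 processors corresponds to a parallel efficiency near 97%; the conversion is simply speedup divided by processor count:

```python
def parallel_efficiency(speedup, nprocs):
    """Parallel efficiency is speedup divided by processor count."""
    return speedup / nprocs

# Speedup of 31 on 32 processors, as reported for JOVE:
eff = parallel_efficiency(31, 32)  # 0.96875, i.e. ~97%
```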
Tear thinning time and topical anesthesia as assessed using the HIRCAL grid and the NCCA.
Blades, K J; Murphy, P J; Patel, S
1999-03-01
The literature contains conflicting reports of the effects of topical anesthetics on tear film stability, with some consensus that unpreserved topical anesthetics are less likely to reduce tear film stability than preserved preparations. This experiment investigated the effect of unpreserved 0.4% benoxinate hydrochloride on tear thinning time (TTT), in parallel with "real time" corneal sensitivity assessment. Tear film stability was assessed (HIRCAL grid) in parallel with real time assessment of the pharmacological activity (NCCA) of unpreserved 0.4% benoxinate hydrochloride in normal eyes. The anesthetic used did not significantly affect tear film stability. This finding is in agreement with previous investigators. Unpreserved 0.4% benoxinate hydrochloride could be used to facilitate tear film stability assessment. The experimental protocol used could also be applied to investigate the temporal relationship between anesthesia and tear film stability with preserved topical anesthetics that have been found to decrease tear film stability.
PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.
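The remapping decision described above (remap only if the gain from a balanced workload offsets the cost of data movement) can be sketched as a simple amortization test; all names and numbers below are hypothetical, not PLUM's actual cost model:

```python
def should_remap(t_step_unbalanced, t_step_balanced,
                 steps_until_next_adaption, t_remap):
    """Repartition and remap only when the accumulated per-step saving
    from a balanced workload exceeds the one-off cost of moving data."""
    saving = (t_step_unbalanced - t_step_balanced) * steps_until_next_adaption
    return saving > t_remap

# Hypothetical numbers: a 2 s/step saving over 100 steps easily amortizes
# a 30 s remap, while over 10 steps it does not.
```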
Avalanche Statistics Identify Intrinsic Stellar Processes near Criticality in KIC 8462852
NASA Astrophysics Data System (ADS)
Sheikh, Mohammed A.; Weaver, Richard L.; Dahmen, Karin A.
2016-12-01
The star KIC 8462852 (Tabby's star) has shown anomalous drops in light flux. We perform a statistical analysis of the more numerous smaller dimming events by using methods found useful for avalanches in ferromagnetism and plastic flow. Scaling exponents for avalanche statistics and temporal profiles of the flux during the dimming events are close to mean field predictions. Scaling collapses suggest that this star may be near a nonequilibrium critical point. The large events are interpreted as avalanches marked by modified dynamics, limited by the system size, and not within the scaling regime.
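Scaling exponents of avalanche-size distributions are commonly estimated with a maximum-likelihood (Hill-type) estimator over the tail of the distribution; a sketch on synthetic data (the particular estimator is an assumption for illustration, not necessarily the authors' method):

```python
import math
import random

def powerlaw_mle_exponent(sizes, s_min):
    """Continuous power-law maximum-likelihood (Hill-type) estimator:
    tau_hat = 1 + n / sum(ln(s_i / s_min)) over events with s >= s_min."""
    tail = [s for s in sizes if s >= s_min]
    return 1.0 + len(tail) / sum(math.log(s / s_min) for s in tail)

# Synthetic check: draw sizes from P(s) ~ s^-1.5 (the mean-field avalanche
# size exponent) by inverse-transform sampling, then recover the exponent.
random.seed(0)
tau = 1.5
samples = [random.random() ** (-1.0 / (tau - 1.0)) for _ in range(20000)]
tau_hat = powerlaw_mle_exponent(samples, 1.0)  # close to 1.5
```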
XeCl avalanche discharge laser employing Ar as a diluent
Sze, R.C.
1979-10-10
A XeCl avalanche discharge exciplex laser is provided which uses a gaseous lasing starting mixture of 0.2% to 0.4% chlorine donor, 2.5% to 10% Xe, and 97.3% to 89.6% Ar. The chlorine donor normally comprises HCl but can also comprise CCl4 or BCl3. Use of Ar as a diluent gas reduces operating pressures relative to other rare gas halide lasers to near atmospheric pressure, increases the output lasing power of the XeCl avalanche discharge laser by 30% to exceed KrF avalanche discharge lasing outputs, and is less expensive to operate.
NASA Astrophysics Data System (ADS)
Shea, Thomas; van Wyk de Vries, Benjamin; Pilato, Martín
2008-07-01
We study the lithology, structure, and emplacement of two debris-avalanche deposits (DADs) with contrasting origins and materials from the Quaternary-Holocene Mombacho Volcano, Nicaragua. A clear comparison is possible because both DADs were emplaced onto similar nearly flat (3° slope) topography with no apparent barrier to transport. This lack of confinement allows us to study, in nature, the perfect case scenario of a freely spreading avalanche. In addition, there is good evidence that no substratum was incorporated in the events during flow, so facies changes are related only to internal dynamics. Mombacho shows evidence of at least three large flank collapses, producing the two well-preserved debris avalanches of this study; one on its northern flank, “Las Isletas,” directed northeast, and the other on its southern flank, “El Crater,” directed south. Other south-eastern features indicate that the debris-avalanche corresponding to the third collapse (La Danta) occurred before Las Isletas and El Crater events. The materials involved in each event were similar, except in their alteration state and in the amount of substrata initially included in the collapse. While “El Crater” avalanche shows no signs of substratum involvement and has characteristics of a hydrothermal weakening-related collapse, the “Las Isletas” avalanche involves significant substratum and was generated by gravity spreading-related failure. The latter avalanche may have interacted with Lake Nicaragua during transport, in which case its run-out could have been modified. Through a detailed morphological and structural description of the Mombacho avalanches, we provide two contrasting examples of non-eruptive volcanic flank collapse. We show that, remarkably, even with two distinct collapse mechanisms, the debris avalanches developed the same gross stratigraphy of a coarse layer above a fine layer. This fine layer provided a low friction basal slide layer. 
Whereas DAD layering and the run-outs are roughly similar, the distribution of structures is different and related to lithology: Las Isletas has clear proximal faults replaced distally by inter-hummock depressions where basal unit zones are exhumed, whereas El Crater has faults throughout, but the basal layer is hidden in the distal zone. Hummocky forms depend on material type, with steep hummocks being formed of coherent lava units, and low hummocks by matrix-rich units. In both avalanches, extensional structures predominate; the upper layers exclusively underwent longitudinal and lateral extension. This is consistent with evidence of only small amounts of block-to-block interactions during bulk horizontal spreading. The base of the moving mass accommodated transport by large amounts of simple shear. We suggest that contractional structures and inter-block collisions seen in many other avalanches are artifacts related to topographic confinement.
Dense Granular Avalanches: Mathematical Description and Experimental Validation
NASA Astrophysics Data System (ADS)
Tai, Y.-C.; Hutter, K.; Gray, J. M. N. T.
Snow avalanches, landslides, rock falls and debris flows are extremely dangerous and destructive natural phenomena. The frequency of occurrence and amplitudes of these disastrous events appear to have increased in recent years, perhaps due to recent climate warming. The events endanger personal property and infrastructure in mountainous regions. For example, from the winters of 1940/41 to 1987/88 more than 7000 snow avalanches occurred in Switzerland, damaging property and causing a total of 1269 deaths. In February 1999, 36 people were buried by a single avalanche in Galtür, Austria. In August 1996, a very large debris flow in central Taiwan resulted in 51 deaths, 22 missing and approximate property damage of more than 19 billion NT dollars (ca. 600 million US dollars) [18]. In Europe, a suddenly released debris flow in northern Italy in August 1998 buried 5 German tourists on the superhighway "Brenner-Autobahn". The topic has gained so much significance that in 1990 the United Nations declared the International Decade for Natural Disaster Reduction (IDNDR); Germany has its own Deutsches IDNDR-Komitee für Katastrophenvorbeugung e.V. Special conferences are devoted to the theme, e.g., the CALAR conference on Avalanches, Landslides, Rock Falls and Debris Flows (Vienna, January 2000); INTERPRAEVENT, annual conferences on the protection of inhabitants from floods, debris flows and avalanches; and special conferences on debris flow hazard mitigation and those devoted exclusively to avalanches.
Solving very large, sparse linear systems on mesh-connected parallel computers
NASA Technical Reports Server (NTRS)
Opsahl, Torstein; Reif, John
1987-01-01
The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer by slowing down the algorithm by constant factors is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard is given. Also, a detailed discussion of data mappings and performance issues is given.
National Combustion Code: Parallel Implementation and Performance
NASA Technical Reports Server (NTRS)
Quealy, A.; Ryder, R.; Norris, A.; Liu, N.-S.
2000-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. CORSAIR-CCD is the current baseline reacting flow solver for NCC. This is a parallel, unstructured grid code which uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC flow solver to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This paper describes the parallel implementation of the NCC flow solver and summarizes its current parallel performance on an SGI Origin 2000. Earlier parallel performance results on an IBM SP-2 are also included. The performance improvements which have enabled a turnaround of less than 15 hours for a 1.3 million element fully reacting combustion simulation are described.
Parallelization of the Physical-Space Statistical Analysis System (PSAS)
NASA Technical Reports Server (NTRS)
Larson, J. W.; Guo, J.; Lyster, P. M.
1999-01-01
Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. 
The problem of computational reproducibility is well known in the parallel computing community. It is a requirement that the parallel code perform calculations in a fashion that will yield identical results on different configurations of processing elements on the same platform. In some cases this problem can be solved by sacrificing performance; meeting this requirement while still achieving high performance is very difficult. Topics to be discussed include: the current PSAS design and parallelization strategy; reproducibility issues; load balance versus database memory demands; and possible solutions to these problems.
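The preconditioned conjugate gradient method named above for the innovation equation can be sketched for a generic symmetric positive definite system; this is textbook PCG with a Jacobi (diagonal) preconditioner, not the PSAS implementation:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for an SPD matrix A; M_inv is
    an (approximate) inverse of the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Tiny SPD system with a Jacobi (diagonal) preconditioner:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))
x = pcg(A, b, M_inv)  # ~[0.091, 0.636]
```

In the factored-operator setting described above, the products `A @ p` would be realized as a successive application of operators to a vector rather than an explicit matrix multiply.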
Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Rizk, Yehia M.
1999-01-01
The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simpler and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is greater than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones to a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which run in parallel.
The total volume of grid points in each group is approximately balanced. A proper number of threads is initially allocated to each group, and in subsequent iterations during the run the number of threads is adjusted to achieve load balancing across the processes. Each process exploits the multitasking directives already established in OVERFLOW.
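The contrast drawn above between the explicit "Jacobi"-style update (order-independent, hence parallelizable) and the serial "Gauss-Seidel" fashion can be illustrated on a generic linear system; this is purely illustrative and is not the Chimera boundary update itself:

```python
import numpy as np

def jacobi_step(A, b, x):
    """All components are updated from the previous iterate, so the
    updates are order-independent and easy to do in parallel."""
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def gauss_seidel_step(A, b, x):
    """Each component update uses the freshest values already computed
    in this sweep, which makes the sweep inherently sequential."""
    x = x.copy()
    for i in range(len(b)):
        s = A[i] @ x - A[i, i] * x[i]
        x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant toy system; both iterations converge to the
# same solution (~[0.167, 0.333]), Gauss-Seidel typically faster.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x_j = x_gs = np.zeros(2)
for _ in range(50):
    x_j = jacobi_step(A, b, x_j)
    x_gs = gauss_seidel_step(A, b, x_gs)
```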
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
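The maximum likelihood expectation maximization algorithm used above has the standard multiplicative update x <- x * A^T(y / Ax) / A^T 1. A minimal sketch on a toy system matrix (a generic voxel-basis MLEM, not the tetrahedral-mesh implementation evaluated in the paper):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """MLEM for emission tomography: multiplicative update
    x <- x * (A^T (y / (A x))) / (A^T 1), starting from a uniform image."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                  # forward projection
        x *= (A.T @ (y / proj)) / sens
    return x

# Tiny noiseless check: with consistent data, MLEM drives the forward
# projection of the estimate toward the measured data.
rng = np.random.default_rng(0)
A = rng.random((8, 4)) + 0.1   # toy, strictly positive system matrix
x_true = np.array([1.0, 3.0, 0.5, 2.0])
y = A @ x_true                 # exact (noise-free) projections
x_hat = mlem(A, y)
```

The multiplicative form preserves nonnegativity of the image estimate, one reason MLEM is the reconstruction algorithm of choice for Poisson-distributed projection data such as the noise realizations studied here.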
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
NASA Astrophysics Data System (ADS)
Pereira, N. F.; Sitek, A.
2010-09-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
MCT (HgCdTe) IR detectors: latest developments in France
NASA Astrophysics Data System (ADS)
Reibel, Yann; Rubaldo, Laurent; Vaz, Cedric; Tribolet, Philippe; Baier, Nicolas; Destefanis, Gérard
2010-10-01
This paper presents an overview of the very recent developments of the MCT infrared detector technology developed by CEA-LETI and Sofradir in France. New applications require high sensitivity, higher operating temperatures and dual band detectors. The standard n-on-p technology, in production at Sofradir for 25 years, is well mastered with an extremely robust and reliable process. Sofradir's interest in p-on-n technology opens the perspective of reducing the dark current of diodes so that detectors can operate at lower flux or higher operating temperature. In parallel, MCT Avalanche Photo Diodes (APD) have demonstrated ideal performance for low flux and high speed applications like laser gated imaging during the last few years. This technology also opens new prospects for the next generation of imaging detectors for compact, low flux and low power applications. Regarding 3rd Gen IR detectors, the development of dual-band infrared detectors has been the core of intense research and technological improvement for the last ten years. A new TV format (640 x 512 pixels) MWIR/LWIR detector on a 20 μm pixel pitch, made by Molecular Beam Epitaxy, has been developed with a dedicated Read-Out Integrated Circuit (ROIC) for truly simultaneous detection and maximum SNR. Technological and product achievements, as well as the latest results and performance, are presented, outlining the availability of p/n, avalanche photodiode and dual band technologies for new applications at the system level.
Acconcia, G; Cominelli, A; Rech, I; Ghioni, M
2016-11-01
In recent years, lifetime measurements by means of the Time Correlated Single Photon Counting (TCSPC) technique have led to a significant breakthrough in medical and biological fields. Unfortunately, the many advantages of TCSPC-based approaches come along with the major drawback of a relatively long acquisition time. The exploitation of multiple channels in parallel could in principle mitigate this issue, and at the same time it opens the way to a multi-parameter analysis of the optical signals, e.g., as a function of wavelength or spatial coordinates. The TCSPC multichannel solutions proposed so far, though, suffer from a tradeoff between number of channels and performance, and the overall measurement speed has not been increased according to the number of channels, thus reducing the advantages of having a multichannel system. In this paper, we present a novel readout architecture for bi-dimensional, high-density Single Photon Avalanche Diode (SPAD) arrays, specifically designed to maximize the throughput of the whole system and able to guarantee an efficient use of resources. The core of the system is a routing logic that can provide a dynamic connection between a large number of SPAD detectors and a much lower number of high-performance acquisition channels. A key feature of our smart router is its ability to guarantee high efficiency under any operating condition.
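The many-SPADs-to-few-channels routing idea above can be caricatured with a toy event-assignment model; all parameters are hypothetical, and the real router is hardware arbitration logic, not this software loop:

```python
def route_events(events, n_channels, conversion_time):
    """Toy many-to-few router: each photon event (pixel, time) grabs the
    earliest-free acquisition channel for `conversion_time`; events that
    arrive while every channel is busy are lost (dead time)."""
    free_at = [0.0] * n_channels   # time at which each channel frees up
    accepted = 0
    for _pixel, t in sorted(events, key=lambda e: e[1]):
        ch = min(range(n_channels), key=free_at.__getitem__)
        if free_at[ch] <= t:
            free_at[ch] = t + conversion_time
            accepted += 1
    return accepted

# 8 events from arbitrary pixels, 2 channels, 0.25 (arbitrary units)
# conversion time per event:
events = [(p, 0.1 * k) for k, p in enumerate([5, 17, 3, 9, 17, 2, 8, 11])]
n_ok = route_events(events, n_channels=2, conversion_time=0.25)  # 6 of 8
```

The efficiency argument in the abstract is that, because SPAD activity is sparse, a small pool of high-performance channels can serve a large detector array with few lost events.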
3D PIC-MCC simulations of positive streamers in air gaps
NASA Astrophysics Data System (ADS)
Jiang, M.; Li, Y.; Wang, H.; Liu, C.
2017-10-01
Simulation of positive streamer evolution is important for understanding the microscopic physical processes in discharges. The simulations described in this paper are done using a 3D Particle-In-Cell, Monte-Carlo-Collision code with photoionization. Three phases of positive streamer evolution, identified as initiation, propagation, and branching, are studied. A homogeneous electric field is applied between parallel flat electrodes forming a millimeter air gap to make the simulations and analysis simpler and more general. Free electrons created by the photoionization process determine initiation, propagation, and branching of the streamers. Electron avalanches form a positive streamer tip when the space charge of ions at the positive tip dominates the local electric field. The propagation of the positive tip toward the cathode results from combinations of the positive tip and secondary avalanches ahead of it. A curved, feather-like channel is formed without obvious branches when the electric field between electrodes is 50 kV/cm. However, a channel with obvious branches is formed when the electric field increases to 60 kV/cm. In contrast to the branches around a sharp needle electrode, branches near the flat anode are formed at a certain distance away from it. Simulated parameters of the streamer such as diameter, maximum electric field, propagation velocity, and electron density at the streamer tip are in good agreement with those published earlier.
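The avalanche-to-streamer transition discussed above, where the avalanche head's space charge starts to dominate the local field, is often summarized by a Meek-type criterion, alpha * d exceeding roughly 18-20. A sketch with an illustrative ionization coefficient (not a value taken from these simulations):

```python
import math

def avalanche_electrons(alpha_per_cm, x_cm):
    """Electron number in a single Townsend avalanche: N = exp(alpha * x)."""
    return math.exp(alpha_per_cm * x_cm)

def reaches_streamer(alpha_per_cm, gap_cm, critical_ln_n=18.0):
    """Meek-type rule of thumb: avalanche-to-streamer transition once
    alpha * d exceeds ~18-20, i.e. the avalanche holds ~10^8 electrons."""
    return alpha_per_cm * gap_cm >= critical_ln_n

# Illustrative coefficient of 200 ionizations/cm:
hit = reaches_streamer(200.0, 0.1)    # alpha*d = 20 -> transition
miss = reaches_streamer(200.0, 0.05)  # alpha*d = 10 -> no transition
```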
NASA Technical Reports Server (NTRS)
Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.
1995-01-01
A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message-passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes"), and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (>99%) occurs for a large, scaled problem with 64^3 particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code that are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems, including those with inhomogeneous plasmas, on other parallel machines once the machine-dependent parameters are known.
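The kind of timing expression described in the last sentences can be illustrated with a toy model: compute time scales with the local particle count and the effective FLOP rate, while communication scales with boundary-crossing particles and with the guard-cell surface of each subdomain. The functional form and every constant below are assumptions for demonstration, not the published expressions:

```python
# Hedged sketch of a PIC push-time model in the spirit of the abstract's
# scaling expressions; all constants are illustrative assumptions.

def push_time_per_step(n_particles, n_grid, n_nodes,
                       flops_per_particle=400.0,  # arithmetic per particle push
                       flop_rate=1.0e7,           # effective FLOP/s per node
                       particle_bw=1.0e6,         # B/s for particle exchange
                       grid_bw=1.0e6,             # B/s for guard-cell exchange
                       frac_moved=0.01,           # fraction crossing subdomains
                       particle_bytes=48, grid_bytes=8):
    """Return (compute_s, comm_s) per time step for one node."""
    p_local = n_particles / n_nodes
    g_local = n_grid / n_nodes
    compute = p_local * flops_per_particle / flop_rate
    p_comm = frac_moved * p_local * particle_bytes / particle_bw
    g_comm = g_local ** (2.0 / 3.0) * grid_bytes / grid_bw  # surface-to-volume
    return compute, p_comm + g_comm

comp, comm = push_time_per_step(n_particles=134e6, n_grid=512**3, n_nodes=512)
print(f"communication/computation = {comm / comp:.3f}")  # ~0.015 here
```

With these assumed numbers the communication-to-computation ratio lands at roughly 1.5%, inside the 0.3%-10% band the abstract reports; plugging in measured machine parameters would turn the same skeleton into a performance estimate for another machine.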
Brittle Fracture In Disordered Media: A Unified Theory
NASA Astrophysics Data System (ADS)
Shekhawat, Ashivni; Zapperi, Stefano; Sethna, James
2013-03-01
We present a unified theory of fracture in disordered brittle media that reconciles apparently conflicting results reported in the literature, as well as several experiments on materials ranging from granite to bones. Our renormalization-group-based approach yields a phase diagram in which the percolation fixed point, expected for infinite disorder, is unstable for finite disorder and flows to a zero-disorder nucleation-type fixed point, thus showing that fracture has mixed first-order and continuous character. In a region of intermediate disorder and finite system sizes, we predict a crossover with mean-field avalanche scaling. We discuss intriguing connections to other phenomena where critical scaling is only observed in finite size systems and disappears in the thermodynamic limit. We present a numerical validation of our theoretical results. We acknowledge support from DOE-BES DE-FG02-07ER46393, ERC-AdG-2011 SIZEFFECT, and the NSF through TeraGrid by LONI under grant TG-DMR100025.
A new detector concept for silicon photomultipliers
NASA Astrophysics Data System (ADS)
Sadigov, A.; Ahmadov, F.; Ahmadov, G.; Ariffin, A.; Khorev, S.; Sadygov, Z.; Suleymanov, S.; Zerrouk, F.; Madatov, R.
2016-07-01
A new design and principle of operation of silicon photomultipliers are presented. The new design comprises a semiconductor substrate and an array of independent micro-phototransistors formed on the substrate. Each micro-phototransistor comprises a photosensitive base operating in Geiger mode and an individual micro-emitter covering a small part of the base layer, thereby creating, together with the latter, a micro-transistor. Both the micro-emitters and the photosensitive base layers are connected to two respective independent metal grids via their individual micro-resistors. The total signal gain of the proposed silicon photomultiplier results from both the avalanche gain in the base layer and the corresponding gain in the micro-transistor. The main goals of the new design are to significantly lower both optical crosstalk and after-pulsing at high signal amplification, to improve the speed of single-photoelectron pulse formation, and to significantly reduce the device capacitance.
NASA Technical Reports Server (NTRS)
2002-01-01
(Released 13 May 2002) The Science The rugged, arcuate rim of the 90 km crater Reuyl dominates this THEMIS image. Reuyl crater is at the southern edge of a region known to be blanketed in thick dust based on its high albedo (brightness) and low thermal inertia values. This thick mantle of dust creates the appearance of snow covered mountains in the image. Like snow accumulation on Earth, Martian dust can become so thick that it eventually slides down the face of steep slopes, creating runaway avalanches of dust. In the center of this image about 1/3 of the way down is evidence of this phenomenon. A few dozen dark streaks can be seen on the bright, sunlit slopes of the crater rim. The narrow streaks extend downslope following the local topography in a manner very similar to snow avalanches on Earth. But unlike their terrestrial counterparts, no accumulation occurs at the bottom. The dust particles are so small that they are easily launched into the thin atmosphere where they remain suspended and ultimately blow away. The apparent darkness of the avalanche scars is due to the presence of relatively dark underlying material that becomes exposed following the passage of the avalanche. Over time, new dust deposition occurs, brightening the scars until they fade into the background. Although dark slope streaks had been observed in Viking mission images, a clear understanding of this dynamic phenomenon wasn't possible until the much higher resolution images from the Mars Global Surveyor MOC camera revealed the details. MOC images also showed that new avalanches have occurred during the time MGS has been in orbit. THEMIS images will allow additional mapping of their distribution and frequency, contributing new insights about Martian dust avalanches. The Story The stiff peaks in this image might remind you of the Alps here on Earth, but they really outline the choppy edge of a large Martian crater over 50 miles wide (seen in the context image at right). 
While these aren't the Alps, you will find quite a few avalanches. Avalanches of dust, however, not snow. Martian dust can become so thick in this area that it eventually slides down the steep slopes, creating runaway avalanches of dust. No dedicated, Swiss-like avalanche rescue teams would be needed much on Mars, however. Unlike snow, the dust doesn't pile up and accumulate at the bottom. Instead, dust particles are so small that they get launched into the atmosphere where they remain suspended until . . . poof! They are blown away and distributed lightly elsewhere. For evidence of past avalanches, check out the dark streaks running down the bright, sunlit slopes (western side of the peaks about 1/3 of the way down the image). These avalanche scars are dark because the underlying surface is not as bright as the removed dust. Eventually, new dust will settle over these scars, and the streaks will brighten until they fade into the background. The neat thing is that we'll be able to see all of these changes happening over time. Our current two Mars orbiters (called Mars Global Surveyor and 2001 Mars Odyssey) are showing that avalanche action is happening right now, all of the time on Mars. For example, the camera on Mars Global Surveyor has already taken pictures of the Martian surface in some areas that showed no avalanches - the first time the picture was snapped, that is. The next time around, the camera took a picture of the same area, only voila! New streaks, meaning new avalanches! That's why it can be so exciting to look at the Martian landscape over time to see how it changes. The THEMIS camera on Odyssey will continue to map out the places where the avalanches occur and how often. This information will really help scientists understand how dust works to shape the terrain and to influence the Martian climate as it constantly swings into the atmosphere, falls down to the ground, and rises back up again.
Stay tuned to see if you too can pick out the changes over time!
Karpievitch, Yuliya V; Almeida, Jonas S
2006-01-01
Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple.
Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707
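The load-balanced, code-plus-data distribution idea described above can be caricatured as a least-loaded dispatcher that ships each task, with its packed run-time variables, to the worker with the least outstanding work. All names and the serialization format here are illustrative, not the mGrid API:

```python
# Toy least-loaded task dispatcher (illustrative; not the mGrid API).
import pickle

def dispatch(tasks, workers):
    """tasks: list of (func, args); workers: dict name -> queued task count.
    Returns mapping worker name -> list of serialized task payloads."""
    assigned = {w: [] for w in workers}
    load = dict(workers)
    for func, args in tasks:
        w = min(load, key=load.get)   # pick the least-loaded worker
        # task name + packed variables travel together
        # (the real system ships the user code itself as well)
        assigned[w].append(pickle.dumps((func.__name__, args)))
        load[w] += 1
    return assigned

out = dispatch([(len, ("abc",)), (len, ("de",)), (len, ("f",))],
               {"node1": 0, "node2": 1})
print({w: len(p) for w, p in out.items()})  # {'node1': 2, 'node2': 1}
```

The point of the sketch is the balancing policy: the initially busier worker ("node2") receives fewer of the new tasks.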
Stellar Winds and Dust Avalanches in the AU Mic Debris Disk
NASA Astrophysics Data System (ADS)
Chiang, Eugene; Fung, Jeffrey
2017-10-01
We explain the fast-moving, ripple-like features in the edge-on debris disk orbiting the young M dwarf AU Mic. The bright features are clouds of submicron dust repelled by the host star’s wind. The clouds are produced by avalanches: radial outflows of dust that gain exponentially more mass as they shatter background disk particles in collisional chain reactions. The avalanches are triggered from a region a few au across—the “avalanche zone”—located on AU Mic’s primary “birth” ring at a true distance of ˜35 au from the star but at a projected distance more than a factor of 10 smaller: the avalanche zone sits directly along the line of sight to the star, on the side of the ring nearest Earth, launching clouds that disk rotation sends wholly to the southeast, as observed. The avalanche zone marks where the primary ring intersects a secondary ring of debris left by the catastrophic disruption of a progenitor up to Varuna in size, less than tens of thousands of years ago. Only where the rings intersect are particle collisions sufficiently violent to spawn the submicron dust needed to seed the avalanches. We show that this picture works quantitatively, reproducing the masses, sizes, and velocities of the observed escaping clouds. The Lorentz force exerted by the wind’s magnetic field, whose polarity reverses periodically according to the stellar magnetic cycle, promises to explain the observed vertical undulations. The timescale between avalanches, about 10 yr, might be set by time variability of the wind mass loss rate or, more speculatively, by some self-regulating limit cycle.
NASA Astrophysics Data System (ADS)
Moore, Jeffrey R.; Pankow, Kristine L.; Ford, Sean R.; Koper, Keith D.; Hale, J. Mark; Aaron, Jordan; Larsen, Chris F.
2017-03-01
The 2013 Bingham Canyon Mine rock avalanches represent one of the largest cumulative landslide events in recorded U.S. history and provide a unique opportunity to test remote analysis techniques for landslide characterization. Here we combine aerial photogrammetry surveying, topographic reconstruction, numerical runout modeling, and analysis of broadband seismic and infrasound data to extract salient details of the dynamics and evolution of the multiphase landslide event. Our results reveal a cumulative intact rock source volume of 52 Mm^3, which mobilized in two main rock avalanche phases separated by 1.5 h. We estimate that the first rock avalanche had 1.5-2 times greater volume than the second. Each failure initiated by sliding along a gently dipping (21°), highly persistent basal fault before transitioning to a rock avalanche and spilling into the inner pit. The trajectory and duration of the two rock avalanches were reconstructed using runout modeling and independent force history inversion of intermediate-period (10-50 s) seismic data. Intermediate- and shorter-period (1-50 s) seismic data were sensitive to intervals of mass redirection and constrained finer details of the individual slide dynamics. Back projecting short-period (0.2-1 s) seismic energy, we located the two rock avalanches within 2 and 4 km of the mine. Further analysis of infrasound and seismic data revealed that the cumulative event included an additional 11 smaller landslides (volumes 10^4-10^5 m^3) and that a trailing signal following the second rock avalanche may result from an air-coupled Rayleigh wave. Our results demonstrate new and refined techniques for detailed remote characterization of the dynamics and evolution of large landslides.
Infrasonic monitoring of snow avalanches in the Alps
NASA Astrophysics Data System (ADS)
Marchetti, E.; Ulivieri, G.; Ripepe, M.; Chiambretti, I.; Segor, V.
2012-04-01
Risk assessment of snow avalanches is based mostly on weather conditions and snow cover. Robust validation of the assessed risk, however, requires identifying all avalanches that actually occur, so that predictions can be compared with real effects. For this purpose, in December 2010 we installed a permanent four-element, small-aperture (100 m) infrasound array in the Alps, after a pilot experiment carried out in Gressonay during the 2009-2010 winter season. The array is deployed in the Ayas Valley at an elevation of 2000 m a.s.l., where natural avalanches are expected and controlled releases are regularly performed. It consists of four Optimic 2180 infrasonic microphones, with a sensitivity of 10^-3 Pa in the 0.5-50 Hz frequency band, and a four-channel Guralp CMG-DM24 A/D converter sampling at 100 Hz. Timing is achieved with a GPS receiver. Data are transmitted to the Department of Earth Sciences of the University of Firenze, where they are recorded and processed in real time. A multi-channel semblance analysis is carried out on the continuous data set as a function of slowness, back-azimuth, and frequency of the recorded infrasound in order to detect all avalanches against the background signal, which is strongly affected by microbaroms and mountain-induced gravity waves. This permanent installation in Italy will allow us to verify the efficiency of the system for short-to-medium range (2-8 km) avalanche detection, and might represent an important validation of avalanche activity models during this winter season. Moreover, the real-time processing of infrasonic array data might contribute substantially to avalanche risk assessment by providing an up-to-date description of ongoing events.
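The multi-channel semblance detection described above can be sketched as delay-and-sum coherence: for each candidate slowness and back-azimuth, the traces are time-shifted by the predicted plane-wave delays and their waveform coherence is scored. A minimal sketch on synthetic data (the operational processing also scans frequency and uses the real array geometry):

```python
import numpy as np

# Minimal delay-and-sum semblance sketch for a small-aperture array
# (illustrative only). For a plane wave, the arrival delay at sensor i
# is the dot product of the slowness vector with the sensor position;
# semblance measures coherence after removing those delays.

def semblance(traces, delays_samples):
    """traces: (n_sensors, n_samples) array; delays in whole samples."""
    aligned = np.array([np.roll(tr, -d) for tr, d in zip(traces, delays_samples)])
    num = np.sum(aligned.sum(axis=0) ** 2)
    den = aligned.shape[0] * np.sum(aligned ** 2)
    return num / den          # 1.0 for perfectly coherent, aligned signals

# Synthetic test: one 5 Hz wavelet seen on 4 channels with known delays.
fs = 100.0                                   # Hz, as in the array described
t = np.arange(0, 2, 1 / fs)
wavelet = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 1.0) ** 2) / 0.02)
true_delays = [0, 3, 6, 9]                   # samples
traces = np.array([np.roll(wavelet, d) for d in true_delays])

s_true = semblance(traces, true_delays)      # correct slowness/back-azimuth
s_zero = semblance(traces, [0, 0, 0, 0])     # wrong delays: low coherence
print(s_true, s_zero)
```

In practice the detector scans a grid of slowness and back-azimuth values and declares an event where the semblance peak rises clearly above the microbarom-dominated background.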
Energy mechanics of rock and snow avalanches and the role of fragmentation (invited)
NASA Astrophysics Data System (ADS)
Bartelt, Perry; Buser, Othmar; Glover, James
2014-05-01
The energy mechanics of rock and snow avalanches are traditionally described using a two-step transformation: potential energy is first converted into kinetic energy; kinetic energy is dissipated to heat by frictional processes. If the frictional processes are known, the energy fluxes of avalanches can be calculated completely. The break-up of the released mass, however, introduces several new energy fluxes into the avalanche problem. The first energy is associated with the fragmentation, which generates random particle motions. This is true kinetic energy. Inter-particle interactions (collisions, abrasion, fracture) cause the energy of the random particle motion to dissipate to heat. A constraint on the random motions is the basal boundary. It is at this interface that the dispersive pressure is created by vertical particle motions that are directed upwards into the flow. The integral of the upward particle motions can induce a change in avalanche flow volume and density, depending on the relationship between the weight of the flow and the dispersive pressure. Interestingly, normal pressures will only diverge from hydrostatic when there are changes in flow density. We are therefore confronted with the problem of calculating not only the vertical acceleration of the dispersive pressure, but also the change in vertical acceleration. In this contribution we discuss a method to calculate random particle motions, dispersive pressure and changes in avalanche flow density. These are dependent not only on the absolute mass, but also on the material properties of the disintegrating mass. This becomes particularly interesting when considering the motion of snow and rock avalanches as it allows the prediction of flow regime changes and therefore extreme avalanche run-out potential.
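The extra energy flux described above, in which part of the frictional work first feeds random particle motion before decaying to heat, can be written as a toy two-reservoir budget. The transfer coefficients alpha and beta below are illustrative assumptions; the point is only that the budget closes, i.e. all injected work ends up as random kinetic energy plus heat:

```python
# Toy random-energy budget in the spirit of the abstract (illustrative).
# alpha: assumed fraction of frictional work converted to random motion.
# beta:  assumed decay rate (1/s) of random energy to heat.

def advance(R, work_rate, dt, alpha=0.1, beta=0.8):
    """One explicit step: returns (updated R, heat released this step)."""
    dR = alpha * work_rate - beta * R           # production minus decay
    dQ = (1.0 - alpha) * work_rate + beta * R   # direct heating plus decayed R
    return R + dR * dt, dQ * dt

R, heat = 0.0, 0.0
work_rate, dt, n_steps = 100.0, 0.1, 50         # illustrative numbers
for _ in range(n_steps):
    R, q = advance(R, work_rate, dt)
    heat += q

# Closure check: all injected work is now random energy or heat.
print(R + heat, work_rate * dt * n_steps)
```

The random reservoir R saturates where production balances decay; in the full theory it is this reservoir, acting through the basal boundary, that generates the dispersive pressure and density changes discussed above.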
Dynamics of glide avalanches and snow gliding
NASA Astrophysics Data System (ADS)
Ancey, Christophe; Bain, Vincent
2015-09-01
In recent years, due to warmer snow cover, there has been a significant increase in the number of cases of damage caused by gliding snowpacks and glide avalanches. On most occasions, these have been full-depth, wet-snow avalanches, and this led some people to express their surprise: how could low-speed masses of wet snow exert sufficiently high levels of pressure to severely damage engineered structures designed to carry heavy loads? This paper reviews the current state of knowledge about the formation of glide avalanches and the forces exerted on simple structures by a gliding mass of snow. One particular difficulty in reviewing the existing literature on gliding snow and on force calculations is that much of the theoretical and phenomenological analyses were presented in technical reports that date back to the earliest developments of avalanche science in the 1930s. Returning to these primary sources and attempting to put them into a contemporary perspective are vital. A detailed, modern analysis of them shows that the order of magnitude of the forces exerted by gliding snow can indeed be estimated correctly. The precise physical mechanisms remain elusive, however. We comment on the existing approaches in light of the most recent findings about related topics, including the physics of granular and plastic flows, and from field surveys of snow and avalanches (as well as glaciers and debris flows). Methods of calculating the forces exerted by glide avalanches are compared quantitatively on the basis of two case studies. This paper shows that if snow depth and density are known, then certain approaches can indeed predict the forces exerted on simple obstacles in the event of glide avalanches or gliding snow cover.
Development of solid-state avalanche amorphous selenium for medical imaging.
Scheuermann, James R; Goldan, Amir H; Tousignant, Olivier; Léveillé, Sébastien; Zhao, Wei
2015-03-01
Active matrix flat panel imagers (AMFPI) have limited performance in low dose applications due to the electronic noise of the thin film transistor (TFT) array. A uniform layer of avalanche amorphous selenium (a-Se) called high gain avalanche rushing photoconductor (HARP) allows for signal amplification prior to readout from the TFT array, largely eliminating the effects of the electronic noise. We report preliminary avalanche gain measurements from the first HARP structure developed for direct deposition onto a TFT array. The HARP structure is fabricated on a glass substrate in the form of p-i-n, i.e., the electron blocking layer (p) followed by an intrinsic (i) a-Se layer and finally the hole blocking layer (n). All deposition procedures are scalable to large area detectors. Integrated charge is measured from pulsed optical excitation incident on the top electrode (as it would be in an indirect AMFPI) under continuous high voltage bias. Avalanche gain measurements were obtained from samples fabricated simultaneously at different locations in the evaporator to evaluate performance uniformity across a large area. An avalanche gain of up to 80 was obtained, which showed field dependence consistent with previous measurements from n-i-p HARP structures established for vacuum tubes. Measurements from multiple samples demonstrate the spatial uniformity of performance using large area deposition methods. Finally, the results were highly reproducible over the time course of the entire study. We present promising avalanche gain measurement results from a novel HARP structure that can be deposited onto a TFT array. This is a crucial step toward the practical feasibility of AMFPI with avalanche gain, enabling quantum-noise-limited performance down to a single x-ray photon per pixel.
NASA Astrophysics Data System (ADS)
Sato, Yuki; Fukuda, Naoki; Takeda, Hiroyuki; Kameda, Daisuke; Suzuki, Hiroshi; Shimizu, Yohei; Ahn, DeukSoon; Murai, Daichi; Inabe, Naohito; Shimaoka, Takehiro; Tsubota, Masakatsu; Kaneko, Junichi H.; Chayahara, Akiyoshi; Umezawa, Hitoshi; Shikata, Shinichi; Kumagai, Hidekazu; Murakami, Hiroyuki; Sato, Hiromi; Yoshida, Koichi; Kubo, Toshiyuki
A multiple sampling ionization chamber (MUSIC) and parallel-plate avalanche counters (PPACs) were installed within the superconducting in-flight separator, named BigRIPS, at the RIKEN Nishina Center for particle identification of RI beams. The MUSIC detector showed negligible charge collection inefficiency from recombination of electrons and ions, up to a 99-kcps incidence rate for high-energy heavy ions. For the PPAC detectors, the electrical discharge durability for incident heavy ions was improved by changing the electrode material. Finally, we designed a single crystal diamond detector, which is under development for TOF measurements of high-energy heavy ions, that has a very fast response time (pulse width <1 ns).
NASA Astrophysics Data System (ADS)
Zhou, Wenhe; He, Xuan; Wu, Jianyun; Wang, Liangbi; Wang, Liangcheng
2017-07-01
The parallel-plate capacitive humidity sensor based on a grid upper electrode is considered promising in fields that require a humidity sensor with good dynamic characteristics. To strengthen the structure and balance the electric charge of the grid upper electrode, a strip is needed. However, it is this strip that keeps the dynamic characteristics of the sensor from being further improved. Numerical methods are time- and cost-saving, but numerical studies of the sensor's response time remain fragmentary, and the models presented in those studies did not consider the effect of the polymer film's porosity on the dynamic characteristics. To overcome the shortcoming of the grid upper electrode, this paper first proposes a new upper-electrode structure, then presents and validates a model that accounts for the porosity effects of the polymer film on the dynamic characteristics. Finally, with the help of the software FLUENT, parameter effects on the response time of the humidity sensor based on the microhole upper electrode are studied numerically. The numerical results show that the response time of the microhole upper-electrode sensor is 86% shorter than that of the grid upper-electrode sensor, and that the response time can be further improved by reducing the hole spacing, increasing the aperture, reducing the film thickness, and reasonably enlarging the porosity of the film.
NASA Astrophysics Data System (ADS)
Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2008-02-01
A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10^6-atom (1.04×10^12 electronic degrees of freedom) calculation on 131,072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which the large accessible system sizes yield excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for increasingly large system sizes until the result reaches its asymptotic value.
Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.
2016-01-01
A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
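The default time integrator named above, the 3-stage/3rd-order strong stability preserving Runge-Kutta scheme, is the standard Shu-Osher SSP-RK3. A minimal sketch on a scalar model ODE (the test problem here is ours, not from the code):

```python
import math

# Shu-Osher SSP-RK3: three forward-Euler-like stages combined as convex
# averages, giving 3rd-order accuracy while preserving stability bounds.

def ssp_rk3_step(u, dt, L):
    """One step of du/dt = L(u)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Verify 3rd-order convergence on du/dt = -u, u(0) = 1, over t in [0, 1].
def integrate(dt, t_end=1.0):
    u = 1.0
    for _ in range(round(t_end / dt)):
        u = ssp_rk3_step(u, dt, lambda v: -v)
    return u

err1 = abs(integrate(0.1) - math.exp(-1.0))
err2 = abs(integrate(0.05) - math.exp(-1.0))
print(err1 / err2)   # ratio near 8 = 2**3, as expected for 3rd order
```

Halving the step size cuts the error by about a factor of eight, the signature of a third-order scheme; the "strong stability preserving" property matters for the shock-capturing and under-resolved flows an FR code targets.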
Parallel Simulation of Unsteady Turbulent Flames
NASA Technical Reports Server (NTRS)
Menon, Suresh
1996-01-01
Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used.
Recently, a new model for turbulent combustion was developed, in which the combustion is modeled, within the subgrid (small-scales) using a methodology that simulates the mixing and the molecular transport and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computations must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems such as: Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system independent Message Passing Interface (MPI) compiler. In this paper, timing data on these machines is reported along with some characteristic results.
Studies with cathode drift chambers for the GlueX experiment at Jefferson Lab
Pentchev, L.; Barbosa, F.; Berdnikov, V.; ...
2017-04-22
A drift chamber system consisting of 24 1 m-diameter chambers with both cathode and wire readout (a total of 12,672 channels) is operational in Hall D at Jefferson Lab (Virginia). Two cathode strip planes and one wire plane in each chamber register the same avalanche, allowing the study of avalanche development, the charge induction process, and strip resolution. We demonstrate a method for reconstructing the two-dimensional distribution of the avalanche “center-of-gravity” position around the wire from an 55Fe source with resolutions down to 30 μm. We estimate the azimuthal extent of the avalanche around the wire as a function of the total charge for an Ar/CO2 gas mixture. By means of cluster counting using a modified 3 cm-gap chamber, we observe significant space charge effects within the same track, resulting in an extent of the avalanche along the wire.
NASA Astrophysics Data System (ADS)
Baker, Jennifer
Although there has been substantial research on the avoidance of risk, much less has been completed on voluntary risk. This study examined backcountry snowmobilers' risk perceptions, avalanche related information seeking behaviours, and decision-making processes when dealing with avalanches and backcountry risk in Canada. To accomplish this, in-depth, semi-structured interviews were conducted with 17 participants who were involved in backcountry snowmobiling. Interviews were done both in person and by telephone. The results of this study show that, unlike previous research on snowmobilers, the participants of this study were well prepared and knowledgeable about backcountry risks. All 17 participants stated that they carried a shovel, probe, and transceiver with them on each backcountry trip, and 10 participants had taken an avalanche safety course. Group dynamics and positive peer pressure were influential in promoting safe backcountry behaviour. KEYWORDS: Backcountry snowmobiling, Avalanches, Voluntary Risk, Preparedness, Decision-Making.
Avalanche statistics from data with low time resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.
Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.
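The merging effect the authors analyze can be seen in a toy time series: two avalanches separated by a quiet interval are resolved at the full acquisition rate but appear as one larger event after block-averaging. This is only an illustration of the problem, not the authors' correction method; the signal, threshold, and downsampling factor are invented:

```python
import numpy as np

def avalanche_sizes(v, thresh=0.0):
    """Sizes (integrated signal) of contiguous above-threshold segments."""
    sizes, cur, active = [], 0.0, False
    for x in v:
        if x > thresh:
            cur += x
            active = True
        elif active:
            sizes.append(cur)
            cur, active = 0.0, False
    if active:
        sizes.append(cur)
    return sizes

def downsample(v, n):
    """Block-average: one sample per n points (low acquisition rate)."""
    m = len(v) // n
    return v[:m * n].reshape(m, n).mean(axis=1)

# two distinct avalanches separated by a short quiet interval
v = np.zeros(40)
v[5:10] = 1.0    # avalanche 1
v[12:17] = 1.0   # avalanche 2
print(len(avalanche_sizes(v)))                 # 2 at full resolution
print(len(avalanche_sizes(downsample(v, 8))))  # 1: merged at low resolution
```

The two events merge because the quiet interval is shorter than one low-rate sample, so naive segment counting undercounts avalanches and inflates their apparent sizes.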
Avalanche statistics from data with low time resolution
LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.; ...
2016-11-22
Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.
National Combustion Code Parallel Performance Enhancements
NASA Technical Reports Server (NTRS)
Quealy, Angela; Benyo, Theresa (Technical Monitor)
2002-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.
NASA Astrophysics Data System (ADS)
Averkin, Sergey N.; Gatsonis, Nikolaos A.
2018-06-01
An unstructured electrostatic Particle-In-Cell (EUPIC) method is developed on arbitrary tetrahedral grids for simulation of plasmas bounded by arbitrary geometries. The electric potential in EUPIC is obtained on cell vertices from a finite volume Multi-Point Flux Approximation of Gauss' law using the indirect dual cell with Dirichlet, Neumann, and external circuit boundary conditions. The resulting matrix equation for the nodal potential is solved with a restarted generalized minimal residual method (GMRES) and an ILU(0) preconditioner algorithm, parallelized using a combination of node coloring and level scheduling approaches. The electric field on vertices is obtained using the gradient theorem applied to the indirect dual cell. The algorithms for injection, particle loading, particle motion, and particle tracking are parallelized for unstructured tetrahedral grids. The algorithms for the potential solver, electric field evaluation, and the loading and scatter-gather operations are verified using analytic solutions for test cases subject to the Laplace and Poisson equations. A grid sensitivity analysis examines the L2 and L∞ norms of the relative error in potential, field, and charge density as a function of edge-averaged and volume-averaged cell size. The analysis shows second-order convergence for the potential and first-order convergence for the electric field and charge density. A temporal sensitivity analysis is performed, and the momentum and energy conservation properties of the particle integrators in EUPIC are examined. The effects of cell size and timestep on the heating, slowing-down, and deflection times are quantified; these times are found to be almost linearly dependent on the number of particles per cell. EUPIC simulations of current collection by cylindrical Langmuir probes in collisionless plasmas show good comparison with previous experimentally validated numerical results.
These simulations were also used in a parallelization efficiency investigation. Results show that EUPIC has an efficiency of more than 80% when the simulation is performed on a single CPU of a non-uniform memory access node, with the efficiency decreasing as the number of threads increases further. EUPIC is applied to the simulation of the multi-species plasma flow over a geometrically complex CubeSat in Low Earth Orbit. The EUPIC potential and flowfield distributions around the CubeSat exhibit features that are consistent with previous simulations over simpler geometrical bodies.
Grid-Enabled High Energy Physics Research using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Mahmood, Akhtar
2005-04-01
At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates a real physical collision event inside a particle detector. The Grid is the new IT infrastructure for 21st-century science: a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze the huge amounts of data flowing from large-scale experiments in high energy physics. It is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating the near-shore tsunami waves from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11, the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, over which a finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.
NASA Astrophysics Data System (ADS)
Petersson, Anders; Rodgers, Arthur
2010-05-01
The finite difference method on a uniform Cartesian grid is a highly efficient and easy to implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P, which relates the grid spacing h to the wavelength L and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wavelengths in the earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth) than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the small wavelengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to depth 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface to keep the local grid size in approximate parity with the local wavelengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points.
WPP implements a new, energy conserving coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which seldom are appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user-provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer-over-half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes. WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion degrees of freedom) and 41,000 time steps. WPP is an open source code that is available under the GNU General Public License.
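The resolution rule h ≤ L/P quoted above is easy to make concrete. The wave speeds and P below are illustrative values only (a slow sedimentary basin versus fast material at depth, at 1 Hz), not parameters taken from WPP:

```python
def max_grid_spacing(c_s, f_max, points_per_wavelength=8):
    """h <= L/P, with the shortest wavelength L = c_s / f_max."""
    return c_s / (f_max * points_per_wavelength)

h_surface = max_grid_spacing(500.0, f_max=1.0)   # slow near-surface material
h_depth = max_grid_spacing(4500.0, f_max=1.0)    # fast material at depth
print(h_surface, h_depth)  # 62.5 m vs 562.5 m
```

The factor of nine between the two spacings is roughly recovered by three successive refinements with ratio two (a factor of eight), which is the composite-grid layout the abstract describes.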
Distributed Optimal Power Flow of AC/DC Interconnected Power Grid Using Synchronous ADMM
NASA Astrophysics Data System (ADS)
Liang, Zijun; Lin, Shunjiang; Liu, Mingbo
2017-05-01
Distributed optimal power flow (OPF) is of great importance and challenge to AC/DC interconnected power grids with different dispatching centres, considering the security and privacy of information transmission. In this paper, a fully distributed algorithm for the OPF problem of AC/DC interconnected power grids, called synchronous ADMM, is proposed; it requires no form of central controller. The algorithm is based on the fundamental alternating direction method of multipliers (ADMM): by using the average value of the boundary variables of adjacent regions obtained from the current iteration as the reference values of both regions for the next iteration, it realizes parallel computation among different regions. The algorithm is tested on the IEEE 11-bus AC/DC interconnected power grid, and a comparison of its results with those of a centralized algorithm shows nearly no differences, validating its correctness and effectiveness.
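The boundary-averaging step can be sketched with a toy consensus ADMM in which scalar quadratic costs stand in for the regional OPF subproblems; the costs, penalty ρ, and iteration count here are invented for illustration, not taken from the paper:

```python
def consensus_admm(a, rho=1.0, iters=100):
    """Each region i minimizes (x - a[i])^2; the regions agree on the
    shared boundary variable via the average z (the 'reference value'
    exchanged between adjacent regions)."""
    n = len(a)
    x = list(a)
    u = [0.0] * n                       # scaled dual variables
    z = sum(x) / n
    for _ in range(iters):
        # local updates solved in parallel by each dispatching centre
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n   # boundary average
        u = [u[i] + x[i] - z for i in range(n)]      # dual ascent
    return z

print(consensus_admm([1.0, 3.0]))  # converges to 2.0, the centralized optimum
```

Each region only exchanges its boundary value, never its internal cost data, which is the privacy property the paper emphasizes.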
Benchmarking Memory Performance with the Data Cube Operator
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabanov, Leonid V.
2004-01-01
Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of the memory performance of a number of computer architectures and of a small computational grid are presented.
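For reference, the full set of 2^d group-by views of a data cube can be enumerated naively as below. This is only a sketch of what a "view" is, not the smallest-parent algorithm the benchmark uses; the toy rows and the sum aggregate are invented:

```python
from itertools import combinations

def data_cube(tuples, d):
    """All 2^d aggregate views of d-attribute tuples (attrs..., measure)."""
    views = {}
    for k in range(d + 1):
        for dims in combinations(range(d), k):
            view = {}
            for t in tuples:
                key = tuple(t[i] for i in dims)
                view[key] = view.get(key, 0) + t[d]  # sum the measure
            views[dims] = view
    return views

rows = [(1, 'a', 10), (1, 'b', 20), (2, 'a', 30)]
cube = data_cube(rows, d=2)
print(len(cube))     # 2^2 = 4 views
print(cube[()][()])  # grand total view: 60
```

The smallest-parent optimization computes each view not from the raw tuples, as here, but from the smallest already-computed view that contains its grouping attributes, which is what stresses the memory hierarchy in the benchmark.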
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
A grid-enabled web service for low-resolution crystal structure refinement.
O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr
2012-03-01
Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.
Simulation of hypersonic rarefied flows with the immersed-boundary method
NASA Astrophysics Data System (ADS)
Bruno, D.; De Palma, P.; de Tullio, M. D.
2011-05-01
This paper provides a validation of an immersed boundary method for computing hypersonic rarefied gas flows. The method is based on the solution of the Navier-Stokes equation and is validated versus numerical results obtained by the DSMC approach. The Navier-Stokes solver employs a flexible local grid refinement technique and is implemented on parallel machines using a domain-decomposition approach. Thanks to the efficient grid generation process, based on the ray-tracing technique, and the use of the METIS software, it is possible to obtain the partitioned grids to be assigned to each processor with a minimal effort by the user. This allows one to by-pass the expensive (in terms of time and human resources) classical generation process of a body fitted grid. First-order slip-velocity boundary conditions are employed and tested for taking into account rarefied gas effects.
NASA Astrophysics Data System (ADS)
Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.
2017-12-01
An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in the production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive, and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products that is significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures are illustrated.
We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
Radiation and Temperature Hard Multi-Pixel Avalanche Photodiodes
NASA Technical Reports Server (NTRS)
Bensaoula, Abdelhak (Inventor); Starikov, David (Inventor); Pillai, Rajeev (Inventor)
2017-01-01
The structure and method of fabricating a radiation and temperature hard avalanche photodiode with integrated radiation and temperature hard readout circuit, comprising a substrate, an avalanche region, an absorption region, and a plurality of Ohmic contacts are presented. The present disclosure provides for tuning of spectral sensitivity and high device efficiency, resulting in photon counting capability with decreased crosstalk and reduced dark current.
Application of LANDSAT data to delimitation of avalanche hazards in Montane, Colorado
NASA Technical Reports Server (NTRS)
Knepper, D. H. (Principal Investigator); Summer, R.
1976-01-01
The author has identified the following significant results. With rare exceptions, avalanche areas cannot be identified on LANDSAT imagery. Avalanche hazard mapping on a regional scale is best conducted using LANDSAT imagery in conjunction with complementary data sources. Level of detail of such maps will be limited by the amount and completeness of the complementary information used.
High-Accuracy Measurements of the Centre of Gravity of Avalanches in Proportional Chambers
DOE R&D Accomplishments Database
Charpak, G.; Jeavons, A.; Sauli, F.; Stubbs, R.
1973-09-24
In a multiwire proportional chamber the avalanches occur close to the anode wires. The motion of the positive ions in the large electric fields at the vicinity of the wires induces fast-rising positive pulses on the surrounding electrodes. Different methods have been developed in order to determine the position of the centre of the avalanches. In the method we describe, the centre of gravity of the pulse distribution is measured directly. It seems to lead to an accuracy which is limited only by the stability of the spatial distribution of the avalanches generated by the process being measured.
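The direct centre-of-gravity measurement amounts to a charge-weighted mean of the induced pulse heights over the cathode electrodes. A minimal sketch, with the electrode pitch and pulse values invented for illustration:

```python
def centre_of_gravity(pulses, pitch=2.0):
    """Avalanche position from induced cathode pulses: charge-weighted
    mean of electrode centre coordinates (pitch in mm, hypothetical)."""
    total = sum(pulses)
    return sum(i * pitch * q for i, q in enumerate(pulses)) / total

# induced positive-pulse distribution over five electrodes
print(centre_of_gravity([5.0, 40.0, 100.0, 35.0, 4.0]))  # ~3.92 mm
```

Because the weighted mean interpolates between electrodes, the position resolution can be much finer than the electrode pitch, limited mainly by the stability of the avalanche's spatial distribution, as the abstract notes.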
Avalanche multiplication and impact ionization in amorphous selenium photoconductive target
NASA Astrophysics Data System (ADS)
Park, Wug-Dong; Tanioka, Kenkichi
2014-03-01
The avalanche multiplication factor and the hole ionization coefficient in the amorphous selenium (a-Se) high-gain avalanche rushing amorphous photoconductor (HARP) target depend on the electric field. The phenomenon of avalanche multiplication and impact ionization in the 0.4-µm-thick a-Se HARP target is investigated. The hot carrier energy in the 0.4-µm-thick a-Se HARP target increases linearly as the target voltage increases. The energy relaxation length of hot carriers in the a-Se photoconductor of the 0.4-µm-thick HARP target saturates as the electric field increases. The average energy Eav of a hot carrier and the energy relaxation length λE in the a-Se photoconductor of the 0.4-µm-thick HARP target at 1 × 10^8 V/m were 0.25 eV and 2.5 nm, respectively. In addition, the hole ionization coefficient β and the avalanche multiplication factor M are derived as functions of the electric field, the average energy of a hot carrier, and the impact ionization energy. The experimental hole ionization coefficient β and avalanche multiplication factor M in the 0.4-µm-thick a-Se HARP target agree with the theoretical results.
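The abstract does not give the explicit formulas, so the sketch below assumes a common Shockley-type form β = (1/λE)·exp(−Ei/(qEλE)) together with single-carrier gain M = exp(βd). Only λE = 2.5 nm and d = 0.4 µm come from the text; the impact ionization energy Ei = 1.0 eV is a hypothetical value chosen purely for illustration:

```python
import math

def beta(E, lam=2.5e-9, Ei=1.0):
    """Hole ionization coefficient per metre (assumed Shockley-type form).
    E in V/m; E*lam is the energy in eV gained over one relaxation length."""
    return math.exp(-Ei / (E * lam)) / lam

def multiplication(E, d=0.4e-6, lam=2.5e-9, Ei=1.0):
    """Single-carrier avalanche gain M = exp(beta * d) across the target."""
    return math.exp(beta(E, lam, Ei) * d)

for E in (0.8e8, 1.0e8, 1.2e8):
    print(E, multiplication(E))  # gain rises steeply with field
```

Even this crude model reproduces the qualitative behaviour in the abstract: the gain is a strongly superlinear function of the applied field because β depends exponentially on the energy gained per relaxation length.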
Magnetically switched power supply system for lasers
NASA Technical Reports Server (NTRS)
Pacala, Thomas J. (Inventor)
1987-01-01
A laser power supply system is described in which separate pulses are utilized to avalanche ionize the gas within the laser and then produce a sustained discharge to cause the gas to emit light energy. A pulsed voltage source is used to charge a storage device such as a distributed capacitance. A transmission line or other suitable electrical conductor connects the storage device to the laser. A saturable inductor switch is coupled in the transmission line for containing the energy within the storage device until the voltage level across the storage device reaches a predetermined level, which level is less than that required to avalanche ionize the gas. An avalanche ionization pulse generating circuit is coupled to the laser for generating a high voltage pulse of sufficient amplitude to avalanche ionize the laser gas. Once the laser gas is avalanche ionized, the energy within the storage device is discharged through the saturable inductor switch into the laser to provide the sustained discharge. The avalanche ionization generating circuit may include a separate voltage source which is connected across the laser or may be in the form of a voltage multiplier circuit connected between the storage device and the laser.
NASA Astrophysics Data System (ADS)
Cooray, G. V.; Cooray, G. K.
2011-12-01
Gurevich et al. [1] postulated that the source of narrow bipolar pulses, a class of high energy pulses that occur during thunderstorms, could be a runaway electron avalanche driven by the intense electric fields of a thunderstorm. Recently, Watson and Marshall [2] used the modified transmission line model to test the mechanism of the source of narrow bipolar pulses. In a recent paper, Cooray and Cooray [3] demonstrated that the electromagnetic fields of accelerating charges could be used to evaluate the electromagnetic fields from electrical discharges if the temporal and spatial variation of the charges in the discharge is known. In the present study, those equations were utilized to evaluate the electromagnetic fields generated by a relativistic electron avalanche. In the analysis it is assumed that all the electrons in the avalanche are moving with the same speed; in other words, the growth or decay of the number of electrons takes place only at the head of the avalanche. It is shown that the radiation emanates only from the head of the avalanche, where electrons are being accelerated. It is also shown that an analytical expression for the radiation field of the avalanche at any distance can be written directly in terms of the e-folding length of the avalanche. This makes it possible to extract directly the spatial variation of the e-folding length of the avalanche from the measured radiation fields. In the study this model avalanche was used to investigate whether it can describe the measured electromagnetic fields of narrow bipolar pulses. The results obtained are in reasonable agreement with the two-station data of Eack [4] for speeds of propagation around (2-2.5) × 10^8 m/s when the propagation effects on the electric fields measured at the distant station are taken into account. [1] Gurevich et al. (2004), Phys. Lett. A, 329, pp. 348-361. [2] Watson, S. S., and T. C. Marshall (2007), Geophys. Res. Lett., Vol. 34, L04816, doi:10.1029/2006GL027426. [3] Cooray, V., and G. Cooray (2010), IEEE Transactions on Electromagnetic Compatibility, 52, No. 4, 944-955. [4] Eack, K. B. (2004), Geophys. Res. Lett., Vol. 31, L20102, doi:10.1029/2005GL023975.
Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, Haoqiang; VanderWijngaart, Rob F.
2003-01-01
We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
The Universal Transverse Mercator (UTM) grid
,
1997-01-01
The most convenient way to identify points on the curved surface of the Earth is with a system of reference lines called parallels of latitude and meridians of longitude. On some maps the meridians and parallels appear as straight lines. On most modern maps, however, the meridians and parallels may appear as curved lines. These differences are due to the mathematical treatment required to portray a curved surface on a flat surface so that important properties of the map (such as distance and areal accuracy) are shown with minimum distortion. The system used to portray a portion of the round Earth on a flat surface is called a map projection.
The Universal Transverse Mercator (UTM) grid
,
1999-01-01
The most convenient way to identify points on the curved surface of the Earth is with a system of reference lines called parallels of latitude and meridians of longitude. On some maps, the meridians and parallels appear as straight lines. On most modern maps, however, the meridians and parallels appear as curved lines. These differences are due to the mathematical treatment required to portray a curved surface on a flat surface so that important properties of the map (such as distance and areal accuracy) are shown with minimum distortion. The system used to portray a portion of the round Earth on a flat surface is called a map projection.
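The UTM grid itself divides longitude into sixty 6°-wide zones, and the zone number follows directly from the longitude (standard zones only; the Norway and Svalbard exceptions are ignored in this sketch):

```python
import math

def utm_zone(lon_deg):
    """UTM longitude zone 1-60; zone 1 spans 180°W to 174°W."""
    return int(math.floor((lon_deg + 180.0) / 6.0)) % 60 + 1

print(utm_zone(-77.0))   # Washington, D.C. -> zone 18
print(utm_zone(139.7))   # Tokyo -> zone 54
```

Within each zone a transverse Mercator projection is applied, so distortion stays small because no point is ever more than 3° of longitude from the zone's central meridian.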
Local and nonlocal parallel heat transport in general magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del-Castillo-Negrete, Diego B; Chacon, Luis
2011-01-01
A novel approach for the study of parallel transport in magnetized plasmas is presented. The method avoids numerical pollution issues of grid-based formulations and applies to integrable and chaotic magnetic fields with local or nonlocal parallel closures. In weakly chaotic fields, the method gives the fractal structure of the devil's staircase radial temperature profile. In fully chaotic fields, the temperature exhibits self-similar spatiotemporal evolution with a stretched-exponential scaling function for local closures and an algebraically decaying one for nonlocal closures. It is shown that, for both closures, the effective radial heat transport is incompatible with the quasilinear diffusion model.
Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.
Knöpfel, Thomas; Leech, Robert
2018-01-01
Local perturbations within complex dynamical systems can trigger cascade-like events that spread across significant portions of the system. Cascades of this type have been observed across a broad range of scales in the brain. Studies of these cascades, known as neuronal avalanches, usually report the statistics of large numbers of avalanches without probing the characteristic patterns produced by the avalanches themselves. This is partly due to limitations in the extent or spatiotemporal resolution of commonly used neuroimaging techniques. In this study, we overcome these limitations by using optical voltage imaging with genetically encoded voltage indicators. This allows us to record cortical activity in vivo across an entire cortical hemisphere, at both high spatial (~30 µm) and high temporal (~20 ms) resolution, in mice in either an anesthetized or an awake state. We then use artificial neural networks to identify the characteristic patterns created by neuronal avalanches in our data. Avalanches in the anesthetized cortex are most accurately classified by an artificial neural network architecture that connects spatial and temporal information simultaneously. In contrast, avalanches in the awake cortex, owing to its increased spatiotemporal complexity, are most accurately classified by an architecture that treats spatial and temporal information separately. This is in keeping with reports that the higher spatiotemporal complexity of the awake brain coincides with features of a dynamical system operating close to criticality. PMID:29795654
Neuronal avalanches and coherence potentials
NASA Astrophysics Data System (ADS)
Plenz, D.
2012-05-01
The mammalian cortex consists of a vast network of weakly interacting excitable cells called neurons. Neurons must synchronize their activities in order to trigger activity in neighboring neurons. Moreover, interactions must be carefully regulated to remain weak (but not too weak) such that cascades of active neuronal groups avoid explosive growth yet allow for activity propagation over long-distances. Such a balance is robustly realized for neuronal avalanches, which are defined as cortical activity cascades that follow precise power laws. In experiments, scale-invariant neuronal avalanche dynamics have been observed during spontaneous cortical activity in isolated preparations in vitro as well as in the ongoing cortical activity of awake animals and in humans. Theory, models, and experiments suggest that neuronal avalanches are the signature of brain function near criticality at which the cortex optimally responds to inputs and maximizes its information capacity. Importantly, avalanche dynamics allow for the emergence of a subset of avalanches, the coherence potentials. They emerge when the synchronization of a local neuronal group exceeds a local threshold, at which the system spawns replicas of the local group activity at distant network sites. The functional importance of coherence potentials will be discussed in the context of propagating structures, such as gliders in balanced cellular automata. Gliders constitute local population dynamics that replicate in space after a finite number of generations and are thought to provide cellular automata with universal computation. Avalanches and coherence potentials are proposed to constitute a modern framework of cortical synchronization dynamics that underlies brain function.
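As a toy illustration of how avalanche statistics are gathered in practice (the threshold and binning conventions here are assumptions for illustration, not taken from the abstract above), cascades can be delimited in a binned activity series as maximal runs of suprathreshold bins, with avalanche size defined as the summed activity in each run:

```python
def avalanche_sizes(activity, theta=0):
    """Partition a 1D binned-activity series into avalanches: maximal runs
    of bins with activity above `theta`; size = summed activity in the run."""
    sizes, current = [], 0
    for a in activity:
        if a > theta:
            current += a
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes
```

Power-law statistics such as those mentioned above would then be assessed on the resulting size distribution.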
Strapazzon, Giacomo; Migliaccio, Daniel; Fontana, Diego; Stawinoga, Agnieszka Elzbieta; Milani, Mario; Brugger, Hermann
2018-03-01
To explore baseline knowledge of avalanche guidelines and the Avalanche Victim Resuscitation Checklist (AVReCh) in Italy and the knowledge acquisition from a standardized lecture. Standardized lecture material discussing the AVReCh was presented during 8 mountain medicine courses from November 2014 to April 2016 in different regions of Italy. To determine the knowledge acquisition from the lecture, a pre- and postlecture survey was used. A total of 193 surveys were analyzed. More than 50% of the participants had never participated in lectures or courses on avalanche guidelines, and less than 50% of the participants knew about the AVReCh before the lecture. The correct temporal sequence of reportable information in the basic life support section of the AVReCh was selected by 40% of the participants before the lecture and by 75% after it (P<0.001). In subgroup analyses, most groups showed significant improvement in performance (P<0.05). Selection of the correct burial time increased from 36 to 84% (P<0.05). Health care providers and mountain rescue personnel are not widely aware of avalanche guidelines. The standardized lecture significantly improved knowledge of the principles of avalanche management related to core AVReCh elements. However, the effect that this knowledge acquisition has on avalanche victim survival, or on adherence to the AVReCh in the field, is yet to be determined. Copyright © 2017 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.
2015-09-30
The scarp at the edge of the North Polar layered deposits of Mars is the site of the most frequent frost avalanches seen by HiRISE. At this season, northern spring, frost avalanches are common, and HiRISE monitors the scarp to learn more about the timing and frequency of the avalanches and their relationship to the evolution of frost on the flat ground above and below the scarp. This picture captured a small avalanche in progress, right in the color strip. The small white cloud in front of the brick red cliff is likely carbon dioxide frost dislodged from the layers above, caught in the act of cascading down the cliff. It is larger than it looks, more than 20 meters across, and (based on previous examples) it will likely kick up clouds of dust when it hits the ground. The avalanches tend to take place at a season when the North Polar region is warming, suggesting that they may be triggered by thermal expansion. The avalanches remind us, along with active sand dunes, dust devils, slope streaks and recurring slope lineae, that Mars is an active and dynamic planet. http://photojournal.jpl.nasa.gov/catalog/PIA19961
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1997-01-01
Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); and autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-Stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible Navier-Stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP, a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock-capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; and the supercomputer consolidation project.
NASA Astrophysics Data System (ADS)
Castebrunet, H.; Eckert, N.; Giraud, G.; Durand, Y.; Morin, S.
2014-09-01
Projecting changes in snow cover due to climate warming is important for many societal issues, including the adaptation of avalanche risk mitigation strategies. Efficient modelling of future snow cover requires high resolution to properly resolve the topography. Here, we introduce results obtained through statistical downscaling techniques allowing simulations of future snowpack conditions including mechanical stability estimates for the mid and late 21st century in the French Alps under three climate change scenarios. Refined statistical descriptions of snowpack characteristics are provided in comparison to a 1960-1990 reference period, including latitudinal, altitudinal and seasonal gradients. These results are then used to feed a statistical model relating avalanche activity to snow and meteorological conditions, so as to produce the first projection on annual/seasonal timescales of future natural avalanche activity based on past observations. The resulting statistical indicators are fundamental for the mountain economy in terms of anticipation of changes. Whereas precipitation is expected to remain quite stationary, temperature increase interacting with topography will constrain the evolution of snow-related variables on all considered spatio-temporal scales and will, in particular, lead to a reduction of the dry snowpack and an increase of the wet snowpack. Overall, compared to the reference period, changes are strong for the end of the 21st century, but already significant for the mid century. Changes in winter are less important than in spring, but wet-snow conditions are projected to appear at high elevations earlier in the season. At the same altitude, the southern French Alps will not be significantly more affected than the northern French Alps, which means that the snowpack will be preserved for longer in the southern massifs which are higher on average. 
Regarding avalanche activity, a general decrease in the mean (20-30%) and in interannual variability is projected. These changes are relatively strong compared to changes in snow and meteorological variables. The decrease is amplified in spring and at low altitude. In contrast, an increase in avalanche activity is expected in winter at high altitude because of conditions favourable to wet-snow avalanches earlier in the season. Comparison with the outputs of the deterministic avalanche hazard model MEPRA (Modèle Expert d'aide à la Prévision du Risque d'Avalanche) shows generally consistent results but suggests that, even if the frequency of winters with high avalanche activity is clearly projected to decrease, the decreasing trend may be less strong and smooth than suggested by the statistical analysis based on changes in snowpack characteristics and their links to avalanche observations in the past. This important point for risk assessment calls for further work focusing on shorter timescales. Finally, the small differences between the climate change scenarios show the robustness of the projected changes in avalanche activity.
Capabilities of Fully Parallelized MHD Stability Code MARS
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang
2016-10-01
Results of full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria within MHD-kinetic plasma models. A parallel version of MARS, named PMARS, has recently been developed at FAR-TECH. Parallelized MARS is an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models implemented in MARS. Parallelization of the code included both the construction of the matrix for the eigenvalue problem and the inverse vector iteration algorithm that MARS uses to solve that eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Solution of the eigenvalue problem is parallelized by repeating the steps of the MARS algorithm using parallel libraries and procedures. Parallelized MARS is capable of calculating eigenmodes with significantly increased spatial resolution: up to 5,000 adapted radial grid points with up to 500 poloidal harmonics. Such resolution is sufficient for simulation of kink, tearing and peeling-ballooning instabilities with physically relevant parameters. Work is supported by the U.S. DOE SBIR program.
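Inverse vector iteration, the eigenvalue algorithm named above, is a standard technique: repeatedly solve with a shifted matrix to amplify the eigenvector whose eigenvalue lies closest to the shift. A minimal serial NumPy sketch (a dense solve and a Rayleigh-quotient estimate stand in for the distributed matrix machinery of PMARS):

```python
import numpy as np

def inverse_iteration(A, shift, tol=1e-10, maxit=200):
    """Find the eigenpair of A whose eigenvalue is closest to `shift`
    by power iteration on (A - shift*I)^{-1}."""
    n = A.shape[0]
    M = A - shift * np.eye(n)
    x = np.ones(n) / np.sqrt(n)   # arbitrary starting vector
    lam = shift
    for _ in range(maxit):
        y = np.linalg.solve(M, x)         # one inverse-iteration step
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ A @ x_new       # Rayleigh-quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        lam, x = lam_new, x_new
    return lam, x
```

A parallel variant distributes the linear solve and the matrix-vector products, which is where the per-surface load distribution described above comes in.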
NASA Astrophysics Data System (ADS)
Raeli, Alice; Bergmann, Michel; Iollo, Angelo
2018-02-01
We consider problems governed by a linear elliptic equation with coefficients that vary across internal interfaces. The solution and its normal derivative can undergo significant variations through these internal boundaries. We present a compact finite-difference scheme on a tree-based adaptive grid that can be efficiently solved using a natively parallel data structure. The main idea is to optimize the truncation error of the discretization scheme as a function of the local grid configuration to achieve second-order accuracy. Numerical illustrations are presented in two- and three-dimensional configurations.
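The flavor of such a scheme can be shown in one dimension on a uniform grid. The paper's tree-based adaptive, truncation-error-optimized discretization is more involved; this sketch only illustrates the standard second-order flux form for a varying coefficient, which is the building block such schemes generalize:

```python
import numpy as np

def solve_elliptic_1d(k, f, n):
    """Solve -(k(x) u')' = f(x) on (0,1) with u(0)=u(1)=0, second-order FD.

    The coefficient is evaluated at cell faces, k_{i+1/2}, which keeps
    second-order accuracy for smooth k; across sharp interfaces a harmonic
    average of k is the usual choice instead.
    """
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    kf = k(x[:-1] + h / 2.0)       # k at face midpoints: k_{i+1/2}
    m = n - 1                      # number of interior unknowns
    A = np.zeros((m, m))
    for i in range(m):             # row i is interior node i+1
        A[i, i] = (kf[i] + kf[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -kf[i] / h**2
        if i < m - 1:
            A[i, i + 1] = -kf[i + 1] / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u
```

With k ≡ 1 this reduces to the classical three-point Laplacian, so it can be verified against a manufactured solution.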
Planning paths through a spatial hierarchy - Eliminating stair-stepping effects
NASA Technical Reports Server (NTRS)
Slack, Marc G.
1989-01-01
Stair-stepping effects are a result of the loss of spatial continuity resulting from the decomposition of space into a grid. This paper presents a path planning algorithm which eliminates stair-stepping effects induced by the grid-based spatial representation. The algorithm exploits a hierarchical spatial model to efficiently plan paths for a mobile robot operating in dynamic domains. The spatial model and path planning algorithm map to a parallel machine, allowing the system to operate incrementally, thereby accounting for unexpected events in the operating space.
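One common way to remove stair-stepping from a grid path is greedy line-of-sight shortcutting: drop intermediate waypoints whenever a straight segment between two waypoints crosses only free cells. This is a generic sketch of the idea, not the paper's hierarchical algorithm (the dense-sampling visibility test is an assumption; a Bresenham/supercover walk is the usual production choice):

```python
def line_free(grid, a, b):
    """True if the straight segment a-b crosses only free cells
    (grid[r][c] == 0). Samples the segment densely for simplicity."""
    (r0, c0), (r1, c1) = a, b
    steps = max(abs(r1 - r0), abs(c1 - c0)) * 2 + 1
    for s in range(steps + 1):
        t = s / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        if grid[r][c]:
            return False
    return True

def smooth(grid, path):
    """Greedy shortcutting: from each waypoint, jump to the farthest
    later waypoint reachable in a straight line."""
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_free(grid, path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out
```

On an obstacle-free grid, a staircase path collapses to a single straight segment between its endpoints.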
Relating Silicon Carbide Avalanche Breakdown Diode Design to Pulsed-Energy Capability
2017-03-01
Relating Silicon Carbide Avalanche Breakdown Diode Design to Pulsed-Energy Capability. Damian Urciuoli, Miguel Hinojosa, and Ronald Green, US... were pulse tested in an inductive load circuit at peak powers of over 110 kW. Total pulsed-energy dissipation was kept nearly the same among the... voltages about which design provides the highest pulsed-energy capability. Keywords: Avalanche; Breakdown; Diode; Silicon Carbide
Rockslide-debris avalanche of May 18, 1980, Mount St. Helens Volcano, Washington
Glicken, Harry
1996-01-01
This report provides a detailed picture of the rockslide-debris avalanche of the May 18, 1980, eruption of Mount St. Helens volcano. It provides a characterization of the deposit, a reinterpretation of the details of the first minutes of the eruption of May 18, and insight into the transport mechanism of the mass movement. Details of the rockslide event, as revealed by eyewitness photographs, are correlated with features of the deposit. The photographs show three slide blocks in the rockslide movement. Slide block I was triggered by a magnitude 5.1 earthquake at 8:32 a.m. Pacific Daylight Time (P.D.T.). An exploding cryptodome burst through slide block II to produce the 'blast surge.' Slide block III consisted of many discrete failures that were carried out in continuing pyroclastic currents generated from the exploding cryptodome. The cryptodome continued to depressurize after slide block III, producing a blast deposit that rests on top of the debris-avalanche deposit. The hummocky 2.5-cubic-kilometer debris-avalanche deposit consists of block facies (pieces of the pre-eruption Mount St. Helens transported relatively intact) and matrix facies (a mixture of rocks from the old mountain and cryptodome dacite). Block facies is divided into five lithologic units. Matrix facies was derived from the explosively generated current of slide block III as well as from disaggregation and mixing of debris-avalanche blocks. The mean density of the old cone was measured to be about 20 percent greater than the mean density of the avalanche deposit. Density in the deposit does not decrease with distance, which suggests that debris-avalanche blocks were dilated at the mountain rather than during transport. Grain-size parameters showing that clast size converges toward a mean with distance suggest mixing during transport.
The debris-avalanche flow can be considered a grain flow, where particles -- either debris-avalanche blocks or the clasts within the blocks -- collided and created dispersive stress normal to the movement of material. The dispersive stress preserved the dilation of the material and allowed it to flow.
NASA Technical Reports Server (NTRS)
Wigton, Larry
1996-01-01
Improvements to the numerical linear algebra routines used in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR, are reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.
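As a minimal example of the class of matrix-iterative methods in question, here is classical Jacobi iteration; this is purely illustrative (production CFD solvers of the kind described above use preconditioned Krylov methods such as GMRES, not plain Jacobi):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, maxit=500):
    """Classical Jacobi iteration for Ax = b: split A = D + R and iterate
    x <- D^{-1}(b - R x). Converges for strictly diagonally dominant A."""
    D = np.diag(A)                    # diagonal part of A
    R = A - np.diag(D)                # off-diagonal remainder
    x = np.zeros_like(b) if x0 is None else x0
    for _ in range(maxit):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each sweep costs one matrix-vector product and is embarrassingly parallel across rows, which is why such splittings map well to both vector and distributed-memory machines.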