Science.gov

Sample records for 38th parallel lineament

  1. Re-evaluating the 38th Parallel Serial Impact Hypothesis

    NASA Astrophysics Data System (ADS)

    Evans, K. R.; Davis, G. H.; Miao, X.; Mickus, K. L.; Miller, J. F.; Morrow, J. R.

    2008-12-01

    The idea that the 38th-parallel structures across Kansas, Missouri, and Illinois are serial impacts has been controversial. In addition to the original eight, two other structures are proximal to the 38th parallel, Dent Branch and Silver City Dome. Only Weaubleau, Decaturville, and Crooked Creek contain quartz grains with multiple directions of planar deformation features (PDFs). Shatter cones have been found at Decaturville and Crooked Creek. Key macroscopic observations of these impacts include: (1) circular outlines and notable central uplifts, (2) remarkably intense levels of structural deformation (folding, faulting, fracturing, and brecciation), (3) deformation dying out with depth and laterally away from the central uplift, and (4) associated igneous rocks only as clasts. From field and core studies and published reports, we consider other structures along the 38th parallel to be dubious (Hazelgreen), intrusive (Hick's Dome), or volcanic in origin (Silver City Dome, Rose Dome, Furnace Creek, Dent Branch, and Avon). The age of the Weaubleau structure is constrained biostratigraphically as middle Mississippian (latest Osagean or early Meramecian). Crooked Creek and Decaturville are deeply eroded; their ages are poorly constrained. Crooked Creek contains isolated blocks of sandstone of late Osagean age, but the stratigraphic context of the blocks is poorly known. Other investigators contend the age of Decaturville is Pennsylvanian or Permian, based on CRM paleomagnetism and the occurrence of an isolated sulfide breccia body in the central uplift. The Ozark plateau experienced Mississippi Valley Type (MVT) sulfide mineralization during the Ouachita orogeny, but our examination of a sample from the sulfide breccia shows it is shattered pyrite and differs from typical MVT deposits. If the breccia is not associated with the regional mineralization, a middle Mississippian age cannot be excluded. Weaubleau, Decaturville, and Crooked Creek are aligned across 199 km. A

  2. Kinematics and age of Early Tertiary trench parallel volcano-tectonic lineaments in southern Mexico: Tectonic implications

    NASA Astrophysics Data System (ADS)

    Martini, M.; Ferrari, L.; Lopez Martinez, M.; Cerca Martinez, M.; Serrano Duran, L.

    2007-05-01

    We present new geological, structural, and geochronological data that constrain the timing and geometry of Early Tertiary strike-slip deformation in southwestern Mexico and its relation with the concurrent magmatic activity. Geologic mapping in Guerrero and Michoacan States documented two regional WNW-trending volcano-tectonic lineaments subparallel to the present trench. The southernmost lineament runs for ~140 km from the San Miguel Totolapan area (NW Guerrero) to Sanchiqueo (SE Michoacan), and passes through Ciudad Altamirano. Its southeastern part is marked by the alignment of at least eleven silicic to intermediate major domes as well as by the course of the Balsas River. The northwestern part of the lineament is characterized by ductile left-lateral shear zones in Early Tertiary plutonic rocks observed in the Rio Chiquito valley. Domes near Ciudad Altamirano are unaffected by ductile shearing and yielded a ~42 Ma 40Ar/39Ar age, setting a minimum age for this deformation. The northern volcano-tectonic lineament runs for ~190 km between the areas of Huitzuco in northern Guerrero and the southern part of the Tzitzio fold in eastern Michoacan. The Huautla, Tilzapotla, Taxco, La Goleta and Nanchititla silicic centers (all in the range 37-34 Ma) are emplaced along this lineament, which continues to the WNW through a mafic dike swarm exposed north of Tiquicheo (37-35 Ma) and the Purungueo subvolcanic body (~42 Ma). These rocks, unaffected by ductile shearing, give a minimum age of deformation similar to that of the southern Totolapan-Sanquicheo lineament. Post ~42 Ma deformation is essentially brittle and is characterized by several left-lateral and right-lateral transcurrent faults with typical Riedel patterns. Other trench-parallel left-lateral shear zones active in pre-Oligocene times were recently reported in western Oaxaca. The recognition of Early Tertiary trench-parallel and left-lateral ductile shearing in internal areas of southern Mexico suggests a field of widely

  3. 38th Aerospace Mechanisms Symposium

    NASA Technical Reports Server (NTRS)

    Boesiger, Edward A. (Compiler)

    2006-01-01

    The Aerospace Mechanisms Symposium (AMS) provides a unique forum for those active in the design, production and use of aerospace mechanisms. A major focus is the reporting of problems and solutions associated with the development and flight certification of new mechanisms. Organized by the Mechanisms Education Association, the National Aeronautics and Space Administration and Lockheed Martin Space Systems Company (LMSSC) share the responsibility for hosting the AMS. Now in its 38th symposium, the AMS continues to be well attended, attracting participants from both the U.S. and abroad. The 38th AMS, hosted by the NASA Langley Research Center in Williamsburg, Virginia, was held May 17-19, 2006. During these three days, 34 papers were presented. Topics included gimbals, tribology, actuators, aircraft mechanisms, deployment mechanisms, release mechanisms, and test equipment. Hardware displays during the supplier exhibit gave attendees an opportunity to meet with developers of current and future mechanism components.

  4. Lutetia's lineaments

    NASA Astrophysics Data System (ADS)

    Besse, S.; Küppers, M.; Barnouin, O. S.; Thomas, N.; Benkhoff, J.

    2014-10-01

    The European Space Agency's Rosetta spacecraft flew by asteroid (21) Lutetia on July 10, 2010. Observations with the OSIRIS camera have revealed many geological features. Lineaments are identified over the entire observed surface of the asteroid, and many of these features are concentric around the North Pole Crater Cluster (NPCC). As was done for (433) Eros and (4) Vesta, this analysis assesses whether or not some of Lutetia's lineaments could have been created orthogonally to observed impact craters. The results indicate that the orientation of lineaments on Lutetia's surface could be explained by three impact craters: the Massilia and NPCC craters observed in the northern hemisphere, and the candidate crater Suspicio inferred to lie in the southern hemisphere. The latter was not observed during the Rosetta flyby. Of note is that the location of the Suspicio impact crater inferred from lineaments matches locations where hydrated minerals have been detected in Earth-based observations of the southern hemisphere of Lutetia. Although the presence of these minerals has to be confirmed, this analysis shows that topography may also contribute significantly to the modification of the spectral shape and its interpretation. The cross-cutting relationships of craters with lineaments, and between the lineaments themselves, show that Massilia is the oldest of the three impact features, the NPCC the youngest, and that the Suspicio impact crater is of intermediate age and likely occurred closer in time to the Massilia event.

  5. Lutetia’s lineaments

    NASA Astrophysics Data System (ADS)

    Besse, Sebastien; Kuppers, M.; Benkhoff, J.; Barnouin, O.

    2013-10-01

    During the fly-by of the Rosetta spacecraft in July 2010, asteroid (21) Lutetia was partially observed with the spacecraft's remote sensing instruments, including the visible camera OSIRIS. The geology and morphology of the asteroid are widely influenced by the large “recent” impact crater located close to the North Pole (also named the North Polar Crater Cluster (NPCC) due to the superposition of numerous circular depressions), whose ejecta blankets have resurfaced a portion of the local terrains around the impact. Impact craters, landslides, boulders and lineaments have been observed in various locations and concentrations, with the origin of the last three possibly related to the NPCC. The identification of the lineaments and their possible origin have already been investigated [1], with more than 400 lineaments mapped. The locations and orientations of the lineaments correlate with the NPCC, so the impact could be the source of the lineaments [2]. In this work, we define more restrictive criteria for the identification of lineaments in order to define orientations and poles of sub-groups of lineaments. We follow the same approach taken in studies of 433 Eros and Vesta [3, 4]. We find that the lineaments have two pole directions that point to two locations on the surface. We also investigate the relationships between crosscutting lineaments and their interaction with craters to establish the timeline of lineament formation, and indirectly its origin. Understanding the relationship between orientations and relative ages of the lineaments could provide valuable information on the origin of the lineaments and the interior structure of Lutetia. References [1] N. Thomas, et al., The geomorphology of (21) Lutetia: Results from the OSIRIS imaging system onboard ESA's Rosetta spacecraft, Planetary and Space Science 66, 96-124, 2012 [2] M. Jutzi, et al., The influence of recent major crater impacts on the surrounding surfaces of (21) Lutetia

  6. Structural lineaments of Gaspe from ERTS imagery

    NASA Technical Reports Server (NTRS)

    Steffensen, R.

    1973-01-01

    A test study was conducted to assess the value of ERTS images for mapping geologic features of the Gaspe Peninsula, Quebec. The specific objectives of the study were: 1) to ascertain the best procedure to follow in order to obtain valuable geologic data from interpretation; and 2) to indicate in which way these data could relate to mineral exploration. Of the four spectral bands of the multispectral scanner, the band from 700 to 800 nanometers, which seems to possess the best informational content for geologic study, was selected for analysis. The original ERTS image at a scale of 1:3,700,000 was enlarged about 15 times and reproduced on film. Geologically meaningful lines, called structural lineaments, were outlined and classified according to five categories: morpho-lithologic boundaries, morpho-lithologic lineaments, fault traces, fracture zones and undefined lineaments. Comparison with the geologic map of Gaspe shows that morpho-lithologic boundaries correspond to contacts between regional stratigraphic units. Morpho-lithologic lineaments follow bedding trends, whereas fracture traces appear as sets of parallel lineaments intersecting the previous category of lineaments at high angles. Fault traces mark more precisely the location of faults already mapped and spot the presence of presumed faults not indicated on the geologic map.

  7. 38th Annual Maintenance & Operations Cost Study for Colleges

    ERIC Educational Resources Information Center

    Agron, Joe

    2009-01-01

    The nation's colleges are feeling the pinch of the economic downturn, and maintenance and operations (M&O) budgets especially are under pressure. This article presents data from the 38th annual Maintenance & Operations Cost Study for colleges that can help one in benchmarking expenditures at one's institution. Data provided only targets two-year…

  8. 38th Annual Maintenance & Operations Cost Study for Schools

    ERIC Educational Resources Information Center

    Agron, Joe

    2009-01-01

    Despite the worst economic environment in generations, spending by K-12 institutions on maintenance and operations (M&O) held its own--defying historical trends that have shown M&O spending among the most affected in times of budget tightening. This article presents data from the 38th annual Maintenance & Operations Cost Study for schools that can…

  9. Archuleta County CO Lineaments

    DOE Data Explorer

    Zehner, Richard E.

    2012-01-01

    Citation Information: Originator: Earth Science & Observation Center (ESOC), CIRES, University of Colorado at Boulder Originator: Geothermal Development Associates, Reno, Nevada Publication Date: 2012 Title: Archuleta Lineaments Edition: First Publication Information: Publication Place: Reno, Nevada Publisher: Geothermal Development Associates, Reno, Nevada Description: This layer traces apparent topographic and air-photo lineaments in the area around Pagosa Springs in Archuleta County, Colorado. It was made in order to identify possible fault and fracture systems that might be conduits for geothermal fluids. Geothermal fluids commonly utilize faults and fractures in competent rocks as conduits for fluid flow. Geothermal exploration involves finding areas of high near-surface temperature gradients, along with a suitable “plumbing system” that can provide the necessary permeability. Geothermal power plants can sometimes be built where temperature and flow rates are high. To map the lineaments, georeferenced topographic maps and aerial photographs were examined in an existing GIS, using ESRI ArcMap 10.0 software. The USA_Topo_Maps and World_Imagery map layers were chosen from the GIS Server at server.arcgisonline.com, using a UTM Zone 13 NAD27 projection. This line shapefile was then constructed over what appeared to be through-going structural lineaments in both the aerial photographs and topographic layers, taking care to avoid manmade features such as roads, fence lines, and rights-of-way. These lineaments may be displaced somewhat from their actual locations, due to such factors as shadow effects at low sun angles in the aerial photographs. Note: This shape file was constructed as an aid to geothermal exploration in preparation for a site visit for field checking. We make no claims as to the existence of the lineaments or their location, orientation, and nature. Spatial Domain: Extent: Top: 4132831.990103 m Left: 311979.997741 m Right: 331678.289280 m Bottom: 4116067

  10. Lineaments on Skylab photographs: Detection, mapping, and hydrologic significance in central Tennessee

    NASA Technical Reports Server (NTRS)

    Moore, G. K.

    1976-01-01

    An investigation was carried out to determine the feasibility of mapping lineaments on SKYLAB photographs of central Tennessee and to determine the hydrologic significance of these lineaments, particularly concerning the occurrence and productivity of ground water. Sixty-nine percent more lineaments were found on SKYLAB photographs by stereo viewing than by projection viewing, but longer lineaments were detected by projection viewing. Most SKYLAB lineaments consisted of topographic depressions, and they followed or paralleled the streams. The remainder were found by vegetation alignments and the straight sides of ridges. Test drilling showed that the median yield of wells located on SKYLAB lineaments was about six times the median yield of wells located by random drilling. The best single detection method, in terms of potential savings, was stereo viewing. Larger savings might be achieved by locating wells on lineaments detected by both stereo viewing and projection viewing.

  11. 38th JANNAF Combustion Subcommittee Meeting. Volume 1

    NASA Technical Reports Server (NTRS)

    Fry, Ronald S. (Editor); Eggleston, Debra S. (Editor); Gannaway, Mary T. (Editor)

    2002-01-01

    This volume, the first of two volumes, is a collection of 55 unclassified/unlimited-distribution papers which were presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 38th Combustion Subcommittee (CS), 26th Airbreathing Propulsion Subcommittee (APS), 20th Propulsion Systems Hazards Subcommittee (PSHS), and 21st Modeling and Simulation Subcommittee meeting. The meeting was held 8-12 April 2002 at the Bayside Inn at The Sandestin Golf & Beach Resort and Eglin Air Force Base, Destin, Florida. Topics cover five major technology areas: 1) Combustion - Propellant Combustion, Ingredient Kinetics, Metal Combustion, Decomposition Processes and Material Characterization, Rocket Motor Combustion, and Liquid & Hybrid Combustion; 2) Liquid Rocket Engines - Low Cost Hydrocarbon Liquid Rocket Engines, Liquid Propulsion Turbines, Liquid Propulsion Pumps, and Staged Combustion Injector Technology; 3) Modeling & Simulation - Development of Multi-Disciplinary RBCC Modeling, Gun Modeling, and Computational Modeling for Liquid Propellant Combustion; 4) Guns - Gun Propelling Charge Design and ETC Gun Propulsion; and 5) Airbreathing - Scramjet and Ramjet S&T Program Overviews.

  12. Lineaments and Mineral Occurrences in Pennsylvania

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator); Kowalik, W. S.; Gold, D. P.

    1975-01-01

    The author has identified the following significant results. A conservative lineament map of Pennsylvania interpreted from ERTS-1 channel 7 (infrared) imagery and Skylab photography was compared with the distribution of known metallic mines and mineral occurrences. Of 383 known mineral occurrences, 116 show a geographical association to 1 km wide lineaments, another 24 lie at the intersection of two lineaments, and one lies at the intersection of three lineaments. The Perkiomen Creek lineament in the Triassic Basin is associated with 9 Cu-Fe occurrences. Six Pb-Zn occurrences are associated with the Tyrone-Mount Union lineament. Thirteen other lineaments are associated with 3, 4, or 5 mineral occurrences each.

  13. Significance of selected lineaments in Alabama

    NASA Technical Reports Server (NTRS)

    Drahovzal, J. A.; Neathery, T. L.; Wielchowsky, C. C.

    1974-01-01

    Four lineaments in the Alabama Appalachians that appear on ERTS-1 imagery have been geologically analysed. Two of the lineaments appear to have regional geologic significance, showing relationships to structural and stratigraphic frameworks, water and mineral resources, geophysical anomalies, and seismicity. The other two lineaments are of local geologic significance, but, nevertheless, have important environmental implications.

  14. Problems in the Study of lineaments

    NASA Astrophysics Data System (ADS)

    Anokhin, Vladimir; Kholmyanskii, Michael

    2015-04-01

    The study of linear objects in the upper crust, called lineaments, led at one time to major scientific results: the discovery of the planetary regmatic network, the birth of new tectonic concepts, and the establishment of new prospecting criteria for mineral deposits. Yet lineaments are still understudied for such a promising research direction, and lineament geomorphology faces a number of problems. 1. Terminology problems. The lineament theme still has no generally accepted terminology base; different scientists offer different interpretations even of the definition of a lineament. We offer an expanded definition: lineaments are linear features of the earth's crust, expressed as linear landforms, linear geological structures, or linear anomalies of physical fields that may follow one another, and associated with faults. The term "lineament" is not identical to the term "fault", but a lineament is always a reasonable suspicion of a fault, and this suspicion is justified in most cases. The structure of a lineament may include only those objects that can, at least presumably, be attributed to deep processes. Specialists in the lineament theme can overcome the terminological problems if they jointly create a common terminology database. 2. Methodological problems. The procedure of manual selection of lineaments mainly consists of drawing straight line segments along the axes of linear morphostructures on some cartographic base. The subjective factors of manual selection can be reduced by following a few simple rules: - choosing an optimal projection, scale, and quality of the cartographic base; - selecting the optimal type of linear object under study; - establishing boundary conditions for identifying a lineament (minimum length, maximum bending, minimum length-to-width ratio, etc.); - identifying an increasing number of lineaments, for a representative sample that reduces the influence of random errors; - ranking lineaments: fine lines (rank 3) are combined to form larger lineaments of rank 2; which, when combined
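The boundary conditions mentioned above (minimum length, maximum bending, minimum length-to-width ratio) lend themselves to a simple screening filter. The sketch below only illustrates the idea; the threshold values and the function name are hypothetical, not taken from the abstract:

```python
def passes_boundary_conditions(length_km, max_bend_deg, width_km,
                               min_length=2.0, bend_limit=10.0, min_ratio=5.0):
    """Reject a candidate lineament that is too short, too bent,
    or not elongate enough. All thresholds are illustrative."""
    if length_km < min_length:
        return False                      # too short to be meaningful
    if max_bend_deg > bend_limit:
        return False                      # deviates too far from a straight line
    if width_km > 0 and length_km / width_km < min_ratio:
        return False                      # not elongate enough (length/width)
    return True

# A 10 km, gently bent, narrow candidate passes; a 1 km one does not.
print(passes_boundary_conditions(10.0, 5.0, 1.0))  # True
print(passes_boundary_conditions(1.0, 5.0, 0.1))   # False
```

Applying one fixed rule set to every candidate is exactly what reduces the subjectivity the authors describe: two analysts running the same filter on the same candidates get the same selection.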

  15. CLustre: semi-automated lineament clustering for palaeo-glacial reconstruction

    NASA Astrophysics Data System (ADS)

    Smith, Mike; Anders, Niels; Keesstra, Saskia

    2016-04-01

    Palaeo-glacial reconstructions, or "inversions", using evidence from the palimpsest landscape are increasingly being undertaken with larger and larger databases. Predominant in landform evidence is the lineament (or drumlin), where the biggest datasets number in excess of 50,000 individual forms. One stage in the inversion process requires the identification of lineaments that are generically similar and then their subsequent interpretation into a coherent chronology of events. Here we present CLustre, a semi-automated algorithm that clusters lineaments using a locally adaptive, region-growing method. This is initially tested using 1,500 model runs on a synthetic dataset, before application to two case studies (where manual clustering has been undertaken by independent researchers): (1) Dubawnt Lake, Canada and (2) Victoria Island, Canada. Results using the synthetic data show that classifications are robust in most scenarios, although specific cases of cross-cutting lineaments may lead to incorrect clusters. Application to the case studies showed a very good match to existing published work, with differences related to limited numbers of unclassified lineaments and parallel cross-cutting lineaments. The value of CLustre comes from the semi-automated, objective application of a classification method that is repeatable. Once classified, summary statistics of lineament groups can be calculated and then used in the inversion.
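The abstract does not give CLustre's internals, but a locally adaptive, region-growing clustering of lineaments can be sketched roughly as follows. The orientation and distance criteria, the thresholds, and all names here are illustrative assumptions, not CLustre's actual algorithm:

```python
import math

def azimuth(seg):
    """Orientation of a segment in degrees, folded into [0, 180)."""
    (x1, y1), (x2, y2) = seg
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 180.0

def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def angular_difference(a, b):
    """Smallest angle between two undirected orientations."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def region_grow(segs, max_angle=15.0, max_dist=30.0):
    """Greedy region growing: a segment joins a cluster when it is both
    near and sub-parallel to some segment already in the cluster."""
    labels = [None] * len(segs)
    next_label = 0
    for seed in range(len(segs)):
        if labels[seed] is not None:
            continue
        labels[seed] = next_label
        frontier = [seed]
        while frontier:
            j = frontier.pop()
            for k in range(len(segs)):
                if (labels[k] is None
                        and angular_difference(azimuth(segs[j]), azimuth(segs[k])) <= max_angle
                        and math.dist(midpoint(segs[j]), midpoint(segs[k])) <= max_dist):
                    labels[k] = next_label
                    frontier.append(k)
        next_label += 1
    return labels

# Two sub-parallel flow-set segments plus one cross-cutting segment far away.
segs = [((0, 0), (10, 0)), ((0, 5), (10, 5)), ((50, 0), (50, 10))]
print(region_grow(segs))  # first two share a label; the third stands alone
```

A sketch like this also shows why cross-cutting lineaments are the hard case the authors note: a segment at a high angle to a cluster never satisfies the orientation criterion, even when it sits in the middle of the cluster spatially.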

  16. The spacing calculator software—A Visual Basic program to calculate spatial properties of lineaments

    NASA Astrophysics Data System (ADS)

    Ekneligoda, Thushan C.; Henkel, Herbert

    2006-05-01

    A software tool is presented which calculates the spatial properties azimuth, length, spacing, and frequency of lineaments that are defined by their starting and ending co-ordinates in a two-dimensional (2-D) planar co-ordinate system. A simple graphical interface with five display windows creates a user-friendly interactive environment. All lineaments are considered in the calculations, and no secondary sampling grid is needed for the elaboration of the spatial properties. Several rule-based decisions are made to determine the nearest lineament in the spacing calculation. As a default procedure, the programme defines a window that depends on the mode value of the length distribution of the lineaments in a study area. This makes the results more consistent, compared to the manual method of spacing calculation. Histograms are provided to illustrate and elaborate the distribution of the azimuth, length and spacing. The core of the tool is the spacing calculation between neighbouring parallel lineaments, which gives direct information about the variation of block sizes in a given category of structures. The 2-D lineament frequency is calculated for the actual area that is occupied by the lineaments.
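The tool itself is written in Visual Basic; as a rough illustration of the core geometry (azimuth and length from start/end coordinates, plus a perpendicular spacing between parallel lineaments), here is a minimal Python sketch. The function names and the simplified midpoint-to-line spacing rule are assumptions, not the program's actual rule-based procedure:

```python
import math

def azimuth_deg(x1, y1, x2, y2):
    """Azimuth clockwise from north, folded into [0, 180)
    since a lineament has no preferred direction."""
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 180.0

def length(x1, y1, x2, y2):
    """Euclidean length in map units."""
    return math.hypot(x2 - x1, y2 - y1)

def spacing(seg_a, seg_b):
    """Perpendicular distance from the midpoint of seg_a to the
    infinite line through seg_b (segments assumed sub-parallel)."""
    x1, y1, x2, y2 = seg_b
    mx = (seg_a[0] + seg_a[2]) / 2.0
    my = (seg_a[1] + seg_a[3]) / 2.0
    num = abs((x2 - x1) * (y1 - my) - (x1 - mx) * (y2 - y1))
    return num / math.hypot(x2 - x1, y2 - y1)

# Hypothetical lineaments given as (x1, y1, x2, y2) planar coordinates.
print(azimuth_deg(0, 0, 100, 100))            # NE-trending, ~45.0
print(length(0, 0, 3, 4))                     # 5.0
print(spacing((0, 0, 10, 0), (0, 5, 10, 5)))  # 5.0
```

As in the paper, the spacing between neighbouring parallel lineaments is the quantity of direct structural interest, since it reflects the block sizes between fractures.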

  17. Commerce geophysical lineament - Its source, geometry, and relation to the Reelfoot rift and New Madrid seismic zone

    USGS Publications Warehouse

    Langenheim, V.E.; Hildenbrand, T.G.

    1997-01-01

    The Commerce geophysical lineament is a northeast-trending magnetic and gravity feature that extends from central Arkansas to southern Illinois over a distance of ~400 km. It is parallel to the trend of the Reelfoot graben, but offset ~40 km to the northwest of the western margin of the rift floor. Modeling indicates that the source of the aeromagnetic and gravity anomalies is probably a mafic dike swarm. The age of the source of the Commerce geophysical lineament is not known, but the linearity and trend of the anomalies suggest a relationship with the Reelfoot rift, which has undergone episodic igneous activity. The Commerce geophysical lineament coincides with several topographic lineaments, movement on associated faults at least as young as Quaternary, and intrusions of various ages. Several earthquakes (Mb > 3) coincide with the Commerce geophysical lineament, but the diversity of associated focal mechanisms and the variety of surface structural features along the length of the Commerce geophysical lineament obscure its relation to the release of present-day strain. With the available seismicity data, it is difficult to attribute individual earthquakes to a specific structural lineament such as the Commerce geophysical lineament. However, the close correspondence between Quaternary faulting and present-day seismicity along the Commerce geophysical lineament is intriguing and warrants further study.

  18. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineament detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineament mapping accuracies are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, Landsat images and selected radar imagery of major lineament systems distributed over much of the Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of the faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.

  19. A Global Data Base of 433 Eros Lineaments

    NASA Astrophysics Data System (ADS)

    Buczkowski, D. L.; Prockter, L.; Barnouin-Jha, O. S.

    2005-12-01

    The Near-Earth Asteroid Rendezvous NEAR-Shoemaker spacecraft orbited the asteroid 433 Eros for a year from 2000-2001. The NEAR Multi-Spectral Imager (MSI) collected tens of thousands of high resolution images, and as a result Eros is the most comprehensively studied asteroid in the solar system. Previous mapping of lineaments on Eros has supported the suggestion of planes throughout the asteroid. We are creating a global data base of all Eros lineaments to better understand the global distribution of these features. It is particularly challenging to map lineament orientations on a non-spherical body (Eros is the shape of a peanut, measuring 34 km on the long axis). To address this issue we are mapping the lineaments directly on the Eros shapefile using POINTS, developed by J. Joseph at Cornell University. We compare lineament orientations to impact craters to determine if there is a causal relationship between cratering events and lineament formation. For example, preliminary mapping around Narcissus crater indicates at least four groups of lineaments: 1) previously mapped lineaments that may be indicative of planes through Eros, 2) lineaments radial to Narcissus crater, 3) lineaments related to nearby Shoemaker Regio, and 4) a set of lineaments that appear to be conjugate to set 1. Lineament sets 1 and 4 are also evident near Selene crater, as are lineaments radial to that crater. Lineament types include grooves, ridges and pit chains. We identify types of lineaments across the surface using a combination of NEAR Laser Rangefinder (NLR) topographic data and MSI images, and classify them according to region, including areas suggestive of thicker regolith. Further lineament/crater interactions are also examined, to determine the effect that lineaments have on crater shape.
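A simple way to test whether a lineament is radial or concentric to a crater, in the spirit of the comparison described above, is to compare the lineament's orientation with the direction from the crater to the lineament's midpoint. The planar sketch below is a hypothetical illustration (all names are invented); the actual analysis on Eros works on a non-spherical 3-D shape model:

```python
import math

def fold(az):
    """Fold an angle into [0, 180) -- lineaments are undirected."""
    return az % 180.0

def classify(seg, crater, tol=20.0):
    """Label a lineament radial, concentric, or oblique to a crater,
    using the angle between the lineament's orientation and the
    crater-to-midpoint direction. Planar sketch only."""
    (x1, y1), (x2, y2) = seg
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    line_az = fold(math.degrees(math.atan2(x2 - x1, y2 - y1)))
    radial_az = fold(math.degrees(math.atan2(mx - crater[0], my - crater[1])))
    d = abs(line_az - radial_az) % 180.0
    d = min(d, 180.0 - d)
    if d <= tol:
        return "radial"          # roughly along the crater direction
    if d >= 90.0 - tol:
        return "concentric"      # roughly perpendicular to it
    return "oblique"

crater = (0.0, 0.0)
print(classify(((5, 0), (15, 0)), crater))    # radial
print(classify(((10, -1), (10, 1)), crater))  # concentric
```

Run over a whole catalogue, a classifier like this yields the kind of per-crater groupings (radial sets versus concentric sets) that the Narcissus and Selene examples describe.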

  20. 38th Annual Survey Report on State-Sponsored Student Financial Aid, 2006-2007 Academic Year

    ERIC Educational Resources Information Center

    National Association of State Student Grant and Aid Programs, 2007

    2007-01-01

    Each year, the National Association of State Student Grant and Aid Programs (NASSGAP) completes a survey regarding state-funded expenditures for postsecondary student financial aid. This report, the 38th annual survey, represents data from academic year 2006-07. Data highlights of this survey include: (1) In the 2006-2007 academic year, the states…

  1. The 38th Annual Phi Delta Kappa/Gallup Poll of the Public's Attitudes toward the Public Schools

    ERIC Educational Resources Information Center

    Rose, Lowell C.; Gallup, Alec M.

    2006-01-01

    Rose and Gallup report on the results of the 38th Annual PDK/Gallup Poll of the Public's Attitude Toward the Public Schools. This year's survey examined No Child Left Behind and the public's perception of the law, the appropriate role of standardized testing, the achievement gap between white students and black and Hispanic students, the…

  2. Research in Medical Education: Proceedings of the Annual Conference (38th, Washington, DC, October 25-27, 1999).

    ERIC Educational Resources Information Center

    Anderson, M. Brownell, Ed.

    1999-01-01

    The Proceedings of the 38th Annual Conference on Research in Medical Education (Washington, DC, October 25-27, 1999) contain 43 research papers on innovative curricula, diagnostic reasoning, student evaluations of faculty, practicing physicians, prediction, licensing examinations, admissions, faculty development, managed care, technology-enhanced…

  3. Structural lineaments in the southern Sierra Nevada, California

    NASA Technical Reports Server (NTRS)

    Liggett, M. A. (Principal Investigator); Childs, J. F.

    1974-01-01

    The author has identified the following significant results. Several lineaments observed in ERTS-1 MSS imagery over the southern Sierra Nevada of California have been studied in the field in an attempt to explain their geologic origins and significance. The lineaments are expressed topographically as alignments of linear valleys, elongate ridges, breaks in slope or combinations of these. Natural outcrop exposures along them are characteristically poor. Two lineaments were found to align with foliated metamorphic roof pendants and screens within granitic country rocks. Along other lineaments, the most consistent correlations were found to be alignments of diabase dikes of Cretaceous age, and younger cataclastic shear zones and minor faults. The location of several Pliocene and Pleistocene volcanic centers at or near lineament intersections suggests that the lineaments may represent zones of crustal weakness which have provided conduits for rising magma.

  4. Florida lineament: Key tectonic element of eastern Gulf of Mexico

    SciTech Connect

    Christenson, G.

    1990-09-01

    The origin of the Florida lineament, a major basement lineament that strikes northwest-southeast across the West Florida Shelf and southern Florida, is a key to the history of the Gulf of Mexico. Regional magnetic and gravity trends are truncated along the Florida lineament. New geologic data from recent wells on the West Florida Shelf and magnetic anomaly data indicate that pre-Mesozoic basement terranes on opposite sides of the Florida lineament were contiguous prior to Triassic-Jurassic volcanism and exhibit only minimal lateral offset across the Florida lineament at present. The lack of major lateral offset of pre-Mesozoic basement terranes across the Florida lineament and lithologic and geophysical data suggest that the lineament represents a Triassic-Jurassic extensional rift margin. The Florida lineament is interpreted to be the southeastward continuation of the well-documented peripheral fault system, which delineates the rifted continental margin of the northern Gulf basin. The continuation of the peripheral fault system along the Florida lineament suggests that the tectonostratigraphic terranes associated with the Mesozoic producing trends of the northern Gulf basin may extend southeastward along the Florida lineament. The interpretation of the Florida lineament as an extensional rift margin places significant constraints on any tectonic model of the Gulf of Mexico region. A tectonic interpretation consistent with the constraints suggests that the West Florida Shelf and southern Florida region formed as the result of Triassic-Jurassic extension around a pole of rotation in central Florida. The central Florida pole of rotation is intermediate to the poles of rotation proposed for the counterclockwise rotation of Yucatan out of the northern Gulf basin. This suggests that the region south of the Florida lineament underwent extension synchronous with the rotation of the Yucatan block.

  5. A new fault lineament in Southern California

    NASA Technical Reports Server (NTRS)

    Pease, R. W.; Johnson, C. W.

    1973-01-01

    ERTS-1 imagery clearly shows a 50-mile wide tectonic zone across Southern California oriented about 15 deg to the structures of the Transverse Ranges, or with an azimuth of 70 deg. The zone is delineated on the imagery by terrain alignments and vegetational differences. A previously unrecognized tectonic lineament extends across the Mojave Desert and appears as a line of crustal upwarping. The pressure that would have caused this upwarping, together with the occurrence of many thrust faults along the 70 deg azimuth, indicates that this is a zone of crustal compression. Recent earthquake epicenters appear to be related to this compression zone rather than to the traditional fault network of Southern California.

  6. From the 38th Parallel to the DMZ: Teaching about the Two Koreas--A Nation Divided (1945-Present).

    ERIC Educational Resources Information Center

    Peters, Richard

    How the Korean peninsula came to be divided in 1945 is discussed, and the stalemated situation that has existed since that time is described. This brief history is followed by lesson plans for grades 6-12. The lesson plans include goals and objectives, activities, and assessment techniques for teaching the following concepts: (1) territorial…

  7. Europa's Northern Trailing Hemisphere: Lineament Stratigraphic Framework

    NASA Technical Reports Server (NTRS)

    Figueredo, P. H.; Hare, T.; Ricq, E.; Strom, K.; Greeley, R.; Tanaka, K.; Senske, D.

    2004-01-01

    Knowledge of the global distribution of Europan geologic units in time and space is a necessary step for the synthesis of the results of the Galileo mission and in preparation for future exploration (namely, by JIMO) of the satellite. We have initiated the production of the first Global Geological Map of Europa. As a base map, we use the recently published global photomosaic of Europa (U.S.G.S. Map I-2757) and additional Galileo SSI images at their original resolution. The map is being produced entirely in GIS format for analysis and combination with other datasets [1]. One of the main objectives of this project is to establish a global stratigraphic framework for Europa. In the absence of a well-developed cratering record, this goal will be achieved using the satellite's global network of lineaments (ridges, ridge complexes and bands; cf. [2]). Here we present the preliminary stratigraphic framework synthesized from the sequence of lineaments derived for the northern trailing hemisphere of Europa (Figure 1, below), and we discuss its significance and some emerging implications.

  8. Structural lineament and pattern analysis of Missouri, using LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Martin, J. A.; Kisvarsanyi, G. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. Major linear, circular, and arcuate traces were observed on LANDSAT imagery of Missouri. Lineaments plotted within the state boundaries range from 20 to nearly 500 km in length. Several extend into adjoining states. Lineament plots indicate a distinct pattern and in general reflect structural features of the Precambrian basement of the platform. Coincidence between lineaments traced from the imagery and known structural features in Missouri is high, thus supporting a causative relation between them. The lineament pattern apparently reveals a fundamental style of the deformation of the intracontinental craton. Dozens of heretofore unknown linear features related to epeirogenic movements and deformation of this segment of the continental crust were delineated. Lineaments and mineralization are interrelated in a geometrically classifiable pattern.

  9. Shape of lenticulae on Europa and their interaction with lineaments.

    NASA Astrophysics Data System (ADS)

    Culha, Cansu; Manga, Michael

    2015-04-01

    The surface of Europa contains many elliptical features that have been grouped into three classes: (a) positive relief (domes), (b) negative relief (pits), or (c) complex terrain (small chaos). Collectively, these three classes of features are often called "lenticulae". The internal processes that form lenticulae are unknown. However, given that the diameters of all these features are similar, it is parsimonious to ascribe each class of feature to a different stage in the evolution of some process occurring within the ice shell. Proposed models for these features include diapirs (Sotin et al., 2002; Rathbun et al., 1998), melting above diapirs (Schmidt et al., 2011), and sills of water (Michaut and Manga, 2014). The objective of the present study is to first characterize the shape of lenticulae, and then look at the interaction of lenticulae with lineaments, in order to test lenticulae formation mechanisms. Lenticulae and lineaments are mapped and annotated in ArcGIS. We mapped a total of 57 pits and 86 domes. Both pits and domes have similar aspect ratios and orientations. The elliptical similarities of domes and pits suggest that domes and pits are surface expressions of different stages of a common process within the ice shell. The cross cutting relationships between lineaments reveal relative age. Lineaments either lie over or under the lenticulae. All of the lineament segments that appear within pits also appear topographically lower than the rest of the surface. Domes lie over and under lineaments, but unlike pits there are lineaments that lie over domes that do not vary in topography. This suggests that the lineaments that lie above lenticulae and match the lenticulae's topography are older than the lenticulae. Domes have more crossing lineaments. Therefore, on average, they appear to be older than pits. Lineaments also appear on the sides of lenticulae. There are two different ways in which adjacent lineaments appear: 1. they disrupt the shape of the

  10. Investigation of lineaments on Skylab and ERTS images of Peninsular Ranges, Southwestern California

    NASA Technical Reports Server (NTRS)

    Merifield, P. M. (Principal Investigator); Lamar, D. L.

    1974-01-01

    The author has identified the following significant results. Northwest trending faults such as the Elsinore and San Jacinto are prominently displayed on Skylab and ERTS images of the Peninsular Ranges, southern California. Northeast, north-south, and west-north-west trending lineaments and faults are also apparent on satellite imagery. Several of the lineaments represent previously unmapped faults. Other lineaments are due to erosion along foliation directions and sharp bends in basement rock contacts rather than faulting. The northeast trending Thing Valley fault appears to be offset by the south branch of the Elsinore fault near Agua Caliente Hot Springs. Larger horizontal displacement along the Elsinore fault further northwest may be distributed along several faults which branch from the Elsinore fault in the Peninsular Ranges. The northeast and west-northwest trending faults are truncated by the major northwest trending faults and appear to be restricted to basement terrane. Limited data on displacement direction suggests that the northeast and west-northwest trending faults formed in response to an earlier period of east-northeast, west-southwest crustal shortening. Such a stress system is consistent with the plate tectonic model of a subduction zone parallel to the continental margin suggested in the late Mesozoic and early Tertiary.

  11. Models of fracture lineaments - Joint swarms, fracture corridors and faults in crystalline rocks, and their genetic relations

    NASA Astrophysics Data System (ADS)

    Gabrielsen, Roy H.; Braathen, Alvar

    2014-07-01

    Fracture lineaments in crystalline and metamorphic rocks of southern Norway can be subdivided into joint swarms, fracture corridors and faults, depending on displacement, fracture mode and patterns, and the presence of fault rocks. Their physical appearance as lineaments seen by remote sensing does not distinguish between these types, as all define km-long, narrow tabular zones of high fracture intensity. Intrinsically, fracture zonation becomes better expressed from joint swarms to fracture corridors and especially faults, as a consequence of increasing accumulated strain. Joint swarms and fracture corridors commonly reveal a symmetric fracture zonation on both sides of the core, whereas inclined extensional faults tend to have asymmetric patterns with enhanced strain and a wider damage zone in the hanging wall. Fracture lineaments can be mapped into subzones A-B (core), which are typically some centimeters up to some tens of meters wide. Common structural elements are fault rocks/shear zones, lenses, and a network of fractures, often with very high fracture frequency. Secondary minerals are common. Outside this, subzones C-D (damage zone) are commonly 20-50 m wide, with lower intensity of lineament-parallel fracturing, defining the topographic boundary of the lineament. Mineralisation is rarer. The transitional subzone E of multi-orientation fractures defines the transition to the background fracture system. We propose a model for the classification and development of fracture lineaments, applying their architecture (intrinsic geometry, spatial fracture pattern and spatial distribution of fault rocks) as tools for systematic description. This links fault growth processes and mechanisms that can be ascribed to strain hardening and softening scenarios in a model of fault architecture.

  12. Objective procedures for lineament enhancement and extraction (EROS Data Center).

    USGS Publications Warehouse

    Moore, G.K.; Waltz, F.A.

    1983-01-01

    A long-term research goal at the EROS Data Center is to develop automated, objective procedures for lineament mapping. In support of this goal, a five-step digital convolution procedure has been used to produce directionally enhanced images, which contain few artifacts and little noise. The main limitation of this procedure is that little enhancement of lineaments occurs in dissected terrain, in shadowed areas, and in flat areas with a uniform land cover. The directional enhancement procedure can be modified to extract edge and line segments from an image. Any of various decision rules can then be used to connect the line segments and to produce a final lineament map. The result is an interpretive map, but one that is based on an objective extraction of lineament components by digital processing. -from Authors
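
The abstract does not spell out the five-step convolution procedure, but the core idea of directional enhancement can be sketched with a derivative kernel rotated toward a chosen azimuth. The following Python snippet is a minimal illustration under that assumption, not the EROS Data Center procedure; the function name and kernel are hypothetical.

```python
import numpy as np

def directional_enhance(image, azimuth_deg):
    """Enhance linear features using a 3x3 derivative kernel rotated
    toward the chosen azimuth (a stand-in for the five-step EROS
    convolution procedure, which the abstract does not detail)."""
    theta = np.deg2rad(azimuth_deg)
    gx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    # blend horizontal and vertical gradient kernels by azimuth
    kernel = np.cos(theta) * gx + np.sin(theta) * gx.T
    img = image.astype(float)
    out = np.zeros_like(img)
    # correlate the kernel over the interior; borders are left at zero
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(kernel * img[i - 1:i + 2, j - 1:j + 2])
    return out
```

In practice the azimuth would be chosen roughly perpendicular to the lineament trend of interest, consistent with the abstract's point that enhancement is directional.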

  13. Lineaments that are artifacts of lighting, part G

    NASA Technical Reports Server (NTRS)

    Howard, K. A.; Larsen, B. R.

    1972-01-01

    Apollo 15 orbital photographs, particularly those taken at low sun elevation angles, are examined revealing grid patterns of lineaments. Preliminary results of experiments demonstrate that spurious lineaments and grid patterns can be produced and that the directions are dependent in part upon the position of the light source. The experiments were designed to duplicate the effect of bright sunlight reflecting from a hummocky surface with little or no diffuse light in the shadowed areas.

  14. Recognition of lineaments in Eastern Rhodopes on Landsat multispectral images

    NASA Astrophysics Data System (ADS)

    Borisova, Denitsa; Jelev, Georgi; Atanassov, Valentin; Koprinkova-Hristova, Petia; Alexiev, Kiril

    Lineaments usually appear on multispectral images as lines (edges) or linear formations resulting from color variations of the surface structures. Lineaments are line features on the Earth's surface which reflect geological structure. The basic geometry of a line is its orientation, length and curvature. Detection of lineaments is an important operation in the exploration for mineral deposits, in the investigation of active fault patterns, water resources, etc. In this study an integrated approach is applied. It brings together the methods of visual interpretation of various geological and geographical indications in the satellite images, application of spatial analysis in GIS, and automatic processing of Landsat multispectral images by the Canny algorithm, a directional filter and a neural network. The Canny algorithm for extracting edges is a series of filters (Gaussian, Sobel, etc.) applied to all bands of the image using the free IDL source (http://www.cis.rit.edu/class/simg782/programs/canny.pro). The directional filter is applied to sharpen the image in a specific (preferred) direction. Another method is the neural network algorithm for recognizing lineaments. Lineaments are effectively extracted using these different methods of automatic processing. The results from the above-mentioned methods are compared with results derived from visual interpretation of satellite images and from the geological map. Rose-diagrams of the distribution of lineaments and maps of their density are completed. Acknowledgments: This study is supported by the project DFNI - I01/8 funded by the Bulgarian Science Fund.
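
As a rough illustration of the edge-extraction stage, the sketch below computes a gradient-magnitude edge map with central differences and a global threshold. It is a simplified stand-in for the cited IDL Canny pipeline (which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding); the function name and threshold value are illustrative assumptions.

```python
import numpy as np

def gradient_edges(image, threshold=0.25):
    """Simplified edge extraction: gradient magnitude from central
    differences, thresholded globally. Not the full Canny algorithm;
    the threshold is illustrative only."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # central differences on the interior (borders left at zero)
    gx[1:-1, 1:-1] = (img[1:-1, 2:] - img[1:-1, :-2]) / 2.0
    gy[1:-1, 1:-1] = (img[2:, 1:-1] - img[:-2, 1:-1]) / 2.0
    mag = np.hypot(gx, gy)
    return mag > threshold
```

A full Canny implementation would additionally thin the edge map to one-pixel-wide curves and link weak edges to strong ones, which matters for tracing continuous lineaments.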

  15. Some Characteristics of Regular Fracture-lineament Global Network

    NASA Astrophysics Data System (ADS)

    Anokhin, Vladimir; Longinos, Biju

    2013-04-01

    Existence of a regular fracture-lineament global network (FLGN), or regmatic network, has been known for the Earth's landmasses in many regions. The authors made more than 20,000 measurements of the azimuths of lineaments and fractures on geographic, geologic and tectonic maps, for a number of regions and for the whole Earth. All data files were then subjected to factor analysis. We detect the existence of the FLGN on the ocean floor as well. A statistical relation between fracture and lineament directions was established, and control of large-scale lineaments by fractures within the FLGN was demonstrated. The predominant strike directions of the linear elements of the FLGN are 0-10°, 80-90°, 30-60° and 120-150°. The FLGN has the attribute of fractality: same-direction linear elements of the FLGN alternate with a constant step at a given scale. The FLGN was formed under a continuous stress that extends at least through the entire thickness of the Earth's crust and has persisted at least throughout the Phanerozoic. This stress was generated by a complex of forces in the crust: rotational, pulsating and possibly others. All of these forces are symmetric with respect to the Earth's rotation axis, and some also with respect to the equator. The Earth's rotation and pulsation are thus the main factors behind these forces and, hence, behind the formation of the fracture-lineament network. The FLGN determines the most favorable places for fracturing, the formation of fracture-controlled landforms, volcanic and seismic processes (geohazards), fluid flow and ore formation (minerals).
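
Predominant strike directions like those quoted above come from binning azimuth measurements into axial (0-180°) classes, as for a rose diagram. A minimal sketch of that binning step, with an assumed function name and bin width:

```python
import numpy as np

def strike_histogram(azimuths_deg, bin_width=10):
    """Bin lineament strike azimuths into 0-180 degree classes,
    as for a rose diagram. Strikes are axial data, so azimuths
    are folded modulo 180 before binning."""
    az = np.asarray(azimuths_deg, dtype=float) % 180.0
    edges = np.arange(0.0, 180.0 + bin_width, bin_width)
    counts, _ = np.histogram(az, bins=edges)
    return counts, edges
```

Peaks in `counts` then identify the dominant strike classes (e.g. the 0-10° and 80-90° maxima reported in the abstract).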

  16. Lineaments in basement terrane of the Peninsular Ranges, Southern California

    NASA Technical Reports Server (NTRS)

    Merifield, P. M. (Principal Investigator); Lamar, D. L.

    1974-01-01

    The author has identified the following significant results. ERTS and Skylab images reveal a number of prominent lineaments in the basement terrane of the Peninsular Ranges, Southern California. The major, well-known, active, northwest trending, right-slip faults are well displayed; northeast and west to west-northwest trending lineaments are also present. Study of large-scale airphotos followed by field investigations has shown that several of these lineaments represent previously unmapped faults. Pitches of striations on shear surfaces of the northeast and west trending faults indicate oblique slip movement; data are insufficient to determine the net-slip. These faults are restricted to the pre-Tertiary basement terrane and are truncated by the major northwest trending faults. They may have been formed in response to an earlier stress system. Not all lineaments observed in the space photography are due to faulting, and additional detailed geologic investigations are required to determine the nature of the unstudied lineaments, and the history and net-slip of fault-controlled lineaments.

  17. The Cottage Lake Aeromagnetic Lineament: A Possible Onshore Extension of the Southern Whidbey Island Fault, Washington

    USGS Publications Warehouse

    Blakely, Richard J.; Sherrod, Brian L.; Wells, Ray E.; Weaver, Craig S.; McCormack, David H.; Troost, Kathy G.; Haugerud, Ralph A.

    2004-01-01

    topographic and aeromagnetic lineaments cross the tunnel alignment. Some of the disturbance is likely tectonic in origin, although other explanations are possible. Some of the soil disturbance demonstrably predates the 15-13 ka Fraser glaciation of the Puget Lowland; other samples have inconclusive ages and may be younger. Subtle scarps in Pleistocene surfaces are visible on high-resolution lidar topography at a number of locations along the Cottage Lake aeromagnetic lineament. Collectively, the scarps are parallel to the trend of the aeromagnetic lineament and extend a total distance of 18 km. In the field, scarps exhibit 1 to 5 m of north-side-up offset. The scarps provide targets for future paleoseismic trenching studies to test the hypothesis that they have a tectonic origin.

  18. Vegetation Lineaments Near Pearblossom: Indicators of San Andreas Foreberg-Style Faulting?

    NASA Astrophysics Data System (ADS)

    Lynch, D. K.; Hudnut, K. W.; Jordan, F.

    2012-12-01

    associated with foreberg-like structures of the SAF. Florensov and Solonenko (1963) documented significant slip on similarly sized and oriented structures when mapping the 1957 Gobi-Altay earthquake surface ruptures. Prior investigations concluded that several of these lineaments represent bedrock units, but some lineaments (most notably V2, V3, and V4) appear to have caused recent topography owing to both lateral and vertical movement. Bedrock offsets appear to be equal to total offset of recent alluvial fan deposits; we infer that these are faults that have begun to move perhaps within the past few tens of thousands of years. Other VLs, (V9 & V10) appear to offset alluvial fan surfaces by several meters vertically, an observation that seems inconsistent with prior interpretations and appears to indicate recent movement. Further mapping is planned to determine the magnitude of any movement and its recency, whether or not the lineaments indicate foreberg-style structures, and if similar foreberg systems exist elsewhere along the SAF system. Such systems could accommodate nearly fault-normal compression by forming elongate doubly-plunging anticlinal folds that trend sub-parallel to the main through-going strike-slip system. These lineaments may indicate the presence of previously unrecognized hazards associated with secondary faults that subparallel the main strand of the SAF.

  19. Algorithms for lineaments detection in processing of multispectral images

    NASA Astrophysics Data System (ADS)

    Borisova, D.; Jelev, G.; Atanassov, V.; Koprinkova-Hristova, Petia; Alexiev, K.

    2014-10-01

    Satellite remote sensing is a universal tool for investigating the different areas of Earth and environmental sciences. The advancement of the implementation capabilities of optoelectronic devices, which are long-term-tested in the laboratory and the field and are mounted on board remote sensing platforms, further improves the capability of instruments to acquire information about the Earth and its resources at global, regional and local scales. With the start of new high-spatial and spectral resolution satellite and aircraft imagery, new applications for large-scale mapping and monitoring become possible. Integration with Geographic Information Systems (GIS) allows synergistic processing of the multi-source spatial and spectral data. Here we present the results of a joint project, DFNI I01/8, funded by the Bulgarian Science Fund, focused on algorithms for preprocessing and processing spectral data using methods of correction and of visual and automatic interpretation. The objects of this study are lineaments. Lineaments are basically line features on the Earth's surface which are a sign of geological structures. Geological lineaments usually appear on multispectral images as lines, edges or linear shapes resulting from color variations of the surface structures. The basic geometry of a line is its orientation, length and curvature. The detection of geological lineaments is an important operation in the exploration for mineral deposits, in the investigation of active fault patterns, in the prospecting of water resources, in protecting people, etc. In this study an integrated approach for detecting lineaments is applied. It combines the methods of visual interpretation of various geological and geographical indications in the multispectral satellite images, the application of spatial analysis in GIS and the automatic processing of the multispectral images by Canny

  20. Curriculum Materials 1983. A Guide to the Exhibit of Curriculum Materials at the ASCD Annual Conference (38th, Houston, Texas, March 5-8, 1983).

    ERIC Educational Resources Information Center

    Association for Supervision and Curriculum Development, Alexandria, VA.

    This catalog was prepared as a guide for conference participants to use while examining curriculum materials displayed during the 38th Association for Supervision and Curriculum Development annual conference. Materials are listed by the following subject areas: art; bilingual/English as a second language; career/vocational education; computer…

  1. On the reliability of manually produced bedrock lineament maps

    NASA Astrophysics Data System (ADS)

    Scheiber, Thomas; Viola, Giulio; Fredin, Ola; Jarna, Alexandra; Gasser, Deta; Łapinska-Viola, Renata

    2016-04-01

    Manual extraction of topographic features from digital elevation models (DEMs) is a commonly used technique to produce lineament maps of fractured basement areas. There are, however, several sources of bias which can influence the results. In this study we investigated the influence of the factors (a) scale, (b) illumination azimuth and (c) operator on remote sensing results by using a LiDAR (Light Detection and Ranging) DEM of a fractured bedrock terrain located in SW Norway. Six operators with different backgrounds in Earth sciences and remote sensing techniques mapped the same LiDAR DEM at three different scales and illuminated from three different directions. This resulted in a total of 54 lineament maps which were compared on the basis of number, length and orientation of the drawn lineaments. The maps show considerable output variability depending on the three investigated factors. In detail: (1) at larger scales, the number of lineaments drawn increases, the line lengths generally decrease, and the orientation variability increases; (2) linear features oriented perpendicular to the source of illumination are preferentially enhanced; (3) the reproducibility among the different operators is generally poor. Each operator has a personal mapping style and his/her own perception of what is a lineament. Consequently, we question the reliability of manually produced bedrock lineament maps drawn by one person only and suggest the following approach: In every lineament mapping study it is important to define clear mapping goals and design the project accordingly. Care should be taken to find the appropriate mapping scale and to establish the ideal illumination azimuths so that important trends are not underrepresented. In a remote sensing project with several persons involved, an agreement should be reached on a common view of the data, which can be achieved by the mapping of a small test area. The operators should be aware of the human perception bias. Finally

  2. Structural lineaments and neogene volcanism in southwestern Luzon

    NASA Astrophysics Data System (ADS)

    Wolfe, John A.; Self, Stephen

    The Philippine Islands have at least 15 active composite volcanoes and as many more that are fumarolic or dormant. About 20 calderas of Pleistocene age are known so far. Southwestern Luzon, one of the major volcanic districts of the country, contains three young composite volcanoes, four in a fumarolic stage, and over 200 vents of Pliocene-Pleistocene age within 150 km of Manila. There are three large calderas in this zone with a fourth a short distance south on Mindoro Island, plus four summit calderas. One of the most striking features is the Bataan Lineament, a chain of 27 volcanic vents, only one at present active, which marks the western side of the district. The main segment extends from Naujan caldera in the south (on Mindoro Island) on a strike of N31°W through Batangas Bay caldera, Mataas Na Gulod (a summit caldera), Corregidor Island (a small caldera), to Mount Mariveles and Mount Natib on the Bataan peninsula. With a bend of 30° at Mount Natib, the lineament continues northward for another 100 km, giving a total length of 320 km. Here it includes Mount Pinatubo, which is active, and several other vents. The Bataan Lineament is a volcanic arc, with perhaps some extensional element, above the subduction zone of the Manila Trench, dipping eastward under Luzon. Another major volcanic element is the Verde Island transform, which forms a zone across southwest Luzon, including 10 or more volcanoes. Activity extended from the lower Miocene with periodic eruptions until the late Pleistocene. Two volcanoes may be in a waning (fumarolic) stage and have thermal areas. Near the western end of this lineament, recent rifting may have occurred, and presently it is a zone of intense seismic activity. In the zone between the Bataan and Verde Island lineaments, several major volcanoes have developed including Laguna de Bay and Taal volcano-tectonic depressions. Large volume ignimbrite-forming eruptions may have taken place from Laguna de Bay caldera approximately 1.0 m

  3. Lineaments on Skylab photographs: Detection, mapping, and hydrologic significance in central Tennessee

    NASA Technical Reports Server (NTRS)

    Moore, G. K. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. Lineaments were detected on Skylab photographs by stereo viewing, projection viewing, and composite viewing. Sixty-nine percent more lineaments were found by stereo viewing than by projection, but segments of projection lineaments are longer; the total length of lineaments found by these two methods is nearly the same. Most Skylab lineaments consist of topographic depressions: stream channel alinements, straight valley walls, elongated swales, and belts where sinkholes are abundant. Most of the remainder are vegetation alinements. Lineaments are most common in dissected areas having a thin soil cover. Results of test drilling show: (1) the median yield of test wells on Skylab lineaments is about six times the median yield of all existing wells; (2) three out of seven wells on Skylab lineaments yield more than 6.3 l/s (110 gal/min); (3) low yields are possible on lineaments as well as in other favorable locations; and (4) the largest well yields can be obtained at well locations on Skylab lineaments that also are favorably located with respect to topography and geologic structure, and are in the vicinity of wells with large yields.

  4. Seismic Structure of the Jemez Lineament, New Mexico: Evidence for Heterogeneous Accretion and Extension in Proterozoic Time, and Modern Volcanism

    NASA Astrophysics Data System (ADS)

    Magnani, B. M.; Levander, A.; Miller, K. C.; Eshete, T.

    2001-12-01

    Southwestern North America is the result of a long and complex geologic history that spans from Proterozoic time, when assembly of the southwestern part of the continent began, to the present. Geological and geophysical observations suggest that the lithospheric structures produced during the assembly of the continent profoundly influenced subsequent modifications to the southwest. The primary objective of the Continental Dynamics of the Rocky Mountains (CD-ROM) project is to investigate the processes that have produced the present structure of the Rocky Mountains lithosphere and to understand the legacy of the Archean and Proterozoic accretionary boundaries. One of the enigmatic features investigated in CD-ROM is the Jemez Lineament, an 800 km long alignment of Tertiary volcanic centers that extends across Arizona and northern New Mexico following the southern margin of the Yavapai-Mazatzal Proterozoic terrane boundary. The Jemez lineament was the target of deep seismic reflection and crustal refraction profiling. The reflection profile extends about 150 km parallel to the front of the southern Rocky Mountains and crosses the southern edge of the lineament at a high angle near Las Vegas, NM. The seismic reflection profile exhibits a striking difference in reflectivity and crustal structure north and south of Las Vegas. To the north the profile images a broad, south-dipping, strongly reflecting ramp structure, traceable to depths of 30-32 km. The ramp is overprinted in places by a complex set of bright layered reflections. We interpret the south-dipping ramp as a suture formed during Proterozoic island arc accretion, and the bright reflections as recent Jemez lineament intrusives that have ponded at several crustal depths and are present locally in outcrop. We speculate that the intrusives used the Proterozoic suture as a pathway through the crust to the surface. To the south, the entire middle crust is characterized by a 35 km wide antiform that may have

  5. Morphostructure analysis of Sapaya ancient volcanic area based lineament data

    NASA Astrophysics Data System (ADS)

    Massinai, Muhammad Altin; Kadir, Fitrah H.; Ismullah, Muh. Fawzy; Aswad, Sabrianto

    2016-05-01

    The morphostructure of the Sapaya ancient volcanic area has been analyzed using lineament models. Two methods of data acquisition were used: first, a field survey of the area; second, analysis of satellite images. The morphostructure of the Sapaya ancient volcano comprises a crater and caldera and shows an eroded cone morphology. Eruption directions from the Sapaya ancient volcano have been identified in the Jeneponto and Takalar regions, which show east-west and northeast-southwest structures. These eruptions also contributed to the character of the rivers in the Jenelata watershed, as indicated by the presence of tuffs, pillow lava, basalt, andesite, diorite, granodiorite, granite and gabbro.

  6. Lineament analysis of mineral areas of interest in Afghanistan

    USGS Publications Warehouse

    Hubbard, Bernard E.; Mack, Thomas J.; Thompson, Allyson L.

    2012-01-01

    The purpose of this report and accompanying GIS data is to provide lineament maps that give one indication of areas that warrant further investigation for optimal bedrock water-well placement within 24 target areas for mineral resources (Peters and others, 2011). These data may also support the identification of faults related to modern seismic hazards (for example, Wheeler and others, 2005; Ruleman and others, 2007), as well as support studies attempting to understand the relationship between tectonic and structural controls on hydrothermal fluid flow, subsequent mineralization, and water-quality issues near mined and unmined mineral deposits (for example, Eppinger and others, 2007).

  7. Landsat and field studies link structural lineaments and mineralization in St. Francois, Missouri

    SciTech Connect

    Robertson, C.E.

    1984-07-01

    Late-stage Precambrian granites in the St. Francois Mountains are among the most uraniferous in North America. The St. Francois province has potential for uranium mineralization of economic importance, especially in the later differentiates. Structural lineaments and circular features displayed on images produced by electronic data processing of Landsat multispectral scanner data may be related to late-stage intrusives with uranium potential. Strong north-south lineaments and associated circular and arcuate features may correspond to major weaknesses in the earth's crust along which fracturing, faulting, and volcanism have occurred. The strike of the lineaments transects the older dominant northwest-southeast and northeast-southwest structural grain of the region. This, and the remarkable preservation of Precambrian structures of volcanic origin, indicate that the lineaments may be related to late-stage, uranium- and thorium-rich intrusives. The Ironton lineament, a major north-south lineament, is closely related spatially to Precambrian iron and manganese deposits. Field work along the Ironton lineament suggests that it is related to a late period of Precambrian volcanism and that structural deformation along the lineament continued into early Paleozoic time. Areas of faulting, shearing, and hydrothermal alteration affecting both Precambrian and Paleozoic rocks have been located. A circular feature along the lineament has been found to be centered by a manganese deposit of possible hydrothermal origin.

  8. Crack azimuths on Europa: The G1 lineament sequence revisited

    USGS Publications Warehouse

    Sarid, A.R.; Greenberg, R.; Hoppa, G.V.; Brown, D.M., Jr.; Geissler, P.

    2005-01-01

    The tectonic sequence in the anti-jovian area covered by regional mapping images from Galileo's orbit E15 is determined from a study of cross-cutting relationships among lineament features. The sequence is used to test earlier results from orbit G1, based on lower resolution images, which appeared to display a progressive change in azimuthal orientation over about 90° in a clockwise sense. Such a progression is consistent with expected stress variations that would accompany plausible non-synchronous rotation. The more recent data provide a more complete record than the G1 data did. We find that to fit the sequence into a continual clockwise change of orientation would require at least 1000° (> 5 cycles) of azimuthal rotation. If due to non-synchronous rotation of Europa, this result implies that we are seeing back further into the tectonic record than the G1 results had suggested. The three sets of orientations found by Geissler et al. now appear to have been spaced out over several cycles, not during a fraction of one cycle. While our more complete sequence of lineament formation is consistent with non-synchronous rotation, a statistical test shows that it cannot be construed as independent evidence. Other lines of evidence do support non-synchronous rotation, but azimuths of crack sequences do not show it, probably because only a couple of cracks form in a given region in any given non-synchronous rotation period. © 2004 Elsevier Inc. All rights reserved.
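    The tallying of cumulative clockwise rotation from an ordered crack sequence can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the convention of folding undirected crack trends modulo 180° are our assumptions:

    ```python
    def cumulative_clockwise_rotation(azimuths_deg):
        """Total clockwise azimuth change (degrees) needed to fit an
        ordered crack sequence into a monotonic clockwise progression.
        Azimuths are undirected lineament trends, folded modulo 180."""
        total = 0.0
        for prev, nxt in zip(azimuths_deg, azimuths_deg[1:]):
            # Smallest clockwise step (0-180 deg) from one crack to the next.
            total += (prev - nxt) % 180.0
        return total
    ```

    For a sequence such as [170, 120, 70, 20, 150] the required rotation already exceeds a full 180° cycle, which is the sense in which a long sequence can demand multiple cycles of non-synchronous rotation.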

  9. Lineaments of Texas - possible surface expressions of deep-seated phenomena. Final report

    SciTech Connect

    Woodruff, C.M. Jr.; Caran, S.C.

    1984-04-01

    Lineaments were identified on 51 Landsat images covering Texas and parts of adjacent states in Mexico and the United States. A method of identifying lineaments was designed so that the findings would be consistent, uncomplicated, objective, and reproducible. Lineaments denoted on the Landsat images were traced onto 1:250,000-scale work maps and then rendered cartographically on maps representing each of the 51 Landsat images at a scale of 1:500,000. At this stage more than 31,000 lineaments were identified, covering significant areas outside of Texas. In preparing the final lineament map of Texas at 1:1,000,000 scale from the 1:500,000-scale maps, all features lying outside Texas were eliminated, as was repetition among features perceived by individual workers. The maps were checked for cultural features before the mosaic of 51 individual map sheets was reduced and cartographically fitted to a single map base. Lineaments that were partly colinear but had different end points were combined into a single lineament trace with the combined length of the two or more colinear lineaments. Each lineament was checked to determine its validity according to our definition. The features were edited again to eliminate processing artifacts within the image itself, as well as representations of cultural features (fencelines, roads, and the like) and geomorphic patterns unrelated to bedrock structure. Thus the more than 31,000 lineaments originally perceived were reduced to the approximately 15,000 presented on the 1:1,000,000-scale map. Interpretations of the lineaments are presented.
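    The merging of partly colinear lineaments into a single trace of combined length can be sketched geometrically: project all four endpoints onto the shared trend and keep the extremes. This is an illustrative sketch under our own assumptions (segments represented as endpoint pairs, colinearity already established); it is not the report's compilation procedure:

    ```python
    import math

    def merge_colinear(seg_a, seg_b):
        """Merge two colinear lineament segments, each given as
        ((x1, y1), (x2, y2)), into one trace spanning their combined
        extent along the shared direction. A real map compilation would
        first test colinearity within a tolerance; assumed here."""
        (ax1, ay1), (ax2, ay2) = seg_a
        dx, dy = ax2 - ax1, ay2 - ay1
        norm = math.hypot(dx, dy)
        ux, uy = dx / norm, dy / norm  # unit vector along seg_a
        pts = [seg_a[0], seg_a[1], seg_b[0], seg_b[1]]
        # Signed distance of each endpoint along the shared direction.
        t = [(p[0] - ax1) * ux + (p[1] - ay1) * uy for p in pts]
        lo, hi = min(t), max(t)
        return ((ax1 + lo * ux, ay1 + lo * uy),
                (ax1 + hi * ux, ay1 + hi * uy))
    ```

    Two overlapping east-west segments, for example, collapse to a single trace running from the westernmost to the easternmost endpoint.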

  10. Revealing topographic lineaments through IHS enhancement of DEM data. [Digital Elevation Model

    NASA Technical Reports Server (NTRS)

    Murdock, Gary

    1990-01-01

    Digital elevation model (DEM) data from northwestern Nevada are enhanced by intensity-hue-saturation (IHS) processing of slope (dip), aspect (dip direction), and elevation to reveal subtle topographic lineaments that may not be obvious in the unprocessed data. This IHS method of lineament identification was applied to a mosaic of 12 square degrees using a Cray Y-MP8/864. Square arrays from 3 x 3 to 31 x 31 points were tested, as well as several different slope enhancements. When relatively few points are used to fit the plane, lineaments of various lengths are observed, and a mechanism for lineament classification is described. An area encompassing the gold deposits of the Carlin trend, extending from Rain in the southeast to Midas in the northwest, is investigated in greater detail. The orientation and density of lineaments may be determined on the gently sloping pediment surface as well as in the more steeply sloping ranges.
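    The IHS mapping described above can be sketched on a small grid: aspect drives hue, slope drives saturation, and elevation drives intensity. This is a minimal illustration of the idea only; the published method fits planes over windows up to 31 x 31 points, whereas this sketch uses a simple 3 x 3 finite-difference stencil, and the channel assignments are our assumption:

    ```python
    import colorsys, math

    def ihs_composite(dem, cell=1.0):
        """Map a DEM (list of rows of elevations) to RGB pixels via an
        IHS-style composite: aspect -> hue, slope -> saturation,
        elevation -> intensity. Edge cells use one-sided differences."""
        nrows, ncols = len(dem), len(dem[0])
        zmin = min(min(r) for r in dem)
        zmax = max(max(r) for r in dem)
        rgb = [[None] * ncols for _ in range(nrows)]
        for i in range(nrows):
            for j in range(ncols):
                i0, i1 = max(i - 1, 0), min(i + 1, nrows - 1)
                j0, j1 = max(j - 1, 0), min(j + 1, ncols - 1)
                dzdx = (dem[i][j1] - dem[i][j0]) / (((j1 - j0) * cell) or 1)
                dzdy = (dem[i1][j] - dem[i0][j]) / (((i1 - i0) * cell) or 1)
                slope = math.atan(math.hypot(dzdx, dzdy))  # radians
                aspect = math.atan2(dzdy, -dzdx) % (2 * math.pi)
                hue = aspect / (2 * math.pi)          # dip direction
                sat = min(slope / (math.pi / 2), 1.0)  # steepness
                val = (dem[i][j] - zmin) / ((zmax - zmin) or 1)
                rgb[i][j] = colorsys.hsv_to_rgb(hue, sat, val)
        return rgb
    ```

    On a flat DEM the saturation collapses to zero everywhere, so any lineament signal in the composite comes entirely from slope and aspect contrasts.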

  11. Lineaments, planetary jointing, and the regmatic system: Main points of the phenomena and terminology

    NASA Astrophysics Data System (ADS)

    Koronovsky, N. V.; Bryantseva, G. V.; Goncharov, M. A.; Naimark, A. A.; Kopaev, A. V.

    2014-03-01

    The meaning of the term lineament, the modes of their recognition, and the lineament patterns dramatically varying in interpretations by different authors, are considered. It has been shown that obligatory identification of lineaments with faults and fracture zones is mostly implied rather than corroborated by evidence. The mapping of faults in platform regions based on lineaments requires distinct geological substantiation, otherwise lineament patterns remain devoid of sense. The regmatic system of supposedly tectonic dislocations cannot form on the surface of the rotating Earth, because the operating forces are too weak. Taking into account drift of continents and their rotation in the geological past, one hardly can speak of an ancient and inherited fault network.

  12. A tomographic glimpse of the upper mantle source of magmas of the Jemez lineament, New Mexico

    USGS Publications Warehouse

    Spence, W.; Gross, R.S.

    1990-01-01

    To infer spatial distributions of partial melt in the upper mantle source zones for the Rio Grande rift and the Jemez lineament, the lateral variations of P wave velocity in the upper mantle beneath these features have been investigated. Teleseismic P wave delays recorded at a 22-station network were used to perform a damped least squares, three-dimensional inversion for these lateral variations. The results indicate that a large magmatic source zone exists beneath the Jemez lineament but not beneath the Rio Grande rift, implying that the volcanic potential of the Jemez lineament continues to greatly exceed that of the Rio Grande rift. The magmatic source zones of the Jemez lineament are modeled as due to clockwise rotation of the Colorado Plateau about a pole in northeastern Colorado. This rotation caused extension of the lithosphere beneath the Jemez lineament, permitting concentration there of partially melted rock in the upper mantle. -from Authors

  13. Probabilistic constraints on structural lineament best fit plane precision obtained through numerical analysis

    NASA Astrophysics Data System (ADS)

    Seers, Thomas D.; Hodgetts, David

    2016-01-01

    Understanding the orientation distribution of structural discontinuities using the limited information afforded by their trace in outcrop has considerable application, with such analysis often providing the basis for geological modelling. However, eigen analysis of 3D structural lineaments mapped at decimetre to regional scales indicates that discontinuity best fit plane estimates from such datasets tend to be unreliable. Here, the relationship between digitised lineament vertex geometry (coplanarity/collinearity) and the reliability of their estimated best fitting plane is investigated using Monte Carlo experiments. Lineaments are modelled as the intersection curve between two orthonormally oriented fractional Brownian surfaces representing the outcrop and discontinuity plane. Commensurate to increasing lineament vertex collinearity (K), systematic decay in estimated pole vector precision is observed from these experiments. Pole vector distributions are circumferentially constrained around the axis of rotation set by the end nodes of the synthetic lineaments, reducing the rotational degrees of freedom of the vertex set from three to one. Vectors on the unit circle formed perpendicular to this arbitrary axis of rotation conform to von Mises (circular normal) distributions tending towards uniform at extreme values of K. This latter observation suggests that whilst intrinsically unreliable, confidence limits can be placed upon orientation estimates from 3D structural lineaments digitised from remotely sensed data. A probabilistic framework is introduced which draws upon the statistical constraints obtained from our experiments to provide robust best fit plane estimates from digitised 3D structural lineaments.
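    The eigen analysis behind these best-fit-plane estimates can be sketched with a singular value decomposition of the centred vertex cloud: the pole (plane normal) is the direction of least variance, and the ratio of the two largest singular values gives a collinearity measure. The function name and this specific K definition are our assumptions for illustration; the paper's exact formulation may differ:

    ```python
    import numpy as np

    def fit_plane(vertices):
        """Best fit plane through digitised lineament vertices (N x 3).
        Returns (unit pole vector, K). K is the ratio of the two largest
        singular values of the centred cloud: K >> 1 flags near-collinear
        vertices, for which the pole estimate is unreliable."""
        pts = np.asarray(vertices, dtype=float)
        centred = pts - pts.mean(axis=0)
        _, s, vt = np.linalg.svd(centred, full_matrices=False)
        pole = vt[-1]  # normal = direction of least variance
        K = s[0] / s[1] if s[1] > 0 else np.inf
        return pole, K
    ```

    For well-spread coplanar vertices the pole is recovered exactly; as the vertices approach a straight line, K grows and the pole direction becomes poorly constrained, matching the precision decay described above.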

  14. PREFACE: 1st International Workshop on Theoretical and Computational Physics: Condensed Matter, Soft Matter and Materials Physics & 38th National Conference on Theoretical Physics

    NASA Astrophysics Data System (ADS)

    2014-09-01

    This volume contains selected papers presented at the 38th National Conference on Theoretical Physics (NCTP-38) and the 1st International Workshop on Theoretical and Computational Physics: Condensed Matter, Soft Matter and Materials Physics (IWTCP-1). Both the conference and the workshop were held from 29 July to 1 August 2013 at the Pullman hotel, Da Nang, Vietnam. The IWTCP-1 was a new activity of the Vietnamese Theoretical Physics Society (VTPS) organized in association with the 38th National Conference on Theoretical Physics (NCTP-38), the most well-known annual scientific forum dedicated to the dissemination of the latest developments in the field of theoretical physics within the country. The IWTCP-1 was also an External Activity of the Asia Pacific Center for Theoretical Physics (APCTP). The overriding goal of the IWTCP is to provide an international forum for scientists and engineers from academia to share ideas, problems, and solutions relating to recent advances in theoretical physics as well as in computational physics. The main IWTCP motivation is to foster scientific exchanges between the Vietnamese theoretical and computational physics community and worldwide scientists, as well as to promote a high standard of research and education activities for young physicists in the country. About 110 participants from 10 countries took part in the conference and the workshop. Four invited talks, 18 oral contributions, and 46 posters were presented at the conference. In the workshop we had one keynote lecture and 9 invited talks presented by international experts in the fields of theoretical and computational physics, together with 14 oral and 33 poster contributions. The proceedings were edited by Nguyen Tri Lan, Trinh Xuan Hoang, and Nguyen Ai Viet. We would like to thank all invited speakers, participants and sponsors for making the conference and the workshop successful. Nguyen Ai Viet Chair of NCTP-38 and IWTCP-1

  15. Geological and geophysical signatures of the Jemez lineament: a reactivated Precambrian structure

    SciTech Connect

    Aldrich, M.J. Jr.; Ander, M.E.; Laughlin, A.W.

    1981-01-01

    The Jemez lineament (N52°E) is one of several northeast-trending lineaments that traverse the southwestern United States. It is defined by a 500-km-long alignment of late Cenozoic volcanic fields extending southwest from at least the Jemez Mountains in north-central New Mexico to the San Carlos-Peridot volcanic field in east-central Arizona. Geochronologic data from Precambrian basement rocks indicate that the lineament is approximately coincident with a boundary between Precambrian crustal provinces. Characteristics of the lineament are high heat flow (>104.5 mW/m²), an attenuated seismic velocity zone from 25 to 140 km depth, and an upwarp of the crustal electrical conductor inferred from magnetotelluric studies. The high electrical conductivity is probably caused by the presence of interstitial magma in the rocks of the mid-to-upper crust. The average electrical strike within the Precambrian basement is N60°E, supporting a relationship between the Precambrian structural grain and the Jemez lineament. The geological and geophysical data suggest that the lineament is a structural zone that extends deep into the lithosphere and that its location was controlled by an ancient zone of weakness in the Precambrian basement. Ages of late Cenozoic volcanic rocks along the lineament show no systematic geographic progression, thus indicating that a mantle plume was not responsible for the alignment of the volcanic fields. Most of the faults, dikes, and cinder cone alignments along the lineament trend approximately N25°E and N5°W. These trends may represent Riedel shears formed by left-lateral transcurrent movement along the structure. Less common trends of cinder cone alignments and dikes are approximately N65°W and N85°W. The diversity in orientation indicates that the magnitudes of the two horizontal principal stresses within the lineament have been approximately equal for at least the last 5 m.y.

  16. Probabilistic Constraints on Structural Lineament Best Fit Plane Precision Obtained through Numerical Analysis

    NASA Astrophysics Data System (ADS)

    Seers, Thomas; Hodgetts, David

    2015-04-01

    Recent advances in geological trace extraction procedures now enable three dimensional representations of structural lineaments to be delineated from digital elevation models (DEMs), orthophotos and mesh based surface reconstructions. The principle advantage of obtaining higher dimensional representations of lineaments from remotely sensed data is that they allow best fit plane estimates to be made for their corresponding discontinuities which cannot be obtained from conventional bi-dimensional datasets. These orientation estimates yield deterministic constraints upon structural architecture and enable spatially dependent discontinuity network properties, such as volumetric intensity and connectivity, known to govern key rock mass physical properties (i.e. strength, elastic modulus and permeability) to be assessed. However, the eigen characteristics of 3D structural lineaments mapped at decimetre to regional scales indicate that discontinuity plane estimates from such datasets tend to be unreliable. Here, we investigate the relationship between digitised lineament vertex geometry (coplanarity/collinearity) and the reliability of their estimated best fitting plane using Monte Carlo experiments. Lineaments are modelled as the intersection curve between two orthonormally oriented fractional Brownian surfaces representing the outcrop and discontinuity plane. Commensurate to increasing lineament vertex collinearity (K), systematic decay in estimated pole vector precision is observed from our experiments. Pole vector distributions are circumferentially constrained around the axis of rotation set by the end nodes of the synthetic lineaments, effectively reducing the rotational degrees of freedom of the vertex set from three to one. Vectors on the unit circle formed perpendicular to this arbitrary axis of rotation conform to von Mises (circular normal) distributions, only transforming to uniform at extreme values of K. This latter observation suggests that whilst intrinsically unreliable, confidence limits can be placed upon orientation estimates from 3D structural lineaments digitised from remotely sensed data.

  17. Analysis of the Tectonic Lineaments in the Ganiki Planitia (V14) Quadrangle, Venus

    NASA Technical Reports Server (NTRS)

    Venechuk, E. M.; Hurwitz, D. M.; Drury, D. E.; Long, S. M.; Grosfils, E. B.

    2005-01-01

    The Ganiki Planitia quadrangle, located between the Atla Regio highland to the south and the Atalanta Planitia lowland to the north, is deformed by many tectonic lineaments which have been mapped previously but have not yet been assessed in detail. As a result, neither the characteristics of these lineaments nor their relationship to material unit stratigraphy is well constrained. In this study we analyze the orientation of extensional and compressional lineaments in all non-tessera areas in order to begin characterizing the dominant tectonic stresses that have affected the region.

  18. Cleats and their relation to geologic lineaments and coalbed methane potential in Pennsylvanian coals in Indiana

    USGS Publications Warehouse

    Solano-Acosta, W.; Mastalerz, Maria; Schimmelmann, A.

    2007-01-01

    Cleats and fractures in Pennsylvanian coals in southwestern Indiana were described, statistically analyzed, and subsequently interpreted in terms of their origin, relation to geologic lineaments, and significance for coal permeability and coalbed gas generation and storage. These cleats can be interpreted as the result of superimposed endogenic and exogenic processes. Endogenic processes are associated with coalification (i.e., matrix dehydration and shrinkage), while exogenic processes are mainly associated with larger-scale phenomena, such as tectonic stress. At least two distinct generations of cleats were identified on the basis of field reconnaissance and microscopic study: a first generation of cleats that developed early on during coalification and a second generation that cuts through the previous one at an angle that mimics the orientation of the present-day stress field. The observed parallelism between early-formed cleats and mapped lineaments suggests a well-established tectonic control during early cleat formation. Authigenic minerals filling early cleats represent the vestiges of once open hydrologic regimes. The second generation of cleats is characterized by less prominent features (i.e., smaller apertures) with a much less pronounced occurrence of authigenic mineralization. Our findings suggest a multistage development of cleats that resulted from tectonic stress regimes that changed orientation during coalification and basin evolution. The coals studied are characterized by a macrocleat distribution similar to that of well-developed coalbed methane basins (e.g., Black Warrior Basin, Alabama). Scatter plots and regression analyses of meso- and microcleats reveal a power-law distribution between spacing and cleat aperture. The same distribution was observed for fractures at microscopic scale. Our observations suggest that microcleats enhance permeability by providing additional paths for migration of gas out of the coal matrix, in addition to
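    The reported power-law relation between cleat spacing and aperture is the kind of result obtained by linear regression in log-log space. A minimal sketch under our own assumptions (function name, model form spacing = c * aperture**b, positive measurements) rather than the authors' analysis:

    ```python
    import math

    def powerlaw_fit(apertures, spacings):
        """Least-squares fit of spacing = c * aperture**b in log-log
        space, the usual test for a power-law relation between cleat
        aperture and spacing. Returns (b, c); inputs must be positive."""
        xs = [math.log(a) for a in apertures]
        ys = [math.log(s) for s in spacings]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        # Ordinary least-squares slope and intercept on the log-log axes.
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        c = math.exp(my - b * mx)
        return b, c
    ```

    A straight line on the log-log scatter plot (constant b) is what distinguishes a power law from, say, an exponential spacing distribution.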

  19. A tomographic glimpse of the upper mantle source of magmas of the Jemez lineament, New Mexico

    NASA Technical Reports Server (NTRS)

    Spence, William; Gross, Richard S.

    1990-01-01

    In this study, the lateral variations of the P wave velocity as a function of depth were examined for the regions of the Rio Grande rift and the Jemez lineament, to infer spatial distributions of partial melt in the upper mantle source zones for the rift and the lineament. The method involved measurements of teleseismic P wave delays at a 22-station network followed by performing a damped least-squares three-dimensional inversion for these lateral variations. Results indicate that, directly beneath the Jemez lineament (but not beneath the Rio Grande rift), there is a 100-km-wide 1-2-percent low-P-wave-velocity feature in the depth range of 50-160 km. This implies that the volcanic potential of the Jemez lineament continues to greatly exceed that of the Rio Grande rift.

  20. Correlation of LANDSAT lineaments with Devonian gas fields in Lawrence County, Ohio

    NASA Technical Reports Server (NTRS)

    Johnson, G. O.

    1981-01-01

    In an effort to locate sources of natural gas in Ohio, the fractures and lineaments in Black Devonian shale were measured by: (1) field mapping of joints, swarms, and fractures; (2) stereophotointerpretation of geomorphic lineaments with precise photoquads; and (3) by interpreting the linear features on LANDSAT images. All results were compiled and graphically represented on 1:250,000 scale maps. The geologic setting of Lawrence County was defined and a field fracture map was generated and plotted as rose patterns at the exposure site. All maps were compared, contrasted, and correlated by superimposing each over the other as a transparency. The LANDSAT lineaments had significant correlation with the limits of oil and gas producing fields. These limits included termination of field production as well as extensions to other fields. The lineaments represent real rock fractures with zones of increased permeability in the near surface bedrock.
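    The rose patterns used to compare the fracture maps above amount to binning undirected azimuths into angular sectors. A small sketch of that binning step, with the function name and the modulo-180 folding convention assumed for illustration:

    ```python
    def rose_counts(azimuths_deg, bin_width=10):
        """Bin undirected fracture azimuths (degrees) into rose-diagram
        sectors over 0-180. Trends are folded modulo 180 so that, e.g.,
        350 deg and 170 deg fall in the same sector."""
        nbins = 180 // bin_width
        counts = [0] * nbins
        for az in azimuths_deg:
            counts[int((az % 180) // bin_width) % nbins] += 1
        return counts
    ```

    Plotting the counts as sector lengths at each exposure site gives the rose patterns that were superimposed and compared across the joint, photoquad, and LANDSAT maps.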

  1. Smiles: a fortran-77 program for sequential machine interpreted lineament extraction using digital images

    NASA Astrophysics Data System (ADS)

    Raghavan, Venkatesh; Wadatsumu, Kiyoshi; Masumoto, Shinji

    1994-03-01

    A FORTRAN-77 program, the Sequential Machine Interpreted Lineament Extraction System (SMILES), is presented, which is useful for automatic and manual extraction of lineament information from digital images. SMILES is a stand-alone package composed of several modules which perform the functions of image display, lineament information extraction, data management, output generation, and preliminary analysis. The program architecture and application results are described. The program has been tested using LANDSAT MSS data of southwestern Japan. The Directional Segment Detection Algorithm (DSDA) also has been applied to shaded relief maps generated from digital elevation data of the same area. Interpretation of aerial photograph stereo pairs reveals that the machine interpreted features show photogeological expressions that are characteristic of geologic lineaments.

  2. Comparison of Skylab and Landsat Lineaments with Joint Orientations in Northcentral Pennsylvania. [on Allegheny Plateau

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator); Kowalik, W. S.

    1975-01-01

    The author has identified the following significant results. The histogram peaks of lineaments mapped from a Skylab photograph at a scale of 1:517,000 lie subparallel, within 20 deg, to major shale joints and coal cleats on part of the Allegheny Plateau. The Landsat lineaments, mapped at 1:989,000, are biased by illumination and scan line directions. While there is an illumination bias in the Skylab photograph, its direction does not coincide with the main transverse lineament trend, thus providing an independent assessment of the illumination direction bias. The coincidence in direction of the linear features regardless of scale suggests a mechanical relationship between joints, fracture traces, and lineaments which is more consistent with a tensional model than a shear model of origin.

  3. Different emplacement mechanisms of the granitoids from the Sanabria-Viana do Bolo lineament (Iberian Massif, Spain)

    NASA Astrophysics Data System (ADS)

    Vegas, N.; Aranguren, A.; Cuevas, J.; Tubía, J. M.

    2003-04-01

    The Ollo de Sapo Domain (OSD), a large antiform that delineates the Ibero-Armorican Arc within the northern part of the Iberian Massif in NW Spain, is pierced by several granitic plutons. The pre-, syn- or post-kinematic emplacement of such plutons is a current issue of debate. Here, we report a structural study, based on field and anisotropy of magnetic susceptibility data, of three granitic plutons located near the southern termination of the OSD. The structure of the OSD reflects the superposition of vertical folds with crenulation cleavage over older recumbent folds with slate cleavage or schistosity. The process of crustal shortening responsible for these structures ended in the development of dextral and sinistral strike-slip shear zones parallel to the vertical folds. The granites studied, the Veiga, Pradorramisquedo and Sanabria plutons, define a N130°E-trending lineament parallel to the structural grain of the Ollo de Sapo Domain. The granite lineament is located on a negative gravity anomaly (-60 mGal), suggesting the existence of buried granites. The Sanabria granites are conformable with the schistosity of the country rocks, and are deformed by N130°E-trending dextral shear zones. In contrast, the Pradorramisquedo and the Veiga granites cut the structures of the metamorphic rocks. These observations suggest that the Sanabria granite is syn-kinematic and the remaining two plutons are post-kinematic. However, structural data show that the emplacement of the three plutons was related to a dextral transpressional strike-slip shear zone which is buried below the Pradorramisquedo and the Veiga plutons. The structural differences shown by these granites are related to the control of local conditions on the migration and emplacement of melts.

  4. Geologic evaluation of major Landsat lineaments in Nevada and their relationship to ore districts

    USGS Publications Warehouse

    Rowan, Lawrence C.; Wetlaufer, Pamela Heald

    1979-01-01

    Analysis of diverse geologic, geophysical, and geochemical data shows that eight major lineament systems delineated in Landsat images of Nevada are morphological and tonal expressions of substantially broader structural zones. Southern Nevada is dominated by the 175 km-wide northwest-trending Walker Lane, a 150 km-wide zone of east-trending lineament systems consisting of the Pancake Range, Warm Springs, and Timpahute lineament systems, and a 125 km-wide belt of northeast-trending faults termed the Pahranagat lineament system. Northern Nevada is dominated by the northeast-trending 75-200 km-wide Midas Trench lineament system, which is marked by northeasterly-oriented faults, broad gravity anomalies, and the Battle Mountain heat flow high; this feature appears to extend into central Montana. The Midas Trench system is transected by the Northern Nevada Rift, a relatively narrow zone of north-northwest-trending basaltic dikes that give rise to a series of prominent aeromagnetic highs. The northwest-trending Rye Patch lineament system, situated at the northeast boundary of the Walker Lane, also intersects the Midas Trench system and is characterized by stratigraphic discontinuities and alignment of aeromagnetic anomalies. Field relationships indicate that all the lineament systems except for the Northern Nevada Rift are conjugate shears formed since mid-Miocene time during extension of the Great Basin. Metallization associated with volcanism was widespread along these systems during the 17-6 m.y. period. However, these zones appear to have been established prior to this period, probably as early as Precambrian time. These lineament systems are interpreted to be old, fundamental structural zones that have been reactivated episodically as stress conditions changed in the western United States. Many metal districts are localized within these zones as magma rose along the pre-existing conduits.

  5. Summary of the panel session at the 38th Annual Surgical Symposium of the Association of VA Surgeons: what is the big deal about frailty?

    PubMed

    Anaya, Daniel A; Johanning, Jason; Spector, Seth A; Katlic, Mark R; Perrino, Albert C; Feinleib, Jessica; Rosenthal, Ronnie A

    2014-11-01

    Owing to the phenomenon known as "global graying," elderly-specific conditions, including frailty, will become more prominent among patients undergoing surgery. The concept of frailty, its effect on surgical outcomes, and its assessment and management were discussed during the 38th Annual Surgical Symposium of the Association of VA Surgeons panel session entitled "What's the Big Deal about Frailty?" and held in New Haven, Connecticut, on April 7, 2014. The expert panel discussed the following questions and topics: (1) Why is frailty so important? (2) How do we identify the frail patient prior to the operating room? (3) The current state of the art: preoperative frail evaluation. (4) Preoperative interventions for frailty prior to operation: do they work? (5) Intraoperative management of the frail patient: does anesthesia play a role? (6) Postoperative care of the frail patient: is rescue the issue? This special communication summarizes the panel session topics and provides highlights of the expert panel's discussions and relevant key points regarding care for the geriatric frail surgical patient. PMID:25230137

  6. Quantifying and cataloguing small-scale motions along lineaments on Europa

    NASA Astrophysics Data System (ADS)

    Vetter, J.; Kattenhorn, S. A.

    2004-12-01

    Small-scale normal and shear motions (approaching the limits of resolution; < a few 100 m) along lineaments on Europa are not well constrained. Previous work has not differentiated, quantified, and catalogued the small-scale motions along the lengths of lineaments of varying morphologies. In their characterizations, such work principally utilized rigid-block reconstructions which do not address any variations in motion along the lengths of lineaments. Also, these investigations typically did not consider the effect of relatively small amounts of fault-orthogonal motion, or the apparent offsets caused by convergence, if present. For example, the existence of lateral offsets along a lineament does not explicitly require that any strike-slip motion occurred at all, as offsets could purely be the result of convergence. Using a technique which utilizes characteristic changes in the distribution of geometric relations of crosscutting features, small-scale motions can be inferred (whether strike-slip or fault-orthogonal), within the limits of image resolution. By measuring the total offset, the separation, and alpha (the clockwise angle between a lineament and a crosscut feature) for every crosscut feature along the length of the lineament (i.e., a range of alpha values), the actual motions can be resolved. Specifically, by using these measured quantities and a series of trigonometric equations, opening, convergence, actual strike-slip, or a combination of strike-slip and opening/convergence can be determined. Actual motions along lineaments become particularly apparent in graphs of alpha versus separation, which display different patterns depending on the displacement ratio (DR: the ratio of opening/convergence to strike-slip), which can be estimated from the graph. The accuracy of this technique is limited to DR < 3. If a crosscut feature is approximately orthogonal to a slipped lineament, the observable strike-slip component of motion would have been unaffected by any fault-orthogonal motion.
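    The decomposition of separations into strike-slip and fault-orthogonal components can be illustrated with a small least-squares sketch. We assume the standard separation relation d = s + h·cot(alpha) for a marker crosscut at clockwise angle alpha (this specific form, and the function name, are our assumptions for illustration, not the authors' equations):

    ```python
    import math

    def resolve_motion(alphas_deg, separations):
        """Least-squares estimate of strike-slip s and fault-orthogonal
        opening h from lateral separations of crosscut markers, assuming
        separation d = s + h * cot(alpha) for a marker at clockwise
        angle alpha to the lineament. Needs a range of alpha values."""
        cots = [1.0 / math.tan(math.radians(a)) for a in alphas_deg]
        n = len(cots)
        # 2x2 normal equations for the linear model d = s + h * cot(alpha).
        sc, scc = sum(cots), sum(c * c for c in cots)
        sd, scd = sum(separations), sum(c * d for c, d in zip(cots, separations))
        det = n * scc - sc * sc
        h = (n * scd - sc * sd) / det
        s = (sd - h * sc) / n
        return s, h
    ```

    Note that at alpha = 90° the cot term vanishes, echoing the abstract's closing point: a marker orthogonal to the lineament records the strike-slip component regardless of any fault-orthogonal motion.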

  7. Lineaments derived from analysis of linear features mapped from Landsat images of the Four Corners region of the Southwestern United States

    USGS Publications Warehouse

    Knepper, Daniel H.

    1982-01-01

    Linear features are relatively short, distinct, non-cultural linear elements mappable on Landsat multispectral scanner images (MSS). Most linear features are related to local topographic features, such as cliffs, slope breaks, narrow ridges, and stream valley segments that are interpreted as reflecting directed aspects of local geologic structure including faults, zones of fracturing (joints), and the strike of tilted beds. 6,050 linear features were mapped on computer-enhanced Landsat MSS images of 11 Landsat scenes covering an area from the Rio Grande rift zone on the east to the Grand Canyon on the west and from the San Juan Mountains, Colorado, on the north to the Mogollon Rim on the south. Computer-aided statistical analysis of the linear feature data revealed 5 statistically important trend intervals: (1) N. 10° W.-N. 16° E., (2) N. 35°-72° E., (3) N. 33°-59° W., (4) N. 74°-83° W., and (5) N. 89°-90° W. and N. 89°-90° E. Subsequent analysis of the distribution of the linear features indicated that only the first three trend intervals are of regional geologic significance. Computer-generated maps of the linear features in each important trend interval were prepared, as well as contour maps showing the relative concentrations of linear features in each trend interval. These maps were then analyzed for patterns suggestive of possible regional tectonic lines. 20 possible tectonic lines, or lineaments, were interpreted from the maps. One lineament is defined by an obvious change in overall linear feature concentrations along a northwest-trending line cutting across northeastern Arizona. Linear features are abundant northeast of the line and relatively scarce to the southwest. The remaining 19 lineaments represent the axes of clusters of parallel linear features elongated in the direction of the linear feature trends. Most of these lineaments mark previously known structural zones controlled by linear features in the Precambrian basement or show newly recognized relationships to

  8. Topographic and Air-Photo Lineaments in Various Locations Related to Geothermal Exploration in Colorado

    DOE Data Explorer

    Zehner, Richard

    2012-02-01

    Title: Topographic and Air-Photo Lineaments in Various Locations Related to Geothermal Exploration in Colorado Tags: Colorado, lineaments, air-photo, geothermal Summary: These line shapefiles trace apparent topographic and air-photo lineaments in various counties in Colorado. They were made to identify possible fault and fracture systems that might be conduits for geothermal fluids, as part of a DOE reconnaissance geothermal exploration program. Description: Geothermal fluids commonly utilize faults and fractures in competent rocks as conduits for fluid flow. Geothermal exploration involves finding areas of high near-surface temperature gradients, along with a suitable “plumbing system” that can provide the necessary permeability. Geothermal power plants can sometimes be built where temperature and flow rates are high. This line shapefile is an attempt to use desktop GIS to delineate possible fault and fracture orientations and locations in highly prospective areas prior to an initial site visit. Geochemical sampling and geologic mapping could then be centered around these possible faults and fractures. To do this, georeferenced topographic maps and aerial photographs were utilized in an existing GIS, using ESRI ArcMap 10.0 software. The USA_Topo_Maps and World_Imagery map layers were chosen from the GIS Server at server.arcgisonline.com, using a UTM Zone 13 NAD27 projection. The line shapefile was then constructed over features that appeared to be through-going structural lineaments in both the aerial-photograph and topographic layers, taking care to avoid manmade features such as roads, fence lines, and utility rights-of-way. Still, it is unknown what these lineaments, if real, actually represent. Although the shapefiles are arranged by county, not all areas within any county have been examined for lineaments. Work was focused on either satellite thermal infrared anomalies, known hot springs or wells, or other evidence of geothermal systems

  9. Lineament mapping of vertical fractures of rock outcrops by remote sensing images

    NASA Astrophysics Data System (ADS)

    Matarrese, Raffaella; Masciopinto, Costantino

    2016-04-01

    The monitoring of hydrological processes within the vadose zone is usually difficult, especially in the presence of compact rock subsoil. The ability to recognize the trend of structural lineaments in fractured systems has important implications for understanding water-infiltration processes, especially where groundwater flow is strongly affected by faults and fractures that constitute the preferential pathways for water flux. This study aims to detect fracture lineaments on fractured rock formations from CASI hyperspectral airborne VNIR images, with a spatial resolution of 60 cm, collected during November 2014. Lineaments detected at this high resolution were compared with fracture lineaments detected in a Landsat 8 image acquired at the same time as the CASI acquisition. The method processed several remotely sensed images at different spatial resolutions and produced numerous lineament maps representing the vertical and sub-vertical fractures of the investigated area. The study was applied to the fractured limestone outcrop of the Murgia region (Southern Italy), where the rock formation hosts a deep groundwater body that supplies freshwater for drinking and irrigation purposes. The number of fractures allowed a rough estimate of the vertical average hydraulic conductivity of the rock outcrop. This value was compared with field-saturated rock hydraulic conductivity measurements derived from large-ring infiltrometer tests carried out on the same outcrop.
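The rough conductivity estimate mentioned at the end of this abstract can be illustrated with the parallel-plate ("cubic law") model for a set of vertical fractures. This is a common textbook model, not necessarily the authors' actual procedure, and the aperture and spacing values below are hypothetical.

```python
# A rough bulk-conductivity estimate of the kind mentioned above,
# using the parallel-plate ("cubic law") model for a set of smooth
# vertical fractures. Aperture and spacing are hypothetical; this is
# a sketch of the general technique, not the paper's computation.

RHO = 1000.0      # water density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2
MU = 1.0e-3       # water dynamic viscosity, Pa*s

def bulk_vertical_K(aperture_m, spacing_m):
    """Equivalent vertical hydraulic conductivity (m/s) of parallel
    smooth fractures with the given aperture and spacing."""
    return RHO * G * aperture_m ** 3 / (12.0 * MU * spacing_m)

# e.g. 0.1 mm apertures spaced 0.5 m apart
K = bulk_vertical_K(aperture_m=1e-4, spacing_m=0.5)  # m/s
```

Because conductivity scales with the cube of aperture, doubling the aperture raises the estimate eightfold, which is why such estimates are only order-of-magnitude checks against infiltrometer measurements.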

  10. Application of satellite photographic and MSS data to selected geologic and natural resource problems in Pennsylvania. 1: Lineaments and mineral occurrences in Pennsylvania. 2: Relation of lineaments to sulfide deposits: Bald Eagle Mountain, Centre County, Pennsylvania. 3: Comparison of Skylab and LANDSAT lineaments with joint orientations in north central Pennsylvania

    NASA Technical Reports Server (NTRS)

    Kowalik, W. S.; Gold, D. P.; Krohn, M. D.

    1975-01-01

    Those metallic mineral occurrences in Pennsylvania are reported which lie near lineaments mapped from LANDSAT-1 satellite imagery and verified from Skylab photography where available. The lineaments were categorized by degree of expression and type of expression; the mineral occurrences were classified by host rock age, mineralization type, and value. The accompanying tables and figure document the mineral occurrences geographically associated with lineaments and serve as a base for a mineral exploration model.

  11. Orbital-science investigation: Part G: lineaments that are artifacts of lighting

    USGS Publications Warehouse

    Howard, Keith A.; Larsen, Bradley R.

    1972-01-01

    Many Apollo 15 orbital photographs, particularly those taken at low Sun-elevation angles, reveal grid patterns of lineaments. In some circumstances, the grid pattern is present in areas where structural control seems unlikely. For example, in an oblique view (fig. 25-52), the ejecta blankets of two fresh impact craters seem to have two intersecting sets of lineaments. Because previous studies of impact craters indicate that concentric and radial trends are commonly present, this pattern is unexpected. A crater-saturated surface on which a faint grid of linear markings can be discerned is shown in figure 25-53. Again, this pattern is unexpected for a surface that has been exposed to random impacts. In both situations, the azimuths of the main lineaments are approximately symmetrical to the direction of the Sun.

  12. Quantification of Geologic Lineaments by Manual and Machine Processing Techniques. [Landsat satellites - mapping/geological faults

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Moik, J. G.; Shoup, W. C.

    1975-01-01

    The effect of operator variability and subjectivity in lineament mapping was studied, along with methods to minimize or eliminate these problems through several machine preprocessing methods. Mapped lineaments of a test landmass were used and the results were compared statistically. The total number of fractures mapped by the operators and their average lengths varied considerably, although comparison of lineament directions revealed some consensus. A summary map (785 linears) produced by overlaying the maps generated by the four operators shows that only 0.4 percent were recognized by all four operators, 4.7 percent by three, 17.8 percent by two, and 77 percent by one operator. Similar results were obtained in comparing these results with those of another independent group. This large amount of variability suggests a need for the standardization of mapping techniques, which might be accomplished by a machine-aided procedure. Two methods of machine-aided mapping were tested, both simulating directional filters.
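The consensus tally reported above (0.4% by all four operators, 4.7% by three, and so on) amounts to counting, for each linear on the summary map, how many operators recognized it. A minimal sketch, with a hypothetical summary map in place of the study's 785 linears:

```python
from collections import Counter

# Sketch of the consensus tally described above: for each linear on an
# overlaid summary map, count how many operators mapped it, then report
# the percentage of linears at each consensus level. Data hypothetical.

def consensus_percentages(linear_to_operators):
    by_count = Counter(len(ops) for ops in linear_to_operators.values())
    total = sum(by_count.values())
    return {k: 100.0 * v / total for k, v in sorted(by_count.items())}

linears = {                 # hypothetical summary map of 8 linears
    "A": {1, 2, 3, 4},      # recognized by all four operators
    "B": {1, 3},
    "C": {2},
    "D": {4},
    "E": {1, 2, 3},
    "F": {3},
    "G": {2, 4},
    "H": {1},
}
pct = consensus_percentages(linears)
```

With real data, the skew toward single-operator linears (77% in the study) is what motivates the machine-aided standardization the abstract proposes.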

  13. Detection of Lineaments in Denizli Basin of Western Anatolia Region Using Bouguer Gravity Data

    NASA Astrophysics Data System (ADS)

    Altinoğlu, Figen F.; Sari, Murat; Aydin, Ali

    2015-02-01

    The aim of this study is to investigate the geostructural boundaries of the eastern part of Western Anatolia. To achieve this, three methods were used: the horizontal gradient, the analytic signal, and the tilt angle. With the application of each method to the Bouguer gravity data, common lineaments were determined using the maxima of the horizontal-gradient and analytic-signal maps and the zero contours of the tilt-angle maps. The basement topography was also produced using the Parker-Oldenburg algorithm. The resulting lineaments were then compared with the active fault map of the region. The results suggest that although there is good agreement between the current work and earlier work, four new lineament regions were also detected. We conclude that this work will lead to a better understanding of Anatolian geostructure and its impact on larger-scale geological processes.
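Two of the edge detectors named above can be sketched numerically. This is a minimal illustration on a synthetic grid, not the authors' code: the "vertical derivative" here is a crude stand-in (a real workflow computes it spectrally), and the grid values are invented.

```python
import numpy as np

# Sketch of two of the edge detectors named in the abstract, applied
# to a gridded Bouguer anomaly. Grid and vertical derivative here are
# synthetic stand-ins; real studies compute the vertical derivative
# spectrally from the survey grid.

def horizontal_gradient(g, dx=1.0, dy=1.0):
    """Total horizontal gradient; its maxima track source edges."""
    gy, gx = np.gradient(g, dy, dx)
    return np.hypot(gx, gy)

def tilt_angle(g, dgdz, dx=1.0, dy=1.0):
    """Tilt = arctan(vertical derivative / total horizontal gradient);
    its zero contour tracks source edges (lineaments)."""
    return np.arctan2(dgdz, horizontal_gradient(g, dx, dy))

# synthetic anomaly: a tanh step in x, mimicking a density-contrast edge
x = np.linspace(-10, 10, 41)
g = np.tile(np.tanh(x), (41, 1))            # Bouguer-like step
dgdz = np.tile(-0.5 * np.tanh(x), (41, 1))  # stand-in vertical derivative

t = tilt_angle(g, dgdz)
```

For this step anomaly, the horizontal-gradient maximum and the tilt-angle zero contour both fall on the edge (x = 0), which is exactly the coincidence the study exploits to pick common lineaments.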

  14. A new artefacts resistant method for automatic lineament extraction using Multi-Hillshade Hierarchic Clustering (MHHC)

    NASA Astrophysics Data System (ADS)

    Šilhavý, Jakub; Minár, Jozef; Mentlík, Pavel; Sládek, Ján

    2016-07-01

    This paper presents a new method of automatic lineament extraction which removes the 'artefacts effect' associated with raster-based analysis. The core of the proposed Multi-Hillshade Hierarchic Clustering (MHHC) method incorporates a set of variously illuminated and rotated hillshades in combination with hierarchic clustering of the derived 'protolineaments'. The algorithm also classifies lineaments as positive or negative. MHHC was tested in two different territories in the Bohemian Forest and the Central Western Carpathians. An original vector-based algorithm was developed to compare the proximity of individual lineaments. Its use confirms the compatibility of manual and automatic extraction and their similar relationships to structural data in the study areas.
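The multi-hillshade idea above rests on rendering a DEM under several illumination azimuths, so that lineaments of any strike are shadowed in at least one rendering. The sketch below uses one common hillshade formulation and a synthetic DEM; the hierarchic-clustering step of MHHC itself is not reproduced.

```python
import numpy as np

# Sketch of the multi-hillshade idea: render a DEM under several
# illumination azimuths. One common hillshade formulation; DEM is
# synthetic and the MHHC clustering of 'protolineaments' is omitted.

def hillshade(dem, azimuth_deg, altitude_deg=45.0, cell=1.0):
    az = np.radians(360.0 - azimuth_deg + 90.0)   # to math convention
    alt = np.radians(altitude_deg)
    gy, gx = np.gradient(dem, cell, cell)
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# synthetic DEM: a ridge along the array diagonal (NW-SE in image
# convention, rows increasing southward), so NE (45 deg) illumination
# is roughly perpendicular to strike and shadows it most strongly
y, x = np.mgrid[0:64, 0:64]
dem = 10.0 * np.exp(-((x - y) ** 2) / 50.0)

# variously illuminated hillshades, as the method prescribes
shades = [hillshade(dem, az) for az in (0, 45, 90, 135, 270, 315)]
```

Illumination perpendicular to a lineament's strike produces strong light-dark contrast, while along-strike illumination nearly hides it, which is why a single-azimuth hillshade biases extraction and multiple azimuths are combined.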

  15. Further Tests of the Seismo-Lineament Method for Recognizing Seismogenic Faults at the Ground Surface

    NASA Astrophysics Data System (ADS)

    Millard, M. A.; Campbell, R. D.; Lindsay, R. D.; Secrest, S. H.; Cronin, V. S.

    2007-05-01

    The importance of locating the surface trace of faults that can produce earthquakes is self-evident, particularly in California where avoidance of ground-rupture hazards is a legal requirement. We have developed a method that utilizes earthquake focal mechanism solutions coupled with field reconnaissance to locate the surface trace of probable seismogenic faults. We project a fault-plane solution from the boundaries of the uncertainty region around the earthquake focus to the surface of a DEM to define a seismo-lineament -- a zone within which the surface trace of the fault associated with the earthquake is likely to be located. Field work is then undertaken to evaluate the hypothesis that a seismogenic fault exists within the seismo-lineament. If a fault is found within the seismo-lineament, the fault’s orientation and direction of slip are statistically compared with the orientation and slip data from the fault-plane solution to complete the spatial correlation of the fault with the earthquake. To evaluate the effectiveness of this procedure, we selected 6 historic earthquakes that caused fault displacement of the ground surface and used the seismo-lineament method to indicate the probable location of the surface trace of the fault. Earthquakes analyzed in this study include the Parkfield (2004, M6), Denali (2002, M7.9), Hector Mine (1999, M7.1), Superstition Hills (1987, M6.2 and M6.6), and Borah Peak (1983, M7.3) earthquakes. In all 6 test cases, the actual ground-rupture zone associated with the main shock was located within the seismo-lineament. In addition to using focal-mechanism solutions associated with the main shocks to define seismo-lineaments, we have used data from several major aftershocks associated with these events. Seismo-lineaments defined by aftershocks also coincided with the surface trace of the seismogenic fault. 
Based on results from this study, the seismo-lineament method is likely to be useful in identifying probable seismogenic faults
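The geometric core of the seismo-lineament method, projecting a nodal plane upward from the hypocenter, can be sketched for the simplified case of a level ground surface (the actual method projects the plane and its uncertainty bounds onto a DEM). All values below are hypothetical.

```python
import math

# Sketch of the seismo-lineament geometry: project a focal-mechanism
# nodal plane upward from the hypocenter to a LEVEL surface. The real
# method projects the plane plus uncertainty bounds onto a DEM; the
# event parameters here are hypothetical.

def surface_trace_offset(depth_km, dip_deg):
    """Horizontal up-dip distance (km) from the epicenter at which a
    plane through the hypocenter reaches the surface."""
    return depth_km / math.tan(math.radians(dip_deg))

def trace_point(epicenter_xy, strike_deg, dip_deg, depth_km, along_km=0.0):
    """One point on the plane's surface trace (x = east, y = north, km).
    Right-hand rule: dip direction is strike + 90 deg, so the trace
    lies up-dip, at strike - 90 deg from the epicenter."""
    d = surface_trace_offset(depth_km, dip_deg)
    updip = math.radians(strike_deg - 90.0)
    strike = math.radians(strike_deg)
    x0, y0 = epicenter_xy
    return (x0 + d * math.sin(updip) + along_km * math.sin(strike),
            y0 + d * math.cos(updip) + along_km * math.cos(strike))

# hypothetical event: 10 km deep, plane striking due north, dipping 45E
pt = trace_point((0.0, 0.0), strike_deg=0.0, dip_deg=45.0, depth_km=10.0)
```

A shallower dip pushes the predicted trace farther from the epicenter, and the uncertainty in focal depth and dip is what widens the prediction from a line into the seismo-lineament zone searched in the field.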

  16. A study of the Tyrone-Mount Union lineament by remote sensing techniques and field methods

    NASA Technical Reports Server (NTRS)

    Gold, D. P. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. This study has shown that subtle variations in fold axes, fold form, and stratigraphic thickness can be delineated. Many of the conclusions were based on extrapolation in similitude to different scales. A conceptual model was derived for the Tyrone-Mount Union lineament. In this model, the lineament is the morphological expression of a zone of fracture concentrations which penetrated basement rocks and may have acted as a curtain to regional stresses or as a domain boundary between uncoupled adjacent crustal blocks.

  17. Significance of operator variation and the angle of illumination in lineament analysis on synoptic images. [LANDSAT geological investigations

    NASA Technical Reports Server (NTRS)

    Siegal, B. S.; Short, N. M.

    1977-01-01

    The significance of operator variation and the angle of illumination in acquired imagery is analyzed for lineament analysis. Five operators analyzed a LANDSAT image and four photographs of a plastic relief map illuminated at a low angle from varying directions of the Prescott, Arizona region. Significant differences were found in both number and length of the lineaments recognized by the different investigators for the images. The actual coincidence of lineaments recognized by the investigators for the same image is exceptionally low. Even the directional data on lineament orientation is significantly different from operator to operator and from image to image. Cluster analysis of the orientation data displays a clustering by operators rather than by images. It is recommended that extreme caution be taken before attempting to compare different investigators' results in lineament analysis.

  18. Natural fractures and lineaments of the east-central Greater Green river basin. Topical report, May 1992-August 1995

    SciTech Connect

    Jaworowski, C.; Christiansen, G.E.; Grout, M.A.; Heasler, H.P.; Iverson, W.P.

    1995-08-01

    This topical report addresses the relationship of natural fractures and lineaments to hydrocarbon production of the east-central Greater Green River Basin. The tight gas sands of the Cretaceous Mesaverde Formation are the primary focus of this work. IER and USGS researchers have (1) demonstrated that east-northeast and northeast-trending regional fractures and lineaments are important to hydrocarbon production; (2) recognized the east-northeast regional joint set near two horizontal wells (Champlin 254 Amoco B 2-H and Champlin 320 C-1A-H) in the Washakie and Great Divide basins, respectively; (3) related Cretaceous Almond Formation thickness and facies to northeast-trending faults; (4) developed a program to automatically derive lineaments from small linear features; (5) associated oil and gas production data with east-northeast and northeast-trending lineaments and linear features; and (6) digitally compared lineaments with potentiometric maps of the Mesaverde and Frontier formations.

  19. Attempt at correlating Italian long lineaments from LANDSAT-1 satellite images with some geological phenomena. Possible use in geothermal energy research

    NASA Technical Reports Server (NTRS)

    Barbier, E.; Fanelli, M.

    1975-01-01

    By utilizing the images from the LANDSAT-1, in the spectral band 0.8-1.1 microns (near infrared), a photomosaic was obtained of Italian territory. From this mosaic the field of long lineaments was drawn, corresponding to fractures of the earth crust more than 100 km long. The relationship between lineaments, hot springs, volcanic areas, and earthquake epicenters is verified. There is a clear connection between long lineaments and hot springs: 78% of the springs are located on one or more lineaments, and the existence of hot lineaments was observed. A slightly weaker, but still significant, connection exists between the Pliocene-Quaternary volcanic areas and long lineaments. The relationship between earthquakes and long lineaments can only be verified in some cases. The lineaments which can be related to earthquakes have little or no connection with the other phenomena.

  20. Application of SEASAT-1 Synthetic Aperture Radar (SAR) data to enhance and detect geological lineaments and to assist LANDSAT landcover classification mapping. [Appalachian Region, West Virginia

    NASA Technical Reports Server (NTRS)

    Sekhon, R.

    1981-01-01

    Digital SEASAT-1 synthetic aperture radar (SAR) data were used to enhance linear features and extract geologically significant lineaments in the Appalachian region. Comparison of lineaments thus mapped with an existing lineament map based on LANDSAT MSS images shows that appropriately processed SEASAT-1 SAR data can significantly improve the detection of lineaments. Merged MSS and SAR data sets were more useful for lineament detection and landcover classification than LANDSAT or SEASAT data alone. About 20 percent of the lineaments plotted from the SEASAT SAR image did not appear on the LANDSAT image. About 6 percent of minor lineaments or parts of lineaments present in the LANDSAT map were missing from the SEASAT map. Improvement in landcover classification (acreage and spatial estimation accuracy) was attained by using MSS-SAR merged data. The areal estimation of residential/built-up and forest categories was improved. Accuracy in estimating the agricultural and water categories was slightly reduced.

  1. Seismotectonics of transverse lineaments in the eastern Himalaya and its foredeep

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Manoj

    1984-11-01

    The Himalayan collision zone and the Burmese subduction zone lie in rather close vicinity across northeast India. Their possible interaction produces complex tectonics and a high level of seismicity in the region. Several prominent transverse lineaments across the eastern Himalaya and its foredeep appear to be seismically active. Quite a few of the active lineaments are regionally extensive, even transgressing the Bengal basin. A focal mechanism study indicates that active lineaments are either normal or strike-slip faults. For the highly active Shillong-Mikir massif (a fragmented portion of the Indian shield) this correlation is less clear, although it appears that some of the activity may be associated with the northeasterly lineaments crossing the massif into the eastern Himalayan foredeep. This presumably results due to drag experienced by the Indian lithosphere near its margins under the Himalayan and Burmese arcs. The Assam Valley, which is the common foredeep for both the arcs, is remarkably aseismic though it is surrounded by active regions. Mainly thrusting mechanisms characterize the earthquakes which originate from the Himalayan and Burmese arcs adjoining the Valley. A model of basement reactivation below the Valley is proposed in order to explain the style of tectonic deformation and current seismicity in the Himalayan and Burmese orogens adjoining the Valley.

  2. The Cottage Lake Lineament, Washington: Onshore Extension of the Southern Whidbey Island Fault?

    NASA Astrophysics Data System (ADS)

    Blakely, R. J.; Weaver, C. S.; Sherrod, B. L.; Troost, K. G.; Haugerud, R. A.; Wells, R. E.; McCormack, D. H.

    2003-12-01

    The northwest-striking southern Whidbey Island fault zone (SWIF) is reasonably well expressed by borehole data, marine seismic surveys, and potential-field anomalies on Whidbey Island and beneath surrounding waterways. Johnson et al. (1996) described evidence for Quaternary movement on the SWIF, suggested the fault zone is capable of a M 7 earthquake, and projected three fault strands onto the mainland between the cities of Seattle and Everett. Evidence for this onshore projection is scant, however, and the exact location of the SWIF in this populated region is unknown. Four linear, northwest-striking magnetic anomalies on the mainland may help address this issue. All of the anomalies are low in amplitude and best illuminated in residual magnetic fields. The most prominent of the magnetic anomalies extends at least 15 km, is on strike with the SWIF on Whidbey Island, and passes near Cottage Lake, about 15 km south of downtown Everett. The magnetic anomaly is associated with linear topography along its entire length, but spectral analysis indicates that the source of the anomaly lies principally beneath the topographic surface and extends to depths greater than 2 km. The anomalies are likely created by northwest-trending, faulted and folded Tertiary volcanic and sedimentary rocks of the Cascade foothills, which rise from beneath the Quaternary lowland fill to the southeast of the SWIF. High-resolution Lidar topography provided by King County shows subtle scarps cutting the latest Pleistocene glaciated surface at two locations along the magnetic anomaly; scarps are parallel to the anomaly trend. In the field, one scarp has 2 to 3 m of north-side-up offset; paleoseismic trench excavations are planned for Fall 2003 to determine their nature and history. Preliminary examination of boreholes, recently acquired as part of an ongoing sewer tunnel project, show anomalous stratigraphic and structural disturbances in the area of the magnetic anomalies. 
Analyses are underway

  3. Hypothesis on the origin of lineaments in the LANDSAT and SLAR images of precambrian soil in the low Contas River Valley (southern Bahia)

    NASA Technical Reports Server (NTRS)

    Liu, C. C. (Principal Investigator); Rodrigues, J. E.

    1984-01-01

    Examination of LANDSAT and SLAR images in southern Bahia reveals numerous linear features, which are grouped in five sets based on their trends: N65 degrees E, N70 degrees W, N40 degrees E, N45 degrees W, and NS/N15 degrees E. Owing to their topographic expressions, distributive patterns, spacing between individual lineaments and their mutual relationships, the lineament sets of N65 degrees E and N70 degrees W, as well as the sets of N40 degrees E and N45 degrees W, are considered as two groups of conjugate shear fractures; the former is older and is always cut by the latter. Their conjugate shear angles are 45 degrees and 85 degrees, and their bisector lines are approximately in east-west and north-south directions, respectively. According to Badgeley's arguments on conjugate shear angles, the former conjugate shear fractures would be caused by: (1) vertical movements, with the bisector of their conjugate angle parallel to the long axis of horsting or folding, or (2) a compressive force in the east-west direction under conditions of low confining pressure and temperature.

  4. The Ikom-Mamfe basin, Nigeria: A study of fracture and mineral vein lineament trends and Cretaceous deformations

    NASA Astrophysics Data System (ADS)

    Oden, M. I.; Egeh, E. U.; Amah, E. A.

    2015-01-01

    The Ikom-Mamfe basin is approximately a 130 km long, east-west abutment onto the eastern flank of the lower Benue trough of Nigeria and extends westwards into Cameroon. Two hundred and six fracture lineaments were analyzed in the Nigerian sector of this basin. They vary in length from 0.5 to 23.75 km, with the most frequently occurring fracture length being about 2.25 km. The most prominent fracture sets have NE-SW and NW-SE orientations, while less prominent patterns are in the NNE-SSW and ESE-WNW directions. NW-SE and NNE-SSW fracture sets are interpreted as "ac" extension fractures from two different deformation episodes, while NE-SW and ESE-WNW sets are "bc" tensile fractures parallel to the axes of F1 and F2 folds, respectively. This implies two deformation episodes in this basin, with the earlier one producing the NE-SW (F1) fold axes, exactly as in the Benue trough. Two prominent mineral vein trends in the basin are the NW-SE and NNE-SSW sets, in which minerals are loaded in "ac" extension fractures. The orientations, lengths and frequency of these lineaments should help in differentiating their ages. The less prominent veins are in the NE-SW and ESE-WNW directions, which are in the "bc" tensile fractures. Early Cretaceous sediments are characterized by NW-SE major and NE-SW minor sets of veins, while the late Cretaceous sequence is characterized by NNE-SSW major and ESE-WNW minor, mainly barite, veins. More than 70% of the barite samples tested gave specific gravity values of 4.2 and above, which is the range specified by the American Petroleum Institution (API) as drilling mud additive or weighting agent. Other vein-filling minerals in this basin are lead ore (galena), zinc ore (sphalerite), pyrite and amethyst, which are altogether subsidiary to barite mineralization.

  5. Late Cenozoic structure and correlations to seismicity along the Olympic-Wallowa Lineament, northwest United States

    USGS Publications Warehouse

    Mann, G.M.; Meyer, C.E.

    1993-01-01

    Late Cenozoic fault geometry, structure, paleoseismicity, and patterns of recent seismicity at two seismic zones along the Olympic-Wallowa lineament (OWL) of western Idaho, northeast Oregon, and southeast Washington indicate limited right-oblique slip displacement along multiple northwest-striking faults that constitute the lineament. The southern end of the OWL originates in the Long Valley fault system and western Snake River Plain in western Idaho. The OWL in northeast Oregon consists of a wide zone of northwest-striking faults and is associated with several large, inferred, pull-apart basins. The OWL then emerges from the Blue Mountain uplift as a much narrower zone of faults in the Columbia Plateau known as the Wallula fault zone (WFZ). Structural relationships in the WFZ strongly suggest that it is a right-slip extensional duplex. -from Authors

  6. Analysis of pseudocolor transformations of ERTS-1 images of Southern California area. [geological faults and lineaments

    NASA Technical Reports Server (NTRS)

    Merifield, P. M. (Principal Investigator); Lamar, D. L.; Stratton, R. H.; Lamar, J. V.; Gazley, C., Jr.

    1974-01-01

    The author has identified the following significant results. Representative faults and lineaments, natural features on the Mojave Desert, and cultural features of the southern California area were studied on ERTS-1 images. The relative appearances of the features were compared on a band 4 and 5 subtraction image, its pseudocolor transformation, and pseudocolor images of bands 4, 5, and 7. Selected features were also evaluated in a test given students at the University of California, Los Angeles. Observations and the test revealed no significant improvement in the ability to detect and locate faults and lineaments on the pseudocolor transformations. With the exception of dry lake surfaces, no enhancement of the features studied was observed on the bands 4 and 5 subtraction images. Geologic and geographic features characterized by minor tonal differences on relatively flat surfaces were enhanced on some of the pseudocolor images.

  7. A study of structural lineaments in Pantanal (Brazil) using remote sensing data.

    PubMed

    Paranhos Filho, Antonio C; Nummer, Alexis R; Albrez, Edilce A; Ribeiro, Alisson A; Machado, Rômulo

    2013-09-01

    This paper presents a study of the structural lineaments of the Pantanal extracted visually from satellite images (CBERS-2B satellite, Wide Field Imager sensor, a free image available on the Internet) and a comparison with the structural lineaments of the Precambrian and Paleozoic rocks surrounding the Cenozoic Pantanal Basin. Using free software for satellite-image analysis, the photointerpretation showed that the NS, NE and NW directions observed on the Pantanal satellite images are the same as those recorded in the older rocks surrounding the basin, suggesting reactivation of these basement structural directions during the Quaternary. The Pantanal Basin thus has active tectonics, and its evolution seems to be linked to changes that occurred during the Andean subduction. PMID:24068083

  8. Lineament Domain of Regional Strike-Slip Corridor: Insight from the Neogene Transtensional De Geer Transform Fault in NW Spitsbergen

    NASA Astrophysics Data System (ADS)

    Cianfarra, P.; Salvini, F.

    2015-05-01

    Lineaments on regional scale images represent controversial features in tectonic studies. Published models explain the presence of lineament domains in most geodynamic environments as resulting from enhanced erosion along strikes normal to the upper crustal regional extension. Despite their success in many tectonic frameworks, these models fail to explain the existing lineament domains in the regional strike-slip corridors that separate regional blocks, including the transform faults. The present paper investigates the lineament distribution in such environments, and specifically presents the results from a study along the shear corridor of the De Geer Transform Fault in the North Atlantic, responsible for the separation and drift between northern Greenland and the Svalbard Archipelago since Oligocene times. The study spans from satellite image analysis and outcrop-scale investigations to a more regional analysis on a digital bathymetric model of the North Atlantic-Arctic Ocean. Lineaments were automatically detected in the spectral band 8 (0.52-0.9 μm) of a Landsat 7 image (15 m/pixel resolution). A total of 320 image lineaments were extracted from both the regional and the local scale investigations and statistically analyzed. Results from the multi-scalar lineament analyses revealed the existence of a main N-S lineament domain regionally persistent from the De Geer corridor to the western margin of northern Spitsbergen, where it relates to the youngest, post-Oligocene, tectonics observed onshore. This is confirmed by field observations showing that the N-S faults represent the youngest brittle deformation system and systematically cut the deformations associated with the building of the Tertiary West Spitsbergen fold and thrust belt. The N-S lineament domain is the result of the activity of a larger, regional scale tectonic feature, NW-SE oriented and responsible for the localized extension within its deformation corridor, the De Geer Transform

  9. Quantification of geologic lineaments by manual and machine processing techniques. [in Oklahoma and the Colorado Plateau

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Moik, J. G.; Shoup, W. C.

    1975-01-01

    A study was conducted to investigate the effect of operator variability and subjectivity in lineament mapping and to examine methods to minimize or eliminate these problems by use of several machine preprocessing methods. LANDSAT scenes from the Anadarko Basin of Oklahoma and the Colorado Plateau were analyzed as test cases. Four geologists mapped lineaments on an Anadarko Basin scene, using transparencies of MSS bands 4-7, and their results are compared statistically. The total number of fractures mapped by the operators and their average lengths varied considerably, although comparison of lineament directions revealed some consensus. A summary map (785 linears) produced by overlaying the maps generated by the four operators showed that only 0.4% were recognized by all four operators, 4.7% by three, 17.8% by two and 77% by one operator. Two methods of machine-aided mapping were tested, both simulating directional filters. One consists of computer (digital) processing of CCTs using edge enhancement algorithms; the other employs television (analog) scanning of an image transparency, which superimposes the original image and one offset in the direction of the scan line.

  10. Lineament Mapping for Groundwater Exploration Using Remotely Sensed Imagery in Different Terrains

    NASA Astrophysics Data System (ADS)

    Alonso, C. A.; Rios-Sanchez, M.; Gierke, J.

    2008-12-01

    Developing methods for analyzing remote sensing data to delineate fractures and discontinuities in hard-rock terrains could be used to improve well-siting strategies in regions where the primary sources of groundwater are bedrock wells. Groundwater recharge/discharge zones might also be detectable using remote sensing techniques that are sensitive to temperature, vegetation, and water content differences. Fracture networks and discontinuities are difficult to characterize because of inadequate information available from drilling records and conventional mapping. Most features, such as bedding planes, foliations, and faults, occur as linear features called lineaments, and these are sometimes visible in aerial photos and remotely sensed imagery. Bruning (2008) demonstrated how lineaments could be mapped using remotely sensed imagery by identifying patterns based on color, tone, and texture, and applied a suite of digital image processing techniques, such as principal component analysis and various indexing methods, to enhance the visibility of features from different data sources. This approach was developed for a relatively small volcanic area (4 km by 16 km) in Nicaragua. We are adapting this approach to study a regional system of multiple aquifers and created a lineament map of the Quito aquifer system in Ecuador using ASTER, RADARSAT, and Landsat images together with a Digital Elevation Model. The normalized difference vegetation index was used to detect fractures and faults that affect the occurrence of vegetation associated with proximity of groundwater. The normalized difference water index is sensitive to water content in vegetated areas.
In addition to applying the approach to a new and larger volcanic region, this method was used in an attempt to identify the cavity network in a karst terrain in a northern area of Puerto Rico, where groundwater is the main supply of drinking water for inhabitants and also contributes to base flow for surface-water bodies.
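The two indices named above have standard definitions (NDVI from near-infrared and red reflectance; the vegetation-water NDWI from near-infrared and shortwave-infrared). A minimal NumPy sketch, with reflectance values chosen purely for illustration:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index: (NIR - Red) / (NIR + Red)
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndwi(nir, swir):
    # Normalized difference water index for vegetation water content:
    # (NIR - SWIR) / (NIR + SWIR)
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

print(ndvi([0.45, 0.50], [0.10, 0.25]))  # higher values = denser vegetation
print(ndwi([0.40], [0.20]))
```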

  11. The tectonic evolution of the Transbrasiliano Lineament in northern Paraná Basin, Brazil, as inferred from aeromagnetic data

    NASA Astrophysics Data System (ADS)

    Curto, Julia B.; Vidotti, Roberta M.; Fuck, Reinhardt A.; Blakely, Richard J.; Alvarenga, Carlos J. S.; Dantas, Elton L.

    2014-03-01

    Data from six airborne magnetic surveys were compiled and analyzed to develop a structural interpretation for the Transbrasiliano Lineament in northern Paraná Basin, Brazil. Magnetic lineaments, interpreted to reflect geologic contacts and structures at different depths, were illuminated using the matched-filter technique applied to aeromagnetic anomalies. Field-based structural measurements generally support our magnetic analysis. We estimated three primary magnetic zones: (i) a zone of deep magnetic sources at 20 km depth, (ii) an intermediate basement zone at 6 km depth, and (iii) a shallow zone of near-surface geological features at 1.5 km depth. The deepest zone exhibits three major NE trending crustal discontinuities related to the Transbrasiliano Lineament, dividing the region into four geotectonic blocks. Anomalies associated with the intermediate zone indicate directional divergence of subsidiary structures away from the main Transbrasiliano Fault, which strikes N30°E. The shallow magnetic zone includes near-surface post-Brasiliano orogenic granites. Our analysis identified NE trending sigmoidal lineaments around these intrusions, indicating intense zones of deformation associated with probable shear structures. At the shallow depth zone, magnetic anomalies caused by Cretaceous alkaline intrusive bodies and basalts of the Serra Geral Formation are enhanced by the matched-filter method. These igneous bodies are related to extensional NW striking lineaments, and seismicity aligned along these lineaments suggests that they are currently being reactivated. Prior to flexural subsidence of the Paraná Basin, reactivation processes along transcurrent elements of the Transbrasiliano Lineament promoted extensional processes and formed initial Paraná Basin depocenters. Cretaceous and more recent sedimentation also correlates with reactivations of NE striking structures.

  12. Main structural lineaments of north-eastern Morocco derived from gravity and aeromagnetic data

    NASA Astrophysics Data System (ADS)

    El Gout, Radia; Khattach, Driss; Houari, Mohammed-Rachid; Kaufmann, Olivier; Aqil, Hicham

    2010-09-01

    Geophysical surveys (gravity and aeromagnetic) were initiated many years ago for economic investigations; more recently, the analysis of gravity and magnetic anomalies has become a powerful tool for geological mapping. The present study is based on various filtered maps of gravity and aeromagnetic anomalies of north-eastern Morocco (NEM) in order to highlight its main structural features. Filtering techniques such as horizontal gradient, upward continuation, and Euler deconvolution were used to map structural lineaments in NEM. The resulting structural map is consistent with many faults already recognized or inferred by traditional structural studies, and highlights new major faults by specifying their layout and dips.
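Of the filters listed, the horizontal gradient is the simplest to illustrate. A finite-difference sketch on a gridded anomaly (grid spacing and values hypothetical), where gradient maxima mark candidate lineaments:

```python
import numpy as np

def horizontal_gradient_magnitude(anomaly, dx=1.0, dy=1.0):
    """Total horizontal derivative of a gridded potential-field anomaly.
    Maxima tend to overlie steep lateral density or magnetization
    contrasts, i.e. candidate structural lineaments."""
    gy, gx = np.gradient(np.asarray(anomaly, float), dy, dx)
    return np.hypot(gx, gy)

# A north-south step in the anomaly produces a gradient ridge along the step.
step = np.array([[0.0, 0.0, 10.0, 10.0]] * 4)
print(horizontal_gradient_magnitude(step))
```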

  13. FORTRAN 4 programs for summarization and analysis of fracture trace and lineament patterns

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Lowman, P. D., Jr.

    1974-01-01

    Systematic and detailed analysis of lineament and fracture trace patterns has long been neglected because of the large number of observations involved in such an analysis. Three FORTRAN 4 programs were written to facilitate this manipulation. TRANSFORM converts the initial fracture map data into a format compatible with AZMAP, whose options allow repetitive manipulation of the data for optimization of the analysis. ROSE creates rose diagrams of the fracture patterns suitable for map overlays and tectonic interpretation. Examples are given and further analysis techniques using output from these programs are discussed.
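The binning step behind a ROSE-style rose diagram can be sketched as follows (bin width and azimuths are illustrative; this is not the original FORTRAN 4 code):

```python
import numpy as np

def rose_counts(azimuths_deg, bin_width=10):
    """Bin strike azimuths into sectors for a rose diagram. Azimuths are
    folded into 0-180 degrees, since strikes of A and A + 180 describe
    the same line."""
    az = np.asarray(azimuths_deg, float) % 180.0
    edges = np.arange(0, 180 + bin_width, bin_width)
    counts, _ = np.histogram(az, bins=edges)
    return edges, counts

edges, counts = rose_counts([5, 7, 95, 185])
print(counts)  # three of the four strikes fall in the 0-10 degree sector
```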

  14. Ancient to Modern History of the Jemez Lineament of North Central New Mexico

    NASA Astrophysics Data System (ADS)

    Magnani, M.; Miller, K. C.; Levander, A.; Karlstrom, K. E.

    2002-12-01

    The continental lithosphere of the southwestern US was derived from mantle sources and accreted to the proto-North American continent, more specifically the Archean-age Wyoming province, during a succession of island arc collisions between 1.8-1.6 Ga. Many of the assembly structures, that is, the sutures between accreted island arcs and oceanic fragments, have been difficult to identify from surface geology. Likewise the tectonic significance of major lineaments in today's lithosphere remains uncertain. The Jemez Lineament (JL), originally defined as an alignment of Tertiary-Quaternary volcanic centers, is a NE trending zone characterized by active uplift, low seismic velocity in the mantle, and repeated reactivation. It also coincides with the southern edge of a 300 km wide transition zone between the Yavapai (1.8-1.7 Ga) and Mazatzal (1.7-1.6 Ga) Proterozoic provinces. This study presents new deep crustal seismic reflection results across the Jemez Lineament of NM, and the Proterozoic Yavapai-Mazatzal terrane boundary. The crust is strongly reflective from the sedimentary column to its base at ~39-42 km. The seismic data show large-scale structures that we interpret as a Proterozoic bi-vergent orogen that extends for at least 170 km laterally and roots into the mantle at the southern edge of the Jemez lineament. A significant portion of this orogen is a 20km thick, south-vergent, crustal duplex occupying at least 50% of the crust south of the JL. North of the JL the depth migrated seismic images show a set of upper crustal north verging recumbent folds and related thrusts in the Proterozoic rocks as well as a south dipping reflectivity in the lower crust that we interpret as one of the north-verging ramps associated with the orogen. Subhorizontal high amplitude reflections across the JL at about 7-15km depth are interpreted to be extensive magmatic intrusions of uncertain age. Based on the seismic and geologic data, we argue that the JL represents both a

  15. A model for impact-induced lineament formation and porosity growth on Eros

    NASA Astrophysics Data System (ADS)

    Tonge, Andrew L.; Ramesh, K. T.; Barnouin, Olivier

    2016-03-01

    We investigate the impact history of the Near Earth Asteroid (NEA) 433 Eros using a new material model for brittle materials such as rocks, in which initial flaw distributions within the rock are explicitly defined to match what is known about flaw size distributions in rocks. These simulations are limited to the initial impact phase of the crater formation process and use a very crude approximation for the effect of the gravitational overburden pressure. Given these approximations, our simulations of this numerical approximation of Eros suggest that the currently observed bulk porosity of about 25% could be consistent with the porosity generated by the formation of the three largest craters observed on Eros, indicating that Eros could have started as an intact shard from a prior impact event. Further, we investigate the consequences of two possible internal flaw distributions for the asteroid: a "strong" flaw distribution with shorter crack lengths that are more difficult to activate during cratering; and a "weak" flaw distribution with longer flaws. The "strong" distribution produces localized deformation regions (lineaments) that are resolved by the simulations, while the "weak" distribution does not produce resolved localized features. For either distribution of internal flaws, the initial impact (assumed to be the Himeros-forming impact) shatters but does not disrupt the body, implying that simulations of asteroid mitigation approaches should assume that asteroids will behave like rubble piles. Subsequent impact events activate linear features created by prior impacts but only change the orientation of the lineament structure near the impact site.

  16. Relation of lineaments to sulfide deposits: Bald Eagle Mountain, Centre County, Pennsylvania

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator); Krohn, M. D.; Gold, D. P.

    1975-01-01

    The author has identified the following significant results. Discrete areas of finely fractured and brecciated sandstone float are present along the crest of Bald Eagle Mountain and are commonly sites of sulfide mineralization, as evidenced by the presence of barite and limonite gossans. The fit of the frequency distribution of the brecciated float to a negative binomial distribution supports the interpretation of a separate population of intensely fractured material. Such zones of concentrated breccia float have an average width of one kilometer, with a range from 0.4 to 1.6 kilometers, and were observed in a quarry face to have subvertical dips. Direct spatial correlation of the Landsat-derived lineaments to the fractured areas on the ridge is low; however, the mineralized and fracture zones are commonly asymmetric with respect to the lineament positions. Such a systematic dislocation might result from an inherent bias in the float population or could be the product of the relative erosional resistance of the silicified material in the mineralized areas in relation to the erosionally weak material at the stream gaps.

  17. Lineaments from airborne SAR images and the 1988 Saguenay earthquake, Quebec, Canada

    SciTech Connect

    Roy, D.W.; Schmitt, L.; Woussen, G.; Duberger, R.

    1993-08-01

    Airborne SAR images provided essential clues to the tectonic setting of (1) the MbLg 6.5 Saguenay earthquake of 25 November 1988, (2) the Charlevoix-Kamouraska seismic source zone, and (3) some of the low-level seismic activity in the Eastern seismic background zone of Canada. The event occurred in the southeastern part of the Canadian Shield in an area where the boundary between the Saguenay graben and the Jacques Cartier horst is not well defined. These two tectonic blocks are both associated with the Iapetan St-Lawrence rift. These blocks exhibit several important structural breaks and distinct domains defined by the lineament orientations, densities, and habits. Outcrop observations confirm that several lineament sets correspond to Precambrian ductile shear zones reactivated as brittle faults during the Phanerozoic. In addition, the northeast and southwest limits of recent seismic activity in the Charlevoix-Kamouraska zone correspond to major elements of the fracture pattern identified on the SAR images. These fractures appear to be related to the interaction of the Charlevoix astrobleme with the tectonic features of the area. 20 refs.

  18. Raton-Clayton Volcanic Field magmatism in the context of the Jemez Lineament

    NASA Astrophysics Data System (ADS)

    Schrader, C. M.; Pontbriand, A.

    2013-12-01

    The Raton-Clayton Volcanic Field (RCVF) was active from 9 Ma to approximately 50 Ka and stretches from Raton, New Mexico in the west to Clayton, New Mexico in the east. The field occurs in the Great Plains at the northeastern end of the Jemez Lineament, a major crustal feature and focus of volcanism that extends southwest to the Colorado Plateau in Arizona and encompasses five other major volcanic fields. Jemez Lineament magmatism is temporally related to Rio Grande Rift magmatism, though it extends NE and SW from the rift itself, and it has been suggested that it represents an ancient crustal suture that serves as a conduit for magmatism occurring beneath the larger region of north and central New Mexico (Magnani et al., 2004, GEOL SOC AM BULL, 116:7/8, pp. 1-6). This study extends our work into the RCVF from prior and ongoing work in the Mount Taylor Volcanic Field, where we identified different mantle sources with varying degrees of subduction alteration and we determined some of the crustal processes that contribute to the diversity of magma chemistry and eruptive styles there (e.g., AGU Fall Meeting, abst. #V43D-2884 and #V43D-2883). In the RCVF, we are analyzing multiple phases by electron microprobe and plagioclase phenocrysts and glomerocrysts by LA-ICPMS for Sr isotopes and trace elements. We are undertaking this investigation with the following goals: (1) to evaluate previous magma mixing and crustal assimilation models for Sierra Grande andesites (Zhu, 1995, unpublished Ph.D. dissertation, Rice University; Hesse, 1999, unpublished M.S. thesis, Northern Arizona University); (2) to evaluate subduction-modified mantle as the source for RCVF basanites (specifically those at Little Grande); and (3) to assess the possible role of deep crustal cumulates in buffering transitional basalts. In the larger context, these data will be used to evaluate the varying degree of subduction-modification and the effect of crustal thickness on magmatism along the Jemez

  19. The Wister mud pot lineament: Southeastward extension or abandoned strand of the San Andreas fault?

    USGS Publications Warehouse

    Lynch, D.K.; Hudnut, K.W.

    2008-01-01

    We present the results of a survey of mud pots in the Wister Unit of the Imperial Wildlife Area. Thirty-three mud pots, pot clusters, or related geothermal vents (hundreds of pots in all) were identified, and most were found to cluster along a northwest-trending line that is more or less coincident with the postulated Sand Hills fault. An extrapolation of the trace of the San Andreas fault southeastward from its accepted terminus north of Bombay Beach very nearly coincides with the mud pot lineament and may represent a surface manifestation of the San Andreas fault southeast of the Salton Sea. Additionally, a recent survey of vents near Mullet Island in the Salton Sea revealed eight areas along a northwest-striking line where gas was bubbling up through the water and in two cases hot mud and water were being violently ejected.

  20. A structural fabric defined by topographic lineaments: Correlation with Tertiary deformation of Ellesmere and Axel Heiberg Islands, Canadian Arctic

    NASA Technical Reports Server (NTRS)

    Oakey, Gordon

    1994-01-01

    Digital topographic contours from four 1:250000 scale maps have been gridded to produce a digital elevation model for part of Ellesmere and Axel Heiberg islands in the Canadian Arctic Islands. Gradient calculations were used to define both east- and west-dipping slopes, defining a pattern of lineaments that have been compared with mapped geological structures. In ice-covered areas, where geological mapping was not possible, well-defined topographic lineaments have been identified and are correlated to extensions of major structural features. The northeast-southwest patterns of both topographic lineaments and mapped structures are strongly unimodal and support a single compressive event oriented at 67 deg west of north. This orientation coincides with the convergence direction calculated from the kinematic poles of rotation for Greenland relative to North America between 56 and 35 Ma. A minor secondary peak at 70 deg east of north is observed for thrust and normal fault solutions and is not directly related to the predicted convergence direction. Whether this represents a unique phase of deformation or is a subcomponent of a single event is not known. The agreement of structural components, lineament orientations, and convergence direction suggests an overwhelming overprint of Eurekan deformation on any preexisting structural fabric. This study confirms, for the first time, an excellent compatibility between geological and geophysical constraints for the timing and geometry of the Eurekan orogeny.
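The gradient screening for east- and west-dipping slopes might look like the following sketch (the slope threshold is a hypothetical parameter, not taken from the study):

```python
import numpy as np

def dip_direction_masks(dem, dx=1.0, dy=1.0, slope_threshold=0.1):
    """Classify DEM cells as east- or west-dipping slopes from gradient
    components. Returns boolean masks (east_dipping, west_dipping).
    Cells flatter than slope_threshold are left unclassified."""
    gy, gx = np.gradient(np.asarray(dem, float), dy, dx)
    steep = np.hypot(gx, gy) >= slope_threshold
    # gx > 0: elevation rises eastward, so the surface dips to the west.
    return steep & (gx < 0), steep & (gx > 0)

# Elevation increasing to the east -> every cell is a west-dipping slope.
east, west = dip_direction_masks(np.array([[0.0, 1.0, 2.0], [0.0, 1.0, 2.0]]))
print(west)
```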

  1. Striking Local Distinctions in Basaltic Melts within Nicaraguan Cross-arc Lineaments

    NASA Astrophysics Data System (ADS)

    Her, X.; Walker, J. A.; Roggensack, K.

    2015-12-01

    The Nejapa-Miraflores (NM) and Granada (G) lineaments which cut across the Central American volcanic front (CAVF) host numerous monogenetic vents which have erupted diverse basaltic magmas (e.g., Walker, 1984). As previously shown by Walker (1984), the basaltic magmas loosely fall into two groups: a high Ti, low K group which are reminiscent of MORB or BABB; and a low Ti, high K group which are more typical of subduction zones worldwide. Major element data obtained from over 200 olivine-hosted melt inclusions found within NM and G tephras from six separate monogenetic vents confirm this unusual compositional dichotomy. Melt inclusions from four of the six monogenetic vents are exclusively high- or low-Ti, while two of the volcanoes have both high- and low-Ti melt inclusions. New volatile and trace element data on over 40 of the NM and G melt inclusions have yielded additional compositional distinctions between the high- and low-Ti groups. Least degassed high-Ti melts tend to have lower water contents than their low-Ti counterparts. The high-Ti inclusions also have lower concentrations of U, Th, Pb, Ba and Cs and lower La/Yb ratios. In addition, there are subtle HFSE variations between the two types of basalts. The overall geochemical differences between the high- and low-Ti groups suggest that the mantle wedge source of the latter contains a greater slab-derived (hemipelagic) sediment melt component than the former, linked to a larger flux of hydrous fluids from deeper in the subducting Cocos plate. What is particularly significant is that the contrasting mafic emanations from these monogenetic volcano lineaments demonstrate that transport of fluids, volatiles and basaltic melts in subduction zones can be quite variable and complex on a very localized scale.

  2. An Analysis of Surface and Subsurface Lineaments and Fractures for Oil and Gas Exploration in the Mid-Continent Region

    SciTech Connect

    Guo, Genliang; George, S.A.

    1999-04-08

    An extensive literature search was conducted, and geological and mathematical analyses were performed, to investigate the significance of using surface lineaments and fractures for delineating oil and gas reservoirs in the Mid-Continent region. A tremendous amount of data was acquired, including surface lineaments, surface major fracture zones, surface fracture traces, gravity and magnetic lineaments, and Precambrian basement fault systems. An orientation analysis of these surface and subsurface linear features was performed to detect the basic structural grains of the region. The correlation between surface linear features and subsurface oil and gas traps was assessed, and the implication of using surface lineament and fracture analysis for delineating hydrocarbon reservoirs in the Mid-Continent region was discussed. It was observed that the surface linear features were extremely consistent in orientation with the gravity and magnetic lineaments and the basement faults in the Mid-Continent region. They all consist of two major sets trending northeast and northwest, representing, therefore, the basic structural grains of the region. This consistency in orientation between the surface and subsurface linear features suggests that the systematic fault systems in the basement of the Mid-Continent region have probably been reactivated many times and have propagated upward all the way to the surface. They may have acted as the loci for the development of other geological structures, including oil and gas traps. Also observed was a strong association, both in orientation and position, between the surface linear features and the subsurface reservoirs in various parts of the region. As a result, surface lineament and fracture analysis can be used for delineating additional oil and gas reserves in the Mid-Continent region. The results presented in this paper prove the validity and indicate the significance of using surface linear features for inferring subsurface oil and gas reservoirs.
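The orientation analysis described, recovering strike azimuth and length from the digitized end points of each linear feature, reduces to a couple of lines (coordinates assumed to be in a projected easting/northing system):

```python
import math

def azimuth_and_length(x1, y1, x2, y2):
    """Strike azimuth (degrees clockwise from north, folded into 0-180)
    and length of a linear feature from its two digitized end points."""
    dx, dy = x2 - x1, y2 - y1
    azimuth = math.degrees(math.atan2(dx, dy)) % 180.0  # atan2(easting, northing)
    return azimuth, math.hypot(dx, dy)

print(azimuth_and_length(0.0, 0.0, 1.0, 1.0))   # a northeast-trending feature
print(azimuth_and_length(0.0, 0.0, -1.0, 1.0))  # a northwest-trending feature
```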

  3. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  4. Statistical analysis of surface lineaments and fractures for characterizing naturally fractured reservoirs

    SciTech Connect

    Guo, Genliang; George, S.A.; Lindsey, R.P.

    1997-08-01

    Thirty-six sets of surface lineaments and fractures mapped from satellite images and/or aerial photos from parts of the Mid-continent and Colorado Plateau regions were collected, digitized, and statistically analyzed in order to obtain the probability distribution functions of natural fractures for characterizing naturally fractured reservoirs. The orientations and lengths of the surface linear features were calculated using the digitized coordinates of the two end points of each individual linear feature. The spacing data of the surface linear features within an individual set were obtained using a new analytical sampling technique. Statistical analyses were then performed to find the best-fit probability distribution functions for the orientation, length, and spacing of each data set. Twenty-five hypothesized probability distribution functions were used to fit each data set. A chi-square goodness-of-fit test was used to rank the significance of each fit. The distribution which provided the lowest chi-square goodness-of-fit value was considered the best-fit distribution. The orientations of surface linear features were best-fitted by triangular, normal, or logistic distributions; the lengths were best-fitted by PearsonVI, PearsonV, lognormal2, or extreme-value distributions; and the spacing data were best-fitted by lognormal2, PearsonVI, or lognormal distributions. These probability functions can be used to stochastically characterize naturally fractured reservoirs.
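The chi-square ranking step can be sketched with just two of the twenty-five candidates (normal and two-parameter lognormal), using simple moment fits rather than the authors' fitting code; the synthetic length data are illustrative.

```python
import math
import numpy as np

def _normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def rank_normal_vs_lognormal(data, bins=10):
    """Fit a normal and a two-parameter lognormal by moments, then rank
    them by the chi-square statistic on equal-width bins (lowest = best)."""
    data = np.asarray(data, float)
    observed, edges = np.histogram(data, bins=bins)
    mu, s = data.mean(), data.std(ddof=1)
    lmu, ls = np.log(data).mean(), np.log(data).std(ddof=1)
    fits = {
        "normal": lambda x: _normal_cdf(x, mu, s),
        "lognormal2": lambda x: _normal_cdf(math.log(x), lmu, ls) if x > 0 else 0.0,
    }
    ranked = []
    for name, cdf in fits.items():
        expected = len(data) * np.diff([cdf(e) for e in edges])
        chi2 = float(np.sum((observed - expected) ** 2 / np.maximum(expected, 1e-9)))
        ranked.append((chi2, name))
    return sorted(ranked)

lengths = np.random.default_rng(0).lognormal(0.0, 1.0, 500)  # synthetic lengths
print(rank_normal_vs_lognormal(lengths))  # lognormal2 should rank first here
```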

  5. Hydrological characterisation of geological lineaments: a case study from the Aravalli terrain, India

    NASA Astrophysics Data System (ADS)

    Bhuiyan, Chandrashekhar

    2015-03-01

    Geological lineaments or `geolineaments', irrespective of their hydrological nature, are generally treated as a single class, and density of geolineaments is considered as a key hydrogeological factor influencing infiltration and recharge. However, all natural linear features do not behave alike in different hydrological events such as recharge and discharge. Considering this, the described study examines the role and influence of different types of geolineaments on the groundwater regime in the semi-arid crystalline Aravalli terrain of western India. Geolineaments have been classified with the aid of published geological maps, satellite imagery and selected field verification. Graphical analysis and statistical tests have revealed significant differences in the nature, role and efficiency of various geolineaments in terms of recharge and productivity. With respect to water-level fluctuation representing recharge and well productivity depicting discharge, reversal in hydrological efficiency is noticed for drainage lines (straight versus curvilinear), fold axes (antiformal versus synformal) and faults (vertical-slip versus lateral-slip). Analysis and comparison has revealed that the associated geological structures and their geometry, and hydraulic properties of the aquifers (vadose zone and saturated zone), are the key factors for the different nature, roles and efficiency of various geolineaments.

  6. Applicability of ERTS-1 to lineament and photogeologic mapping in Montana: Preliminary report

    NASA Technical Reports Server (NTRS)

    Weidman, R. M.; Alt, D. D.; Flood, R. E.; Hawley, K. T.; Wackwitz, L. K.; Berg, R. B.; Johns, W. M.

    1973-01-01

    A lineament map prepared from a mosaic of western Montana shows about 85 lines not represented on the state geologic map, including elements of a northeast-trending set through central western Montana which merit ground truth checking and consideration in regional structural analysis. Experimental fold annotation resulted in a significant local correction to the state geologic map. Photogeologic mapping studies produced only limited success in identification of rock types, but they did result in the precise delineation of a late Cretaceous or early Tertiary volcanic field (Adel Mountain field) and the mapping of a connection between two granitic bodies shown on the state map. Imagery was used successfully to map clay pans associated with bentonite beds in gently dipping Bearpaw Shale. It is already apparent that ERTS imagery should be used to facilitate preparation of a much-needed statewide tectonic map and that satellite imagery mapping, aided by ground calibration, provides an economical means to discover and correct errors in the state geologic map.

  7. The tectonic and volcanic history of Mercury as inferred from studies of scarps, ridges, troughs, and other lineaments

    NASA Technical Reports Server (NTRS)

    Dzurisin, D.

    1978-01-01

    Tectonic and volcanic modification of the Mercurian surface is discussed, and nine landform classes are defined. An evolutionary chronology, based on reported interpretations of scarp, ridge, trough, and other lineament features, is presented, and the roles (in chronological order) of accretion and differentiation, tidal spindown, plains volcanism, heavy bombardment, cooling/contraction, Caloris impact, intense surface modification, basin subsidence, local plains volcanism, isostatic basin uplift, and light cratering are considered. The observational data were obtained by Mariner 10.

  8. Enhancement of Landsat images for lineament analysis in the area of the Salina Basin, New York and Pennsylvania

    USGS Publications Warehouse

    Krohn, M. Dennis

    1979-01-01

    Digital image processing of Landsat images of New York and Pennsylvania was undertaken to provide optimum images for lineament analysis in the area of the Salina Basin. Preliminary examination of Landsat images from photographic prints indicated sufficient differences between the spectral bands of the Landsat Multispectral Scanner (MSS) to warrant digital processing of both MSS band 7 and MSS band 5. Selective contrast stretching based on analysis of the Landsat MSS histograms proved to be the most important factor affecting the appearance of the images. A three-point linear stretch using the two end points and a middle point of the Landsat frequency distribution was most successful. The flexibility of the REMAPP image processing system was helpful in creating such custom-tailored stretches. An edge enhancement was tested on the MSS band 5 image, but was not used. Stereoscopic Landsat images acquired from adjacent orbits aided recognition of topographic features; the area of stereoscopic coverage could be increased by utilizing the precession of Landsat-1's orbit. Improvements in the digitally processed scenes did affect the analysis of lineaments for the New York area; on the enhanced MSS band 5 image, an ENE trending set of lineaments is visible, which was not recognized from other images.
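The three-point linear stretch described can be sketched as a piecewise-linear mapping (the break-point digital numbers below are illustrative, not those chosen for the Salina Basin scenes):

```python
import numpy as np

def three_point_stretch(band, low, mid, high, mid_out=128):
    """Piecewise-linear contrast stretch through two end points and one
    middle point of the image histogram: DNs at low/mid/high map to
    0/mid_out/255, with values outside [low, high] clipped."""
    out = np.interp(np.asarray(band, float), [low, mid, high],
                    [0.0, float(mid_out), 255.0])
    return out.astype(np.uint8)

print(three_point_stretch(np.array([10, 20, 60, 100, 120]), 20, 60, 100))
```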

  9. An interdisciplinary analysis of multispectral satellite data for selected cover types in the Colorado Mountains, using automatic data processing techniques. [geological lineaments and mineral exploration

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. One capability which has been recognized by many geologists working with space photography is the ability to see linear features and alignments which were previously not apparent. To the exploration geologist, major lineaments seen on satellite images are of particular interest. A portion of ERTS-1 frame 1407-17193 (3 Sept. 1973) was used for mapping lineaments and producing an iso-lineament intersection map. Skylab photography over the area of prime interest was not usable due to snow cover. Once the lineaments were mapped, a grid with 2.5 km spacing was overlaid on the map, and the lineament intersections occurring within each grid square were counted and the number plotted in the center of the grid square. These numbers were then contoured, producing a contour map of equal lineament-intersection density. It is believed that the areas of high intersection concentration would be the most favorable areas for ore mineralization if favorable host rocks are also present. These highly fractured areas would act as conduits carrying the ore-forming solutions to the site of deposition in a favorable host rock. Two of the six areas of high intersection concentration are over areas of present or past mining camps, and small claims are known to exist near the others. These would be prime target areas for future mineral exploration.
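The grid-counting step might be sketched as follows, binning intersection points on a 2.5 km grid (the coordinates are hypothetical, and contouring of the counts is not shown):

```python
import numpy as np

def intersection_density(points_xy, cell_size=2500.0):
    """Count lineament-intersection points per square grid cell.
    points_xy is an (N, 2) array of intersection coordinates in metres;
    the returned array of counts can then be contoured."""
    pts = np.asarray(points_xy, float)
    x, y = pts[:, 0], pts[:, 1]
    xedges = np.arange(x.min(), x.max() + cell_size, cell_size)
    yedges = np.arange(y.min(), y.max() + cell_size, cell_size)
    counts, _, _ = np.histogram2d(x, y, bins=[xedges, yedges])
    return counts

pts = np.array([[0.0, 0.0], [100.0, 100.0], [3000.0, 0.0]])
print(intersection_density(pts))
```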

  10. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  11. Active faulting on the Wallula fault within the Olympic-Wallowa Lineament (OWL), eastern Washington State

    NASA Astrophysics Data System (ADS)

    Sherrod, B. L.; Lasher, J. P.; Barnett, E. A.

    2013-12-01

    Several studies over the last 40 years focused on a segment of the Wallula fault exposed in a quarry at Finley, Washington. The Wallula fault is important because it is part of the Olympic-Wallowa lineament (OWL), a ~500-km-long topographic and structural lineament extending from Vancouver Island, British Columbia to Walla Walla, Washington that accommodates Basin and Range extension. The origin and nature of the OWL is of interest because it contains potentially active faults that are within 50 km of high-level nuclear waste facilities at the Hanford Site. Mapping in the 1970's and 1980's suggested the Wallula fault did not offset Holocene and late Pleistocene deposits and is therefore inactive. New exposures of the Finley quarry wall studied here suggest otherwise. We map three main packages of rocks and sediments in a ~10 m high quarry exposure. The oldest rocks are very fine grained basalts of the Columbia River Basalt Group (~13.5 Ma). The next youngest deposits include a thin layer of vesicular basalt, white volcaniclastic deposits, colluvium containing clasts of vesicular basalt, and indurated paleosols. A distinct angular unconformity separates these vesicular basalt-bearing units from overlying late Pleistocene flood deposits, two colluvium layers containing angular clasts of basalt, and Holocene tephra-bearing loess. A tephra within the loess likely correlates to nearby outcrops of Mazama ash. We recognize three styles of faults: 1) a near vertical master reverse or oblique fault juxtaposing very fine grained basalt against late Tertiary-Holocene deposits, and marked by a thick (~40 cm) vertical seam of carbonate cemented breccia; 2) subvertical faults that flatten upwards and displace late Tertiary(?) to Quaternary(?) soils, colluvium, and volcaniclastic deposits; and 3) flexural slip faults along bedding planes in folded deposits in the footwall. We infer at least two Holocene earthquakes from the quarry exposure. 
The first Holocene earthquake deformed

  12. Parallel machines: Parallel machine languages

    SciTech Connect

    Iannucci, R.A. )

    1990-01-01

This book presents a framework for understanding the tradeoffs between the conventional view and the dataflow view, with the objective of discovering the critical hardware structures which must be present in any scalable, general-purpose parallel computer to effectively tolerate latency and synchronization costs. The author presents an approach to scalable general-purpose parallel computation. Linguistic concerns, compiling issues, intermediate-language issues, and hardware/technological constraints are presented as a combined approach to architectural development. This book presents the notion of a parallel machine language.

  13. Lineament Extraction from SPOT 5 and NigeriaSat-X Imagery of the Upper Benue Trough, Nigeria

    NASA Astrophysics Data System (ADS)

    Ogunmola, J. K.; Ayolabi, E. A.; Olobaniyi, S. B.

    2014-11-01

The Upper Benue Trough is part of the Benue Trough of Nigeria and comprises three basins: the east-west trending Yola Basin (Yola Arm), the north-south trending Gongola Basin (Gongola Arm) and the northeast-southwest trending Lau Basin (Main Arm). This is ongoing research aimed at understanding the structural framework of the Upper Benue Trough using several techniques, including remote sensing and GIS. Several digital image enhancement techniques, such as general contrast stretching and edge enhancement, were applied to the NigeriaSat-X and SPOT 5 imagery in ERDAS IMAGINE 9.2, after which structures were mapped on-screen using ArcMap 10. The Digital Elevation Model (DEM) of the Trough was also used to enhance geomorphic features. The analysis carried out on the images revealed that lineaments are abundant in the Upper Benue Trough; they can be subdivided into four major trends (NE-SW, NW-SE, W-E and N-S, in order of abundance) and range in length from about 300 m to 26 km. Several faults were also mapped within the Basin, such as a sinistral fault around Bakoreji village in Bauchi, a dextral fault close to Kalmai town in Gombe and a dextral fault close to Wong in Taraba. Some of the sites where minerals such as lead and zinc ores are being mined were found to occur in zones of high lineament density. This study shows the capability of DEM, SPOT 5 and NigeriaSat-X images for lineament/structural interpretation.

  14. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r){sup {alpha}}, where r and R are the radii of the small and large pipes, respectively, and {alpha} = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
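The scaling relation above can be checked numerically. A minimal sketch (the function name and example radii are ours, not from the paper), using alpha = 4 for a laminar lubricating water film and 19/7 for a turbulent one:

```python
def n_small_pipes(R, r, lubrication="laminar"):
    """Estimate how many small pipes of radius r deliver the same oil flux
    as one large pipe of radius R, via the scaling N = (R/r)**alpha."""
    alpha = 4.0 if lubrication == "laminar" else 19.0 / 7.0
    return (R / r) ** alpha

# One 0.5 m pipe replaced by 0.05 m pipes:
print(n_small_pipes(0.5, 0.05))              # laminar film -> 10**4 pipes
print(n_small_pipes(0.5, 0.05, "turbulent")) # turbulent film -> ~518 pipes
```

Note how strongly the required pipe count depends on the flow regime of the water film: four orders of magnitude for laminar lubrication versus roughly five hundred pipes for turbulent.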

  15. Interpretation of high resolution aeromagnetic data for lineaments study and occurrence of Banded Iron Formation in Ogbomoso area, Southwestern Nigeria

    NASA Astrophysics Data System (ADS)

    Oladunjoye, Michael Adeyinka; Olayinka, Abel Idowu; Alaba, Mustapha; Adabanija, Moruffdeen Adedapo

    2016-02-01

The quest for solid mineral resources as an alternative to oil income in Nigeria presents an opportunity to diversify the resource base of the country. To fill some of the information gap on the long-abandoned Ajase and Gbede Banded Iron Formations (BIF) in the Ogbomoso area, Southwestern Nigeria, high-resolution aeromagnetic data of Ogbomoso - Sheet 222 were interpreted to provide a better understanding of the mode of occurrence of the iron ore, the associated structural features, and a geologic model. This was accomplished by subjecting the reduced-to-pole (RTP) residual aeromagnetic intensity map to various data filtering and processing steps involving the total horizontal derivative, vertical derivative, Upward Continuation (UC), Downward Continuation (DC), Euler Deconvolution at different Structural Indices (SI), and the analytical signal, using Geosoft Oasis Montaj 6.4.2 (HJ) data processing and analysis software. The resultant maps were overlain, compared, and/or plotted on the RTP residual aeromagnetic intensity map and/or geological map and interpreted in relation to the surface geology. Positive magnetic anomalies observed on the RTP residual aeromagnetic intensity map ranged from 2.1 to 94.0 nT and are associated with contrasting basement rocks and the Ajase and Gbede BIF, while negative magnetic anomalies varied between -54.7 nT and -2.8 nT and are associated with intrusive bodies. Interpreted lineaments obtained from the total horizontal derivative map were separated into two categories, ductile and brittle, based on their character vis-à-vis magnetic anomalies on the RTP intensity map. Whilst the brittle lineaments were interpreted as fractures or faults, the ductile lineaments were interpreted as folds or as representing the internal fabric of the rock units. In addition, prominent magnetic faults, mainly due to offsets of similar magnetic domains/gradients, were also interpreted. The iron ore mineralization is distributed within the eastern portion of the study area with Ajase BIF at relatively greater
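Of the filters listed, upward continuation has a particularly compact frequency-domain form: multiply the field's spectrum by exp(-|k|h). A minimal numpy sketch (our own illustration with invented grid parameters, not the Geosoft Oasis Montaj implementation):

```python
import numpy as np

def upward_continue(field, dx, h):
    """Upward-continue a gridded potential field by height h (same units
    as the grid spacing dx) by multiplying its spectrum by exp(-|k| h)."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))        # radial wavenumber |k|
    spec = np.fft.fft2(field) * np.exp(-k * h)
    return np.real(np.fft.ifft2(spec))

# A narrow anomaly is smoothed and attenuated when viewed from higher up:
anomaly = np.zeros((64, 64))
anomaly[32, 32] = 100.0                       # nT, single-cell spike
up = upward_continue(anomaly, dx=100.0, h=200.0)
print(up.max() < anomaly.max())               # True: amplitude decays with height
```

Downward continuation is the inverse operation (multiplying by exp(+|k|h)), which amplifies short wavelengths and therefore noise; that is why it is usually applied only over small heights.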

  16. Tectonic and volcanic history of Rhea as inferred from studies of scarps, ridges, troughs, and other lineaments

    SciTech Connect

    Thomas, P.G.

    1988-06-01

The 13 geomorphic feature types presently defined through the analysis of landforms on Rhea are, with only one exception, interpretable as being of tectonic or volcano-tectonic origin. The troughs, grabens, grooves, pit chains, scarps, and other lineaments are purely extensional in nature, while the ridges are volcanic features formed in an extensional stress field; this extension was followed by an era of global compression that generated megaridges and megascarps. The extensional landforms seem to form a global grid pattern that is directionally similar to the theoretically projected pattern of a tidally distorted planet. 17 references.

  17. Reconstructing the 3D fracture distribution model from core (10 cm) to outcrop (10 m) and lineament (10 km) scales

    NASA Astrophysics Data System (ADS)

    Darcel, C.; Davy, P.; Bour, O.; de Dreuzy, J.

    2006-12-01

Considering the role of fractures in hydraulic flow, knowledge of the 3D spatial distribution of fractures is a basic concern for any hydrogeology-related study (potential leakage from waste repositories, aquifer management, etc.). Unfortunately, geophysical imagery is quite blind with regard to fractures, and only the largest ones are generally detected, if at all. Actually most of the information has to be derived from statistical models whose parameters are defined from a few sparse sampling areas, such as wells, outcrops, or lineament maps. How these observations obtained at different scales can be linked to each other is a critical point, which directly addresses the issue of fracture scaling. In this study, we use one of the most important datasets ever collected for characterizing fracture networks. It was collected by the Swedish company SKB for their research program on deep repositories for radioactive waste, and consists of large-scale lineament maps covering about 100 km2, several outcrops of several hundreds of m2 mapped with a fracture trace length resolution down to 0.50 m, and a series of 1000 m deep cored boreholes where both fracture orientations and fracture intensities were carefully recorded. Boreholes are an essential complement to surface outcrops as they allow the sampling of horizontal fracture planes that, generally, are severely undersampled in subhorizontal outcrops. Outcrops, on the other hand, provide information on fracture sizes that is not possible to obtain from core information alone. However, linking outcrops and boreholes is not straightforward: the sampling scale is obviously different and some scaling rules have to be applied to relate both fracture distributions; outcrops are 2D planes while boreholes are mostly 1D records; and outcrops can be affected by superficial fracturing processes that are not representative of the fracturing at depth.
We present here the stereology methods for calculating the 3D distribution
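One sampling bias behind those scaling rules is that a 1D scanline such as a borehole preferentially intersects large fractures, with probability proportional to fracture area. A toy Monte Carlo illustrates this (the disc-fracture geometry, sizes and counts are invented for illustration, not SKB data or the authors' stereology method):

```python
import numpy as np

rng = np.random.default_rng(0)

def borehole_hits(radii, n_frac=200_000, box=100.0):
    """Monte Carlo: horizontal disc-shaped fractures of the given radii,
    centers uniform in a box of side `box` in x and y; count how many are
    hit by a vertical borehole at the box center. A disc is hit iff its
    center lies within one radius (in the x-y plane) of the borehole."""
    hits = []
    for r in radii:
        cx = rng.uniform(0, box, n_frac)
        cy = rng.uniform(0, box, n_frac)
        d2 = (cx - box / 2) ** 2 + (cy - box / 2) ** 2
        hits.append(int(np.count_nonzero(d2 <= r * r)))
    return hits

# Doubling the fracture radius roughly quadruples the hit count (bias ~ area):
small, large = borehole_hits([2.0, 4.0])
print(large / small)
```

Because the intersection probability grows with the square of the radius, a borehole record must be de-biased before its fracture intensity can be compared with trace counts from a 2D outcrop, where the bias grows only linearly with size.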

  18. A multi-directional and multi-scale roughness filter to detect lineament segments on digital elevation models - analyzing spatial objects in R

    NASA Astrophysics Data System (ADS)

    Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke

    2016-04-01

Automated lineament analysis on remotely sensed data requires two general processing steps: the identification of neighboring pixels showing high contrast, and the conversion of these domains into lines. The target output is the lineaments' position, extent and orientation. We developed a lineament extraction tool, programmed in R, that uses digital elevation models as input data to generate morphological lineaments defined as follows: a morphological lineament represents a zone of high relief roughness whose length significantly exceeds its width. Relief roughness is defined as any deviation from a flat plane that exceeds a roughness threshold. In our novel approach, a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold-limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and the differently oriented straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance between the edge cells. Thus, multiple roughness values, depending on the neighborhood sizes and the orientations of the edge-connecting lines, are generated for each cell, and their maximum and minimum values are extracted. Negative values of the roughness parameter represent concave relief structures such as valleys; positive values represent convex relief structures such as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3x3 neighborhood filter that highlights maximum and minimum roughness peaks, representing the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness differences.
We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief map
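The roughness measure described above can be sketched directly. This is a minimal, loop-based Python version for a single window size and four directions (the authors' tool is in R and multi-scale; the function name, sign convention details and test DEM here are our illustrative assumptions):

```python
import numpy as np

def directional_roughness(dem, k, cell=1.0):
    """For each interior cell of a (2k+1)x(2k+1) neighborhood, compute the
    signed roughness in four directions (E-W, N-S, both diagonals): the
    height of the center cell above/below the straight line joining two
    opposite edge cells, divided by the horizontal distance between them.
    Returns (max, min) roughness maps; positive = convex (ridge),
    negative = concave (valley)."""
    ny, nx = dem.shape
    rmax = np.full((ny, nx), np.nan)
    rmin = np.full((ny, nx), np.nan)
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]   # (dy, dx) steps to the edge
    for y in range(k, ny - k):
        for x in range(k, nx - k):
            vals = []
            for dy, dx in dirs:
                za = dem[y - k * dy, x - k * dx]
                zb = dem[y + k * dy, x + k * dx]
                dist = 2 * k * cell * np.hypot(dy, dx)
                # the connecting line passes through (za + zb)/2 at the center
                vals.append((dem[y, x] - 0.5 * (za + zb)) / dist)
            rmax[y, x], rmin[y, x] = max(vals), min(vals)
    return rmax, rmin

# A N-S ridge crest shows up as a positive roughness maximum:
xx = np.arange(11)
dem = -np.abs(xx - 5.0) * np.ones((11, 1))    # tent-shaped ridge along x = 5
rmax, _ = directional_roughness(dem, k=2)
print(rmax[5, 5] > 0)                          # True at the crest
```

Running the same filter with several values of k gives the multi-scale behavior the abstract describes, since broad ridges only register in large windows and sharp breaks in small ones.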

  19. Data parallelism

    SciTech Connect

    Gorda, B.C.

    1992-09-01

    Data locality is fundamental to performance on distributed memory parallel architectures. Application programmers know this well and go to great pains to arrange data for optimal performance. Data Parallelism, a model from the Single Instruction Multiple Data (SIMD) architecture, is finding a new home on the Multiple Instruction Multiple Data (MIMD) architectures. This style of programming, distinguished by taking the computation to the data, is what programmers have been doing by hand for a long time. Recent work in this area holds the promise of making the programmer's task easier.
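The "taking the computation to the data" idea can be mimicked in a few lines: block-distribute an array among notional processors, let each compute on the block it owns, then combine partial results. This is a conceptual sketch only (real MIMD codes use message passing; the simulation here is sequential):

```python
import numpy as np

def data_parallel_sum(data, n_procs):
    """Owner-computes: block-distribute the array, let each 'processor'
    reduce only the block it owns, then combine the partial results."""
    blocks = np.array_split(data, n_procs)        # distribution step
    partials = [float(b.sum()) for b in blocks]   # local computation at the data
    return sum(partials)                          # global reduction

data = np.arange(1_000_000, dtype=float)
print(data_parallel_sum(data, n_procs=8) == data.sum())  # True
```

The point of the model is that the decomposition and the local reductions are expressed once, data-parallel style, rather than hand-coded per processor as the abstract describes programmers doing manually.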

  1. Automatic Extraction of tectonic lineaments from high and medium resolution remote sensing data in the Hindukush-Pamir.

    NASA Astrophysics Data System (ADS)

    Rahnama, Mehdi; Gloaguen, Richard

    2013-04-01

We propose to investigate neotectonic activity and deformation in northeastern Afghanistan and southeastern Tajikistan using remote sensing. We analyse geomorphic features such as stream profiles and drainage patterns and combine them with lineament analysis from high- and medium-resolution satellite data. For this purpose, we developed a new toolbox for automatic lineament feature extraction. Considering both classical and recently reported edge-detection methods, we chose four filter types: Sobel, LoG, Canny and Prewitt, and then used the Hough transform for the identification of linear features. We finally merge close line segments with similar directions using polynomial curve fitting. The modern deformation, the fault movements and the induced earthquakes in Afghanistan and Tajikistan are driven by the collision between the northward-moving Indian subcontinent and Eurasia. The general patterns and orientations of faults and the styles of deformation that we interpret from the imagery are consistent with the styles of faulting determined from focal mechanisms of historical earthquakes. With these techniques we are able to assess the activity of faults that are otherwise inaccessible. We show that the SW-Pamir is largely controlled by the Chaman-Herat fault system and, to a lesser extent, by the Darvaz fault zone.

  2. GIS rock unit and lineament analysis distribution in Northern Chihuahua, Mexico: Cenozoic reactivation of the Mojave-Sonora-Megashear (?)

    NASA Astrophysics Data System (ADS)

    Martinez-Pina, C.; Goodell, P.

    2007-12-01

Regional tectonic and geohydrology research for the State of Chihuahua has led to the analysis of rock unit distribution and its relationship to major lineaments. GIS geology maps from Mexico and the United States were used to extract and create shapefiles of four types of rocks: Cenozoic rhyolites, andesites, basalts and conglomerates. In addition to the generation of these layers, maps, including gravity, were converted into raster format and georeferenced to serve as ground-truth. Rhyolite distribution demonstrates two different patterns of the upper volcanic series: a vast continuous region, and a smaller area in the north where this region is broken in distinctive patterns. Basalt and conglomerate units terminate or decrease in abundance at major lineaments. These characteristics combine to define linear features. The rocks are pre-Miocene in age; thus the linear, extensional features breaking them up are younger. A strong N-S component is present, interpreted to be associated with the Rio Grande Rift. A provocative observation is the presence of multiple WNW-trending features. The southeast-trending topographic embayment of the Rio Papagochic is one part of the evidence. NW-trending transtensional faulting of Miocene age (18 Ma) has been recognized in west Texas. This trend of faults has also influenced breakaway zones of detachment faults in the Early Miocene in northern Sonora. Overlaying these newly recognized occurrences on traces of the proposed Mojave-Sonora Megashear shows a strong coincidence. Is this a reactivation?

  3. Hypothesis for epeirogenic uplift above the Jemez lineament: Is Neogene doming recorded by river profiles and terraces?

    NASA Astrophysics Data System (ADS)

    Brown, S. W.; Karlstrom, K. E.; Kirby, E.; Ouimet, W.; Dillon, M.; Cox, C.; Newell, D.; de Moore, M.; van Wijk, J.; Coblentz, D.; Sower, T. R.; Rose-Coss, D.; Crossey, L. J.

    2008-12-01

Rivers in the Rio Grande drainage of southern Colorado and northern New Mexico drain the southern Rockies southwards through the Rio Grande rift and across the NE-trending Jemez lineament. We test the hypothesis that Quaternary tectonism (both faulting and broad doming due to magmatism and mantle-driven dynamic uplift) may be recorded by drainage patterns and river profiles. These effects are not easy to distinguish from those of base level fall, drainage reorganization, and climate changes, but a regional look at New Mexico's rivers through time may help distinguish tectonic from climatic and geomorphic forcings. Longitudinal profiles of major drainages in northern New Mexico were constructed from 7.5-minute topographic maps and DEM analysis. Bedrock lithologies, the geometry of elevated terraces, and the positions of basalt flows were compiled for each river. A striking number of reaches exhibit sharp knickpoints and/or convexities in the profile. Some of these convexities and knickpoints seem to be bedrock-controlled; that is, they exist at hard rock-soft rock contacts. However, some convexities are not controlled by bedrock (e.g. they occur entirely in shale); and similarly, some hard rock areas show no convexity, suggesting that bedrock control cannot always be used to explain convexities. The Rio Grande exhibits a double concave profile suggesting ongoing adjustments to neotectonic and geomorphic forcings. In map pattern, DEM analysis suggests a regional spatial correlation between the appearance of multiple convexities in numerous drainages and the Jemez lineament, a northeast-trending zone of Quaternary magmatism and tectonism. Slope-area analysis, combined with Hack index analysis and topographic roughness analysis, shows good correlations among: 1) high-gradient reaches (normalized for discharge), 2) regions of highest topographic roughness, and 3) zones of lowest mantle velocity.
Thus, we support and expand the hypothesis that convexities
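The slope-area analysis mentioned above conventionally fits the stream-power form S = ks * A**(-theta), where the concavity index theta and steepness ks are read off a log-log regression. A minimal sketch on synthetic data (the parameter values are illustrative, not results from this study):

```python
import numpy as np

def concavity_index(area, slope):
    """Fit log10(S) = log10(ks) - theta * log10(A); return (theta, ks)."""
    coeffs = np.polyfit(np.log10(area), np.log10(slope), 1)
    return -coeffs[0], 10 ** coeffs[1]

# Synthetic channel obeying S = 0.05 * A**-0.45 (theta = 0.45):
area = np.logspace(5, 9, 50)           # drainage area, m^2
slope = 0.05 * area ** -0.45
theta, ks = concavity_index(area, slope)
print(round(theta, 2), round(ks, 3))   # 0.45 0.05
```

On real profiles, anomalously steep reaches (high ks once slopes are normalized by drainage area) are the quantitative counterpart of the knickpoints and convexities that the abstract correlates with the Jemez lineament.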

  4. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  5. Tectonic lineaments in the cenozoic volcanics of southern Guatemala: Evidence for a broad continental plate boundary zone

    NASA Technical Reports Server (NTRS)

    Baltuck, M.; Dixon, T. H.

    1984-01-01

The northern Caribbean plate boundary has been undergoing left-lateral strike-slip motion since middle Tertiary time. The western part of the boundary occurs in a complex tectonic zone in the continental crust of Guatemala and southernmost Mexico, along the Chixoy-Polochic, Motagua and possibly Jocotan-Chamelecon faults. Prominent lineaments visible in radar imagery in the Neogene volcanic belt of southern Guatemala and western El Salvador were mapped and interpreted to suggest southwest extensions of this already broad plate boundary zone. Because these extensions can be traced beneath Quaternary volcanic cover, it is thought that this newly mapped fault zone is active and is accommodating some of the strain related to motion between the North American and Caribbean plates. Onshore exposures of the Motagua-Polochic fault systems are characterized by abundant, tectonically emplaced ultramafic rocks. A similar mode of emplacement is suggested for these offshore ultramafics.

  6. Preliminary Use of the Seismo-Lineament Analysis Method (SLAM) to Investigate Seismogenic Faulting in the Grand Canyon Area, Northern Arizona

    NASA Astrophysics Data System (ADS)

    Cronin, V. S.; Cleveland, D. M.; Prochnow, S. J.

    2007-12-01

This is a progress report on our application of the Seismo-Lineament Analysis Method (SLAM) to the eastern Grand Canyon area of northern Arizona. SLAM is a new integrated method for identifying potentially seismogenic faults using earthquake focal-mechanism solutions, geomorphic analysis and field work. There are two nodal planes associated with any double-couple focal-mechanism solution, one of which is thought to coincide with the fault that produced the earthquake; the slip vector is normal to the other (auxiliary) plane. When no uncertainty in the orientation of the fault-plane solution is reported, we use the reported vertical and horizontal uncertainties in the focal location to define a tabular uncertainty volume whose orientation coincides with that of the fault-plane solution. The intersection of the uncertainty volume and the ground surface (represented by the DEM) is termed a seismo-lineament. An image of the DEM surface is illuminated perpendicular to the strike of the seismo-lineament to accentuate geomorphic features within the seismo-lineament that may be related to seismogenic faulting. This evaluation of structural geomorphology is repeated for several different azimuths and elevations of illumination. A map is compiled that includes possible geomorphic indicators of faulting as well as previously mapped faults within each seismo-lineament, constituting a set of hypotheses for the possible location of seismogenic fault segments that must be evaluated through fieldwork. A fault observed in the field that is located within a seismo-lineament, and that has an orientation and slip characteristics that are statistically compatible with the fault-plane solution, is considered potentially seismogenic. We compiled a digital elevation model (DEM) of the Grand Canyon area from published data sets. We used earthquake focal-mechanism solutions produced by David Brumbaugh (2005, BSSA, v. 95, p. 1561-1566) for five M > 3.5 events reported between 1989 and 1995.

  7. Geobotanical and lineament analysis of Landsat satellite imagery for hydrocarbon microseeps

    SciTech Connect

    Warner, T.A.

    1997-10-01

Both geobotanical and structural interpretations of remotely sensed data tend to be plagued by random associations. However, a combination of these methods has the potential to provide a methodology for excluding many false associations. To test this approach, a site in West Virginia has been studied using remotely sensed and field data. The historic Volcano Oil Field, in Wood, Pleasants and Ritchie Counties, was known as an area of hydrocarbon seeps in the last century. Although pressures in the reservoir are much reduced today, hydrocarbons remain in the reservoir. An examination of multi-seasonal Landsat Thematic Mapper imagery has shown little difference between the forests overlying the hydrocarbon reservoirs and those in background areas, with the exception of an image from the very early fall. This image has been enhanced using an nPDF spectral transformation that maximizes the contrast between the anomalous and background areas. A field survey of soil gas chemistry showed that hydrocarbon concentration is generally higher over the anomalous region. In addition, soil gas hydrocarbon concentration increases with proximity to linear features that cross the strike of the overall structure of the reservoir. Linear features that parallel the strike, however, do not have any discernible influence on gas concentration. Field spectral measurements were made periodically through the summer and early fall to investigate the origin of the spectral reflectance anomaly. Measurements were made with a full-range spectro-radiometer (400 nm to 2500 nm) on a number of different species, both on and off the spectral anomaly. The results lend support to the finding that in the early fall spectral reflectance increases in the near infrared and mid infrared in the spectrally anomalous regions.

  8. Comparative lineament analysis using SIR-C radar, Landsat TM and stereo LFC photographs for assessing groundwater resources, Red Sea Hills area, Sudan

    SciTech Connect

    Koch, M.; Mather, P.M.; Leason, A.

    1996-08-01

    This paper describes preliminary results from a comparative investigation of lineament mapping from stereoscopic LFC and from SIR-C L and C band synthetic aperture radar data. The lineament patterns are used together with other spatial data sets describing lithology and geomorphic characteristics in order to test a model of groundwater flow in the semi-arid Red Sea Hills area of Sudan described by Koch (1993). Initial results show that the LFC imagery is most useful for mapping detailed fracture patterns while the combination of L and C bands (total power) of the SIR-C synthetic aperture radar is helpful in the location of major deep-seated fracture zones. L-band SAR data together with a false-colour Landsat TM composite show the presence of subsurface moisture and vegetation respectively. These results are discussed within the context of a hydrogeological model.

  9. Mariner 10 data analysis. 1: Scarps, ridges, troughs, and other lineaments on Mercury. 2: Geologic significance of photometric variations on Mercury. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Dzurisin, D.

    1977-01-01

Volcanic and tectonic implications of the surface morphology of Mercury are discussed. Mercurian scarps, ridges, troughs, and other lineaments are described and classified as planimetrically linear, arcuate, lobate, or irregular. A global pattern of lineaments is interpreted to reflect modification of linear crustal joints formed in response to stresses induced by tidal spindown. Large arcuate scarps on Mercury most likely record a period of compressional tectonism near the end of heavy bombardment. Shrinkage owing to planetary cooling is the mechanism preferred for their production. Measurements of local normal albedo are combined with computer-generated photometric maps of Mercury to provide constraints on the nature of surface materials and processes. If the mercurian surface obeys the average lunar photometric function, its normal albedo at 554 nm is 0.16 ± 0.03.

  10. Divisions of potential fracture permeability, based on distribution of structures and lineaments, in sedimentary rocks of the Rocky Mountains-High Plains region, Western United States

    USGS Publications Warehouse

    Cooley, Maurice E.

    1986-01-01

Fractures--joints and faults--affect the movement of fluids in rocks. Fracture permeability is important in sedimentary rocks that otherwise transmit water slowly, particularly limestone, dolomite, and fine-grained sandstone. A map of fracture traces may be used in assessing the spatial distribution of groundwater. The principal map in this report (scale 1:2,500,000) shows geologic structures and lineaments that affect the distribution of fracture traces in the sedimentary rocks of the High Plains and some adjacent areas. Potential fracture permeability is indicated on the map by 5 divisions, ranging from division 1 (smallest) to division 5 (largest potential fracture permeability). Geologic structure was the basis for delineating division boundaries. Generally, rocks in structurally uplifted areas and near conspicuous lineaments are more extensively fractured and have greater secondary permeability than rocks of structural basins. (USGS)

  11. Active tectonics on Deception Island (West-Antarctica): A new approach by using the fractal anisotropy of lineaments, fault slip measurements and the caldera collapse shape

    USGS Publications Warehouse

    Pérez-López, R.; Giner-Robles, J.L.; Martínez-Díaz, J.J.; Rodríguez-Pascua, M.A.; Bejar, M.; Paredes, C.; González-Casado, J.M.

    2007-01-01

    The tectonic field on Deception Island (South Shetlands, West Antarctica) is determined from structural and fractal analyses. Three different analyses are applied to the study of the strain and stress fields in the area: (1) field measurements of faults (strain analysis), (2) fractal geometry of the spatial distribution of lineaments and (3) the caldera shape (stress analyses). In this work, the identified strain field is extensional with the maximum horizontal shortening trending NE-SW and NW-SE. The fractal technique applied to the spatial distribution of lineaments indicates a stress field with SHMAX oriented NE-SW. The elliptical caldera of Deception Island, determined from field mapping, satellite imagery, vents and fissure eruptions, has an elongate shape and a stress field with SHMAX trending NE-SW.

  12. The role of the Antofagasta-Calama Lineament in ore deposit deformation in the Andes of northern Chile

    NASA Astrophysics Data System (ADS)

    Palacios, Carlos; Ramírez, Luis E.; Townley, Brian; Solari, Marcelo; Guerra, Nelson

    2007-02-01

    During the Late Jurassic-Early Oligocene interval, widespread hydrothermal copper mineralization events occurred in association with the geological evolution of the southern segment of the central Andes, giving rise to four NS-trending metallogenic belts of eastward-decreasing age: Late Jurassic, Early Cretaceous, Late Paleocene-Early Eocene, and Late Eocene-Early Oligocene. The Antofagasta-Calama Lineament (ACL) consists of an important dextral strike-slip NE-trending fault system. Deformation along the ACL system is evidenced by a right-lateral displacement of the Late Paleocene-Early Eocene metallogenic belts. Furthermore, clockwise rotation of the Early Cretaceous Mantos Blancos copper deposit and the Late Paleocene Lomas Bayas porphyry copper occurred. In the Late Eocene-Early Oligocene metallogenic belt, a sigmoidal deflection and a clockwise rotation is observed in the ACL. The ACL is thought to have controlled the emplacement of Early Oligocene porphyry copper deposits (34-37 Ma; Toki, Genoveva, Quetena, and Opache), whereas it deflected the Late Eocene porphyry copper belt (41-44 Ma; Esperanza, Telégrafo, Centinela, and Polo Sur ore deposits). These observations suggest that right-lateral displacement of the ACL was active during the Early Oligocene. We propose that the described structural features need to be considered in future exploration programs within this extensively gravel-covered region of northern Chile.

  13. The Alegre Lineament and its role over the tectonic evolution of the Campos Basin and adjacent continental margin, Southeastern Brazil

    NASA Astrophysics Data System (ADS)

    Calegari, Salomão Silva; Neves, Mirna Aparecida; Guadagnin, Felipe; França, George Sand; Vincentelli, Maria Gabriela Castillo

    2016-08-01

The structural framework and tectonic evolution of the sedimentary basins along the eastern margin of the South American continent are closely associated with the tectonic framework and crustal heterogeneities inherited from the Precambrian basement. However, the role of the NW-SE and NNW-SSE structures observed in the outcropping basement of Southeastern Brazil, and their impact on the development of those basins, has not been closely investigated. In the continental region adjacent to the Campos Basin, we describe a geological feature with NNW-SSE orientation, named in this paper the Alegre Fracture Zone (AFZ), which is observed in the onshore basement and can be projected to the offshore basin. The main goal of this work was to study this structural lineament and its influence on the tectonic evolution of the central portion of the Campos Basin and the adjacent mainland. The onshore area was investigated through remote sensing data combined with field observations, and the offshore area was studied through the interpretation of 2-D seismic data calibrated by geophysical well logs. We conclude that the AFZ occurs both onshore and offshore as a brittle deformation zone formed by multiple sets of fractures that originated in the Cambrian and were reactivated, mainly as normal faults, during the rift phase and in the Cenozoic. In the Campos Basin, the AFZ delimits the western side of the Corvina-Parati Low, composing a complex fault system with the NE-SW faults and the NW-SE transfer faults.

  14. Tectonic lineament mapping of the Thaumasia Plateau, Mars: Comparing results from photointerpretation and a semi-automatic approach

    NASA Astrophysics Data System (ADS)

    Vaz, David A.; Di Achille, Gaetano; Barata, Maria Teresa; Alves, Eduardo Ivo

    2012-11-01

    Photointerpretation is the technique generally used to map and analyze the tectonic features present on the Martian surface. In this study we compare, qualitatively and quantitatively, two tectonic maps based on the interpretation of satellite imagery with a map derived semi-automatically. The comparison of the two photointerpreted datasets allowed us to infer some of the factors that can influence the process of lineament mapping on Mars. Comparing the manually mapped datasets with the semi-automatically mapped features allowed us to evaluate the accuracy of the semi-automatic mapping procedure, as well as to identify the main limitations of the semi-automatic approach to mapping tectonic structures from MOLA altimetry. Significant differences were found between the two photointerpretations. The qualitative and quantitative comparisons showed how mapping criteria, illumination conditions and scale of analysis can locally influence the interpretations. The semi-automatic mapping procedure proved to be mainly dependent on data quality; nevertheless, the methodology, when applied to MOLA data, is able to produce meaningful results at a regional scale.
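
A quantitative comparison of this kind is often reduced to cell-by-cell agreement between rasterized lineament maps. The sketch below is a minimal illustration under that assumption; the 3x4 binary rasters and the completeness/correctness metrics are hypothetical, not the paper's actual data or measures.

```python
def confusion_counts(map_a, map_b):
    """Cell-by-cell agreement between two binary lineament rasters
    (1 = lineament present). Returns (both, only_a, only_b, neither)."""
    both = only_a = only_b = neither = 0
    for row_a, row_b in zip(map_a, map_b):
        for a, b in zip(row_a, row_b):
            if a and b:
                both += 1
            elif a:
                only_a += 1
            elif b:
                only_b += 1
            else:
                neither += 1
    return both, only_a, only_b, neither

# Hypothetical 3x4 rasters: photointerpreted map vs. semi-automatic map
manual = [[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
auto   = [[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1]]

both, only_m, only_a, neither = confusion_counts(manual, auto)
completeness = both / (both + only_m)   # share of manual lineaments recovered
correctness  = both / (both + only_a)   # share of auto lineaments confirmed
print(completeness, correctness)        # 0.75 0.75
```

In practice a buffer is usually applied around each lineament before counting, so that small positional offsets between the two maps do not register as disagreement.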

  15. Special parallel processing workshop

    SciTech Connect

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts relating to parallel processing.

  16. Morphotectonics inferred from the analysis of topographic lineaments auto-detected from DEMs: Application and validation for the Sinai Peninsula, Egypt

    NASA Astrophysics Data System (ADS)

    Masoud, Alaa A.; Koike, Katsuaki

    2011-10-01

    Morphotectonic lineaments observed on the Sinai Peninsula in Egypt were auto-detected from Shuttle Radar Topography Mission 90-m digital elevation model (DEM) and gravity grid data and then analyzed to characterize the tectonic trends that dominated the geologic evolution of this area. The approach employed consists of DEM shading, segment tracing, grouping, statistical analysis of the distribution and orientation of the lineaments, fault plane characterization, and smooth representation techniques. Statistical quantification of counts, mean lengths, densities, and orientations was used to infer the relative severity of the tectonic regimes, to unravel the prominent structural trends, and to demarcate the contribution of various faulting styles that prevailed through time. Restored to the present-day geographic position, prominent N50°-60°W, N20°-40°W, N50°-60°E, and N20°-30°E and less prominent N-S, E-W, and ENE trends were common. The prominence of these trends varied through time. The NW and NE trends showed relatively equal abundances in the Precambrian and the Cambrian whereas the prominence of the NW trends prevailed from the Carboniferous to the Holocene. Lineaments in all formations were near vertical and on average, about 65% showed as strike-slip, 22% as reverse, and 13% as normal faulting styles. Statistics from the detected linear features and the reference geological data reveal the relative severity of five dominant tectonic regimes: Precambrian compression followed by extension at its end, Cretaceous compression, Eocene compression, Miocene extension, and finally Holocene compression. Auto-detected lineaments and the severity of the characterized tectonic periods correspond well with reference data on the geologic structure, geodynamic framework, and the gravity anomaly. Furthermore, the recent significance of the broad structural zones was confirmed by the foci of the earthquake epicenters along and at the intra-plate intersections of the
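
The "DEM shading" step of such a pipeline is typically a Lambertian hillshade computed from finite-difference slope and aspect; shading from several azimuths makes linear features perpendicular to the light direction stand out before segment tracing. A minimal pure-Python sketch (Horn's 3x3 slope estimate; the 90-m cell size matches SRTM, but the tiny ramp DEM is hypothetical):

```python
import math

def hillshade(dem, cellsize=90.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambertian hillshade of a 2-D elevation grid (list of lists of metres)."""
    az = math.radians(360.0 - azimuth_deg + 90.0)   # convert to math convention
    alt = math.radians(altitude_deg)
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # Horn's finite-difference slope components
            dzdx = ((dem[i-1][j+1] + 2*dem[i][j+1] + dem[i+1][j+1])
                    - (dem[i-1][j-1] + 2*dem[i][j-1] + dem[i+1][j-1])) / (8*cellsize)
            dzdy = ((dem[i+1][j-1] + 2*dem[i+1][j] + dem[i+1][j+1])
                    - (dem[i-1][j-1] + 2*dem[i-1][j] + dem[i-1][j+1])) / (8*cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.sin(alt) * math.cos(slope)
                     + math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            out[i][j] = max(0.0, shade)   # clamp self-shadowed cells to 0
    return out

# A uniform ramp dipping east: every interior cell gets the same shade value
dem = [[col * 10.0 for col in range(5)] for _ in range(5)]
shaded = hillshade(dem)
```

Real workflows (e.g. the segment-tracing step that follows) would run this for several azimuths and combine the results before extracting edges.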

  17. Seismic investigation of the Yavapai-Mazatzal transition zone and the Jemez Lineament in northeastern New Mexico

    NASA Astrophysics Data System (ADS)

    Magnani, Maria Beatrice; Levander, Alan; Miller, Kate C.; Eshete, Tefera; Karlstrom, Karl E.

    A new seismic reflection profile of the Precambrian lithosphere under the Jemez Lineament (JL) (northeastern New Mexico, USA) shows impressive reflectivity throughout the crust. The upper crust is characterized by a 2 km thick undeformed Paleozoic and Mesozoic sedimentary sequence above the Precambrian basement. At a depth of 5-8 km, undulating reflections image a Proterozoic nappe cropping out in the nearby Rincon Range. To the south the upper crust is seismically transparent except for south dipping reflections at 2-10 km depth. The middle-lower crust, from 10-15 km depth, shows oppositely dipping reflections that converge in the deep crust (35-37 km) roughly at the center of the profile. To the north the reflectivity dips southward at 25° to a depth of 33 km before fading in the lower crust. In the southern part of the profile a crustal-scale duplex structure extends horizontally for more than 60 km. We interpret the oppositely dipping reflections as the elements of a doubly vergent suture zone that resulted from the accretion of the Mazatzal island arc to the southern margin of the Yavapai proto-craton at ~1.65-1.68 Ga. Subhorizontal high-amplitude reflections at 10-15 km depth overprint all the reflections mentioned above. These reflections, the brightest in the profile, are interpreted as mafic sills. Although their age is unconstrained we suggest that they could be either 1.1 Ga or Tertiary-aged intrusions related to the volcanic activity along the JL. We further speculate that the Proterozoic lithospheric suture provided a pathway for the basaltic magma to penetrate the crust and reach the surface.

  18. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper describes recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques, presenting rendering algorithms, developed for massively parallel processors (MPPs), for polygons, spheres, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.
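
Parallel polygon rendering of this kind typically ends in a sort-last compositing step: each processor rasterizes its share of primitives into a partial (depth, color) buffer, and the buffers are merged pixel-by-pixel, keeping the nearer fragment. A minimal sketch with hypothetical fragment data (not the CM-5 implementation described in the paper):

```python
def composite(frags_a, frags_b):
    """Sort-last compositing: merge two processors' partial (depth, color)
    buffers, keeping the nearer (smaller-depth) fragment at each pixel."""
    return [a if a[0] <= b[0] else b for a, b in zip(frags_a, frags_b)]

# Hypothetical (depth, color) per pixel from two render nodes
node0 = [(0.2, 'red'), (0.9, 'bg'),    (0.5, 'blue')]
node1 = [(0.7, 'bg'),  (0.3, 'green'), (0.4, 'red')]
print(composite(node0, node1))  # [(0.2, 'red'), (0.3, 'green'), (0.4, 'red')]
```

With more than two nodes the same pairwise merge is applied in a reduction tree, which is where the communication cost mentioned in the abstract is paid.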

  19. Parallel algorithms and architectures

    SciTech Connect

    Albrecht, A.; Jung, H.; Mehlhorn, K.

    1987-01-01

    Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single-function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; RELACS - a recursive layout computing system; and Parallel linear conflict-free subtree access.

  20. Fluid Transport in Lineaments

    NASA Astrophysics Data System (ADS)

    Kerrich, R.

    1986-04-01

    Fluid infiltration into fault zones and their deeper level counterparts, brittle-ductile shear zones, is examined in five different tectonic environments. In the 2.7 Ga Abitibi Greenstone Belt major tectonic discontinuities have lateral extents of hundreds of kilometres. These structures, initiated as listric normal faults accommodating rift extension of the greenstone belt, acted as sites for the extrusion of komatiitic magmas, and formed submarine scarps which delimit linear belts of clastic and chemical sediments. During reverse motion on the structures, accommodating shortening of the belt, these transcrustal faults were used as a conduit for the ascent of trondhjemitic magmas from the base of the crust, alkaline magmas from the asthenosphere, and for discharge of hundreds of cubic kilometres of hydrothermal fluids. Such fluids were characterized by δ 18O = 6 ± 2, δ D = -50 ± 20, δ 13C = -4 ± 3, and temperatures of 270-450 degrees C, probably derived from devolatilization of crustal rocks undergoing prograde metamorphism. Hydrothermal fluids were more radiogenic (87Sr/86Sr = 0.7010-0.7040) and possessed higher values of μ than contemporaneous mantle, komatiites or tholeiites, and thus carried a contribution from older sialic basement. Mineralized faults possess enrichments of l.i.l. elements, including K, Rb, Li, Cs, B and CO2, as well as rare elements such as Au, Ag, As, Sb, Se, Te, Bi, W. Fluids were characterized by XCO2 ≈ 0.1, neutral to slightly acidic pH, low salinity (less than 3% by mass), and K/Na ≈ 0.1, carried minor CH4, CO and N2, and underwent transient effervescence of CO2 during decompression. At Yellowknife, a series of large-scale shear zones developed by brittle-ductile mechanisms, involving volume dilation with the migration of ca. 5% (by mass) volatiles into the shear zone from surrounding metabasalts.
This early deformation involved no departures in redox state or whole-rock δ 18O from background states of Fe2+/ΣFe = 0.7 and δ 18O of 7-7.5 per thousand respectively, attesting to conditions of low water/rock ratios. Shear zones subsequently acted as high-permeability conduits for pulsed discharge of more than 9 km3 of reduced metamorphic hydrothermal fluids at 360-450 degrees C. The West Bay Fault, a late major transcurrent structure, contains massive vein quartz that grew at 200-300 degrees C from fluids of 2-6% salinity (possibly formation brines). At the Grenville Front, translation was accommodated along two mylonite zones and an intervening boundary fault. The high-temperature (MZ II) and low-temperature (MZ I) mylonite zones formed at 580-640 degrees C and 430-490 degrees C, respectively, in the presence of fluids of metamorphic origin, indigenous to the immediate rocks. A population of post-tectonic quartz veins occupying brittle fractures were precipitated from fluids with extremely negative δ 18O at 200-300 degrees C. The water may have been derived from downward penetration into fault zones of low-18O precipitation on a mountain range induced by continental collision, with uplift accommodated at deep levels by the mylonite zones coupled with rebound on the boundary faults. At Lagoa Real, Brazil, Archaean gneisses overlie Proterozoic sediments along thrust surfaces, and contain brittle-ductile shear zones locally occupied by uranium deposits. Following deformation at 500-540 degrees C, in the presence of metamorphic fluids and under conditions of low water/rock ratios, shear zones underwent local intense oxidation and desilication. All minerals undergo a shift of -10 per thousand δ 18O, indicating discharge up through the Archaean gneisses of formation brines recharged by meteoric water in the underlying Proterozoic sediments during overthrusting: about 1000 km3 of solution passed through these structures.
The shear zones and Proterozoic sediments are less radiogenic (87Sr/86Sr = 0.720) than contemporaneous Archaean gneisses (87Sr/86Sr = 0.900), corroborating transport of fluids and solutes through the structure from a large external reservoir. Major crustal detachment faults of Tertiary age in the Picacho Cordilleran metamorphic core complex of Arizona show an upward transition from undeformed granitic basement, through mylonitic to brecciated and hydrothermally altered counterparts. The highest tectonic levels are allochthonous, oxidatively altered Miocene volcanics, with hydrothermal sediments in listric normal fault basins. This transition is accompanied by a 12 per thousand increase in δ 18O from 7 to 19, and a decrease of temperature of 400 degrees C, because of expulsion of large volumes of metamorphic fluids during detachment. In the Miocene allochthon, mixing occurred between cool downward-penetrating meteoric thermal waters and hot, deeper aqueous reservoirs. In general, flow regimes in these fault and shear zones follow a sequence from conditions of high temperature and pressure with locally derived fluids at low water/rock ratios during initiation of the structures, to high fluxes of reduced formation or metamorphic fluids along conduits as the structures propagate and intersect hydrothermal reservoirs. Later in the tectonic evolution and at shallower crustal levels, there was incursion of oxidizing fluids from near-surface reservoirs into the faults.

  1. MPP Parallel FORTH

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1987-01-01

    Massively Parallel Processor (MPP) Parallel FORTH is a derivative of FORTH-83 and Unified Software Systems' Uni-FORTH. The extension of FORTH into the realm of parallel processing on the MPP is described. With few exceptions, Parallel FORTH was made to follow the description of Uni-FORTH as closely as possible. Likewise, the Parallel FORTH extensions were designed to be as philosophically similar to serial FORTH as possible. The MPP hardware characteristics, as viewed by the FORTH programmer, are discussed. A description is then presented of how Parallel FORTH is implemented on the MPP.

  2. The West Beverly Hills Lineament and Beverly Hills High School: Ethical Issues in Geo-Hazard Communication

    NASA Astrophysics Data System (ADS)

    Gath, Eldon; Gonzalez, Tania; Roe, Joe; Buchiarelli, Philip; Kenny, Miles

    2014-05-01

    Results of geotechnical studies for the Westside Subway were disclosed in a public hearing on Oct. 19, 2011, showing new "active faults" of the Santa Monica fault and the West Beverly Hills Lineament (WBHL), identified as a northern extension of the Newport-Inglewood fault. The presentations spoke of the danger posed by these faults, the possibility of killing people, and how it was good news that these faults had been discovered now instead of later. The presentations were live and are now memorialized as YouTube videos (http://www.youtube.com/watch?v=Omx2BTIpzAk and others). No faults had been physically exposed or observed by the study; the faults were all interpreted from cone penetrometer probes, supplemented by core borings and geophysical transects. Several of the WBHL faults traversed buildings of the Beverly Hills High School (BHHS), triggering the school district to geologically map and characterize these faults for future planning efforts, and to quantify risk to the students in the 1920s high school building. Five exploratory trenches were excavated within the high school property, 12 cone penetrometers were pushed, and 26 cored borings were drilled. Geologic logging of the trenches and borings and interpretation of the CPT data failed to confirm the presence of the mapped WBHL faults, instead showing an unfaulted, 3° NE-dipping sequence of mid-Pleistocene alluvial-fan deposits conformably overlying an ~1 Ma marine sand. Using 14C, OSL, and soil pedology for stratigraphic dating, the BHHS site was cleared of fault rupture hazards and the WBHL was shown to be an erosional margin of Benedict Canyon, partially buttressed by 40-200 ka alluvial deposits from Benedict Wash. The consequence of the Westside Subway's active-fault maps has been the unexpected expenditure of millions of dollars for emergency fault investigations at BHHS and several other private properties within a densely developed urban highrise environment. None of these studies have found

  3. Analysis of the influence of tectonics on the evolution valley network based on the SRTM DEM and the relationship of automatically extracted lineaments and the tectonic faults, Jemma River basin, Ethiopia

    NASA Astrophysics Data System (ADS)

    Kusák, Michal

    2016-04-01

    The Ethiopian Highland is a good example of a high-plateau landscape formed by a combination of tectonic uplift and episodic volcanism (Kazmin, 1975; Pik et al., 2003; Gani et al., 2009). Deeply incised gorges indicate active fluvial erosion, which leads to instability of over-steepened slopes. In this study we focus on the Jemma River basin, a left tributary of the Abay (Blue Nile), to assess the influence of neotectonics on the evolution of its river and valley network. Tectonic lineaments, the shape of the valley network, the direction of river courses, and the intensity of fluvial erosion were compared in six subregions that were delineated beforehand by means of morphometric analysis. The influence of tectonics on the valley network is low in the older, deep and wide canyons and on the high plateau covered with Tertiary lava flows, while it is high in the younger, upper parts of the canyons. Furthermore, the coincidence of the valley network with the tectonic lineaments differs among the subregions. Fluvial erosion along the main tectonic zone (NE-SW direction) opened the way for backward erosion to reach far-distant areas in the east. This tectonic zone also separates older areas in the west from the subregions of youngest landscape evolution in the east, next to the Rift Valley. We studied the functions that automatically extract lineaments in the programs ArcGIS 10.1 and PCI Geomatica, and the influence of input parameter values on the final shape and number of lineaments. A map of automatically extracted lineaments was created and compared with (1) the tectonic faults mapped by the Geological Survey of Ethiopia (1996); and (2) lineaments based on the author's visual interpretation. The comparison of the automatically extracted lineaments with the visually interpreted ones shows that both sets share the same azimuth (NE-SW), the same direction as the orientation of the rift. But the mapping of lineaments by automated
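
Comparing two lineament sets by azimuth, as done here, usually amounts to binning segment strikes into the classes of a rose diagram and checking whether the dominant class coincides. A minimal sketch with entirely hypothetical segment endpoints (both toy sets are made to strike NE-SW, mimicking the rift-parallel trend reported in the abstract):

```python
import math
from collections import Counter

def azimuth_deg(x1, y1, x2, y2):
    """Strike azimuth of a segment in [0, 180), clockwise from north
    (x = easting, y = northing); direction sense is irrelevant for strike."""
    return math.degrees(math.atan2(x2 - x1, y2 - y1)) % 180.0

def rose_bins(segments, bin_width=30):
    """Histogram of segment azimuths in bin_width-degree rose-diagram classes."""
    counts = Counter()
    for seg in segments:
        counts[int(azimuth_deg(*seg) // bin_width) * bin_width] += 1
    return counts

# Hypothetical endpoint data (x1, y1, x2, y2), both sets striking ~45°
auto_set   = [(0, 0, 1, 1), (0, 0, 2, 2.1), (0, 0, 1, 0.9)]
manual_set = [(0, 0, 3, 3), (0, 0, 1, 1.2)]

print(rose_bins(auto_set))     # all three segments fall in the 30-60° class
print(rose_bins(manual_set))
```

A length-weighted variant (summing segment lengths instead of counts per bin) is common, since automated extractors tend to fragment long lineaments into many short segments.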

  4. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  5. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  6. Proceedings of the 38th Lunar and Planetary Science Conference

    NASA Technical Reports Server (NTRS)

    2007-01-01

    The sessions in the conference include: Titan, Mars Volcanism, Mars Polar Layered Deposits, Early Solar System Isotopes, SPECIAL SESSION: Mars Reconnaissance Orbiter: New Ways of Studying the Red Planet, Achondrites: Exploring Oxygen Isotopes and Parent-Body Processes, Solar System Formation and Evolution, SPECIAL SESSION: SMART-1, Impact Cratering: Observations and Experiments, SPECIAL SESSION: Volcanism and Tectonism on Saturnian Satellites, Solar Nebula Composition, Mars Fluvial Geomorphology, Asteroid Observations: Spectra, Mostly, Mars Sediments and Geochemistry: View from the Surface, Mars Tectonics and Crustal Dichotomy, Stardust: Wild-2 Revealed, Impact Cratering from Observations and Interpretations, Mars Sediments and Geochemistry: The Map View, Chondrules and Their Formation, Enceladus, Asteroids and Deep Impact: Structure, Dynamics, and Experiments, Mars Surface Process and Evolution, Martian Meteorites: Nakhlites, Experiments, and the Great Shergottite Age Debate, Stardust: Mainly Mineralogy, Astrobiology, Wind-Surface Interactions on Mars and Earth, Icy Satellite Surfaces, Venus, Lunar Remote Sensing, Space Weathering, and Impact Effects, Interplanetary Dust/Genesis, Mars Cratering: Counts and Catastrophes?, Chondrites: Secondary Processes, Mars Sediments and Geochemistry: Atmosphere, Soils, Brines, and Minerals, Lunar Interior and Differentiation, Mars Magnetics and Atmosphere: Core to Ionosphere, Metal-rich Chondrites, Organics in Chondrites, Lunar Impacts and Meteorites, Presolar/Solar Grains, Topics for Print Only papers are: Outer Planets/Satellites, Early Solar System, Interplanetary Dust, Comets and Kuiper Belt Objects, Asteroids and Meteoroids, Chondrites, Achondrites, Meteorite Related, Mars Reconnaissance Orbiter, Mars, Astrobiology, Planetary Differentiation, Impacts, Mercury, Lunar Samples and Modeling, Venus, Missions and Instruments, Global Warming, Education and Public Outreach, Poster sessions are: Asteroids/Kuiper Belt Objects,
Galilean Satellites: Geology and Mapping, Titan, Volcanism and Tectonism on Saturnian Satellites, Early Solar System, Achondrite Hodgepodge, Ordinary Chondrites, Carbonaceous Chondrites, Impact Cratering from Observations and Interpretations, Impact Cratering from Experiments and Modeling, SMART-1, Planetary Differentiation, Mars Geology, Mars Volcanism, Mars Tectonics, Mars: Polar, Glacial, and Near-Surface Ice, Mars Valley Networks, Mars Gullies, Mars Outflow Channels, Mars Sediments and Geochemistry: Spirit and Opportunity, Mars Reconnaissance Orbiter: New Ways of Studying the Red Planet, Mars Reconnaissance Orbiter: Geology, Layers, and Landforms, Oh, My!, Mars Reconnaissance Orbiter: Viewing Mars Through Multicolored Glasses; Mars Science Laboratory, Phoenix, and ExoMars: Science, Instruments, and Landing Sites; Planetary Analogs: Chemical and Mineral, Planetary Analogs: Physical, Planetary Analogs: Operations, Future Mission Concepts, Planetary Data, Imaging, and Cartography, Outer Solar System, Presolar/Solar Grains, Stardust Mission; Interplanetary Dust, Genesis, Asteroids and Comets: Models, Dynamics, and Experiments, Venus, Mercury, Laboratory Instruments, Methods, and Techniques to Support Planetary Exploration; Instruments, Techniques, and Enabling Technologies for Planetary Exploration; Lunar Missions and Instruments, Living and Working on the Moon, Meteoroid Impacts on the Moon, Lunar Remote Sensing, Lunar Samples and Experiments, Lunar Atmosphere, Moon: Soils, Poles, and Volatiles, Lunar Topography and Geophysics, Lunar Meteorites, Chondrites: Secondary Processes, Chondrites, Martian Meteorites, Mars Cratering, Mars Surface Processes and Evolution, Mars Sediments and Geochemistry: Regolith, Spectroscopy, and Imaging, Mars Sediments and Geochemistry: Analogs and Mineralogy, Mars: Magnetics and Atmosphere, Mars Aeolian Geomorphology, Mars Data Processing and Analyses, Astrobiology, Engaging Student Educators and the Public in Planetary Science,

  7. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  8. Eclipse Parallel Tools Platform

    SciTech Connect

    Watson, Gregory; DeBardeleben, Nathan; Rasmussen, Craig

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration, support for a small number of parallel architectures, and basis

  9. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
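
Of the three molecular-dynamics decompositions mentioned, the spatial decomposition is the easiest to sketch: atoms are binned into cells, and each processor evaluates only pairs involving its own cell plus a neighbouring halo. A 1-D toy version of the bookkeeping (hypothetical coordinates; assumes the interaction cutoff does not exceed the cell width, so neighbour cells suffice):

```python
def spatial_decompose(positions, box, ncells):
    """Assign each atom index to a cell of a 1-D domain decomposition.
    In a real parallel MD code each cell would be owned by one processor."""
    cells = [[] for _ in range(ncells)]
    width = box / ncells
    for idx, x in enumerate(positions):
        cells[min(int(x // width), ncells - 1)].append(idx)
    return cells

def local_pairs(cells):
    """Pairs each domain can evaluate using only its own cell and the
    right-hand neighbour (half-shell convention, so no pair is counted twice)."""
    pairs = []
    for c, own in enumerate(cells):
        # pairs within the cell
        pairs += [(own[i], own[j]) for i in range(len(own)) for j in range(i + 1, len(own))]
        # pairs reaching into the neighbouring cell's halo
        if c + 1 < len(cells):
            pairs += [(a, b) for a in own for b in cells[c + 1]]
    return pairs

positions = [0.5, 1.2, 2.8, 3.1, 3.9]      # hypothetical 1-D coordinates
cells = spatial_decompose(positions, box=4.0, ncells=4)
print(cells)                               # [[0], [1], [2], [3, 4]]
print(local_pairs(cells))                  # each interacting pair listed once
```

The communication advantage is that each processor only exchanges halo data with spatial neighbours, so cost scales with surface area rather than with the total atom count.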

  10. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a parallel digital forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  11. Candidate-penetrative-fracture mapping of the Grand Canyon area, Arizona, from spatial correlation of deep geophysical features and surficial lineaments

    USGS Publications Warehouse

    Gettings, Mark E.; Bultman, Mark W.

    2005-01-01

    Some aquifers of the southwestern Colorado Plateaus Province are deeply buried and overlain by several impermeable shale layers, and so recharge to the aquifer probably is mainly by seepage down penetrative-fracture systems. The purpose of this 2-year study, sponsored by the U.S. National Park Service, was to map candidate deep penetrative fractures over a 120,000-km2 area, using gravity and aeromagnetic-anomaly data together with surficial-fracture data. The study area was on the Colorado Plateau south of the Grand Canyon and west of Black Mesa; mapping was carried out at a scale of 1:250,000. The resulting database constitutes a spatially registered estimate of deep-fracture locations. Candidate penetrative fractures were located by spatial correlation of horizontal-gradient and analytic-signal maximums of gravity and magnetic anomalies with major surficial lineaments obtained from geologic, topographic, side-looking-airborne-radar, and satellite imagery. The maps define a subset of candidate penetrative fractures because of limitations in the data coverage and the analytical technique. In particular, the data and analytical technique used cannot predict whether the fractures are open or closed. Correlations were carried out by using image-processing software, such that every pixel on the resulting images was coded to uniquely identify which datasets are correlated. The technique correctly identified known and many new deep fracture systems. The resulting penetrative-fracture-distribution maps constitute an objectively obtained, repeatable dataset and a benchmark from which additional studies can begin. The maps also define in detail the tectonic fabrics of the southwestern Colorado Plateaus Province. Overlaying the correlated lineaments on the normalized-difference-vegetation-index image reveals that many of these lineaments correlate with the boundaries of vegetation zones in drainages and canyons and so may be controlling near-surface water availability in
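
The pixel coding described (each pixel value uniquely identifying which datasets correlate there) is naturally a bitmask: dataset k sets bit k. A minimal sketch with hypothetical 3x3 binary masks standing in for the gravity-gradient, magnetic, and surficial-lineament layers (the study's actual rasters and thresholds are not reproduced here):

```python
# Hypothetical 3x3 binary masks: 1 where each dataset shows a feature
gravity_grad = [[1, 0, 0], [1, 1, 0], [0, 1, 0]]
magnetic_sig = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
surf_lineam  = [[1, 1, 0], [0, 1, 1], [0, 0, 0]]
layers = [gravity_grad, magnetic_sig, surf_lineam]

def code_pixels(layers):
    """Pack binary layers into one bit-coded image: bit k of a pixel is set
    iff dataset k is 'on' there, so the pixel value identifies the exact
    combination of correlated datasets (7 = all three coincide)."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[sum(layers[k][i][j] << k for k in range(len(layers)))
             for j in range(cols)] for i in range(rows)]

coded = code_pixels(layers)
print(coded)   # coded[0][0] == 7: all three datasets coincide at that pixel
candidates = [(i, j) for i in range(3) for j in range(3) if coded[i][j] == 7]
print(candidates)   # pixels flagged as candidate penetrative fractures
```

Decoding is the reverse lookup: a pixel value of 5 (binary 101) means datasets 0 and 2 correlate but dataset 1 does not.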

  12. The occurrence of a complete continental rift type of volcanic rocks suite along the Yerer-Tullu Wellel Volcano Tectonic Lineament, Central Ethiopia

    NASA Astrophysics Data System (ADS)

    Abebe Adhana, Tsegaye

    2014-11-01

    The Yerer-Tullu Wellel Volcano-tectonic Lineament (YTVL) is an E-W trending fault system or aborted rift that intercepts the Main Ethiopian Rift (MER) at Debre Zeyt (Bishoftu)/Yerer, in the eastern periphery of Addis Ababa. The structure corresponds to the westward extension of the southern margin of the Gulf of Aden rift. The YTVL extends for more than 500 km with a very clear northern fault margin, between Addis Ababa and Ambo, known as the "Ambo Fault". The southern margin is indicated by E-W trending segmented lineaments at the latitude of about N 8°30′, the Bedele-Metu being the clearest segment. In between these limits there are several evolved central volcanoes and cinder cones. The central volcanoes range in age from 12 to 7 Ma in the westernmost (Tullu Wellel), and the upper age limit gradually gets younger towards the east, to less than 1 Ma in the Wenchi and Debre Zeyt (Bishoftu) areas. These volcanic products cover the whole spectrum of a continental-rift volcanic rock suite: (1) in the eastern zone (Yerer-Bishoftu) the suite is silica over-saturated, ranging in composition from transitional basalt to peralkaline rhyolite; (2) moving westwards, between Wechacha and Wenchi, the rock suite is silica saturated, ranging in composition from alkali basalt to trachyte; (3) further west, between Ijaji-Konchi and Nekemt, the rock suite is silica under-saturated, ranging in composition from basanite to phonolite. Crossing the Dedessa lineament, the Tullu Wellel rocks appear to be silica saturated. Within a single suite, fractional crystallization is the predominant evolutionary process, even in the silica over-saturated suite. The westward progressive silica under-saturation and increase in alkalinity (except for the Tullu Wellel volcanic centers) is interpreted as reflecting the gradual deepening of an anomalous mantle where partial fusion took place. Therefore, as distance increases westward from the MER junction, the amount of melt in the upper mantle was

  13. Geochemical survey of lower Pennsylvanian Corbin Sandstone outcrop belt in eastern Kentucky

    SciTech Connect

    Richers, D.M.

    1981-09-01

    Geochemical anomalies that may constitute further evidence for the existence of the east-west trending 38th Parallel lineament have been discovered in Wolfe, Powell, and Menifee Counties, Kentucky. Stream-water, stream-sediment, and outcrop samples collected along the northeast-southwest-trending Corbin Sandstone outcrop belt show anomalous concentrations of U, Th, Zn, Cu, and Ni only in parts of the belt that skirt the 38th Parallel lineament. Landsat studies also show that anomalies are closely associated with the intersections of the four major linear trends present in eastern Kentucky. This association, in part, suggests that the anomalies resulted from ascending fluids which utilized these lineaments as conduits. The presence of slightly uraniferous rock at Bell Branch in Menifee County (together with the presence of geochemical anomalies in the stream-water and sediment samples in Powell, Wolfe, and Menifee Counties) is encouraging even though commercial quantities of uranium or base metals have not been discovered. An anomalously uraniferous kimberlite pipe in Elliott County warrants additional study for this part of eastern Kentucky.

  14. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
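    The per-pixel linear-algebra step behind SENSE-type reconstruction can be sketched in a few lines: with an acceleration factor of two, each aliased pixel is a coil-weighted sum of the two true pixels that fold onto it, and unfolding solves one small linear system per pixel pair. A minimal pure-Python sketch, with made-up illustrative sensitivity and intensity values:

```python
# SENSE unfolding for acceleration R = 2: two true pixels fold onto one
# aliased pixel. Each coil c measures y[c] = S[c][0]*rho[0] + S[c][1]*rho[1],
# where S holds the coil sensitivities at the two folded locations.

def sense_unfold_2x2(S, y):
    """Solve the 2x2 system S @ rho = y by Cramer's rule."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    if abs(det) < 1e-12:
        raise ValueError("coil sensitivities do not separate the folded pixels")
    rho0 = (y[0] * S[1][1] - S[0][1] * y[1]) / det
    rho1 = (S[0][0] * y[1] - y[0] * S[1][0]) / det
    return rho0, rho1

# Hypothetical sensitivities (rows = coils, columns = folded pixel locations).
S = [[1.0, 0.3],
     [0.4, 0.9]]
true_rho = (5.0, 2.0)  # ground-truth pixel intensities

# Simulate the aliased measurement seen by each coil ...
y = [S[c][0] * true_rho[0] + S[c][1] * true_rho[1] for c in range(2)]
# ... then unfold it.
recovered = sense_unfold_2x2(S, y)
print(recovered)  # recovers (5.0, 2.0) up to floating-point rounding
```

    In a full SENSE reconstruction this solve is repeated for every aliased pixel, using measured coil sensitivity maps rather than the invented values above.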

  15. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to and from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  16. Eclipse Parallel Tools Platform

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration and support for a small number of parallel architectures

  17. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
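    One of the problems listed, scheduling to minimize the number of tardy jobs on a single machine, has a classic O(n log n) sequential solution (the Moore-Hodgson algorithm) against which parallel algorithms are typically measured. A sketch, assuming jobs are given as (processing_time, due_date) pairs:

```python
import heapq

def min_tardy_jobs(jobs):
    """Moore-Hodgson: minimum number of tardy jobs on one machine.
    Jobs are (processing_time, due_date) pairs."""
    on_time = []  # max-heap (negated processing times) of on-time jobs
    t = 0         # completion time of the current on-time sequence
    tardy = 0
    for p, d in sorted(jobs, key=lambda j: j[1]):  # earliest due date first
        heapq.heappush(on_time, -p)
        t += p
        if t > d:  # a deadline is missed: drop the longest job scheduled so far
            t += heapq.heappop(on_time)  # popped value is -p_max, so t shrinks
            tardy += 1
    return tardy

print(min_tardy_jobs([(2, 3), (3, 4), (1, 5), (4, 6)]))  # 2
```

    Greedily dropping the longest on-time job whenever a due date is missed is what makes the algorithm optimal; the sort dominates the running time.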

  18. A comparison of Landsat 8 (OLI) and Landsat 7 (ETM+) in mapping geology and visualising lineaments: A case study of central region Kenya

    NASA Astrophysics Data System (ADS)

    Mwaniki, M. W.; Moeller, M. S.; Schellmann, G.

    2015-04-01

    The availability of inexpensive multispectral remote sensing data, and its higher spectral resolution compared with remote sensing data of higher spatial resolution, has proved valuable for geological and mineral mapping. This has benefited applications such as landslide quantification, fault pattern mapping, and rock and lineament mapping, especially with advanced remote sensing techniques and the use of short-wave infrared bands. While Landsat and ASTER data have been used to map geology in arid areas, and band ratios suiting those applications have been established, geological mapping in highland regions has been challenging due to vegetation cover. The aim of this study was to map geology and to investigate the bands best suited to geological applications in a study area with both semi-arid and highland characteristics. Landsat 7 (ETM+, 2000) and Landsat 8 (OLI, 2014) were therefore compared to determine the bands best suited to geological mapping in the study area. The methodology consisted of performing principal component and factor loading analysis, IHS transformation and decorrelation stretch of the FCC with the highest contrast, band ratioing and examination of the FCC with the highest contrast, and then knowledge-based classification. PCA factor loading analysis with emphasis on geological information showed that band combination (5, 7, 3) for Landsat 7 and (6, 7, 4) for Landsat 8 had the highest contrast, and contrast was further enhanced by performing a decorrelation stretch. Band ratio combinations (3/2, 5/1, 7/3) for Landsat 7 and (4/3, 6/2, 7/4) for Landsat 8 showed more contrast in geologic information and formed the input data for knowledge-based classification. Lineament visualisation was achieved by performing an IHS transformation of the FCC with the highest contrast and its saturation band, combined as follows: Landsat 7 (IC1, PC2, saturation band); Landsat 8 (IC1, PC4, saturation band). The results were compared against existing geology maps, were superior, and could be used to update

  19. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
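    The block (domain) decomposition behind such parallel sieves can be sketched sequentially: base primes up to √N are found once, then each contiguous block of the range is sieved independently, one block per processing element. In the illustrative sketch below the blocks are simply looped over; this is not the paper's hypercube code:

```python
import math

def base_primes(limit):
    """Simple serial sieve for the base primes up to sqrt(N)."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for i in range(2, math.isqrt(limit) + 1):
        if is_prime[i]:
            is_prime[i*i::i] = [False] * len(is_prime[i*i::i])
    return [i for i, p in enumerate(is_prime) if p]

def sieve_block(lo, hi, primes):
    """Sieve one block [lo, hi). The block depends only on the base
    primes, so every block could run on its own processing element."""
    block = [True] * (hi - lo)
    for p in primes:
        start = max(p * p, (lo + p - 1) // p * p)  # first multiple of p >= lo
        for m in range(start, hi, p):
            block[m - lo] = False
    return [lo + i for i, q in enumerate(block) if q and lo + i >= 2]

def parallel_style_sieve(n, n_blocks=4):
    primes = base_primes(math.isqrt(n))
    size = (n + n_blocks - 1) // n_blocks
    out = []
    for b in range(n_blocks):  # in a real run: one block per processor
        out.extend(sieve_block(b * size, min((b + 1) * size, n + 1), primes))
    return out

print(len(parallel_style_sieve(100)))  # 25 primes up to 100
```

    The same structure maps onto either decomposition the abstract compares: block (as here) or scattered, where each processor takes an interleaved subset of the range instead of a contiguous one.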

  20. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  1. Parallel nearest neighbor calculations

    NASA Astrophysics Data System (ADS)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  2. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  3. Parallel system simulation

    SciTech Connect

    Tai, H.M.; Saeks, R.

    1984-03-01

    A relaxation algorithm for solving large-scale system simulation problems in parallel is proposed. The algorithm, which is composed of both a time-step parallel algorithm and a component-wise parallel algorithm, is described. The interconnected nature of the system, which is characterized by the component connection model, is fully exploited by this approach. A technique for finding an optimal number of the time steps is also described. Finally, this algorithm is illustrated via several examples in which the possible trade-offs between the speed-up ratio, efficiency, and waiting time are analyzed.

  4. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  5. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  6. Parallels with nature

    NASA Astrophysics Data System (ADS)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  7. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  8. Simplified Parallel Domain Traversal

    SciTech Connect

    Erickson III, David J

    2011-01-01

    Many data-intensive scientific analysis techniques require global domain traversal, which over the years has been a bottleneck for efficient parallelization across distributed-memory architectures. Inspired by MapReduce and other simplified parallel programming approaches, we have designed DStep, a flexible system that greatly simplifies efficient parallelization of domain traversal techniques at scale. In order to deliver both simplicity to users as well as scalability on HPC platforms, we introduce a novel two-tiered communication architecture for managing and exploiting asynchronous communication loads. We also integrate our design with advanced parallel I/O techniques that operate directly on native simulation output. We demonstrate DStep by performing teleconnection analysis across ensemble runs of terascale atmospheric CO{sub 2} and climate data, and we show scalability results on up to 65,536 IBM BlueGene/P cores.

  9. Partitioning and parallel radiosity

    NASA Astrophysics Data System (ADS)

    Merzouk, S.; Winkler, C.; Paul, J. C.

    1996-03-01

    This paper proposes a theoretical framework for parallel radiosity based on domain subdivision. Moreover, three implementation approaches, taking advantage of partitioning algorithms and a global shared-memory architecture, are presented.

  10. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  11. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
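    The vector-quantization half of such a scheme replaces each input vector with the index of its nearest codeword, so only small indices are transmitted and decoding is a table lookup. A minimal sketch with a hypothetical two-entry codebook (illustrative values, not the paper's MPP implementation):

```python
def nearest_codeword(vector, codebook):
    """Index of the codeword closest to the vector (squared Euclidean)."""
    def dist2(u, v):
        return sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return min(range(len(codebook)), key=lambda i: dist2(vector, codebook[i]))

def vq_encode(vectors, codebook):
    """Each vector is compressed to a codebook index; every vector is
    handled independently, which is what makes the step parallelizable."""
    return [nearest_codeword(v, codebook) for v in vectors]

def vq_decode(indices, codebook):
    """Lossy reconstruction: a table lookup per index."""
    return [codebook[i] for i in indices]

# Hypothetical 2-entry codebook: a dark and a bright pixel pair.
codebook = [(10.0, 12.0), (200.0, 190.0)]
data = [(8.0, 14.0), (205.0, 188.0), (11.0, 11.0)]

indices = vq_encode(data, codebook)   # [0, 1, 0]
approx = vq_decode(indices, codebook) # lossy reconstruction
print(indices)
```

    Because every vector's nearest-codeword search is independent, the encode loop maps directly onto one-vector-per-processor parallelism of the kind the MPP provided.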

  12. Continuous parallel coordinates.

    PubMed

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
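    The point-line duality the model rests on is easy to state: a scatterplot point (a, b) becomes the line y(x) = a + (b - a)x between the two parallel axes at x = 0 and x = 1, and all points on a scatterplot line y = mx + c map to dual lines through the single point (1/(1 - m), c/(1 - m)) for m ≠ 1. A small pure-Python check of that duality (illustrative only):

```python
def dual_line(a, b):
    """Parallel-coordinates dual of scatterplot point (a, b):
    the line y(x) = a + (b - a) * x between the axes x = 0 and x = 1."""
    return lambda x: a + (b - a) * x

def dual_point(m, c):
    """Dual of the scatterplot line y = m*x + c (m != 1): the point
    where the dual lines of all points on that line intersect."""
    return 1.0 / (1.0 - m), c / (1.0 - m)

# Two points on the scatterplot line y = 2x + 1 ...
p1 = dual_line(0.0, 1.0)   # point (0, 1)
p2 = dual_line(1.0, 3.0)   # point (1, 3)

# ... whose dual lines both pass through the dual point of y = 2x + 1.
x_star, y_star = dual_point(2.0, 1.0)   # (-1.0, -1.0)
assert p1(x_star) == y_star
assert p2(x_star) == y_star
print(x_star, y_star)
```

    The density model in the paper exploits exactly this correspondence: mass concentrated along a line in the scatterplot concentrates at a point between (or outside) the parallel axes.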

  13. Dynamic topography of the western Great Plains: landscape evidence for mantle-driven uplift associated with the Jemez lineament of NE New Mexico and SE Colorado

    NASA Astrophysics Data System (ADS)

    Nereson, A. L.; Karlstrom, K. E.; McIntosh, W. C.; Heizler, M. T.; Kelley, S. A.; Brown, S. W.

    2011-12-01

    Dynamic topography results when viscous stresses created by flow within the mantle are transmitted through the lithosphere and interact with, and deform, the Earth's surface. Because dynamic topography is characterized by low amplitudes and long wavelengths, its subtle effects may be best recorded in low-relief areas such as the Great Plains of the USA where they can be readily observed and measured. We apply this concept to a unique region of the western Great Plains in New Mexico and Colorado where basalt flows of the Jemez lineament (Raton-Clayton and Ocate fields) form mesas (inverted topography) that record the evolution of the Great Plains surface through time. This study uses multiple datasets to evaluate the mechanisms which have driven the evolution of this landscape. Normalized channel steepness index (ksn) analysis identifies anomalously steep river gradients across broad (50-100 km) convexities within a NE- trending zone of differential river incision where higher downstream incision rates in the last 1.5 Ma suggest headwater uplift. At 2-8 Ma timescales, 40Ar/39Ar ages of basalt-capped paleosurfaces in the Raton-Clayton and Ocate volcanic fields indicate that rates of denudation increase systematically towards the NW from a NE-trending zone of approximately zero denudation (that approximately coincides with the high ksn zone), also suggestive of regional warping above the Jemez lineament. Onset of more rapid denudation is observed in the Raton-Clayton field beginning at ca. 3.6 Ma. Furthermore, two 300-400-m-high NE-trending erosional escarpments impart a staircase-like topographic profile to the region. Tomographic images from the EarthScope experiment show that NE-trending topographic features of this region correspond to an ~8 % P-wave velocity gradient of similar trend at the margin of the low-velocity Jemez mantle anomaly. We propose that the erosional landscapes of this unique area are, in large part, the surface expression of dynamic mantle

  15. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  16. Parallel optical sampler

    SciTech Connect

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  17. Photogeologic and kinematic analysis of lineaments at Yucca Mountain, Nevada: Implications for strike-slip faulting and oroclinal bending

    SciTech Connect

    O'Neill, J.M.; Whitney, J.W.; Hudson, M.R.

    1992-12-31

    The main structural grain at Yucca Mountain, as seen from aerial photographs, is a pronounced north-trending linear fabric defined by parallel east-tilted fault-block ridges. The ridges are bounded on the west by normal faults that are easily recognizable on aerial photographs, mainly as isolated, colinear scarps in alluvium and as offset bedrock units. All ridge-bounding faults are linked to adjacent faults, most commonly by short northwest-trending fault splays. The generally north-trending high-angle faults primarily display down-to-the-west normal offset, but also have an auxiliary component of left-lateral slip. Left-lateral slip is indicated by offset stream channels, slickenlines, and en echelon fault splays that are structurally linked, commonly by pull-apart grabens. These grabens, best seen on low-sun-angle aerial photographs, range from tens of meters to more than 3 kilometers wide. The smallest pull-apart zones are well developed along the Windy Wash and Solitario Canyon faults on the west side of Yucca Mountain; the largest of these features is interpreted to structurally link the Bow Ridge and Solitario Canyon faults in the north-central part of Yucca Mountain. The pronounced northwest-trending drainage system in this part of Yucca Mountain appears to be controlled by tension fractures related to left-lateral strike-slip movement on these north-trending faults. Midway Valley, directly east of this pull-apart graben, may also owe its origin, in part, to a pull-apart mechanism.

  18. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems.
As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  19. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  20. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
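
    The two formulations at issue can be written down directly. Below is a minimal Python sketch (function names are mine) contrasting Amdahl's fixed-size law with the scaled-speedup argument behind the Sandia results:

```python
def amdahl_speedup(serial_fraction, processors):
    # Amdahl's law: time = serial part + (parallel part / p), problem size fixed
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

def scaled_speedup(serial_fraction, processors):
    # Gustafson's reformulation (the scaled-problem argument): the problem
    # grows with p, so the parallel work scales instead of shrinking.
    return serial_fraction + (1.0 - serial_fraction) * processors
```

    With a 1% serial fraction on 1024 processors, the fixed-size law caps speedup near 91, while the scaled formulation gives roughly 1014 — consistent in spirit with the reported speedups of over 1000 on scalable problems.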

  1. Programming parallel vision algorithms

    SciTech Connect

    Shapiro, L.G.

    1988-01-01

    Computer vision requires the processing of large volumes of data and requires parallel architectures and algorithms to be useful in real-time, industrial applications. The INSIGHT dataflow language was designed to allow encoding of vision algorithms at all levels of the computer vision paradigm. INSIGHT programs, which are relational in nature, can be translated into a graph structure that represents an architecture for solving a particular vision problem or a configuration of a reconfigurable computational network. The authors consider here INSIGHT programs that produce a parallel net architecture for solving low-, mid-, and high-level vision tasks.

  2. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. The present evaluation of the development status of such architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  3. Coarrays for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van

    2011-01-01

    The design of the Coarray feature of Fortran 2008 was guided by answering the question "What is the smallest change required to convert Fortran to a robust and efficient parallel language?" Two fundamental issues that any parallel programming model must address are work distribution and data distribution. In order to coordinate work distribution and data distribution, methods for communication and synchronization must be provided. Although originally designed for Fortran, the Coarray paradigm has stimulated development in other languages. X10, Chapel, UPC, Titanium, and class libraries being developed for C++ have the same conceptual framework.
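
    The three concerns named above — work distribution, data distribution, and synchronization — can be illustrated outside Fortran. The following Python sketch mimics coarray "images" with threads and a barrier; it is an analogy only, not Coarray Fortran semantics, and all names are mine:

```python
import threading

NUM_IMAGES = 4                        # analogue of coarray "images"
data = list(range(1, 101))            # global work: sum the integers 1..100
partials = [0] * NUM_IMAGES           # one slot per image (the shared "coarray")
barrier = threading.Barrier(NUM_IMAGES)
total = [0]

def image(me):
    chunk = data[me::NUM_IMAGES]      # data distribution: each image owns a strided slice
    partials[me] = sum(chunk)         # work distribution: each image reduces its own slice
    barrier.wait()                    # synchronization: wait for every image's partial
    if me == 0:                       # image 0 combines the results
        total[0] = sum(partials)

threads = [threading.Thread(target=image, args=(i,)) for i in range(NUM_IMAGES)]
for t in threads: t.start()
for t in threads: t.join()
print(total[0])  # 5050
```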

  4. Neoarchean and Paleoproterozoic granitoids marginal to the Jeceaba-Bom Sucesso lineament (SE border of the southern São Francisco craton): Genesis and tectonic evolution

    NASA Astrophysics Data System (ADS)

    Campos, José Carlos Sales; Carneiro, Maurício Antônio

    2008-12-01

    The sialic crust of the southern São Francisco craton along the Jeceaba-Bom Sucesso lineament, central-southern part of Minas Gerais (Brazil), encompasses, among other rock types, Neoarchean and Paleoproterozoic granitoids. These granitoids, according to their petrographic, lithogeochemical and geochronologic characteristics, were grouped into two Neoarchean suites (Samambaia-Bom Sucesso and Salto Paraopeba-Babilônia) and three Paleoproterozoic suites (Cassiterita-Tabuões, Ritápolis and São Tiago). Varied processes and tectonic environments were involved in the genesis of these suites. In particular, the lithogeochemistry of the (Archean and Paleoproterozoic) TTG-type granitoids indicates an origin by partial melting of hydrated basaltic crust in a subduction environment. In the Neoarchean, between 2780 and 2703 Ma, a dominant TTG granitoid genesis related to an active continental margin was followed by another granite genesis related to crustal anatexis processes at 2612-2550 Ma. In the Paleoproterozoic, the generation of TTG and granites s.s. occurred at three distinct times: 2162, 2127 and 1887 Ma. This fact, plus the rock-type diversity produced by this granite genesis, indicates that the continental margin of the southern portion of the São Francisco craton was affected by more than one episode of oceanic-crust consumption, involving different island arc segments and the continent consolidated in the late Neoarchean. A Paleoproterozoic tectonic evolution in three stages is proposed in this work.

  5. Parallel Total Energy

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  6. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and mention future NAS plans for the NPB.

  7. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  8. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  9. Parallel Multigrid Equation Solver

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, including problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  10. Parallel Dislocation Simulator

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  11. Optical parallel selectionist systems

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  12. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N{sup 2}) time. The parallel time complexity estimates for our algorithms are O(N/n{sub p}) for uniform point distributions and O( (N/n{sub p}) log (N/n{sub p}) + n{sub p}log n{sub p}) for non-uniform distributions using n{sub p} CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
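
    For reference, the direct O(N^2) sum that the fast transform accelerates is straightforward to state. A minimal 1-D Python sketch follows; the function name and the kernel normalization exp(-|y - x|^2 / h^2) are my assumptions, not taken from the paper:

```python
import math

def direct_gauss_transform(sources, targets, weights, h):
    # Direct O(N^2) evaluation: G(y_j) = sum_i w_i * exp(-|y_j - x_i|^2 / h^2).
    # This is the baseline cost the parallel fast algorithms reduce.
    return [sum(w * math.exp(-((y - x) ** 2) / h ** 2)
                for x, w in zip(sources, weights))
            for y in targets]
```

    Any fast variant can be validated against this baseline on small point sets before scaling up.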

  13. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  14. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  15. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
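
    The overlap-and-save method itself can be sketched compactly. The following Python sketch substitutes a naive O(N^2) DFT for the optimized DFT-IDFT pairs of the hardware architectures, but the block structure — an overlap of M-1 samples and discarding of the wrapped outputs — is the same; all names are mine:

```python
import cmath

def dft(a, inverse=False):
    # Naive O(N^2) DFT/IDFT; a real implementation would use an FFT.
    N = len(a)
    sign = 1 if inverse else -1
    out = [sum(a[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

def overlap_save(x, h, N):
    # Convolve signal x with filter h using DFT blocks of size N (N >= len(h)).
    M = len(h)
    step = N - M + 1                          # valid outputs per block
    H = dft(list(h) + [0.0] * (N - M))        # filter spectrum, zero-padded to N
    padded = [0.0] * (M - 1) + list(x)        # prime the overlap region
    y, pos = [], 0
    while pos < len(x) + M - 1:
        block = padded[pos:pos + N]
        block += [0.0] * (N - len(block))     # zero-pad the final block
        Y = [a * b for a, b in zip(dft(block), H)]
        yblock = dft(Y, inverse=True)
        y += [v.real for v in yblock[M - 1:]] # discard the M-1 wrapped samples
        pos += step
    return y[:len(x) + M - 1]                 # full linear convolution length
```

    As the report notes, the DFT size N is chosen for the desired reduction in processing rate, not by the filter order, since the filter is split across subconvolutions.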

  16. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  17. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  18. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
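
    The claimed two-phase method can be simulated sequentially. In this Python sketch a 1-D grid of unit intervals stands in for the patent's grid, objects are (lo, hi) intervals, and all names are mine; phase one determines which portions bound each object, and phase two populates each portion:

```python
def populate_grid_parallel(objects, n):
    # Objects are (lo, hi) intervals; grid portion p covers [p, p+1), 0 <= p < n.
    # Phase 1: each "processor" takes one distinct object set and determines
    # which grid portions each of its objects at least partially overlaps.
    object_sets = [objects[i::n] for i in range(n)]
    determined = []                       # (portion, object) pairs
    for objset in object_sets:            # these loops run concurrently in the patent
        for lo, hi in objset:
            for p in range(max(int(lo), 0), min(int(hi) + 1, n)):
                determined.append((p, (lo, hi)))
    # Phase 2: each processor owns one distinct portion and populates it with
    # the objects previously determined to be bounded by that portion.
    grid = [[] for _ in range(n)]
    for p, obj in determined:
        grid[p].append(obj)
    return grid
```

    Splitting the work into a determination phase and a population phase is what lets each phase be partitioned across processors without write conflicts.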

  19. Seeing in parallel

    SciTech Connect

    Little, J.J.; Poggio, T.; Gamble, E.B. Jr.

    1988-01-01

    Computer algorithms have been developed for early vision processes that give separate cues to the distance from the viewer of three-dimensional surfaces, their shape, and their material properties. The MIT Vision Machine is a computer system that integrates several early vision modules to achieve high-performance recognition and navigation in unstructured environments. It is also an experimental environment for theoretical progress in early vision algorithms, their parallel implementation, and their integration. The Vision Machine consists of a movable, two-camera Eye-Head input device and an 8K Connection Machine. The authors have developed and implemented several parallel early vision algorithms that compute edge detection, stereopsis, motion, texture, and surface color in close to real time. The integration stage, based on coupled Markov random field models, leads to a cartoon-like map of the discontinuities in the scene, with partial labeling of the brightness edges in terms of their physical origin.

  20. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  1. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Gryphon, Coranth D.; Miller, Mark D.

    1991-01-01

    PCLIPS (Parallel CLIPS) is a set of extensions to the C Language Integrated Production System (CLIPS) expert system language. PCLIPS is intended to provide an environment for the development of more complex, extensive expert systems. Multiple CLIPS expert systems are now capable of running simultaneously on separate processors, or separate machines, thus dramatically increasing the scope of solvable tasks within the expert systems. As a tool for parallel processing, PCLIPS allows for an expert system to add to its fact-base information generated by other expert systems, thus allowing systems to assist each other in solving a complex problem. This allows individual expert systems to be more compact and efficient, and thus run faster or on smaller machines.

  2. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
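
    As a minimal, concrete instance of preconditioned iteration — a simple Jacobi (diagonal) preconditioner standing in for the multilevel preconditioners the paper develops — the following Python sketch runs preconditioned conjugate gradients on a small symmetric positive definite system; all names are mine:

```python
def pcg(A, b, precond, tol=1e-10, max_iter=200):
    # Preconditioned conjugate gradients for a symmetric positive definite A.
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = list(b)                    # residual b - A x with x = 0
    z = precond(r)                 # apply the preconditioner
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

def jacobi(A):
    # Jacobi (diagonal) preconditioner: the simplest stand-in for the
    # multilevel preconditioners discussed in the paper.
    return lambda r: [ri / A[i][i] for i, ri in enumerate(r)]
```

    For A = [[4, 1], [1, 3]] and b = [1, 2] this converges to (1/11, 7/11) within a couple of iterations; better preconditioners reduce the iteration count on large systems, which is the point of the multilevel construction.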

  3. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
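
    The composition step can be sketched independently of the renderer: each processor produces a partial image carrying per-pixel depth, and the partial images are merged by keeping the nearest sample at every pixel. A minimal Python sketch — the dict-based data layout and names are my assumptions, not the paper's:

```python
def composite(partials):
    # Each partial image maps pixel -> (depth, color). Keep the nearest
    # (smallest-depth) sample at every pixel, as in depth-based compositing
    # of independently rendered partial images.
    out = {}
    for image in partials:
        for pixel, (depth, color) in image.items():
            if pixel not in out or depth < out[pixel][0]:
                out[pixel] = (depth, color)
    return out
```

    Because the per-pixel merge is associative, partial images can be combined pairwise in a tree, which is what makes an efficient parallel compositing schedule possible.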

  4. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  5. ASSEMBLY OF PARALLEL PLATES

    DOEpatents

    Groh, E.F.; Lennox, D.H.

    1963-04-23

    This invention is concerned with a rigid assembly of parallel plates in which keyways are stamped out along the edges of the plates and a self-retaining key is inserted into aligned keyways. Spacers having similar keyways are included between adjacent plates. The entire assembly is locked into a rigid structure by fastening only the outermost plates to the ends of the keys. (AEC)

  6. Adaptive parallel logic networks

    SciTech Connect

    Martinez, T.R.; Vidal, J.J.

    1988-02-01

    This paper presents a novel class of special purpose processors referred to as ASOCS (adaptive self-organizing concurrent systems). Intended applications include adaptive logic devices, robotics, process control, system malfunction management, and in general, applications of logic reasoning. ASOCS combines massive parallelism with self-organization to attain a distributed mechanism for adaptation. The ASOCS approach is based on an adaptive network composed of many simple computing elements (nodes) which operate in a combinational and asynchronous fashion. Problem specification (programming) is obtained by presenting to the system if-then rules expressed as Boolean conjunctions. New rules are added incrementally. In the current model, when conflicts occur, precedence is given to the most recent inputs. With each rule, desired network response is simply presented to the system, following which the network adjusts itself to maintain consistency and parsimony of representation. Data processing and adaptation form two separate phases of operation. During processing, the network acts as a parallel hardware circuit. Control of the adaptive process is distributed among the network nodes and efficiently exploits parallelism.
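
    The rule-presentation model described here is easy to mimic in software. The following Python sketch — a software analogy, not the ASOCS hardware mechanism — stores if-then rules as Boolean conjunctions and, when rules conflict, gives precedence to the most recently added matching rule; all names are mine:

```python
def make_asocs(rules):
    # rules: list of (condition, output) pairs, oldest first. A condition is
    # a dict of required variable values, i.e. a Boolean conjunction.
    def network(inputs):
        result = None
        for condition, output in rules:   # later (more recent) rules win conflicts
            if all(inputs.get(var) == val for var, val in condition.items()):
                result = output
        return result
    return network
```

    In ASOCS proper, adaptation (adding a rule and letting the network reorganize) and data processing (evaluating the combinational circuit) are separate phases; here both collapse into constructing and calling the closure.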

  7. Pleistocene terrace deposition related to tectonically controlled surface uplift: An example of the Kyrenia Range lineament in the northern part of Cyprus

    NASA Astrophysics Data System (ADS)

    Palamakumbura, Romesh N.; Robertson, Alastair H. F.

    2016-06-01

    In this study, we consider how surface uplift of a narrow mountain range has interacted with glacial-related sea-level cyclicity and climatic change to produce a series of marine and non-marine terrace systems. The terrace deposits of the Kyrenia Range record rapid surface uplift of a long-lived tectonic lineament during the early Pleistocene, followed by continued surface uplift at a reduced rate during mid-late Pleistocene. Six terrace depositional systems are distinguished and correlated along the northern and southern flanks of the range, termed K0 to K5. The oldest and highest (K0 terrace system) is present only within the central part of the range. The K2-K5 terrace systems formed later, at sequentially lower levels away from the range. The earliest stage of surface uplift (K0 terrace system) comprises lacustrine carbonates interbedded with mass-flow facies (early Pleistocene?). The subsequent terrace system (K1) is made up of colluvial conglomerate and aeolian dune facies on both flanks of the range. The later terrace systems (K2 to K5) each begin with a basal marine deposit, interpreted as a marine transgression. Deltaic conglomerates prograded during inferred global interglacial stages. Overlying aeolian dune facies represent marine regressions, probably related to global glacial stages. Each terrace depositional system was uplifted and preserved, followed by subsequent deposits at progressively lower topographic levels. Climatic variation during interglacial-glacial cycles and autocyclic processes also exerted an influence on deposition, particularly on short-period fluvial and aeolian deposition.

  8. Trajectory optimization using parallel shooting method on parallel computer

    SciTech Connect

    Wirthman, D.J.; Park, S.Y.; Vadali, S.R.

    1995-03-01

    The efficiency of a parallel shooting method on a parallel computer for solving a variety of optimal control guidance problems is studied. Several examples are considered to demonstrate that a speedup of nearly 7 to 1 is achieved with the use of 16 processors. It is suggested that further improvements in performance can be achieved by parallelizing in the state domain. 10 refs.

  9. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  10. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
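
    The arithmetic underlying those tables is the reciprocal sum for resistors in parallel. A short Python sketch (the function name is mine), with one combination that yields the kind of whole-number total the tables collect:

```python
def parallel_resistance(resistors):
    # For resistors in parallel: 1 / R_total = sum of 1 / R_i
    return 1.0 / sum(1.0 / r for r in resistors)
```

    For example, 6-, 3-, and 2-ohm resistors in parallel give exactly 1 ohm, since 1/6 + 1/3 + 1/2 = 1.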

  11. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  12. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  13. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (an XML file) and automatically checks out all of the plug-ins listed in the feature, which resolves the issue of manually checking out each plug-in required to work on the project. To minimize the time needed, the program performs the plug-in checkouts in parallel: after the feature is parsed, a checkout request is issued for each plug-in it lists, and the requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before work on the project begins. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying bandwidth use. When running the client from a developer's home, the checkout process now saturates the available bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. The Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. 
It can be applied to any
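
The parse-then-fan-out pattern described above (one checkout request per plug-in, handled by a configurable thread pool) can be sketched as follows; the plug-in names and the stub checkout function are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def checkout(plugin):
    # Placeholder for the real SCM checkout call; network-bound
    # work like this overlaps well across threads.
    return f"checked out {plugin}"

def checkout_feature(plugins, workers=8):
    """Check out every plug-in listed in a feature, in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(checkout, plugins))

results = checkout_feature(["org.example.core", "org.example.ui"])
```

Because each checkout spends most of its time waiting on the network, even a modest pool saturates the available bandwidth, which is the effect the abstract reports.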

  14. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to scientific computing. Both multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) machines have produced good results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  15. Fastpath Speculative Parallelization

    NASA Astrophysics Data System (ADS)

    Spear, Michael F.; Kelsey, Kirk; Bai, Tongxin; Dalessandro, Luke; Scott, Michael L.; Ding, Chen; Wu, Peng

    We describe Fastpath, a system for speculative parallelization of sequential programs on conventional multicore processors. Our system distinguishes between the lead thread, which executes at almost-native speed, and speculative threads, which execute somewhat slower. This allows us to achieve nontrivial speedup, even on two-core machines. We present a mathematical model of potential speedup, parameterized by application characteristics and implementation constants. We also present preliminary results gleaned from two different Fastpath implementations, each derived from an implementation of software transactional memory.

  16. CSM parallel structural methods research

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  17. Synchronous Parallel Kinetic Monte Carlo

    SciTech Connect

Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
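
The serial kMC model that the parallel algorithm generalizes is the standard residence-time step: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. A minimal sketch (the event list is illustrative):

```python
import math, random

def kmc_step(rates, rng=random.random):
    """One serial kinetic Monte Carlo step: pick an event with
    probability proportional to its rate, and advance the clock
    by an exponentially distributed residence time."""
    total = sum(rates)
    r = rng() * total
    cumulative = 0.0
    for event, rate in enumerate(rates):
        cumulative += rate
        if r < cumulative:
            break
    dt = -math.log(rng()) / total   # time increment for this step
    return event, dt
```

The parallel formulation's challenge, which the abstract addresses with perfect time synchronicity, is that every processor must advance this clock consistently.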

  18. Roo: A parallel theorem prover

    SciTech Connect

    Lusk, E.L.; McCune, W.W.; Slaney, J.K.

    1991-11-01

    We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

  19. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  20. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  1. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  2. Making parallel lines meet

    PubMed Central

    Baskin, Tobias I.; Gu, Ying

    2012-01-01

    The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

  3. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, the author developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
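
Because each per-user table already holds only records the user may read, a search reduces to translating the user's attribute filters into a MongoDB query document. A sketch of that translation (field names, the `_gt` suffix convention, and the collection-naming scheme are all assumptions, not the tool's actual schema):

```python
def build_metadata_query(**attrs):
    """Translate simple attribute filters into a MongoDB filter dict.
    A trailing '_gt' on a keyword requests a greater-than comparison;
    anything else is an exact match."""
    query = {}
    for key, value in attrs.items():
        if key.endswith("_gt"):
            query[key[:-3]] = {"$gt": value}
        else:
            query[key] = value
    return query

# With pymongo, the filter would be passed to something like
#   db[f"archive_{user}"].find(query)   # collection name is hypothetical
q = build_metadata_query(size_gt=1024, project="projA")
```

Keeping the query builder pure makes it easy to test independently of the database and of the FUSE layer that ultimately issues the query.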

  4. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  5. A systolic array parallelizing compiler

    SciTech Connect

Tseng, P.S.

    1990-01-01

This book presents a completely new approach to the problem of building a systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and the compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  6. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  7. Aerial photographic interpretation of lineaments and faults in late Cenozoic deposits in the eastern parts of the Saline Valley 1:100, 000 quadrangle, Nevada and California, and the Darwin Hills 1:100, 000 quadrangle, California

    SciTech Connect

    Reheis, M.C.

    1991-09-01

Faults and fault-related lineaments in Quaternary and late Tertiary deposits in the southern part of the Walker Lane are potentially active and form patterns that are anomalous compared to those in most other areas of the Great Basin. Two maps at a scale of 1:100,000 summarize information about lineaments and faults in the area around and southwest of the Death Valley-Furnace Creek fault system based on extensive aerial-photo interpretation, limited field investigations, and published geologic maps. There are three major fault zones and two principal faults in the Saline Valley and Darwin Hills 1:100,000 quadrangles. (1) The Death Valley-Furnace Creek fault system and (2) the Hunter Mountain fault zone are northwest-trending right-lateral strike-slip fault zones. (3) The Panamint Valley fault zone and associated Towne Pass and Emigrant faults are north-trending normal faults. The intersection of the Hunter Mountain and Panamint Valley fault zones is marked by a large complex of faults and lineaments on the floor of Panamint Valley. Additional major faults include (4) the north-northwest-trending Ash Hill fault on the west side of Panamint Valley, and (5) the north-trending range-front Tin Mountain fault on the west side of the northern Cottonwood Mountains. The most active faults at present include those along the Death Valley-Furnace Creek fault system, the Tin Mountain fault, the northwest and southeast ends of the Hunter Mountain fault zone, the Ash Hill fault, and the fault bounding the west side of the Panamint Range south of Hall Canyon. Several large Quaternary landslides on the west sides of the Cottonwood Mountains and the Panamint Range apparently reflect slope instability due chiefly to rapid uplift of these ranges. 16 refs.

  8. Unified Parallel Software

    SciTech Connect

    McKay, Mike

    2003-12-01

UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of: o libups.a - C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so - EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl - executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp - manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  9. Parallel Polarization State Generation

    PubMed Central

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813
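
The serial (product of matrices) versus parallel (sum of matrices) formulations can be illustrated with a toy Jones-calculus model; the two-arm setup below is an illustrative assumption, not the paper's apparatus:

```python
import numpy as np

# Fixed polarization elements for two spatially separated arms:
# a horizontal and a vertical polarizer, as Jones matrices.
J_H = np.array([[1.0, 0.0], [0.0, 0.0]])
J_V = np.array([[0.0, 0.0], [0.0, 1.0]])

def parallel_psg(weights, e_in):
    """Parallel architecture: the effective element is a *sum* of
    matrices, one per arm, each scaled by its intensity modulation."""
    J_total = sum(w * J for w, J in zip(weights, (J_H, J_V)))
    return J_total @ e_in

e_in = np.array([1.0, 1.0]) / np.sqrt(2)   # 45-degree linear input
out = parallel_psg([1.0, 0.0], e_in)       # only arm 1 on -> horizontal output
out2 = parallel_psg([1.0, 1.0], e_in)      # both arms on -> 45-degree restored
```

Varying only the intensity weights steers the output state of polarization, which is why the scheme inherits the speed and stability of the underlying intensity modulators.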

  13. Parallel tridiagonal equation solvers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    Three parallel algorithms were compared for the direct solution of tridiagonal linear systems of equations. The algorithms are suitable for computers such as ILLIAC 4 and CDC STAR. For array computers similar to ILLIAC 4, cyclic odd-even reduction has the least operation count for highly structured sets of equations, and recursive doubling has the least count for relatively unstructured sets of equations. Since the difference in operation counts for these two algorithms is not substantial, their relative running times may be more related to overhead operations, which are not measured in this paper. The third algorithm, based on Buneman's Poisson solver, has more arithmetic operations than the others, and appears to be the least favorable. For pipeline computers similar to CDC STAR, cyclic odd-even reduction appears to be the most preferable algorithm for all cases.
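
For contrast with the three parallel algorithms compared, the standard serial baseline for tridiagonal systems is the Thomas algorithm, a specialized Gaussian elimination whose forward recurrence is inherently sequential; that sequentiality is exactly what cyclic odd-even reduction and recursive doubling are designed to break. A sketch:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system serially (Thomas algorithm).
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. O(n) work, but each step
    depends on the previous one, so it does not parallelize."""
    n = len(b)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Cyclic reduction instead eliminates the odd-indexed unknowns in parallel at each of log2(n) levels, trading extra arithmetic for concurrency, which is why its operation count matters on ILLIAC-4-class machines.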

  14. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6-10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  15. Using high-precision 40Ar/39Ar geochronology to understand volcanic hazards within the Rio Grande rift and along the Jemez lineament, New Mexico

    NASA Astrophysics Data System (ADS)

    Zimmerer, M. J.; McIntosh, W. C.; Heizler, M. T.; Lafferty, J.

    2014-12-01

High-precision Ar/Ar ages were generated for late Quaternary volcanic fields in the Rio Grande rift and along the Jemez Lineament, New Mexico, to assess the time-space patterns of volcanism and begin quantifying volcanic hazards for the region. The published chronology of most late Quaternary volcanic centers in the region is not sufficiently precise, accurate, or complete for a comprehensive volcanic hazard assessment. Ar/Ar ages generated as part of this study were determined using the high-sensitivity, multi-collector ARGUS VI mass spectrometer, which provides about an order of magnitude more precise isotopic measurements compared to older generation, single-detector mass spectrometers. Ar/Ar ages suggest an apparent increase in eruption frequency during the late Quaternary within the Raton-Clayton volcanic field, northeastern NM. Only four volcanoes erupted between 426±8 and 97±3 ka. Contrastingly, four volcanoes erupted between 55±2 and 32±5 ka. This last eruptive phase displays a west to east migration of volcanism, has repose periods of 0 to 17 ka, and an average recurrence rate of one eruption per ~5,750 years. The Zuni-Bandera volcanic field, west-central NM, is composed of ~100 late Quaternary basaltic vents. Preliminary results suggest that most of the Chain of Craters, the largest and oldest part of the Zuni-Bandera field, erupted between ~100 and 250 ka. Volcanism then migrated to the east, where published ages indicate at least seven eruptions between 50 and 3 ka. Both volcanic fields display a west to east migration of volcanism during the last ~500 ka, although the pattern is more pronounced in the Zuni-Bandera field. A reassessment of low-precision published ages for other late Quaternary volcanic fields in the region indicates that most fields display a similar west to east migration of volcanism during the last ~500 ka. One possible mechanism to explain the observed patterns of volcanism is the westward migration of the North American plate relative
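
The quoted recurrence rate follows from simple arithmetic over the dated eruptive phase (a sketch of the calculation, not the authors' exact method):

```python
# Four eruptions spanning ~55 ka to ~32 ka in the Raton-Clayton field
span_ka = 55 - 32            # duration of the eruptive phase, in ka
eruptions = 4
recurrence_ka = span_ka / eruptions
# i.e. about one eruption per ~5.75 ka (~5,750 years) on average
```

Note that the uncertainties on the bounding ages (±2 and ±5 ka) are comparable to the recurrence interval itself, which is why higher-precision chronology matters for hazard estimates.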

  16. Parallelizing OVERFLOW: Experiences, Lessons, Results

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.

    1999-01-01

    The computer code OVERFLOW is widely used in the aerodynamic community for the numerical solution of the Navier-Stokes equations. Current trends in computer systems and architectures are toward multiple processors and parallelism, including distributed memory. This report describes work that has been carried out by the author and others at Ames Research Center with the goal of parallelizing OVERFLOW using a variety of parallel architectures and parallelization strategies. This paper begins with a brief description of the OVERFLOW code. This description includes the basic numerical algorithm and some software engineering considerations. Next comes a description of a parallel version of OVERFLOW, OVERFLOW/PVM, using PVM (Parallel Virtual Machine). This parallel version of OVERFLOW uses the manager/worker style and is part of the standard OVERFLOW distribution. Then comes a description of a parallel version of OVERFLOW, OVERFLOW/MPI, using MPI (Message Passing Interface). This parallel version of OVERFLOW uses the SPMD (Single Program Multiple Data) style. Finally comes a discussion of alternatives to explicit message-passing in the context of parallelizing OVERFLOW.

  17. Geology, geochemistry, geochronology, and economic potential of Neogene volcanic rocks in the Laguna Pedernal and Salar de Aguas Calientes segments of the Archibarca lineament, northwest Argentina

    NASA Astrophysics Data System (ADS)

    Richards, J. P.; Jourdan, F.; Creaser, R. A.; Maldonado, G.; DuFrane, S. A.

    2013-05-01

    This study presents new geochemical, geochronological, isotopic, and mineralogical data, combined with new geological mapping for a 2400 km2 area of Neogene volcanic rocks in northwestern Argentina near the border with Chile, between 25°10‧S and 25°45‧S. The area covers the zone of intersection between the main axis of the Cordillera Occidental and a set of NW-SE-trending structures that form part of the transverse Archibarca lineament. This lineament has localized major ore deposits in Chile (e.g., the late Eocene La Escondida porphyry Cu deposit) and large volcanic centers such as the active Llullaillaco and Lastarría volcanoes on the border between Chile and Argentina, and the Neogene Archibarca, Antofalla, and Cerro Galán volcanoes in Argentina. Neogene volcanic rocks in the Laguna Pedernal and Salar de Aguas Calientes areas are mostly high-K calc-alkaline in composition, and range from basaltic andesites, through andesites and dacites, to rhyolites. Magmatic temperatures and oxidation states, estimated from mineral compositions, range from ~ 1000 °C and ∆FMQ ≈ 1.0-1.5 in andesites, to ~ 850 °C and ∆FMQ ≈ 1.5-2.0 in dacites and rhyolites. The oldest rocks consist of early-middle Miocene andesite-dacite plagioclase-pyroxene-phyric lava flows and ignimbrites, with 40Ar/39Ar ages ranging from 17.14 ± 0.10 Ma to 11.76 ± 0.27 Ma. Their major and trace element compositions are typical of the Andean Central Volcanic Zone, and show strong crustal contamination trends for highly incompatible elements such as Cs, Rb, Th, and U. These rocks are geochemically grouped as sub-suite 1. This widespread intermediate composition volcanism was followed in the middle-late Miocene by a period of more focused rhyodacitic flow-dome complex formation. These felsic rocks are characterized by less extreme enrichments in highly incompatible elements, and increasing depletion of heavy rare earth elements. These rocks are geochemically grouped as sub-suite 2. The

  18. PMESH: A parallel mesh generator

    SciTech Connect

    Hardin, D.D.

    1994-10-21

    The Parallel Mesh Generation (PMESH) Project is a joint LDRD effort by A Division and Engineering to develop a unique mesh generation system that can construct large calculational meshes (of up to 10^9 elements) on massively parallel computers. Such a capability will remove a critical roadblock to unleashing the power of massively parallel processors (MPPs) for physical analysis. PMESH will support a variety of LLNL 3-D physics codes in the areas of electromagnetics, structural mechanics, thermal analysis, and hydrodynamics.

  19. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  20. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures, and performance implications.
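
    The force's SPMD style, in which an unspecified number of processes all run the same program and synchronize at barrier macros, can be sketched in Python. This is a loose analogy, not the original FORTRAN macro package; `run_force` and its two-phase sum are illustrative names.

```python
import threading

# A "force" of worker threads all run the same routine; the force size is
# fixed only at startup, and Barrier() plays the role of the barrier macro
# separating parallel phases.
def run_force(nproc, work):
    barrier = threading.Barrier(nproc)
    partial = [0] * nproc
    total = []

    def member(me):
        # phase 1: each member handles a strided slice of the data
        partial[me] = sum(work[me::nproc])
        barrier.wait()
        # phase 2: after the barrier, one member combines the results
        if me == 0:
            total.append(sum(partial))

    threads = [threading.Thread(target=member, args=(i,)) for i in range(nproc)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]
```

The same routine works unchanged for any force size, which is the property the methodology emphasizes.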

  1. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert system. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  2. Parallel Programming in the Age of Ubiquitous Parallelism

    NASA Astrophysics Data System (ADS)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs
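
    The operator formulation's worklist view of algorithms can be sketched as follows. This is an illustrative serial stand-in, not the Galois API: the algorithm is an operator applied to "active" graph nodes, and independent active nodes are the source of amorphous data-parallelism.

```python
from collections import deque

# Worklist form of a label-correcting shortest-path computation: the
# operator relaxes a node's neighbors and activates any node it improves.
# Active nodes whose neighborhoods do not overlap could run in parallel.
def label_correct(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    work = deque([source])                 # the set of active elements
    while work:
        v = work.popleft()
        for w in graph[v]:                 # operator: relax v's neighbors
            if dist[v] + 1 < dist[w]:
                dist[w] = dist[v] + 1
                work.append(w)             # newly activated element
    return dist
```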

  3. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of a impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  4. Aerial photographic interpretation of lineaments and faults in late cenozoic deposits in the Eastern part of the Benton Range 1:100,000 quadrangle and the Goldfield, Last Chance Range, Beatty, and Death Valley Junction 1:100,000 quadrangles, Nevada and California

    SciTech Connect

    Reheis, M.C.; Noller, J.S.

    1991-09-01

    Lineaments and faults in Quaternary and late Tertiary deposits in the southern part of the Walker Lane are potentially active and form patterns that are anomalous with respect to the typical fault patterns in most of the Great Basin. Little work has been done to identify and characterize these faults, with the exception of those in the Death Valley-Furnace Creek (DVFCFZ) fault system and those in and near the Nevada Test Site. Four maps at a scale of 1:100,000 summarize the existing knowledge about these lineaments and faults based on extensive aerial-photo interpretation, limited field investigations, and published geologic maps. The lineaments and faults in all four maps can be divided geographically into two groups. The first group includes west- to north-trending lineaments and faults associated with the DVFCFZ and with the Pahrump fault zone in the Death Valley Junction quadrangle. The second group consists of north- to east-northeast-trending lineaments and faults in a broad area that lies east of the DVFCFZ and north of the Pahrump fault zone. Preliminary observations of the orientations and sense of slip of the lineaments and faults suggest that the least principal stress direction is west-east in the area of the first group and northwest-southeast in the area of the second group. The DVFCFZ appears to be part of a regional right-lateral strike-slip system. The DVFCFZ steps right, accompanied by normal faulting in an extensional zone, to the northern part of the Walker Lane at the northern end of Fish Lake Valley (Goldfield quadrangle), and appears to step left, accompanied by faulting and folding in a compressional zone, to the Pahrump fault zone in the area of Ash Meadows (Death Valley Junction quadrangle). 25 refs.

  5. Parallel execution model for Prolog

    SciTech Connect

    Fagin, B.S.

    1987-01-01

    One candidate language for parallel symbolic computing is Prolog. Numerous ways for executing Prolog in parallel have been proposed, but current efforts suffer from several deficiencies. Many cannot support fundamental types of concurrency in Prolog. Other models are of purely theoretical interest, ignoring implementation costs. Detailed simulation studies of execution models are scarce; at present little is known about the costs and benefits of executing Prolog in parallel. In this thesis, a new parallel execution model for Prolog is presented: the PPP model or Parallel Prolog Processor. The PPP supports AND-parallelism, OR-parallelism, and intelligent backtracking. An implementation of the PPP is described, through the extension of an existing Prolog abstract machine architecture. Several examples of PPP execution are presented, and compilation to the PPP abstract instruction set is discussed. The performance effects of this model are reported, based on a simulation of a large benchmark set. The implications of these results for parallel Prolog systems are discussed, and directions for future work are indicated.

  6. Reordering computations for parallel execution

    NASA Technical Reports Server (NTRS)

    Adams, L.

    1985-01-01

    The computations in the SOR algorithm are reordered to obtain parallelism at different levels while maintaining the same asymptotic rate of convergence as the rowwise ordering. A parallel program is written to illustrate these ideas, and actual machines for implementing this program are discussed.
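
    The best-known such reordering is the red/black (odd/even) ordering, sketched below for a 1D Poisson problem. This is an illustrative Python stand-in, not the program from the report: points of one color depend only on points of the other color, so every point in a half-sweep can be updated in parallel.

```python
# Red/black SOR for -u'' = f on (0,1) with zero boundary values.
# Within each color, all updates are independent and could run in parallel.
def sor_redblack(f, h, omega=1.5, sweeps=200):
    n = len(f)
    u = [0.0] * (n + 2)           # interior unknowns u[1..n], zero boundaries
    for _ in range(sweeps):
        for color in (1, 2):      # odd-indexed points, then even-indexed points
            for i in range(color, n + 1, 2):
                gs = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i - 1])
                u[i] += omega * (gs - u[i])   # SOR relaxation step
    return u[1:n + 1]
```

For f = 2 the exact solution is u(x) = x(1 - x), which the discrete scheme reproduces exactly, so convergence is easy to check.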

  7. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
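
    One requirement PMC meets is independent, reproducible random-number sequences per node. A common way to get that property is to derive each node's seed deterministically from a global seed; the sketch below is illustrative (the mixing formula and function names are assumptions, not PMC's actual generator).

```python
import random

# Derive a per-node stream from a global seed so a rerun with the same
# global seed reproduces every node's sequence regardless of timing.
def node_stream(global_seed, node_id):
    return random.Random(global_seed * 1_000_003 + node_id)

# Toy Monte Carlo estimate of pi; the loop over nodes stands in for
# work that would run on separate MPP processors.
def estimate_pi(global_seed, nodes, samples_per_node):
    hits = 0
    for node in range(nodes):
        rng = node_stream(global_seed, node)
        for _ in range(samples_per_node):
            x, y = rng.random(), rng.random()
            hits += x * x + y * y < 1.0
    return 4.0 * hits / (nodes * samples_per_node)
```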

  8. Dextral Shear on the Olympic-Wallowa Lineament, Washington--Evidence from High-Resolution Aeromagnetic Anomalies and Implications for Cascadia Seismic Hazards

    NASA Astrophysics Data System (ADS)

    Blakely, R. J.; Sherrod, B. L.; Weaver, C. S.; Wells, R. E.

    2013-12-01

    The Olympic-Wallowa physiographic lineament (OWL) extends northwestward ~500 km from northeastern Oregon to Vancouver Island, British Columbia. The tectonic relevance of the OWL, particularly the degree to which horizontal shear has contributed to its evolution, is an important element in assessing kinematic connections between the Cascadia backarc and forearc, and the consequent seismic hazard it poses to the region. Past workers have come to rather different conclusions, some suggesting sinistral, others suggesting dextral, and still others suggesting no horizontal displacement on the OWL. North-northwest-striking dikes of the 8.5 Ma Ice Harbor Member of the Columbia River Basalt Group are offset and disrupted by the Wallula fault zone (WFZ), a 45-km-long segment of the OWL southeast of Kennewick. Although mostly concealed by young deposits, Ice Harbor dikes are clearly delineated in high-resolution aeromagnetic anomalies as near-vertical intrusions, affording an opportunity to estimate the sense and amount of offset along this part of the OWL. Aided by various derivative products calculated from magnetic anomalies, we interpret five piercing points from single-dike anomalies intersecting the Wallula fault (the northernmost strand of the WFZ), together indicating an average of 1.72 km of right-lateral offset on this single strand. Right-lateral offset across the entire WFZ is 6.9 km, an average rate of 0.8 mm/y since 8.5 Ma. We cannot rule out the possibility that tectonic offsets are only apparent because the injection process itself was offset during emplacement. However, consistent right-lateral offsets are observed in stream drainages along the OWL, supporting our magnetic anomaly interpretations. Aerial photography and airborne LiDAR surveys show seven offset streams along a fault scarp on the northeastern slope of Rattlesnake Mountain, located along the OWL about 70 km northwest of the WFZ. 
Incised in Pleistocene deposits, these stream offsets average about

  9. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  10. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
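
    Why contingency tables scale worse than descriptive statistics can be seen in a toy sketch: each processor tabulates its own slice, but the merged table grows with the number of distinct value pairs, so the reduction step is not constant-size. This Python stand-in is illustrative only (`partial_table` and `merge_tables` are not the Titan C++ API).

```python
from collections import Counter

# Each processor counts the (x, y) pairs in its slice of the observations.
def partial_table(pairs):
    return Counter(pairs)

# The global contingency table is the elementwise sum of the partial tables;
# its size, and hence the merge cost, grows with the number of distinct pairs.
def merge_tables(tables):
    total = Counter()
    for t in tables:
        total.update(t)
    return total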

  11. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  12. Parallel NPARC: Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.
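
    The reblocking result reflects a general property of block-level parallelism: the parallel time is bounded below by the most heavily loaded worker. A greedy largest-first assignment, as a master process might use, can be sketched as follows (an illustrative model, not NPARC's actual scheduler; `assign_blocks` and `speedup` are assumed names).

```python
import heapq

# Assign grid blocks (by cell count) to workers, largest block first,
# always giving the next block to the currently least-loaded worker.
def assign_blocks(block_sizes, nworkers):
    loads = [(0, w) for w in range(nworkers)]     # (cells, worker) min-heap
    heapq.heapify(loads)
    assignment = {w: [] for w in range(nworkers)}
    for size in sorted(block_sizes, reverse=True):
        cells, w = heapq.heappop(loads)
        assignment[w].append(size)
        heapq.heappush(loads, (cells + size, w))
    return assignment

# Ideal speedup under this assignment: total work over the maximum load.
def speedup(block_sizes, nworkers):
    assignment = assign_blocks(block_sizes, nworkers)
    return sum(block_sizes) / max(sum(b) for b in assignment.values())
```

A few large blocks cap the achievable speedup regardless of worker count, which is why reblocking a data set can raise peak speedup as reported above.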

  13. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.
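
    The parallel-prefix technique mentioned above can be sketched with the classic Hillis-Steele scan: log2(n) rounds, in each of which every position could be updated in parallel. The Python below serializes the rounds but preserves the parallel dependence pattern (an illustration of the technique, not the compiler's code).

```python
# Inclusive prefix sum in ceil(log2(n)) rounds; within a round, all
# positions are independent and could be computed in parallel.
def prefix_sum(xs):
    xs = list(xs)
    shift = 1
    while shift < len(xs):
        xs = [xs[i] + (xs[i - shift] if i >= shift else 0)
              for i in range(len(xs))]
        shift *= 2
    return xs
```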

  14. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  15. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
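
    The distribution idea behind a barrel-sort style algorithm can be sketched serially: split the key range into one contiguous "barrel" per processor, route every key to its barrel in a single communication phase, then sort each barrel locally. The Python below is an illustration of that structure, not the paper's exact algorithm.

```python
# Keys in [0, key_max] are routed to nproc contiguous barrels; because the
# barrels cover disjoint, ordered key ranges, concatenating the locally
# sorted barrels yields the globally sorted output.
def barrel_sort(keys, nproc, key_max):
    width = (key_max + nproc) // nproc            # barrel width covering key_max
    barrels = [[] for _ in range(nproc)]
    for k in keys:                                # routing ("message") phase
        barrels[min(k // width, nproc - 1)].append(k)
    out = []
    for b in barrels:                             # local sorts, then concatenate
        out.extend(sorted(b))
    return out
```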

  16. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
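
    The rsync-flavored idea of the patent, comparing per-block checksums against a template so only changed blocks are stored, can be sketched as follows. The block size, hash choice, and function names here are illustrative assumptions, not the patented protocol.

```python
import hashlib

BLOCK = 4  # tiny block size, for illustration only

# Checksum each fixed-size block of a byte string.
def block_sums(data):
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

# Keep only the blocks whose checksum differs from the template's.
def delta_checkpoint(template, current):
    tsums = block_sums(template)
    return {i: current[i * BLOCK:(i + 1) * BLOCK]
            for i, s in enumerate(block_sums(current))
            if i >= len(tsums) or s != tsums[i]}

# Rebuild the checkpoint from the template plus the stored delta blocks.
def restore(template, delta):
    blocks = [template[i:i + BLOCK] for i in range(0, len(template), BLOCK)]
    for i, b in delta.items():
        while len(blocks) <= i:
            blocks.append(b"")
        blocks[i] = b
    return b"".join(blocks)
```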

  17. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  18. Multigrid on massively parallel architectures

    SciTech Connect

    Falgout, R D; Jones, J E

    1999-09-17

    The scalable implementation of multigrid methods for machines with several thousands of processors is investigated. Parallel performance models are presented for three different structured-grid multigrid algorithms, and a description is given of how these models can be used to guide implementation. Potential pitfalls are illustrated when moving from moderate-sized parallelism to large-scale parallelism, and results are given from existing multigrid codes to support the discussion. Finally, the use of mixed programming models is investigated for multigrid codes on clusters of SMPs.
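
    The structured-grid multigrid cycle being modeled can be sketched for a 1D Poisson problem: pre-smooth, restrict the residual to the coarse grid, solve there recursively, interpolate the correction back, post-smooth. This serial Python sketch is illustrative only; the parallel codes distribute each grid level across processors.

```python
# Gauss-Seidel smoothing for -u'' = f with mesh width h (in place).
def smooth(u, f, h, sweeps=3):
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])

# One V-cycle on n = 2^k - 1 interior points; zero boundary values.
def vcycle(u, f, h):
    n = len(u) - 2
    if n == 1:
        u[1] = 0.5 * h * h * f[1]             # direct solve on coarsest grid
        return
    smooth(u, f, h)                           # pre-smoothing
    r = [0.0] * (n + 2)
    for i in range(1, n + 1):                 # residual r = f - A u
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    nc = (n - 1) // 2
    rc = [0.0] * (nc + 2)
    for i in range(1, nc + 1):                # full-weighting restriction
        rc[i] = 0.25 * (r[2 * i - 1] + 2 * r[2 * i] + r[2 * i + 1])
    ec = [0.0] * (nc + 2)
    vcycle(ec, rc, 2 * h)                     # coarse-grid correction
    for i in range(1, nc + 1):                # linear interpolation of correction
        u[2 * i] += ec[i]
        u[2 * i - 1] += 0.5 * ec[i]
        u[2 * i + 1] += 0.5 * ec[i]
    smooth(u, f, h)                           # post-smoothing
```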

  19. Parallel inverse iteration with reorthogonalization

    SciTech Connect

    Fann, G.I.; Littlefield, R.J.

    1993-03-01

    A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present results for residual and orthogonality tests, plus timings on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.
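
    The MGS reorthogonalization step at the heart of the method can be sketched in pure Python: each iterate is orthogonalized against the previously processed ones, one vector at a time, then normalized. This is an illustration of the kernel, not the DSTEIN replacement itself.

```python
# Dot product of two vectors given as lists.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Modified Gram-Schmidt: orthonormalize a list of linearly independent
# vectors by subtracting, in sequence, each prior basis component.
def mgs(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            c = dot(w, q)                  # component of w along q
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis
```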

  1. Hydrothermal alteration related to a deep mantle source controlled by a Cambrian intracontinental strike-slip fault: Evidence for the Meruoca felsic intrusion associated with the Transbraziliano Lineament, Northeastern Brazil

    NASA Astrophysics Data System (ADS)

    Santos, Roberto Ventura; Oliveira, Claudinei Gouveia de; Parente, Clóvis Vaz; Garcia, Maria da Glória Motta; Dantas, Elton Luis

    2013-04-01

    One of the most prominent geological structures in Borborema Province, northeast Brazil, is the Transbraziliano Lineament that crosscuts most of the South American Platform and was active at least until the Devonian. This continental structure is responsible for the formation of rift and pull-apart basins in Northeastern Brazil, most of which were filled with volcanic and continental sedimentary rocks (Parente et al., 2004). In the region of Sobral, Ceará State, this same continental structure controlled the intrusion of the Meruoca pluton and the formation of the Jaibaras Basin, which is bounded by strike-slip shear zones. Hydrothermal alterations seem to have been pervasive in Meruoca, as indicated by disturbances in both the Rb-Sr and U-Pb systems (Sial et al., 1981; Fetter, 1999) and by the large dispersion of anisotropic magnetic susceptibility (AMS) (Archanjo et al., 2009). In this paper, we address the origin of the hydrothermal fluids that affected the borders of the Meruoca batholith and their relationship with the activity of the Transbraziliano Lineament. These fluids were responsible for carbonate veins and Fe-Cu mineral concentrations that are commonly found associated with hydrothermally altered breccias. The carbon and oxygen isotope composition of these carbonate veins suggest that they may be related to CO2-bearing mantle-derived fluids that were channelized by the Transbraziliano Lineament. Based on oxygen isotopes, we argue that Fe-Cu concentrations may have formed in isotope equilibrium with the rhyolitic rocks at temperatures between 500 and 560 °C. This scenario points to magmatism as the main process in the formation of these rocks. We also report a K-Ar age of 530 ± 12 Ma for muscovite associated with the last ductile event that affected the Sobral-Pedro II Shear Zone and a U-Pb age of 540.8 ± 5.1 Ma for the Meruoca pluton. We further suggest that this granite is a late-kinematic intrusion that is most likely associated with the Parapu

  2. Role of local to regional-scale collisions in the closure history of the Southern Neotethys, exemplified by tectonic development of the Kyrenia Range active margin/collisional lineament, N Cyprus

    NASA Astrophysics Data System (ADS)

    Robertson, Alastair; Kinnaird, Tim; McCay, Gillian; Palamakumbura, Romesh; Chen, Guohui

    2016-04-01

    Active margin processes including subduction, accretion, arc magmatism and back-arc extension play a key role in the diachronous, and still incomplete closure of the S Neotethys. The S Neotethys rifted along the present-day Africa-Eurasia continental margin during the Late Triassic and, after sea-floor spreading, began to close related to northward subduction during the Late Cretaceous. The northern, active continental margin of the S Neotethys was bordered by several of the originally rifted continental fragments (e.g. Taurides). The present-day convergent lineament ranges from subaqueous (e.g. Mediterranean Ridge), to subaerial (e.g. SE Turkey). The active margin development is partially obscured by microcontinent-continent collision and post-collisional strike-slip deformation (e.g. Tauride-Arabian suture). However, the Kyrenia Range, N Cyprus provides an outstanding record of convergent margin to early stage collisional processes. It owes its existence to strong localised uplift during the Pleistocene, which probably resulted from the collision of a continental promontory of N Africa (Eratosthenes Seamount) with the long-lived S Neotethyan active margin to the north. A multi-stage convergence history is revealed, mainly from a combination of field structural, sedimentological and igneous geochemical studies. Initial Late Cretaceous convergence resulted in greenschist facies burial metamorphism that is likely to have been related to the collision, then rapid exhumation, of a continental fragment (stage 1). During the latest Cretaceous-Palaeogene, the Kyrenia lineament was characterised by subduction-influenced magmatism and syn-tectonic sediment deposition. Early to Mid-Eocene, S-directed thrusting and folding (stage 2) is likely to have been influenced by the suturing of the Izmir-Ankara-Erzincan ocean to the north ('N Neotethys'). Convergence continued during the Neogene, dominated by deep-water terrigenous gravity-flow accumulation in a foredeep setting

  3. Appendix E: Parallel Pascal development system

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The Parallel Pascal Development System enables Parallel Pascal programs to be developed and tested on a conventional computer. It consists of several system programs, including a Parallel Pascal to standard Pascal translator, and a library of Parallel Pascal subprograms. The library includes subprograms for using Parallel Pascal on a parallel system with a fixed degree of parallelism, such as the Massively Parallel Processor, to conveniently manipulate arrays which have larger dimensions than the hardware. Programs can be conveniently tested with small arrays on the conventional computer before attempting to run on a parallel system.
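    The idea of operating on arrays larger than the hardware's fixed degree of parallelism can be illustrated with a simple blocking loop (a generic Python sketch of the concept, not the actual Parallel Pascal library code; `blocked_map` and its parameters are our names):

    ```python
    def blocked_map(f, data, width):
        """Apply f elementwise to an array longer than the machine's fixed
        parallel width by sweeping over it in width-sized slices; each
        slice stands in for one step on all processing elements at once."""
        out = []
        for start in range(0, len(data), width):
            # one "parallel" step over a hardware-sized slice
            out.extend(f(x) for x in data[start:start + width])
        return out
    ```

    A 16384-element array on a machine with 128×128 processing elements would thus be processed in one slice, while larger arrays take multiple sweeps.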

  4. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  5. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  6. Predicting performance of parallel computations

    NASA Technical Reports Server (NTRS)

    Mak, Victor W.; Lundstrom, Stephen F.

    1990-01-01

    An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method is validated against both detailed simulation and actual execution on a commercial multiprocessor. Using 100 test cases, the average error of the prediction when compared to simulation statistics is 1.7 percent, with a standard deviation of 1.5 percent; the maximum error is about 10 percent.
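    The series-parallel task-graph model lends itself to a compact recursive evaluation. The sketch below assumes deterministic task times, whereas the paper couples the task graph with a queuing-network model of resource contention; the node encoding is ours:

    ```python
    def exec_time(node):
        """Execution time of a series-parallel task graph with
        deterministic task times: series branches add, parallel
        branches complete when the slowest finishes.
        A node is ('task', t), ('series', [children]),
        or ('parallel', [children])."""
        kind, payload = node
        if kind == 'task':
            return payload
        times = [exec_time(child) for child in payload]
        return sum(times) if kind == 'series' else max(times)
    ```

    The queuing-network step in the actual method replaces the fixed task times with expected service times that reflect contention at each service center.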

  7. Parallel hierarchical method in networks

    NASA Astrophysics Data System (ADS)

    Malinochka, Olha; Tymchenko, Leonid

    2007-09-01

    This method of parallel-hierarchical Q-transformation offers new approach to the creation of computing medium - of parallel -hierarchical (PH) networks, being investigated in the form of model of neurolike scheme of data processing [1-5]. The approach has a number of advantages as compared with other methods of formation of neurolike media (for example, already known methods of formation of artificial neural networks). The main advantage of the approach is the usage of multilevel parallel interaction dynamics of information signals at different hierarchy levels of computer networks, that enables to use such known natural features of computations organization as: topographic nature of mapping, simultaneity (parallelism) of signals operation, inlaid cortex, structure, rough hierarchy of the cortex, spatially correlated in time mechanism of perception and training [5].

  8. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
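    The combination rules the demonstration teaches reduce to two one-line formulas (a generic sketch, not from the article; function names are ours):

    ```python
    def series(resistances):
        """Equivalent resistance of resistors in series: values add."""
        return sum(resistances)

    def parallel(resistances):
        """Equivalent resistance of resistors in parallel:
        reciprocals add, so the result is below the smallest value."""
        return 1.0 / sum(1.0 / r for r in resistances)
    ```

    The straw analogy maps directly: stacking straws end to end (series) increases flow resistance, while bundling them side by side (parallel) decreases it.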

  9. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)
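    The force the demonstration makes visible follows the standard result for two long parallel conductors, F/L = μ₀I₁I₂/(2πd) (textbook physics, not taken from the article; names are ours):

    ```python
    import math

    MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

    def force_per_length(i1, i2, d):
        """Force per unit length (N/m) between two long parallel wires
        carrying currents i1 and i2 (A), separated by distance d (m).
        Currents in the same direction attract; opposite repel."""
        return MU_0 * i1 * i2 / (2 * math.pi * d)
    ```

    For two 1 A currents 1 m apart this gives exactly 2×10⁻⁷ N/m, which is why the demonstration needs a high-current supply to make the deflection visible.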

  10. Parallel computation using limited resources

    SciTech Connect

    Sugla, B.

    1985-01-01

    This thesis addresses itself to the task of designing and analyzing parallel algorithms when the resources of processors, communication, and time are limited. The two parts of this thesis deal with multiprocessor systems and VLSI - the two important parallel processing environments that are prevalent today. In the first part a time-processor-communication tradeoff analysis is conducted for two kinds of problems - N input, 1 output, and N input, N output computations. In the class of problems of the second kind, the problem of prefix computation, an important problem due to the number of naturally occurring computations it can model, is studied. Finally, a general methodology is given for design of parallel algorithms that can be used to optimize a given design to a wide set of architectural variations. The second part of the thesis considers the design of parallel algorithms for the VLSI model of computation when the resource of time is severely restricted.
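    The prefix computation mentioned above is the classic parallel scan. The sketch below simulates the doubling scheme sequentially: log₂(n) rounds, each combining elements at distance 2ᵏ (an illustrative sketch, not the thesis's tradeoff analysis):

    ```python
    def prefix_sums(xs):
        """Inclusive prefix sums via the doubling scheme used by parallel
        scan algorithms: in round k every position i >= 2**k adds the
        value 2**k places to its left; here the rounds run sequentially."""
        out = list(xs)
        k = 1
        while k < len(out):
            prev = list(out)          # snapshot = simultaneous reads
            for i in range(k, len(out)):
                out[i] = prev[i] + prev[i - k]
            k *= 2
        return out
    ```

    With n processors each round is one parallel step, giving O(log n) time; with fewer processors the rounds are tiled, which is exactly where the time-processor tradeoffs studied in the thesis arise.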

  11. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P: 1 ≤ P ≤ n/log n) deterministically as well as randomly on the weakest version of parallel random access machines in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.
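    For reference, the serial baseline being parallelized is ordinary prefix-code decoding: because no codeword is a prefix of another, the first buffer match is always the intended symbol (a minimal sketch, not the paper's parallel algorithm; names are ours):

    ```python
    def decode_prefix(bits, code):
        """Decode a prefix-coded bit string serially by scanning bits and
        emitting a symbol whenever the buffer matches a codeword; the
        prefix property guarantees the match is unambiguous."""
        inverse = {word: symbol for symbol, word in code.items()}
        out, buf = [], ''
        for b in bits:
            buf += b
            if buf in inverse:
                out.append(inverse[buf])
                buf = ''
        if buf:
            raise ValueError('trailing bits do not form a codeword')
        return ''.join(out)
    ```

    The difficulty for parallel decoding is that codeword boundaries are unknown until the scan reaches them; the paper resolves this by simulating the decoder as a finite-state automaton from every block boundary and combining the per-block results with a prefix-sums computation.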

  12. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  13. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems. The algorithms that may lead to effective parallelization of them were investigated. Both the forward and backward chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has been researched. Two experimental vehicles were developed to facilitate this research. They are Backpac, a parallel backward chained rule-based reasoning system and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying the future function to a function causes the function to become a task parallel to the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines. The Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and the Concert with partitions of 32 or less processors. Additionally, experiments have been run with a stripped down version of EMYCIN.

  14. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. ©2001 The Willi Hennig Society.
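    The speed-up and efficiency figures discussed above follow the standard definitions S = T₁/Tₚ and E = S/p (textbook metrics, not taken from the paper; names are ours):

    ```python
    def speedup(t_serial, t_parallel):
        """Parallel speedup S = T1 / Tp: how many times faster the
        parallel run completes than the serial baseline."""
        return t_serial / t_parallel

    def efficiency(t_serial, t_parallel, p):
        """Parallel efficiency E = S / p: fraction of ideal linear
        scaling achieved on p processors (1.0 is perfect)."""
        return speedup(t_serial, t_parallel) / p
    ```

    The branch-swapping result above is a textbook efficiency plateau: speed-up stops growing past 16 slaves, so adding processors beyond that point only drives E toward zero.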

  15. Efficiency of parallel direct optimization.

    PubMed

    Janies, D A; Wheeler, W C

    2001-03-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. PMID:12240679

  16. The NICMOS Parallel Observing Program

    NASA Astrophysics Data System (ADS)

    McCarthy, Patrick

    2002-07-01

    We propose to manage the default set of pure parallels with NICMOS. Our experience with both our GO NICMOS parallel program and the public parallel NICMOS programs in cycle 7 prepared us to make optimal use of the parallel opportunities. The NICMOS G141 grism remains the most powerful survey tool for Hα emission-line galaxies at cosmologically interesting redshifts. It is particularly well suited to addressing two key uncertainties regarding the global history of star formation: the peak rate of star formation in the relatively unexplored but critical 1 ≤ z ≤ 2 epoch, and the amount of star formation missing from UV continuum-based estimates due to high extinction. Our proposed deep G141 exposures will increase the sample of known Hα emission-line objects at z ~ 1.3 by roughly an order of magnitude. We will also obtain a mix of F110W and F160W images along random sight-lines to examine the space density and morphologies of the reddest galaxies. The nature of the extremely red galaxies remains unclear and our program of imaging and grism spectroscopy provides unique information regarding both the incidence of obscured starbursts and the build-up of stellar mass at intermediate redshifts. In addition to carrying out the parallel program we will populate a public database with calibrated spectra and images, and provide limited ground-based optical and near-IR data for the deepest parallel fields.

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  18. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  19. The economics of parallel trade.

    PubMed

    Danzon, P M

    1998-03-01

    The potential for parallel trade in the European Union (EU) has grown with the accession of low price countries and the harmonisation of registration requirements. Parallel trade implies a conflict between the principle of autonomy of member states to set their own pharmaceutical prices, the principle of free trade and the industrial policy goal of promoting innovative research and development (R&D). Parallel trade in pharmaceuticals does not yield the normal efficiency gains from trade because countries achieve low pharmaceutical prices by aggressive regulation, not through superior efficiency. In fact, parallel trade reduces economic welfare by undermining price differentials between markets. Pharmaceutical R&D is a global joint cost of serving all consumers worldwide; it accounts for roughly 30% of total costs. Optimal (welfare maximising) pricing to cover joint costs (Ramsey pricing) requires setting different prices in different markets, based on inverse demand elasticities. By contrast, parallel trade and regulation based on international price comparisons tend to force price convergence across markets. In response, manufacturers attempt to set a uniform 'euro' price. The primary losers from 'euro' pricing will be consumers in low income countries who will face higher prices or loss of access to new drugs. In the long run, even higher income countries are likely to be worse off with uniform prices, because fewer drugs will be developed. One policy option to preserve price differentials is to exempt on-patent products from parallel trade. An alternative is confidential contracting between individual manufacturers and governments to provide country-specific ex post discounts from the single 'euro' wholesale price, similar to rebates used by managed care in the US. This would preserve differentials in transactions prices even if parallel trade forces convergence of wholesale prices. PMID:10178655

  20. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions insuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  1. Bounded Parallel-Batch Scheduling on Unrelated Parallel Machines

    NASA Astrophysics Data System (ADS)

    Miao, Cuixia; Zhang, Yuzhong; Wang, Chengfei

    In this paper, we consider the bounded parallel-batch scheduling problem on unrelated parallel machines. Problems R_m|B|F are NP-hard for any objective function F. For this reason, we discuss the special case with p_ij = p_i for i = 1, 2, …, m, j = 1, 2, …, n. We give optimal algorithms for the general scheduling problem to minimize total weighted completion time, makespan and the number of tardy jobs. And we design pseudo-polynomial time algorithms for the case with rejection penalty to minimize the makespan and the total weighted completion time plus the total penalty of the rejected jobs, respectively.
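    Under the special case p_ij = p_i, every full batch takes time p_i on machine i regardless of which jobs it holds, so minimizing makespan reduces to distributing ⌈n/B⌉ batches over machines with different per-batch speeds. One clean way to solve that (an illustrative sketch with integer p_i, not the paper's algorithm; names are ours) is a binary search on the makespan:

    ```python
    import math

    def min_makespan(n_jobs, batch_cap, machine_times):
        """Minimum makespan when p_ij = p_i: group the jobs into
        ceil(n/B) batches, each taking time p_i on machine i, and
        binary-search the smallest integer T such that the machines can
        jointly finish all batches by T (machine i finishes floor(T/p_i))."""
        batches = math.ceil(n_jobs / batch_cap)

        def feasible(t):
            return sum(t // p for p in machine_times) >= batches

        lo, hi = 0, batches * min(machine_times)
        while lo < hi:
            mid = (lo + hi) // 2
            if feasible(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo
    ```

    For example, 10 jobs with batch capacity 2 give 5 batches; on machines with p = (1, 2) the fast machine runs 4 batches and the slow one runs 1, for a makespan of 4.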

  2. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
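    The sequential baseline being parallelized is an event-driven TPN simulation: enabled transitions consume their input tokens immediately and deposit output tokens after their firing delay. A deliberately minimal sketch (single-token arcs only, our own encoding; real TPN semantics are richer):

    ```python
    import heapq

    def simulate_tpn(marking, transitions, horizon):
        """Minimal event-driven Timed Petri Net simulation. Each
        transition is (inputs, outputs, delay): it fires when every
        input place holds a token, consuming them now and depositing
        output tokens after `delay` time units."""
        clock, events = 0.0, []
        while clock <= horizon:
            fired = True
            while fired:                    # fire all currently enabled transitions
                fired = False
                for ins, outs, delay in transitions:
                    if all(marking.get(p, 0) > 0 for p in ins):
                        for p in ins:
                            marking[p] -= 1
                        heapq.heappush(events, (clock + delay, tuple(outs)))
                        fired = True
            if not events:
                break
            clock, outs = heapq.heappop(events)  # advance to next token arrival
            for p in outs:
                marking[p] = marking.get(p, 0) + 1
        return marking, clock
    ```

    The parallelization challenge studied in the grant is precisely the event queue above: distributing it across processors while preserving the timestamp order of causally related firings.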

  3. PARAVT: Parallel Voronoi Tessellation code

    NASA Astrophysics Data System (ADS)

    Gonzalez, Roberto E.

    2016-01-01

    We present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, in which VT densities and neighbor lists are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementations are available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. The domain decomposition takes consistent boundary computation between tasks into account and supports periodic conditions. In addition, the code computes neighbor lists, the Voronoi density and the Voronoi cell volume for each particle, and can compute the density on a regular grid.
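    The defining property of a Voronoi tessellation is simply nearest-generator assignment. A brute-force sketch makes that concrete (PARAVT itself builds the cells geometrically with Qhull under MPI; this O(N·M) loop is only an illustration, and the names are ours):

    ```python
    def voronoi_labels(points, generators):
        """Assign each point to the Voronoi cell of its nearest
        generator: cell i is the region closer to generators[i]
        than to any other generator (brute force, any dimension)."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return [min(range(len(generators)),
                    key=lambda i: dist2(p, generators[i]))
                for p in points]
    ```

    Estimating a Voronoi density then amounts to dividing each particle's mass by its cell volume, which is what makes the tessellation useful as an adaptive density estimator in N-body data.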

  4. Massively parallel MRI detector arrays

    NASA Astrophysics Data System (ADS)

    Keil, Boris; Wald, Lawrence L.

    2013-04-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas via reception, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays.
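    The g-factor discussed above enters the standard parallel-imaging SNR relation SNR_acc = SNR_full / (g·√R), where R is the acceleration factor (a textbook relation, not code from the article; names are ours):

    ```python
    import math

    def accelerated_snr(snr_full, g_factor, accel):
        """SNR of a parallel-imaging acquisition accelerated by factor R:
        SNR_acc = SNR_full / (g * sqrt(R)). The geometry factor g >= 1
        measures noise amplification from unmixing the coil signals;
        g = 1 is the ideal ("ultimate") case."""
        return snr_full / (g_factor * math.sqrt(accel))
    ```

    This is why higher channel counts matter: more, smaller coil elements keep g close to 1 at a given acceleration, limiting the SNR penalty to the unavoidable √R factor.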

  5. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  6. Massively parallel MRI detector arrays.

    PubMed

    Keil, Boris; Wald, Lawrence L

    2013-04-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas via reception, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called "ultimate" SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  7. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  8. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.
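    The core operation such a synchronizer performs is correlating the incoming bit stream against a known sync marker, usually with a programmable error tolerance; hardware does every offset in parallel, but a serial sketch shows the logic (our own illustration, not the patented design):

    ```python
    def find_sync(bits, marker, max_errors=0):
        """Return the offsets where the sync marker matches the bit
        stream with at most max_errors mismatched bits: the software
        equivalent of the correlation a frame synchronizer computes
        at every bit offset simultaneously."""
        hits = []
        for i in range(len(bits) - len(marker) + 1):
            errors = sum(b != m for b, m in zip(bits[i:i + len(marker)], marker))
            if errors <= max_errors:
                hits.append(i)
        return hits
    ```

    Allowing a few bit errors (the tolerance the window subsystem manages) keeps frame lock through channel noise at the cost of a slightly higher false-detection probability.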

  9. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

  10. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  11. Hybrid parallel programming with MPI and Unified Parallel C.

    SciTech Connect

    Dinan, J.; Balaji, P.; Lusk, E.; Sadayappan, P.; Thakur, R.; Mathematics and Computer Science; The Ohio State Univ.

    2010-01-01

    The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity because of their ability to provide a shared global address space that spans the memories of multiple compute nodes. However, taking advantage of UPC can require a large recoding effort for existing parallel applications. In this paper, we explore a new hybrid parallel programming model that combines MPI and UPC. This model allows MPI programmers incremental access to a greater amount of memory, enabling memory-constrained MPI codes to process larger data sets. In addition, the hybrid model offers UPC programmers an opportunity to create static UPC groups that are connected over MPI. As we demonstrate, the use of such groups can significantly improve the scalability of locality-constrained UPC codes. This paper presents a detailed description of the hybrid model and demonstrates its effectiveness in two applications: a random access benchmark and the Barnes-Hut cosmological simulation. Experimental results indicate that the hybrid model can greatly enhance performance; using hybrid UPC groups that span two cluster nodes, RA performance increases by a factor of 1.33 and using groups that span four cluster nodes, Barnes-Hut experiences a twofold speedup at the expense of a 2% increase in code size.

  12. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
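The mapping problem above, in its simplest chain form, asks how to assign m pipeline modules to n processors as contiguous segments so the bottleneck (maximum per-processor load) is minimized. The sketch below is not Nicol's algorithm, just a compact binary-search probe that computes the same optimum for small inputs, to make the problem concrete:

```python
# Chain-mapping illustration: partition module loads into at most n
# contiguous segments minimizing the largest segment sum (the bottleneck).
# A binary search over candidate bottleneck values, with a greedy
# feasibility probe, finds the optimum.

def min_bottleneck(loads, n):
    def fits(cap):
        # Greedily pack modules left to right; count segments needed.
        segments, cur = 1, 0
        for w in loads:
            if w > cap:
                return False
            if cur + w > cap:
                segments += 1
                cur = w
            else:
                cur += w
        return segments <= n

    lo, hi = max(loads), sum(loads)
    while lo < hi:
        mid = (lo + hi) // 2
        if fits(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_bottleneck([2, 3, 7, 1, 4], 3))   # -> 7, via segments [2,3] [7] [1,4]
```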

  13. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-12-01

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory.

  14. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.
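The sleep/wake cycle these two records describe can be reduced to a toy round-robin loop. The sketch below is hypothetical (the TC2000 scheduler also weighted quanta by priority and fair-share accounting, which is omitted here); it only shows gangs taking turns owning the machine for one time quantum each:

```python
# Toy gang scheduler: each job is a "gang" that must run as a unit.
# The scheduler wakes one gang per time quantum and puts the rest to
# sleep, cycling round-robin until every gang finishes its work.

from collections import deque

def gang_schedule(gangs, quantum):
    """gangs: {name: remaining_work}; returns the order gangs were run."""
    queue = deque(gangs.items())
    trace = []
    while queue:
        name, remaining = queue.popleft()    # wake this gang
        trace.append(name)
        remaining -= quantum                 # it runs for one time quantum
        if remaining > 0:
            queue.append((name, remaining))  # back to sleep at the tail
    return trace

print(gang_schedule({"A": 3, "B": 1, "C": 2}, 1))
# -> ['A', 'B', 'C', 'A', 'C', 'A']
```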

  15. ITER LHe Plants Parallel Operation

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Bonneton, M.; Chalifour, M.; Chang, H.-S.; Chodimella, C.; Monneret, E.; Vincent, G.; Flavien, G.; Fabre, Y.; Grillot, D.

The ITER Cryogenic System includes three identical liquid helium (LHe) plants, with a total average cooling capacity equivalent to 75 kW at 4.5 K. The LHe plants provide the 4.5 K cooling power to the magnets and cryopumps. They are designed to operate in parallel and to handle heavy load variations. In this proceeding we will describe the present status of the ITER LHe plants with emphasis on i) the project schedule, ii) the plants' characteristics/layout and iii) the basic principles and control strategies for a stable operation of the three LHe plants in parallel.

  16. Medipix2 parallel readout system

    NASA Astrophysics Data System (ADS)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  17. Parallelization of the SIR code

    NASA Astrophysics Data System (ADS)

    Thonhofer, S.; Bellot Rubio, L. R.; Utz, D.; Jurčak, J.; Hanslmeier, A.; Piantschitsch, I.; Pauritsch, J.; Lemmerer, B.; Guttenbrunner, S.

A high-resolution 3-dimensional model of the photospheric magnetic field is essential for the investigation of small-scale solar magnetic phenomena. The SIR code is an advanced Stokes-inversion code that deduces physical quantities, e.g. magnetic field vector, temperature, and LOS velocity, from spectropolarimetric data. We extended this code by the capability of directly using large data sets and inverting the pixels in parallel. Due to this parallelization it is now feasible to apply the code directly on extensive data sets. We also included the possibility of using different initial model atmospheres for the inversion, which enhances the quality of the results.

  18. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and co-ordinate the work.
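The password-checker example above really is trivial to parallelize once the dictionary is split into chunks. A minimal sketch, with assumed details (SHA-256 in place of crypt(3), a six-word dictionary in place of 25,000 words, and a thread pool standing in for a process pool or cluster):

```python
# Parallel dictionary check: split the word list into chunks, hand each
# chunk to a worker, and collect any hit. Swapping ThreadPoolExecutor for
# ProcessPoolExecutor (or MPI ranks) gives true CPU parallelism; the
# distribution structure is identical.

import hashlib
from concurrent.futures import ThreadPoolExecutor

def check_chunk(args):
    target_hash, words = args
    for w in words:
        if hashlib.sha256(w.encode()).hexdigest() == target_hash:
            return w
    return None

dictionary = ["alpha", "bravo", "charlie", "delta", "echo", "secret"]
target = hashlib.sha256(b"secret").hexdigest()

chunks = [dictionary[i::4] for i in range(4)]          # 4 interleaved chunks
with ThreadPoolExecutor(max_workers=4) as pool:
    hits = [h for h in pool.map(check_chunk, [(target, c) for c in chunks]) if h]

print(hits[0])   # -> secret
```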

  19. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  20. Coupled parallel waveguide semiconductor laser

    NASA Technical Reports Server (NTRS)

    Katz, J.; Kapon, E.; Lindsey, C.; Rav-Noy, Z.; Margalit, S.; Yariv, A.; Mukai, S.

    1984-01-01

    The operation of a new type of tunable laser, where the two separately controlled individual lasers are placed vertically in parallel, has been demonstrated. One of the cavities ('control' cavity) is operated below threshold and assists the longitudinal mode selection and tuning of the other laser. With a minor modification, the same device can operate as an independent two-wavelength laser source.

  1. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.

  2. Cluster-based parallel image processing toolkit

    NASA Astrophysics Data System (ADS)

    Squyres, Jeffery M.; Lumsdaine, Andrew; Stevenson, Robert L.

    1995-03-01

Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as network technologies continue to mature, workstation clusters are becoming a viable and economical parallel computing resource, so it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss our implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit uses a message-passing model of parallelism designed around the Message Passing Interface (MPI) standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster over a wide variety of image processing tasks. We also discuss load balancing and the potential for parallelizing portions of image processing tasks that seem to be inherently sequential, such as visualization and data I/O.
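The data-locality argument above comes down to a scatter/apply/gather pattern: rows of the image are distributed to workers, each applies the operator locally, and results are reassembled. A hedged sketch (the real Toolkit is MPI-based; this stand-in uses a thread pool and a toy brighten operator):

```python
# Scatter rows of an image to workers, apply a local pixel operator,
# gather the processed strips back into row order.

from concurrent.futures import ThreadPoolExecutor

def brighten_rows(rows, delta=50):
    """Local operator: clamp-add a constant to every pixel in these rows."""
    return [[min(255, p + delta) for p in row] for row in rows]

def parallel_apply(image, nworkers=4):
    strips = [image[i::nworkers] for i in range(nworkers)]   # scatter rows
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        done = list(pool.map(brighten_rows, strips))
    out = [None] * len(image)                                # gather, reorder
    for i, strip in enumerate(done):
        out[i::nworkers] = strip
    return out

image = [[0, 100, 200], [10, 110, 210], [20, 120, 220], [30, 130, 230]]
print(parallel_apply(image)[0])   # -> [50, 150, 250]
```

Point operators like this need no communication between workers; neighborhood operators (convolutions) would additionally require exchanging boundary rows, which is where the load-balancing discussion above comes in.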

  3. Mirror versus parallel bimanual reaching

    PubMed Central

    2013-01-01

    Background In spite of their importance to everyday function, tasks that require both hands to work together such as lifting and carrying large objects have not been well studied and the full potential of how new technology might facilitate recovery remains unknown. Methods To help identify the best modes for self-teleoperated bimanual training, we used an advanced haptic/graphic environment to compare several modes of practice. In a 2-by-2 study, we compared mirror vs. parallel reaching movements, and also compared veridical display to one that transforms the right hand’s cursor to the opposite side, reducing the area that the visual system has to monitor. Twenty healthy, right-handed subjects (5 in each group) practiced 200 movements. We hypothesized that parallel reaching movements would be the best performing, and attending to one visual area would reduce the task difficulty. Results The two-way comparison revealed that mirror movement times took an average 1.24 s longer to complete than parallel. Surprisingly, subjects’ movement times moving to one target (attending to one visual area) also took an average of 1.66 s longer than subjects moving to two targets. For both hands, there was also a significant interaction effect, revealing the lowest errors for parallel movements moving to two targets (p < 0.001). This was the only group that began and maintained low errors throughout training. Conclusion Combined with other evidence, these results suggest that the most intuitive reaching performance can be observed with parallel movements with a veridical display (moving to two separate targets). These results point to the expected levels of challenge for these bimanual training modes, which could be used to advise therapy choices in self-neurorehabilitation. PMID:23837908

  4. Low Mach number parallel and quasi-parallel shocks

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Quest, K. B.; Winske, D.

    1990-01-01

The properties of low-Mach-number parallel and quasi-parallel shocks are studied using the results of one-dimensional hybrid simulations. It is shown that both the structure and ion dissipation at the shocks differ considerably. In the parallel limit, the shock remains coupled to the piston and consists of large-amplitude magnetosonic-whistler waves in the upstream, through the shock and into the downstream region, where the waves eventually damp out. These waves are generated by an ion beam instability due to the interaction between the incident and piston-reflected ions. The excited waves decelerate the plasma sufficiently that it becomes stable far into the downstream. The increase in ion temperature along the shock normal in the downstream region is due to superposition of incident and piston-reflected ions. These two populations of ions remain distinct through the downstream region. While they are both gyrophase-bunched, their counterstreaming nature results in a 180-deg phase shift in their perpendicular velocities.

  5. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  6. Two Level Parallel Grammatical Evolution

    NASA Astrophysics Data System (ADS)

    Ošmera, Pavel

This paper describes a Two Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable length linear genome to govern the mapping of a Backus Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE) the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and the comparison with standard coding of GEs is presented. The new method is based on parallel grammatical evolution (PGE) with a backward processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods used are discussed and the architecture of their combination is described. An application is also discussed, and results on a real-world application are described.

  7. Template matching on parallel architectures

    SciTech Connect

    Sher

    1985-07-01

Many important problems in computer vision can be characterized as template-matching problems on edge images. Some examples are circle detection and line detection. Two techniques for template matching are the Hough transform and correlation. There are two algorithms for correlation: a shift-and-add-based technique and a Fourier-transform-based technique. The most efficient algorithm of these three varies depending on the size of the template and the structure of the image. On different parallel architectures, the choice of algorithms for a specific problem is different. This paper describes two parallel architectures, the WARP and the Butterfly, and explains why and how the criterion for choosing among the algorithms differs between the two machines.
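The shift-and-add correlation technique mentioned above is short enough to show directly. A toy version on a binary edge image (illustrative only; the paper's concern is how this inner loop maps onto the WARP and Butterfly architectures):

```python
# Shift-and-add template matching: slide the template over every position,
# accumulate the pointwise products, and report the best-scoring offset.

def correlate(image, template):
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = -1, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):          # shift ...
            score = sum(image[y + j][x + i] * template[j][i]
                        for j in range(th) for i in range(tw))   # ... and add
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
template = [[1, 1],
            [1, 1]]
print(correlate(image, template))   # -> (1, 1)
```

The cost is O(image_size x template_size), which is why the FFT-based variant wins for large templates, as the abstract notes.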

  8. Parallel supercomputing with commodity components

    SciTech Connect

    Warren, M.S.; Goda, M.P.; Becker, D.J.

    1997-09-01

We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  9. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used; and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.

  10. Parallel multiplex laser feedback interferometry

    SciTech Connect

    Zhang, Song; Tan, Yidong; Zhang, Shulian

    2013-12-15

    We present a parallel multiplex laser feedback interferometer based on spatial multiplexing which avoids the signal crosstalk in the former feedback interferometer. The interferometer outputs two close parallel laser beams, whose frequencies are shifted by two acousto-optic modulators by 2Ω simultaneously. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are simultaneously measured through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and accuracy of 7.8 nm within the range of 100 μm.

  11. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

We describe portable software to simulate universal quantum computers on massive parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as benchmark for testing high-end parallel computers.
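The core of a statevector simulator like the one above is small, even though memory grows as 2^n (the reason 36 qubits needs a terabyte). A minimal pure-Python sketch, without any of the parallel decomposition the paper's software uses:

```python
# Minimal statevector simulator: an n-qubit state is a vector of 2**n
# amplitudes; a single-qubit gate pairs up amplitudes that differ only
# in the target bit and mixes each pair.

import math

def apply_hadamard(state, target, n):
    h = 1 / math.sqrt(2)
    out = state[:]
    for i in range(2 ** n):
        if not (i >> target) & 1:          # i has target bit 0; j has it 1
            j = i | (1 << target)
            a, b = state[i], state[j]
            out[i], out[j] = h * (a + b), h * (a - b)
    return out

n = 2
state = [1.0, 0.0, 0.0, 0.0]               # |00>
state = apply_hadamard(state, 0, n)
state = apply_hadamard(state, 1, n)        # uniform superposition
print([round(a, 3) for a in state])        # -> [0.5, 0.5, 0.5, 0.5]
```

Distributing this across MPI ranks means splitting the amplitude vector, which is straightforward for the pairs that stay local and requires communication when the target bit crosses the partition boundary.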

  12. A parallel graph coloring heuristic

    SciTech Connect

Jones, M.T.; Plassmann, P.E.

    1993-05-01

The problem of computing good graph colorings arises in many diverse applications, such as in the estimation of sparse Jacobians and in the development of efficient, parallel iterative methods for solving sparse linear systems. This paper presents an asynchronous graph coloring heuristic well suited to distributed memory parallel computers. Experimental results obtained on an Intel iPSC/860 are presented, which demonstrate that, for graphs arising from finite element applications, the heuristic exhibits scalable performance and generates colorings usually within three or four colors of the best-known linear time sequential heuristics. For bounded degree graphs, it is shown that the expected running time of the heuristic under the PRAM computation model is bounded by O(log(n)/log log(n)). This bound is an improvement over the previously known best upper bound for the expected running time of a random heuristic for the graph coloring problem.
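The asynchronous heuristic above is in the Jones-Plassmann style: each vertex draws a random weight, and any uncolored vertex that outweighs all its uncolored neighbours may color itself immediately with the smallest color unused by its neighbours. Those decisions are independent, which is what makes the method parallel; the sketch below simulates the rounds sequentially:

```python
# Jones-Plassmann-style coloring sketch. Random weights break ties;
# in each round every local-maximum uncolored vertex takes the smallest
# color absent from its neighbourhood.

import random

def jp_color(adj, seed=0):
    rng = random.Random(seed)
    weight = {v: rng.random() for v in adj}
    color = {}
    while len(color) < len(adj):
        for v in adj:
            if v in color:
                continue
            if all(u in color or weight[u] < weight[v] for u in adj[v]):
                used = {color[u] for u in adj[v] if u in color}
                color[v] = min(c for c in range(len(adj)) if c not in used)
    return color

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # 4-cycle
c = jp_color(adj)
assert all(c[u] != c[v] for u in adj for v in adj[u])  # proper coloring
print(max(c.values()) + 1)   # -> 2 colors for this graph and seed
```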

  13. Instruction-level parallel processing.

    PubMed

    Fisher, J A; Rau, R

    1991-09-13

The performance of microprocessors has increased steadily over the past 20 years at a rate of about 50% per year. This is the cumulative result of architectural improvements as well as increases in circuit speed. Moreover, this improvement has been obtained in a transparent fashion, that is, without requiring programmers to rethink their algorithms and programs, thereby enabling the tremendous proliferation of computers that we see today. To continue this performance growth, microprocessor designers have incorporated instruction-level parallelism (ILP) into new designs. ILP utilizes the parallel execution of the lowest-level computer operations (adds, multiplies, loads, and so on) to increase performance transparently. The use of ILP promises to make possible, within the next few years, microprocessors whose performance is many times that of a CRAY-1S. This article provides an overview of ILP, with an emphasis on ILP architectures (superscalar, VLIW, and dataflow processors) and the compiler techniques necessary to make ILP work well. PMID:17831442

  14. Parallel supercomputing with commodity components

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Goda, M. P.; Becker, D. J.

    1997-01-01

We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  15. A new parallel simulation technique

    NASA Astrophysics Data System (ADS)

    Blanco-Pillado, Jose J.; Olum, Ken D.; Shlaer, Benjamin

    2012-01-01

    We develop a "semi-parallel" simulation technique suggested by Pretorius and Lehner, in which the simulation spacetime volume is divided into a large number of small 4-volumes that have only initial and final surfaces. Thus there is no two-way communication between processors, and the 4-volumes can be simulated independently and potentially at different times. This technique allows us to simulate much larger volumes than we otherwise could, because we are not limited by total memory size. No processor time is lost waiting for other processors. We compare a cosmic string simulation we developed using the semi-parallel technique with our previous MPI-based code for several test cases and find a factor of 2.6 improvement in the total amount of processor time required to accomplish the same job for strings evolving in the matter-dominated era.

  16. Scans as primitive parallel operations

    SciTech Connect

Blelloch, G.E. (Dept. of Computer Science)

    1989-11-01

In most parallel random access machine (PRAM) models, memory references are assumed to take unit time. In practice, and in theory, certain scan operations, also known as prefix computations, can execute in no more time than these parallel memory references. This paper outlines an extensive study of the effect of including, in the PRAM models, such scan operations as unit-time primitives. The study concludes that the primitives improve the asymptotic running time of many algorithms by an O(log n) factor, greatly simplify the description of many algorithms, and are significantly easier to implement than memory references. The authors argue that the algorithm designer should feel free to use these operations as if they were as cheap as a memory reference. This paper describes five algorithms that clearly illustrate how the scan primitives can be used in algorithm design. These all run on an EREW PRAM with the addition of two scan primitives.

  17. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

ParGrid is a 'wrapper' that integrates a coupled Power Grid Simulation toolkit consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  18. Efficient, massively parallel eigenvalue computation

    NASA Technical Reports Server (NTRS)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.

  19. Massively parallel femtosecond laser processing.

    PubMed

    Hasegawa, Satoshi; Ito, Haruyasu; Toyoda, Haruyoshi; Hayasaki, Yoshio

    2016-08-01

    Massively parallel femtosecond laser processing with more than 1000 beams was demonstrated. Parallel beams were generated by a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM). The key to this technique is to optimize the CGH in the laser processing system using a scheme called in-system optimization. It was analytically demonstrated that the number of beams is determined by the horizontal number of pixels in the SLM NSLM that is imaged at the pupil plane of an objective lens and a distance parameter pd obtained by dividing the distance between adjacent beams by the diffraction-limited beam diameter. A performance limitation of parallel laser processing in our system was estimated at NSLM of 250 and pd of 7.0. Based on these parameters, the maximum number of beams in a hexagonal close-packed structure was calculated to be 1189 by using an analytical equation. PMID:27505815

  20. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  1. NWChem: scalable parallel computational chemistry

    SciTech Connect

    van Dam, Hubertus JJ; De Jong, Wibe A.; Bylaska, Eric J.; Govind, Niranjan; Kowalski, Karol; Straatsma, TP; Valiev, Marat

    2011-11-01

NWChem is a general purpose computational chemistry code specifically designed to run on distributed memory parallel computers. The core functionality of the code focuses on molecular dynamics, Hartree-Fock and density functional theory methods for both plane-wave basis sets as well as Gaussian basis sets, tensor contraction engine based coupled cluster capabilities and combined quantum mechanics/molecular mechanics descriptions. It was realized from the beginning that scalable implementations of these methods required a programming paradigm inherently different from what message passing approaches could offer. In response a global address space library, the Global Array Toolkit, was developed. The programming model it offers is based on using predominantly one-sided communication. This model underpins most of the functionality in NWChem and the power of it is exemplified by the fact that the code scales to tens of thousands of processors. In this paper the core capabilities of NWChem are described as well as their implementation to achieve an efficient computational chemistry code with high parallel scalability. NWChem is a modern, open source, computational chemistry code specifically designed for large scale parallel applications. To meet the challenges of developing efficient, scalable and portable programs of this nature a particular code design was adopted. This code design involved two main features. First of all, the code is built up in a modular fashion so that a large variety of functionality can be integrated easily. Secondly, to facilitate writing complex parallel algorithms the Global Array toolkit was developed. This toolkit allows one to write parallel applications in a shared memory like approach, but offers additional mechanisms to exploit data locality to lower communication overheads. This framework has proven to be very successful in computational chemistry but is applicable to any engineering domain.
Within the context created by the features

  2. Parallel micromanipulation method for microassembly

    NASA Astrophysics Data System (ADS)

    Sin, Jeongsik; Stephanou, Harry E.

    2001-09-01

    Microassembly deals with micron- or millimeter-scale objects where the tolerance requirements are in the micron range. Typical applications include electronics components (silicon-fabricated circuits), optoelectronics components (photo detectors, emitters, amplifiers, optical fibers, microlenses, etc.), and MEMS (Micro-Electro-Mechanical-System) dies. The assembly processes generally require not only high precision but also high throughput at low manufacturing cost. While conventional macroscale assembly methods have been utilized in scaled-down versions for microassembly applications, they exhibit limitations on throughput and cost due to the inherently serialized process. Since the assembly process depends heavily on manipulation performance, an efficient manipulation method for small parts will have a significant impact on the manufacturing of miniaturized products. The objective of this study on 'parallel micromanipulation' is to achieve these three requirements through the handling of multiple small parts simultaneously (in parallel) with high precision (micromanipulation). As a step toward this objective, a new manipulation method is introduced. The method uses a distributed actuation array for gripper-free, parallel manipulation, and a centralized, shared actuator for simplified control. The method has been implemented on a testbed, the 'Piezo Active Surface (PAS),' in which an actively generated friction force field is the driving force for part manipulation. Basic motion primitives, such as translation and rotation of objects, are made possible with the proposed method. This study discusses the design of the proposed manipulation method, PAS, and the corresponding manipulation mechanism. The PAS consists of two piezoelectric actuators for X and Y motion, two linear motion guides, two sets of nozzle arrays, and solenoid valves to switch the pneumatic suction force on and off in individual nozzles. One array of nozzles is fixed relative to the surface on

  3. Course of Study: Occupational, Vocational, and Technical Education: Phase 3--8th Grade. Exploratory Education.

    ERIC Educational Resources Information Center

    Pittsburgh Board of Public Education, PA.

    The curriculum guide outlines learning patterns which may be adapted or adopted by the creative teacher in occupational education. Emphasis is placed on processes basic to specific job activities found within the areas of: (1) business education, (2) home economics, and (3) industrial arts. Students are able to associate, integrate, and catalog…

  4. The society for craniofacial genetics and developmental biology 38th annual meeting.

    PubMed

    Taneyhill, Lisa A; Hoover-Fong, Julie; Lozanoff, Scott; Marcucio, Ralph; Richtsmeier, Joan T; Trainor, Paul A

    2016-07-01

    The mission of the Society for Craniofacial Genetics and Developmental Biology (SCGDB) is to promote education, research, and communication about normal and abnormal development of the tissues and organs of the head. The SCGDB welcomes as members undergraduate students, graduate students, postdoctoral researchers, clinicians, orthodontists, scientists, and academicians who share an interest in craniofacial biology. Each year our members come together to share their novel findings and to build upon, and challenge, current knowledge of craniofacial biology. © 2016 Wiley Periodicals, Inc. PMID:27102868

  5. Teaching and Teacher Education among the Professions. 38th Charles W. Hunt Memorial Lecture.

    ERIC Educational Resources Information Center

    Shulman, Lee S.

    This paper argues that a real profession involves a community of people committed to ensuring that they individually and collectively develop the capacity to learn from experience, so they can serve the social responsibilities or needs to which they are committed. It explains that there are six commonplaces inevitably associated with a profession…

  6. Annual Adult Education Research Conference Proceedings (38th, Stillwater, Oklahoma, May 16-18, 1997).

    ERIC Educational Resources Information Center

    Nolan, Robert E., Comp.; Chelesvig, Heath, Comp.

    The following are among 50 papers included: "The Politics of Planning Culturally Relevant AIDS Education for African-American Women" (Archie-Booker); "Developing White Consciousness through a Transformative Learning Process" (Barlas); "Executive Businesswomen's Learning in the Context of Organizational Culture" (Bierma); "The Myth of the Universal…

  7. Complex Rift-Parallel, Strike-Slip Faulting in Iceland: Kinematic Analysis of the Gljúfurá Fault Zone

    NASA Astrophysics Data System (ADS)

    Nanfito, A.; Karson, J. A.

    2009-12-01

    The N-S striking Gljúfurá Fault Zone is an anomalous, dextral, strike-slip fault cutting Tertiary basaltic lavas in west-central Iceland. The fault zone is nearly parallel to structures formed at extinct spreading centers that were active from ~15 to 7 Ma in this region, suggesting ridge-parallel strike-slip faulting. The fault zone is well exposed in a river gorge for ~2 km along a well-defined regional lineament. The combined damage zone and fault core are about 50 m wide, revealing an especially intense and complex style of deformation compared to other Icelandic fault zones. Basaltic lava flows on either side of the fault zone are cut by numerous closely spaced (10s of cm to m) Riedel shear fractures that grade into a fault core of progressively more intensely fractured lava and strongly altered and mineralized fault breccias, cataclasite, and fault gouge. Riedel shears are frequently rotated or bent into the main fault zone. Distinctive bands of fault breccia derived from lava flow interiors, flow tops, and dike rock are mapped for tens of meters along strike and reach thicknesses of several meters. Breccias contain angular basaltic fragments that range from a few meters down to millimeters. Fault breccias are typically clast supported, with a matrix of finely comminuted basalt clasts to clay gouge. 'Jigsaw' breccias are supported by a calcite matrix. Discrete faults and shear fractures show dominantly gently plunging slickenlines and abundant kinematic indicators showing dextral>normal oblique slip. Zeolite and calcite veins record multiple episodes of extension. Local left steps in the fault zone are marked by extensional duplex structures with vertical separations of tens of meters, bounded by major strike-slip fault strands. The overall architecture of the fault zone is interpreted as an exhumed flower structure. Numerous deformed and undeformed basaltic dikes sub-parallel the deformation structures, suggesting synkinematic intrusion. Some dikes deviate from the

  8. Implementing clips on a parallel computer

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1987-01-01

    The C Language Integrated Production System (CLIPS) is a forward-chaining, rule-based language that provides training and delivery for expert systems. Conceptually, rule-based languages have great potential for benefiting from the inherent parallelism of the algorithms that they employ. During each cycle of execution, a knowledge base of information is compared against a set of rules to determine if any rules are applicable. Parallelism can also be employed with multiple cooperating expert systems. To investigate the potential benefits of using a parallel computer to speed up the comparison of facts to rules in expert systems, a parallel version of CLIPS was developed for the FLEX/32, a large-grain parallel computer. The FLEX implementation takes a macroscopic approach in achieving parallelism by splitting whole sets of rules among several processors rather than by splitting the components of an individual rule among processors. The parallel CLIPS prototype demonstrates the potential advantages of integrating expert system tools with parallel computers.
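    The macroscopic partitioning strategy described above can be sketched in a few lines of Python. The rules, facts, and worker count below are hypothetical, and threads stand in for the FLEX/32's processors; this is an illustration of the idea, not the CLIPS implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical rule set: each rule is a (name, condition) pair.
RULES = [
    ("hot",  lambda f: f["temp"] > 100),
    ("cold", lambda f: f["temp"] < 0),
    ("wet",  lambda f: f["humidity"] > 80),
    ("dry",  lambda f: f["humidity"] < 20),
]
# Hypothetical fact base shared by every worker.
FACTS = [{"temp": 120, "humidity": 10}, {"temp": -5, "humidity": 90}]

def match_partition(rule_slice):
    """One worker: compare the ENTIRE fact base against its share of rules."""
    return [(name, i) for name, cond in rule_slice
            for i, fact in enumerate(FACTS) if cond(fact)]

def parallel_match(n_workers=2):
    """Split whole rules among workers (coarse grain), then merge matches."""
    chunks = [RULES[k::n_workers] for k in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(match_partition, chunks)
    return sorted(m for part in results for m in part)
```

    Note that the fact base is never split: each processor holds all facts and only a subset of the rules, which is what makes the grain coarse.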

  9. Parallelizing alternating direction implicit solver on GPUs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...
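    Each line solve in an ADI sweep reduces to a tridiagonal linear system. The serial Thomas algorithm below is the baseline kernel that Parallel Cyclic Reduction reorganizes for GPU execution; this is a generic sketch of that kernel, not the authors' implementation.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, and c the super-diagonal
    (c[-1] unused). Runs in O(n) but is inherently sequential, which
    is why PCR-style reformulations are used on GPUs."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```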

  10. Global Arrays Parallel Programming Toolkit

    SciTech Connect

    Nieplocha, Jaroslaw; Krishnan, Manoj Kumar; Palmer, Bruce J.; Tipparaju, Vinod; Harrison, Robert J.; Chavarría-Miranda, Daniel

    2011-01-01

    The two predominant classes of programming models for parallel computing are distributed memory and shared memory. Both have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement. Given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Careful code restructuring to increase data reuse, and replacing fine-grain loads/stores with block access to shared data, can address the problem and yield performance for shared memory that is competitive with message passing. However, this performance comes at the cost of compromising the ease of use that the shared memory model advertises. Distributed memory models, such as message passing or one-sided communication, offer performance and scalability but are difficult to program. The Global Arrays toolkit attempts to offer the best features of both models. It implements a shared-memory programming model in which data locality is managed by the programmer. This management is achieved by calls to functions that transfer data between a global address space (a distributed array) and local storage. In this respect, the GA model has similarities to the distributed shared-memory models that provide an explicit acquire/release protocol. However, the GA model acknowledges that remote data is slower to access than local data, and allows data locality to be specified by the programmer and hence managed. GA is related to global address space languages such as UPC, Titanium, and, to a lesser extent, Co-Array Fortran. In addition, by providing a set of data-parallel operations, GA is also related to data-parallel languages such as HPF, ZPL, and Data Parallel C. However, the Global Array programming model is implemented as a library that works with most languages used for technical computing and does not rely on compiler technology for achieving
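    The get/put model described above can be illustrated with a toy single-process mock-up. The class and block layout are invented for illustration; the real toolkit distributes the blocks across ranks and performs true one-sided communication.

```python
import math

class ToyGlobalArray:
    """Toy mock-up of the GA model: a 1-D array block-partitioned across
    `nprocs` ranks, accessed through one-sided get/put on *global* indices."""

    def __init__(self, length, nprocs):
        self.length = length
        self.block = math.ceil(length / nprocs)    # block size per rank
        self.local = [[0.0] * self.block for _ in range(nprocs)]

    def owner(self, i):
        """Locality is explicit: any rank can ask which rank owns element i,
        and arrange its work so most accesses stay local."""
        return i // self.block

    def put(self, i, value):
        """One-sided write: no cooperation from the owning rank is needed."""
        self.local[self.owner(i)][i % self.block] = value

    def get(self, i):
        """One-sided read from wherever the element lives."""
        return self.local[self.owner(i)][i % self.block]
```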

  11. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  12. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a portable, parallel FORTRAN for shared memory parallel computers, is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray Y-MP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  13. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  14. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust, and efficient fabrication of the molecular devices. The software for this capability does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  15. Scalable Parallel Algebraic Multigrid Solvers

    SciTech Connect

    Bank, R; Lu, S; Tong, C; Vassilevski, P

    2005-03-23

    The authors propose a parallel algebraic multigrid (AMG) algorithm, which has the novel feature that the subproblem residing in each processor is defined over the entire partition domain, although the vast majority of unknowns for each subproblem are associated with the partition owned by the corresponding processor. This feature ensures that a global coarse description of the problem is contained within each of the subproblems. The advantages of this approach are that interprocessor communication is minimized in the solution process while an optimal order of convergence rate is preserved, and that the speed of local subproblem solvers can be maximized using the best existing sequential algebraic solvers.

  16. Fault-tolerant parallel processor

    SciTech Connect

    Harper, R.E.; Lala, J.H.

    1991-06-01

    This paper addresses issues central to the design and operation of an ultrareliable, Byzantine resilient parallel computer. Interprocessor connectivity requirements are met by treating connectivity as a resource that is shared among many processing elements, allowing flexibility in their configuration and reducing complexity. Redundant groups are synchronized solely by message transmissions and receptions, which also provide input data consistency and output voting. Reliability analysis results are presented that demonstrate the reduced failure probability of such a system. Performance analysis results are presented that quantify the temporal overhead involved in executing such fault-tolerance-specific operations. Empirical performance measurements of prototypes of the architecture are presented. 30 refs.

  17. Parallel Assembly of LIGA Components

    SciTech Connect

    Christenson, T.R.; Feddema, J.T.

    1999-03-04

    In this paper, a prototype robotic workcell for the parallel assembly of LIGA components is described. A Cartesian robot is used to press 386 and 485 micron diameter pins into a LIGA substrate and then place a 3-inch diameter wafer with LIGA gears onto the pins. Upward and downward looking microscopes are used to locate holes in the LIGA substrate, pins to be pressed in the holes, and gears to be placed on the pins. This vision system can locate parts within 3 microns, while the Cartesian manipulator can place the parts within 0.4 microns.

  18. True Shear Parallel Plate Viscometer

    NASA Technical Reports Server (NTRS)

    Ethridge, Edwin; Kaukler, William

    2010-01-01

    This viscometer (which can also be used as a rheometer) is designed for use with liquids over a large temperature range. The device consists of horizontally disposed, similarly sized, parallel plates with a precisely known gap. The lower plate is driven laterally with a motor to apply shear to the liquid in the gap. The upper plate is freely suspended from a double-arm pendulum with a sufficiently long radius to reduce height variations during the swing to negligible levels. A sensitive load cell measures the shear force applied by the liquid to the upper plate. Viscosity is measured by taking the ratio of shear stress to shear rate.
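    The final sentence of the abstract amounts to a one-line computation: viscosity is the ratio of shear stress (force over plate area) to shear rate (plate speed over gap). The parameter values below are illustrative, not instrument data.

```python
def viscosity(force_n, plate_area_m2, gap_m, plate_speed_m_s):
    """Newtonian viscosity from a parallel-plate shear measurement:
    eta = shear stress / shear rate = (F / A) / (v / h)."""
    shear_stress = force_n / plate_area_m2        # Pa
    shear_rate = plate_speed_m_s / gap_m          # 1/s
    return shear_stress / shear_rate              # Pa*s

# Illustrative reading: 0.2 N on a 0.01 m^2 plate, 1 mm gap, 20 mm/s drive.
eta = viscosity(0.2, 0.01, 0.001, 0.02)
```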

  19. Exploring Parallel Concordancing in English and Chinese.

    ERIC Educational Resources Information Center

    Lixun, Wang

    2001-01-01

    Investigates the value of computer technology as a medium for the delivery of parallel texts in English and Chinese for language learning. An English-Chinese parallel corpus was created for use in parallel concordancing--a technique that has been developed to respond to the desire to study language in its natural contexts of use. (Author/VWL)

  20. Parallel Computing Using Web Servers and "Servlets".

    ERIC Educational Resources Information Center

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  1. Parallel Computation Of Forward Dynamics Of Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1993-01-01

    Report presents parallel algorithms and special parallel architecture for computation of forward dynamics of robotics manipulators. Products of effort to find best method of parallel computation to achieve required computational efficiency. Significant speedup of computation anticipated as well as cost reduction.

  2. A parallel version of FORM 3

    NASA Astrophysics Data System (ADS)

    Fliegner, D.; Rétey, A.; Vermaseren, J. A. M.

    2001-08-01

    The parallel version of the symbolic manipulation program FORM for clusters of workstations and massive parallel systems is presented. We discuss various cluster architectures and the implementation of the parallel program using message passing (MPI). Performance results for real physics applications are shown.

  3. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    ERIC Educational Resources Information Center

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  4. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  5. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  6. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 1 2014-04-01 2014-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  7. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 1 2013-04-01 2013-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  8. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 1 2011-04-01 2011-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  9. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 1 2012-04-01 2012-04-01 false Parallel proceedings. 12.24... REPARATIONS General Information and Preliminary Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall include: (1) An arbitration...

  10. Geophysical evidence for deep basin in western Kentucky

    SciTech Connect

    Soderberg, R.K.; Keller, G.R.

    1981-02-01

    The Rough Creek fault zone is a major element of the 38th Parallel lineament in western Kentucky and southern Illinois near the head of the Mississippi embayment. Gravity, magnetic, and subsurface data suggest that this fault zone marks the northern boundary of a large graben in the Precambrian basement for which we propose the name Rough Creek graben. This graben is a major structural feature which probably formed initially in late Precambrian to early Paleozoic time and has been reactivated (perhaps several times) during the late Paleozoic and possibly the Mesozoic. The graben is as much as 5.5 km deep and the large volume of deeply buried sediments favors further exploration.

  11. Satellite and surface geophysical expression of anomalous crustal structure in Kentucky and Tennessee

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Thomas, H. H.; Wasilewski, P. J.

    1981-01-01

    An equivalent layer magnetization model is discussed. Inversion of long-wavelength satellite magnetic anomaly data indicates a very magnetic source region centered in south-central Kentucky. Refraction profiles suggest that the source of the gravity anomaly is a large mass of rock occupying much of the crustal thickness. The outline of the source delineated by gravity contours is also discernible in aeromagnetic anomaly patterns. The source is interpreted as a mafic plutonic complex, and several lines of evidence are consistent with a rift association. The body is, however, clearly related to the inferred position of the Grenville Front. It is bounded on the north by the fault zones of the 38th Parallel Lineament. It is suggested that such magnetization levels are achieved with magnetic mineralogies produced by normal oxidation and metamorphic processes and enhanced by viscous build-up, especially in mafic rocks of alkaline character.

  12. Magnetic anomaly map of North America south of 50 degrees north from Pogo data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.

    1976-01-01

    A magnetic anomaly map produced from Pogo data for North America and adjacent ocean areas is presented. At satellite elevations anomalies have wavelengths measured in hundreds of kilometers, and reflect regional structures on a large scale. Prominent features of the map are: (1) a large east-west high through the mid-continent, breached at the Mississippi Embayment; (2) a broad low over the Gulf of Mexico; (3) a strong gradient separating these features, which follows the Southern Appalachian-Ouachita curvature; and (4) a high over the Antilles-Bahamas Platform which extends to northern Florida. A possible relationship between the high of the mid-continent and the 38th parallel lineament is noted.

  13. Xyce parallel electronic simulator design.

    SciTech Connect

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the ground up to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator, so having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines, and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort involving a number of researchers, engineers, scientists, mathematicians, and computer scientists. In addition to diversity of background, a certain amount of staff turnover is to be expected on long-term projects as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document in one place a number of the software quality practices followed by the Xyce team. It is also hoped that this document will be a good source of information for new developers.

  14. Efficient parallel global garbage collection on massively parallel computers

    SciTech Connect

    Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori

    1994-12-31

    On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient garbage collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) admits minimum pause time of ongoing computations, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods are used for confirming the arrival of pending messages: one counts numbers of messages and the other uses network 'bulldozing.' Performance evaluation in actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.
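    The weight counting idea can be sketched as classic weighted reference counting: copying a reference splits its weight locally, and only deletion sends a decrement to the owner. This is a simplified, single-process sketch with invented names; the paper's global scheme additionally handles in-flight messages and inter-node traversal.

```python
class Obj:
    """A (conceptually remote) object; its owner stores only a total weight."""
    def __init__(self):
        self.total_weight = 0

def create_ref(obj, weight=64):
    """Create the first reference; the owner records the total weight."""
    obj.total_weight += weight
    return {"target": obj, "weight": weight}

def copy_ref(ref):
    """Duplicating a reference splits its weight locally: no message to the
    owner node is needed, which is where the traffic saving comes from."""
    half = ref["weight"] // 2
    ref["weight"] -= half
    return {"target": ref["target"], "weight": half}

def drop_ref(ref):
    """Only deletion sends a decrement; the object is garbage when the
    owner's total weight reaches zero."""
    obj = ref["target"]
    obj.total_weight -= ref["weight"]
    ref["weight"] = 0
    return obj.total_weight == 0
```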

  15. A parallel execution model for Prolog

    SciTech Connect

    Fagin, B.

    1987-01-01

    In this thesis a new parallel execution model for Prolog is presented: the PPP model, or Parallel Prolog Processor. The PPP supports AND-parallelism, OR-parallelism, and intelligent backtracking. An implementation of the PPP is described, through the extension of an existing Prolog abstract machine architecture. Several examples of PPP execution are presented, and compilation to the PPP abstract instruction set is discussed. The performance effects of this model are reported, based on a simulation of a large benchmark set. The implications of these results for parallel Prolog systems are discussed, and directions for future work are indicated.

  16. Information hiding in parallel programs

    SciTech Connect

    Foster, I.

    1992-01-30

    A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.

  17. Nanocapillary Adhesion between Parallel Plates.

    PubMed

    Cheng, Shengfeng; Robbins, Mark O

    2016-08-01

    Molecular dynamics simulations are used to study capillary adhesion from a nanometer scale liquid bridge between two parallel flat solid surfaces. The capillary force, Fcap, and the meniscus shape of the bridge are computed as the separation between the solid surfaces, h, is varied. Macroscopic theory predicts the meniscus shape and the contribution of liquid/vapor interfacial tension to Fcap quite accurately for separations as small as two or three molecular diameters (1-2 nm). However, the total capillary force differs in sign and magnitude from macroscopic theory for h ≲ 5 nm (8-10 diameters) because of molecular layering that is not included in macroscopic theory. For these small separations, the pressure tensor in the fluid becomes anisotropic. The components in the plane of the surface vary smoothly and are consistent with theory based on the macroscopic surface tension. Capillary adhesion is affected by only the perpendicular component, which has strong oscillations as the molecular layering changes. PMID:27413872
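    The macroscopic prediction the simulations are compared against can be written down directly: a Laplace-pressure term acting over the wetted disc plus a surface-tension pull along the meniscus rim. The function and the water-like parameters below are illustrative assumptions, not the paper's model or data.

```python
import math

def capillary_force(gamma, contact_angle_deg, gap, bridge_radius):
    """Macroscopic estimate of the attractive capillary force between two
    parallel plates bridged by liquid: Laplace pressure 2*gamma*cos(theta)/h
    over the wetted disc of radius r, plus the rim tension term.
    gamma in N/m, gap and bridge_radius in m; returns force in N."""
    theta = math.radians(contact_angle_deg)
    area = math.pi * bridge_radius ** 2
    pressure_term = (2.0 * gamma * math.cos(theta) / gap) * area
    rim_term = 2.0 * math.pi * bridge_radius * gamma * math.sin(theta)
    return pressure_term + rim_term

# Water-like bridge at a 5 nm gap (illustrative): gamma = 72 mN/m,
# perfect wetting, 10 nm meniscus radius.
f = capillary_force(0.072, 0.0, 5e-9, 1e-8)
```

    At separations below ~5 nm the abstract reports that molecular layering makes the true force deviate from this estimate, which is the paper's central point.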

  18. Embodied and Distributed Parallel DJing.

    PubMed

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things. PMID:27534347

  19. Parallel discovery of Alzheimer's therapeutics.

    PubMed

    Lo, Andrew W; Ho, Carole; Cummings, Jayna; Kosik, Kenneth S

    2014-06-18

    As the prevalence of Alzheimer's disease (AD) grows, so do the costs it imposes on society. Scientific, clinical, and financial interests have focused current drug discovery efforts largely on the single biological pathway that leads to amyloid deposition. This effort has resulted in slow progress and disappointing outcomes. Here, we describe a "portfolio approach" in which multiple distinct drug development projects are undertaken simultaneously. Although a greater upfront investment is required, the probability of at least one success should be higher with "multiple shots on goal," increasing the efficiency of this undertaking. However, our portfolio simulations show that the risk-adjusted return on investment of parallel discovery is insufficient to attract private-sector funding. Nevertheless, the future cost savings of an effective AD therapy to Medicare and Medicaid far exceed this investment, suggesting that government funding is both essential and financially beneficial. PMID:24944190
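
    The "multiple shots on goal" arithmetic is simple to make concrete (the probabilities below are illustrative assumptions, not the paper's portfolio estimates):

```python
def prob_at_least_one_success(p: float, n: int) -> float:
    """Probability that at least one of n independent drug-development
    projects succeeds, each with per-project success probability p."""
    return 1.0 - (1.0 - p) ** n

# Assumed numbers: 10 parallel programs, 5% success chance each.
p_single = 0.05
portfolio = prob_at_least_one_success(p_single, 10)
```

    With these assumed inputs the portfolio's chance of at least one success is about 40%, eight times the single-project chance, at the price of funding all ten programs up front.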

  20. Parallel Network Simulations with NEURON

    PubMed Central

    Migliore, M.; Cannia, C.; Lytton, W. W.; Markram, Henry; Hines, M. L.

    2009-01-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored. PMID:16732488
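
    The min-delay scheduling idea can be sketched in a few lines (a hypothetical illustration with an assumed delay value; NEURON's actual implementation is compiled code):

```python
import math

def exchange_points(t_stop, min_delay):
    """With interprocessor spike delays of at least `min_delay`, each
    processor may integrate its subnet independently across one
    min-delay interval; spikes only need to be exchanged at the
    interval boundaries returned here."""
    steps = math.ceil(t_stop / min_delay)
    return [min(k * min_delay, t_stop) for k in range(1, steps + 1)]

# Assumed values: 100 ms simulation, 2.5 ms minimum connection delay.
points = exchange_points(t_stop=100.0, min_delay=2.5)
```

    The larger the minimum delay, the fewer communication rounds are needed, which is why communication overhead can stay below the cache benefit the abstract describes.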

  1. A highly parallel signal processor

    NASA Astrophysics Data System (ADS)

    Bigham, Jackson D., Jr.

    There is an increasing need for signal processors functional across a broad range of problems, from radar systems to E-O and ESM applications. To meet this challenge, a signal processing system capable of efficiently meeting the processing requirements over a broad range of avionics sensor systems has been developed. The CDC Parallel Modular Signal Processor (PMSP) is a complete MIL-E-5400-qualified digital signal processing system capable of computation rates greater than 600 MOPS (million operations per second). The signal processing element of the PMSP is the Micro-AFP. It is an all-VLSI processor capable of executing multiple simultaneous operations. Up to five Micro-AFPs and 12 MB of main store memory (MSM), along with associated control and I/O functions, are contained in the PMSP's standard ATR enclosure.

  2. Self-testing in parallel

    NASA Astrophysics Data System (ADS)

    McKague, Matthew

    2016-04-01

    Self-testing allows us to determine, through classical interaction only, whether some players in a non-local game share particular quantum states. Most work on self-testing has concentrated on developing tests for small states like one pair of maximally entangled qubits, or on tests where there is a separate player for each qubit, as in a graph state. Here we consider the case of testing many maximally entangled pairs of qubits shared between two players. Previously such a test was shown where testing is sequential, i.e., one pair is tested at a time. Here we consider the parallel case where all pairs are tested simultaneously, giving considerably more power to dishonest players. We derive sufficient conditions for a self-test for many maximally entangled pairs of qubits shared between two players and also two constructions for self-tests where all pairs are tested simultaneously.
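
    Self-testing protocols build on Bell-type correlations; as a minimal illustration (standard textbook CHSH settings, not this paper's parallel constructions), the quantum value for one maximally entangled pair exceeds the classical bound of 2:

```python
import math

def singlet_correlation(a: float, b: float) -> float:
    """Quantum correlation E(a, b) for spin measurements at angles a, b
    on a singlet (maximally entangled) pair: E = -cos(a - b)."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    E = singlet_correlation
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard optimal settings reach Tsirelson's bound |S| = 2*sqrt(2),
# above the classical limit of 2 -- the kind of correlation signature
# that self-tests certify through classical interaction alone.
s = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
```

    Testing many pairs in parallel must certify such correlations for all pairs simultaneously, despite dishonest players being free to correlate their strategies across pairs.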

  3. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The main contribution of the effort in the last two years is the introduction of the MOPPS system. After doing extensive literature search, we introduced the system which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications on a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.

  4. Device for balancing parallel strings

    DOEpatents

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.
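
    As a rough illustration of why unbalanced strings need correction (an idealized DC circuit model with assumed numbers, not the patented magnetic circuit):

```python
def string_currents(emfs, resistances, load_current):
    """Currents through battery strings connected in parallel across a
    common bus, modeled as EMF sources with series internal resistance.
    Node equation at the bus: sum_i (emf_i - v_bus)/r_i = load_current."""
    g = [1.0 / r for r in resistances]
    v_bus = (sum(e * gi for e, gi in zip(emfs, g)) - load_current) / sum(g)
    return [(e - v_bus) * gi for e, gi in zip(emfs, g)]

# Assumed example: three 48 V strings sharing a 30 A load; the third
# string has a higher internal resistance and so carries less current.
currents = string_currents([48.0, 48.0, 48.0], [0.10, 0.10, 0.15], 30.0)
```

    The mismatch in string currents is what the saturable-reactor scheme counteracts by equalizing the voltage across each string.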

  5. Hybrid Optimization Parallel Search PACKage

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduce execution time and facilitate restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
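
    The flavor of GSS plus an evaluation cache can be sketched as a toy compass search (a hypothetical illustration, not the HOPSPACK API):

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10000):
    """Minimal derivative-free compass (generating set) search with a
    cache of saved evaluations. Polls +/- each coordinate direction;
    shrinks the step when no poll point improves."""
    cache = {}
    def eval_cached(x):
        key = tuple(round(v, 12) for v in x)
        if key not in cache:
            cache[key] = f(x)
        return cache[key]

    x = list(x0)
    fx = eval_cached(x)
    requested = 0
    while step > tol and requested < max_evals:
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                requested += 1
                ft = eval_cached(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                 # no descent direction: refine mesh
    return x, fx, len(cache), requested

x, fx, unique_evals, requested = compass_search(
    lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2, [0.0, 0.0])
```

    The gap between requested and unique evaluations is work the cache saved; in HOPSPACK the cached evaluations also survive restarts.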

  6. Parallel computing in enterprise modeling.

    SciTech Connect

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  7. Integrated Task and Data Parallel Programming

    NASA Technical Reports Server (NTRS)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. 
Additional 1995 Activities During the fall I collaborated
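
    A rough analogue of mixing the two paradigms can be sketched with Python threads (hypothetical; the actual work targets an object-oriented language over Legion): a task-parallel level whose independent tasks each use a data-parallel reduction internally.

```python
from concurrent.futures import ThreadPoolExecutor

def data_parallel_sum(data, chunks=4):
    """Data-parallel level: partition `data`, reduce the chunks in
    parallel, then combine the partial results."""
    n = len(data)
    bounds = [(i * n // chunks, (i + 1) * n // chunks) for i in range(chunks)]
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        partials = pool.map(lambda b: sum(data[b[0]:b[1]]), bounds)
        return sum(partials)

def run_tasks(datasets):
    """Task-parallel level: one independent task per dataset; each task
    internally performs the data-parallel reduction above."""
    with ThreadPoolExecutor(max_workers=len(datasets)) as tasks:
        return list(tasks.map(data_parallel_sum, datasets))

results = run_tasks([list(range(100)), list(range(50))])
```

    The point of the integrated language is to express both levels with first-class constructs rather than hand-managed pools like these.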

  8. The ParaScope parallel programming environment

    NASA Technical Reports Server (NTRS)

    Cooper, Keith D.; Hall, Mary W.; Hood, Robert T.; Kennedy, Ken; Mckinley, Kathryn S.; Mellor-Crummey, John M.; Torczon, Linda; Warren, Scott K.

    1993-01-01

    The ParaScope parallel programming environment, developed to support scientific programming of shared-memory multiprocessors, includes a collection of tools that use global program analysis to help users develop and debug parallel programs. This paper focuses on ParaScope's compilation system, its parallel program editor, and its parallel debugging system. The compilation system extends the traditional single-procedure compiler by providing a mechanism for managing the compilation of complete programs. Thus, ParaScope can support both traditional single-procedure optimization and optimization across procedure boundaries. The ParaScope editor brings both compiler analysis and user expertise to bear on program parallelization. It assists the knowledgeable user by displaying and managing analysis and by providing a variety of interactive program transformations that are effective in exposing parallelism. The debugging system detects and reports timing-dependent errors, called data races, in execution of parallel programs. The system combines static analysis, program instrumentation, and run-time reporting to provide a mechanical system for isolating errors in parallel program executions. Finally, we describe a new project to extend ParaScope to support programming in FORTRAN D, a machine-independent parallel programming language intended for use with both distributed-memory and shared-memory parallel computers.
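
    A much-simplified illustration of dynamic race detection (a lockset-style check over a recorded access trace, in the spirit of tools like Eraser; ParaScope's actual mechanism combines static analysis, instrumentation, and run-time reporting):

```python
def find_races(trace):
    """Flag potential data races in a recorded access trace: two accesses
    to the same variable from different threads, at least one a write,
    with no lock held in common.
    Each trace entry: (thread_id, var_name, 'r' or 'w', frozenset_of_locks)."""
    races = set()
    for i, (t1, v1, op1, l1) in enumerate(trace):
        for t2, v2, op2, l2 in trace[i + 1:]:
            if (v1 == v2 and t1 != t2 and 'w' in (op1, op2)
                    and not (l1 & l2)):
                races.add(v1)
    return races

trace = [
    (1, 'x', 'w', frozenset({'L'})),   # x always written under lock L: safe
    (2, 'x', 'w', frozenset({'L'})),
    (1, 'y', 'w', frozenset()),        # y written by thread 1 unlocked...
    (2, 'y', 'r', frozenset()),        # ...and read by thread 2: a race
]
races = find_races(trace)
```

    Real race detectors must also handle ordering (happens-before) and scale to long executions, which is where the static-analysis side of ParaScope pays off.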

  9. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2014-10-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by fusion community. Parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Initial results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.
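
    The surface-based load distribution can be sketched as a simple block partition (a hypothetical illustration, not MARS code):

```python
def assign_surfaces(n_surfaces, n_procs):
    """Block-distribute magnetic surfaces among processors so each rank
    builds the matrix rows for its own contiguous slab of surfaces.
    Returns a list of (start, stop) index ranges, one per rank."""
    base, extra = divmod(n_surfaces, n_procs)
    ranges, start = [], 0
    for rank in range(n_procs):
        count = base + (1 if rank < extra else 0)  # spread the remainder
        ranges.append((start, start + count))
        start += count
    return ranges

ranges = assign_surfaces(100, 8)
```

    Each rank then assembles only its own block of the eigenvalue-problem matrix, so construction time scales with surfaces per rank rather than total surfaces.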

  10. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2013-10-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by fusion community. Parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Preliminary results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.

  11. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2015-11-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by fusion community. Parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iterations algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is made by repeating steps of the present MARS algorithm using parallel libraries and procedures. Results of MARS parallelization and of the development of a new fix boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.

  12. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
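
    The variable-classification step that precedes directive insertion can be caricatured in a few lines (a toy model of the decision, not CAPO's interprocedural analysis):

```python
def classify_variable(ops):
    """Toy classification of a loop variable from its per-iteration access
    pattern, mimicking the private/reduction/shared decision an
    auto-parallelizer makes before emitting directives.
    `ops` is the ordered list of accesses in one iteration:
    'w' (plain write), 'r' (read), 'u' (self-update like s = s + x)."""
    if ops and ops[0] == 'w':
        return 'private'      # defined before any use in each iteration
    if ops and ops == ['u'] * len(ops):
        return 'reduction'    # only self-updates: combine partial results
    return 'shared'           # value visible across iterations

labels = {name: classify_variable(ops) for name, ops in {
    'tmp': ['w', 'r'],   # scratch value, written first each iteration
    'total': ['u'],      # total = total + a(i)
    'a': ['r'],          # read-only array reference
}.items()}
```

    The real analysis must prove these properties interprocedurally from data-dependence information, which is why CAPO leans on CAPTools.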

  13. A generic fine-grained parallel C

    NASA Technical Reports Server (NTRS)

    Hamet, L.; Dorband, John E.

    1988-01-01

    With the present availability of parallel processors of vastly different architectures, there is a need for a common language interface to multiple types of machines. The parallel C compiler, currently under development, is intended to be such a language. This language is based on the belief that an algorithm designed around fine-grained parallelism can be mapped relatively easily to different parallel architectures, since a large percentage of the parallelism has been identified. The compiler generates a FORTH-like machine-independent intermediate code. A machine-dependent translator will reside on each machine to generate the appropriate executable code, taking advantage of the particular architectures. The goal of this project is to allow a user to run the same program on such machines as the Massively Parallel Processor, the CRAY, the Connection Machine, and the CYBER 205 as well as serial machines such as VAXes, Macintoshes and Sun workstations.

  14. Parallel automated adaptive procedures for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Shephard, M. S.; Flaherty, J. E.; Decougny, H. L.; Ozturan, C.; Bottasso, C. L.; Beall, M. W.

    1995-01-01

    Consideration is given to the techniques required to support adaptive analysis of automatically generated unstructured meshes on distributed memory MIMD parallel computers. The key areas of new development are focused on the support of effective parallel computations when the structure of the numerical discretization, the mesh, is evolving, and in fact constructed, during the computation. All the procedures presented operate in parallel on already distributed mesh information. Starting from a mesh definition in terms of a topological hierarchy, techniques to support the distribution, redistribution and communication among the mesh entities over the processors are given, and algorithms to dynamically balance processor workload based on the migration of mesh entities are given. A procedure to automatically generate meshes in parallel, starting from CAD geometric models, is given. Parallel procedures to enrich the mesh through local mesh modifications are also given. Finally, the combination of these techniques to produce a parallel automated finite element analysis procedure for rotorcraft aerodynamics calculations is discussed and demonstrated.
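
    The entity-migration balancing step can be illustrated with a greedy sketch (hypothetical; real partitioners also weigh the communication cost of each migration):

```python
def rebalance(loads, tol=1):
    """Greedy dynamic load balancing by entity migration: repeatedly move
    one unit of work (one mesh entity) from the most loaded processor to
    the least loaded until the spread is within `tol` entities."""
    loads = list(loads)
    migrations = 0
    while max(loads) - min(loads) > tol:
        src = loads.index(max(loads))
        dst = loads.index(min(loads))
        loads[src] -= 1
        loads[dst] += 1
        migrations += 1
    return loads, migrations

# Assumed workload: entity counts per processor after adaptive refinement.
balanced, moves = rebalance([40, 10, 10, 20])
```

    Because refinement keeps changing the mesh, such rebalancing must run repeatedly during the computation, on already distributed mesh data.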

  15. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.
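
    One common approach for polygon rendering on distributed-memory systems is sort-last z-buffer compositing; a minimal sketch follows (illustrative only; the paper does not commit PGL to this exact scheme):

```python
def composite(buffers):
    """Sort-last compositing: each node renders its share of the geometry
    into a full-size buffer of (color, depth) pairs, and the final image
    keeps, per pixel, the fragment nearest the viewer (smallest depth)."""
    npix = len(buffers[0])
    image = []
    for p in range(npix):
        color, depth = min((buf[p] for buf in buffers), key=lambda cz: cz[1])
        image.append(color)
    return image

# Two nodes, two pixels; 'bg' marks background at far depth 9.9.
node_a = [('red', 2.0), ('bg', 9.9)]
node_b = [('bg', 9.9), ('blue', 1.0)]
image = composite([node_a, node_b])
```

    The per-pixel merge is where the message-passing cost concentrates, which drives many of the library-design trade-offs the paper discusses.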

  16. Linearly exact parallel closures for slab geometry

    SciTech Connect

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-15

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).

  17. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism to keep a large number of processors busy and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared-memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.
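
    The parallel-friendly core of such an analysis is an embarrassingly parallel Monte Carlo loop; a sketch with assumed material constants follows (illustrative only, not the Phase I codes):

```python
import math
import random

def fatigue_failure_prob(stress, n_target, samples=20000, seed=42):
    """Monte Carlo estimate of the probability that fatigue life falls
    short of a target number of cycles, using a Basquin-type S-N relation
    N = (s / sigma_f)**(1/b) with lognormal scatter on life.
    All material constants here are assumed, illustrative values.
    Each sample is independent, so blocks of samples can be farmed out
    to separate processors -- the property that makes probabilistic
    fatigue analysis easy to parallelize."""
    rng = random.Random(seed)
    sigma_f, b, scatter = 900.0, -0.1, 0.4   # assumed material parameters
    n_median = (stress / sigma_f) ** (1.0 / b)
    failures = 0
    for _ in range(samples):
        life = n_median * math.exp(rng.gauss(0.0, scatter))
        if life < n_target:
            failures += 1
    return failures / samples

p_fail = fatigue_failure_prob(stress=200.0, n_target=1e6)
```

    The memory pressure the paper warns about enters when each sample requires a full structural (e.g., finite element) solve rather than a closed-form life relation like this one.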

  18. Runtime volume visualization for parallel CFD

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    This paper discusses some aspects of design of a data distributed, massively parallel volume rendering library for runtime visualization of parallel computational fluid dynamics simulations in a message-passing environment. Unlike the traditional scheme in which visualization is a postprocessing step, the rendering is done in place on each node processor. Computational scientists who run large-scale simulations on a massively parallel computer can thus perform interactive monitoring of their simulations. The current library provides an interface to handle volume data on rectilinear grids. The same design principles can be generalized to handle other types of grids. For demonstration, we run a parallel Navier-Stokes solver making use of this rendering library on the Intel Paragon XP/S. The interactive visual response achieved is found to be very useful. Performance studies show that the parallel rendering process is scalable with the size of the simulation as well as with the parallel computer.

  19. Linearly exact parallel closures for slab geometry

    NASA Astrophysics Data System (ADS)

    Ji, Jeong-Young; Held, Eric D.; Jhang, Hogun

    2013-08-01

    Parallel closures are obtained by solving a linearized kinetic equation with a model collision operator using the Fourier transform method. The closures expressed in wave number space are exact for time-dependent linear problems to within the limits of the model collision operator. In the adiabatic, collisionless limit, an inverse Fourier transform is performed to obtain integral (nonlocal) parallel closures in real space; parallel heat flow and viscosity closures for density, temperature, and flow velocity equations replace Braginskii's parallel closure relations, and parallel flow velocity and heat flow closures for density and temperature equations replace Spitzer's parallel transport relations. It is verified that the closures reproduce the exact linear response function of Hammett and Perkins [Phys. Rev. Lett. 64, 3019 (1990)] for Landau damping given a temperature gradient. In contrast to their approximate closures where the vanishing viscosity coefficient numerically gives an exact response, our closures relate the heat flow and nonvanishing viscosity to temperature and flow velocity (gradients).

  20. Algorithmically Specialized Parallel Architecture For Robotics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Computing system called Robot Mathematics Processor (RMP) contains large number of processor elements (PE's) connected in various parallel and serial combinations reconfigurable via software. Special-purpose architecture designed for solving diverse computational problems in robot control, simulation, trajectory generation, workspace analysis, and like. System an MIMD-SIMD parallel architecture capable of exploiting parallelism in different forms and at several computational levels. Major advantage lies in design of cells, which provides flexibility and reconfigurability superior to previous SIMD processors.

  1. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report first results for several benchmark codes and one full application that have been parallelized using our system.

  2. Six-Degree-Of-Freedom Parallel Minimanipulator

    NASA Technical Reports Server (NTRS)

    Tahmasebi, Farhad; Tsai, Lung-Wen

    1994-01-01

    Six-degree-of-freedom parallel minimanipulator stiffer and simpler than earlier six-degree-of-freedom manipulators. Includes only three inextensible limbs with universal joints at ends. Limbs have equal lengths and act in parallel as they share load on manipulated platform. Designed to provide high resolution and high stiffness for fine control of position and force in hybrid serial/parallel-manipulator system.

  3. Running Geant on T. Node parallel computer

    SciTech Connect

    Jejcic, A.; Maillard, J.; Silva, J. ); Mignot, B. )

    1990-08-01

    An Inmos transputer-based computer has been utilized to overcome the difficulties due to the limitations on the processing abilities of event parallelism and multiprocessor farms (i.e., the so-called bus crisis) and the concern regarding the growing sizes of databases typical in High Energy Physics. This study was done on the T.Node parallel computer manufactured by TELMAT. Detailed figures are reported concerning the event parallelization. (AIP)

  4. Racing in parallel: Quantum versus Classical

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Troyer, Matthias

    In a fair comparison of the performance of a quantum algorithm to a classical one it is important to treat them on equal footing, both regarding resource usage and parallelism. We show how one may otherwise mistakenly attribute speedup due to parallelism as quantum speedup. We apply such an analysis both to analog quantum devices (quantum annealers) and gate model algorithms and give several examples where a careful analysis of parallelism makes a significant difference in the comparison between classical and quantum algorithms.
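
    The accounting the authors advocate can be made concrete with a toy example (all numbers assumed; a best-case, perfectly parallelizable classical baseline):

```python
def attributed_speedups(t_serial_classical, n_cores, t_quantum):
    """Separate the raw speedup over a one-core classical run from the
    part a parallel classical machine already provides. Assumes the
    classical algorithm parallelizes perfectly over n_cores
    (an illustrative best case)."""
    naive = t_serial_classical / t_quantum
    t_parallel_classical = t_serial_classical / n_cores
    fair = t_parallel_classical / t_quantum
    return naive, fair

# Assumed numbers: a device 100x faster than one core looks impressive,
# but against the same budget of 256 classical cores it is actually slower.
naive, fair = attributed_speedups(t_serial_classical=1000.0, n_cores=256,
                                  t_quantum=10.0)
```

    Here the naive comparison reports a 100x "quantum speedup" while the resource-matched comparison shows a slowdown, which is the mistaken attribution the abstract warns against.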

  5. Investigation of E-W Trending Parallel Ridges North of the Galápagos Archipelago

    NASA Astrophysics Data System (ADS)

    Mello, C.; Harpp, K. S.; Mittelstaedt, E. L.; Geist, D.; Fornari, D. J.; Soule, S.; R/v Melville Mv1007 Flamingo Cruise Scientific Party

    2010-12-01

    In 2010, the R/V Melville MV1007 cruise collected EM122 multibeam bathymetry, MR1 sidescan sonar reflectivity data, and seafloor rock samples from an area surrounding the Galápagos Spreading Center (GSC) in the vicinity of the Galápagos Islands. The GSC is cut by a large, N-S trending transform fault at 90°50’W. The Nazca Plate (west of the transform) is dominated by volcanic seamounts, most of which are aligned in NW-SE trending lineaments. In contrast, the Cocos Plate (east of the transform) is heavily faulted and features four major ridges that trend 90° to 110°. In comparison to the surrounding seafloor, the ridges are strongly fractured with faults that strike sub-parallel to the GSC. To the west, the ridges wrap into the transform and trend more NW-SE. These ridges are unusual and suggest complications to our current understanding of the tectonic history of the region. The faulted ridges extend to the transform and appear to be cut and deformed by it; therefore, they may be linked to the opening of the transform through a series of discrete ridge jumps. Moreover, the ridges’ roughly symmetrical topographic profiles mirror bathymetric profiles at abandoned spreading centers such as Mathematician Ridge, and are dissimilar to the asymmetric profiles of abyssal hills. Waning magma supplies at well-documented abandoned ridges yield differentiated rocks similar to the icelandites dredged from our study area, which exhibit SiO2 contents of ~55-65 wt.% and major element variations that indicate substantial removal of CPX and plagioclase. Incompatible trace element ratios (e.g., Nb/La) reveal the presence of Galápagos plume material throughout the tectonized region. Trace element ratios are similar to those in axial GSC lavas and suggest the faulted ridges may have originated at the active spreading center. Volcanic seamounts complicate the interpretation of the faulted ridges as being simple abandoned spreading centers, however, and may necessitate further

  6. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  7. Parallel execution and scriptability in micromagnetic simulations

    NASA Astrophysics Data System (ADS)

    Fischbacher, Thomas; Franchin, Matteo; Bordignon, Giuliano; Knittel, Andreas; Fangohr, Hans

    2009-04-01

    We demonstrate the feasibility of an "encapsulated parallelism" approach toward micromagnetic simulations that combines offering a high degree of flexibility to the user with the efficient utilization of parallel computing resources. While parallelization is obviously desirable to address the high numerical effort required for realistic micromagnetic simulations through utilizing now widely available multiprocessor systems (including desktop multicore CPUs and computing clusters), conventional approaches toward parallelization impose strong restrictions on the structure of programs: numerical operations have to be executed across all processors in a synchronized fashion. This means that from the user's perspective, either the structure of the entire simulation is rigidly defined from the beginning and cannot be adjusted easily, or making modifications to the computation sequence requires advanced knowledge in parallel programming. We explain how this dilemma is resolved in the NMAG simulation package in such a way that the user can, without any additional effort, utilize both the computational power of multiple CPUs and the flexibility to tailor execution sequences for specific problems: simulation scripts written for single-processor machines can just as well be executed on parallel machines and behave in precisely the same way, up to increased speed. We provide a simple instructive magnetic resonance simulation example that demonstrates utilizing both custom execution sequences and parallelism at the same time. Furthermore, we show that this strategy of encapsulating parallelism makes it possible to benefit from speed gains through parallel execution even in simulations controlled by interactive commands given at a command line interface.

  8. MMS Observations of Parallel Electric Fields

    NASA Astrophysics Data System (ADS)

    Ergun, R.; Goodrich, K.; Wilder, F. D.; Sturner, A. P.; Holmes, J.; Stawarz, J. E.; Malaspina, D.; Usanova, M.; Torbert, R. B.; Lindqvist, P. A.; Khotyaintsev, Y. V.; Burch, J. L.; Strangeway, R. J.; Russell, C. T.; Pollock, C. J.; Giles, B. L.; Hesse, M.; Goldman, M. V.; Drake, J. F.; Phan, T.; Nakamura, R.

    2015-12-01

    Parallel electric fields are a necessary condition for magnetic reconnection with non-zero guide field and are ultimately accountable for topological reconfiguration of a magnetic field. Parallel electric fields also play a strong role in charged particle acceleration and turbulence. The Magnetospheric Multiscale (MMS) mission targets these three universal plasma processes. The MMS satellites have an accurate three-dimensional electric field measurement, which can identify parallel electric fields as low as 1 mV/m at four adjacent locations. We present preliminary observations of parallel electric fields from MMS and provide an early interpretation of their impact on magnetic reconnection, in particular, where the topological change occurs. We also examine the role of parallel electric fields in particle acceleration. Direct particle acceleration by parallel electric fields is well established in the auroral region. Observations of double layers by the Van Allen Probes suggest that acceleration by parallel electric fields may be significant in energizing some populations of the radiation belts. THEMIS observations also indicate that some of the largest parallel electric fields are found in regions of strong field-aligned currents associated with turbulence, suggesting a highly non-linear dissipation mechanism. We discuss how the MMS observations extend our understanding of the role of parallel electric fields in some of the most critical processes in the magnetosphere.

  9. A Parallel Multigrid Method for Neutronics Applications

    SciTech Connect

    Alcouffe, Raymond E.

    2001-01-01

    The multigrid method has been shown to be the most effective general method for solving the multi-dimensional diffusion equation encountered in neutronics. This being the method of choice, we develop a strategy for implementing the multigrid method on computers of massively parallel architecture. This leads us to strategies for parallelizing the relaxation, restriction (contraction), and prolongation (interpolation) operators involved in the method. We then compare the efficiency of our parallel multigrid with other parallel methods for solving the diffusion equation on selected problems encountered in reactor physics.
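
    As a concrete (serial) illustration of the operators named above, here is a minimal geometric-multigrid V-cycle for the 1-D Poisson problem -u'' = f with zero boundary values. The weighted-Jacobi smoother, the full-weighting restriction, and all function names are our choices for the sketch, not details of the paper's massively parallel implementation.

```python
def jacobi(u, f, h, sweeps=3, w=2/3):
    # Weighted-Jacobi relaxation on A u = f, A = tridiag(-1, 2, -1)/h^2.
    n = len(u)
    for _ in range(sweeps):
        v = u[:]
        for i in range(n):
            left = v[i-1] if i > 0 else 0.0
            right = v[i+1] if i < n-1 else 0.0
            u[i] = (1-w)*v[i] + w*(h*h*f[i] + left + right)/2.0
    return u

def residual(u, f, h):
    # r = f - A u, with zero Dirichlet values outside the grid.
    n = len(u)
    r = []
    for i in range(n):
        left = u[i-1] if i > 0 else 0.0
        right = u[i+1] if i < n-1 else 0.0
        r.append(f[i] - (2*u[i] - left - right)/(h*h))
    return r

def restrict(r):
    # Full weighting: coarse point j sits at fine point 2j+1.
    return [(r[2*j] + 2*r[2*j+1] + r[2*j+2])/4.0 for j in range((len(r)-1)//2)]

def prolong(e, n_fine):
    # Linear interpolation of the coarse correction back to the fine grid.
    u = [0.0]*n_fine
    for j, ej in enumerate(e):
        i = 2*j + 1
        u[i] += ej
        u[i-1] += ej/2.0
        if i + 1 < n_fine:
            u[i+1] += ej/2.0
    return u

def vcycle(u, f, h):
    if len(u) == 1:                      # coarsest grid: direct solve (A = 2/h^2)
        return [h*h*f[0]/2.0]
    u = jacobi(u, f, h)                  # pre-smoothing (relaxation)
    rc = restrict(residual(u, f, h))     # restriction of the residual
    e = vcycle([0.0]*len(rc), rc, 2*h)   # recursive coarse-grid correction
    u = [ui + pi for ui, pi in zip(u, prolong(e, len(u)))]
    return jacobi(u, f, h)               # post-smoothing
```

    In a parallel setting, the relaxation and residual loops decompose over subdomains, while restriction and prolongation require only nearest-neighbor communication, which is what makes the method attractive on massively parallel machines.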

  10. Conformal pure radiation with parallel rays

    NASA Astrophysics Data System (ADS)

    Leistner, Thomas; Nurowski, Paweł

    2012-03-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves.

  11. Parallel auto-correlative statistics with VTK.

    SciTech Connect

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
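
    For readers unfamiliar with the statistic itself, the quantity such an engine computes is the lag-k sample autocorrelation. A minimal reference implementation (ours, unrelated to VTK's C++ API) is:

```python
def autocorrelation(x, max_lag):
    # Sample autocorrelation r_k = sum_t (x_t - m)(x_{t+k} - m) / sum_t (x_t - m)^2,
    # where m is the sample mean; r_0 is 1 by construction.
    n = len(x)
    m = sum(x) / n
    d = [v - m for v in x]
    var = sum(v*v for v in d)
    return [sum(d[t]*d[t+k] for t in range(n - k)) / var
            for k in range(max_lag + 1)]
```

    The per-lag sums are independent of one another and each is itself a reduction over the series, which is why this statistic decomposes naturally across processors in a parallel engine.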

  12. Remarks on parallel computations in MATLAB environment

    NASA Astrophysics Data System (ADS)

    Opalska, Katarzyna; Opalski, Leszek

    2013-10-01

    The paper attempts to summarize the authors' investigation of the parallel computation capability of the MATLAB environment in solving large systems of ordinary differential equations (ODEs). Two MATLAB versions and two parallelization techniques were tested: one using multiple processor cores, the other CUDA-compatible Graphics Processing Units (GPUs). A set of parameterized test problems was specially designed to expose different capabilities/limitations of the variants of the parallel computation environment tested. The presented results clearly illustrate the superiority of the newer MATLAB version and the elapsed-time advantage of GPU-parallelized computations over multiple processor cores for large-dimensionality problems (with the speed-up factor strongly dependent on the problem structure).

  13. A scalable 2-D parallel sparse solver

    SciTech Connect

    Kothari, S.C.; Mitra, S.

    1995-12-01

    Scalability beyond a small number of processors, typically 32 or less, is known to be a problem for existing parallel general sparse (PGS) direct solvers. This paper presents a PGS direct solver for general sparse linear systems on distributed memory machines. The algorithm is based on the well-known sequential sparse algorithm Y12M. To achieve efficient parallelization, a 2-D scattered decomposition of the sparse matrix is used. The proposed algorithm is more scalable than existing parallel sparse direct solvers. Its scalability is evaluated on a 256-processor nCUBE2s machine using Boeing/Harwell benchmark matrices.
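
    The 2-D scattered (cyclic) decomposition mentioned above assigns matrix entries to a logical pr x pc processor grid by wrapping row and column indices. A sketch of the index arithmetic (function names are ours):

```python
def owner(i, j, pr, pc):
    # 2-D scattered (cyclic) decomposition: global entry (i, j) is owned by
    # processor (i mod pr, j mod pc) on a pr x pc logical processor grid.
    return (i % pr, j % pc)

def local_index(i, j, pr, pc):
    # Position of global entry (i, j) inside its owner's local block.
    return (i // pr, j // pc)
```

    Because factorization eliminates leading rows and columns as it proceeds, the cyclic wrapping keeps the remaining work spread across the whole processor grid instead of concentrating it on a few processors, which is the key to the improved scalability claimed above.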

  14. Parallel computations and control of adaptive structures

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)

    1991-01-01

    The equations of motion for structures with adaptive elements for vibration control are presented for parallel computations to be used as a software package for real-time control of flexible space structures. A brief introduction of the state-of-the-art parallel computational capability is also presented. Time marching strategies are developed for an effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered for the simulation of control-structure interaction on a parallel computer and the impact of the approach presented for applications in other disciplines than aerospace industry is assessed.

  15. Applications of Parallel Processing to Astrodynamics

    NASA Astrophysics Data System (ADS)

    Coffey, S.; Healy, L.; Neal, H.

    1996-03-01

    Parallel processing is being used to improve the catalog of earth orbiting satellites and for problems associated with the catalog. Initial efforts centered around using SIMD parallel processors to perform debris conjunction analysis and satellite dynamics studies. More recently, the availability of cheap supercomputing processors and parallel processing software such as PVM have enabled the reutilization of existing astrodynamics software in distributed parallel processing environments. Computations once taking many days with traditional mainframes are now being performed in only a few hours. Efforts underway for the US Naval Space Command include conjunction prediction, uncorrelated target processing and a new space object catalog based on orbit determination and prediction with special perturbations methods.

  16. Parallel pivoting combined with parallel reduction and fill-in control

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1989-01-01

    Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines the parallel reduction with a new parallel pivoting technique, control over generation of fill-ins and check for numerical stability, all done in parallel with the work being distributed over the active processes. The parallel pivoting technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
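
    The pivot-selection idea above can be sketched as follows. This is a simplified model: the paper's compatibility relation also accounts for the nonzero structure between candidate pivots, while this sketch enforces only the necessary condition that parallel pivots share no row or column; all function names are ours.

```python
def markowitz_counts(nz):
    # nz: set of (i, j) positions of nonzeros in the sparse matrix.
    # Markowitz count of pivot (i, j) = (r_i - 1)(c_j - 1), an upper bound
    # on the fill-in its elimination can create.
    rows, cols = {}, {}
    for i, j in nz:
        rows[i] = rows.get(i, 0) + 1
        cols[j] = cols.get(j, 0) + 1
    return {(i, j): (rows[i] - 1) * (cols[j] - 1) for i, j in nz}

def compatible_pivot_set(nz):
    # Greedy simplification: take candidates in order of increasing
    # Markowitz count, accepting one only if it shares no row or column
    # with pivots already chosen, so the accepted set can be eliminated
    # in parallel.
    chosen, used_r, used_c = [], set(), set()
    for (i, j), _ in sorted(markowitz_counts(nz).items(),
                            key=lambda kv: (kv[1], kv[0])):
        if i not in used_r and j not in used_c:
            chosen.append((i, j))
            used_r.add(i)
            used_c.add(j)
    return chosen
```

    As the abstract notes, this selection is applied dynamically at each stage of the decomposition rather than as a one-time preordering, since elimination changes the nonzero pattern.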

  17. Super-parallel MR microscope.

    PubMed

    Matsuda, Yoshimasa; Utsuzawa, Shin; Kurimoto, Takeaki; Haishi, Tomoyuki; Yamazaki, Yukako; Kose, Katsumi; Anno, Izumi; Marutani, Mitsuhiro

    2003-07-01

    A super-parallel MR microscope in which multiple (up to 100) samples can be imaged simultaneously at high spatial resolution is described. The system consists of a multichannel transmitter-receiver system and a gradient probe array housed in a large-bore magnet. An eight-channel MR microscope was constructed for verification of the system concept, and a four-channel MR microscope was constructed for a practical application. Eight chemically fixed mouse fetuses were simultaneously imaged at 200 µm³ voxel resolution in a 1.5 T superconducting magnet of a whole-body MRI, and four chemically fixed human embryos were simultaneously imaged at 120 µm³ voxel resolution in a 2.35 T superconducting magnet. Although the spatial resolutions achieved were not strictly those of MR microscopy, the system design proposed here can be used to attain a much higher spatial resolution imaging of multiple samples, because higher magnetic field gradients can be generated at multiple positions in a homogeneous magnetic field. PMID:12815693

  18. Parallel processing in immune networks

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Bartolucci, Silvia; Galluzzi, Andrea; Guerra, Francesco; Moauro, Francesco

    2013-04-01

    In this work, we adopt a statistical-mechanics approach to investigate basic, systemic features exhibited by adaptive immune systems. The lymphocyte network made by B cells and T cells is modeled by a bipartite spin glass, where, following biological prescriptions, links connecting B cells and T cells are sparse. Interestingly, the dilution performed on links is shown to make the system able to orchestrate parallel strategies to fight several pathogens at the same time; this multitasking capability constitutes a remarkable, key property of immune systems as multiple antigens are always present within the host. We also define the stochastic process ruling the temporal evolution of lymphocyte activity and show its relaxation toward an equilibrium measure allowing statistical-mechanics investigations. Analytical results are compared with Monte Carlo simulations and signal-to-noise outcomes showing overall excellent agreement. Finally, within our model, a rationale for the experimentally well-evidenced correlation between lymphocytosis and autoimmunity is achieved; this sheds further light on the systemic features exhibited by immune networks.

  19. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  20. Vectoring of parallel synthetic jets

    NASA Astrophysics Data System (ADS)

    Berk, Tim; Ganapathisubramani, Bharathram; Gomit, Guillaume

    2015-11-01

    A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).

  1. Parallel genotypic adaptation: when evolution repeats itself

    PubMed Central

    Wood, Troy E.; Burke, John M.; Rieseberg, Loren H.

    2008-01-01

    Until recently, parallel genotypic adaptation was considered unlikely because phenotypic differences were thought to be controlled by many genes. There is increasing evidence, however, that phenotypic variation sometimes has a simple genetic basis and that parallel adaptation at the genotypic level may be more frequent than previously believed. Here, we review evidence for parallel genotypic adaptation derived from a survey of the experimental evolution, phylogenetic, and quantitative genetic literature. The most convincing evidence of parallel genotypic adaptation comes from artificial selection experiments involving microbial populations. In some experiments, up to half of the nucleotide substitutions found in independent lineages under uniform selection are the same. Phylogenetic studies provide a means for studying parallel genotypic adaptation in non-experimental systems, but conclusive evidence may be difficult to obtain because homoplasy can arise for other reasons. Nonetheless, phylogenetic approaches have provided evidence of parallel genotypic adaptation across all taxonomic levels, not just microbes. Quantitative genetic approaches also suggest parallel genotypic evolution across both closely and distantly related taxa, but it is important to note that this approach cannot distinguish between parallel changes at homologous loci versus convergent changes at closely linked non-homologous loci. The finding that parallel genotypic adaptation appears to be frequent and occurs at all taxonomic levels has important implications for phylogenetic and evolutionary studies. With respect to phylogenetic analyses, parallel genotypic changes, if common, may result in faulty estimates of phylogenetic relationships. From an evolutionary perspective, the occurrence of parallel genotypic adaptation provides increasing support for determinism in evolution and may provide a partial explanation for how species with low levels of gene flow are held together. PMID:15881688

  2. The language parallel Pascal and other aspects of the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  3. Parallel-End-Point Drafting Compass

    NASA Technical Reports Server (NTRS)

    Cronander, J.

    1986-01-01

    Parallelogram linkage ensures greater accuracy in drafting and scribing. Two members of arm of compass remain parallel for all angles pair makes with hub axis. They maintain opposing end members in parallelism. Parallelogram-linkage principle used on dividers as well as on compasses.

  4. EPIC: E-field Parallel Imaging Correlator

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Beardsley, Adam P.; Bowman, Judd D.; Morales, Miguel F.

    2015-11-01

    E-field Parallel Imaging Correlator (EPIC), a highly parallelized Object Oriented Python package, implements the Modular Optimal Frequency Fourier (MOFF) imaging technique. It also includes visibility-based imaging using the software holography technique and a simulator for generating electric fields from a sky model. EPIC can accept dual-polarization inputs and produce images of all four instrumental cross-polarizations.

  5. Parallel Computing Strategies for Irregular Algorithms

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  6. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  7. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In the photoelectrical tracking system, Bayer images are decompressed by a traditional CPU-based method. However, it is too slow when the images become large, for example, 2K×2K×16bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA's Graphics Processing Unit (GPU), which supports the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallelism part, and the last is a data-parallelism part including inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallelism part is optimized by OpenMP techniques. The data-parallelism part improves its efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16bit Bayer image, the data-parallelism part is about 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
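
    The key data-parallel observation, that a separable 2-D IDWT reduces to independent 1-D transforms along rows and then columns, can be sketched as follows. The simple Haar wavelet stands in for the codec's actual filter bank, the thread pool stands in for GPU threads, and all names are ours.

```python
from concurrent.futures import ThreadPoolExecutor

def ihaar1d(approx, detail):
    # Invert one Haar analysis level, where the forward transform is
    # a_i = (x_{2i} + x_{2i+1})/2 and d_i = (x_{2i} - x_{2i+1})/2.
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def ihaar_rows(approx_rows, detail_rows):
    # Every row reconstruction is independent of the others; that
    # independence is the data parallelism a GPU exploits, sketched
    # here with a thread pool. A column pass of the same form would
    # complete the separable 2-D inverse transform.
    with ThreadPoolExecutor() as ex:
        return list(ex.map(lambda pair: ihaar1d(*pair),
                           zip(approx_rows, detail_rows)))
```

    On a GPU the same structure maps each row (and then each column) to a block of threads, which is why restructuring the 2-D serial IDWT into batched 1-D transforms yields the speedup reported above.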

  8. National Combustion Code: Parallel Implementation and Performance

    NASA Technical Reports Server (NTRS)

    Quealy, A.; Ryder, R.; Norris, A.; Liu, N.-S.

    2000-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. CORSAIR-CCD is the current baseline reacting flow solver for NCC. This is a parallel, unstructured grid code which uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC flow solver to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This paper describes the parallel implementation of the NCC flow solver and summarizes its current parallel performance on an SGI Origin 2000. Earlier parallel performance results on an IBM SP-2 are also included. The performance improvements which have enabled a turnaround of less than 15 hours for a 1.3 million element fully reacting combustion simulation are described.

  9. Parallel language support on shared memory multiprocessors

    SciTech Connect

    Sah, A.

    1991-01-01

    The study of general purpose parallel computing requires efficient and inexpensive platforms for parallel program execution. This helps in ascertaining tradeoff choices between hardware complexity and software solutions for massively parallel systems design. In this paper, the authors present an implementation of an efficient parallel execution model on shared memory multiprocessors based on a Threaded Abstract Machine. The authors discuss a k-way generalized locking strategy suitable for our model. The authors study the performance gains obtained by a queuing strategy which uses multiple queues with reduced access contention. The authors also present performance models in shared memory machines, related to lock contention and serialization in shared memory allocation. A bin-based memory management technique which reduces the serialization is presented. These issues are critical for obtaining an efficient parallel execution environment.
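
    The contention-reduction idea behind k-way locking can be sketched with Python primitives (a stand-in for the paper's Threaded Abstract Machine runtime; the class and its methods are our invention): a shared table is split into k stripes, each with its own lock, so threads touching different stripes never serialize on a single global lock.

```python
import threading

class StripedCounter:
    # k-way locking: one table and one lock per stripe instead of a single
    # global lock; a key's stripe is chosen by hashing.
    def __init__(self, k=8):
        self.k = k
        self.locks = [threading.Lock() for _ in range(k)]
        self.tables = [{} for _ in range(k)]

    def add(self, key, amount=1):
        s = hash(key) % self.k
        with self.locks[s]:                  # contend only within one stripe
            self.tables[s][key] = self.tables[s].get(key, 0) + amount

    def get(self, key):
        s = hash(key) % self.k
        with self.locks[s]:
            return self.tables[s].get(key, 0)
```

    The same striping idea underlies the multiple-queue strategy mentioned above: replacing one hot queue with several independently locked queues spreads accesses and reduces lock contention.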

  10. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
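
    A schematic rendering of one such cycle is below. It is simplified: SR1-style rank-one corrections and a fixed unit step stand in for the paper's exact update and univariate minimization, the concurrent gradient evaluations use a thread pool, and all function names are ours.

```python
from concurrent.futures import ThreadPoolExecutor

def dot(a, b): return sum(x*y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(c, a): return [c*x for x in a]
def matvec(M, v): return [dot(row, v) for row in M]

def sr1_update(H, s, y):
    # Rank-one correction making the updated metric satisfy H y = s.
    u = sub(s, matvec(H, y))
    denom = dot(u, y)
    if abs(denom) < 1e-12:          # skip a numerically unsafe update
        return H
    n = len(H)
    return [[H[i][j] + u[i]*u[j]/denom for j in range(n)] for i in range(n)]

def pvm_cycle(grad, x, p_dirs, H):
    # One cycle: gradients at p displaced points are evaluated concurrently,
    # the metric receives p rank-one corrections, then a step is taken along
    # the Newton-like direction -H g. A unit step stands in for the single
    # univariate minimization (exact for a quadratic once H is the inverse
    # Hessian).
    g0 = grad(x)
    with ThreadPoolExecutor() as ex:
        gs = list(ex.map(grad, [add(x, s) for s in p_dirs]))
    for s, g in zip(p_dirs, gs):
        H = sr1_update(H, s, sub(g, g0))
    d = scale(-1.0, matvec(H, g0))
    return add(x, d), H
```

    On a quadratic with p equal to the dimension and independent displacement directions, the p corrections recover the inverse Hessian, so the cycle lands on the minimizer in one step, mirroring the one-cycle convergence result quoted above.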

  11. Differences Between Distributed and Parallel Systems

    SciTech Connect

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  12. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.

  13. Configuration space representation in parallel coordinates

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Inselberg, Alfred

    1989-01-01

    By means of a system of parallel coordinates, a nonprojective mapping from R^N to R^2 is obtained for any positive integer N. In this way multivariate data and relations can be represented in the Euclidean plane (embedded in the projective plane). Basically, R^2 with Cartesian coordinates is augmented by N parallel axes, one for each variable. The N joint variables of a robotic device can be represented graphically by using parallel coordinates. It is pointed out that some properties of the relation are better perceived visually from the parallel coordinate representation, and that new algorithms and data structures can be obtained from this representation. The main features of parallel coordinates are described, and an example is presented of their use for configuration space representation of a mechanical arm (where Cartesian coordinates cannot be used).
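    The core mapping is simple enough to state in a few lines. The following hypothetical sketch places the N parallel axes at unit spacing along the horizontal direction, so a point of R^N becomes a polyline with N vertices:

```python
def parallel_coordinates(point, spacing=1.0):
    """Map a point of R^N to its polyline in the plane: vertex i lies on the
    i-th vertical axis (at x = i * spacing) at height point[i]."""
    return [(i * spacing, v) for i, v in enumerate(point)]
```

    Relations among the variables then appear as patterns among the polylines; for example, a line in R^N shows up as N-1 shared intersection points between adjacent axes (the point-line duality the authors exploit).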

  14. A paradigm for parallel unstructured grid generation

    SciTech Connect

    Gaither, A.; Marcum, D.; Reese, D.

    1996-12-31

    In this paper, a sequential 2D unstructured grid generator based on iterative point insertion and local reconnection is coupled with a Delaunay tessellation domain decomposition scheme to create a scalable parallel unstructured grid generator. The Message Passing Interface (MPI) is used for distributed communication in the parallel grid generator. This work attempts to provide a generic framework to enable the parallelization of fast sequential unstructured grid generators in order to compute grand-challenge scale grids for Computational Field Simulation (CFS). Motivation for moving from sequential to scalable parallel grid generation is presented. Delaunay tessellation and iterative point insertion and local reconnection (advancing front method only) unstructured grid generation techniques are discussed with emphasis on how these techniques can be utilized for parallel unstructured grid generation. Domain decomposition techniques are discussed for both Delaunay and advancing front unstructured grid generation with emphasis placed on the differences needed for both grid quality and algorithmic efficiency.

  15. Broadcasting a message in a parallel computer

    DOEpatents

    Berg, Jeremy E.; Faraj, Ahmad A.

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
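    The idea can be sketched in a few lines; this is a hypothetical illustration, not the patented implementation. A boustrophedon ("snake") ordering of one plane of a 2D mesh is a Hamiltonian path, and the root's message simply hops along it, one forward per node:

```python
def snake_path(rows, cols):
    """Boustrophedon ("snake") order visits every node of a rows x cols mesh
    exactly once, and consecutive nodes are mesh neighbors: a Hamiltonian path."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

def broadcast(rows, cols, message):
    """The logical root (head of the path) injects the message; every node
    stores a copy and forwards it one hop to its successor on the path."""
    delivered = {}
    for hop, node in enumerate(snake_path(rows, cols)):
        delivered[node] = (message, hop)   # received after `hop` forwards
    return delivered
```

    Because every hop is a nearest-neighbor transfer, the broadcast uses only the point-to-point links the network is optimized for.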

  16. Randomized parallel speedups for list ranking

    SciTech Connect

    Vishkin, U.

    1987-06-01

    The following problem is considered: given a linked list of length n, compute the distance of each element of the linked list from the end of the list. The problem has two standard deterministic algorithms: a linear time serial algorithm, and an O((n log n)/rho + log n) time parallel algorithm using rho processors. The authors present a randomized parallel algorithm for the problem. The algorithm is designed for an exclusive-read exclusive-write parallel random access machine (EREW PRAM). It runs almost surely in time O(n/rho + log n log* n) using rho processors. Using a recently published parallel prefix sums algorithm, the list-ranking algorithm can be adapted to run on a concurrent-read concurrent-write parallel random access machine (CRCW PRAM) almost surely in time O(n/rho + log n) using rho processors.
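    The deterministic parallel baseline the abstract refers to is pointer jumping (Wyllie's algorithm). The sketch below simulates its O(log n) fully parallel rounds sequentially; the randomized algorithm of the paper is not reproduced here.

```python
def list_rank(next_):
    """Pointer jumping (Wyllie's algorithm): compute, for every element of a
    linked list, its distance to the end. next_[i] is the successor of i,
    or -1 at the tail. Each round is one fully parallel PRAM step; here the
    per-element loop is simulated sequentially."""
    n = len(next_)
    rank = [0 if next_[i] == -1 else 1 for i in range(n)]
    nxt = list(next_)
    for _ in range(n.bit_length()):        # O(log n) pointer-jumping rounds
        new_rank, new_nxt = rank[:], nxt[:]
        for i in range(n):                 # conceptually: one step per processor
            if nxt[i] != -1:
                new_rank[i] = rank[i] + rank[nxt[i]]
                new_nxt[i] = nxt[nxt[i]]
        rank, nxt = new_rank, new_nxt
    return rank
```

    Every round doubles the distance each pointer spans, which is where the log n factor (and the O(n log n) total work that the randomized algorithm improves on) comes from.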

  17. Implementation and performance of parallel Prolog interpreter

    SciTech Connect

    Wei, S.; Kale, L.V.; Balkrishna, R.

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines including shared memory systems (an Alliant FX/8, a Sequent, and a MultiMax) and a non-shared memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.

  18. Parallelism extraction and program restructuring for parallel simulation of digital systems

    SciTech Connect

    Vellandi, B.L.

    1990-01-01

    Two topics currently of interest to the computer aided design (CAD) for the very-large-scale integrated circuit (VLSI) community are using the VHSIC Hardware Description Language (VHDL) effectively and decreasing simulation times of VLSI designs through parallel execution of the simulator. The goal of this research is to increase the degree of parallelism obtainable in VHDL simulation, and consequently to decrease simulation times. The research targets simulation on massively parallel architectures. Experimentation and instrumentation were done on the SIMD Connection Machine. The author discusses her method used to extract parallelism and restructure a VHDL program, experimental results using this method, and requirements for a parallel architecture for fast simulation.

  19. Parallel computation of manipulator inverse dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    In this article, parallel computation of manipulator inverse dynamics is investigated. A hierarchical graph-based mapping approach is devised to analyze the inherent parallelism in the Newton-Euler formulation at several computational levels, and to derive the features of an abstract architecture for exploitation of parallelism. At each level, a parallel algorithm represents the application of a parallel model of computation that transforms the computation into a graph whose structure defines the features of an abstract architecture, i.e., number of processors, communication structure, etc. Data-flow analysis is employed to derive the time lower bound in the computation as well as the sequencing of the abstract architecture. The features of the target architecture are defined by optimization of the abstract architecture to exploit maximum parallelism while minimizing architectural complexity. An architecture is designed and implemented that is capable of efficient exploitation of parallelism at several computational levels. The computation time of the Newton-Euler formulation for a 6-degree-of-freedom (dof) general manipulator is measured as 187 microsec. The increase in computation time for each additional dof is 23 microsec, which leads to a computation time of less than 500 microsec, even for a 12-dof redundant arm.

  20. On the Scalability of Parallel UCT

    NASA Astrophysics Data System (ADS)

    Segal, Richard B.

    The parallelization of MCTS across multiple machines has proven surprisingly difficult. The limitations of existing algorithms were evident in the 2009 Computer Olympiad, where Zen using a single four-core machine defeated both Fuego with ten eight-core machines and Mogo with twenty thirty-two-core machines. This paper investigates the limits of parallel MCTS in order to understand why distributed parallelism has proven so difficult and to pave the way towards future distributed algorithms with better scaling. We first analyze the single-threaded scaling of Fuego and find that there is an upper bound on the play-quality improvements which can come from additional search. We then analyze the scaling of an idealized N-core shared memory machine to determine the maximum amount of parallelism supported by MCTS. We show that parallel speedup depends critically on how much time is given to each player. We use this relationship to predict parallel scaling for time scales beyond what can be empirically evaluated due to the immense computation required. Our results show that MCTS can scale nearly perfectly to at least 64 threads when combined with virtual loss, but without virtual loss scaling is limited to just eight threads. We also find that for competition time controls, scaling to thousands of threads is impossible, not necessarily because MCTS fails to scale, but because high levels of parallelism start to bump up against the upper performance bound of Fuego itself.

  1. Performance of the Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the input/output (I/O) needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. This interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. Initial experiments, reported in this paper, indicate that Galley is capable of providing high-performance I/O to applications that access data in patterns that have been observed to be common.

  2. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code to an equivalent parallel version of the software, in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes ranging from benchmark to real-world application codes is presented. This will demonstrate the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally a set of tutorials is included for hands-on experiences with this toolkit.

  3. Run-time recognition of task parallelism within the P++ parallel array class library

    SciTech Connect

    Parsons, R.; Quinlan, D.

    1993-11-01

    This paper explores the use of a run-time system to recognize task parallelism within a C++ array class library. Run-time systems currently support data parallelism in P++, FORTRAN 90 D, and High Performance FORTRAN. But data parallelism is insufficient for many applications, including adaptive mesh refinement. Without access to both data and task parallelism such applications exhibit several orders of magnitude more message passing and poor performance. In this work, a C++ array class library is used to implement deferred evaluation and run-time dependence analysis for task parallelism recognition, to obtain task parallelism through a data flow interpretation of data parallel array statements. Performance results show that the analysis and optimizations are both efficient and practical, allowing us to consider more substantial optimizations.

  4. Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress was made in hardware and software technologies, performance of parallel programs with compiler directives has demonstrated large improvement. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Due to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based, OpenMP, parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS parallel benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.

  5. Xyce parallel electronic simulator : users' guide.

    SciTech Connect

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase, a message passing parallel implementation, which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique

  6. SLAPP: A systolic linear algebra parallel processor

    SciTech Connect

    Drake, B.L.; Luk, F.T.; Speiser, J.M.; Symanski, J.J.

    1987-07-01

    Systolic array computer architectures provide a means for fast computation of the linear algebra algorithms that form the building blocks of many signal-processing algorithms, facilitating their real-time computation. For applications to signal processing, the systolic array operates on matrices, an inherently parallel view of the data, using numerical linear algebra algorithms that have been suitably parallelized to efficiently utilize the available hardware. This article describes work currently underway at the Naval Ocean Systems Center, San Diego, California, to build a two-dimensional systolic array, SLAPP, demonstrating efficient and modular parallelization of key matrix computations for real-time signal- and image-processing problems.

  7. Language constructs for modular parallel programs

    SciTech Connect

    Foster, I.

    1996-03-01

    We describe programming language constructs that facilitate the application of modular design techniques in parallel programming. These constructs allow us to isolate resource management and processor scheduling decisions from the specification of individual modules, which can themselves encapsulate design decisions concerned with concurrency, communication, process mapping, and data distribution. This approach permits development of libraries of reusable parallel program components and the reuse of these components in different contexts. In particular, alternative mapping strategies can be explored without modifying other aspects of program logic. We describe how these constructs are incorporated in two practical parallel programming languages, PCN and Fortran M. Compilers have been developed for both languages, allowing experimentation in substantial applications.

  8. Parallelization of the Implicit RPLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Orkwis, Paul D.

    1997-01-01

    The multiblock reacting Navier-Stokes flow solver RPLUS2D was modified for parallel implementation. Results for non-reacting flow calculations of this code indicate parallelization efficiencies greater than 84% are possible for a typical test problem. Results tend to improve as the size of the problem increases. The convergence rate of the scheme is degraded slightly when additional artificial block boundaries are included for the purpose of parallelization. However, this degradation virtually disappears if the solution is converged near to machine zero. Recommendations are made for further code improvements to increase efficiency, correct bugs in the original version, and study decomposition effectiveness.

  9. Parallelization of the Implicit RPLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Orkwis, Paul D.

    1994-01-01

    The multiblock reacting Navier-Stokes flow-solver RPLUS2D was modified for parallel implementation. Results for non-reacting flow calculations of this code indicate parallelization efficiencies greater than 84% are possible for a typical test problem. Results tend to improve as the size of the problem increases. The convergence rate of the scheme is degraded slightly when additional artificial block boundaries are included for the purpose of parallelization. However, this degradation virtually disappears if the solution is converged near to machine zero. Recommendations are made for further code improvements to increase efficiency, correct bugs in the original version, and study decomposition effectiveness.

  10. Massively parallel neurocomputing for aerospace applications

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barhen, Jacob; Toomarian, Nikzad

    1993-01-01

    An innovative hybrid, analog-digital charge-domain technology, for the massively parallel VLSI implementation of certain large scale matrix-vector operations, has recently been introduced. It employs arrays of Charge Coupled/Charge Injection Device cells holding an analog matrix of charge, which process digital vectors in parallel by means of binary, non-destructive charge transfer operations. The impact of this technology on massively parallel processing is discussed. Fundamentally new classes of algorithms, specifically designed for this emerging technology, as applied to signal processing, are derived.

  11. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
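    The first method can be illustrated concretely. The hypothetical sketch below approximates the action of the matrix exponential with a truncated Taylor polynomial for a discretized 1D heat equation; the Pade and Chebyshev variants of the paper are not reproduced.

```python
import numpy as np

def expm_poly(A, v, t, terms=30):
    """Approximate exp(t*A) @ v with a truncated Taylor polynomial.
    Only matrix-vector products are needed (no linear solves), so each
    term is formed from the previous one and the products parallelize well."""
    y = v.copy()
    term = v.copy()
    for k in range(1, terms):
        term = (t / k) * (A @ term)
        y = y + term
    return y

# 1D heat equation u_t = u_xx, Dirichlet boundaries: A = tridiag(1, -2, 1)/h^2
n = 32
h = 1.0 / (n + 1)
A = (np.eye(n, k=1) - 2.0 * np.eye(n) + np.eye(n, k=-1)) / h**2
u0 = np.sin(np.pi * h * np.arange(1, n + 1))   # an eigenvector of A
u = expm_poly(A, u0, t=1e-4)
```

    The partial-fraction trick the paper uses for the rational (Pade/Chebyshev) approximations goes further: each simple fraction contributes an independent shifted linear solve, and those solves can run concurrently.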

  12. Knowledge representation into Ada parallel processing

    NASA Technical Reports Server (NTRS)

    Masotto, Tom; Babikyan, Carol; Harper, Richard

    1990-01-01

    The Knowledge Representation into Ada Parallel Processing project is a joint NASA and Air Force funded project to demonstrate the execution of intelligent systems in Ada on the Charles Stark Draper Laboratory fault-tolerant parallel processor (FTPP). Two applications were demonstrated - a portion of the adaptive tactical navigator and a real time controller. Both systems are implemented as Activation Framework Objects on the Activation Framework intelligent scheduling mechanism developed by Worcester Polytechnic Institute. The implementations, results of performance analyses showing speedup due to parallelism and initial efficiency improvements are detailed and further areas for performance improvements are suggested.

  13. Parallel Climate Analysis Toolkit (ParCAT)

    2013-06-30

    The parallel analysis toolkit (ParCAT) provides parallel statistical processing of large climate model simulation datasets. ParCAT provides parallel point-wise average calculations, frequency distributions, sum/differences of two datasets, and difference-of-average and average-of-difference for two datasets for arbitrary subsets of simulation time. ParCAT is a command-line utility that can be easily integrated in scripts or embedded in other applications. ParCAT supports CMIP5 post-processed datasets as well as non-CMIP5 post-processed datasets. ParCAT reads and writes standard netCDF files.

  14. Distributed parallel messaging for multiprocessor systems

    DOEpatents

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  15. Parallel optical memories for very large databases

    NASA Astrophysics Data System (ADS)

    Mitkas, Pericles A.; Berra, P. B.

    1993-02-01

    The steady increase in volume of current and future databases dictates the development of massive secondary storage devices that allow parallel access and exhibit high I/O data rates. Optical memories, such as parallel optical disks and holograms, can satisfy these requirements because they combine high recording density and parallel one- or two-dimensional output. Several configurations for database storage involving different types of optical memory devices are investigated. All these approaches include some level of optical preprocessing in the form of data filtering in an attempt to reduce the amount of data per transaction that reach the electronic front-end.

  16. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
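    Of the techniques listed, full replication is the simplest to sketch. The following hypothetical example (names and structure are illustrative, not the paper's interface) gives each thread a private copy of the reduction object so the hot loop needs no locking:

```python
import threading
from collections import Counter

def parallel_count(items, num_threads=4):
    """Full replication: each thread updates its own private copy of the
    reduction object (here a Counter), so the main loop needs no locking;
    the replicas are merged in a final reduction step."""
    chunks = [items[i::num_threads] for i in range(num_threads)]
    replicas = [Counter() for _ in range(num_threads)]

    def work(tid):
        for x in chunks[tid]:
            replicas[tid][x] += 1          # thread-private update, no lock

    threads = [threading.Thread(target=work, args=(t,))
               for t in range(num_threads)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

    merged = Counter()                     # final merge of the replicas
    for c in replicas:
        merged.update(c)
    return merged
```

    The locking variants the paper compares trade this extra memory (one replica per thread) for finer-grained synchronization on a single shared object.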

  17. Parallel Implementation of the Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Baggag, Abdalkader; Atkins, Harold; Keyes, David

    1999-01-01

    This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth unstructured grids and is well suited for time dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects.

  18. Improved chopper circuit uses parallel transistors

    NASA Technical Reports Server (NTRS)

    1966-01-01

    Parallel transistor chopper circuit operates with one transistor in the forward mode and the other in the inverse mode. By using this method, it acts as a single, symmetrical, bidirectional transistor, and reduces and stabilizes the offset voltage.

  19. Parallel processor programs in the Federal Government

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  20. Parallelism In Rule-Based Systems

    NASA Astrophysics Data System (ADS)

    Sabharwal, Arvind; Iyengar, S. Sitharama; de Saussure, G.; Weisbin, C. R.

    1988-03-01

    Rule-based systems, which have proven to be extremely useful for several Artificial Intelligence and Expert Systems applications, currently face severe limitations due to the slow speed of their execution. To achieve the desired speed-up, this paper addresses the problem of parallelization of production systems and explores the various architectural and algorithmic possibilities. The inherent sources of parallelism in the production system structure are analyzed and the trade-offs, limitations and feasibility of exploitation of these sources of parallelism are presented. Based on this analysis, we propose a dedicated, coarse-grained, n-ary tree multiprocessor architecture for the parallel implementation of rule-based systems and then present algorithms for partitioning of rules in this architecture.

  1. Parallel programming with PCN. Revision 1

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (c.f. Appendix A).

  2. The PISCES 2 parallel programming environment

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.

    1987-01-01

    PISCES 2 is a programming environment for scientific and engineering computations on MIMD parallel computers. It is currently implemented on a flexible FLEX/32 at NASA Langley, a 20 processor machine with both shared and local memories. The environment provides an extended Fortran for applications programming, a configuration environment for setting up a run on the parallel machine, and a run-time environment for monitoring and controlling program execution. This paper describes the overall design of the system and its implementation on the FLEX/32. Emphasis is placed on several novel aspects of the design: the use of a carefully defined virtual machine, programmer control of the mapping of virtual machine to actual hardware, forces for medium-granularity parallelism, and windows for parallel distribution of data. Some preliminary measurements of storage use are included.

  3. Parallel line scanning ophthalmoscope for retinal imaging.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2015-11-15

    A parallel line scanning ophthalmoscope (PLSO) is presented using a digital micromirror device (DMD) for parallel confocal line imaging of the retina. The posterior part of the eye is illuminated using up to seven parallel lines, which were projected at 100 Hz. The DMD offers a high degree of parallelism in illuminating the retina compared to traditional scanning laser ophthalmoscope systems utilizing scanning mirrors. The system operated at the shot-noise limit with a signal-to-noise ratio of 28 for an optical power measured at the cornea of 100 μW. To demonstrate the imaging capabilities of the system, the macula and the optic nerve head of a healthy volunteer were imaged. Confocal images show good contrast and lateral resolution with a 10°×10° field of view. PMID:26565868

  4. Parallel algorithms for dynamically partitioning unstructured grids

    SciTech Connect

    Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.

    1994-10-01

    Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.

  5. Massively Parallel Computing: A Sandia Perspective

    SciTech Connect

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  6. Social Problems and Deviance: Some Parallel Issues

    ERIC Educational Resources Information Center

    Kitsuse, John I.; Spector, Malcolm

    1975-01-01

    Explores parallel developments in labeling theory and in the value conflict approach to social problems. Similarities in their critiques of functionalism and etiological theory as well as their emphasis on the definitional process are noted. (Author)

  7. Parallel supercomputing today and the cedar approach.

    PubMed

    Kuck, D J; Davidson, E S; Lawrie, D H; Sameh, A H

    1986-02-28

    More and more scientists and engineers are becoming interested in using supercomputers. Earlier barriers to using these machines are disappearing as software for their use improves. Meanwhile, new parallel supercomputer architectures are emerging that may provide rapid growth in performance. These systems may use a large number of processors with an intricate memory system that is both parallel and hierarchical; they will require even more advanced software. Compilers that restructure user programs to exploit the machine organization seem to be essential. A wide range of algorithms and applications is being developed in an effort to provide high parallel processing performance in many fields. The Cedar supercomputer, presently operating with eight processors in parallel, uses advanced system and applications software developed at the University of Illinois during the past 12 years. This software should allow the number of processors in Cedar to be doubled annually, providing rapid performance advances in the next decade. PMID:17740294

  8. Feature Clustering for Accelerating Parallel Coordinate Descent

    SciTech Connect

    Scherrer, Chad; Tewari, Ambuj; Halappanavar, Mahantesh; Haglin, David J.

    2012-12-06

    We demonstrate an approach for accelerating calculation of the regularization path for L1 sparse logistic regression problems. We show the benefit of feature clustering as a preconditioning step for parallel block-greedy coordinate descent algorithms.

  9. Finite element computation with parallel VLSI

    NASA Technical Reports Server (NTRS)

    Mcgregor, J.; Salama, M.

    1983-01-01

This paper describes a parallel processing computer consisting of a 16-bit microcomputer as a master processor, which controls and coordinates the activities of 8086/8087 VLSI chip set slave processors working in parallel. The hardware is inexpensive and can be flexibly configured and programmed to perform various functions. This makes it a useful research tool for the development of, and experimentation with, parallel mathematical algorithms. Application of the hardware to computational tasks involved in the finite element analysis method is demonstrated by the generation and assembly of beam finite element stiffness matrices. A number of possible schemes for the implementation of N elements on N or n processors (N > n) are described, and their speedup factors are determined as a function of the number of available parallel processors.
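The element-level independence that makes this assembly task parallelizable can be illustrated with a simplified sketch. The snippet below assembles 2-node bar (spring) elements rather than the paper's beam elements, and all names are hypothetical; each element matrix is independent work of the kind the slave processors computed in parallel.

```python
def assemble_bar_stiffness(num_nodes, elements):
    """Assemble a global stiffness matrix from 2-node bar elements.

    elements: list of (node_i, node_j, k) tuples with spring stiffness k.
    Each 2x2 element matrix can be formed independently (the parallel
    part); assembly then sums the overlapping entries.
    """
    K = [[0.0] * num_nodes for _ in range(num_nodes)]
    for i, j, k in elements:
        # local element matrix k * [[1, -1], [-1, 1]] scattered to (i, j)
        K[i][i] += k
        K[j][j] += k
        K[i][j] -= k
        K[j][i] -= k
    return K
```

Two elements sharing a node produce a summed diagonal entry at that node, which is where assembly requires synchronization in a parallel setting.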

  10. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

Van der Wijngaart, Rob F.; Jin, Haoqiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.

  11. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation running AVS is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  12. Data parallel sorting for particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1992-01-01

Sorting on a parallel architecture is a communications intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
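The O(N) bound for sequential integer sorting mentioned above is attained by counting sort. A minimal serial sketch for binning particles by cell index might look like the following; this is an illustration of the sequential baseline, not the paper's data parallel merging algorithm.

```python
def counting_sort(cells, num_cells):
    """O(N) integer sort of particle cell indices (serial illustration)."""
    counts = [0] * num_cells
    for c in cells:                  # histogram of cell occupancy
        counts[c] += 1
    offsets = [0] * num_cells
    for i in range(1, num_cells):    # exclusive prefix sum -> start offsets
        offsets[i] = offsets[i - 1] + counts[i - 1]
    out = [None] * len(cells)
    for c in cells:                  # stable scatter into output slots
        out[offsets[c]] = c
        offsets[c] += 1
    return out
```

In a real particle code the scatter would move particle records, not just their cell indices, but the O(N) structure is the same.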

  13. The Nexus task-parallel runtime system

    SciTech Connect

    Foster, I.; Tuecke, S.; Kesselman, C.

    1994-12-31

    A runtime system provides a parallel language compiler with an interface to the low-level facilities required to support interaction between concurrently executing program components. Nexus is a portable runtime system for task-parallel programming languages. Distinguishing features of Nexus include its support for multiple threads of control, dynamic processor acquisition, dynamic address space creation, a global memory model via interprocessor references, and asynchronous events. In addition, it supports heterogeneity at multiple levels, allowing a single computation to utilize different programming languages, executables, processors, and network protocols. Nexus is currently being used as a compiler target for two task-parallel languages: Fortran M and Compositional C++. In this paper, we present the Nexus design, outline techniques used to implement Nexus on parallel computers, show how it is used in compilers, and compare its performance with that of another runtime system.

  14. Parallel processing of a rotating shaft simulation

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.

    1989-01-01

A FORTRAN program describing the vibration modes of a rotor-bearing system is analyzed for parallelism using a Pascal-like structured language. Potential vector operations are also identified. A critical path through the simulation is identified and used in conjunction with somewhat fictitious processor characteristics to determine the time to calculate the problem on a parallel processing system having those characteristics. A parallel processing overhead time is included as a parameter for proper evaluation of the gain over serial calculation. The serial calculation time is determined for the same fictitious system. An improvement of up to 640 percent is possible depending on the value of the overhead time. Based on the analysis, certain conclusions are drawn pertaining to the development needs of parallel processing technology, and to the specification of parallel processing systems to meet computational needs.

  15. Modified mesh-connected parallel computers

    SciTech Connect

Carlson, D.A.

    1988-10-01

    The mesh-connected parallel computer is an important parallel processing organization that has been used in the past for the design of supercomputing systems. In this paper, the authors explore modifications of a mesh-connected parallel computer for the purpose of increasing the efficiency of executing important application programs. These modifications are made by adding one or more global mesh structures to the processing array. They show how our modifications allow asymptotic improvements in the efficiency of executing computations having low to medium interprocessor communication requirements (e.g., tree computations, prefix computations, finding the connected components of a graph). For computations with high interprocessor communication requirements such as sorting, they show that they offer no speedup. They also compare the modified mesh-connected parallel computer to other similar organizations including the pyramid, the X-tree, and the mesh-of-trees.

  16. Fast and practical parallel polynomial interpolation

    SciTech Connect

    Egecioglu, O.; Gallopoulos, E.; Koc, C.K.

    1987-01-01

We present fast and practical parallel algorithms for the computation and evaluation of interpolating polynomials. The algorithms make use of fast parallel prefix techniques for the calculation of divided differences in the Newton representation of the interpolating polynomial. For n + 1 given input pairs the proposed interpolation algorithm requires 2⌈log(n + 1)⌉ + 2 parallel arithmetic steps and circuit size O(n^2). The algorithms are numerically stable and their floating-point implementation results in error accumulation similar to that of the widely used serial algorithms. This is in contrast to other fast serial and parallel interpolation algorithms which are subject to much larger roundoff. We demonstrate that in a distributed memory environment, a cube connected system is very suitable for the algorithms' implementation, exhibiting very small communication cost. As further advantages we note that our techniques do not require equidistant points, preconditioning, or use of the Fast Fourier Transform. 21 refs., 4 figs.
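For context, the divided differences underlying the Newton representation can be computed serially as below. The paper's contribution is evaluating each stage with parallel prefix; this serial sketch shows only what is being computed, not how it is parallelized.

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the Newton interpolating polynomial.

    Serial O(n^2) version: stage j produces the order-j divided differences.
    """
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        # update from the bottom so lower-order entries are preserved
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Horner-style evaluation of the Newton form at point x."""
    acc = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        acc = acc * (x - xs[i]) + coef[i]
    return acc
```

Note the points need not be equidistant, matching the advantage claimed in the abstract.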

  17. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
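A minimal serial sketch of the underlying pattern search idea (compass search over coordinate directions) is shown below. It is an illustration only, not the authors' APPS implementation, and it omits the asynchronous evaluation and fault tolerance machinery; in PPS each poll point would be evaluated on a separate processor.

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Derivative-free compass search: poll +/- each coordinate direction,
    accept the first improving point, contract the step on failure."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:          # accept first improving poll point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5              # unsuccessful poll: contract pattern
        it += 1
    return x, fx
```

Because the 2n poll evaluations per iteration are independent, they map naturally onto parallel function evaluations, which is the property PPS exploits.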

  18. Massive parallelism in the future of science

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

Massive parallelism appears in three domains of action of concern to scientists, where it produces collective action that is not possible from any individual agent's behavior. In the domain of data parallelism, computers comprising very large numbers of processing agents, one for each data item in the result, will be designed. These agents collectively can solve problems thousands of times faster than current supercomputers. In the domain of distributed parallelism, computations comprising large numbers of resources attached to the world network will be designed. The network will support computations far beyond the power of any one machine. In the domain of people parallelism, collaborations among large groups of scientists around the world who participate in projects that endure well past the sojourns of individuals within them will be designed. Computing and telecommunications technology will support the large, long projects that will characterize big science by the turn of the century. Scientists must become masters in these three domains during the coming decade.

  19. A join algorithm for combining AND parallel solutions in AND/OR parallel systems

    SciTech Connect

Ramkumar, B.; Kale, L.V.

    1992-02-01

When two or more literals in the body of a Prolog clause are solved in (AND) parallel, their solutions need to be joined to compute solutions for the clause. This is often a difficult problem in parallel Prolog systems that exploit OR and independent AND parallelism in Prolog programs. In several AND/OR parallel systems proposed recently, this problem is side-stepped at the cost of unexploited OR parallelism in the program, in part due to the complexity of the backtracking algorithm beneath AND parallel branches. In some cases, the data dependency graphs used by these systems cannot represent all the exploitable independent AND parallelism known at compile time. In this paper, we describe the compile time analysis for an optimized join algorithm for supporting independent AND parallelism in logic programs efficiently without leaving any OR parallelism unexploited. We then discuss how this analysis can be used to yield very efficient runtime behavior. We also discuss problems associated with a tree representation of the search space when arbitrarily complex data dependency graphs are permitted. We describe how these problems can be resolved by mapping the search space onto the data dependency graphs themselves. The algorithm has been implemented in a compiler for parallel Prolog based on the reduce-OR process model. The algorithm is suitable for the implementation of AND/OR systems on both shared and nonshared memory machines. Performance on benchmark programs is also reported.
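The core join operation, combining binding sets from independently solved AND parallel literals, can be sketched as a consistency-checked cross product. This is a naive illustration of the problem being solved, not the paper's optimized compile-time algorithm.

```python
def join_solutions(sols_a, sols_b):
    """Join solution sets of two AND-parallel literals.

    Each solution is a dict of variable bindings; a pair joins only if
    the bindings agree on every shared variable.
    """
    joined = []
    for a in sols_a:
        for b in sols_b:
            shared = a.keys() & b.keys()
            if all(a[v] == b[v] for v in shared):   # consistent shared vars
                joined.append({**a, **b})
    return joined
```

For truly independent AND parallelism the literals share no unbound variables, so the consistency check is trivial and the join degenerates to a cross product, which is the case the compile-time analysis detects.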

  20. Subsurface geology of the Warfield structures in southwestern West Virginia: Implications for tectonic deformation and hydrocarbon exploration in the Central Appalachian basin

    SciTech Connect

    Gao, D.; Shumaker, R.C.

    1996-08-01

    Data from over 6000 wells and five multichannel reflection seismic lines were used to constrain the subsurface geometry of the Warfield structures in southwestern West Virginia within the central Appalachian basin. Based on their vertical differences in geometry and structural styles, we divided the Warfield structures into shallow (above the Devonian Onondaga Limestone), intermediate (between the Devonian Onondaga Limestone and the Silurian Tuscarora Sandstone), and deep (below the Ordovician Trenton horizon) structural levels. Shallow structures are related to the Alleghanian deformation above the major detachment horizon of the Devonian shales and consist of the Warfield anticline with a 91.5-m closure and southeast-dipping monoclines, which aided the northwest migration and entrapment of oil and gas. At the intermediate level, the closure of the Warfield anticline is lost because the Alleghanian deformation is obscured below the major detachment of the Devonian shales, which explains the reduced production from the Devonian and Silurian reservoirs. Deep structures are characterized by an asymmetric half graben within a continental rift system known as the Rome trough, in which a thick sequence of sedimentary rocks exists to provide sources for overlying reservoirs. Although stratigraphic traps may be associated with thickness and facies changes, the deep level is structurally unfavorable for trapping hydrocarbons. Based on changes we found in map trend, we divided the Warfield structures into a middle segment and southern and northern bends. The middle segment is parallel to the New York-Alabama lineament (a northeast-trending magnetic gradient); the southern and the northern bends are linked to the 38th parallel lineament (a west-trending fault system) and the Burning Springs-Mann Mountain lineament (a north-trending magnetic gradient), respectively.

  1. A survey of parallel programming tools

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.

    1991-01-01

This survey examines 39 parallel programming tools. Focus is placed on those tool capabilities needed for parallel scientific programming rather than for general computer science. The tools are classified with current and future needs of the Numerical Aerodynamic Simulator (NAS) in mind: existing and anticipated NAS supercomputers and workstations; operating systems; programming languages; and applications. They are divided into four categories: suggested acquisitions; tools already brought in; tools worth tracking; and tools eliminated from further consideration at this time.

  2. Fast Parallel Computation Of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory L.; Bagherzadeh, Nader

    1996-01-01

    Constraint-force algorithm fast, efficient, parallel-computation algorithm for solving forward dynamics problem of multibody system like robot arm or vehicle. Solves problem in minimum time proportional to log(N) by use of optimal number of processors proportional to N, where N is number of dynamical degrees of freedom: in this sense, constraint-force algorithm both time-optimal and processor-optimal parallel-processing algorithm.

  3. LDV Measurement of Confined Parallel Jet Mixing

    SciTech Connect

    R.F. Kunz; S.W. D'Amico; P.F. Vassallo; M.A. Zaccaria

    2001-01-31

Laser Doppler Velocimetry (LDV) measurements were taken in a confinement, bounded by two parallel walls, into which issues a row of parallel jets. Two-component measurements were taken of two mean velocity components and three Reynolds stress components. As observed in isolated three-dimensional wall-bounded jets, the transverse diffusion of the jets is quite large. The data indicate that this rapid mixing process is due to strong secondary flows, transport of large inlet intensities, and Reynolds stress anisotropy effects.

  4. Enhancing Scalability of Parallel Structured AMR Calculations

    SciTech Connect

    Wissink, A M; Hysom, D; Hornung, R D

    2003-02-10

This paper discusses parallel scaling performance of large scale parallel structured adaptive mesh refinement (SAMR) calculations in SAMRAI. Previous work revealed that poor scaling qualities in the adaptive gridding operations in SAMR calculations cause them to become dominant for cases run on up to 512 processors. This work describes algorithms we have developed to enhance the efficiency of the adaptive gridding operations. Performance of the algorithms is evaluated for two adaptive benchmarks run on up to 512 processors of an IBM SP system.

  5. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect

    Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica

    2008-12-01

In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

  6. Computational electromagnetics and parallel dense matrix computations

    SciTech Connect

    Forsman, K.; Kettunen, L.; Gropp, W.; Levine, D.

    1995-06-01

    We present computational results using CORAL, a parallel, three-dimensional, nonlinear magnetostatic code based on a volume integral equation formulation. A key feature of CORAL is the ability to solve, in parallel, the large, dense systems of linear equations that are inherent in the use of integral equation methods. Using the Chameleon and PSLES libraries ensures portability and access to the latest linear algebra solution technology.

  7. Partitioning And Packing Equations For Parallel Processing

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Milner, Edward J.

    1989-01-01

Algorithm developed to identify parallelism in set of coupled ordinary differential equations that describe physical system and to divide set into parallel computational paths, along which parts of solution proceed independently of others during at least part of time. Path-identifying algorithm creates number of paths consisting of equations that must be computed serially and table that gives dependent and independent arguments and "can start," "can end," and "must end" times of each equation. "Must end" time used subsequently by packing algorithm.
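The "can start" style table can be approximated by topological leveling of the equation dependency graph: an equation's level is one more than the latest level among its inputs, and equations sharing a level can be computed in parallel. The sketch below is a hypothetical stand-in for the NASA path-identifying algorithm, with all names invented for illustration.

```python
def schedule_levels(deps):
    """Group equations into parallel levels from a dependency map.

    deps: {equation_name: [names it depends on]}, assumed acyclic.
    Returns a list of levels; all equations in one level are independent.
    """
    levels = {}

    def level(eq):
        if eq not in levels:
            levels[eq] = 1 + max((level(d) for d in deps[eq]), default=0)
        return levels[eq]

    for eq in deps:
        level(eq)
    groups = {}
    for eq, lv in levels.items():
        groups.setdefault(lv, []).append(eq)
    return [sorted(groups[lv]) for lv in sorted(groups)]
```

The number of levels is the critical-path length mentioned in record 14 above; the widest level bounds the useful processor count.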

  8. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses existing sequential algorithms without requiring their parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
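As a rough illustration of the splitting idea (not the paper's multisplitting iteration), the sketch below updates disjoint blocks of variables from a shared previous iterate, so each block's update could run on its own node; in the paper each block would instead invoke an existing sequential optimizer. All names and the simple gradient step are assumptions for illustration.

```python
def block_split_minimize(grad, x0, blocks, step=0.1, iters=200):
    """Block-Jacobi style minimization sketch.

    grad: function returning the gradient list at a point.
    blocks: partition of variable indices; block updates are independent,
    hence parallelizable, because all read the previous iterate x.
    """
    x = list(x0)
    for _ in range(iters):
        g = grad(x)               # gradient at the shared iterate
        new_x = list(x)
        for blk in blocks:        # each block could be a separate node
            for i in blk:
                new_x[i] = x[i] - step * g[i]
        x = new_x                 # synchronize: combine block results
    return x
```

The synchronization point at the end of each sweep is where the multisplitting combination of partial results would occur.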

  9. Acoustic simulation in architecture with parallel algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiaohong; Zhang, Xinrong; Li, Dan

    2004-03-01

To address the complexity of architectural environments and the demands of real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers at each frequency segment, which are calculated with multiple processes, are then combined into the whole frequency response. The numerical experiment shows that parallel computation can improve the efficiency of acoustic simulation for complex scenes.

  10. Integrating GIS-based geologic mapping, LiDAR-based lineament analysis and site specific rock slope data to delineate a zone of existing and potential rock slope instability located along the grandfather mountain window-Linville Falls shear zone contact, Southern Appalachian Mountains, Watauga County, North Carolina

    USGS Publications Warehouse

    Gillon, K.A.; Wooten, R.M.; Latham, R.L.; Witt, A.W.; Douglas, T.J.; Bauer, J.B.; Fuemmeler, S.J.

    2009-01-01

Landslide hazard maps of Watauga County identify >2200 landslides, model debris flow susceptibility, and evaluate a 14 km x 0.5 km zone of existing and potential rock slope instability (ZEPRSI) near the Town of Boone. The ZEPRSI encompasses west-northwest trending (WNWT) topographic ridges where 14 active/past-active rock/weathered rock slides occur mainly in rocks of the Grandfather Mountain Window (GMW). The north side of this ridgeline is the GMW / Linville Falls Fault (LFF) contact. Sheared rocks of the Linville Falls Shear Zone (LFSZ) occur along the ridge and locally in the valley north of the contact. The valley is underlain principally by layered granitic gneiss comprising the Linville Falls/Beech Mountain/Stone Mountain Thrust Sheet. The integration of ArcGIS-format digital geologic and lineament mapping on a 6 m LiDAR (Light Detection and Ranging) digital elevation model (DEM) base, and kinematic analyses of site specific rock slope data (e.g., presence and degree of ductile and brittle deformation fabrics, rock type, rock weathering state) indicate: WNWT lineaments are expressions of a regionally extensive zone of fractures and faults; and ZEPRSI rock slope failures concentrate along excavated, north-facing LFF/LFSZ slopes where brittle fabrics overprint older metamorphic foliations, and other fractures create side and back release surfaces. Copyright 2009 ARMA, American Rock Mechanics Association.

  11. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.

  12. A parallel PCG solver for MODFLOW.

    PubMed

    Dong, Yanhui; Li, Guomin

    2009-01-01

In order to simulate large-scale ground water flow problems more efficiently with MODFLOW, the OpenMP programming paradigm was used in this study to parallelize the preconditioned conjugate-gradient (PCG) solver. Incremental parallelization, the significant advantage supported by OpenMP on a shared-memory computer, made the solver transition to a parallel program smoothly one block of code at a time. The parallel PCG solver, suitable for both MODFLOW-2000 and MODFLOW-2005, is verified using an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments. Based on the timing results, execution times using the parallel PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelizing approach reduces cost in terms of software maintenance because only a single source PCG solver code needs to be maintained in the MODFLOW source tree. PMID:19563427
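The PCG iteration being parallelized can be sketched in a few lines. The serial pure-Python version below uses a Jacobi (diagonal) preconditioner for illustration and is not the MODFLOW source; the row-wise loops (matvec, dot products, vector updates) are exactly the kernels an OpenMP port would mark as parallel loops.

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradients for a dense symmetric
    positive definite matrix A (list of lists). Returns the solution x."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                                   # r = b - A @ 0
    minv = [1.0 / A[i][i] for i in range(n)]      # Jacobi preconditioner
    z = [minv[i] * r[i] for i in range(n)]
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        # matvec and dot products: the OpenMP-parallelizable kernels
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

Because each kernel is a loop over independent rows, the incremental OpenMP approach described in the abstract can parallelize one loop at a time while the algorithm's results stay bit-identical in structure.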

  13. Modeling Parallel System Workloads with Temporal Locality

    NASA Astrophysics Data System (ADS)

    Minh, Tran Ngoc; Wolters, Lex

In parallel systems, similar jobs tend to arrive within bursty periods. This fact leads to the existence of the locality phenomenon, a persistent similarity between nearby jobs, in real parallel computer workloads. This important phenomenon deserves to be taken into account and used as a characteristic of any workload model. Regrettably, this property has received little if any attention from researchers, and synthetic workloads used for performance evaluation to date often do not have locality. With respect to this research trend, Feitelson has suggested a general repetition approach to model locality in synthetic workloads [6]. Using this approach, Li et al. recently introduced a new method for modeling temporal locality in workload attributes such as run time and memory [14]. However, with the assumption that each job in the synthetic workload requires a single processor, the parallelism has not been taken into account in their study. In this paper, we propose a new model for parallel computer workloads based on their result. In our research, we first improve their model to better control the locality of a run time process and then model the parallelism. The key idea for modeling the parallelism is to control the cross-correlation between the run time and the number of processors. Experimental results show that not only is the cross-correlation controlled well by our model, but also the marginal distribution can be fitted nicely. Furthermore, the locality feature is also obtained in our model.

  14. Parallel object-oriented adaptive mesh refinement

    SciTech Connect

    Balsara, D.; Quinlan, D.J.

    1997-04-01

In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.

  15. The Xyce Parallel Electronic Simulator - An Overview

    SciTech Connect

    HUTCHINSON,SCOTT A.; KEITER,ERIC R.; HOEKSTRA,ROBERT J.; WATTS,HERMAN A.; WATERS,ARLON J.; SCHELLS,REGINA L.; WIX,STEVEN D.

    2000-12-08

    The Xyce{trademark} Parallel Electronic Simulator has been written to support the simulation needs of the Sandia National Laboratories electrical designers. As such, the development has focused on providing the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). In addition, the developers are providing improved performance for numerical kernels using state-of-the-art algorithms, support for modeling circuit phenomena at a variety of abstraction levels, and object-oriented, modern coding practices that ensure the code will be maintainable and extensible far into the future. The code is a parallel code in the most general sense of the phrase--a message-passing parallel implementation--which allows it to run efficiently on the widest possible range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms as well as heterogeneous platforms. Furthermore, careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved even as the number of processors grows.

  16. Efficient communication in massively parallel computers

    SciTech Connect

    Cypher, R.E.

    1989-01-01

    A fundamental operation in parallel computation is sorting. Sorting is important not only because it is required by many algorithms, but also because it can be used to implement irregular, pointer-based communication. The author studies two algorithms for sorting in massively parallel computers. First, he examines Shellsort. Shellsort is a sorting algorithm that is based on a sequence of parameters called increments. Shellsort can be used to create a parallel sorting device known as a sorting network. Researchers have suggested that if the correct increment sequence is used, an optimal size sorting network can be obtained. All published increment sequences have been monotonically decreasing. He shows that no monotonically decreasing increment sequence will yield an optimal size sorting network. Second, he presents a sorting algorithm called Cubesort. Cubesort is the fastest known sorting algorithm for a variety of parallel computers over a wide range of parameters. He also presents a paradigm for developing parallel algorithms that have efficient communication. The paradigm, called the data reduction paradigm, consists of using a divide-and-conquer strategy. Both the division and combination phases of the divide-and-conquer algorithm may require irregular, pointer-based communication between processors. However, the problem is divided so as to limit the amount of data that must be communicated. As a result the communication can be performed efficiently. He presents data reduction algorithms for the image component labeling problem, the closest pair problem and four versions of the parallel prefix problem.
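
    Shellsort's dependence on its increment sequence is easy to see in code. A plain sequential sketch follows (the increments shown are Knuth's 3h+1 values, purely as an example; any valid sequence must end in 1). In a sorting network, each h-sorting pass becomes a layer of comparators, which is why the length of the sequence matters for network size.

```python
def shellsort(a, increments=(121, 40, 13, 4, 1)):
    """Shellsort driven by an explicit increment sequence (Knuth's 3h+1
    values shown as an example; the sequence must end in 1).  Each pass
    insertion-sorts the interleaved subsequences of stride h."""
    a = list(a)
    for h in increments:
        for i in range(h, len(a)):
            key, j = a[i], i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]   # shift larger elements up by stride h
                j -= h
            a[j] = key
    return a
```
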

  17. A fast algorithm for parallel computation of multibody dynamics on MIMD parallel architectures

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory; Bagherzadeh, Nader

    1993-01-01

    In this paper the implementation of a parallel O(LogN) algorithm for computation of rigid multibody dynamics on a Hypercube MIMD parallel architecture is presented. To our knowledge, this is the first algorithm that achieves the time lower bound of O(LogN) by using an optimal number of O(N) processors. However, in addition to its theoretical significance, the algorithm is also highly efficient for practical implementation on commercially available MIMD parallel architectures due to its highly coarse grain size and simple communication and synchronization requirements. We present a multilevel parallel computation strategy for implementation of the algorithm on a Hypercube. This strategy allows the exploitation of parallelism at several computational levels as well as maximum overlapping of computation and communication to increase the performance of parallel computation.

  18. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  19. Parallel phase model : a programming model for high-end parallel machines with manycores.

    SciTech Connect

    Wu, Junfeng; Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  20. Parallel software support for computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1987-01-01

    The application of the parallel programming methodology known as the Force was conducted. Two application issues were addressed. The first involves the efficiency of the implementation and its completeness in terms of satisfying the needs of other researchers implementing parallel algorithms. Support for, and interaction with, other Computational Structural Mechanics (CSM) researchers using the Force was the main issue, but some independent investigation of the Barrier construct, which is extremely important to overall performance, was also undertaken. Another efficiency issue which was addressed was that of relaxing the strong synchronization condition imposed on the self-scheduled parallel DO loop. The Force was extended by the addition of logical conditions to the cases of a parallel case construct and by the inclusion of a self-scheduled version of this construct. The second issue involved applying the Force to the parallelization of finite element codes such as those found in the NICE/SPAR testbed system. One of the more difficult problems encountered is the determination of what information in COMMON blocks is actually used outside of a subroutine and when a subroutine uses a COMMON block merely as scratch storage for internal temporary results.

  1. Iteration schemes for parallelizing models of superconductivity

    SciTech Connect

    Gray, P.A.

    1996-12-31

    The time dependent Lawrence-Doniach model, valid for high fields and high values of the Ginzburg-Landau parameter, is often used for studying vortex dynamics in layered high-T{sub c} superconductors. When solving these equations numerically, the added degrees of complexity due to the coupling and nonlinearity of the model often warrant the use of high-performance computers for their solution. However, the interdependence between the layers can be manipulated so as to allow parallelization of the computations at an individual layer level. The reduced parallel tasks may then be solved independently using a heterogeneous cluster of networked workstations connected together with Parallel Virtual Machine (PVM) software. Here, this parallelization of the model is discussed and several computational implementations of varying degrees of parallelism are presented. Computational results are also given which contrast properties of convergence speed, stability, and consistency of these implementations. Included in these results are models involving the motion of vortices due to an applied current and pinning effects due to various material properties.

  2. Parallel evolutionary computation in bioinformatics applications.

    PubMed

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. PMID:23127284

  3. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search of a single individual, a typical sequential process. To assure the same features of the sequential random search in the parallel version, we analyze the former's spatial patterns of encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution for each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated searches in high-capacity databases and animal foraging.
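
    The essence of the scheme is that each parallel walker draws its consecutive-target distances from the lognormal law fitted to the sequential search. A hedged, stdlib-only sketch of that sampling step (the function name, signature, and parameters are ours, not the authors'):

```python
import random

def parallel_search_distances(n_walkers, n_targets, mu, sigma, seed=None):
    """Each independent walker draws its consecutive-target distances
    from the lognormal distribution whose mu and sigma were measured in
    the sequential search.  Illustrative sketch; names are assumptions."""
    rng = random.Random(seed)
    return [[rng.lognormvariate(mu, sigma) for _ in range(n_targets)]
            for _ in range(n_walkers)]
```

    With one such list per walker, downstream statistics such as detection efficiency can be computed exactly as in the sequential case.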

  4. Parallel and vector computation in heat transfer

    SciTech Connect

    Georgiadis, J.G. ); Murthy, J.Y. )

    1990-01-01

    This collection of manuscripts complements a number of other volumes related to engineering numerical analysis in general; it also gives a preview of the potential contribution of vector and parallel computing to heat transfer. Contributions have been made from the fields of heat transfer, computational fluid mechanics or physics, and from researchers in industry or in academia. This work serves to indicate that new or modified numerical algorithms have to be developed depending on the hardware used (as the long titles of most of the papers in this volume imply). This volume contains six examples of numerical simulation on parallel and vector computers that demonstrate the competitiveness of the novel methodologies. A common thread through all the manuscripts is that they address problems involving irregular geometries or complex physics, or both. Comparative studies of the performance of certain algorithms on various computers are also presented. Most machines used in this work belong to the coarse- to medium-grain group (consisting of a few to a hundred processors) with architectures of the multiple-instruction-stream-multiple-data-stream (MIMD) type. Some of the machines used have both parallel and vector processors, while parallel computations are certainly emphasized. We hope that this work will contribute to the increasing involvement of heat transfer specialists with parallel computation.

  5. IMPAIR: massively parallel deconvolution on the GPU

    NASA Astrophysics Data System (ADS)

    Sherry, Michael; Shearer, Andy

    2013-02-01

    The IMPAIR software is a high throughput image deconvolution tool for processing large out-of-core datasets of images, varying from large images with spatially varying PSFs to large numbers of images with spatially invariant PSFs. IMPAIR implements a parallel version of the tried and tested Richardson-Lucy deconvolution algorithm regularised via a custom wavelet thresholding library. It exploits the inherently parallel nature of the convolution operation to achieve quality results on consumer grade hardware: through the NVIDIA Tesla GPU implementation, the multi-core OpenMP implementation, and the cluster computing MPI implementation of the software. IMPAIR aims to address the problem of parallel processing in both top-down and bottom-up approaches: by managing the input data at the image level, and by managing the execution at the instruction level. These combined techniques will lead to a scalable solution with minimal resource consumption and maximal load balancing. IMPAIR is being developed as both a stand-alone tool for image processing, and as a library which can be embedded into non-parallel code to transparently provide parallel high throughput deconvolution.

  6. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  7. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
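
    Stripped to its core, relative debugging is a comparison of corresponding values from the two runs. A toy sketch of that comparison (the real system works on live processes via dynamic instrumentation; here a trace is just a list of (variable, value) pairs recorded at comparison points, and all names are illustrative):

```python
def first_divergence(serial_trace, parallel_trace, tol=1e-9):
    """Compare the values a program computes in its serial and its
    parallelized run, and report the first point where they differ.
    Traces are lists of (label, value) pairs; a tolerance absorbs
    harmless floating-point reordering noise."""
    for k, ((ls, vs), (lp, vp)) in enumerate(zip(serial_trace, parallel_trace)):
        if ls != lp or abs(vs - vp) > tol:
            return k, (ls, vs), (lp, vp)
    return None  # no divergence within the common prefix
```

    If the serial run is trusted, the index returned localizes the first parallelization error, which is exactly the information the debugger is invoked on.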

  8. Relative Debugging of Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  9. Parallel algorithms for the spectral transform method

    SciTech Connect

    Foster, I.T.; Worley, P.H.

    1994-04-01

    The spectral transform method is a standard numerical technique for solving partial differential equations on a sphere and is widely used in atmospheric circulation models. Recent research has identified several promising algorithms for implementing this method on massively parallel computers; however, no detailed comparison of the different algorithms has previously been attempted. In this paper, we describe these different parallel algorithms and report on computational experiments that we have conducted to evaluate their efficiency on parallel computers. The experiments used a testbed code that solves the nonlinear shallow water equations on a sphere; considerable care was taken to ensure that the experiments provide a fair comparison of the different algorithms and that the results are relevant to global models. We focus on hypercube- and mesh-connected multicomputers with cut-through routing, such as the Intel iPSC/860, DELTA, and Paragon, and the nCUBE/2, but also indicate how the results extend to other parallel computer architectures. The results of this study are relevant not only to the spectral transform method but also to multidimensional FFTs and other parallel transforms.
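
    The structure shared by parallel multidimensional transforms, including the transpose-based algorithms compared in such studies, is "transform the local dimension, transpose, transform again". A serial sketch using a naive DFT as the per-line transform (in a distributed-memory version, each transpose is the all-to-all communication step; this is an illustration of the pattern, not the paper's testbed code):

```python
import cmath

def dft(seq):
    """Naive 1-D DFT, standing in for the per-line transform."""
    n = len(seq)
    return [sum(seq[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def transform_2d(grid):
    """Transpose-based 2-D transform: transform each row, transpose so
    the other dimension becomes local, transform again, transpose back.
    In the distributed version each transpose is an all-to-all."""
    rows = [dft(r) for r in grid]
    cols = [dft(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

    The same skeleton applies when the per-line transform is a Legendre transform or an FFT; only the cost of the transposes, i.e. the communication, distinguishes the parallel variants.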

  10. Optics Program Modified for Multithreaded Parallel Computing

    NASA Technical Reports Server (NTRS)

    Lou, John; Bedding, Dave; Basinger, Scott

    2006-01-01

    A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS based on pthreads [POSIX Thread, (where "POSIX" signifies a portable operating system for UNIX)]. In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.

  11. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor converging speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as a development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
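
    The claim that the algorithm reduces to matrix-vector products plus a threshold is easy to verify in code. A pure-Python sketch of the linearized Bregman iteration for min ||u||_1 subject to Au = b (step-size choices here are illustrative; a GPU version maps the same three operations onto CUDA kernels):

```python
def shrink(x, mu):
    """Componentwise soft-thresholding, the only nonlinearity involved."""
    return [(1.0 if xi > 0 else -1.0) * max(abs(xi) - mu, 0.0) for xi in x]

def linearized_bregman(A, b, mu, delta, iters):
    """Linearized Bregman iteration for min ||u||_1 subject to Au = b.
    Each step is two matrix-vector products plus one threshold, which is
    why the method maps so well onto a GPU.  Pure-Python sketch; suitable
    step sizes are the caller's responsibility."""
    m, n = len(A), len(A[0])
    u = [0.0] * n
    v = [0.0] * n
    for _ in range(iters):
        # residual r = b - A u
        r = [b[i] - sum(A[i][j] * u[j] for j in range(n)) for i in range(m)]
        # dual update v <- v + A^T r
        for j in range(n):
            v[j] += sum(A[i][j] * r[i] for i in range(m))
        # primal update u <- delta * shrink(v, mu)
        u = [delta * s for s in shrink(v, mu)]
    return u
```

    The two inner sums are the matrix-vector products that the GPU parallelizes; the threshold is elementwise and embarrassingly parallel.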

  12. National Combustion Code Parallel Performance Enhancements

    NASA Technical Reports Server (NTRS)

    Quealy, Angela; Benyo, Theresa (Technical Monitor)

    2002-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.

  13. Parallel Harness for Informatic Stream Hashing

    SciTech Connect

    Steve Plimpton, Tim Shead

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.

  14. New parallel SOR method by domain partitioning

    SciTech Connect

    Xie, D.; Adams, L.

    1999-07-01

    In this paper the authors propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning and interprocessor data communication techniques. They prove that the PSOR method has the same asymptotic rate of convergence as the Red/Black (R/B) SOR method for the five-point stencil on both strip and block partitions, and as the four-color (R/B/G/O) SOR method for the nine-point stencil on strip partitions. They also demonstrate the parallel performance of the PSOR method on four different MIMD multiprocessors (a KSR1, an Intel Delta, a Paragon, and an IBM SP2). Finally, they compare the parallel performance of PSOR, R/B SOR, and R/B/G/O SOR. Numerical results on the Paragon indicate that PSOR is more efficient than R/B SOR and R/B/G/O SOR in both computation and interprocessor data communication.
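
    The parallel-friendliness of the Red/Black ordering is visible in a few lines: under the five-point stencil, every point of one colour depends only on points of the other colour, so all updates within a half-sweep are independent and can run simultaneously. A serial sketch for a Poisson problem (boundary values held fixed; omega is illustrative; this illustrates R/B SOR, not the authors' PSOR):

```python
def redblack_sor(u, f, h, omega=1.5, sweeps=200):
    """Red/Black SOR for the five-point stencil on an n x n grid.
    Each point of one colour depends only on points of the other colour,
    so every update within a half-sweep is independent and could run in
    parallel.  Boundary rows/columns of u are held fixed; u is updated
    in place and returned."""
    n = len(u)
    for _ in range(sweeps):
        for colour in (0, 1):                    # red half-sweep, then black
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 != colour:
                        continue
                    gs = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                 + u[i][j - 1] + u[i][j + 1]
                                 - h * h * f[i][j])
                    u[i][j] += omega * (gs - u[i][j])
    return u
```

    PSOR's contribution is to obtain comparable convergence from ordinary domain partitions, avoiding the strided memory access that colouring induces.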

  15. Parallel Algorithms For Optical Digital Computers

    NASA Astrophysics Data System (ADS)

    Huang, Alan

    1983-04-01

    Conventional computers suffer from several communication bottlenecks which fundamentally limit their performance. These bottlenecks are characterized by an address-dependent sequential transfer of information which arises from the need to time-multiplex information over a limited number of interconnections. An optical digital computer based on a classical finite state machine can be shown to be free of these bottlenecks. Such a processor would be unique since it would be capable of modifying its entire state space each cycle while conventional computers can only alter a few bits. New algorithms are needed to manage and use this capability. A technique based on recognizing a particular symbol in parallel and replacing it in parallel with another symbol is suggested. Examples using this parallel symbolic substitution to perform binary addition and binary incrementation are presented. Applications involving Boolean logic, functional programming languages, production rule driven artificial intelligence, and molecular chemistry are also discussed.
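
    The parallel symbolic-substitution scheme for binary addition can be mimicked with bitwise operators, which likewise rewrite every bit position in one step: recognize the (1,1) pairs everywhere at once, replace them with a carry symbol shifted left, and repeat until no carries remain. A sketch (Python integers standing in for the optical state space):

```python
def add_by_substitution(a, b):
    """Binary addition in the spirit of parallel symbolic substitution:
    each cycle rewrites every bit position simultaneously -- XOR produces
    the partial-sum symbols, AND-then-shift produces the carry symbols --
    and cycles repeat until no carry symbols remain."""
    while b:
        a, b = a ^ b, (a & b) << 1   # substitute (sum, carry) at all positions at once
    return a
```

    Each loop iteration corresponds to one machine cycle in which the entire state space is rewritten; the loop count is bounded by the carry-propagation length, not by the word width times per-bit steps.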

  16. Measuring performance of parallel computers. Final report

    SciTech Connect

    Sullivan, F.

    1994-07-01

    Performance Measurement - the authors have developed a taxonomy of parallel algorithms based on data motion and example applications have been coded for each class of the taxonomy. Computational benchmark kernels have been extracted for several applications, and detailed measurements have been performed. Algorithms for Massively Parallel SIMD machines - measurement results and computational experiences indicate that top performance will be achieved by `iteration` type algorithms running on massively parallel SIMD machines. Reformulation as iteration may entail unorthodox approaches based on probabilistic methods. The authors have developed such methods for some applications. Here they discuss their approach to performance measurement, describe the taxonomy and measurements which have been made, and report on some general conclusions which can be drawn from the results of the measurements.

  17. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    The fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient codes is addressed. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if it is an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
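
    The distance-to-a-line alternative is naturally data-parallel: every pixel applies the same test independently, so with one pixel per processor the whole line appears in constant time regardless of length or orientation. A serial sketch of that per-pixel test (segment form, with clamping so the test respects the endpoints; the function name and thickness value are illustrative):

```python
import math

def draw_line_by_distance(x0, y0, x1, y1, width, height, thickness=0.5):
    """Every pixel independently measures its distance to the segment
    and lights up when close enough.  With one pixel per processor the
    double loop collapses to a single SIMD step.  Assumes a
    non-degenerate segment; 'thickness' is illustrative."""
    dx, dy = x1 - x0, y1 - y0
    length_sq = dx * dx + dy * dy
    lit = set()
    for px in range(width):
        for py in range(height):
            # project the pixel onto the segment and clamp to its extent
            t = ((px - x0) * dx + (py - y0) * dy) / length_sq
            t = max(0.0, min(1.0, t))
            dist = math.hypot(px - (x0 + t * dx), py - (y0 + t * dy))
            if dist <= thickness:
                lit.add((px, py))
    return lit
```

    On a SIMD machine the double loop disappears: each processor already knows its (px, py) and evaluates the remaining four lines in lockstep.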

  18. Simulating Billion-Task Parallel Programs

    SciTech Connect

    Perumalla, Kalyan S; Park, Alfred J

    2014-01-01

    In simulating large parallel systems, bottom-up approaches exercise detailed hardware models with effects from simplified software models or traces, whereas top-down approaches evaluate the timing and functionality of detailed software models over coarse hardware models. Here, we focus on the top-down approach and significantly advance the scale of the simulated parallel programs. Via the direct execution technique combined with parallel discrete event simulation, we stretch the limits of the top-down approach by simulating message passing interface (MPI) programs with millions of tasks. Using a timing-validated benchmark application, a proof-of-concept scaling level is achieved to over 0.22 billion virtual MPI processes on 216,000 cores of a Cray XT5 supercomputer, representing one of the largest direct execution simulations to date, combined with a multiplexing ratio of 1024 simulated tasks per real task.

  19. Grundy - Parallel processor architecture makes programming easy

    NASA Technical Reports Server (NTRS)

    Meier, R. J., Jr.

    1985-01-01

    The hardware, software, and firmware of the parallel processor, Grundy, are examined. The Grundy processor uses a simple processor that has a totally orthogonal three-address instruction set. The system contains a relative and indirect processing mode to support the high-level language, and uses pseudoprocessors and read-only memory. The system supports a high-level language in which arbitrary degrees of algorithmic parallelism are expressed. The functions of the compiler and invocation frame are described. Grundy uses an operating system that can be accessed by an arbitrary number of processes simultaneously, and the access time grows only as the logarithm of the number of active processes. Applications for the parallel processor are discussed.

  20. Parallel fault-tolerant robot control

    NASA Technical Reports Server (NTRS)

    Hamilton, D. L.; Bennett, J. K.; Walker, I. D.

    1992-01-01

    A shared memory multiprocessor architecture is used to develop a parallel fault-tolerant robot controller. Several versions of the robot controller are developed and compared. A robot simulation is also developed for control observation. Comparison of a serial version of the controller and a parallel version without fault tolerance showed the speedup possible with the coarse-grained parallelism currently employed. The performance degradation due to the addition of processor fault tolerance was demonstrated by comparison of these controllers with their fault-tolerant versions. Comparison of the more fault-tolerant controller with the lower-level fault-tolerant controller showed how varying the amount of redundant data affects performance. The results demonstrate the trade-off between speed performance and processor fault tolerance.