Parallel Computation of the Regional Ocean Modeling System (ROMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Song, Y T; Chao, Y
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
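The MPI parallelization described above rests on decomposing the model grid into per-processor subdomains that exchange halo (ghost) points with their neighbours. The sketch below is illustrative only, not ROMS code: it simulates a 1-D decomposition and a one-point periodic halo exchange in-process, where a real port would use MPI point-to-point calls between ranks.

```python
# Illustrative sketch (not ROMS code): 1-D domain decomposition with halo
# exchange, the pattern used when porting shared-memory tiled codes to MPI.
# The "exchange" is simulated in-process; a real MPI port would use
# MPI_Send/MPI_Recv (or mpi4py) between ranks.

def decompose(n, nprocs):
    """Split n grid points into nprocs contiguous subdomains."""
    base, rem = divmod(n, nprocs)
    sizes = [base + (1 if r < rem else 0) for r in range(nprocs)]
    starts = [sum(sizes[:r]) for r in range(nprocs)]
    return list(zip(starts, sizes))

def halo_exchange(subdomains):
    """Fill 1-point halos from neighbouring subdomains (periodic boundary)."""
    n = len(subdomains)
    for r, sub in enumerate(subdomains):
        left = subdomains[(r - 1) % n]
        right = subdomains[(r + 1) % n]
        sub["halo_left"] = left["data"][-1]   # last interior point of left rank
        sub["halo_right"] = right["data"][0]  # first interior point of right rank

field = list(range(16))  # a toy 1-D ocean field
parts = decompose(len(field), 4)
subdomains = [{"data": field[s:s + sz]} for s, sz in parts]
halo_exchange(subdomains)
```

After the exchange, each rank can compute stencil operations on its interior points using only local data plus the two halo values, which is what makes the strategy scale from tens to hundreds of processors.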
2009-06-30
Salinity Boundary Conditions and the Atlantic Meridional Overturning Circulation in Depth and Quasi-Isopycnic Coordinate Global Ocean Simulations (2009). This study examines the Atlantic Meridional Overturning Circulation (AMOC) in global simulations performed with the depth-coordinate Parallel Ocean Program (POP) ocean model.
High Resolution Simulations of Arctic Sea Ice, 1979-1993
2003-01-01
William H. Lipscomb. To evaluate improvements in modelling Arctic sea ice, we compare results from two regional models at 1/12° horizontal resolution. The first is a coupled ice-ocean model of the Arctic Ocean, consisting of an ocean model (adapted from the Parallel Ocean Program, Los Alamos National Laboratory [LANL]) and the "old" sea ice model. The second model uses the same grid but consists of an improved "new" sea ice model (LANL).
Performance and scalability evaluation of "Big Memory" on Blue Gene Linux.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshii, K.; Iskra, K.; Naik, H.
2011-05-01
We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory' - an alternative, transparent memory space introduced to eliminate the memory performance issues. We evaluate the performance of Big Memory using custom memory benchmarks, NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this performance using the central processor of the LOw Frequency ARray radio telescope as an example.
New Community Education Program on Oceans and Global Climate Change: Results from Our Pilot Year
NASA Astrophysics Data System (ADS)
Bruno, B. C.; Wiener, C.
2010-12-01
Ocean FEST (Families Exploring Science Together) engages elementary school students and their parents and teachers in hands-on science. Through this evening program, we educate participants about ocean and earth science issues that are relevant to their local communities. In the process, we hope to inspire more underrepresented students, including Native Hawaiians, Pacific Islanders and girls, to pursue careers in the ocean and earth sciences. Hawaii and the Pacific Islands will be disproportionately affected by the impacts of global climate change, including rising sea levels, coastal erosion, coral reef degradation and ocean acidification. It is therefore critically important to train ocean and earth scientists within these communities. This two-hour program explores ocean properties and timely environmental topics through six hands-on science activities. Activities are designed so students can see how globally important issues (e.g., climate change and ocean acidification) have local effects (e.g., sea level rise, coastal erosion, coral bleaching) which are particularly relevant to island communities. The Ocean FEST program ends with a career component, drawing parallels between the program activities and the activities done by "real scientists" in their jobs. The take-home message is that we are all scientists, we do science every day, and we can choose to do this as a career. Ocean FEST has just completed its pilot year. During the 2009-2010 academic year, we conducted 20 events, including 16 formal events held at elementary schools and 4 informal outreach events. Evaluation data were collected at all formal events. Formative feedback from adult participants (parents, teachers, administrators and volunteers) was solicited through written questionnaires. Students were invited to respond to a survey of five questions both before and after the program to see if there were any changes in content knowledge and career attitudes.
In our presentation, we will present our evaluation results from the first year and discuss how our program has been informed by this feedback.
NASA Astrophysics Data System (ADS)
Tao, Xie; Shang-Zhuo, Zhao; William, Perrie; He, Fang; Wen-Jin, Yu; Yi-Jun, He
2016-06-01
To study the electromagnetic backscattering from a one-dimensional drifting fractal sea surface, a fractal sea surface wave-current model is derived, based on the mechanism of wave-current interactions. The numerical results show the effect of the ocean current on the wave. Wave amplitude decreases, wavelength and kurtosis of wave height increase, spectrum intensity decreases and shifts towards lower frequencies when the current occurs parallel to the direction of the ocean wave. By comparison, wave amplitude increases, wavelength and kurtosis of wave height decrease, spectrum intensity increases and shifts towards higher frequencies if the current is in the opposite direction to the direction of the ocean wave. The wave-current interaction effect of the ocean current is much stronger than that of the nonlinear wave-wave interaction. The kurtosis of the nonlinear fractal ocean surface is larger than that of the linear fractal ocean surface. The effect of the current on skewness of the probability distribution function is negligible. Therefore, the ocean wave spectrum is notably changed by the surface current and the change should be detectable in the electromagnetic backscattering signal. Project supported by the National Natural Science Foundation of China (Grant No. 41276187), the Global Change Research Program of China (Grant No. 2015CB953901), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), the Program for the Innovation Research and Entrepreneurship Team in Jiangsu Province, China, the Canadian Program on Energy Research and Development, and the Canadian World Class Tanker Safety Service.
Parallel Computation of Ocean-Atmosphere-Wave Coupled Storm Surge Model
NASA Astrophysics Data System (ADS)
Kim, K.; Yamashita, T.
2003-12-01
Ocean-atmosphere interactions are very important in the formation and development of tropical storms. These interactions dominate the exchange of heat, momentum, and moisture fluxes. Heat flux is usually computed using a bulk formula, in which the air-sea interface supplies heat energy to the atmosphere and to the storm. The dynamical interaction is most often one way: it is the atmosphere that drives the ocean. The winds transfer momentum to both ocean surface waves and the ocean current, and wind waves play an important role in the exchange of momentum, heat, and mass between the atmosphere and the ocean. Storm surges can be considered as phenomena of mean sea-level change, resulting from the frictional stresses of strong winds blowing toward the land and causing set-up of the water level; the low atmospheric pressure at the centre of the cyclone can additionally raise the sea level. In addition to this rise in water level, another wave factor must be considered: a rise of mean sea level due to white-cap wave dissipation. In bounded bodies of water, such as small seas, wind-driven sea-level set-up is much more serious than the inverted barometer effect, and the effects of wind waves on the wind-driven current play an important role. It is therefore necessary to develop a coupled system of a full spectral third-generation wind-wave model (WAM or WAVEWATCH III), a meso-scale atmosphere model (MM5) and a coastal ocean model (POM) to simulate these physical interactions. Because the components of the coupled system are too computationally demanding for a single processor, a parallel computing system must be developed. In this study we first developed the coupled system of the atmosphere model, the ocean wave model and the coastal ocean model on a Beowulf cluster for storm surge simulation. It was applied to the storm surge caused by Typhoon Bart (T9918) in the Yatsushiro Sea.
The atmosphere and ocean models were parallelized using SPMD methods. The wave-current interface model was developed by defining the wave breaking stresses, and a coupling program was developed to collect and distribute the exchanged data within the parallel system. All models and the coupler execute concurrently, each performing its own computation and passing data in a coordinated way. MPMD programming was used to couple the models: the coupler and each model are assigned to separate process groups, computation proceeds within each group, and data exchange uses global message passing. Data are exchanged every 60 seconds of model time, the least common multiple of the time steps of the atmosphere, wave and ocean models. The coupled model was applied to the storm surge simulation in the Yatsushiro Sea, where the observed maximum surge height could not be reproduced by a numerical model that did not include the wave breaking stress. The simulation including the wave breaking stress effects is confirmed to reproduce the observed maximum height of 450 cm at Matsuai.
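The exchange schedule described above (data passed every 60 seconds of model time, the least common multiple of the component time steps) can be sketched as follows. The individual component step sizes here are assumed for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the coupled time-stepping described above: each
# component advances with its own time step, and the coupler exchanges data
# at the least common multiple of the steps (60 s in the paper's setup).
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def exchange_interval(steps):
    """Least common multiple of all component time steps (seconds)."""
    out = steps[0]
    for s in steps[1:]:
        out = lcm(out, s)
    return out

# Assumed component steps (illustrative, not from the paper): atmosphere 20 s,
# wave 30 s, ocean 12 s -> exchange every 60 s of model time.
steps = {"atmosphere": 20, "wave": 30, "ocean": 12}
dt_exchange = exchange_interval(list(steps.values()))

exchanges = []
for t in range(0, 181):  # march 180 s of model time in 1 s ticks
    if t > 0 and t % dt_exchange == 0:
        exchanges.append(t)  # coupler collects and redistributes fields here
```

Choosing the least common multiple ensures that every component sits exactly on a step boundary at exchange time, so no interpolation in time is needed when the coupler collects and redistributes fields.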
Optimisation of a parallel ocean general circulation model
NASA Astrophysics Data System (ADS)
Beare, M. I.; Stevens, D. P.
1997-10-01
This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
NASA Astrophysics Data System (ADS)
Bruno, B. C.; Hsia, M.; Wiener, C.
2012-12-01
Climate change is not just an atmospheric phenomenon. It has serious impacts on the ocean, such as sea level rise, ocean acidification, and coral bleaching. Ocean FEST (Families Exploring Science Together) aims to educate participants about how increasing carbon dioxide is affecting our oceans, and to inspire students to pursue ocean, earth and environmental science careers. Throughout the program, participants examine their everyday decisions and the impact of their choices on the planet's climate and oceans. Ocean FEST is a two-hour program that explores the ocean and relevant environmental topics through six hands-on science activities. Activities are designed so students can see how globally important issues (e.g., climate change and ocean acidification) have local effects (e.g., sea level rise, coastal erosion, coral bleaching). The program ends with a career component, drawing parallels between the program activities and the activities done by "real scientists" in their jobs. Over the past three years, we have conducted over 60 Ocean FEST events. Evaluations are conducted at selected events using electronic surveys, which students and parents complete immediately prior to (pre-survey) and following (post-survey) the program. Survey items were developed and cognitively tested in collaboration with professional evaluators from the American Institutes for Research. The nine-item survey includes items on science content knowledge, personal responsibility, and career interest. For each survey item, participants are asked to indicate agreement (coded as 2.0), disagreement (1.0) or don't know (1.5). By comparing the pre- and post-survey results, we can evaluate program efficacy. For example, one survey item is: "I can do something every day to help fight global climate change." Student mean data moved from 1.78 pre-survey to 1.89 post-survey, which is a statistically significant gain at p < .001.
Mean parent data for this same item moved from 1.90 pre-survey to 1.96 post-survey, which is again a statistically significant gain at p < .001. In summary, we have found positive statistically significant gains on all survey items for students, and on all but one survey item for parents. These results strongly indicate program efficacy. For more information, please visit our web site: oceanfest.soest.hawaii.edu
NASA Astrophysics Data System (ADS)
Tao, Xie; William, Perrie; Shang-Zhuo, Zhao; He, Fang; Wen-Jin, Yu; Yi-Jun, He
2016-07-01
Sea surface current has a significant influence on electromagnetic (EM) backscattering signals and may constitute a dominant synthetic aperture radar (SAR) imaging mechanism. An effective EM backscattering model for a one-dimensional drifting fractal sea surface is presented in this paper. This model is used to simulate EM backscattering signals from the drifting sea surface. Numerical results show that ocean currents have a significant influence on EM backscattering signals from the sea surface. The normalized radar cross section (NRCS) discrepancies between the model for a coupled wave-current fractal sea surface and the model for an uncoupled fractal sea surface increase with the increase of incidence angle, as well as with increasing ocean currents. Ocean currents that are parallel to the direction of the wave can weaken the EM backscattering signal intensity, while the EM backscattering signal is intensified by ocean currents propagating oppositely to the wave direction. The model presented in this paper can be used to study the SAR imaging mechanism for a drifting sea surface. Project supported by the National Natural Science Foundation of China (Grant No. 41276187), the Global Change Research Program of China (Grant No. 2015CB953901), the Priority Academic Program Development of Jiangsu Higher Education Institutions, China, the Program for the Innovation Research and Entrepreneurship Team in Jiangsu Province, China, the Canadian Program on Energy Research and Development, and the Canadian World Class Tanker Safety Service Program.
The positive Indian Ocean Dipole-like response in the tropical Indian Ocean to global warming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yiyong; Lu, Jian; Liu, Fukai
2016-02-04
Climate models project a positive Indian Ocean Dipole (pIOD)-like SST response in the tropical Indian Ocean to global warming. By employing the Community Earth System Model (CESM) and applying an overriding technique to its ocean component Parallel Ocean Program version 2 (POP2), this study investigates the similarity and difference of the formation mechanisms for the changes in the tropical Indian Ocean during the pIOD versus global warming. Results show that their formation processes and related seasonality are quite similar; in particular, the Bjerknes feedback is the leading mechanism in producing the anomalous cooling over the eastern tropics in both cases. Some differences are also found, including that the cooling effect of the vertical advection over the eastern tropical Indian Ocean is dominated by the anomalous vertical velocity during the pIOD while it is dominated by the anomalous upper-ocean stratification under global warming. Lastly, these findings above are further examined with an analysis of the mixed layer heat budget.
Modeling seasonality of ice and ocean carbon production in the Arctic
NASA Astrophysics Data System (ADS)
Jin, M.; Deal, C. M.; Ji, R.
2011-12-01
In the Arctic Ocean, both phytoplankton and sea ice algae are important contributors to primary production and the Arctic food web. Copepods in Arctic regions have developed feeding habits that depend on the timing between the ice algal bloom and the subsequent phytoplankton bloom. A mismatch in this timing due to climate change could have dramatic consequences for the food web, as shown by some regional observations. In this study, a global coupled ice-ocean-ecosystem model was used to assess the seasonality of the ice algal and phytoplankton blooms in the Arctic. The ice and ocean ecosystem modules are fully coupled in the physical model POP-CICE (Parallel Ocean Program-Los Alamos Sea Ice Model). The model results are compared with various observations. The modeled ice and ocean carbon production were analyzed by region, along with their linkage to changes in the physical environment (such as ice concentration, water temperature, and light intensity) between low- and high-ice years.
OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.
2016-12-01
The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic Cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15- to 30-year ocean science datasets. Our parallel analytics engine extends the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and for storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: parallel generation (Spark on a compute cluster) of 15- to 30-year ocean climatologies (e.g. sea surface temperature or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; parallel detection (over the time-series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Niño or SST "blob" regions), or more complex, custom data-mining algorithms; shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; and scalable execution of all capabilities on a hybrid Cloud, using our on-premise OpenStack Cloud cluster or Amazon.
The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local "network" hop) so that we can efficiently access the thousands of files making up a three decade time-series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time-series, and parallel performance metrics.
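The climatology-anomaly-threshold pipeline described above can be sketched in a few lines. This is a minimal illustration, not the OceanXtremes implementation; the grids, threshold, and values below are invented.

```python
# Minimal sketch (not the OceanXtremes code) of the anomaly pipeline:
# a per-pixel climatology from a multi-day series, daily anomalies against
# it, and flagging of days whose area-averaged anomaly exceeds a threshold.

def climatology(series):
    """Per-pixel mean over time; series is a list of 2-D grids (lists of rows)."""
    nt = len(series)
    ny, nx = len(series[0]), len(series[0][0])
    return [[sum(series[t][j][i] for t in range(nt)) / nt
             for i in range(nx)] for j in range(ny)]

def anomalies(series, clim):
    """Daily grids minus the climatology, pixel by pixel."""
    return [[[series[t][j][i] - clim[j][i]
              for i in range(len(clim[0]))]
             for j in range(len(clim))] for t in range(len(series))]

def flag_days(anoms, threshold):
    """Return indices of days whose area-averaged anomaly exceeds threshold."""
    flagged = []
    for t, grid in enumerate(anoms):
        vals = [v for row in grid for v in row]
        if sum(vals) / len(vals) > threshold:
            flagged.append(t)
    return flagged

# Toy 2x2 SST grids for 4 "days": day 3 is anomalously warm.
sst = [[[20.0, 20.0], [20.0, 20.0]],
       [[20.0, 21.0], [20.0, 21.0]],
       [[20.0, 19.0], [20.0, 19.0]],
       [[23.0, 24.0], [23.0, 24.0]]]
clim = climatology(sst)
warm_days = flag_days(anomalies(sst, clim), threshold=1.0)
```

In the real system each of these three stages runs in parallel over tiles with Spark, and the climatology and tile statistics are pre-computed and cached rather than recomputed per query.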
Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.
The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences applying it to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
Department of Defense High Performance Computing Modernization Program. 2006 Annual Report
2007-03-01
We successfully completed several software development projects that introduced parallel, scalable production software now in use across the Department. They are developing and deploying weather and ocean models that allow our soldiers, sailors, marines and airmen to plan missions more effectively and to navigate adverse environments safely. They are modeling molecular interactions leading to the development of higher energy fuels and munitions.
CICE, The Los Alamos Sea Ice Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth; Lipscomb, William; Jones, Philip
The Los Alamos sea ice model (CICE) is the result of an effort to develop a computationally efficient sea ice component for a fully coupled atmosphere–land–ocean–ice global climate model. It was originally designed to be compatible with the Parallel Ocean Program (POP), an ocean circulation model developed at Los Alamos National Laboratory for use on massively parallel computers. CICE has several interacting components: a vertical thermodynamic model that computes local growth rates of snow and ice due to vertical conductive, radiative and turbulent fluxes, along with snowfall; an elastic-viscous-plastic model of ice dynamics, which predicts the velocity field of the ice pack based on a model of the material strength of the ice; an incremental remapping transport model that describes horizontal advection of the areal concentration, ice and snow volume and other state variables; and a ridging parameterization that transfers ice among thickness categories based on energetic balances and rates of strain. It also includes a biogeochemical model that describes evolution of the ice ecosystem. The CICE sea ice model is used for climate research as one component of complex global earth system models that include atmosphere, land, ocean and biogeochemistry components. It is also used for operational sea ice forecasting in the polar regions and in numerical weather prediction models.
A new simple concept for ocean colour remote sensing using parallel polarisation radiance
He, Xianqiang; Pan, Delu; Bai, Yan; Wang, Difeng; Hao, Zengzhou
2014-01-01
Ocean colour remote sensing has supported research on subjects ranging from marine ecosystems to climate change for almost 35 years. However, as the framework for ocean colour remote sensing is based on the radiation intensity at the top-of-atmosphere (TOA), the polarisation of the radiation, which contains additional information on atmospheric and water optical properties, has largely been neglected. In this study, we propose a new, simple concept for ocean colour remote sensing that uses parallel polarisation radiance (PPR) instead of the traditional radiation intensity. We use vector radiative transfer simulation and polarimetric satellite sensing data to demonstrate that using PPR has two significant advantages: it effectively diminishes the sun glint contamination and enhances the ocean colour signal at the TOA. This concept may open new doors for ocean colour remote sensing. We suggest that the next generation of ocean colour sensors should measure PPR to enhance observational capability. PMID:24434904
Climate Ocean Modeling on Parallel Computers
NASA Technical Reports Server (NTRS)
Wang, P.; Cheng, B. N.; Chao, Y.
1998-01-01
Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.
How To Promote Data Quality And Access? Publish It!
NASA Astrophysics Data System (ADS)
Carlson, D. J.; Pfeiffenberger, H.
2011-12-01
Started during IPY 2007-2008, the Earth System Science Data journal (Copernicus) has now 'tested the waters' of earth system data publishing for approximately 2 years with some success. The journal has published more than 30 data sets, of remarkable breadth and variety, all under a Creative Commons Attribution license. Users can now find well-described, quality-controlled and freely accessible data on soils, permafrost, sediment transport, ice sheets, surface radiation, ocean-atmosphere fluxes, ocean chemistry, gravity fields, and combined radar and web cam observations of the Eyjafjallajökull eruption plume. Several of the data sets derive specifically from IPY or from polar regions, but a large portion, including a substantial special issue on ocean carbon, cover broad temporal and geographic domains; the contributors themselves come from leading science institutions around the world. ESSD has attracted the particular interest of international research teams, particularly those who, as in the case of ocean carbon data, have spent many years gathering, collating and calibrating global data sets under long-term named programs, but who lack within those programs the mechanisms to distribute those data sets widely outside their specialist teams and to ensure proper citation credit for those remarkable collaborative data processing efforts. An in-progress special issue on global ocean plankton function types, again representing years of international data collaboration, provides a further example of ESSD utility to large research programs. We anticipate an interesting test case of parallel special issues with companion science journals - data sets in ESSD to accompany science publications in a prominent research journal. We see the ESSD practices and products as useful steps to increase quality of and access to important data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamnes, K.; Ellingson, R.G.; Curry, J.A.
1999-01-01
Recent climate modeling results point to the Arctic as a region that is particularly sensitive to global climate change. The Arctic warming predicted by the models to result from the expected doubling of atmospheric carbon dioxide is two to three times the predicted mean global warming, and considerably greater than the warming predicted for the Antarctic. The North Slope of Alaska-Adjacent Arctic Ocean (NSA-AAO) Cloud and Radiation Testbed (CART) site of the Atmospheric Radiation Measurement (ARM) Program is designed to collect data on temperature-ice-albedo and water vapor-cloud-radiation feedbacks, which are believed to be important to the predicted enhanced warming in the Arctic. The most important scientific issues of Arctic, as well as global, significance to be addressed at the NSA-AAO CART site are discussed, and a brief overview of the current approach toward, and status of, site development is provided. ARM radiometric and remote sensing instrumentation is already deployed and taking data in the perennial Arctic ice pack as part of the SHEBA (Surface Heat Budget of the Arctic ocean) experiment. In parallel with ARM's participation in SHEBA, the NSA-AAO facility near Barrow was formally dedicated on 1 July 1997 and began routine data collection early in 1998. This schedule permits the US Department of Energy's ARM Program, NASA's Arctic Cloud program, and the SHEBA program (funded primarily by the National Science Foundation and the Office of Naval Research) to be mutually supportive. In addition, location of the NSA-AAO Barrow facility on National Oceanic and Atmospheric Administration land immediately adjacent to its Climate Monitoring and Diagnostic Laboratory Barrow Observatory includes NOAA in this major interagency Arctic collaboration.
NASA Astrophysics Data System (ADS)
Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen
2016-07-01
The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a quantitative measure of the variability of an ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important.
Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble from those that should not, as well as providing a simple, objective, and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
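The gridpoint scoring described above, a standard score against the ensemble at each point followed by counting the fraction of points that exceed a threshold, can be sketched in a few lines. This is an illustrative reconstruction, not the POP-ECT code; the function name, array shapes, and threshold values are assumptions:

```python
import numpy as np

def pop_ect(ensemble, new_run, z_threshold=3.0, fail_fraction=0.05):
    """Sketch of the POP-ECT idea: score a new run against an ensemble,
    grid point by grid point.

    ensemble: array of shape (n_members, ny, nx)
    new_run:  array of shape (ny, nx)
    Returns the fraction of "extreme" points and a pass/fail verdict.
    """
    mean = ensemble.mean(axis=0)
    std = ensemble.std(axis=0, ddof=1)
    std = np.where(std == 0.0, np.finfo(float).eps, std)  # guard flat points
    z = np.abs(new_run - mean) / std                      # standard score per point
    frac_extreme = float(np.mean(z > z_threshold))        # fraction of failing points
    return frac_extreme, frac_extreme <= fail_fraction    # consistent if few are extreme
```

A run drawn from the same distribution as the ensemble passes, while a strongly shifted run fails, which mirrors the distinguishability test described in the abstract.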
Design Patterns to Achieve 300x Speedup for Oceanographic Analytics in the Cloud
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Greguska, F. R., III; Huang, T.; Quach, N.; Wilson, B. D.
2017-12-01
We describe how we achieve super-linear speedup over standard approaches for oceanographic analytics on a cluster computer and the Amazon Web Services (AWS) cloud. NEXUS is an open source platform for big data analytics in the cloud that enables this performance through a combination of horizontally scalable data parallelism with Apache Spark and rapid data search, subset, and retrieval with tiled array storage in cloud-aware NoSQL databases like Solr and Cassandra. NEXUS is the engine behind several public portals at NASA, and OceanWorks is a newly funded project for the ocean community that will mature and extend this capability for improved data discovery, subset, quality screening, analysis, matchup of satellite and in situ measurements, and visualization. We review the Python language API for Spark and how to use it to quickly convert existing programs to use Spark to run with cloud-scale parallelism, and discuss strategies to improve performance. We explain how partitioning the data over space, time, or both leads to algorithmic design patterns for Spark analytics that can be applied to many different algorithms. We use NEXUS analytics as examples, including area-averaged time series, time-averaged map, and correlation map.
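The space/time partitioning design pattern can be illustrated in plain Python. The real system distributes per-tile work as Spark tasks over tiled array storage; here the "map" results are given as plain tuples whose layout and names are hypothetical, and only the "reduce" step is shown:

```python
from collections import defaultdict

# Each record is a (time_index, tile_id, tile_sum, tile_count) tuple, as if
# emitted by a per-tile map step over spatially partitioned storage.
def area_averaged_time_series(tile_records):
    """Reduce per-tile partial sums into one area-averaged value per time step."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for t, _tile, s, n in tile_records:   # group partial results by time key
        sums[t] += s
        counts[t] += n
    # combine: average over all points contributing to each time step
    return {t: sums[t] / counts[t] for t in sums}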
The Utility of SAR to Monitor Ocean Processes.
1981-11-01
The radar echo received from ocean waves depends on the motion of the scattering surfaces. The polarization of the radiation is defined by the direction of the electric field intensity vector, E; for example, a horizontally polarized wave has its E vector parallel to the scattering surface. Figures include an oil spill off the East Coast of the United States and L-band parallel- and cross-polarized SAR imagery of ice in the Beaufort Sea.
Monthly Atmospheric 13C/12C Isotopic Ratios for 11 SIO Stations (1977-2008)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, R. F.; Piper, S. C.; Bollenbacher, A. F.
Stable isotopic measurements for atmospheric 13C/12C and 18O/16O at global sampling sites were initiated by Dr. C.D. Keeling and co-workers at Scripps Institution of Oceanography (SIO) in 1977. These isotopic measurements complement the continuing global atmospheric and oceanic CO2 measurements initiated by Keeling in 1957. This work is currently being continued under the direction of R.F. Keeling, who also runs a parallel program at SIO to measure changes in atmospheric O2 and Ar abundances (Scripps O2 Program). A more complete set of 13CO2 data is found online at http://scrippsco2.ucsd.edu/data/atmospheric_co2.html
Ocean Modeling and Visualization on Massively Parallel Computer
NASA Technical Reports Server (NTRS)
Chao, Yi; Li, P. Peggy; Wang, Ping; Katz, Daniel S.; Cheng, Benny N.
1997-01-01
Climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.
Fast I/O for Massively Parallel Applications
NASA Technical Reports Server (NTRS)
OKeefe, Matthew T.
1996-01-01
The two primary goals for this report were the design, construction, and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, further work in parallel display systems was required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.
High-resolution coupled ice sheet-ocean modeling using the POPSICLES model
NASA Astrophysics Data System (ADS)
Ng, E. G.; Martin, D. F.; Asay-Davis, X.; Price, S. F.; Collins, W.
2014-12-01
It is expected that a primary driver of future change of the Antarctic ice sheet will be changes in submarine melting driven by incursions of warm ocean water into sub-ice shelf cavities. Correctly modeling this response on a continental scale will require high-resolution modeling of the coupled ice-ocean system. We describe the computational and modeling challenges in our simulations of the full Southern Ocean coupled to a continental-scale Antarctic ice sheet model at unprecedented spatial resolutions (0.1 degree for the ocean model and adaptive mesh refinement down to 500m in the ice sheet model). The POPSICLES model couples the POP2x ocean model, a modified version of the Parallel Ocean Program (Smith and Gent, 2002), with the BISICLES ice-sheet model (Cornford et al., 2012) using a synchronous offline-coupling scheme. Part of the PISCEES SciDAC project and built on the Chombo framework, BISICLES makes use of adaptive mesh refinement to fully resolve dynamically-important regions like grounding lines and employs a momentum balance similar to the vertically-integrated formulation of Schoof and Hindmarsh (2009). Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests like MISMIP3D (Pattyn et al., 2013) and realistic configurations (Favier et al. 2014). POP2x includes sub-ice-shelf circulation using partial top cells (Losch, 2008) and boundary layer physics following Holland and Jenkins (1999), Jenkins (2001), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008) and other continental-scale simulations and melt-rate observations (Kimura et al., 2013; Rignot et al., 2013). For the POPSICLES Antarctic-Southern Ocean simulations, ice sheet and ocean models communicate at one-month coupling intervals.
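The synchronous offline coupling at one-month intervals described above can be sketched as a loop over coupling intervals. This is a toy illustration with stub models, not the POPSICLES coupler; the function names and the scalar "geometry" stand in for the real model states:

```python
def couple(ocean_step, ice_step, geometry, n_intervals):
    """Toy synchronous offline-coupling loop: in each coupling interval the
    ocean model produces melt rates under the current (fixed) ice geometry,
    then the ice model advances using those melt rates.

    ocean_step(geometry) -> melt
    ice_step(geometry, melt) -> new geometry
    """
    history = []
    for interval in range(n_intervals):
        melt = ocean_step(geometry)          # ocean sees fixed ice for one interval
        geometry = ice_step(geometry, melt)  # ice responds to sub-shelf melt
        history.append((interval, geometry))
    return geometry, history
```

With a stub ocean that melts 10% of the ice column per interval and a stub ice model that simply thins by that amount, three intervals reduce a 100 m column to 72.9 m, showing the alternating exchange pattern.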
NASA Astrophysics Data System (ADS)
Carbotte, S. M.; Canales, J.; Carton, H. D.; Nedimovic, M. R.; Han, S.; Marjanovic, M.; Gibson, J. C.; Janiszewski, H. A.; Horning, G.; Delescluse, M.; Watremez, L.; Farkas, A.; Biescas Gorriz, B.; Bornstein, G.; Childress, L. B.; Parker, B.
2012-12-01
The evolution of oceanic lithosphere involves incorporation of water into the physical and chemical structure of the crust and shallow mantle through fluid circulation, which initiates at the mid-ocean ridge and continues on the ridge flanks long after crustal formation. At subduction zones, water stored and transported with the descending plate is gradually released at depth, strongly influencing subduction zone processes. Cascadia is a young-lithosphere end member of the global subduction system where relatively little hydration of the downgoing Juan de Fuca (JdF) plate is expected due to its young age and presumed warm thermal state. However, numerous observations support the abundant presence of water within the subduction zone, suggesting that the JdF plate is significantly hydrated prior to subduction. Knowledge of the state of hydration of the JdF plate is limited, with few constraints on crustal and upper mantle structure. During the Cascadia Ridge-to-Trench experiment conducted in June-July 2012, over 4000 km of active-source seismic data were acquired as part of a study of the evolution and state of hydration of the crust and shallow mantle of the JdF plate prior to subduction at the Cascadia margin. Coincident long-streamer (8 km) multi-channel seismic (MCS) and wide-angle ocean bottom seismometer (OBS) data were acquired in a two-ship program with the R/V Langseth (MGL1211) and R/V Oceanus (OC1206A). Our survey included two ridge-perpendicular transects across the full width of the JdF plate, a long trench-parallel line ~10 km seaward of the Cascadia deformation front, as well as three fan lines to study mantle anisotropy. The plate transects were chosen to provide reference sections of JdF plate evolution over the maximum range of JdF plate ages (8-9 Ma), offshore two contrasting regions of the Cascadia subduction zone, and to provide the first continuous ridge-to-trench images acquired at any oceanic plate.
The trench-parallel line was designed to characterize variations in plate structure and hydration linked to JdF plate segmentation for over 450 km along the margin. Shipboard brute stacks of the MCS data reveal evidence for reactivation of abyssal hill faulting in the plate interior far from the trench. Ridgeward-dipping lower crustal reflectors are observed, similar to those observed in mature Pacific crust elsewhere, as well as conjugate reflectivity near the deformation front along the Oregon transect. Bright intracrustal reflectivity is also observed along the trench-parallel transect, with marked changes in reflectivity along the Oregon and Washington margins. Initial inspection of the OBS record sections indicates good quality data with the expected oceanic crustal and upper mantle P-wave arrivals: Ps and Pg refractions through sedimentary and igneous layers, respectively, PmP wide-angle reflections from the crust-mantle transition zone, and Pn upper mantle refractions. The Pg-PmP-Pn triplication is typically observed at 40-50 km source-receiver offsets. Pn characteristics show evidence for azimuthally anisotropic upper-mantle propagation: along the plate transects Pn is typically weaker and difficult to observe beyond ~80 km offsets, while along the trench-parallel transect Pn arrivals have higher amplitude and are easily observed up to source-receiver offsets of 160-180 km. An overview of the Cascadia Ridge-to-Trench data acquisition program and preliminary results will be presented.
NASA Astrophysics Data System (ADS)
Jakacki, Jaromir; Golenko, Mariya
2014-05-01
Two hydrodynamic models, the Princeton Ocean Model (POM) and the Parallel Ocean Program (POP), have been implemented for the Baltic Sea region that contains sites where chemical munitions were dumped during World War II. The models were configured from the same data sources: bathymetry, initial conditions, and external forcing were all based on identical data, and the horizontal resolutions of the two models are also very similar. Several simulations with different initial conditions have been performed, and the bottom currents from both models have been compared and analyzed. On this basis, the dangerous area and the critical time have been estimated. Lagrangian particle tracking and a passive tracer were also implemented, and from these results the probability of dangerous doses appearing, together with its time evolution, has been presented. This work was performed within the MODUM project, financially supported by NATO.
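Lagrangian particle tracking of the kind mentioned above can be sketched with a simple forward-Euler advection step. This is illustrative only; the velocity callable is a placeholder, not POM/POP bottom-current output, and operational trackers use higher-order schemes:

```python
def advect(positions, velocity, dt, n_steps):
    """Minimal Lagrangian particle-tracking sketch (forward Euler).

    positions: list of (x, y) particle coordinates
    velocity:  callable velocity(x, y) -> (u, v), here an arbitrary stand-in
               for a model's current field
    """
    for _ in range(n_steps):
        positions = [(x + dt * velocity(x, y)[0],   # step x by local u
                      y + dt * velocity(x, y)[1])   # step y by local v
                     for x, y in positions]
    return positions
```

In a uniform eastward current of 1 unit/s, a particle advected for four steps of dt = 0.5 moves 2 units east, as expected.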
Response of the tropical Pacific Ocean to El Niño versus global warming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Fukai; Luo, Yiyong; Lu, Jian
Climate models project an El Niño-like SST response in the tropical Pacific Ocean to global warming (GW). By employing the Community Earth System Model (CESM) and applying an overriding technique to its ocean component, Parallel Ocean Program version 2 (POP2), this study investigates the similarities and differences in the formation mechanisms of the changes in the tropical Pacific Ocean under El Niño and GW. Results show that, despite some similarities between the two scenarios, there are many significant distinctions between GW and El Niño: 1) the phase locking of the seasonal cycle reduction is more notable under GW compared with El Niño, implying more extreme El Niño events in the future; 2) in contrast to the penetration of the equatorial subsurface temperature anomaly that appears to propagate in the form of an oceanic equatorial upwelling Kelvin wave during El Niño, the GW-induced subsurface temperature anomaly manifests in the form of off-equatorial upwelling Rossby waves; 3) while significant across-equator northward heat transport (NHT) is induced by the wind stress anomalies associated with El Niño, little NHT is found at the equator due to a symmetric change in the shallow meridional overturning circulation, which appears to be weakened in both the North and South Pacific under GW; and 4) the maintaining mechanisms for the eastern equatorial Pacific warming are also substantially different.
Kimmel, Charles B.; Cresko, William A.; Phillips, Patrick C.; Ullmann, Bonnie; Currey, Mark; von Hippel, Frank; Kristjánsson, Bjarni K.; Gelmond, Ofer; McGuigan, Katrina
2014-01-01
Evolution of similar phenotypes in independent populations is often taken as evidence of adaptation to the same fitness optimum. However, the genetic architecture of traits might cause evolution to proceed more often toward particular phenotypes, and less often toward others, independently of the adaptive value of the traits. Freshwater populations of Alaskan threespine stickleback have repeatedly evolved the same distinctive opercle shape after divergence from an oceanic ancestor. Here we demonstrate that this pattern of parallel evolution is widespread, distinguishing oceanic and freshwater populations across the Pacific Coast of North America and Iceland. We test whether this parallel evolution reflects genetic bias by estimating the additive genetic variance-covariance matrix (G) of opercle shape in an Alaskan oceanic (putative ancestral) population. We find significant additive genetic variance for opercle shape and that G has the potential to be biasing, because of the existence of regions of phenotypic space with low additive genetic variation. However, evolution did not occur along major eigenvectors of G; rather, it occurred repeatedly in the same directions of high evolvability. We conclude that the parallel opercle evolution is most likely due to selection during adaptation to freshwater habitats, rather than due to biasing effects of opercle genetic architecture. PMID:22276538
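The comparison of divergence directions against G can be illustrated with a small numpy sketch: a standard measure of the evolvability of a direction beta is beta^T G beta for unit-length beta (in the spirit of Hansen and Houle's evolvability measure). The function name and the example matrix below are hypothetical:

```python
import numpy as np

def evolvability(G, beta):
    """Expected evolvability of a divergence direction beta under the
    additive genetic covariance matrix G: e = b^T G b for the unit
    vector b = beta / |beta|.  High values mean abundant additive
    genetic variance along that direction.
    """
    beta = np.asarray(beta, dtype=float)
    beta = beta / np.linalg.norm(beta)  # normalize to unit length
    return float(beta @ G @ beta)
```

For a diagonal G = diag(4, 1), evolvability is maximal along the first trait axis and minimal along the second, so repeated divergence along high-evolvability directions (rather than along the leading eigenvector per se) can be diagnosed by evaluating this quantity for observed divergence vectors.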
NASA Astrophysics Data System (ADS)
Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.
2002-12-01
The fastest supercomputer in the world, the Earth Simulator (total peak performance 40 TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims of the experiments are to investigate 1) the change in global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change, including the Asian monsoon rainfall pattern, tropical cyclones, and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° x 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic sea ice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is to be done in a Multi Program Multi Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (which is only 14% of the full resources of the Earth Simulator).
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Diansky, Nikolay; Zalesny, Vladimir
2010-05-01
An original program complex is proposed for the ocean circulation sigma-model developed at the Institute of Numerical Mathematics (INM), Russian Academy of Sciences (RAS). The complex can be used in various curvilinear orthogonal coordinate systems. In addition to the ocean circulation model, the complex contains a sea ice dynamics and thermodynamics model, as well as an original system for implementing atmospheric forcing based on both prescribed meteorological data and atmospheric model results. The complex can be used as the oceanic block of an Earth climate model, as well as for solving scientific and practical problems concerning the World Ocean and its individual oceans and seas. The program complex can be used effectively on parallel shared-memory computational systems and on contemporary personal computers. On the basis of the proposed complex, an ocean general circulation model (OGCM) was developed. The model is formulated in a curvilinear orthogonal coordinate system obtained by a conformal transformation of the standard geographical grid, which allowed us to locate the singularities of the grid system outside the integration domain. The horizontal resolution of the OGCM is 1 degree in longitude and 0.5 degree in latitude, with 40 non-uniform sigma-levels in depth. The model was integrated for 100 years, starting from the Levitus January climatology, using a realistic atmospheric annual cycle calculated from the CORE datasets. The experimental results showed that the model adequately reproduces the basic characteristics of large-scale World Ocean dynamics, in good agreement with both observational data and the results of the best climatic OGCMs. This OGCM is used as the oceanic component of the new version of the climate system model (CSM) developed at INM RAS.
The latter is now ready for new numerical experiments on modelling climate and its change according to IPCC (Intergovernmental Panel on Climate Change) scenarios within the scope of CMIP-5 (the Coupled Model Intercomparison Project). On the basis of the proposed complex, an eddy-resolving model of the Pacific Ocean circulation was also realized. The integration domain covers the Pacific from the Equator to the Bering Strait. The model's horizontal resolution is 0.125 degree, with 20 non-uniform sigma-levels in depth. The model adequately reproduces the large-scale structure of the circulation and its variability: Kuroshio meandering, ocean synoptic eddies, frontal zones, etc. The high variability of the Kuroshio is shown. The distribution of a contaminant assumed to be discharged near Petropavlovsk-Kamchatsky was simulated. The results reveal the structure of the contaminant distribution and improve our understanding of the processes that form hydrological fields in the North-West Pacific.
Understanding the El Niño-like Oceanic Response in the Tropical Pacific to Global Warming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Yiyong; Lu, Jian; Liu, Fukai
The enhanced central and eastern Pacific SST warming and the associated ocean processes under global warming are investigated using the ocean component of the Community Earth System Model (CESM), Parallel Ocean Program version 2 (POP2). The tropical SST warming pattern in the coupled CESM can be faithfully reproduced by the POP2 forced with surface fluxes computed using the aerodynamic bulk formula. By prescribing the wind stress and/or wind speed through the bulk formula, the effects of wind stress change and/or the wind-evaporation-SST (WES) feedback are isolated and their linearity is evaluated in this ocean-alone setting. Results show that, although the weakening of the equatorial easterlies contributes positively to the El Niño-like SST warming, 80% of the warming can be simulated by the POP2 without considering the effects of wind change in either mechanical or thermodynamic fluxes. This result points to the importance of the air-sea thermal interaction and the relative feebleness of the ocean dynamical process in the El Niño-like equatorial Pacific SST response to global warming. On the other hand, the wind stress change is found to play a dominant role in the oceanic response in the tropical Pacific, accounting for most of the changes in the equatorial ocean current system and thermal structures, including the weakening of the surface westward currents, the enhancement of the near-surface stratification, and the shoaling of the equatorial thermocline. Interestingly, greenhouse gas warming in the absence of wind stress change and WES feedback also contributes substantially to the changes in the subsurface equatorial Pacific. Further, this warming impact can be largely replicated by an idealized ocean experiment forced by a uniform surface heat flux, whereby, arguably, a purest form of oceanic dynamical thermostat is revealed.
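The aerodynamic bulk formula central to the overriding experiments can be illustrated for the latent heat flux, LH = rho * L_v * C_E * U * (q_s(SST) - q_a). This is a textbook-style sketch, not the CESM/POP2 implementation; the coefficient values, the Magnus-type saturation formula, and the function name are assumptions:

```python
import math

def latent_heat_flux(sst_c, t_air_c, rh, wind_speed,
                     rho_air=1.2, L_v=2.5e6, C_E=1.2e-3, p_hpa=1013.25):
    """Illustrative aerodynamic bulk formula for latent heat flux (W/m^2).

    sst_c, t_air_c: sea surface and air temperature (deg C)
    rh:             relative humidity of the air (0-1)
    wind_speed:     near-surface wind speed (m/s)
    Typical textbook values are used for air density, latent heat of
    vaporization, and the exchange coefficient C_E.
    """
    def q_sat(t_c):
        # saturation specific humidity (kg/kg) from a Magnus-type vapor pressure
        e_s = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))  # hPa
        return 0.622 * e_s / (p_hpa - 0.378 * e_s)
    q_s = q_sat(sst_c)          # saturation humidity at the sea surface
    q_a = rh * q_sat(t_air_c)   # actual humidity of the overlying air
    return rho_air * L_v * C_E * wind_speed * (q_s - q_a)
```

Because the flux is linear in wind speed, doubling U doubles the evaporative cooling, which is the mechanical half of the WES feedback that the overriding technique switches on and off.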
NASA Astrophysics Data System (ADS)
Martin, D. F.; Asay-Davis, X.; Price, S. F.; Cornford, S. L.; Maltrud, M. E.; Ng, E. G.; Collins, W.
2014-12-01
We present the response of the continental Antarctic ice sheet to sub-shelf-melt forcing derived from POPSICLES simulation results covering the full Antarctic Ice Sheet and the Southern Ocean spanning the period 1990 to 2010. Simulations are performed at 0.1 degree (~5 km) ocean resolution and ice sheet resolution as fine as 500 m using adaptive mesh refinement. A comparison of fully-coupled and comparable standalone ice-sheet model results demonstrates the importance of two-way coupling between the ice sheet and the ocean. The POPSICLES model couples the POP2x ocean model, a modified version of the Parallel Ocean Program (Smith and Gent, 2002), and the BISICLES ice-sheet model (Cornford et al., 2012). BISICLES makes use of adaptive mesh refinement to fully resolve dynamically-important regions like grounding lines and employs a momentum balance similar to the vertically-integrated formulation of Schoof and Hindmarsh (2009). Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests like MISMIP3D (Pattyn et al., 2013) and realistic configurations (Favier et al. 2014). POP2x includes sub-ice-shelf circulation using partial top cells (Losch, 2008) and boundary layer physics following Holland and Jenkins (1999), Jenkins (2001), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008) and other continental-scale simulations and melt-rate observations (Kimura et al., 2013; Rignot et al., 2013). A companion presentation, "Present-day circum-Antarctic simulations using the POPSICLES coupled land ice-ocean model" in session C027, describes the ocean-model perspective of this work, while we focus on the response of the ice sheet and on details of the model. The figure shows the BISICLES-computed vertically-integrated ice velocity field about 1 month into a 20-year coupled Antarctic run. Grounding lines are shown in green.
NASA Astrophysics Data System (ADS)
Martin, D. F.; Asay-Davis, X.; Cornford, S. L.; Price, S. F.; Ng, E. G.; Collins, W.
2015-12-01
We present POPSICLES simulation results covering the full Antarctic Ice Sheet and the Southern Ocean spanning the period from 1990 to 2010. We use the CORE v. 2 interannual forcing data to force the ocean model. Simulations are performed at 0.1° (~5 km) ocean resolution with adaptive ice sheet resolution as fine as 500 m to adequately resolve the grounding line dynamics. We discuss the effect of improved ocean mixing and subshelf bathymetry (vs. the standard Bedmap2 bathymetry) on the behavior of the coupled system, comparing time-averaged melt rates below a number of major ice shelves with those reported in the literature. We also present seasonal variability and decadal melting trends from several Antarctic regions, along with the response of the ice shelves and the consequent dynamic response of the grounded ice sheet. POPSICLES couples the POP2x ocean model, a modified version of the Parallel Ocean Program, and the BISICLES ice-sheet model. POP2x includes sub-ice-shelf circulation using partial top cells and the commonly used three-equation boundary layer physics. Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP) and other continental-scale simulations and melt-rate observations. BISICLES makes use of adaptive mesh refinement and a 1st-order accurate momentum balance similar to the L1L2 model of Schoof and Hindmarsh to accurately model regions of dynamic complexity, such as ice streams, outlet glaciers, and grounding lines. Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests (MISMIP-3d) and realistic configurations. The figure shows the BISICLES-computed vertically-integrated grounded ice velocity field 5 years into a 20-year coupled full-continent Antarctic-Southern-Ocean simulation. Submarine melt rates are painted onto the surface of the floating ice shelves. Grounding lines are shown in green.
Performance Improvements of the CYCOFOS Flow Model
NASA Astrophysics Data System (ADS)
Radhakrishnan, Hari; Moulitsas, Irene; Syrakos, Alexandros; Zodiatis, George; Nikolaides, Andreas; Hayes, Daniel; Georgiou, Georgios C.
2013-04-01
CYCOFOS, the Cyprus Coastal Ocean Forecasting and Observing System, has been operational since early 2002, providing daily sea current, temperature, salinity, and sea level forecasts for the next 4 and 10 days to end-users in the Levantine Basin, necessary for operational applications in marine safety, particularly predictions of oil spills and floating objects. The CYCOFOS flow model, like most of the coastal and sub-regional operational hydrodynamic forecasting systems of MONGOOS (the Mediterranean Oceanographic Network for the Global Ocean Observing System), is based on the POM (Princeton Ocean Model). CYCOFOS is nested within the MyOcean Mediterranean regional forecasting data and uses SKIRON and ECMWF for surface forcing. The increasing demand for ever higher-resolution data to meet coastal and offshore downstream applications motivated the parallelization of the CYCOFOS POM model. This development was carried out within the IPcycofos project, funded by the Cyprus Research Promotion Foundation. Parallel processing provides a viable solution to satisfy these demands without sacrificing accuracy or omitting any physical phenomena. Prior to the IPcycofos project, there had been several attempts to parallelise the POM, for example MP-POM. These existing parallel codes rely on specific, outdated hardware architectures and associated software. The objective of the IPcycofos project is to produce an operational parallel version of the CYCOFOS POM code that replicates the results of the serial POM code used in CYCOFOS. The parallelization of the CYCOFOS POM model uses the Message Passing Interface (MPI), implemented on commodity computing clusters running open source software and not depending on any specialized vendor hardware. The parallel CYCOFOS POM code is constructed in a modular fashion, allowing a fast re-locatable downscaled implementation.
The MPI implementation takes advantage of the Cartesian nature of the POM mesh and uses built-in MPI routines to split the mesh, using a weighting scheme, along longitude and latitude among the processors. Each processor works on its part of the model using domain decomposition techniques. The new parallel CYCOFOS POM code has been benchmarked against the serial POM version of CYCOFOS for speed, accuracy, and resolution, and the results are more than satisfactory. Even with a higher-resolution Levantine model domain, the parallel forecasts require much less time than the coarser serial CYCOFOS POM version, with identical accuracy.
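The weighted mesh split can be sketched in one dimension: assign contiguous rows of the mesh to processors so that the per-processor work (the sum of row weights, e.g. counts of wet points per latitude row) is roughly balanced. This is an illustrative greedy scheme, not the IPcycofos code, which splits along both longitude and latitude using MPI routines:

```python
def split_rows(weights, n_procs):
    """Weighted 1-D domain decomposition sketch: return (start, stop) row
    ranges, one per processor, with roughly equal total weight each."""
    total = sum(weights)
    target = total / n_procs          # ideal work share per processor
    ranges, start, acc = [], 0, 0.0
    for i, w in enumerate(weights):
        acc += w
        rows_left = len(weights) - i - 1
        blocks_left = n_procs - len(ranges) - 1
        # close a block once it reaches its share, keeping at least one
        # row available for each remaining processor
        if acc >= target and blocks_left > 0 and rows_left >= blocks_left:
            ranges.append((start, i + 1))
            start, acc = i + 1, 0.0
    ranges.append((start, len(weights)))  # last processor takes the remainder
    return ranges
```

With uniform weights the scheme reduces to an even split; with non-uniform weights (land-heavy rows weighted low) the heavily loaded rows are spread across processors, which is the point of the weighting.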
A Tutorial on Parallel and Concurrent Programming in Haskell
NASA Astrophysics Data System (ADS)
Peyton Jones, Simon; Singh, Satnam
This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs, with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, allowing programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.
NASA Astrophysics Data System (ADS)
Kelly, D. Clay; Zachos, James C.; Bralower, Timothy J.; Schellenberg, Stephen A.
2005-12-01
The carbonate saturation profile of the oceans shoaled markedly during a transient global warming event known as the Paleocene-Eocene thermal maximum (PETM) (circa 55 Ma). The rapid release of large quantities of carbon into the ocean-atmosphere system is believed to have triggered this intense episode of dissolution along with a negative carbon isotope excursion (CIE). The brevity (120-220 kyr) of the PETM reflects the rapid enhancement of negative feedback mechanisms within Earth's exogenic carbon cycle that served the dual function of buffering ocean pH and reducing atmospheric greenhouse gas levels. Detailed study of the PETM stratigraphy from Ocean Drilling Program Site 690 (Weddell Sea) reveals that the CIE recovery period, which postdates the CIE onset by ˜80 kyr, is represented by an expanded (˜2.5 m thick) interval containing a unique planktic foraminiferal assemblage strongly diluted by coccolithophore carbonate. Collectively, the micropaleontological and sedimentological changes preserved within the CIE recovery interval reflect a transient state when ocean-atmosphere chemistry fostered prolific coccolithophore blooms that suppressed the local lysocline to relatively deeper depths. A prominent peak in the abundance of the clay mineral kaolinite is associated with the CIE recovery interval, indicating that continental weathering/runoff intensified at this time as well (Robert and Kennett, 1994). Such parallel stratigraphic changes are generally consonant with the hypothesis that enhanced continental weathering/runoff and carbonate precipitation helped sequester carbon during the PETM recovery period (e.g., Dickens et al., 1997; Zachos et al., 2005).
NASA Astrophysics Data System (ADS)
Cianca, A.; Caudet, E.; Vega, D.; Barrera, C.; Hernandez Brito, J.
2016-02-01
The European Station for Time Series in the Ocean, Canary Islands (ESTOC) is located in the Eastern Subtropical North Atlantic Gyre (29°10′N, 15°30′W). ESTOC started operations in 1994 based on monthly ship-based sampling, in addition to hydrographic and sediment trap moorings. Since 2002, ESTOC has been part of the European network for deep sea ocean observatories through several projects, among others ANIMATE (Atlantic Network of Interdisciplinary Moorings and Time-series for Europe), EuroSITES (European Ocean Observatory Network), and the Fixed point Open Ocean Observatory network (FixO3). The main purpose of these projects was to improve the time resolution of the biogeochemical measurements through moored biogeochemical sensors. Additionally, ESTOC is included in the Marine-Maritime observational network of the Macaronesian region, which has been supported by the European overseas territories programs since 2009. This network aims to increase the quantity and quality of marine environmental observations. The goal is to understand phenomena that impact the environment, and consequently the socio-economy of the region, in order to attempt their prediction. With this purpose, ESTOC has included the use of autonomous vehicles (gliders) to increase the observational resolution and, by comparison with the parallel observational programs, to study biogeochemical processes at different time-scale resolutions. This study investigates the time variability of the dissolved oxygen and chlorophyll distributions in the water column, focusing on the diel cycle and looking at the relevance of this variability to the already known seasonal distributions. Our interest is in assessing net community production and remineralization rates through the use of oxygen variations, establishing the relationship between the DO anomaly values and those from the chlorophyll distribution in the water column.
Mississippi State University Center for Air Sea Technology FY95 Research Program
NASA Technical Reports Server (NTRS)
Yeske, Lanny; Corbin, James H.
1995-01-01
The Mississippi State University (MSU) Center for Air Sea Technology (CAST) evolved from the Institute for Naval Oceanography's (INO) Experimental Center for Mesoscale Ocean Prediction (ECMOP), which was started in 1989. MSU CAST subsequently began operation on 1 October 1992 under an Office of Naval Research (ONR) two-year grant, which ended on 30 September 1994. In FY95, MSU CAST was successful in obtaining five additional research grants from ONR, as well as several other research contracts from the Naval Oceanographic Office via NASA, the Naval Research Laboratory, the Army Corps of Engineers, and private industry. In the past, MSU CAST technical research and development has produced tools, systems, techniques, and procedures that improve efficiency and overcome deficiencies for both the operational and research communities within the Department of Defense, private industry, and the university ocean-modeling community. We continued this effort with the following thrust areas: to develop advanced methodologies and tools for model evaluation, validation, and visualization, both oceanographic and atmospheric; to develop a system-level capability for conducting temporally and spatially scaled ocean simulations that are driven by or responsive to ocean models, taking into consideration coupling to atmospheric models; to continue the existing oceanographic/atmospheric data-management task with emphasis on distributed databases in a network environment, with database optimization and standardization, including use of Mosaic and World Wide Web (WWW) access; and to implement high-performance parallel-computing technology for CAST ocean models.
"One-Stop Shopping" for Ocean Remote-Sensing and Model Data
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook
2006-01-01
OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an ocean model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as the MySQL database, Java web server (Apache/Tomcat), Live Access Server (LAS), interactive graphics with a Java applet on the client side and MatLab/GMT on the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D ocean model outputs generated by ROMS (Regional Ocean Modeling System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, binary, etc. The interactive visualization is provided by the graphics software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an ocean model with data assimilation on a remote computer.
Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server for
Ocean Drilling Program: Public Information: News
The Ocean Drilling Program was succeeded in 2003 by the Integrated Ocean Drilling Program (IODP).
NASA Astrophysics Data System (ADS)
Hellmer, Hartmut H.; Rhein, Monika; Heinemann, Günther; Abalichin, Janna; Abouchami, Wafa; Baars, Oliver; Cubasch, Ulrich; Dethloff, Klaus; Ebner, Lars; Fahrbach, Eberhard; Frank, Martin; Gollan, Gereon; Greatbatch, Richard J.; Grieger, Jens; Gryanik, Vladimir M.; Gryschka, Micha; Hauck, Judith; Hoppema, Mario; Huhn, Oliver; Kanzow, Torsten; Koch, Boris P.; König-Langlo, Gert; Langematz, Ulrike; Leckebusch, Gregor C.; Lüpkes, Christof; Paul, Stephan; Rinke, Annette; Rost, Bjoern; van der Loeff, Michiel Rutgers; Schröder, Michael; Seckmeyer, Gunther; Stichel, Torben; Strass, Volker; Timmermann, Ralph; Trimborn, Scarlett; Ulbrich, Uwe; Venchiarutti, Celia; Wacker, Ulrike; Willmes, Sascha; Wolf-Gladrow, Dieter
2016-11-01
In the early 1980s, Germany started a new era of modern Antarctic research. The Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI) was founded and important research platforms such as the German permanent station in Antarctica, today called Neumayer III, and the research icebreaker Polarstern were installed. The research primarily focused on the Atlantic sector of the Southern Ocean. In parallel, the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) started a priority program 'Antarctic Research' (since 2003 called SPP-1158) to foster and intensify the cooperation between scientists from different German universities and the AWI as well as other institutes involved in polar research. Here, we review the main findings in meteorology and oceanography of the last decade, funded by the priority program. The paper presents field observations and modelling efforts, extending from the stratosphere to the deep ocean. The research spans a large range of temporal and spatial scales, including the interaction of both climate components. In particular, radiative processes, the interaction of the changing ozone layer with large-scale atmospheric circulations, and changes in the sea ice cover are discussed. Climate and weather forecast models provide an insight into the water cycle and the climate change signals associated with synoptic cyclones. Investigations of the atmospheric boundary layer focus on the interaction between atmosphere, sea ice and ocean in the vicinity of polynyas and leads. The chapters dedicated to polar oceanography review the interaction between the ocean and ice shelves with regard to the freshwater input and discuss the changes in water mass characteristics, ventilation and formation rates, crucial for the deepest limb of the global, climate-relevant meridional overturning circulation.
They also highlight the associated storage of anthropogenic carbon as well as the cycling of carbon, nutrients and trace metals in the ocean with special emphasis on the Weddell Sea.
NASA Astrophysics Data System (ADS)
Fine, Rana A.; Walker, Dan
In June 1996, the National Research Council (NRC) formed the Committee on Major U.S. Oceanographic Research Programs to foster coordination among the large programs (e.g., World Ocean Circulation Experiment, Ocean Drilling Program, Ridge Interdisciplinary Global Experiment, and others) and examine their role in ocean research. In particular, the committee is charged with (1) enhancing information sharing and the coordinated implementation of the research plans of the major ongoing and future programs; (2) assisting the federal agencies and ocean sciences community in identifying gaps, as well as appropriate follow-on activities to existing programs; (3) making recommendations on how future major ocean programs should be planned, structured, and organized; and (4) evaluating the impact of major ocean programs on the understanding of the oceans, development of research facilities, education, and collegiality in the academic community. The activity was initiated at the request of the National Science Foundation (NSF) Division of Ocean Sciences, is overseen by the NRC's Ocean Studies Board (OSB), and is funded by both NSF and the Office of Naval Research.
Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As hardware and software technologies have advanced, the performance of parallel programs written with compiler directives has improved substantially. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. Owing to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based (OpenMP) parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS Parallel Benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs while achieving good performance.
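The pattern such directive-based tools generate, a loop whose iterations are split among threads sharing one address space, can be sketched in Python. This is only an illustrative analogue, not the CAPTools/OpenMP toolchain itself; the chunking mirrors OpenMP's static scheduling, and all function names are our own:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # Work done by one "thread" on its contiguous chunk of the loop.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    # Divide the iteration space into contiguous chunks, one per worker,
    # as an OpenMP "parallel for" with static scheduling would.
    step = (n + workers - 1) // workers
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    # The threads share the interpreter's memory (the shared-memory model);
    # the reduction over partial results happens after all chunks finish.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_squares(1000))  # equal to the serial sum of squares
```

(In CPython the global interpreter lock limits the speedup of pure-Python threads; the point here is the loop decomposition, not the measured performance.)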
Sohm, Jill A; Ahlgren, Nathan A; Thomson, Zachary J; Williams, Cheryl; Moffett, James W; Saito, Mak A; Webb, Eric A; Rocap, Gabrielle
2016-02-01
Marine picocyanobacteria, comprised of the genera Synechococcus and Prochlorococcus, are the most abundant and widespread primary producers in the ocean. More than 20 genetically distinct clades of marine Synechococcus have been identified, but their physiology and biogeography are not as thoroughly characterized as those of Prochlorococcus. Using clade-specific qPCR primers, we measured the abundance of 10 Synechococcus clades at 92 locations in surface waters of the Atlantic and Pacific Oceans. We found that Synechococcus partition the ocean into four distinct regimes distinguished by temperature, macronutrients and iron availability. Clades I and IV were prevalent in colder, mesotrophic waters; clades II, III and X dominated in the warm, oligotrophic open ocean; clades CRD1 and CRD2 were restricted to sites with low iron availability; and clades XV and XVI were only found in transitional waters at the edges of the other biomes. Overall, clade II was the most ubiquitous clade investigated and was the dominant clade in the largest biome, the oligotrophic open ocean. Co-occurring clades that occupy the same regime belong to distinct evolutionary lineages within Synechococcus, indicating that multiple ecotypes have evolved independently to occupy similar niches and represent examples of parallel evolution. We speculate that parallel evolution of ecotypes may be a common feature of diverse marine microbial communities that contributes to functional redundancy and the potential for resiliency.
Nadkarni, P M; Miller, P L
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
The Oceanographic Multipurpose Software Environment (OMUSE v1.0)
NASA Astrophysics Data System (ADS)
Pelupessy, Inti; van Werkhoven, Ben; van Elteren, Arjen; Viebahn, Jan; Candy, Adam; Portegies Zwart, Simon; Dijkstra, Henk
2017-08-01
In this paper we present the Oceanographic Multipurpose Software Environment (OMUSE). OMUSE aims to provide a homogeneous environment for existing or newly developed numerical ocean simulation codes, simplifying their use and deployment. In this way, numerical experiments that combine ocean models representing different physics or spanning different ranges of physical scales can be easily designed. Rapid development of simulation models is made possible through the creation of simple high-level scripts. The low-level core of the abstraction in OMUSE is designed to deploy these simulations efficiently on heterogeneous high-performance computing resources. Cross-verification of simulation models with different codes and numerical methods is facilitated by the unified interface that OMUSE provides. Reproducibility in numerical experiments is fostered by allowing complex numerical experiments to be expressed in portable scripts that conform to a common OMUSE interface. Here, we present the design of OMUSE as well as the modules and model components currently included, which range from a simple conceptual quasi-geostrophic solver to the global circulation model POP (Parallel Ocean Program). The uniform access to the codes' simulation state and the extensive automation of data transfer and conversion operations aids the implementation of model couplings. We discuss the types of couplings that can be implemented using OMUSE. We also present example applications that demonstrate the straightforward model initialization and the concurrent use of data analysis tools on a running model. We give examples of multiscale and multiphysics simulations by embedding a regional ocean model into a global ocean model and by coupling a surface wave propagation model with a coastal circulation model.
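The coupling style OMUSE enables, every code wrapped behind one uniform evolve/get/set interface and driven from a short script, can be illustrated with toy stand-ins. The class and method names below are illustrative only and are not OMUSE's actual API:

```python
class ToyModel:
    """Stand-in for a wrapped ocean code: whatever its internal numerics,
    it exposes the same evolve/get/set interface to the driver script."""
    def __init__(self, rate):
        self.rate, self.time, self.state = rate, 0.0, 0.0

    def evolve_model(self, t_end, dt=0.1):
        # Advance internal state to t_end in fixed steps.
        steps = int(round((t_end - self.time) / dt))
        for _ in range(steps):
            self.state += self.rate * dt
        self.time = t_end

    def get_state(self):
        return self.state

    def set_forcing(self, value):
        self.rate = value

# Driver script: a "global" model forces an embedded "regional" model at
# each coupling interval, echoing the one-way nesting described above.
global_model, regional_model = ToyModel(rate=1.0), ToyModel(rate=0.0)
t, dt_couple, t_end = 0.0, 1.0, 5.0
while t < t_end:
    t += dt_couple
    global_model.evolve_model(t)
    regional_model.set_forcing(global_model.get_state())
    regional_model.evolve_model(t)
```

The design point is that the driver never sees the codes' internals; swapping the quasi-geostrophic solver for POP would change only the constructor line, which is what makes cross-verification of models straightforward.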
NASA Astrophysics Data System (ADS)
Venti, Nicholas L.; Billups, Katharina; Herbert, Timothy D.
2017-02-01
Alkenone mass accumulation rates (MARs) provide a proxy for export productivity in the northwestern Pacific (Ocean Drilling Program Site 1208) spanning the late Pliocene through early Pleistocene (3.0-1.8 Ma). We investigate changes in productivity associated with global cooling during the onset and expansion of Northern Hemisphere glaciation (NHG). Alkenone MARs vary on obliquity timescales throughout, but the amplitude increases at 2.75 Ma, concurrent with the intensification of NHG and cooling of the sea surface by 3°C. The obliquity-scale variations in alkenone MARs parallel shipboard measurements of sediment color reflectance (%), with higher MARs significantly correlated (>95%) with darker (opal-rich) intervals. Variations in both lead benthic foraminiferal δ18O values by 1.5-2 kyr, suggesting that export productivity may be a contributing factor in, rather than a response to, the extent of continental glaciation. The biological pump is therefore a plausible mechanism for transferring atmospheric CO2 into the deep ocean during the onset of NHG and the ensuing obliquity-dominated climate regime. The obliquity-scale correlation between productivity and magnetic susceptibility is consistent with a link via westerly winds delivering terrigenous sediments and mixing the upper water column. Alkenone MARs also contain a 400 kyr modulation. Because this periodicity is a multiple of the residence time of carbon in the ocean, it may reflect inputs of new nutrients associated with eccentricity-forced changes in the terrestrial biosphere and weathering. We ascribe these findings to interactions between the East Asian winter monsoon and productivity in the North Pacific Ocean, perhaps contributing to Plio-Pleistocene climate change.
Parallel programming with Easy Java Simulations
NASA Astrophysics Data System (ADS)
Esquembre, F.; Christian, W.; Belloni, M.
2018-01-01
Nearly all of today's processors are multicore, and ideally programming and algorithm development utilizing the entire processor should be introduced early in the computational physics curriculum. Parallel programming is often not introduced because it requires a new programming environment and uses constructs that are unfamiliar to many teachers. We describe how we decrease the barrier to parallel programming by using a Java-based programming environment to treat problems in the usual undergraduate curriculum. We use the Easy Java Simulations programming and authoring tool to create the program's graphical user interface, together with objects based on those developed by Kaminsky [Building Parallel Programs (Course Technology, Boston, 2010)] to handle common parallel programming tasks. Shared-memory parallel implementations of physics problems, such as time evolution of the Schrödinger equation, are available as source code and as ready-to-run programs from the AAPT-ComPADRE digital library.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele; Borovikov, Anna Y.; Suarez, Max
1999-01-01
A massively parallel ensemble Kalman filter (EnKF) is used to assimilate temperature data from the TOGA/TAO array and altimetry from TOPEX/POSEIDON into a Pacific basin version of the NASA Seasonal to Interannual Prediction Project (NSIPP)'s quasi-isopycnal ocean general circulation model. The EnKF is an approximate Kalman filter in which the error-covariance propagation step is modeled by the integration of multiple instances of a numerical model. An estimate of the true error covariances is then inferred from the distribution of the ensemble of model state vectors. This implementation of the filter takes advantage of the inherent parallelism in the EnKF algorithm by running all the model instances concurrently. The Kalman filter update step also occurs in parallel by having each processor process the observations that occur in the region of physical space for which it is responsible. The massively parallel data assimilation system is validated by withholding some of the data and then quantifying the extent to which the withheld information can be inferred from the assimilation of the remaining data. The distributions of the forecast and analysis error covariances predicted by the EnKF are also examined.
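For a scalar state the EnKF analysis step reduces to a few lines, which makes the core idea visible: the forecast error variance is not propagated analytically but estimated from the ensemble spread. This sketch is our own simplification (observation operator H = 1, perturbed observations), not the NSIPP parallel code:

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_var, rng):
    """One scalar EnKF analysis step: estimate the forecast variance from
    the ensemble spread, then nudge each member toward a perturbed
    observation by the Kalman gain."""
    p = statistics.variance(ensemble)   # ensemble-based forecast variance
    k = p / (p + obs_var)               # Kalman gain for H = 1
    # Each member assimilates an independently perturbed observation,
    # which keeps the analysis spread statistically consistent.
    return [x + k * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(42)
prior = [rng.gauss(20.0, 2.0) for _ in range(100)]   # e.g. forecast SSTs
posterior = enkf_update(prior, obs=22.0, obs_var=0.5, rng=rng)
# The analysis mean moves toward the observation and the spread shrinks.
print(statistics.mean(prior), statistics.mean(posterior))
```

In the massively parallel setting described above, the ensemble members would be separate model instances run concurrently, and each processor would apply this update only for observations in its own subdomain.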
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
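The generational loop that GPP relies on can be sketched on a much simpler representation. Here a plain bit string (the classic OneMax problem) replaces the Multi-ALU register-machine programs of the paper, so only the select/mutate/replace cycle carries over; population size, mutation scheme and the fitness function are arbitrary choices of ours:

```python
import random

def evolve(bits=32, pop_size=40, generations=200, seed=1):
    """Minimal evolutionary loop: keep the fitter half (elitism), make one
    point-mutated child per survivor, repeat until the optimum appears."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)      # fitness = number of ones
        if sum(pop[0]) == bits:              # optimum reached
            break
        parents = pop[: pop_size // 2]
        children = []
        for parent in parents:
            child = parent[:]
            child[rng.randrange(bits)] ^= 1  # single point mutation
            children.append(child)
        pop = parents + children             # survivors + offspring
    return max(sum(ind) for ind in pop)

print(evolve())  # best fitness found (at most the number of bits)
```

In GPP the individuals would instead be MIMD register-machine programs evaluated on the MAP hardware, but the surrounding search loop has this same shape.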
Bilingual parallel programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; Overbeek, R.
1990-01-01
Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Robert; Bretherton, Chris; McFarquhar, Greg
2014-09-29
A workshop sponsored by the Department of Energy was convened at the University of Washington to discuss the state of knowledge of clouds, aerosols and air-sea interaction over the Southern Ocean and to identify strategies for reducing uncertainties in their representation in global and regional models. The Southern Ocean plays a critical role in the global climate system and is a unique pristine environment, yet other than from satellite, there have been sparse observations of clouds, aerosols, radiation and the air-sea interface in this region. Consequently, much is unknown about atmospheric and oceanographic processes and their linkage in this region. Approximately 60 scientists, including graduate students, postdoctoral fellows and senior researchers working in atmospheric and oceanic sciences at U.S. and foreign universities and government laboratories, attended the Southern Ocean Workshop. It began with a day of scientific talks, partly in plenary and partly in two parallel sessions, discussing the current state of the science for clouds, aerosols and air-sea interaction in the Southern Ocean. After the talks, attendees broke into two working groups; one focused on clouds and meteorology, and one focused on aerosols and their interactions with clouds. This was followed by more plenary discussion to synthesize the two working group discussions and to consider possible plans for organized activities to study clouds, aerosols and the air-sea interface in the Southern Ocean. The agenda and talk slides, including short summaries of the highlights of the parallel session talks developed by the session chairs, are available at http://www.atmos.washington.edu/socrates/presentations/SouthernOceanPresentations/.
Ocean Drilling Program: Cruise Information
The Ocean Drilling Program ended on 30 September 2003 and has been succeeded by the Integrated Ocean Drilling Program (IODP). The U.S. Implementing Organization (IODP-USIO) (Consortium for Ocean Leadership, Lamont-Doherty Earth Observatory, and Texas A&M University) continues to
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-16
... Oceanic and Atmospheric Administration's Coastal Ocean Program (COP) provides direct financial assistance.... The statutory authority for COP is Public Law 102-567 Section 201 (Coastal Ocean Program). In addition... to file annual progress reports and a project final report using COP formats. All of these...
NASA Astrophysics Data System (ADS)
Pelz, M.; Hoeberechts, M.; McLean, M. A.; Riddell, D. J.; Ewing, N.; Brown, J. C.
2016-12-01
This presentation outlines the authentic research experiences created by Ocean Networks Canada's Ocean Sense program, a transformative education program that connects students and teachers with place-based, real-time data via the Internet. This program, developed in collaboration with community educators, features student-centric activities, clearly outlined learning outcomes, assessment tools and curriculum-aligned content. Ocean Networks Canada (ONC), an initiative of the University of Victoria, develops, operates, and maintains cabled ocean observatory systems. Technologies developed on the world-leading NEPTUNE and VENUS observatories have been adapted for small coastal installations called "community observatories," which enable community members to directly monitor conditions in the local ocean environment. Data from these observatories are fundamental to lessons and activities in the Ocean Sense program. Marketed as Ocean Sense: Local observations, global connections, the program introduces middle and high school students to research methods in biology, oceanography and ocean engineering. It includes a variety of resources and opportunities to excite students and spark curiosity about the ocean environment. The program encourages students to connect their local observations to global ocean processes and the observations of students in other geographic regions. Connection to place and local relevance of the program are enhanced through an emphasis on Indigenous and place-based knowledge. The program promotes cross-cultural learning through the inclusion of Indigenous knowledge of the ocean. Ocean Sense provides students with an authentic research experience by connecting them to real-time data, often within their own communities. Using the freely accessible data portal, students can curate the data they need from a range of instruments and time periods.
Further, students are not restricted to their local community; if their question requires a greater range of data, they also have access to the other observatories in the network. Our presentation will explore the design, implementation and lessons learned from the ongoing development of the Ocean Sense program, from its inception to its current form today. Sample activities will be made available.
Application Portable Parallel Library
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes a heterogeneous collection of networked computers.) APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.
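The portability idea behind a library like APPL is that an application sees only generic send/receive calls, while the transport underneath may be shared memory, a backplane, or a network. A toy analogue in Python (local processes standing in for processors; this is not APPL's actual C interface):

```python
from multiprocessing import Pipe, Process

# The worker uses only blocking send/receive, the two primitives a
# message-passing library exposes regardless of the transport beneath.
def worker(conn):
    msg = conn.recv()        # blocking receive from the parent
    conn.send(msg * 2)       # reply
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send(21)
    print(parent.recv())     # -> 42
    p.join()
```

Because the worker code never names the transport, the same exchange would look identical whether the two endpoints were cores in one machine or hosts on a network, which is the portability property the abstract describes.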
Ocean Drilling Program: Science Operator
ODP Legacy: www.odplegacy.org; Integrated Ocean Drilling Program (IODP): www.iodp.org; IODP U.S. Implementing Organization (IODP-USIO): www.iodp-usio.org. The Ocean Drilling Program (ODP) was funded by the U.S. National Science Foundation and 22 international partners (JOIDES) to conduct basic research into the history of the ocean
North Pacific Mesoscale Coupled Air-Ocean Simulations Compared with Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koracin, Darko; Cerovecki, Ivana; Vellore, Ramesh
2013-04-11
Executive summary: The main objective of the study was to investigate atmosphere-ocean interaction processes in the western Pacific and, in particular, the effects of significant ocean heat loss in the Kuroshio and Kuroshio Extension regions on the lower and upper atmosphere. It is yet to be determined how significant these processes are on climate scales. The understanding of these processes also led us to develop a methodology for coupling the Weather Research and Forecasting (WRF) model with the Parallel Ocean Program (POP) model for western Pacific regional weather and climate simulations. We tested the NCAR-developed research software Coupler 7 for coupling the WRF and POP models and assessed its usability for regional-scale applications. We completed test simulations using the Coupler 7 framework, but implemented a standard WRF model code with options for both one- and two-way mode coupling. This type of coupling will allow us to seamlessly incorporate new WRF updates and versions in the future. We also performed a long-term WRF simulation (15 years) covering the entire North Pacific, as well as high-resolution simulations of a case study that included extreme ocean heat losses in the Kuroshio and Kuroshio Extension regions. Since extreme ocean heat loss occurs during winter cold-air outbreaks (CAOs), we simulated and analyzed in detail a case study of a severe CAO event in January 2000. We found that the ocean heat loss induced by CAOs is amplified by additional advection from mesocyclones forming over the southern part of the Japan Sea. Large-scale synoptic patterns, with an anomalously strong anticyclone over Siberia and Mongolia, a deep Aleutian Low, and the Pacific subtropical ridge, are a crucial setup for the CAO. It was found that the onset of the CAO is related to the breaking of atmospheric Rossby waves and vertical transport of vorticity that facilitates meridional advection.
The study also indicates that the intrinsic parameterization of the surface fluxes within the WRF model needs more evaluation and analysis.
Sea change: Charting the course for biogeochemical ocean time-series research in a new millennium
NASA Astrophysics Data System (ADS)
Church, Matthew J.; Lomas, Michael W.; Muller-Karger, Frank
2013-09-01
Ocean time-series provide vital information needed for assessing ecosystem change. This paper summarizes the historical context, major program objectives, and future research priorities for three contemporary ocean time-series programs: the Hawaii Ocean Time-series (HOT), the Bermuda Atlantic Time-series Study (BATS), and the CARIACO Ocean Time-Series. These three programs operate in physically and biogeochemically distinct regions of the world's oceans, with HOT and BATS located in the open-ocean waters of the subtropical North Pacific and North Atlantic, respectively, and CARIACO situated in the anoxic Cariaco Basin of the tropical Atlantic. All three programs sustain near-monthly shipboard occupations of their field sampling sites, with HOT and BATS beginning in 1988, and CARIACO initiated in 1996. The resulting data provide some of the only multi-disciplinary, decadal-scale determinations of time-varying ecosystem change in the global ocean. Facilitated by a scoping workshop (September 2010) sponsored by the Ocean Carbon Biogeochemistry (OCB) program, leaders of these time-series programs sought community input on existing program strengths and on future research directions. Themes that emerged from these discussions included: (1) shipboard time-series programs are key to informing our understanding of the connectivity between changes in ocean-climate and biogeochemistry; (2) the scientific and logistical support provided by shipboard time-series programs forms the backbone for numerous research and education programs, and future studies should be encouraged that seek mechanistic understanding of the ecological interactions underlying the biogeochemical dynamics at these sites; (3) detecting time-varying trends in ocean properties and processes requires consistent, high-quality measurements, so time-series must carefully document analytical procedures and, where possible, trace the accuracy of analyses to certified standards and internal reference materials; (4) leveraged implementation, testing, and validation of autonomous and remote observing technologies at time-series sites provide new insights into the spatiotemporal variability underlying ecosystem changes; and (5) the value of existing time-series data for formulating and validating ecosystem models should be promoted. In summary, the scientific underpinnings of ocean time-series programs remain as strong and important today as when these programs were initiated. The emerging data inform our knowledge of the ocean's biogeochemistry and ecology, and improve our predictive capacity about planetary change.
NASA Astrophysics Data System (ADS)
Duffy, J. E.
2016-02-01
Biodiversity - the variety of functional types of organisms - is the engine of marine ecosystem processes, including productivity, nutrient cycling, and carbon sequestration. Biodiversity remains a black box in much of ocean science, despite wide recognition that effectively managing human interactions with marine ecosystems requires understanding both structure and functional consequences of biodiversity. Moreover, the inherent complexity of biological systems puts a premium on data-rich, comparative approaches, which are best met via collaborative networks. The Smithsonian Institution's MarineGEO program links a growing network of partners conducting parallel, comparative research to understand change in marine biodiversity and ecosystems, natural and anthropogenic drivers of that change, and the ecological processes mediating it. The focus is on nearshore, seabed-associated systems where biodiversity and human population are concentrated and interact most, yet which fall through the cracks of existing ocean observing programs. MarineGEO offers a standardized toolbox of research modules that efficiently capture key elements of biological diversity and its importance in ecological processes across a range of habitats. The toolbox integrates high-tech (DNA-based, imaging) and low-tech protocols (diver surveys, rapid assays of consumer activity) adaptable to differing institutional capacity and resources. The model for long-term sustainability involves leveraging in-kind support among partners, adoption of best practices wherever possible, engagement of students and citizen scientists, and benefits of training, networking, and global relevance as incentives for participation. Here I highlight several MarineGEO comparative research projects demonstrating the value of standardized, scalable assays and parallel experiments for measuring fish and invertebrate diversity, recruitment, benthic herbivory and generalist predation, decomposition, and carbon sequestration. 
Key remaining challenges include consensus on protocols; integration of historical data; data management and access; and informatics. These challenges are common to other fields and prospects for progress in the near future are good.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)
2001-01-01
A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
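The compact-support localization described in this abstract can be sketched in a few lines. This is an illustrative toy on a 1-D grid, not the MvEnKF code itself: the ensemble, grid size, Gaspari-Cohn correlation function, and cutoff scale below are assumptions consistent with common EnKF practice, standing in for the three-dimensional canonical correlation function of the paper.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order compactly supported correlation; r = distance/c.
    Equals 1 at r=0 and is exactly zero for r >= 2."""
    r = np.abs(r)
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    f[m1] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5/3)*x**2 + 1.0
    x = r[m2]
    f[m2] = (1/12)*x**5 - 0.5*x**4 + 0.625*x**3 + (5/3)*x**2 - 5.0*x + 4.0 - (2/3)/x
    return f

n_grid, n_ens = 60, 10
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, n_grid)
# Toy ensemble: smooth random fields plus small-scale noise on a 1-D grid
ens = np.array([np.cos(2*np.pi*(x - rng.uniform())) + 0.1*rng.normal(size=n_grid)
                for _ in range(n_ens)])
anom = ens - ens.mean(axis=0)
B = anom.T @ anom / (n_ens - 1)   # raw sample covariance (rank-deficient, noisy tails)
dist = np.abs(x[:, None] - x[None, :])
C = gaspari_cohn(dist / 0.15)     # compact support: zero beyond distance 0.30
B_loc = B * C                     # Hadamard product kills spurious long-range covariances
```

Distant grid points end up with exactly zero covariance, while the diagonal (local variances) is untouched, which is precisely what makes the per-PE regionalization of the analysis step safe.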
NASA Astrophysics Data System (ADS)
Sushkevich, T. A.; Strelkov, S. A.; Maksakova, S. V.
2017-11-01
We discuss world-class national achievements in the theory of radiative transfer in the coupled atmosphere-ocean system, and the modern scientific potential being developed in Russia, which provides a methodological basis for theoretical and computational studies of radiation processes and radiation fields in natural environments using supercomputers and massively parallel processing, for problems of remote sensing and of Earth's climate. A model of the radiation field in the cloud-covered atmosphere-ocean system is presented that separates the contributions of the clouds, the atmosphere, and the ocean.
Ocean Drilling Science Plan to be released soon
NASA Astrophysics Data System (ADS)
Showstack, Randy
2011-04-01
The upcoming International Ocean Discovery Program, which is slated to operate from 2013 to 2023 and calls for an internationally funded program focused around four science themes, will pick up right where its predecessor, the Integrated Ocean Drilling Program, ends, explained Kiyoshi Suyehiro, president and chief executive officer of IODP, a convenient acronym that covers both programs. At a 5 April briefing at the 2011 European Geosciences Union General Assembly in Vienna, Austria, he outlined four general themes the new program will address. IODP involves 24 nations and utilizes different ocean drilling platforms that complement each other in drilling in different environments in the oceans.
The geological record of ocean acidification.
Hönisch, Bärbel; Ridgwell, Andy; Schmidt, Daniela N; Thomas, Ellen; Gibbs, Samantha J; Sluijs, Appy; Zeebe, Richard; Kump, Lee; Martindale, Rowan C; Greene, Sarah E; Kiessling, Wolfgang; Ries, Justin; Zachos, James C; Royer, Dana L; Barker, Stephen; Marchitto, Thomas M; Moyer, Ryan; Pelejero, Carles; Ziveri, Patrizia; Foster, Gavin L; Williams, Branwen
2012-03-02
Ocean acidification may have severe consequences for marine ecosystems; however, assessing its future impact is difficult because laboratory experiments and field observations are limited by their reduced ecologic complexity and sample period, respectively. In contrast, the geological record contains long-term evidence for a variety of global environmental perturbations, including ocean acidification, together with their associated biotic responses. We review events exhibiting evidence for elevated atmospheric CO2, global warming, and ocean acidification over the past ~300 million years of Earth's history, some with contemporaneous extinction or evolutionary turnover among marine calcifiers. Although similarities exist, no past event perfectly parallels future projections in terms of disrupting the balance of ocean carbonate chemistry, a consequence of the unprecedented rapidity of CO2 release currently taking place.
Cloud identification using genetic algorithms and massively parallel computation
NASA Technical Reports Server (NTRS)
Buckles, Bill P.; Petry, Frederick E.
1996-01-01
As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used, although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). If one is willing to run the experiment several times (say, 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain.
An extensive user's manual was written and distributed nationwide to scientists whose work might benefit from its availability. Several papers, including two journal articles, were produced.
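The island-model behavior studied in this abstract, subpopulations with shared breeding pools and periodic migration of individuals, can be sketched minimally. This is an illustrative toy, not the MasPar implementation: the onemax fitness, two-island setup, population size, and migration interval are all assumptions chosen for brevity.

```python
import random

def evolve_islands(n_bits=20, pop_size=30, gens=60, migrate_every=10, seed=0):
    """Two-island GA on bit-strings; returns the best individual found."""
    rng = random.Random(seed)
    fitness = sum  # onemax: number of 1 bits in the individual
    islands = [[[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
               for _ in range(2)]
    for g in range(gens):
        for pop in islands:
            elite = max(pop, key=fitness)
            nxt = [elite[:]]                       # elitism: keep the best unchanged
            while len(nxt) < pop_size:
                a, b = rng.sample(pop, 2)          # binary tournament selection
                child = max(a, b, key=fitness)[:]
                child[rng.randrange(n_bits)] ^= 1  # single-bit mutation
                nxt.append(child)
            pop[:] = nxt
        if (g + 1) % migrate_every == 0:           # migration between islands
            for src, dst in ((0, 1), (1, 0)):
                migrant = max(islands[src], key=fitness)[:]
                worst = min(range(pop_size), key=lambda i: fitness(islands[dst][i]))
                islands[dst][worst] = migrant      # migrant replaces the worst local
    return max((ind for pop in islands for ind in pop), key=fitness)
```

The migration step is the part the grant work investigated: how often and how many individuals to exchange controls the balance between island diversity and convergence speed.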
2008-03-01
this roughness is important for numerical modeling and prediction of the Arctic air-ice-ocean system, which will play a significant role as the US Navy increases... Model 1 is based on a sequence of plane-parallel layers, each with a constant gradient, whereas Model 2 is based on a series of flat layers of
NASA Oceanic Processes Program, fiscal year 1983
NASA Technical Reports Server (NTRS)
Nelson, R. M. (Editor); Pieri, D. C. (Editor)
1984-01-01
Accomplishments, activities, and plans are highlighted for studies of ocean circulation, air-sea interaction, ocean productivity, and sea ice. Flight projects discussed include TOPEX, the ocean color imager, the advanced RF tracking system, the NASA scatterometer, and the pilot ocean data system. Over 200 papers generated by the program are listed.
77 FR 8219 - Coastal Zone Management Program: Illinois
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Coastal Zone Management Program: Illinois AGENCY: Office of Ocean and Coastal Resource Management (OCRM), National Oceanic and... of Decision (ROD) for Federal Approval of the Illinois Coastal Management Program (ICMP). SUMMARY...
Partitioning problems in parallel, pipelined and distributed computing
NASA Technical Reports Server (NTRS)
Bokhari, S.
1985-01-01
The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
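The flavor of partitioning a chain-structured program can be illustrated with a standard dynamic program: assign contiguous blocks of module weights to processors so that the heaviest block, the bottleneck, is minimized. This is a hedged sketch of the underlying problem, not Bokhari's Sum-Bottleneck path algorithm itself; the weights and the block-recovery step are illustrative.

```python
def chain_partition(weights, p):
    """Optimal contiguous partition of a module chain onto p processors.
    Returns (bottleneck, blocks), where bottleneck is the heaviest block's weight."""
    n = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # cost[k][i]: best bottleneck using k blocks for the first i modules
    cost = [[INF] * (n + 1) for _ in range(p + 1)]
    cut = [[0] * (n + 1) for _ in range(p + 1)]
    cost[0][0] = 0.0
    for k in range(1, p + 1):
        for i in range(1, n + 1):
            for j in range(k - 1, i):   # last block covers modules j..i-1
                c = max(cost[k - 1][j], prefix[i] - prefix[j])
                if c < cost[k][i]:
                    cost[k][i], cut[k][i] = c, j
    # Recover the blocks by walking the recorded cut points backwards
    blocks, i = [], n
    for k in range(p, 0, -1):
        j = cut[k][i]
        blocks.append(weights[j:i])
        i = j
    return cost[p][n], blocks[::-1]
```

For weights [1, 2, 3, 4, 5] on two processors, the optimum splits after module 3, giving blocks of weight 6 and 9; the bottleneck is 9.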
Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes
NASA Technical Reports Server (NTRS)
Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved markedly. The introduction of OpenMP directives, the industrial standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline techniques used in the implementation of the tool and discuss the application of this tool on the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance that exceeds that of some commercial tools.
76 FR 80342 - Coastal Zone Management Program: Illinois
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Coastal Zone Management Program: Illinois AGENCY: Office of Ocean and Coastal Resource Management (OCRM), National Oceanic and... Environmental Impact Statement. SUMMARY: NOAA's Office of Ocean and Coastal Resource Management (OCRM) announces...
The structure and evolution of plankton communities
NASA Astrophysics Data System (ADS)
Longhurst, Alan R.
New understanding of the circulation of ancient oceans is not yet matched by progress in our understanding of their pelagic ecology, though it was the planktonic ecosystems that generated our offshore oil and gas reserves. Can we assume that present-day models of ecosystem function are also valid for ancient seas? This question is addressed by a study of over 4000 plankton samples to derive a comprehensive, global description of zooplankton community structure in modern oceans: this shows that copepods form only 50% of the biomass of all plankton, ranging from 70% in polar to 35% in tropical seas. Comparable figures are derived from 14 other taxonomic categories of zooplankton. For trophic groupings, the data indicate globally: gelatinous predators - 14%; gelatinous herbivores - 4%; raptorial predators - 33%; macrofiltering herbivores - 20%; macrofiltering omnivores - 25%; and detritivores - 3%. A simple, idealized model for the modern pelagic ecosystem is derived from these percentages which indicates that metazooplankton are not the most important consumers of pico- and nano-plankton production, which itself probably constitutes 90% of primary production in warm oceans. This model is then compared with candidate life-forms available in Palaeozoic and Mesozoic oceans to determine to what extent it is also valid for ancient ecosystems: it is concluded that it is probably unnecessary to postulate models fundamentally differing from it in order to accommodate the life-forms, both protozoic and metazoic, known to have populated ancient seas. Remarkably few life-forms have existed which cannot be paralleled in the modern ocean, which contains remarkably few life-forms which cannot be paralleled in the Palaeozoic ocean. As a first assumption, then, it is reasonable to assume that energy pathways in ancient oceans were similar to those we study today.
NASA Astrophysics Data System (ADS)
McLean, M. A.; Brown, J.; Hoeberechts, M.
2016-02-01
Ocean Networks Canada (ONC), an initiative of the University of Victoria, develops, operates, and maintains cabled ocean observatory systems. Technologies developed on the world-leading NEPTUNE and VENUS observatories have been adapted for small coastal installations called "community observatories," which enable community members to directly monitor conditions in the local ocean environment. In 2014, ONC pioneered an innovative educational program, Ocean Sense: Local observations, global connections, which introduces students and teachers to the technologies installed on community observatories. The program introduces middle and high school students to research methods in biology, oceanography and ocean engineering through hands-on activities. Ocean Sense includes a variety of resources and opportunities to excite students and spark curiosity about the ocean environment. The program encourages students to connect their local observations to global ocean processes and the observations of students in other geographic regions. The connection to place and local relevance of the program is further enhanced through an emphasis on Indigenous and place-based knowledge. ONC is working with coastal Indigenous communities in a collaborative process to include local knowledge, culture, and language in Ocean Sense materials. For this process to be meaningful and culturally appropriate, ONC is relying on the guidance and oversight of Indigenous community educators and knowledge holders. Ocean Sense also includes opportunities for Indigenous youth and teachers in remote communities to connect in person, including an annual Ocean Science Symposium and professional development events for teachers. Building a program which embraces multiple perspectives is effective both in making ocean science more relevant to Indigenous students and in linking Indigenous knowledge and place-based knowledge to ocean science.
An interactive parallel programming environment applied in atmospheric science
NASA Technical Reports Server (NTRS)
vonLaszewski, G.
1996-01-01
This article introduces an interactive parallel programming environment (IPPE) that simplifies the generation and execution of parallel programs. One of the tasks of the environment is to generate message-passing parallel programs for homogeneous and heterogeneous computing platforms. The parallel programs are represented by using visual objects. This is accomplished with the help of a graphical programming editor that is implemented in Java and enables portability to a wide variety of computer platforms. In contrast to other graphical programming systems, reusable parts of the programs can be stored in a program library to support rapid prototyping. In addition, runtime performance data on different computing platforms is collected in a database. A selection process determines dynamically the software and the hardware platform to be used to solve the problem in minimal wall-clock time. The environment is currently being tested on a Grand Challenge problem, the NASA four-dimensional data assimilation system.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.
Present-day Circum-Antarctic Simulations using the POPSICLES Coupled Ice Sheet-Ocean Model
NASA Astrophysics Data System (ADS)
Asay-Davis, X.; Martin, D. F.; Price, S. F.; Maltrud, M. E.; Collins, W.
2014-12-01
We present POPSICLES simulation results covering the full Antarctic Ice Sheet and the Southern Ocean spanning the period 1990 to 2010. Simulations are performed at 0.1° (~5 km) ocean resolution and with adaptive ice-sheet model resolution as fine as 500 m. We compare time-averaged melt rates below a number of major ice shelves with those reported by Rignot et al. (2013) as well as other recent studies. We also present seasonal variability and decadal trends in submarine melting from several Antarctic regions. Finally, we explore the influence on basal melting and system dynamics resulting from two different choices of climate forcing: a "normal-year" climatology and the CORE v. 2 forcing data (Large and Yeager 2008). POPSICLES couples the POP2x ocean model, a modified version of the Parallel Ocean Program (Smith and Gent, 2002), and the BISICLES ice-sheet model (Cornford et al., 2012). POP2x includes sub-ice-shelf circulation using partial top cells (Losch, 2008) and boundary layer physics following Holland and Jenkins (1999), Jenkins (2001), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008) and other continental-scale simulations and melt-rate observations (Kimura et al., 2013; Rignot et al., 2013). BISICLES makes use of adaptive mesh refinement and a first-order accurate momentum balance similar to the L1L2 model of Schoof and Hindmarsh (2009) to accurately model regions of dynamic complexity, such as ice streams, outlet glaciers, and grounding lines. Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests (MISMIP-3D; Pattyn et al., 2013) and realistic configurations (Favier et al. 2014). A companion presentation, "Response of the Antarctic Ice Sheet to ocean forcing using the POPSICLES coupled ice sheet-ocean model" in session C024 covers the ice-sheet response to these melt rates in the coupled simulation.
The figure shows eddy activity in the vertically integrated (barotropic) velocity nearly six years into a POPSICLES simulation of the Antarctic region.
Promoting Ocean Literacy through American Meteorological Society Programs
NASA Astrophysics Data System (ADS)
Passow, Michael; Abshire, Wendy; Weinbeck, Robert; Geer, Ira; Mills, Elizabeth
2017-04-01
American Meteorological Society Education Programs provide course materials, online and physical resources, educator instruction, and specialized training in ocean, weather, and climate sciences (https://www.ametsoc.org/ams/index.cfm/education-careers/education-program/k-12-teachers/). Ocean Science literacy efforts are supported through the Maury Project, DataStreme Ocean, and AMS Ocean Studies. The Maury Project is a summer professional development program held at the US Naval Academy designed to enhance effective teaching of the science, technology, engineering, and mathematics of oceanography. DataStreme Ocean is a semester-long course offered twice a year to participants nationwide. Created and sustained with major support from NOAA, DS Ocean explores key concepts in marine geology, physical and chemical oceanography, marine biology, and climate change. It utilizes electronically-transmitted text readings, investigations and current environmental data. AMS Ocean Studies provides complete packages for undergraduate courses. These include online textbooks, investigations manuals, RealTime Ocean Portal (course website), and course management system-compatible files. It can be offered in traditional lecture/laboratory, completely online, and hybrid learning environments. Assistance from AMS staff and other course users is available.
Architecture Adaptive Computing Environment
NASA Technical Reports Server (NTRS)
Dorband, John E.
2006-01-01
Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple- instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.
NASA Technical Reports Server (NTRS)
Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)
2001-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance due to poor scalability affected the take-up of this programming model. Significant progress has since been made in hardware and software technologies, and as a result the performance of parallel programs with compiler directives has also improved. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer-aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis carried out by the toolkit. We also discuss the application of the toolkit on the NAS Parallel Benchmarks and a number of real-world application codes. This work demonstrates not only the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message-passing and directive-based parallelizations.
Failure analysis of parameter-induced simulation crashes in climate models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.
2013-01-01
Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
Failure analysis of parameter-induced simulation crashes in climate models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.
2013-08-01
Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
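The classify-then-score workflow of this study can be sketched with a minimal linear SVM trained by hinge-loss subgradient descent and evaluated by ROC AUC. This is an illustrative stand-in, not the paper's method: the actual work used a committee of SVM classifiers on real CCSM4/POP2 ensembles, whereas the synthetic 18-parameter data, failure rule, and hyperparameters below are assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Linear SVM via hinge-loss subgradient descent; y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1.0                 # points violating the margin
        gw = lam * w - (y[mask, None] * X[mask]).mean(axis=0) if mask.any() else lam * w
        gb = -y[mask].mean() if mask.any() else 0.0
        w -= lr * gw
        b -= lr * gb
    return w, b

def roc_auc(scores, y):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 18))                    # stand-in for 18 POP2 parameters
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)  # synthetic "failure" rule
w, b = train_linear_svm(X, y)
auc = roc_auc(X @ w + b, y)                       # high AUC on this separable toy
```

On this cleanly separable toy, the classifier's decision scores rank failures far above successes, mirroring the AUC > 0.96 criterion used to validate the committee of classifiers in the study.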
NASA Astrophysics Data System (ADS)
Martin, Daniel; Asay-Davis, Xylar; Cornford, Stephen; Price, Stephen; Ng, Esmond; Collins, William
2015-04-01
We present POPSICLES simulation results covering the full Antarctic Ice Sheet and the Southern Ocean spanning the period 1990 to 2010 resulting from two different choices of climate forcing: a 'normal-year' climatology and the CORE v. 2 interannual forcing data (Large and Yeager 2008). Simulations are performed at 0.1° (~5 km) ocean resolution and adaptive ice sheet resolution as fine as 500 m. We compare time-averaged melt rates below a number of major ice shelves with those reported by Rignot et al. (2013) as well as other recent studies. We also present seasonal variability and decadal melting trends from several Antarctic regions, along with the response of the ice shelves and consequent dynamics of the grounded ice sheet. POPSICLES couples the POP2x ocean model, a modified version of the Parallel Ocean Program (Smith and Gent, 2002), and the BISICLES ice-sheet model (Cornford et al., 2012). POP2x includes sub-ice-shelf circulation using partial top cells (Losch, 2008) and boundary layer physics following Holland and Jenkins (1999), Jenkins (2001), and Jenkins et al. (2010). Standalone POP2x output compares well with standard ice-ocean test cases (e.g., ISOMIP; Losch, 2008) and other continental-scale simulations and melt-rate observations (Kimura et al., 2013; Rignot et al., 2013). BISICLES makes use of adaptive mesh refinement and a 1st-order accurate momentum balance similar to the L1L2 model of Schoof and Hindmarsh (2009) to accurately model regions of dynamic complexity, such as ice streams, outlet glaciers, and grounding lines. Results of BISICLES simulations have compared favorably to comparable simulations with a Stokes momentum balance in both idealized tests (MISMIP-3d; Pattyn et al., 2013) and realistic configurations (Favier et al. 2014).
NASA Oceanic Processes Program, Fiscal Year 1981
NASA Technical Reports Server (NTRS)
1982-01-01
Summaries are included for Nimbus 7, Seasat, TIROS-N, Altimetry, Color Radiometry, in situ data collection systems, Synthetic Aperture Radar (SAR)/Open Ocean, SAR/Sea Ice, Scatterometry, National Oceanic Satellite System, Free Flying Imaging Radar Experiment, TIROS-N/Scatterometer and/or ocean color scanner, and Ocean Topography Experiment. Summaries of individual research projects sponsored by the Ocean Processes Program are given. Twelve investigations for which contracting services are provided by NOAA are included.
NASA Astrophysics Data System (ADS)
Varga, Robert J.; Horst, Andrew J.; Gee, Jeffrey S.; Karson, Jeffrey A.
2008-08-01
Rare, fault-bounded escarpments expose natural cross sections of ocean crust in several areas and provide an unparalleled opportunity to study the end products of tectonic and magmatic processes that operated at depth beneath oceanic spreading centers. We mapped the geologic structure of ocean crust produced at the East Pacific Rise (EPR) and now exposed along steep cliffs of the Pito Deep Rift near the northern edge of the Easter microplate. The upper oceanic crust in this area is typified by basaltic lavas underlain by a sheeted dike complex comprising northeast striking, moderately to steeply southeast dipping dikes. Paleomagnetic remanence of oriented blocks of dikes collected with both Alvin and Jason II indicate clockwise rotation of ˜61° related to rotation of the microplate indicating structural coupling between the microplate and crust of the Nazca Plate to the north. The consistent southeast dip of dikes formed as the result of tilting at the EPR shortly after their injection. Anisotropy of magnetic susceptibility of dikes provides well-defined magmatic flow directions that are dominantly dike-parallel and shallowly plunging. Corrected to their original EPR orientation, magma flow is interpreted as near-horizontal and parallel to the ridge axis. These data provide the first direct evidence from sheeted dikes in ocean crust for along-axis magma transport. These results also suggest that lateral transport in dikes is important even at fast spreading ridges where a laterally continuous subaxial magma chamber is present.
NASA Astrophysics Data System (ADS)
Williamson, V. A.; Pyrtle, A. J.
2004-12-01
How did the 2003 Minorities Striving and Pursuing Higher Degrees of Success (MS PHD'S) in Ocean Sciences Program customize evaluative methodology and instruments to align with program goals and processes? How is data captured to document cognitive and affective impact? How are words and numbers utilized to accurately illustrate programmatic outcomes? How is compliance with implicit and explicit funding regulations demonstrated? The 2003 MS PHD'S in Ocean Sciences Program case study provides insightful responses to each of these questions. MS PHD'S was developed by and for underrepresented minorities to facilitate increased and sustained participation in Earth system science. Key components of this initiative include development of a community of scholars sustained by face-to-face and virtual mentoring partnerships; establishment of networking activities between and among undergraduate, graduate, postgraduate students, scientists, faculty, professional organization representatives, and federal program officers; and provision of forums to address real world issues as identified by each constituent group. The evaluative case study of the 2003 MS PHD'S in Ocean Sciences Program consists of an analysis of four data sets. 
Each data set was aligned to document progress in the achievement of the following program goals: Goal 1: The MS PHD'S Ocean Sciences Program will successfully market, recruit, select, and engage underrepresented student and non-student participants with interest/involvement in Ocean Sciences; Goal 2: The MS PHD'S Ocean Sciences Program will provide meaningful engagement for participants as determined by quantitative analysis of user-feedback; Goal 3: The MS PHD'S Ocean Sciences Program will provide meaningful engagement for participants as determined by qualitative analysis of user-feedback; and Goal 4: The MS PHD'S Ocean Sciences Program will develop a constituent base adequate to demonstrate evidence of interest, value, need and sustainability in its vision, mission, goals and activities. In addition to documenting the evaluative process, the case study also provides insight on the establishment of mutually supportive principal investigator and evaluator partnerships as necessary foundations for building effective teams. The study addresses frequently asked questions (FAQs) on the formation and sustenance of partnerships among visionaries and evaluators and the impact of this partnership on the achievement of program outcomes.
Undergraduate Research Experience in Ocean/Marine Science (URE-OMS) with African Student Component
2011-01-01
The Undergraduate Research Experience in Ocean/Marine Science program supports active participation by underrepresented undergraduate students in remote sensing and Ocean/Marine Science research training activities. The program is based on a model for undergraduate research programs supported by the National Science Foundation.
Using OpenMP vs. Threading Building Blocks for Medical Imaging on Multi-cores
NASA Astrophysics Data System (ADS)
Kegel, Philipp; Schellmann, Maraike; Gorlatch, Sergei
We compare two parallel programming approaches for multi-core systems: the well-known OpenMP and the recently introduced Threading Building Blocks (TBB) library by Intel®. The comparison is made using the parallelization of a real-world numerical algorithm for medical imaging. We develop several parallel implementations, and compare them w.r.t. programming effort, programming style and abstraction, and runtime performance. We show that TBB requires a considerable program re-design, whereas with OpenMP simple compiler directives are sufficient. While TBB appears to be less appropriate for parallelizing existing implementations, it fosters a good programming style and higher abstraction level for newly developed parallel programs. Our experimental measurements on a dual quad-core system demonstrate that OpenMP slightly outperforms TBB in our implementation.
An object-oriented approach to nested data parallelism
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.; Chatterjee, Siddhartha
1994-01-01
This paper describes an implementation technique for integrating nested data parallelism into an object-oriented language. Data-parallel programming employs sets of data called 'collections' and expresses parallelism as operations performed over the elements of a collection. When the elements of a collection are also collections, then there is the possibility for 'nested data parallelism.' Few current programming languages support nested data parallelism however. In an object-oriented framework, a collection is a single object. Its type defines the parallel operations that may be applied to it. Our goal is to design and build an object-oriented data-parallel programming environment supporting nested data parallelism. Our initial approach is built upon three fundamental additions to C++. We add new parallel base types by implementing them as classes, and add a new parallel collection type called a 'vector' that is implemented as a template. Only one new language feature is introduced: the 'foreach' construct, which is the basis for exploiting elementwise parallelism over collections. The strength of the method lies in the compilation strategy, which translates nested data-parallel C++ into ordinary C++. Extracting the potential parallelism in nested 'foreach' constructs is called 'flattening' nested parallelism. We show how to flatten 'foreach' constructs using a simple program transformation. Our prototype system produces vector code which has been successfully run on workstations, a CM-2, and a CM-5.
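The "flattening" transformation described above can be sketched compactly (in Python rather than the paper's C++; the helper names are invented): a nested collection is stored as one flat vector plus a segment descriptor, so an elementwise operation over the nested structure becomes a single flat, and therefore easily parallelizable, traversal.

```python
# Sketch of flattening nested data parallelism: a list-of-lists becomes
# (flat data, segment lengths), a nested foreach becomes one flat pass.

def flatten(nested):
    """Split a list-of-lists into flat data plus a segment descriptor."""
    flat = [x for seg in nested for x in seg]
    lengths = [len(seg) for seg in nested]
    return flat, lengths

def unflatten(flat, lengths):
    """Rebuild the nested structure from the segment descriptor."""
    out, i = [], 0
    for n in lengths:
        out.append(flat[i:i + n])
        i += n
    return out

def nested_foreach(nested, op):
    """Apply op elementwise: conceptually a nested foreach, executed
    as one flat data-parallel traversal."""
    flat, lengths = flatten(nested)
    return unflatten([op(x) for x in flat], lengths)

result = nested_foreach([[1, 2], [3], [4, 5, 6]], lambda x: x * x)
```

The key point is that the flat pass exposes all elementwise parallelism at once, regardless of how unevenly the inner collections are sized.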
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-25
... Ocean and Coastal Resource Management (OCRM) announces a rescheduled site visit and time for a public... Management Programs and National Estuarine Research Reserves AGENCY: National Oceanic and Atmospheric Administration (NOAA), Office of Ocean and Coastal Resource Management, National Ocean Service, Commerce. ACTION...
The BLAZE language: A parallel language for scientific programming
NASA Technical Reports Server (NTRS)
Mehrotra, P.; Vanrosendale, J.
1985-01-01
A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.
Adapting high-level language programs for parallel processing using data flow
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1988-01-01
EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
Collectively loading programs in a multiple program multiple data environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.
Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program.
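The scheme above can be sketched in a few lines of Python (the function and node names are invented for illustration; the actual technique operates on a real parallel machine's collective network): a load description maps each program to its nodes, a class route is formed per program, and one node per route is chosen as load leader to read the file system and broadcast.

```python
# Sketch (invented names) of the MPMD loading scheme: class routes contain
# only the nodes that need each program; one load leader per route reads
# the file system and broadcasts to the rest of its route.

def build_class_routes(load_description):
    """load_description: {program name: set of node ids}."""
    return {prog: sorted(nodes) for prog, nodes in load_description.items()}

def pick_load_leaders(class_routes):
    """Stand-in for the collective leader election: take the
    lowest-numbered node on each route."""
    return {prog: route[0] for prog, route in class_routes.items()}

description = {
    "solver":  {0, 1, 2, 3},
    "io_task": {4, 5},
}
routes  = build_class_routes(description)
leaders = pick_load_leaders(routes)
# Leader 0 would load "solver" and broadcast it along route [0, 1, 2, 3];
# leader 4 would do the same for "io_task" along [4, 5].
```

The benefit is that only one file-system read occurs per program, with distribution handled by the network rather than by thousands of simultaneous reads.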
The Global Ocean Observing System: One perspective
NASA Technical Reports Server (NTRS)
Wilson, J. Ron
1992-01-01
This document presents a possible organization for a Global Ocean Observing System (GOOS) within the Intergovernmental Oceanographic Commission and the joint ocean programs with the World Meteorological Organization. The document and the organization presented here are not intended to be definitive, complete, or the best possible organization for such an observation program. It is presented at this time to demonstrate three points. The first point to be made is that an international program office for GOOS along the lines of the WOCE and TOGA IPOs is essential. The second point is that national programs will have to continue to collect data at the scale of WOCE plus TOGA and more. The third point is that there are many existing groups and committees within the IOC and joint IOC/WMO ocean programs that can contribute essential experience to and form part of the basis of a Global Ocean Observing System. It is particularly important to learn from what has worked and what has not worked in the past if a successful ocean observing system is to result.
NASA Astrophysics Data System (ADS)
Ombres, E. H.
2016-02-01
NOAA's Ocean Acidification Program (OAP) was created as a mandate of the 2009 Federal Ocean Acidification Research and Monitoring (FOARAM) Act and has been directly funding species response research since 2012. Although OA species response is a relatively young field of science, this program built on research already underway across NOAA. That research platform included experimental facilities in the Fishery Sciences Centers of the National Marine Fishery Service (NMFS), `wet' labs of Oceanic and Atmospheric Research (OAR), and the coral reef monitoring studies within the National Ocean Service (NOS). The diversity of research across NOAA allows the program to make interdisciplinary connections among chemists, biologists and oceanographers and creates a more comprehensive and robust approach to understanding species response to this change in the carbon cycle. To date, the program has studied a range of taxa including phytoplankton, molluscs, crustaceans, and fish. This poster describes representative results from the collection of OAP-funded species at nationwide NOAA facilities.
The BLAZE language - A parallel language for scientific programming
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Van Rosendale, John
1987-01-01
A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.
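The forall construct described above asserts that loop iterations are independent, letting a compiler run them concurrently while the program keeps conceptually sequential semantics. A minimal sketch of that contract (Python stand-in, not BLAZE syntax):

```python
# Sketch of a BLAZE-style forall: the caller promises the iterations are
# independent, so the runtime is free to execute them in parallel.
from concurrent.futures import ThreadPoolExecutor

def forall(indices, body):
    """Apply body to every index; iterations must not depend on each
    other. pool.map preserves index order in the result."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(body, indices))

# Array arithmetic as an elementwise forall: c[i] = a[i] + 2 * b[i]
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
c = forall(range(len(a)), lambda i: a[i] + 2 * b[i])
```

Because the result is identical to a sequential loop, the user reasons about one control flow while the implementation exploits whatever parallelism the hardware offers.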
MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program
NASA Astrophysics Data System (ADS)
Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.
2018-02-01
We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computation increases.
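The two benchmark quantities mentioned above have standard definitions, sketched here with invented timings (these are not MPI_XSTAR measurements): speedup(p) = T_serial / T_parallel(p) and efficiency(p) = speedup(p) / p.

```python
# Sketch of the standard parallel benchmark metrics; timings are invented.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, nprocs):
    return speedup(t_serial, t_parallel) / nprocs

t1 = 1000.0                    # hypothetical serial run time (seconds)
runs = {4: 300.0, 16: 95.0}    # hypothetical parallel run times by processor count

metrics = {p: (speedup(t1, tp), efficiency(t1, tp, p)) for p, tp in runs.items()}
```

With these numbers, speedup grows from about 3.3 at p=4 to about 10.5 at p=16 while efficiency falls from about 0.83 to about 0.66, the same qualitative trend the abstract reports.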
IOPA: I/O-aware parallelism adaption for parallel programs
Liu, Tao; Liu, Yi; Qian, Chen; Qian, Depei
2017-01-01
With the development of multi-/many-core processors, applications need to be written as parallel programs to improve execution efficiency. For data-intensive applications that use multiple threads to read/write files simultaneously, an I/O sub-system can easily become a bottleneck when too many of these types of threads exist; on the contrary, too few threads will cause insufficient resource utilization and hurt performance. Therefore, programmers must pay much attention to parallelism control to find the appropriate number of I/O threads for an application. This paper proposes a parallelism control mechanism named IOPA that can adjust the parallelism of applications to adapt to the I/O capability of a system and balance computing resources and I/O bandwidth. The programming interface of IOPA is also provided to programmers to simplify parallel programming. IOPA is evaluated using multiple applications with both solid state and hard disk drives. The results show that the parallel applications using IOPA can achieve higher efficiency than those with a fixed number of threads. PMID:28278236
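The adaption idea above, grow the I/O thread count only while throughput keeps improving, can be sketched as a simple hill-climb (an invented model, not the paper's implementation; the throughput curve below is hypothetical):

```python
# Sketch of IOPA-style parallelism adaption: add I/O threads while the
# measured throughput gain justifies it, stop once the I/O sub-system
# saturates. The saturation model below is invented for illustration.

def adapt_io_threads(measure_throughput, max_threads, min_gain=1.05):
    """Hill-climb the thread count; each increase must improve
    throughput by at least the min_gain ratio to be accepted."""
    best_n, best_tp = 1, measure_throughput(1)
    for n in range(2, max_threads + 1):
        tp = measure_throughput(n)
        if tp < best_tp * min_gain:
            break              # bandwidth saturated; more threads would only hurt
        best_n, best_tp = n, tp
    return best_n

# Hypothetical device: throughput scales to ~4 threads, then flattens.
def fake_throughput(n):
    return min(n, 4) * 100.0

chosen = adapt_io_threads(fake_throughput, max_threads=16)
```

The sketch settles on 4 threads for this curve, capturing the paper's point that both too many and too few I/O threads degrade performance.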
NASA Astrophysics Data System (ADS)
Crane, N. L.; Wasser, A.; Weiss, T.; Sullivan, M.; Jones, A.
2004-12-01
Educators, policymakers, employers and other stakeholders in ocean and other geo-science fields face the continuing challenge of a lack of diversity in these fields. A particular challenge for educators and geo-science professionals promoting ocean sciences is to create programs that have broad access, including access for underrepresented youth. Experiential learning in environments such as intensive multi-day science and summer camps can be a critical captivator and motivator for young people. Our data suggest that youth, especially underrepresented youth, may benefit from exposure to the oceans and ocean science through intensive, sustained (e.g., more than just an afternoon), hands-on, science-based experiences. Data from the more than 570 youth who have participated in Camp SEA Lab's academically based experiential ocean science camp and summer programs provide compelling evidence for the importance of such programs in motivating young people. We have paid special attention to factors that might play a role in recruiting and retaining these young people in ocean science fields. Over 50% of program attendees were underrepresented youth and on scholarship, which gives us a closer look at the impact of such programs on youth who would otherwise not have the opportunity to participate. Both cognitive (knowledge) and affective (personal growth and motivation) indicators were assessed through surveys and questionnaires. Major themes drawn from the data for knowledge growth and personal growth in Camp SEA Lab youth attendees will be presented. These will be placed into the larger context of critical factors that enhance recruitment and retention in the geo-science pipeline. Successful strategies and challenges for involving families and broadening access to specialized programs such as Camp SEA Lab will also be discussed.
Earth and ocean dynamics program
NASA Technical Reports Server (NTRS)
Vonbun, F. O.
1976-01-01
The objectives and requirements of the Earth and Ocean Dynamics Programs are outlined along with major goals and experiments. Spaceborne as well as ground systems needed to accomplish program goals are listed and discussed along with program accomplishments.
2007-02-01
MPa and is constrained by calibrating the two electronic pressure gauges against a Heise gauge. Axial displacement during melt extraction is measured...105 (B12), 28,411-28,425, 2000. Cannat, M., et al., Proceedings of the Ocean Drilling Program, Initial Reports, Ocean Drilling Program, College...Kane transform zone (MARK), Proc. Ocean Drill. Program, Sci. Results, 153, 5-21, 1997. Karson, J.A., G. Thompson, S.E. Humphris, S.E. Edmond, J.M
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Vanrosendale, John
1989-01-01
Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low level programming environment, and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first; it is then examined how such parallel kernels can be combined to form parallel tensor product algorithms.
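The composition of 1-D kernels into a tensor product algorithm can be sketched concretely (plain Python with an invented toy kernel; a real instance would use, e.g., a parallel tridiagonal solve): the 2-D operator is applied by running a 1-D kernel along every row, then along every column.

```python
# Sketch of a tensor product algorithm: apply a 1-D kernel along each
# dimension of a 2-D grid in turn. The kernel here is a toy stand-in.

def apply_rows(grid, kernel1d):
    return [kernel1d(row) for row in grid]

def apply_cols(grid, kernel1d):
    cols = [kernel1d(list(col)) for col in zip(*grid)]   # transpose, apply
    return [list(row) for row in zip(*cols)]             # transpose back

def tensor_product_apply(grid, kernel1d):
    return apply_cols(apply_rows(grid, kernel1d), kernel1d)

# Toy 1-D kernel standing in for a real solver: scale the line by 2.
double = lambda v: [2 * x for x in v]

out = tensor_product_apply([[1, 2], [3, 4]], double)
```

Each row (and each column) is independent of the others, which is exactly why a parallel 1-D kernel library composes naturally into parallel tensor product computations.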
Code of Federal Regulations, 2012 CFR
2012-10-01
... consists of Puerto Rico, the Virgin Islands and that portion of the State of California which is located...; thence east along the 45th parallel to the Atlantic Ocean. When any of the above lines pass through a...
Code of Federal Regulations, 2011 CFR
2011-10-01
... consists of Puerto Rico, the Virgin Islands and that portion of the State of California which is located...; thence east along the 45th parallel to the Atlantic Ocean. When any of the above lines pass through a...
Code of Federal Regulations, 2013 CFR
2013-10-01
... consists of Puerto Rico, the Virgin Islands and that portion of the State of California which is located...; thence east along the 45th parallel to the Atlantic Ocean. When any of the above lines pass through a...
Code of Federal Regulations, 2014 CFR
2014-10-01
... consists of Puerto Rico, the Virgin Islands and that portion of the State of California which is located...; thence east along the 45th parallel to the Atlantic Ocean. When any of the above lines pass through a...
Code of Federal Regulations, 2010 CFR
2010-10-01
...; thence east along the 45th parallel to the Atlantic Ocean. When any of the above lines pass through a... consists of Puerto Rico, the Virgin Islands and that portion of the State of California which is located...
Methods for design and evaluation of parallel computing systems (The PISCES project)
NASA Technical Reports Server (NTRS)
Pratt, Terrence W.; Wise, Robert; Haught, Mary Jo
1989-01-01
The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.
76 FR 57022 - Coastal Zone Management Program: Illinois
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-15
... DEPARTMENT OF COMMERCE National Oceanic And Atmospheric Administration Coastal Zone Management Program: Illinois AGENCY: Office of Ocean and Coastal Resource Management (OCRM), National Oceanic and... Resource Management. The DEIS assesses the environmental impacts associated with approval of the Illinois...
Ocean energy program summary. Volume 2: Research summaries
NASA Astrophysics Data System (ADS)
1990-01-01
The oceans are the world's largest solar energy collector and storage system. Covering 71 percent of the earth's surface, this stored energy is realized as waves, currents, and thermal salinity gradients. The purpose of the Federal Ocean Energy Technology (OET) Program is to develop techniques that harness this ocean energy in a cost effective and environmentally acceptable manner. The OET Program seeks to develop ocean energy technology to a point where the commercial sector can assess whether applications of the technology are viable energy conversion alternatives or supplements to systems. Past studies conducted by the U.S. Department of Energy (DOE) have identified ocean thermal energy conversion (OTEC) as the largest potential contributor to United States energy supplies from the ocean resource. As a result, the OET Program concentrates on research to advance OTEC technology. Current program emphasis has shifted to open-cycle OTEC power system research because the closed-cycle OTEC system is at a more advanced stage of development and has already attracted industrial interest. During FY 1989, the OET Program focused primarily on the technical uncertainties associated with near-shore open-cycle OTEC systems ranging in size from 2 to 15 MWe. Activities were performed under three major program elements: thermodynamic research and analysis, experimental verification and testing, and materials and structures research. These efforts addressed a variety of technical problems whose resolution is crucial to demonstrating the viability of open-cycle OTEC technology. This publication is one of a series of documents on the Renewable Energy programs sponsored by the U.S. Department of Energy. An overview of all the programs is available, entitled Programs in Renewable Energy.
Computer-aided programming for message-passing systems: Problems and a solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M.Y.; Gajski, D.D.
1989-12-01
As the number of processors and the complexity of problems to be solved increase, programming multiprocessing systems becomes more difficult and error-prone. Program development tools are necessary since programmers are not able to develop complex parallel programs efficiently. Parallel models of computation, parallelization problems, and tools for computer-aided programming (CAP) are discussed. As an example, a CAP tool that performs scheduling and inserts communication primitives automatically is described. It also generates the performance estimates and other program quality measures to help programmers in improving their algorithms and programs.
Parallel implementation of an adaptive and parameter-free N-body integrator
NASA Astrophysics Data System (ADS)
Pruett, C. David; Ingham, William H.; Herman, Ralph D.
2011-05-01
Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies.
Program summary
Program title: PNB.f90
Catalogue identifier: AEIK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3052
No. of bytes in distributed program, including test data, etc.: 68 600
Distribution format: tar.gz
Programming language: Fortran 90 and OpenMPI
Computer: All shared or distributed memory parallel processors
Operating system: Unix/Linux
Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized.
RAM: Dependent upon N
Classification: 4.3, 4.12, 6.5
Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation.
Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree.
Running time: 5.1 s for the demo program supplied with the package.
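The O(N²) kernel that such an integrator distributes across processors is the pairwise Newtonian force evaluation, sketched here in illustrative Python (the distributed PNB.f90 code is Fortran 90 + MPI, not this):

```python
# Sketch of the Newtonian N-body acceleration kernel: each body's
# acceleration sums G*m_j*(r_j - r_i)/|r_j - r_i|^3 over all other
# bodies. This O(N^2) loop is what gets split across processors.

def accelerations(masses, positions, G=1.0):
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r3 = sum(d * d for d in dx) ** 1.5
            for k in range(3):
                acc[i][k] += G * masses[j] * dx[k] / r3
    return acc

# Two unit masses one unit apart: each feels |a| = G*m/r^2 = 1,
# directed toward the other body.
acc = accelerations([1.0, 1.0], [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
```

Assigning each processor a block of the outer i loop gives the data parallelization the abstract describes, and explains why each processor needs "at least a handful of bodies" to amortize communication.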
Nested ocean models: Work in progress
NASA Technical Reports Server (NTRS)
Perkins, A. Louise
1991-01-01
The ongoing work of combining three existing software programs into a nested grid oceanography model is detailed. The HYPER domain decomposition program, the SPEM ocean modeling program, and a quasi-geostrophic model written in England are being combined into a general ocean modeling facility. This facility will be used to test the viability and the capability of two-way nested grids in the North Atlantic.
2017-04-13
modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were...movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an...OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
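The reformulation trade-off Nicol describes can be illustrated with a classic route to parallelizing one-dimensional recurrences (a generic example, not necessarily the paper's method): recast the recurrence as an associative scan, whose log-depth structure exposes independent work at each level at the cost of more total operations than the serial loop.

```python
import operator

def inclusive_scan(xs, op=operator.add):
    """Hillis-Steele inclusive scan: O(log n) parallel steps.

    At each distance d, every element update is independent of the
    others, so a whole level could execute concurrently; the price is
    more total work than the straightforward serial recurrence -- the
    kind of more-complex-but-parallel reformulation the abstract notes.
    """
    ys = list(xs)
    d = 1
    while d < len(ys):
        # all updates at this level are independent (parallelizable)
        ys = [ys[i] if i < d else op(ys[i - d], ys[i]) for i in range(len(ys))]
        d *= 2
    return ys
```

A serial prefix sum over [1, 2, 3, 4] and the scan agree: both yield [1, 3, 6, 10].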
76 FR 66309 - Pilot Program for Parallel Review of Medical Products; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-26
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS-3180-N2] Food and Drug Administration [Docket No. FDA-2010-N-0308] Pilot Program for Parallel Review of Medical... technologies to participate in a program of parallel FDA-CMS review. The document was published with an...
Massively parallel implementation of 3D-RISM calculation with volumetric 3D-FFT.
Maruyama, Yutaka; Yoshida, Norio; Tadano, Hiroto; Takahashi, Daisuke; Sato, Mitsuhisa; Hirata, Fumio
2014-07-05
A new three-dimensional reference interaction site model (3D-RISM) program for massively parallel machines, combined with a volumetric 3D fast Fourier transform (3D-FFT), was developed and tested on the RIKEN K supercomputer. The ordinary parallel 3D-RISM program has a limit on the degree of parallelism because of the limitations of the slab-type 3D-FFT; the volumetric 3D-FFT relieves this limitation drastically. We tested the 3D-RISM calculation on a large, fine calculation cell (2048³ grid points) on 16,384 nodes, each having eight CPU cores. The new 3D-RISM program achieved excellent parallel scalability on the RIKEN K supercomputer. As a benchmark application, we employed the program, combined with molecular dynamics simulation, to analyze the oligomerization process of a chymotrypsin inhibitor 2 mutant. The results demonstrate that the massively parallel 3D-RISM program is effective for analyzing the hydration properties of large biomolecular systems.
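The slab-versus-volumetric distinction comes down to how many MPI ranks the decomposition can feed. A toy bound (hypothetical helper, not part of the 3D-RISM code) makes the abstract's point concrete:

```python
def max_ranks(n, scheme):
    """Upper bound on usable ranks for an n x n x n FFT grid.

    A slab decomposition hands whole xy-planes to ranks, so at most n
    ranks participate. Pencil decompositions split two axes (n^2 ranks);
    a fully volumetric split of all three axes allows up to n^3 --
    enough to feed runs like the 16,384-node case in the abstract.
    """
    if scheme == "slab":
        return n
    if scheme == "pencil":
        return n * n
    if scheme == "volumetric":
        return n ** 3
    raise ValueError(f"unknown scheme: {scheme}")
```

For the 2048³ grid above, a slab scheme caps out at 2048 ranks, well below the 16,384 x 8 = 131,072 cores used, while a two-axis split already suffices.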
NASA Astrophysics Data System (ADS)
German, C. R.; Fornari, D. J.; Fryer, P.; Girguis, P. R.; Humphris, S. E.; Kelley, D. S.; Tivey, M.; Van Dover, C. L.; Von Damm, K.
2012-12-01
In 2013, Alvin returns to service after significant observational and operational upgrades supported by the NSF, NAVSEA & NOAA. Here we review highlights of the first half-century of deep submergence science conducted by Alvin, describe some of the most significant improvements for the new submarine and discuss the importance of these new capabilities for 21st century ocean science and education. Alvin has a long history of scientific exploration, discovery and intervention at the deep seafloor: in pursuit of hypothesis-driven research and in response to human impacts. One of Alvin's earliest achievements, at the height of the Cold War, was to help locate & recover an H-bomb in the Mediterranean, while the last dives completed, just ahead of the current refit, were to investigate the impacts of the Deepwater Horizon oil spill. Alvin has excelled in supporting a range of Earth & Life Science programs including, in the late 1970s, the first direct observations and sampling of deep-sea hydrothermal vents and the unusual fauna supported by microbial chemosynthesis. The 1980s saw expansion of Alvin's dive areas to newly discovered hot springs in the Atlantic & NE Pacific, Alvin's first dives to the wreck of RMS Titanic and its longest excursions away from WHOI yet, via Loihi Seamount (Hawaii) to the Mariana Trench. The 1990s saw Alvin's first event-response dives to sites where volcanic eruptions had just occurred at the East Pacific Rise & Juan de Fuca Ridge, while the 2000s saw Alvin discover novel off-axis venting at Lost City. Observations from these dives fundamentally changed our views of volcanic and microbial processes within young ocean crust and even the origins of life! In parallel, new deep submergence capabilities, including manipulative experiments & sensor development, relied heavily on testing using Alvin.
Recently, new work has focused on ocean margins where fluid flow from the seafloor results in the release of hydrocarbons and other chemical species that can sustain chemosynthetic seep ecosystems comparable to, and sometimes sharing species with, hot vents. What will Alvin's next 50 years discover? During 2011-12, Alvin has undergone a transformation, including a larger personnel sphere with more & larger viewports to provide improved overlapping fields of view for the pilot & observers. The new Alvin will be certified for operations to 4500m depth initially, but the new sphere will be 6500m-rated and planned future upgrades will ultimately allow the vehicle to dive that deep, enabling human access to 98% of the global ocean floor. This will allow the study of processes and dynamics of Earth's largest ecosystem (the abyssal plains) as well as margin and ridge environments and the overlying water column. Meantime, the current upgrades to Alvin already include a suite of scientific enhancements including new HD video & still imaging, sophisticated data acquisition systems for seafloor observations and mapping, a new work platform with greater payload capacity and improved observer ergonomics. The new Alvin is poised to play important roles in core Earth and Life science programs and to serve large-scale programs such as the Ocean Observatory Initiative (OOI) and the International Ocean Discovery Program (IODP). It will continue to attract, engage and inspire a new generation of scientists & students to explore and study the largest ecosystem on Earth, just as it has done throughout its first half century.
F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Saini, Subhash (Technical Monitor)
1998-01-01
Parallel programming is still based upon antiquated, sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture-dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called F-Nets, which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level, graphical, architecture-independent parallel language called Software Cabling is described, with many of the features normally expected from today's computer languages (e.g., data abstraction, data parallelism, and object-based programming constructs).
Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++
NASA Technical Reports Server (NTRS)
Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis
1994-01-01
Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
Global Ocean Prediction with the HYbrid Coordinate Ocean Model, HYCOM
NASA Astrophysics Data System (ADS)
Chassignet, E.
A broad partnership of institutions is collaborating in developing and demonstrating the performance and application of eddy-resolving, real-time global and Atlantic ocean prediction systems using the HYbrid Coordinate Ocean Model (HYCOM). These systems will be transitioned for operational use by the U.S. Navy at the Naval Oceanographic Office (NAVOCEANO), Stennis Space Center, MS, and the Fleet Numerical Meteorology and Oceanography Center (FNMOC), Monterey, CA, and by NOAA at the National Centers for Environmental Prediction (NCEP), Washington, D.C. These systems will run efficiently on a variety of massively parallel computers and will include sophisticated data assimilation techniques for assimilating satellite altimeter sea surface height and sea surface temperature as well as in situ temperature, salinity, and float displacement data. The partnership addresses the Global Ocean Data Assimilation Experiment (GODAE) goals of three-dimensional (3D) depiction of the ocean state at fine resolution in real time and provision of boundary conditions for coastal and regional models. An overview of the effort will be presented.
Using CLIPS in the domain of knowledge-based massively parallel programming
NASA Technical Reports Server (NTRS)
Dvorak, Jiri J.
1994-01-01
The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency with respect to parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about the application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with the C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering are discussed.
Parallel performance optimizations on unstructured mesh-based simulations
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; ...
2015-06-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication-reduction approaches. We present detailed performance data from runs on thousands of cores of a Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
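The load-balance half of the problem can be sketched with a generic longest-processing-time heuristic (an illustration only, not the partitioner actually used for MPAS-Ocean): assign weighted mesh cells, heaviest first, to whichever part is currently lightest.

```python
import heapq

def lpt_partition(weights, nparts):
    """Greedy LPT partitioning: returns per-part total loads.

    The smaller the spread between the heaviest and lightest part, the
    less time processes spend waiting on each other at synchronization
    points -- the load-imbalance cost the paper targets.
    """
    heap = [(0, p) for p in range(nparts)]      # (current load, part id)
    heapq.heapify(heap)
    parts = [[] for _ in range(nparts)]
    for w in sorted(weights, reverse=True):     # heaviest cells first
        load, p = heapq.heappop(heap)           # lightest part so far
        parts[p].append(w)
        heapq.heappush(heap, (load + w, p))
    return [sum(p) for p in parts]
```

On the toy weights [5, 4, 3, 3, 2, 1] split three ways, the heuristic finds a perfectly balanced assignment with load 6 per part.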
Evolving binary classifiers through parallel computation of multiple fitness cases.
Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni
2005-06-01
This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. The approach achieves high computational efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized either explicitly, using parallel computation in the case of cellular programming, or implicitly, by taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
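The "intrinsic parallelism of bitwise operators" can be shown with a toy packed evaluation (a hypothetical rule, not one of the paper's evolved classifiers): bit i of each feature word holds that feature's value for sample i, so a single machine-word operation classifies a word's worth of samples at once.

```python
def classify_packed(f0, f1, f2, nbits):
    """Evaluate the boolean rule (f0 AND NOT f1) OR f2 on nbits samples
    simultaneously; each integer packs one feature column, one sample
    per bit. On a 64-bit word this is 64 classifications per operation,
    with no explicit parallel hardware involved.
    """
    mask = (1 << nbits) - 1       # keep only the valid sample bits
    return ((f0 & ~f1) | f2) & mask
```

With four samples packed as f0=0b1010, f1=0b0110, f2=0b0001, the rule yields 0b1001: samples 0 and 3 are classified positive.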
Implementations of BLAST for parallel computers.
Jülich, A
1995-02-01
The BLAST sequence comparison programs have been ported to a variety of parallel computers: the shared-memory machine Cray Y-MP 8/864 and the distributed-memory architectures Intel iPSC/860 and nCUBE. Additionally, the programs were ported to run on workstation clusters. We explain the parallelization techniques and consider the pros and cons of these methods. The BLAST programs are very well suited to parallelization for a moderate number of processors. We illustrate our results using the program blastp as an example. As input data for blastp, a 799-residue protein query sequence and the protein database PIR were used.
Programming parallel architectures: The BLAZE family of languages
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush
1988-01-01
Programming multiprocessor architectures is a critical research issue. An overview is given of the various approaches to programming these architectures that are currently being explored. It is argued that two of these approaches, interactive programming environments and functional parallel languages, are particularly attractive since they remove much of the burden of exploiting parallel architectures from the user. Also described is recent work by the author in the design of parallel languages. Research on languages for both shared and nonshared memory multiprocessors is described, as well as the relations of this work to other current language research projects.
Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal
Modern hardware contains parallel execution resources well suited for data parallelism (vector units) and for task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows a unified treatment of task and data parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or on multicores. Our framework allows us to define schedulers that dynamically select between executing task blocks on vector units or on multicores. We show that these schedulers are asymptotically optimal and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that convert mixed data- and task-parallel programs into task-block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.
High-performance computing — an overview
NASA Astrophysics Data System (ADS)
Marksteiner, Peter
1996-08-01
An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
A Shifting Baseline: Higher Degrees and Career Options for Ocean Scientists
NASA Astrophysics Data System (ADS)
Yoder, J. A.; Briscoe, M. G.; Glickson, D.; Roberts, S.; Spinrad, R. W.
2016-02-01
As for other fields of science, a Ph.D. degree in the ocean sciences no longer guarantees an academic position. In fact, recent studies show that while most of those earning a Ph.D. in the ocean sciences today may start in academia as postdocs, an undetermined number of postdocs may not move into university faculty positions or comparable positions at basic research institutions. Although the data are few, some believe that most of those now earning Ph.D. degrees in ocean science are eventually employed outside of academia. Changes to the career path for those entering ocean science graduate programs today are both a challenge and an opportunity for graduate programs. Some graduates, of course, do continue in academia. For those students who are determined to follow that path, graduate programs need to prepare them for that choice. On the other hand, graduate programs also have an obligation to provide students with the information they need to make educated career decisions: there are interesting career choices other than academia for those earning a Ph.D. or finishing with a terminal M.S. degree. Furthermore, graduate programs need to encourage students to think hard about their career expectations early in their graduate program to ensure they acquire the skills needed to keep career options open. This talk will briefly review some of the recent studies related to the career paths of those who recently acquired a Ph.D. in ocean sciences and other fields; describe possible career options for those who enter ocean science graduate programs; encourage more attention on the career possibilities of a terminal ocean science M.S. degree, perhaps combined with another higher degree in a different field; and discuss the skills a graduate student can acquire that increase the breadth of career path opportunities.
The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit
NASA Technical Reports Server (NTRS)
Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Mark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete;
1998-01-01
Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis, and as a consequence many different approaches to image segmentation have been proposed. The watershed transform is a well-known image segmentation tool, and it is a very data-intensive task. To accelerate watershed algorithms and achieve real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on their performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.
2012-12-01
Simulations using IPCC-class climate models are subject to failures or crashes for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
Open Ocean Internal Waves, South China Sea
NASA Technical Reports Server (NTRS)
1989-01-01
These open ocean internal waves were seen in the South China Sea (19.5N, 114.5E). These sets of internal waves most likely coincide with tidal periods about 12 hours apart. The wavelength (distance from crest to crest) varies between 1.5 and 5.0 miles, and the crest lengths stretch across and beyond this photo for over 75 miles. At lower right, the surface waves are moving at a 30° angle to the internal waves, with parallel low-level clouds.
Design and analysis of a global sub-mesoscale and tidal dynamics admitting virtual ocean.
NASA Astrophysics Data System (ADS)
Menemenlis, D.; Hill, C. N.
2016-02-01
We will describe the techniques used to realize a global kilometer-scale ocean model configuration that includes representation of sea ice and tidal excitation, and spans scales from planetary gyres to internal tides. A simulation using this model configuration provides a virtual ocean that admits some sub-mesoscale dynamics and tidal energetics not normally represented in global calculations. This extends simulated ocean behavior beyond broadly quasi-geostrophic flows and provides a preliminary example of a next-generation computational approach to explicitly probing the interactions between instabilities that are usually parameterized and the dominant energetic scales in the ocean. From previous process studies we have ascertained that this can lead to a qualitative improvement in the realism of many significant processes, including geostrophic eddy dynamics, shelf-break exchange and topographic mixing. Computationally, we exploit high degrees of parallelism both in numerical evaluation and in recording model state to persistent disk storage. Together this allows us to compute and record a full three-dimensional model trajectory at hourly frequency for a time period of 5 months with less than 9 million core-hours of parallel computer time, using the present-generation NASA Ames Research Center facilities. We have used this capability to create a 5-month trajectory archive, sampled at high spatial and temporal frequency, for an ocean configuration that is initialized from a realistic data-assimilated state and driven with reanalysis surface forcing from ECMWF. The resulting database of model state provides a novel virtual laboratory for exploring coupling across scales in the ocean, and for testing ideas on the relationship between small-scale fluxes and large-scale state. The computation is complemented by counterpart computations that are coarsened two and four times respectively.
In this presentation we will review the computational and numerical technologies employed and show how the high spatio-temporal frequency archive of model state can provide a new and promising tool for researching richer ocean dynamics at scale. We will also outline how computations of this nature could be combined with next generation computer hardware plans to help inform important climate process questions.
Multiprocessor speed-up, Amdahl's Law, and the Activity Set Model of parallel program behavior
NASA Technical Reports Server (NTRS)
Gelenbe, Erol
1988-01-01
An important issue in the effective use of parallel processing is the estimation of the speed-up one may expect as a function of the number of processors used. Amdahl's Law has traditionally provided a guideline to this issue, although it appears excessively pessimistic in the light of recent experimental results. In this note, Amdahl's Law is amended by giving a greater importance to the capacity of a program to make effective use of parallel processing, but also recognizing the fact that imbalance of the workload of each processor is bound to occur. An activity set model of parallel program behavior is then introduced along with the corresponding parallelism index of a program, leading to upper and lower bounds to the speed-up.
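Amdahl's classical bound, for reference (the note's amended activity-set bounds are not reproduced here), follows directly from splitting the work into a serial fraction and a perfectly parallel fraction:

```python
def amdahl_speedup(f, p):
    """Speedup on p processors when a fraction f of the work
    parallelizes perfectly and the remaining 1 - f stays serial:
    S(p) = 1 / ((1 - f) + f / p)."""
    return 1.0 / ((1.0 - f) + f / p)
```

Even at f = 0.9, ten processors yield only about 5.3x, and the speedup can never exceed 1/(1 - f) = 10x no matter how many processors are added; this is the pessimism the note sets out to temper.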
Experiences with hypercube operating system instrumentation
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Rudolph, David C.
1989-01-01
The difficulties in conceptualizing the interactions among a large number of processors make it difficult both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale parallel computers; the enormous volume of performance data mandates visual display.
Communications oriented programming of parallel iterative solutions of sparse linear systems
NASA Technical Reports Server (NTRS)
Patrick, M. L.; Pratt, T. W.
1986-01-01
Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.
Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.
Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng
2013-10-24
Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.
NASA Technical Reports Server (NTRS)
Hockney, George; Lee, Seungwon
2008-01-01
A computer program known as PyPele, originally written as a Python-language extension module of a C++ program, has been rewritten in pure Python. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission-design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell, a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses the Message Passing Interface (MPI, an unofficial de facto standard language-independent application programming interface for message passing on a parallel computer) while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without needing to rewrite those programs.
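The embarrassingly parallel dispatch pattern that PyPele coordinates over MPI can be sketched with a stand-in executor from the Python standard library (an analogy only, not PyPele's actual interface):

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(fn, tasks, workers=4):
    """Fan independent tasks out to a worker pool and gather the results
    in task order. No task communicates with any other -- hence
    'embarrassingly parallel'. An MPI version would instead scatter
    `tasks` across ranks and gather the results at the root."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, tasks))
```

For example, dispatching a squaring function over range(5) returns [0, 1, 4, 9, 16] regardless of which worker handled each task, which is why swapping SSH for MPI underneath leaves the user-visible interface unchanged.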
15 CFR 923.96 - Grant amendments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Applications for Program Development or Implementation Grants...
15 CFR 923.96 - Grant amendments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Applications for Program Development or Implementation Grants...
Climate Ocean Modeling on a Beowulf Class System
NASA Technical Reports Server (NTRS)
Cheng, B. N.; Chao, Y.; Wang, P.; Bondarenko, M.
2000-01-01
With the growing power and shrinking cost of personal computers, the availability of fast Ethernet interconnections, and public domain software packages, it is now possible to combine them to build desktop parallel computers (named Beowulf or PC clusters) at a fraction of what it would cost to buy systems of comparable power from supercomputer companies. This led us to build and assemble our own system, specifically for climate ocean modeling. In this article, we present our experience with such a system, discuss its network performance, and provide some performance comparison data with both the HP SPP2000 and the Cray T3E for an ocean model used in present-day oceanographic research.
A survey of parallel programming tools
NASA Technical Reports Server (NTRS)
Cheng, Doreen Y.
1991-01-01
This survey examines 39 parallel programming tools. Focus is placed on those tool capabilities needed for parallel scientific programming rather than for general computer science. The tools are classified with the current and future needs of the Numerical Aerodynamic Simulator (NAS) in mind: existing and anticipated NAS supercomputers and workstations; operating systems; programming languages; and applications. They are divided into four categories: suggested acquisitions; tools already brought in; tools worth tracking; and tools eliminated from further consideration at this time.
Backtracking and Re-execution in the Automatic Debugging of Parallelized Programs
NASA Technical Reports Server (NTRS)
Matthews, Gregory; Hood, Robert; Johnson, Stephen; Leggett, Peter; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we describe a new approach using relative debugging to find differences in computation between a serial program and a parallel version of that program. We use a combination of re-execution and backtracking in order to find the first difference in computation that may ultimately lead to an incorrect value that the user has indicated. In our prototype implementation we use static analysis information from a parallelization tool in order to perform the backtracking as well as the mapping required between serial and parallel computations.
15 CFR 923.91 - State responsibility.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Applications for Program Development or Implementation... development of the State's coastal management program. The designee need not be that entity designated by the...
15 CFR 923.91 - State responsibility.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Applications for Program Development or Implementation... development of the State's coastal management program. The designee need not be that entity designated by the...
The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eden, H.F.; Mooers, C.N.K.
1990-06-01
The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.
15 CFR 923.84 - Routine program changes.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Amendments to and Termination of Approved Management...
15 CFR 923.84 - Routine program changes.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Amendments to and Termination of Approved Management...
NASA Astrophysics Data System (ADS)
Nachman, C.
2017-12-01
As ice conditions change and ocean temperatures continue to rise, the potential for living marine resources to migrate farther north, and for vessels to journey north with them, is expanding. To date, the central Arctic Ocean (CAO) has remained relatively unexposed to human activities, including commercial fishing. However, as conditions continue to change, the potential for expansion of fishing fleets exists. In July 2015, the five Arctic coastal states signed a declaration concerning the prevention of unregulated high seas fishing in the CAO. Recognizing the need to involve additional nations with interests in the Arctic region, in December 2015 the five Arctic coastal states, along with China, the European Union, Japan, Iceland, and Korea, began a process to negotiate a binding agreement to prevent unregulated fishing in the high seas of the CAO. A key underlying goal of the negotiations is agreement that nations would establish a joint program of scientific research and monitoring to better understand the CAO ecosystem, whether fish stocks that could be harvested sustainably might exist there, and the possible impacts of such fisheries on the ecosystem. The data collected through the international joint science program will form a key input to policy-level decision-making on establishing appropriate measures or organizations to manage fishing in the CAO, should the science indicate potential for commercial fishing there. Since the beginning of these high-level negotiations, the policy makers have consistently agreed that conducting collaborative science is the primary way to determine whether sustainable commercial fishing could one day occur in the region. I will highlight the policy negotiation process and parallel science meetings to date to demonstrate how science can influence policy to prevent a fishing disaster.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-17
... Damage Assessment, Remediation, and Restoration Program for Fiscal Year 2011 AGENCY: National Oceanic and..., Remediation, and Restoration Program for Fiscal Year 2011. SUMMARY: The National Oceanic and Atmospheric Administration's (NOAA's) Damage Assessment, Remediation, and Restoration Program (DARRP) is announcing new...
NASA Astrophysics Data System (ADS)
Becel, A.; Carton, H. D.; Shillington, D. J.
2017-12-01
The most heterogeneous, porous, and permeable layer within a subducting oceanic crust is the uppermost layer, called Layer 2A. This layer, made of extrusive basalts, forms at the ridge axis and persists as a thin (~600 m) low-velocity cap in old crust. Nearing the trench axis, when the oceanic plate bends, normal faults can be formed or reactivated at the outer rise, allowing a more vigorous hydrothermal circulation to resume within this layer. Porosity and heterogeneity within this layer are important to assess because these parameters might have a profound impact on subduction zone processes. However, conventional refraction data quality is rarely good enough to examine in detail the properties of the uppermost oceanic layer. Here we use 2D marine long-offset multi-channel seismic (MCS) reflection data collected offshore of the Alaska Peninsula during the ALEUT Program. The dataset was acquired aboard the R/V Marcus Langseth with a 636-channel, 8-km-long streamer. We present initial results from three 140 km long profiles across the 52-56 Myr old incoming Pacific oceanic crust formed at a fast spreading rate: two margin-perpendicular profiles and one margin-parallel profile. Those profiles are located outboard of the Shumagin gap. Outboard of this subduction zone segment, abundant bending-related normal faults are imaged and concentrated within 50-60 km of the trench. Long-offset MCS data exhibit a prominent triplication that includes postcritical reflections and turning waves within the upper crust at offsets larger than 3 km. The triplication suggests the presence of a velocity discontinuity within the upper oceanic crust. We follow a systematic and uniform approach to extract upper crustal post-critical reflections and add them to the vertical-incidence MCS images. Images reveal small-scale variations in the thickness of Layer 2A and the strength of its base along the profiles.
The second step consists of the downward continuation followed by travel-time modeling of the long streamer data. The downward continuation of the shots and receivers appears to be essential to unravel the refracted energy in the upper crust and is used to determine the detailed velocity-depth structure.
NASA Astrophysics Data System (ADS)
Li, Linghan; McClean, Julie L.; Miller, Arthur J.; Eisenman, Ian; Hendershott, Myrl C.; Papadopoulos, Caroline A.
2014-12-01
The seasonal cycle of sea ice variability in the Bering Sea, together with the thermodynamic and dynamic processes that control it, are examined in a fine resolution (1/10°) global coupled ocean/sea-ice model configured in the Community Earth System Model (CESM) framework. The ocean/sea-ice model consists of the Los Alamos National Laboratory Parallel Ocean Program (POP) and the Los Alamos Sea Ice Model (CICE). The model was forced with time-varying reanalysis atmospheric forcing for the time period 1970-1989. This study focuses on the time period 1980-1989. The simulated seasonal-mean fields of sea ice concentration strongly resemble satellite-derived observations, as quantified by root-mean-square errors and pattern correlation coefficients. The sea ice energy budget reveals that the seasonal thermodynamic ice volume changes are dominated by the surface energy flux between the atmosphere and the ice in the northern region and by heat flux from the ocean to the ice along the southern ice edge, especially on the western side. The sea ice force balance analysis shows that sea ice motion is largely associated with wind stress. The force due to divergence of the internal ice stress tensor is large near the land boundaries in the north, and it is small in the central and southern ice-covered region. During winter, which dominates the annual mean, it is found that the simulated sea ice was mainly formed in the northern Bering Sea, with the maximum ice growth rate occurring along the coast due to cold air from northerly winds and ice motion away from the coast. South of St Lawrence Island, winds drive the model sea ice southwestward from the north to the southwestern part of the ice-covered region. Along the ice edge in the western Bering Sea, model sea ice is melted by warm ocean water, which is carried by the simulated Bering Slope Current flowing to the northwest, resulting in the S-shaped asymmetric ice edge. 
In spring and fall, similar thermodynamic and dynamic patterns occur in the model, but with typically smaller magnitudes and with season-specific geographical and directional differences.
Microbial life in cold, hydrologically active oceanic crustal fluids
NASA Astrophysics Data System (ADS)
Meyer, J. L.; Jaekel, U.; Girguis, P. R.; Glazer, B. T.; Huber, J. A.
2012-12-01
It is estimated that at least half of Earth's microbial biomass is found in the deep subsurface, yet very little is known about the diversity and functional roles of these microbial communities due to the limited accessibility of subseafloor samples. Ocean crustal fluids, which may have a profound impact on global nutrient cycles given the large volumes of water moving through the crustal aquifer, are particularly difficult to sample. Access to uncontaminated ocean crustal fluids is possible with CORK (Circulation Obviation Retrofit Kit) observatories, installed through the Integrated Ocean Drilling Program (IODP). Here we present the first microbiological characterization of the formation fluids from cold, oxygenated igneous crust at North Pond on the western flank of the Mid Atlantic Ridge. Fluids were collected from two CORKs installed at IODP boreholes 1382A and 1383C and include fluids from three different depth horizons within oceanic crust. Collection of borehole fluids was monitored in situ using an oxygen optode and solid-state voltammetric electrodes. In addition, discrete samples were analyzed on deck using a comparable lab-based system as well as a membrane-inlet mass spectrometer to quantify all dissolved volatiles up to 200 daltons. The instruments were operated in parallel and both in situ and shipboard geochemical measurements point to a highly oxidized fluid, revealing an apparent slight depletion of oxygen in subsurface fluids (~215μM) relative to bottom seawater (~245μM). We were unable to detect reduced hydrocarbons, e.g. methane. Cell counts indicated the presence of roughly 2 x 10^4 cells per ml in all fluid samples, and DNA was extracted and amplified for the identification of both bacterial and archaeal community members. The utilization of ammonia, nitrate, dissolved inorganic carbon, and acetate was measured using stable isotopes, and oxygen consumption was monitored to provide an estimate of the rate of respiration per cell per day. 
These results provide the first dataset describing the diversity of microbes present in cold, oxygenated ocean crustal fluids and the biogeochemical processes they mediate in the subseafloor.
A full year of snow on sea ice observations and simulations - Plans for MOSAiC 2019/20
NASA Astrophysics Data System (ADS)
Nicolaus, M.; Geland, S.; Perovich, D. K.
2017-12-01
The snow cover on sea ice dominates many exchange processes and properties of the ice-covered polar oceans. It is a major interface between the atmosphere and the sea ice, with the ocean underneath. Snow on sea ice is known for its extraordinarily large spatial and temporal variability, from micro scales and minutes to basin-wide scales and decades. At the same time, snow cover properties and even snow depth distributions are among the least known and most difficult to observe climate variables. Starting in October 2019 and ending in October 2020, the international MOSAiC drift experiment will allow us to observe the evolution of a snow pack on Arctic sea ice over a full annual cycle. During the drift with one ice floe along the transpolar drift, we will study snow processes and interactions as one of the main topics of the MOSAiC research program. Thus, we will, for the first time, be able to perform such studies on seasonal sea ice and relate them to previous expeditions and parallel observations at different locations. Here we will present the current status of our planning of the MOSAiC snow program. We will summarize the latest implementation ideas to combine the field observations with numerical simulations. The field program will include regular manual observations and sampling on the main floe of the central observatory, autonomous recordings in the distributed network, airborne observations in the surroundings of the central observatory, and retrievals of satellite remote sensing products. Along with the field program, numerical simulations of the MOSAiC snow cover will be performed on different scales, including large-scale interaction with the atmosphere and the sea ice. The snow studies will also bridge between the different disciplines, including physical, chemical, biological, and geochemical measurements, samples, and fluxes. The main challenge of all measurements will be to accomplish the description of the full annual cycle.
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
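The paper's performance model is not reproduced in the abstract. As a rough stand-in only, an Amdahl-style model with an explicit overhead term (my own illustrative formulation, with `overhead` standing in for directive-runtime management and ccNUMA data-locality costs) captures the qualitative behavior described:

```python
def predicted_speedup(serial_fraction, n_procs, overhead=0.0):
    """Amdahl-style speedup with a flat overhead term.

    serial_fraction: fraction of runtime that cannot be parallelized
    n_procs:         number of processors
    overhead:        extra time, as a fraction of the serial runtime,
                     spent on directive management and remote-memory
                     (data-locality) traffic
    """
    parallel_fraction = 1.0 - serial_fraction
    normalized_time = serial_fraction + parallel_fraction / n_procs + overhead
    return 1.0 / normalized_time
```

For example, 5% serial code and zero overhead predicts roughly 9.1x on 16 processors, while even a 5%-of-runtime locality overhead visibly erodes that, which is consistent with the abstract's conclusion that gains hinge on minimizing architecture-specific data locality overhead.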
Failed oceanic transform models: experience of shaking the tree
NASA Astrophysics Data System (ADS)
Gerya, Taras
2017-04-01
In geodynamics, numerical modeling is often used as a trial-and-error tool, which does not necessarily require full understanding or even a correct concept for a modeled phenomenon. Paradoxically, in order to understand an enigmatic process one should simply try to model it based on some initial assumptions, which need not even be correct… The reason is that our intuition is not always well "calibrated" for understanding geodynamic phenomena, which develop on space- and timescales that are very different from our everyday experience. We often have much better ideas about the physical laws governing geodynamic processes than about how these laws should interact on geological space- and timescales. From this perspective, numerical models, in which these physical laws are self-consistently implemented, can gradually calibrate our intuition by exploring what scenarios are physically sensible and what are not. I personally went through this painful learning path many times, and one noteworthy example was my 3D numerical modeling of oceanic transform faults. As I understand in retrospect, my initial literature-inspired concept of how and why transform faults form and evolve was thermomechanically inconsistent and based on two main assumptions (both, by the way, were incorrect!): (1) oceanic transforms are directly inherited from the continental rifting and breakup stages, and (2) they represent plate fragmentation structures having a peculiar extension-parallel orientation due to the stress rotation caused by thermal contraction of the oceanic lithosphere. During one year (!) of high-resolution thermomechanical numerical experiments exploring various physics (including very computationally demanding thermal contraction) I systematically observed how my initially prescribed extension-parallel weak transform faults connecting ridge segments rotated away from their original orientation and were converted into oblique ridge sections… This was really an epic failure! 
However, at the very same time, some pseudo-2D "side-models" with an initial straight ridge and an ad-hoc strain-weakened rheology, which were run out of curiosity, suddenly showed spontaneous development of ridge curvature… A fraction of these models showed spontaneous development of orthogonal ridge-transform patterns by rotation of oblique ridge sections toward the extension-parallel direction to accommodate asymmetric plate accretion. The latter was controlled by detachment faults stabilized by strain weakening. Further exploration of these "side-models" completely changed my concept of oceanic transforms: they are not plate fragmentation but rather plate growth structures stabilized by continuous plate accretion and rheological weakening of deforming rocks (Gerya, 2010, 2013). The conclusion is: keep shaking the tree and the banana will fall… Gerya, T. (2010) Dynamical instability produces transform faults at mid-ocean ridges. Science, 329, 1047-1050. Gerya, T.V. (2013) Three-dimensional thermomechanical modeling of oceanic spreading initiation and evolution. Phys. Earth Planet. Interiors, 214, 35-52.
An OpenACC-Based Unified Programming Model for Multi-accelerator Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S
2015-01-01
This paper proposes a novel SPMD programming model of OpenACC. Our model integrates the different granularities of parallelism from vector-level parallelism to node-level parallelism into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model whether they are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance with a GPU-based supercomputer using three benchmark applications.
PMEL Contributions to the OceanSITES Program
2006-09-01
System and international research programs. PMEL is a major contributor to OceanSITES in the context of the Tropical Ocean Atmosphere/Triangle ...include five TAO moorings, the KEO mooring, and non-PMEL moorings off of Hawaii and Bermuda (Fig. 1, Table 3). The prototype for the moored CO2 system was
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Hao-Qiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.
OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems.
Stone, John E; Gohara, David; Shi, Guochun
2010-05-01
We provide an overview of the key architectural features of recent microprocessor designs and describe the programming model and abstractions provided by OpenCL, a new parallel programming standard targeting these architectures.
Crustal structure of the Agulhas Ridge (South Atlantic Ocean): Formation above a hotspot?
NASA Astrophysics Data System (ADS)
Jokat, Wilfried; Hagen, Claudia
2017-10-01
The southern South Atlantic Ocean contains several features believed to document the traces of hotspot volcanism during the early formation of the ocean basin, namely the Agulhas Ridge and the Cape Rise seamounts located in the southeast Atlantic between 36°S and 50°S. The Agulhas Ridge parallels the Agulhas-Falkland Fracture Zone, one of the major transform zones of the world. The morphology of the ridge changes dramatically from two parallel segments in the southwest to the broad plateau-like Agulhas Ridge in the northeast. Because the crustal fabric of the ridge is unknown, relating its evolution to hotspots in the southeast Atlantic is an open question. During the RV Polarstern cruise ANT-XXIII-5, seismic reflection and refraction data were collected along a 370 km long profile with 8 ocean bottom stations to investigate its crustal fabric. The profile extends in a NNE direction from the Agulhas Basin, 60 km south of the Agulhas Ridge, and continues into the Cape Basin, crossing the southernmost of the Cape Rise seamounts. In the Cape Basin we found a crustal thickness of 5.5-7.5 km and a velocity distribution typical of oceanic crust. The Cape Rise seamounts, however, show a higher velocity in comparison to the surrounding oceanic crust and the Agulhas Ridge. Underplated material is evident below the southernmost of the Cape Rise seamounts. It also has a 5-8% higher density compared to the Agulhas Plateau. The seismic velocities of the Agulhas Ridge are lower, the crustal thickness is approximately 14 km, and age dating of dredge samples from its top provides clear evidence of rejuvenated volcanism at around 26 Ma. The seismic data indicate that although the Cape Rise seamounts formed above a mantle thermal anomaly, it had a limited areal extent, whereas the hotspot material that formed the Agulhas Ridge likely erupted along a fracture zone.
NASA Astrophysics Data System (ADS)
Pelz, M. S.; Ewing, N.; Hoeberechts, M.; Riddell, D. J.; McLean, M. A.; Brown, J. C. K.
2015-12-01
Ocean Networks Canada (ONC) uses education and communication to inspire, engage and educate via innovative "meet them where they are, and take them where they need to go" programs. ONC data are accessible via the internet allowing for the promotion of programs wherever the learners are located. We use technologies such as web portals, mobile apps and citizen science to share ocean science data with many different audiences. Here we focus specifically on one of ONC's most innovative programs: community observatories and the accompanying Ocean Sense program. The approach is based on equipping communities with the same technology enabled on ONC's large cabled observatories. ONC operates the world-leading NEPTUNE and VENUS cabled ocean observatories and they collect data on physical, chemical, biological, and geological aspects of the ocean over long time periods, supporting research on complex Earth processes in ways not previously possible. Community observatories allow for similar monitoring on a smaller scale, and support STEM efforts via a teacher-led program: Ocean Sense. This program, based on local observations and global connections improves data-rich teaching and learning via visualization tools, interactive plotting interfaces and lesson plans for teachers that focus on student inquiry and exploration. For example, students use all aspects of STEM by accessing, selecting, and interpreting data in multiple dimensions, from their local community observatories to the larger VENUS and NEPTUNE networks. The students make local observations and global connections in all STEM areas. The first year of the program with teachers and students who use this innovative technology is described. Future community observatories and their technological applications in education, communication and STEM efforts are also described.
NASA Astrophysics Data System (ADS)
Bergondo, D. L.; Mrakovcich, K. L.; Vlietstra, L.; Tebeau, P.; Verlinden, C.; Allen, L. A.; James, R.
2016-02-01
The US Coast Guard Academy, an undergraduate military academy in New London, CT, provides STEM education programs to the local community that engage the public on hot topics in ocean sciences. Outreach efforts include classroom, lab, and field-based activities at the Academy as well as at local schools. In one course, we partner with a STEM high school, collecting fish and environmental data on board a research vessel, and subsequently students present the results of their project. In another course, cadets develop and present interactive demonstrations of marine science to local school groups. In addition, the Academy develops and/or participates in outreach programs including Science Partnership for Innovation in Learning (SPIL), Women in Science, Physics of the Sea, and the Ocean Exploration Trust Honors Research Program. As part of the programs, instructors and cadets create interactive and collaborative activities that focus on hot topics in ocean sciences such as oil spill clean-up, ocean exploration, tsunamis, marine biodiversity, and conservation of aquatic habitats. Innovative science demonstrations such as real-time interactions with the Exploration Vessel (E/V) Nautilus, rotating tank simulations of ocean circulation, wave tank demonstrations, and determining what materials work best to contain and clean up oil, are used to enhance ocean literacy. Children's books, posters, and videos are some creative ways students summarize their understanding of ocean sciences and marine conservation. Despite the time limitations of students and faculty, and challenges associated with securing funding to keep these programs sustainable, the impact of the programs is overwhelmingly positive. 
We have built stronger relationships with local community, enhanced ocean literacy, facilitated communication and mentorship between young students and scientists, and encouraged interest of underrepresented minorities in STEM education.
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.
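As a hedged structural sketch only (real hybrid codes use MPI ranks across nodes and OpenMP threads within each node, and CPython's GIL prevents genuine thread-level speedup for pure-Python work), the two-level pattern these studies compare can be mimicked with processes standing in for the message-passing level and threads for the shared-memory level:

```python
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool

def node_work(chunk):
    # "OpenMP level": threads share the node's memory and split the
    # chunk among themselves. This illustrates structure only; the
    # GIL means no real speedup for pure-Python arithmetic.
    with ThreadPoolExecutor(max_workers=2) as threads:
        return sum(threads.map(lambda x: x * x, chunk))

def hybrid_sum(data, nodes=2):
    # "MPI level": distinct processes, one per SMP node, each handed
    # a contiguous chunk of the global data; partial results are
    # combined by the "root", as an MPI reduction would do.
    size = (len(data) + nodes - 1) // nodes
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(processes=nodes) as pool:
        return sum(pool.map(node_work, chunks))
```

The design question the paper studies is exactly where to draw the line between these two levels, since the best split depends on the problem, the node architecture, and the interconnect.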
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug, and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent in code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores. 
Whereas, for a matrix multiplication benchmark the average execution speedup of 84 times has been achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
Impact of the Indonesian Throughflow on the Atlantic Meridional Overturning Circulation
NASA Astrophysics Data System (ADS)
Le Bars, Dewi; Dijkstra, Henk
2014-05-01
Understanding the mechanisms controlling the strength and variability of the Atlantic Meridional Overturning Circulation (AMOC) is one of the main topics of climate science, and in particular of physical oceanography. Current simple representations of the global ocean overturning separate the surface return flow to the Atlantic basin into a cold water path through the Drake Passage and a warm water path through the Indonesian Throughflow and Agulhas leakage. The relative importance of these two paths has been investigated in non-eddying ocean models. In these models the Agulhas retroflection cannot be modelled properly, which leads to a substantial overestimation of the Agulhas leakage. Furthermore, it appears that in these models the relation between the meridional density gradient and the overturning strength is greatly simplified and changes significantly when eddies are resolved (Den Toom et al. 2013). As a result, the impact of the Pacific-Indian Ocean exchange through the Indonesian Throughflow on the AMOC is still unknown. To investigate this question we run a state-of-the-art ocean model, the Parallel Ocean Program (POP), globally at eddy-resolving resolution (0.1º). Using climatological forcing from the CORE dataset we perform two simulations of 110 years: a control experiment with realistic coastlines and one in which the Indonesian Passages are closed. Results show that, for a closed Indonesian Throughflow, the Indian Ocean cools but its salinity increases. The Agulhas leakage is also reduced, by 3 Sv (Le Bars et al. 2013), and the net effect on the South Atlantic is cooling and decreased salinity. The anomalies propagate slowly northward, and a significant decrease of the AMOC is found at 26ºN after 50 years. This weakened AMOC also leads to reduced northward heat flux in the Atlantic.
These processes are investigated with a detailed analysis of the heat and freshwater balances in the Atlantic-Arctic region and in the region south of 34ºS, where Drake Passage waters meet Indian Ocean waters and influence the density field of the whole Atlantic basin. Den Toom, M., H. Dijkstra, W. Weijer, M. Hecht, M. Maltrud, and E. van Sebille, 2013: Response of a Strongly Eddying Global Ocean to North Atlantic Freshwater Perturbations. J. Phys. Oceanogr., doi:10.1175/JPO-D-12-0155.1, in press. Le Bars, D., Dijkstra, H. A., and De Ruijter, W. P. M.: Impact of the Indonesian Throughflow on Agulhas leakage, Ocean Sci., 9(5), 773-785, doi:10.5194/os-9-773-2013, 2013.
Efficient partitioning and assignment on programs for multiprocessor execution
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1993-01-01
The general problem studied is that of segmenting or partitioning programs for distribution across a multiprocessor system. Efficient partitioning and assignment of program elements are of great importance, since the time consumed in this overhead activity may easily dominate the computation, effectively eliminating any gains made by the use of parallelism. In this study, the partitioning of sequentially structured programs (written in FORTRAN) is evaluated. Heuristics developed for similar applications are examined. Finally, a model for queueing networks with finite queues is developed which may be used to analyze multiprocessor system architectures that take a shared-memory approach to the problem of partitioning. The properties of sequentially written programs form obstacles to large-scale (at the procedure or subroutine level) parallelization. Data dependencies of even the minutest nature, reflecting the sequential development of the program, severely limit parallelism. The design of heuristic algorithms is tied to the experience gained in the parallel splitting. Parallelism obtained through the physical separation of data has seen some success, especially at the data element level. Data parallelism on a grander scale requires models that accurately reflect the effects of blocking caused by finite queues. A model for approximating the performance of finite queueing networks is developed. This model makes use of the decomposition approach combined with the efficiency of product-form solutions.
NASA Astrophysics Data System (ADS)
Vieira, V. M. N. C. S.; Sahlée, E.; Jurus, P.; Clementi, E.; Pettersson, H.; Mateus, M.
2015-09-01
Earth-system and regional models forecasting climate change and its impacts simulate atmosphere-ocean gas exchanges using classical yet overly simple generalizations that rely on wind speed as the sole mediator, neglecting factors such as sea-surface agitation, atmospheric stability, current drag with the bottom, rain, and surfactants. These have been shown to be fundamental for accurate estimates, particularly in the coastal ocean, where a significant part of the atmosphere-ocean greenhouse gas exchange occurs. We include several of these factors in a customizable algorithm proposed as the basis for novel couplers of the atmospheric and oceanographic model components. We tested performance with measured and simulated data from the European coastal ocean, and found that our algorithm forecasts greenhouse gas exchanges largely different from those forecast by the generalization currently in use. Our algorithm allows calculus vectorization and parallel processing, improving computational speed roughly 12× on a single CPU core, an essential feature for Earth-system model applications.
A mechanism for efficient debugging of parallel programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, B.P.; Choi, J.D.
1988-01-01
This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared-memory multiprocessors (SMMP). The authors describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. The authors introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes flowback analysis practical with only a small amount of trace generated during execution. They extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the cooperating processes.
OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems
Stone, John E.; Gohara, David; Shi, Guochun
2010-01-01
We provide an overview of the key architectural features of recent microprocessor designs and describe the programming model and abstractions provided by OpenCL, a new parallel programming standard targeting these architectures. PMID:21037981
Genetic algorithms using SISAL parallel programming language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tejada, S.
1994-05-06
Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired for implementing genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.
An Expert System for the Development of Efficient Parallel Code
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit
2004-01-01
We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.
Ocean dumping is regulated by the Marine Protection, Research and Sanctuaries Act (MPRSA). Learn about ocean dumping regulation including what materials can and cannot be dumped, the Ocean Dumping Management Program, and MPRSA history and accomplishments.
Optics Program Modified for Multithreaded Parallel Computing
NASA Technical Reports Server (NTRS)
Lou, John; Bedding, Dave; Basinger, Scott
2006-01-01
A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application interface software, that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS based on pthreads [POSIX Thread, (where "POSIX" signifies a portable operating system for UNIX)]. In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.
Myths in funding ocean research at the National Science Foundation
NASA Astrophysics Data System (ADS)
Duce, Robert A.; Benoit-Bird, Kelly J.; Ortiz, Joseph; Woodgate, Rebecca A.; Bontempi, Paula; Delaney, Margaret; Gaines, Steven D.; Harper, Scott; Jones, Brandon; White, Lisa D.
2012-12-01
Every 3 years the U.S. National Science Foundation (NSF), through its Advisory Committee on Geosciences, forms a Committee of Visitors (COV) to review different aspects of the Directorate for Geosciences (GEO). This year a COV was formed to review the Biological Oceanography (BO), Chemical Oceanography (CO), and Physical Oceanography (PO) programs in the Ocean Section; the Marine Geology and Geophysics (MGG) and Integrated Ocean Drilling Program (IODP) science programs in the Marine Geosciences Section; and the Ocean Education and Ocean Technology and Interdisciplinary Coordination (OTIC) programs in the Integrative Programs Section of the Ocean Sciences Division (OCE). The 2012 COV assessed the proposal review process for fiscal year (FY) 2009-2011, when 3843 proposal actions were considered, resulting in 1141 awards. To do this, COV evaluated the documents associated with 206 projects that were randomly selected from the following categories: low-rated proposals that were funded, high-rated proposals that were funded, low-rated proposals that were declined, high-rated proposals that were declined, some in the middle (53 awarded, 106 declined), and all (47) proposals submitted to the Rapid Response Research (RAPID) funding mechanism. NSF provided additional data as requested by the COV in the form of graphs and tables. The full COV report, including graphs and tables, is available at http://www.nsf.gov/geo/acgeo_cov.jsp.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-22
...), Joint Subcommittee on Ocean Science and Technology (JSOST), National Research Council report on Marine p... ideas for effective strategies for Federal, State, and local officials to use to address the potential... particularly suited to gathering information about acidification of ocean waters? ii. Are there new programs...
NASA Astrophysics Data System (ADS)
Plankis, Brian J.
The purpose of the study was to examine the effects of technology-infused issue investigations on high school students' environmental and ocean literacies. This study explored the effects of a new educational enrichment program termed Connecting the Ocean, Reefs, Aquariums, Literacy, and Stewardship (CORALS) on high school science students. The study utilized a mixed methods approach combining a quantitative quasi-experimental pre-post test design with qualitative case studies. The CORALS program is a new educational program that combines materials based on the Investigating and Evaluating Environmental Issues and Actions (IEEIA) curriculum program with the digital storytelling process. Over an 18-week period four high school science teachers and their approximately 169 students investigated environmental issues impacting coral reefs through the IEEIA framework. An additional approximately 224 students, taught by the same teachers, were the control group exposed to standard curriculum. Students' environmental literacy was measured through the Secondary School Environmental Literacy Instrument (SSELI) and students' ocean literacy was measured through the Students' Ocean Literacy Viewpoints and Engagement (SOLVE) instrument. Two classrooms were selected as case studies and examined through classroom observations and student and teacher interviews. The results indicated the CORALS program increased the knowledge of ecological principles, knowledge of environmental problems/issues, and environmental attitudes components of environmental literacy for the experimental group students. For ocean literacy, the experimental group students' scores increased for knowledge of ocean literacy principles, ability to identify oceanic environmental problems, and attitudes concerning the ocean. The SSELI measure of Responsible Environmental Behaviors (REB) was found to be significant for the interaction of teacher and class type (experimental or control). 
The students for Teachers A and B reported a statistically significant increase in the self-reported REB subscales of ecomanagement and consumer/economic action. This indicates the students reported an increase in the REBs they could change within their lifestyles. This study provides baseline data in an area where few quality studies exist to date. Recommendations for practice and administration of the research study instruments are explored. Recommendations for further research include CORALS program modifications, revising the instruments utilized, and what areas of students' environmental and ocean literacies warrant further exploration.
78 FR 67128 - Coral Reef Conservation Program; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-08
... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Coral Reef Conservation Program; Meeting AGENCY: Coral Reef Conservation Program, Office of Ocean and Coastal Resource Management... meeting of the U.S. Coral Reef Task Force (USCRTF). The meeting will be held in Christiansted, U.S. Virgin...
Graduate Training Program in Ocean Engineering. Final Report.
ERIC Educational Resources Information Center
Frey, Henry R.
Activities during the first three years of New York University's Ocean Engineering Program are described including the development of new courses and summaries of graduate research projects. This interdepartmental program at the master's level includes aeronautics, chemical engineering, metallurgy, and physical oceanography. Eleven courses were…
Solving Integer Programs from Dependence and Synchronization Problems
1993-03-01
Solving Integer Programs from Dependence and Synchronization Problems. Jaspal Subhlok, March 1993, CMU-CS-93-130, School of Computer Science... The method is an exact and efficient way of solving integer programming problems arising in dependence and synchronization analysis of parallel programs... Keywords: exact dependence testing, integer programming, parallelizing compilers, parallel program analysis, synchronization analysis.
Why does near ridge extensional seismicity occur primarily in the Indian Ocean?
NASA Technical Reports Server (NTRS)
Stein, Seth; Cloetingh, Sierd; Wortel, Rinus; Wiens, Douglas A.
1987-01-01
It is argued that although thermoelastic stresses provide a low-level background in all plates, the data favoring their significant contribution to the stress field and seismicity in young oceanic lithosphere may instead be interpreted in terms of stresses resulting from individual plate geometry and local boundary effects. The dramatic concentration of extensional seismicity in the Central Indian Ocean region is shown to be consistent with finite element results for the intraplate stress incorporating the effects of the Himalayan collision and the various subduction zones. Most of the data for both ridge-parallel extension and depth stratification are provided by earthquakes in this area, and it is suggested that these effects may owe more to the regional stress field.
Recent volcano monitoring in Costa Rica
Thorpe, R.; Brown, G.; Rymer, H.; Barritt, S.; Randal, M.
1985-01-01
The Costa Rican volcano Rincon de la Vieja is loosely but mysteriously translated as the "Old Lady's Corner." It consists of six volcanic centers that form a remote elongated ridge standing some 1300 m above the surrounding terrain. Geologically speaking, the Guanacaste province of northern Costa Rica consists of a series of composite volcanic cones built on a shield of ignimbrites (welded and unwelded ash flows) of Pliocene-Pleistocene age (up to 2 million years old), which themselves lie on basement crust of Cretaceous-Tertiary age (up to 90 million years old). The active volcanoes are aligned on a northwest-southeast axis parallel to the Middle American oceanic trench in the Pacific Ocean, which is the site of subduction of the Cocos oceanic plate beneath Central America.
Fast 3D Surface Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer; Patchett, John M.; Ahrens, James P.
Ocean scientists searching for isosurfaces and/or thresholds of interest in high-resolution 3D datasets previously faced a tedious and time-consuming interactive exploration experience. PISTON research and development activities are enabling ocean scientists to rapidly and interactively explore isosurfaces and thresholds in their large data sets using a simple slider, with real-time calculation and visualization of these features. Ocean scientists can now visualize more features in less time, helping them gain a better understanding of the high-resolution data sets they work with on a daily basis. Isosurface timings (512³ grid): VTK 7.7 s, parallel VTK (48-core) 1.3 s, PISTON OpenMP (48-core) 0.2 s, PISTON CUDA (Quadro 6000) 0.1 s.
The FORCE - A highly portable parallel programming language
NASA Technical Reports Server (NTRS)
Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger
1989-01-01
This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.
Parallel Performance Optimizations on Unstructured Mesh-based Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas
2015-01-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores of the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
NASA Astrophysics Data System (ADS)
Karson, J. A.
2017-11-01
Unlike most of the Mid-Atlantic Ridge, the North America/Eurasia plate boundary in Iceland lies above sea level where magmatic and tectonic processes can be directly investigated in subaerial exposures. Accordingly, geologic processes in Iceland have long been recognized as possible analogs for seafloor spreading in the submerged parts of the mid-ocean ridge system. Combining existing and new data from across Iceland provides an integrated view of this active, mostly subaerial plate boundary. The broad Iceland plate boundary zone includes segmented rift zones linked by transform fault zones. Rift propagation and transform fault migration away from the Iceland hotspot rearrange the plate boundary configuration resulting in widespread deformation of older crust and reactivation of spreading-related structures. Rift propagation results in block rotations that are accommodated by widespread, rift-parallel, strike-slip faulting. The geometry and kinematics of faulting in Iceland may have implications for spreading processes elsewhere on the mid-ocean ridge system where rift propagation and transform migration occur.
Ocean Filmmaking Camp @ Duke Marine Lab: Building Community with Ocean Science for a Better World
NASA Astrophysics Data System (ADS)
De Oca, M.; Noll, S.
2016-02-01
A democratic society requires that its citizens be informed about everyday global issues. Of all such issues, those related to ocean conservation can be hard for the general public to grasp, and especially so for disadvantaged racial and ethnic groups. Opportunity-scarce communities generally have more limited access to the ocean and to science literacy programs. The Ocean Filmmaking Camp @ Duke Marine Lab (OFC@DUML) is an effort to address this gap at the level of high school students in a small coastal town. We designed a six-week summer program to nurture the talents of high school students from under-represented communities in North Carolina with training in filmmaking, marine science, and conservation. Our science curriculum is designed to present the science in a locally and globally relevant context. Class discussions, field trips, and site visits develop the students' cognitive abilities while they learn the value of the natural environment they live in. Through filmmaking, students develop their voice and their media literacy while connecting with their local community, crossing class and racial barriers. By the end of the summer the program succeeds in encouraging students to engage in the democratic process on ocean conservation, climate change, and other everyday affairs affecting their local communities. This presentation will cover the guiding principles followed in the design of the program and how this high-impact, low-cost program is implemented. In its first year the program was co-directed by a graduate student and a local high school teacher, who managed more than 20 volunteers with a total budget of $1,500. The program's success was featured in the local newspaper and Duke University's Environment Magazine. This program is an example of how ocean science can play a part in building a better world, knitting diverse communities into the fabric of the larger society with engaged and science-literate citizens living rewarding lives.
Characterizing and Mitigating Work Time Inflation in Task Parallel Programs
Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; ...
2013-01-01
Task parallelism raises the level of abstraction in shared-memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.
Distributed and parallel Ada and the Ada 9X recommendations
NASA Technical Reports Server (NTRS)
Volz, Richard A.; Goldsack, Stephen J.; Theriault, R.; Waldrop, Raymond S.; Holzbacher-Valero, A. A.
1992-01-01
Recently, the DoD has sponsored work towards a new version of Ada, intended to support the construction of distributed systems. The revised version, often called Ada 9X, will become the new standard sometime in the 1990s. It is intended that Ada 9X should provide language features giving limited support for distributed system construction. The requirements for such features are given. Many of the most advanced computer applications involve embedded systems that are comprised of parallel processors or networks of distributed computers. If Ada is to become the widely adopted language envisioned by many, it is essential that suitable compilers and tools be available to facilitate the creation of distributed and parallel Ada programs for these applications. The major language issues impacting distributed and parallel programming are reviewed, and some principles upon which distributed/parallel language systems should be built are suggested. Based upon these, alternative language concepts for distributed/parallel programming are analyzed.
The Artistic Oceanographer Program
ERIC Educational Resources Information Center
Haley, Sheean T.; Dyhrman, Sonya T.
2009-01-01
The Artistic Oceanographer Program (AOP) was designed to engage elementary school students in ocean sciences and to illustrate basic fifth-grade science and art standards with ocean-based examples. The program combines short science lessons, hands-on observational science, and art, and focuses on phytoplankton, the tiny marine organisms that form…
Implementation and performance of parallel Prolog interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, S.; Kale, L.V.; Balkrishna, R.
1988-01-01
In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent, as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines, including shared-memory systems (an Alliant FX/8, a Sequent, and a Multimax) and a non-shared-memory system (an Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.
NASA Astrophysics Data System (ADS)
Lance, V. P.; DiGiacomo, P. M.; Ondrusek, M.; Stengel, E.; Soracco, M.; Wang, M.
2016-02-01
The NOAA/STAR ocean color program is focused on "end-to-end" production of high quality satellite ocean color products. In situ validation of satellite data is essential to produce the high quality, "fit for purpose" ocean color products that support users and applications in all NOAA line offices, as well as external (both applied and research) users. The first NOAA/OMAO (Office of Marine and Aviation Operations) sponsored research cruise dedicated to VIIRS SNPP validation was completed aboard the NOAA Ship Nancy Foster in November 2014. The goals and objectives of the 2014 cruise are highlighted in the recently published NOAA/NESDIS Technical Report. A second dedicated validation cruise is planned for December 2015 and will have been completed by the time of this meeting. The goals and objectives of the 2015 cruise will be discussed in the presentation, and the participants and observations made will be reported. The NOAA Ocean Color Calibration/Validation (Cal/Val) team also works collaboratively with other programs. A recent collaboration with the NOAA Ocean Acidification program on the East Coast Ocean Acidification (ECOA) cruise during June-July 2015, where biogeochemical and optical measurements were made together, allows for the leveraging of in situ observations for satellite validation and for their use in the development of future ocean acidification satellite products. Datasets from these cruises will be formally archived at NOAA and Digital Object Identifier (DOI) numbers will be assigned. In addition, the NOAA Coast/OceanWatch Program is working to establish a searchable database. The beta version will begin with cruise data and additional in situ calibration/validation related data collected by the NOAA Ocean Color Cal/Val team members. A more comprehensive searchable NOAA database, with contributions from other NOAA ocean observation platforms and cruise collaborations, is envisioned. Progress on these activities will be reported.
NASA Astrophysics Data System (ADS)
Frickenhaus, Stephan; Hiller, Wolfgang; Best, Meike
The portable software FoSSI is introduced that—in combination with additional free solver software packages—allows for an efficient and scalable parallel solution of large sparse systems of linear equations arising in finite element model codes. FoSSI is intended to support rapid model code development, completely hiding the complexity of the underlying solver packages. In particular, the model developer need not be an expert in parallelization and is yet free to switch between different solver packages by simple modifications of the interface call. FoSSI offers an efficient and easy, yet flexible interface to several parallel solvers, most of them available on the web, such as PETSc, AZTEC, MUMPS, PILUT and HYPRE. FoSSI makes use of the concept of handles for vectors, matrices, preconditioners and solvers that is frequently used in solver libraries. Hence, FoSSI allows for a flexible treatment of several systems of linear equations and associated preconditioners at the same time, even in parallel on separate MPI communicators. The second special feature in FoSSI is the task specifier, a combination of keywords, each configuring a certain phase in the solver setup. This enables the user to control a solver through one unique subroutine. Furthermore, FoSSI has rather similar features for all solvers, making a fast solver intercomparison or exchange an easy task. FoSSI is a community software, proven in an adaptive 2D-atmosphere model and a 3D-primitive equation ocean model, both formulated in finite elements. The present paper discusses perspectives of an OpenMP implementation of parallel iterative solvers based on domain decomposition methods. This approach to OpenMP solvers is rather attractive, as the code for domain-local operations of factorization, preconditioning and matrix-vector product can be readily taken from a sequential implementation that is also suitable for use in an MPI variant.
Code development in this direction is in an advanced state under the name ScOPES: the Scalable Open Parallel sparse linear Equations Solver.
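The handle-and-task-specifier design described above can be illustrated with a minimal Python sketch. The names here (`SolverHandle`, `solve_task`) are invented for illustration and are not the real FoSSI API; the "solve" phase is a stand-in for an iterative solver, applied to a trivial diagonal system.

```python
# Hypothetical sketch of a FoSSI-style interface: opaque handles bundle
# solver state, and a single entry point is driven by a keyword task string
# that configures each phase of the setup/solve cycle.

class SolverHandle:
    """Opaque handle holding a sparse matrix and solver state."""
    def __init__(self):
        self.rows = {}          # row index -> {col: value}
        self.factorized = False

    def set_entry(self, i, j, value):
        self.rows.setdefault(i, {})[j] = value

def solve_task(handle, task, rhs=None):
    """One unique subroutine; the task string selects the phases to run,
    mimicking the keyword-based task specifier."""
    result = None
    for phase in task.split("+"):
        if phase == "factorize":
            handle.factorized = True        # stand-in for ILU/LU setup
        elif phase == "solve":
            assert handle.factorized, "factorize before solve"
            # stand-in for an iterative solve: divide by the diagonal,
            # which is exact for the diagonal test matrix below
            result = [rhs[i] / handle.rows[i][i] for i in sorted(handle.rows)]
    return result

h = SolverHandle()
for i in range(3):
    h.set_entry(i, i, 2.0)                  # diagonal system 2x = b
x = solve_task(h, "factorize+solve", rhs=[2.0, 4.0, 6.0])
```

Because all state lives behind the handle, several systems (each with its own handle) can coexist, which is the property the abstract emphasizes.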
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2001-01-01
This viewgraph presentation provides information on support tools available for the automatic parallelization of computer programs. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message passing code. Comparison routines are then run for debugging purposes, in essence ensuring that the code transformation was accurate.
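The comparison-routine idea is simple to sketch: run the sequential version and the transformed version, then verify agreement within a tolerance. This is an illustrative Python sketch of the concept, not CAPTools itself, with two routes to the same triangular-number values standing in for serial and parallelized code paths.

```python
# Sketch of debugging-by-comparison: any entry where the transformed code
# diverges from the sequential reference beyond a tolerance is flagged.

def compare_runs(serial, parallel, tol=1e-12):
    """Return (index, serial_value, parallel_value) for each mismatch."""
    return [(i, s, p) for i, (s, p) in enumerate(zip(serial, parallel))
            if abs(s - p) > tol]

serial_result   = [sum(range(n)) for n in range(5)]      # reference code path
parallel_result = [n * (n - 1) // 2 for n in range(5)]   # "transformed" path
mismatches = compare_runs(serial_result, parallel_result)
```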
Environmental programs for ocean thermal energy conversion (OTEC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, P.
1981-07-01
The environmental research effort in support of the US Department of Energy's Ocean Thermal Energy Conversion (OTEC) program has the goal of providing documented information on the effect of proposed operations on the ocean and the effect of oceanic conditions on the plant. The associated environment program consists of archival studies in potential areas, serial oceanographic cruises to sites or regions of interest, studies from various fixed platforms at sites, and compilation of such information for appropriate legal compliance and permit requirements and for use in progressive design of OTEC plants. Sites/regions investigated are south of Mobile and west of Tampa, Gulf of Mexico; Punta Tuna, Puerto Rico; St. Croix, Virgin Islands; Kahe Point, Oahu and Keahole Point, Hawaii, Hawaiian Islands; and off the Brazilian south Equatorial Coast. Four classes of environmental concerns identified are: redistribution of oceanic properties (ocean water mixing, impingement/entrainment, etc.); chemical pollution (biocides, working fluid leaks, etc.); structural effects (artificial reef, aggregation, nesting/migration, etc.); socio-legal-economic (worker safety, enviromaritime law, etc.).
Mantle flow through a tear in the Nazca slab inferred from shear wave splitting
NASA Astrophysics Data System (ADS)
Lynner, Colton; Anderson, Megan L.; Portner, Daniel E.; Beck, Susan L.; Gilbert, Hersh
2017-07-01
A tear in the subducting Nazca slab is located between the end of the Pampean flat slab and normally subducting oceanic lithosphere. Tomographic studies suggest mantle material flows through this opening. The best way to probe this hypothesis is through observations of seismic anisotropy, such as shear wave splitting. We examine patterns of shear wave splitting using data from two seismic deployments in Argentina that lie updip of the slab tear. We observe a simple pattern of plate-motion-parallel fast splitting directions, indicative of plate-motion-parallel mantle flow, beneath the majority of the stations. Our observed splitting contrasts with previous observations to the north and south of the flat slab region. Since plate-motion-parallel splitting occurs only coincidentally with the slab tear, we propose that mantle material flows through the opening, resulting in Nazca plate-motion-parallel flow in both the subslab mantle and the mantle wedge.
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
Remote sensing of chlorophyll in an atmosphere-ocean environment: a theoretical study.
Kattawar, G W; Humphreys, T J
1976-01-01
A Monte Carlo program was written to compute the effect of chlorophyll on the ratio of upwelling to down-welling radiance and irradiance as a function of wavelength, height above the ocean, and depth within the ocean. This program simulates the actual physical situation, since a real atmospheric model was used, i.e., one that contained both aerosol and Rayleigh scattering as well as ozone absorption. The complete interaction of the radiation field with the ocean was also taken into account. The chlorophyll was assumed to be uniformly mixed in the ocean and was also assumed to act only as an absorbing agent. For the ocean model both scattering and absorption by hydrosols were included. Results have been obtained for both a very clear ocean and a medium turbid ocean. Recommendations are made for optimum techniques for remotely sensing chlorophyll both in situ and in vitro.
NASA Astrophysics Data System (ADS)
Pelz, M.; Hoeberechts, M.; Hale, C.; McLean, M. A.
2017-12-01
This presentation describes Ocean Networks Canada's (ONC) Youth Science Ambassador Program. The Youth Science Ambassadors are a growing network of youth in Canadian coastal communities whose role is to connect ocean science, ONC data, and Indigenous knowledge. By directly employing Indigenous youth in communities in which ONC operates monitoring equipment, ONC aims to encourage wider participation and interest in ocean science and exploration. Further, the Youth Science Ambassadors act as role models and mentors to other local youth by highlighting connections between Indigenous and local knowledge and current marine science efforts. Ocean Networks Canada, an initiative of the University of Victoria, develops, operates, and maintains cabled ocean observatory systems. These include technologies developed on the world-leading NEPTUNE and VENUS observatories as well as community observatories in the Arctic and coastal British Columbia. These observatories, large and small, enable communities, users, scientists, teachers, and students to monitor real-time and historical data from the local marine environment from anywhere on the globe. Youth Science Ambassadors are part of the Learning and Engagement team whose role includes engaging Indigenous communities and schools in ocean science through ONC's K-12 Ocean Sense education program. All of the data collected by ONC are freely available over the Internet for non-profit use, including disaster planning, community-based decision making, and education. The Youth Science Ambassadors support collaboration with Indigenous communities and schools by facilitating educational programming, encouraging participation in ocean data collection and analysis, and fostering interest in ocean science. In addition, the Youth Science Ambassadors support community collaboration in decision-making for instrument deployment locations and identify ways in which ONC can help to address any areas of concern raised by the community. 
This presentation will share the successes and challenges of the Youth Science Ambassador program in engaging both rural and urban Indigenous communities. We will share activities and experiences, discuss how we have adapted to meet the needs of each community, and outline ideas we have for the future development of the program.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-20
... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration ENVIRONMENTAL PROTECTION AGENCY Coastal Nonpoint Pollution Control Program: Intent To Find That Oregon Has Failed To Submit an Approvable Coastal Nonpoint Pollution Control Program AGENCY: National Oceanic and Atmospheric Administration...
The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)
1997-01-01
Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore exploit, behavioral variations among/within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.
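The first two AIMS stages, inserting event recorders and collecting a run-time trace, can be sketched in a few lines of Python. This is a toy illustration of the concept, not AIMS itself: a decorator stands in for the source-code instrumentor, and a list stands in for the monitoring library's event log that a visualizer would later replay.

```python
# Toy sketch of instrumentation + monitoring: "instrumented" functions
# emit timestamped enter/exit events into a trace for later reconstruction.

import time

trace = []   # stand-in for the run-time monitoring library's event buffer

def instrument(func):
    """Stand-in for the source-code instrumentor: wraps a function so that
    entry and exit events are recorded with wall-clock timestamps."""
    def wrapper(*args, **kwargs):
        trace.append(("enter", func.__name__, time.perf_counter()))
        out = func(*args, **kwargs)
        trace.append(("exit", func.__name__, time.perf_counter()))
        return out
    return wrapper

@instrument
def compute(n):
    return sum(i * i for i in range(n))

compute(1000)
events = [(kind, name) for kind, name, _ in trace]   # drop timestamps
```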
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application to employ a new programming paradigm is usually a time consuming and error prone task. Before embarking on such an endeavor it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse grained parallelization and OpenMP [9] for fine grained loop level parallelism. The MPI programming paradigm assumes a private address space for each process. Data is transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse grain process level parallelization and loop level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.
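The contrast drawn above, explicit message passing versus an MLP-style shared arena, can be sketched in stdlib Python. Threads stand in for processes purely for illustration (real MPI processes have genuinely private address spaces); the point is where the partial results travel: through explicit "messages" on a queue, or by direct writes into memory all workers can see.

```python
# Sketch of the two communication styles on the same reduction problem.

import threading, queue

data = list(range(8))

# --- message-passing style: private slices, results sent as messages ---
q = queue.Queue()
def mp_worker(chunk):
    q.put(sum(chunk))            # explicit "send" of the partial result
threads = [threading.Thread(target=mp_worker, args=(data[i::2],)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
mp_total = q.get() + q.get()     # explicit "receives"

# --- shared-arena style: workers write partial sums into shared memory ---
arena = [0, 0]                   # arena visible to all workers
def shm_worker(rank):
    arena[rank] = sum(data[rank::2])   # direct write, no message
threads = [threading.Thread(target=shm_worker, args=(r,)) for r in range(2)]
for t in threads: t.start()
for t in threads: t.join()
shm_total = sum(arena)
```

Both styles compute the same total; the trade-off the abstract studies is in copying cost, synchronization, and how well each maps onto the memory system.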
NASA Astrophysics Data System (ADS)
Straneo, F.
2017-12-01
The widespread speed up of Greenland's glaciers over the last two decades was unpredicted, revealing major gaps in our understanding of how ice sheets respond to a changing climate. Increased submarine melting at the edge of glaciers has emerged as a key trigger, indicating that glacier/ocean exchanges must be accounted for in ice sheet variability reconstructions and predictions. In parallel, the increasing freshwater discharge into the ocean, associated with Greenland's ice loss, has the potential to impact the North Atlantic's circulation and climate. Thus glacier/ocean exchanges are also relevant to understanding drivers of past and future changes in the North Atlantic Ocean's circulation. Here, I present recent findings from observations collected at the edge of several Greenland glaciers that reveal how melting is caused by intrusions of warm, subtropical waters into the fjords and enhanced by the release of surface melt hundreds of meters below sea level. Similarly, hydrographic and tracer data collected at the glaciers' margins, and within the glacial fjords, reveal how Greenland meltwater is exported in the form of highly diluted glacially modified waters, often subsurface, and temporally lagged with respect to the meltwater release. These findings underline the need for improved representation of ice/ocean exchanges in models in order to understand and predict the ice sheet's impact on the ocean and the ocean's impact on the ice sheet.
NASA Astrophysics Data System (ADS)
Stanley, V.; Schoephoester, P.; Lodge, R. W. D.
2016-12-01
Performance Evaluation in Network-Based Parallel Computing
NASA Technical Reports Server (NTRS)
Dezhgosha, Kamyar
1996-01-01
Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing SUNSPARCs' network with PVM (Parallel Virtual Machine), which is a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes in many cases is the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps) which will allow us to extend our study to newer applications, performance metrics, and configurations.
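The abstract's two findings, speedup as the performance metric and communication latency as the limiting factor, can be made concrete with a small model. The latency and timing numbers below are illustrative assumptions, not measurements from the PVM testbed.

```python
# Speedup = elapsed serial time / elapsed parallel time. A simple cost
# model shows why chatty (fine-grain) programs lose to coarse-grain ones
# when per-message network latency is high.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def parallel_time(t_serial, procs, n_messages, latency):
    """Idealized compute time plus communication overhead (assumed model)."""
    return t_serial / procs + n_messages * latency

t1 = 100.0   # hypothetical serial elapsed time, seconds
coarse = parallel_time(t1, procs=8, n_messages=10,   latency=0.5)  # rare comms
fine   = parallel_time(t1, procs=8, n_messages=1000, latency=0.5)  # frequent comms
```

With these assumed numbers the coarse-grain run still gains from 8 processors, while the fine-grain run is slower than the serial code, which is exactly the effect the project observed.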
NASA Technical Reports Server (NTRS)
Hicks, K.; Steele, W.
1974-01-01
The SEASAT program will provide scientific and economic benefits from global remote sensing of the ocean's dynamic and physical characteristics. The program as presently envisioned consists of: (1) SEASAT A; (2) SEASAT B; and (3) Operational SEASAT. This economic assessment was to identify, rationalize, quantify and validate the economic benefits evolving from SEASAT. These benefits will arise from improvements in the operating efficiency of systems that interface with the ocean. SEASAT data will be combined with data from other ocean and atmospheric sampling systems and then processed through analytical models of the interaction between oceans and atmosphere to yield accurate global measurements and global long range forecasts of ocean conditions and weather.
WFIRST: Science from the Guest Investigator and Parallel Observation Programs
NASA Astrophysics Data System (ADS)
Postman, Marc; Nataf, David; Furlanetto, Steve; Milam, Stephanie; Robertson, Brant; Williams, Ben; Teplitz, Harry; Moustakas, Leonidas; Geha, Marla; Gilbert, Karoline; Dickinson, Mark; Scolnic, Daniel; Ravindranath, Swara; Strolger, Louis; Peek, Joshua
2018-01-01
The Wide Field InfraRed Survey Telescope (WFIRST) mission will provide an extremely rich archival dataset that will enable a broad range of scientific investigations beyond the initial objectives of the proposed key survey programs. The scientific impact of WFIRST will thus be significantly expanded by a robust Guest Investigator (GI) archival research program. We will present examples of GI research opportunities ranging from studies of the properties of a variety of Solar System objects, surveys of the outer Milky Way halo, and comprehensive studies of cluster galaxies, to unique and new constraints on the epoch of cosmic re-ionization and the assembly of galaxies in the early universe. WFIRST will also support the acquisition of deep wide-field imaging and slitless spectroscopic data obtained in parallel during campaigns with the coronagraphic instrument (CGI). These parallel wide-field imager (WFI) datasets can provide deep imaging data covering several square degrees at no impact to the scheduling of the CGI program. A competitively selected program of well-designed parallel WFI observation programs will, like the GI science above, maximize the overall scientific impact of WFIRST. We will give two examples of parallel observations that could be conducted during a proposed CGI program centered on a dozen nearby stars.
NASA Astrophysics Data System (ADS)
Centurioni, Luca
2017-04-01
The Global Drifter Program is the principal component of the Global Surface Drifting Buoy Array, a branch of NOAA's Global Ocean Observing System and a scientific project of the Data Buoy Cooperation Panel (DBCP). The DBCP is an international program coordinating the use of autonomous data buoys to observe atmospheric and oceanographic conditions over ocean areas where few other measurements are taken. The Global Drifter Program maintains an array of over 1,250 Lagrangian drifters, reporting in near real-time and designed to measure 15 m depth Lagrangian currents, sea surface temperature (SST) and sea level atmospheric pressure (SLP), among others, to fulfill the need to observe the air-sea interface at temporal and spatial scales adequate to support short to medium-range weather forecasting, ocean state estimates and climate science. This overview talk will discuss the main achievements of the program and the main impacts for satellite SST calibration and validation and for numerical weather prediction, and it will review the main scientific findings based on the use of Lagrangian currents. Finally, we will present new developments in Lagrangian drifter technology, which include special drifters designed to measure sea surface salinity, wind and directional wave spectra. New opportunities for expanding the scope of the Global Drifter Program will be discussed.
Parallelized direct execution simulation of message-passing parallel programs
NASA Technical Reports Server (NTRS)
Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.
1994-01-01
As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.
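The direct-execution idea, really run the application code but charge each communication a modeled cost on a simulated clock, can be sketched in a few lines. This is a toy illustration of the technique, not LAPSE; the bandwidth and message-setup constants are invented for the example.

```python
# Toy direct-execution simulator: predicted time = measured compute time
# (the code actually ran) + modeled network time (the simulator's model).

import heapq

class SimClock:
    def __init__(self, bandwidth=100.0, msg_setup=0.01):
        self.now = 0.0                        # simulated time, seconds
        self.bandwidth = bandwidth            # assumed bytes/second
        self.msg_setup = msg_setup            # assumed per-message latency
        self.events = []                      # pending message arrivals
    def compute(self, measured_seconds):
        self.now += measured_seconds          # directly executed work
    def send(self, nbytes):
        arrival = self.now + self.msg_setup + nbytes / self.bandwidth
        heapq.heappush(self.events, arrival)  # modeled, not executed
    def drain(self):
        while self.events:                    # advance past all arrivals
            self.now = max(self.now, heapq.heappop(self.events))

clk = SimClock()
clk.compute(1.0)        # 1 s of real application work
clk.send(1000)          # modeled message: 0.01 s setup + 10.0 s transfer
clk.drain()             # clk.now is the predicted elapsed time
```

Parallelizing the simulator itself, the paper's subject, then amounts to running one such clock per simulated processor and synchronizing them, which is where the hard discrete-event problems arise.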
NPOESS Preparatory Project Validation Program for Ocean Data Products from VIIRS
NASA Astrophysics Data System (ADS)
Arnone, R.; Jackson, J. M.
2009-12-01
The National Polar-orbiting Operational Environmental Satellite Suite (NPOESS) Program, in partnership with the National Aeronautics and Space Administration (NASA), will launch the NPOESS Preparatory Project (NPP), a risk reduction and data continuity mission, prior to the first operational NPOESS launch. The NPOESS Program, in partnership with Northrop Grumman Aerospace Systems (NGAS), will execute the NPP Validation program to ensure the data products comply with the requirements of the sponsoring agencies. Data from the NPP Visible/Infrared Imager/Radiometer Suite (VIIRS) will be used to produce Environmental Data Records (EDRs) of Ocean Color/Chlorophyll and Sea Surface Temperature. The ocean Cal/Val program is designed to address an “end to end” capability from sensor to end product and is developed based on existing ongoing government satellite ocean remote sensing capabilities that are currently in use with NASA research and Navy and NOAA operational products. Therefore, the plan focuses on the extension of known reliable methods and capabilities currently used with the heritage sensors that will be extended to the NPP and NPOESS ocean product Cal/Val effort. This is not a fully “new” approach but it is designed to be the most reliable and cost effective approach to developing an automated Cal/Val system for VIIRS while retaining highly accurate procedures and protocols. This presentation will provide an overview of the approaches, data and schedule for the validation of the NPP VIIRS Ocean environmental data products.
Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study
Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...
2015-01-01
This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
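The fix the paper describes, replacing a sequential running sum with a binary-tree reduction, is sketched below in Python rather than coarray Fortran. Pairwise combining finishes in O(log2 P) steps across P partial sums instead of the O(P) steps of a sequential accumulation, which is why it restores near-linear scaling.

```python
# Binary-tree (pairwise) reduction: combine neighbors level by level
# until a single total remains, mirroring a tree of coarray images.

def tree_reduce(values):
    """Pairwise summation of a list of partial sums."""
    vals = list(values)
    while len(vals) > 1:
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # an odd leftover rides up to the next level
            paired.append(vals[-1])
        vals = paired
    return vals[0]

partials = [float(i) for i in range(32)]   # one partial sum per "image"
total = tree_reduce(partials)
```

In the Fortran setting described above, the same role is now played by the collective `CO_SUM` procedure once compilers support it.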
NASA Astrophysics Data System (ADS)
Kim, Daeyeong; Katayama, Ikuo; Michibayashi, Katsuyoshi; Tsujimori, Tatsuki
2013-09-01
Investigations of microstructures are crucial if we are to understand the seismic anisotropy of subducting oceanic crust, and here we report on our systematic fabric analyses of glaucophane, lawsonite, and epidote in naturally deformed blueschists from the Diablo Range and Franciscan Complex in California, and the Hida Mountains in Japan. Glaucophanes in the analyzed samples consist of very fine grains that are well aligned along the foliation and have high aspect ratios and strong crystal preferred orientations (CPOs) characterized by a (1 0 0)[0 0 1] pattern. These characteristics, together with a bimodal distribution of grain sizes from some samples, possibly indicate the occurrence of dynamic recrystallization for glaucophane. Although lawsonite and epidote display high aspect ratios and a strong CPO of (0 0 1)[0 1 0], the occurrence of straight grain boundaries and euhedral crystals indicates that rigid body rotation was the dominant deformation mechanism. The P-wave (AVP) and S-wave (AVS) seismic anisotropies of glaucophane (AVP = 20.4%, AVS = 11.5%) and epidote (AVP = 9.0%, AVS = 8.0%) are typical of the crust; consequently, the fastest propagation of P-waves is parallel to the [0 0 1] maxima, and the polarization of S-waves parallel to the foliation can form a trench-parallel seismic anisotropy owing to the slowest VS polarization being normal to the subducting slab. The seismic anisotropy of lawsonite (AVP = 9.6%, AVS = 19.9%) is characterized by the fast propagation of P-waves subnormal to the lawsonite [0 0 1] maxima and polarization of S-waves perpendicular to the foliation and lineation, which can generate a trench-normal anisotropy. The AVS of lawsonite blueschist (5.6-9.2%) is weak compared with that of epidote blueschist (8.4-11.1%). 
Calculations of the thickness of the anisotropic layer indicate that glaucophane and lawsonite contribute to the trench-parallel and trench-normal seismic anisotropy beneath NE Japan, but not to that beneath the Ryukyu arc. Our results demonstrate, therefore, that lawsonite has a strong influence on seismic velocities in the oceanic crust, and that lawsonite might be the cause of complex anisotropic patterns in subduction zones.
Programming Probabilistic Structural Analysis for Parallel Processing Computer
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Chamis, Christos C.; Murthy, Pappu L. N.
1991-01-01
The ultimate goal of this research program is to make Probabilistic Structural Analysis (PSA) computationally efficient and hence practical for the design environment by achieving large scale parallelism. The paper identifies the multiple levels of parallelism in PSA, identifies methodologies for exploiting this parallelism, describes the development of a parallel stochastic finite element code, and presents results of two example applications. It is demonstrated that speeds within five percent of those theoretically possible can be achieved. A special-purpose numerical technique, the stochastic preconditioned conjugate gradient method, is also presented and demonstrated to be extremely efficient for certain classes of PSA problems.
SeaWinds Global Coverage with Detail of Hurricane Floyd
NASA Technical Reports Server (NTRS)
1999-01-01
The distribution of ocean surface winds over the Atlantic Ocean, based on September 1999 data from NASA's SeaWinds instrument on the QuikScat satellite, shows wind direction (white streamlines) at a resolution of 25 kilometers (15.5 miles), superimposed on the color image indicating wind speed. Over the ocean, the strong (seen in violet) trade winds blow steadily from the cooler subtropical oceans to warm waters just north of the equator. The air rises over these warm waters and sinks in the subtropics at the horse latitudes. Low wind speeds are indicated in blue. In the mid-latitudes, the high vorticity caused by the rotation of the Earth generates the spirals of weather systems. The North Atlantic is dominated by a high-pressure system, whose anti-cyclonic (clockwise) flow creates strong winds blowing parallel to the coast of Spain and Morocco. This creates strong ocean upwelling and cold temperature. Hurricane Floyd, with its high winds (yellow), is clearly visible west of the Bahamas. Tropical depression Gert is seen as it was forming in the tropical mid-Atlantic (as an anti-clockwise spiral); it later developed into a full-blown hurricane. Because the atmosphere is largely transparent to microwaves, SeaWinds is able to cover 93 percent of the global oceans, under both clear and cloudy conditions, in a single day, with the capability of a synoptic view of the ocean. The high resolution of the data also gives detailed description of small and intense weather systems, like Hurricane Floyd. The image in the insert is based on data specially produced at 12.5 kilometers (7.7 miles). In the insert, white arrows of wind vector are imposed on the color image of wind speed. The insert represents a 3-degree area occupied by Hurricane Floyd. After these data were acquired, Hurricane Floyd turned north. Its strength and proximity to the Atlantic coast of the U.S. caused the largest evacuation of citizens in U.S. history.
Its landfall on September 16, 1999 resulted in severe flooding and devastation in the Carolinas. The high-resolution SeaWinds data provided an opportunity to monitor and study this hurricane. NASA's Earth Science Enterprise is a long-term research and technology program designed to examine Earth's land, oceans, atmosphere, ice and life as a total integrated system. JPL is a division of the California Institute of Technology, Pasadena, CA.
NASA Technical Reports Server (NTRS)
Weeks, Cindy Lou
1986-01-01
Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.
Performance Implications of Synchronization Support for Parallel FORTRAN Programs
1991-06-17
applications we used in this study are BDNA and FLO52. BDNA is a molecular dynamics simulator for biomolecules in water and it uses ordinary...parallelism structures and loop granularity. In the BDNA program, most of the parallel loops are not nested and the iterations are 200-1000 instructions long...are of concern. The BDNA curve in Figure 21 (cumulative-percentage curves for BDNA and FLO52) shows that for this program only 17% of all...
Parallelization of Program to Optimize Simulated Trajectories (POST3D)
NASA Technical Reports Server (NTRS)
Hammond, Dana P.; Korte, John J. (Technical Monitor)
2001-01-01
This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
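The strategy described in this abstract, farming out the independent gradient evaluations, can be sketched roughly as follows. This is a hypothetical illustration, not POST3D code: the quadratic `objective`, the function names, and the use of Python threads (standing in for SPMD processes on the Origin2000) are all invented for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):
    # Stand-in design objective; the real POST3D objective is a costly
    # trajectory simulation, which is why the gradient dominates runtime.
    return sum(xi * xi for xi in x)

def forward_difference(x, i, h=1e-6):
    # One gradient component: perturb design variable i and re-evaluate.
    xp = list(x)
    xp[i] += h
    return (objective(xp) - objective(x)) / h

def approx_gradient(x, workers=4):
    # Each component is independent of the others, so the loop over
    # design variables parallelizes trivially.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda i: forward_difference(x, i),
                             range(len(x))))

g = approx_gradient([1.0, 2.0, 3.0])  # ≈ [2.0, 4.0, 6.0] for this objective
```

With an expensive objective, each worker would spend nearly all its time inside the function evaluation, which is the regime where this decomposition pays off.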
2012-12-01
identity operation; SIMD: single-instruction, multiple-data-stream parallel computing; Scala: a byte-compiled programming language featuring dynamic type...Specific Languages. Contract FA8750-10-1-0191; program element 61101E; author: Armando Fox...application performance, but usually must rely on efficiency programmers who are experts in explicit parallel programming to achieve it. Since such efficiency
NASA Astrophysics Data System (ADS)
Pelz, M.; Heesemann, M.; Hoeberechts, M.
2017-12-01
This presentation outlines the pilot year of Girls' Remotely Operated Ocean Vehicle Exploration, or GROOVE, a hands-on learning program created collaboratively with education partners Ocean Networks Canada and St. Margaret's School (Victoria, BC, Canada). The program features student-led activities, authentic student experiences, clearly outlined learning outcomes, teacher and student self-assessment tools, and curriculum-aligned content. Presented through the lens of STEM, students build a modified Seaperch ROV and explore and research thematic scientific concepts such as buoyancy, electronic circuitry, and deep-sea exploration. Further, students learn engineering skills such as isotropic scaling, soldering, and assembly as they build their ROV. Ocean Networks Canada (ONC), an initiative of the University of Victoria, develops, operates, and maintains cabled ocean observatory systems. These include technologies developed on the world-leading NEPTUNE and VENUS observatories and the ever-expanding network of community observatories in the Arctic and coastal British Columbia. These observatories, large and small, enable communities, users, scientists, teachers, and students to monitor real-time and historical data from the local marine environment from anywhere on the globe. GROOVE is ONC's newest educational program and builds on its foundational K-12 program, Ocean Sense. This presentation will share our experiences developing, refining, and assessing our efforts to implement GROOVE using a train-the-trainer model aimed at formal and informal K-12 educators. We will highlight lessons learned from multiple perspectives (students, participants, developers, and mentors) with the intent of informing future education and outreach initiatives.
Seasat-A and the commercial ocean community
NASA Technical Reports Server (NTRS)
Montgomery, D. R.; Wolff, P.
1977-01-01
The Seasat-A program has been initiated as a 'proof-of-concept' mission to evaluate the effectiveness of remotely sensing oceanology and related meteorological phenomena from a satellite platform in space, utilizing sensors developed on previous space and aircraft test programs. The sensors include three active microwave instruments: a radar altimeter, a windfield scatterometer, and a synthetic aperture radar. A passive scanning multifrequency microwave radiometer and a visible and infrared radiometer are also included. All-weather, day-night measurements of sea surface temperature, surface wind speed/direction, sea state, and directional wave spectra will be made. Two key programs are planned for data utilization with users during the mission. Foremost is a program with the commercial ocean community to test the utility of Seasat-A data and to begin the transfer of ocean remote sensing technology to the civil sector. A second program is a solicitation of investigations, led by NOAA, to involve the ocean science community in a series of scientific investigations.
NASA Astrophysics Data System (ADS)
The Ocean Research Institute of the University of Tokyo and the National Science Foundation (NSF) have signed a Memorandum of Understanding for cooperation in the Ocean Drilling Program (ODP). The agreement calls for Japanese participation in ODP and an annual contribution of $2.5 million in U.S. currency for the project's 9 remaining years, according to NSF.ODP is an international project whose mission is to learn more about the formation and development of the earth through the collection and examination of core samples from beneath the ocean. The program uses the drillship JOIDES Resolution, which is equipped with laboratories and computer facilities. The Joint Oceanographic Institutions for Deep Earth Sampling (JOIDES), an international group of scientists, provides overall science planning and program advice regarding ODP's science goals and objectives.
Bellucci, Michael A; Coker, David F
2011-07-28
We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel, while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increases the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics
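The adaptive idea, a higher level tuning the operators of a lower-level search, can be caricatured in a small single-population sketch. Everything below is invented for illustration (a 3-parameter fitting GA with a self-adjusting mutation rate) and is far simpler than the authors' parallel multilevel genetic program:

```python
import random

TARGET = (1.0, -2.0, 0.5)  # hypothetical parameters the GA should recover

def fitness(ind):
    # Negative squared error: 0 is a perfect fit.
    return -sum((a - b) ** 2 for a, b in zip(ind, TARGET))

def evolve(pop, rate, rng):
    # Keep the better half, refill with mutated copies of the survivors.
    pop.sort(key=fitness, reverse=True)
    parents = pop[: len(pop) // 2]
    children = [[g + rng.gauss(0, rate) for g in p] for p in parents]
    return parents + children

def run(generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
    rate, best = 1.0, fitness(max(pop, key=fitness))
    for _ in range(generations):
        pop = evolve(pop, rate, rng)
        new_best = fitness(max(pop, key=fitness))
        # "Higher level" in miniature: widen mutation while it is paying
        # off, narrow it when the search stalls.
        rate *= 1.1 if new_best > best else 0.9
        best = new_best
    return best
```

Because the survivors are kept each generation, the best fitness never decreases, and `run()` climbs toward 0; the PMLGP generalizes this feedback loop to whole populations of operator probabilities.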
Undergraduate Research Experience in Ocean/Marine Science (URE-OMS)
2003-09-30
The URE-Ocean/Marine Science program supports active research participation by undergraduate students in remote sensing and GIS. The program is based on a model for undergraduate research programs supported by the National Science Foundation. The URE project features mentors, research projects, and professional development opportunities. It is the long-term goal
Concepts of Concurrent Programming
1990-04-01
to the material presented. Carriero89: Carriero, N., and Gelernter, D. "How to Write Parallel Programs: A Guide to the Perplexed." ACM...between the architectures on which programs can be executed and the application domains from which problems are drawn. Our goal is to show how programs...Sept. 1989), 251-510. Abstract: There are four papers: 1. Programming Languages for Distributed Computing Systems (52); 2. How to Write Parallel
NavP: Structured and Multithreaded Distributed Parallel Programming
NASA Technical Reports Server (NTRS)
Pan, Lei; Xu, Jingling
2006-01-01
This slide presentation reviews some of the issues around distributed parallel programming. It compares and contrasts two methods of programming: Single Program Multiple Data (SPMD) and Navigational Programming (NavP). It then reviews the distributed sequential computing (DSC) method and the methodology of NavP. Case studies are presented. It also reviews the work being done to enable the NavP system.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-15
... Integrated Ocean Observing System (IOOS) Program publishes this notice on behalf of the Interagency Ocean... process mandated by the Integrated Coastal and Ocean Observation System Act of 2009 (ICOOS Act). The IOOC... Ocean Observing System (System). DATES: Written, faxed or e-mailed comments must be received no later...
High Performance Programming Using Explicit Shared Memory Model on Cray T3D
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Saini, Subhash; Grassi, Charles
1994-01-01
The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times lower than that of data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM SP1 is presented.
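The contrast this abstract measures, message passing versus explicit shared memory, can be sketched abstractly. The example below is illustrative only: Python threads and a queue stand in for PVM messages and T3D shared memory, and all names are invented. Both reductions compute the same sum, but one coordinates by sending messages to a collector while the other stores directly into shared storage.

```python
import threading
import queue

def reduce_message_passing(chunks):
    # Message-passing style: each worker explicitly sends its partial
    # result; the collector explicitly receives one message per worker.
    q = queue.Queue()
    ts = [threading.Thread(target=lambda c=c: q.put(sum(c))) for c in chunks]
    for t in ts: t.start()
    for t in ts: t.join()
    return sum(q.get() for _ in chunks)

def reduce_shared_memory(chunks):
    # Shared-memory style: each worker stores its partial result directly
    # into a slot of a common array; no send/receive pair is needed.
    partial = [0] * len(chunks)
    def worker(i, c):
        partial[i] = sum(c)
    ts = [threading.Thread(target=worker, args=(i, c))
          for i, c in enumerate(chunks)]
    for t in ts: t.start()
    for t in ts: t.join()
    return sum(partial)
```

On the T3D, the shared-memory version's direct stores avoided the buffering and copying overhead of the PVM library, which is the gap the measurements above quantify.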
On program restructuring, scheduling, and communication for parallel processor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polychronopoulos, Constantine D.
1986-08-01
This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
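Loop coalescing, one of the two restructuring techniques named above, can be illustrated with a small sketch. This is an invented Python example (the dissertation targets Fortran via Parafrase): a doubly nested loop is flattened into a single index space from which iterations could be handed out to processors.

```python
def nested(a, n, m):
    # Original form: two nested loops over a 2-D iteration space.
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            out[i][j] = a[i][j] * 2
    return out

def coalesced(a, n, m):
    # Coalesced form: one flat loop whose n*m iterations form a single
    # index space that a scheduler can chunk among processors.
    out = [[0] * m for _ in range(n)]
    for k in range(n * m):
        i, j = divmod(k, m)  # recover the original loop indices
        out[i][j] = a[i][j] * 2
    return out
```

The payoff is scheduling flexibility: a one-dimensional index space can be split into equal chunks (or self-scheduled) regardless of how lopsided the original nest was.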
Satellite Ocean Color Validation Using Merchant Ships. Chapter 10
NASA Technical Reports Server (NTRS)
Frouin, Robert; Cutchin, David L.; Deschamps, Pierre-Yves
2001-01-01
A collaborative measurement program for evaluating satellite-derived ocean color has been developed based on ships of opportunity (merchant, oceanographic) and specific instrumentation, the SIMBAD radiometer. The purpose of the measurement program is to complement, in a cost-effective way, dedicated evaluation experiments at sea, which are expensive, cannot be carried out over the full range of expected oceanic and atmospheric conditions, and generally provide a few match-ups. Ships participate in the program on a volunteer basis or at a very small cost, and measurement procedures do not interfere with other ship activities. The SIMBAD radiometer is a portable, easy-to-operate instrument that measures the basic ocean color variables, namely aerosol optical thickness and water-leaving radiance, in typical spectral bands of ocean-color sensors, i.e., 443, 490, 560, 670, and 870 nm. Measuring these variables at the time of satellite overpass is usually sufficient to verify satellite-derived ocean color and to evaluate atmospheric correction algorithms. Any ordinary crew can learn quickly how to make measurements. Importantly, the ship is not required to stop, making it possible to collect data along regular routes traveled by merchant ships in the world's oceans.
Overseas trip report, CV 990 underflight mission. [Norwegian Sea, Greenland ice sheet, and Alaska
NASA Technical Reports Server (NTRS)
Gloersen, P.; Crawford, J.; Hardis, L.
1980-01-01
The scanning microwave radiometer-7 simulator, the ocean temperature scanner, and an imaging scatterometer/altimeter operating at 14 GHz were carried onboard the NASA CV-990 over open oceans, sea ice, and continental ice sheets to gather surface truth information. Data flights were conducted over the Norwegian Sea to map the ocean polar front south and west of Bear Island and to transect several Nimbus-7 footprints in a rectangular pattern parallel to the northern shoreline of Norway. Additional flights were conducted to obtain correlative data on the cryosphere parameters and characteristics of the Greenland ice sheet, and to study frozen lakes near Barrow. The weather conditions and flight path way points for each of the nineteen flights are presented in tables and maps.
Parallel Optimization of an Earth System Model (100 Gigaflops and Beyond?)
NASA Technical Reports Server (NTRS)
Drummond, L. A.; Farrara, J. D.; Mechoso, C. R.; Spahr, J. A.; Chao, Y.; Katz, S.; Lou, J. Z.; Wang, P.
1997-01-01
We are developing an Earth System Model (ESM) to be used in research aimed to better understand the interactions between the components of the Earth System and to eventually predict their variations. Currently, our ESM includes models of the atmosphere, oceans and the important chemical tracers therein.
Lower Cretaceous smarl turbidites of the Argo Abyssal Plain, Indian Ocean
Dumoulin, Julie A.; Stewart, Sondra K.; Kennett, Diana; Mazzullo, Elsa K.
1992-01-01
Sediments recovered during Ocean Drilling Program (ODP) Leg 123 from the Argo Abyssal Plain (AAP) consist largely of turbidites derived from the adjacent Australian continental margin. The oldest abundant turbidites are Valanginian-Aptian in age and have a mixed (smarl) composition; they contain subequal amounts of calcareous and siliceous biogenic components, as well as clay and lesser quartz. Most are thin-bedded, fine sand to mud-sized, and best described by Stow and Piper's model (1984) for fine-grained biogenic turbidites. Thicker (to 3 m), coarser-grained (medium-to-coarse sand-sized) turbidites fit Bouma's model (1962) for sandy turbidites; these generally are base-cut-out (BCDE, BDE) sequences, with B-division parallel lamination as the dominant structure. Parallel laminae most commonly concentrate quartz and/or calcispheres vs. lithic clasts or clay, but distinctive millimeter to centimeter-thick, radiolarian-rich laminae occur in both fine- and coarse-grained Valanginian-Hauterivian turbidites. AAP turbidites were derived from relatively deep parts of the continental margin (outer shelf, slope, or rise) that lay below the photic zone, but above the calcite compensation depth (CCD). Biogenic components are largely pelagic (calcispheres, foraminifers, radiolarians, nannofossils); lesser benthic foraminifers are characteristic of deep-water (abyssal to bathyal) environments. Abundant nonbiogenic components are mostly clay and clay clasts; smectite is the dominant clay species, and indicates a volcanogenic provenance, most likely the Triassic-Jurassic volcanic suite exposed along the northern Exmouth Plateau. Lower Cretaceous smarl turbidites were generated during eustatic lowstands and may have reached the abyssal plain via Swan Canyon, a submarine canyon thought to have formed during the Late Jurassic. In contrast to younger AAP turbidites, however, Lower Cretaceous turbidites are relatively fine-grained and do not contain notably older reworked fossils.
Early in its history, the northwest Australian margin provided mainly contemporaneous slope sediment to the AAP; marginal basins adjacent to the continent trapped most terrigenous detritus, and pronounced canyon incisement did not occur until Late Cretaceous and, especially, Cenozoic time.
Hydrologic overlay maps of the Cape Canaveral Quadrangle, Florida
Frazee, James M.; Laughlin, Charles P.
1979-01-01
Brevard County is an area of some 1,300 square miles located on the east coast of central Florida. The Cape Canaveral quadrangle, in central Brevard, includes part of the Merritt Island National Wildlife Refuge, John F. Kennedy Space Center (NASA), and Cape Canaveral Air Force Station. The eastern part of the quadrangle is occupied by the Atlantic Ocean and the western part by estuarine waters of the Banana River. Topography is characterized by numerous elongate sand dunes, with altitudes up to 10 feet or greater, which roughly parallel the estuary and ocean.
Modelling parallel programs and multiprocessor architectures with AXE
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.
1991-01-01
AXE, An Experimental Environment for Parallel Systems, was designed to model and simulate parallel systems at the process level. It provides an integrated environment for specifying computation models, multiprocessor architectures, data collection, and performance visualization. AXE is being used at NASA-Ames for developing resource management strategies, parallel problem formulation, multiprocessor architectures, and operating system issues related to the High Performance Computing and Communications Program. AXE's simple, structured user interface enables the user to model parallel programs and machines precisely and efficiently. Its quick turn-around time keeps the user interested and productive. AXE models multicomputers. The user may easily modify various architectural parameters including the number of sites, connection topologies, and overhead for operating system activities. Parallel computations in AXE are represented as collections of autonomous computing objects known as players. Their use and behavior are described. Performance data of the multiprocessor model can be observed on a color screen. These include CPU and message routing bottlenecks, and the dynamic status of the software.
1993-12-01
graduate education required for Ocean Facilities Program (OFP) officers in the Civil Engineer Corps (CEC) of the United States Navy. For the purpose...determined by distributing questionnaires to all officers in the OFP. Statistical analyses of numerical data and judgmental analysis of professional...45; B. Ocean Facility Program Officer Graduate Education Questionnaire, 47; C. Summary of Questionnaire Responses
Web Based Parallel Programming Workshop for Undergraduate Education.
ERIC Educational Resources Information Center
Marcus, Robert L.; Robertson, Douglass
Central State University (Ohio), under a contract with Nichols Research Corporation, has developed a World Wide Web based workshop on high performance computing entitled "IBM SP2 Parallel Programming Workshop." The research is part of the DoD (Department of Defense) High Performance Computing Modernization Program. The research…
SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws
NASA Technical Reports Server (NTRS)
Cooke, Daniel; Rushton, Nelson
2013-01-01
With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever.
In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.
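A rough rendering of the Normalize-Transpose law may help make the idea concrete. The sketch below is our own approximation of the semantics in Python, not SequenceL itself: a scalar operation applied to sequences is normalized (scalars replicated to the sequence length) and transposed (applied elementwise), exposing independent operations that a compiler could run in parallel.

```python
def nt_apply(op, *args):
    # Normalize-Transpose sketch: lift a scalar operation over sequences.
    seqs = [a for a in args if isinstance(a, list)]
    if not seqs:
        return op(*args)                 # all scalars: just apply op
    n = len(seqs[0])
    # Normalize: replicate scalar arguments to match the sequence length.
    norm = [a if isinstance(a, list) else [a] * n for a in args]
    # Transpose: apply op elementwise; each element is independent work.
    return [nt_apply(op, *row) for row in zip(*norm)]
```

For example, `nt_apply(lambda x, y: x + y, [1, 2, 3], 10)` yields `[11, 12, 13]`, and the lifting recurses through nested sequences, which is where the automatically parallelizable structure comes from.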
Instrumentation, performance visualization, and debugging tools for multiprocessors
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.
1991-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.
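The kind of automatic instrumentation described here can be caricatured in a few lines. This is not AIMS; it is a hypothetical sketch of the general mechanism: wrapping routines so that timed events accumulate in a trace that a visualizer could later display.

```python
import time
from functools import wraps

trace = []  # (routine name, elapsed seconds) event records

def instrument(fn):
    # Wrap a routine so every call appends a timing event to the trace,
    # without changing the routine's own code.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        result = fn(*args, **kwargs)
        trace.append((fn.__name__, time.perf_counter() - t0))
        return result
    return wrapper

@instrument
def solve_block(n):
    # Stand-in compute kernel for the illustration.
    return sum(i * i for i in range(n))
```

After a few calls, `trace` holds per-invocation timings; a tool like AIMS does this per processor and per event type, then renders the merged trace graphically.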
Testing New Programming Paradigms with NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.
2000-01-01
Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g., MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made in defining new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." In light of testing these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3.
Optimization of memory and cache usage was applied to several benchmarks, noticeably BT and SP, resulting in better sequential performance. In order to overcome the lack of an HPF performance model and guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and no easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outermost loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195MHz) with the MIPSpro-f77 compiler 7.2.1 for the OpenMP and MPI codes and the PGI pghpf-2.4.3 compiler with MPI interface for the HPF programs.
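The outermost-loop strategy mentioned above can be sketched as follows. This is an illustrative Python analogue, not NPB code: the outer loop over rows is the one parallelized, so each task carries the largest possible granularity, while the inner loop stays sequential within a task (the same placement an OpenMP `parallel do` on the outer loop would give).

```python
from concurrent.futures import ThreadPoolExecutor

def smooth_row(row):
    # Inner loop: stays sequential inside one task.
    last = len(row) - 1
    return [(row[max(j - 1, 0)] + row[j] + row[min(j + 1, last)]) / 3
            for j in range(len(row))]

def smooth(grid, workers=4):
    # Outer loop parallelized: one whole row per task, not one element.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(smooth_row, grid))
```

Coarser granularity means fewer fork/join points and less scheduling overhead per unit of work, which is why the outermost loop is the usual target for directive-based parallelization.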
The inverse problem: Ocean tides derived from earth tide observations
NASA Technical Reports Server (NTRS)
Kuo, J. T.
1978-01-01
Indirect mapping of ocean tides by means of land- and island-based tidal gravity measurements is presented. The inverse scheme of linear programming is used for the indirect mapping of ocean tides. Open-ocean tides were computed by numerical integration of Laplace's tidal equations.
NASA Astrophysics Data System (ADS)
Tao, C.; Liang, J.; Zhang, H.; Li, H.; Egorov, I. V.; Liao, S.
2016-12-01
The Dragon Horn area (49.7°E) is located at the western end of the EW-trending Segment 28 of the Southwest Indian Ridge, between the Indomed and Gallieni FZ. The segment is characterized by highly asymmetric topography. The northern flank is deeper and develops typical parallel linear fault escarpments. Meanwhile, the southern flank, where the Dragon Horn lies, is shallower and bears corrugations. The indicative corrugated surface, which extends some 5×5 km, was interpreted to be of Dragon Flag OCC origin (Zhao et al., 2013). A neo-volcanic ridge extends along the middle of the rifted valley and is bounded by two non-transform offsets to the east and west. Our investigations revealed 6 hydrothermal fields/anomalies in this area, including 2 confirmed sulfide fields, 1 carbonate field, and 3 inferred hydrothermal anomalies based on methane and turbidity data from the 2016 AUV survey. The Longqi-1 (Dragon Flag) vent system lies at the northwest edge of the Dragon Flag OCC. It is one of the largest hydrothermal venting systems along mid-ocean ridges, with a maximum temperature at vent site DFF6 of the 'M zone' up to 379.3 °C (Tao et al., 2016). Massive sulfides (49.73°E, 37.78°S) were sampled 10 km east of Longqi-1, representing independent hydrothermal activities controlled by respective local structures. According to geological mapping and interpretation, both sulfide fields are located on the hanging wall of the Dragon Flag OCC detachment. Combined with the inferred hydrothermal anomaly to the east of the massive sulfide site, we suppose that they are controlled by different fault phases during the detachment of the oceanic core complex. Moreover, consolidated carbonate sediments were widely observed and sampled on the corrugated surface and its west side; they are proposed to have precipitated during the serpentinization of ultramafic rocks, representing a low-temperature hydrothermal process. These hydrothermal activities, distributed within 20 km, may be controlled by the same Dragon Flag OCC.
Acknowledgement: This work was supported by the National Basic Research Program of China (973 Program) under contract No. 2012CB417305, and by the China Ocean Mineral Resources R&D Association "Twelfth Five-Year" Major Program under contracts No. DY125-11-R-01 and DY125-11-R-05.
Parallel computation with the force
NASA Technical Reports Server (NTRS)
Jordan, H. F.
1985-01-01
A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures, and performance implications.
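The self-scheduled loop at the heart of the force can be sketched in Python rather than the original FORTRAN macros; the names (`force_loop`, `n_workers`) are illustrative, not the paper's:

```python
import threading

def force_loop(n_iters, body, n_workers=4):
    """Run body(i) for i in 0..n_iters-1 with a fixed 'force' of workers.

    Workers repeatedly claim the next unclaimed index from a shared
    counter (self-scheduling), so the worker count is independent of
    the loop bounds -- the central idea of the force.
    """
    counter = [0]
    lock = threading.Lock()

    def worker():
        while True:
            with lock:          # claim the next iteration atomically
                i = counter[0]
                counter[0] += 1
            if i >= n_iters:
                return          # index space exhausted
            body(i)

    crew = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in crew:
        t.start()
    for t in crew:
        t.join()                # implicit barrier at the end of the loop

# square each element in place; iterations are shared among the crew
data = list(range(8))
force_loop(len(data), lambda i: data.__setitem__(i, data[i] ** 2))
```

Each element index is claimed by exactly one worker, so the in-place updates do not race even though the assignment order is nondeterministic.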
Oceanic Circulation. A Programmed Unit of Instruction.
ERIC Educational Resources Information Center
Maine Maritime Academy, Castine.
This booklet contains a programmed lesson on oceanic circulation. It is designed to allow students to progress through the subject at their own speed. Since it is written in linear format, it is suggested that students proceed through the program from "frame" to succeeding "frame." Instructions for students on how to use the booklet are included.…
Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta, J.; Gimenez, J.; Caubet, J.
2003-01-01
Parallel programming paradigms include process level parallelism, thread level parallelism, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architecture (SMA). This analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.
Motion in the north Iceland volcanic rift zone accommodated by bookshelf faulting
NASA Astrophysics Data System (ADS)
Green, Robert G.; White, Robert S.; Greenfield, Tim
2014-01-01
Along mid-ocean ridges the extending crust is segmented on length scales of 10-1,000 km. Where rift segments are offset from one another, motion between segments is accommodated by transform faults that are oriented orthogonally to the main rift axis. Where segments overlap, non-transform offsets with a variety of geometries accommodate shear motions. Here we use micro-seismic data to analyse the geometries of faults at two overlapping rift segments exposed on land in north Iceland. Between the rift segments, we identify a series of faults that are aligned sub-parallel to the orientation of the main rift. These faults slip through left-lateral strike-slip motion. Yet, movement between the overlapping rift segments is through right-lateral motion. Together, these motions induce a clockwise rotation of the faults and intervening crustal blocks in a motion that is consistent with a bookshelf-faulting mechanism, named after its resemblance to a tilting row of books on a shelf. The faults probably reactivated existing crustal weaknesses, such as dyke intrusions, that were originally oriented parallel to the main rift and have since rotated about 15° clockwise. Reactivation of pre-existing, rift-parallel weaknesses contrasts with typical mid-ocean ridge transform faults and is an important illustration of a non-transform offset accommodating shear motion between overlapping rift segments.
76 FR 62808 - Pilot Program for Parallel Review of Medical Products
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-11
... voluntary participation in the pilot program, as well as the guiding principles the Agencies intend to... 57045), parallel review is intended to reduce the time between FDA marketing approval and CMS national...
Algorithms and programming tools for image processing on the MPP
NASA Technical Reports Server (NTRS)
Reeves, A. P.
1985-01-01
Topics addressed include: data mapping and rotational algorithms for the Massively Parallel Processor (MPP); Parallel Pascal language; documentation for the Parallel Pascal Development system; and a description of the Parallel Pascal language used on the MPP.
What electrical measurements can say about changes in fault systems.
Madden, T R; Mackie, R L
1996-01-01
Earthquake zones in the upper crust are usually more conductive than the surrounding rocks, and electrical geophysical measurements can be used to map these zones. Magnetotelluric (MT) measurements across fault zones that are parallel to the coast and not too far away can also give some important information about the lower crustal zone. This is because the long-period electric currents coming from the ocean gradually leak into the mantle, but the lower crust is usually very resistive and very little leakage takes place. If a lower crustal zone is less resistive it will be a leakage zone, and this can be seen because the MT phase will change as the ocean currents leave the upper crust. The San Andreas Fault is parallel to the ocean boundary and close enough to have a lot of extra ocean currents crossing the zone. The Loma Prieta zone, after the earthquake, showed a lot of ocean electric current leakage, suggesting that the lower crust under the fault zone was much more conductive than normal. It is hard to believe that water, which is responsible for the conductivity, had time to get into the lower crustal zone, so it was probably always there, but not well connected. If this is true, then the poorly connected water would be at a pressure close to the rock pressure, and it may play a role in modifying the fluid pressure in the upper crust fault zone. We also have telluric measurements across the San Andreas Fault near Palmdale from 1979 to 1990, and beginning in 1985 we saw changes in the telluric signals on the fault zone and east of the fault zone compared with the signals west of the fault zone. These measurements were probably seeing a better connection of the lower crust fluids taking place, and this may result in a fluid flow from the lower crust to the upper crust. This could be a factor in changing the strength of the upper crust fault zone. PMID:11607664
Detachment Fault Behavior Revealed by Micro-Seismicity at 13°N, Mid-Atlantic Ridge
NASA Astrophysics Data System (ADS)
Parnell-Turner, R. E.; Sohn, R. A.; MacLeod, C. J.; Peirce, C.; Reston, T. J.; Searle, R. C.
2016-12-01
Under certain tectono-magmatic conditions, crustal accretion and extension at slow-spreading mid-ocean ridges is accommodated by low-angle detachment faults. While it is now generally accepted that oceanic detachments initiate on steeply dipping faults that rotate to low angles at shallow depths, many details of their kinematics remain unknown. Debate has continued between a "continuous" model, where a single, undulating detachment surface underlies an entire ridge segment, and a "discrete" (or discontinuous) model, where detachments are spatially restricted and ephemeral. Here we present results from a passive microearthquake study of detachment faulting at the 13°N region of the Mid-Atlantic Ridge. This study is one component of a joint US-UK seismic study to constrain the sub-surface structure and 3-dimensional geometry of oceanic detachment faults. We detected over 300,000 microearthquakes during a 6-month deployment of 25 ocean bottom seismographs. Events are concentrated in two 1-2 km wide ridge-parallel bands, located between the prominent corrugated detachment fault surface at 13°20'N and the present-day spreading axis, separated by a 1-km wide patch of reduced seismicity. These two bands are 7-8 km in length parallel to the ridge and are clearly limited in spatial extent to the north and south. Events closest to the axis are generally at depths of 6-8 km, while those nearest to the oceanic detachment fault are shallower, at 4-6 km. There is an overall trend of deepening seismicity northwards, with events occurring progressively deeper by 4 km over an along-axis length of 8 km. Events are typically very small, and range in local magnitude from ML -1 to 3. Focal mechanisms indicate two modes of deformation, with extension nearest to the axis and compression at shallower depths near to the detachment fault termination.
NASA Astrophysics Data System (ADS)
Zodiatis, George; Radhakrishnan, Hari; Lardner, Robin; Hayes, Daniel; Gertman, Isaac; Menna, Milena; Poulain, Pierre-Marie
2014-05-01
The general anticlockwise circulation along the coastline of the Eastern Mediterranean Levantine Basin was first proposed by Nielsen in 1912. Half a century later, the schematic of the circulation in the area was enriched with sub-basin flow structures. In the late 1980s, a more detailed picture of the circulation, composed of eddies, gyres and coastal-offshore jets, was defined during the POEM cruises. In 2005, Millot and Taupier-Letage used SST satellite imagery to argue for a simpler pattern similar to the one proposed almost a century earlier. During the last decade, renewed in-situ multi-platform investigations under the framework of the CYBO, CYCLOPS, NEMED, GROOM, HaiSec and PERSEUS projects, as well as the development of operational ocean forecasts and hindcasts in the framework of the MFS, ECOOP, MERSEA and MyOcean projects, have made it possible to obtain an improved, higher spatial and temporal resolution picture of the circulation in the area. After some years of scientific dispute over the circulation pattern of the region, the new in-situ data sets and the operational numerical simulations confirm the relevant POEM results. The existing POM-based Cyprus Coastal Ocean Forecasting System (CYCOFOS), downscaling the MyOcean MFS, has been providing operational forecasts for the Eastern Mediterranean Levantine Basin since early 2002. Recently, Radhakrishnan et al. (2012) parallelized the CYCOFOS hydrodynamic flow model using MPI to improve the accuracy of predictions while reducing the computational time. The parallel flow model is capable of modeling the Eastern Mediterranean Levantine Basin flow at a resolution of 500 m. The model was run in hindcast mode, during which innovations were computed using historical data collected by gliders and cruises. Then DD-OceanVar (D'Amore et al., 2013), a data assimilation tool based on 3DVAR developed by CMCC, was used to compute the temperature and salinity field corrections.
Numerical modeling results after the data assimilation will be presented.
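An MPI parallelization of a structured-grid flow model of this kind typically rests on domain decomposition with halo exchange between neighbouring ranks. A serial Python sketch of the idea, with a 3-point average standing in for the real stencil (illustrative only, not the CYCOFOS code):

```python
import numpy as np

# global 1-D field, split across three notional MPI ranks,
# each subdomain padded with a one-cell halo on either side
field = np.arange(12, dtype=float)
nsub = 3
subs = [np.pad(c, 1) for c in np.array_split(field, nsub)]

# halo exchange: the serial analogue of MPI_Sendrecv between neighbours
for k in range(nsub - 1):
    subs[k][-1] = subs[k + 1][1]    # right halo <- neighbour's left edge
    subs[k + 1][0] = subs[k][-2]    # neighbour's left halo <- right edge

# each rank applies a 3-point average to its own interior cells only
parallel = np.concatenate(
    [(s[:-2] + s[1:-1] + s[2:]) / 3.0 for s in subs])

# the decomposed result matches a serial sweep over the whole field
padded = np.pad(field, 1)
serial = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
assert np.allclose(parallel, serial)
```

The same pattern generalizes to 2-D ocean grids: each rank owns a tile, exchanges halo rows and columns with its neighbours each time step, and the union of local updates reproduces the shared-memory result.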
Telling Your Story: Ocean Scientists in the K-12 Classroom
NASA Astrophysics Data System (ADS)
McWilliams, H.
2006-12-01
Most scientists and engineers are accustomed to presenting their research to colleagues or lecturing college or graduate students. But if asked to speak in front of a classroom full of elementary school or junior high school students, many feel less comfortable. TERC, as part of its work with The Center for Ocean Sciences Education Excellence-New England (COSEE-NE), has designed a workshop to help ocean scientists and engineers develop skills for working with K-12 teachers and students. We call this program: Telling Your Story (TYS). TYS has been offered 4 times over 18 months for a total audience of approximately 50 ocean scientists. We will discuss the rationale for the program, the program outline, outcomes, and what we have learned.
15 CFR 923.95 - Approval of applications.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Applications for Program Development or Implementation Grants § 923.95 Approval of applications. (a) The application for a grant by any coastal State which...
15 CFR 923.93 - Eligible implementation costs.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Applications for Program Development or Implementation... pursuant to section 6217 of the Coastal Zone Act Reauthorization Amendments of 1990. When in doubt as to...
15 CFR 923.93 - Eligible implementation costs.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Applications for Program Development or Implementation... pursuant to section 6217 of the Coastal Zone Act Reauthorization Amendments of 1990. When in doubt as to...
15 CFR 923.95 - Approval of applications.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Applications for Program Development or Implementation Grants § 923.95 Approval of applications. (a) The application for a grant by any coastal State which...
15 CFR 930.95 - Guidance provided by the State agency.
Code of Federal Regulations, 2012 CFR
2012-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL... assistance provision within the management program generally describing the geographic area (e.g., coastal... Director as a program change. Listed activities may have different geographic location descriptions...
15 CFR 930.95 - Guidance provided by the State agency.
Code of Federal Regulations, 2014 CFR
2014-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL... assistance provision within the management program generally describing the geographic area (e.g., coastal... Director as a program change. Listed activities may have different geographic location descriptions...
15 CFR 930.95 - Guidance provided by the State agency.
Code of Federal Regulations, 2013 CFR
2013-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL... assistance provision within the management program generally describing the geographic area (e.g., coastal... Director as a program change. Listed activities may have different geographic location descriptions...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-25
... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Proposed Information Collection; Comment Request; Fisheries Finance Program Requirements AGENCY: National Oceanic and Atmospheric Administration (NOAA), Commerce. ACTION: Notice. SUMMARY: The Department of Commerce, as part of its continuing...
Execution models for mapping programs onto distributed memory parallel computers
NASA Technical Reports Server (NTRS)
Sussman, Alan
1992-01-01
The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.
Program Correctness, Verification and Testing for Exascale (Corvette)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Koushik; Iancu, Costin; Demmel, James W
The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large-scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low-overhead automated and precise detection of concurrency bugs at scale. 2. Using low-overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.
Parallel Computing Strategies for Irregular Algorithms
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
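The smart load-balancing mentioned for irregular applications can be illustrated with the classic longest-processing-time (LPT) heuristic; this is a generic sketch, not an algorithm taken from the paper:

```python
def lpt_partition(weights, nproc):
    """Greedy LPT load balancing: assign each task, heaviest first,
    to the currently least-loaded processor."""
    loads = [0.0] * nproc
    assign = [[] for _ in range(nproc)]
    for w in sorted(weights, reverse=True):
        p = min(range(nproc), key=loads.__getitem__)  # least-loaded rank
        loads[p] += w
        assign[p].append(w)
    return loads, assign

# balance six unequal tasks across two processors
loads, assign = lpt_partition([7, 5, 4, 3, 2, 1], 2)
```

For irregular meshes and sparse matrices the "weights" would be per-element or per-row work estimates; graph partitioners refine this idea by also minimizing communication between the resulting partitions.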
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
Trace-Driven Debugging of Message Passing Programs
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert; Lopez, Louis; Bailey, David (Technical Monitor)
1998-01-01
In this paper we report on features added to a parallel debugger to simplify the debugging of parallel message passing programs. These features include replay, setting consistent breakpoints based on interprocess event causality, a parallel undo operation, and communication supervision. These features all use trace information collected during the execution of the program being debugged. We used a number of different instrumentation techniques to collect traces. We also implemented trace displays using two different trace visualization systems. The implementation was tested on an SGI Power Challenge cluster and a network of SGI workstations.
Exploiting Symmetry on Parallel Architectures.
NASA Astrophysics Data System (ADS)
Stiller, Lewis Benjamin
1995-01-01
This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
MPI implementation of PHOENICS: A general purpose computational fluid dynamics code
NASA Astrophysics Data System (ADS)
Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.
1995-03-01
PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using Message Passing Interface (MPI) standard. Implementation of MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.
Near-Inertial and Thermal Upper Ocean Response to Atmospheric Forcing in the North Atlantic Ocean
2010-06-01
meridional transport of heat (Hoskins and Valdes, 1990). Formation of North Atlantic Subtropical Mode Water is thought to take place during the...North Atlantic Ocean MIT/WHOI Joint Program in Oceanography/ Applied Ocean Science and Engineering Massachusetts Institute of Technology Woods Hole...Oceanographic Institution MITIWHOI 2010-16 Near-inertial and Thermal Upper Ocean Response to Atmospheric Forcing in the North Atlantic Ocean by
Parallel hyperbolic PDE simulation on clusters: Cell versus GPU
NASA Astrophysics Data System (ADS)
Rostrup, Scott; De Sterck, Hans
2010-12-01
Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications. 
Program summary
Program title: SWsolver
Catalogue identifier: AEGY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v3
No. of lines in distributed program, including test data, etc.: 59 168
No. of bytes in distributed program, including test data, etc.: 453 409
Distribution format: tar.gz
Programming language: C, CUDA
Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator.
Operating system: Linux
Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs.
RAM: Tested on problems requiring up to 4 GB per compute node.
Classification: 12
External routines: MPI, CUDA, IBM Cell SDK
Nature of problem: MPI-parallel simulation of the Shallow Water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA.
Solution method: SWsolver provides 3 implementations of a high-resolution 2D Shallow Water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster.
Additional comments: Sub-program numdiff is used for the test run.
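The kind of explicit structured-grid update that such a solver parallelizes can be sketched with a periodic one-dimensional Lax-Friedrichs step for the shallow water equations (a minimal serial analogue written for this summary, not SWsolver code):

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def lax_friedrichs_step(h, hu, dx, dt):
    """One explicit step for 1-D shallow water on a periodic grid.

    Conserved variables: depth h and momentum hu, with fluxes
    F = (hu, hu^2/h + g h^2 / 2).
    """
    u = hu / h
    F_h, F_hu = hu, hu * u + 0.5 * G * h * h

    def update(q, F):
        # Lax-Friedrichs: average of neighbours minus centred flux difference
        return (0.5 * (np.roll(q, -1) + np.roll(q, 1))
                - 0.5 * dt / dx * (np.roll(F, -1) - np.roll(F, 1)))

    return update(h, F_h), update(hu, F_hu)

# a small hump of water released from rest (CFL number about 0.33 here)
x = np.arange(100, dtype=float)
h0 = 1.0 + 0.1 * np.exp(-((x - 50.0) / 10.0) ** 2)
hu0 = np.zeros_like(h0)
h1, hu1 = lax_friedrichs_step(h0, hu0, dx=1.0, dt=0.1)
```

Because each cell's update depends only on its immediate neighbours, the loop maps naturally onto the data-parallel Cell and GPU backends the paper benchmarks, with MPI halo exchange at subdomain boundaries.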
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, L.; Notkin, D.; Adams, L.
1990-03-31
This task relates to research on programming massively parallel computers. Previous work on the Ensemble concept of programming was extended, and investigation into nonshared memory models of parallel computation was undertaken. Previous work on the Ensemble concept defined a set of programming abstractions and organized the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It was applied to shared memory models of computation. During the present research period, these concepts were extended to nonshared memory models. During this period, one Ph.D. thesis was completed, and one book chapter and six conference proceedings were published.
Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2002-01-01
Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.
NASA Astrophysics Data System (ADS)
Evangelidis, C. P.
2017-12-01
The segmentation and differentiation of subducting slabs have considerable effects on mantle convection and tectonics. The Hellenic subduction zone is a complex convergent margin with strong curvature and fast slab rollback. The upper mantle seismic anisotropy in the region is studied, focusing on its western and eastern edges in order to explore the effects of possible slab segmentation on mantle flow and fabrics. Complementary to new SKS shear-wave splitting measurements in regions not adequately sampled so far, the source-side splitting technique is applied to constrain the depth of anisotropy and to densify measurements. In the western Hellenic arc, trench-normal subslab anisotropy is observed near the trench. In the forearc domain, source-side and SKS measurements reveal a trench-parallel pattern. This indicates subslab trench-parallel mantle flow, associated with return flow due to the fast slab rollback. The passage from continental to oceanic subduction in the western Hellenic zone is illustrated by a transitional forearc anisotropy pattern. This indicates subslab mantle flow parallel to a NE-SW smooth ramp that possibly connects the two subducted slabs. A young tear fault initiated at the Kefalonia Transform Fault is likely not yet fully developed, as the trench-parallel anisotropy pattern is observed along the entire western Hellenic subduction system, even across the horizontal offset between the two slabs. At the eastern side of the Hellenic subduction zone, subslab source-side anisotropy measurements show a generally trench-normal pattern. These are associated with mantle flow through a possible ongoing tearing of the oceanic lithosphere in the area. Although the exact geometry of this slab tear is relatively unknown, trench-parallel SKS measurements imply that the tear has not yet reached the surface.
Further exploration of the Hellenic subduction system is necessary; denser seismic networks should be deployed at both its edges in order to achieve a more definite image of the structure and geodynamics of this area.
NASA Astrophysics Data System (ADS)
de Wet, P. D.; Bentsen, M.; Bethke, I.
2016-02-01
It is well known that, when comparing climatological parameters such as ocean temperature and salinity to the output of an Earth System Model (ESM), the model exhibits biases. In ESMs with an isopycnic ocean component, such as NorESM, insufficient vertical mixing is thought to be one of the causes of such differences between observational and model data. However, enhancing the vertical mixing of the model's ocean component requires not only increased energy input but also sound physical reasoning for doing so. Various authors have shown that the action of atmospheric winds on the ocean's surface is a major source of energy input into the upper ocean. However, due to model and computational constraints, oceanic processes linked to surface winds are incompletely accounted for. Consequently, despite significantly contributing to the energy required to maintain ocean stratification, most ESMs do not directly make provision for this energy. In this study we investigate the implementation of a routine in which the energy from work done on oceanic near-inertial motions is calculated in an offline slab model. The slab model, which is well documented in the literature, runs parallel to but independently from the ESM's ocean component. It receives wind fields at a frequency higher than the coupling frequency, allowing it to capture fluctuations in the winds on shorter time scales. The energy thus calculated is then passed to the ocean component, avoiding the need for increased coupling between the components of the ESM. Results show localised reductions in, among other fields, the salinity and temperature biases of NorESM, confirming the model's sensitivity to wind forcing and pointing to the need for better representation of surface processes in ESMs.
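The slab model referred to above is well documented in the literature (Pollard & Millard style); as an illustration only, here is a minimal Python sketch of such an offline calculation of wind work on near-inertial motions. All parameter values and the stress time series are assumptions for the example, not NorESM settings.

```python
import numpy as np

# Minimal offline slab model of wind-driven near-inertial motions, of the
# kind the abstract describes running alongside the ESM's ocean component.
# All parameter values are illustrative, not NorESM settings.
RHO = 1025.0              # seawater density [kg/m^3]
H = 50.0                  # mixed-layer depth [m]
F = 1.0e-4                # Coriolis parameter [1/s]
R = 1.0 / (5 * 86400.0)   # linear damping rate [1/s]

def integrate_slab(taux, tauy, dt):
    """Advance the slab equations with an exact step for piecewise-constant
    wind stress; return the wind-work time series tau . u [W/m^2]."""
    w = 0.0 + 0.0j                    # complex mixed-layer velocity u + i*v
    lam = R + 1j * F
    e = np.exp(-lam * dt)
    work = np.empty(len(taux))
    for n, (tx, ty) in enumerate(zip(taux, tauy)):
        work[n] = tx * w.real + ty * w.imag
        T = (tx + 1j * ty) / (RHO * H)
        w = w * e + T * (1.0 - e) / lam   # exact step for constant stress
    return work

# hourly winds for 10 days: a one-day stress pulse rings the inertial response
dt = 3600.0
t = np.arange(0, 10 * 86400, dt)
taux = np.where(t < 86400, 0.1, 0.0)      # 0.1 N/m^2 during day one
work = integrate_slab(taux, np.zeros_like(t), dt)
print(f"net wind work: {work.sum() * dt:.3e} J/m^2")
```

The exponential step treats the stress as constant over each input interval, so the inertial oscillation neither amplifies nor decays spuriously, which a naive explicit scheme would do at hour-long time steps.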
The parallel programming of voluntary and reflexive saccades.
Walker, Robin; McSorley, Eugene
2006-06-01
A novel two-step paradigm was used to investigate the parallel programming of consecutive, stimulus-elicited ('reflexive') and endogenous ('voluntary') saccades. The mean latency of voluntary saccades, made following the first reflexive saccades in two-step conditions, was significantly reduced compared to that of voluntary saccades made in the single-step control trials. The latency of the first reflexive saccades was modulated by the requirement to make a second saccade: first saccade latency increased when a second voluntary saccade was required in the opposite direction to the first saccade, and decreased when a second saccade was required in the same direction as the first reflexive saccade. A second experiment confirmed the basic effect and also showed that a second reflexive saccade may be programmed in parallel with a first voluntary saccade. The results support the view that voluntary and reflexive saccades can be programmed in parallel on a common motor map.
Turnover time of fluorescent dissolved organic matter in the dark global ocean.
Catalá, Teresa S; Reche, Isabel; Fuentes-Lema, Antonio; Romera-Castillo, Cristina; Nieto-Cid, Mar; Ortega-Retuerta, Eva; Calvo, Eva; Álvarez, Marta; Marrasé, Cèlia; Stedmon, Colin A; Álvarez-Salgado, X Antón
2015-01-29
Marine dissolved organic matter (DOM) is one of the largest reservoirs of reduced carbon on Earth. In the dark ocean (>200 m), most of this carbon is refractory DOM. This refractory DOM, largely produced during microbial mineralization of organic matter, includes humic-like substances generated in situ and detectable by fluorescence spectroscopy. Here we show two ubiquitous humic-like fluorophores with turnover times of 435±41 and 610±55 years, which persist significantly longer than the ~350 years that the dark global ocean takes to renew. In parallel, decay of a tyrosine-like fluorophore with a turnover time of 379±103 years is also detected. We propose the use of DOM fluorescence to study the cycling of resistant DOM that is preserved at centennial timescales and could represent a mechanism of carbon sequestration (humic-like fraction) and the decaying DOM injected into the dark global ocean, where it decreases at centennial timescales (tyrosine-like fraction).
Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.
2000-01-01
Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and the full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is generally not possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking the parallelization up into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose work-carrying loops can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Still other toolkits aid the construction of message-passing programs from scratch. None, however, allows piecemeal translation of codes with complex data dependencies (i.e., non-data-parallel programs) into message-passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two at any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data.
Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demand in realistic structured-grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under the complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction to the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline, the prototypical non-data-parallel algorithm, can be constructed with ease. To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of the nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
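Charon's actual API is not reproduced in the abstract; as a hypothetical sketch (function names are illustrative, not Charon's), the mapping between a non-distributed array and a block-distributed one reduces to index arithmetic like the following, which is what lets the two representations coexist and be switched between during incremental parallelization.

```python
# Hypothetical sketch of the index arithmetic behind mapping a legacy
# (non-distributed) array onto a block-distributed one, in the spirit of
# the approach described above. Names are illustrative, not Charon's API.

def block_bounds(n, nprocs, rank):
    """Global index range [lo, hi) owned by `rank` for n elements
    block-distributed over nprocs processes (remainder spread left)."""
    base, rem = divmod(n, nprocs)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

def owner(n, nprocs, g):
    """Process owning global index g, plus the local index there."""
    for r in range(nprocs):
        lo, hi = block_bounds(n, nprocs, r)
        if lo <= g < hi:
            return r, g - lo
    raise IndexError(g)

def scatter(arr, nprocs):
    """Split a non-distributed array into per-process blocks."""
    return [arr[slice(*block_bounds(len(arr), nprocs, r))]
            for r in range(nprocs)]

def gather(blocks):
    """Rebuild the non-distributed array (the reverse mapping)."""
    return [x for blk in blocks for x in blk]

blocks = scatter(list(range(10)), 3)
print(blocks)
```

Scatter followed by gather is the identity, mirroring the point above that a non-distributed array is just a trivial distribution that can be mapped to a true one and back.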
NASA Astrophysics Data System (ADS)
Maffione, Marco; van Hinsbergen, Douwe; de Gelder, Giovanni; van der Goes, Freek; Morris, Antony
2017-04-01
Formation of new subduction zones represents one of the cornerstones of plate tectonics, yet both the kinematics and geodynamics governing this process remain enigmatic. A major subduction initiation event occurred in the Late Cretaceous, within the Neo-Tethys Ocean between Gondwana and Eurasia. Supra-subduction zone (SSZ) ophiolites (i.e., emerged fragments of ancient oceanic lithosphere accreted at supra-subduction spreading centers) were generated during this subduction event, and are today distributed in the eastern Mediterranean region along three E-W trending ophiolitic belts. Current models associate these ophiolite belts with the simultaneous initiation of multiple, E-W trending subduction zones at 95 Ma. Here we report paleospreading direction data obtained from paleomagnetic analysis of sheeted dyke sections from seven Neo-Tethyan ophiolites of Turkey, Cyprus, and Syria, demonstrating that these ophiolites formed at NNE-SSW striking ridges parallel to the newly formed subduction zones. This subduction system was step-shaped and composed of NNE-SSW and ESE-WNW segments. The eastern subduction segment invaded the SW Mediterranean, leading to a radial obduction pattern similar to the Banda arc. Emplacement age constraints indicate that this subduction system formed close to the Triassic passive and paleo-transform margins of the Anatolide-Tauride continental block. Because the original Triassic-Jurassic Neo-Tethyan spreading ridge must have already subducted below the Pontides before the Late Cretaceous, we infer that the Late Cretaceous Neo-Tethyan subduction system started within ancient lithosphere, along NNE-SSW oriented fracture zones and faults parallel to the E-W trending passive margins. This challenges current concepts suggesting that subduction initiation occurs along active intra-oceanic plate boundaries.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-18
...The Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) (the Agencies) are announcing the extension of the ``Pilot Program for Parallel Review of Medical Products.'' The Agencies have decided to continue the program as currently designed for an additional period of 2 years from the date of publication of this notice.
Equatorial Wave Line, Pacific Ocean
1993-01-19
STS054-95-042 (13-19 Jan 1993) --- The Equatorial Pacific Ocean is represented in this 70mm view. The international oceanographic research community is presently conducting a program called the Joint Global Ocean Flux Study (JGOFS) to study the global ocean carbon budget. A considerable amount of effort within this program is presently being focused on the Equatorial Pacific Ocean because of its high annual average biological productivity. The high productivity is the result of nearly constant easterly winds causing cool, nutrient-rich water to well up at the equator. This view of the sun glint pattern was photographed at about 2 degrees north latitude, 103 degrees west longitude, as the Space Shuttle passed over the Equatorial Pacific. The long narrow line is the equatorial front, which defines the boundary between warm surface equatorial water and cool, recently upwelled water. Such features are of interest to the JGOFS researchers, and it is anticipated that photographs such as this will benefit the JGOFS program.
ERIC Educational Resources Information Center
Schlenker, Richard M.
This document reviews the Pacific Region Junior Science and Humanities Symposium (PJSHS) program for 2003-2004, a 10-month precollege student research program held in Japan. The theme is "Atmosphere: The Other Ocean." The program includes a one-week symposium of student delegates who have completed research projects in the sciences or have…
Integrated Task and Data Parallel Programming
NASA Technical Reports Server (NTRS)
Grimshaw, A. S.
1998-01-01
This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.
Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.
Integrated Task And Data Parallel Programming: Language Design
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.; West, Emily A.
1998-01-01
This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers '95 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program.
Additional 1995 Activities During the fall I collaborated with Andrew Grimshaw and Adam Ferrari to write a book chapter which will be included in Parallel Processing in C++ edited by Gregory Wilson. I also finished two courses, Compilers and Advanced Compilers, in 1995. These courses complete my class requirements at the University of Virginia. I have only my dissertation research and defense to complete.
Bookshelf faulting and transform motion between rift segments of the Northern Volcanic Zone, Iceland
NASA Astrophysics Data System (ADS)
Green, R. G.; White, R. S.; Greenfield, T. S.
2013-12-01
Plate spreading is segmented on length scales from 10 to 1,000 kilometres. Where spreading segments are offset, extensional motion has to transfer from one segment to another. In classical plate tectonics, mid-ocean ridge spreading centres are offset by transform faults, but smaller 'non-transform' offsets exist between slightly overlapping spreading centres, which accommodate shear by a variety of geometries. In Iceland, the Mid-Atlantic Ridge is raised above sea level by the Iceland mantle plume and is divided into a series of segments 20-150 km long. Using microseismicity recorded by a temporary array of 26 three-component seismometers during 2009-2012, we map bookshelf faulting between the offset Askja and Kverkfjöll rift segments in north Iceland. The micro-earthquakes delineate a series of sub-parallel strike-slip faults. Well-constrained fault plane solutions show consistent left-lateral motion on fault planes aligned closely with epicentral trends. The shear couple across the transform zone causes left-lateral slip on the series of strike-slip faults sub-parallel to the rift fabric, causing clockwise rotations about a vertical axis of the intervening rigid crustal blocks. This accommodates the overall right-lateral transform motion in the relay zone between the two overlapping volcanic rift segments. The faults probably reactivated crustal weaknesses along the dyke intrusion fabric (parallel to the rift axis) and have since rotated ˜15° clockwise into their present orientation. The reactivation of pre-existing rift-parallel weaknesses contrasts with mid-ocean ridge transform faults and is an important illustration of a 'non-transform' offset accommodating shear between overlapping spreading segments.
NASA Astrophysics Data System (ADS)
Baines, A. Graham; Cheadle, Michael J.; Dick, Henry J. B.; Hosford Scheirer, Allegra; John, Barbara E.; Kusznir, Nick J.; Matsumoto, Takeshi
2003-12-01
Atlantis Bank is an anomalously uplifted oceanic core complex adjacent to the Atlantis II transform, on the southwest Indian Ridge, that rises >3 km above normal seafloor of the same age. Models of flexural uplift due to detachment faulting can account for ˜1 km of this uplift. Postdetachment normal faults have been observed during submersible dives and on swath bathymetry. Two transform-parallel, large-offset (hundreds of meters) normal faults are identified on the eastern flank of Atlantis Bank, with numerous smaller faults (tens of meters) on the western flank. Flexural uplift associated with this transform-parallel normal faulting is consistent with gravity data and can account for the remaining anomalous uplift of Atlantis Bank. Extension normal to the Atlantis II transform may have occurred during a 12 m.y. period of transtension initiated by a 10° change in spreading direction ca. 19.5 Ma. This extension may have produced the 120-km-long transverse ridge of which Atlantis Bank is a part, and is consistent with stress reorientation about a weak transform fault.
Baines, A.G.; Cheadle, Michael J.; Dick, H.J.B.; Scheirer, A.H.; John, Barbara E.; Kusznir, N.J.; Matsumoto, T.
2003-01-01
Atlantis Bank is an anomalously uplifted oceanic core complex adjacent to the Atlantis II transform, on the southwest Indian Ridge, that rises >3 km above normal seafloor of the same age. Models of flexural uplift due to detachment faulting can account for ~1 km of this uplift. Postdetachment normal faults have been observed during submersible dives and on swath bathymetry. Two transform-parallel, large-offset (hundreds of meters) normal faults are identified on the eastern flank of Atlantis Bank, with numerous smaller faults (tens of meters) on the western flank. Flexural uplift associated with this transform-parallel normal faulting is consistent with gravity data and can account for the remaining anomalous uplift of Atlantis Bank. Extension normal to the Atlantis II transform may have occurred during a 12 m.y. period of transtension initiated by a 10° change in spreading direction ca. 19.5 Ma. This extension may have produced the 120-km-long transverse ridge of which Atlantis Bank is a part, and is consistent with stress reorientation about a weak transform fault.
Automatic Management of Parallel and Distributed System Resources
NASA Technical Reports Server (NTRS)
Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.
1990-01-01
Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention to the ocean and the rapid development of marine observation technology, demand is increasing for realistic simulation and interactive visualization of the marine environment in real time. Based on technologies such as GPU rendering, CUDA parallel computing, and a fast grid-oriented strategy, this paper proposes a series of efficient, high-quality visualization methods that can handle large-scale, multi-dimensional marine data under different environmental circumstances. First, high-quality seawater simulation is realized with an FFT algorithm, bump mapping, and texture animation. Second, large-scale multi-dimensional marine hydrological data are visualized with 3D interaction and volume rendering techniques. Third, seabed terrain is reconstructed with an improved Delaunay triangulation, surface reconstruction, dynamic level-of-detail (LOD) algorithms, and GPU programming. Fourth, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a convincing marine environment simulation but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is built on the OSG 3D rendering engine. Integrated with the visualization methods above, it shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. Such an application provides valuable reference and decision-making information for understanding the progress of an oil spill in deep water, supporting ocean disaster forecasting, warning, and emergency response.
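The paper's actual spectrum and rendering parameters are not given in the abstract; as a hedged illustration of FFT-based seawater synthesis (the Phillips-like spectral shape and all constants below are assumptions), a 1D surface elevation field can be generated by inverse-FFT of a random-phase spectrum:

```python
import numpy as np

# Minimal 1D sketch of FFT-based ocean-surface synthesis. Spectrum shape
# and constants are illustrative assumptions, not the paper's parameters.
def ocean_surface(n=256, length=100.0, wind=8.0, seed=0):
    """Synthesize n surface-elevation samples over `length` metres by
    inverse-FFT of a random-phase, Phillips-like wave spectrum."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=length / n)   # wavenumbers [rad/m]
    g = 9.81
    L = wind**2 / g                                    # largest wave scale
    with np.errstate(divide="ignore", invalid="ignore"):
        phillips = np.exp(-1.0 / (k * L) ** 2) / k**4  # Phillips-like spectrum
    phillips[0] = 0.0                                  # no mean component
    amp = np.sqrt(phillips / 2)
    coeffs = amp * (rng.standard_normal(k.size)
                    + 1j * rng.standard_normal(k.size))
    return np.fft.irfft(coeffs, n)                     # real elevation field

h = ocean_surface()
print(h.shape, h.std())
```

In a real renderer the spectral coefficients would be re-phased each frame using the deep-water dispersion relation to animate the surface; the GPU variant does the same inverse FFT per frame.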
Describing, using 'recognition cones'. [parallel-series model with English-like computer program
NASA Technical Reports Server (NTRS)
Uhr, L.
1973-01-01
A parallel-serial 'recognition cone' model is examined, taking into account the model's ability to describe scenes of objects. An actual program is presented in an English-like language. The concept of a 'description' is discussed together with possible types of descriptive information. Questions regarding the level and the variety of detail are considered along with approaches for improving the serial representations of parallel systems.
PISCES: An environment for parallel scientific computation
NASA Technical Reports Server (NTRS)
Pratt, T. W.
1985-01-01
The Parallel Implementation of Scientific Computing Environments (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77-based environment that runs under the UNIX operating system. Pisces 1 users program in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is on providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture, so the design is intended to be portable to a variety of architectures. Currently, Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task-level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented, along with an example algorithm for the iterative solution of a system of equations. The most notable features of the design are the provision for several granularities of parallelism in programs and a window mechanism for distributed access to large arrays of data.
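The iterative-solution example mentioned above is not reproduced in the abstract. As a sketch of the kind of algorithm meant (in Python rather than Pisces FORTRAN), Jacobi iteration updates every component independently, which is exactly the granularity of parallelism such an environment can distribute across tasks:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration for Ax = b. Each component of x_new is computed
    independently of the others, so one sweep parallelizes trivially."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    D = np.diag(A)                 # diagonal entries
    R = A - np.diagflat(D)         # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # the embarrassingly parallel sweep
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# diagonally dominant system, so Jacobi is guaranteed to converge
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b, atol=1e-8))
```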
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-07-01
This report summarizes the results of FY 1981 National Oceanic and Atmospheric Administration (NOAA) monitoring and research efforts under Title II of the Marine Protection, Research, and Sanctuaries Act of 1972 (P.L. 92-532). Section 201 of Title II assigns responsibility to the Department of Commerce for a comprehensive and continuing program of monitoring and research regarding the effects of dumping material into ocean waters, coastal waters, and the Great Lakes. Section 202 of Title II directs the Secretary of Commerce, in consultation with other appropriate parts of the U.S. Government, to 'initiate a comprehensive and continuing program of research with respect to the possible long-range effects of pollution, overfishing, and man-induced changes of ocean ecosystems.' The legislation also directs the Secretary of Commerce to report the findings from the monitoring and research programs to the Congress at least once a year. There are intrinsic difficulties, however, in distinguishing 'long-range' effects from the 'acute' effects of ocean dumping, or more generally of marine pollution. In response to these considerations and to the responsibilities assigned to NOAA under the National Ocean Pollution Planning Act (P.L. 95-273), NOAA has consolidated and coordinated its research efforts in these areas to make the overall program more cost-effective and productive.
Eigensolver for a Sparse, Large Hermitian Matrix
NASA Technical Reports Server (NTRS)
Tisdale, E. Robert; Oyafuso, Fabiano; Klimeck, Gerhard; Brown, R. Chris
2003-01-01
A parallel-processing computer program finds a few eigenvalues in a sparse Hermitian matrix that contains as many as 100 million diagonal elements. This program finds the eigenvalues faster, using less memory, than do other, comparable eigensolver programs. This program implements a Lanczos algorithm in the American National Standards Institute/ International Organization for Standardization (ANSI/ISO) C computing language, using the Message Passing Interface (MPI) standard to complement an eigensolver in PARPACK. [PARPACK (Parallel Arnoldi Package) is an extension, to parallel-processing computer architectures, of ARPACK (Arnoldi Package), which is a collection of Fortran 77 subroutines that solve large-scale eigenvalue problems.] The eigensolver runs on Beowulf clusters of computers at the Jet Propulsion Laboratory (JPL).
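As an illustration of the Lanczos algorithm at the heart of this eigensolver (a dense NumPy sketch, not the ANSI C/MPI/PARPACK implementation described; full reorthogonalization is a luxury a 100-million-row solver cannot afford), the iteration builds a small tridiagonal matrix whose eigenvalues approximate the extreme eigenvalues of A:

```python
import numpy as np

def lanczos_extreme(A, m=None, seed=0):
    """Lanczos tridiagonalization of a symmetric/Hermitian A with full
    reorthogonalization; returns the Ritz values (eigenvalues of the
    tridiagonal T), which approximate A's extreme eigenvalues."""
    n = A.shape[0]
    m = m or n
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, m))
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    for j in range(m):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        # reorthogonalize against the whole Krylov basis built so far
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:          # invariant subspace found
                alpha, beta = alpha[:j + 1], beta[:j]
                break
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(1)
B = rng.standard_normal((60, 60))
A = (B + B.T) / 2                        # random symmetric test matrix
ritz = lanczos_extreme(A, m=60)
err = abs(ritz[-1] - np.linalg.eigvalsh(A)[-1])
print(err)
```

With m equal to the matrix dimension the Ritz values reproduce the full spectrum; the production solver instead stops after a few dozen iterations, when only the wanted extreme eigenvalues have converged.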
NASA Astrophysics Data System (ADS)
Hashimoto, Y.; Tobin, H. J.; Knuth, M.
2010-12-01
In this study, we focused on the porosity and compressional wave velocity of marine sediments to examine the physical properties of the slope apron and the accreted sediments. This approach allows us to identify characteristic variations between sediments being deposited onto the active prism and those deposited on the oceanic plate and then carried into the prism during subduction. For this purpose we conducted ultrasonic compressional wave velocity measurements on the obtained core samples with pore pressure control. Site C0001 in the Nankai Trough Seismogenic Zone Experiment transect of the Integrated Ocean Drilling Program is located in the hanging wall of the midslope megasplay thrust fault in the Nankai subduction zone offshore of the Kii peninsula (SW Japan), penetrating an unconformity at ˜200 m depth between slope apron sediments and the underlying accreted sediments. We used samples from Site C0001. Compressional wave velocity from laboratory measurements ranges from ˜1.6 to ˜2.0 km/s at hydrostatic pore pressure conditions estimated from sample depth. The compressional wave velocity-porosity relationship for the slope apron sediments shows a slope almost parallel to that of global empirical relationships. In contrast, the velocity-porosity relationship for the accreted sediments shows a slightly steeper slope than that of the slope apron sediments at a porosity of 0.55. This steeper slope in the velocity-porosity relationship is found to be characteristic of the accreted sediments. Textural analysis was also conducted to examine the relationship between microstructural texture and acoustic properties. Images from micro-X-ray CT indicated a homogeneous and well-sorted distribution of small pores both in shallow and in deeper sections. Other mechanisms such as lithology, clay fraction, and abnormal fluid pressure were found to be insufficient to explain the higher velocity for accreted sediments.
The steeper slope in the velocity-porosity relationship for the accreted sediments can be explained by weak cementation, critical porosity, or differences in loading history.
Parallelization of elliptic solver for solving 1D Boussinesq model
NASA Astrophysics Data System (ADS)
Tarwidi, D.; Adytia, D.
2018-03-01
In this paper, a parallel implementation of an elliptic solver for the 1D Boussinesq model is presented. The numerical solution of the Boussinesq model is obtained by applying a staggered-grid scheme to the continuity, momentum, and elliptic equations of the model. The tridiagonal system emerging from the numerical scheme for the elliptic equation is solved by the cyclic reduction algorithm. The parallel implementation of cyclic reduction is executed on multicore processors with shared-memory architecture using OpenMP. To measure the performance of the parallel program, the number of grid points is varied from 2^8 to 2^14. Two numerical test cases, the propagation of a solitary wave and of a standing wave, are used to evaluate the parallel program; the numerical results are verified against analytical solutions. The best speedups for the solitary and standing wave test cases are about 2.07 with 2^14 grid points and 1.86 with 2^13 grid points, respectively, both obtained using 8 threads. The corresponding best parallel efficiencies are 76.2% and 73.5%.
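A serial sketch of the cyclic reduction algorithm named above, for n = 2^k - 1 unknowns (this is an illustration, not the authors' code). At each level, all updates in the inner loop are mutually independent, which is precisely what an OpenMP `parallel for` over that loop exploits:

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve the tridiagonal system with sub-, main-, super-diagonals
    a, b, c and right-hand side d by cyclic reduction, for n = 2**k - 1
    unknowns. a[0] and c[-1] are ignored (treated as zero)."""
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    a[0] = c[-1] = 0.0
    # forward reduction: eliminate neighbours at distance `stride`
    stride = 1
    while 2 * stride - 1 < n:
        for i in range(2 * stride - 1, n, 2 * stride):  # independent updates
            lo, hi = i - stride, i + stride
            alpha = -a[i] / b[lo]
            a[i] = alpha * a[lo]
            b[i] += alpha * c[lo]
            d[i] += alpha * d[lo]
            if hi < n:
                beta = -c[i] / b[hi]
                c[i] = beta * c[hi]
                b[i] += beta * a[hi]
                d[i] += beta * d[hi]
            else:
                c[i] = 0.0
        stride *= 2
    # back substitution, from the coarsest level down
    x = np.zeros(n)
    while stride >= 1:
        for i in range(stride - 1, n, 2 * stride):      # independent updates
            xl = x[i - stride] if i - stride >= 0 else 0.0
            xr = x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
        stride //= 2
    return x

# verify against a dense solve on a 2**3 - 1 = 7 point system
rng = np.random.default_rng(0)
n = 7
a = rng.random(n)
c = rng.random(n)
b = 4.0 + rng.random(n)          # diagonally dominant, hence well-posed
d = rng.random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
x = cyclic_reduction(a, b, c, d)
print(np.allclose(A @ x, d))
```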
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.
1997-12-31
The aim of this work is to develop a 3D parallel program for the numerical solution of a gas dynamics problem with heat conductivity on distributed-memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to structuring the massively parallel computation have been developed. The first uses a 3D data matrix decomposition that is reconstructed at each temporal cycle, and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of VNIIEF and LLNL staff. A large number of numerical experiments has been carried out with up to 256 processors, and the parallelization efficiency has been evaluated as a function of the number of processors and their parameters.
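The second approach, a block decomposition fixed for the whole temporal cycle, can be illustrated with a minimal sketch (Python, invented names; the original VNIIEF code is not reproduced here). Because each processor's index ranges depend only on the global dimensions and the processor grid, each cell is assigned deterministically, which is one prerequisite (together with a fixed reduction order) for results that are independent of the processor count.

```python
import itertools

# Hedged sketch of a fixed 3D block decomposition: computed once, and not
# reconstructed during a temporal cycle.
def block_range(n, nparts, part):
    """Half-open 1D block [lo, hi); remainder cells go to the first parts."""
    base, extra = divmod(n, nparts)
    lo = part * base + min(part, extra)
    return lo, lo + base + (1 if part < extra else 0)

def decompose3d(dims, grid):
    """Map each processor coordinate to its (x, y, z) index ranges."""
    return {proc: tuple(block_range(dims[k], grid[k], proc[k]) for k in range(3))
            for proc in itertools.product(*(range(g) for g in grid))}
```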
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
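The core comparison the system performs, walking matched instrumentation points in the serial and parallel executions and stopping at the first difference, can be sketched as follows. This is a hedged illustration with invented names, not the actual debugger.

```python
# Hedged illustration of the relative-debugging comparison: each trace entry
# is (label, values) recorded at one instrumentation point; report the first
# point at which the two executions differ.
def first_divergence(serial_trace, parallel_trace, tol=1e-12):
    """Return (index, label) of the first differing point, or None."""
    for k, ((lab_s, vals_s), (lab_p, vals_p)) in enumerate(
            zip(serial_trace, parallel_trace)):
        if (lab_s != lab_p or len(vals_s) != len(vals_p)
                or any(abs(u - v) > tol for u, v in zip(vals_s, vals_p))):
            return k, lab_s
    return None
```

If the serial code is correct, the returned label localizes the first computation affected by a parallelization error, which is the isolation property described above.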
Relative Debugging of Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2002-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Paralex: An Environment for Parallel Programming in Distributed Systems
1991-12-07
distributed systems is comparable to assembly language programming for traditional sequential systems - the user must resort to low-level primitives ...to accomplish data encoding/decoding, communication, remote execution, synchronization, failure detection and recovery. It is our belief that... synchronization. Finally, composing parallel programs by interconnecting sequential computations allows automatic support for heterogeneity and fault tolerance
Exploring the Oceans With OOI and IODP: A New Partnership in Education and Outreach
NASA Astrophysics Data System (ADS)
Gröschel, H.; Robigou, V.; Whitman, J.; Jagoda, S. K.; Randle, D.
2003-12-01
The Ocean Observatories Initiative (OOI), a new program supported by the National Science Foundation (NSF), will investigate ocean and Earth processes using deep-sea and coastal observatories, as well as a lithospheric plate-scale cabled observatory that spans most of the geological and oceanographic processes of our planet. October 2003 marked the beginning of the Integrated Ocean Drilling Program (IODP), the third phase of a scientific ocean drilling effort known for its international cooperation, multidisciplinary research, and technological innovation. A workshop exploring the scientific, technical, and educational linkages between OOI and IODP was held in July 2003. Four scientific thematic groups discussed and prioritized common goals of the two programs, and identified experiments and technologies needed to achieve these objectives. The Education and Outreach (E&O) group attended the science sessions and presented seed ideas on activities for all participants to discuss and evaluate. A multidisciplinary dialogue between E&O facilitators, research scientists, and technology specialists was initiated. OOI/IODP participants support the recommendation of the IODP Education Workshop (May 2003) that the IODP and US Science Support Program (USSSP)-successor program have clear commitments to education and outreach. 
Specific organizational recommendations for OOI/IODP are: (1) E&O should have equal status with science and engineering in the OOI management/planning structure, and enjoy adequate staffing at a US program office; (2) an E&O Advisory Committee of scientists, engineers, technology experts, and educators should be established to develop and implement a viable, vibrant E&O plan; (3) E&O staff and advisors should (a) provide assistance to researchers in fulfilling E&O proposal requirements from preparation to review stages, (b) promote submittal of proposals to government agencies specifically for OOI/IODP-related E&O activities, and (c) identify and foster partners, networks, and funding opportunities. Specific E&O strategies include: (1) present observatory science and ocean drilling content, and the sense of discovery and international cooperation unique to OOI/IODP, to a broad audience; (2) develop and maintain an effective website with distinct resources for K-20 educators, students, and the public; (3) provide pre-service, in-service, and in-residence programs for K-12 teachers that are synergistic with national and local education standards; (4) focus K-12 education efforts on middle school students in grades 5-8; (5) continue and expand existing, successful Ocean Drilling Program activities for undergraduate and graduate students and educators; and (6) try to avoid redundancy with existing E&O efforts within the ocean sciences community by adopting successful models and exploring partnership opportunities with other NSF-funded ocean science education centers and initiatives.
Seasonal Atmospheric and Oceanic Predictions
NASA Technical Reports Server (NTRS)
Roads, John; Rienecker, Michele (Technical Monitor)
2003-01-01
Several projects associated with dynamical, statistical, single column, and ocean models are presented. The projects include: 1) Regional Climate Modeling; 2) Statistical Downscaling; 3) Evaluation of SCM and NSIPP AGCM Results at the ARM Program Sites; and 4) Ocean Forecasts.
Global Observations and Understanding of the General Circulation of the Oceans
NASA Technical Reports Server (NTRS)
1984-01-01
The workshop was organized to: (1) assess the ability to obtain ocean data on a global scale that could profoundly change our understanding of the circulation; (2) identify the primary and secondary elements needed to conduct a World Ocean Circulation Experiment (WOCE); (3) if the ability is achievable, to determine what the U.S. role in such an experiment should be; and (4) outline the steps necessary to assure that an appropriate program is conducted. The consensus of the workshop was that a World Ocean Circulation Experiment appears feasible, worthwhile, and timely. Participants did agree that such a program should have the overall goal of understanding the general circulation of the global ocean well enough to be able to predict ocean response and feedback to long-term changes in the atmosphere. The overall goal, specific objectives, and recommendations for next steps in planning such an experiment are included.
Antarctica and global change research
NASA Astrophysics Data System (ADS)
Weller, Gunter; Lange, Manfred
1992-03-01
The Antarctic, including the continent and Southern Ocean with the subantarctic islands, is a critical area in the global change studies under the International Geosphere-Biosphere Program (IGBP) and the World Climate Research Program (WCRP). Major scientific problems include the impacts of climate warming, the ozone hole, and sea level changes. Large-scale interactions between the atmosphere, ice, ocean, and biota in the Antarctic affect the entire global system through feedbacks, biogeochemical cycles, deep-ocean circulation, atmospheric transport of heat, moisture, and pollutants, and changes in ice mass balances. Antarctica is also a rich repository of paleoenvironmental information in its ice sheet and its ocean and land sediments.
Interfacing Computer Aided Parallelization and Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Biegel, Bryan A. (Technical Monitor)
2003-01-01
When porting sequential applications to parallel computer architectures, the program developer will typically go through several cycles of source code optimization and performance analysis. We have started a project to develop an environment where the user can jointly navigate through program structure and performance data information in order to make efficient optimization decisions. In a prototype implementation we have interfaced the CAPO computer aided parallelization tool with the Paraver performance analysis tool. We describe both tools and their interface and give an example for how the interface helps within the program development cycle of a benchmark code.
NASA Technical Reports Server (NTRS)
Vonbun, F. O.
1972-01-01
The application of time and frequency standards to the Earth and Ocean Physics Applications Program (EOPAP) is discussed. The goals and experiments of the EOPAP are described. Methods for obtaining frequency stability and time synchronization are analyzed. The orbits, trajectories, and characteristics of the satellites used in the program are reported.
The Food Service Manager; A Study of the Need for a Food Service Management Program in Ocean County.
ERIC Educational Resources Information Center
Ocean County Coll., Toms River, NJ.
Ocean County College conducted a feasibility study for the purpose of determining whether there was a need for a food service management program within its service area and to ascertain an estimate of the potential student pool for such a program. Surveys were sent to 243 restaurants and institutions and were administered to students from county…
LDRD final report on massively-parallel linear programming : the parPCx system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar
2005-02-01
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project "Massively-Parallel Linear Programming". We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix, and we have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver, called parPCx, and give preliminary computational results. We summarize a number of issues related to the efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer).
We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
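The computational core named in the abstract, the assembly and solution of a linear system at each interior-point iteration, can be sketched in dense serial form. In Mehrotra-type codes this is typically a normal-equations matrix M = A D A^T with D a positive diagonal; parPCx itself works with distributed sparse Trilinos objects, so the snippet below (with invented names) is only an illustration.

```python
# Hedged dense sketch of the interior-point core: assemble M = A*diag(d)*A^T
# and solve M x = rhs by Cholesky factorization.
def assemble_normal(A, d):
    """Form the m-by-m matrix M[i][j] = sum_k A[i][k] * d[k] * A[j][k]."""
    m, n = len(A), len(A[0])
    return [[sum(A[i][k] * d[k] * A[j][k] for k in range(n)) for j in range(m)]
            for i in range(m)]

def cholesky_solve(M, rhs):
    """Solve M x = rhs for symmetric positive definite M (Cholesky-Banachiewicz)."""
    m = len(M)
    L = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = M[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s ** 0.5 if j == i else s / L[j][j]
    y = [0.0] * m                       # forward substitution: L y = rhs
    for i in range(m):
        y[i] = (rhs[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * m                       # backward substitution: L^T x = y
    for i in reversed(range(m)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, m))) / L[i][i]
    return x
```

In the parallel setting, distributing the rows of A well (the role of the Zoltan hypergraph partitioner mentioned above) is what makes the assembly step scale.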
Emerging Methods and Systems for Observing Life in the Sea
NASA Astrophysics Data System (ADS)
Chavez, F.; Pearlman, J.; Simmons, S. E.
2016-12-01
There is a growing need for observations of life in the sea at time and space scales consistent with those made for physical and chemical parameters. International programs such as the Global Ocean Observing System (GOOS) and Marine Biodiversity Observation Networks (MBON) are making the case for expanded biological observations and working diligently to prioritize essential variables. Here we review past, present and emerging systems and methods for observing life in the sea from the perspective of maintaining continuous observations over long time periods. Methods that rely on ships with instrumentation and over-the-side sample collections will need to be supplemented and eventually replaced with those based on autonomous platforms. Ship-based optical and acoustic instruments are being reduced in size and power for deployment on moorings and autonomous vehicles. In parallel, a new generation of low-power, improved-resolution sensors is being developed. Animal bio-logging is evolving, with new, smaller and more sophisticated tags being developed. New genomic methods, capable of assessing multiple trophic levels from a single water sample, are emerging. Autonomous devices for genomic sample collection are being miniaturized and adapted to autonomous vehicles. The required processing schemes and methods for these emerging data collections are being developed in parallel with the instrumentation. An evolving challenge will be the integration of information from these disparate methods, given that each provides its own unique view of life in the sea.
NASA Technical Reports Server (NTRS)
Chao, Benjamin F.; Chen, J. L.; Johnson, T.; Au, A. Y.
1998-01-01
Hydrological mass transport in the geophysical fluids of the atmosphere-hydrosphere-solid Earth surface system can excite Earth's rotational variations in both length-of-day and polar motion. These effects can be computed in terms of the hydrological angular momentum by proper integration of global meteorological data. We do so using the 40-year NCEP data and the 18-year NASA GEOS-1 data, where the precipitation and evapotranspiration budgets are computed via the water mass balance of the atmosphere based on Oki et al.'s (1995) algorithm. This hydrological mass redistribution will also cause geocenter motion and changes in Earth's gravitational field, which are similarly computed using the same data sets. Corresponding geodynamic effects due to the oceanic mass transports (i.e. oceanic angular momentum and ocean-induced geocenter/gravity changes) have also been computed in a similar manner. We here compare two independent sets of the result from: (1) non-steric ocean surface topography observations based on Topex/Poseidon, and (2) the model output of the mass field by the Parallel Ocean Climate Model. Finally, the hydrological and the oceanic time series are combined in an effort to better explain the observed non-atmospheric effects. The latter are obtained by subtracting the atmospheric angular momentum from Earth rotation observations, and the atmosphere-induced geocenter/gravity effects from corresponding geodetic observations, both using the above-mentioned atmospheric data sets.
A high-speed linear algebra library with automatic parallelism
NASA Technical Reports Server (NTRS)
Boucher, Michael L.
1994-01-01
Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from it. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
Burkhart, Diane N; Lischka, Terri A
2011-04-01
Students in colleges of osteopathic medicine have several options when considering postdoctoral training programs. In addition to training programs approved solely by the American Osteopathic Association or accredited solely by the Accreditation Council for Graduate Medical Education (ACGME), students can pursue programs accredited by both organizations (ie, dually accredited programs) or osteopathic programs that occur side-by-side with ACGME programs (ie, parallel programs). In the present article, we report on the availability and growth of these 2 training options and describe their benefits and drawbacks for trainees and the osteopathic medical profession as a whole.
NASA Astrophysics Data System (ADS)
Courrèges, E.; Vially, R.; Roest, W. R.; Patriat, M.; Patriat, P.; Loubrieu, B.; Lecomte, J.-C.; Schaming, M.; Schmitz, J.; Maia, M.
2009-04-01
France ratified the United Nations Convention on the Law of the Sea in 1996, and has since undertaken an ambitious program of bathymetric and seismic data acquisition (EXTRAPLAC Program) to support claims for the extension of the legal continental shelf, in accordance with Article 76 of this convention. For this purpose, three oceanographic surveys took place on board the R/V Marion Dufresne II, operated by the French Polar Institute, on the Kerguelen Plateau in the Southern Indian Ocean: MD137-Kergueplac1 (February 2004), MD150-Kergueplac2 (October 2005) and MD165-Kergueplac3 (January 2008). More than 20 000 km of multibeam bathymetric, magnetic and gravimetric profiles, and almost 6 000 km of seismic profiles, were acquired during a total of 62 days of survey in the study area. Ifremer's "rapid seismic" system was used, comprising 4 guns and a 24-trace digital streamer, operated at speeds up to 10 knots. In addition to its use for the Extraplac Program, the data set resulting from these surveys provides the opportunity to improve our knowledge of the structure of the Kerguelen Plateau and, more particularly, of its complex margins. In this poster, we show different kinds of data. The high resolution bathymetry (200 m grid) data set allows us to specify the irregular morphology of the sea floor in the north Kerguelen Plateau region, characterised by ridges and volcano chains that intersect the oceanic basin on its NE edge. The seismic profiles show that the acoustic basement of the plateau is not much tectonised and displays a very smooth texture, clearly distinguishing it from typical oceanic basement. Both along the edge of the plateau and in the abyssal plain, sediments have variable thicknesses. The sediments on the margin of the plateau are up to 1200 meters thick and display irregular crisscross patterns, suggesting the presence of important bottom currents.
An important concentration of new magnetic data, in a key area (the Northern Kerguelen Plateau) and at a key period (Oligocene), helps us understand the emplacement of the oceanic plateau and the kinematic reconstructions between the Antarctic and Australian plates. We focused on the northeastern margin of the Kerguelen Plateau, from 77E30 up to the Amsterdam Saint-Paul fracture zone, where the South East Indian Ridge (SEIR) shows a large offset toward the Kerguelen Plateau. On a larger scale, the opening between the Kerguelen Plateau and Broken Ridge has left morphologically very homologous margins on each side of the SEIR: in the Southern Kerguelen Plateau, the magnetic anomalies are regular, parallel to the SEIR and also to the morphological boundary of the plateau. In contrast, the northeastern margin of the Northern Kerguelen Plateau is not much explored and its interpretation less obvious, because volcanic masses overlie the oceanic crust discordantly. We compiled the magnetic anomaly picks, integrating data from the recent Kergueplac surveys and previous studies, to identify with more confidence the oldest anomalies at the plateau margin. We used this compilation for reconstructions at different stages (from A8o to the initial opening), realised with the rotation poles of Cande & Stock (2004). These reconstructions confirm that the kinematics of the whole SE Indian Ocean, and in particular of the northern part of the Kerguelen Plateau, remain unresolved. In particular, we discuss both the fit between Kerguelen and Broken Ridge and the implications for the opening between Australia and Antarctica, as well as the possible junctions along the Amsterdam fracture zone with the Crozet basin, where the spreading rate was much faster before A18.
The Ocean as a Unique Therapeutic Environment: Developing a Surfing Program
ERIC Educational Resources Information Center
Clapham, Emily D.; Armitano, Cortney N.; Lamont, Linda S.; Audette, Jennifer G.
2014-01-01
Educational aquatic programming offers necessary physical activity opportunities to children with disabilities and the benefits of aquatic activities are more pronounced for children with disabilities than for their able-bodied peers. Similar benefits could potentially be derived from surfing in the ocean. This article describes an adapted surfing…
Only One Ocean: Marine Science Activities for Grades 5-8. Teacher's Guide.
ERIC Educational Resources Information Center
Halversen, Catherine; Strang, Craig
This guide was designed by the Marine Activities, Resources & Education (MARE) Program through the Great Explorations in Math and Science (GEMS) ongoing curriculum development program for middle school students. This GEMS guide addresses the concepts of the interconnectedness of the ocean basins, respect for organisms, oceanography, physical…
Ecological Condition of Coastal Ocean Waters along the U.S. Western Continental Shelf: 2003
The western National Coastal Assessment program of EPA, in conjunction with the NOAA National Ocean Service, west coast states (WA, OR, and CA), and the Southern California Coastal Water Research Project Bight ’03 program, assessed the ecological condition of soft sediment habita...
NASA Technical Reports Server (NTRS)
Vaughn, Charles R.
1993-01-01
This Technical Memorandum is a user's manual with additional program documentation for the computer program PREROWS2.EXE. PREROWS2 works with data collected by an ocean wave spectrometer that uses radar (ROWS) as an active remote sensor. The original ROWS data acquisition subsystem was replaced with a PC in 1990. PREROWS2.EXE is a compiled QuickBasic 4.5 program that unpacks the recorded data, displays various variables, and provides for copying blocks of data from the original 8mm tape to a PC file.
The paradigm compiler: Mapping a functional language for the connection machine
NASA Technical Reports Server (NTRS)
Dennis, Jack B.
1989-01-01
The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.
Charon Toolkit for Parallel, Implicit Structured-Grid Computations: Functional Design
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the actual tool descriptions, we will present the considerations that led to the particular design choices. Many of these are motivated by the requirement that Charon must be useful within the traditional computational environments of Fortran 77 and C. Only the Fortran 77 syntax will be presented in this report.
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
Exploiting loop level parallelism in nonprocedural dataflow programs
NASA Technical Reports Server (NTRS)
Gokhale, Maya B.
1987-01-01
We discuss how loop-level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. We also discuss a program restructuring technique that may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler which generates C code for the language described below has been implemented. The scheduling component of the compiler and the restructuring transformation are described.
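One classic instance of the kind of restructuring described above, generating concurrent loops from a seemingly iterative computation, is rewriting the recurrence s[i] = s[i-1] + x[i] as a logarithmic number of sweeps whose iterations are independent. A minimal serial Python sketch (names are illustrative, not from the paper):

```python
# Hillis-Steele inclusive scan: O(log n) sweeps replace the sequential
# recurrence. Each list comprehension reads only the previous sweep's
# values, so its iterations are independent and may become a concurrent loop.
def prefix_sums(xs):
    s = list(xs)
    step = 1
    while step < len(s):
        s = [s[i] + (s[i - step] if i >= step else 0) for i in range(len(s))]
        step *= 2
    return s
```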
GOCI Level-2 Processing Improvements and Cloud Motion Analysis
NASA Technical Reports Server (NTRS)
Robinson, Wayne D.
2015-01-01
The Ocean Biology Processing Group has been working with the Korean Institute of Ocean Science and Technology (KIOST) to process geosynchronous ocean color data from the GOCI (Geostationary Ocean Color Instrument) aboard the COMS (Communications, Ocean and Meteorological Satellite). The level-2 processing program, l2gen has GOCI processing as an option. Improvements made to that processing are discussed here as well as a discussion about cloud motion effects.
Tolerant (parallel) Programming
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Bailey, David H. (Technical Monitor)
1997-01-01
In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.
Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P
2014-10-30
Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner, including use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of the integral calculation using X10's work-stealing runtime, and report performance results for long-range HF energy calculations of a large molecule with a high-quality basis set, running on up to 1024 cores of a high-performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.
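X10's work-stealing runtime cannot be reproduced in a few lines, but the dynamic load balancing it provides for variable-cost integral blocks can be approximated with a shared task queue and worker threads. The sketch below is a hedged stand-in with invented names: it is queue-based self-scheduling rather than true work stealing, and the task and cost functions are purely illustrative.

```python
import queue
import threading

# Workers pull variable-cost tasks from a shared queue, approximating the
# effect, though not the mechanism, of a work-stealing runtime.
def parallel_sum(tasks, work, nthreads=4):
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    total = 0.0
    lock = threading.Lock()

    def worker():
        nonlocal total
        while True:
            try:
                t = q.get_nowait()       # grab the next available task
            except queue.Empty:
                return
            r = work(t)                  # e.g. one block of integrals
            with lock:                   # serialize only the accumulation
                total += r

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return total
```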
New NAS Parallel Benchmarks Results
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)
1997-01-01
NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.
Exploring types of play in an adapted robotics program for children with disabilities.
Lindsay, Sally; Lam, Ashley
2018-04-01
Play is an important occupation in a child's development. Children with disabilities often have fewer opportunities to engage in meaningful play than typically developing children. The purpose of this study was to explore the types of play (i.e., solitary, parallel and co-operative) within an adapted robotics program for children with disabilities aged 6-8 years. This study draws on detailed observations of each of the six robotics workshops and interviews with 53 participants (21 children, 21 parents and 11 programme staff). Our findings showed that four children engaged in solitary play, of whom all but one showed signs of moving towards parallel play. Six children demonstrated parallel play during all workshops. The remainder of the children showed mixed play types (solitary, parallel and/or co-operative) throughout the robotics workshops. We observed more parallel and co-operative, and less solitary, play as the programme progressed. Ten different children displayed co-operative behaviours throughout the workshops. The interviews highlighted how staff supported children's engagement in the programme. Meanwhile, parents reported on their child's development of play skills. An adapted LEGO® robotics program has the potential to develop the play skills of children with disabilities in moving from solitary towards more parallel and co-operative play. Implications for rehabilitation: Educators and clinicians working with children who have disabilities should consider the potential of LEGO® robotics programs for developing their play skills. Clinicians should consider how the extent of their involvement in prompting and facilitating children's engagement and play within a robotics program may influence their ability to interact with their peers. Educators and clinicians should incorporate both structured and unstructured free-play elements within a robotics program to facilitate children's social development.
Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units
USDA-ARS?s Scientific Manuscript database
This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...
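Parallel Cyclic Reduction, the tridiagonal kernel named above, halves the coupling distance between equations at every step until each equation involves a single unknown; every step updates all equations independently, which is what makes the method attractive on GPUs. A serial NumPy sketch of the reduction (an illustration of the algorithm, not the CCHE2D CUDA Fortran implementation):

```python
import numpy as np

def pcr_tridiag(a, b, c, d):
    """Solve a tridiagonal system by parallel cyclic reduction (PCR).
    a: sub-diagonal (a[0] ignored), b: diagonal, c: super-diagonal
    (c[-1] ignored), d: right-hand side."""
    n = len(b)
    a, b, c, d = (np.asarray(v, float).copy() for v in (a, b, c, d))
    a[0] = 0.0
    c[-1] = 0.0
    s = 1
    while s < n:
        # Neighbour equations at distance s; out-of-range neighbours act
        # like the identity equation (a=0, b=1, c=0, d=0).
        am = np.concatenate([np.zeros(s), a[:-s]]); bm = np.concatenate([np.ones(s), b[:-s]])
        cm = np.concatenate([np.zeros(s), c[:-s]]); dm = np.concatenate([np.zeros(s), d[:-s]])
        ap = np.concatenate([a[s:], np.zeros(s)]); bp = np.concatenate([b[s:], np.ones(s)])
        cp = np.concatenate([c[s:], np.zeros(s)]); dp = np.concatenate([d[s:], np.zeros(s)])
        alpha = -a / bm                  # eliminate coupling to row i-s
        gamma = -c / bp                  # eliminate coupling to row i+s
        b = b + alpha * cm + gamma * ap
        d = d + alpha * dm + gamma * dp
        a = alpha * am                   # now couples to row i-2s
        c = gamma * cp                   # now couples to row i+2s
        s *= 2
    return d / b                         # each equation is fully decoupled
```

For a diagonally dominant system (as arises from the ADI splitting of an implicit flow solver) the reduction is stable without pivoting.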
From silk to satellite: half a century of ocean colour anomalies in the Northeast Atlantic.
Raitsos, Dionysios E; Pradhan, Yaswant; Lavender, Samantha J; Hoteit, Ibrahim; McQuatters-Gollop, Abigail; Reid, Phillip C; Richardson, Anthony J
2014-07-01
Changes in phytoplankton dynamics influence marine biogeochemical cycles, climate processes, and food webs, with substantial social and economic consequences. Large-scale estimation of phytoplankton biomass was possible via ocean colour measurements from two remote sensing satellites - the Coastal Zone Colour Scanner (CZCS, 1979-1986) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS, 1998-2010). Due to the large gap between the two satellite eras and differences in sensor characteristics, comparison of the absolute values retrieved from the two instruments remains challenging. Using a unique in situ ocean colour dataset that spans more than half a century, the two satellite-derived chlorophyll-a (Chl-a) eras are linked to assess concurrent changes in phytoplankton variability and bloom timing over the Northeast Atlantic Ocean and North Sea. Results from this unique re-analysis reveal a clear increase in Chl-a since the mid-1980s, together with a merging of the two seasonal phytoplankton blooms that produces a longer growing season and higher seasonal biomass. The broader climate plays a key role in Chl-a variability, as the ocean colour anomalies parallel the oscillations of the Northern Hemisphere Temperature (NHT) since 1948. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Maeda, T.; Furumura, T.; Noguchi, S.; Takemura, S.; Iwai, K.; Lee, S.; Sakai, S.; Shinohara, M.
2011-12-01
The fault rupture of the 2011 Tohoku (Mw 9.0) earthquake spread over approximately 550 km by 260 km, with a long source rupture duration of ~200 s. For such a large earthquake with a complicated source rupture process, the radiation of seismic waves from the rupture and the initiation of the tsunami by the coseismic deformation are expected to be very complicated. In order to understand this complicated interplay of seismic waves, coseismic deformation and tsunami, we proposed a unified approach for total modeling of earthquake-induced phenomena in a single numerical scheme based on the finite-difference method (Maeda and Furumura, 2011). This simulation model solves the equation of motion based on linear elastic theory, with equilibrium between quasi-static pressure and gravity in the water column; the tsunami height is obtained directly from the simulation as the vertical displacement of the ocean surface. In order to simulate the seismic waves, ocean acoustics, coseismic deformation, and tsunami of the 2011 Tohoku earthquake, we assembled a high-resolution 3D heterogeneous subsurface structural model of northern Japan. The simulation area is 1200 km x 800 km horizontally and 120 km in depth, discretized with grid intervals of 1 km in the horizontal directions and 0.25 km in the vertical direction. We adopt the source-rupture model of Lee et al. (2011), obtained by joint inversion of teleseismic, near-field strong motion, and coseismic deformation data. For conducting such a large-scale simulation, we fully parallelized our simulation code based on a domain-partitioning procedure, which achieved good speed-up by parallel computing on up to 8192 cores with a parallel efficiency of 99.839%. The simulation result clearly demonstrates the process in which seismic waves radiate from the complicated source rupture over the fault plane and propagate through the heterogeneous structure of northern Japan.
The generation of the tsunami from coseismic ground deformation at the sea floor, and its subsequent propagation, are also well reproduced. The simulation shows that a very large slip of up to 40 m at the shallow plate boundary near the trench pushes up the sea floor as the source rupture propagates, and the highly elevated sea surface then gradually starts propagating as a tsunami under gravity. The simulated vertical-component displacement waveform closely matches the record of an ocean-bottom pressure gauge installed just above the source fault area (Maeda et al., 2011). The simulation also confirms strong reverberation of ocean-acoustic waves between the sea surface and sea bottom, particularly near the Japan Trench, long after the source rupture ends. Accordingly, long wavetrains of high-frequency ocean-acoustic waves develop and overlap the later tsunami waveforms, as found in the observations.
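The domain-partitioning procedure described above assigns each process a block of the grid plus ghost (halo) cells that are refreshed from neighbouring blocks each time step. A serial Python sketch of the idea for a 1-D three-point stencil, assuming zero-value boundaries for illustration (the actual code is a 3-D finite-difference scheme partitioned across thousands of MPI ranks, where the ghost-cell refresh becomes pairs of send/recv calls):

```python
import numpy as np

def laplacian_partitioned(u, parts=4):
    """3-point stencil computed on a 1-D domain partitioning with one ghost
    cell per side -- a serial stand-in for the halo exchange an MPI code
    performs between neighbouring subdomains."""
    n = len(u)
    bounds = np.linspace(0, n, parts + 1).astype(int)
    out = np.empty(n)
    for p in range(parts):
        lo, hi = bounds[p], bounds[p + 1]
        # Fill ghost cells from neighbour data (the "exchange" step)
        left = u[lo - 1] if lo > 0 else 0.0
        right = u[hi] if hi < n else 0.0
        local = np.concatenate([[left], u[lo:hi], [right]])
        # Each subdomain now updates its interior independently
        out[lo:hi] = local[:-2] - 2 * local[1:-1] + local[2:]
    return out
```

Because each subdomain touches only its own block plus the ghost cells, the per-step communication volume scales with the surface of the partition rather than its volume, which is what permits the near-perfect parallel efficiency reported above.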
Converging Oceanic Internal Waves, Somalia, Africa
1988-10-03
The arcuate fronts of these apparently converging internal waves off the northeast coast of Somalia (11.5N, 51.5E) probably were produced by interaction with two parallel submarine canyons off the Horn of Africa. Internal waves are packets of tidally generated waves traveling within the ocean at varying depths; they are not detectable as a surface disturbance.
Jr. Hunt
1995-01-01
Marbled Murrelets (Brachyramphus marmoratus) occupy nearshore waters in the eastern North Pacific Ocean from central California to the Aleutian Islands. The offshore marine ecology of these waters is dominated by a series of currents roughly parallel to the coast that determine marine productivity of shelf waters by influencing the rate of nutrient...
Ocean Thermal Energy Conversion (OTEC) program. FY 1977 program summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1978-01-01
An overview is given of the ongoing research, development, and demonstration efforts. Each of the DOE's Ocean Thermal Energy Conversion projects funded during fiscal year 1977 (October 1, 1976 through September 30, 1977) is described and each project's status as of December 31, 1977 is reflected. These projects are grouped as follows: program support, definition planning, engineering development, engineering test and evaluation, and advanced research and technology. (MHR)
International organisation of ocean programs: Making a virtue of necessity
NASA Technical Reports Server (NTRS)
Mcewan, Angus
1992-01-01
When faced with the needs of climate prediction, a sharp contrast is revealed between existing networks for the observation of the atmosphere and for the ocean. Even the largest and longest-serving ocean data networks were created for their value to a specific user (usually with a defence, fishing or other maritime purpose) and the major compilations of historical data have needed extensive scientific input to reconcile the differences and deficiencies of the various sources. Vast amounts of such data remain inaccessible or unusable. Observations for research purposes have been generally short lived and funded on the basis of single initiatives. Even major programs such as FGGE, TOGA and WOCE have been driven by the dedicated interest of a surprisingly small number of individuals, and have been funded from a wide variety of temporary allocations. Recognising the global scale of ocean observations needed for climate research, international cooperation and coordination are an unavoidable necessity, resulting in the creation of such bodies as the Committee for Climatic Changes and the Ocean (CCCO), with the tasks of: (1) defining the scientific elements of research and ocean observation which meet the needs of climate prediction and amelioration; (2) translating these elements into terms of programs, projects or requirements that can be understood and participated in by individual nations and marine agencies; and (3) the sponsorship of specialist groups to facilitate the definition of research programs, the implementation of cooperative international activity and the dissemination of results.
NASA Astrophysics Data System (ADS)
Clarkston, B. E.; Garza, C.
2016-02-01
The problem of improving diversity within the Ocean Sciences workforce—still underperforming relative to other scientific disciplines—can only be addressed by first recruiting and engaging a more diverse student population into the discipline, then retaining them in the workforce. California State University, Monterey Bay (CSUMB) is home to the Monterey Bay Regional Ocean Science Research Experiences for Undergraduates (REU) program. As an HSI with strong ties to multiple regional community colleges and other Predominantly Undergraduate Institutions (PUIs) in the CSU system, the Monterey Bay REU is uniquely positioned to address the crucial recruitment and engagement of a diverse student body. Eleven sophomore and junior-level undergraduate students are recruited per year from academic institutions where research opportunities in STEM are limited and from groups historically underrepresented in the Ocean Sciences, including women, underrepresented minorities, persons with disabilities, and veterans. During the program, students engage in a 10-week original research project guided by a faculty research mentor in one of four themes: Oceanography, Marine Biology and Ecology, Ocean Engineering, and Marine Geology. In addition to research, students develop scientific self-efficacy and literacy skills through rigorous weekly professional development workshops in which they practice critical thinking, ethical decision-making, peer review, writing and oral communication skills. These workshops include tangible products such as an NSF-style proposal paper, Statement of Purpose and CV modelled for the SACNAS Travel Award Application, research abstract, scientific report and oral presentation. To help retain students in Ocean Sciences, students build community during the REU by living together in the CSUMB dormitories; post-REU, students stay connected through an online Facebook group, LinkedIn page and group webinars.
To date, the REU has supported 22 students in two cohorts (2014, 2015) and here we present successes, challenges and lessons learned for an innovative program designed to recruit, engage and prepare students for Ocean Science careers.
Multiprocessor smalltalk: Implementation, performance, and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pallas, J.I.
1990-01-01
Multiprocessor Smalltalk demonstrates the value of object-oriented programming on a multiprocessor. Its implementation and analysis shed light on three areas: concurrent programming in an object-oriented language without special extensions, implementation techniques for adapting to multiprocessors, and performance factors in the resulting system. Adding parallelism to Smalltalk code is easy, because programs already use control abstractions like iterators. Smalltalk's basic control and concurrency primitives (lambda expressions, processes and semaphores) can be used to build parallel control abstractions, including parallel iterators, parallel objects, atomic objects, and futures. Language extensions for concurrency are not required. This implementation demonstrates that it is possible to build an efficient parallel object-oriented programming system and illustrates techniques for doing so. Three modification tools (serialization, replication, and reorganization) adapted the Berkeley Smalltalk interpreter to the Firefly multiprocessor. Multiprocessor Smalltalk's performance shows that the combination of multiprocessing and object-oriented programming can be effective: speedups (relative to the original serial version) exceed 2.0 for five processors on all the benchmarks; the median efficiency is 48%. Analysis shows both where performance is lost and how to improve and generalize the experimental results. Changes in the interpreter to support concurrency add at most 12% overhead; better access to per-process variables could eliminate much of that. Changes in the user code to express concurrency add as much as 70% overhead; this overhead could be reduced to 54% if blocks (lambda expressions) were reentrant. Performance is also lost when the program cannot keep all five processors busy.
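The control abstractions the dissertation builds from Smalltalk primitives (parallel iterators and futures) have direct analogues in most modern languages. A rough Python rendering of the two, with threads standing in for Smalltalk processes; the function names here are ours, not the dissertation's:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_collect(block, collection, workers=4):
    """A 'parallel iterator': apply a block to each element of a collection
    on a pool of workers, preserving element order in the result."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(block, collection))

def future(block, *args):
    """A 'future': start evaluating a block now; callers force (wait for)
    the value later with .result()."""
    pool = ThreadPoolExecutor(max_workers=1)
    f = pool.submit(block, *args)
    pool.shutdown(wait=False)   # worker thread still finishes the computation
    return f
```

For example, `parallel_collect(lambda x: x + 1, [1, 2, 3])` returns `[2, 3, 4]`, and a future created before some other work can be forced afterwards with `.result()`.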
NASA Astrophysics Data System (ADS)
Bellerby, Tim
2015-04-01
PM (Parallel Models) is a new parallel programming language specifically designed for writing environmental and geophysical models. The language is intended to enable implementers to concentrate on the science behind the model rather than the details of running on parallel hardware. At the same time PM leaves the programmer in control - all parallelisation is explicit and the parallel structure of any given program may be deduced directly from the code. This paper describes a PM implementation based on the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) standards, looking at issues involved with translating the PM parallelisation model to MPI/OpenMP protocols and considering performance in terms of the competing factors of finer-grained parallelisation and increased communication overhead. In order to maximise portability, the implementation stays within the MPI 1.3 standard as much as possible, with MPI-2 MPI-IO file handling the only significant exception. Moreover, it does not assume a thread-safe implementation of MPI. PM adopts a two-tier abstract representation of parallel hardware. A PM processor is a conceptual unit capable of efficiently executing a set of language tasks, with a complete parallel system consisting of an abstract N-dimensional array of such processors. PM processors may map to single cores executing tasks using cooperative multi-tasking, to multiple cores or even to separate processing nodes, efficiently sharing tasks using algorithms such as work stealing. While tasks may move between hardware elements within a PM processor, they may not move between processors without specific programmer intervention. Tasks are assigned to processors using a nested parallelism approach, building on ideas from Reyes et al. (2009). The main program owns all available processors. 
When the program enters a parallel statement then either processors are divided out among the newly generated tasks (number of new tasks < number of processors) or tasks are divided out among the available processors (number of tasks > number of processors). Nested parallel statements may further subdivide the processor set owned by a given task. Tasks or processors are distributed evenly by default, but uneven distributions are possible under programmer control. It is also possible to explicitly enable child tasks to migrate within the processor set owned by their parent task, reducing load imbalance at the potential cost of increased inter-processor message traffic. PM incorporates some programming structures from the earlier MIST language presented at a previous EGU General Assembly, while adopting a significantly different underlying parallelisation model and type system. PM code is available at www.pm-lang.org under an unrestrictive MIT license. Reference: Ruymán Reyes, Antonio J. Dorta, Francisco Almeida, Francisco de Sande, 2009. Automatic Hybrid MPI+OpenMP Code Generation with llc, Recent Advances in Parallel Virtual Machine and Message Passing Interface, Lecture Notes in Computer Science Volume 5759, 185-195
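The even-by-default distribution rule described above is simple to state precisely. A minimal Python sketch, assuming the default even split (the function name is ours, and the uneven and task-migration variants mentioned in the text are omitted):

```python
def distribute(n_tasks, n_procs):
    """PM-style even distribution at a parallel statement: divide processors
    among tasks when tasks <= processors, otherwise divide tasks among
    processors. Returns the per-task processor counts (first case) or the
    per-processor task counts (second case)."""
    if n_tasks <= n_procs:
        q, r = divmod(n_procs, n_tasks)
        return [q + (i < r) for i in range(n_tasks)]   # processors per task
    q, r = divmod(n_tasks, n_procs)
    return [q + (i < r) for i in range(n_procs)]       # tasks per processor
```

A nested parallel statement would then apply the same rule recursively to the processor count each task received.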
NASA Astrophysics Data System (ADS)
Mosher, Stephen G.; Audet, Pascal; L'Heureux, Ivan
2014-07-01
Tectonic plate reorganization at a subduction zone edge is a fundamental process that controls oceanic plate fragmentation and capture. However, the various factors responsible for these processes remain elusive. We characterize seismic anisotropy of the upper mantle in the Explorer region at the northern limit of the Cascadia subduction zone from teleseismic shear wave splitting measurements. Our results show that the mantle flow field beneath the Explorer slab is rotating anticlockwise from the convergence-parallel motion between the Juan de Fuca and the North America plates, re-aligning itself with the transcurrent motion between the Pacific and North America plates. We propose that oceanic microplate fragmentation is driven by slab stretching, thus reorganizing the mantle flow around the slab edge and further contributing to slab weakening and increase in buoyancy, eventually leading to cessation of subduction and microplate capture.
NASA Technical Reports Server (NTRS)
Capobianco, Christopher J.; Jones, John H.; Drake, Michael J.
1993-01-01
Low-temperature metal-silicate partition coefficients are extrapolated to magma ocean temperatures. If the low-temperature data are applicable at high temperatures (an important assumption), then the results indicate that high temperature alone cannot account for the excess siderophile element problem of the upper mantle. For most elements, a rise in temperature will result in a modest increase in siderophile behavior if an iron-wuestite redox buffer is paralleled. However, long-range extrapolation of experimental data is hazardous when the data contain even modest experimental errors. For a given element, extrapolated high-temperature partition coefficients can differ by orders of magnitude, even when data from independent studies are consistent within quoted errors. In order to accurately assess siderophile element behavior in a magma ocean, it will be necessary to obtain direct experimental measurements for at least some of the siderophile elements.
NASA Astrophysics Data System (ADS)
Liu, Jun; Hu, Shi-Xue; Rieppel, Olivier; Jiang, Da-Yong; Benton, Michael J.; Kelley, Neil P.; Aitchison, Jonathan C.; Zhou, Chang-Yong; Wen, Wen; Huang, Jin-Yuan; Xie, Tao; Lv, Tao
2014-11-01
The presence of gigantic apex predators in the eastern Panthalassic and western Tethyan oceans suggests that complex ecosystems in the sea had become re-established in these regions at least by the early Middle Triassic, after the Permian-Triassic mass extinction (PTME). However, it is not clear whether oceanic ecosystem recovery from the PTME was globally synchronous because of the apparent lack of such predators in the eastern Tethyan/western Panthalassic region prior to the Late Triassic. Here we report a gigantic nothosaur from the lower Middle Triassic of Luoping in southwest China (eastern Tethyan ocean), which possesses the largest known lower jaw among Triassic sauropterygians. Phylogenetic analysis suggests parallel evolution of gigantism in Triassic sauropterygians. Discovery of this gigantic apex predator, together with associated diverse marine reptiles and the complex food web, indicates global recovery of shallow marine ecosystems from PTME by the early Middle Triassic.
1992-05-01
Development, Testing, and Operation of a Large Suspended Ocean Measurement Structure for Deep-Ocean Use
Naval Research Laboratory, Ocean Acoustics and Technology Directorate, Stennis Space Center, MS 39529-5004; report number PR 91:132:253.
NASA Astrophysics Data System (ADS)
Pelz, M.; Hoeberechts, M.; Ewing, N.; Davidson, E.; Riddell, D. J.
2014-12-01
Schools on Canada's west coast and in the Canadian Arctic are participating in the pilot year of a novel educational program based on analyzing, understanding and sharing ocean data collected by cabled observatories. The core of the program is "local observations, global connections." First, students develop an understanding of ocean conditions at their doorstep through the analysis of community-based observatory data. Then, they connect that knowledge with the health of the global ocean by engaging with students at other schools participating in the educational program and through supplemental educational resources. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories which supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea. This Internet connectivity permits researchers, students and members of the public to download freely available data on their computers anywhere around the globe, in near real-time. In addition to the large NEPTUNE and VENUS cabled observatories off the coast of Vancouver Island, British Columbia, ONC has been installing smaller, community-based cabled observatories. Currently two are installed: one in Cambridge Bay, Nunavut and one at Brentwood College School, on Mill Bay in Saanich Inlet, BC. Several more community-based observatories are scheduled for installation within the next year. The observatories support a variety of subsea instruments, such as a video camera, hydrophone and water quality monitor and shore-based equipment including a weather station and a video camera. Schools in communities hosting an observatory are invited to participate in the program, alongside schools located in other coastal and inland communities. Students and teachers access educational material and data through a web portal, and use video conferencing and social media tools to communicate their findings. 
A series of lesson plans introduces the teachers and students to cabled observatory technology and instrumentation, including technical aspects and their value in monitoring changing ocean conditions. This presentation will describe the program in more detail and report on our experiences in the first months of the pilot year.
Parallel transformation of K-SVD solar image denoising algorithm
NASA Astrophysics Data System (ADS)
Liang, Youwen; Tian, Yu; Li, Mei
2017-02-01
The images obtained by observing the Sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise, but training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version, following a data-parallelism model. The biggest change is that multiple atoms, rather than a single atom, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm: the program achieves a speedup of 13.563 using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easy to port to multi-core platforms.
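The change described above (refining several dictionary atoms concurrently rather than one at a time) can be sketched with NumPy and a thread pool standing in for OpenMP. Each per-atom step below is the standard K-SVD rank-1 refinement; computing all updates from a common snapshot of D and C, then committing them together, is the simplification a simultaneous-update parallel pass makes:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def update_atom(k, X, D, C):
    """Standard K-SVD rank-1 update of dictionary atom k and its coefficients."""
    users = np.nonzero(C[k])[0]            # signals whose sparse code uses atom k
    if users.size == 0:
        return k, D[:, k], C[k]
    # Residual with atom k's own contribution restored, restricted to those signals
    E = X[:, users] - D @ C[:, users] + np.outer(D[:, k], C[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    c_new = np.zeros_like(C[k])
    c_new[users] = s[0] * Vt[0]
    return k, U[:, 0], c_new               # new unit-norm atom and its coefficients

def parallel_ksvd_pass(X, D, C, workers=4):
    """Update all atoms concurrently from a common snapshot of D and C."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda k: update_atom(k, X, D, C),
                                range(D.shape[1])))
    for k, d_new, c_new in results:        # commit the updates after the pass
        D[:, k], C[k] = d_new, c_new
    return D, C
```

In the serial algorithm each atom update sees the atoms already refined earlier in the same sweep; the simultaneous version trades that freshness for parallelism, which is typically acceptable over multiple sweeps.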
Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John
2016-01-01
Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.
NASA Technical Reports Server (NTRS)
Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)
1983-01-01
A high-speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data-dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not run in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
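The execution model described (independent copies of one program, with an array-wide synchronization instruction that makes every processor finish the current phase before any proceeds) is essentially SPMD execution with a barrier. A toy Python sketch of that model, with threads standing in for the processor array; the names are ours:

```python
import threading

def run_spmd(program, n_procs):
    """Run one copy of 'program' per processor. Each copy proceeds
    independently until it calls barrier.wait(), which makes every copy
    finish the current phase before any copy starts the next one."""
    barrier = threading.Barrier(n_procs)
    results = [None] * n_procs
    def copy(rank):
        results[rank] = program(rank, barrier)
    threads = [threading.Thread(target=copy, args=(r,)) for r in range(n_procs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Between barriers the copies are free-running, matching the abstract's point that the processors are not in lock-step except at explicit synchronization instructions.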
A parallel solver for huge dense linear systems
NASA Astrophysics Data System (ADS)
Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.
2011-11-01
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that facilitates the parallel solution of very large dense systems for scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies, leveraging secondary memory to solve huge linear systems on the order of 100 000 equations. The API is based on the parallel linear algebra library PLAPACK and on its Out-Of-Core (OOC) extension, POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes, and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
New version program summary
Program title: Huge Dense System Solver (HDSS)
Catalogue identifier: AEHU_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 062
No. of bytes in distributed program, including test data, etc.: 1 069 110
Distribution format: tar.gz
Programming language: Fortran90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system: Linux/Unix
Has the code been vectorized or parallelized?: Yes, includes MPI primitives
RAM: Tested for up to 190 GB
Classification: 6.5
External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution)
Catalogue identifier of previous version: AEHU_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
Does the new version supersede the previous version?: Yes
Nature of problem: Huge scale dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities.
Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient.
Reasons for new version: In many applications a high accuracy must be guaranteed in the solution of very large linear systems, which can be achieved by using double-precision arithmetic.
Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine: the user can choose the kind of arithmetic and the values of several parameters of the environment.
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
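The solution method above (LU factorization with secondary-storage algorithms) rests on blocked, right-looking elimination, in which only a few panels of the matrix need be in main memory at once. A NumPy sketch of the blocked factorization without pivoting, assuming a diagonally dominant matrix for stability (HDSS itself works through PLAPACK/POOCLAPACK routines; this is an illustration of the blocking idea, not its code). Passing a `np.memmap` as `A` gives the out-of-core flavour, since each `np.array(...)` copy stands for reading a panel from disk:

```python
import numpy as np

def blocked_lu_inplace(A, nb=64):
    """Right-looking blocked LU without pivoting, overwriting A with its
    L (strict lower) and U (upper) factors, one nb-wide panel at a time."""
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        Akk = np.array(A[k:e, k:e])        # bring diagonal block into memory
        for j in range(Akk.shape[0] - 1):  # unblocked LU of the block
            Akk[j+1:, j] /= Akk[j, j]
            Akk[j+1:, j+1:] -= np.outer(Akk[j+1:, j], Akk[j, j+1:])
        A[k:e, k:e] = Akk                  # write the factored block back
        if e < n:
            L = np.tril(Akk, -1) + np.eye(e - k)
            U = np.triu(Akk)
            # Panel solves: U12 = L^{-1} A12 and L21 = A21 U^{-1}
            A[k:e, e:] = np.linalg.solve(L, np.array(A[k:e, e:]))
            A[e:, k:e] = np.linalg.solve(U.T, np.array(A[e:, k:e]).T).T
            # Trailing (Schur complement) update -- the out-of-core-heavy part
            A[e:, e:] = np.array(A[e:, e:]) - np.array(A[e:, k:e]) @ np.array(A[k:e, e:])
    return A
```

In a production out-of-core solver the trailing update itself is tiled so that no more than a bounded number of panels are resident at once, and partial pivoting is added for numerical robustness.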
NASA Astrophysics Data System (ADS)
Hicks, T.
2004-12-01
The School of Ocean and Earth Sciences and Technology (SOEST) at the University of Hawaii at Manoa is home to twelve diverse research institutes, programs and academic departments that focus on a wide range of earth and planetary sciences. SOEST's main outreach goals at the K-12 level are to increase the awareness of Hawaii's schoolchildren regarding earth, ocean, and space science, and to inspire them to consider a career in science. Education and public outreach efforts in SOEST include a variety of programs that engage students and the public in formal as well as informal educational settings, such as our biennial Open House, expedition web sites, Hawaii Ocean Science Bowl, museum exhibits, and programs with local schools. Some of the projects that allow for scientist involvement in E/PO include visiting local classrooms, volunteering in our outreach programs, submitting lessons and media files to our educational database of outreach materials relating to earth and space science research in Hawaii, developing E/PO materials to supplement research grants, and working with local museum staff as science experts.
The Monterey Ocean Observing System Development Program
NASA Astrophysics Data System (ADS)
Chaffey, M.; Graybeal, J. B.; O'Reilly, T.; Ryan, J.
2004-12-01
The Monterey Bay Aquarium Research Institute (MBARI) has a major development program underway to design, build, test and apply technology suitable to deep ocean observatories. The Monterey Ocean Observing System (MOOS) program is designed to form a large-scale instrument network that provides generic interfaces, intelligent instrument support, data archiving and near-real-time interaction for observatory experiments. The MOOS mooring system is designed as a portable, surface-mooring-based seafloor observatory that provides data and power connections to both seafloor and ocean-surface instruments through a specialty anchor cable. The surface mooring collects solar and wind energy for powering instruments and transmits data to shore-side researchers using a satellite communications modem. The use of a high-modulus anchor cable to reach seafloor instrument networks is a high-risk development effort that is critical to the overall success of the portable observatory concept. An aggressive field test program off the California coast is underway to improve anchor cable constructions as well as to test the overall end-to-end system design. The overall MOOS observatory systems view is presented and the results of our field tests completed to date are summarized.
The Deglacial to Holocene Paleoceanography of Bering Strait: Results From the SWERUS-C3 Program
NASA Astrophysics Data System (ADS)
Jakobsson, M.; Anderson, L. G.; Backman, J.; Barrientos, N.; Björk, G. M.; Coxall, H.; Cronin, T. M.; De Boer, A. M.; Gemery, L.; Jerram, K.; Johansson, C.; Kirchner, N.; Mayer, L. A.; Mörth, C. M.; Nilsson, J.; Noormets, R. R. N. N.; O'Regan, M.; Pearce, C.; Semiletov, I. P.; Stranne, C.
2017-12-01
The climate-carbon-cryosphere (C3) interactions in the East Siberian Arctic Ocean and related ocean, river and land areas of the Arctic have been the focus of the SWERUS-C3 Program (Swedish - Russian - US Arctic Ocean Investigation of Climate-Cryosphere-Carbon Interactions). This multi-investigator, multi-disciplinary program was carried out on a two-leg, 90-day expedition in 2014 with the Swedish icebreaker Oden. One component of the expedition consisted of geophysical mapping and coring of Herald Canyon, located on the Chukchi Sea shelf north of the Bering Strait in the western Arctic Ocean. Herald Canyon is strategically placed to capture the history of the Pacific-Arctic Ocean connection and related changes in Arctic Ocean paleoceanography. Here we present a summary of key results from analyses of the marine geophysical mapping data and cores collected from Herald Canyon on the shelf and slope, which proved to be particularly well suited for paleoceanographic reconstruction. For example, we provide a new age constraint of 11 cal ka BP on sediments from the uppermost slope for the initial flooding of the Bering Land Bridge and reestablishment of the Pacific-Arctic Ocean connection following the last glaciation. This age corresponds to meltwater pulse 1b (MWP1b), known as a post-Younger Dryas warming in many sea level and paleoclimate records. In addition, high late Holocene sedimentation rates, ranging between about 100 and 300 cm kyr-1 in Herald Canyon, permitted paleoceanographic reconstructions of ocean circulation and sea ice cover at centennial scales throughout the late Holocene. Evidence suggests varying influence from inflowing Pacific water into the western Arctic Ocean, including some evidence for quasi-cyclic variability in several paleoceanographic parameters, e.g. micropaleontological assemblages, isotope geochemistry and sediment physical properties.
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N2) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 107) to 300 ms (N = 109). 
These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
Tropical Ocean Global Atmosphere (TOGA) Meteorological and Oceanographic Data Sets for 1985 and 1986
NASA Technical Reports Server (NTRS)
Halpern, D.; Ashby, H.; Finch, C.; Smith, E.; Robles, J.
1990-01-01
The Tropical Ocean Global Atmosphere (TOGA) Program is a component of the World Meteorological Organization (WMO)/International Council of Scientific Unions (ICSU) World Climate Research Program (WCRP). One of the objectives of TOGA, which began in 1985, is to determine the limits of predictability of monthly mean sea surface temperature variations in tropical regions. The TOGA program created a raison d'etre for an explosive growth of the tropical ocean observing system and a substantial improvement in numerical simulations from atmospheric and oceanic general circulation models. Institutions located throughout the world are involved in the TOGA-distributed active data archive system. The diverse TOGA data sets for 1985 and 1986, including results from general circulation models, are included on a CD-ROM. Variables on the CD-ROM are barometric pressure, surface air temperature, dewpoint temperature, Cartesian components of surface wind, surface sensible and latent heat fluxes, Cartesian components of surface wind stress and of an index of surface wind stress, sea level, sea surface temperature, and depth profiles of temperature and current in the upper ocean. Some data sets are global in extent; some are regional and cover portions of an ocean basin. Data on the CD-ROM can be extracted with an Apple Macintosh or an IBM PC.
The Europa Ocean Discovery mission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, B.C.; Chyba, C.F.; Abshire, J.B.
1997-06-01
Since it was first proposed that tidal heating of Europa by Jupiter might lead to liquid water oceans below Europa's ice cover, there has been speculation over the possible exobiological implications of such an ocean. Liquid water is the essential ingredient for life as it is known, and the existence of a second water ocean in the Solar System would be of paramount importance for seeking the origin and existence of life beyond Earth. The authors present here a Discovery-class mission concept (Europa Ocean Discovery) to determine the existence of a liquid water ocean on Europa and to characterize Europa's surface structure. The technical goal of the Europa Ocean Discovery mission is to study Europa with an orbiting spacecraft. This goal is challenging but entirely feasible within the Discovery envelope. There are four key challenges: entering Europan orbit, generating power, surviving long enough in the radiation environment to return valuable science, and completing the mission within the Discovery program's launch vehicle and budget constraints. The authors present a viable mission that meets these challenges.
NASA Astrophysics Data System (ADS)
Seifert, Karl E.; Chang, Cheng-Wen; Brunotte, Dale A.
1997-04-01
Leg 149 of the Ocean Drilling Program explored the ocean-continent transition (OCT) on the Iberia Abyssal Plain and its role in the opening of the Atlantic Ocean approximately 130 Ma. Mafic igneous rocks recovered from Holes 899B and 900A have Mid-Ocean Ridge Basalt (MORB) trace element and isotopic characteristics indicating that a spreading center was active during the opening of the Iberia Abyssal Plain OCT. The Hole 899B weathered basalt and diabase clasts have transitional to enriched MORB rare earth element characteristics, and the Hole 900A metamorphosed gabbros have MORB initial epsilon Nd values between +6 and +11. During the opening event the Iberia Abyssal Plain OCT is envisioned to have resembled the central and northern parts of the present Red Sea with localized spreading centers and magma chambers producing localized patches of MORB mafic rocks. The lack of a normal ocean floor magnetic anomaly pattern in the Iberia Abyssal Plain means that a continuous spreading center similar to that observed in the present southern Red Sea was not formed before spreading ceased in the Iberia Abyssal Plain OCT and jumped to the present Mid-Atlantic Ridge.
Concurrency-based approaches to parallel programming
NASA Technical Reports Server (NTRS)
Kale, L.V.; Chrisochoides, N.; Kohl, J.; Yelick, K.
1995-01-01
The inevitable transition to parallel programming can be facilitated by appropriate tools, including languages and libraries. After describing the needs of applications developers, this paper presents three specific approaches aimed at development of efficient and reusable parallel software for irregular and dynamic-structured problems. A salient feature of all three approaches is their exploitation of concurrency within a processor. Benefits of individual approaches such as these can be leveraged by an interoperability environment which permits modules written using different approaches to co-exist in single applications.
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
2001-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
1999-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
ERIC Educational Resources Information Center
MacMillan, Mark W.
1997-01-01
Describes a school program in which two sixth-grade science classes researched, created, and put together an ocean museum targeted at kindergarten through eighth graders who are geographically distanced from the ocean. Details the process for investigating topical areas, organizing teams of students, researching, writing, creating displays, and…
15 CFR 922.93 - Permit procedures and criteria.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE... Director, National Marine Sanctuary Program, ATTN: Manager, Gray's Reef National Marine Sanctuary, 10 Ocean Science Circle, Savannah, GA 31411. (c) The Director, at his or her discretion, may issue a permit, subject...
Parallel community climate model: Description and user's guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drake, J.B.; Flanery, R.E.; Semeraro, B.D.
This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written, and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
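The patch-decomposition strategy described above can be sketched in miniature: split a lat-lon grid into rectangular patches and assign each patch to one processor rank, so that column-local physics touches only processor-local points (grid sizes and processor counts here are illustrative, not CCM2's actual configuration):

```python
# Toy domain decomposition: a py x px grid of patches over an
# nlat x nlon lat-lon grid, one patch per processor rank.

def decompose(nlat, nlon, py, px):
    """Map each processor rank to its list of (lat, lon) grid cells."""
    assign = {}
    for j in range(py):
        for i in range(px):
            rank = j * px + i
            lats = range(j * nlat // py, (j + 1) * nlat // py)
            lons = range(i * nlon // px, (i + 1) * nlon // px)
            assign[rank] = [(la, lo) for la in lats for lo in lons]
    return assign

assign = decompose(nlat=64, nlon=128, py=4, px=8)     # 32 "processors"
cells = [c for patch in assign.values() for c in patch]
print(len(assign), len(cells) == 64 * 128)            # -> 32 True
```

Every cell lands on exactly one rank, which is the property that lets the physics run in parallel with no communication; only the spectral transforms (FFT and Legendre) need data that crosses patch boundaries.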
NASA Astrophysics Data System (ADS)
Lodico, J. M.; Greely, T.; Lodge, A.; Pyrtle, A.; Ivey, S.; Madeiros, A.; Saleem, S.
2005-12-01
The University of South Florida, College of Marine Science Oceans: GK-12 Teaching Fellowship Program is successfully enriching science learning via the oceans. Funded by the National Science Foundation, the program provides a unique opportunity for scientists and K-12 teachers to interact, with the intention of bringing ocean science concepts and research to the classroom environment, enhancing the experience of learning and doing science, and promoting `citizen scientists' for the 21st century. The success of the program relies heavily on the extensive summer training program where graduate students develop teaching skills, create inquiry-based science activities for a summer Oceanography Camp for Girls program and build a relationship with their mentor teacher. For the last year and a half, two graduate students from the College of Marine Science have worked in cooperation with teachers from the Pinellas County School District's Southside Fundamental Middle School. Successful lesson plans brought into a 6th grade Earth Science classroom include Weather and climate: Global warming; The Geologic timescale: It's all about time; Density: Layering liquids; and Erosion processes: What moves water and sediment. The school and students have benefited greatly from the program, experiencing hands-on, inquiry-based science, and from the establishment of an after-school science club providing opportunities for students to work on their science fair projects and pursue other science interests. Students are provided scoring rubrics and their progress is creatively assessed through KWL worksheets, concept maps, surveys, one-on-one and classroom discussions, and writing samples. The year culminated with a series of hands-on lessons at the nearby beach, where students demonstrated their mastery of skills through practical application.
Benefits to the graduate students include improved communication of current science research to a diverse audience, a better understanding of the perspective of teachers and their content knowledge, and experience working with children and youth. Benefits to the GK-12 mentor teachers include a resource of inquiry-based ocean science activities and increased knowledge of current ocean science research. The K-12 students gain an opportunity to engage with young, passionate scientists, learn about current ocean science research, and experience inquiry-based science activities relating to concepts already being taught in their classroom. This program benefits all involved: the graduate students, the teachers, the K-12 students and the community.
The 2nd Symposium on the Frontiers of Massively Parallel Computations
NASA Technical Reports Server (NTRS)
Mills, Ronnie (Editor)
1988-01-01
Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.
The Goddard Space Flight Center Program to develop parallel image processing systems
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1972-01-01
Parallel image processing which is defined as image processing where all points of an image are operated upon simultaneously is discussed. Coherent optical, noncoherent optical, and electronic methods are considered parallel image processing techniques.
Parallel Volunteer Learning during Youth Programs
ERIC Educational Resources Information Center
Lesmeister, Marilyn K.; Green, Jeremy; Derby, Amy; Bothum, Candi
2012-01-01
Lack of time is a hindrance for volunteers to participate in educational opportunities, yet volunteer success in an organization is tied to the orientation and education they receive. Meeting diverse educational needs of volunteers can be a challenge for program managers. Scheduling a Volunteer Learning Track for chaperones that is parallel to a…
Mechanism to support generic collective communication across a variety of programming models
Almasi, Gheorghe [Ardsley, NY; Dozsa, Gabor [Ardsley, NY; Kumar, Sameer [White Plains, NY
2011-07-19
A system and method for supporting collective communications on a plurality of processors that use different parallel programming paradigms, in one aspect, may comprise a schedule defining one or more tasks in a collective operation, an executor that executes the task, a multisend module to perform one or more data transfer functions associated with the tasks, and a connection manager that controls one or more connections and identifies an available connection. The multisend module uses the available connection in performing the one or more data transfer functions. A plurality of processors that use different parallel programming paradigms can use a common implementation of the schedule module, the executor module, the connection manager and the multisend module via a language adaptor specific to a parallel programming paradigm implemented on a processor.
15 CFR 923.46 - Organizational structure.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Authorities and Organization § 923.46 Organizational...
15 CFR 923.46 - Organizational structure.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Authorities and Organization § 923.46 Organizational...
NASA Astrophysics Data System (ADS)
Matsakis, Nicholas D.; Gross, Thomas R.
Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
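The intervals library itself targets Java and Scala; as a point of comparison, here is the barrier pattern, one of the control-flow constructs the paper shows intervals can emulate, written with plain Python threading (an illustration of the target pattern, not the intervals API):

```python
import threading

# A barrier divides execution into phases: no thread proceeds to
# phase 2 until every thread has finished phase 1. The intervals
# abstraction expresses the same happens-before edges as a schedule.
N = 4
barrier = threading.Barrier(N)
log = []
lock = threading.Lock()

def worker(i):
    with lock:
        log.append(("phase1", i))
    barrier.wait()            # happens-before edge between the phases
    with lock:
        log.append(("phase2", i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# In every interleaving, all phase-1 entries precede all phase-2 entries.
print(all(p == "phase1" for p, _ in log[:N]))   # -> True
```

The static guarantee the paper claims for intervals is precisely that such ordering properties, plus deadlock- and race-freedom, can be checked before the program runs rather than asserted after the fact as above.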
NASA Astrophysics Data System (ADS)
Ramirez, Andres; Rahnemoonfar, Maryam
2017-04-01
A hyperspectral image provides a data-rich multidimensional array consisting of hundreds of spectral dimensions. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the use of parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems, tested against the following cases: a combined CPU-and-GPU test case, a CPU-only test case, and a test case where no dimensional reduction was applied.
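The map-reduce model itself is simple enough to sketch in a few lines; the following single-process toy (not the paper's Hadoop/GPU pipeline; the pixel data and class means are invented for illustration) classifies "pixels" by nearest spectral mean in the map phase and counts pixels per class in the reduce phase:

```python
from collections import defaultdict

# Illustrative two-band class means; real hyperspectral pixels would
# have hundreds of bands.
CLASS_MEANS = {"water": (0.1, 0.2), "soil": (0.6, 0.5)}

def mapper(pixel):
    """Map phase: emit (nearest_class, 1) for one pixel."""
    label = min(CLASS_MEANS,
                key=lambda c: sum((p - m) ** 2
                                  for p, m in zip(pixel, CLASS_MEANS[c])))
    yield label, 1

def reducer(label, counts):
    """Reduce phase: total the per-pixel counts for one class."""
    return label, sum(counts)

pixels = [(0.12, 0.18), (0.55, 0.52), (0.08, 0.25), (0.70, 0.40)]

groups = defaultdict(list)            # shuffle phase: group values by key
for pixel in pixels:
    for key, value in mapper(pixel):
        groups[key].append(value)

result = dict(reducer(k, v) for k, v in groups.items())
print(result)                         # -> {'water': 2, 'soil': 2}
```

Because the map calls are independent and the reduce is associative, a framework like Hadoop can scatter the mappers across nodes (or, as in this paper, across GPU threads) without changing the program's logic.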
File concepts for parallel I/O
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1989-01-01
The subject of input/output (I/O) has often been neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.
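The concurrent-access idea can be illustrated with positioned reads, which let multiple processes read disjoint regions of one file without contending over a shared file offset (a minimal sketch using POSIX `pread` via Python's `os` module; the record layout is invented for illustration):

```python
import os
import tempfile

# Build a file of 8 fixed-size records (4 bytes each, record i filled
# with byte value i), then read each record at its own offset.
fd0, path = tempfile.mkstemp()
os.close(fd0)
with open(path, "wb") as f:
    f.write(b"".join(bytes([i]) * 4 for i in range(8)))

fd = os.open(path, os.O_RDONLY)
# Each "process" p reads only its own record at offset p * 4; pread
# never moves the shared file position, so the reads are independent.
records = [os.pread(fd, 4, p * 4) for p in range(8)]
os.close(fd)
os.unlink(path)

print(records[3])   # -> b'\x03\x03\x03\x03'
```

Striping such records across multiple storage devices, so that the independent reads also hit independent disks, is the performance half of the parallel-file concept.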
Dynamical Instability Produces Transform Faults at Mid-Ocean Ridges
NASA Astrophysics Data System (ADS)
Gerya, Taras
2010-08-01
Transform faults at mid-ocean ridges—one of the most striking, yet enigmatic features of terrestrial plate tectonics—are considered to be the inherited product of preexisting fault structures. Ridge offsets along these faults therefore should remain constant with time. Here, numerical models suggest that transform faults are actively developing and result from dynamical instability of constructive plate boundaries, irrespective of previous structure. Boundary instability from asymmetric plate growth can spontaneously start in alternate directions along successive ridge sections; the resultant curved ridges become transform faults within a few million years. Fracture-related rheological weakening stabilizes ridge-parallel detachment faults. Offsets along the transform faults change continuously with time by asymmetric plate growth and discontinuously by ridge jumps.
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.
LLMapReduce: Multi-Lingual Map-Reduce for Supercomputing Environments
2015-11-20
1990s. Popularized by Google [36] and Apache Hadoop [37], map-reduce has become a staple technology of the ever-growing big data community...Lexington, MA, U.S.A. Abstract: The map-reduce parallel programming model has become extremely popular in the big data community. Many big data ...to big data users running on a supercomputer. LLMapReduce dramatically simplifies map-reduce programming by providing simple parallel programming
NASA Technical Reports Server (NTRS)
Poole, L. R.; Lecroy, S. R.; Morris, W. D.
1977-01-01
A computer program for studying linear ocean wave refraction is described. The program features random-access modular bathymetry data storage. Three bottom topography approximation techniques are available in the program which provide varying degrees of bathymetry data smoothing. Refraction diagrams are generated automatically and can be displayed graphically in three forms: Ray patterns with specified uniform deepwater ray density, ray patterns with controlled nearshore ray density, or crest patterns constructed by using a cubic polynomial to approximate crest segments between adjacent rays.
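The physics underlying such a linear refraction program (though not its ray-tracing or bathymetry-smoothing algorithms, which are not reproduced here) is Snell's law with a depth-dependent phase speed; in the shallow-water limit the speed is c = sqrt(g h), so rays bend toward the shore-normal as depth decreases:

```python
import math

g = 9.81   # gravitational acceleration, m/s^2

def refracted_angle(theta1_deg, h1, h2):
    """Ray angle from shore-normal after a depth change h1 -> h2 (meters).

    Shallow-water approximation: phase speed c = sqrt(g * h), and
    Snell's law sin(theta2)/c2 = sin(theta1)/c1 along the ray.
    """
    c1, c2 = math.sqrt(g * h1), math.sqrt(g * h2)
    s = math.sin(math.radians(theta1_deg)) * c2 / c1
    return math.degrees(math.asin(s))

# A ray approaching at 30 degrees in 20 m of water, shoaling to 5 m:
theta2 = refracted_angle(30.0, 20.0, 5.0)
print(round(theta2, 1))   # -> 14.5
```

Repeating this relation over a gridded bathymetry, segment by segment along each ray, is what produces the ray patterns the abstract describes; the program's three bottom-approximation techniques differ only in how smoothly h varies between grid points.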
Communicating Ocean Acidification
ERIC Educational Resources Information Center
Pope, Aaron; Selna, Elizabeth
2013-01-01
Participation in a study circle through the National Network of Ocean and Climate Change Interpretation (NNOCCI) project enabled staff at the California Academy of Sciences to effectively engage visitors on climate change and ocean acidification topics. Strategic framing tactics were used as staff revised the scripted Coral Reef Dive program,…
Creating a Parallel Version of VisIt for Microsoft Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlock, B J; Biagas, K S; Rawson, P L
2011-12-07
VisIt is a popular, free, interactive, parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt is composed of a set of cooperating programs. All programs can be run locally or in client/server mode, in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release.
We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
Seismic Imaging Reveals Deep-Penetrating Fault Planes in the Wharton Basin Oceanic Mantle
NASA Astrophysics Data System (ADS)
Carton, H. D.; Singh, S. C.; Dyment, J.; Hananto, N. D.; Chauhan, A.
2011-12-01
We present images from a deep multi-channel seismic reflection survey acquired in 2006 over the oceanic lithosphere of the Wharton Basin offshore northern Sumatra, NW of Simeulue island. The main ~230-km long seismic profile is roughly parallel to the trench at ~32-66 km distance from the subduction front and crosses (at oblique angles to both flow line and isochron directions) an entire segment of 55-57 my-old fast-spread crust formed at the extinct Wharton spreading center, as well as two bounding ~N5°E trending fracture zones near its extremities; complementary data is provided by the oceanic portions of two margin-crossing profiles on either side shot during the same survey. This high-quality, 12-km streamer dataset acquired for deep reflection imaging (10000 cu in tuned airgun array and 15-m source and streamer depths) reveals the presence of mostly SE-dipping (20 to 40 degrees dip) events cutting across and extending below the oceanic Moho, down to a maximum depth below seafloor of ~37 km, at ~5 km spacing along the trench-parallel profile. Similar dipping mantle events are imaged on the oceanic portion of another long-offset profile acquired in 2009 offshore central Sumatra south of Pagai island, which will also be presented. Such events are unlikely to be imaging artefacts of the 2D acquisition, such as out-of-plane energy originating from sharp, buried basement reliefs trending obliquely to the profile. Due to their geometry, they do not seem to be associated with plate bending at the trench outer-rise, which has a relatively modest expression at the seafloor and within the incoming sedimentary section north of the Simeulue elbow. We propose that these deep-penetrating dipping reflectors are fossil fault planes formed due to compressive stresses at the beginning of the continent-continent collision between India and Eurasia, the early stages of which were responsible for the cessation of seafloor spreading at the Wharton ridge at ca 40 Ma.
Debugging Fortran on a shared memory machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, T.R.; Padua, D.A.
1987-01-01
Debugging on a parallel processor is more difficult than debugging on a serial machine because errors in a parallel program may introduce nondeterminism. The approach to parallel debugging presented here attempts to reduce the problem of debugging on a parallel machine to that of debugging on a serial machine by automatically detecting nondeterminism. 20 refs., 6 figs.
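The detection idea above can be made concrete with Bernstein's conditions: two program sections may produce a nondeterministic result if one writes a variable the other reads or writes. A minimal sketch (the helper name and interface are illustrative, not from the paper):

```python
# Bernstein's conditions: sections A and B can run in parallel with a
# deterministic result only if neither writes what the other touches.
# Helper name and interface are illustrative, not from the paper.

def may_race(reads_a, writes_a, reads_b, writes_b):
    """Return True if the two sections could interleave nondeterministically."""
    wa, wb = set(writes_a), set(writes_b)
    return bool(wa & set(reads_b) or wb & set(reads_a) or wa & wb)

# A writes X while B reads X -> a potential race; disjoint sets -> safe.
race1 = may_race(reads_a=["Y"], writes_a=["X"], reads_b=["X"], writes_b=[])
race2 = may_race(reads_a=["X"], writes_a=["Y"], reads_b=["X"], writes_b=["Z"])
```

An automatic detector applying such a check to every pair of concurrent sections reduces the parallel-debugging problem to the serial one, as the abstract describes.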
It's Only a Little Planet: A Primer for Ocean Studies.
ERIC Educational Resources Information Center
Meyland, Sarah J.
Developed as part of the Day on the Bay Cruise Program, funded by the National Sea Grant Program, this learner's manual outlines ocean studies conducted on a seven-hour cruise of the Galveston Bay area. A description of the geology and human use of Galveston Bay follows a general introduction to coastal and estuarine ecology. Line drawings…
Seasat data applications in ocean industries
NASA Technical Reports Server (NTRS)
Montgomery, D. R.
1985-01-01
It is pointed out that the world population expansion and resulting shortages of food, minerals, and fuel have focused additional attention on the world's oceans. In this context, aspects of weather prediction and the monitoring/prediction of long-range climatic anomalies become more important. In spite of technological advances, the commercial ocean industry and the naval forces suffer now from inadequate data and forecast products related to the oceans. The Seasat Program and the planned Navy-Remote Oceanographic Satellite System (N-ROSS) represent major contributions to improved observational coverage and the processing needed to achieve better forecasts. The Seasat Program was initiated to evaluate the effectiveness of the remote sensing of oceanographic phenomena from a satellite platform. Possible oceanographic satellite applications are presented in a table, and the impact of Seasat data on industry sectors is discussed. Attention is given to offshore oil development, deep-ocean mining, fishing, and marine transportation.
ONR Ocean Wave Dynamics Workshop
NASA Astrophysics Data System (ADS)
In anticipation of the start (in Fiscal Year 1988) of a new Office of Naval Research (ONR) Accelerated Research Initiative (ARI) on Ocean Surface Wave Dynamics, a workshop was held August 5-7, 1986, at Woods Hole, Mass., to discuss new ideas and directions of research. This new ARI on Ocean Surface Wave Dynamics is a 5-year effort that is organized by the ONR Physical Oceanography Program in cooperation with the ONR Fluid Mechanics Program and the Physical Oceanography Branch at the Naval Ocean Research and Development Activity (NORDA). The central theme is improvement of our understanding of the basic physics and dynamics of surface wave phenomena, with emphasis on the following areas: precise air-sea coupling mechanisms, dynamics of nonlinear wave-wave interaction under realistic environmental conditions, wave breaking and dissipation of energy, interaction between surface waves and upper ocean boundary layer dynamics, and surface statistical and boundary layer coherent structures.
NASA Technical Reports Server (NTRS)
Keppenne, C. L.; Rienecker, M.; Borovikov, A. Y.
1999-01-01
Two massively parallel data assimilation systems, in which the model forecast-error covariances are estimated from the distribution of an ensemble of model integrations, are applied to the assimilation of 97-98 TOPEX/POSEIDON altimetry and TOGA/TAO temperature data into a Pacific basin version of the NASA Seasonal to Interannual Prediction Project (NSIPP) quasi-isopycnal ocean general circulation model. In the first system, an ensemble of model runs forced by an ensemble of atmospheric model simulations is used to calculate asymptotic error statistics. The data assimilation then occurs in the reduced phase space spanned by the corresponding leading empirical orthogonal functions. The second system is an ensemble Kalman filter in which new error statistics are computed during each assimilation cycle from the time-dependent ensemble distribution. The data assimilation experiments are conducted on NSIPP's 512-processor CRAY T3E. The two data assimilation systems are validated by withholding part of the data and quantifying the extent to which the withheld information can be inferred from the assimilation of the remaining data. The pros and cons of each system are discussed.
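The core of the second system, recomputing error statistics from the ensemble spread at each cycle and applying a Kalman update, can be sketched for a one-dimensional state and a scalar observation (a toy stand-in, not the NSIPP implementation):

```python
# Toy ensemble Kalman filter cycle for a 1-D state and one scalar
# observation: the forecast-error variance is estimated from the
# ensemble spread, as in the second system above. Illustrative only.

def enkf_update(ensemble, obs, obs_var):
    n = len(ensemble)
    mean = sum(ensemble) / n
    # Sample forecast-error variance from the time-dependent ensemble.
    p_f = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = p_f / (p_f + obs_var)              # Kalman gain
    return [x + gain * (obs - x) for x in ensemble]

analysis = enkf_update([1.0, 2.0, 3.0], obs=2.5, obs_var=0.5)
```

Each member is nudged toward the observation in proportion to the gain, so the analysis ensemble has both a shifted mean and a reduced spread.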
A portable MPI-based parallel vector template library
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.
1995-01-01
This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.
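The three components named above (one generic collection, generic algorithms over it, and algebraic combining functions) can be mimicked in a few lines; here local sublists stand in for distributed address spaces, and all names are illustrative rather than the library's actual interface:

```python
# Sketch of the library's three components: a generic collection,
# generic elementwise algorithms, and algebraic combining functions.
# The "ranks" are plain sublists here; a real implementation would
# place one block per MPI process. Names are illustrative.

class DistVector:
    def __init__(self, data, nranks):
        # Block-distribute the elements over nranks address spaces.
        step = (len(data) + nranks - 1) // nranks
        self.blocks = [data[i:i + step] for i in range(0, len(data), step)]

    def map(self, fn):                     # generic elementwise algorithm
        self.blocks = [[fn(x) for x in b] for b in self.blocks]
        return self

    def reduce(self, combine, init):       # generic algebraic reduction
        acc = init
        for b in self.blocks:              # local passes, then combine
            for x in b:
                acc = combine(acc, x)
        return acc

v = DistVector([1, 2, 3, 4, 5, 6], nranks=3)
total = v.map(lambda x: x * x).reduce(lambda a, b: a + b, 0)
```

Because the combining function is associative, each rank could reduce its block locally and the partial results could then be combined across ranks, which is what makes the interface portable to distributed memory.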
Parallel computation and the basis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, G.R.
1993-05-01
A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
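The master-and-slaves domain decomposition described above reduces, in its simplest form, to splitting an index domain into contiguous subdomains and combining per-slave partial results. A sketch with plain function calls standing in for PVM message passing (the function names are hypothetical):

```python
# Master-and-slaves domain decomposition in miniature: the master splits
# a 1-D index domain into near-equal contiguous subdomains, each slave
# computes a partial result, and the master combines them. PVM message
# passing is replaced by plain calls; names are illustrative.

def decompose(n, nslaves):
    """Split indices 0..n-1 into nslaves near-equal contiguous subdomains."""
    base, extra = divmod(n, nslaves)
    bounds, start = [], 0
    for s in range(nslaves):
        size = base + (1 if s < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

def slave_work(lo, hi):
    return sum(range(lo, hi))              # stand-in for a science package

parts = decompose(10, 3)
total = sum(slave_work(lo, hi) for lo, hi in parts)
```

The master only needs the subdomain bounds to route work and gather results, which is how the packages can stay independent of one another.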
Optimization of Ocean Color Algorithms: Application to Satellite Data Merging
NASA Technical Reports Server (NTRS)
Maritorena, Stephane; Siegel, David A.; Morel, Andre
2003-01-01
The objective of our program is to develop and validate a procedure for ocean color data merging, which is one of the major goals of the SIMBIOS project. The need for a merging capability is dictated by the fact that since the launch of MODIS on the Terra platform, and over the next decade, several global ocean color missions from various space agencies are or will be operational simultaneously. The apparent redundancy in simultaneous ocean color missions can actually be exploited to various benefits. The most obvious benefit is improved coverage. The patchy and uneven daily coverage from any single sensor can be improved by using a combination of sensors. Besides improved coverage of the global ocean, the merging of ocean color data should also result in new, improved, more diverse and better data products with lower uncertainties. Ultimately, ocean color data merging should result in the development of a unified, scientific-quality ocean color time series, from SeaWiFS to NPOESS and beyond. Various approaches can be used for ocean color data merging and several have been tested within the frame of the SIMBIOS program. As part of the SIMBIOS Program, we have developed a merging method for ocean color data. Unlike other methods, our approach does not combine end-products like the subsurface chlorophyll concentration (chl) from different sensors to generate a unified product. Instead, our procedure takes the normalized water-leaving radiances (L(sub WN)(lambda)) from single or multiple sensors and uses them in the inversion of a semi-analytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Besides ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end-products (e.g. chlorophyll): 1) it works with single or multiple data sources regardless of their specific bands, 2) it exploits band redundancies and band differences, 3) it accounts for uncertainties in the (L(sub WN)(lambda)) data and, 4) it provides uncertainty estimates for the retrieved variables.
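The model-based merging idea, fitting one model to radiances from sensors with different bands instead of blending end-products, can be illustrated with a deliberately simplified linear stand-in for the semi-analytical model (the real model, its parameters, and the band values below are not those of the paper):

```python
# Toy illustration of model-based merging: fit one model to normalized
# water-leaving radiances pooled from sensors with different bands.
# A linear stand-in L(lam) = a + b*lam replaces the real semi-analytical
# model; wavelengths and radiances below are made up for illustration.

def fit_line(samples):
    """Least-squares fit over (wavelength, radiance) pairs from any sensors."""
    n = len(samples)
    sx = sum(w for w, _ in samples)
    sy = sum(r for _, r in samples)
    sxx = sum(w * w for w, _ in samples)
    sxy = sum(w * r for w, r in samples)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# "Sensor 1" bands at 412, 490 nm; "sensor 2" at 443, 555 nm: the fit
# uses all observations regardless of which sensor contributed them.
obs = [(412, 1.0), (490, 0.8), (443, 0.9), (555, 0.7)]
a, b = fit_line(obs)
```

Because the inversion operates on pooled radiances rather than per-sensor end-products, sensors with different band sets contribute on an equal footing, which is benefit 1) above in miniature.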
NASA Astrophysics Data System (ADS)
Talley, L. D.; Johnson, K. S.; Claustre, H.; Boss, E.; Emerson, S. R.; Westberry, T. K.; Sarmiento, J. L.; Mazloff, M. R.; Riser, S.; Russell, J. L.
2017-12-01
Our ability to detect changes in biogeochemical (BGC) processes in the ocean that may be driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by undersampling in vast areas of the open ocean. Argo is a major international program that measures ocean heat content and salinity with about 4000 floats distributed throughout the ocean, profiling to 2000 m every 10 days. Extending this approach to a global BGC-Argo float array, using recent, proven sensor technology, and in close synergy with satellite systems, will drive a transformative shift in observing and predicting the effects of climate change on ocean metabolism, carbon uptake, acidification, deoxygenation, and living marine resource management. BGC-Argo will add sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance, with sufficient accuracy for climate studies. Observing System Simulation Experiments (OSSEs) using BGC models indicate that 1000 BGC floats would provide sufficient coverage, i.e., equipping one quarter of the Argo array. BGC-Argo (http://biogeochemical-argo.org) will enhance current sustained observational programs such as Argo, GO-SHIP, and long-term ocean time series. BGC-Argo will benefit from deployments on GO-SHIP vessels, which provide sensor verification. Empirically derived algorithms that relate the observed BGC float parameters to the carbon system parameters will provide global information on seasonal ocean-atmosphere carbon exchange. BGC-Argo measurements could be paired with other emerging technology, such as pCO2 measurements from ships of opportunity and wave gliders, to extend and validate exchange estimates. BGC-Argo prototype programs already show the potential of a global observing system that can measure seasonal to decadal variability.
Various countries have developed regional BGC arrays: Southern Ocean (SOCCOM), North Atlantic Subpolar Gyre (remOcean), Mediterranean (NAOS), the Kuroshio (INBOX), and Indian Ocean (IOBioArgo). As examples, bio-optical sensors are identifying regional anomalies in light attenuation/scattering, with implications for ocean productivity and carbon export; SOCCOM floats show high CO2 outgassing in the Antarctic Circumpolar Current, due to previously unmeasured winter fluxes.
Innovations in Ocean Sciences Education at the University of Washington
NASA Astrophysics Data System (ADS)
Robigou, V.
2003-12-01
A new wave of education collaborations began when the national science education reform documents (AAAS Project 2061 and National Science Education Standards) recommended that scientific researchers become engaged stakeholders in science education. Collaborations between research institutions, universities, nonprofits, corporations, parent groups, and school districts can provide scientists original avenues to contribute to education for all. The University of Washington strongly responded to the national call by promoting partnerships between the university research community, the K-12 community and the general public. The College of Ocean and Fishery Sciences and the School of Oceanography spearheaded the creation of several innovative programs in ocean sciences to contribute to the improvement of Earth science education. Two of these programs are the REVEL Project and the Marine Science Student Mobility (MSSM) program, which share the philosophy of involving school districts, K-12 science teachers, their students and undergraduate students in current, international, cutting-edge oceanographic research. The REVEL Project (Research and Education: Volcanoes, Exploration and Life) is an NSF-funded professional development program for middle and high school science teachers who are determined to use deep-sea research and seafloor exploration as tools to implement inquiry-based science in their classrooms, schools, and districts, and to share their experiences with their communities. Initiated in 1996 as a regional program for Northwest science educators, REVEL evolved into a multi-institutional program inviting teachers to practice doing research on sea-going research expeditions. Today, in its 7th year, the project offers teachers throughout the U.S.
an opportunity to participate and contribute to international, multidisciplinary, deep-sea research in the Northeast Pacific ocean to study the relationship between geological processes such as earthquakes and volcanism, fluid circulation and life on our planet. http://www.ocean.washington.edu/outreach/revel/ The Marine Science Student Mobility program is a FIPSE-funded program that fosters communication and collaboration across cultural and linguistic boundaries for undergraduate students interested in pursuing careers in marine sciences. A consortium of six universities in Florida, Hawaii, Washington, Belgium, Spain and France offers a unique way to study abroad. During a six month exchange, students acquire foreign language skills, cultural awareness and ocean sciences field study in one of the four major oceanographic areas: the Atlantic, the Pacific, the Gulf of Mexico and the Mediterranean. The program not only promotes cultural understanding among the participant students but among faculty members from different educational systems, and even among language and science faculty members. Understanding how different cultures approach, implement, and interpret scientific research to better study the world's oceans is the cornerstone of this educational approach. http://www.marine-language-exch.org/ Similar collaborative, educational activities could be adapted by other research institutions on many campuses to provide many opportunities for students, teachers and the general public to get involved in Earth and ocean sciences.
Parallel algorithm for determining motion vectors in ice floe images by matching edge features
NASA Technical Reports Server (NTRS)
Manohar, M.; Ramapriyan, H. K.; Strong, J. P.
1988-01-01
A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. The authors describe a parallel algorithm, implemented on the MPP, for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.
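The partial-match step, accepting incomplete correspondences so that fragmented or merged floes still match, can be sketched with length-only descriptors (the tolerance and scoring rule are illustrative, not the MPP algorithm's actual criteria):

```python
# Sketch of partial matching of edge descriptions: each seed point
# carries the segment lengths of its edge structure, and the score
# tolerates fragmentation/merging by matching against the shorter
# description. Tolerance and scoring are illustrative only.

def match_score(desc_a, desc_b, tol=0.1):
    """Fraction of the shorter description matched by segment length."""
    b_pool = list(desc_b)
    hits = 0
    for length in desc_a:
        for i, other in enumerate(b_pool):
            if abs(length - other) <= tol * max(length, other):
                hits += 1
                del b_pool[i]
                break
    return hits / min(len(desc_a), len(desc_b))

# A floe that fragmented still fully matches the surviving segments.
score = match_score([5.0, 3.0, 2.0, 4.0], [5.05, 2.95])
```

Normalizing by the shorter description is what lets a fragment of a floe score highly against its complete earlier image instead of being rejected outright.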
NASA Astrophysics Data System (ADS)
Farrington, J.; Pantoja, S.
2007-05-01
The Woods Hole Oceanographic Institution, USA (WHOI) and the University of Concepcion, Chile (UDEC) entered into an MOU to enhance graduate education and research in ocean sciences in Chile and enhance research for understanding the Southeastern Pacific Ocean. The MOU was drafted and signed after exchange visits of faculty. The formulation of a five-year program of activities included: exchange of faculty for purposes of enhancing research, teaching and advising; visits of Chilean graduate students to WHOI for several months of supplemental study and research in the area of their thesis research; participation of Chilean faculty and graduate students in WHOI faculty led cruises off Chile and Peru (with Peruvian colleagues); a postdoctoral fellowship program for Chilean ocean scientists at WHOI; and the establishment of an Austral Summer Institute of advanced undergraduate and graduate level intensive two to three week courses on diverse topics at the cutting edge of ocean science research co-sponsored by WHOI and UDEC for Chilean and South American students with faculty drawn from WHOI and other U.S. universities with ocean sciences graduate schools and departments, e.g. Scripps Institution of Oceanography, University of Delaware. The program has been evaluated by external review and received excellent comments. The success of the program has been due mainly to: (1) the cooperative attitude and enthusiasm of the faculty colleagues of both Chilean universities (especially UDEC) and WHOI, students and postdoctoral fellows, and (2) a generous grant from the Fundacion Andes-Chile enabling these activities.
Coupled ice-ocean dynamics in the marginal ice zones Upwelling/downwelling and eddy generation
NASA Technical Reports Server (NTRS)
Hakkinen, S.
1986-01-01
This study is aimed at modeling mesoscale processes such as upwelling/downwelling and ice edge eddies in the marginal ice zones. A two-dimensional coupled ice-ocean model is used for the study. The ice model is coupled to the reduced gravity ocean model through interfacial stresses. The parameters of the ocean model were chosen so that the dynamics would be nonlinear. The model was tested by studying the dynamics of upwelling. Winds parallel to the ice edge with the ice on the right produce upwelling because the air-ice momentum flux is much greater than the air-ocean momentum flux; thus the Ekman transport is greater under the ice than in the open water. The stability of the upwelling and downwelling jets is discussed. The downwelling jet is found to be far more unstable than the upwelling jet because the upwelling jet is stabilized by the divergence. The constant wind field exerted on a varying ice cover will generate vorticity leading to enhanced upwelling/downwelling regions, i.e., wind-forced vortices. Steepening and strengthening of vortices are provided by the nonlinear terms. When forcing is time-varying, the advection terms will also redistribute the vorticity. The wind reversals will separate the vortices from the ice edge, so that the upwelling enhancements are pushed to the open ocean and the downwelling enhancements are pushed underneath the ice.
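The upwelling mechanism follows from the Ekman transport relation M = τ/(ρf): a larger stress under the ice than over open water gives divergent transport at the ice edge. A back-of-envelope check with representative (illustrative, not the model's) values:

```python
# Ekman volume transport per unit width, M = tau / (rho * f), in m^2/s.
# A larger stress transmitted through the ice than over open water
# means divergent transport at the ice edge, hence upwelling.
# Stress values and constants are representative, not from the paper.

RHO = 1025.0      # seawater density, kg/m^3
F = 1.3e-4        # Coriolis parameter at high latitude, 1/s

def ekman_transport(stress):
    return stress / (RHO * F)

m_ice = ekman_transport(0.2)    # assumed ice-ocean stress, N/m^2
m_open = ekman_transport(0.1)   # assumed air-ocean stress, N/m^2
divergence = m_ice - m_open     # positive -> upwelling at the edge
```

With the wind reversed (or the ice on the left), the sign of the transport difference flips and the same argument gives downwelling.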
Electrical anisotropy in the presence of oceans—a sensitivity study
NASA Astrophysics Data System (ADS)
Cembrowski, Marcel; Junge, Andreas
2018-05-01
Electrical anisotropy in the presence of oceans is particularly relevant at continent-ocean subduction zones (e.g. Cascadian and Andean Margin), where seismic anisotropy has been found with trench-parallel or perpendicular fast direction. The identification of electrical anisotropy at such locations sheds new light on the relation between seismic and electrical anisotropies. At areas confined by two opposite oceans, for example the Pyrenean Area and Central America, we demonstrate that the superposed responses of both oceans generate a uniform and large phase split of the main phase tensor axes. The pattern of the tipper arrows is comparatively complicated and it is often difficult to associate their length and orientation to the coast effect. On the basis of simple forward models involving opposite oceans and anisotropic layers, we show that both structures generate similar responses. In the case of a deep anisotropic layer, the resistivity and phase split generated by the oceans alone will be increased or decreased depending on the azimuth of the conducting horizontal principal axes. The 3-D isotropic inversion of the anisotropic forward responses reproduces the input data reasonably well. The anisotropy is explained by large opposed conductors outside the station grid and by tube-like elongated conductors representing a macroscopic anisotropy. If the conductive direction is perpendicular to the shorelines, the anisotropy is not recovered by 3-D isotropic inversion.
Geodynamics Branch research report, 1982
NASA Technical Reports Server (NTRS)
Kahn, W. D. (Editor); Cohen, S. C. (Editor)
1983-01-01
The research program of the Geodynamics Branch is summarized. The research activities cover a broad spectrum of geoscience disciplines including space geodesy, geopotential field modeling, tectonophysics, and dynamic oceanography. The NASA programs which are supported by the work described include the Geodynamics and Ocean Programs, the Crustal Dynamics Project, the proposed Ocean Topography Experiment (TOPEX) and Geopotential Research Mission. The individual papers are grouped into chapters on Crustal Movements, Global Earth Dynamics, Gravity Field Model Development, Sea Surface Topography, and Advanced Studies.
Performance Evaluation of Remote Memory Access (RMA) Programming on Shared Memory Parallel Computers
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele; Biegel, Bryan A. (Technical Monitor)
2002-01-01
The purpose of this study is to evaluate the feasibility of remote memory access (RMA) programming on shared memory parallel computers. We discuss different RMA based implementations of selected CFD application benchmark kernels and compare them to corresponding message passing based codes. For the message-passing implementation we use MPI point-to-point and global communication routines. For the RMA based approach we consider two different libraries supporting this programming model. One is a shared memory parallelization library (SMPlib) developed at NASA Ames, the other is the MPI-2 extensions to the MPI Standard. We give timing comparisons for the different implementation strategies and discuss the performance.
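The contrast between the two models can be shown in miniature: with RMA, the origin process reads or writes a window of the target's memory without a matching call on the target, whereas message passing requires a cooperating send/recv pair. A simulated sketch (a plain list stands in for an MPI-2 window; all names are illustrative):

```python
# One-sided (RMA) vs two-sided (message passing) in miniature.
# A plain list stands in for an MPI-2 window; names are illustrative.

class Window:
    """Stand-in for an RMA window exposing part of a process's memory."""
    def __init__(self, size):
        self.mem = [0.0] * size

    def put(self, offset, values):         # one-sided write by the origin
        self.mem[offset:offset + len(values)] = values

    def get(self, offset, count):          # one-sided read by the origin
        return list(self.mem[offset:offset + count])

# Two-sided message passing needs a matching pair: a send and a recv.
inbox = []
def send(msg):
    inbox.append(msg)
def recv():
    return inbox.pop(0)

win = Window(4)
win.put(1, [3.5, 7.0])                     # the "target" takes no action
halo = win.get(1, 2)

send([3.5, 7.0])                           # two-sided: both sides cooperate
received = recv()
```

The practical difference the paper measures is exactly this: RMA decouples data movement from synchronization on the target, which can pay off on shared-memory machines where a put is close to a plain store.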
Ocean FEST (Families Exploring Science Together)
NASA Astrophysics Data System (ADS)
Bruno, B. C.; Wiener, C. S.
2009-12-01
Ocean FEST (Families Exploring Science Together) exposes families to cutting-edge ocean science research and technology in a fun, engaging way. Research has shown that family involvement in science education adds significant value to the experience. Our overarching goal is to attract underrepresented students (including Native Hawaiians, Pacific Islanders and girls) to geoscience careers. A second goal is to communicate to diverse audiences that geoscience is directly relevant and applicable to their lives, and critical in solving challenges related to global climate change. Ocean FEST engages elementary school students, parents, teachers, and administrators in family science nights based on a proven model developed by Art and Rene Kimura of the Hawaii Space Grant Consortium. Our content focuses on the role of the oceans in climate change, and is based on the transformative research of the NSF Center for Microbial Oceanography: Research and Education (C-MORE) and the Hawaii Institute of Marine Biology (HIMB). Through Ocean FEST, underrepresented students and their parents and teachers learn about new knowledge being generated at Hawaii’s world-renowned ocean research institutes. In the process, they learn about fundamental geoscience concepts and career opportunities. This project is aligned with C-MORE’s goal of increasing the number of underrepresented students pursuing careers in the ocean and earth sciences, and related disciplines. Following a successful round of pilot events at elementary schools on Oahu, funding was obtained through NSF Opportunities for Enhancing Diversity in the Geosciences to implement a three-year program at minority-serving elementary schools in Hawaii. Deliverables include 20 Ocean FEST events per year (each preceded by teacher professional development training), a standards-based program that will be disseminated locally and nationally, three workshops to train educators in program delivery, and an Ocean FEST science kit. 
In addition, we are currently conducting a series of pilot events at the middle school level at underserved schools at neighbor islands, funded through the Hawaii Innovation Initiative (Act 111). Themes addressed include community outreach, capacity building, teacher preparation, and use of technology.
The Automated Instrumentation and Monitoring System (AIMS) reference manual
NASA Technical Reports Server (NTRS)
Yan, Jerry; Hontalas, Philip; Listgarten, Sherry
1993-01-01
Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor which automatically inserts active event recorders into the program's source code before compilation; a run time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit which reconstructs program execution from the trace file; and a trace post-processor which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g. Sun Sparc and SGI) supporting X-Windows (in particular, X11R5, Motif 1.1.3).
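AIMS's pipeline of run-time event recording followed by overhead compensation can be sketched in miniature (the overhead model and all names here are illustrative, not the actual AIMS design):

```python
# Miniature of the AIMS pipeline: an instrumented function logs
# entry/exit events at run time, and a post-processing pass subtracts
# an estimated per-event recording overhead. The overhead constant and
# all names are illustrative, not the actual AIMS design.
import time

TRACE = []
EVENT_OVERHEAD = 1e-6   # assumed cost of recording one event, seconds

def instrumented(fn):
    """Source-level instrumentation in miniature: log entry/exit events."""
    def wrapper(*args):
        TRACE.append(("enter", fn.__name__, time.perf_counter()))
        result = fn(*args)
        TRACE.append(("exit", fn.__name__, time.perf_counter()))
        return result
    return wrapper

def compensated_duration(name):
    """Post-processing: measured span minus the two events' overhead."""
    t_in = next(t for kind, n, t in TRACE if kind == "enter" and n == name)
    t_out = next(t for kind, n, t in TRACE if kind == "exit" and n == name)
    return (t_out - t_in) - 2 * EVENT_OVERHEAD

@instrumented
def kernel(n):
    return sum(i * i for i in range(n))

kernel(1000)
```

The real instrumentor rewrites Fortran/C source before compilation rather than wrapping calls, but the division of labor (cheap recording at run time, correction afterwards) is the same.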
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, So
2003-11-20
We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted on by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
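For a chain of matrix-like contractions, the step that determines the binary contraction order with minimal operation cost reduces to the classic matrix-chain dynamic program; a minimal version (a far simpler cost model than TCE's general one, which also weighs memory and symmetry) looks like this:

```python
# Minimal-operation-cost binary contraction order for a chain of
# matrix-like tensors: the classic matrix-chain dynamic program.
# TCE's real cost model also accounts for memory and symmetry.

def min_contraction_cost(dims):
    """dims[i], dims[i+1] are the extents of tensor i; returns min flops."""
    n = len(dims) - 1                      # number of tensors in the chain
    cost = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j))
    return cost[0][n - 1]

# (10x100) @ (100x5) @ (5x50): contracting the first pair first is far
# cheaper (5000 + 2500 multiplications) than the alternative order.
best = min_contraction_cost([10, 100, 5, 50])
```

The same table, with back-pointers added, would yield the ordered list of binary contractions that TCE translates into generated code.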
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-06-01
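The global-view/local-view contrast this entry describes can be sketched in plain NumPy (our illustration; no XMP involved): a block-distributed array stands in for the global view, and per-rank slabs with halo cells filled by reading a neighbour's memory stand in for coarray-style local-view access.

```python
import numpy as np

# Plain-numpy sketch of the two views (an illustration, not XMP code):
# global view -- one logical array, block-distributed over NP "ranks";
# local view  -- each rank holds its slab plus halo cells filled by a
# coarray-style one-sided read of a neighbour's memory.
NP = 4
field = np.arange(16.0)        # the global-view array
blocks = np.split(field, NP)   # block distribution onto ranks

halos = []
for r, blk in enumerate(blocks):
    left = blocks[r - 1][-1] if r > 0 else 0.0     # neighbour "get"
    right = blocks[r + 1][0] if r < NP - 1 else 0.0
    halos.append(np.concatenate(([left], blk, [right])))

# Each rank applies a 3-point stencil using only its local data.
local = [h[:-2] + h[2:] for h in halos]

# Stitched together, the local results match the global computation.
g = np.pad(field, 1)
assert np.allclose(np.concatenate(local), g[:-2] + g[2:])
```

The hybrid approach in the paper amounts to choosing, per data structure, whichever of these two expressions is more natural: the field lives comfortably in the global view, the particles in the local view.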
15 CFR 923.60 - Review/approval procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Review/Approval Procedures § 923.60 Review/approval...
15 CFR 923.82 - Amendment review/approval procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Amendments to and Termination of Approved...
15 CFR 923.82 - Amendment review/approval procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Amendments to and Termination of Approved...
15 CFR 923.60 - Review/approval procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Review/Approval Procedures § 923.60 Review/approval...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-10
... Observation Committee, Meeting of the Data Management and Communications Steering Team AGENCY: National Ocean...). ACTION: Notice of open meeting. SUMMARY: NOAA's Integrated Ocean Observing System (IOOS) Program... meeting of the IOOC's Data Management and Communications Steering Team (DMAC-ST). The DMAC-ST membership...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-09
.../index.html . Dated: September 3, 2013. Jason Donaldson, Chief Financial Officer/Chief Administrative Officer, Office of Oceanic and Atmospheric Research, National Oceanic and Atmospheric Administration... Act Science Program's roles within the context of NOAA's ocean missions and policies. They should be...
76 FR 51353 - Nominations for Membership on the Ocean Research Advisory Panel
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-18
... Leadership Council (NORLC), the governing body of the National Oceanographic Partnership Program (NOPP... extended expertise and experience in the field of ocean science and/or ocean resource management... balance a range of geographic and sector representation and experience. Applicants must be U.S. citizens...
78 FR 9891 - Extension of Nominations for Membership on the Ocean Research Advisory Panel
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-12
... Leadership Council (NORLC), the governing body of the National Oceanographic Partnership Program (NOPP... experience in the field of ocean science and/or ocean resource management. Nominations should be identified... set of nominees will seek to balance a range of geographic and sector representation and experience...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-24
... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Evaluation of State Coastal... Administration (NOAA), Office of Ocean and Coastal Resource Management, National Ocean Service, Commerce. ACTION... to its Reserve final management plan approved by the Secretary of Commerce, and adhered to the terms...
An Overview of SIMBIOS Program Activities and Accomplishments. Chapter 1
NASA Technical Reports Server (NTRS)
Fargion, Giulietta S.; McClain, Charles R.
2003-01-01
The SIMBIOS Program was conceived in 1994 as a result of a NASA management review of the agency's strategy for monitoring the bio-optical properties of the global ocean through space-based ocean color remote sensing. At that time, the NASA ocean color flight manifest included two data buy missions, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and Earth Observing System (EOS) Color, and three sensors, two Moderate Resolution Imaging Spectroradiometers (MODIS) and the Multi-angle Imaging Spectro-Radiometer (MISR), scheduled for flight on the EOS-Terra and EOS-Aqua satellites. The review led to a decision that the international assemblage of ocean color satellite systems provided ample redundancy to assure continuous global coverage, with no need for the EOS Color mission. At the same time, it was noted that non-trivial technical difficulties attended the challenge (and opportunity) of combining ocean color data from this array of independent satellite systems to form consistent and accurate global bio-optical time series products. Thus, it was announced at the October 1994 EOS Interdisciplinary Working Group meeting that some of the resources budgeted for EOS Color should be redirected into an intercalibration and validation program (McClain et al., 2002).
Parent-Child Parallel-Group Intervention for Childhood Aggression in Hong Kong
ERIC Educational Resources Information Center
Fung, Annis L. C.; Tsang, Sandra H. K. M.
2006-01-01
This article reports the original evidence-based outcome study on parent-child parallel group-designed Anger Coping Training (ACT) program for children aged 8-10 with reactive aggression and their parents in Hong Kong. This research program involved experimental and control groups with pre- and post-comparison. Quantitative data collection…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhr, L.
1987-01-01
This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.
Parallel Performance of a Combustion Chemistry Simulation
Skinner, Gregg; Eigenmann, Rudolf
1995-01-01
We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.
Algorithms and programming tools for image processing on the MPP, part 2
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1986-01-01
A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.
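The pyramid image filtering mentioned above can be sketched as repeated separable smoothing followed by 2x subsampling (an assumed minimal NumPy version, not the MPP's Parallel Pascal code):

```python
import numpy as np

# Assumed minimal sketch of pyramid image filtering (NumPy, not the
# MPP's Parallel Pascal implementation): smooth with a separable
# 1-2-1 kernel, subsample by two, and repeat per level.
def smooth(img):
    p = np.pad(img.astype(float), 1, mode='edge')
    # vertical then horizontal 1-2-1 pass (separable kernel, sums to 1)
    v = 0.25 * p[:-2, :] + 0.5 * p[1:-1, :] + 0.25 * p[2:, :]
    return 0.25 * v[:, :-2] + 0.5 * v[:, 1:-1] + 0.25 * v[:, 2:]

def pyramid(img, levels):
    out = [np.asarray(img, float)]
    for _ in range(levels):
        out.append(smooth(out[-1])[::2, ::2])  # smooth, then subsample
    return out

levels = pyramid(np.ones((8, 8)), 2)
assert [l.shape for l in levels] == [(8, 8), (4, 4), (2, 2)]
```

On an SIMD array machine like the MPP, each smoothing pass maps naturally onto per-pixel parallelism, which is why pyramid filters were a standard demonstration workload.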
Deformation of the Songshugou ophiolite in the Qinling orogen
NASA Astrophysics Data System (ADS)
Sun, Shengsi; Dong, Yunpeng
2017-04-01
The Qinling orogen, the middle part of the China Central Orogenic Belt, is well documented to have been constructed by multiple convergences and subsequent collisions between the North China and South China Blocks, mainly on the basis of the geochemistry and geochronology of ophiolites and magmatic rocks, as well as sedimentary reconstruction. However, this model lacks constraints from the deformation associated with subduction/collision. The Songshugou ophiolite, exposed to the north of the Shangdan suture zone, represents fragments of oceanic crust and upper mantle. Previous work has revealed that the ophiolite was formed at an ocean ridge and then emplaced in the northern Qinling belt. Hence, deformation of the ophiolite provides constraints on the rifting and subduction processes. The ophiolite consists chiefly of metamorphosed mafic and ultramafic rocks. The ultramafic rocks comprise coarse dunite, dunitic mylonite, and harzburgite, with minor diopsidite veins. The mafic rocks are mainly amphibolite, garnet amphibolite, and amphibole schist, which are considered to be eclogite-facies and retrograde-metamorphosed oceanic crust. Amphibole grains in the mafic rocks exhibit a strong shape-preferred orientation parallel to the foliation, which is also parallel to the lithologic contacts between mafic and ultramafic rocks. Electron backscattered diffraction (EBSD) analyses show strong olivine crystallographic preferred orientations (CPOs) in dunite, including A-, B-, and C-types formed by the (010)[100], (010)[001], and (100)[001] dislocation slip systems, respectively. A-type CPO suggests high-temperature plastic deformation in the upper mantle. In comparison, B-type may be restricted to regions with significantly high water content and high differential stress, and C-type may also be formed in wet conditions under lower differential stress. Additionally, the dunite evolved into amphibolite-facies metamorphism with mineral assemblages of olivine + talc + anthophyllite.
Assuming a pressure of 1.5 GPa, which corresponds to equilibration in the spinel stability field, application of the olivine-spinel thermometer (Ballhaus et al., 1991) suggests a temperature of 622 ± 22 °C. Amphibole schists display a well-developed amphibole CPO, with the [100], [010], and [001] axes concentrated parallel to the Z-, Y-, and X-directions, respectively. The strong CPO of amphibole could be interpreted as anisotropic growth and passive rigid-body rotation under varying differential stresses rather than as the result of dislocation creep. The Hbl + Pl thermometer (Holland and Blundy, 1994) constrains the equilibrium temperature to 640 ± 34 °C for the amphibolite-facies metamorphism. Light-colored zircons from the amphibolite, with Th/U < 0.1 and depleted HREE, yield a U-Pb age of 504 ± 10 Ma, representing the metamorphic age of the eclogite. In comparison, dark-colored zircons from the amphibolite, showing flat HREE patterns and negative Eu anomalies, give a U-Pb age of 489 ± 5.2 Ma, constraining the time of retrograde metamorphism of the eclogite. Together with field investigation and regional geology, our new data suggest that the A-type olivine CPO was formed in the oceanic upper mantle during the spreading of the Shangdan ocean before ca. 514 Ma. At ca. 504 Ma, deep subduction of the oceanic lithosphere led to eclogite-facies metamorphism and induced the B-type olivine CPO. By ca. 489 Ma, obduction of the fragments of metamorphosed oceanic lithosphere resulted in the C-type olivine CPO in dunite and the amphibole CPO in the retrograde-metamorphosed eclogite.
NASA Astrophysics Data System (ADS)
What do anchovy and coffee prices have in common? They both are influenced by weather patterns. And so are a lot of other industries in the world of commodities. A new report from the National Research Council says it's time to protect these economic interests. The report outlines a new 15-year global research program that would help scientists make better seasonal and interannual climate predictions. Called the Global Ocean-Atmosphere-Land System or GOALS, the new program would be an extension of the decade-long international Tropical Ocean and Global Atmosphere (TOGA) program, which comes to an end this year. Besides studying the climatic effects of tropical phenomena such as the El Niño/Southern Oscillation, the program would expand these types of studies to Earth's higher latitudes and to additional physical processes, such as the effects of changes in upper ocean currents, soil moisture, vegetation, and land, snow, and sea-ice cover, among others.
Scalable Unix commands for parallel processors : a high-performance implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ong, E.; Lusk, E.; Gropp, W.
2001-06-22
We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.
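A single-machine analogue of the Parallel Unix Commands idea can be sketched in a few lines (our sketch; the real tools fan ls/ps/find out over cluster nodes with MPI, not threads):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# Single-machine analogue of the Parallel Unix Commands idea (a sketch;
# the actual commands use MPI across cluster nodes): list several
# directories concurrently and merge the per-worker results.
def parallel_ls(dirs, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        listings = list(pool.map(os.listdir, dirs))
    return {d: sorted(names) for d, names in zip(dirs, listings)}

# Demo on two scratch directories.
with tempfile.TemporaryDirectory() as root:
    for sub, fname in (('a', 'x.txt'), ('b', 'y.txt')):
        os.makedirs(os.path.join(root, sub))
        open(os.path.join(root, sub, fname), 'w').close()
    out = parallel_ls([os.path.join(root, 'a'), os.path.join(root, 'b')])
    assert sorted(v[0] for v in out.values()) == ['x.txt', 'y.txt']
```

The design point the paper makes, gathering per-node results into one merged report rather than interleaving raw output, is the `dict`-merge step here.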
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Van Rosendale, John
1989-01-01
A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
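The tensor product pattern behind such algorithms can be shown concretely (our example, not the paper's primitives): apply A ⊗ B to a vector without ever forming the Kronecker product, by reshaping the vector into a grid and applying each 1-D kernel along its own axis.

```python
import numpy as np

# Sketch of the tensor product computation pattern (our example, not
# the paper's language primitives): evaluate (A kron B) @ x without
# forming the Kronecker product, by viewing x as a grid and applying
# each 1-D kernel operator along its own axis -- the way 1-D kernels
# such as tridiagonal solvers combine into a 2-D algorithm.
def kron_apply(A, B, x):
    n, m = A.shape[0], B.shape[0]
    X = x.reshape(n, m)            # vector viewed as an n-by-m grid
    return (A @ X @ B.T).ravel()   # A acts on rows, B on columns

A = np.diag([1.0, 2.0, 3.0])       # stand-ins for 1-D kernel operators
B = np.eye(4)
x = np.arange(12.0)
assert np.allclose(kron_apply(A, B, x), np.kron(A, B) @ x)
```

Because the two axis applications are independent across rows and columns, each one parallelizes naturally, which is what makes this class of algorithms attractive on loosely coupled architectures.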
A CS1 pedagogical approach to parallel thinking
NASA Astrophysics Data System (ADS)
Rague, Brian William
Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and made manifest by typically using the latest, commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. 
The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of any initial three-week CS1 level module when compared with student comprehension levels just prior to starting the course. Survey results measured during the ninth week of the course reveal that performance levels remained high compared to pre-course performance scores. A second result produced by this study reveals no statistically significant interaction effect between the intervention method and student performance as measured by the evaluation instrument over three separate testing periods. However, visual inspection of survey score trends and the low p-value generated by the interaction analysis (0.062) indicate that further studies may verify improved concept retention levels for the lecture w/PAT group.
YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste
Modern embedded systems include hundreds of cores. Because of the difficulty in providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism, while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets that are difficult to partition, unpredictable memory accesses, unbalanced control flow, and fine-grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex and highly parallel architectures and the high-level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), a compilation framework based on LLVM for the automatic parallelization of irregular applications on modern MPSoCs. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.
NASA Technical Reports Server (NTRS)
Pratt, Terrence W.
1987-01-01
PISCES 2 is a programming environment and set of extensions to Fortran 77 for parallel programming. It is intended to provide a basis for writing programs for scientific and engineering applications on parallel computers in a way that is relatively independent of the particular details of the underlying computer architecture. This user's manual provides a complete description of the PISCES 2 system as it is currently implemented on the 20 processor Flexible FLEX/32 at NASA Langley Research Center.
Ocean Literacy Alliance-Hawaii (OLA-HI) Resource Guide
NASA Astrophysics Data System (ADS)
Bruno, B. C.; Rivera, M.; Hicks Johnson, T.; Baumgartner, E.; Davidson, K.
2008-05-01
The Ocean Literacy Alliance-Hawaii (OLA-HI) was founded in 2007 to establish a framework for collaboration in ocean science education in Hawaii. OLA-HI is supported by the federal Interagency Working Group-Ocean Education (IWG-OE) and funded through NSF and NOAA. Hawaii support is provided through the organizations listed above in the authors' block. Our inaugural workshop was attended by 55 key stakeholders, including scientists, educators, legislators, and representatives of federal, state, and private organizations and projects in Hawaii. Participants reviewed ongoing efforts, strengthened existing collaborations, and developed strategies to build new partnerships. Evaluations showed high satisfaction with the workshop, with 100% of respondents ranking the overall quality as `good' or `excellent'. Expected outcomes include a calendar of events, a website (www.soest.hawaii.edu/OLAHawaii), a list serve, and a resource guide for ocean science education in Hawaii. These products are all designed to facilitate online and offline networking and collaboration among Hawaii's ocean science educators. The OLA-HI resource guide covers a gamut of marine resources and opportunities, including K-12 curriculum, community outreach programs, museum exhibits and lecture series, internships and scholarships, undergraduate and graduate degree programs, and teacher professional development workshops. This guide is designed to share existing activities and products, minimize duplication of efforts, and help provide gap analysis to steer the direction of future ocean science projects and programs in Hawaii. We ultimately plan on using the resource guide to develop pathways to guide Hawaii's students toward ocean-related careers. We are especially interested in developing pathways for under-represented students in the sciences, particularly Native Hawaiians and Pacific Islanders, and will focus on this topic at a future OLA-HI workshop.
A language comparison for scientific computing on MIMD architectures
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.
1989-01-01
Choleski's method for solving banded symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages: the Force, PISCES, and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer, Flex/32.
Code Parallelization with CAPO: A User Manual
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)
2001-01-01
A software tool has been developed to assist in the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools (developed at the University of Greenwich), to generate OpenMP parallel codes for shared-memory architectures. This interactive toolkit transforms a serial Fortran application code into an equivalent parallel version of the software in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is then presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs, as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphical user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. It is therefore highly desirable to perform the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs, taking the available hardware into account. Tests with our system prototype show that the thread concept, combined with the agent paradigm, is suitable for speeding up image processing by automatic parallelization of image analysis tasks.
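The core scheduling idea, running data-independent subtasks concurrently while dependent ones wait on their inputs, can be shown in a toy form (our illustration, not the paper's agent-based system):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of the thread concept (our illustration, not the paper's
# system): subtasks with no data dependency are submitted to a thread
# pool together; a dependent subtask blocks on its inputs' results.
def run_pipeline(image):
    with ThreadPoolExecutor() as pool:
        # two independent analysis subtasks run as parallel threads
        # (toy stand-ins for real operators such as edge detection)
        edges = pool.submit(lambda im: [p - 1 for p in im], image)
        blur = pool.submit(lambda im: [p * 0.5 for p in im], image)
        # dependent subtask: consumes both independent results
        return [e + b for e, b in zip(edges.result(), blur.result())]

assert run_pipeline([2, 4]) == [2.0, 5.0]
```

An automatic parallelizer of the kind described would build this dependency structure from the task graph itself rather than from hand-written `submit` calls.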
Lindstrom Receives 2013 Ocean Sciences Award: Citation
NASA Astrophysics Data System (ADS)
Gordon, Arnold L.; Lagerloef, Gary S. E.
2014-09-01
Eric J. Lindstrom's record over the last 3 decades exemplifies both leadership and service to the ocean science community. Advancement of ocean science not only depends on innovative research but is enabled by support of government agencies. As NASA program scientist for physical oceanography for the last 15 years, Eric combined his proven scientific knowledge and skilled leadership abilities with understanding the inner workings of our government bureaucracy, for the betterment of all. He is a four-time NASA headquarters medalist for his achievements in developing a unified physical oceanography program that is well integrated with those of other federal agencies.
NASA Technical Reports Server (NTRS)
Hakkinen, S.
1984-01-01
This study is aimed at modelling mesoscale processes such as up/downwelling and ice-edge eddies in the marginal ice zones. A two-dimensional coupled ice-ocean model is used for the study. The ice model is coupled to the reduced-gravity ocean model (f-plane) through interfacial stresses. The constitutive equations of the sea ice are formulated on the basis of the Reiner-Rivlin theory. The internal ice stresses are important only at high ice concentrations (90-100%); otherwise the ice motion is essentially free drift, in which the air-ice stress is balanced by the ice-water stress. The model was tested by studying the upwelling dynamics. Winds parallel to the ice edge, with the ice on the right, produce upwelling because the air-ice momentum flux is much greater than the air-ocean momentum flux, and thus the Ekman transport is larger under the ice than in the open water. The upwelling simulation was extended to include temporally varying forcing, chosen to vary sinusoidally with a 4-day period; this forcing resembles successive cyclone passages. In the model with a thin oceanic upper layer, ice bands were formed.
Database Development for Ocean Impacts: Imaging, Outreach, and Rapid Response
2012-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. ... (Applied Ocean Physics & Engineering Department, WHOI) to evaluate wear and locate ... in mooring optical cables used in the Right Whale monitoring ...
Inter-comparison of isotropic and anisotropic sea ice rheology in a fully coupled model
NASA Astrophysics Data System (ADS)
Roberts, A.; Cassano, J. J.; Maslowski, W.; Osinski, R.; Seefeldt, M. W.; Hughes, M.; Duvivier, A.; Nijssen, B.; Hamman, J.; Hutchings, J. K.; Hunke, E. C.
2015-12-01
We present the sea ice climate of the Regional Arctic System Model (RASM), using a suite of new physics available in the Los Alamos Sea Ice Model (CICE5). RASM is a high-resolution, fully coupled pan-Arctic model that also includes the Parallel Ocean Program (POP), the Weather Research and Forecasting (WRF) model, and the Variable Infiltration Capacity (VIC) land model. The model domain extends from ~45°N to the North Pole and is configured to run at ~9 km resolution for the ice and ocean components, coupled to 50 km resolution atmosphere and land models. The baseline sea ice model configuration includes mushy-layer sea ice thermodynamics and level-ice melt ponds. Using this configuration, we compare the use of isotropic and anisotropic sea ice mechanics, and evaluate model performance for these two variants against observations including Arctic buoy drift and deformation, satellite-derived drift and deformation, and sea ice volume estimates from ICESat. We find that the isotropic rheology better approximates the spatial patterns of thickness observed across the Arctic, but that both rheologies closely approximate the scaling laws observed in the pack using buoys and RGPS data. A fundamental property of both ice mechanics variants, the so-called Elastic-Viscous-Plastic (EVP) and Elastic-Anisotropic-Plastic (EAP) rheologies, is that they are highly sensitive to the timestep used for elastic sub-cycling in an inertia-resolving coupled framework, and this has a significant effect on surface fluxes in the fully coupled framework.
Activation of the marine ecosystem model 3D CEMBS for the Baltic Sea in operational mode
NASA Astrophysics Data System (ADS)
Dzierzbicka-Glowacka, Lidia; Jakacki, Jaromir; Janecki, Maciej; Nowicki, Artur
2013-04-01
The paper presents a new marine ecosystem model, 3D CEMBS, designed for the Baltic Sea. The ecosystem model is incorporated into the 3D POP-CICE ocean-ice model. The current Baltic Sea model is based on the Community Earth System Model (CESM, from the National Center for Atmospheric Research), which was adapted for the Baltic Sea as a coupled ocean-ice model. It consists of the Community Ice Code (CICE model, version 4.0) and the Parallel Ocean Program (version 2.1). The ecosystem model is a biological submodel of 3D CEMBS. It consists of eleven mass-conservation equations: partial second-order differential equations of the diffusion type with an advective term, for phytoplankton, zooplankton, nutrients, dissolved oxygen, and dissolved and particulate organic matter. The model is an effective tool for studying ecosystem bioproductivity. It is forced by 48-hour atmospheric forecasts provided by the UM model from the Interdisciplinary Centre for Mathematical and Computational Modelling of Warsaw University (ICM). The study was financially supported by the Polish State Committee for Scientific Research (grants No. N N305 111636 and N N306 353239). Partial support was also provided by the project Satellite Monitoring of the Baltic Sea Environment (SatBaltyk), funded by the European Union through the European Regional Development Fund, contract No. POIG 01.01.02-22-011/09. Calculations were carried out at the Academic Computer Centre in Gdańsk.
NASA Astrophysics Data System (ADS)
Boyer, T.; Sun, L.; Locarnini, R. A.; Mishonov, A. V.; Hall, N.; Ouellet, M.
2016-02-01
The World Ocean Database (WOD) contains systematically quality-controlled historical and recent ocean profile data (temperature, salinity, oxygen, nutrients, carbon-cycle variables, biological variables) ranging from Captain Cook's second voyage (1773) to this year's Argo floats. The US National Centers for Environmental Information (NCEI) also hosts the Global Temperature and Salinity Profile Program (GTSPP) Continuously Managed Database (CMD), which provides quality-controlled near-real-time ocean profile data and higher-level quality-controlled temperature and salinity profiles from 1990 to the present. Both databases are used extensively for ocean and climate studies. Synchronization of these two databases will allow easier access and use of comprehensive regional and global ocean profile data sets for ocean and climate studies. Synchronizing consists of two distinct phases: 1) a retrospective comparison of data in WOD and GTSPP to ensure that the most comprehensive and highest-quality data set is available to researchers without the need to individually combine and contrast the two datasets, and 2) web services to allow the constantly accruing near-real-time data in the GTSPP CMD and the continuous addition and quality control of historical data in WOD to be made available to researchers together, seamlessly.
15 CFR 923.23 - Other areas of particular concern.
Code of Federal Regulations, 2011 CFR
2011-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Special Management Areas § 923.23 Other...
NASA Astrophysics Data System (ADS)
Polat, Ali; Kerrich, Robert
1999-10-01
The late Archean (circa 2750-2670 Ma) Schreiber-Hemlo greenstone belt, Superior Province, Canada, is composed of tectonically juxtaposed fragments of oceanic plateaus (circa 2750-2700 Ma), oceanic island arcs (circa 2720-2695 Ma), and siliciclastic trench turbidites (circa 2705-2697 Ma). Following juxtaposition, these lithotectonic assemblages were collectively intruded by synkinematic tonalite-trondhjemite-granodiorite (TTG) plutons (circa 2720-2690 Ma) and ultramafic to felsic dikes and sills (circa 2690-2680 Ma), with subduction zone geochemical signatures. Overprinting relations between different sequences of structures suggest that the belt underwent at least three phases of deformation. During D1 (circa 2695-2685 Ma), oceanic plateau basalts and associated komatiites, arc-derived trench turbidites, and oceanic island arc sequences were all tectonically juxtaposed as they were incorporated into an accretionary complex. Fragmentation of these sequences resulted in broken formations and a tectonic mélange in the Schreiber assemblage of the belt. D2 (circa 2685-2680 Ma) is consistent with an intra-arc, right-lateral transpressional deformation. Fragmentation and mixing of D2 synkinematic dikes and sills suggest that mélange formation continued during D2. The D1 to D2 transition is interpreted in terms of a trenchward migration of the magmatic arc axis due to continued accretion and underplating. The D2 intra-arc strike-slip faults may have provided conduits for uprising melts from the descending slab, and they may have induced decompressional partial melting in the subarc mantle wedge, to yield synkinematic ultramafic to felsic intrusions. 
A similar close relationship between orogen-parallel strike-slip faulting and magmatism has recently been recognized in several Phanerozoic transpressional orogenic belts, suggesting that as in Phanerozoic counterparts, orogen-parallel strike-slip faulting in the Schreiber-Hemlo greenstone belt played an important role in magma emplacement.
Geochemistry and geodynamics of the Mawat mafic complex in the Zagros Suture zone, northeast Iraq
NASA Astrophysics Data System (ADS)
Azizi, Hossein; Hadi, Ayten; Asahara, Yoshihiro; Mohammad, Youssef Osman
2013-12-01
The Iraqi Zagros Orogenic Belt includes two separate ophiolite belts, which extend along a northwest-southeast trend near the Iranian border. The outer belt shows ophiolite sequences and originated at an oceanic ridge or in a supra-subduction zone. The inner belt includes the Mawat complex, which is parallel to the outer belt and is separated from it by the Biston Avoraman block. The Mawat complex, with zoned structures, includes sedimentary rocks with interbedded mafic lava and tuff, and thick mafic and ultramafic rocks. This complex does not show a typical ophiolite sequence such as those in Penjween and Bulfat. The Mawat complex shows evidence of dynamic deformation during the Late Cretaceous. Geochemical data suggest that the basic rocks have high MgO and are significantly depleted in LREE relative to HREE. In addition, they show positive εNd values (+5 to +8) and low 87Sr/86Sr ratios. The occurrence of some OIB-type rocks, high-Mg basaltic rocks, and some compositions intermediate between these two indicates the evolution of the Mawat complex from a primary and depleted mantle source. The absence of a typical ophiolite sequence and the good compatibility of the source magma with magma extracted from a mantle plume suggest that a mantle plume from the D″ layer is more consistent as the source of this complex than oceanic ridge or supra-subduction zone settings. Based on our proposed model, the Mawat basin represents an extensional basin formed from the Late Paleozoic onward along the Arabian passive margin, oriented parallel to the Neo-Tethys oceanic ridge or spreading center. The Mawat extensional basin formed without the creation of new oceanic basement. During the extension, huge volumes of mafic lava were emplaced in this basin. The basin was squeezed between the Arabian Plate and the Biston Avoraman block during the Late Cretaceous.
Swift, H F; Gómez Daglio, L; Dawson, M N
2016-06-01
Evolutionary inference can be complicated by morphological crypsis, particularly in open marine systems that may rapidly dissipate signals of evolutionary processes. These complications may be alleviated by studying systems with simpler histories and clearer boundaries, such as marine lakes: small bodies of seawater entirely surrounded by land. As an example, we consider the jellyfish Mastigias spp., which occur in two ecotypes, one in marine lakes and one in coastal oceanic habitats, throughout the Indo-West Pacific (IWP). We tested three evolutionary hypotheses to explain the current distribution of the ecotypes: (H1) the ecotypes originated from an ancient divergence; (H2) the lake ecotype was derived recently from the ocean ecotype during a single divergence event; and (H3) the lake ecotype was derived from multiple, recent, independent divergences. We collected specimens from 21 locations throughout the IWP, reconstructed multilocus phylogenetic and intraspecific relationships, and measured variation in up to 40 morphological characters. The species tree reveals three reciprocally monophyletic regional clades, two of which contain ocean and lake ecotypes, suggesting repeated, independent evolution of coastal ancestors into marine lake ecotypes, consistent with H3; hypothesis testing and an intraspecific haplotype network analysis of samples from Palau reaffirm this result. Phylogenetic character mapping strongly correlates morphology with environment rather than lineage (r=0.7512, p<0.00001). Considering also the deeper relationships among regional clades, morphological similarity in Mastigias spp. clearly results from three separate patterns of evolution: morphological stasis in ocean medusae, convergence of lake morphology across distinct species, and parallelism between lake morphologies within species.
That three evolutionary routes each result in crypsis illustrates the challenges of interpreting evolutionary processes from patterns of biogeography and diversity in the seas. Identifying cryptic species is only the first step in understanding these processes; an equally important second step is exploring and understanding the processes and patterns that create crypsis. Copyright © 2016 Elsevier Inc. All rights reserved.
Metric Selection for Ecosystem Restoration
2013-06-01
focus on wetlands, submerged aquatic vegetation, oyster reefs, riparian forest, and wet prairie (Miner 2005). The objective of these Corps...of coastal habitats, Volume Two: Tools for monitoring coastal habitats. NOAA Coastal Ocean Program Decision Analysis Series No. 23. Silver Spring, MD...NOAA National Centers for Coastal Ocean Science. Thom, R. M., and K. F. Wellman. 1996. Planning aquatic ecosystem restoration monitoring programs
NASA oceanic processes program: Status report, fiscal year 1980
NASA Technical Reports Server (NTRS)
1980-01-01
Goals, philosophy, and objectives of NASA's Oceanic Processes Program are presented as well as detailed information on flight projects, sensor developments, future prospects, individual investigator tasks, and recent publications. A special feature is a group of brief descriptions prepared by leaders in the oceanographic community of how remote sensing might impact various areas of oceanography during the coming decade.
Research activities of the Geodynamics Branch
NASA Technical Reports Server (NTRS)
Kahn, W. D. (Editor); Cohen, S. C. (Editor)
1984-01-01
A broad spectrum of geoscience disciplines, including space geodesy, geopotential field modeling, tectonophysics, and dynamic oceanography, is discussed. The NASA programs include the Geodynamics and Ocean Programs, the Crustal Dynamics Project, the proposed Ocean Topography Experiment (TOPEX), and the Geopotential Research Mission (GRM). The papers are grouped into chapters on Crustal Movements, Global Earth Dynamics, Gravity Field Model Development, Sea Surface Topography, and Advanced Studies.
Ocean Tide Loading Computation
NASA Technical Reports Server (NTRS)
Agnew, Duncan Carr
2005-01-01
September 15, 2003 through May 15, 2005. This grant funds the maintenance, updating, and distribution of programs for computing ocean tide loading, to enable the corrections for such loading to be more widely applied in space-geodetic and gravity measurements. These programs, developed under funding from the CDP and DOSE programs, incorporate the most recent global tidal models developed from TOPEX/Poseidon data, as well as local tide models for regions around North America; the design of the algorithm and software makes it straightforward to combine local and global models.
NASA Technical Reports Server (NTRS)
1974-01-01
Accomplishments in the continuing programs are reported. The data were obtained in support of the following broad objectives: (1) to provide a precise and accurate geometric description of the earth's surface; (2) to provide a precise and accurate mathematical description of the earth's gravitational field; and (3) to determine time variations of the geometry of the ocean surface, the solid earth, the gravity field, and other geophysical parameters.
Preconditioned implicit solvers for the Navier-Stokes equations on distributed-memory machines
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liou, Meng-Sing; Dyson, Rodger W.
1994-01-01
The GMRES method is parallelized, and combined with local preconditioning to construct an implicit parallel solver to obtain steady-state solutions for the Navier-Stokes equations of fluid flow on distributed-memory machines. The new implicit parallel solver is designed to preserve the convergence rate of the equivalent 'serial' solver. A static domain decomposition is used to partition the computational domain amongst the available processing nodes of the parallel machine. The SPMD (Single-Program Multiple-Data) programming model is combined with message-passing tools to develop the parallel code on a 32-node Intel Hypercube and a 512-node Intel Delta machine. The implicit parallel solver is validated for internal and external flow problems, and its solutions are found to match those obtained on a Cray Y-MP/8. A peak computational speed of 2300 MFlops/sec has been achieved on 512 nodes of the Intel Delta machine, for a problem size of 1024 K equations (256 K grid points).
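The serial building block described above can be sketched with SciPy's GMRES combined with an incomplete-LU preconditioner. The matrix below is an illustrative 1-D Laplacian standing in for one implicit step, not the Navier-Stokes Jacobian, and the setup is a minimal sketch rather than the paper's solver:

```python
# Preconditioned GMRES sketch: the serial method whose convergence rate
# the paper's parallel solver is designed to preserve. Matrix and sizes
# are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Tridiagonal (1-D Laplacian) system standing in for one implicit step
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Local (incomplete-LU) preconditioner, analogous in spirit to the local
# preconditioning used to keep the parallel convergence rate intact
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print(info == 0)                              # 0 means GMRES converged
print(np.linalg.norm(A @ x - b) < 1e-3)       # residual is small
```

With an exact factorization of a tridiagonal matrix the preconditioner is essentially the inverse, so GMRES converges in a handful of iterations; a domain-decomposed parallel variant would apply the same preconditioner block-locally on each processor's subdomain.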
New Sensor Technologies for Ocean Exploration and Observation
NASA Astrophysics Data System (ADS)
Manley, J. E.
2005-12-01
NOAA's Office of Ocean Exploration (OE) is an active supporter of new ocean technologies. Sensors, in particular, have been a focus of recent investments, as have platforms that can support both dedicated voyages of discovery and Integrated Ocean Observing Systems (IOOS). Recent programs sponsored by OE have developed technical solutions that will be of use in sensor networks and in stand-alone ocean research programs. Particular projects include: 1) the Joint Environmental Science Initiative (JESI), a deployment of a highly flexible marine sensing system, in collaboration with NASA, that demonstrated a new paradigm for marine ecosystem monitoring; 2) the development and testing of an in situ marine mass spectrometer, via grant to the Woods Hole Oceanographic Institution (WHOI) — this instrument has been designed to function at depths up to 5000 meters; 3) the evolution of glider AUVs for aerial deployment, through a grant to Webb Research Corporation, whose goal is air certification for gliders, which will allow them to be operationally deployed from NAVOCEANO aircraft; 4) the development of new behaviors for the Autonomous Benthic Explorer (ABE), allowing it to anchor in place and await instructions, through a grant to WHOI, which will support the operational use of AUVs in observing system networks; 5) development of new sensors for AUVs through a National Ocean Partnership Program (NOPP) award to Rutgers University — this project will develop a Fluorescence Induction Relaxation (FIRe) system to measure biomass and integrate the instrument into an AUV glider; and 6) an SBIR award for the development of anti-fouling technologies for solar panels and in situ sensors, an effort at Nanohmics Inc. developing natural product antifoulants (NPA) in optical-quality hard polymers. The technology and results of each of these projects are one component of OE's overall approach to technology research and development.
OE's technology program represents the leading edge of NOAA investment in ocean sensors and tools that eventually will find application in mission areas such as IOOS. This "big picture" provides context for focused information on detailed results of OE investments. As NOAA increases its investments in IOOS, and related technologies, these projects are timely and should be beneficial to the entire environmental sensor network community.
15 CFR 923.22 - Areas for preservation or restoration.
Code of Federal Regulations, 2011 CFR
2011-01-01
... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OCEAN AND COASTAL RESOURCE MANAGEMENT COASTAL ZONE MANAGEMENT PROGRAM REGULATIONS Special Management Areas § 923.22 Areas for...
An overview of EPA’s oceans, coasts, estuaries and beaches programs and the regulatory (permits/rules) and non-regulatory approaches for managing their associated environmental issues, such as water pollution and climate change.
NASA Astrophysics Data System (ADS)
Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac
2016-10-01
Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. 
Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
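The coarse-grained component concurrency (CCC) idea described above can be illustrated with a toy coupled step: a "radiation" component runs alongside a composite of the remaining atmospheric components, and their tendencies are combined at the end of each step. All names, tendencies, and numbers below are illustrative inventions, not the FMS API:

```python
# Toy sketch of coarse-grained component concurrency: two model
# components execute in parallel each coupled step, and a simple
# "coupler" sums their tendencies. Physics here is fake.
from concurrent.futures import ThreadPoolExecutor

def radiation(state):
    return {"t_tend_rad": -0.01 * state["T"]}    # fake radiative cooling

def other_physics_and_dynamics(state):
    return {"t_tend_dyn": 0.012 * state["T"]}    # fake dynamical heating

state = {"T": 288.0}
with ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(10):                          # 10 coupled steps
        # Both components see the same start-of-step state, as in CCC,
        # where radiation runs concurrently with the rest of the physics.
        f_rad = pool.submit(radiation, state)
        f_dyn = pool.submit(other_physics_and_dynamics, state)
        tend = {**f_rad.result(), **f_dyn.result()}
        state["T"] += tend["t_tend_rad"] + tend["t_tend_dyn"]

print(round(state["T"], 2))  # → 293.81
```

The algorithmic subtlety the abstract alludes to is visible even here: because both components read the same start-of-step state, radiation acts on fields one step older than it would in a serial ordering, and the acceptability of that lag is what has to be demonstrated.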
The future of spaceborne altimetry. Oceans and climate change: A long-term strategy
NASA Technical Reports Server (NTRS)
Koblinsky, C. J. (Editor); Gaspar, P. (Editor); Lagerloef, G. (Editor)
1992-01-01
The ocean circulation and polar ice sheet volumes provide important memory and control functions in the global climate. Their long-term variations are unknown and need to be understood before meaningful appraisals of climate change can be made. Satellite altimetry is the only method for providing global information on the ocean circulation and ice sheet volume. A robust altimeter measurement program is planned which will initiate global observations of the ocean circulation and polar ice sheets. In order to provide useful data about the climate, these measurements must be continued with unbroken coverage into the next century. Herein, past results on the role of the ocean in the climate system are summarized, near-term goals are outlined, and requirements and options are presented for future altimeter missions. There are three basic scientific objectives for the program: ocean circulation, polar ice sheets, and mean sea level change. The greatest scientific benefit will be achieved with a series of dedicated high-precision altimeter spacecraft, for which the choice of orbit parameters and system accuracy are unencumbered by the requirements of companion instruments.
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive-formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
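The Branch and Bound framework itself can be illustrated with a minimal serial sketch on a 0/1 knapsack: PIPS-SBB applies the same prune-or-branch logic to stochastic MIPs and distributes the search tree across processors. Everything below is an illustrative toy, not PIPS-SBB code:

```python
# Minimal serial branch-and-bound for a 0/1 knapsack, illustrating the
# prune-or-branch logic that B&B solvers such as PIPS-SBB parallelize.
def knapsack_bb(values, weights, capacity):
    # Sort items by value density so the LP-relaxation bound is tight.
    order = sorted(range(len(values)), key=lambda j: -values[j] / weights[j])
    v = [values[j] for j in order]
    w = [weights[j] for j in order]
    n = len(v)
    best = 0

    def bound(i, value, room):
        # Upper bound: greedily fill remaining room, last item fractionally.
        for j in range(i, n):
            if w[j] <= room:
                room -= w[j]
                value += v[j]
            else:
                return value + v[j] * room / w[j]
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)          # accumulated value is feasible
        if i == n or bound(i, value, room) <= best:
            return                       # prune: node cannot beat incumbent
        if w[i] <= room:
            branch(i + 1, value + v[i], room - w[i])  # take item i
        branch(i + 1, value, room)                    # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # → 220
```

A distributed-memory solver keeps the same node logic but farms subtrees out to workers and shares the incumbent bound, which is where most of the engineering effort lies.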
On the utility of threads for data parallel programming
NASA Technical Reports Server (NTRS)
Fahringer, Thomas; Haines, Matthew; Mehrotra, Piyush
1995-01-01
Threads provide a useful programming model for asynchronous behavior because of their ability to encapsulate units of work that can then be scheduled for execution at runtime, based on the dynamic state of a system. Recently, the threaded model has been applied to the domain of data parallel scientific codes, and initial reports indicate that the threaded model can produce performance gains over non-threaded approaches, primarily by overlapping useful computation with communication latency. However, overlapping computation with communication is possible without the benefit of threads if the communication system supports asynchronous primitives, and this comparison has not been made in previous papers. This paper provides a critical look at the utility of lightweight threads as applied to data parallel scientific programming.
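The overlap idea being evaluated can be sketched in a few lines: post a "receive" on a lightweight thread, do useful local work while it is in flight, then join. The communication here is simulated with a sleep and all timings are illustrative:

```python
# Sketch of hiding communication latency behind computation using a
# lightweight thread. The "communication" is a simulated 0.2 s delay.
import threading
import time

def communicate(result):           # stand-in for an asynchronous receive
    time.sleep(0.2)
    result.append("halo data")

def compute():                     # local work independent of the halo
    return sum(i * i for i in range(200_000))

# Overlapped: start the receive, compute while it is in flight, then join.
t0 = time.perf_counter()
inbox = []
rx = threading.Thread(target=communicate, args=(inbox,))
rx.start()
local = compute()
rx.join()
overlapped = time.perf_counter() - t0

# Serial baseline: wait for the communication, then compute.
t0 = time.perf_counter()
inbox2 = []
communicate(inbox2)
local2 = compute()
serial = time.perf_counter() - t0

print(overlapped < serial)        # overlap hides latency behind the work
```

The paper's point is that the same overlap is achievable with asynchronous communication primitives alone (e.g. a non-blocking receive polled after the compute loop), so the fair comparison is threads versus async primitives, not threads versus fully blocking code.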
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst-case implications for today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, J.I.; Meyer, R.P.
The Rivera Ocean Seismic Experiment (ROSE) was designed as a combined sea and land seismic program utilizing both explosives and earthquakes to study a number of features of the structure and evolution of a mid-ocean ridge, a major oceanic fracture zone, and the transition region between ocean and continent. The primary region selected for the experiment included the Rivera Fracture Zone, the crest and eastern flank of the East Pacific Rise north of the Rivera, and adjacent areas of Baja California and mainland Mexico. The experiment included: (1) a study of the East Pacific Rise south of the Orozco Fracture Zone, primarily using ocean-bottom recording and explosive sources, (2) a seismicity program at the Orozco, and (3) a land-based program of recording natural events along the coastal region of Mexico. A considerable amount of useful data was obtained in each of the three subprograms. In the marine parts of the experiment we were able to address a variety of problems, including the structure and evolution of young oceanic crust and mantle, the structure and dynamics of the East Pacific Rise, the seismicity of the Orozco Fracture Zone, and the partitioning of energy transmission between the ocean volume and the crust/lithosphere. On land, the fortuitous occurrence of the Petatlan M7.6 earthquake of March 14, 1979, permitted the acquisition of an excellent data set of foreshocks and aftershocks of this large event, which provides new insight into the filling of a major seismic gap in the region. This overview describes the scientific rationale and the design of the experiments, along with some general results.
A design methodology for portable software on parallel computers
NASA Technical Reports Server (NTRS)
Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.
1993-01-01
This final report for research that was supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that will execute quickly on a parallel computer. The second is transporting software between dissimilar parallel computers. In general, we expect more hardware-specific information to be included in software designs for parallel computers than in designs for sequential computers. This inclusion is an instance of portability being sacrificed for high performance. New parallel computers are being introduced frequently, and a developer trying to keep software on the current high-performance hardware almost continually faces yet another expensive software transportation. The problem addressed by this research is to create a design methodology that helps designers more precisely control both portability and hardware-specific programming details. The research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two. A more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal describing our research on the issues of software portability and high performance. The list of research tasks is specified in the proposal. The proposal, 'A Design Methodology for Portable Software on Parallel Computers,' is summarized in section three and is provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof-of-concept for the Ph.D. dissertation.
We have implemented and measured the performance of a portion of this subsystem on the Intel iPSC/2 parallel computer. These results are provided in section four. Our future work is summarized in section five, our acknowledgements are stated in section six, and references for published papers associated with NAG-1-995 are provided in section seven.
U.S. Navy Marine Climatic Atlas of the World. Volume 6. Arctic Ocean
1963-02-01
…commonly used trade … or closely parallel coastlines. Differences will be noted between isopleth values and graphs in the same area. Most discrepancies … the 95% range … 1,500 ft … the 84% range … 1,000 ft … Mean and ranges of height are omitted when data for a level …
Facing Climate Change: Connecting Coastal Communities with Place-Based Ocean Science
NASA Astrophysics Data System (ADS)
Pelz, M.; Dewey, R. K.; Hoeberechts, M.; McLean, M. A.; Brown, J. C.; Ewing, N.; Riddell, D. J.
2016-12-01
As coastal communities face a wide range of environmental changes, including threats from climate change, real-time data from cabled observatories can be used to support community members in making informed decisions about their coast and marine resources. Ocean Networks Canada (ONC) deploys and operates an expanding network of community observatories in the Arctic and coastal British Columbia, which enable communities to monitor real-time and historical data from the local marine environment. Community observatories comprise an underwater cabled seafloor platform and shore station equipped with a variety of sensors that collect environmental data 24/7. It is essential that data being collected by ONC instruments are relevant to community members and can contribute to priorities identified within the community. Using a community-based science approach, ONC is engaging local parties at all stages of each project from location planning, to instrument deployment, to data analysis. Alongside the science objectives, place-based educational programming is being developed with local educators and students. As coastal populations continue to grow and our use of and impacts on the ocean increase, it is vital that global citizens develop an understanding that the health of the ocean reflects the health of the planet. This presentation will focus on programs developed by ONC emphasizing the connection to place and local relevance with an emphasis on Indigenous knowledge. Building programs which embrace multiple perspectives is effective both in making ocean science more relevant to Indigenous students and in linking place-based knowledge to ocean science. The inclusion of Indigenous Knowledge into science-based monitoring programs also helps develop a more complete understanding of local conditions. 
We present a case study from the Canadian Arctic, in which ONC is working with Inuit community members to develop a snow and ice monitoring program to assist with predictions and modelling of sea-ice.
Massively parallel sparse matrix function calculations with NTPoly
NASA Astrophysics Data System (ADS)
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications, including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed-memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm, and OpenMP task parallelization is utilized on top of it to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
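The polynomial-expansion idea can be sketched in serial: evaluate a matrix function (here exp(A), via a truncated Taylor series) using only sparse matrix-matrix multiplications, and check against a dense reference. This is a small illustrative sketch of the technique, not NTPoly's algorithm or API:

```python
# Matrix function via polynomial expansion: exp(A) from a truncated
# Taylor series using only sparse matrix products, checked against
# SciPy's dense expm. Sizes and matrices are illustrative.
import numpy as np
import scipy.sparse as sp
from scipy.linalg import expm

def sparse_expm_taylor(A, terms=30):
    result = sp.identity(A.shape[0], format="csr")
    term = sp.identity(A.shape[0], format="csr")
    for k in range(1, terms):
        term = (term @ A) / k          # builds A^k / k! incrementally
        result = result + term
    return result

rng = np.random.default_rng(0)
dense = rng.uniform(-0.1, 0.1, (50, 50))
A = sp.csr_matrix((dense + dense.T) / 2)   # symmetric, small norm

approx = sparse_expm_taylor(A).toarray()
exact = expm(A.toarray())
print(np.allclose(approx, exact, atol=1e-10))  # → True
```

In the linear-scaling regime the inputs are genuinely sparse and products are truncated by dropping small entries, so each multiplication stays O(N); the dense check here is only feasible because the example is tiny.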
pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2014-01-01
This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled creating unprecedented applications on the web. Low performance of the web browser, however, remains the bottleneck of computationally intensive applications, including visualization of complex scenes, real-time physical simulations, and image processing, compared to native applications. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides the fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
Complex Plate Tectonic Features on Planetary Bodies: Analogs from Earth
NASA Astrophysics Data System (ADS)
Stock, J. M.; Smrekar, S. E.
2016-12-01
We review the types and scales of observations needed on other rocky planetary bodies (e.g., Mars, Venus, exoplanets) to evaluate evidence of present or past plate motions. Earth's plate boundaries were initially simplified into three basic types (ridges, trenches, and transform faults). Previous studies examined the Moon, Mars, Venus, Mercury, and icy moons such as Europa for evidence of features including linear rifts, arcuate convergent zones, strike-slip faults, and distributed deformation (rifting or folding). Yet several aspects merit further consideration. 1) Is the feature active or fossil? Earth's active mid-ocean ridges are bathymetric highs, and seafloor depth increases on either side; whereas fossil mid-ocean ridges may be as deep as the surrounding abyssal plain with no major rift valley, although with a minor gravity low (e.g., Osbourn Trough, W. Pacific Ocean). Fossil trenches have less topographic relief than active trenches (e.g., the fossil trench along the Patton Escarpment, west of California). 2) On Earth, fault patterns at spreading centers depend on volcanism. Excess volcanism reduces faulting. Fault visibility increases as spreading rates slow, or as magmatism decreases, producing high-angle normal faults parallel to the spreading center. At magma-poor spreading centers, high-resolution bathymetry shows low-angle detachment faults with large-scale mullions and striations parallel to plate motion (e.g., Mid-Atlantic Ridge, Southwest Indian Ridge). 3) Sedimentation on Earth masks features that might be visible on a non-erosional planet. Subduction zones on Earth in areas of low sedimentation have clear trench-parallel faults causing flexural deformation of the downgoing plate; in highly sedimented subduction zones, no such faults can be seen, and there may be no bathymetric trench at all.
4) Areas of Earth with broad upwelling, such as the North Fiji Basin, have complex plate tectonic patterns with many individual but poorly linked ridge segments and transform faults. These details and scales of features should be considered in planning future surveys of altimetry, reflectance, magnetics, compositional, and gravity data from other planetary bodies aimed at understanding the link between a planet's surface and interior, whether via plate tectonics or other processes.
IPSL-CM5A2: An Earth System Model designed to run long simulations for past and future climates.
NASA Astrophysics Data System (ADS)
Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier
2017-04-01
The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years/day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical for Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Starting from IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, a new input-output library that performs parallel asynchronous input/output by using computing cores as "IO servers", and a parallel coupling library between the ocean and atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening the way to multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was to overcome the cold bias in global surface air temperature (t2m) present in IPSL-CM5A. We present the tuning strategy used to overcome this bias, as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2.
Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration was handled by a new design of the ocean tripolar grid.
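The practical impact of the reported throughput gain can be made concrete with a back-of-the-envelope calculation (the 10,000-model-year experiment length below is a hypothetical example, not a figure from the abstract):

```python
# Wall-clock time for a hypothetical 10,000-model-year deep-time
# experiment at each reported throughput.
run_years = 10_000
old_days = run_years / 10   # IPSL-CM5A: ~10 model-years/day on 32 cores
new_days = run_years / 55   # IPSL-CM5A2: ~55 model-years/day on 304 cores
print(f"{old_days:.0f} days vs {new_days:.0f} days")  # 1000 days vs 182 days
```

At the old rate such a run would occupy the machine for nearly three years of wall-clock time; at the new rate it fits in about six months, which is what makes multi-millennial equilibrated-deep-ocean experiments feasible.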
Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gupta, Manish
1992-01-01
Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. 
These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.
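The constraints-based approach can be sketched as a toy model (plain Python; the arrays, distribution names, and quality values below are hypothetical, not taken from Paradigm): each statement contributes a constraint with a quality measure, and conflicts are resolved by picking, per array, the distribution whose constraints carry the most total weight.

```python
# Toy version of the constraints-based approach: each statement proposes
# a desired distribution for an array together with a quality measure
# (estimated cost saved); the "compiler" picks, per array, the
# distribution with the highest accumulated quality.
from collections import defaultdict

# (array, preferred distribution, quality measure) -- hypothetical program
constraints = [
    ("A", "block-rows", 120.0),  # statement 1: row-wise sweep over A
    ("A", "block-cols",  35.0),  # statement 2: column access to A
    ("B", "block-rows",  80.0),  # statement 3: aligned with A's rows
]

def resolve(constraints):
    score = defaultdict(float)
    for array, dist, quality in constraints:
        score[(array, dist)] += quality
    best = {}
    for (array, dist), q in score.items():
        if array not in best or q > score[(array, best[array])]:
            best[array] = dist
    return best

print(resolve(constraints))  # {'A': 'block-rows', 'B': 'block-rows'}
```

In the real system the quality measures come from static performance estimation rather than hand-assigned numbers, but the resolution step has this shape: conflicting desires for the same array are weighed against each other.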
GRADSPMHD: A parallel MHD code based on the SPH formalism
NASA Astrophysics Data System (ADS)
Vanaverbeke, S.; Keppens, R.; Poedts, S.
2014-03-01
We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2-, and 3-dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI.
RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.
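The SPH discretization underlying the code can be illustrated with a minimal 1-D density summation (plain Python, not the GRADSPMHD FORTRAN 90 code; the kernel choice and particle setup are illustrative assumptions):

```python
import math

def w_cubic(r, h):
    """Standard 1-D cubic spline SPH kernel with smoothing length h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization so the kernel integrates to 1
    if q < 1.0:
        return sigma * (1 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2 - q) ** 3
    return 0.0

def density(x_i, positions, masses, h):
    """SPH density estimate: rho_i = sum_j m_j W(x_i - x_j, h)."""
    return sum(m * w_cubic(x_i - x, h) for x, m in zip(positions, masses))

# Uniform particles of mass m spaced dx apart should give rho ~ m / dx
dx, m, h = 0.1, 0.1, 0.12
xs = [i * dx for i in range(-20, 21)]
rho = density(0.0, xs, [m] * len(xs), h)
print(round(rho, 3))  # close to 1.0
```

The MHD fields (velocity, magnetic field) are discretized over the same particle neighborhoods; the tree structure mentioned in the abstract exists to find the neighbors inside the kernel's compact support efficiently.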
Cyberinfrastructure for the NSF Ocean Observatories Initiative
NASA Astrophysics Data System (ADS)
Orcutt, J. A.; Vernon, F. L.; Arrott, M.; Chave, A.; Schofield, O.; Peach, C.; Krueger, I.; Meisinger, M.
2008-12-01
The Ocean Observatories Initiative (OOI) is an environmental observatory covering a diversity of oceanic environments, ranging from the coastal to the deep ocean. The physical infrastructure comprises a combination of seafloor cables, buoys and autonomous vehicles. It is currently in the final design phase, with construction planned to begin in mid-2010 and deployment phased over five years. The Consortium for Ocean Leadership manages this Major Research Equipment and Facilities Construction program with subcontracts to Scripps Institution of Oceanography, University of Washington and Woods Hole Oceanographic Institution. High-level requirements for the CI include the delivery of near-real-time data with minimal latencies, open data, data analysis and data assimilation into models, and subsequent interactive modification of the network (including autonomous vehicles) by the cyberinfrastructure. Network connections include a heterogeneous combination of fiber optics, acoustic modems, and Iridium satellite telemetry. The cyberinfrastructure design loosely couples services that exist throughout the network and share common software and middleware as necessary. In this sense, the system appears to be identical at all scales, so it is self-similar or fractal by design. The system provides near-real-time access to data and developed knowledge by the OOI's Education and Public Engagement program, to the physical infrastructure by the marine operators and to the larger community including scientists, the public, schools and decision makers. Social networking is employed to facilitate the virtual organization that builds, operates and maintains the OOI as well as providing a variety of interfaces to the data and knowledge generated by the program. We are working closely with NOAA to exchange near-real-time data through interfaces to their Data Interchange Facility (DIF) program within the Integrated Ocean Observing System (IOOS). 
Efficiencies have been emphasized through the use of university and commercial computing clouds.
Shrimankar, D. D.; Sathe, S. R.
2016-01-01
Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today's supercomputers often consist of clusters of SMP nodes, and programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. OpenMP programs, however, cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes at the cost of internode communication overhead. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also present a communication model to approximate the overhead of communication in OpenMP loops. Our results hold across a wide variety of input data files. We have developed our own load-balancing and cache-optimization techniques for the message-passing model. Our experimental results show that these techniques yield optimal performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures.
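A communication model of the kind described can be sketched with a standard latency-bandwidth (alpha-beta) cost model (the constants below are hypothetical placeholders, not the paper's fitted values): overhead grows with the number of participating cores even when each core exchanges a fixed amount of data.

```python
# Toy alpha-beta model of per-iteration communication overhead:
# each of p participating cores pays a fixed latency alpha plus
# beta seconds per byte exchanged, so total overhead grows with p.
def comm_overhead(p, bytes_per_core, alpha=1e-6, beta=5e-10):
    return p * (alpha + beta * bytes_per_core)

for p in (2, 8, 32):
    print(p, f"{comm_overhead(p, 64_000):.2e} s")
```

Fitting alpha and beta per machine is what lets such a model predict when the implicit communication of an OpenMP loop (or the explicit messages of MPI) starts to dominate the useful work.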
Parallel Logic Programming and Parallel Systems Software and Hardware
1989-07-29
Tools were provided for software development using artificial intelligence techniques. AI software for massively parallel architectures was started. The report describes research conducted on parallel logic programming and on parallel systems software and hardware.
Estimation of the Barrier Layer Thickness in the Indian Ocean Using Aquarius Salinity
2014-07-08
…number of temperature and salinity measurements in ocean basins. In 2005, buoy coverage in the Indian Ocean began meeting Argo program sampling… The distribution of salinity in the Indian Ocean is unique when compared to the other basins, with higher salinity in the western contrasted with the eastern regions of the basin (Figure 2). In the Arabian Sea, evaporation (E) greatly exceeds precipitation (P), resulting in high salinity (>36 PSU…
2016-03-01
Record of the Natural and Nature-Based Features Workshop, March 1-3, 2016, Charleston, South Carolina, facilitated by staff of the U.S. Army Engineer Research and Development Center (ERDC) of the U.S. Army Corps of Engineers and of the National Oceanic and Atmospheric Administration (NOAA), including a NOAA-OAR-OAP regional coordinator for the Ocean Acidification Program.
Using DSDP/ODP/IODP core photographs and digital images in the classroom
NASA Astrophysics Data System (ADS)
Pereira, Hélder; Berenguer, Jean-Luc
2017-04-01
Since the late 1960s, several scientific ocean drilling programmes have been uncovering the history of the Earth hidden beneath the seafloor. The adventure began in 1968 with the Deep Sea Drilling Project (DSDP) and its special drill ship, the Glomar Challenger. The next stage was the Ocean Drilling Program (ODP), launched in 1985 with a new drill ship, the JOIDES Resolution. The exploration of the ocean seafloor continued, between 2003 and 2013, through the Integrated Ocean Drilling Program (IODP). During that time, in addition to the JOIDES Resolution, operated by the US, the scientists had at their service the Chikyu, operated by Japan, and Mission-Specific Platforms, funded and implemented by the European Consortium for Ocean Research Drilling. Currently, scientific ocean drilling continues through the collaboration of scientists from 25 nations within the International Ocean Discovery Program (IODP). Over the last 50 years, the scientific ocean drilling expeditions conducted by these programmes have drilled and cored more than 3500 holes. The numerous sediment and rock samples recovered from the ocean floor have provided important insight into the active biological, chemical, and geological processes that have shaped the Earth over millions of years. During an expedition, once the 9.5-meter long cores arrive from the seafloor, the technicians label and cut them into 1.5-meter sections. Next, the shipboard scientists perform several analyses using non-destructive methods. Afterward, the technicians split the cores into two halves: the "working half", which scientists sample and use aboard the drilling platform, and the "archive half", which is kept in untouched condition after being visually described and photographed with a digital imaging system. The shipboard photographer also takes several close-up pictures of the archive-half core sections.
This work presents some examples of how teachers can use DSDP/ODP/IODP core photographs and digital images, available through the Janus and LIMS online databases, to develop inquiry-based learning activities for secondary level students.
The force on the flex: Global parallelism and portability
NASA Technical Reports Server (NTRS)
Jordan, H. F.
1986-01-01
A parallel programming methodology, called the force, supports the construction of programs to be executed in parallel by an unspecified, but potentially large, number of processes. The methodology was originally developed on a pipelined, shared memory multiprocessor, the Denelcor HEP, and embodies the primitive operations of the force in a set of macros which expand into multiprocessor Fortran code. A small set of primitives is sufficient to write large parallel programs, and the system has been used to produce 10,000 line programs in computational fluid dynamics. The level of complexity of the force primitives is intermediate. It is high enough to mask detailed architectural differences between multiprocessors but low enough to give the user control over performance. The system is being ported to a medium scale multiprocessor, the Flex/32, which is a 20 processor system with a mixture of shared and local memory. Memory organization and the type of processor synchronization supported by the hardware on the two machines lead to some differences in efficient implementations of the force primitives, but the user interface remains the same. An initial implementation was done by retargeting the macros to Flexible Computer Corporation's ConCurrent C language. Subsequently, the macros were modified to produce directly the system calls which form the basis for ConCurrent C. The implementation of the Fortran based system is in step with Flexible Computer Corporation's implementation of a Fortran system in the parallel environment.
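The force's style, in which an unspecified number of processes all execute the same program and meet at barriers, can be illustrated with Python threads (a sketch of the pattern only, not the actual force macros or their Fortran expansion):

```python
import threading

N = 4
barrier = threading.Barrier(N)   # force-style: all processes meet here
data = list(range(16))
partial = [0] * N
total = [0]

def worker(rank):
    # Every one of the N processes runs the same code on its own slice.
    chunk = len(data) // N
    partial[rank] = sum(data[rank * chunk:(rank + 1) * chunk])
    barrier.wait()               # wait until every partial sum is ready
    if rank == 0:                # force-style "one process only" section
        total[0] = sum(partial)

threads = [threading.Thread(target=worker, args=(r,)) for r in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(total[0])  # 120
```

The key property the force shares with this sketch is that the program text never names a specific process count in its logic; N can change without rewriting the algorithm.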
Cellular automata with object-oriented features for parallel molecular network modeling.
Zhu, Hao; Wu, Yinghui; Huang, Sui; Sun, Yan; Dhar, Pawan
2005-06-01
Cellular automata are an important modeling paradigm for studying the dynamics of large, parallel systems composed of multiple, interacting components. However, to model biological systems, cellular automata need to be extended beyond the large-scale parallelism and intensive communication in order to capture two fundamental properties characteristic of complex biological systems: hierarchy and heterogeneity. This paper proposes extensions to a cellular automata language, Cellang, to meet this purpose. The extended language, with object-oriented features, can be used to describe the structure and activity of parallel molecular networks within cells. Capabilities of this new programming language include object structure to define molecular programs within a cell, floating-point data type and mathematical functions to perform quantitative computation, message passing capability to describe molecular interactions, as well as new operators, statements, and built-in functions. We discuss relevant programming issues of these features, including the object-oriented description of molecular interactions with molecule encapsulation, message passing, and the description of heterogeneity and anisotropy at the cell and molecule levels. By enabling the integration of modeling at the molecular level with system behavior at cell, tissue, organ, or even organism levels, the program will help improve our understanding of how complex and dynamic biological activities are generated and controlled by parallel functioning of molecular networks. Index Terms-Cellular automata, modeling, molecular network, object-oriented.
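The proposed extensions can be caricatured in a few lines of Python (a much-simplified sketch of the ideas, not Cellang syntax): each cell is an object holding quantitative state, and interactions happen by message passing between neighbors rather than by reading neighbor state directly.

```python
# Toy object-oriented cellular automaton: each cell object holds a
# floating-point molecule count and exchanges "messages" (diffusing
# molecules) with its neighbors on each step.
class Cell:
    def __init__(self, molecules):
        self.molecules = float(molecules)
        self.inbox = 0.0

    def send(self, neighbors, rate=0.1):
        share = self.molecules * rate
        for n in neighbors:
            n.inbox += share / len(neighbors)   # message to neighbor
        self.molecules -= share

    def receive(self):
        self.molecules += self.inbox            # apply received messages
        self.inbox = 0.0

cells = [Cell(100 if i == 2 else 0) for i in range(5)]
for _ in range(10):
    for i, c in enumerate(cells):               # phase 1: message passing
        nbrs = [cells[j] for j in (i - 1, i + 1) if 0 <= j < 5]
        c.send(nbrs)
    for c in cells:                             # phase 2: state update
        c.receive()
print(round(sum(c.molecules for c in cells), 6))  # mass conserved: 100.0
```

The two-phase step (send, then receive) keeps updates order-independent, which is what makes such a model amenable to the large-scale parallel execution the paper targets.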
Efficient Thread Labeling for Monitoring Programs with Nested Parallelism
NASA Astrophysics Data System (ADS)
Ha, Ok-Kyoon; Kim, Sun-Sook; Jun, Yong-Kee
It is difficult and cumbersome to detect data races that occur in an execution of parallel programs. Any on-the-fly race detection technique using Lamport's happened-before relation needs a thread labeling scheme for generating unique identifiers that maintain logical concurrency information for the parallel threads. NR labeling is an efficient thread labeling scheme for the fork-join program model with nested parallelism, because its efficiency depends only on the nesting depth for every fork and join operation. This paper presents an improved NR labeling, called e-NR labeling, in which every thread generates its label by inheriting the pointer to its ancestor list from the parent threads or by updating the pointer in a constant amount of time and space. This labeling is more efficient than NR labeling, because its efficiency does not depend on the nesting depth of fork and join operations. Some experiments were performed with OpenMP programs having nesting depths of three or four and maximum parallelism varying from 10,000 to 1,000,000. The results show that e-NR is 5 times faster than NR labeling and 4.3 times faster than OS labeling in the average time for creating and maintaining thread labels. In the average space required for labeling, it is 3.5 times smaller than NR labeling and 3 times smaller than OS labeling.
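Why labels help a race detector can be seen from a greatly simplified scheme for a fork tree *without* joins (this is an illustration of the idea, not NR or e-NR labeling, which additionally handle join operations): a thread's label is the path of fork branch indices from the root, and one thread happened-before another iff its label is a proper prefix.

```python
# Simplified fork-tree labeling: label = tuple of branch indices from
# the root thread. Thread a happened-before thread b iff a's label is
# a proper prefix of b's; otherwise the two threads are concurrent
# and their conflicting accesses would constitute a data race.
def fork(label, branch):
    return label + (branch,)

def happened_before(a, b):
    return len(a) < len(b) and b[:len(a)] == a

root = ()
t1 = fork(root, 0)   # root forks t1 and t2
t2 = fork(root, 1)
t3 = fork(t1, 0)     # t1 forks t3

print(happened_before(t1, t3))  # True: ancestor, accesses are ordered
print(happened_before(t1, t2))  # False: siblings, hence concurrent
```

The engineering contribution of e-NR lies in generating and comparing such identifiers in constant time and space even under nested fork/join, where a naive prefix scheme breaks down.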
User-Defined Data Distributions in High-Level Programming Languages
NASA Technical Reports Server (NTRS)
Diaconescu, Roxana E.; Zima, Hans P.
2006-01-01
One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
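The block-structured abstraction can be sketched in a few lines of plain Python (an illustration of the pattern, not the DIY2 C++ API): decompose data into blocks, iterate a callback over the blocks, and express communication as a reusable neighbor-exchange pattern between them.

```python
# Minimal sketch of block-structured data parallelism in the DIY2 style:
# decompose, iterate over blocks, then exchange with neighbor blocks.
def decompose(data, nblocks):
    size = len(data) // nblocks
    return [data[i * size:(i + 1) * size] for i in range(nblocks)]

def foreach(blocks, fn):
    # In DIY2 this iteration is what the runtime schedules in-core,
    # out-of-core, or across threads; here it is a plain serial loop.
    return [fn(i, b) for i, b in enumerate(blocks)]

blocks = decompose(list(range(12)), 4)           # 4 blocks of 3 values
local = foreach(blocks, lambda i, b: sum(b))     # per-block computation

# Neighbor exchange: each block sends its result to the next block.
received = [None] + local[:-1]
print(local)     # [3, 12, 21, 30]
print(received)  # [None, 3, 12, 21]
```

Because the program only ever speaks in terms of blocks and exchange patterns, the runtime is free to reassign blocks to processes, threads, or disk without any change to the analysis code, which is the point of the abstraction.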
Parallel Rendering of Large Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Garbutt, Alexander E.
2005-01-01
Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. And last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.
Solving Partial Differential Equations in a data-driven multiprocessor environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.
1988-12-31
Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large scale parallelism. The implementation of some Partial Differential Equation solvers (such as the Jacobi method) on a tagged token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in dataflow PDE program graphs.
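The Jacobi method mentioned above is simple to state; here is a serial Python version for the 1-D Laplace equation with fixed boundary values (the paper's contribution is mapping exactly this kind of iteration onto a tagged-token data-flow graph, which is not shown here):

```python
# Jacobi iteration for the 1-D Laplace equation u'' = 0: each interior
# point is repeatedly replaced by the average of its two neighbors,
# while the boundary values stay fixed.
def jacobi(u, iters):
    u = list(u)
    for _ in range(iters):
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

u = jacobi([0, 0, 0, 0, 1], iters=500)
print([round(v, 3) for v in u])  # converges to the line [0, 0.25, 0.5, 0.75, 1]
```

Because every interior update reads only the previous iterate, all points can be updated concurrently, and chaotic relaxation goes further by letting points consume whichever neighbor values have arrived, which is what the Token No-Labeling scheduling supports.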
Optical Moorings-of-Opportunity for Validation of Ocean Color Satellites
2008-01-01
The diffuse attenuation coefficient at the midpoint of the two depths is given by: K_d(λ, z) = −(d/dz) ln[E_d(λ, z)] (4a), which, evaluated from measurements at two depths z1 < z2, becomes K_d(λ, z̄) = [1/(z2 − z1)] ln[E_d(λ, z1)/E_d(λ, z2)] (4b). This work was supported by the Biological Oceanography Program (OCE-9627281, OCE-9730471, OCE-9819477), NASA (NAS5-97127), and the ONR Ocean Engineering and Marine Systems Program.
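The two-depth estimate of K_d from downwelling irradiance E_d can be checked numerically (the irradiance profile below is synthetic, not mooring data):

```python
import math

# Two-depth estimator: K_d = ln(E_d(z1) / E_d(z2)) / (z2 - z1)
def kd(e1, z1, e2, z2):
    return math.log(e1 / e2) / (z2 - z1)

# Synthetic profile E_d(z) = E_0 * exp(-K_d * z) with true K_d = 0.08 m^-1
E0, K = 100.0, 0.08
z1, z2 = 5.0, 15.0
e1, e2 = E0 * math.exp(-K * z1), E0 * math.exp(-K * z2)
print(round(kd(e1, z1, e2, z2), 3))  # 0.08
```

For an exactly exponential profile the estimator recovers K_d exactly; with real mooring data it returns the attenuation averaged over the layer between the two sensors.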
Scientific Ocean Drilling: A Legacy of ODP Education and Community Engagement by JOI/USSSP
NASA Astrophysics Data System (ADS)
Johnson, A.; Cortes, M.; Farrell, J. W.
2003-12-01
The U.S. Science Support Program (USSSP) was established in 1986 to support the participation of U.S. scientists in the international Ocean Drilling Program (ODP). Since inception, USSSP has been managed by Joint Oceanographic Institutions (JOI), through a cooperative agreement with NSF, and guided by the U.S. Science Advisory Committee (USSAC). One of USSSP's primary goals has been to enhance the scientific contribution of ocean drilling and to maintain its vitality through a broad range of education and outreach activities. USSSP's first educational program, the Schlanger Ocean Drilling Fellowship, was established to encourage doctoral candidates to conduct research aboard the ODP drill ship, JOIDES Resolution. Since 1987, 74 fellowships have been awarded and the program has been expanded to include shorebased ODP-related research and Masters degree candidates. USSSP's second major educational activity is the Distinguished Lecturer Series. To date, 70 scientists have spoken about their ODP research at 334 institutions, effectively reaching new and diverse educational communities. In addition, USSSP has developed and distributed two interactive educational CD-ROMs (ODP: Mountains to Monsoons and Gateways to Glaciation) and an educational poster (Blast from the Past). All three items are popular supplements in classrooms from middle school to college because they present accessible scientific content, demonstrate the scientific method, and illustrate the collaborative and international nature of scientific research. USSSP's outreach efforts have included publishing the JOI/USSAC Newsletter since 1988 and ODP's Greatest Hits (abstracts written by U.S. scientists). The latter is broadly used because it communicates exciting scientific results in lay terms. USSSP has sponsored other educational efforts including a workshop to seek recommendations for educational activities to be associated with future scientific ocean drilling. 
NSF is currently considering responses to its solicitation of proposals to manage a successor program to USSSP, which will support the involvement of U.S. scientists in the new Integrated Ocean Drilling Program. The educational and outreach component of the new USSSP will target students at all levels, building upon and improving on the USSSP-ODP achievements.
NASA Astrophysics Data System (ADS)
Roberts, S. J.; Feeley, M. H.
2008-05-01
With the increasing stress on ocean and coastal resources, ocean resource management will require greater capacity in terms of people, institutions, technology and tools. Successful capacity-building efforts address the needs of a specific locale or region and include plans to maintain and expand capacity after the project ends. In 2008, the US National Research Council published a report that assesses past and current capacity-building efforts to identify barriers to effective management of coastal and marine resources. The report recommends ways that governments and organizations can strengthen marine conservation and management capacity. Capacity building programs instill the tools, knowledge, skills, and attitudes that address: ecosystem function and change; processes of governance that influence societal and ecosystem change; and assembling and managing interdisciplinary teams. Programs require efforts beyond traditional sector-by-sector planning because marine ecosystems range from the open ocean to coastal waters and land use practices. Collaboration among sectors, scaling from local community-based management to international ocean policies, and ranging from inland to offshore areas, will be required to establish coordinated and efficient governance of ocean and coastal ecosystems. Barriers: Most capacity building activities have been initiated to address particular issues such as overfishing or coral reef degradation, or they target a particular region or country facing threats to their marine resources. This fragmentation inhibits the sharing of information and experience and makes it more difficult to design and implement management approaches at appropriate scales.
Additional barriers that have limited the effectiveness of capacity building programs include: lack of an adequate needs assessment prior to program design and implementation; exclusion of targeted populations in decision-making efforts; mismanagement, corruption, or both; incomplete or inappropriate evaluation procedures; and lack of a coordinated and strategic approach among donors. A New Framework: Improving ocean stewardship and ending the fragmentation of current capacity building programs will require a new, broadly adopted framework for capacity building that emphasizes cooperation, sustainability, and knowledge transfer within and among communities. The report identifies four specific features of capacity building that would increase the effectiveness and efficiency of future programs: 1. Regional action plans based on periodic program assessments to guide investments in capacity and set realistic milestones and performance measures. 2. Long-term support to establish self-sustaining programs. Sustained capacity building programs require a diversity of sources and coordinated investments from local, regional, and international donors. 3. Development of leadership and political will. One of the most commonly cited reasons for failure and lack of progress in ocean and coastal governance initiatives is lack of political will. One strategy for strengthening support is to identify, develop, mentor, and reward leaders. 4. Establishment of networks and mechanisms for regional collaboration. Networks bring together those working in the same or similar ecosystems with comparable management or governance challenges to share information, pool resources, and learn from one another. The report also recommends the establishment of regional centers to encourage and support collaboration among neighboring countries.
cljam: a library for handling DNA sequence alignment/map (SAM) with parallel processing.
Takeuchi, Toshiki; Yamada, Atsuo; Aoki, Takashi; Nishimura, Kunihiro
2016-01-01
Next-generation sequencing can determine DNA bases, and the results of sequence alignments are generally stored in files in the Sequence Alignment/Map (SAM) format or its compressed binary version (BAM). SAMtools is a typical tool for dealing with files in the SAM/BAM format. SAMtools has various functions, including detection of variants, visualization of alignments, indexing, extraction of parts of the data and loci, and conversion of file formats. It is written in C and executes quickly. However, SAMtools requires additional implementation work to be used in parallel with, for example, OpenMP (Open Multi-Processing) libraries. Given the accumulation of next-generation sequencing data, a simple parallelization program that can support cloud and PC cluster environments is required. We have developed cljam using the Clojure programming language, which simplifies parallel programming, to handle SAM/BAM data. Cljam can run in a Java runtime environment (e.g., Windows, Linux, Mac OS X) with Clojure. Cljam can process and analyze SAM/BAM files in parallel and at high speed. The execution time with cljam is almost the same as with SAMtools. The cljam code is written in Clojure and has fewer lines than other similar tools.
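The chunk-parallel pattern described above can be sketched outside Clojure as well. The following Python sketch (not cljam's actual API; the record handling and helper names are illustrative) counts mapped reads by splitting SAM records across worker processes:

```python
# Sketch of chunk-parallel SAM processing, in the spirit of cljam:
# split alignment records into chunks and fan them out to workers.
from concurrent.futures import ProcessPoolExecutor

FLAG_UNMAPPED = 0x4  # SAM FLAG bit 0x4: segment unmapped


def count_mapped(lines):
    """Count alignment records whose unmapped flag is clear."""
    n = 0
    for line in lines:
        if line.startswith("@"):  # skip header lines
            continue
        flag = int(line.split("\t")[1])  # FLAG is the second SAM column
        if not flag & FLAG_UNMAPPED:
            n += 1
    return n


def parallel_count(sam_lines, workers=4):
    """Stride the records into one chunk per worker and sum the counts."""
    chunks = [sam_lines[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(count_mapped, chunks))
```

Because each chunk is independent, the same partitioning works on a PC cluster by replacing the process pool with a distributed executor.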
Visual analysis of inter-process communication for large-scale parallel computing.
Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu
2009-01-01
In serial computation, program profiling is often helpful for optimizing key sections of code. When moving to parallel computation, not only does code execution need to be considered, but also communication between processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of communication delays on performance. For large-scale parallel applications, it is critical to understand how communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communication on parallel systems. These tools generally provide either views that statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and demonstrate it on systems running with up to 16,384 processes.
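One route to scalability is to aggregate communication rather than draw a row per process. A minimal sketch of that idea (the trace format and binning scheme here are assumptions for illustration, not the paper's actual method):

```python
# Sketch: aggregate a message trace into a fixed-size communication
# matrix whose cell (i, j) holds total bytes sent from process group i
# to process group j, so the view stays readable at any process count.
import numpy as np


def comm_matrix(messages, n_procs, n_groups=64):
    """messages: iterable of (src, dst, nbytes) tuples from a trace."""
    m = np.zeros((n_groups, n_groups))
    scale = max(1, n_procs // n_groups)  # processes binned per group
    for src, dst, nbytes in messages:
        m[src // scale, dst // scale] += nbytes
    return m
```

A 64x64 heat map of this matrix has the same footprint whether the run used 64 or 16,384 processes.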
NASA Astrophysics Data System (ADS)
Liu, Bo; Han, Bao-Fu; Chen, Jia-Fu; Ren, Rong; Zheng, Bo; Wang, Zeng-Zhen; Feng, Li-Xia
2017-12-01
The Junggar-Balkhash Ocean was a major branch of the southern Paleo-Asian Ocean, and the timing of its closure is important for understanding the history of the Central Asian Orogenic Belt. New sedimentological and geochronological data from the Late Paleozoic volcano-sedimentary sequences in the Barleik Mountains of West Junggar, NW China, help to constrain the closure time of the Junggar-Balkhash Ocean. The Tielieketi Formation (Fm) is dominated by littoral sediments, but its upper glauconite-bearing sandstone is interpreted to have been deposited rapidly in a shallow-water shelf setting. By contrast, the Heishantou Fm consists chiefly of volcanic rocks, conformably overlying or in fault contact with the Tielieketi Fm. The Molaoba Fm is composed of parallel-stratified fine sandstone and sandy conglomerate with graded bedding, typical of nonmarine, fluvial deposition. This formation unconformably overlies the Tielieketi and Heishantou formations and is conformably covered by the Kalagang Fm, characterized by a continental bimodal volcanic association. The youngest U-Pb ages of detrital zircons from sandstones and zircon U-Pb ages from volcanic rocks suggest that the Tielieketi, Heishantou, Molaoba, and Kalagang formations were deposited during the Famennian-Tournaisian, Tournaisian-early Bashkirian, Gzhelian, and Asselian-Sakmarian, respectively. The absence of upper Bashkirian to Kasimovian strata was likely caused by tectonic uplift of the West Junggar terrane. This is compatible with the occurrence of coeval stitching plutons in the West Junggar and adjacent areas. The Junggar-Balkhash Ocean must therefore have finally closed before the Gzhelian, slightly later than or concurrent with the closure of other ocean domains of the southern Paleo-Asian Ocean.
The dynamics of plate tectonics and mantle flow: from local to global scales.
Stadler, Georg; Gurnis, Michael; Burstedde, Carsten; Wilcox, Lucas C; Alisic, Laura; Ghattas, Omar
2010-08-27
Plate tectonics is regulated by driving and resisting forces concentrated at plate boundaries, but observationally constrained high-resolution models of global mantle flow remain a computational challenge. We capitalized on advances in adaptive mesh refinement algorithms on parallel computers to simulate global mantle flow by incorporating plate motions, with individual plate margins resolved down to a scale of 1 kilometer. Back-arc extension and slab rollback are emergent consequences of slab descent in the upper mantle. Cold thermal anomalies within the lower mantle couple into oceanic plates through narrow high-viscosity slabs, altering the velocity of oceanic plates. Viscous dissipation within the bending lithosphere at trenches amounts to approximately 5 to 20% of the total dissipation through the entire lithosphere and mantle.
Parallel machine architecture and compiler design facilities
NASA Technical Reports Server (NTRS)
Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex
1990-01-01
The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project, whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures, is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.
The OpenMP Implementation of NAS Parallel Benchmarks and its Performance
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry
1999-01-01
As the new ccNUMA architecture became popular in recent years, parallel programming with compiler directives on these machines has evolved to accommodate new needs. In this study, we examine the effectiveness of OpenMP directives for parallelizing the NAS Parallel Benchmarks. Implementation details will be discussed and performance will be compared with the MPI implementation. We have demonstrated that OpenMP can achieve very good results for parallelization on a shared memory system, but effective use of memory and cache is very important.
NASA Astrophysics Data System (ADS)
Whitford, Dennis J.
2002-05-01
Ocean waves are among the most recognized phenomena in oceanography. Unfortunately, undergraduate study of ocean wave dynamics and forecasting involves mathematics and physics and can therefore pose difficulties for some students because of the subject's interrelated dependence on time and space. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Computer-generated visualization and animation offer a visually intuitive and pedagogically sound medium for presenting geoscience, yet there are very few oceanographic examples. A two-part article series is offered to explain ocean wave forecasting using computer-generated visualization and animation. This paper, Part 1, addresses forecasting of sea wave conditions and serves as the basis for the more difficult topic of swell wave forecasting addressed in Part 2. Computer-aided visualization and animation, accompanied by oral explanation, are a welcome pedagogical supplement to more traditional methods of instruction. In this article, several MATLAB® software programs have been written to visualize and animate the development and comparison of wave spectra, wave interference, and forecasting of sea conditions. These programs also set the stage for the more advanced and difficult animation topics of Part 2. The programs are user-friendly, interactive, easy to modify, and developed as instructional tools. By using these software programs, teachers can enhance their instruction of these topics with colorful visualizations and animation without requiring an extensive background in computer programming.
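The sea-wave quantities such programs animate rest on standard deep-water relations, in which wavelength and phase speed follow directly from wave period. A small Python sketch (an illustration of the physics, not one of the article's MATLAB programs):

```python
# Deep-water surface gravity waves: from the dispersion relation
# omega^2 = g*k, wavelength L = g*T^2/(2*pi) and phase speed
# c = g*T/(2*pi) follow for a wave of period T.
import math

G = 9.81  # gravitational acceleration, m/s^2


def deep_water_wave(period_s):
    """Return (wavelength in m, phase speed in m/s) for period in s."""
    wavelength = G * period_s**2 / (2 * math.pi)
    speed = G * period_s / (2 * math.pi)
    return wavelength, speed
```

For a 10 s swell this gives a wavelength of roughly 156 m and a phase speed near 15.6 m/s, the kind of number a forecasting exercise checks against.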
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Dale M. Snider
2011-02-28
This report gives the results of the Phase-1 work on demonstrating greater than 10x speedup of the Barracuda computer program using parallel methods and GPU (graphics processing unit) processors. Phase-1 demonstrated a 12x speedup on a typical Barracuda function using the GPU. The test case used about 5 million particles and 250,000 Eulerian grid cells. The relative speedup, compared to a single CPU, increases with the number of particles, exceeding 12x. Phase-1 work provided a path for data structure modifications that give good parallel performance while keeping a friendly environment for new physics development and code maintenance. The implementation of the data structure changes will occur in Phase-2. Phase-1 laid the groundwork for the complete parallelization of Barracuda in Phase-2; moreover, the parallel programming practices implemented in Phase-1 give immediate speedup in the current serial Barracuda code. The Phase-1 tasks were completed successfully, laying the framework for Phase-2. The detailed results of Phase-1 are within this document. In general, the speedup of one function is expected to be higher than the speedup of the entire code because of I/O and communication between algorithms. However, because one of the most difficult Barracuda algorithms was parallelized in Phase-1, and because the advanced parallelization methods and optimization techniques identified in Phase-1 will be used in Phase-2, an overall Barracuda code speedup (relative to a single CPU) is expected to be greater than 10x. This means that a job which takes 30 days to complete will be done in 3 days.
Tasks completed in Phase-1 are:

Task 1: Profile the entire Barracuda code and select the subroutines to be parallelized (see Section "Choosing a Function to Accelerate").
Task 2: Select a GPU consultant company and jointly parallelize subroutines (CPFD chose the small business EMPhotonics as the Phase-1 technical partner; see Section "Technical Objective and Approach").
Task 3: Integrate parallel subroutines into Barracuda (see Section "Results from Phase-1" and its subsections).
Task 4: Test, refine, and optimize the parallel methodology (see Sections "Results from Phase-1" and "Result Comparison Program").
Task 5: Integrate Phase-1 parallel subroutines into Barracuda and release (see Section "Results from Phase-1" and its subsections).
Task 6: Road map of Phase-2 (see Section "Plan for Phase-2").

With the completion of Phase-1, we have the base understanding needed to completely parallelize Barracuda. An overview of the work to move Barracuda to a parallelized code is given in "Plan for Phase-2".
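The report's observation that whole-code speedup stays below single-function speedup is Amdahl's law: the unaccelerated fraction of runtime caps the overall gain. A short sketch (the fractions are illustrative numbers, not Barracuda's actual profile):

```python
# Amdahl's law: if a fraction p of the runtime is accelerated by a
# factor s, the overall speedup is 1 / ((1 - p) + p / s).
def amdahl_speedup(parallel_fraction, s):
    """Overall speedup when `parallel_fraction` of runtime speeds up s-fold."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / s)
```

With a 12x kernel speedup, accelerating 95% of the runtime yields about 7.7x overall, which is why the report targets parallelizing essentially all of the code to reach a full-code speedup above 10x.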
Coupled Modeling of Hydrodynamics and Sound in Coastal Ocean for Renewable Ocean Energy Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Wen; Jung, Ki Won; Yang, Zhaoqing
An underwater sound model was developed to simulate sound propagation from marine and hydrokinetic energy (MHK) devices or offshore wind (OSW) energy platforms. Finite difference methods were developed to solve the 3D Helmholtz equation for sound propagation in the coastal environment. A 3D sparse matrix system with complex coefficients was formed for solving the resulting acoustic pressure field. The Complex Shifted Laplacian Preconditioner (CSLP) method was applied to solve the matrix system iteratively, with MPI parallelization on a high-performance cluster. The sound model was then coupled with the Finite Volume Community Ocean Model (FVCOM) for simulating sound propagation generated by human activities, such as construction of OSW turbines or tidal stream turbine operations, in a range-dependent setting. As a proof of concept, initial validation of the solver is presented for two coastal wedge problems. This sound model can be useful for evaluating impacts on marine mammals due to deployment of MHK devices and OSW energy platforms.
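The finite-difference discretization underlying such a solver can be illustrated in one dimension. A minimal sketch (a direct dense solve of a real-valued 1D Helmholtz problem, not the report's complex-coefficient 3D system with CSLP preconditioning):

```python
# 1D Helmholtz equation u'' + k^2 u = f on (0, 1) with u(0) = u(1) = 0,
# discretized with second-order central differences on n interior points
# and solved directly; large 3D versions need sparse, iterative solvers.
import numpy as np


def helmholtz_1d(k, n, f):
    """Return u at the n interior grid points for source vector f."""
    h = 1.0 / (n + 1)                      # grid spacing
    main = np.full(n, -2.0 / h**2 + k**2)  # diagonal: Laplacian + k^2
    off = np.full(n - 1, 1.0 / h**2)       # off-diagonals of the Laplacian
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)
```

Checking against the manufactured solution u = sin(pi*x), for which f = (k^2 - pi^2) sin(pi*x), shows the expected second-order accuracy.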
Robards, Martin D.; Gould, Patrick J.; Coe, James M.; Rogers, Donald B.
1997-01-01
Plastic pollution has risen dramatically with an increase in production of plastic resin during the past few decades. Plastic production in the United States increased from 2.9 million tons in 1960 to 47.9 million tons in 1985 (Society of the Plastics Industry 1986). This has been paralleled by a significant increase in the concentration of plastic particles in oceanic surface waters of the North Pacific from the 1970s to the late 1980s (Day and Shaw 1987; Day et al. 1990a). Research during the past few decades has indicated two major interactions between marine life and oceanic plastic: entanglement and ingestion (Laist 1987). Studies in the last decade have documented the prevalence of plastic in the diets of many seabird species in the North Pacific and the need for further monitoring of those species and groups that ingest the most plastic (Day et al. 1985).
Liu, Jun; Hu, Shi-xue; Rieppel, Olivier; Jiang, Da-yong; Benton, Michael J.; Kelley, Neil P.; Aitchison, Jonathan C.; Zhou, Chang-yong; Wen, Wen; Huang, Jin-yuan; Xie, Tao; Lv, Tao
2014-01-01
The presence of gigantic apex predators in the eastern Panthalassic and western Tethyan oceans suggests that complex ecosystems in the sea had become re-established in these regions at least by the early Middle Triassic, after the Permian-Triassic mass extinction (PTME). However, it is not clear whether oceanic ecosystem recovery from the PTME was globally synchronous because of the apparent lack of such predators in the eastern Tethyan/western Panthalassic region prior to the Late Triassic. Here we report a gigantic nothosaur from the lower Middle Triassic of Luoping in southwest China (eastern Tethyan ocean), which possesses the largest known lower jaw among Triassic sauropterygians. Phylogenetic analysis suggests parallel evolution of gigantism in Triassic sauropterygians. Discovery of this gigantic apex predator, together with associated diverse marine reptiles and the complex food web, indicates global recovery of shallow marine ecosystems from PTME by the early Middle Triassic. PMID:25429609
Monitoring Data-Structure Evolution in Distributed Message-Passing Programs
NASA Technical Reports Server (NTRS)
Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)
1996-01-01
Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information in parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks the memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs using PVM. With the help of a number of examples, we show that, in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of the virtual address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with lightweight core files, this technique can be used to complement traditional debuggers on single processors. Detailed information (such as data-structure contents) on specific nodes can be determined using traditional debuggers after the data-structure evolution leading to the semantic error is observed graphically.
NASA Astrophysics Data System (ADS)
Weller, Petra; Stein, Ruediger
2008-03-01
During Integrated Ocean Drilling Program Expedition 302 (Arctic Coring Expedition (ACEX)) a more than 200 m thick sequence of Paleogene organic carbon (OC)-rich (black shale type) sediments was drilled. Here we present new biomarker data determined in ACEX sediment samples to decipher processes controlling OC accumulation and their paleoenvironmental significance during periods of Paleogene global warmth and proposed increased freshwater discharge in the early Cenozoic. Specific source-related biomarkers including n-alkanes, fatty acids, isoprenoids, carotenoids, hopanes/hopenes, hopanoic acids, aromatic terpenoids, and long-chain alkenones show a high variability of components derived from marine and terrestrial origin. The distribution of hopanoic acid isomers is dominated by compounds with the biological 17β(H),21β(H) configuration, indicating a low level of maturity. On the basis of the biomarker data, the terrestrial OC supply was significantly enriched during the late Paleocene and part of the earliest Eocene, whereas increased aquatic contributions and euxinic conditions of variable intensity were determined for the Paleocene-Eocene thermal maximum and Eocene thermal maximum 2 events as well as the middle Eocene time interval. Furthermore, samples from the middle Eocene are characterized by the occurrence of long-chain alkenones, high proportions of lycopane, and high ratios (>0.6) of (n-C35 + lycopane)/n-C31. The occurrence of C37 alkenones, which were first determined toward the end of the Azolla freshwater event, indicates that the OC becomes more marine in origin during the middle Eocene. Preliminary U37K'-based sea surface temperature (SST) values display a long-term temperature decrease of about 15°C during the time interval 49-44.5 Ma (25° to 10°C), coinciding with the global benthic δ18O cooling trend after the early Eocene climatic optimum.
At about 46 Ma, parallel with the onset of ice-rafted debris, SSTs (interpreted as summer temperatures) decreased to values <15°C. For the late early Miocene an SST of 11°-15°C was determined. Most of the middle Eocene ACEX sediments are characterized by a smooth short-chain n-alkane distribution, which may point to natural oil-type hydrocarbons from leakage of petroleum reservoirs or to erosion of related source rocks and redeposition.
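The U37K'-to-SST conversion used for such estimates is a linear calibration of the alkenone unsaturation index. A one-line sketch assuming a commonly used Prahl-type culture calibration (the study's exact calibration is not stated here, so the coefficients are an assumption for illustration):

```python
# SST estimate from the alkenone unsaturation index U37K', using the
# linear calibration U37K' = 0.034*T + 0.039 (an assumed, commonly
# cited calibration; coefficients vary between published calibrations).
def uk37_to_sst(uk37):
    """Return SST in degrees C from a U37K' index value."""
    return (uk37 - 0.039) / 0.034
```

Under this calibration, the reported drop from about 25°C to 10°C corresponds to U37K' falling from roughly 0.89 to 0.38.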
Optimization Of Ocean Color Algorithms: Application To Satellite And In Situ Data Merging. Chapter 9
NASA Technical Reports Server (NTRS)
Maritorena, Stephane; Siegel, David A.; Morel, Andre
2003-01-01
The objective of our program is to develop and validate a procedure for ocean color data merging, one of the major goals of the SIMBIOS project (McClain et al., 1995). The need for a merging capability is dictated by the fact that, since the launch of MODIS on the Terra platform and over the next decade, several global ocean color missions from various space agencies are or will be operational simultaneously. The apparent redundancy of simultaneous ocean color missions can actually be exploited to various benefits. The most obvious benefit is improved coverage (Gregg et al., 1998; Gregg & Woodward, 1998). The patchy and uneven daily coverage from any single sensor can be improved by using a combination of sensors. Besides improved coverage of the global ocean, the merging of ocean color data should also result in new, improved, more diverse, and better data products with lower uncertainties. Ultimately, ocean color data merging should result in the development of a unified, scientific-quality ocean color time series, from SeaWiFS to NPOESS and beyond. Various approaches can be used for ocean color data merging, and several have been tested within the frame of the SIMBIOS program (see, e.g., Kwiatkowska & Fargion, 2003; Franz et al., 2003). As part of the SIMBIOS Program, we have developed a merging method for ocean color data. Unlike other methods, our approach does not combine end products, such as the subsurface chlorophyll concentration (chl), from different sensors to generate a unified product. Instead, our procedure takes the normalized water-leaving radiances (LwN(λ)) from single or multiple sensors and uses them in the inversion of a semianalytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Besides ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end products (e.g., chlorophyll): 1) it works with single or multiple data sources regardless of their specific bands, 2) it exploits band redundancies and band differences, 3) it accounts for uncertainties in the LwN(λ) data, and 4) it provides uncertainty estimates for the retrieved variables.
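The model-inversion idea, fitting one model to whatever bands are available, can be sketched with a toy linear forward model (the basis spectra and two-variable model below are illustrative assumptions, not the authors' actual semianalytical model):

```python
# Sketch: retrieve two geophysical weights by least-squares inversion of
# a toy linear forward model mapping weights to band radiances. Extra
# bands from additional sensors simply add rows, so single- and
# multi-sensor input are handled by the same inversion.
import numpy as np

# Hypothetical forward model: 4 bands x 2 components.
BASIS = np.array([[1.0, 0.2],
                  [0.8, 0.5],
                  [0.3, 0.9],
                  [0.1, 1.0]])


def invert(lwn):
    """Least-squares retrieval of the two weights from band radiances."""
    w, *_ = np.linalg.lstsq(BASIS, lwn, rcond=None)
    return w
```

In the real procedure the forward model is nonlinear and the fit is iterative, but the structural point is the same: all retrieved variables come out of a single inversion.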
Simulation of multistatic and backscattering cross sections for airborne radar
NASA Astrophysics Data System (ADS)
Biggs, Albert W.
1986-07-01
To determine the susceptibilities of airborne radar to electronic countermeasures and electronic counter-countermeasures, simulations of multistatic and backscattering cross sections were developed as digital modules in the form of algorithms. Cross section algorithms are described for prolate (cigar-shaped) and oblate (disk-shaped) spheroids. Backscattering cross section algorithms are also described for different categories of terrain. Backscattering cross section computer programs were written for terrain categorized as vegetation, sea ice, glacial ice, geological features (rocks, sand, hills, etc.), oceans, man-made structures, and water bodies. PROGRAM SIGTERRA is a file for backscattering cross section modules of terrain (TERRA) such as vegetation (AGCROP), oceans (OCEAN), Arctic sea ice (SEAICE), glacial snow (GLASNO), geological structures (GEOL), man-made structures (MANMAD), or water bodies (WATER). AGCROP describes agricultural crops, trees or forests, prairies or grassland, and shrubs or bush cover. OCEAN has the SLAR or SAR looking downwind, upwind, and crosswind at the ocean surface. SEAICE looks at winter ice and old or polar ice. GLASNO is divided into glacial ice and snow or snowfields. MANMAD includes buildings, houses, roads, railroad tracks, airfields and hangars, telephone and power lines, barges, trucks, trains, and automobiles. WATER has lakes, rivers, canals, and swamps. PROGRAM SIGAIR is a similar file for airborne targets such as prolate and oblate spheroids.
Climate Prediction Center - NCEP Global Ocean Data Assimilation System:
Monthly products in NetCDF and other formats from the National Centers for Environmental Prediction (NCEP), linked to the NOAA Ocean Climate Observation Program (OCO) and the Climate Test Bed, are a valuable community asset for monitoring different aspects of ocean climate.
Command/response protocols and concurrent software
NASA Technical Reports Server (NTRS)
Bynum, W. L.
1987-01-01
A version of the program to control the parallel jaw gripper is documented. The parallel jaw end-effector hardware and the Intel 8031 processor used to control the end-effector are briefly described. A general overview of the controller program is given, followed by a complete description of the program's structure and design. There are three appendices: a memory map of the on-chip RAM, a cross-reference listing of the self-scheduling routines, and a summary of the top-level and monitor commands.
James W. Evans; Jane K. Evans; David W. Green
1990-01-01
This paper presents computer programs for adjusting the mechanical properties of 2-in. dimension lumber for changes in moisture content. Mechanical properties adjusted are modulus of rupture, ultimate tensile stress parallel to the grain, ultimate compressive stress parallel to the grain, and flexural modulus of elasticity. The models are valid for moisture contents...