Integration experiences and performance studies of a COTS parallel archive system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsing-bung; Scott, Cody; Grider, Gary
2010-01-01
Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same things but at one or more orders of magnitude faster performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, through such means as more caching and less robust semantics. Currently the number of highly scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high-volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc.
We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-06-30
Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
NASA Astrophysics Data System (ADS)
Schrabback, T.; Erben, T.; Simon, P.; Miralles, J.-M.; Schneider, P.; Heymans, C.; Eifler, T.; Fosbury, R. A. E.; Freudling, W.; Hetterscheidt, M.; Hildebrandt, H.; Pirzkal, N.
2007-06-01
Context: This is the first paper of a series describing our measurement of weak lensing by large-scale structure, also termed “cosmic shear”, using archival observations from the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). Aims: In this work we present results from a pilot study testing the capabilities of the ACS for cosmic shear measurements with early parallel observations and presenting a re-analysis of HST/ACS data from the GEMS survey and the GOODS observations of the Chandra Deep Field South (CDFS). Methods: We describe the data reduction and, in particular, a new correction scheme for the time-dependent ACS point-spread function (PSF) based on observations of stellar fields. This is currently the only technique which takes the full time variation of the PSF between individual ACS exposures into account. We estimate that our PSF correction scheme reduces the systematic contribution to the shear correlation functions due to PSF distortions to <2 × 10^-6 for galaxy fields containing at least 10 stars, which corresponds to ≲5% of the cosmological signal expected on scales of a single ACS field. Results: We perform a number of diagnostic tests indicating that the remaining level of systematics is consistent with zero for the GEMS and GOODS data, confirming the success of our PSF correction scheme. For the parallel data we detect a low level of remaining systematics which we interpret to be caused by a lack of sufficient dithering of the data. Combining the shear estimate of the GEMS and GOODS observations using 96 galaxies arcmin^-2 with the photometric redshift catalogue of the GOODS-MUSIC sample, we determine a local single-field estimate for the mass power spectrum normalisation σ8,CDFS = 0.52 (+0.11/-0.15) (stat) ± 0.07 (sys) (68% confidence assuming Gaussian cosmic variance) at a fixed matter density Ω_m = 0.3 for a ΛCDM cosmology, marginalising over the uncertainty of the Hubble parameter and the redshift distribution. 
We interpret this exceptionally low estimate to be due to a local under-density of the foreground structures in the CDFS. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archives at the Space Telescope European Coordinating Facility and the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
The Role of Data Archives in Synoptic Solar Physics
NASA Astrophysics Data System (ADS)
Reardon, Kevin
The detailed study of solar cycle variations requires analysis of recorded datasets spanning many years of observations, that is, a data archive. The use of digital data, combined with powerful database server software, gives such archives new capabilities to provide, quickly and flexibly, selected pieces of information to scientists. Use of standardized protocols will allow multiple databases, independently maintained, to be seamlessly joined, allowing complex searches spanning multiple archives. These data archives also benefit from being developed in parallel with the telescope itself, which helps to assure data integrity and to provide close integration between the telescope and archive. Development of archives that can guarantee long-term data availability and strong compatibility with other projects makes solar-cycle studies easier to plan and realize.
Augmenting The HST Pure Parallel Observations
NASA Astrophysics Data System (ADS)
Patterson, Alan; Soutchkova, G.; Workman, W.
2012-05-01
Pure Parallel (PP) programs, designated GO/PAR, are a subgroup of General Observer (GO) programs. PP execute simultaneously with prime GO observations to which they are "attached". The PP observations can be performed with ACS/WFC, WFC3/UVIS or WFC3/IR and can be attached only to GO visits in which the instruments are either COS or STIS. The current HST Parallel Observation Processing System (POPS) was introduced after the Servicing Mission 4. It increased the HST productivity by 10% in terms of the utilization of HST prime orbits and was highly appreciated by the HST observers, allowing them to design efficient, multi-orbit survey projects for collecting large amounts of data on identifiable targets. The results of the WFC3 Infrared Spectroscopic Parallel Survey (WISP), Hubble Infrared Pure Parallel Imaging Extragalactic Survey (HIPPIES), and The Brightest-of-Reionizing Galaxies Pure Parallel Survey (BoRG) exemplify this benefit. In Cycle 19, however, the full advantage of GO/PARs came under risk. Whereas each of the previous cycles provided over one million seconds of exposure time for PP, in Cycle 19 that number reduced to 680,000 seconds. This dramatic decline occurred because of fundamental changes in the construction of COS prime observations. To preserve the science output of PP, the PP Working Group was tasked to find a way to recover the lost time and maximize the total time available for PP observing. The solution was to expand the definition of a PP opportunity to allow PP exposures to span one or more primary exposure readouts. So starting in HST Cycle 20, PP opportunities will no longer be limited to GO visits with a single uninterrupted exposure in an orbit. The resulting enhancements in HST Cycle 20 to the PP opportunity identification and matching process are expected to restore the PP time to previously achieved and possibly even greater levels.
WFIRST: Science from the Guest Investigator and Parallel Observation Programs
NASA Astrophysics Data System (ADS)
Postman, Marc; Nataf, David; Furlanetto, Steve; Milam, Stephanie; Robertson, Brant; Williams, Ben; Teplitz, Harry; Moustakas, Leonidas; Geha, Marla; Gilbert, Karoline; Dickinson, Mark; Scolnic, Daniel; Ravindranath, Swara; Strolger, Louis; Peek, Joshua
2018-01-01
The Wide Field InfraRed Survey Telescope (WFIRST) mission will provide an extremely rich archival dataset that will enable a broad range of scientific investigations beyond the initial objectives of the proposed key survey programs. The scientific impact of WFIRST will thus be significantly expanded by a robust Guest Investigator (GI) archival research program. We will present examples of GI research opportunities ranging from studies of the properties of a variety of Solar System objects, surveys of the outer Milky Way halo, comprehensive studies of cluster galaxies, to unique and new constraints on the epoch of cosmic re-ionization and the assembly of galaxies in the early universe. WFIRST will also support the acquisition of deep wide-field imaging and slitless spectroscopic data obtained in parallel during campaigns with the coronagraphic instrument (CGI). These parallel wide-field imager (WFI) datasets can provide deep imaging data covering several square degrees with no impact on the scheduling of the CGI program. A competitively selected program of well-designed parallel WFI observation programs will, like the GI science above, maximize the overall scientific impact of WFIRST. We will give two examples of parallel observations that could be conducted during a proposed CGI program centered on a dozen nearby stars.
Brave New World: Data Intensive Science with SDSS and the VO
NASA Astrophysics Data System (ADS)
Thakar, A. R.; Szalay, A. S.; O'Mullane, W.; Nieto-Santisteban, M.; Budavari, T.; Li, N.; Carliles, S.; Haridas, V.; Malik, T.; Gray, J.
2004-12-01
With the advent of digital archives and the VO, astronomy is quickly changing from a data-hungry to a data-intensive science. Local and specialized access to data will remain the most direct and efficient way to get data out of individual archives, especially if you know what you are looking for. However, the enormous sizes of the upcoming archives will preclude this type of access for most institutions, and will not allow researchers to tap the vast potential for discovery in cross-matching and comparing data between different archives. The VO makes this type of interoperability and distributed data access possible by adopting industry standards for data access (SQL) and data interchange (SOAP/XML) with platform independence (Web services). As a sneak preview of this brave new world where astronomers may need to become SQL warriors, we present a look at VO-enabled access to catalog data in the SDSS Catalog Archive Server (CAS): CasJobs - a workbench environment that allows arbitrarily complex SQL queries and your own personal database (MyDB) that you can share with collaborators; OpenSkyQuery - an IVOA (International Virtual Observatory Alliance) compliant federation of multiple archives (OpenSkyNodes) that currently links nearly 20 catalogs and allows cross-match queries (in ADQL - Astronomical Data Query Language) between them; Spectrum and Filter Profile Web services that provide access to an open database of spectra (registered users may add their own spectra); and VO-enabled Mirage - a Java visualization tool developed at Bell Labs and enhanced at JHU that allows side-by-side comparison of SDSS catalog and FITS image data. Anticipating the next generation of Petabyte archives like LSST by the end of the decade, we are developing a parallel cross-match engine for all-sky cross-matches between large surveys, along with a 100-Terabyte data intensive science laboratory with high-speed parallel data access.
Archival Services and Technologies for Scientific Data
NASA Astrophysics Data System (ADS)
Meyer, Jörg; Hardt, Marcus; Streit, Achim; van Wezel, Jos
2014-06-01
After analysis and publication, there is no need to keep experimental data online on spinning disks. For reliability and cost reasons, inactive data is moved to tape and put into a data archive. The data archive must provide reliable access for at least ten years, following a recommendation of the German Science Foundation (DFG), but many scientific communities wish to keep data available much longer. Data archival is, on the one hand, purely a bit-preservation activity intended to ensure that the bits read are the same as those written years before. On the other hand, enough information must be archived to be able to use and interpret the content of the data. The latter depends on many factors, some of them community-specific, and remains an area of much debate among archival specialists. The paper describes the current practice of archival and bit preservation in use for different science communities at KIT, for which a combination of organizational services and technical tools is required. The special monitoring to detect tape-related errors, the software infrastructure in use, as well as the service certification are discussed. Plans and developments at KIT, also in the context of the Large Scale Data Management and Analysis (LSDMA) project, are presented. The technical advantages of the T10 SCSI Stream Commands (SSC-4) and the Linear Tape File System (LTFS) will have a profound impact on future long-term archival of large data sets.
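The bit-preservation activity described above can be illustrated with a minimal sketch: record a checksum when data enters the archive, then verify it on later reads. The file name and manifest structure here are purely illustrative assumptions, not the actual KIT tooling; production archive systems apply the same idea at scale and often at the tape-block level.

```python
# Minimal sketch of bit preservation via checksums (illustrative only).
import hashlib
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Archive time: store the digest alongside the data (hypothetical file name).
data = pathlib.Path("experiment.dat")
data.write_bytes(b"detector readings...")
manifest = {data.name: sha256_of(data)}

# Years later: re-read and compare; a mismatch signals bit rot or a bad tape.
assert sha256_of(data) == manifest[data.name], "bit-level integrity check failed"
print("integrity OK")
```

A real archive would also need to protect the manifest itself (e.g. by replication), since a lost checksum is as bad as a lost file.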
Structural modeling of carbonaceous mesophase amphotropic mixtures under uniaxial extensional flow.
Golmohammadi, Mojdeh; Rey, Alejandro D
2010-07-21
The extended Maier-Saupe model for binary mixtures of model carbonaceous mesophases (uniaxial discotic nematogens) under externally imposed flow, formulated in previous studies [M. Golmohammadi and A. D. Rey, Liquid Crystals 36, 75 (2009); M. Golmohammadi and A. D. Rey, Entropy 10, 183 (2008)], is used to characterize the effect of uniaxial extensional flow and concentration on the phase behavior and structure of these mesogenic blends. The generic thermorheological phase diagram of the single-phase binary mixture, given in terms of temperature (T) and Deborah (De) number, shows the existence of four T-De transition lines that define regions that correspond to the following quadrupolar tensor order parameter structures: (i) oblate (perpendicular, parallel), (ii) prolate (perpendicular, parallel), (iii) scalene O (perpendicular, parallel), and (iv) scalene P (perpendicular, parallel), where the symbols (perpendicular, parallel) indicate alignment of the tensor order ellipsoid with respect to the extension axis. It is found that with increasing T the dominant component of the mixture exhibits weak deviations from the well-known pure-species response to uniaxial extensional flow (uniaxial perpendicular nematic → biaxial nematic → uniaxial parallel paranematic). In contrast, the slaved component shows a strong deviation from the pure-species response. This deviation is dictated by the asymmetric viscoelastic coupling effects emanating from the dominant component. Changes in conformation (oblate ↔ prolate) and orientation (perpendicular ↔ parallel) are effected through changes in pairs of eigenvalues of the quadrupolar tensor order parameter. The complexity of the structural sensitivity to temperature and extensional flow is a reflection of the dual lyotropic/thermotropic nature (amphotropic nature) of the mixture and their cooperation/competition. 
The analysis demonstrates that the simple structures (biaxial nematic and uniaxial paranematic) observed in pure discotic mesogens under uniaxial extensional flow are significantly enriched by the interaction of the lyotropic/thermotropic competition with the binary molecular architectures and with the quadrupolar nature of the flow.
Kemps, Eva; Newson, Rachel
2006-04-01
The study compared age-related decrements in verbal and visuo-spatial memory across a broad elderly adult age range. Twenty-four young (18-25 years), 24 young-old (65-74 years), 24 middle-old (75-84 years) and 24 old-old (85-93 years) adults completed parallel recall and recognition measures of verbal and visuo-spatial memory from the Doors and People Test (Baddeley, Emslie & Nimmo-Smith, 1994). These constituted 'pure' and validated indices of either verbal or visuo-spatial memory. Verbal and visuo-spatial memory declined similarly with age, with a steeper decline in recall than recognition. Unlike recognition memory, recall performance also showed a heightened decline after the age of 85. Age-associated memory loss in both modalities was largely due to working memory and executive function. Processing speed and sensory functioning (vision, hearing) made minor contributions to memory performance and age differences in it. Together, these findings demonstrate common, rather than differential, age-related effects on verbal and visuo-spatial memory. They also emphasize the importance of using 'pure', parallel and validated measures of verbal and visuo-spatial memory in memory ageing research.
Oak Ridge Leadership Computing Facility Position Paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oral, H Sarp; Hill, Jason J; Thach, Kevin G
This paper discusses the business, administration, reliability, and usability aspects of storage systems at the Oak Ridge Leadership Computing Facility (OLCF). The OLCF has developed key competencies in the architecting and administration of large-scale Lustre deployments as well as HPSS archival systems. Additionally, as these systems are architected, deployed, and expanded over time, reliability and availability factors are a primary driver. This paper focuses on the implementation of the Spider parallel Lustre file system as well as the implementation of the HPSS archive at the OLCF.
Data grid: a distributed solution to PACS
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
In a hospital, various kinds of medical images acquired from different modalities are generally used and stored in different departments, and each modality usually attaches several workstations to display or process images. To do better diagnosis, radiologists or physicians often need to retrieve other kinds of images for reference. The traditional image storage solution is to build up a large-scale PACS archive server. However, the disadvantages of purely centralized management of a PACS archive server are obvious. Besides high costs, any failure of the PACS archive server would cripple the entire PACS operation. Here we present a new approach to develop a storage grid in PACS, which can provide more reliable image storage and more efficient query/retrieval for whole-hospital applications. In this paper, we also give a performance evaluation comparing three popular technologies: mirror, cluster, and grid.
The ISO Data Archive and Interoperability with Other Archives
NASA Astrophysics Data System (ADS)
Salama, Alberto; Arviset, Christophe; Hernández, José; Dowson, John; Osuna, Pedro
The ESA's Infrared Space Observatory (ISO), an unprecedented observatory for infrared astronomy launched in November 1995, successfully made nearly 30,000 scientific observations in its 2.5-year mission. The ISO data can be retrieved from the ISO Data Archive, which comprises about 150,000 observations, including parallel and serendipity mode observations. A user-friendly Java interface permits queries to the database and data retrieval. The interface currently offers a wide variety of links to other archives, such as name resolution with NED and SIMBAD, access to electronic articles from ADS and CDS/VizieR, and access to IRAS data. In the past year development has been focused on improving the IDA interoperability with other astronomical archives, either by accessing other relevant archives or by providing direct access to the ISO data for external services. A mechanism of information transfer has been developed, allowing direct query of the IDA via a Java Server Page, returning quick-look ISO images and relevant, observation-specific information embedded in an HTML page. This method has been used to link from the CDS/VizieR Data Centre and ADS, and work with IPAC to allow access to the ISO Archive from IRSA, including display capabilities of the observed sky regions onto other mission images, is in progress. Prospects for further links to and from other archives and databases are also addressed.
Documenting genomics: Applying archival theory to preserving the records of the Human Genome Project
Shaw, Jennifer
2016-01-01
The Human Genome Archive Project (HGAP) aimed to preserve the documentary heritage of the UK's contribution to the Human Genome Project (HGP) by using archival theory to develop a suitable methodology for capturing the results of modern, collaborative science. After assessing past projects and different archival theories, the HGAP used an approach based on the theory of documentation strategy to try to capture the records of a scientific project that had an influence beyond the purely scientific sphere. The HGAP was an archival survey that ran for two years. It led to ninety scientists being contacted and has, so far, led to six collections being deposited in the Wellcome Library, with additional collections being deposited in other UK repositories. In applying documentation strategy the HGAP was attempting to move away from traditional archival approaches to science, which have generally focused on retired Nobel Prize winners. It has been partially successful in this aim, having managed to secure collections from people who are not ‘big names’, but who made an important contribution to the HGP. However, the attempt to redress the gender imbalance in scientific collections and to improve record-keeping in scientific organisations has continued to be difficult to achieve. PMID:26388555
Model-Based Systems Engineering in the Execution of Search and Rescue Operations
2015-09-01
OSC can fulfill the duties of an ACO but it may make sense to split the duties if there are no communication links between the OSC and participating... parallel mode. This mode is the most powerful option because it creates sequence diagrams that generate parallel "swim lanes" for each asset... greater flexibility is desired, sequence mode generates diagrams based purely on sequential action and activity diagrams without the parallel "swim lanes"
pureS2HAT: S 2HAT-based Pure E/B Harmonic Transforms
NASA Astrophysics Data System (ADS)
Grain, J.; Stompor, R.; Tristram, M.
2011-10-01
The pS2HAT routines allow efficient, parallel calculation of the so-called 'pure' polarized multipoles. The computed multipole coefficients are equal to the standard pseudo-multipoles calculated for the apodized sky maps of the Stokes parameters Q and U, subsequently corrected by so-called counterterms. If the applied apodizations fulfill certain boundary conditions, these multipoles correspond to the pure multipoles. Pure multipoles of one type, i.e., either E or B, are ensured not to contain contributions from the other one, at least to within numerical artifacts. They can therefore be further used in the estimation of the sky power spectra via the pseudo-power-spectrum technique, which, however, has to correctly account for the applied apodization on the one hand and the presence of the counterterms on the other. In addition, the package contains routines permitting calculation of the spin-weighted apodizations, given an input scalar, i.e., spin-0, window. The former are needed to compute the counterterms. It also provides routines for map and window manipulations. The routines are written in C and based on the S2HAT library, which is used to perform all required spherical harmonic transforms as well as all inter-processor communication. They are therefore parallelized using MPI and follow the distributed-memory computational model. The data distribution patterns, pixelization choices, conventions, etc. are all as those assumed/allowed by the S2HAT library.
A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation
NASA Technical Reports Server (NTRS)
Lee, Chang-Ock
1996-01-01
A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.
OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.
2016-12-01
The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic Cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15 to 30-year ocean science datasets. Our parallel analytics engine is extending the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: Parallel generation (Spark on a compute cluster) of 15 to 30-year Ocean Climatologies (e.g. sea surface temperature or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; Parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; Parallel detection (over the time-series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Nino or SST "blob" regions), or more complex, custom data mining algorithms; Shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; Scalable execution for all capabilities on a hybrid Cloud, using our on-premise OpenStack Cloud cluster or at Amazon. 
The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local "network" hop) so that we can efficiently access the thousands of files making up a three decade time-series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time-series, and parallel performance metrics.
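The anomaly-detection step described above (daily field minus climatology, flagged when the regional area-average exceeds a threshold) can be sketched in a few lines. This is a toy single-node illustration with synthetic data, not the NEXUS/Spark implementation; the array shapes, the injected spike, and the threshold value are all assumptions made for the example.

```python
# Toy sketch of climatology-based anomaly detection (not the OceanXtremes API).
import numpy as np

rng = np.random.default_rng(0)
years, days, ny, nx = 10, 365, 4, 4
# Synthetic SST archive: ~20 degrees C plus noise, one spike injected on day 100
# of the final year to mimic an anomalously warm event.
sst = 20 + rng.normal(0, 0.5, size=(years, days, ny, nx))
sst[-1, 100] += 3.0

climatology = sst[:-1].mean(axis=0)        # per-day-of-year mean over prior years
anomaly = sst[-1] - climatology            # latest year minus climatology
regional_mean = anomaly.mean(axis=(1, 2))  # area-average per day

threshold = 1.0                            # degrees C, chosen for this toy data
flagged_days = np.where(regional_mean > threshold)[0]
print(flagged_days)                        # the injected warm day is flagged
```

In the real system the climatology and anomaly fields would each be computed as a parallel job over tiles, with the per-tile statistics cached, but the arithmetic per grid cell is the same as above.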
Ordinary mode instability associated with thermal ring distribution
NASA Astrophysics Data System (ADS)
Hadi, F.; Yoon, P. H.; Qamar, A.
2015-02-01
The purely growing ordinary (O) mode instability driven by excessive parallel temperature anisotropy has recently received renewed attention owing to its potential applicability to the solar wind plasma. Previous studies of O mode instability have assumed either bi-Maxwellian or counter-streaming velocity distributions. For solar wind plasma trapped in magnetic mirror-like geometry such as magnetic clouds or in the vicinity of the Earth's collisionless bow shock environment, however, the velocity distribution function may possess a loss-cone feature. The O-mode instability in such a case may be excited for cyclotron harmonics as well as the purely growing branch. The present paper investigates the O-mode instability for plasmas characterized by the parallel Maxwellian distribution and perpendicular thermal ring velocity distribution in order to understand the general stability characteristics.
Alignment between Protostellar Outflows and Filamentary Structure
NASA Astrophysics Data System (ADS)
Stephens, Ian W.; Dunham, Michael M.; Myers, Philip C.; Pokhrel, Riwaj; Sadavoy, Sarah I.; Vorobyov, Eduard I.; Tobin, John J.; Pineda, Jaime E.; Offner, Stella S. R.; Lee, Katherine I.; Kristensen, Lars E.; Jørgensen, Jes K.; Goodman, Alyssa A.; Bourke, Tyler L.; Arce, Héctor G.; Plunkett, Adele L.
2017-09-01
We present new Submillimeter Array (SMA) observations of CO(2-1) outflows toward young, embedded protostars in the Perseus molecular cloud as part of the Mass Assembly of Stellar Systems and their Evolution with the SMA (MASSES) survey. For 57 Perseus protostars, we characterize the orientation of the outflow angles and compare them with the orientation of the local filaments as derived from Herschel observations. We find that the relative angles between outflows and filaments are inconsistent with purely parallel or purely perpendicular distributions. Instead, the observed distribution of outflow-filament angles is more consistent with either randomly aligned angles or a mix of projected parallel and perpendicular angles. A mix of parallel and perpendicular angles requires perpendicular alignment to be more common by a factor of ~3. Our results show that the observed distributions probably hold regardless of the protostar’s multiplicity, age, or the host core’s opacity. These observations indicate that the angular momentum axis of a protostar may be independent of the large-scale structure. We discuss the significance of independent protostellar rotation axes in the general picture of filament-based star formation.
Clinical experiences with an ASP model backup archive for PACS images
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Cao, Fei; Documet, Luis; Huang, H. K.; Muldoon, Jean
2003-05-01
Last year we presented a fault-tolerant backup archive using an Application Service Provider (ASP) model for disaster recovery. The purpose of this paper is to provide an update and clinical experiences related to implementing the ASP model archive solution for short-term backup of clinical PACS image data, as well as possible applications other than disaster recovery. The ASP backup archive provides instantaneous, automatic backup of acquired PACS image data and instantaneous recovery of stored PACS image data, all at a low operational cost and with little human intervention. This solution can be used for a variety of scheduled and unscheduled downtimes that occur on the main PACS archive. A backup archive server with hierarchical storage was implemented offsite from the main PACS archive location. Clinical data from a hospital PACS are sent to this ASP storage server in parallel with the exams being archived in the main server. Initially, connectivity between the main archive and the ASP storage server is established via a T-1 connection. In the future, other more cost-effective means of connectivity, such as Internet2, will be researched. We have integrated the ASP model backup archive with a clinical PACS at Saint John's Health Center, where it has been operational for over six months. Pitfalls encountered during integration with a live clinical PACS and the impact on clinical workflow will be discussed. In addition, estimates of the cost of establishing such a solution, as well as the cost charged to users, will be included. Clinical downtime scenarios, such as a scheduled mandatory downtime and an unscheduled downtime due to a disaster event at the main archive, were simulated, and the PACS exams were sent successfully from the offsite ASP storage server back to the hospital PACS in less than one day. The ASP backup archive was able to recover PACS image data for comparison studies with no complex operational procedures.
Furthermore, no image data loss was encountered during the recovery. During any clinical downtime scenario, the ASP backup archive server can repopulate a clinical PACS quickly with the majority of studies available for comparison during the interim until the main PACS archive is fully recovered.
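The parallel-archiving idea above (each acquired exam is written to the main archive and the offsite ASP backup at the same time) can be sketched as follows; the store callables are hypothetical stand-ins, since a real deployment would use DICOM store transactions over the network:

```python
import queue
import threading

def archive_exam(exam, stores):
    """Write one acquired exam to every archive at once.

    `stores` is a list of callables standing in for the main-archive and
    backup-archive store operations (hypothetical stand-ins; a real PACS
    would perform networked DICOM stores here)."""
    results = queue.Queue()
    threads = [threading.Thread(target=lambda s=s: results.put(s(exam)))
               for s in stores]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results.get() for _ in stores]

main_archive, backup_archive = [], []
archive_exam("CT-001",
             [lambda e: main_archive.append(e),
              lambda e: backup_archive.append(e)])
```

Because both writes happen at acquisition time, the backup never lags the main archive, which is what makes near-instantaneous recovery possible.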
The impact of image storage organization on the effectiveness of PACS.
Hindel, R
1990-11-01
A picture archiving and communication system (PACS) requires efficient handling of large amounts of data. Mass storage systems are cost-effective but slow, while very fast systems, such as frame buffers and parallel transfer disks, are expensive. The image traffic can be divided into inbound traffic generated by diagnostic modalities and outbound traffic to workstations. At the points of contact with medical professionals, responses must be fast. Archiving, on the other hand, can employ slower but less expensive storage systems, provided that the primary activities are not impeded. This article illustrates a segmentation architecture meeting these requirements, based on a clearly defined PACS concept.
Archive Management of NASA Earth Observation Data to Support Cloud Analysis
NASA Technical Reports Server (NTRS)
Lynnes, Christopher; Baynes, Kathleen; McInerney, Mark A.
2017-01-01
NASA collects, processes and distributes petabytes of Earth Observation (EO) data from satellites, aircraft, in situ instruments and model output, with an order of magnitude increase expected by 2024. Cloud-based web object storage (WOS) of these data can ease the accommodation of such an increase. More importantly, it can also facilitate user analysis of those volumes by making the data available to the massively parallel computing power in the cloud. However, storing EO data in cloud WOS has a ripple effect throughout the NASA archive system, with unexpected challenges and opportunities. One challenge is modifying data servicing software (such as Web Coverage Service servers) to access and subset data that are no longer on a directly accessible file system, but rather in cloud WOS. Opportunities include refactoring the archive software to a cloud-native architecture; virtualizing data products by computing on demand; and reorganizing data to be more analysis-friendly.
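Subsetting data held in web object storage typically means issuing HTTP ranged GETs rather than seeking within a local file. A small sketch of computing the Range header for a row slab of a flat row-major grid stored as one object (the layout assumptions are illustrative, not any specific NASA product format):

```python
def byte_range_for_rows(row_start, row_stop, ncols, itemsize, header=0):
    """HTTP Range header covering rows [row_start, row_stop) of a flat
    row-major grid stored as a single object. Row-major layout, fixed
    item size, and an optional header are illustrative assumptions."""
    first = header + row_start * ncols * itemsize
    last = header + row_stop * ncols * itemsize - 1   # Range is inclusive
    return "bytes={}-{}".format(first, last)

# Rows 2-3 of a 10x10 grid of 4-byte values:
hdr = byte_range_for_rows(2, 4, ncols=10, itemsize=4)
```

A subsetting service would attach this header to its object-store GET so that only the requested slab, not the whole granule, crosses the network.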
Kodama, Yuichi; Mashima, Jun; Kaminuma, Eli; Gojobori, Takashi; Ogasawara, Osamu; Takagi, Toshihisa; Okubo, Kousaku; Nakamura, Yasukazu
2012-01-01
The DNA Data Bank of Japan (DDBJ; http://www.ddbj.nig.ac.jp) maintains and provides archival, retrieval and analytical resources for biological information. The central DDBJ resource consists of public, open-access nucleotide sequence databases including raw sequence reads, assembly information and functional annotation. Database content is exchanged with EBI and NCBI within the framework of the International Nucleotide Sequence Database Collaboration (INSDC). In 2011, DDBJ launched two new resources: the 'DDBJ Omics Archive' (DOR; http://trace.ddbj.nig.ac.jp/dor) and BioProject (http://trace.ddbj.nig.ac.jp/bioproject). DOR is an archival database of functional genomics data generated by microarray and highly parallel new generation sequencers. Data are exchanged between the ArrayExpress at EBI and DOR in the common MAGE-TAB format. BioProject provides an organizational framework to access metadata about research projects and the data from the projects that are deposited into different databases. In this article, we describe major changes and improvements introduced to the DDBJ services, and the launch of two new resources: DOR and BioProject.
Assad, M; Lemieux, N; Rivard, C H; Yahia, L H
1999-01-01
The genotoxicity of nickel-titanium (NiTi) was compared to that of its pure constituents, pure nickel (Ni) and pure titanium (Ti) powders, and to 316L stainless steel (316L SS) as a clinical reference material. To do so, a dynamic in vitro semiphysiological extraction was performed on all metals using agitation and ISO requirements. Peripheral blood lymphocytes were then cultured in the presence of all material extracts, and their comparative genotoxicity levels were assessed using electron microscopy-in situ end-labeling (EM-ISEL) coupled with immunogold staining. Cellular chromatin exposed to pure Ni and 316L SS demonstrated significantly stronger gold binding than chromatin exposed to NiTi, pure Ti, or the untreated control. In parallel, graphite furnace atomic absorption spectrophotometry (AAS) was performed on all extraction media. The release of Ni atoms followed this decreasing order across the resulting semiphysiological solutions: pure Ni, 316L SS, NiTi, Ti, and controls. Ti was detected only after elution of pure titanium. Both pure titanium and nickel-titanium specimens showed relative in vitro biocompatibility. Therefore, this quantitative in vitro study provides encouraging results for the eventual use of nickel-titanium alloys as surgical implant materials.
Structure and kinematics of the broad-line regions in active galaxies from IUE variability data
NASA Technical Reports Server (NTRS)
Koratkar, Anuradha P.; Gaskell, C. Martin
1991-01-01
IUE archival data are used here to investigate the structure and kinematics of the broad-line regions (BLRs) in nine AGN. It is found that the centroid of the line-continuum cross-correlation functions (CCFs) can be determined with reasonable reliability. The errors in BLR size estimates from CCFs for irregularly sampled light curves are fairly well understood. BLRs are found to have small luminosity-weighted radii, and lines of high ionization tend to be emitted closer to the central source than lines of low ionization, especially for low-luminosity objects. The motion of the gas is gravity-dominated, with both pure inflow and pure outflow of high-velocity gas being excluded at a high confidence level for certain geometries.
A model for cytoplasmic rheology consistent with magnetic twisting cytometry.
Butler, J P; Kelly, S M
1998-01-01
Magnetic twisting cytometry is gaining wide applicability as a tool for the investigation of the rheological properties of cells and the mechanical properties of receptor-cytoskeletal interactions. Current technology involves the application and release of magnetically induced torques on small magnetic particles bound to or inside cells, with measurements of the resulting angular rotation of the particles. The properties of purely elastic or purely viscous materials can be determined by the angular strain and strain rate, respectively. However, the cytoskeleton and its linkage to cell surface receptors display elastic, viscous, and even plastic deformation, and the simultaneous characterization of these properties using only elastic or viscous models is internally inconsistent. Data interpretation is complicated by the fact that in current technology, the applied torques are not constant in time, but decrease as the particles rotate. This paper describes an internally consistent model consisting of a parallel viscoelastic element in series with a parallel viscoelastic element, and one approach to quantitative parameter evaluation. The unified model reproduces all essential features seen in data obtained from a wide variety of cell populations, and contains the pure elastic, viscoelastic, and viscous cases as subsets.
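One common reading of "a parallel viscoelastic element in series with a parallel viscoelastic element" is two Kelvin-Voigt (spring and dashpot in parallel) units connected in series. A sketch of the creep response of that arrangement under a constant torque, with illustrative parameters rather than values fitted to any cytometry data:

```python
def kelvin_voigt_series(torque, k1, c1, k2, c2, dt=1e-3, steps=5000):
    """Creep of two Kelvin-Voigt (spring || dashpot) units in series under
    a constant torque T. Each unit obeys k_i*x_i + c_i*dx_i/dt = T, and
    the total angular strain is x1 + x2. Explicit Euler integration with
    illustrative parameters (not a fitted cytometry model)."""
    x1 = x2 = 0.0
    for _ in range(steps):
        x1 += dt * (torque - k1 * x1) / c1
        x2 += dt * (torque - k2 * x2) / c2
    return x1 + x2

# Long-time creep approaches T/k1 + T/k2 = 0.5 + 0.25 = 0.75 here:
strain = kelvin_voigt_series(torque=1.0, k1=2.0, c1=1.0, k2=4.0, c2=1.0)
```

Setting one spring stiffness to zero recovers a purely viscous limit, and letting a dashpot vanish recovers a purely elastic one, which is the sense in which such a model contains the simpler cases as subsets.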
Multiscale Monte Carlo equilibration: Pure Yang-Mills theory
Endres, Michael G.; Brower, Richard C.; Orginos, Kostas; ...
2015-12-29
In this study, we present a multiscale thermalization algorithm for lattice gauge theory, which enables efficient parallel generation of uncorrelated gauge field configurations. The algorithm combines standard Monte Carlo techniques with ideas drawn from real space renormalization group and multigrid methods. We demonstrate the viability of the algorithm for pure Yang-Mills gauge theory for both heat bath and hybrid Monte Carlo evolution, and show that it ameliorates the problem of topological freezing up to controllable lattice spacing artifacts.
NASA Astrophysics Data System (ADS)
Capecelatro, Jesse
2018-03-01
It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows can potentially outperform traditional Eulerian schemes. Meanwhile, a demonstration of a scalable and practical framework has remained elusive. Motivated by recent progress in particle-based methods applied to convection-dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smoothed particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.
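The effect of the Lagrange multiplier, keeping particles on the sphere while the equations are advanced in Cartesian coordinates, can be mimicked in a sketch by projecting each updated position back onto the sphere and removing the radial velocity component. This is an illustration of the constraint only, not the paper's actual integrator:

```python
import math

def advance_on_sphere(pos, vel, dt, radius=1.0):
    """Advance one particle in Cartesian coordinates, then project it back
    onto the sphere and drop the radial velocity component (a cheap
    stand-in for the Lagrange-multiplier constraint)."""
    p = [pos[i] + dt * vel[i] for i in range(3)]
    r = math.sqrt(sum(c * c for c in p))
    p = [radius * c / r for c in p]               # back onto the sphere
    n = [c / radius for c in p]                   # outward unit normal
    vn = sum(vel[i] * n[i] for i in range(3))
    v = [vel[i] - vn * n[i] for i in range(3)]    # keep tangential part
    return p, v

p, v = advance_on_sphere([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], dt=0.1)
```

Working in Cartesian coordinates this way sidesteps the coordinate singularities at the poles that plague latitude-longitude formulations.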
Ho, ThienLuan; Oh, Seung-Rohk
2017-01-01
Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using the warp-shuffle operation instead of accessing shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results for real DNA packages revealed that the proposed algorithm and its implementation achieved speedups of up to 122.64 and 1.53 times over a sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively. PMID:29016700
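The serial CPU baseline that such GPU algorithms parallelize is the classic k-differences dynamic program (Sellers' algorithm), in which row 0 of the edit-distance table is zero so a match may start anywhere in the text. A minimal sketch:

```python
def k_differences(pattern, text, k):
    """Report end positions in `text` where `pattern` matches with edit
    distance <= k (Sellers' column-by-column dynamic program). This is
    the serial baseline; the paper's GPU version evaluates the same
    recurrence with warp-shuffle data sharing."""
    m = len(pattern)
    prev = list(range(m + 1))          # column for the empty text prefix
    hits = []
    for j, t in enumerate(text, 1):
        curr = [0] * (m + 1)           # row 0 stays 0: matches may start anywhere
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == t else 1
            curr[i] = min(prev[i - 1] + cost,  # substitution / match
                          prev[i] + 1,         # deletion from the text
                          curr[i - 1] + 1)     # insertion into the text
        if curr[m] <= k:
            hits.append(j)
        prev = curr
    return hits

hits = k_differences("ACGT", "AAGTTACGA", 1)
```

Each column depends only on its predecessor, which is exactly the data-sharing pattern the warp-shuffle implementation exploits between threads.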
NASA Technical Reports Server (NTRS)
Lee, L. R.; Montague, K. A.; Charvat, J. M.; Wear, M. L.; Thomas, D. M.; Van Baalen, M.
2016-01-01
Since the 2010 NASA directive to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible to the research and operational communities, demand for astronaut medical data has increased greatly. LSAH and LSDA personnel are working with the Human Research Program on many fronts to improve data access and decrease lead time for the release of data. Some examples include the following: feasibility reviews for NASA Research Announcement (NRA) data mining proposals; improved communication, support for researchers, and process improvements for retrospective Institutional Review Board (IRB) protocols; supplemental data sharing for flight investigators versus purely retrospective studies; and work with the Multilateral Human Research Panel for Exploration (MHRPE) to develop acceptable data sharing and crew consent processes and to organize inter-agency data coordinators to facilitate requests for international crewmember data. Current metrics on data requests and crew consenting will be presented, along with limitations on contacting crew to obtain consent. Categories of medical monitoring data available for request will be presented, as well as flow diagrams detailing data request processing and approval steps.
Inarejos-García, A M; Mancebo-Campos, V; Cañizares, P; Llanos, J
2015-05-01
Fruit purees are among the earliest foods introduced into infants' diets during the complementary feeding period. The rheological characteristics, together with sensory analysis, are decisive factors for the acceptance of a food product by the infant. The sensory acceptance of three commercial fruit purees (mixed fruits, pear, and plum) was studied using a new objective sensory parameter named SAIR (Sensory Acceptance by Infants Ratio), the quotient of the percentage of puree consumed (%) and the consumption time (seconds), tracked throughout the storage time. In parallel, the rheological characteristics of the purees were analyzed in order to obtain a relationship with the SAIR parameter. The best acceptance of the product (higher SAIR) was observed for purees showing a lower apparent viscosity (lower consistency index, "K") and less pseudoplastic behavior (higher flow behavior index, "n"). These results may help achieve higher acceptance values based on easily obtainable, objective parameters. © 2015 Institute of Food Technologists®
Ghorai, Sankar; Chaudhury, Pinaki
2018-05-30
We have used a replica exchange Monte Carlo procedure, popularly known as parallel tempering, to study the problem of Coulomb explosion in homogeneous Ar and Xe dicationic clusters as well as mixed Ar-Xe dicationic clusters of varying sizes with different degrees of relative composition. All the clusters studied carry two units of positive charge. The simulations reveal that in all cases there is a cutoff size below which the clusters fragment. For pure Ar the value is around 95, while for Xe it is 55. For the mixed clusters, with increasing Xe content the cutoff limit for suppression of Coulomb explosion gradually decreases from 95 for a pure Ar cluster to 55 for a pure Xe cluster. The hallmark of this study is this smooth progression. All the clusters are simulated using the reliable potential energy surface developed by Gay and Berne (Gay and Berne, Phys. Rev. Lett. 1982, 49, 194). For the hetero clusters, we have also discussed two different ways of distributing the charge: one in which both positive charges are on two Xe atoms, and another in which the two charges are on a Xe atom and an Ar atom. The fragmentation patterns observed are such that single ionic ejections are the favored dissociation pattern. © 2017 Wiley Periodicals, Inc.
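The core of one replica exchange step is the Metropolis criterion for swapping neighbouring temperature replicas. A sketch with illustrative energies and inverse temperatures (not the paper's cluster potential):

```python
import math
import random

def attempt_swap(energies, betas, i, rng=random.Random(0)):
    """Metropolis swap of replicas i and i+1: accept with probability
    min(1, exp((beta_i - beta_{i+1}) * (E_i - E_{i+1}))). Swapping lets
    configurations trapped in local minima escape through the
    high-temperature replicas."""
    delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
    if delta >= 0 or rng.random() < math.exp(delta):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
        return True
    return False

# The colder replica (beta = 1.0) holds the higher-energy configuration,
# so this exchange is always accepted:
E = [-10.0, -12.0]
accepted = attempt_swap(E, betas=[1.0, 0.5], i=0)
```

Interleaving such swap attempts with ordinary Monte Carlo moves at each temperature is what makes the method effective for rugged landscapes like cluster fragmentation.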
Displacement and deformation measurement for large structures by camera network
NASA Astrophysics Data System (ADS)
Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu
2014-03-01
A displacement and deformation measurement method for large structures by a series-parallel connection camera network is presented. By taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method by using this series-parallel connection camera network is studied. The movement range of the crane body is small, and that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body and the deformation of the arm are measured. Compared with a pure series or parallel connection camera network, the designed series-parallel connection camera network can be used to measure not only the movement and displacement of a large structure but also the relative movement and deformation of some interesting parts of the large structure by a relatively simple optical measurement system.
Thermal conductivity and thermal expansion of graphite fiber/copper matrix composites
NASA Technical Reports Server (NTRS)
Ellis, David L.; Mcdanels, David L.
1991-01-01
The high specific conductivity of graphite fiber/copper matrix (Gr/Cu) composites offers great potential for high heat flux structures operating at elevated temperatures. To determine the feasibility of applying Gr/Cu composites to high heat flux structures, composite plates were fabricated using unidirectional and cross-plied pitch-based P100 graphite fibers in a pure copper matrix. Thermal conductivity of the composites was measured from room temperature to 1073 K, and thermal expansion was measured from room temperature to 1050 K. The longitudinal thermal conductivity, parallel to the fiber direction, was comparable to pure copper. The transverse thermal conductivity, normal to the fiber direction, was less than that of pure copper and decreased with increasing fiber content. The longitudinal thermal expansion decreased with increasing fiber content. The transverse thermal expansion was greater than pure copper and nearly independent of fiber content.
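The longitudinal/transverse contrast reported here is what simple mixing rules predict: along the fibres the two phases conduct in parallel, while across them they conduct roughly in series. A sketch with rough room-temperature conductivities (the fibre values are assumptions for illustration, not measurements from this work):

```python
def parallel_mix(k_f, k_m, vf):
    """Rule of mixtures along the fibres: phases conduct in parallel,
    k_L = vf*k_f + (1 - vf)*k_m."""
    return vf * k_f + (1 - vf) * k_m

def series_mix(k_f, k_m, vf):
    """Inverse rule across the fibres: phases conduct roughly in series,
    k_T = 1 / (vf/k_f + (1 - vf)/k_m)."""
    return 1.0 / (vf / k_f + (1 - vf) / k_m)

# Assumed values: ~520 W/m-K axial and ~10 W/m-K transverse for P100
# fibre, ~400 W/m-K for pure copper (illustrative only):
k_long = parallel_mix(520.0, 400.0, vf=0.5)   # stays near pure copper
k_trans = series_mix(10.0, 400.0, vf=0.5)     # drops well below copper
```

Because the fibre conducts well only along its axis, k_long stays comparable to copper while k_trans falls with increasing fibre content, matching the trends measured for the Gr/Cu plates.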
TokSearch: A search engine for fusion experimental data
Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.; ...
2018-04-01
At a typical fusion research site, experimental data are stored using archive technologies that treat each discharge as an independent set of data. These technologies (e.g., MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either MATLAB or Python.
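The pattern such a tool supports, an arbitrary per-shot extraction function mapped in parallel over many shots, might look like the following sketch. The archive layout and function names here are hypothetical illustrations, not the actual TokSearch API:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in per-shot archives; a real site would read MDSplus or HDF5 files.
ARCHIVE = {
    171001: {"ip": [0.8, 1.2, 1.1]},
    171002: {"ip": [0.5, 0.6, 0.4]},
    171003: {"ip": [1.5, 1.6, 1.4]},
}

def query_shot(shot, signal, predicate):
    """Extract one signal from one shot and apply a user-defined predicate."""
    return shot, predicate(ARCHIVE[shot][signal])

def run_query(shots, signal, predicate, workers=4):
    """Map the per-shot extraction over many shots in parallel and keep
    the shots whose data satisfy the predicate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda s: query_shot(s, signal, predicate),
                                shots))
    return [shot for shot, keep in results if keep]

# Shots whose plasma current ever exceeds 1.0 (units arbitrary here):
high_current = run_query(sorted(ARCHIVE), "ip", lambda ip: max(ip) > 1.0)
```

Because each shot's archive is independent, the per-shot work embarrassingly parallelizes, which is what makes ad hoc queries over thousands of shots practical.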
ERIC Educational Resources Information Center
Calzada Pérez, María
2013-01-01
The present paper revolves around MaxiECPC, one of the various sub-corpora that make up ECPC (the European Comparable and Parallel Corpora), an electronic archive of speeches delivered at different parliaments (i.e. the European Parliament-EP; the Spanish Congreso de los Diputados-CD; and the British House of Commons-HC) from 1996 to 2009. In…
Dutto, Paola; Stickle, Miguel Martin; Pastor, Manuel; Manzanal, Diego; Yague, Angel; Moussavi Tayyebi, Saeid; Lin, Chuan; Elizalde, Maria Dolores
2017-01-01
The choice of a pure cohesive or a pure frictional viscoplastic model to represent the rheological behaviour of a flowslide is of paramount importance in order to obtain accurate results for real cases. The principal goal of the present work is to clarify the influence of the type of viscous model (pure cohesive versus pure frictional) on the numerical reproduction of two different real flowslides that occurred in 1966: the Aberfan flowslide and the Gypsum tailings impoundment flowslide. In the present work, a depth-integrated model based on the v-pw Biot-Zienkiewicz formulation, enhanced with a diffusion-like equation to account for the pore pressure evolution within the soil mass, is applied to both 1966 cases. For the Aberfan flowslide, a frictional viscous model based on Perzyna viscoplasticity is considered, while a pure cohesive viscous model (Bingham model) is considered for the Gypsum flowslide. The numerical approach followed is the SPH method, enriched by adding a 1D finite difference grid to each SPH node in order to improve the description of the pore water evolution in the propagating mixture. The results obtained from the simulations are in agreement with the documentation obtained through the UK National Archive (Aberfan flowslide) and the International Commission on Large Dams (Gypsum flowslide). PMID:28772924
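The two rheologies being contrasted can be summarized in one line each: a Bingham (pure cohesive) stress independent of pressure, versus a frictional stress proportional to the effective pressure, which is why pore pressure evolution matters so much for the frictional model. A sketch with illustrative numbers only:

```python
import math

def bingham_stress(tau_y, mu, gamma_dot):
    """Pure cohesive (Bingham) model: a yield stress plus a linear viscous
    term, independent of the confining pressure."""
    return tau_y + mu * gamma_dot

def frictional_stress(p_eff, phi_deg, mu, gamma_dot):
    """Pure frictional model: Mohr-Coulomb strength proportional to the
    effective pressure p_eff (so rising pore pressure directly weakens it)
    plus a viscous term; a crude stand-in for the Perzyna formulation."""
    return p_eff * math.tan(math.radians(phi_deg)) + mu * gamma_dot

# Illustrative parameter values, not calibrated to either flowslide:
tau_b = bingham_stress(tau_y=100.0, mu=20.0, gamma_dot=5.0)
tau_f = frictional_stress(p_eff=50e3, phi_deg=30.0, mu=20.0, gamma_dot=5.0)
```

In the frictional case, the pore-pressure diffusion equation attached to each SPH node feeds directly into p_eff, whereas the Bingham strength is unaffected by it.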
DFT calculations of graphene monolayer in presence of Fe dopant and vacancy
NASA Astrophysics Data System (ADS)
Ostovari, Fatemeh; Hasanpoori, Marziyeh; Abbasnejad, Mohaddeseh; Salehi, Mohammad Ali
2018-07-01
In the present work, the effects of Fe doping and vacancies on the electronic, magnetic, and optical properties of graphene are studied by calculations based on density functional theory. Conductive behavior is revealed for the various defected graphenes by means of the electronic density of states. However, the defected structures show magnetic and optical properties different from those of the pure one. The ferromagnetic phase is the most probable phase when Fe atoms and vacancies are substituted at the AA sublattice of graphene. The optical properties of impure graphene differ from those of pure graphene under illumination with parallel polarization of the electric field, whereas for perpendicular polarization they remain unchanged. In the presence of defects and under parallel polarization of light, the static dielectric constant rises strongly and the maximum peak of Im ε(ω) shows a red shift relative to pure graphene. Moreover, the maximum absorption peak broadens in the visible-to-infrared region under the same conditions, and the magnitudes and energies of the peaks shift to higher values in the EELS spectra. Furthermore, the results show that the maximum values of the refractive index and reflectivity spectra increase rapidly and exhibit red and blue shifts, respectively. Generally, substituting a C atom with Fe has a greater effect on the magnetic and optical properties than C vacancies do.
Fault-tolerant back-up archive using an ASP model for disaster recovery
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Huang, H. K.; Cao, Fei; Documet, Luis; Sarti, Dennis A.
2002-05-01
A single point of failure in PACS during a disaster scenario is the main archive storage and server. When a major disaster occurs, it is possible to lose an entire hospital's PACS data. Few current PACS archives feature disaster recovery, and where it exists the design is limited at best. Drawbacks include the frequency with which the back-up is physically moved to an offsite facility, the operational costs of maintaining the back-up, the ease of performing the backup consistently and efficiently, and the ease of performing the PACS image data recovery. This paper describes a novel approach toward a fault-tolerant solution for disaster recovery of short-term PACS image data using an Application Service Provider (ASP) model for service. The ASP back-up archive provides instantaneous, automatic backup of acquired PACS image data and instantaneous recovery of stored PACS image data, all at a low operational cost. A back-up archive server and RAID storage device are implemented offsite from the main PACS archive location. For this particular hospital, it was determined that at least two months' worth of PACS image exams were needed for back-up. Clinical data from the hospital PACS are sent to this ASP storage server in parallel with the exams being archived in the main server. A disaster scenario was simulated, and the PACS exams were sent from the offsite ASP storage server back to the hospital PACS. Initially, connectivity between the main archive and the ASP storage server is established via a T-1 connection. In the future, other more cost-effective means of connectivity, such as Internet2, will be researched. A disaster scenario was initiated, and the disaster recovery process using the ASP back-up archive server was successful in repopulating the clinical PACS within a short period of time. The ASP back-up archive was able to recover two months of PACS image data for comparison studies with no complex operational procedures.
Furthermore, no image data loss was encountered during the recovery.
Convergence issues in domain decomposition parallel computation of hovering rotor
NASA Astrophysics Data System (ADS)
Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong
2018-05-01
The implicit LU-SGS time-integration algorithm has been widely used in parallel computation in spite of its lack of information from adjacent domains. When applied to parallel computation of hovering rotor flows in a rotating frame, it brings about convergence issues. To remedy the problem, three LU-factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms; it shows that the LU-SGS algorithm introduces errors on boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulate along with the rotation, and ultimately lead to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which makes them desirable in domain decomposition parallel computations.
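The sweep structure behind these LU-factorization schemes can be illustrated on a toy linear system. The sketch below is a plain symmetric Gauss-Seidel (SGS) solver in Python, not the authors' LU-SGS flow solver; the matrix is an illustrative stand-in for one implicit time step.

```python
import numpy as np

def sgs_solve(A, b, iters=50):
    """Solve A x = b with symmetric Gauss-Seidel sweeps:
    a forward sweep followed by a backward sweep per iteration."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):            # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):  # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Diagonally dominant test system (mimics one implicit time-integration step)
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sgs_solve(A, b)
print(np.allclose(A @ x, b))  # → True
```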
PCSIM: A Parallel Simulation Environment for Neural Circuits Fully Integrated with Python
Pecevski, Dejan; Natschläger, Thomas; Schuch, Klaus
2008-01-01
The Parallel Circuit SIMulator (PCSIM) is a software package for simulation of neural circuits. It is primarily designed for distributed simulation of large scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, which is a powerful programming environment and allows the user to easily integrate the neural circuit simulator with data analysis and visualization tools to manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular we will investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality either employing pure Python or C++ and thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations. PMID:19543450
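The hybrid modeling approach described, a compiled core exposed to Python that users extend with pure-Python components, can be sketched generically. The classes below are illustrative stand-ins, not PCSIM's actual API; in PCSIM the base class would be backed by C++ through the generated interface.

```python
# Hypothetical sketch of the hybrid-modeling pattern: a core engine class
# (pure Python here, standing in for a C++-backed class) extended with a
# user-defined point-neuron model from the scripting side.
class PointNeuron:
    def __init__(self):
        self.v = 0.0
    def step(self, dt, i_in):
        raise NotImplementedError

class LeakyIntegrateFire(PointNeuron):
    """User-supplied neuron model written in pure Python."""
    def __init__(self, tau=10.0, v_thresh=1.0):
        super().__init__()
        self.tau, self.v_thresh = tau, v_thresh
    def step(self, dt, i_in):
        self.v += dt * (-self.v / self.tau + i_in)
        spiked = self.v >= self.v_thresh
        if spiked:
            self.v = 0.0  # reset after a spike
        return spiked

neuron = LeakyIntegrateFire()
spikes = sum(neuron.step(dt=1.0, i_in=0.2) for _ in range(100))
print(spikes > 0)  # → True: the driven neuron spikes repeatedly
```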
NASA Astrophysics Data System (ADS)
Sun, Fengchun; Liu, Wei; He, Hongwen; Guo, Hongqiang
2016-08-01
For an electric vehicle with independently driven axles, an integrated braking control strategy was proposed to coordinate regenerative braking and hydraulic braking. The integrated strategy includes three modes, namely the hybrid composite mode, the parallel composite mode and the pure hydraulic mode. For the hybrid and parallel composite modes, the coefficients distributing the braking force between the hydraulic brakes and the two motors' regenerative braking were optimised offline, and response surfaces related to the driving-state parameters were established. Meanwhile, the six-sigma method was applied to deal with uncertainty for reliability. Additionally, the pure hydraulic mode is activated to ensure braking safety and stability when predictive failure of the response surfaces occurs. Experimental results under given braking conditions showed that the braking requirements could be well met with high braking stability and energy regeneration rate, and the reliability of the braking strategy was guaranteed under general braking conditions.
Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, P.N.; Chang, B.; Hanebutte, U.R.
1999-12-29
Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with simple geometry of a pure absorber with a large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which is itself embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct: Problem 2 has a straight duct and Problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes with thread-based parallelism amongst processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.
Ajdacic-Gross, Vladeta; Rodgers, Stephanie; Müller, Mario; Hengartner, Michael P; Aleksandrowicz, Aleksandra; Kawohl, Wolfram; Heekeren, Karsten; Rössler, Wulf; Angst, Jules; Castelao, Enrique; Vandeleur, Caroline; Preisig, Martin
2016-09-01
Interest in subtypes of mental disorders is growing in parallel with continuing research progress in psychiatry. The aim of this study was to examine pure animal phobia in contrast to other specific phobias and a mixed subtype. Data from three representative Swiss community samples were analysed: PsyCoLaus (n = 3720), the ZInEP Epidemiology Survey (n = 1500) and the Zurich Study (n = 591). Pure animal phobia and mixed animal/other specific phobias consistently displayed a low age at onset of first symptoms (8-12 years) and clear preponderance of females (OR > 3). Meanwhile, other specific phobias started up to 10 years later and displayed almost a balanced sex ratio. Pure animal phobia showed no associations with any included risk factors and comorbid disorders, in contrast to numerous associations found in the mixed subtype and in other specific phobias. Across the whole range of epidemiological parameters examined in three different samples, pure animal phobia seems to represent a different entity compared to other specific phobias. The etiopathogenetic mechanisms and risk factors associated with pure animal phobias appear less clear than ever.
Horace Lamb and the circumstances of his appointment at Owens College
Launder, Brian
2013-01-01
This paper examines a succession of incidents at a critical juncture in the life of Professor Horace Lamb FRS, a highly regarded classical fluid mechanicist, who, over a period of some 35 years at Manchester, made notable contributions in research, in education and in wise administration at both national and university levels. Drawing on archived documents from the universities of Manchester and Adelaide, the article presents the unusual sequence of events that led to his move from Adelaide, South Australia, where he had served for nine years as the Elder Professor of Mathematics, to Manchester. In 1885 he was initially appointed to the vacant Chair of Pure Mathematics at Owens College and then, in 1888, as an outcome of his proposal for rearranging professorial responsibilities, to the Beyer Professorship of Pure and Applied Mathematics.
1994-03-25
metrics [DISA93b]. "The Software Engineering Institute (SEI) has developed a domain analysis process (Feature-Oriented Domain Analysis - FODA) and is… and expresses the range of variability of these decisions. 3.2.2.3 Feature-Oriented Domain Analysis. Feature-Oriented Domain Analysis (FODA) is a domain… documents created in this phase. From a purely profit-oriented business point of view, a company may develop its own analysis of a government or commercial
E-MSD: improving data deposition and structure quality.
Tagari, M; Tate, J; Swaminathan, G J; Newman, R; Naim, A; Vranken, W; Kapopoulou, A; Hussain, A; Fillon, J; Henrick, K; Velankar, S
2006-01-01
The Macromolecular Structure Database (MSD) (http://www.ebi.ac.uk/msd/) [H. Boutselakis, D. Dimitropoulos, J. Fillon, A. Golovin, K. Henrick, A. Hussain, J. Ionides, M. John, P. A. Keller, E. Krissinel et al. (2003) E-MSD: the European Bioinformatics Institute Macromolecular Structure Database. Nucleic Acids Res., 31, 458-462.] group is one of the three partners in the worldwide Protein DataBank (wwPDB), the consortium entrusted with the collation, maintenance and distribution of the global repository of macromolecular structure data [H. Berman, K. Henrick and H. Nakamura (2003) Announcing the worldwide Protein Data Bank. Nature Struct. Biol., 10, 980.]. Since its inception, the MSD group has worked with partners around the world to improve the quality of PDB data, through a clean-up programme that addresses inconsistencies and inaccuracies in the legacy archive. The improvements in data quality in the legacy archive have been achieved largely through the creation of a unified data archive, in the form of a relational database that stores all of the data in the wwPDB. The three partners are working towards improving the tools and methods for the deposition of new data by the community at large. The implementation of the MSD database, together with the parallel development of improved tools and methodologies for data harvesting, validation and archival, has led to significant improvements in the quality of data that enters the archive. Through this and related projects in the NMR and EM realms the MSD continues to improve the quality of publicly available structural data.
NASA Astrophysics Data System (ADS)
Samaké, Abdoulaye; Rampal, Pierre; Bouillon, Sylvain; Ólason, Einar
2017-12-01
We present a parallel implementation framework for a new dynamic/thermodynamic sea-ice model, called neXtSIM, based on the Elasto-Brittle rheology and using an adaptive mesh. The spatial discretisation of the model is done using the finite-element method. The temporal discretisation is semi-implicit and the advection is achieved using either a pure Lagrangian scheme or an Arbitrary Lagrangian Eulerian (ALE) scheme. The parallel implementation presented here focuses on the distributed-memory approach using the message-passing library MPI. The efficiency and the scalability of the parallel algorithms are illustrated by numerical experiments performed using up to 500 processor cores of a cluster computing system. The performance obtained by the proposed parallel implementation of the neXtSIM code is shown to be sufficient to perform simulations for state-of-the-art sea-ice forecasting and geophysical process studies over geographical domains of several million square kilometres, such as the Arctic region.
New theoretical results for the Lehmann effect in cholesteric liquid crystals
NASA Technical Reports Server (NTRS)
Brand, Helmut R.; Pleiner, Harald
1988-01-01
The Lehmann effect arising in a cholesteric liquid crystal drop when a temperature gradient is applied parallel to its helical axis is investigated theoretically using a local approach. A pseudoscalar quantity is introduced to allow for cross couplings which are absent in nematic liquid crystals, and the statics and dissipative dynamics are analyzed in detail. It is shown that the Lehmann effect is purely dynamic for the case of an external electric field and purely static for an external density gradient, but includes both dynamic and static coupling contributions for the cases of external temperature or concentration gradients.
Resonance-induced sensitivity enhancement method for conductivity sensors
NASA Technical Reports Server (NTRS)
Tai, Yu-Chong (Inventor); Shih, Chi-yuan (Inventor); Li, Wei (Inventor); Zheng, Siyang (Inventor)
2009-01-01
Methods and systems for improving the sensitivity of a variety of conductivity sensing devices, in particular capacitively-coupled contactless conductivity detectors. A parallel inductor is added to the conductivity sensor. The sensor with the parallel inductor is operated at a resonant frequency of the equivalent circuit model. At the resonant frequency, parasitic capacitances that are either in series or in parallel with the conductance (and possibly a series resistance) are substantially removed from the equivalent circuit, leaving a purely resistive impedance. An appreciably higher sensor sensitivity results. Experimental verification shows that sensitivity improvements of the order of 10,000-fold are possible. Examples of detecting particulates with high precision by application of the apparatus and methods of operation are described.
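The resonance idea can be checked numerically: with the added parallel inductor driven at the LC resonant frequency, the capacitive susceptance cancels and the admittance reduces to the pure conductance. The component values below are illustrative, not taken from the patent.

```python
import math

# Hedged sketch: a conductance G (the sensed solution) shunted by a
# parasitic capacitance C; a parallel inductor L driven at the LC
# resonant frequency cancels the capacitive susceptance, leaving a
# purely resistive impedance. Values are illustrative assumptions.
G = 1e-4     # S, cell conductance
C = 10e-12   # F, parasitic capacitance
L = 1e-3     # H, added parallel inductor

w0 = 1.0 / math.sqrt(L * C)               # resonant angular frequency
Y = G + 1j * (w0 * C - 1.0 / (w0 * L))    # total admittance at resonance
Z = 1.0 / Y
print(abs(Z.imag) < 1e-6 * abs(Z.real))   # → True: purely resistive
```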
Resolving the Discrepancy of Distance to M60, a Giant Elliptical Galaxy in Virgo
NASA Astrophysics Data System (ADS)
Lee, Myung Gyoon; Jang, In Sung
2017-05-01
There is a well-known discrepancy in the distance estimation of M60, a giant elliptical galaxy in Virgo; the planetary nebula luminosity function (PNLF) distance moduli for this galaxy are, on average, 0.4 mag smaller than the values based on the surface brightness fluctuation (SBF) in the literature. We present photometry of the resolved stars in an outer field of M60 based on deep F775W and F850LP images from the Hubble Space Telescope obtained as part of the Pure Parallel Program in the archive. Detected stars are mostly old red giants in the halo of M60. With this photometry, we determine a distance to M60 using the tip of the red giant branch (TRGB). A TRGB is detected at F850LP(TRGB) = 26.70 ± 0.06 mag in the luminosity function of the red giants. This value corresponds to F814W_0(TRGB) = 27.13 ± 0.06 mag and Q_T(TRGB) = 27.04 ± 0.07 mag, where Q_T is a color-corrected F814W magnitude. From this we derive a distance modulus (m − M)_0 = 31.05 ± 0.07 (ran) ± 0.06 (sys) (d = 16.23 ± 0.50 (ran) ± 0.42 (sys) Mpc). This value is 0.3 mag larger than the PNLF distances and 0.1 mag smaller than the SBF distances in previous studies, which indicates that the PNLF distances to M60 reported in the literature have larger uncertainties than the suggested values.
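The quoted distance follows directly from the distance modulus via d[pc] = 10^((m − M + 5)/5); a quick check using the modulus from the abstract:

```python
# Distance from distance modulus: d [pc] = 10 ** ((m - M + 5) / 5).
# The modulus below is the value quoted in the abstract.
mu = 31.05                        # (m - M)_0, mag
d_pc = 10 ** ((mu + 5.0) / 5.0)   # distance in parsecs
d_mpc = d_pc / 1e6                # convert to megaparsecs
print(round(d_mpc, 1))  # → 16.2, consistent with the quoted 16.23 ± 0.50 Mpc
```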
Fast word reading in pure alexia: "fast, yet serial".
Bormann, Tobias; Wolfer, Sascha; Hachmann, Wibke; Neubauer, Claudia; Konieczny, Lars
2015-01-01
Pure alexia is a severe impairment of word reading in which individuals process letters serially with a pronounced length effect. Yet, there is considerable variation in the performance of alexic readers with generally very slow, but also occasionally fast responses, an observation addressed rarely in previous reports. It has been suggested that "fast" responses in pure alexia reflect residual parallel letter processing or that they may even be subserved by an independent reading system. Four experiments assessed fast and slow reading in a participant (DN) with pure alexia. Two behavioral experiments investigated frequency, neighborhood, and length effects in forced fast reading. Two further experiments measured eye movements when DN was forced to read quickly, or could respond faster because words were easier to process. Taken together, there was little support for the proposal that "qualitatively different" mechanisms or reading strategies underlie both types of responses in DN. Instead, fast responses are argued to be generated by the same serial-reading strategy.
Global Learning Spectral Archive- A new Way to deal with Unknown Urban Spectra -
NASA Astrophysics Data System (ADS)
Jilge, M.; Heiden, U.; Habermeyer, M.; Jürgens, C.
2015-12-01
Rapid urbanization and the need to identify urban materials have challenged urban planners and the remote sensing community for years. Urban planners cannot maintain up-to-date information on urban materials because of time-intensive fieldwork. Hyperspectral remote sensing can address this issue by interpreting spectral signals to provide information on the materials present. However, the complexity of urban areas and the diversity of urban materials vary with regional and cultural factors as well as with the size of a city, which makes identification of surface materials a challenging analysis task. The various surface-material identification approaches commonly use spectral libraries containing pure material spectra, derived from the field, the laboratory or the hyperspectral image itself. One requirement for successful image analysis is that all spectrally different surface materials be represented in the library. A universal library, applicable to every urban area worldwide and accounting for all spectral variability, does not exist and is unlikely to. In this study, the issue of unknown surface-material spectra and the demand for an urban site-specific spectral library are tackled through the development of a learning spectral archive tool. Starting with an incomplete library of labelled image spectra from several German cities, surface materials of pure image pixels are identified in a hyperspectral image based on a similarity measure (e.g. SID-SAM). Additionally, unknown image spectra of urban objects are identified based on an object- and spectral-based rule set. The detected unknown surface-material spectra are entered, with additional metadata such as regional occurrence, into the existing spectral library and are thus reusable in further studies. Our approach is suitable for pure surface-material detection in urban hyperspectral images and is globally applicable because it takes incompleteness into account. 
The generic development enables support for different hyperspectral sensors.
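A similarity measure of the kind mentioned (here the spectral angle mapper, the SAM half of SID-SAM) can be sketched briefly; the library entries and pixel spectrum below are made-up illustrative values, not real material spectra.

```python
import numpy as np

# Spectral angle mapper (SAM): the angle between two spectra viewed as
# vectors; smaller angle means more similar spectral shape.
def spectral_angle(s1, s2):
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # radians

# Illustrative (hypothetical) library spectra and an unknown pixel spectrum
library = {"roof_tile": [0.30, 0.35, 0.40, 0.42],
           "asphalt":   [0.05, 0.06, 0.07, 0.08]}
pixel = [0.29, 0.36, 0.41, 0.41]

best = min(library, key=lambda m: spectral_angle(pixel, library[m]))
print(best)  # → roof_tile
```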
NASA Astrophysics Data System (ADS)
Arias-González, Felipe; del Val, Jesús; Comesaña, Rafael; Penide, Joaquín; Lusquiños, Fernando; Quintero, Félix; Riveiro, Antonio; Boutinguiza, Mohamed; Gil, Francisco Javier; Pou, Juan
2018-01-01
In this paper, the microstructure and crystallographic texture of pure Ti thin walls generated by Additive Manufacturing based on Laser Cladding (AMLC) are analyzed in depth. From the results obtained, it is possible to better understand the AMLC process of pure titanium. The microstructure observed in the samples consists of large elongated columnar prior β grains which have grown epitaxially from the substrate to the top, in parallel to the building direction. Within the prior β grains, α-Ti lamellae and lamellar colonies are the result of cooling from above the β-transus temperature. This transformation follows the Burgers relationship and the result is a basket-weave microstructure with a strong crystallographic texture. Finally, a thermal treatment is proposed to transform the microstructure of the as-deposited samples into an equiaxed microstructure of α-Ti grains.
Rendeiro, Catarina; Vauzour, David; Rattray, Marcus; Waffo-Téguo, Pierre; Mérillon, Jean Michel; Butler, Laurie T.; Williams, Claire M.; Spencer, Jeremy P. E.
2013-01-01
Evidence suggests that flavonoid-rich foods are capable of inducing improvements in memory and cognition in animals and humans. However, there is a lack of clarity concerning whether flavonoids are the causal agents in inducing such behavioral responses. Here we show that supplementation with pure anthocyanins or pure flavanols for 6 weeks, at levels similar to those found in blueberry (2% w/w), results in an enhancement of spatial memory in 18-month-old rats. Pure flavanols and pure anthocyanins were observed to induce significant improvements in spatial working memory (p = 0.002 and p = 0.006 respectively), to a similar extent to that following blueberry supplementation (p = 0.002). These behavioral changes were paralleled by increases in hippocampal brain-derived neurotrophic factor (R = 0.46, p<0.01), suggesting a common mechanism for the enhancement of memory. However, unlike protein levels of BDNF, the regional enhancement of BDNF mRNA expression in the hippocampus appeared to be predominantly enhanced by anthocyanins. Our data support the claim that flavonoids are likely causal agents in mediating the cognitive effects of flavonoid-rich foods. PMID:23723987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.
2017-12-12
smoothG is a collection of parallel C++ classes/functions that algebraically construct reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software finds application in subsurface flow and power grid simulations, in which graph Laplacians arise.
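As a minimal illustration of the graph Laplacians such solvers operate on (the algebraic coarsening itself is far more involved), one can assemble L = D − A for a small graph and verify its defining property:

```python
import numpy as np

# Graph Laplacian L = D - A for a small illustrative graph:
# D is the diagonal degree matrix, A the adjacency matrix.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Rows of a graph Laplacian sum to zero (constants lie in its null space)
print(np.allclose(L.sum(axis=1), 0))  # → True
```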
Lorentz Contraction and Current-Carrying Wires
ERIC Educational Resources Information Center
van Kampen, Paul
2008-01-01
The force between two parallel current-carrying wires is investigated in the rest frames of the ions and the electrons. A straightforward Lorentz transformation shows that what appears as a purely magnetostatic force in the ion frame appears as a combined magnetostatic and electrostatic force in the electron frame. The derivation makes use of a…
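As a frame-independent reference point for the physics discussed, the familiar magnetostatic force per unit length between two parallel wires can be evaluated directly; the currents and spacing below are illustrative.

```python
import math

# Force per unit length between parallel current-carrying wires:
# F/L = mu0 * I1 * I2 / (2 * pi * d)
mu0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
I1 = I2 = 1.0          # A
d = 1.0                # m
f_per_len = mu0 * I1 * I2 / (2 * math.pi * d)
print(abs(f_per_len - 2e-7) < 1e-12)  # → True: the classic 2e-7 N/m value
```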
Felix Klein and the NCTM's Standards: A Mathematician Considers Mathematics Education.
ERIC Educational Resources Information Center
McComas, Kim Krusen
2000-01-01
Discusses the parallels between Klein's position at the forefront of a movement to reform mathematics education and that of the National Council of Teachers of Mathematics' (NCTM) Standards. Draws a picture of Klein as an important historical figure who saw equal importance in studying pure mathematics, applying mathematics, and teaching…
Initial singularity and pure geometric field theories
NASA Astrophysics Data System (ADS)
Wanas, M. I.; Kamal, Mona M.; Dabash, Tahia F.
2018-01-01
In the present article we use a modified version of the geodesic equation, together with a modified version of the Raychaudhuri equation, to study initial singularities. These modified equations are used to account for the effect of the spin-torsion interaction on the existence of initial singularities in cosmological models. Such models are the results of solutions of the field equations of a class of field theories termed pure geometric. The geometric structure used in this study is an absolute parallelism structure satisfying the cosmological principle. It is shown that the existence of initial singularities is subject to some mathematical (geometric) conditions. The scheme suggested for this study can be easily generalized.
IAU Working Group on Wide-Field Imaging.
NASA Astrophysics Data System (ADS)
MacGillivray, H. T.
1991-01-01
Contents: 1. Introduction - The IAU Working Group on Wide-Field Imaging (R. M. West). 2. Reports from the Sub-Sections of the Working Group - a. Sky surveys and patrols (R. M. West). b. Photographic techniques (D. F. Malin). c. Digitization techniques (H. T. MacGillivray). d. Archival and retrieval of wide-field data (B. Lasker). 3. Meeting of the Organising Committee (R. M. West). 4. Wide-field plate archives (M. Tsvetkov). 5. Reproduction of the Palomar Observatory Sky Surveys (R. J. Brucato). 6. Status of the St ScI scan-distribution program (B. Lasker). 7. Pixel addition - pushing Schmidt plates to B = 25 (M. R. S. Hawkins). 8. Photometry from Estar film (S. Phillipps, Q. Parker). 9. ASCHOT - Astrophysical Schmidt Orbital Telescope (H. Lorenz). 10. The Hitchhiker parallel CCD camera (J. Davies, M. Disney, S. Driver, I. Morgan, S. Phillipps).
Fiber Optic Communication System For Medical Images
NASA Astrophysics Data System (ADS)
Arenson, Ronald L.; Morton, Dan E.; London, Jack W.
1982-01-01
This paper discusses a fiber optic communication system linking ultrasound devices, computerized tomography scanners, a nuclear medicine computer system, and a digital fluorographic system to a central radiology research computer. These centrally archived images are available for near-instantaneous recall at various display consoles. When a suitable laser optical disk is available for mass storage, more extensive image archiving will be added to the network, including digitized images of standard radiographs for comparison purposes and for remote display in such areas as the intensive care units, the operating room, and selected outpatient departments. This fiber optic system allows for a transfer of high-resolution images in less than a second over distances exceeding 2,000 feet. The advantages of using fiber optic cables instead of typical parallel or serial communication techniques are described. The switching methodology and communication protocols are also discussed.
Cost-effective data storage/archival subsystem for functional PACS
NASA Astrophysics Data System (ADS)
Chen, Y. P.; Kim, Yongmin
1993-09-01
Not the least of the requirements of a workable PACS is the ability to store and archive vast amounts of information. A medium-size hospital will generate between 1 and 2 TBytes of data annually on a fully functional PACS. A high-speed image transmission network coupled with a comparably high-speed central data storage unit can make local memory and magnetic disks in the PACS workstations less critical and, in an extreme case, unnecessary. Under these circumstances, the capacity and performance of the central data storage subsystem and database are critical in determining the response time at the workstations, thus significantly affecting clinical acceptability. The central data storage subsystem not only needs to provide sufficient capacity to store about ten days' worth of images (five days' worth of new studies and, on average, about one comparison study for each new study), but must also supply images to the requesting workstation in a timely fashion. The database must provide fast retrieval responses upon users' requests for images. This paper analyzes the advantages and disadvantages of multiple parallel transfer disks versus RAID disks for the short-term central data storage subsystem, as well as an optical disk jukebox versus a digital tape subsystem for the long-term archive. Furthermore, an example high-performance storage subsystem which integrates both RAID disks and a high-speed digital tape subsystem as a cost-effective PACS data storage/archival unit is presented.
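The sizing figures quoted above can be checked with back-of-envelope arithmetic (assuming the upper 2 TB/year bound and the ten-day retention described):

```python
# Rough short-term storage sizing from the abstract's figures.
tb_per_year = 2.0                       # upper bound quoted in the text
gb_per_day = tb_per_year * 1024 / 365   # new-study volume per day
# Ten days' worth of images: five days of new studies, doubled by
# roughly one comparison study per new study.
short_term_gb = gb_per_day * 10
print(round(gb_per_day, 1), round(short_term_gb))  # → 5.6 56
```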
New Developments in NOAA's Comprehensive Large Array-Data Stewardship System
NASA Astrophysics Data System (ADS)
Ritchey, N. A.; Morris, J. S.; Carter, D. J.
2012-12-01
The Comprehensive Large Array-data Stewardship System (CLASS) is part of the NOAA strategic goal of Climate Adaptation and Mitigation that gives focus to the building and sustaining of key observational assets and data archives critical to maintaining the global climate record. Since 2002, CLASS has been NOAA's enterprise solution for ingesting, storing and providing access to a host of near real-time remote sensing streams such as the Polar and Geostationary Operational Environmental Satellites (POES and GOES) and the Defense Meteorological Satellite Program (DMSP). Since October, 2011 CLASS has also been the dedicated Archive Data Segment (ADS) of the Suomi National Polar-orbiting Partnership (S-NPP). As the ADS, CLASS receives raw and processed S-NPP records for archival and distribution to the broad user community. Moving beyond just remote sensing and model data, NOAA has endorsed a plan to migrate all archive holdings from NOAA's National Data Centers into CLASS while retiring various disparate legacy data storage systems residing at the National Climatic Data Center (NCDC), National Geophysical Data Center (NGDC) and the National Oceanographic Data Center (NODC). In parallel to this data migration, CLASS is evolving to a service-oriented architecture utilizing cloud technologies for dissemination in addition to clearly defined interfaces that allow better collaboration with partners. This evolution will require implementation of standard access protocols and metadata which will lead to cost effective data and information preservation.
Transport in a toroidally confined pure electron plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crooks, S.M.; ONeil, T.M.
1996-07-01
O'Neil and Smith [T.M. O'Neil and R.A. Smith, Phys. Plasmas 1, 8 (1994)] have argued that a pure electron plasma can be confined stably in a toroidal magnetic field configuration. This paper shows that the toroidal curvature of the magnetic field of necessity causes slow cross-field transport. The transport mechanism is similar to magnetic pumping and may be understood by considering a single flux tube of plasma. As the flux tube of plasma undergoes poloidal E×B drift rotation about the center of the plasma, the length of the flux tube and the magnetic field strength within the flux tube oscillate, and this produces corresponding oscillations in T_parallel and T_perp. The collisional relaxation of T_parallel toward T_perp produces a slow dissipation of electrostatic energy into heat and a consequent expansion (cross-field transport) of the plasma. In the limit where the cross section of the plasma is nearly circular, the radial particle flux is given by Gamma_r = (1/2) nu_{perp,parallel} T (r/rho_0)^2 n / (−e ∂Φ/∂r), where nu_{perp,parallel} is the collisional equipartition rate, rho_0 is the major radius at the center of the plasma, and r is the minor radius measured from the center of the plasma. The transport flux is first calculated using this simple physical picture and then by solving the drift-kinetic Boltzmann equation. This latter calculation is not limited to a plasma with a circular cross section. © 1996 American Institute of Physics.
NASA Technical Reports Server (NTRS)
Hockney, George; Lee, Seungwon
2008-01-01
A computer program known as PyPele, originally written as a Python-language extension module of a C++ program, has been rewritten in pure Python. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission-design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell, a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses the Message Passing Interface (MPI) [an unofficial de facto standard, language-independent application programming interface for message passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without needing to rewrite those programs.
NASA Astrophysics Data System (ADS)
Zhang, Kunhua; Cheng, Qiang
2018-07-01
We investigate the crossed Andreev reflection in a ferromagnet–superconductor–ferromagnet junction on the surface of a topological insulator, where the magnetizations in the left and right leads are perpendicular to the surface. We find that the nonlocal transport process can be pure crossed Andreev reflection or pure elastic cotunneling, and the switch between the two processes can be controlled electrically. Pure crossed Andreev reflection appears for all bias voltages in the superconducting energy gap, independent of the configuration of the magnetizations in the two leads. The spin of the crossed Andreev reflected hole can be parallel to the spin of the incident electron, which is brought about by the spin-triplet pairing correlation. The average transmission probability of crossed Andreev reflection can be larger than 90%, so a high-efficiency nonlocal splitting of Cooper pairs can be generated, and turned on and off electrically.
Generation of a sub-half-wavelength focal spot with purely transverse spin angular momentum
NASA Astrophysics Data System (ADS)
Hang, Li; Fu, Jian; Yu, Xiaochang; Wang, Ying; Chen, Peifeng
2017-11-01
We theoretically demonstrate that optical focal fields with purely transverse spin angular momentum (SAM) can be obtained when a special class of incident field is focused by a high-numerical-aperture (NA) aplanatic lens (AL). When the incident pupil fields are refracted by an AL, the two transverse Cartesian components of the electric field at the exit pupil plane do not share the same order of sinusoidal or cosinusoidal components, resulting in zero longitudinal SAM of the focal fields. An incident field satisfying the above conditions is then proposed. Using the Richards-Wolf vectorial diffraction theory, the energy density and SAM density distributions of the tightly focused beam are calculated, and the results clearly validate the proposed theory. In addition, a sub-half-wavelength focal spot with purely transverse SAM can be achieved, and a flattop energy density distribution parallel to the z-axis can be observed around the maximum energy density point.
Radio Synthesis Imaging - A High Performance Computing and Communications Project
NASA Astrophysics Data System (ADS)
Crutcher, Richard M.
The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. 
The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.
The Effective Width of Curved Sheet After Buckling
NASA Technical Reports Server (NTRS)
Wenzek, W A
1938-01-01
This report describes experiments made for the purpose of ascertaining the effective width of circularly curved sheet under pure flexural stress. A relation for the effective width of curved sheets is established. Experiments were made with circular cylinders compressed in the longitudinal direction. The sheets were rigidly built in at the sides parallel to the axis of the cylinder.
Observing with HST V: Improvements to the Scheduling of HST Parallel Observations
NASA Astrophysics Data System (ADS)
Taylor, D. K.; Vanorsow, D.; Lucks, M.; Henry, R.; Ratnatunga, K.; Patterson, A.
1994-12-01
Recent improvements to the Hubble Space Telescope (HST) ground system have significantly increased the frequency of pure parallel observations, i.e. the simultaneous use of multiple HST instruments by different observers. Opportunities for parallel observations are limited by a variety of timing, hardware, and scientific constraints. Formerly, such opportunities were heuristically predicted prior to the construction of the primary schedule (or calendar), and lack of complete information resulted in high rates of scheduling failures and missed opportunities. In the current process the search for parallel opportunities is delayed until the primary schedule is complete, at which point new software tools are employed to identify places where parallel observations are supported. The result has been a considerable increase in parallel throughput. A new technique, known as ``parallel crafting,'' is currently under development to streamline further the parallel scheduling process. This radically new method will replace the standard exposure logsheet with a set of abstract rules from which observation parameters will be constructed ``on the fly'' to best match the constraints of the parallel opportunity. Currently, parallel observers must specify a huge (and highly redundant) set of exposure types in order to cover all possible types of parallel opportunities. Crafting rules permit the observer to express timing, filter, and splitting preferences in a far more succinct manner. The issue of coordinated parallel observations (same PI using different instruments simultaneously), long a troublesome aspect of the ground system, is also being addressed. For Cycle 5, the Phase II Proposal Instructions now have an exposure-level PAR WITH special requirement. While only the primary's alignment will be scheduled on the calendar, new commanding will provide for parallel exposures with both instruments.
Hubble Sees Turquoise-Tinted Plumes in Large Magellanic Cloud
2017-12-08
The brightly glowing plumes seen in this image are reminiscent of an underwater scene, with turquoise-tinted currents and nebulous strands reaching out into the surroundings. However, this is no ocean. This image actually shows part of the Large Magellanic Cloud (LMC), a small nearby galaxy that orbits our galaxy, the Milky Way, and appears as a blurred blob in our skies. The NASA/European Space Agency (ESA) Hubble Space Telescope has peeked many times into this galaxy, releasing stunning images of the whirling clouds of gas and sparkling stars (opo9944a, heic1301, potw1408a). This image shows part of the Tarantula Nebula's outskirts. This famously beautiful nebula, located within the LMC, is a frequent target for Hubble (heic1206, heic1402). In most images of the LMC the color is completely different to that seen here. This is because, in this new image, a different set of filters was used. The customary R filter, which selects the red light, was replaced by a filter letting through the near-infrared light. In traditional images, the hydrogen gas appears pink because it shines most brightly in the red. Here, however, other less prominent emission lines dominate in the blue and green filters. These data are part of the Archival Pure Parallel Project (APPP), a project that gathered together and processed over 1,000 images taken using Hubble's Wide Field Planetary Camera 2, obtained in parallel with other Hubble instruments. Much of the data in the project could be used to study a wide range of astronomical topics, including gravitational lensing and cosmic shear, exploring distant star-forming galaxies, supplementing observations in other wavelength ranges with optical data, and examining star populations from stellar heavyweights all the way down to solar-mass stars. Image Credit: ESA/Hubble & NASA; Acknowledgement: Josh Barrington
Roy, Asim
2017-01-01
The debate about representation in the brain and the nature of the cognitive system has been going on for decades now. This paper examines the neurophysiological evidence, primarily from single cell recordings, to get a better perspective on both issues. After an initial review of some basic concepts, the paper reviews the data from single cell recordings, in cortical columns and of category-selective and multisensory neurons. In neuroscience, columns in the neocortex (cortical columns) are understood to be a basic functional/computational unit. The paper reviews the fundamental discoveries about the columnar organization and finds that it reveals a massively parallel search mechanism. This columnar organization could be the most extensive neurophysiological evidence for the widespread use of localist representation in the brain. The paper also reviews studies of category-selective cells. The evidence for category-selective cells reveals that localist representation is also used to encode complex abstract concepts at the highest levels of processing in the brain. A third major issue is the nature of the cognitive system in the brain and whether there is a form that is purely abstract and encoded by single cells. To provide evidence for a single-cell based purely abstract cognitive system, the paper reviews some of the findings related to multisensory cells. It appears that there is widespread usage of multisensory cells in the brain in the same areas where sensory processing takes place. In addition, there is evidence for abstract modality-invariant cells at higher levels of cortical processing. Overall, that reveals the existence of a purely abstract cognitive system in the brain. The paper also argues that since there is no evidence for dense distributed representation and since sparse representation is actually used to encode memories, there is actually no evidence for distributed representation in the brain.
Overall, it appears that, at an abstract level, the brain is a massively parallel, distributed computing system that is symbolic. The paper also explains how grounded cognition and other theories of the brain are fully compatible with localist representation and a purely abstract cognitive system.
Dendritic Growth with Fluid Flow for Pure Materials
NASA Technical Reports Server (NTRS)
Jeong, Jun-Ho; Dantzig, Jonathan A.; Goldenfeld, Nigel
2003-01-01
We have developed a three-dimensional, adaptive, parallel finite element code to examine solidification of pure materials under conditions of forced flow. We have examined the effect of undercooling, surface tension anisotropy and imposed flow velocity on the growth. The flow significantly alters the growth process, producing dendrites that grow faster, and with greater tip curvature, into the flow. The selection constant decreases slightly with flow velocity in our calculations. The results of the calculations agree well with the transport solution of Saville and Beaghton at high undercooling and high anisotropy. At low undercooling, significant deviations are found. We attribute this difference to the influence of other parts of the dendrite, removed from the tip, on the flow field.
Edge profiles and limiter tests in Extrap T2
NASA Astrophysics Data System (ADS)
Bergsåker, H.; Hedin, G.; Ilyinsky, L.; Larsson, D.; Möller, A.
New edge profile measurements, including calorimetric measurements of the parallel heat flux, were made in Extrap T2. Test limiters of pure molybdenum and the TZM molybdenum alloy have been exposed in the edge plasma. The surface damage was studied, mainly by microscopy. Tungsten-coated graphite probes were also exposed, and the surfaces were studied by microscopy, ion beam analysis, and XPS. In this case, cracking and mixing of carbon and tungsten at the interface were observed in the most heated areas, whereas carbide formation at the surface was seen in less heated areas. In these tests pure Mo generally fared better than TZM, and thinner, cleaner coatings fared better than thicker, less clean ones.
Integration Of An MR Image Network Into A Clinical PACS
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Mankovich, Nicholas J.; Taira, Ricky K.; Cho, Paul S.; Huang, H. K.
1988-06-01
A direct link between a clinical pediatric PACS module and a FONAR MRI image network was implemented. The original MR network combines the MR scanner, a remote viewing station, and a central archiving station. The pediatric PACS directly connects to the archiving unit through an Ethernet TCP/IP network adhering to FONAR's protocol. The PACS communication software developed supports the transfer of patient studies and patient information directly from the MR archive database to the pediatric PACS. In the first phase of our project we developed a package to transfer data between a VAX-11/750 and the IBM PC/AT-based MR archive database through the Ethernet network. This system served as a model for PACS-to-modality network communication. Once testing was complete on this research network, the software and network hardware were moved to the clinical pediatric VAX for full PACS integration. In parallel with the direct transmission of digital images to the pediatric PACS, a broadband communication system in video format was developed for real-time broadcasting of images originating from the MR console to 8 remote viewing stations distributed in the radiology department. These analog viewing stations allow the radiologists to directly monitor patient positioning and to select the scan levels during a patient examination from remote locations in the radiology department. This paper reports (1) the technical details of this implementation, (2) the merits of this network development scheme, and (3) the performance statistics of the network-to-PACS interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.
2012-06-04
Within the ICSTI Insights Series we offer three articles on the 'living publication' that is already available to practitioners in the important field of crystal structure determination and analysis. While the specific examples are drawn from this particular field, we invite readers to draw parallels in their own fields of interest. The first article describes the present state of the crystallographic living publication, already recognized by an ALPSP (Association of Learned and Professional Society Publishers) Award for Publishing Innovation in 2006. The second article describes the potential impact on the record of science as greater post-publication analysis becomes more common within currently accepted data deposition practices, using processed diffraction data as the starting point. The third article outlines a vision for the further improvement of crystallographic structure reports within potentially achievable enhanced data deposition practices, based upon raw (unprocessed) diffraction data. The IUCr in its Commissions and Journals has for many years emphasized the importance of publications being accompanied by data and the interpretation of the data in terms of atomic models. This has been followed as policy by numerous other journals in the field and its cognate disciplines. This practice has been well served by databases and archiving institutions such as the Protein Data Bank (PDB), the Cambridge Crystallographic Data Centre (CCDC), and the Inorganic Crystal Structure Database (ICSD). Normally the models that are archived are interpretations of the data, consisting of atomic coordinates with their displacement parameters, along with processed diffraction data from X-ray, neutron or electron diffraction studies. In our current online age, a reader can not only consult the printed word, but can display and explore the results with molecular graphics software of exceptional quality.
Furthermore, the routine availability of processed diffraction data allows readers to perform direct calculations of the electron density (using X-rays and electrons as probes) or the nuclear density (using neutrons as probe) on which the molecular models are directly based. This current community practice is described in our first article. There are various ways that these data and tools can be used to further analyze the molecules that have been crystallized. Notably, once a set of results is announced via the publication, the research community can start to interact directly with the data and models. This gives the community the opportunity not only to read about the structure, but to examine it in detail, and even generate subsequent improved models. These improved models could, in principle, be archived along with the original interpretation of the data and can represent a continuously improving set of interpretations of a set of diffraction data. The models could improve both by correction of errors in the original interpretation and by the use of new representations of molecules in crystal structures that more accurately represent the contents of a crystal. These possible developments are described in our second article. A current, significant, thrust for the IUCr is whether it would be advantageous for the crystallographic community to require, rather than only encourage, the archiving of the raw (unprocessed) diffraction data images measured from a crystal, a fibre or a solution. This issue is being evaluated in detail by an IUCr Working Group (see http://forums.iucr.org). Such archived raw data would be linked to and from any associated publications. The archiving of raw diffraction data could allow as yet undeveloped processing methods to have access to the originally measured data. The debate within the community about this much larger proposed archiving effort revolves around the issue of 'cost versus benefit'. 
Costs can be minimized by preserving the raw data in local repositories, either at centralized synchrotron and neutron research institutes, or at research universities. Archiving raw data is also perceived as being more effective than archiving only processed data in countering scientific fraud, which exists in our field, albeit at a tiny rate of occurrence. In parallel developments, sensitivities to avoiding research malpractice are encouraging universities to establish their own data repositories for research and academic staff. These various 'raw data archives' would complement the existing processed data archives. These archives could, however, have gaps in their coverage arising from a lack of resources. Nevertheless we believe that a sufficiently large raw data archive, with reasonable global coverage, could be encouraged and would have major benefits. These possible developments, costs and benefits, are described in our third and final article on 'The living publication'.
NASA Astrophysics Data System (ADS)
Kaurkin, M. N.; Ibrayev, R. A.; Belyaev, K. P.
2018-01-01
A parallel realization of the Ensemble Optimal Interpolation (EnOI) data assimilation (DA) method in conjunction with the eddy-resolving global circulation model is implemented. The results of DA experiments in the North Atlantic with the assimilation of the Archiving, Validation and Interpretation of Satellite Oceanographic (AVISO) data from the Jason-1 satellite are analyzed. The results of simulation are compared with the independent temperature and salinity data from the ARGO drifters.
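The EnOI analysis step itself has a compact textbook form: the background-error covariance is held static, built once from a fixed ensemble of anomalies, and only the central state is updated. The sketch below is that textbook form in NumPy, not the authors' implementation; forming the covariance matrix explicitly is only feasible for small state dimensions, and all names are illustrative:

```python
import numpy as np

def enoi_update(xb, y, H, anomalies, R, alpha=1.0):
    """One EnOI analysis step (textbook form).
    xb        : background state, shape (n,)
    y         : observations, shape (p,)
    H         : linear observation operator, shape (p, n)
    anomalies : static ensemble anomalies A, shape (n, m)
    R         : observation-error covariance, shape (p, p)
    alpha     : scaling of the static covariance B = alpha * A A^T / (m - 1)
    """
    m = anomalies.shape[1]
    B = alpha * anomalies @ anomalies.T / (m - 1)   # static background covariance
    S = H @ B @ H.T + R                             # innovation covariance
    # Kalman-gain action: x_a = x_b + B H^T S^{-1} (y - H x_b)
    return xb + B @ H.T @ np.linalg.solve(S, y - H @ xb)
```

Because B is fixed, the ensemble is read once and reused at every assimilation time, which is what makes EnOI far cheaper than a full ensemble Kalman filter.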
chemf: A purely functional chemistry toolkit.
Höck, Stefan; Riedl, Rainer
2012-12-20
Although programming in a type-safe and referentially transparent style offers several advantages over working with mutable data structures and side effects, this style of programming has not seen much use in chemistry-related software. Since functional programming languages were designed with referential transparency in mind, these languages offer a lot of support when writing immutable data structures and side-effect-free code. We therefore started implementing our own toolkit based on the above programming paradigms in a modern, versatile programming language. We present our initial results with functional programming in chemistry by first describing an immutable data structure for molecular graphs together with a couple of simple algorithms to calculate basic molecular properties before writing a complete SMILES parser in accordance with the OpenSMILES specification. Along the way we show how to deal with input validation, error handling, bulk operations, and parallelization in a purely functional way. At the end we also analyze and improve our algorithms and data structures in terms of performance and compare them with existing toolkits, both object-oriented and purely functional. All code was written in Scala, a modern multi-paradigm programming language with strong support for functional programming and a highly sophisticated type system. We have successfully made the first important steps towards a purely functional chemistry toolkit. The data structures and algorithms presented in this article perform well while at the same time they can be safely used in parallelized applications, such as computer aided drug design experiments, without further adjustments. This stands in contrast to existing object-oriented toolkits where thread safety of data structures and algorithms is a deliberate design decision that can be hard to implement.
Finally, the level of type-safety achieved by Scala highly increased the reliability of our code as well as the productivity of the programmers involved in this project.
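The core idea the authors describe, an immutable molecular graph whose "mutations" return new values, can be sketched outside Scala as well. The toy below transliterates it into Python with frozen dataclasses; the types and methods are hypothetical stand-ins, not the chemf API, but they show why such structures are thread-safe by construction:

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Bond:
    a: int      # index of the lower-numbered atom
    b: int      # index of the higher-numbered atom
    order: int  # 1 = single, 2 = double, ...

@dataclass(frozen=True)
class Molecule:
    atoms: Tuple[str, ...]   # element symbols by index
    bonds: FrozenSet[Bond]

    def add_atom(self, symbol: str) -> "Molecule":
        # "Mutation" returns a new value; the original is untouched,
        # so molecules can be shared freely between parallel workers.
        return Molecule(self.atoms + (symbol,), self.bonds)

    def add_bond(self, a: int, b: int, order: int = 1) -> "Molecule":
        return Molecule(self.atoms, self.bonds | {Bond(*sorted((a, b)), order)})

    def degree(self, i: int) -> int:
        # Number of bonds touching atom i.
        return sum(1 for bd in self.bonds if i in (bd.a, bd.b))
```

Building water as `Molecule(("O",), frozenset()).add_atom("H").add_atom("H").add_bond(0, 1).add_bond(0, 2)` leaves every intermediate value intact, which is the referential transparency the abstract credits for safe parallel use.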
Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm, which mitigates the computational-cost deficiencies of pure DSMC in the near-continuum regime. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time, and has fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load-balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
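The space-filling-curve idea mentioned above can be made concrete with a small sketch. Assuming a 2D Cartesian cell grid and per-cell particle counts as weights (hypothetical simplifications of the 3D FP-DSMC setting), a Morton (Z-order) key orders the cells so that contiguous chunks of the sorted list are spatially compact, which makes greedy prefix-sum splitting a reasonable load balancer:

```python
def morton2d(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of (x, y) into a Z-order (Morton) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return key

def partition(cells, weights, nranks):
    """Sort cells along the curve, then cut the list into nranks contiguous,
    weight-balanced chunks (greedy prefix-sum splitting)."""
    order = sorted(range(len(cells)), key=lambda i: morton2d(*cells[i]))
    target = sum(weights) / nranks
    parts, cur, acc = [[] for _ in range(nranks)], 0, 0.0
    for i in order:
        # Advance to the next rank once its cumulative-weight quota is met.
        if acc >= target * (cur + 1) and cur < nranks - 1:
            cur += 1
        parts[cur].append(cells[i])
        acc += weights[i]
    return parts
```

Because cells adjacent on the curve are usually spatial neighbors, each rank's chunk stays compact and inter-rank communication remains limited even as weights change during refinement.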
NASA Astrophysics Data System (ADS)
Lu, San; Artemyev, A. V.; Angelopoulos, V.
2017-11-01
Magnetotail current sheet thinning is a distinctive feature of substorm growth phase, during which magnetic energy is stored in the magnetospheric lobes. Investigation of charged particle dynamics in such thinning current sheets is believed to be important for understanding the substorm energy storage and the current sheet destabilization responsible for substorm expansion phase onset. We use Time History of Events and Macroscale Interactions during Substorms (THEMIS) B and C observations in 2008 and 2009 at 18 - 25 RE to show that during magnetotail current sheet thinning, the electron temperature decreases (cooling), and the parallel temperature decreases faster than the perpendicular temperature, leading to a decrease of the initially strong electron temperature anisotropy (isotropization). This isotropization cannot be explained by pure adiabatic cooling or by pitch angle scattering. We use test particle simulations to explore the mechanism responsible for the cooling and isotropization. We find that during the thinning, a fast decrease of a parallel electric field (directed toward the Earth) can speed up the electron parallel cooling, causing it to exceed the rate of perpendicular cooling, and thus lead to isotropization, consistent with observation. If the parallel electric field is too small or does not change fast enough, the electron parallel cooling is slower than the perpendicular cooling, so the parallel electron anisotropy grows, contrary to observation. The same isotropization can also be accomplished by an increasing parallel electric field directed toward the equatorial plane. Our study reveals the existence of a large-scale parallel electric field, which plays an important role in magnetotail particle dynamics during the current sheet thinning process.
Subcritical crack growth in soda-lime glass in combined mode I and mode II loading
NASA Technical Reports Server (NTRS)
Singh, Dileep; Shetty, Dinesh K.
1990-01-01
Subcritical crack growth under mixed-mode loading was studied in soda-lime glass. Pure mode I, combined mode I and mode II, and pure mode II loadings were achieved in precracked disk specimens by loading in diametral compression at selected angles with respect to the symmetric radial crack. Crack growth was monitored by measuring the resistance changes in a microcircuit grid consisting of parallel, electrically conducting grid lines deposited on the surface of the disk specimens by photolithography. Subcritical crack growth rates in pure mode I, pure mode II, and combined mode I and mode II loading could be described by an exponential relationship between crack growth rate and an effective crack driving force derived from a mode I-mode II fracture toughness envelope. The effective crack driving force was based on an empirical representation of the noncoplanar strain energy release rate. Stress intensities for kinked cracks were assessed using the method of caustics and an initial decrease and a subsequent increase in the subcritical crack growth rates of kinked cracks were shown to correlate with the variations of the mode I and the mode II stress intensities.
Pure F-actin networks are distorted and branched by steps in the critical-point drying method.
Resch, Guenter P; Goldie, Kenneth N; Hoenger, Andreas; Small, J Victor
2002-03-01
Elucidation of the ultrastructural organization of actin networks is crucial for understanding the molecular mechanisms underlying actin-based motility. Results obtained from cytoskeletons and actin comets prepared by the critical-point procedure, followed by rotary shadowing, support recent models incorporating actin filament branching as a main feature of lamellipodia and pathogen propulsion. Since actin branches were not evident in earlier images obtained by negative staining, we explored how these differences arise. Accordingly, we have followed the structural fate of dense networks of pure actin filaments subjected to steps of the critical-point drying protocol. The filament networks have been visualized in parallel by both cryo-electron microscopy and negative staining. Our results demonstrate the selective creation of branches and other artificial structures in pure F-actin networks by the critical-point procedure and challenge the reliability of this method for preserving the detailed organization of actin assemblies that drive motility.
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
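The two-stage structure of PGICA, independent per-subject reductions that parallelize trivially followed by a group-level decomposition, can be sketched as follows. This is a minimal illustration only: plain SVD stands in for the likelihood-based ICA stages of the actual R package, and the data are random.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def reduce_subject(data, n_comp):
    """Stage 1: per-subject dimension reduction (plain SVD here);
    each subject is independent, so these calls can run in parallel."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    return vt[:n_comp]  # n_comp spatial basis vectors

def group_decompose(reduced_stack, n_comp):
    """Stage 2: group-level decomposition on the stacked per-subject
    results (again SVD, standing in for the group ICA step)."""
    stacked = np.vstack(reduced_stack)
    u, s, vt = np.linalg.svd(stacked, full_matrices=False)
    return vt[:n_comp]

rng = np.random.default_rng(0)
n_subjects, n_time, n_voxels, n_comp = 8, 50, 200, 5
subjects = [rng.standard_normal((n_time, n_voxels)) for _ in range(n_subjects)]

# Stage 1 in parallel across subjects, then one collective stage 2.
with ThreadPoolExecutor() as pool:
    reduced = list(pool.map(lambda d: reduce_subject(d, n_comp), subjects))
group_maps = group_decompose(reduced, n_comp)
print(group_maps.shape)  # (5, 200)
```

The point is architectural: only stage 2 touches all subjects at once, so memory and compute for the expensive per-subject work scale out across processors.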
Crystal Orientation Controlled Photovoltaic Properties of Multilayer GaAs Nanowire Arrays.
Han, Ning; Yang, Zai-Xing; Wang, Fengyun; Yip, SenPo; Li, Dapan; Hung, Tak Fu; Chen, Yunfa; Ho, Johnny C
2016-06-28
In recent years, despite significant progress in the synthesis, characterization, and integration of various nanowire (NW) material systems, crystal orientation controlled NW growth as well as real-time assessment of their growth-structure-property relationships still presents one of the major challenges in deploying NWs for practical large-scale applications. In this study, we propose, design, and develop a multilayer NW printing scheme for the determination of crystal orientation controlled photovoltaic properties of parallel GaAs NW arrays. By tuning the catalyst thickness and nucleation and growth temperatures in the two-step chemical vapor deposition, crystalline GaAs NWs with uniform, pure ⟨110⟩ and ⟨111⟩ orientations and other mixture ratios can be successfully prepared. Employing lift-off resists, three-layer NW parallel arrays can be easily attained for X-ray diffraction in order to evaluate their growth orientation, along with the fabrication of NW parallel array based Schottky photovoltaic devices for the subsequent performance assessment. Notably, the open-circuit voltage of purely ⟨111⟩-oriented NW arrayed cells is far higher than that of ⟨110⟩-oriented NW arrayed counterparts, which can be interpreted by the different surface Fermi level pinning that exists on various NW crystal surface planes due to the different As dangling bond densities. All this indicates the profound effect of NW crystal orientation on the physical and chemical properties of GaAs NWs, suggesting careful NW design considerations for achieving optimal photovoltaic performance. The approach presented here could also serve as a versatile and powerful platform for in situ characterization of other NW materials.
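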
Li, Wenlong; Jiao, Changhong; Li, Xin; Xie, Yongshu; Nakatani, Keitaro; Tian, He; Zhu, Weihong
2014-04-25
Exhibiting both solvent independence and excellent thermal bistability, the benzobis(thiadiazole)-bridged diarylethene system provides an efficient approach to realizing extremely high photocyclization quantum yields (Φo-c up to 90.6%), both by completely separating the pure anti-parallel conformer and by suppressing intramolecular charge transfer (ICT). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Towards Exascale Seismic Imaging and Inversion
NASA Astrophysics Data System (ADS)
Tromp, J.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Lei, W.; Ruan, Y.
2015-12-01
Post-petascale supercomputers are now available to solve complex scientific problems that were thought unreachable a few decades ago. They also bring a cohort of concerns tied to obtaining optimum performance. Several issues are currently being investigated by the HPC community, including energy consumption, fault resilience, scalability of the current parallel paradigms, workflow management, I/O performance, and feature extraction from large datasets. In this presentation, we focus on the last three issues. In the context of seismic imaging and inversion, in particular for simulations based on adjoint methods, workflows are well defined. They consist of a few collective steps (e.g., mesh generation or model updates) and of a large number of independent steps (e.g., forward and adjoint simulations of each seismic event, pre- and postprocessing of seismic traces). The greater goal is to reduce the time to solution, that is, obtaining a more precise representation of the subsurface as fast as possible. This brings us to consider both the workflow in its entirety and the parts comprising it. The usual approach is to speed up the purely computational parts through code optimization in order to reach higher FLOPS and better memory management. This remains an important concern, but larger scale experiments show that the imaging workflow suffers from severe I/O bottlenecks. Such limitations occur both for purely computational data and for seismic time series. The latter are dealt with by the introduction of a new Adaptable Seismic Data Format (ASDF). Parallel I/O libraries, namely HDF5 and ADIOS, are used to drastically reduce the cost of disk access. Parallel visualization tools, such as VisIt, are able to take advantage of ADIOS metadata to extract features and display massive datasets.
Because large parts of the workflow are embarrassingly parallel, we are investigating the possibility of automating the imaging process with the integration of scientific workflow management tools, specifically Pegasus.
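Because the per-event simulations are independent, the outer inversion loop has the map-then-reduce shape described above: a parallel fan-out over events, then one collective model update. A minimal sketch, with a scalar "model" and fabricated per-event gradients standing in for the actual forward/adjoint solves:

```python
import concurrent.futures as cf

def simulate_event(event_id, model):
    """Stand-in for one forward/adjoint simulation of a seismic event:
    returns that event's (fabricated) gradient contribution."""
    return (event_id + 1) * model

def update_model(model, gradients, step=0.01):
    """Collective step: combine all event gradients into one update."""
    return model - step * sum(gradients) / len(gradients)

model = 1.0
for iteration in range(3):                    # outer inversion loop
    with cf.ThreadPoolExecutor() as pool:     # embarrassingly parallel events
        grads = list(pool.map(lambda e: simulate_event(e, model), range(10)))
    model = update_model(model, grads)        # collective model update
print(model)
```

A workflow manager such as Pegasus plays the role of the executor here, but across cluster nodes and with data staging between the fan-out and the collective step.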
Brennan; Biddison; Frauendorf; Schwarcz; Keen; Ecker; Davis; Tinder; Swayze
1998-01-01
An automated, 96-well parallel array synthesizer for solid-phase organic synthesis has been designed and constructed. The instrument employs a unique reagent array delivery format, in which each reagent utilized has a dedicated plumbing system. An inert atmosphere is maintained during all phases of a synthesis, and temperature can be controlled via a thermal transfer plate which holds the injection molded reaction block. The reaction plate assembly slides in the X-axis direction, while eight nozzle blocks holding the reagent lines slide in the Y-axis direction, allowing for the extremely rapid delivery of any of 64 reagents to 96 wells. In addition, there are six banks of fixed nozzle blocks, which deliver the same reagent or solvent to eight wells at once, for a total of 72 possible reagents. The instrument is controlled by software which allows the straightforward programming of the synthesis of a large number of compounds. This is accomplished by supplying a general synthetic procedure in the form of a command file, which calls upon certain reagents to be added to specific wells via lookup in a sequence file. The bottle position, flow rate, and concentration of each reagent are stored in a separate reagent table file. To demonstrate the utility of the parallel array synthesizer, a small combinatorial library of hydroxamic acids was prepared in high throughput mode for biological screening. Approximately 1300 compounds were prepared on a 10 μmole scale (3-5 mg) in a few weeks. The resulting crude compounds were generally >80% pure, and were utilized directly for high throughput screening in antibacterial assays. Several active wells were found, and the activity was verified by solution-phase synthesis of analytically pure material, indicating that the system described herein is an efficient means for the parallel synthesis of compounds for lead discovery. Copyright 1998 John Wiley & Sons, Inc.
Fortran code for SU(3) lattice gauge theory with and without MPI checkerboard parallelization
NASA Astrophysics Data System (ADS)
Berg, Bernd A.; Wu, Hao
2012-10-01
We document plain Fortran and Fortran MPI checkerboard code for Markov chain Monte Carlo simulations of pure SU(3) lattice gauge theory with the Wilson action in D dimensions. The Fortran code uses periodic boundary conditions and is suitable for pedagogical purposes and small scale simulations. For the Fortran MPI code two geometries are covered: the usual torus with periodic boundary conditions and the double-layered torus as defined in the paper. Parallel computing is performed on checkerboards of sublattices, which partition the full lattice in one, two, and so on, up to D directions (depending on the parameters set). For updating, the Cabibbo-Marinari heatbath algorithm is used. We present validations and test runs of the code. Performance is reported for a number of currently used Fortran compilers and, when applicable, MPI versions. For the parallelized code, performance is studied as a function of the number of processors.
Program summary
Program title: STMC2LSU3MPI
Catalogue identifier: AEMJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 26666
No. of bytes in distributed program, including test data, etc.: 233126
Distribution format: tar.gz
Programming language: Fortran 77 compatible with the use of Fortran 90/95 compilers, in part with MPI extensions.
Computer: Any capable of compiling and executing Fortran 77 or Fortran 90/95, when needed with MPI extensions.
Operating system: Red Hat Enterprise Linux Server 6.1 with OpenMPI + pgf77 11.8-0, Centos 5.3 with OpenMPI + gfortran 4.1.2, Cray XT4 with MPICH2 + pgf90 11.2-0.
Has the code been vectorised or parallelized?: Yes, parallelized using MPI extensions.
Number of processors used: 2 to 11664
RAM: 200 megabytes per process.
Classification: 11.5.
Nature of problem: Physics of pure SU(3) Quantum Field Theory (QFT). This is relevant for our understanding of Quantum Chromodynamics (QCD). It includes the glueball spectrum, topological properties and the deconfining phase transition of pure SU(3) QFT. For instance, Relativistic Heavy Ion Collision (RHIC) experiments at the Brookhaven National Laboratory provide evidence that quarks confined in hadrons undergo at high enough temperature and pressure a transition into a Quark-Gluon Plasma (QGP). Investigations of its thermodynamics in pure SU(3) QFT are of interest.
Solution method: Markov Chain Monte Carlo (MCMC) simulations of SU(3) Lattice Gauge Theory (LGT) with the Wilson action. This is a regularization of pure SU(3) QFT on a hypercubic lattice, which allows approaching the continuum SU(3) QFT by means of Finite Size Scaling (FSS) studies. Specifically, we provide updating routines for the Cabibbo-Marinari heatbath with and without checkerboard parallelization. While the first is suitable for pedagogical purposes and small scale projects, the latter allows for efficient parallel processing. Targeting the geometry of RHIC experiments, we have implemented a Double-Layered Torus (DLT) lattice geometry, which has previously not been used in LGT MCMC simulations and enables inside and outside layers at distinct temperatures, the lower-temperature layer acting as the outside boundary for the higher-temperature layer, where the deconfinement transition goes on.
Restrictions: The checkerboard partition of the lattice makes the development of measurement programs more tedious than is the case for an unpartitioned lattice. Presently, only one measurement routine for Polyakov loops is provided.
Unusual features: We provide three different versions for the send/receive function of the MPI library, which work for different operating system + compiler + MPI combinations.
This involves activating the correct row among the last three rows of our latmpi.par parameter file; the underlying reason is distinct buffer conventions.
Running time: For a typical run using an Intel i7 processor, it takes (1.8-6)×10⁻⁶ seconds to update one link of the lattice, depending on the compiler used. For example, a simulation on a small (4×8³) DLT lattice with a statistics of 2²¹ sweeps (i.e., updating the two lattice layers of 4×(4×8³) links each 2²¹ times) needs a total CPU time of about 2 × 4 × (4×8³) × 2²¹ × 3×10⁻⁶ seconds ≈ 1.7×10³ minutes, where 2 is the number of lattice layers, 4 the number of dimensions, 4×8³ the lattice size, 2²¹ the number of update sweeps, and 3×10⁻⁶ s the average time to update one link variable. If we divide the job into 8 parallel processes, the real time (for negligible communication overhead) is 1.7×10³ min / 8 ≈ 0.2×10³ min.
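The checkerboard idea behind the parallelization can be checked in a few lines: on a lattice with even extent, every nearest neighbor of a site has the opposite parity, so all sites (and the links attached to them) of one color can be updated simultaneously without data races. A toy verification in Python; the SU(3) heatbath itself is omitted, and the 4⁴ lattice is just a small example:

```python
from itertools import product

def parity(site):
    """Checkerboard color of a lattice site: sum of coordinates mod 2."""
    return sum(site) % 2

L, D = 4, 4                               # 4^4 periodic lattice
sites = list(product(range(L), repeat=D))
for site in sites:
    for mu in range(D):
        neigh = list(site)
        neigh[mu] = (neigh[mu] + 1) % L   # periodic boundary condition
        # A link update reads staples from neighboring sites only,
        # which always carry the opposite checkerboard color:
        assert parity(tuple(neigh)) != parity(site)
print("all", len(sites), "sites: neighbors have opposite parity")
```

This is why each MPI process can sweep one color of its sublattice while only exchanging boundary data of the other color.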
Integrating the ODI-PPA scientific gateway with the QuickReduce pipeline for on-demand processing
NASA Astrophysics Data System (ADS)
Young, Michael D.; Kotulla, Ralf; Gopu, Arvind; Liu, Wilson
2014-07-01
As imaging systems improve, the size of astronomical data has continued to grow, making the transfer and processing of data a significant burden. To solve this problem for the WIYN Observatory One Degree Imager (ODI), we developed the ODI-Portal, Pipeline, and Archive (ODI-PPA) science gateway, integrating the data archive, data reduction pipelines, and a user portal. In this paper, we discuss the integration of the QuickReduce (QR) pipeline into PPA's Tier 2 processing framework. QR is a set of parallelized, stand-alone Python routines accessible to all users, as well as to operators, who can create master calibration products and produce standardized calibrated data with a short turn-around time. Upon completion, the data are ingested into the archive and portal, and made available to authorized users. Quality metrics and diagnostic plots are generated and presented via the portal for operator approval and user perusal. Additionally, users can tailor the calibration process to their specific science objective(s) by selecting custom datasets, applying preferred master calibrations or generating their own, and selecting pipeline options. Submission of a QuickReduce job initiates data staging, pipeline execution, and ingestion of output data products, all while allowing the user to monitor the process status and to download or further process/analyze the output within the portal. User-generated data products are placed into a private user-space within the portal. ODI-PPA leverages cyberinfrastructure at Indiana University including the Big Red II supercomputer, the Scholarly Data Archive tape system, and the Data Capacitor shared file system.
What will the future of cloud-based astronomical data processing look like?
NASA Astrophysics Data System (ADS)
Green, Andrew W.; Mannering, Elizabeth; Harischandra, Lloyd; Vuong, Minh; O'Toole, Simon; Sealey, Katrina; Hopkins, Andrew M.
2017-06-01
Astronomy is rapidly approaching an impasse: very large datasets require remote or cloud-based parallel processing, yet many astronomers still try to download the data and develop serial code locally. Astronomers understand the need for change, but the hurdles remain high. We are developing a data archive designed from the ground up to simplify and encourage cloud-based parallel processing. While the volume of data we host remains modest by some standards, it is still large enough that download and processing times are measured in days and even weeks. We plan to implement a Python-based, notebook-like interface that automatically parallelises execution. Our goal is to provide an interface sufficiently familiar and user-friendly that it encourages the astronomer to run their analysis on our system in the cloud: astroinformatics as a service. We describe how our system addresses the approaching impasse in astronomy using the SAMI Galaxy Survey as an example.
NASA Astrophysics Data System (ADS)
Bouwens, Rychard; Trenti, Michele; Calvi, Valentina; Bernard, Stephanie; Labbe, Ivo; Oesch, Pascal; Coe, Dan; Holwerda, Benne; Bradley, Larry; Mason, Charlotte; Schmidt, Kasper; Illingworth, Garth
2015-10-01
Hubble's WFC3 has been a game changer for studying early galaxy formation in the first 700 Myr after the Big Bang. Reliable samples of sources up to z ~ 10, which can be discovered only from space, are now constraining the evolution of the galaxy luminosity function into the epoch of reionization. Despite these efforts, the size of the highest redshift galaxy samples (z > 9 and especially z > 10) is still very small, particularly at high luminosities (L > L*). To deliver transformational results, much larger numbers of bright z > 9 galaxies are needed, both to map out the bright end of the luminosity/mass function and for spectroscopic follow-up (with JWST and otherwise). One especially efficient way of expanding current samples is (1) to leverage the huge amounts of pure-parallel data available with HST to identify large numbers of candidate z ~ 9 - 11 galaxies and (2) to follow up each candidate with shallow Spitzer/IRAC observations to distinguish the bona fide z ~ 9 - 11 galaxies from z ~ 2 old, dusty galaxies. For this program we are requesting shallow Spitzer/IRAC follow-up of 20 candidate z ~ 9 - 11 galaxies we have identified from 130 WFC3/IR pointings obtained from more than 4 separate HST programs with no existing IRAC coverage. Based on our previous CANDELS/GOODS searches, we expect to confirm 5 to 10 sources as L > L* galaxies at z >= 9. Our results will be used to constrain the bright end of the LF at z >= 9, to provide targets for Keck spectroscopy to constrain the ionization state of the z > 8 universe, and to furnish JWST with bright targets for spectroscopic follow-up studies.
Improving operating room productivity via parallel anesthesia processing.
Brown, Michael J; Subramanian, Arun; Curry, Timothy B; Kor, Daryl J; Moran, Steven L; Rohleder, Thomas R
2014-01-01
Parallel processing of regional anesthesia may improve operating room (OR) efficiency in patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR in parallel increases total cases per day and improves efficiency and productivity. Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes of regional anesthesia performed within the OR and, in a parallel manner, outside it. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, the predicted end-of-day overtime. Modeling results show that parallel processing of regional anesthesia increases the average cases per day for all surgeons included in the study; the average increase was 0.42 surgeries per day. Where it was assumed that three cases per day would be performed by all surgeons, the number of days going into overtime was reduced by 43 percent with parallel blocks. The overtime with parallel anesthesia was also projected to be 40 minutes less per day per surgeon. Key limitations include the assumption that all cases used regional anesthesia in the comparisons; many days may include both regional and general anesthesia. Also, as a single-center case study, the research may have limited generalizability. Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources for parallel anesthesia processing, efficiency and productivity can be significantly improved. Simulation modeling can be an effective tool to show practice change effects at a system-wide level.
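The scheduling argument can be made concrete with a toy makespan calculation: when the next patient's block is performed outside the OR in parallel, block time overlaps the preceding case and only the first block stays on the critical path. The durations below are hypothetical illustrations, not the paper's data, and the paper's actual analysis used a full simulation model:

```python
def day_makespan(n_cases, block, surgery, turnover, parallel):
    """Total OR time (minutes) for n identical cases. With parallel
    anesthesia processing, only the first block adds to the makespan;
    later blocks happen outside the OR during the previous case."""
    if parallel:
        return block + n_cases * surgery + (n_cases - 1) * turnover
    return n_cases * (block + surgery) + (n_cases - 1) * turnover

serial = day_makespan(3, block=30, surgery=150, turnover=25, parallel=False)
overlap = day_makespan(3, block=30, surgery=150, turnover=25, parallel=True)
print(serial, overlap, serial - overlap)  # 590 530 60
```

With these made-up numbers, three cases finish 60 minutes earlier, the same kind of end-of-day overtime reduction the study quantifies by simulation.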
Role of a Modulator in the Synthesis of Phase-Pure NU-1000.
Webber, Thomas E; Liu, Wei-Guang; Desai, Sai Puneet; Lu, Connie C; Truhlar, Donald G; Penn, R Lee
2017-11-15
NU-1000 is a robust, mesoporous metal-organic framework (MOF) with hexazirconium nodes ([Zr₆O₁₆H₁₆]⁸⁺, referred to as oxo-Zr₆ nodes) that can be synthesized by combining a solution of ZrOCl₂·8H₂O and a benzoic acid modulator in N,N-dimethylformamide with a solution of linker (1,3,6,8-tetrakis(p-benzoic acid)pyrene, referred to as H₄TBAPy) and by aging at an elevated temperature. Typically, the resulting crystals are primarily composed of NU-1000 domains that crystallize with a more dense phase that shares structural similarity with NU-901, an MOF composed of the same linker molecules and nodes. Density differences between the two polymorphs arise from differences in node orientation: in NU-1000, the oxo-Zr₆ nodes rotate 120° from node to node, whereas in NU-901, all nodes are aligned in parallel. This structural difference leads to the hypothesis that changing the modulator from benzoic acid to a larger and more rigid biphenyl-4-carboxylic acid might lead to a stronger steric interaction between the modulator coordinating on the oxo-Zr₆ node and misaligned nodes or linkers in the large pore and inhibit the growth of the more dense NU-901-like material, resulting in phase-pure NU-1000. Side-by-side reactions comparing the products of synthesis using benzoic acid or biphenyl-4-carboxylic acid as a modulator produce structurally heterogeneous crystals and phase-pure NU-1000 crystals, respectively. It can be concluded that the larger and more rigid biphenyl-4-carboxylate inhibits the incorporation of nodes with an alignment parallel to the neighboring nodes already residing in the crystal.
NASA Technical Reports Server (NTRS)
Wilder, F. D.; Ergun, R. E.; Schwartz, S. J.; Newman, D. L.; Eriksson, S.; Stawarz, J. E.; Goldman, M. V.; Goodrich, K. A.; Gershman, D. J.; Malaspina, D.;
2016-01-01
On 8 September 2015, the four Magnetospheric Multiscale spacecraft encountered a Kelvin-Helmholtz unstable magnetopause near the dusk flank. The spacecraft observed periodic compressed current sheets, between which the plasma was turbulent. We present observations of large-amplitude (up to 100 mV/m) oscillations in the electric field. Because these oscillations are purely parallel to the background magnetic field, electrostatic, and below the ion plasma frequency, they are likely to be ion acoustic-like waves. These waves are observed in a turbulent plasma where multiple particle populations are intermittently mixed, including cold electrons with energies less than 10 eV. Stability analysis suggests a cold electron component is necessary for wave growth.
Complementary spin transistor using a quantum well channel.
Park, Youn Ho; Choi, Jun Woo; Kim, Hyung-Jun; Chang, Joonyeon; Han, Suk Hee; Choi, Heon-Jin; Koo, Hyun Cheol
2017-04-20
In order to utilize the spin field effect transistor in logic applications, the development of two types of complementary transistors, which play roles of the n- and p-type conventional charge transistors, is an essential prerequisite. In this research, we demonstrate complementary spin transistors consisting of two types of devices, namely parallel and antiparallel spin transistors using InAs based quantum well channels and exchange-biased ferromagnetic electrodes. In these spin transistors, the magnetization directions of the source and drain electrodes are parallel or antiparallel, respectively, depending on the exchange bias field direction. Using this scheme, we also realize a complementary logic operation purely with spin transistors controlled by the gate voltage, without any additional n- or p-channel transistor.
An illness in the family: Dr. Maude Abbott and her sister, Alice Abbott.
Brookes, Barbara
2011-01-01
This paper explores Maude Abbott's internationally significant career in medicine and her parallel commitment to caring for her sister, Alice Abbott. An examination of Abbott's life reveals the difficulties faced by an ambitious Canadian woman in medicine from the 1890s to the 1920s; difficulties compounded by caring for a sister with a mental illness. The Abbott archive suggests that it was far more difficult for a woman doctor to make the kind of sharp distinction between public and private life that might be expected of professional men.
Parallelizing SHA-256, SHA-1 and MD5 and AES on the Cell Broadband Engine
2010-10-25
Figure 4: The PPU repeatedly tasks jobs to the SPU and consumes the results. [The remainder of this record is extraction residue from the report's pseudocode listing, its references (including Cryptology ePrint Archive, Report 2007/061, http://eprint.iacr.org/), and the standard report documentation page.]
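The job loop in Figure 4 is a standard producer/consumer pattern: the PPU repeatedly sends jobs to an SPU and a DONE signal terminates the loop. A generic Python sketch of that pattern; the queue, thread, and sentinel here are stand-ins for the report's actual Cell mailbox signalling, not its code:

```python
import queue
import threading

jobs = queue.Queue()
results = queue.Queue()
DONE = object()                     # sentinel playing the role of the DONE signal

def spu_worker():
    """Stand-in for an SPU: take a job, process it, report the result."""
    while True:
        job = jobs.get()
        if job is DONE:
            break                   # DONE received: stop consuming
        results.put(job * job)      # "processing" is just squaring here

worker = threading.Thread(target=spu_worker)
worker.start()
for j in range(5):                  # the PPU repeatedly tasks jobs...
    jobs.put(j)
jobs.put(DONE)                      # ...then sends the DONE signal
worker.join()
out = sorted(results.get() for _ in range(5))
print(out)  # [0, 1, 4, 9, 16]
```

On the Cell, each of several SPUs would run such a loop in parallel while the PPU consumes results as they arrive.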
Evaluation, development, and characterization of superconducting materials for space applications
NASA Technical Reports Server (NTRS)
Thorpe, Arthur N.
1990-01-01
The anisotropic electromagnetic features of a grain-aligned YBa2Cu3O(x) bulk sample derived from a process of long-time partial melt growth were investigated by the measurements of direct current magnetization (at 77 K) and alternating current susceptibility as a function of temperature, with the fields applied parallel and perpendicular to the c axis, respectively. The extended Bean model was further studied and applied to explain the experimental results. Upon comparison of the grain-aligned sample with pure single crystal materials, it is concluded that, because of the existence of more effective pinning sites in the grain-aligned sample, not only is its critical current density perpendicular to the c axis improved, but the one parallel to the c axis is improved even more significantly. The anisotropy in the critical current densities in the grain-aligned sample at 77 K is at least one to two orders of magnitude smaller than in the pure single crystal. The measurement of anisotropy of alternating current susceptibility as a function of temperature, especially its imaginary part, shows that there are still some residues of interlayer weak links in the grain-aligned samples, but they are quite different from and far less serious than the weak links in the sintered sample.
Time-Domain Pure-state Polarization Analysis of Surface Waves Traversing California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J; Walter, W R; Lay, T
A time-domain pure-state polarization analysis method is used to characterize surface waves traversing California parallel to the plate boundary. The method is applied to data recorded at four broadband stations in California from twenty-six large, shallow earthquakes which occurred since 1988, yielding polarization parameters such as the ellipticity, Euler angles, instantaneous periods, and wave incident azimuths. The earthquakes are located along the circum-Pacific margin and the ray paths cluster into two groups, with great-circle paths connecting stations MHC and PAS or CMB and GSC. The first path (MHC-PAS) is in the vicinity of the San Andreas Fault System (SAFS), and the second (CMB-GSC) traverses the Sierra Nevada Batholith parallel to and east of the SAFS. Both Rayleigh and Love wave data show refractions due to lateral velocity heterogeneities under the path, indicating that accurate phase velocity and attenuation analysis requires array measurements. The Rayleigh waves are strongly affected by low velocity anomalies beneath Central California, with ray paths bending eastward as waves travel toward the south, while Love waves are less affected, providing observables to constrain the depth extent of the anomalies. Strong lateral gradients in the lithospheric structure between the continent and the ocean are the likely cause of the path deflections.
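Polarization parameters such as ellipticity can be estimated directly from particle-motion data. As a simplified stand-in for the paper's time-domain pure-state method, the sketch below computes the minor/major axis ratio from the eigenvalues of the motion covariance matrix, here for a synthetic elliptical orbit with a known 2:1 axis ratio:

```python
import numpy as np

def ellipticity(x, y):
    """Minor/major axis ratio of particle motion, from the eigenvalues
    of the 2x2 covariance matrix (a simplified time-domain estimate)."""
    c = np.cov(np.vstack([x, y]))
    w = np.sort(np.linalg.eigvalsh(c))   # ascending eigenvalues
    return float(np.sqrt(w[0] / w[1]))

# Synthetic elliptical particle motion: major axis 3, minor axis 1.5.
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x = 3.0 * np.cos(t)
y = 1.5 * np.sin(t)
print(round(ellipticity(x, y), 3))  # 0.5
```

The full pure-state analysis additionally recovers Euler angles, instantaneous periods, and incident azimuths, which this two-component toy does not attempt.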
NASA Astrophysics Data System (ADS)
Nickelsen, Simin; Moghadam, Afsaneh Dorri; Ferguson, J. B.; Rohatgi, Pradeep
2015-10-01
In the present study, the wetting behavior of surfaces of various common metallic materials used in the water industry, including C84400 brass, commercially pure aluminum (99.0% pure), nickel-molybdenum alloy (Hastelloy C22), and 316 stainless steel, was investigated; the surfaces were prepared by mechanical abrasion, and the contact angles of the materials were measured after abrasion. A model to estimate the roughness factor, Rf, and the fraction of solid/oil interface, f_so, for surfaces prepared by mechanical abrasion is proposed, based on the assumption that abrasive particles acting on a metallic surface produce scratches parallel to each other, each with a semi-round cross-section. The model geometrically describes the relation between sandpaper particle size and the water/oil contact angle predicted by both the Wenzel and Cassie-Baxter contact types, which can then be compared with experimental data to determine which regime is active. Results show that brass and Hastelloy followed Cassie-Baxter behavior, aluminum followed Wenzel behavior, and stainless steel exhibited a transition from Wenzel to Cassie-Baxter. Microstructural studies have also been done to rule out effects beyond the Wenzel and Cassie-Baxter theories, such as the size of structural details.
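The two wetting regimes compared in the study follow the standard Wenzel (cos θ* = Rf · cos θ) and Cassie-Baxter (cos θ* = f·(cos θ + 1) − 1) relations; for the assumed semi-circular scratch cross-section, the roughness factor is the arc-to-chord ratio Rf = π/2. A sketch with hypothetical inputs: the intrinsic angle 110° and f_so = 0.6 are illustrative values, not the paper's measurements:

```python
import math

def wenzel(theta_deg, rf):
    """Wenzel: cos(theta*) = Rf * cos(theta), full wetting of the roughness."""
    c = max(-1.0, min(1.0, rf * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter(theta_deg, f_so):
    """Cassie-Baxter: cos(theta*) = f*(cos(theta) + 1) - 1, composite interface."""
    c = f_so * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

rf = math.pi / 2  # semi-circular groove: arc length / chord = pi/2
print(round(wenzel(110.0, rf), 1), round(cassie_baxter(110.0, 0.6), 1))
```

Both regimes amplify the apparent angle of an intrinsically non-wetting surface, which is why comparing the measured angle against the two predictions can identify the active regime.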
Multiple-stage pure phase encoding with biometric information
NASA Astrophysics Data System (ADS)
Chen, Wen
2018-01-01
In recent years, many optical systems have been developed for securing information, and optical encryption/encoding has attracted more and more attention due to the marked advantages, such as parallel processing and multiple-dimensional characteristics. In this paper, an optical security method is presented based on pure phase encoding with biometric information. Biometric information (such as fingerprint) is employed as security keys rather than plaintext used in conventional optical security systems, and multiple-stage phase-encoding-based optical systems are designed for generating several phase-only masks with biometric information. Subsequently, the extracted phase-only masks are further used in an optical setup for encoding an input image (i.e., plaintext). Numerical simulations are conducted to illustrate the validity, and the results demonstrate that high flexibility and high security can be achieved.
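The encode/decode chain can be illustrated with the classic double-random-phase scheme, of which the described method is a biometric, multiple-stage variant: here the two phase-only masks are random stand-ins for the fingerprint-derived masks, and decoding applies the conjugate masks in reverse order:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))          # stand-in for the plaintext image

# Two phase-only masks; in the described scheme these would be derived
# from biometric data (e.g., a fingerprint), not drawn at random.
m1 = np.exp(2j * np.pi * rng.random(img.shape))
m2 = np.exp(2j * np.pi * rng.random(img.shape))

# Encoding: mask, Fourier transform, mask, inverse transform.
cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

# Decoding with the conjugate masks reverses each step.
decoded = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2))
recovered = np.abs(decoded * np.conj(m1))

print(np.allclose(recovered, img))  # True
```

Security rests on the masks: without both phase keys the cipher field is a stationary-white-noise-like complex image.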
Performance enhancement of various real-time image processing techniques via speculative execution
NASA Astrophysics Data System (ADS)
Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.
1996-03-01
In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
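The speculation-with-rollback idea can be sketched generically: start the predicted branch while the authoritative predicate is still being evaluated, keep the result if the guess was right, and discard (roll back) otherwise. The predictor, predicate, and per-pixel processing paths below are invented for illustration only:

```python
from concurrent.futures import ThreadPoolExecutor

def cheap_predictor(pixel):
    """Fast guess of which branch will be taken."""
    return pixel > 100

def exact_predicate(pixel):
    """The authoritative predicate (expensive in a real system)."""
    return pixel > 100 and pixel < 250

def edge_path(pixel):
    return pixel - 100      # stand-in for edge-detection work

def smooth_path(pixel):
    return pixel // 2       # stand-in for smoothing work

def speculative_process(pixel):
    with ThreadPoolExecutor() as pool:
        guess = cheap_predictor(pixel)
        # Launch the predicted branch speculatively, in parallel with
        # the predicate evaluation.
        speculative = pool.submit(edge_path if guess else smooth_path, pixel)
        actual = exact_predicate(pixel)
        if actual == guess:
            return speculative.result()    # speculation paid off
        speculative.cancel()               # rollback: discard the guess
        return (edge_path if actual else smooth_path)(pixel)

print(speculative_process(120), speculative_process(40), speculative_process(255))
```

When mispredictions are rare and the branches dominate the cost, the average latency approaches that of the predicate plus a single (already-finished) branch, which is the speedup the paper exploits.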
Contact allergy to air-exposed geraniol: clinical observations and report of 14 cases.
Hagvall, Lina; Karlberg, Ann-Therese; Christensson, Johanna Bråred
2012-07-01
The fragrance terpene geraniol forms sensitizing compounds via autoxidation and skin metabolism. Geranial and neral, the two isomers of citral, are the major haptens formed in both of these activation pathways. To investigate whether testing with oxidized geraniol detects more cases of contact allergy than testing with pure geraniol. The pattern of reactions to pure and oxidized geraniol, and metabolites/autoxidation products, was studied to investigate the importance of autoxidation or cutaneous metabolism in contact allergy to geraniol. Pure and oxidized geraniol were tested at 2.0% petrolatum in 2227 and 2179 consecutive patients, respectively. In parallel, geranial, neral and citral were tested in 2152, 1626 and 1055 consecutive patients, respectively. Pure and oxidized geraniol gave positive patch test reactions in 0.13% and 0.55% of the patients, respectively. Eight of 11 patients with positive patch test reactions to oxidized geraniol also reacted to citral or its components. Relevance for the positive patch test reactions in relation to the patients' dermatitis was found in 11 of 14 cases. Testing with oxidized geraniol could detect more cases of contact allergy to geraniol. The reaction pattern of the 14 cases presented indicates that both autoxidation and metabolism could be important in sensitization to geraniol. © 2012 John Wiley & Sons A/S.
Hayakawa, Satoshi; Matsumoto, Yuko; Uetsuki, Keita; Shirosaki, Yuki; Osaka, Akiyoshi
2015-06-01
Pure titanium substrates were chemically oxidized with H2O2 and subsequently thermally oxidized at 400 °C in air to form an anatase-type titania layer on their surface. The chemically and thermally oxidized titanium substrate (CHT) was aligned parallel to a counter specimen, such as commercially pure titanium (cpTi), a titanium alloy (Ti6Al4V) commonly used as an implant material, or an Al substrate, with a 0.3-mm gap. The specimens were then soaked in Kokubo's simulated body fluid (SBF, pH 7.4, 36.5 °C) for 7 days. XRD and SEM analysis showed that the in vitro apatite-forming ability of the contact surface of the CHT specimen decreased in the order cpTi > Ti6Al4V > Al. EDX and XPS surface analysis showed that aluminum species were present on the contact surface of the CHT specimen when it was aligned parallel to a Ti6Al4V or Al counter specimen, indicating that the Ti6Al4V or Al specimens released aluminum species into the SBF within the spatial gap. The released aluminum species might be positively or negatively charged in the SBF and can thus interact with calcium or phosphate species as well as with the titania layer, suppressing the primary heterogeneous nucleation and growth of apatite on the contact surface of the CHT specimen under the spatial gap. The diffusion and adsorption of aluminum species derived from the half-sized counter specimen under the spatial gap resulted in two-dimensionally area-selective deposition of apatite particles on the contact surfaces of the CHT specimen.
Effects of Hall current and electron temperature anisotropy on proton fire-hose instabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hau, L.-N.; Department of Physics, National Central University, Jhongli, Taiwan; Wang, B.-J.
The standard magnetohydrodynamic (MHD) theory predicts that the Alfvén wave may become fire-hose unstable for β∥ − β⊥ > 2. In this study, we examine the proton fire-hose instability (FHI) based on the gyrotropic two-fluid model, which incorporates the ion inertial effects arising from the Hall current and electron temperature anisotropy but neglects the electron inertia in the generalized Ohm's law. The linear dispersion relation is derived and analyzed; in the long-wavelength approximation, λ_i k → 0, or for α_e = μ0(p_∥,e − p_⊥,e)/B² = 1, it recovers the ideal MHD model with separate temperatures for ions and electrons. Here, λ_i is the ion inertial length and k is the wave number. For parallel propagation, both ion cyclotron and whistler waves become propagating and growing for β∥ − β⊥ > 2 + λ_i²k²(α_e − 1)²/2. For oblique propagation, the necessary condition for FHI remains β∥ − β⊥ > 2, and there exist one or two unstable fire-hose modes, which can be propagating and growing or purely growing. For large λ_i k values, there exists no nearly parallel FHI, leaving only oblique FHI, and the effect of α_e > 1 may greatly enhance the growth rate of both parallel and oblique FHI. The magnetic field polarization of FHI may be reversed due to the sign change associated with (α_e − 1); the purely growing FHI may possess linear polarization, while the propagating and growing FHI may possess right-handed or left-handed polarization.
Maximizing RNA yield from archival renal tumors and optimizing gene expression analysis.
Glenn, Sean T; Head, Karen L; Teh, Bin T; Gross, Kenneth W; Kim, Hyung L
2010-01-01
Formalin-fixed, paraffin-embedded tissues are widely available for gene expression analysis using TaqMan PCR. Five methods, including 4 commercial kits, for recovering RNA from paraffin-embedded renal tumor tissue were compared. The MasterPure kit from Epicentre produced the highest RNA yield; however, the difference in RNA yield between the Epicentre kit and Invitrogen's TRIzol method was not significant. Using the top 3 RNA isolation methods, the manufacturers' protocols were modified to include an overnight Proteinase K digestion, which resulted in a significant increase in RNA yield. To optimize the reverse transcription reaction, conventional reverse transcription with random oligonucleotide primers was compared to reverse transcription using primers specific for the genes of interest. Reverse transcription using gene-specific primers significantly increased the quantity of cDNA detectable by TaqMan PCR. Therefore, expression profiling of formalin-fixed, paraffin-embedded tissue using TaqMan qPCR can be optimized by using the MasterPure RNA isolation kit, modified to include an overnight Proteinase K digestion, and gene-specific primers during reverse transcription.
Visualization Software for VisIT Java Client
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billings, Jay Jay; Smith, Robert W
The VisIT Java Client (JVC) library is a lightweight thin client designed and written purely in Java (the Python and JavaScript versions of the library use the same concept). It communicates with any new, unmodified standalone version of VisIT, a parallel visualization toolkit for high-performance computing, over traditional or web sockets, and dynamically determines the capabilities of the running VisIT instance, whether local or remote.
Hydrogen Assisted Cracking of High Strength Alloys
2003-08-01
maraging steels (Dautovich and Floreen, 1973, 1977; Gerberich et al., 1988; Yamaguchi, et al., 1997). This behavior is typically described by a...transgranular. A similar maximum in IG cracking susceptibility near the free corrosion potential was reported for 18Ni Maraging steel in neutral NaCl... steel in 133 kPa pure H2 parallels the behavior of AISI 4340 and the 18 Ni maraging steels , particularly in terms of a low temperature activation
NASA Technical Reports Server (NTRS)
Teng, William; Rui, Hualan; Strub, Richard; Vollmer, Bruce
2016-01-01
A long-standing "Digital Divide" in data representation exists between the preferred way of data access in the hydrology community and the common way of data archival at earth science data centers. Typically, in hydrology, earth surface features are expressed as discrete spatial objects (e.g., watersheds), and time-varying data are contained in associated time series. Data in earth science archives, although stored as discrete values (of satellite swath pixels or geographical grids), represent continuous spatial fields, one file per time step. This Divide has been an obstacle specifically between the Consortium of Universities for the Advancement of Hydrologic Science, Inc. and NASA earth science data systems. In essence, the way data are archived is conceptually orthogonal to the desired method of access. Our recent work has shown an optimal method of bridging the Divide, by enabling operational access to long time series (e.g., 36 years of hourly data) of selected NASA datasets. These time series, which we have termed "data rods," are pre-generated or generated on the fly. This optimal solution was arrived at after extensive investigations of various approaches, including one based on "data curtains." The on-the-fly generation of data rods uses "data cubes," NASA Giovanni, and parallel processing. The optimal reorganization of NASA earth science data has significantly enhanced the access to and use of the data for the hydrology user community.
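The "data rods" reorganization described above, turning one-file-per-time-step spatial fields into per-point time series, can be sketched as follows. The toy 2x2 grid and timestamps are illustrative, not the actual NASA archive layout.

```python
def fields_to_rods(time_steps):
    """time_steps: list of (timestamp, grid), where grid maps (row, col) -> value,
    i.e., one spatial field per time step.
    Returns {(row, col): [(timestamp, value), ...]}: a time series ("rod")
    per grid cell, ready for point-based hydrology queries."""
    rods = {}
    for t, grid in time_steps:
        for cell, value in grid.items():
            rods.setdefault(cell, []).append((t, value))
    return rods

# Two hourly fields over a tiny grid stand in for decades of archive files.
steps = [
    ("2016-01-01T00", {(0, 0): 1.0, (0, 1): 2.0}),
    ("2016-01-01T01", {(0, 0): 1.5, (0, 1): 2.5}),
]
rods = fields_to_rods(steps)
# rods[(0, 0)] == [("2016-01-01T00", 1.0), ("2016-01-01T01", 1.5)]
```

The point of the reorganization is access cost: a per-cell query now reads one rod instead of touching every time-step file.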
SPDF Data and Orbit Services Supporting Open Access, Use and Archiving of MMS Data
NASA Astrophysics Data System (ADS)
McGuire, R. E.; Bilitza, D.; Candey, R. M.; Chimiak, R.; Cooper, J. F.; Garcia, L. N.; Harris, B. T.; Johnson, R. C.; Kovalick, T. J.; Lal, N.; Leckner, H. A.; Liu, M. H.; Papitashvili, N. E.; Roberts, D. A.; Yurow, R. E.
2015-12-01
NASA's Space Physics Data Facility (SPDF) project is now serving MMS definitive and predictive interactive orbit plots, listings, and conjunction calculations through our SSCWeb and 4D Orbit Viewer services. In March 2016, in parallel with the MMS Science Data Center (SDC) at LASP, SPDF will begin publicly serving a complete set of MMS Level-2 and higher, survey and burst-mode science data products from all four spacecraft and all instruments. The initial Level-2 data available will be from September 2015 to early February 2016, with Level-2 products subsequently validated and made publicly available with an approximately one-month lag. All MMS Level-2 and higher data products are produced in standard CDF format with standard ISTP/SPDF metadata and will be served by SPDF through our CDAWeb data service, including our web services and associated APIs for IDL and Matlab users, and through direct FTP/HTTP directory browse and file downloads. SPDF's ingest, archival preservation, and active serving of current MMS science data is part of our role as an active heliophysics final archive. SPDF's ingest of complete and current science data products from other active heliophysics missions will help enable coordinated and correlative MMS science analysis by the open international science community, with current data from THEMIS, the Van Allen Probes, and other missions including TWINS, Cluster, ACE, Wind, >120 ground magnetometer stations, as well as instruments on the NOAA GOES and POES spacecraft. Please see the related Candey et al. paper on "SPDF Ancillary Services and Technologies Supporting Open Access, Use and Archiving of MMS Data" for other aspects of SPDF's work. All SPDF data and services are available from the SPDF home page at http://spdf.gsfc.nasa.gov.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors, such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications, as well as synthetic graphs, shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules with lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
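As a rough illustration of the trade-off such a scheduler navigates (this is not the paper's algorithm), the sketch below models each task's runtime on p processors with Amdahl's law for an assumed serial fraction, then computes the makespan of a small diamond-shaped task graph under a given processor allocation.

```python
# Toy makespan model for mixed task-/data-parallelism; an illustration
# of the allocation trade-off, not the paper's scheduling algorithm.

def runtime(t_seq, s, p):
    """Amdahl's-law runtime of a task on p processors (serial fraction s)."""
    return t_seq * (s + (1 - s) / p)

def makespan(dag, times):
    """dag: {task: [predecessors]}; times: {task: runtime}.
    A task finishes at its own runtime plus the latest predecessor finish;
    the makespan is the latest finish over all tasks."""
    finish = {}
    def done(task):
        if task not in finish:
            finish[task] = times[task] + max((done(p) for p in dag[task]), default=0.0)
        return finish[task]
    return max(done(t) for t in dag)

# Diamond graph: b and c can run concurrently (task parallelism), each on
# 2 of the 4 processors; a and d each get all 4 (data parallelism).
dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
alloc = {"a": 4, "b": 2, "c": 2, "d": 4}
times = {t: runtime(10.0, 0.1, p) for t, p in alloc.items()}
span = makespan(dag, times)   # 3.25 + 5.5 + 3.25 = 12.0
```

Trying other allocations (e.g., giving b and c all 4 processors and running them one after the other) shows why the best mix depends on task scalability and graph structure.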
Motion control of planar parallel robot using the fuzzy descriptor system approach.
Vermeiren, Laurent; Dequidt, Antoine; Afroun, Mohamed; Guerra, Thierry-Marie
2012-09-01
This work presents the control of a two-degree-of-freedom parallel robot manipulator. A quasi-LPV approach is used, through the so-called TS fuzzy model and LMI constraint problems. Moreover, in this context, an interesting way to derive control laws is to keep the descriptor form of the mechanical system. Therefore, new LMI problems have to be defined that help to reduce the conservatism of the usual results. Some relaxations are also proposed to leave the pure quadratic stability/stabilization framework. A comparison study between classical control strategies from robotics and control design using TS fuzzy descriptor models is carried out to show the interest of the proposed approach. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
CFD Research, Parallel Computation and Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Ryan, James S.
1995-01-01
During the last five years, CFD has matured substantially. Pure CFD research remains to be done, but much of the focus has shifted to integration of CFD into the design process. The work under these cooperative agreements reflects this trend. The recent work, and the work which is planned, is designed to enhance the competitiveness of the US aerospace industry. CFD and optimization approaches are being developed and tested, so that the industry can better choose which methods to adopt in its design processes. The range of computer architectures has been dramatically broadened, as the assumption that only huge vector supercomputers could be useful has faded. Today, researchers and industry can trade off time, cost, and availability, choosing vector supercomputers, scalable parallel architectures, networked workstations, or heterogeneous combinations of these to complete required computations efficiently.
Virtual rounds: simulation-based education in procedural medicine
NASA Astrophysics Data System (ADS)
Shaffer, David W.; Meglan, Dwight A.; Ferrell, Margaret; Dawson, Steven L.
1999-07-01
Computer-based simulation is a goal for training physicians in specialties where traditional training puts patients at risk. Intuitively, interactive simulation of anatomy, pathology, and therapeutic actions should lead to shortening of the learning curve for novice or inexperienced physicians. Effective transfer of knowledge acquired in simulators must be shown for such devices to be widely accepted in the medical community. We have developed an Interventional Cardiology Training Simulator which incorporates real-time graphic interactivity coupled with haptic response, and an embedded curriculum permitting rehearsal, hypertext links, personal archiving and instructor review and testing capabilities. This linking of purely technical simulation with educational content creates a more robust educational purpose for procedural simulators.
NASA Technical Reports Server (NTRS)
Bebout, Leslie; Keller, R.; Miller, S.; Jahnke, L.; DeVincenzi, D. (Technical Monitor)
2002-01-01
The Ames Exobiology Culture Collection Database (AECC-DB) has been developed as a collaboration between microbial ecologists and information technology specialists. It allows for extensive web-based archiving of information regarding field samples to document microbial co-habitation of specific ecosystem micro-environments. Documentation and archiving continue as pure cultures are isolated, metabolic properties are determined, and DNA is extracted and sequenced. In this way, metabolic properties and molecular sequences are clearly linked back to specific isolates and to the location of those microbes in the ecosystem of origin. Use of this database system represents a significant advance over traditional bookkeeping, in which there is generally little or no information about the environments from which microorganisms were isolated; typically there is only a broad ecosystem designation (e.g., hot spring). However, each such ecosystem contains a myriad of microenvironments with very different properties, and determining exactly which microenvironment a given microbe comes from is critical for designing appropriate isolation media and interpreting physiological properties. We are currently using the database to aid in the isolation of a large number of cyanobacterial species and will present results by PIs and students demonstrating the utility of this new approach.
Password Cracking Using Sony Playstations
NASA Astrophysics Data System (ADS)
Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet
Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.
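The key idea of such a distributed framework, partitioning the key space across nodes so each scans a disjoint slice, can be sketched in a few lines. The round-robin split, the toy three-letter alphabet, and the SHA-256 target below are illustrative assumptions, not the actual PS3 implementation.

```python
import hashlib
import itertools
import string

def crack_slice(target_hash, alphabet, length, node, total_nodes):
    """Each node scans every total_nodes-th candidate (round-robin split),
    so the nodes jointly cover the key space with no overlap."""
    for i, combo in enumerate(itertools.product(alphabet, repeat=length)):
        if i % total_nodes != node:
            continue
        candidate = "".join(combo)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

target = hashlib.sha256(b"cat").hexdigest()
# Run all 4 "nodes" sequentially here; exactly one slice contains the hit.
hits = [crack_slice(target, string.ascii_lowercase, 3, n, 4) for n in range(4)]
found = next(h for h in hits if h is not None)
# found == "cat"
```

In a real deployment each slice would run on a separate machine, so throughput grows roughly linearly as systems are added, which is the economic argument the paper makes for PS3 clusters.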
VizieR Online Data Catalog: HST Frontier Fields Herschel sources (Rawle+, 2016)
NASA Astrophysics Data System (ADS)
Rawle, T. D.; Altieri, B.; Egami, E.; Perez-Gonzalez, P. G.; Boone, F.; Clement, B.; Ivison, R. J.; Richard, J.; Rujopakarn, W.; Valtchanov, I.; Walth, G.; Weiner, B. J.; Blain, A. W.; Dessauges-Zavadsky, M.; Kneib, J.-P.; Lutz, D.; Rodighiero, G.; Schaerer, D.; Smail, I.
2017-07-01
We present a complete census of the 263 Herschel-detected sources within the HST Frontier Fields, including 163 lensed sources located behind the clusters. Our primary aim is to provide a robust legacy catalogue of the Herschel fluxes, which we combine with archival data from Spitzer and WISE to produce IR SEDs. We optimally combine the IR photometry with data from HST, VLA and ground-based observatories in order to identify optical counterparts and obtain source redshifts. Each cluster is observed in two distinct regions, referred to as the central and parallel footprints. (2 data files).
Opinion: Why we need a centralized repository for isotopic data
Pauli, Jonathan N.; Newsome, Seth D.; Cook, Joseph A.; Harrod, Chris; Steffan, Shawn A.; Baker, Christopher J. O.; Ben-David, Merav; Bloom, David; Bowen, Gabriel J.; Cerling, Thure E.; Cicero, Carla; Cook, Craig; Dohm, Michelle; Dharampal, Prarthana S.; Graves, Gary; Gropp, Robert; Hobson, Keith A.; Jordan, Chris; MacFadden, Bruce; Pilaar Birch, Suzanne; Poelen, Jorrit; Ratnasingham, Sujeevan; Russell, Laura; Stricker, Craig A.; Uhen, Mark D.; Yarnes, Christopher T.; Hayden, Brian
2017-01-01
Stable isotopes encode and integrate the origin of matter; thus, their analysis offers tremendous potential to address questions across diverse scientific disciplines (1, 2). Indeed, the broad applicability of stable isotopes, coupled with advancements in high-throughput analysis, has created a scientific field that is growing exponentially and generating data at a rate paralleling the explosive rise of DNA sequencing and genomics (3). Centralized data repositories, such as GenBank, have become increasingly important as a means of archiving information, and "Big Data" analytics of these resources are revolutionizing science and everyday life.
Scalable Data Mining and Archiving for the Square Kilometre Array
NASA Astrophysics Data System (ADS)
Jones, D. L.; Mattmann, C. A.; Hart, A. F.; Lazio, J.; Bennett, T.; Wagstaff, K. L.; Thompson, D. R.; Preston, R.
2011-12-01
As the technologies for remote observation improve, the rapid increase in the frequency and fidelity of those observations translates into an avalanche of data that is already beginning to eclipse the resources, both human and technical, of the institutions and facilities charged with managing the information. Common data management tasks, such as cataloging both the data itself and contextual metadata, creating and maintaining a scalable permanent archive, and making data available on demand for research, present significant software engineering challenges when considered at the scale of modern multi-national scientific enterprises such as the upcoming Square Kilometre Array project. The NASA Jet Propulsion Laboratory (JPL), leveraging internal research and technology development funding, has begun to explore ways to address the data archiving and distribution challenges through a number of parallel activities involving collaborations with the EVLA and ALMA teams at the National Radio Astronomy Observatory (NRAO) and members of the Square Kilometre Array South Africa team. To date, we have leveraged the Apache OODT Process Control System framework and its catalog and archive service components, which provide file management, workflow management, and resource management as core web services. A client crawler framework ingests upstream data (e.g., EVLA raw directory output), identifies its MIME type, and automatically extracts relevant metadata, including temporal bounds and job-relevant processing information. A remote content acquisition (push-pull) service is responsible for staging remote content and handing it off to the crawler framework. A science algorithm wrapper (called CAS-PGE) wraps underlying code, including CASApy programs for the EVLA such as continuum imaging and spectral line cube generation, executes the algorithm, and ingests its output along with relevant extracted metadata.
In addition to processing, the Process Control System has been leveraged to provide data curation and automatic ingestion for the MeerKAT/KAT-7 precursor instrument in South Africa, helping to catalog and archive correlator and sensor output from KAT-7, and to make the information available for downstream science analysis. These efforts, supported by the increasing availability of high-quality open source software, represent a concerted effort to seek a cost-conscious methodology for maintaining the integrity of observational data from the upstream instrument to the archive, and at the same time ensuring that the data, with its richly annotated catalog of meta-data, remains a viable resource for research into the future.
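The crawler's ingestion step described above (identify a file's MIME type, extract simple metadata, hand the record off to a catalog) might look roughly like this hypothetical miniature; the file names are illustrative, and this is Python's standard library, not the Apache OODT API.

```python
import mimetypes
import os

def crawl(paths):
    """Guess each file's MIME type and build a minimal catalog record,
    in the spirit of a crawler that feeds an archive/catalog service."""
    catalog = []
    for path in paths:
        mime, _ = mimetypes.guess_type(path)
        catalog.append({
            "file": os.path.basename(path),
            "mime": mime or "application/octet-stream",  # fallback for unknown types
        })
    return catalog

records = crawl(["obs_001.fits", "notes.txt", "cube.dat"])
# notes.txt is recognized as text/plain; unrecognized extensions fall
# back to application/octet-stream.
```

A production crawler would also pull domain metadata (temporal bounds, instrument, processing lineage) from file headers before ingestion, as the abstract describes.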
Diky, Vladimir; Chirico, Robert D; Muzny, Chris D; Kazakov, Andrei F; Kroenlein, Kenneth; Magee, Joseph W; Abdulagatov, Ilmutdin; Frenkel, Michael
2013-12-23
ThermoData Engine (TDE) is the first full-scale software implementation of the dynamic data evaluation concept, as reported in this journal. The present article describes the background and implementation for new additions in latest release of TDE. Advances are in the areas of program architecture and quality improvement for automatic property evaluations, particularly for pure compounds. It is shown that selection of appropriate program architecture supports improvement of the quality of the on-demand property evaluations through application of a readily extensible collection of constraints. The basis and implementation for other enhancements to TDE are described briefly. Other enhancements include the following: (1) implementation of model-validity enforcement for specific equations that can provide unphysical results if unconstrained, (2) newly refined group-contribution parameters for estimation of enthalpies of formation for pure compounds containing carbon, hydrogen, and oxygen, (3) implementation of an enhanced group-contribution method (NIST-Modified UNIFAC) in TDE for improved estimation of phase-equilibrium properties for binary mixtures, (4) tools for mutual validation of ideal-gas properties derived through statistical calculations and those derived independently through combination of experimental thermodynamic results, (5) improvements in program reliability and function that stem directly from the recent redesign of the TRC-SOURCE Data Archival System for experimental property values, and (6) implementation of the Peng-Robinson equation of state for binary mixtures, which allows for critical evaluation of mixtures involving supercritical components. Planned future developments are summarized.
Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide
NASA Astrophysics Data System (ADS)
Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.
Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel wave-optics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently post-processes the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
Arkas: Rapid reproducible RNAseq analysis
Colombo, Anthony R.; J. Triche Jr, Timothy; Ramsingh, Giridharan
2017-01-01
The recently introduced Kallisto pseudoaligner has radically simplified the quantification of transcripts in RNA-sequencing experiments. We offer the cloud-scale RNAseq pipelines Arkas-Quantification and Arkas-Analysis, available within Illumina's BaseSpace cloud application platform, which expedite Kallisto preparatory routines, reliably calculate differential expression, and perform gene-set enrichment of REACTOME pathways. To address inherent inefficiencies of scale, Illumina's BaseSpace computing platform offers a massively parallel distributed environment with improved data management services and data importing. Arkas-Quantification deploys Kallisto for parallel cloud computations and is conveniently integrated downstream from the BaseSpace Sequence Read Archive (SRA) import/conversion application titled SRA Import. Arkas-Analysis annotates the Kallisto results by extracting structured information directly from source FASTA files with per-contig metadata, and calculates differential expression and gene-set enrichment on both coding genes and transcripts. The Arkas cloud pipeline supports ENSEMBL transcriptomes and can be used downstream from SRA Import, facilitating raw sequence importing, SRA FASTQ conversion, RNA quantification, and analysis steps. PMID:28868134
A design concept of parallel elasticity extracted from biological muscles for engineered actuators.
Chen, Jie; Jin, Hongzhe; Iida, Fumiya; Zhao, Jie
2016-08-23
Series elastic actuation that takes inspiration from biological muscle-tendon units has been extensively studied and used to address the challenges (e.g. energy efficiency, robustness) existing in purely stiff robots. However, there also exists another form of passive property in biological actuation, parallel elasticity within muscles themselves, and our knowledge of it is limited: for example, there is still no general design strategy for the elasticity profile. When we look at nature, on the other hand, there seems a universal agreement in biological systems: experimental evidence has suggested that a concave-upward elasticity behaviour is exhibited within the muscles of animals. Seeking to draw possible design clues for elasticity in parallel with actuators, we use a simplified joint model to investigate the mechanisms behind this biologically universal preference of muscles. Actuation of the model is identified from general biological joints and further reduced with a specific focus on muscle elasticity aspects, for the sake of easy implementation. By examining various elasticity scenarios, one without elasticity and three with elasticity of different profiles, we find that parallel elasticity generally exerts contradictory influences on energy efficiency and disturbance rejection, due to the mechanical impedance shift thus caused. The trade-off analysis between them also reveals that concave parallel elasticity is able to achieve a more advantageous balance than linear and convex ones. It is expected that the results could contribute to our further understanding of muscle elasticity and provide a theoretical guideline on how to properly design parallel elasticity behaviours for engineering systems such as artificial actuators and robotic joints.
Message Passing and Shared Address Space Parallelism on an SMP Cluster
NASA Technical Reports Server (NTRS)
Shan, Hongzhang; Singh, Jaswinder P.; Oliker, Leonid; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
Currently, message passing (MP) and shared address space (SAS) are the two leading parallel programming paradigms. MP has been standardized with MPI, and is the more common and mature approach; however, code development can be extremely difficult, especially for irregularly structured computations. SAS offers substantial ease of programming, but may suffer from performance limitations due to poor spatial locality and high protocol overhead. In this paper, we compare the performance of and the programming effort required for six applications under both programming models on a 32-processor PC-SMP cluster, a platform that is becoming increasingly attractive for high-end scientific computing. Our application suite consists of codes that typically do not exhibit scalable performance under shared-memory programming due to their high communication-to-computation ratios and/or complex communication patterns. Results indicate that SAS can achieve about half the parallel efficiency of MPI for most of our applications, while being competitive for the others. A hybrid MPI+SAS strategy shows only a small performance advantage over pure MPI in some cases. Finally, improved implementations of two MPI collective operations on PC-SMP clusters are presented.
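The two paradigms the paper compares can be contrasted on a toy reduction. In this sketch, Python's multiprocessing stands in for both models: a Queue plays the role of message passing, and shared ctypes arrays play the role of a shared address space. It illustrates only the programming-model difference, not the paper's MPI/SAS benchmarks.

```python
import multiprocessing as mp

def mp_worker(chunk, q):
    q.put(sum(chunk))                # "message passing": send a partial sum

def sas_worker(arr, lo, hi, out, slot):
    out[slot] = sum(arr[lo:hi])      # "shared memory": write into a shared slot

def reduce_mp(data, nprocs):
    """Message-passing style: each worker receives a private chunk,
    and results travel back through an explicit channel."""
    q = mp.Queue()
    step = len(data) // nprocs
    procs = [mp.Process(target=mp_worker, args=(data[i * step:(i + 1) * step], q))
             for i in range(nprocs)]
    for p in procs:
        p.start()
    total = sum(q.get() for _ in procs)
    for p in procs:
        p.join()
    return total

def reduce_sas(data, nprocs):
    """Shared-address-space style: workers read one shared array and
    write partial sums directly into another."""
    arr = mp.Array('d', data, lock=False)
    out = mp.Array('d', nprocs, lock=False)
    step = len(data) // nprocs
    procs = [mp.Process(target=sas_worker, args=(arr, i * step, (i + 1) * step, out, i))
             for i in range(nprocs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sum(out[:])

if __name__ == "__main__":
    data = [float(i) for i in range(1000)]
    assert reduce_mp(data, 4) == reduce_sas(data, 4) == sum(data)
```

Even at this scale the ergonomic difference shows: the shared-memory version needs no explicit data distribution, while the message-passing version makes every communication visible, which is the ease-versus-control trade-off the paper measures.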
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software by bypassing the fetch-decode-execute cycle of traditional processors and possibly exploiting a greater level of parallelism, using many computational resources at the same time. Creating parallel programs for FPGAs in pure HDL is difficult and time-consuming; using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation, and results of the created tools.
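The core of such a compiler, walking a Python abstract syntax tree and emitting HDL text, can be illustrated with a deliberately tiny translator for a single assignment statement; this is a sketch of the idea, not the paper's tool, and a real high-level synthesis compiler handles types, processes, and far more of the language.

```python
import ast

# Map Python binary operators to VHDL operator symbols.
OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def expr_to_vhdl(node):
    """Recursively render a Python expression AST as VHDL expression text."""
    if isinstance(node, ast.BinOp):
        return "(%s %s %s)" % (expr_to_vhdl(node.left),
                               OPS[type(node.op)],
                               expr_to_vhdl(node.right))
    if isinstance(node, ast.Name):
        return node.id
    raise NotImplementedError(ast.dump(node))

def compile_assignment(src):
    """Translate a Python statement like 'y = <expr>' into a VHDL
    signal assignment."""
    stmt = ast.parse(src).body[0]
    target = stmt.targets[0].id
    return "%s <= %s;" % (target, expr_to_vhdl(stmt.value))

line = compile_assignment("y = a * b + c")
# line == "y <= ((a * b) + c);"
```

Because the AST already encodes precedence, each Python expression maps directly onto combinational VHDL; the hard part of real HDL generation is what this sketch omits (signal declarations, bit widths, and sequential processes).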
Hirano, Toshiyuki; Sato, Fumitoshi
2014-07-28
We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
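The low-rank idea behind pivoted CD can be shown on a small symmetric positive-definite matrix: accumulate Cholesky vectors l_k with A ≈ sum_k l_k l_k^T, always pivoting on the largest residual diagonal entry and stopping at a tolerance. This pure-Python sketch illustrates the decomposition only, not the paper's DFT machinery or its distributed storage of the vectors.

```python
def pivoted_cholesky(A, tol=1e-10):
    """Low-rank pivoted Cholesky decomposition of a symmetric
    positive-semidefinite matrix A (list of lists).
    Returns vectors l_k such that A ~= sum_k outer(l_k, l_k)."""
    n = len(A)
    d = [A[i][i] for i in range(n)]          # residual diagonal
    vectors = []
    while max(d) > tol:
        p = max(range(n), key=lambda i: d[i])    # pivot: largest residual entry
        l = [0.0] * n
        l[p] = d[p] ** 0.5
        for i in range(n):
            if i != p:
                # Residual column entry divided by the pivot value.
                l[i] = (A[i][p] - sum(v[i] * v[p] for v in vectors)) / l[p]
        for i in range(n):
            d[i] -= l[i] * l[i]              # update residual diagonal
        vectors.append(l)
    return vectors

A = [[4.0, 2.0], [2.0, 5.0]]
vecs = pivoted_cholesky(A)
recon = [[sum(v[i] * v[j] for v in vecs) for j in range(2)] for i in range(2)]
# recon reproduces A entry by entry (to rounding)
```

For matrices with rapidly decaying eigenvalues, the loop stops after far fewer than n vectors, which is what makes the method attractive for approximating electron-repulsion integral matrices.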
Hybrid Parallel Contour Trees, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher; Fasel, Patricia; Carr, Hamish
A common operation in scientific visualization is to compute and render a contour of a data set. Given a function of the form f : R^d -> R, a level set is defined as an inverse image f^-1(h) for an isovalue h, and a contour is a single connected component of a level set. The Reeb graph can then be defined to be the result of contracting each contour to a single point, and is well defined for Euclidean spaces or for general manifolds. For simple domains, the graph is guaranteed to be a tree, and is called the contour tree. Analysis can then be performed on the contour tree in order to identify isovalues of particular interest, based on various metrics, and render the corresponding contours, without having to know such isovalues a priori. This code is intended to be the first data-parallel algorithm for computing contour trees. Our implementation will use the portable data-parallel primitives provided by Nvidia's Thrust library, allowing us to compile the same code for both GPUs and multi-core CPUs. Native OpenMP and purely serial versions of the code will likely also be included. It will also be extended to provide a hybrid data-parallel / distributed algorithm, allowing scaling beyond a single GPU or CPU.
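A contour is a connected component of a level set. As a minimal serial illustration of that definition (not the data-parallel Thrust algorithm described above), the sketch below counts connected components of the superlevel set {f ≥ h} on a 2D grid of samples, each of which corresponds to contours at isovalue h.

```python
from collections import deque

# Serial toy: count connected components (4-connectivity) of the superlevel
# set {f >= h} on a 2D grid of samples of f, via breadth-first flood fill.
def superlevel_components(grid, h):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] >= h and not seen[i][j]:
                count += 1                     # found a new component
                seen[i][j] = True
                queue = deque([(i, j)])
                while queue:
                    x, y = queue.popleft()
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if (0 <= nx < rows and 0 <= ny < cols
                                and grid[nx][ny] >= h and not seen[nx][ny]):
                            seen[nx][ny] = True
                            queue.append((nx, ny))
    return count
```

The contour tree organizes how such components merge and split as h sweeps through the data range, which is what the data-parallel algorithm computes without fixing h in advance.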
NASA Astrophysics Data System (ADS)
Zhu, F.; Yu, H.; Rilee, M. L.; Kuo, K. S.; Yu, L.; Pan, Y.; Jiang, H.
2017-12-01
Since the establishment of data archive centers and the standardization of file formats, scientists have been required to search metadata catalogs for the data they need and download the data files to their local machines to carry out data analysis. This approach has facilitated data discovery and access for decades, but it inevitably leads to data transfer from data archive centers to scientists' computers through low-bandwidth Internet connections. Data transfer becomes a major performance bottleneck in such an approach, and combined with generally constrained local compute/storage resources, it limits the extent of scientists' studies and deprives them of timely outcomes. Thus, this conventional approach does not scale with respect to either the volume or the variety of geoscience data. A much more viable solution is to couple analysis and storage systems to minimize data transfer. In our study, we compare loosely coupled approaches (exemplified by Spark and Hadoop) and tightly coupled approaches (exemplified by parallel distributed database management systems, e.g., SciDB). In particular, we investigate the optimization of data placement and movement to effectively tackle the variety challenge, and broaden the use of parallelization to address the volume challenge. Our goal is to enable high-performance interactive analysis for a good portion of geoscience data analysis exercises. We show that tightly coupled approaches can concentrate data traffic between local storage systems and compute units, thereby optimizing bandwidth utilization to achieve better throughput. Based on our observations, we develop a geoscience data analysis system that tightly couples analysis engines with storage and has direct access to the detailed map of data partition locations. Through an innovative data partitioning and distribution scheme, our system has demonstrated scalable and interactive performance in real-world geoscience data analysis applications.
Optimizing Crawler4j using MapReduce Programming Model
NASA Astrophysics Data System (ADS)
Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.
2017-06-01
The World Wide Web is a decentralized system that consists of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where the web pages are indexed to form a corpus of information that allows users to query the web pages. Secondly, they are used for web archiving, where the web pages are stored for later analysis phases. Thirdly, they can be used for web mining, where the web pages are monitored for copyright purposes. The amount of information processed by a web crawler needs to be improved by using the capabilities of modern parallel processing technologies. In order to solve the problem of parallelism and the throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j coupled with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements with respect to performance and throughput. Hence the proposed approach intends to carve out a new methodology towards optimizing web crawling by achieving significant performance gains.
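The MapReduce pattern the optimization relies on can be sketched in plain Python (a toy illustration, not the Hadoop/Crawler4j code; the page and link names are made up): map each fetched page to (outlink, 1) pairs, then reduce by summing per key.

```python
from collections import defaultdict
from itertools import chain

# Toy MapReduce over crawled pages (hypothetical page names): the map phase
# emits an (outlink, 1) pair per link on a page; the reduce phase sums counts
# per key, yielding each page's in-degree across the crawl.
def map_phase(page):
    _url, outlinks = page
    return [(link, 1) for link in outlinks]

def reduce_phase(pairs):
    counts = defaultdict(int)
    for link, n in pairs:
        counts[link] += n
    return dict(counts)

pages = [("a.html", ["b.html", "c.html"]),
         ("b.html", ["c.html"]),
         ("c.html", ["a.html", "b.html", "c.html"])]
in_degree = reduce_phase(chain.from_iterable(map_phase(p) for p in pages))
```

In Hadoop, the shuffle between the two phases is distributed across nodes, which is what parallelizes the processing of large crawl frontiers.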
Pecevski, Dejan; Natschläger, Thomas; Schuch, Klaus
2009-01-01
The Parallel Circuit SIMulator (PCSIM) is a software package for simulation of neural circuits. It is primarily designed for distributed simulation of large scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, which is a powerful programming environment and allows the user to easily integrate the neural circuit simulator with data analysis and visualization tools to manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular we will investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality either employing pure Python or C++ and thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations.
Pure quasi-P wave equation and numerical solution in 3D TTI media
NASA Astrophysics Data System (ADS)
Zhang, Jian-Min; He, Bing-Shou; Tang, Huai-Gu
2017-03-01
Based on the pure quasi-P wave equation in transversely isotropic media with a vertical symmetry axis (VTI media), a quasi-P wave equation is obtained for transversely isotropic media with a tilted symmetry axis (TTI media). This is achieved using a projection transformation, which rotates the direction vector in the observation coordinate system toward the direction vector of the coordinate system in which the z-component is parallel to the symmetry axis of the TTI media. The equation has a simple form, is easily calculated, is not influenced by the pseudo-shear wave, and can be calculated reliably when δ is greater than ε. The finite difference method is used to solve the equation. In addition, a perfectly matched layer (PML) absorbing boundary condition is obtained for the equation. Theoretical analysis and numerical simulation results with forward modeling prove that the equation can accurately simulate a quasi-P wave in TTI media.
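As a generic, hedged illustration of the finite-difference time stepping the paper applies to its quasi-P equation, here is the same scheme family for the much simpler 1D acoustic wave equation u_tt = c²u_xx with fixed ends and no PML; all parameter values below are assumptions chosen to satisfy the CFL condition.

```python
import numpy as np

# Second-order (leapfrog) finite-difference stepping for the 1D acoustic wave
# equation u_tt = c^2 u_xx with fixed (zero) ends. Grid and step sizes are
# assumptions chosen so that c*dt/dx <= 1 (CFL stability).
def fd_wave_1d(nx=201, nt=400, c=1.0, dx=1.0, dt=0.5):
    r2 = (c * dt / dx) ** 2
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0          # point impulse at the center
    u_prev[:] = u             # zero initial velocity
    for _ in range(nt):
        u_next = np.zeros(nx)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u
```

The TTI quasi-P case replaces the Laplacian with anisotropic, rotated spatial operators and adds PML damping terms at the boundaries, but the time-stepping structure is the same.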
Electrolytic Migration of Ag-Pd Alloy Wires with Various Pd Contents
NASA Astrophysics Data System (ADS)
Lin, Yan-Cheng; Chen, Chun-Hao; He, Yu-Zhen; Chen, Sheng-Chi; Chuang, Tung-Han
2018-07-01
During Ag ion migration in an aqueous water drop covering a pair of parallel Ag-Pd wires under current stressing, hydrogen bubbles form first from the cathode, followed by the appearance of pure Ag dendrites on the cathodic wire. In this study, Ag dendrites with a diameter of 0.2-0.4 μm grew toward the anodic wire. The growth rate (v) of these dendrites decreased with the Pd content (c) following the linear relationship v = 10.02 − 0.43c. Accompanying the growth of pure Ag dendrites was the formation of a continuous layer of crystallographic Ag2O particles on the surface of the anodic wire. The deposition of such insulating Ag2O products did not prevent the contact of Ag dendrites with the anodic Ag-Pd wire or the short circuit of the wire couple.
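The reported fit can be evaluated directly; note the abstract gives only the coefficients, so the units of v and c below are whatever the study defines, and the valid range of Pd contents is an assumption (the fit is only meaningful over the measured range).

```python
# Empirical linear fit reported above: v = 10.02 - 0.43*c.
# Units are not restated in the abstract, and extrapolating beyond the
# measured Pd-content range is not meaningful.
def dendrite_growth_rate(c_pd):
    return 10.02 - 0.43 * c_pd
```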
Developmental and individual differences in pure numerical estimation.
Booth, Julie L; Siegler, Robert S
2006-01-01
The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1, kindergartners and 1st, 2nd, and 3rd graders were presented problems involving the numbers 0-100; in Experiment 2, 2nd and 4th graders were presented problems involving the numbers 0-1,000. Parallel developmental trends, involving increasing reliance on linear representations of numbers and decreasing reliance on logarithmic ones, emerged across different types of estimation. Consistent individual differences across tasks were also apparent, and all types of estimation skill were positively related to math achievement test scores. Implications for understanding of mathematics learning in general are discussed. Copyright 2006 APA, all rights reserved.
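The linear-versus-logarithmic comparison the authors describe can be sketched with toy number-line data (the estimates below are invented for illustration; the R² comparison logic is the standard one).

```python
import math

# Compare how well a linear vs a logarithmic curve explains number-line
# estimates on a 0-100 line. Data points are invented for illustration.
def r_squared(xs, ys, f):
    mean = sum(ys) / len(ys)
    ss_res = sum((y - f(x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

linear = lambda x: x
log_curve = lambda x: 100 * math.log(x) / math.log(100)  # maps 1 -> 0, 100 -> 100

xs = [3, 12, 27, 48, 71, 96]
veridical = list(xs)                           # accurate, adult-like estimates
log_like = [round(log_curve(x)) for x in xs]   # overestimates small numbers
```

A higher R² for the logarithmic curve indicates the compressed representation typical of younger children; the developmental shift reported above shows up as the linear R² overtaking the logarithmic one.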
NASA Astrophysics Data System (ADS)
Yang, Chao; Song, Jian; Li, Liang; Li, Shengbo; Cao, Dongpu
2016-08-01
This paper presents an economical launching and accelerating mode comprising four ordered phases: pure electric driving, clutch engagement and engine start-up, engine active charging, and engine driving, which suits the alternating conditions of typical city-bus driving scenarios and improves the fuel economy of a hybrid electric bus (HEB). By utilizing the fast response of the electric motor (EM), an adaptive controller for the EM is designed to meet the power demand during the pure electric driving, engine starting, and engine active charging modes. Concurrently, the smoothness issue induced by the sequential mode transitions is solved with a coordinated control logic for the engine, EM, and clutch. Simulation and experimental results show that the proposed launching and accelerating mode and its control methods are effective in improving fuel economy and ensuring drivability during the fast transitions between the operation modes of the HEB.
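The four ordered phases can be sketched as a simple state sequence (phase identifiers below are paraphrased from the abstract; the transition logic is a toy illustration, not the paper's coordinated engine/EM/clutch controller).

```python
# The four ordered phases as a linear state sequence (names paraphrased from
# the abstract). Real transitions depend on torque demand, battery state, and
# clutch status, which this toy stepper does not model.
PHASES = ("pure_electric_driving", "clutch_engagement_engine_startup",
          "engine_active_charging", "engine_driving")

def next_phase(current):
    i = PHASES.index(current)
    return PHASES[min(i + 1, len(PHASES) - 1)]  # hold the final phase
```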
Conformal superalgebras via tractor calculus
NASA Astrophysics Data System (ADS)
Lischewski, Andree
2015-01-01
We use the manifestly conformally invariant description of a Lorentzian conformal structure in terms of a parabolic Cartan geometry in order to introduce a superalgebra structure on the space of twistor spinors and normal conformal vector fields formulated in purely algebraic terms on parallel sections in tractor bundles. Via a fixed metric in the conformal class, one reproduces a conformal superalgebra structure that has been considered in the literature before. The tractor approach, however, makes clear that the failure of this object to be a Lie superalgebra in certain cases is due to purely algebraic identities on the spinor module and to special properties of the conformal holonomy representation. Moreover, it naturally generalizes to higher signatures. This yields new formulas for constructing new twistor spinors and higher order normal conformal Killing forms out of existing ones, generalizing the well-known spinorial Lie derivative. Moreover, we derive restrictions on the possible dimension of the space of twistor spinors in any metric signature.
Research in Parallel Computing: 1987-1990
1994-08-05
emulation, we layered UNIX BSD 4.3 functionality above the kernel primitives, but packaged both as a monolithic unit running in privileged state. This ... further, so that only a "pure kernel" or "microkernel" runs in privileged mode, while the other components of the environment execute as one or more client ... kernel ... 2.2.2 Nectar's communication software ... 2.2.3 A Nectar programming interface ... 2.3 System evaluation
Chamakuri, Srinivas; Jain, Prashi; Guduru, Shiva Krishna Reddy; Arney, Joseph Winston; MacKenzie, Kevin; Santini, Conrad; Young, Damian W
2018-05-11
Amino acids from the chiral pool have been used to produce a 24-member branch of 2,6-disubstituted piperazine scaffolds suitable for use in compound library production. Each scaffold was obtained as a single absolute stereoisomer in multi-gram quantities. Stereochemistry was confirmed by 2D NMR protocols and enantiomeric purity was determined by chiral HPLC. The scaffolds are intended for use as intermediates in parallel synthesis of small-molecule libraries.
Tretter, F
2016-08-01
Methodological reflections on pain research and pain therapy, focussing on addiction risks, are addressed in this article. Starting from the incomplete objectifiability of pain and addiction, phenomena that can be fully understood only subjectively, the relevance of a comprehensive general psychology is underlined. It is shown that the reduction of pain and addiction to a mainly focally arguing neurobiology is only possible if both disciplines have a systemic concept of pain and addiction. With this aim, parallelized conceptual network models are presented.
Soviet Research in Production and Physical Metallurgy of Pure Metals
1964-01-10
thereby the level of internal friction. Conclusions 1. A methodology was developed for growing nP27bdemn slag crystals from the gaseous phase using the...case of zinc and cadmium the base may be situated perpendicularly to the axis of the specimen, i.e., parallel to the crystallization front. The same...separately, the latter being soldered to the ring with copper-zinc solder. With the modulator in a position as shown in Figure 2, the geometrical center
Interactive Parallel Data Analysis within Data-Centric Cluster Facilities using the IPython Notebook
NASA Astrophysics Data System (ADS)
Pascoe, S.; Lansdowne, J.; Iwi, A.; Stephens, A.; Kershaw, P.
2012-12-01
The data deluge is making traditional analysis workflows for many researchers obsolete. Support for parallelism within popular tools such as Matlab, IDL and NCO is not well developed and rarely used. However, parallelism is necessary for processing modern data volumes on a timescale conducive to curiosity-driven analysis. Furthermore, for peta-scale datasets such as the CMIP5 archive, it is no longer practical to bring an entire dataset to a researcher's workstation for analysis, or even to their institutional cluster. Therefore, there is an increasing need to develop new analysis platforms which both enable processing at the point of data storage and provide parallelism. Such an environment should, where possible, maintain the convenience and familiarity of our current analysis environments to encourage curiosity-driven research. We describe how we are combining the interactive Python shell (IPython) with our JASMIN data-cluster infrastructure. IPython has been specifically designed to bridge the gap between HPC-style parallel workflows and the opportunistic curiosity-driven analysis usually carried out using domain-specific languages and scriptable tools. IPython offers a web-based interactive environment, the IPython notebook, and a cluster engine for parallelism, all underpinned by the well-respected Python/SciPy scientific programming stack. JASMIN is designed to support the data analysis requirements of the UK and European climate and earth system modeling community. JASMIN, with its sister facility CEMS focusing on the earth observation community, has 4.5 PB of fast parallel disk storage alongside over 370 computing cores providing local computation. Through the IPython interface to JASMIN, users can make efficient use of JASMIN's multi-core virtual machines to perform interactive analysis on all cores simultaneously, or can configure IPython clusters across multiple VMs. Larger-scale clusters can be provisioned through JASMIN's batch scheduling system.
Outputs can be summarised and visualised using the full power of Python's many scientific tools, including Scipy, Matplotlib, Pandas and CDAT. This rich user experience is delivered through the user's web browser; maintaining the interactive feel of a workstation-based environment with the parallel power of a remote data-centric processing facility.
CARDS - comprehensive aerological reference data set. Station history, Version 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-03-01
The possibility of anthropogenic climate change has reached the attention of Government officials and researchers. However, one cannot study climate change without climate data. The CARDS project will produce high-quality upper-air data for the research community and for policy-makers. The authors intend to produce a dataset which is: easy to use, as complete as possible, as free of random errors as possible. They will also attempt to identify biases and remove them whenever possible. In this report, they relate progress toward their goal. They created a robust new format for archiving upper-air data, and designed a relational database structure to hold them. The authors have converted 13 datasets to the new format and have archived over 10,000,000 individual soundings from 10 separate data sources. They produce and archive a metadata summary of each sounding they load. They have researched station histories, and have built a preliminary upper-air station history database. They have converted station-sorted data from their primary database into synoptic-sorted data in a parallel database. They have tested and will soon implement an advanced quality-control procedure, capable of detecting and often repairing errors in geopotential height, temperature, humidity, and wind. This unique quality-control method uses simultaneous vertical, horizontal, and temporal checks of several meteorological variables. It can detect errors other methods cannot. This report contains the station histories for the CARDS data set.
Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun
2018-01-01
Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all the intractable difficulties of the SLM metals field due to its intrinsic properties. The key factors, including powder characteristics, layer thickness, and laser parameters of SLM high density tungsten are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% theoretical density) were produced using SLM with the optimized processing parameters. A high density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers and lie parallel to the maximum temperature gradient, which can ensure good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing. PMID:29707073
Flexible and unique representations of two-digit decimals.
Zhang, Li; Chen, Min; Lin, Chongde; Szűcs, Denes
2014-09-01
We examined the representation of two-digit decimals through studying distance and compatibility effects in magnitude comparison tasks in four experiments. Using number pairs with different leftmost digits, we found both the second digit distance effect and compatibility effect with two-digit integers but only the second digit distance effect with two-digit pure decimals. This suggests that both integers and pure decimals are processed in a compositional manner. In contrast, neither the second digit distance effect nor the compatibility effect was observed in two-digit mixed decimals, thereby showing no evidence for compositional processing of two-digit mixed decimals. However, when the relevance of the rightmost digit processing was increased by adding some decimal pairs with the same leftmost digits, both pure and mixed decimals produced the compatibility effect. Overall, results suggest that the processing of decimals is flexible and depends on the relevance of unique digit positions. This processing mode is different from integer analysis in that two-digit mixed decimals demonstrate parallel compositional processing only when the rightmost digit is relevant. Findings suggest that people probably do not represent decimals by simply ignoring the decimal point and converting them to natural numbers. Copyright © 2014 Elsevier B.V. All rights reserved.
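The compatibility classification used in such comparison tasks can be stated concretely: for pairs with different leftmost digits, a pair is compatible when the rightmost-digit comparison points the same way as the leftmost-digit one. The sketch below uses two-digit integers; for pure decimals such as 0.42 vs. 0.57 the same digit logic applies.

```python
# Digit-wise compatibility for a two-digit pair with different leftmost
# digits: compatible iff the rightmost-digit comparison agrees in direction
# with the leftmost-digit one (assumes both digit pairs differ).
def is_compatible(a, b):
    ta, ua = divmod(a, 10)   # tens and units of a
    tb, ub = divmod(b, 10)   # tens and units of b
    return (ta < tb) == (ua < ub)
```

A compatibility effect is observed when incompatible pairs (e.g. 47 vs. 62) are compared more slowly than compatible ones (e.g. 42 vs. 57), which indicates the irrelevant rightmost digits were processed in parallel.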
Hersmus, Remko; Stoop, Hans; van de Geijn, Gert Jan; Eini, Ronak; Biermann, Katharina; Oosterhuis, J. Wolter; DHooge, Catharina; Schneider, Dominik T.; Meijssen, Isabelle C.; Dinjens, Winand N. M.; Dubbink, Hendrikus Jan; Drop, Stenvert L. S.; Looijenga, Leendert H. J.
2012-01-01
Activating c-KIT mutations (exons 11 and 17) are found in 10–40% of testicular seminomas, the majority being missense point mutations (codon 816). Malignant ovarian dysgerminomas represent ∼3% of all ovarian cancers in Western countries, resembling testicular seminomas, regarding chromosomal aberrations and c-KIT mutations. DSD patients with specific Y-sequences have an increased risk for Type II Germ Cell Tumor/Cancer, with gonadoblastoma as precursor progressing to dysgerminoma. Here we present analysis of c-KIT exons 8, 9, 11, 13 and 17, and PDGFRA exons 12, 14 and 18 by conventional sequencing together with mutational analysis of c-KIT codon 816 by a sensitive and specific LightCycler melting curve analysis, confirmed by sequencing. The results are combined with data on TSPY and OCT3/4 expression in a series of 16 DSD patients presenting with gonadoblastoma and dysgerminoma and 15 patients presenting pure ovarian dysgerminomas without DSD. c-KIT codon 816 mutations were detected in five out of the total of 31 cases (all found in pure ovarian dysgerminomas). A synonymous SNP (rs5578615) was detected in two patients, one DSD patient (with bilateral disease) and one patient with dysgerminoma. Next to these, three codon N822K mutations were detected in the group of 15 pure ovarian dysgerminomas. In total, activating c-KIT mutations were found in 53% of ovarian dysgerminomas without DSD. In the group of 16 DSD cases, N505I and D820E mutations were found in a single tumor of a patient with gonadoblastoma and dysgerminoma. No PDGFRA mutations were found. Positive OCT3/4 staining was present in all gonadoblastomas and dysgerminomas investigated; TSPY expression was only seen in the gonadoblastoma/dysgerminoma lesions of the 16 DSD patients. These data support the existence of two distinct but parallel pathways in the development of dysgerminoma, in which the mutational status of c-KIT might parallel the presence of TSPY. PMID:22937135
NASA Astrophysics Data System (ADS)
Rowell, Alexandra L. K.; Thomas, David S. G.; Bailey, Richard M.; Holmes, Peter J.
2018-06-01
Sand ramps occur on a continuum of topographically-controlled landforms, ranging from purely aeolian features (climbing/falling dunes) to talus cones and alluvial fans. Sand ramps have been identified as potentially important palaeoenvironmental archives in dryland regions that possess relatively few Quaternary proxy records. Their utility, however, requires not only good age control of depositional phases but clear identification of process regimes, determined through morphological and sedimentological analyses, with several recent studies indicating the complexities of palaeoenvironmental interpretations and the controls of ramp development (Bateman et al., 2012; Rowell et al., 2018). Klipkraal Sands is a sand ramp on the north-eastern margin of the semi-arid Karoo that has been important for inferences of the extent of southern African Late Quaternary aeolian activity (Thomas et al., 2002). We reanalyse this feature, in the light of both its significance and other recent studies that have inferred extensive southern African LGM aeolian activity (Telfer et al., 2012, 2014). New sedimentological data and twelve OSL dates indicate the Klipkraal Sands formed episodically between 100 ka and 0.14 ka, rather than accumulating rapidly, while sedimentological data question the aeolian affinities of the bulk of the feature. Therefore, Klipkraal is reinterpreted as showing no particular affinity to the LGM, with sediments locally sourced with a significant colluvial component. Only the upper historical sediments can be clearly interpreted as aeolian deposits. A complex interplay of processes is suggested, for which a meaningful palaeoenvironmental interpretation cannot be easily defined. This implies that the local geomorphic processes and controls operating on sand ramps need to be established before they can be fully utilised as palaeoenvironmental archives, with implications for their interpretation worldwide.
Mn0.9Co0.1P in field parallel to hard direction: phase diagram and irreversibility of CONE phase
NASA Astrophysics Data System (ADS)
Zieba, A.; Becerra, C. C.; Oliveira, N. F.; Fjellvåg, H.; Kjekshus, A.
1992-02-01
A single crystal of Mn0.9Co0.1P, a homologue of MnP with a disordered metal sublattice, has been studied by the ac susceptibility method in a steady field H. This report concerns H parallel to the orthorhombic a axis (a > b > c). The magnetic phase diagram is qualitatively similar to that of MnP, including the presence of a Lifshitz multicritical point (TL = 98 K, HL = 42 kOe) at the confluence of the paramagnetic, ferromagnetic and modulated FAN phases. Contrary to pure MnP, irreversible behaviour was observed in the susceptibility of the modulated CONE phase. This phenomenon develops only for fields above 30 kOe, in contrast to the irreversibility of the FAN phase (reported previously for H ‖ b in the whole field range down to H = 0). New features of the presumably continuous CONE-FAN transition were also found.
Parallel processing by cortical inhibition enables context-dependent behavior.
Kuchibhotla, Kishore V; Gill, Jonathan V; Lindsay, Grace W; Papadoyannis, Eleni S; Field, Rachel E; Sten, Tom A Hindmarsh; Miller, Kenneth D; Froemke, Robert C
2017-01-01
Physical features of sensory stimuli are fixed, but sensory perception is context dependent. The precise mechanisms that govern contextual modulation remain unknown. Here, we trained mice to switch between two contexts: passively listening to pure tones and performing a recognition task for the same stimuli. Two-photon imaging showed that many excitatory neurons in auditory cortex were suppressed during behavior, while some cells became more active. Whole-cell recordings showed that excitatory inputs were affected only modestly by context, but inhibition was more sensitive, with PV+, SOM+, and VIP+ interneurons balancing inhibition and disinhibition within the network. Cholinergic modulation was involved in context switching, with cholinergic axons increasing activity during behavior and directly depolarizing inhibitory cells. Network modeling captured these findings, but only when modulation coincidently drove all three interneuron subtypes, ruling out either inhibition or disinhibition alone as the sole mechanism for active engagement. Parallel processing of cholinergic modulation by cortical interneurons therefore enables context-dependent behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S.; Wang, M.P.; Chen, C., E-mail: chench011-33@163.com
2014-05-01
The orientation dependence of the deformation microstructure has been investigated in commercial pure molybdenum. After deformation, the dislocation boundaries of compressed molybdenum can be classified, similar to that in face-centered cubic metals, into three types: dislocation cells (Type 2), and extended planar boundaries parallel to (Type 1) or not parallel to (Type 3) a (110) trace. However, it shows a reciprocal relationship between face-centered cubic metals and body-centered cubic metals on the orientation dependence of the deformation microstructure. The higher the strain, the finer the microstructure is and the smaller the inclination angle between extended planar boundaries and the compression axis is. - Highlights: • A reciprocal relationship between FCC metals and BCC metals is confirmed. • The dislocation boundaries can be classified into three types in compressed Mo. • The dislocation characteristic of different dislocation boundaries is different.
Smith, D.V.; Drenth, B.R.; Fairhead, J.D.; Lei, K.; Dark, J.A.; Al-Bassam, K.
2011-01-01
Aeromagnetic data belonging to the State Company of Geology and Mining of Iraq (GEOSURV) have been recovered from magnetic tapes and early paper maps. In 1974 a national airborne survey was flown by the French firm Compagnie General de Geophysique (CGG). Following the survey the magnetic data were stored on magnetic tapes within an air conditioned archive run by GEOSURV. In 1990, the power supply to the archive was cut resulting in the present-day poor condition of the tapes. Frontier Processing Company and the U.S. Geological Survey (USGS) have been able to recover over 99 percent of the original digital data from the CGG tapes. Preliminary reprocessing of the data yielded a total magnetic field anomaly map that reveals fine structures not evident in available published maps. Successful restoration of these comprehensive, high quality digital datasets obviates the need to resurvey the entire country, thereby saving considerable time and money. These data were delivered to GEOSURV in a standard format for further analysis and interpretation. A parallel effort by GETECH concentrated on recovering the legacy gravity data from the original field data sheets archived by IPC (Iraq Petroleum Company). These data have been compiled with more recent GEOSURV sponsored surveys thus allowing for the first time a comprehensive digital and unified national gravity database to be constructed with full principal facts. Figure 1 shows the final aeromagnetic and gravity data coverage of Iraq. The only part of Iraq lacking gravity and aeromagnetic data coverage is the mountainous areas of the Kurdish region of northeastern Iraq. Joint interpretation of the magnetic and gravity data will help guide future geophysical investigations by GEOSURV, whose ultimate aim is to discover economical mineral and energy resources. © 2011 Society of Exploration Geophysicists.
Urbanowicz, Ryan J; Kiralis, Jeff; Sinnott-Armstrong, Nicholas A; Heberling, Tamra; Fisher, Jonathan M; Moore, Jason H
2012-10-01
Geneticists who look beyond single locus disease associations require additional strategies for the detection of complex multi-locus effects. Epistasis, a multi-locus masking effect, presents a particular challenge, and has been the target of bioinformatic development. Thorough evaluation of new algorithms calls for simulation studies in which known disease models are sought. To date, the best methods for generating simulated multi-locus epistatic models rely on genetic algorithms. However, such methods are computationally expensive, difficult to adapt to multiple objectives, and unlikely to yield models with a precise form of epistasis which we refer to as pure and strict. Purely and strictly epistatic models constitute the worst-case in terms of detecting disease associations, since such associations may only be observed if all n-loci are included in the disease model. This makes them an attractive gold standard for simulation studies considering complex multi-locus effects. We introduce GAMETES, a user-friendly software package and algorithm which generates complex biallelic single nucleotide polymorphism (SNP) disease models for simulation studies. GAMETES rapidly and precisely generates random, pure, strict n-locus models with specified genetic constraints. These constraints include heritability, minor allele frequencies of the SNPs, and population prevalence. GAMETES also includes a simple dataset simulation strategy which may be utilized to rapidly generate an archive of simulated datasets for given genetic models. We highlight the utility and limitations of GAMETES with an example simulation study using MDR, an algorithm designed to detect epistasis. GAMETES is a fast, flexible, and precise tool for generating complex n-locus models with random architectures. 
While GAMETES has a limited ability to generate models with higher heritabilities, it is proficient at generating the lower heritability models typically used in simulation studies evaluating new algorithms. In addition, the GAMETES modeling strategy may be flexibly combined with any dataset simulation strategy. Beyond dataset simulation, GAMETES could be employed to pursue theoretical characterization of genetic models and epistasis.
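The penetrance-table idea behind such dataset simulation can be illustrated with a minimal sketch. This is not GAMETES itself: the function name, the example table, and the allele frequencies below are invented for illustration. Genotypes are drawn under Hardy-Weinberg equilibrium, and case/control status is assigned from a genotype-conditional disease probability (penetrance) table.

```python
import numpy as np

def simulate_dataset(penetrance, mafs, n_samples, seed=0):
    """Simulate case/control SNP data from a 2-locus penetrance table.

    penetrance : 3x3 array, P(disease | genotype pair); genotypes are
                 coded 0/1/2 = copies of the minor allele.
    mafs       : minor allele frequencies of the two SNPs (HWE assumed).
    """
    rng = np.random.default_rng(seed)
    penetrance = np.asarray(penetrance, dtype=float)
    # Under Hardy-Weinberg equilibrium, each genotype is Binomial(2, maf)
    g1 = rng.binomial(2, mafs[0], n_samples)
    g2 = rng.binomial(2, mafs[1], n_samples)
    # Disease status follows the genotype-conditional penetrance
    status = rng.random(n_samples) < penetrance[g1, g2]
    return g1, g2, status.astype(int)

# A toy XOR-like table (high risk only for "mismatched" genotypes) is one
# way to mimic a purely epistatic pattern; it is not a GAMETES model.
xor_like = [[0.0, 0.7, 0.0],
            [0.7, 0.0, 0.7],
            [0.0, 0.7, 0.0]]
```

A real GAMETES model additionally constrains heritability, prevalence, and marginal effects; this sketch only shows the sampling step.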
NASA Astrophysics Data System (ADS)
Renjith, A. R.; Mamtani, Manish A.; Urai, Janos L.
2016-01-01
We ask the question whether petrofabric data from anisotropy of magnetic susceptibility (AMS) analysis of deformed quartzites give information about the shape preferred orientation (SPO) or the crystallographic preferred orientation (CPO) of quartz. Since quartz is diamagnetic and has a negative magnetic susceptibility, 11 samples of nearly pure quartzites with a negative magnetic susceptibility were chosen for this study. After performing AMS analysis, electron backscatter diffraction (EBSD) analysis was done on thin sections prepared parallel to the K1K3 plane of the AMS ellipsoid. Results show that in all the samples the quartz SPO is sub-parallel to the orientation of the magnetic foliation. However, in most samples no clear correspondence is observed between the quartz CPO and the K1 (magnetic lineation) direction. This is contrary to the parallelism observed between the K1 direction and the orientation of the quartz c-axis in the case of an undeformed single quartz crystal. Pole figures of quartz indicate that the quartz c-axis tends to be parallel to the K1 direction only in the case where intracrystalline deformation of quartz is accommodated by prism
Lai, Victor K.; Lake, Spencer P.; Frey, Christina R.; Tranquillo, Robert T.; Barocas, Victor H.
2012-01-01
Fibrin and collagen, biopolymers occurring naturally in the body, are biomaterials commonly used as scaffolds for tissue engineering. How collagen and fibrin interact to confer macroscopic mechanical properties in collagen-fibrin composite systems remains poorly understood. In this study, we formulated collagen-fibrin co-gels at different collagen-to-fibrin ratios to observe changes in the overall mechanical behavior and microstructure. A modeling framework for a two-network system was developed by modifying our micro-scale model, considering two forms of interaction between the networks: (a) two interpenetrating but non-interacting networks ("parallel"), and (b) a single network consisting of randomly alternating collagen and fibrin fibrils ("series"). Mechanical testing of our gels shows that collagen-fibrin co-gels exhibit intermediate properties (UTS, strain at failure, tangent modulus) compared with those of pure collagen and fibrin. Comparison with model predictions shows that the parallel and series cases provide upper and lower bounds, respectively, for the experimental data, suggesting that a combination of such interactions exists between the collagen and fibrin in co-gels. A transition from the series model to the parallel model occurs with increasing collagen content, with the series model best describing predominantly fibrin co-gels and the parallel model best describing predominantly collagen co-gels. PMID:22482659
yourSky: Custom Sky-Image Mosaics via the Internet
NASA Technical Reports Server (NTRS)
Jacob, Joseph
2003-01-01
yourSky (http://yourSky.jpl.nasa.gov) is a computer program that supplies custom astronomical image mosaics of sky regions specified by requesters using client computers connected to the Internet. [yourSky is an upgraded version of the software reported in Software for Generating Mosaics of Astronomical Images (NPO-21121), NASA Tech Briefs, Vol. 25, No. 4 (April 2001), page 16a.] A requester no longer has to engage in the tedious process of determining what subset of images is needed, nor even to know how the images are indexed in image archives. Instead, in response to a requester's specification of the size and location of the sky area (and, optionally, of the desired set and type of data, resolution, coordinate system, projection, and image format), yourSky automatically retrieves the component image data from archives totaling tens of terabytes stored on computer tape and disk drives at multiple sites, and assembles the component images into a mosaic image using a high-performance parallel code. yourSky runs on the server computer where the mosaics are assembled. Because yourSky includes a Web-interface component, no special client software is needed: ordinary Web browser software is sufficient.
Real-Time Data Streaming and Storing Structure for the LHD's Fusion Plasma Experiments
NASA Astrophysics Data System (ADS)
Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Imazu, Setsuo; Nonomura, Miki; Emoto, Masahiko; Yoshida, Masanobu; Iwata, Chie; Ida, Katsumi
2016-02-01
The LHD data acquisition and archiving system, i.e., the LABCOM system, has been fully equipped with high-speed real-time acquisition, streaming, and storage capabilities. To deal with more than 100 MB/s of continuously generated data at each data acquisition (DAQ) node, the DAQ tasks are implemented as multitasking, multithreaded processes in which shared memory plays the central role in fast, high-volume inter-process data handling. By introducing a 10-second time chunk named a "subshot," endless data streams can be stored as a consecutive series of fixed-length data blocks, so that completed blocks become readable by other processes even while the write process continues. Real-time device and environmental monitoring is implemented in the same way, with further sparse resampling. The central data storage has been separated into two layers so that it can receive multiple 100 MB/s inflows in parallel. For the front-end layer, high-speed SSD arrays are used with the GlusterFS distributed filesystem, which can provide up to 2 GB/s of throughput. These design optimizations should be informative for implementing next-generation data archiving systems in big physics, such as ITER.
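The "subshot" scheme described above, cutting an endless stream into fixed-length blocks that become readable as soon as they are closed, can be sketched in a few lines. This is a toy illustration, not LABCOM code; the class name and file layout are invented.

```python
import os

class SubshotWriter:
    """Toy sketch of the 'subshot' idea: an endless byte stream is cut
    into fixed-length chunk files; each finished chunk is closed and is
    therefore readable by consumers while acquisition continues."""

    def __init__(self, directory, chunk_bytes):
        self.dir = directory
        self.chunk_bytes = chunk_bytes
        self.index = 0          # running subshot number
        self.fh = None          # currently open chunk file
        self.written = 0        # bytes written into the current chunk

    def _roll(self):
        # Close the finished chunk (making it readable) and open the next one
        if self.fh:
            self.fh.close()
        path = os.path.join(self.dir, f"subshot_{self.index:06d}.dat")
        self.fh = open(path, "wb")
        self.index += 1
        self.written = 0

    def write(self, data: bytes):
        pos = 0
        while pos < len(data):
            if self.fh is None or self.written == self.chunk_bytes:
                self._roll()
            take = min(len(data) - pos, self.chunk_bytes - self.written)
            self.fh.write(data[pos:pos + take])
            self.written += take
            pos += take

    def close(self):
        if self.fh:
            self.fh.close()
```

A real implementation would add timestamp indexing and shared-memory hand-off; the chunk-rolling logic is the essential point.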
Ultrasound Picture Archiving And Communication Systems
NASA Astrophysics Data System (ADS)
Koestner, Ken; Hottinger, C. F.
1982-01-01
The ideal ultrasonic image communication and storage system must be flexible in order to optimize speed and minimize storage requirements. Various ultrasonic imaging modalities are quite different in data volume and speed requirements. Static imaging, for example B-Scanning, involves acquisition of a large amount of data that is averaged or accumulated in a desired manner. The image is then frozen in image memory before transfer and storage. Images are commonly a 512 x 512 point array, each point 6 bits deep. Transfer of such an image over a serial line at 9600 baud would require about three minutes. Faster transfer times are possible; for example, we have developed a parallel image transfer system using direct memory access (DMA) that reduces the time to 16 seconds. Data in this format requires 256K bytes for storage. Data compression can be utilized to reduce these requirements. Real-time imaging has much more stringent requirements for speed and storage. The amount of actual data per frame in real-time imaging is reduced due to physical limitations on ultrasound. For example, 100 scan lines (480 points long, 6 bits deep) can be acquired during a frame at a 30 per second rate. In order to transmit and save this data at a real-time rate requires a transfer rate of 8.6 Megabaud. A real-time archiving system would be complicated by the necessity of specialized hardware to interpolate between scan lines and perform desirable greyscale manipulation on recall. Image archiving for cardiology and radiology would require data transfer at this high rate to preserve temporal (cardiology) and spatial (radiology) information.
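The bandwidth and storage figures quoted above can be checked with straightforward arithmetic; the short script below reproduces them (assuming one byte of storage per 6-bit point, which is what the 256K-byte figure implies).

```python
# Static B-scan image: 512 x 512 points, 6 bits deep
points = 512 * 512
bits_per_point = 6
image_bits = points * bits_per_point

# Transfer over a 9600-baud serial line: roughly three minutes
serial_seconds = image_bits / 9600

# Storage at one byte per point: 256K bytes
storage_bytes = points

# Real-time imaging: 100 scan lines, 480 points, 6 bits, 30 frames/s
lines, pts_per_line, fps = 100, 480, 30
realtime_baud = lines * pts_per_line * bits_per_point * fps   # 8.64 Mbaud
```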
1994-01-01
[Severely OCR-garbled record. Recoverable fragments include a citation to Communications on Pure and Applied Mathematics, XVI (1963), 363-381; a citation to M. Born and E. Wolf, Principles of Optics; and mention of a parallelised partitioning technique whose final stage solves a coupling matrix using a parallel method.]
Real-Time Monitoring of Scada Based Control System for Filling Process
NASA Astrophysics Data System (ADS)
Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi
2008-10-01
This paper presents the design of a real-time monitoring system for a filling process using Supervisory Control and Data Acquisition (SCADA). The production process is monitored in real time using Visual Basic .NET under Visual Studio 2005, without dedicated SCADA software. Software integrators are programmed to obtain the information required for the configuration screens. Component behavior is simulated on the computer screen, with a parallel port linking the computer to the filling devices. Programs for real-time simulation of the filling process in a pure drinking water plant are provided.
Recursive Algorithms for Real-Time Digital CR-RCn Pulse Shaping
NASA Astrophysics Data System (ADS)
Nakhostin, M.
2011-10-01
This paper reports on recursive algorithms for the real-time implementation of CR-(RC)^n filters in digital nuclear spectroscopy systems. The algorithms are derived by calculating the Z-transfer function of the filters for filter orders up to n = 4. The performance of the filters is compared with that of the conventional digital trapezoidal filter, using a noise generator which separately generates pure series, 1/f and parallel noise. The results of our study enable one to select the optimum digital filter for different noise and rate conditions.
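As a general illustration (not the authors' derived algorithms), a digital CR stage followed by n cascaded RC stages can be written as first-order recursions sharing one pole; the coefficient mapping a = tau/(tau + dt) used here is one common discretization choice and is an assumption of this sketch.

```python
def cr_rc_n(x, tau, dt, n=1):
    """Recursive CR-(RC)^n shaper sketch: one digital differentiator (CR)
    followed by n integrator (RC) stages, all with time constant tau."""
    a = tau / (tau + dt)          # shared pole location
    # CR stage: y[k] = a * (y[k-1] + x[k] - x[k-1])
    y, prev_x, prev_y = [], 0.0, 0.0
    for xi in x:
        prev_y = a * (prev_y + xi - prev_x)
        prev_x = xi
        y.append(prev_y)
    # n cascaded RC stages: y[k] = a * y[k-1] + (1 - a) * x[k]
    for _ in range(n):
        out, prev = [], 0.0
        for yi in y:
            prev = a * prev + (1.0 - a) * yi
            out.append(prev)
        y = out
    return y
```

Applied to a step input, the shaper produces the expected unipolar pulse that peaks after a delay of order n*tau and then returns to the baseline.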
JCell--a Java-based framework for inferring regulatory networks from time series data.
Spieth, C; Supper, J; Streichert, F; Speer, N; Zell, A
2006-08-15
JCell is a Java-based application for reconstructing gene regulatory networks from experimental data. The framework provides several algorithms to identify genetic and metabolic dependencies based on experimental data conjoint with mathematical models to describe and simulate regulatory systems. Owing to the modular structure, researchers can easily implement new methods. JCell is a pure Java application with additional scripting capabilities and thus widely usable, e.g. on parallel or cluster computers. The software is freely available for download at http://www-ra.informatik.uni-tuebingen.de/software/JCell.
An engineering approach to automatic programming
NASA Technical Reports Server (NTRS)
Rubin, Stuart H.
1990-01-01
An exploratory study of the automatic generation and optimization of symbolic programs was undertaken using DECOM, a prototypical requirement specification model implemented in pure LISP. It was concluded, on the basis of this study, that symbolic processing languages such as LISP can support a style of programming based upon formal transformation and dependent upon the expression of constraints in an object-oriented environment. Such languages can represent all aspects of the software generation process (including heuristic algorithms for effecting parallel search) as dynamic processes, since data and program are represented in a uniform format.
Lattice QCD calculation using VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seyong; Ohta, Shigemi
1995-02-01
A new vector parallel supercomputer, Fujitsu VPP500, was installed at RIKEN earlier this year. It consists of 30 vector computers, each with 1.6 GFLOPS peak speed and 256 MB memory, connected by a crossbar switch with 400 MB/s peak data transfer rate each way between any pair of nodes. The authors developed a Fortran lattice QCD simulation code for it. It runs at about 1.1 GFLOPS sustained per node for Metropolis pure-gauge update, and about 0.8 GFLOPS sustained per node for conjugate gradient inversion of staggered fermion matrix.
Cyclotron line resonant transfer through neutron star atmospheres
NASA Technical Reports Server (NTRS)
Wang, John C. L.; Wasserman, Ira M.; Salpeter, Edwin E.
1988-01-01
Monte Carlo methods are used to study in detail the resonant radiative transfer of cyclotron line photons with recoil through a purely scattering neutron star atmosphere for both the polarized and unpolarized cases. For each case, the number of scatters, the path length traveled, the escape frequency shift, the escape direction cosine, the emergent frequency spectra, and the angular distribution of escaping photons are investigated. In the polarized case, transfer is calculated using both the cold plasma e- and o-modes and the magnetic vacuum perpendicular and parallel modes.
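A drastically simplified analogue of such a calculation (isotropic, frequency-independent scattering in a plane-parallel slab, with none of the cyclotron-resonance, recoil, or polarization physics of the paper) shows how quantities like the number of scatterings per escaping photon are tallied in a Monte Carlo transfer code.

```python
import math, random

def scatter_walk(tau_slab, n_photons=2000, seed=1):
    """Toy Monte Carlo: photons start at the centre of a purely
    scattering slab of total optical depth tau_slab, take exponentially
    distributed free paths, scatter isotropically, and we record the
    number of scatterings each photon undergoes before escape."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_photons):
        tau = tau_slab / 2.0              # start mid-slab
        mu = 2.0 * rng.random() - 1.0     # isotropic direction cosine
        n_scat = 0
        while 0.0 < tau < tau_slab:
            step = -math.log(1.0 - rng.random())   # exponential path
            tau -= mu * step
            if 0.0 < tau < tau_slab:
                n_scat += 1
                mu = 2.0 * rng.random() - 1.0      # re-scatter isotropically
        counts.append(n_scat)
    return counts
```

As expected for diffusive trapping, the mean number of scatterings grows rapidly with the slab's optical depth.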
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki; Taiji, Makoto; Makino, Junichiro; Ebisuzaki, Toshikazu; Sugimoto, Daiichiro
1996-09-01
We have developed a parallel, pipelined special-purpose computer for N-body simulations, MD-GRAPE (for "GRAvity PipE"). In gravitational N-body simulations, almost all computing time is spent on the calculation of interactions between particles. GRAPE is specialized hardware that calculates these interactions; it is used with a general-purpose front-end computer that performs all calculations other than the force calculation. MD-GRAPE is the first parallel GRAPE that can calculate an arbitrary central force. A force different from a pure 1/r potential is necessary for N-body simulations with periodic boundary conditions using the Ewald or particle-particle/particle-mesh (P^3M) method, and MD-GRAPE accelerates the calculation of the particle-particle force for these algorithms. An MD-GRAPE board has four MD chips and a peak performance of 4.2 GFLOPS. On one MD-GRAPE board, a cosmological N-body simulation takes 600(N/10^6)^(3/2) s per step with the Ewald method, where N is the number of particles, and would take 240(N/10^6) s per step with the P^3M method, for a uniform distribution of particles.
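The quoted per-step timings define simple scaling laws, 600(N/10^6)^(3/2) s for the Ewald method and 240(N/10^6) s for P^3M; encoding them makes the crossover point between the two methods easy to locate. The functions below just transcribe those two formulas.

```python
def ewald_step_s(n):
    # Per-step time quoted for the Ewald method on one MD-GRAPE board
    return 600.0 * (n / 1e6) ** 1.5

def p3m_step_s(n):
    # Per-step time quoted for the P^3M method (uniform particle distribution)
    return 240.0 * (n / 1e6)
```

Setting the two expressions equal gives a crossover near N = 160,000 particles: below that, the Ewald method's steeper N^(3/2) scaling is still cheaper; above it, P^3M wins.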
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years more and more university hospitals, as well as private hospitals, have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier; it is also remarkable how clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Although images are available at one's fingertips, difficulties arise when image data need to be processed, e.g. segmented, classified or co-registered, which usually demands a great deal of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as processed in our lab on a daily basis.
McConnell, Joseph R.; Aristarain, Alberto J.; Banta, J. Ryan; Edwards, P. Ross; Simões, Jefferson C.
2007-01-01
Crustal dust in the atmosphere impacts Earth's radiative forcing directly by modifying the radiation budget and affecting cloud nucleation and optical properties, and indirectly through ocean fertilization, which alters carbon sequestration. Increased dust in the atmosphere has been linked to decreased global air temperature in past ice core studies of glacial to interglacial transitions. We present a continuous ice core record of aluminum deposition during recent centuries in the northern Antarctic Peninsula, the most rapidly warming region of the Southern Hemisphere; such a record has not been reported previously. This record shows that aluminosilicate dust deposition more than doubled during the 20th century, coincident with the ≈1°C Southern Hemisphere warming: a pattern in parallel with increasing air temperatures, decreasing relative humidity, and widespread desertification in Patagonia and northern Argentina. These results have far-reaching implications for understanding the forces driving dust generation and impacts of changing dust levels on climate both in the recent past and future. PMID:17389397
WaveJava: Wavelet-based network computing
NASA Astrophysics Data System (ADS)
Ma, Kun; Jiao, Licheng; Shi, Zhuoer
1997-04-01
Wavelet theory is powerful, but its successful application requires suitable programming tools. Java is a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multi-threaded, dynamic language. This paper addresses the design and development of a cross-platform software environment for experimenting with and applying wavelet theory. WaveJava, a wavelet class library designed with object-oriented programming, is developed to exploit wavelet features such as multi-resolution analysis and parallel processing in network computing. A new application architecture is designed for a net-wide distributed client-server environment. Data are transmitted as multi-resolution packets; at distributed sites around the net, these packets undergo matching or recognition processing in parallel, and the results are fed back to determine the next operation, so more robust results can be obtained quickly. WaveJava is easy to use and to extend for special applications. The paper gives a solution for a distributed fingerprint information processing system; the approach also suits other net-based multimedia information processing, such as network libraries, remote teaching, and filmless picture archiving and communications.
Rapid Processing of Radio Interferometer Data for Transient Surveys
NASA Astrophysics Data System (ADS)
Bourke, S.; Mooley, K.; Hallinan, G.
2014-05-01
We report on a software infrastructure and pipeline developed to process large radio interferometer datasets. The pipeline is implemented using a radical redesign of the AIPS processing model: an infrastructure we have named AIPSlite is used to spawn, at runtime, minimal AIPS environments across a cluster, and the pipeline then distributes and processes its data in parallel. The system is entirely free of the traditional AIPS distribution and is self-configuring at runtime. This software has so far been used to process an EVLA Stripe 82 transient survey and the data for the JVLA-COSMOS project, and to process most of the EVLA L-Band data archive, imaging each integration to search for short-duration transients.
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1992-01-01
In the future, NASA expects to gather over a terabyte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation, where data rates of over 1 billion bits per second are achievable with current technology.
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1993-01-01
In the future, NASA expects to gather over a terabyte per day of data requiring space for levels of archival storage. Data compression will be a key component in systems that store this data (e.g., optical disk and tape) as well as in communications systems (both between space and Earth and between scientific locations on Earth). We propose to develop algorithms that can be a basis for software and hardware systems that compress a wide variety of scientific data with different criteria for fidelity/bandwidth tradeoffs. The algorithmic approaches we consider are specially targeted for parallel computation, where data rates of over 1 billion bits per second are achievable with current technology.
Membrane triangles with corner drilling freedoms. II - The ANDES element
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.; Militello, Carmelo
1992-01-01
This is the second article in a three-part series on the construction of 3-node, 9-dof membrane elements with normal-to-its-plane rotational freedoms (the so-called drilling freedoms) using parametrized variational principles. In this part, one such element is derived within the context of the assumed natural deviatoric strain (ANDES) formulation. The higher-order strains are obtained by constructing three parallel-to-sides pure-bending modes from which natural strains are obtained at the corner points and interpolated over the element. To attain rank sufficiency, an additional higher-order 'torsional' mode, corresponding to equal hierarchical rotations at each corner with all other motions precluded, is incorporated. The resulting formulation has five free parameters. When these parameters are optimized against pure bending by energy balance methods, the resulting element is found to coalesce with the optimal EFF element derived in Part I. Numerical integration as a strain filtering device is found to play a key role in this achievement.
Polarization-modulated FTIR spectroscopy of lipid/gramicidin monolayers at the air/water interface.
Ulrich, W P; Vogel, H
1999-01-01
Monolayers of gramicidin A, pure and in mixtures with dimyristoylphosphatidylcholine (DMPC), were studied in situ at the air/H2O and air/D2O interfaces by polarization-modulated infrared reflection absorption spectroscopy (PM-IRRAS). Simulations of the entire set of amide I absorption modes were also performed, using complete parameter sets for different conformations based on published normal mode calculations. The structure of gramicidin A in the DMPC monolayer could clearly be assigned to a beta6.3 helix. Quantitative analysis of the amide I bands revealed that at film pressures of up to 25-30 mN/m the helix tilt angle from the vertical in the pure gramicidin A layer exceeded 60 degrees. A marked dependence of the peptide orientation on the applied surface pressure was observed for the mixed lipid-peptide monolayers. At low pressure the helix lay flat on the surface, whereas at high pressures it was oriented almost parallel to the surface normal. PMID:10049344
NASA Astrophysics Data System (ADS)
Margarit, G.
2013-12-01
This paper presents the results obtained by GMV in the maritime surveillance operational activations conducted within a set of research projects. These activations have been actively supported by users, whose feedback has been essential for better understanding their needs and the most urgently requested improvements. Different domains have been evaluated, from purely theoretical and scientific background (processing algorithms) to purely logistical issues (IT configuration, strategies for improving system performance and avoiding bottlenecks, parallelization, and back-up procedures). In all cases, automation is the key word, because users need near-real-time operations in which the intervention of human operators is minimized. In addition, automation reduces human-derived errors and provides better error-tracking procedures. In the paper, different examples are depicted and analysed; for reasons of space, only the most representative ones are selected. Feedback from users is included and analysed as well.
A new Hysteretic Nonlinear Energy Sink (HNES)
NASA Astrophysics Data System (ADS)
Tsiatas, George C.; Charalampakis, Aristotelis E.
2018-07-01
The behavior of a new Hysteretic Nonlinear Energy Sink (HNES) coupled to a linear primary oscillator is investigated for shock mitigation. Apart from a small mass and the nonlinear elastic spring of a Duffing oscillator, the HNES also comprises a purely hysteretic spring and a linear elastic spring of potentially negative stiffness, connected in parallel. The Bouc-Wen model is used to describe the force produced by both the purely hysteretic and linear elastic springs. Coupling the primary oscillator with the HNES, three nonlinear equations of motion are derived in terms of the two displacements and the dimensionless hysteretic variable, and are integrated numerically using the analog equation method. The performance of the HNES is examined by quantifying the percentage of the energy initially induced in the primary system that is passively transferred to and dissipated by the HNES. Remarkable results are achieved for a wide range of initial input energies. The strong performance of the HNES is most evident when the linear spring stiffness takes on negative values.
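A minimal single-degree-of-freedom sketch (not the authors' three-equation HNES model; all parameter values here are invented, and a positive linear stiffness is used for a stable demo) shows how a Bouc-Wen hysteretic force can be integrated alongside the equation of motion and how the energy it dissipates can be tallied.

```python
def bouc_wen_sdof(m=1.0, c=0.05, k_el=1.0, kw=1.0,
                  A=1.0, beta=0.5, gamma=0.5, n=1.0,
                  x0=1.0, dt=1e-3, steps=20000):
    """Mass m with a linear spring k_el and a Bouc-Wen hysteretic spring
    (force kw*z) in parallel, released from rest at x0 and integrated
    with explicit Euler. Returns final state and dissipated energy."""
    x, v, z = x0, 0.0, 0.0
    dissipated = 0.0
    for _ in range(steps):
        f_hys = kw * z                                   # hysteretic force
        acc = -(c * v + k_el * x + f_hys) / m
        # Bouc-Wen evolution law:
        #   z' = A*v - beta*|v|*z*|z|^(n-1) - gamma*v*|z|^n
        zdot = (A * v
                - beta * abs(v) * z * abs(z) ** (n - 1.0)
                - gamma * v * abs(z) ** n)
        dissipated += f_hys * v * dt                     # hysteretic work
        x += v * dt
        v += acc * dt
        z += zdot * dt
    return x, v, dissipated
```

Over a few oscillation cycles the hysteresis loop absorbs energy, so the response amplitude decays and the tallied dissipated work is positive, which is the mechanism the HNES exploits.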
Three new enantiomerically pure ferrocenylphosphole compounds.
López Cortés, José Guadalupe; Vincendeau, Sandrine; Daran, Jean Claude; Manoury, Eric; Gouygou, Maryse
2006-05-01
The absolute configurations of three new enantiomerically pure ferrocenylphosphole compounds, namely (2S,4S,S(Fc))-4-methoxymethyl-2-[2-(9-thioxo-9lambda5-phosphafluoren-9-yl)ferrocenyl]-1,3-dioxane, [Fe(C5H5)(C23H22O3PS)], (III), (S(Fc))-[2-(9-thioxo-9lambda5-phosphafluoren-9-yl)ferrocenyl]methanol, [Fe(C5H5)(C18H14OPS)], (V), and (S(Fc))-diphenyl[2-(9-thioxo-9lambda5-phosphafluoren-9-yl]ferrocenylmethyl]phosphine, [Fe(C5H5)(C30H23P2)], (VIII), have been unambiguously established. All three ligands contain a planar chiral ferrocene group, bearing a dibenzophosphole and either a dioxane, a methanol or a diphenylphosphinomethane group on the same cyclopentadienyl. In compound (V), the occurrence of O-H...S and C-H...S hydrogen bonds results in the formation of a two-dimensional network parallel to (001). The geometry of the ferrocene frameworks agrees with related reported structures.
Some TEM observations of Al2O3 scales formed on NiCrAl alloys
NASA Technical Reports Server (NTRS)
Smialek, J.; Gibala, R.
1979-01-01
The microstructural development of Al2O3 scales on NiCrAl alloys has been examined by transmission electron microscopy. Voids were observed within grains in scales formed on a pure NiCrAl alloy. Both voids and oxide grains grew measurably with oxidation time at 1100 C. The size and amount of porosity decreased towards the oxide-metal growth interface. The voids resulted from an excess number of oxygen vacancies near the oxide-metal interface. Short-circuit diffusion paths were discussed in reference to current growth stress models for oxide scales. Transient oxidation of pure, Y-doped, and Zr-doped NiCrAl was also examined. Oriented alpha-(Al, Cr)2O3 and Ni(Al, Cr)2O4 scales often coexisted in layered structures on all three alloys. Close-packed oxygen planes and directions in the corundum and spinel layers were parallel. The close relationship between oxide layers provided a gradual transition from initial transient scales to steady-state Al2O3 growth.
CDX2 immunostaining in primary and metastatic germ cell tumours of the testis.
Oz Atalay, Fatma; Aytac Vuruskan, Berna; Vuruskan, Hakan
2016-12-01
Objective To evaluate the immunohistochemical staining pattern of caudal type homeobox 2 (CDX2) protein in germ cell tumours (GCTs) of the testis. Methods This study reassessed archival tissue samples collected from patients diagnosed with primary and metastatic testicular GCTs for CDX2 immunoreactivity using standard immunohistochemical techniques. Positive nuclear immunostaining was evaluated with regard to both the staining intensity and the extent of the staining. Results Tissue sections from primary and metastatic testicular GCTs (n = 104), germ cell neoplasia in situ (GCNis) (n = 5) and benign testicles (n = 15) were analysed. The GCNis and benign testicular tissues showed no immunoreactivity for CDX2. Strong and diffuse staining of CDX2 was demonstrated only in the mature colonic epithelium of teratomas in both primary and metastatic GCTs. CDX2 positivity in other tumours (one pure yolk sac tumour, one yolk sac component of a mixed GCT and one pure seminoma) was infrequent, and was only weak and focal. Conclusions CDX2 immunostaining should be interpreted based on both the staining intensity and the extent of staining so as not to cause misdiagnosis. Teratomas with colonic-type epithelium should be considered in the differential diagnosis if a metastatic tumour with an unknown primary shows prominent CDX2 immunostaining.
Rubio, L; Ortiz, M C; Sarabia, L A
2014-04-11
A non-separative, fast and inexpensive spectrofluorimetric method based on second-order calibration of excitation-emission fluorescence matrices (EEMs) was proposed for the determination of carbaryl, carbendazim and 1-naphthol in dried lime tree flowers. The trilinearity property of three-way data was used to handle the intrinsic fluorescence of lime flowers and the differences in fluorescence intensity among the analytes; it also made it possible to identify each analyte unequivocally. Trilinearity of the data tensor guarantees the uniqueness of the solution obtained through parallel factor analysis (PARAFAC), so the factors of the decomposition match up with the analytes. In addition, an experimental procedure was proposed to identify, with three-way data, the quenching effect produced by the fluorophores of the lime flowers. This procedure also enabled selection of an adequate dilution of the lime flower extract to minimize the quenching effect so that the three analytes could be quantified. Finally, the analytes were determined using the standard addition method for a calibration whose standards were chosen with a D-optimal design. The three analytes were unequivocally identified by the correlation between the pure spectra and the PARAFAC excitation and emission spectral loadings. Trueness was established by the accuracy line "calculated concentration versus added concentration" in all cases. Better decision limit values (CCα), at x0 = 0 with the probability of a false positive fixed at 0.05, were obtained for the calibration performed in pure solvent: 2.97 μg L⁻¹ for 1-naphthol, 3.74 μg L⁻¹ for carbaryl and 23.25 μg L⁻¹ for carbendazim. The CCα values for the second calibration, carried out in matrix, were 1.61, 4.34 and 51.75 μg L⁻¹, respectively, while the values obtained considering only the pure samples as the calibration set were 2.65, 8.61 and 28.7 μg L⁻¹, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
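The PARAFAC decomposition underlying this calibration can be computed with a plain alternating-least-squares (ALS) loop. The sketch below is generic numpy code, not the software used in the paper: it recovers trilinear factors of a three-way array X[i,j,k] = Σ_r A[i,r]·B[j,r]·C[k,r], which is the uniqueness-bearing structure the abstract relies on.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product: rows indexed (i, j) -> i*J + j."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def parafac_als(X, rank, n_iter=500, seed=0):
    """Alternating least squares for the trilinear PARAFAC model."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode unfoldings consistent with the Khatri-Rao ordering above
    X1 = X.reshape(I, -1)                       # columns indexed (j, k)
    X2 = np.moveaxis(X, 1, 0).reshape(J, -1)    # columns indexed (i, k)
    X3 = np.moveaxis(X, 2, 0).reshape(K, -1)    # columns indexed (i, j)
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

In the EEM setting the three modes correspond to samples, emission wavelengths, and excitation wavelengths, and the columns of B and C are the loadings compared against pure analyte spectra.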
Ichiyanagi, Osamu; Nagaoka, Akira; Izumi, Takuji; Kawamura, Yuko; Tsukigi, Masaaki; Ishii, Tatsuya; Ohji, Hiroshi; Kato, Tomoyuki; Tomita, Yoshihiko
2013-04-01
The aim of this study was to assess stone-free rates following extracorporeal shockwave lithotripsy (ESWL) of pure calcium oxalate (CaOx) stones in the proximal ureter. The investigators retrospectively examined 53 patients with 5-10 mm pure CaOx stones in the proximal ureter from the medical archives of 593 consecutive patients treated with ESWL. The compositions of calcium oxalate monohydrate (COM) and dihydrate (COD) in a given stone were determined by infrared spectrometry. Stone size, attenuation number and stone-to-skin distance (SSD) were measured using plain radiography and computed tomography (CT). ESWL success was evaluated by stone-free status after the first single session. On average, calculi were 8.0 × 5.3 mm in size, with an SSD of 11.0 cm. The mean CT attenuation value was 740.1 HU. Attenuation numbers correlated significantly with stone diameter (r = 0.49), but had no correlation with the stone content of COM or COD. A negative correlation was observed between COM and COD content (r = -0.925). With regard to patients' physical characteristics and COM and COD content, no differences were found between study subgroups with stone-free and residual status (n = 38 and 15, respectively). There were also no differences in clinical features between patient subgroups with COM- or COD-predominant stones (n = 22 and 31, respectively). The findings indicated that the differences in COM and COD content of CaOx stones had no impact on stone clearance after ESWL and that a favorable stone-free rate of the stones treated with ESWL may be achieved independently of CaOx hydration.
Evaluating models of remember-know judgments: complexity, mimicry, and discriminability.
Cohen, Andrew L; Rotello, Caren M; Macmillan, Neil A
2008-10-01
Remember-know judgments provide additional information in recognition memory tests, but the nature of this information and the attendant decision process are in dispute. Competing models have proposed that remember judgments reflect a sum of familiarity and recollective information (the one-dimensional model), are based on a difference between these strengths (STREAK), or are purely recollective (the dual-process model). A choice among these accounts is sometimes made by comparing the precision of their fits to data, but this strategy may be muddied by differences in model complexity: Some models that appear to provide good fits may simply be better able to mimic the data produced by other models. To evaluate this possibility, we simulated data with each of the models in each of three popular remember-know paradigms, then fit those data to each of the models. We found that the one-dimensional model is generally less complex than the others, but despite this handicap, it dominates the others as the best-fitting model. For both reasons, the one-dimensional model should be preferred. In addition, we found that some empirical paradigms are ill-suited for distinguishing among models. For example, data collected by soliciting remember/know/new judgments--that is, the trinary task--provide a particularly weak ground for distinguishing models. Additional tables and figures may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, at www.psychonomic.org/archive.
NASA Astrophysics Data System (ADS)
Casu, F.; de Luca, C.; Lanari, R.; Manunta, M.; Zinno, I.
2016-12-01
A methodology for computing surface deformation time series and mean velocity maps of large areas is presented. Our approach relies on the availability of a multi-temporal set of Synthetic Aperture Radar (SAR) data collected from ascending and descending orbits over an area of interest, and also permits estimation of the vertical and horizontal (East-West) displacement components of the Earth's surface. The adopted methodology is based on an advanced Cloud Computing implementation of the Differential SAR Interferometry (DInSAR) Parallel Small Baseline Subset (P-SBAS) processing chain, which allows unsupervised processing of large SAR data volumes, from raw (level-0) imagery up to the generation of DInSAR time series and maps. The presented solution, which is highly scalable, has been tested on the ascending and descending ENVISAT SAR archives acquired over a large area of Southern California (US) that extends for about 90,000 km2. This input dataset was processed in parallel on 280 computing nodes of the Amazon Web Services Cloud environment. Moreover, to produce the final mean deformation velocity maps of the vertical and East-West displacement components of the whole investigated area, we also took advantage of information available from external GPS measurements, which makes it possible to account for regional trends not easily detectable by DInSAR and to refer the P-SBAS measurements to an external geodetic datum. The presented results clearly demonstrate the effectiveness of the proposed approach, which paves the way to extensive use of the available ERS and ENVISAT SAR data archives. Furthermore, the proposed methodology is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, thus permitting extension of DInSAR analyses to a nearly global scale. This work is partially supported by: the DPC-CNR agreement, the EPOS-IP project and the ESA GEP project.
Sequential or parallel decomposed processing of two-digit numbers? Evidence from eye-tracking.
Moeller, Korbinian; Fischer, Martin H; Nuerk, Hans-Christoph; Willmes, Klaus
2009-02-01
While reaction time data have shown that decomposed processing of two-digit numbers occurs, there is little evidence about how decomposed processing functions. Poltrock and Schwartz (1984) argued that multi-digit numbers are compared in a sequential digit-by-digit fashion starting at the leftmost digit pair. In contrast, Nuerk and Willmes (2005) favoured parallel processing of the digits constituting a number. These models (i.e., sequential decomposition, parallel decomposition) make different predictions regarding the fixation pattern in a two-digit number magnitude comparison task and can therefore be differentiated by eye fixation data. We tested these models by evaluating participants' eye fixation behaviour while selecting the larger of two numbers. The stimulus set consisted of within-decade comparisons (e.g., 53_57) and between-decade comparisons (e.g., 42_57). The between-decade comparisons were further divided into compatible and incompatible trials (cf. Nuerk, Weger, & Willmes, 2001) and trials with different decade and unit distances. The observed fixation pattern implies that the comparison of two-digit numbers is not executed by sequentially comparing decade and unit digits as proposed by Poltrock and Schwartz (1984) but rather in a decomposed but parallel fashion. Moreover, the present fixation data provide first evidence that digit processing in multi-digit numbers is not a pure bottom-up effect, but is also influenced by top-down factors. Finally, implications for multi-digit number processing beyond the range of two-digit numbers are discussed.
Surgical bedside master console for neurosurgical robotic system.
Arata, Jumpei; Kenmotsu, Hajime; Takagi, Motoki; Hori, Tatsuya; Miyagi, Takahiro; Fujimoto, Hideo; Kajita, Yasukazu; Hayashi, Yuichiro; Chinzei, Kiyoyuki; Hashizume, Makoto
2013-01-01
We are currently developing a neurosurgical robotic system that facilitates access to residual tumors and improves brain tumor removal surgical outcomes. The system combines conventional and robotic surgery allowing for a quick conversion between the procedures. This concept requires a new master console that can be positioned at the surgical bedside and be sterilized. The master console was developed using new technologies, such as a parallel mechanism and pneumatic sensors. The parallel mechanism is a purely passive 5-DOF (degrees of freedom) joystick based on the author's haptic research. The parallel mechanism enables motion input of conventional brain tumor removal surgery with a compact, intuitive interface that can be used in a conventional surgical environment. In addition, the pneumatic sensors implemented on the mechanism provide an intuitive interface and electrically isolate the tool parts from the mechanism so they can be easily sterilized. The 5-DOF parallel mechanism is compact (17 cm width, 19 cm depth, and 15 cm height), provides a 50 x 50 x 50 mm and 90° workspace, and is highly backdrivable (0.27 N of resistance force representing the surgical motion). The evaluation tests revealed that the pneumatic sensors can properly measure the suction strength, grasping force, and hand contact. In addition, an installability test showed that the master console can be used in a conventional surgical environment. The proposed master console design was shown to be feasible for operative neurosurgery based on comprehensive testing. This master console is currently being tested for master-slave control with a surgical robotic system.
Design of a real-time wind turbine simulator using a custom parallel architecture
NASA Technical Reports Server (NTRS)
Hoffman, John A.; Gluck, R.; Sridhar, S.
1995-01-01
The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for analysis of wind energy systems in real time. The new processor has been named: the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel. The modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is far more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CU's) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CU's are needed to perform the complex computations associated with the wind turbine rotor system in real time. The parallel architecture of the CU causes several tasks to be done in each cycle, including an IO operation and the combination of a multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CU's are interfaced to each other and to other portions of the simulator using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CU's can be added in any number to share a given computational load. This flexible bus feature is very different from many other parallel processors, which usually have a throughput limit because of rigid bus architecture.
NASA Astrophysics Data System (ADS)
Butler, S. L.
2010-09-01
A porosity localizing instability occurs in compacting porous media that are subjected to shear if the viscosity of the solid matrix decreases with porosity (Stevenson, 1989). This instability may have significant consequences for melt transport in regions of partial melt in the mantle and may significantly modify the effective viscosity of the asthenosphere (Kohlstedt and Holtzman, 2009). Most analyses of this instability have been carried out assuming an imposed simple shear flow (e.g., Spiegelman, 2003; Katz et al., 2006; Butler, 2009). Pure shear can be realized in laboratory experiments; studying the instability in a pure shear flow allows us to test the generality of some of the results derived for simple shear, and the pure shear flow pattern more easily separates the effects of deformation from rotation. Pure shear flows may approximate flows near the tops of mantle plumes near Earth's surface and in magma chambers. In this study, we present linear theory and nonlinear numerical model results for a porosity and strain-rate weakening compacting porous layer subjected to pure shear, and we investigate the effects of buoyancy-induced oscillations. The linear theory and numerical model will be shown to be in excellent agreement. We will show that melt bands grow at the same angles to the direction of maximum compression as in simple shear and that buoyancy-induced oscillations do not significantly inhibit the porosity localizing instability. In a pure shear flow, bands parallel to the direction of maximum compression increase exponentially in wavelength with time. However, buoyancy-induced oscillations are shown to inhibit this increase in wavelength. In a simple shear flow, bands increase in wavelength when they are in the orientation for growth of the porosity localizing instability. Because the amplitude spectrum is always dominated by bands in this orientation, band wavelengths increase with time throughout simple shear simulations until the wavelength becomes similar to one compaction length. Once the wavelength becomes similar to one compaction length, the growth of the amplitude of the band slows and shorter wavelength bands that are increasing in amplitude at a greater rate take over. This may provide a mechanism to explain the experimental observation that band spacing is controlled by the compaction length (Kohlstedt and Holtzman, 2009).
USAID Expands eMODIS Coverage for Famine Early Warning
NASA Astrophysics Data System (ADS)
Jenkerson, C.; Meyer, D. J.; Evenson, K.; Merritt, M.
2011-12-01
Food security in countries at risk is monitored by U.S. Agency for International Development (USAID) through its Famine Early Warning Systems Network (FEWS NET) using many methods including Moderate Resolution Imaging Spectroradiometer (MODIS) data processed by U.S. Geological Survey (USGS) into eMODIS Normalized Difference Vegetation Index (NDVI) products. Near-real time production is used comparatively with trends derived from the eMODIS archive to operationally monitor vegetation anomalies indicating threatened cropland and rangeland conditions. eMODIS production over Central America and the Caribbean (CAMCAR) began in 2009, and processes 10-day NDVI composites every 5 days from surface reflectance inputs produced using predicted spacecraft and climatology information at Land and Atmosphere Near real time Capability for Earth Observing Systems (EOS) (LANCE). These expedited eMODIS composites are backed by a parallel archive of precision-based NDVI calculated from surface reflectance data ordered through Level 1 and Atmosphere Archive and Distribution System (LAADS). Success in the CAMCAR region led to the recent expansion of eMODIS production to include Africa in 2010, and Central Asia in 2011. Near-real time 250-meter products are available for each region on the last day of an acquisition interval (generally before midnight) from an anonymous file transfer protocol (FTP) distribution site (ftp://emodisftp.cr.usgs.gov/eMODIS). The FTP site concurrently hosts the regional historical collections (2000 to present) which are also searchable using the USGS Earth Explorer (http://edcsns17.cr.usgs.gov/NewEarthExplorer). As eMODIS coverage continues to grow, these geographically gridded, georeferenced tagged image file format (GeoTIFF) NDVI composites increase their utility as effective tools for operational monitoring of near-real time vegetation data against historical trends.
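The NDVI itself is the standard normalized band ratio (NIR - Red)/(NIR + Red). A minimal sketch of computing it and forming a maximum-value composite over a compositing window (a common compositing rule, assumed here rather than taken from the eMODIS product specification) is:

```python
import numpy as np

def ndvi(red, nir, eps=1e-12):
    """Normalized Difference Vegetation Index from red/NIR surface reflectance."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against zero denominators

def max_value_composite(ndvi_stack):
    """Per-pixel maximum over a compositing window (axis 0 = acquisition date);
    NaNs stand in for cloud-masked observations."""
    return np.nanmax(ndvi_stack, axis=0)
```

Taking the per-pixel maximum across the window is a common way to suppress cloud and atmospheric contamination, which biases NDVI low.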
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Mitchell; Thompson, Aidan P.
The purpose of this short contribution is to report on the development of a Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on the characterization of elastic and defect properties of the pure material in order to support molecular dynamics simulations of plasma-facing materials in fusion reactors. A parallel genetic algorithm approach was used to efficiently search for fitting parameters optimized against a large number of objective functions. In addition, we have shown that this many-body tungsten potential can be used in conjunction with a simple helium pair potential to produce accurate defect formation energies for the W-He binary system.
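The genetic algorithm is described only at a high level here. As a generic illustration of the technique (not the authors' parallel implementation, and with a toy objective standing in for the SNAP fitting objectives), a minimal real-coded GA might look like:

```python
import numpy as np

def genetic_minimize(objective, bounds, pop_size=40, n_gen=60, seed=0):
    """Minimal real-coded GA: binary tournament selection, blend crossover,
    Gaussian mutation, and elitist slot-wise replacement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    fit = np.array([objective(x) for x in pop])
    for _ in range(n_gen):
        # binary tournament: each child slot gets the fitter of two random parents
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # blend crossover with a shuffled mate, then Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1.0 - alpha) * mates
        children += rng.normal(0.0, 0.05 * (hi - lo), children.shape)
        children = np.clip(children, lo, hi)
        child_fit = np.array([objective(x) for x in children])
        # keep a child only where it beats the incumbent (monotone improvement)
        better = child_fit < fit
        pop[better], fit[better] = children[better], child_fit[better]
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])
```

In a production fitting workflow the objective evaluations (each requiring energy/force predictions against a reference database) are the expensive step, which is why parallelizing them across the population pays off.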
NASA Technical Reports Server (NTRS)
Mullan, Dermott J.
1987-01-01
Theoretical work on the atmospheres of M dwarfs has progressed along lines parallel to those followed in the study of other classes of stars. Such models have become increasingly sophisticated as improvements in opacities, in the equation of state, and in the treatment of convection were incorporated during the last 15 to 20 years. As a result, spectrophotometric data on M dwarfs can now be fitted rather well by current models. The various attempts at modeling M dwarf photospheres in purely thermal terms are summarized. Some extensions of these models to include the effects of microturbulence and magnetic inhomogeneities are presented.
Live Soap: Stability, Order, and Fluctuations in Apolar Active Smectics
NASA Astrophysics Data System (ADS)
Adhyapak, Tapan Chandra; Ramaswamy, Sriram; Toner, John
2013-03-01
We construct a hydrodynamic theory of noisy, apolar active smectics in bulk suspension or on a substrate. Unlike purely orientationally ordered active fluids, active apolar smectics can be dynamically stable in Stokesian bulk suspensions. Smectic order in these systems is quasi-long-ranged in dimension d=2 and long-ranged in d=3. We predict reentrant Kosterlitz-Thouless melting to an active nematic in our simplest model in d=2, a nonzero second-sound speed parallel to the layers in bulk suspensions, and that there are no giant number fluctuations in either case. We also briefly discuss possible instabilities in these systems.
Evaluation of aural manifestations in temporo-mandibular joint dysfunction.
Sobhy, O A; Koutb, A R; Abdel-Baki, F A; Ali, T M; El Raffa, I Z; Khater, A H
2004-08-01
Thirty patients with temporo-mandibular joint dysfunction were selected to investigate the changes in otoacoustic emissions before and after conservative treatment of their temporo-mandibular joints. Pure tone audiometry, transient-evoked otoacoustic emissions (TEOAE), distortion-product otoacoustic emissions (DPOAE) as well as a tinnitus questionnaire were administered to all patients before and after therapy. Therapy was conservative in the form of counselling, physiotherapy, anti-inflammatory agents, muscle relaxants, and occlusal splints. Results indicated insignificant changes in the TEOAEs, whereas there were significant increases in distortion product levels at most of the frequency bands. These results paralleled subjective improvement of tinnitus.
Voltage-spike analysis for a free-running parallel inverter
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Wilson, T. G.
1974-01-01
Unwanted and sometimes damaging high-amplitude voltage spikes occur during each half cycle in many transistor saturable-core inverters at the moment when the core saturates and the transistors switch. The analysis shows that spikes are an intrinsic characteristic of certain types of inverters even with negligible leakage inductance and purely resistive load. The small but unavoidable after-saturation inductance of the saturable-core transformer plays an essential role in creating these undesired high-voltage spikes. State-plane analysis provides insight into the complex interaction between core and transistors, and shows the circuit parameters upon which the magnitude of these spikes depends.
Flow cytometry for enrichment and titration in massively parallel DNA sequencing
Sandberg, Julia; Ståhl, Patrik L.; Ahmadian, Afshin; Bjursell, Magnus K.; Lundeberg, Joakim
2009-01-01
Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences. However, the reagent costs and labor requirements in current sequencing protocols are still substantial, although improvements are continuously being made. Here, we demonstrate an effective alternative to existing sample titration protocols for the Roche/454 system using Fluorescence Activated Cell Sorting (FACS) technology to determine the optimal DNA-to-bead ratio prior to large-scale sequencing. Our method, which eliminates the need for the costly pilot sequencing of samples during titration, is capable of rapidly providing accurate DNA-to-bead ratios that are not biased by the quantification and sedimentation steps included in current protocols. Moreover, we demonstrate that FACS sorting can be readily used to highly enrich fractions of beads carrying template DNA, with near total elimination of empty beads and no downstream sacrifice of DNA sequencing quality. Automated enrichment by FACS is a simple approach to obtain pure samples for bead-based sequencing systems, and offers an efficient, low-cost alternative to current enrichment protocols. PMID:19304748
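The titration problem being solved here, choosing the DNA-to-bead ratio, is often reasoned about with a Poisson loading model for bead-based emulsion PCR. The sketch below is a back-of-envelope illustration of that model, not taken from the paper:

```python
import math

def bead_occupancy(dna_to_bead_ratio):
    """Poisson loading model: probabilities that a bead carries zero,
    exactly one, or more than one template molecule."""
    lam = dna_to_bead_ratio
    p_empty = math.exp(-lam)
    p_single = lam * math.exp(-lam)
    p_multi = 1.0 - p_empty - p_single
    return p_empty, p_single, p_multi
```

Under this model, at a ratio low enough to keep mixed (multi-template) beads rare, most beads stay empty, which is why enriching the template-carrying fraction (here by FACS) is so valuable.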
Pythran: enabling static optimization of scientific Python programs
NASA Astrophysics Data System (ADS)
Guelton, Serge; Brunet, Pierrick; Amini, Mehdi; Merlini, Adrien; Corbillon, Xavier; Raynaud, Alan
2015-01-01
Pythran is an open source static compiler that turns modules written in a subset of Python language into native ones. Assuming that scientific modules do not rely much on the dynamic features of the language, it trades them for powerful, possibly inter-procedural, optimizations. These optimizations include detection of pure functions, temporary allocation removal, constant folding, Numpy ufunc fusion and parallelization, explicit thread-level parallelism through OpenMP annotations, false variable polymorphism pruning, and automatic vector instruction generation such as AVX or SSE. In addition to these compilation steps, Pythran provides a C++ runtime library that leverages the C++ STL to provide generic containers, and the Numeric Template Toolbox for Numpy support. It takes advantage of modern C++11 features such as variadic templates, type inference, move semantics and perfect forwarding, as well as classical idioms such as expression templates. Unlike the Cython approach, Pythran input code remains compatible with the Python interpreter. Output code is generally as efficient as the annotated Cython equivalent, if not more, but without the backward compatibility loss.
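As a concrete illustration of the claim that Pythran input code remains valid Python: a Pythran module is ordinary Python plus an export comment giving the signature to compile. The function below is a made-up example (whether a given numpy construct compiles depends on Pythran's numpy coverage); it runs unchanged under CPython, and compiling the file with the `pythran` command would produce a native module:

```python
# pythran export pairwise_dist(float64[:, :])
import numpy as np

def pairwise_dist(X):
    """All-pairs Euclidean distances between the rows of X."""
    diff = X[:, None, :] - X[None, :, :]   # broadcast to (n, n, d)
    return np.sqrt((diff ** 2).sum(axis=-1))
```

Because the export annotation lives in a comment, the plain interpreter simply ignores it; this is the backward-compatibility advantage over Cython's extended syntax mentioned above.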
Asymptotic-preserving Lagrangian approach for modeling anisotropic transport in magnetized plasmas
NASA Astrophysics Data System (ADS)
Chacon, Luis; Del-Castillo-Negrete, Diego
2012-03-01
Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy between parallel (to the magnetic field) and perpendicular directions (the transport-coefficient ratio χ∥/χ⊥ ~ 10^10 in fusion plasmas). Recently, a novel Lagrangian Green's function method has been proposed [D. del-Castillo-Negrete and L. Chacón, PRL 106, 195004 (2011); D. del-Castillo-Negrete and L. Chacón, Phys. Plasmas, submitted (2011)] to solve the local and non-local purely parallel transport equation in general 3D magnetic fields. The approach avoids numerical pollution, is inherently positivity-preserving, and is algorithmically scalable (i.e., work per degree of freedom is grid-independent). In this poster, we discuss the extension of the Lagrangian Green's function approach to include perpendicular transport terms and sources. We present an asymptotic-preserving numerical formulation, which ensures a consistent numerical discretization temporally and spatially for arbitrary χ∥/χ⊥ ratios. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry.
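For reference, the anisotropic transport equation alluded to here has the standard form below (notation assumed, with b̂ the unit vector along the magnetic field and S a source term; this is the textbook form, not quoted from the abstract):

```latex
\frac{\partial T}{\partial t}
  = \nabla \cdot \left[ \chi_\parallel \, \hat{\mathbf{b}} \left( \hat{\mathbf{b}} \cdot \nabla T \right)
  + \chi_\perp \left( \nabla T - \hat{\mathbf{b}} \left( \hat{\mathbf{b}} \cdot \nabla T \right) \right) \right] + S,
\qquad \frac{\chi_\parallel}{\chi_\perp} \sim 10^{10}.
```

The difficulty named in the abstract is that grid-based discretizations of the χ∥ term pollute the tiny perpendicular flux when the ratio is this extreme, which is what the Lagrangian Green's function treatment of the parallel operator avoids.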
Analysis of multiple internal reflections in a parallel aligned liquid crystal on silicon SLM.
Martínez, José Luis; Moreno, Ignacio; del Mar Sánchez-López, María; Vargas, Asticio; García-Martínez, Pascuala
2014-10-20
Multiple internal reflection effects on the optical modulation of a commercial reflective parallel-aligned liquid-crystal on silicon (PAL-LCoS) spatial light modulator (SLM) are analyzed. The display is illuminated with different wavelengths and different angles of incidence. Non-negligible Fabry-Perot (FP) effect is observed due to the sandwiched LC layer structure. A simplified physical model that quantitatively accounts for the observed phenomena is proposed. It is shown how the expected pure phase modulation response is substantially modified in the following aspects: 1) a coupled amplitude modulation, 2) a non-linear behavior of the phase modulation, 3) some amount of unmodulated light, and 4) a reduction of the effective phase modulation as the angle of incidence increases. Finally, it is shown that multiple reflections can be useful since the effect of a displayed diffraction grating is doubled on a beam that is reflected twice through the LC layer, thus rendering gratings with doubled phase modulation depth.
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Linczuk, Maciej
2016-09-01
The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. Compilers interpret an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translate it to a Hardware Description Language (HDL). This paper presents an RPython-based High-Level Synthesis (HLS) compiler. The compiler takes the configuration parameters and maps an RPython program to VHDL. The VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software, as a result of omitting the fetch-decode-execute operations of General Purpose Processors (GPPs) and introducing more parallel computation. This can be exploited by utilizing many resources at the same time. Creating parallel algorithms computed with FPGAs in pure HDL is difficult and time consuming. Implementation time can be greatly reduced with a High-Level Synthesis compiler. This article describes design methodologies and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.
Virtual Flight Demonstration of the Stratospheric Dual-Aircraft Platform
NASA Technical Reports Server (NTRS)
Engblom, W. A.; Decker, R. K.
2016-01-01
A baseline configuration for the dual-aircraft platform (DAP) concept is described and evaluated in physics-based flight dynamics simulations for two month-long missions as a communications relay in the lower stratosphere above central Florida. The DAP features two unmanned aerial vehicles connected via a long adjustable cable which effectively sail back-and-forth using wind velocity gradients and solar energy. Detailed atmospheric profiles in the vicinity of 60,000-ft derived from archived data measured by the 50-MHz Doppler Radar Wind Profiler at Cape Canaveral are used in the flight simulations. An overview of the novel guidance and flight control strategies is provided. The energy usage of the baseline configuration during month-long stationkeeping missions (i.e., within a 150-mile radius of downtown Orlando) is characterized and compared to that of a pure solar aircraft.
A spectroscopic search for colliding stellar winds in O-type close binary systems. IV - Iota Orionis
NASA Technical Reports Server (NTRS)
Gies, Douglas R.; Wiggs, Michael S.; Bagnuolo, William G., Jr.
1993-01-01
We present H-alpha and He I 6678 A line profiles for the eccentric orbit binary Iota Ori. We have applied a tomography algorithm which uses the established orbital velocity curves and intensity ratio to reconstruct the spectral line profiles for each star. The He I profiles appear as pure photospheric lines, and H-alpha shows variable emission in the line core throughout the orbit (which is typical of O giants) and in the blue wing near periastron passage. We show that the blue wing emission is consistent with an origin between the stars which probably results from a dramatic focusing of the primary's stellar wind at periastron. We also present IUE archival spectra of the UV wind lines N V 1240 A and C IV 1550 A.
Transition Regimes of Jet Impingement on Rib and Cavity Superhydrophobic Surfaces
NASA Astrophysics Data System (ADS)
Johnson, Michael; Maynes, Daniel; Webb, Brent
2010-11-01
We report experimental results characterizing the dynamics of a liquid jet impinging normally on superhydrophobic surfaces spanning the Weber number (based on the jet velocity and diameter) range from 100 to 2000. The superhydrophobic surfaces are fabricated with both silicon and PDMS surfaces and exhibit micro-ribs and cavities coated with a hydrophobic coating. In general, the hydraulic jump exhibits an elliptical shape with the major axis being aligned parallel to the ribs, concomitant with the frictional resistance being smaller in the parallel direction than in the transverse direction. When the water depth downstream of the jump was imposed at a predetermined value, the major and minor axes of the jump increased with decreasing water depth, following classical hydraulic jump behavior. When no water depth was imposed, a regime change was observed within the Weber number range explored. For We < 1200, the flow forms a filament at the edge of the ellipse, where the flow moves along the rim of the ellipse toward the major axis. The filaments then join and continue to move parallel to the ribs. For 1200 < We < 1800, the filaments beyond the ellipse break into multiple streams and droplets and begin to take on a component perpendicular to the ribs. For We > 1800 a small amount of water flows purely in the transverse direction.
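The Weber number used to organize these regimes is the standard ratio of inertial to surface-tension forces. A minimal helper (with illustrative values, not the experimental conditions of this study) is:

```python
def weber_number(density, velocity, diameter, surface_tension):
    """We = rho * v**2 * D / sigma: jet inertia vs. surface tension (SI units)."""
    return density * velocity ** 2 * diameter / surface_tension
```

For example, a 2 mm water jet at 6 m/s (rho of about 998 kg/m^3, sigma of about 0.072 N/m) gives a Weber number near 1000, which would sit inside the first filament regime described above.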
2D imaging of helium ion velocity in the DIII-D divertor
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Porter, G. D.; Meyer, W. H.; Rognlien, T. D.; Allen, S. L.; Briesemeister, A.; Mclean, A. G.; Zeng, L.; Jaervinen, A. E.; Howard, J.
2018-05-01
Two-dimensional imaging of parallel ion velocities is compared to fluid modeling simulations to understand the role of ions in determining divertor conditions and benchmark the UEDGE fluid modeling code. Pure helium discharges are used so that spectroscopic He+ measurements represent the main-ion population at small electron temperatures. Electron temperatures and densities in the divertor match simulated values to within about 20%-30%, establishing the experiment/model match as being at least as good as those normally obtained in the more regularly simulated deuterium plasmas. He+ brightness (HeII) comparison indicates that the degree of detachment is captured well by UEDGE, principally due to the inclusion of E ×B drifts. Tomographically inverted Coherence Imaging Spectroscopy measurements are used to determine the He+ parallel velocities which display excellent agreement between the model and the experiment near the divertor target where He+ is predicted to be the main-ion species and where electron-dominated physics dictates the parallel momentum balance. Upstream near the X-point where He+ is a minority species and ion-dominated physics plays a more important role, there is an underestimation of the flow velocity magnitude by a factor of 2-3. These results indicate that more effort is required to be able to correctly predict ion momentum in these challenging regimes.
Herzog, Bastian; Lemmer, Hilde; Horn, Harald; Müller, Elisabeth
2014-02-22
Evaluation of the biodegradation potential of xenobiotics, shown here for benzotriazoles (corrosion inhibitors) and sulfamethoxazole (SMX, a sulfonamide antibiotic), by microbial communities and/or pure cultures normally requires time- and cost-intensive LC/GC methods that are, in the case of laboratory setups, not always needed. Owing to the use of high concentrations to apply a high selective pressure on the microbial communities/pure cultures in laboratory setups, a simple UV-absorbance measurement (UV-AM) could be developed and validated for screening a large number of setups, requiring almost no preparation and significantly less time and money compared to LC/GC methods. This rapid and easy-to-use method was evaluated by comparing its measured values to LC-UV and GC-MS/MS results. Furthermore, its application for monitoring and screening unknown activated sludge communities (ASC) and mixed pure cultures was tested and approved to detect biodegradation of benzotriazole (BTri), 4- and 5-tolyltriazole (4-TTri, 5-TTri) as well as SMX. In laboratory setups, xenobiotic concentrations above 1.0 mg L(-1) could be detected without any enrichment or preparation after optimization of the method. As UV-AM does not require much preparatory work and can be conducted in 96- or even 384-well plate formats, the number of possible parallel setups and the screening efficiency were significantly increased while analytical and laboratory costs were reduced to a minimum.
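The UV-absorbance readout ultimately rests on the Beer-Lambert law, by which a falling absorbance at the analyte's wavelength indicates a falling concentration. A minimal sketch of the conversion (the molar absorptivity value in the test is hypothetical, not from the paper) is:

```python
def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_cm=1.0):
    """Beer-Lambert law A = eps * c * l, solved for concentration c = A / (eps * l).

    absorbance: dimensionless A at the analyte's absorption wavelength
    molar_absorptivity: eps in L mol^-1 cm^-1
    path_length_cm: optical path length l in cm (well depth for plate readers)
    """
    return absorbance / (molar_absorptivity * path_length_cm)
```

In a screening setup like the one described, one typically tracks the relative decrease of absorbance against an abiotic control rather than absolute concentrations, which is why no chromatographic separation is needed.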
Griera, Albert; Steinbach, Florian; Bons, Paul D.; Jansen, Daniela; Roessiger, Jens; Lebensohn, Ricardo A.
2017-01-01
The flow of glaciers and polar ice sheets is controlled by the highly anisotropic rheology of ice crystals that have hexagonal symmetry (ice Ih). To improve our knowledge of ice sheet dynamics, it is necessary to understand how dynamic recrystallization (DRX) controls ice microstructures and rheology at different boundary conditions that range from pure shear flattening at the top to simple shear near the base of the sheets. We present a series of two-dimensional numerical simulations that couple ice deformation with DRX of various intensities, paying special attention to the effect of boundary conditions. The simulations show how similar orientations of c-axis maxima with respect to the finite deformation direction develop regardless of the amount of DRX and applied boundary conditions. In pure shear this direction is parallel to the maximum compressional stress, while it rotates towards the shear direction in simple shear. This leads to strain hardening and increased activity of non-basal slip systems in pure shear and to strain softening in simple shear. Therefore, it is expected that ice is effectively weaker in the lower parts of the ice sheets than in the upper parts. Strain-rate localization occurs in all simulations, especially in simple shear cases. Recrystallization suppresses localization, which necessitates the activation of hard, non-basal slip systems. This article is part of the themed issue ‘Microdynamics of ice’. PMID:28025295
2014-01-01
Background: Evaluation of the biodegradation potential of xenobiotics, shown here for benzotriazoles (corrosion inhibitors) and sulfamethoxazole (SMX, a sulfonamide antibiotic), by microbial communities and/or pure cultures normally requires time-intensive and costly LC/GC methods that are, in the case of laboratory setups, not always needed. Results: Because high concentrations are used to apply a high selective pressure on the microbial communities/pure cultures in laboratory setups, a simple UV-absorbance measurement (UV-AM) was developed and validated for screening a large number of setups, requiring almost no preparation and significantly less time and money compared to LC/GC methods. This rapid and easy-to-use method was evaluated by comparing its measured values to LC-UV and GC-MS/MS results. Furthermore, its application for monitoring and screening unknown activated sludge communities (ASC) and mixed pure cultures was tested and shown to detect biodegradation of benzotriazole (BTri), 4- and 5-tolyltriazole (4-TTri, 5-TTri) as well as SMX. Conclusions: In laboratory setups, xenobiotic concentrations above 1.0 mg L-1 could be detected without any enrichment or preparation after optimization of the method. As UV-AM does not require much preparatory work and can be conducted in 96- or even 384-well plate formats, the number of possible parallel setups and the screening efficiency were significantly increased while analytical and laboratory costs were reduced to a minimum. PMID:24558966
Ramachandran, Balaji; Jayavelu, Subramani; Murhekar, Kanchan; Rajkumar, Thangarajan
2016-01-01
EGCG (epigallocatechin-3-gallate) is the major active principle catechin found in green tea. Skepticism regarding the safety of consuming EGCG is gaining attention, despite the fact that it is widely touted for its potential health benefits, including anti-cancer properties. The lack of scientific data on safe dose levels of pure EGCG is of concern, as EGCG has commonly been studied as a component of green tea extract (GTE) and not as a single active constituent. This study was carried out to estimate the maximum tolerated non-toxic dose of pure EGCG and to identify treatment-related risk factors. In a fourteen-day consecutive treatment, two different administration modalities were compared, offering improved [i.p. (intraperitoneal)] and limited [p.o. (oral)] bioavailability. A trend of dose- and route-dependent hepatotoxicity was observed, particularly with i.p. treatment, and EGCG increased the serum lipid profile in parallel with hepatotoxicity. The fourteen-day tolerable dose of EGCG was established as 21.1 mg/kg for i.p. and 67.8 mg/kg for p.o. administration. We also observed that EGCG-induced effects by both treatment routes were reversible after a further fourteen-day observation period following cessation of treatment. It was demonstrated that the severity of EGCG-induced toxicity appears to be a function of dose, route of administration, and period of treatment.
Nekkanti, Vijaykumar; Venkatesan, Natarajan; Wang, Zhijun; Betageri, Guru V
2015-01-01
The objective of our investigation was to develop a proliposomal formulation to improve the oral bioavailability of valsartan. Proliposomes were formulated by the thin-film hydration technique using different ratios of phospholipids:drug:cholesterol. The prepared proliposomes were evaluated for vesicle size, encapsulation efficiency, morphological properties, in vitro drug release, in vitro permeability, and in vivo pharmacokinetics. In vitro drug-release studies were performed in simulated gastric fluid (pH 1.2) and purified water using the dialysis bag method. In vitro drug permeation was studied using the parallel artificial membrane permeation assay (PAMPA), Caco-2 monolayers, and everted rat intestinal perfusion techniques. In vivo pharmacokinetic studies were conducted in male Sprague Dawley (SD) rats. Among the proliposomal formulations, F-V was found to have the highest encapsulation efficiency of 95.6 ± 2.9% with a vesicle size of 364.1 ± 14.9 nm. The in vitro dissolution studies indicated improved drug release from proliposomal formulation F-V in comparison to pure drug suspension in both purified water and pH 1.2 dissolution media after 12 h. Permeability in the PAMPA, Caco-2 cell, and everted rat intestinal perfusion studies was higher with the F-V formulation as compared to pure drug. Following single oral administration of the F-V formulation, a relative bioavailability of 202.36% was achieved as compared to pure valsartan.
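The relative bioavailability figure quoted above is conventionally computed from dose-normalized areas under the plasma concentration-time curve (AUC). A minimal sketch of that standard formula, using hypothetical AUC and dose values that are not taken from the study:

```python
def relative_bioavailability(auc_test, auc_ref, dose_test, dose_ref):
    """Dose-normalized relative bioavailability, in percent:
    F_rel = (AUC_test / AUC_ref) * (Dose_ref / Dose_test) * 100
    """
    return (auc_test / auc_ref) * (dose_ref / dose_test) * 100.0

# Hypothetical illustration only: equal doses, test formulation doubles AUC
print(relative_bioavailability(auc_test=40.0, auc_ref=20.0,
                               dose_test=10.0, dose_ref=10.0))  # → 200.0
```

With equal doses the dose-normalization term drops out, so the ratio of AUCs alone determines the result.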
Passive wireless surface acoustic wave sensors for monitoring CO2 emission at sequestration sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yizhong; Chyu, Minking; Wang, Qing-Ming
2013-02-14
University of Pittsburgh's Transducer Lab has teamed with the U.S. Department of Energy's National Energy Technology Laboratory (DOE NETL) to conduct a comprehensive study to develop and evaluate low-cost, efficient CO2-measuring technologies for leakage monitoring at geological sequestration sites. A passive wireless CO2 sensing system based on surface acoustic wave technology and a carbon nanotube (CNT) nanocomposite was developed. The surface acoustic wave device was studied to determine the optimum parameters, and a delay-line structure was adopted as the basic sensor structure. The CNT polymer nanocomposite was fabricated and tested under different temperature and strain conditions to evaluate the impact of the natural environment. Nanocomposite resistance increased 5-fold under pure strain, while the temperature dependence of resistance for the CNTs alone was -1375 ppm/°C. The overall effect of temperature on nanocomposite resistance was -1000 ppm/°C. The gas response of the nanocomposite was about a 10% resistance increase under pure CO2. The sensor frequency change was around 300 ppm for pure CO2. With parylene packaging, the sensor frequency change from 0% to 100% relative humidity at room temperature decreased from over 1000 ppm to less than 100 ppm. The lowest detection limit of the sensor is 1% gas concentration, with a 36 ppm frequency change. The wireless module was tested and showed over one foot of transmission distance at the preferred parallel orientation.
Llorens, Maria-Gema; Griera, Albert; Steinbach, Florian; Bons, Paul D; Gomez-Rivas, Enrique; Jansen, Daniela; Roessiger, Jens; Lebensohn, Ricardo A; Weikusat, Ilka
2017-02-13
The flow of glaciers and polar ice sheets is controlled by the highly anisotropic rheology of ice crystals that have hexagonal symmetry (ice Ih). To improve our knowledge of ice sheet dynamics, it is necessary to understand how dynamic recrystallization (DRX) controls ice microstructures and rheology at different boundary conditions that range from pure shear flattening at the top to simple shear near the base of the sheets. We present a series of two-dimensional numerical simulations that couple ice deformation with DRX of various intensities, paying special attention to the effect of boundary conditions. The simulations show how similar orientations of c-axis maxima with respect to the finite deformation direction develop regardless of the amount of DRX and applied boundary conditions. In pure shear this direction is parallel to the maximum compressional stress, while it rotates towards the shear direction in simple shear. This leads to strain hardening and increased activity of non-basal slip systems in pure shear and to strain softening in simple shear. Therefore, it is expected that ice is effectively weaker in the lower parts of the ice sheets than in the upper parts. Strain-rate localization occurs in all simulations, especially in simple shear cases. Recrystallization suppresses localization, which necessitates the activation of hard, non-basal slip systems. This article is part of the themed issue 'Microdynamics of ice'. © 2016 The Author(s).
Giorio, Chiara; Kehrwald, Natalie; Barbante, Carlo; Kalberer, Markus; King, Amy C.F.; Thomas, Elizabeth R.; Wolff, Eric W.; Zennaro, Piero
2018-01-01
Polar ice cores provide information about past climate and environmental changes over periods ranging from a few years up to 800,000 years. The majority of chemical studies have focused on determining inorganic components, such as major ions and trace elements as well as on their isotopic fingerprint. In this paper, we review the different classes of organic compounds that might yield environmental information, discussing existing research and what is needed to improve knowledge. We also discuss the problems of sampling, analysis and interpretation of organic molecules in ice. This review highlights the great potential for organic compounds to be used as proxies for anthropogenic activities, past fire events from different types of biomass, terrestrial biogenic emissions and marine biological activity, along with the possibility of inferring past temperature fluctuations and even large-scale climate variability. In parallel, comprehensive research needs to be done to assess the atmospheric stability of these compounds, their ability to be transported long distances in the atmosphere, and their stability in the archive in order to better interpret their fluxes in ice cores. In addition, specific decontamination procedures, analytical methods with low detection limits (ng/L or lower), fast analysis time and low sample requests need to be developed in order to ensure a good time resolution in the archive.
NASA Astrophysics Data System (ADS)
Giorio, Chiara; Kehrwald, Natalie; Barbante, Carlo; Kalberer, Markus; King, Amy C. F.; Thomas, Elizabeth R.; Wolff, Eric W.; Zennaro, Piero
2018-03-01
Polar ice cores provide information about past climate and environmental changes over periods ranging from a few years up to 800,000 years. The majority of chemical studies have focused on determining inorganic components, such as major ions and trace elements as well as on their isotopic fingerprint. In this paper, we review the different classes of organic compounds that might yield environmental information, discussing existing research and what is needed to improve knowledge. We also discuss the problems of sampling, analysis and interpretation of organic molecules in ice. This review highlights the great potential for organic compounds to be used as proxies for anthropogenic activities, past fire events from different types of biomass, terrestrial biogenic emissions and marine biological activity, along with the possibility of inferring past temperature fluctuations and even large-scale climate variability. In parallel, comprehensive research needs to be done to assess the atmospheric stability of these compounds, their ability to be transported long distances in the atmosphere, and their stability in the archive in order to better interpret their fluxes in ice cores. In addition, specific decontamination procedures, analytical methods with low detection limits (ng/L or lower), fast analysis time and low sample requests need to be developed in order to ensure a good time resolution in the archive.
NASA Astrophysics Data System (ADS)
Lonsdale, Carol
The 2 Micron All Sky Survey (2MASS) project, a collaboration between the University of Massachusetts (Dr. Mike Skrutskie, PI) and the Infrared Processing and Analysis Center, JPL/Caltech, funded primarily by NASA and the NSF, will scan the entire sky utilizing two new, highly automated 1.3m telescopes at Mt. Hopkins, AZ and at CTIO, Chile. Each telescope simultaneously scans the sky at J, H, and Ks with a three-channel camera using 256x256 arrays of HgCdTe detectors to detect point sources brighter than about 1 mJy (to SNR=10), with a pixel size of 2.0 arcseconds. The data rate is ~19 Gbyte per night, with a total processed data volume of 13 Tbytes of images and 0.5 Tbyte of tabular data. The 2MASS data are archived nightly into the Infrared Science Information System at IPAC, which is based on an Informix database engine, judged at the time of purchase to have the best commercially available indexing and parallelization flexibility, and a 5 Tbyte-capacity RAID multi-threaded disk system with a multi-server shared-disk architecture. I will discuss the challenges of processing and archiving the 2MASS data, and of supporting intelligent query access to them by the astronomical community across the net, including possibilities for cross-correlation with other remote data sets.
The Interplay of Star formation and Accretion in the Local Universe
NASA Astrophysics Data System (ADS)
Green, Paul
2010-09-01
Galaxy evolution and supermassive black hole growth are closely linked, but the inter-relationships between active accretion and star formation, AGN outflows, and host morphological trends remain poorly understood. We propose to study an unprecedented sample of 615 low-redshift SDSS galaxies and AGN detected in archival Chandra fields. We will measure diverse optical and X-ray spectroscopic properties spanning the artificial galaxy/AGN divide, and provide detailed results of our model fitting. We highlight tests of (1) an evolutionary sequence from star-forming through AGN to passive galaxy modes, (2) narrow-line Sy1 galaxies and new parallels between the accretion modes of AGN and stellar-mass X-ray binaries, and (3) the relationship of host morphology and mergers to accretion.
The beginnings of the Southern Child/Pediatric Neurology Society.
Dyken, Paul Richard; Bodensteiner, John B
2015-04-01
The founding and early development of the Southern Pediatric Neurology Society was in many ways parallel to that of the Child Neurology Society. The organization started out as the Southern Child Neurology Society but the name was changed at the time of incorporation so as to avoid confusion of identity and purpose with the larger Child Neurology Society. Although there are archives of early days and the later development of the Southern Pediatric Neurology Society, the details have never been set down in a narrative explaining the events that led to the development of the organization. In this paper, we try to produce a written record of the history of the founding and early development of the Southern Pediatric Neurology Society. © The Author(s) 2014.
VizieR Online Data Catalog: SG1120-1202 members HST imaging & 24um fluxes (Monroe+, 2017)
NASA Astrophysics Data System (ADS)
Monroe, J. T.; Tran, K.-V. H.; Gonzalez, A. H.
2017-09-01
We employ HST imaging of an ~8'x12' mosaic across three filters: F390W (WFC3/UVIS), F606W (ACS/WFC), and F814W (ACS/WFC) for a total of 44 pointings (combined primary and parallels) during cycles 14 (GO 10499) and 19 (GO 12470). We use the Spitzer MIPS 24um fluxes from Saintonge+ (2008ApJ...685L.113S) and Tran+ (2009ApJ...705..809T). The 24um observations were retrieved from the Spitzer archive. For details on spectroscopy from multi-band ground-based observations using Magellan (in 2006), MMT, and VLT/VIMOS (in 2003), we refer the reader to Tran+ (2009ApJ...705..809T). (1 data file).
NASA Technical Reports Server (NTRS)
Teng, William; Rui, Hualan; Strub, Richard; Vollmer, Bruce
2015-01-01
A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO), and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds.
Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.
NASA Astrophysics Data System (ADS)
Teng, W. L.; Rui, H.; Strub, R. F.; Vollmer, B.
2015-12-01
A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds. 
Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.
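The "orthogonal" reorganization the data rods project describes — turning time-step "maps" into point time series — amounts to slicing a stacked data cube along its time axis. A minimal in-memory sketch (the operational system instead pre-generates rods or assembles them on the fly from data cubes processed in parallel; the array shapes here are illustrative):

```python
import numpy as np

# A stack of time-step "maps": shape (n_times, n_lat, n_lon).
# In the operational system this would be ~9,000 hourly time steps.
n_times, n_lat, n_lon = 4, 2, 3
maps = np.arange(n_times * n_lat * n_lon, dtype=float).reshape(n_times, n_lat, n_lon)

def data_rod(stack, lat_idx, lon_idx):
    """Extract the full time series ("data rod") at one grid cell."""
    return stack[:, lat_idx, lon_idx]

rod = data_rod(maps, 1, 2)  # time series at a single point, one value per time step
print(rod)
```

The archive layout makes each time-step map contiguous, so assembling a rod touches every time step; pre-generating rods (or caching cubes) trades storage for exactly this access pattern.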
The Surface Ocean CO2 Atlas: Stewarding Underway Carbon Data from Collection to Archival
NASA Astrophysics Data System (ADS)
O'Brien, K.; Smith, K. M.; Pfeil, B.; Landa, C.; Bakker, D. C. E.; Olsen, A.; Jones, S.; Shrestha, B.; Kozyr, A.; Manke, A. B.; Schweitzer, R.; Burger, E. F.
2016-02-01
The Surface Ocean CO2 Atlas (SOCAT, www.socat.info) is a quality-controlled, global surface ocean carbon dioxide (CO2) data set gathered on research vessels, SOOP, and buoys. To the degree feasible, SOCAT is comprehensive; it draws together and applies uniform QC procedures to all such observations made across the international community. The first version of SOCAT (version 1.5) was publicly released in September 2011 (Bakker et al., 2011) with 6.3 million observations. This was followed by the release of SOCAT version 2, expanded to over 10 million observations, in June 2013 (Bakker et al., 2013). Most recently, in September 2015, SOCAT version 3 was released, containing over 14 million observations spanning almost 60 years. The process of assembling, QC'ing, and publishing V1.5 and V2 of SOCAT required an unsustainable level of manual effort. To ease the burden on data managers and data providers, the SOCAT community agreed to embark on an automated data ingestion process that would create a streamlined workflow to improve data stewardship from ingestion to quality control and from publishing to archival. To that end, for version 3 and beyond, the SOCAT automation team created a framework based upon standards and conventions that at the same time allows scientists to work in the data formats they feel most comfortable with (i.e., CSV files). This automated workflow provides several advantages: (1) data ingestion into uniform and standards-based file formats; (2) ease of data integration into a standard quality control system; (3) data ingestion and quality control can be performed in parallel; and (4) a uniform method of archiving carbon data and generating digital object identifiers (DOIs). In this presentation, we will discuss and demonstrate the SOCAT data ingestion dashboard and the quality control system.
We will also discuss the standards, conventions, and tools that were leveraged to create a workflow that allows scientists to work in their own formats, yet provides a framework for creating high quality data products on an annual basis, while meeting or exceeding data requirements for access, documentation and archival.
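The ingestion step described above — accepting contributor CSV files while storing uniform, convention-based records — can be sketched as a small column-normalization pass. The column names and standard vocabulary below are hypothetical illustrations, not SOCAT's actual schema:

```python
import csv
import io

# Hypothetical mapping from contributor column names to a standard vocabulary
COLUMN_MAP = {"lon": "longitude", "lat": "latitude", "fco2": "fCO2_recommended"}

def normalize(csv_text):
    """Read a contributor CSV and emit records keyed by standardized names.

    Unknown columns pass through unchanged, so contributors can keep
    working in whatever CSV layout they are comfortable with.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    records = []
    for row in reader:
        records.append({COLUMN_MAP.get(key.strip().lower(), key): float(value)
                        for key, value in row.items()})
    return records

records = normalize("lat,lon,fco2\n10.5,-30.2,370.1\n")
print(records[0]["fCO2_recommended"])  # → 370.1
```

In a real workflow the normalized records would then be written to a standards-based file format (e.g. NetCDF with agreed metadata conventions) before entering quality control.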
Wang, Qian; Hisatomi, Takashi; Suzuki, Yohichi; Pan, Zhenhua; Seo, Jeongsuk; Katayama, Masao; Minegishi, Tsutomu; Nishiyama, Hiroshi; Takata, Tsuyoshi; Seki, Kazuhiko; Kudo, Akihiko; Yamada, Taro; Domen, Kazunari
2017-02-01
Development of sunlight-driven water splitting systems with high efficiency, scalability, and cost-competitiveness is a central issue for mass production of solar hydrogen as a renewable and storable energy carrier. Photocatalyst sheets comprising a particulate hydrogen evolution photocatalyst (HEP) and an oxygen evolution photocatalyst (OEP) embedded in a conductive thin film can realize efficient and scalable solar hydrogen production using Z-scheme water splitting. However, the use of expensive precious-metal thin films that also promote reverse reactions is a major obstacle to developing a cost-effective process at ambient pressure. In this study, we present a standalone particulate photocatalyst sheet based on an earth-abundant, relatively inert, and conductive carbon film for efficient Z-scheme water splitting at ambient pressure. A SrTiO3:La,Rh/C/BiVO4:Mo sheet is shown to achieve unassisted pure-water (pH 6.8) splitting with a solar-to-hydrogen energy conversion efficiency (STH) of 1.2% at 331 K and 10 kPa, while retaining 80% of this efficiency at 91 kPa. The STH value of 1.0% is the highest among Z-scheme pure-water splitting systems operating at ambient pressure. The working mechanism of the photocatalyst sheet is discussed on the basis of band diagram simulation. In addition, the photocatalyst sheet split pure water more efficiently than conventional powder suspension systems and photoelectrochemical parallel cells because H+ and OH- concentration overpotentials and an IR drop between the HEP and OEP were effectively suppressed. The proposed carbon-based photocatalyst sheet, which can be used at ambient pressure, is an important alternative to (photo)electrochemical systems for practical solar hydrogen production.
NASA Astrophysics Data System (ADS)
Kumar, Sumit; Das, Aloke
2013-06-01
Non-covalent interactions play a key role in governing the specific functional structures of biomolecules as well as materials. Thus, a molecular-level understanding of these intermolecular interactions can help in efficient drug design and material synthesis. It has been found from X-ray crystallography that pure hydrocarbon solids (i.e., benzene, hexafluorobenzene) have a mostly slanted T-shaped (herringbone) packing arrangement, whereas mixed solid hydrocarbon crystals (i.e., solids formed from mixtures of benzene and hexafluorobenzene) exhibit a preferentially parallel-displaced (PD) π-stacked arrangement. Gas-phase spectroscopy of the dimeric complexes of the building blocks of solid pure benzene and mixed benzene-hexafluorobenzene adducts exhibits structural motifs similar to those observed in the corresponding crystal structures. In this talk, I will discuss the jet-cooled dimeric complexes of indole with hexafluorobenzene and p-xylene in the gas phase, studied using resonant two-photon ionization and IR-UV double resonance spectroscopy combined with quantum chemistry calculations. Instead of studying benzene...p-xylene and benzene...hexafluorobenzene dimers, we have studied the corresponding indole complexes because the N-H group is a much more sensitive IR probe compared to the C-H group. We have observed that the indole...hexafluorobenzene dimer has a parallel-displaced (PD) π-stacked structure whereas indole...p-xylene has a slanted T-shaped structure. We have shown here selective switching of the dimeric structure from T-shaped to π-stacked by changing the substituent from an electron-donating group (-CH3) to an electron-withdrawing group (fluorine) in one of the complexing partners. Thus, our results demonstrate that efficient engineering of non-covalent interactions can lead to efficient drug design and material synthesis.
NASA Astrophysics Data System (ADS)
Trenti, Michele
2017-08-01
Hubble's WFC3 has been a game changer for the study of early galaxy formation in the first 700 Myr after the Big Bang. Reliable samples of sources to redshift z ~ 11, which can be discovered only from space, are now constraining the evolution of the galaxy luminosity function into the epoch of reionization. Unexpectedly but excitingly, the recent spectroscopic confirmations of L>L* galaxies at z>8.5 demonstrate that objects brighter than our own Galaxy are already present 500 Myr after the Big Bang, creating a challenge for current theoretical/numerical models that struggle to explain how galaxies can grow so luminous so quickly. Yet, the existing HST observations do not cover sufficient area, nor sample a large enough diversity of environments, to provide an unbiased sample of sources, especially at z ~ 9-11 where only a handful of bright candidates are known. To double this currently insufficient sample size, to constrain effectively the bright end of the galaxy luminosity function at z ~ 9-10, and to provide targets for follow-up imaging and spectroscopy with JWST, we propose a large-area pure-parallel survey that will discover the Brightest of Reionizing Galaxies (BoRG[4JWST]). We will observe 580 arcmin^2 over 125 sightlines in five WFC3 bands (0.35 to 1.7 micron) using high-quality pure-parallel opportunities available in the cycle (3 orbits or longer). These public observations will identify more than 80 intrinsically bright galaxies at z ~ 8-11, investigate the connection between halo mass, star formation, and feedback in progenitors of groups and clusters, and build a lasting HST legacy of large-area, near-IR imaging.
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
Canonical molecular orbital (CMO) calculations can help us understand chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we work on research and development of high-performance CMO applications and perform experimental studies. We have proposed a third-generation density-functional calculation method for the SCF procedure, which is more advanced than the conventional file and direct methods. Our method is based on Cholesky decomposition for the two-electron integral calculation and the modified grid-free method for the pure-XC term evaluation. With the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can each be evaluated by simple linear-algebraic procedures in the SCF loop. Therefore, we can expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of large molecules, one must not only overcome the expensive computational cost but also prepare a good initial guess for safe SCF convergence. In order to prepare a precise initial guess for the macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. A QCLO has the characteristics of both a localized and a canonical orbital in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated calculation with the QCLO method, we have also developed a Python-based program, QCLObot.
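The Cholesky-based evaluation described above replaces the four-index two-electron integral tensor with three-index factors, so the Coulomb and exchange terms reduce to dense tensor contractions that map well onto BLAS. A minimal NumPy sketch with random symmetric factors (illustrative only, not ProteinDF's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
nbf, naux = 5, 8  # basis functions, number of Cholesky vectors

# Three-index Cholesky factors B[P, mu, nu], symmetric in (mu, nu), so that
# the ERI tensor factorizes as (mu nu | la si) = sum_P B[P,mu,nu] * B[P,la,si]
B = rng.standard_normal((naux, nbf, nbf))
B = 0.5 * (B + B.transpose(0, 2, 1))

D = rng.standard_normal((nbf, nbf))
D = 0.5 * (D + D.T)  # symmetric density matrix

# Coulomb: J_{mu nu} = sum_P B[P,mu,nu] * (sum_{la si} B[P,la,si] D_{la si})
J = np.einsum('pmn,p->mn', B, np.einsum('pls,ls->p', B, D))

# Exchange: K_{mu nu} = sum_P sum_{la si} B[P,mu,la] D_{la si} B[P,si,nu]
K = np.einsum('pml,ls,psn->mn', B, D, B)

# Cross-check against the explicit four-index tensor (feasible only for tiny nbf)
eri = np.einsum('pmn,pls->mnls', B, B)
assert np.allclose(J, np.einsum('mnls,ls->mn', eri, D))
assert np.allclose(K, np.einsum('mlsn,ls->mn', eri, D))
```

The point of the factorization is that J and K never need the O(nbf^4) tensor: each contraction above is a sequence of matrix-matrix products over the auxiliary index P, which parallelizes well on distributed-memory machines.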
NASA Astrophysics Data System (ADS)
Calvi, V.; Trenti, M.; Stiavelli, M.; Oesch, P.; Bradley, L. D.; Schmidt, K. B.; Coe, D.; Brammer, G.; Bernard, S.; Bouwens, R. J.; Carrasco, D.; Carollo, C. M.; Holwerda, B. W.; MacKenty, J. W.; Mason, C. A.; Shull, J. M.; Treu, T.
2016-02-01
We present the first results and design from the redshift z ~ 9-10 Brightest of the Reionizing Galaxies Hubble Space Telescope survey BoRG[z9-10], aimed at searching for intrinsically luminous unlensed galaxies during the first 700 Myr after the Big Bang. BoRG[z9-10] is the continuation of a multi-year pure-parallel near-IR and optical imaging campaign with the Wide Field Camera 3. The ongoing survey uses five filters, optimized for detecting the most distant objects and offering continuous wavelength coverage from λ = 0.35 μm to λ = 1.7 μm. We analyze the initial ~130 arcmin^2 of area over 28 independent lines of sight (~25% of the total planned) to search for z > 7 galaxies using a combination of Lyman-break and photometric redshift selections. From an effective comoving volume of (5-25) × 10^5 Mpc^3 for magnitudes brighter than m_AB = 26.5-24.0 in the H160 band, respectively, we find five galaxy candidates at z ~ 8.3-10 detected at high confidence (S/N > 8), including a source at z ~ 8.4 with m_AB = 24.5 (S/N ~ 22), which, if confirmed, would be the brightest galaxy identified at such early times (z > 8). In addition, BoRG[z9-10] data yield four galaxies with 7.3 ≲ z ≲ 8. These new Lyman-break galaxies with m ≲ 26.5 are ideal targets for follow-up observations from ground- and space-based observatories to help investigate the complex interplay between dark matter growth, galaxy assembly, and reionization.
Hubble's View of Little Blue Dots
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2018-02-01
The recent discovery of a new type of tiny, star-forming galaxy is the latest in a zoo of detections shedding light on our early universe. What can we learn from the unique little blue dots found in archival Hubble data?

Peas, Berries, and Dots

Green pea galaxies identified by citizen scientists with Galaxy Zoo. [Richard Nowell/Carolin Cardamone]

As telescope capabilities improve and we develop increasingly deeper large-scale surveys of our universe, we continue to learn more about small, faraway galaxies. In recent years, increasing sensitivity first enabled the detection of "green peas": luminous, compact, low-mass (about 10 billion solar masses; compare this to the Milky Way's roughly 1 trillion solar masses!) galaxies with high rates of star formation.

Not long thereafter, we discovered galaxies that form stars similarly rapidly, but are even smaller: only about 330 million solar masses, spanning less than 3,000 light-years in size. These tiny powerhouses were termed "blueberries" for their distinctive color.

Now, scientists Debra and Bruce Elmegreen (of Vassar College and IBM Research Division, respectively) report the discovery of galaxies that have even higher star formation rates and even lower masses: little blue dots.

Exploring Tiny Star Factories

The Elmegreens discovered these unique galaxies by exploring archival Hubble data. The Hubble Frontier Fields data consist of deep images of six distant galaxy clusters and the parallel fields next to them. It was in the archival data for two Frontier Field Parallels, those for clusters Abell 2744 and MACS J0416.1-2403, that the authors noticed several galaxies that stand out as tiny, bright, blue objects that are nearly point sources.

Top: a few examples of the little blue dots recently identified in two Hubble Frontier Field Parallels. Bottom: stacked images for three different groups of little blue dots. [Elmegreen & Elmegreen 2017]

The authors performed a search through the two Frontier Field Parallels, discovering a total of 55 little blue dots with masses spanning 10^5.8-10^7.4 solar masses, specific star formation rates of ~10^-7.4 per year, and redshifts of 0.5 ≲ z ≲ 5.4. Exploring these little blue dots, the Elmegreens find that the galaxies' sizes tend to be just a few hundred light-years across. They are gas-dominated; gas currently outweighs stars in these galaxies by perhaps a factor of five. Impressively, based on the incredibly high specific star formation rates observed in these little blue dots, they appear to have formed all of their stars in the last 1% of the age of the universe at their epoch.

An Origin for Globulars?

Log-log plot of star formation rate vs. mass for the three main groups of little blue dots (red, green, and blue markers), a fourth group of candidates with different properties (brown markers), and previously discovered local blueberry galaxies. The three main groups of little blue dots appear to be low-mass analogs of blueberries. [Elmegreen & Elmegreen 2017]

Intriguingly, this rapid star formation might be the key to answering a long-standing question: where do globular clusters come from? The Elmegreens propose that little blue dots might actually be an explanation for the origin of these orbiting, spherical, low-metallicity clusters of stars.

The authors demonstrate that, if the current star formation rates observed in little blue dots were to persist for another 50 Myr before feedback or gas exhaustion halted star production, the little blue dots could form enough stars to create clusters of roughly a million solar masses, which is large enough to explain the globular clusters we observe today. If little blue dots indeed rapidly produced such star clusters in the past, the clusters could later be absorbed into the halos of today's spiral and elliptical galaxies, appearing to us as the low-metallicity globular clusters that orbit large galaxies today.

Citation

Debra Meloy Elmegreen and Bruce G. Elmegreen 2017 ApJL 851 L44. doi:10.3847/2041-8213/aaa0ce
Optoelectronic interconnects for 3D wafer stacks
NASA Astrophysics Data System (ADS)
Ludwig, David E.; Carson, John C.; Lome, Louis S.
1996-01-01
Wafer and chip stacking are envisioned as a means of providing increased processing power within the small confines of a three-dimensional structure. Optoelectronic devices can play an important role in these dense 3-D processing electronic packages in two ways. In pure electronic processing, optoelectronics can provide a method for increasing the number of input/output communication channels within the layers of the 3-D chip stack. Non-free space communication links allow the density of highly parallel input/output ports to increase dramatically over typical edge bus connections. In hybrid processors, where electronics and optics play a role in defining the computational algorithm, free space communication links are typically utilized for, among other reasons, the increased network link complexity which can be achieved. Free space optical interconnections provide bandwidths and interconnection complexity unobtainable in pure electrical interconnections. Stacked 3-D architectures can provide the electronics real estate and structure to deal with the increased bandwidth and global information provided by free space optical communications. This paper provides definitions and examples of 3-D stacked architectures in optoelectronics processors. The benefits and issues of these technologies are discussed.
Thermoelectric magnetohydrodynamic effects on the crystal growth rate of undercooled Ni dendrites
NASA Astrophysics Data System (ADS)
Kao, A.; Gao, J.; Pericleous, K.
2018-01-01
In the undercooled solidification of pure metals, the dendrite tip velocity has been shown experimentally to have a strong dependence on the intensity of an external magnetic field, exhibiting several maxima and minima. In the experiments conducted in China, the undercooled solidification dynamics of pure Ni was studied using the glass fluxing method. Visual recordings of the progress of solidification are compared at different static fields up to 6 T. The introduction of microscopic convective transport through thermoelectric magnetohydrodynamics is a promising explanation for the observed changes in tip velocity. To address this problem, a purpose-built numerical code was used to solve the coupled equations representing the magnetohydrodynamic, thermal and solidification mechanisms. The underlying phenomena can be attributed to two competing flow fields, generated by orthogonal components of the magnetic field, parallel and transverse to the direction of growth. Their effects are either intensified or damped out with increasing magnetic field intensity, leading to the observed behaviour of the tip velocity. The results obtained agree well with the experimental findings. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.
Crown, Scott B; Long, Christopher P; Antoniewicz, Maciek R
2016-11-01
13C-Metabolic flux analysis (13C-MFA) is a widely used approach in metabolic engineering for quantifying intracellular metabolic fluxes. The precision of fluxes determined by 13C-MFA depends largely on the choice of isotopic tracers and the specific set of labeling measurements. A recent advance in the field is the use of parallel labeling experiments for improved flux precision and accuracy. However, as of today, no systematic methods exist for identifying optimal tracers for parallel labeling experiments. In this contribution, we have addressed this problem by introducing a new scoring system and evaluating thousands of different isotopic tracer schemes. Based on this extensive analysis we have identified optimal tracers for 13C-MFA. The best single tracers were doubly 13C-labeled glucose tracers, including [1,6-13C]glucose, [5,6-13C]glucose and [1,2-13C]glucose, which consistently produced the highest flux precision independent of the metabolic flux map (here, 100 random flux maps were evaluated). Moreover, we demonstrate that pure glucose tracers perform better overall than mixtures of glucose tracers. For parallel labeling experiments the optimal isotopic tracers were [1,6-13C]glucose and [1,2-13C]glucose. Combined analysis of [1,6-13C]glucose and [1,2-13C]glucose labeling data improved the flux precision score by nearly 20-fold compared to the widely used tracer mixture of 80% [1-13C]glucose + 20% [U-13C]glucose. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
Raman-Ramsey multizone spectroscopy in a pure rubidium vapor cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Failache, H.; Lenci, L.; Lezama, A.
2010-02-15
In view of application to a miniaturized spectroscopy system, we consider an optical setup that splits a laser beam into several parallel narrow light sheets allowing an effective beam expansion and consequently longer atom-light interaction times. We analyze the multizone coherent population trapping (MZCPT) spectroscopy of alkali-metal-vapor atoms, without buffer gas, in the presence of a split light beam. We show that the MZCPT signal is largely insensitive to intensity broadening. Experimentally observed spectra are in qualitative agreement with the predictions of a simplified model that describes each spectrum as an integral over the atomic velocity distribution of Ramsey multizone spectra.
Investigation of low-loss spectra and near-edge fine structure of polymers by PEELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heckmann, W.
Transmission electron microscopy has changed from a purely imaging method to an analytical method. This has been facilitated particularly by equipping electron microscopes with energy filters and with parallel electron energy loss spectrometers (PEELS). Because of their relatively high energy resolution (1 to 2 eV) they provide information not only on the elements present but also on the type of bonds between the molecular groups. Polymers are radiation sensitive and the molecular bonds change as the spectrum is being recorded. This can be observed with PEEL spectrometers that are able to record spectra with high sensitivity and in rapid succession.
Optical coherence refractometry.
Tomlins, Peter H; Woolliams, Peter; Hart, Christian; Beaumont, Andrew; Tedaldi, Matthew
2008-10-01
We introduce a novel approach to refractometry using a low coherence interferometer at multiple angles of incidence. We show that for plane parallel samples it is possible to measure their phase refractive index rather than the group index that is usually measured by interferometric methods. This is a significant development because it enables bulk refractive index measurement of scattering and soft samples, not relying on surface measurements that can be prone to error. Our technique is also noncontact and compatible with in situ refractive index measurements. Here, we demonstrate this new technique on a pure silica test piece and a highly scattering resin slab, comparing the results with standard critical angle refractometry.
Inelastic scattering of 61 MeV protons by Pb-207
NASA Technical Reports Server (NTRS)
Owais, M.
1976-01-01
Differential cross sections for the excitation of the first four neutron-hole states and the doublet at 2.61 MeV by 61.2 MeV protons were measured. The data are analyzed in terms of both a purely collective model description and a microscopic model supplemented by macroscopic core polarization. A realistic two-body interaction is used and knock-on amplitudes are included. Core polarization is found to be important but represents a relatively smaller contribution than in most nuclei previously studied. A parallel analysis of similar data at lower proton bombarding energies reveals a surprisingly strong energy dependence of the reaction mechanisms.
NASA Astrophysics Data System (ADS)
Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana
2017-12-01
New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment of the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field-programmable gate array (FPGA) integrated circuits.
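The shift-and-add decomposition underlying such multiplierless constant multipliers can be sketched in a few lines. The canonical signed-digit (CSD) recoding below is a standard software illustration of the idea; it deliberately ignores the pipelining and n-input-adder aspects that the paper's bounds address.

```python
# Sketch of the shift-and-add idea behind multiplierless constant
# multiplication: a fixed-point constant is recoded into signed powers
# of two (CSD form), so x*c becomes only shifts and adds/subtracts.
def csd_digits(c):
    """Canonical signed-digit recoding of a positive integer."""
    digits = []
    while c:
        if c & 1:
            d = 2 - (c & 3)      # +1 if bits end ...01, -1 if ...11
            c -= d
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits                 # digits[i] multiplies 2**i

def const_mult(x, c):
    """Multiply by constant c using only shifts and adds/subtracts."""
    acc = 0
    for i, d in enumerate(csd_digits(c)):
        if d == 1:
            acc += x << i
        elif d == -1:
            acc -= x << i
    return acc

print(const_mult(7, 93), 7 * 93)  # both 651
```

In hardware, each nonzero CSD digit costs one adder/subtractor input; the number of nonzero digits is what lower bounds of the kind discussed above constrain.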
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
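Flatfield correction, mentioned above, is one of the few corrections simple enough to sketch directly. The function below is a generic illustration; the names and the unit-mean normalization are choices of this sketch, not any particular tool's API.

```python
import numpy as np

# Illustrative flatfield correction: divide the raw image by a
# normalized flatfield frame to remove uneven illumination, optionally
# subtracting a dark frame from both first.
def flatfield_correct(raw, flat, dark=None):
    raw = raw.astype(float)
    flat = flat.astype(float)
    if dark is not None:
        raw = raw - dark
        flat = flat - dark
    gain = flat / flat.mean()          # unit-mean illumination pattern
    return raw / np.clip(gain, 1e-6, None)

# A uniform scene imaged through uneven illumination is recovered flat:
truth = np.full((4, 4), 100.0)
illum = np.linspace(0.5, 1.5, 16).reshape(4, 4)
corrected = flatfield_correct(truth * illum, illum * 200.0)
assert np.allclose(corrected, 100.0)
```

The flat frame should be acquired from a featureless, evenly lit target with the same optics and camera settings as the data images.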
Kresse, Stine H; Namløs, Heidi M; Lorenz, Susanne; Berner, Jeanne-Marie; Myklebost, Ola; Bjerkehagen, Bodil; Meza-Zepeda, Leonardo A
2018-01-01
Nucleic acid material of adequate quality is crucial for successful high-throughput sequencing (HTS) analysis. DNA and RNA isolated from archival FFPE material are frequently degraded and not readily amplifiable due to chemical damage introduced during fixation. To identify optimal nucleic acid extraction kits, DNA and RNA quantity, quality and performance in HTS applications were evaluated. DNA and RNA were isolated from five sarcoma archival FFPE blocks, using eight extraction protocols from seven kits from three different commercial vendors. For DNA extraction, the truXTRAC FFPE DNA kit from Covaris gave higher yields and better amplifiable DNA, but all protocols gave comparable HTS library yields using Agilent SureSelect XT and performed well in downstream variant calling. For RNA extraction, all protocols gave comparable yields and amplifiable RNA. However, for fusion gene detection using the Archer FusionPlex Sarcoma Assay, the truXTRAC FFPE RNA kit from Covaris and Agencourt FormaPure kit from Beckman Coulter showed the highest percentage of unique read-pairs, providing higher complexity of HTS data and more frequent detection of recurrent fusion genes. truXTRAC simultaneous DNA and RNA extraction gave similar outputs as individual protocols. These findings show that although successful HTS libraries could be generated in most cases, the different protocols gave variable quantity and quality for FFPE nucleic acid extraction. Selecting the optimal procedure is highly valuable and may generate results in borderline quality specimens.
gPhoton: The GALEX Photon Data Archive
NASA Astrophysics Data System (ADS)
Million, Chase; Fleming, Scott W.; Shiao, Bernie; Seibert, Mark; Loyd, Parke; Tucker, Michael; Smith, Myron; Thompson, Randy; White, Richard L.
2016-12-01
gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project’s stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive for Space Telescopes. This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. For a brief example of short time-domain science capabilities enabled by gPhoton, we show new flares from the known M-dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.
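Conceptually, producing a light curve from photon-level data reduces to binning time-tagged events at a user-chosen step. The sketch below illustrates that idea only; it is not the gPhoton API, and all names are this sketch's own.

```python
import numpy as np

# Sketch of the core idea behind photon-level light curves: bin a list
# of time-tagged photon events into count rates at a user-defined step.
def light_curve(photon_times, t0, t1, step):
    nbins = int(round((t1 - t0) / step))
    edges = t0 + step * np.arange(nbins + 1)
    counts, _ = np.histogram(photon_times, bins=edges)
    centers = edges[:-1] + step / 2.0
    return centers, counts / step      # counts per second per bin

# A 5 ms timestamp resolution means bins can be as short as a few ms;
# here we use 10 ms bins over a 40 ms toy interval.
times = np.array([0.002, 0.007, 0.012, 0.013, 0.031])
centers, rate = light_curve(times, 0.0, 0.04, 0.01)
print(rate)   # count rate in each 10 ms bin
```

Because the events are stored individually, the same photon list can be re-binned at any scale after the fact, which is what enables the short time-domain science (e.g. flare detection) described above.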
Data acquisition and processing system for the HT-6M tokamak fusion experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu, Y.T.; Liu, G.C.; Pang, J.Q.
1987-08-01
This paper describes a high-speed data acquisition and processing system which has been successfully operated on the HT-6M tokamak fusion experimental device. The system collects, archives and analyzes up to 512 kilobytes of data from each shot of the experiment. A shot lasts 50-150 milliseconds and occurs every 5-10 minutes. The system consists of two PDP-11/24 computer systems. One PDP-11/24 is used for real-time data taking and on-line data analysis. It is based upon five CAMAC crates organized into a parallel branch. Another PDP-11/24 is used for off-line data processing. Both the data acquisition software RSX-DAS and the data processing software RSX-DAP have modular, multi-tasking and concurrent processing features.
Energy recovery from waste glycerol by utilizing thermal water vapor plasma.
Tamošiūnas, Andrius; Valatkevičius, Pranas; Gimžauskaitė, Dovilė; Jeguirim, Mejdi; Mėčius, Vladas; Aikas, Mindaugas
2017-04-01
Glycerol, considered as a waste feedstock resulting from biodiesel production, has received much attention in recent years due to its properties, which offer a way to recover energy. The aim of this study was to investigate the use of a thermal water vapor plasma for waste (crude) glycerol conversion to synthesis gas, or syngas (H2 + CO). In parallel with crude glycerol, pure glycerol (99.5%) was used as a reference material in order to compare the concentrations of the formed product gas. A direct current (DC) arc plasma torch stabilized by a mixture of argon/water vapor was utilized for the effective glycerol conversion to hydrogen-rich synthesis gas. It was found that after waste glycerol treatment, the main reaction products were gases with corresponding concentrations of H2 50.7%, CO 23.53%, CO2 11.45%, and CH4 3.82%, and traces of C2H2 and C2H6, whose concentrations were below 0.5%. Comparable concentrations of the formed gas products were obtained after pure glycerol conversion: H2 46.4%, CO 26.25%, CO2 11.3%, and CH4 4.7%. The use of thermal water vapor plasma producing synthesis gas is an effective method to recover energy from both crude and pure glycerol. The performance of the glycerol conversion system was defined in terms of the produced gas yield, the carbon conversion efficiency, the cold gas efficiency, and the specific energy requirements.
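The cold gas efficiency named above can be sketched as a back-of-envelope calculation. The volumetric heating values below are standard textbook figures, while the gas flow, feed rate, and torch power are invented placeholders, not the paper's measurements.

```python
# Back-of-envelope sketch of one performance metric named above.
# LHV values are common reference figures; flows/power are placeholders.
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}    # MJ/Nm3, typical values
LHV_GLYCEROL = 16.0                             # MJ/kg, assumed feed value

def cold_gas_efficiency(gas_flow_nm3_h, fractions, feed_kg_h, torch_kw):
    """Chemical energy leaving in the syngas over energy in (feed + plasma)."""
    power_out_kw = gas_flow_nm3_h * sum(
        LHV[g] * x for g, x in fractions.items() if g in LHV
    ) * 1000.0 / 3600.0                         # MJ/h -> kW
    power_in_kw = feed_kg_h * LHV_GLYCEROL * 1000.0 / 3600.0 + torch_kw
    return power_out_kw / power_in_kw

# Gas fractions echo the waste-glycerol product gas reported above
# (CO2 carries no heating value and is skipped automatically):
frac = {"H2": 0.507, "CO": 0.2353, "CO2": 0.1145, "CH4": 0.0382}
print(round(cold_gas_efficiency(10.0, frac, 4.0, 50.0), 2))
```

Carbon conversion efficiency and specific energy requirements would be computed analogously, as ratios of carbon flows and of input energy to syngas output.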
Seismic anisotropy and mantle flow below subducting slabs
NASA Astrophysics Data System (ADS)
Walpole, Jack; Wookey, James; Kendall, J.-Michael; Masters, T.-Guy
2017-05-01
Subduction is integral to mantle convection and plate tectonics, yet the role of the subslab mantle in this process is poorly understood. Some propose that decoupling from the slab permits widespread trench-parallel flow in the subslab mantle, although the geodynamical feasibility of this has been questioned. Here, we use the source-side shear wave splitting technique to probe anisotropy beneath subducting slabs, enabling us to test petrofabric models and constrain the geometry of mantle flow. Our global dataset contains 6369 high-quality measurements, spanning ∼40,000 km of subduction zone trenches, over the complete range of available source depths (4 to 687 km) and a large range of angles in the slab reference frame. We find that anisotropy in the subslab mantle is well characterised by tilted transverse isotropy with a slow-symmetry-axis pointing normal to the plane of the slab. This appears incompatible with purely trench-parallel flow models. On the other hand, it is compatible with the idea that the asthenosphere is tilted and entrained during subduction. Trench-parallel measurements are most commonly associated with shallow events (source depth < 50 km), suggesting a separate region of anisotropy in the lithospheric slab. This may correspond to the shape-preferred orientation of cracks, fractures, and faults opened by slab bending. Meanwhile, the deepest events probe the upper lower mantle, where splitting is found to be consistent with deformed bridgmanite.
Vanneste, Sven; De Ridder, Dirk
2012-01-01
Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375
A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.
NASA Astrophysics Data System (ADS)
Fang, W.; Quan, S. H.; Xie, C. J.; Ran, B.; Li, X. L.; Wang, L.; Jiao, Y. T.; Xu, T. W.
2017-05-01
The majority of the thermal energy released in an automotive internal combustion cycle is exhausted as waste heat through the tail pipe. This paper describes an automobile exhaust thermoelectric generator (AETEG), designed to recycle automobile waste heat. A model of the output characteristics of each thermoelectric device was established by testing their open circuit voltage and internal resistance, and combining the output characteristics. To better describe the relationship, the physical model was transformed into a topological model. The connection matrix was used to describe the relationship between any two thermoelectric devices in the topological structure. Different topological structures produced different power outputs; their output power was maximised by using an iterative algorithm to optimize the series-parallel electrical topology structure. The experimental results have shown that the output power of the optimal topology structure increases by 18.18% and 29.35% versus that of a pure in-series or parallel topology, respectively, and by 10.08% versus a manually defined structure (based on user experience). The thermoelectric conversion device increased energy efficiency by 40% when compared with a traditional car.
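The series-parallel evaluation described above can be sketched by treating each thermoelectric device as a Thevenin source and combining parallel strings with Millman's theorem. This is an illustrative model with made-up device parameters, not the authors' code; it shows how, for mismatched devices, a mixed topology can outperform both pure series and pure parallel connections, which is the effect the experiment exploits.

```python
# Sketch: each thermoelectric device is a Thevenin source (open-circuit
# voltage V, internal resistance r). For s series devices per string and
# p parallel strings, maximum power into a matched load is Vth^2/(4*Rth).
def max_power(devices, s, p):
    """devices: list of (V, r) tuples; uses the first s*p, s per string."""
    assert s * p <= len(devices)
    strings = [devices[i * s:(i + 1) * s] for i in range(p)]
    conductance, current_sum = 0.0, 0.0
    for string in strings:
        V = sum(v for v, _ in string)      # series voltages add
        R = sum(r for _, r in string)      # series resistances add
        conductance += 1.0 / R             # parallel bank via
        current_sum += V / R               # Millman's theorem
    Rth = 1.0 / conductance
    Vth = current_sum * Rth
    return Vth ** 2 / (4.0 * Rth)

# Mismatched devices (made-up values): the 2x2 mixed topology delivers
# more power than either pure series or pure parallel.
devs = [(2.0, 1.0), (4.0, 2.0), (3.0, 1.0), (3.0, 1.0)]
print(max_power(devs, 4, 1), max_power(devs, 1, 4), max_power(devs, 2, 2))
```

An exhaustive or iterative search over groupings, like the connection-matrix optimization in the paper, then simply evaluates such a power model for each candidate topology and keeps the best.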
The impact of natural products upon modern drug discovery.
Ganesan, A
2008-06-01
In the period 1970-2006, a total of 24 unique natural products were discovered that led to an approved drug. We analyze these successful leads in terms of drug-like properties, and show that they can be divided into two equal subsets. The first falls in the 'Lipinski universe' and complies with the Rule of Five. The second is a 'parallel universe' that violates the rules. Nevertheless, the latter compounds remain largely compliant in terms of logP and H-bond donors, highlighting the importance of these two metrics in predicting bioavailability. Natural products are often cited as an exception to Lipinski's rules. We believe this is because nature has learned to maintain low hydrophobicity and intermolecular H-bond donating potential when it needs to make biologically active compounds with high molecular weight and large numbers of rotatable bonds. In addition, natural products are more likely than purely synthetic compounds to resemble biosynthetic intermediates or endogenous metabolites, and hence take advantage of active transport mechanisms. Interestingly, the natural product leads in the Lipinski and parallel universe had an identical success rate (50%) in delivering an oral drug.
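The Rule of Five referenced above is simple enough to state in code. A minimal sketch, using rough literature property values for the example molecules (the figures are approximate and for illustration only):

```python
# Illustrative check of Lipinski's Rule of Five as discussed above.
def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count Rule of Five violations; drug-like compounds have <= 1."""
    return sum([mw > 500, logp > 5, h_donors > 5, h_acceptors > 10])

# Aspirin (MW ~180, logP ~1.2, 1 donor, 4 acceptors): fully compliant.
print(lipinski_violations(180.2, 1.2, 1, 4))       # 0

# A cyclosporine-like macrocycle (MW ~1203): the 'parallel universe' of
# large natural products that stay orally active by keeping logP and
# H-bond donor counts low even as size and acceptor counts grow.
print(lipinski_violations(1202.6, 3.6, 5, 12))
```

The second example makes the article's point concrete: the violations come from molecular weight and acceptor count, while the two metrics the text highlights as predictive of bioavailability, logP and H-bond donors, remain within the rules.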
On the Transition from Two-Dimensional to Three-Dimensional MHD Turbulence
NASA Technical Reports Server (NTRS)
Thess, A.; Zikanov, Oleg
2004-01-01
We report a theoretical investigation of the robustness of two-dimensional inviscid MHD flows at low magnetic Reynolds numbers with respect to three-dimensional perturbations. We analyze three model problems, namely flow in the interior of a triaxial ellipsoid, an unbounded vortex with elliptical streamlines, and a vortex sheet parallel to the magnetic field. We demonstrate that motion perpendicular to the magnetic field with elliptical streamlines becomes unstable with respect to the elliptical instability once the velocity has reached a critical magnitude whose value tends to zero as the eccentricity of the streamlines becomes large. Furthermore, vortex sheets parallel to the magnetic field, which are unstable for any velocity and any magnetic field, are found to emit eddies with vorticity perpendicular to the magnetic field and with an aspect ratio proportional to N^(1/2). The results suggest that purely two-dimensional motion without Joule energy dissipation is a singular type of flow which does not represent the asymptotic behaviour of three-dimensional MHD turbulence in the limit of infinitely strong magnetic fields.
NASA Astrophysics Data System (ADS)
Chacon, Luis; Del-Castillo-Negrete, Diego; Hauck, Cory
2012-10-01
Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy between the parallel (to the magnetic field) and perpendicular directions (χ∥/χ⊥ ~ 10^10 in fusion plasmas). Recently, a Lagrangian Green's function approach, developed for the purely parallel transport case [D. del-Castillo-Negrete, L. Chacón, PRL 106, 195004 (2011); D. del-Castillo-Negrete, L. Chacón, Phys. Plasmas 19, 056112 (2012)], has been extended to the anisotropic transport case in the tokamak-ordering limit with constant density [L. Chacón, D. del-Castillo-Negrete, C. Hauck, JCP, submitted (2012)]. An operator-split algorithm is proposed that allows one to treat Eulerian and Lagrangian components separately. The approach is shown to feature bounded numerical errors for arbitrary χ∥/χ⊥ ratios, which renders it asymptotic-preserving. In this poster, we will present the generalization of the Lagrangian approach to arbitrary magnetic fields. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry.
Anomalous Anticipatory Responses in Networked Random Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Roger D.; Bancel, Peter A.
2006-10-16
We examine an 8-year archive of synchronized, parallel time series of random data from a world spanning network of physical random event generators (REGs). The archive is a publicly accessible matrix of normally distributed 200-bit sums recorded at 1 Hz which extends from August 1998 to the present. The primary question is whether these data show non-random structure associated with major events such as natural or man-made disasters, terrible accidents, or grand celebrations. Secondarily, we examine the time course of apparently correlated responses. Statistical analyses of the data reveal consistent evidence that events which strongly affect people engender small butmore » significant effects. These include suggestions of anticipatory responses in some cases, leading to a series of specialized analyses to assess possible non-random structure preceding precisely timed events. A focused examination of data collected around the time of earthquakes with Richter magnitude 6 and greater reveals non-random structure with a number of intriguing, potentially important features. Anomalous effects in the REG data are seen only when the corresponding earthquakes occur in populated areas. No structure is found if they occur in the oceans. We infer that an important contributor to the effect is the relevance of the earthquake to humans. Epoch averaging reveals evidence for changes in the data some hours prior to the main temblor, suggestive of reverse causation.« less
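As a purely illustrative sketch (not the authors' actual analysis pipeline), a per-second statistic for such normally distributed 200-bit sums can be formed by standardizing each REG trial and Stouffer-combining across the network:

```python
import math

# A 200-bit sum of fair bits is Binomial(200, 0.5):
# mean 100, standard deviation sqrt(200 * 0.5 * 0.5) ~= 7.071.
TRIAL_MEAN = 100.0
TRIAL_SD = math.sqrt(50.0)

def trial_z(trial_sum):
    """Standard score of one 200-bit REG trial sum."""
    return (trial_sum - TRIAL_MEAN) / TRIAL_SD

def network_z(trial_sums):
    """Stouffer-combined z-score across the REGs reporting in one second.

    Under the null hypothesis of pure randomness this is ~ N(0, 1),
    so sustained departures flag candidate non-random structure.
    """
    zs = [trial_z(s) for s in trial_sums]
    return sum(zs) / math.sqrt(len(zs))
```

Epoch averaging of such scores around event times is then a matter of aligning and averaging windows of `network_z` values.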
NASA Astrophysics Data System (ADS)
Škoda, Petr; Palička, Andrej; Koza, Jakub; Shakurova, Ksenia
2017-06-01
The current archives of the LAMOST multi-object spectrograph contain millions of fully reduced spectra, from which the automatic pipelines have produced catalogues of many parameters of individual objects, including their approximate spectral classification. This is, however, mostly based on the global shape of the whole spectrum and on integral properties of spectra in given bandpasses, namely the presence and equivalent width of prominent spectral lines, while for the identification of some interesting object types (e.g. Be stars or quasars) the detailed shape of only a few lines is crucial. Here machine learning brings a new methodology capable of improving the reliability of classification of such objects even in boundary cases. We present results of Spark-based semi-supervised machine learning on LAMOST spectra, attempting to automatically identify the single- and double-peak emission of the Hα line typical for Be and B[e] stars. The labelled sample was obtained from the archive of the 2m Perek telescope at Ondřejov Observatory. A simple physical model of spectrograph resolution was used in domain adaptation to the LAMOST training domain. The resulting list of candidates contains dozens of Be stars (some likely yet unknown), but also a number of interesting objects resembling spectra of quasars and even blazars, as well as many instrumental artefacts. The verification of the nature of interesting candidates benefited considerably from cross-matching and visualisation in the Virtual Observatory environment.
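A crude heuristic for the single- versus double-peak distinction (illustrative only; the paper's actual classifier is a Spark-based semi-supervised learner, not this rule) is to count local maxima in the continuum-normalized Hα window:

```python
def count_emission_peaks(flux, threshold=1.05):
    """Count local maxima above a continuum-normalized threshold.

    `flux` is a continuum-normalized line profile (continuum ~ 1.0).
    Returns 1 for single-peak emission, 2 for a double-peaked profile
    such as those typical of Be/B[e] shell stars, 0 for no emission.
    The threshold of 1.05 is an arbitrary illustrative choice.
    """
    peaks = 0
    for i in range(1, len(flux) - 1):
        if flux[i] > threshold and flux[i] > flux[i - 1] and flux[i] >= flux[i + 1]:
            peaks += 1
    return peaks
```

In practice noise, instrumental resolution and blended absorption make such a rule unreliable in exactly the boundary cases the abstract mentions, which is the motivation for the learned classifier.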
NASA Astrophysics Data System (ADS)
Tsvetkov, M. K.; Stavrev, K. Y.; Tsvetkova, K. P.; Semkov, E. H.; Mutatov, A. S.
The Wide-Field Plate Database (WFPDB) and the possibilities for its application as a research tool in observational astronomy are presented. Currently the WFPDB comprises the descriptive data for 400 000 archival wide-field photographic plates obtained with 77 instruments, from a total of 1 850 000 photographs stored in 269 astronomical archives all over the world since the end of the last century. The WFPDB is already accessible to the astronomical community, at present only in batch mode through user requests sent by e-mail. We are working on on-line interactive access to the data via the Internet from Sofia and, in parallel, from the Centre de Données Astronomiques de Strasbourg. (Initial information can be found on the World Wide Web homepage at URL http://www.wfpa.acad.bg.) The WFPDB may be useful in studies of a variety of astronomical objects and phenomena, and especially for long-term investigations of variable objects and for multi-wavelength research. We have analysed the data in the WFPDB in order to derive the overall characteristics of the totality of wide-field observations, such as the sky coverage and the distributions by observation time and date, by spectral band, and by object type. We have also examined the totality of wide-field observations from the point of view of their quality, availability and digitisation. The usefulness of the WFPDB is demonstrated by the results of identification and investigation of the photometric behaviour of optical analogues of gamma-ray bursts.
NASA Astrophysics Data System (ADS)
King, Nelson E.; Liu, Brent; Zhou, Zheng; Documet, Jorge; Huang, H. K.
2005-04-01
Grid Computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models, and it can address the problem of fault-tolerant storage for backup and recovery of clinical images. We have researched and developed a novel Data Grid testbed involving several federated PAC systems based on grid architecture. By integrating a grid computing architecture into the DICOM environment, a failed PACS archive can recover its image data from others in the federation in a timely and seamless fashion. The design reflects the five-layer architecture of grid computing: Fabric, Resource, Connectivity, Collective, and Application Layers. The testbed Data Grid architecture representing three federated PAC systems, the Fault-Tolerant PACS archive server at the Image Processing and Informatics Laboratory, Marina del Rey, the clinical PACS at Saint John's Health Center, Santa Monica, and the clinical PACS at the Healthcare Consultation Center II, USC Health Science Campus, will be presented. The successful demonstration of the Data Grid in the testbed will provide an understanding of the Data Grid concept in clinical image data backup, establish benchmarks for performance from future grid technology improvements, and serve as a road map for expanded research into large enterprise- and federation-level data grids to guarantee 99.999% uptime.
NASA Astrophysics Data System (ADS)
Graham, David W.; Knapp, Charles W.; Christensen, Bent T.; McCluskey, Seánín; Dolfing, Jan
2016-02-01
Debate exists about whether agricultural versus medical antibiotic use drives increasing antibiotic resistance (AR) across nature. Both sectors have been inconsistent at antibiotic stewardship, but it is unclear which sector has most influenced acquired AR on broad scales. Using qPCR and soils archived since 1923 at Askov Experimental Station in Denmark, we quantified four broad-spectrum β-lactam AR genes (ARG; blaTEM, blaSHV, blaOXA and blaCTX-M) and class-1 integron genes (int1) in soils from manured (M) versus inorganic fertilised (IF) fields. “Total” β-lactam ARG levels were significantly higher in M versus IF soils post-1940 (paired t-test; p < 0.001). However, dominant individual ARGs varied over time; blaTEM and blaSHV between 1963 and 1974, blaOXA slightly later, and blaCTX-M since 1988. These dates roughly parallel the first reports of these genes in clinical isolates, suggesting that ARGs in animal manure and humans are historically interconnected. Archive data further show that when non-therapeutic antibiotic use was banned in Denmark, blaCTX-M levels declined in M soils, suggesting accumulated soil ARGs can be reduced by prudent antibiotic stewardship. Conversely, int1 levels have continued to increase in M soils since 1990, implying that direct manure application to soils should be scrutinized as part of future stewardship programs.
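For illustration only, qPCR gene quantification of this kind is commonly normalized with a ΔCt-style calculation against a reference gene (the study's exact protocol and normalization may differ; the efficiency value here is the idealized doubling-per-cycle assumption):

```python
def relative_abundance(ct_target, ct_reference, efficiency=2.0):
    """Relative abundance of a target gene (e.g. an ARG such as blaCTX-M)
    versus a reference gene, from qPCR cycle-threshold (Ct) values.

    Each extra cycle needed to reach threshold implies roughly
    `efficiency`-fold fewer starting copies, so:
        abundance = efficiency ** (Ct_reference - Ct_target)
    """
    return efficiency ** (ct_reference - ct_target)
```

Tracking this ratio across the archived soil series, year by year and field treatment by field treatment, is what yields trend comparisons like the M-versus-IF contrast above.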
El Fallah, Rawa; Rouillon, Régis; Vouvé, Florence
2018-06-15
The fate of benzo(a)pyrene (BaP), a ubiquitous contaminant reported to be persistent in the environment, is largely controlled by its interactions with the soil organic matter. In the present study, the spectral characteristics of fluorophores present in the physical fractions of the soil organic matter were investigated in the presence of pure BaP solution. After extraction of humic substances (HSs), and their fractionation into fulvic acid (FA) and humic acid (HA), two fluorescent compounds (C1 and C2) were identified and characterized in each physical soil fraction by means of fluorescence excitation-emission matrices (FEEMs) and Parallel Factor Analysis (PARAFAC). Then, to each type of fraction having similar DOC content, an increasing volume of pure BaP solution was added in an attempt to assess the behavior of BaP with the fluorophores present in each one. The application of the FEEMs-PARAFAC method validated a three-component model that consisted of the two fluorophores resulting from HSs, FA and HA (C1 and C2) and a BaP-like fluorophore (C3). Spectral modifications were noted for components C2HSs (C2 in the humic substances fraction) (λex/λem: 420/490-520 nm), C2FA (C2 in the fulvic acid fraction) (λex/λem: 400/487(517) nm) and C1HA (C1 in the humic acid fraction) (λex/λem: 350/452(520) nm). We explored the impact of increasing the volume of the added pure BaP solution on the scores of the fluorophores present in the soil fractions. It was found that the scores of C2HSs, C2FA, and C1HA increased when the volume of the added pure BaP solution increased. Superposition of the excitation spectra of these fluorophores with the emission spectrum of BaP showed significant overlaps that might explain the observed interactions between BaP and the fluorescent compounds present in SOM physical fractions. Copyright © 2018 Elsevier B.V. All rights reserved.
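The trilinear model structure that PARAFAC fits to a stack of FEEMs can be sketched in a few lines (this shows only the model form, not the alternating-least-squares fitting the study performed):

```python
import numpy as np

def parafac_reconstruct(A, B, C):
    """PARAFAC/CP reconstruction of an excitation-emission tensor:

        X[i, j, k] = sum_f A[i, f] * B[j, f] * C[k, f]

    where i indexes samples, j emission wavelengths, k excitation
    wavelengths, and f the fluorophore components (here C1..C3).
    Column f of A holds the per-sample 'scores' discussed above;
    columns of B and C hold the component's emission and excitation
    spectra.
    """
    return np.einsum('if,jf,kf->ijk', A, B, C)
```

The "scores" whose growth with added BaP the study reports are exactly the sample-mode loadings A[:, f] of such a fitted model.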
NASA Astrophysics Data System (ADS)
Gan, Chee Kwan; Challacombe, Matt
2003-05-01
Recently, early-onset linear-scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center-of-mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium-quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 on 64 processors, while for a 110-molecule water cluster at standard density it achieved a speedup of 113 on 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26-atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. 
For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.
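The core load-balancing idea, assigning sub-volumes to processors using their measured times from the previous iteration, can be illustrated with a greedy longest-processing-time scheduler (an illustrative sketch; the paper's scheme partitions contiguous Cartesian volumes rather than assigning arbitrary boxes independently):

```python
import heapq

def equal_time_partition(box_times, n_procs):
    """Assign boxes to processors so measured work times balance.

    Greedy LPT heuristic: place the most expensive remaining box on
    the currently least-loaded processor. `box_times[i]` is the time
    measured for box i in the previous SCF iteration; the return value
    maps each box index to a processor id.
    """
    heap = [(0.0, p) for p in range(n_procs)]  # (load, processor)
    heapq.heapify(heap)
    assignment = [None] * len(box_times)
    for i in sorted(range(len(box_times)), key=lambda i: -box_times[i]):
        load, p = heapq.heappop(heap)
        assignment[i] = p
        heapq.heappush(heap, (load + box_times[i], p))
    return assignment
```

Because the density and grid vary smoothly between SCF iterations, last iteration's timings are a good predictor of this iteration's costs, which is why such measured-time balancing converges quickly.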
Design and analysis of a global sub-mesoscale and tidal dynamics admitting virtual ocean.
NASA Astrophysics Data System (ADS)
Menemenlis, D.; Hill, C. N.
2016-02-01
We will describe the techniques used to realize a global kilometer-scale ocean model configuration that includes representation of sea-ice and tidal excitation, and spans scales from planetary gyres to internal tides. A simulation using this model configuration provides a virtual ocean that admits some sub-mesoscale dynamics and tidal energetics not normally represented in global calculations. This extends simulated ocean behavior beyond broadly quasi-geostrophic flows and provides a preliminary example of a next generation computational approach to explicitly probing the interactions between instabilities that are usually parameterized and dominant energetic scales in the ocean. From previous process studies we have ascertained that this can lead to a qualitative improvement in the realism of many significant processes including geostrophic eddy dynamics, shelf-break exchange and topographic mixing. Computationally we exploit high degrees of parallelism in both numerical evaluation and in recording model state to persistent disk storage. Together this allows us to compute and record a full three-dimensional model trajectory at hourly frequency for a time period of 5 months with less than 9 million core hours of parallel computer time, using the present generation NASA Ames Research Center facilities. We have used this capability to create a 5 month trajectory archive, sampled at high spatial and temporal frequency for an ocean configuration that is initialized from a realistic data-assimilated state and driven with reanalysis surface forcing from ECMWF. The resulting database of model state provides a novel virtual laboratory for exploring coupling across scales in the ocean, and for testing ideas on the relationship between small scale fluxes and large scale state. The computation is complemented by counterpart computations that are coarsened two and four times respectively. 
In this presentation we will review the computational and numerical technologies employed and show how the high spatio-temporal frequency archive of model state can provide a new and promising tool for researching richer ocean dynamics at scale. We will also outline how computations of this nature could be combined with next generation computer hardware plans to help inform important climate process questions.
Three-dimensional models of deformation near strike-slip faults
ten Brink, Uri S.; Katzman, Rafael; Lin, J.
1996-01-01
We use three-dimensional elastic models to help guide the kinematic interpretation of crustal deformation associated with strike-slip faults. Deformation of the brittle upper crust in the vicinity of strike-slip fault systems is modeled with the assumption that upper crustal deformation is driven by the relative plate motion in the upper mantle. The driving motion is represented by displacement that is specified on the bottom of a 15-km-thick elastic upper crust everywhere except in a zone of finite width in the vicinity of the faults, which we term the "shear zone." Stress-free basal boundary conditions are specified within the shear zone. The basal driving displacement is either pure strike slip or strike slip with a small oblique component, and the geometry of the fault system includes a single fault, several parallel faults, and overlapping en echelon faults. We examine the variations in deformation due to changes in the width of the shear zone and due to changes in the shear strength of the faults. In models with weak faults the width of the shear zone has a considerable effect on the surficial extent and amplitude of the vertical and horizontal deformation and on the amount of rotation around horizontal and vertical axes. Strong fault models have more localized deformation at the tip of the faults, and the deformation is partly distributed outside the fault zone. The dimensions of large basins along strike-slip faults, such as the Rukwa and Dead Sea basins, and the absence of uplift around pull-apart basins fit models with weak faults better than models with strong faults. Our models also suggest that the length-to-width ratio of pull-apart basins depends on the width of the shear zone and the shear strength of the faults and is not constant as previously suggested. 
We show that pure strike-slip motion can produce tectonic features, such as elongate half grabens along a single fault, rotated blocks at the ends of parallel faults, or extension perpendicular to overlapping en echelon faults, which can be misinterpreted to indicate a regional component of extension. Zones of subsidence or uplift can become wider than expected for transform plate boundaries when a minor component of oblique motion is added to a system of parallel strike-slip faults.
Wolff, Anette S. B.; Kärner, Jaanika; Owe, Jone F.; Oftedal, Bergithe E.V.; Gilhus, Nils Erik; Erichsen, Martina M.; Kämpe, Olle; Meager, Anthony; Peterson, Pärt; Kisand, Kai; Willcox, Nick; Husebye, Eystein S.
2014-01-01
Patients with the autoimmune polyendocrine syndrome type I (APS-I), caused by mutations in the autoimmune regulator (AIRE) gene, and myasthenia gravis (MG) with thymoma, show intriguing but unexplained parallels. They include uncommon manifestations like autoimmune adrenal insufficiency (AI), hypoparathyroidism (HP), and chronic mucocutaneous candidiasis (CMC) plus autoantibodies neutralizing IL-17, IL-22 and type I interferons. Thymopoiesis in the absence of AIRE is implicated in both syndromes. To test whether these parallels extend further, we screened 247 patients with MG and/or thymoma for clinical features and organ-specific autoantibodies characteristic of APS-I patients, and assayed 26 thymoma samples for transcripts for AIRE and 16 peripheral tissue-specific autoantigens (TSAgs) by quantitative PCR. We found APS-I-typical autoantibodies and clinical manifestations, including CMC, AI and asplenia, respectively in 49/121 (40%) and 10/121 (8%) thymoma patients, but clinical features seldom co-occurred with the corresponding autoantibodies. Both were rare in other MG subgroups (N=126). In 38 APS-I patients, by contrast, we observed neither autoantibodies against muscle antigens nor any neuromuscular disorders. Whereas relative transcript levels for AIRE and 7 of 16 TSAgs showed the expected under-expression in thymomas, levels were increased for 4 of the 5 TSAgs most frequently targeted by these patients’ autoAbs. Hence the clinical and serologic parallels to APS-I in patients with thymomas are not explained purely by deficient TSAg transcription in these aberrant AIRE-deficient tumors. We therefore propose additional explanations for the unusual autoimmune biases they provoke. Thymoma patients should be monitored for potentially life-threatening APS-I manifestations such as AI and HP. PMID:25230752
Shim, Youngseon; Kim, Hyung J; Jung, Younjoon
2012-01-01
Supercapacitors with two single-sheet graphene electrodes in the parallel plate geometry are studied via molecular dynamics (MD) computer simulations. Pure 1-ethyl-3-methylimidazolium tetrafluoroborate (EMI+BF4-) and a 1.1 M solution of EMI+BF4- in acetonitrile are considered as prototypes of room-temperature ionic liquids (RTILs) and organic electrolytes. Electrolyte structure, charge density and associated electric potential are investigated by varying the charges and separation of the two electrodes. Multiple charge layers formed in the electrolytes in the vicinity of the electrodes are found to screen the electrode surface charge almost completely. As a result, the supercapacitors show nearly an ideal electric double layer behavior, i.e., the electric potential exhibits essentially a plateau behavior in the entire electrolyte region except for sharp changes in screening zones very close to the electrodes. Due to its small size and large charge separation, BF4- is considerably more efficient in shielding electrode charges than EMI+. In the case of the acetonitrile solution, acetonitrile also plays an important role by aligning its dipoles near the electrodes; however, the overall screening mainly arises from ions. Because of the disparity of shielding efficiency between cations and anions, the capacitance of the positively-charged anode is significantly larger than that of the negatively-charged cathode. Therefore, the total cell capacitance in the parallel plate configuration is primarily governed by the cathode. Ion conductivity obtained via the Green-Kubo (GK) method is found to be largely independent of the electrode surface charge. Interestingly, EMI+BF4- shows higher GK ion conductivity than the 1.1 M acetonitrile solution between two parallel plate electrodes.
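The closing statement, that the cathode governs the total cell capacitance, follows from treating the two electric double layers as capacitors in series; under that standard assumption:

```python
def cell_capacitance(c_anode, c_cathode):
    """Total capacitance of two electric double layers in series:

        1 / C_cell = 1 / C_anode + 1 / C_cathode

    The series combination is always dominated by the smaller electrode
    capacitance, here the cathode, since BF4- screens the anode more
    efficiently than the bulkier EMI+ screens the cathode.
    """
    return 1.0 / (1.0 / c_anode + 1.0 / c_cathode)
```

For example, with a large anode capacitance the cell value sits just below the cathode's, which is the sense in which the cathode "governs" the cell.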
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Kahn, B. H.; Green, R. O.; Chien, S.; Middleton, E.; Tran, D. Q.
2017-12-01
Clouds' variable ice and liquid content significantly influences their optical properties, evolution, and radiative forcing potential (Tan and Storelvmo, J. Atmos. Sci, 73, 2016). However, most remote measurements of thermodynamic phase have spatial resolutions of 1 km or more and are insensitive to mixed phases. This under-constrains important processes, such as spatial partitioning within mixed phase clouds, that carry outsize radiative forcing impacts. These uncertainties could shift Global Climate Model (GCM) predictions of future warming by over 1 degree Celsius (Tan et al., Science 352:6282, 2016). Imaging spectroscopy of reflected solar energy from the 1.4 - 1.8 μm shortwave infrared (SWIR) spectral range can address this observational gap. These observations can distinguish ice and water absorption, providing a robust and sensitive measurement of cloud top thermodynamic phase including mixed phases. Imaging spectrometers can resolve variations at scales of tens to hundreds of meters (Thompson et al., JGR-Atmospheres 121, 2016). We report the first such global high spatial resolution (30 m) survey, based on data from 2005-2015 acquired by the Hyperion imaging spectrometer onboard NASA's EO-1 spacecraft (Pearlman et al., Proc. SPIE 4135, 2001). Estimated seasonal and latitudinal distributions of cloud thermodynamic phase generally agree with observations made by other satellites such as the Atmospheric Infrared Sounder (AIRS). Variogram analyses reveal variability at different spatial scales. Our results corroborate previously observed zonal distributions, while adding insight into the spatial scales of processes governing cloud top thermodynamic phase. Figure: Thermodynamic phase retrievals. Top: Example of a cloud top thermodynamic phase map from the EO-1/Hyperion. Bottom: Latitudinal distributions of pure and mixed phase clouds, 2005-2015, showing Liquid Thickness Fraction (LTF). LTF=0 corresponds to pure ice absorption, while LTF=1 is pure liquid. 
The archive contains over 45,000 scenes. Copyright 2017, California Institute of Technology. Government Support Acknowledged.
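The Liquid Thickness Fraction defined in the figure caption can be sketched as a two-endmember unmixing of the observed SWIR absorption between pure-ice and pure-liquid spectra (a hedged sketch: the published retrieval fits measured absorption spectra over the 1.4-1.8 μm range, and the endmember vectors here are placeholders):

```python
import numpy as np

def liquid_thickness_fraction(obs, ice, liquid):
    """Least-squares fraction f in: obs ~ f * liquid + (1 - f) * ice.

    Returns f clipped to [0, 1]: f = 0 corresponds to pure ice
    absorption, f = 1 to pure liquid, intermediate values to mixed
    phase. Inputs are spectra sampled on a common wavelength grid.
    """
    obs, ice, liquid = (np.asarray(a, dtype=float) for a in (obs, ice, liquid))
    d = liquid - ice
    f = float(np.dot(obs - ice, d) / np.dot(d, d))
    return min(1.0, max(0.0, f))
```

Mapping this scalar per 30 m pixel is what yields the high-resolution phase maps and latitudinal LTF distributions described above.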
NASA Astrophysics Data System (ADS)
Brächer, T.; Pirro, P.; Hillebrands, B.
2017-06-01
Magnonics and magnon spintronics aim at the utilization of spin waves and magnons, their quanta, for the construction of wave-based logic networks via the generation of pure all-magnon spin currents and their interfacing with electric charge transport. The promise of efficient parallel data processing and low power consumption renders this field one of the most promising research areas in spintronics. In this context, the process of parallel parametric amplification, i.e., the conversion of microwave photons into magnons at one half of the microwave frequency, has proven to be a versatile tool to excite and to manipulate spin waves. Its beneficial and unique properties such as frequency and mode-selectivity, the possibility to excite spin waves in a wide wavevector range and the creation of phase-correlated wave pairs, have enabled the achievement of important milestones like the magnon Bose-Einstein condensation and the cloning and trapping of spin-wave packets. Parallel parametric amplification, which allows for the selective amplification of magnons while conserving their phase is, thus, one of the key methods of spin-wave generation and amplification. The application of parallel parametric amplification to CMOS-compatible micro- and nano-structures is an important step towards the realization of magnonic networks. This is motivated not only by the fact that amplifiers are an important tool for the construction of any extended logic network but also by the unique properties of parallel parametric amplification. In particular, the creation of phase-correlated wave pairs allows for rewarding alternative logic operations such as a phase-dependent amplification of the incident waves. Recently, the successful application of parallel parametric amplification to metallic microstructures has been reported which constitutes an important milestone for the application of magnonics in practical devices. 
It has been demonstrated that parametric amplification provides an excellent tool to generate and to amplify spin waves in these systems in a wide wavevector range. In particular, the amplification greatly benefits from the discreteness of the spin-wave spectra since the size of the microstructures is comparable to the spin-wave wavelength. This opens up new, interesting routes of spin-wave amplification and manipulation. In this review, we will give an overview over the recent developments and achievements in this field.
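The "one half of the microwave frequency" conversion described above follows from energy and momentum conservation when a pump photon (with essentially zero wavevector) splits into a magnon pair; in standard notation:

```latex
\hbar\omega_p \;=\; \hbar\omega_{\mathbf{k}} + \hbar\omega_{-\mathbf{k}},
\qquad \mathbf{k} + (-\mathbf{k}) \;\approx\; 0,
```

so for degenerate pairs $\omega_{\mathbf{k}} = \omega_p/2$: the magnons appear at half the pump frequency as phase-correlated, counter-propagating pairs, which is the origin of both the frequency selectivity and the phase-dependent amplification mentioned in the text.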
CERN data services for LHC computing
NASA Astrophysics Data System (ADS)
Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.
2017-10-01
Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting the LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.
a Digital Pre-Inventory of Architectural Heritage in Kosovo Using DOCU-TOOLS®
NASA Astrophysics Data System (ADS)
Jäger-Klein, C.; Kryeziu, A.; Ymeri Hoxha, V.; Rant, M.
2017-08-01
Kosovo is one of the new states in transition in the Western Balkans and its state institutions are not yet fully functional. Although the territory has a rich architectural heritage, the documentation and inventory of this cultural legacy by the national monument protection institutions is insufficiently structured and incomplete. Civil society has collected far more material than the state, but people are largely untrained in the terminology and categories of professional cultural inventories and in database systems and their international standards. What is missing is an efficient, user-friendly, low-threshold tool to gather together and integrate the various materials, archive them appropriately and make all the information suitably accessible to the public. Multiple groups of information-holders should be able to feed this open-access platform in an easy and self-explanatory way. In this case, existing systems such as the Arches Heritage Inventory and Management System would seem to be too complex, as they presuppose a certain understanding of the standard terminology and internationally used categories. Also, the platform as archive must be able to guarantee the integrity and authenticity of the inputted material to avoid abuse by unauthorized users with nationalistic views. Such an open-access lay-inventory would enable Kosovo to meet the urgent need for a national heritage inventory, which the state institutions have thus far been unable to establish. The situation is time-sensitive, as Kosovo will soon repeat its attempt to join UNESCO, having failed to do so in 2015, receiving only a minimum number of votes in favour. In Austria, a program called docu-tools® was recently developed to tackle a similar problem. It can be used by non-professionals to document complicated and multi-structured cases within the building process. Its cloud- and app-based structure allows the archiving of enormous numbers of images and documents in any format. 
Additionally, it allows parallel access by authorized users and avoids any hierarchy of structure or prerequisites for its users. The archived documents cannot be changed after input, which has earned this documentation tool acknowledged relevance as court evidence. The following article is an attempt to explore the potential for this tool to prepare Kosovo for a comprehensive heritage inventory.
Climate variability in the subarctic area for the last 2 millennia
NASA Astrophysics Data System (ADS)
Nicolle, Marie; Debret, Maxime; Massei, Nicolas; Colin, Christophe; deVernal, Anne; Divine, Dmitry; Werner, Johannes P.; Hormes, Anne; Korhola, Atte; Linderholm, Hans W.
2018-01-01
To put recent climate change in perspective, it is necessary to extend the instrumental climate records with proxy data from paleoclimate archives. Arctic climate variability for the last 2 millennia has been investigated using statistical and signal analyses of three regionally averaged records from the North Atlantic, Siberia and Alaska, based on many types of proxy data archived in the Arctic 2k database v1.1.1. In the North Atlantic and Alaska, the major climatic trend is characterized by long-term cooling interrupted by recent warming that started at the beginning of the 19th century. In the Siberian region, this cooling is visible at two sites, while the other sites show warming. The cooling of the Little Ice Age (LIA) was identified from the individual series, but it is characterized by a wide range of spatial and temporal expressions of climate variability, in contrast to the Medieval Climate Anomaly. The LIA started at the earliest around AD 1200 and ended at the latest in the middle of the 20th century. The temporal coverage of the LIA did not show regional consistency or a particular spatial distribution, nor a relationship with archive or proxy type. A focus on the last 2 centuries shows a recent warming characterized by a well-marked warming trend parallel with increasing greenhouse gas emissions. It also shows multidecadal variability likely due to natural processes acting on the internal climate system at a regional scale. A ~16-30-year cycle is found in Alaska and seems to be linked to the Pacific Decadal Oscillation, whereas ~20-30- and ~50-90-year periodicities characterize North Atlantic climate variability, likely in relation to the Atlantic Multidecadal Oscillation. These regional features are probably linked to sea-ice cover fluctuations through the ice-temperature positive feedback.
Fermilab History and Archives Project | Home
Fermilab History and Archives Project, Fermi National Accelerator Laboratory
Robert Owen in the history of the social sciences: three presentist views.
Pūras, Adomas
2014-01-01
This paper argues that the present-day disagreements over the right course for sociology and its public role are reflected and paralleled in contemporary historiography of Robert Owen, British social reformer and self-described social scientist. Historical accounts, written from the perspectives of public sociology, "pure science" sociology, and anti-Marxism, interpret Owen's historical role in mutually antithetical and self-serving ways. Contrasting the three presentist accounts, I engage in an analysis of "techniques of presentism": history-structuring concepts, such as "disciplinary founder" and "disciplinary prehistory," that allow presentist authors to achieve their effects. Along the way, I elaborate Peter Baehr's classification of sociology's founders. © 2013 Wiley Periodicals, Inc.
On Lovelock analogs of the Riemann tensor
NASA Astrophysics Data System (ADS)
Camanho, Xián O.; Dadhich, Naresh
2016-03-01
It is possible to define an analog of the Riemann tensor for Nth order Lovelock gravity, its characterizing property being that the trace of its Bianchi derivative yields the corresponding analog of the Einstein tensor. Interestingly there exist two parallel but distinct such analogs and the main purpose of this note is to reconcile both formulations. In addition we will introduce a simple tensor identity and use it to show that any pure Lovelock vacuum in odd d=2N+1 dimensions is Lovelock flat, i.e. any vacuum solution of the theory has vanishing Lovelock-Riemann tensor. Further, in the presence of cosmological constant it is the Lovelock-Weyl tensor that vanishes.
Badarlis, Anastasios; Pfau, Axel; Kalfas, Anestis
2015-01-01
Measurement of gas density and viscosity was conducted using a micro-cantilever beam. In parallel, the validity of the proposed modeling approach was evaluated. This study also aimed to widen the database of the gases on which the model development of the micro-cantilever beams is based. The density and viscosity of gases are orders of magnitude lower than liquids. For this reason, the use of a very sensitive sensor is essential. In this study, a micro-cantilever beam from the field of atomic force microscopy was used. Although the current cantilever was designed to work with thermal activation, in the current investigation, it was activated with an electromagnetic force. The deflection of the cantilever beam was detected by an integrated piezo-resistive sensor. Six pure gases and sixteen mixtures of them in ambient conditions were investigated. The outcome of the investigation showed that the current cantilever beam had a sensitivity of 240 Hz/(kg/m3), while the accuracy of the determined gas density and viscosity in ambient conditions reached ±1.5% and ±2.0%, respectively. PMID:26402682
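The reported sensitivity of 240 Hz/(kg/m3) suggests a first-order picture in which the cantilever's resonance frequency drops linearly with the density of the surrounding gas. A minimal sketch of that conversion (the vacuum resonance frequency below is a hypothetical value, not taken from the paper):

```python
S = 240.0          # sensitivity reported in the abstract, Hz per (kg/m^3)
f_vac = 45000.0    # hypothetical resonance frequency in vacuum (Hz)

def gas_density(f_measured):
    """First-order estimate: resonance frequency drops linearly with density."""
    return (f_vac - f_measured) / S

# air at ambient conditions (~1.2 kg/m^3) would shift the resonance by ~288 Hz
density = gas_density(f_vac - 288.0)
```

In practice the density-frequency relation is nonlinear and the gas viscosity additionally damps the oscillation (lowering the quality factor), which is why the study fits a full cantilever-fluid model rather than a single linear sensitivity.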
Thermally developing MHD peristaltic transport of nanofluids with velocity and thermal slip effects
NASA Astrophysics Data System (ADS)
Sher Akbar, Noreen; Bintul Huda, A.; Tripathi, D.
2016-09-01
We investigate the velocity slip and thermal slip effects on peristaltically driven thermal transport of nanofluids through the vertical parallel plates under the influence of transverse magnetic field. The wall surface is propagating with sinusoidal wave velocity c. The flow characteristics are governed by the mass, momentum and energy conservation principle. Low Reynolds number and large wavelength approximations are taken into consideration to simplify the non-linear terms. Analytical solutions for axial velocity, temperature field, pressure gradient and stream function are obtained under certain physical boundary conditions. Two types of nanoparticles, SiO2 and Ag, are considered for analysis with water as base fluid. This is the first article in the literature that discusses the SiO2 and Ag nanoparticles for a peristaltic flow with variable viscosity. The effects of physical parameters on velocity, temperature, pressure and trapping are discussed. A comparative study of SiO2 nanofluid, Ag nanofluid and pure water is also presented. This model is applicable in biomedical engineering to make thermal peristaltic pumps and other pumping devices like syringe pumps, etc. It is observed that pressure for pure water is maximum and pressure for Ag nanofluid is minimum.
Ragab, A; Shreef, E; Behiry, E; Zalat, S; Noaman, M
2009-01-01
To investigate the safety and efficacy of ozone therapy in adult patients with sudden sensorineural hearing loss. Prospective, randomised, double-blinded, placebo-controlled, parallel group, clinical trial. Forty-five adult patients presented with sudden sensorineural hearing loss, and were randomly allocated to receive either placebo (15 patients) or ozone therapy (auto-haemotherapy; 30 patients). For the latter treatment, 100 ml of the patient's blood was treated immediately with a 1:1 volume, gaseous mixture of oxygen and ozone (from an ozone generator) and re-injected into the patient by intravenous infusion. Treatments were administered twice weekly for 10 sessions. The following data were recorded: pre- and post-treatment mean hearing gains; air and bone pure tone averages; speech reception thresholds; speech discrimination scores; and subjective recovery rates. Significant recovery was observed in 23 patients (77 per cent) receiving ozone treatment, compared with six (40 per cent) patients receiving placebo (p < 0.05). Mean hearing gains, pure tone averages, speech reception thresholds and subjective recovery rates were significantly better in ozone-treated patients compared with placebo-treated patients (p < 0.05). Ozone therapy is a significant modality for treatment of sudden sensorineural hearing loss; no complications were observed.
GPU implementation of the simplex identification via split augmented Lagrangian
NASA Astrophysics Data System (ADS)
Sevilla, Jorge; Nascimento, José M. P.
2015-10-01
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented here indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
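The linear mixing model described above, each pixel as a nonnegative combination of endmember signatures, can be illustrated with a toy example. This is not SISAL itself (which also has to estimate the endmembers); it assumes the endmembers are known and recovers the abundance fractions of one synthetic pixel by nonnegative least squares:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
E = rng.uniform(0.1, 1.0, size=(50, 3))   # 50 spectral bands, 3 endmembers
f_true = np.array([0.5, 0.3, 0.2])        # abundance fractions, sum to one
pixel = E @ f_true                        # noiseless mixed-pixel spectrum

f_est, _ = nnls(E, pixel)                 # enforce nonnegativity
f_est /= f_est.sum()                      # enforce sum-to-one
```

With noiseless data and full-rank endmembers the fractions are recovered exactly; real pixels add noise and unknown endmembers, which is the harder problem SISAL addresses.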
NASA Astrophysics Data System (ADS)
Geremariam Welearegay, Tesfalem; Cindemir, Umut; Österlund, Lars; Ionescu, Radu
2018-02-01
Here, we report for the first time the fabrication of ligand-functionalised ultrapure monodispersed metal nanoparticles (Au, Cu, and Pt) from their pure metal precursors using the advanced gas deposition technique. The experimental conditions during nanoparticle formation were adjusted in order to obtain ultrafine isolated nanoparticles on different substrates. The morphology and surface analysis of the as-deposited metal nanoparticles were investigated using scanning electron microscopy, x-ray diffraction and Fourier transform infra-red spectroscopy, which demonstrated the formation of highly ordered pure crystalline nanoparticles with a relatively uniform size distribution of ∼10 nm (Au), ∼4 nm (Cu) and ∼3 nm (Pt), respectively. A broad range of organic ligands containing thiol or amine functional groups were attached to the nanoparticles to form continuous networks of nanoparticle-ligand nanoassemblies, which were characterised by scanning electron microscopy and x-ray photoelectron spectroscopy. The electrical resistance of the functional nanoassemblies deposited in the gap spacing of two microfabricated parallel Au electrodes patterned on silicon substrates ranged between tens of kΩ and tens of MΩ, which is suitable for use in many applications including (bio)chemical sensors, surface-enhanced Raman spectroscopy and molecular electronic rectifiers.
NASA Astrophysics Data System (ADS)
Hussein, Heider A.; Demiroglu, Ilker; Johnston, Roy L.
2018-02-01
To contribute to the discussion of the high activity and reactivity of the Au-Pd system, we have adopted the BPGA-DFT approach to study the structural and energetic properties of medium-sized Au-Pd sub-nanometre clusters with 11-18 atoms. We have examined the structural behaviour and stability as a function of cluster size and composition. The study suggests 2D-3D crossover points for pure Au clusters at 14 and 16 atoms, whereas pure Pd clusters are all found to be 3D. For Au-Pd nanoalloys, the roles of cluster size and doping were found to be extensive and non-monotonic in altering cluster structures. Various stability criteria (e.g. binding energies, second differences in energy, and mixing energies) are used to evaluate the energetics, structures, and tendency of segregation in sub-nanometre Au-Pd clusters. HOMO-LUMO gaps were calculated to give additional information on cluster stability, and a systematic homotop search was used to evaluate the energies of the generated global minima of mono-substituted clusters and the preferred doping sites, as well as to confirm the validity of the BPGA-DFT approach.
NASA Astrophysics Data System (ADS)
Stepanova, L. V.
2017-12-01
Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is the Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack are subjected to mixed-mode loadings. The simulation cell contains 400,000 atoms. The crack propagation direction angles are obtained and analyzed for mixity parameter values ranging from pure tensile to pure shear loading and for a wide range of temperatures (from 0.1 K to 800 K). It is shown that the crack propagation direction angles obtained by molecular dynamics coincide with those given by multi-parameter fracture criteria based on the strain energy density and on a multi-parameter description of the crack-tip stress fields that takes into account the higher-order terms of the Williams series expansion.
Spin-independent transparency of pure spin current at normal/ferromagnetic metal interface
NASA Astrophysics Data System (ADS)
Hao, Runrun; Zhong, Hai; Kang, Yun; Tian, Yufei; Yan, Shishen; Liu, Guolei; Han, Guangbing; Yu, Shuyun; Mei, Liangmo; Kang, Shishou
2018-03-01
The spin transparency at the normal/ferromagnetic metal (NM/FM) interface was studied in Pt/YIG/Cu/FM multilayers. The spin current generated by the spin Hall effect (SHE) in Pt flows into Cu/FM because the magnetic insulator YIG blocks charge current while transmitting spin current via the magnon current. Therefore, the nonlocal voltage induced by the inverse spin Hall effect (ISHE) in the FM can be detected. With the magnetization of the FM parallel or antiparallel to the spin polarization of the pure spin current (σ_sc), a spin-independent nonlocal voltage is induced. This indicates that the spin transparency at the Cu/FM interface is spin-independent, which demonstrates that the influence of the spin-dependent electrochemical potential due to spin accumulation on the interfacial spin transparency is negligible. Furthermore, a larger spin Hall angle is obtained for Fe20Ni80 (Py) than for Ni from the nonlocal voltage measurements. Project supported by the National Basic Research Program of China (Grant No. 2015CB921502), the National Natural Science Foundation of China (Grant Nos. 11474184 and 11627805), the 111 Project, China (Grant No. B13029), and the Fundamental Research Funds of Shandong University, China.
36 CFR 1280.66 - May I use the National Archives Library?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...
36 CFR 1280.66 - May I use the National Archives Library?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...
36 CFR 1280.66 - May I use the National Archives Library?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...
36 CFR 1280.66 - May I use the National Archives Library?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Archives Library? 1280.66 Section 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park are...
Getting Personal: Personal Archives in Archival Programs and Curricula
ERIC Educational Resources Information Center
Douglas, Jennifer
2017-01-01
In 2001, Catherine Hobbs referred to silences around personal archives, suggesting that these types of archives were not given as much attention as organizational archives in the development of archival theory and methodology. The aims of this article are twofold: 1) to investigate the extent to which such silences exist in archival education…
Role of copper oxides in contact killing of bacteria.
Hans, Michael; Erbe, Andreas; Mathews, Salima; Chen, Ying; Solioz, Marc; Mücklich, Frank
2013-12-31
The potential of metallic copper as an intrinsically antibacterial material is gaining increasing attention in the face of growing antibiotics resistance of bacteria. However, the mechanism of the so-called "contact killing" of bacteria by copper surfaces is poorly understood and requires further investigation. In particular, the influences of bacteria-metal interaction, media composition, and copper surface chemistry on contact killing are not fully understood. In this study, copper oxide formation on copper during standard antimicrobial testing was measured in situ by spectroscopic ellipsometry. In parallel, contact killing under these conditions was assessed with bacteria in phosphate buffered saline (PBS) or Tris-Cl. For comparison, defined Cu2O and CuO layers were thermally generated and characterized by grazing incidence X-ray diffraction. The antibacterial properties of these copper oxides were tested under the conditions used above. Finally, copper ion release was recorded for both buffer systems by inductively coupled plasma atomic absorption spectroscopy, and exposed copper samples were analyzed for topographical surface alterations. It was found that there was a fairly even growth of CuO under wet plating conditions, reaching 4-10 nm in 300 min, but no measurable Cu2O was formed during this time. CuO was found to significantly inhibit contact killing, compared to pure copper. In contrast, thermally generated Cu2O was essentially as effective in contact killing as pure copper. Copper ion release from the different surfaces roughly correlated with their antibacterial efficacy and was highest for pure copper, followed by Cu2O and CuO. Tris-Cl induced a 10-50-fold faster copper ion release compared to PBS. Since the Cu2O that primarily forms on copper under ambient conditions is as active in contact killing as pure copper, antimicrobial objects will retain their antimicrobial properties even after oxide formation.
VLSI-based video event triggering for image data compression
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1994-02-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
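The pre-trigger/post-trigger idea described above, keep a rolling window of recent frames in memory and, when the trigger fires, archive that window together with a fixed number of subsequent frames, can be sketched in a few lines. This is an illustrative software analogue, not the NASA VLSI design; frame values and the trigger predicate are made up:

```python
from collections import deque

def capture_event(frames, trigger, pre=3, post=2):
    """Keep a rolling pre-trigger window; on trigger, archive it plus `post` frames."""
    ring = deque(maxlen=pre)          # high-capacity RAM stand-in
    archived = []
    it = iter(frames)
    for frame in it:
        if trigger(frame):
            archived.extend(ring)     # pre-trigger history
            archived.append(frame)    # the triggering frame itself
            for _ in range(post):     # post-trigger frames
                try:
                    archived.append(next(it))
                except StopIteration:
                    break
            break
        ring.append(frame)
    return archived

print(capture_event(list(range(10)), lambda f: f == 5))  # [2, 3, 4, 5, 6, 7]
```

Only the frames around the event reach storage; everything else falls out of the ring buffer, which is the essence of the data-compression claim.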
36 CFR § 1280.66 - May I use the National Archives Library?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Archives Library? § 1280.66 Section § 1280.66 Parks, Forests, and Public Property NATIONAL ARCHIVES AND... Facilities in the Washington, DC, Area? § 1280.66 May I use the National Archives Library? The National Archives Library facilities in the National Archives Building and in the National Archives at College Park...
NASA Astrophysics Data System (ADS)
Huck, Claire E.; van de Flierdt, Tina; Jiménez-Espejo, Francisco J.; Bohaty, Steven M.; Röhl, Ursula; Hammond, Samantha J.
2016-03-01
Fossil fish teeth from pelagic open ocean settings are considered a robust archive for preserving the neodymium (Nd) isotopic composition of ancient seawater. However, using fossil fish teeth as an archive to reconstruct seawater Nd isotopic compositions in different sedimentary redox environments and in terrigenous-dominated, shallow marine settings is less proven. To address these uncertainties, fish tooth and sediment samples from a middle Eocene section deposited proximal to the East Antarctic margin at Integrated Ocean Drilling Program Site U1356 were analyzed for major and trace element geochemistry, and Nd isotopes. Major and trace element analyses of the sediments reveal changing redox conditions throughout deposition in a shallow marine environment. However, variations in the Nd isotopic composition and rare earth element (REE) patterns of the associated fish teeth do not correspond to redox changes in the sediments. REE patterns in fish teeth at Site U1356 carry a typical mid-REE-enriched signature. However, a consistently positive Ce anomaly marks a deviation from a pure authigenic origin of REEs to the fish tooth. Neodymium isotopic compositions of cleaned and uncleaned fish teeth fall between modern seawater and local sediments and hence could be authigenic in nature, but could also be influenced by sedimentary fluxes. We conclude that the fossil fish tooth Nd isotope proxy is not sensitive to moderate changes in pore water oxygenation. However, combined studies on sediments, pore waters, fish teeth, and seawater are needed to fully understand processes driving the reconstructed signature from shallow marine sections in proximity to continental sources.
Moving Toward Real Time Data Handling: Data Management at the IRIS DMC
NASA Astrophysics Data System (ADS)
Ahern, T. K.; Benson, R. B.
2001-12-01
The IRIS Data Management Center at the University of Washington has become a major archive and distribution center for a wide variety of seismological data. With a mass storage system with a 360-terabyte capacity, the center is well positioned to manage the data flow, both inbound and outbound, from all anticipated seismic sources for the foreseeable future. As data flow in and out of the IRIS DMC at an increasing rate, new methods to deal with data using purely automated techniques are being developed. The on-line and self-service data repositories of SPYDER® and FARM are collections of seismograms for all larger events. The WWW tool WILBER and the client application WEED are examples of tools that provide convenient access to the 1/2 terabyte of SPYDER® and FARM data. The Buffer of Uniform Data (BUD) system provides access to continuous data available in real time from GSN, FDSN, US regional networks, and other globally distributed stations. Continuous data that have received quality control are always available from the archive of continuous data. This presentation will review current and future data access techniques supported at IRIS. One of the most difficult tasks at the DMC is the management of the metadata that describe all the stations, sensors, and data holdings. Demonstrations of tools that provide access to the metadata will be presented. This presentation will focus on the new techniques of data management now being developed at the IRIS DMC. We believe that these techniques are generally applicable to other types of geophysical data management as well.
Winkelmann, Andreas
2012-06-01
The Anatomische Gesellschaft (AG) is an international society for the anatomical sciences and at the same time the main organising body for German anatomists. This study analyses how the AG went through the years of National Socialism. As the society does not possess archival material from that time, the analysis is mainly based on the society proceedings (Verhandlungen der Anatomischen Gesellschaft) published annually after each meeting from 1934 to 1939 and again in 1950. During the period of National Socialism, the AG kept its international status against demands to make it a purely German society. It did not introduce anti-Jewish regulations or the Führer principle into its bylaws. The membership directories reveal that it was at least possible for members whose career was disrupted by Nazi policies to remain on the membership lists throughout the Nazi period. However, in contrast to later assumptions that no persecuted member of the AG was ever struck from its register, 17 of 57 persecuted members left the society between 1933 and 1939. The membership of six of these members was cancelled, officially for unpaid fees. However, other members with much longer arrears were not cancelled. To date, no additional historical information is available to assess the circumstances of these cancellations. In general, it remains remarkable that, in contrast to many other societies, the AG did not follow the path of preemptive obedience towards the new rulers. More archival sources need to be uncovered to elucidate the external influences and internal negotiations behind the published documents. Copyright © 2012. Published by Elsevier GmbH.
gPhoton: THE GALEX PHOTON DATA ARCHIVE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Million, Chase; Fleming, Scott W.; Shiao, Bernie
gPhoton is a new database product and software package that enables analysis of GALEX ultraviolet data at the photon level. The project's stand-alone, pure-Python calibration pipeline reproduces the functionality of the original mission pipeline to reduce raw spacecraft data to lists of time-tagged, sky-projected photons, which are then hosted in a publicly available database by the Mikulski Archive for Space Telescopes. This database contains approximately 130 terabytes of data describing approximately 1.1 trillion sky-projected events with a timestamp resolution of five milliseconds. A handful of Python and command-line modules serve as a front end to interact with the database and to generate calibrated light curves and images from the photon-level data at user-defined temporal and spatial scales. The gPhoton software and source code are in active development and publicly available under a permissive license. We describe the motivation, design, and implementation of the calibration pipeline, database, and tools, with emphasis on divergence from prior work, as well as challenges created by the large data volume. We summarize the astrometric and photometric performance of gPhoton relative to the original mission pipeline. For a brief example of short time-domain science capabilities enabled by gPhoton, we show new flares from the known M-dwarf flare star CR Draconis. The gPhoton software has permanent object identifiers with the ASCL (ascl:1603.004) and DOI (doi:10.17909/T9CC7G). This paper describes the software as of version v1.27.2.
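At its core, building a light curve from time-tagged photon events at a user-defined temporal scale reduces to histogramming timestamps. The sketch below is a generic illustration of that step, not the gPhoton API, and the event times are synthetic:

```python
import numpy as np

def light_curve(times, t0, t1, binsize):
    """Bin time-tagged photon events into counts per time bin."""
    edges = np.arange(t0, t1 + binsize, binsize)
    counts, _ = np.histogram(times, bins=edges)
    return edges[:-1], counts          # bin start times and counts

rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0.0, 10.0, 1000))   # synthetic event times (s)
starts, counts = light_curve(events, 0.0, 10.0, 1.0)
```

The real pipeline must additionally apply calibration (flat-fielding, dead-time and aperture corrections) before the counts become fluxes, but the temporal-scale flexibility comes from exactly this kind of user-chosen `binsize`.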
Brain mechanisms of recovery from pure alexia: A single case study with multiple longitudinal scans.
Cohen, Laurent; Dehaene, Stanislas; McCormick, Samantha; Durant, Szonya; Zanker, Johannes M
2016-10-01
Pure alexia is an acquired reading disorder, typically due to a left occipito-temporal lesion affecting the Visual Word Form Area (VWFA). It is unclear whether the VWFA acts as a unique bottleneck for reading, or whether alternative routes are available for recovery. Here, we address this issue through the single-case longitudinal study of a neuroscientist who experienced pure alexia and participated in 17 behavioral, 9 anatomical, and 9 fMRI assessment sessions over a period of two years. The origin of the impairment was assigned to a small left fusiform lesion, accompanied by a loss of VWFA responsivity and by the degeneration of the associated white matter pathways. fMRI experiments allowed us to image longitudinally the visual perception of words, as compared to other classes of stimuli, as well as the mechanisms of letter-by-letter reading. The progressive improvement of reading was not associated with the re-emergence of a new area selective to words, but with increasing responses in spared occipital cortex posterior to the lesion and in contralateral right occipital cortex. Those regions showed a non-specific increase of activations over time and an increase in functional correlation with distant language areas. Those results confirm the existence of an alternative occipital route for reading, bypassing the VWFA, but they also point to its key limitation: the patient remained a slow letter-by-letter reader, thus supporting the critical importance of the VWFA for the efficient parallel recognition of written words. Copyright © 2016 Elsevier Ltd. All rights reserved.
Vonderschen, Katrin; Wagner, Hermann
2012-04-25
Birds and mammals exploit interaural time differences (ITDs) for sound localization. Subsequent to ITD detection by brainstem neurons, ITD processing continues in parallel midbrain and forebrain pathways. In the barn owl, both ITD detection and processing in the midbrain are specialized to extract ITDs independent of frequency, which amounts to a pure time delay representation. Recent results have elucidated different mechanisms of ITD detection in mammals, which lead to a representation of small ITDs in high-frequency channels and large ITDs in low-frequency channels, resembling a phase delay representation. However, the detection mechanism does not prevent a change in ITD representation at higher processing stages. Here we analyze ITD tuning across frequency channels with pure tone and noise stimuli in neurons of the barn owl's auditory arcopallium, a nucleus at the endpoint of the forebrain pathway. To extend the analysis of ITD representation across frequency bands to a large neural population, we employed Fourier analysis for the spectral decomposition of ITD curves recorded with noise stimuli. This method was validated using physiological as well as model data. We found that low frequencies convey sensitivity to large ITDs, whereas high frequencies convey sensitivity to small ITDs. Moreover, different linear phase frequency regimes in the high-frequency and low-frequency ranges suggested an independent convergence of inputs from these frequency channels. Our results are consistent with ITD being remodeled toward a phase delay representation along the forebrain pathway. This indicates that sensory representations may undergo substantial reorganization, presumably in relation to specific behavioral output.
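The spectral decomposition used here, recovering per-frequency delay sensitivity from a noise-ITD curve via Fourier analysis, can be illustrated on a synthetic tuning curve. The channel frequencies, best delays, and sampling grid below are made-up values, not data from the study:

```python
import numpy as np

n, d = 100, 10e-6                  # 100 ITD samples at 10 microsecond spacing
itd = np.arange(n) * d
# synthetic noise-ITD curve: a 2 kHz channel tuned to a 100 us delay
# plus a weaker 5 kHz channel tuned to a 20 us delay
curve = 1.0 * np.cos(2 * np.pi * 2000 * (itd - 100e-6)) \
      + 0.5 * np.cos(2 * np.pi * 5000 * (itd - 20e-6))

spec = np.fft.rfft(curve)
freqs = np.fft.rfftfreq(n, d)
peak = freqs[np.argmax(np.abs(spec[1:])) + 1]     # dominant frequency channel
delay = -np.angle(spec[2]) / (2 * np.pi * 2000)   # best delay from 2 kHz phase
```

The amplitude spectrum identifies which frequency channels drive the ITD curve, and the phase at each channel frequency yields that channel's best delay; a linear phase-versus-frequency regime then distinguishes a pure time delay from a phase delay representation.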
Internal viscoelastic loading in cat papillary muscle.
Chiu, Y L; Ballou, E W; Ford, L E
1982-01-01
The passive mechanical properties of myocardium were defined by measuring force responses to rapid length ramps applied to unstimulated cat papillary muscles. The immediate force changes following these ramps recovered partially toward their initial value, suggesting a series combination of a viscous element and a spring. Because the stretched muscle can bear force at rest, the viscous element must be in parallel with an additional spring. The instantaneous extension-force curves measured at different lengths were nonlinear, and could be made to superimpose by a simple horizontal shift. This finding suggests that the same spring was being measured at each length, and that this spring was in series with both the viscous element and its parallel spring (Voigt configuration), so that the parallel spring is held nearly rigid by the viscous element during rapid steps. The series spring in the passive muscle could account for most of the series elastic recoil in the active muscle, suggesting that the same spring is in series with both the contractile elements and the viscous element. It is postulated that the viscous element might be coupled to the contractile elements by a compliance, so that the load imposed on the contractile elements by the passive structures is viscoelastic rather than purely viscous. Such a viscoelastic load would give the muscle a length-independent, early diastolic restoring force. The possibility is discussed that the length-independent restoring force would allow some of the energy liberated during active shortening to be stored and released during relaxation. PMID:7171707
ModelArchiver—A program for facilitating the creation of groundwater model archives
Winston, Richard B.
2018-03-01
ModelArchiver is a program designed to facilitate the creation of groundwater model archives that meet the requirements of the U.S. Geological Survey (USGS) policy (Office of Groundwater Technical Memorandum 2016.02, https://water.usgs.gov/admin/memo/GW/gw2016.02.pdf, https://water.usgs.gov/ogw/policy/gw-model/). ModelArchiver version 1.0 leads the user step-by-step through the process of creating a USGS groundwater model archive. The user specifies the contents of each of the subdirectories within the archive and provides descriptions of the archive contents. Descriptions of some files can be specified automatically using file extensions; descriptions can also be specified individually. Those descriptions are added to a readme.txt file provided by the user. ModelArchiver moves the content of the archive to the archive folder and compresses some folders into .zip files. As part of the archive, the modeler must create a metadata file describing the archive. The program has a built-in metadata editor and provides links to websites that can aid in creation of the metadata. The built-in metadata editor is also available as a stand-alone program named FgdcMetaEditor version 1.0, which also is described in this report. ModelArchiver updates the metadata file provided by the user with descriptions of the files in the archive. An optional archive list file generated automatically by ModelMuse can streamline the creation of archives by identifying input files, output files, model programs, and ancillary files for inclusion in the archive.
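The extension-based description step can be sketched as follows. This is a hypothetical illustration: the mapping, fallback text, and function name are invented here, and the real ModelArchiver behavior is defined by the USGS policy cited above.

```python
# Illustrative sketch only: map file extensions to boilerplate
# descriptions, falling back to a marker for manual entry.
EXT_DESCRIPTIONS = {          # assumed mapping, not ModelArchiver's actual table
    ".nam": "MODFLOW name file",
    ".dis": "Discretization package input",
    ".lst": "Model listing (output) file",
}

def describe_files(filenames):
    """Return readme-style 'name: description' lines for archive files."""
    lines = []
    for name in filenames:
        dot = name.rfind(".")
        ext = name[dot:] if dot != -1 else ""
        desc = EXT_DESCRIPTIONS.get(ext, "(no automatic description)")
        lines.append(f"{name}: {desc}")
    return lines
```

Files whose extension is unknown get the fallback marker, mirroring the idea that some descriptions must still be specified individually.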
NASA Astrophysics Data System (ADS)
Shipley, Heath V.; Lange-Vagle, Daniel; Marchesini, Danilo; Brammer, Gabriel B.; Ferrarese, Laura; Stefanon, Mauro; Kado-Fong, Erin; Whitaker, Katherine E.; Oesch, Pascal A.; Feinstein, Adina D.; Labbé, Ivo; Lundgren, Britt; Martis, Nicholas; Muzzin, Adam; Nedkova, Kalina; Skelton, Rosalind; van der Wel, Arjen
2018-03-01
We present Hubble multi-wavelength photometric catalogs, including (up to) 17 filters with the Advanced Camera for Surveys and Wide Field Camera 3 from the ultra-violet to near-infrared for the Hubble Frontier Fields and associated parallels. We have constructed homogeneous photometric catalogs for all six clusters and their parallels. To further expand these data catalogs, we have added ultra-deep Ks-band imaging at 2.2 μm from the Very Large Telescope HAWK-I and Keck-I MOSFIRE instruments. We also add post-cryogenic Spitzer imaging at 3.6 and 4.5 μm with the Infrared Array Camera (IRAC), as well as archival IRAC 5.8 and 8.0 μm imaging when available. We introduce the public release of the multi-wavelength (0.2–8 μm) photometric catalogs, and we describe the unique steps applied for the construction of these catalogs. Particular emphasis is given to the source detection band, the contamination of light from the bright cluster galaxies (bCGs), and intra-cluster light (ICL). In addition to the photometric catalogs, we provide catalogs of photometric redshifts and stellar population properties. The release also includes all the images used in the construction of the catalogs, including the combined models of bCGs and ICL, the residual images, segmentation maps, and more. These catalogs are a robust data set of the Hubble Frontier Fields and will be an important aid in designing future surveys, as well as planning follow-up programs with current and future observatories to answer key questions remaining about first light, reionization, the assembly of galaxies, and many more topics, most notably by identifying high-redshift sources to target.
DOMe: A deduplication optimization method for the NewSQL database backups
Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng
2017-01-01
Reducing duplicated data in database backups is an important application scenario for data deduplication technology. NewSQL is an emerging class of database system that is being used more and more widely. NewSQL systems need to improve data reliability by periodically backing up in-memory data, which produces a large amount of duplicated data. Traditional deduplication methods are not optimized for the NewSQL server system and cannot take full advantage of hardware resources to optimize deduplication performance. Recent research pointed out that future NewSQL servers will have thousands of CPU cores, large DRAM, and huge NVRAM. How to utilize these hardware resources to optimize deduplication performance is therefore an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To take advantage of the large number of CPU cores in the NewSQL server, DOMe parallelizes the deduplication method based on the fork-join framework. The fingerprint index, which is the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system and eliminates the fingerprint-index performance bottleneck of traditional deduplication methods. H-Store is used as a typical NewSQL database system to implement the DOMe method. DOMe is experimentally analyzed with two representative backup data sets. The experimental results show that: 1) DOMe can reduce the duplicated NewSQL backup data; 2) DOMe significantly improves deduplication performance by parallelizing the CDC algorithm, achieving a speedup of up to 18 against a theoretical server speedup ratio of 20.8; 3) DOMe improves deduplication throughput by 1.5 times through the pure in-memory index optimization. PMID:29049307
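The two ideas the abstract combines, forking the chunk-and-fingerprint work across streams and keeping the fingerprint index as a pure in-memory hash table, can be sketched in a few lines. This is a toy illustration, not the paper's H-Store implementation: the chunking rule, hash choice, and thread pool are assumptions.

```python
# Toy sketch of parallel deduplication with an in-memory fingerprint index.
# Content-defined chunking (CDC) boundaries, SHA-1 fingerprints, and the
# thread pool are illustrative stand-ins for DOMe's actual components.
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_MASK = 0x3F  # toy value: cut points occur where a rolling hash hits 0

def chunks(data: bytes):
    """Content-defined chunking: cut where a rolling hash matches the mask."""
    start, acc = 0, 0
    for i, b in enumerate(data):
        acc = (acc * 31 + b) & 0xFFFFFFFF
        if (acc & CHUNK_MASK) == 0 and i > start:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def dedup(streams):
    index = {}  # fingerprint -> chunk: the pure in-memory hash table
    with ThreadPoolExecutor() as pool:
        # Fork: chunk and fingerprint each backup stream in parallel.
        results = list(pool.map(
            lambda data: [(hashlib.sha1(c).hexdigest(), c) for c in chunks(data)],
            streams))
    # Join: merge fingerprints into the shared index serially.
    stored = 0
    for pairs in results:
        for fp, c in pairs:
            if fp not in index:
                index[fp] = c
                stored += len(c)
    return stored, index
```

Identical regions across periodic backups hash to the same fingerprint, so only the first copy is stored; the serial merge keeps the shared index free of races in this sketch.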
Data archiving for animal cognition research: report of an NIMH workshop.
Kurtzman, Howard S; Church, Russell M; Crystal, Jonathon D
2002-11-01
In July 2001, the National Institute of Mental Health sponsored a workshop titled "Data Archiving for Animal Cognition Research." Participants included scientists as well as experts in archiving, publishing, policy, and law. As is described in this report, the workshop resulted in a set of conclusions and recommendations concerning (A) the impact of data archiving on research, (B) how to incorporate data archiving into research practice, (C) contents of data archives, (D) technical and archival standards, and (E) organizational, financing, and policy issues. The animal cognition research community is encouraged to begin now to establish archives, deposit data and related materials, and make use of archived materials in new scientific projects.
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system.
Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
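The scatter/gather pattern the abstract describes (a job subdivided into independent tasks, dispatched to workers, partial results merged) can be sketched generically. Names are illustrative, and local threads stand in for the networked worker nodes that NRM and JPPF actually manage.

```python
# Generic fork-join sketch of the NRM/JPPF execution model: split a job
# into independent tasks, scatter them to a pool, merge the partials.
from concurrent.futures import ThreadPoolExecutor

def task(block):
    """One independent unit of work, e.g. correlating one batch of waveforms."""
    return sum(x * x for x in block)

def run_job(data, n_tasks=4):
    size = (len(data) + n_tasks - 1) // n_tasks            # block length
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor() as pool:                     # the "cluster"
        partials = list(pool.map(task, blocks))            # scatter
    return sum(partials)                                   # gather/merge
```

The speedup comes entirely from the tasks being independent; anything parallelizable this way can be handed to such a framework regardless of domain, which is the property the abstract emphasizes.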
Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture
NASA Astrophysics Data System (ADS)
Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel
2003-11-01
Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.
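The scheduler the architecture relies on, which prioritizes real-time feed tasks over batch work, can be sketched with a priority queue. Class and task names here are illustrative, not the paper's actual components.

```python
# Minimal priority scheduler sketch: lower number = higher priority,
# ties served in FIFO order via a monotonically increasing sequence.
import heapq

class Scheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0          # tie-breaker: FIFO within equal priority

    def submit(self, priority, task):
        heapq.heappush(self._queue, (priority, self._seq, task))
        self._seq += 1

    def next_task(self):
        """Pop the highest-priority pending task, or None when idle."""
        return heapq.heappop(self._queue)[2] if self._queue else None
```

A live-feed acquisition submitted at priority 0 is dispatched before batch indexing queued earlier at priority 1, which is the load-prioritization behavior the abstract attributes to its scheduler.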
Gudziol, H; Gottschall, R; Luther, E
2017-01-01
Introduction: The history of the first operating microscopes from Zeiss is often confusing, imprecise, and partly contradictory because of the parallel development at Zeiss Jena (East Germany) and Zeiss Oberkochen (West Germany). Methods: To investigate the early beginnings of the construction of operating microscopes, documents from the Carl Zeiss Archive and the Optical Museum in Jena, the memoirs of Prof. Dr. Rosemarie Albrecht, and the relevant publications were used. Results: The development of the first Jena operating microscope was initiated in 1949 by the ENT physician Prof. Dr. Rosemarie Albrecht in the Soviet occupation zone. The first prototype was tested at the University ENT Clinic, Jena, from the summer of 1950. At the Leipzig Trade Fair in autumn 1952, VEB Optik Carl Zeiss Jena presented the first operating microscope nationally and internationally. Series production began in 1953. The first operating microscope from Zeiss Oberkochen was primarily developed by technical designers (H. Littmann) as a colposcope, but no documents related to a cooperation with gynecologists could be found in the Carl Zeiss Archive. In 1953 this operating microscope (OPMI 1) was introduced to the public and its series production started. From that date on, it was adapted by the otologist Prof. Dr. Horst Ludwig Wullstein to the needs of otorhinolaryngology. Conclusion: The first Zeiss operating microscope came from Jena. The operating microscope from Zeiss Oberkochen had some advantages for surgeons and ultimately won the competition.
A Correction for IUE UV Flux Distributions from Comparisons with CALSPEC
NASA Astrophysics Data System (ADS)
Bohlin, Ralph C.; Bianchi, Luciana
2018-04-01
A collection of spectral energy distributions (SEDs) is available in the Hubble Space Telescope (HST) CALSPEC database that is based on calculated model atmospheres for pure hydrogen white dwarfs (WDs). A much larger set (∼100,000) of UV SEDs covering the range (1150–3350 Å) with somewhat lower quality are available in the IUE database. IUE low-dispersion flux distributions are compared with CALSPEC to provide a correction that places IUE fluxes on the CALSPEC scale. While IUE observations are repeatable to only 4%–10% in regions of good sensitivity, the average flux corrections have a precision of 2%–3%. Our re-calibration places the IUE flux scale on the current UV reference standard and is relevant for any project based on IUE archival data, including our planned comparison of GALEX to the corrected IUE fluxes. IUE SEDs may be used to plan observations and cross-calibrate data from future missions, so the IUE flux calibration must be consistent with HST instrumental calibrations to the best possible precision.
A geometric measure of dark energy with pairs of galaxies.
Marinoni, Christian; Buzzi, Adeline
2010-11-25
Observations indicate that the expansion of the Universe is accelerating, which is attributed to a ‘dark energy’ component that opposes gravity. There is a purely geometric test of the expansion of the Universe (the Alcock–Paczynski test), which would provide an independent way of investigating the abundance (Ω(X)) and equation of state (W(X)) of dark energy. It is based on an analysis of the geometrical distortions expected from comparing the real-space and redshift-space shape of distant cosmic structures, but it has proved difficult to implement. Here we report an analysis of the symmetry properties of distant pairs of galaxies from archival data. This allows us to determine that the Universe is flat. By alternately fixing its spatial geometry at Ω(k) ≡ 0 and the dark energy equation-of-state parameter at W(X) ≡ -1, and using the results of baryon acoustic oscillations, we can establish at the 68.3% confidence level that -0.85 > W(X) > -1.12 and 0.60 < Ω(X) < 0.80.
Fossil black smoker yields oxygen isotopic composition of Neoproterozoic seawater.
Hodel, F; Macouin, M; Trindade, R I F; Triantafyllou, A; Ganne, J; Chavagnac, V; Berger, J; Rospabé, M; Destrigneville, C; Carlut, J; Ennih, N; Agrinier, P
2018-04-13
The evolution of the seawater oxygen isotopic composition (δ18O) through geological time remains controversial. Yet the past seawater δ18O is key to assessing past seawater temperatures, providing insights into past climate change and the evolution of life. Here we provide a new and unprecedentedly precise δ18O value of -1.33 ± 0.98‰ for Neoproterozoic bottom seawater, supporting a constant oxygen isotope composition through time. We demonstrate that the Aït Ahmane ultramafic unit of the ca. 760 Ma Bou Azzer ophiolite (Morocco) hosts a fossil black smoker-type hydrothermal system. In this system we analyzed an untapped archive of the ocean's oxygen isotopic composition, consisting of pure magnetite veins precipitated directly from a Neoproterozoic seawater-derived fluid. Our results suggest that, while seawater δ18O and submarine hydrothermal processes were likely similar to those of the present day, Neoproterozoic oceans were 15-30 °C warmer on the eve of the Sturtian glaciation and the major diversification of life that followed.
The DAFT/FADA Survey status and latest results
NASA Astrophysics Data System (ADS)
Guennou, L.
2011-12-01
We present here the latest results obtained by the American-French collaboration called the Dark energy American French Team/French American DArk energy Team (DAFT/FADA). The goal of the DAFT/FADA collaboration is to carry out a weak lensing tomography survey of z = 0.4-0.9 rich clusters of galaxies. Unlike supernovae or other methods such as cluster-of-galaxies counts, weak lensing tomography is purely based on geometry and does not depend on knowledge of the physics of the objects used as distance indicators. In addition, the reason for analyzing observations in the direction of clusters is that the shear signal is enhanced by a factor of about 10 over the field. Our work will eventually contain results obtained on 91 rich clusters from the HST archive, combined with ground-based observations to obtain photometric redshifts (photo-zs). This combination of photo-zs and weak lensing tomography will enable us to constrain the equation of state of dark energy. We present the latest results obtained so far in this study.
Cluster-lensing: A Python Package for Galaxy Clusters and Miscentering
NASA Astrophysics Data System (ADS)
Ford, Jes; VanderPlas, Jake
2016-12-01
We describe a new open source package for calculating properties of galaxy clusters, including Navarro, Frenk, and White halo profiles with and without the effects of cluster miscentering. This pure-Python package, cluster-lensing, provides well-documented and easy-to-use classes and functions for calculating cluster scaling relations, including mass-richness and mass-concentration relations from the literature, as well as the surface mass density Σ(R) and differential surface mass density ΔΣ(R) profiles, probed by weak lensing magnification and shear. Galaxy cluster miscentering is especially a concern for stacked weak lensing shear studies of galaxy clusters, where offsets between the assumed and the true underlying matter distribution can lead to a significant bias in the mass estimates if not accounted for. This software has been developed and released in a public GitHub repository, and is licensed under the permissive MIT license. The cluster-lensing package is archived on Zenodo. Full documentation, source code, and installation instructions are available at http://jesford.github.io/cluster-lensing/.
Asymptotic-preserving Lagrangian approach for modeling anisotropic transport in magnetized plasmas
NASA Astrophysics Data System (ADS)
Chacon, Luis; Del-Castillo-Negrete, Diego
2011-10-01
Modeling electron transport in magnetized plasmas is extremely challenging due to the extreme anisotropy introduced by the presence of the magnetic field (χ∥/χ⊥ ~ 10¹⁰ in fusion plasmas). Recently, a novel Lagrangian method has been proposed to solve the local and non-local purely parallel transport equation in general 3D magnetic fields. The approach avoids numerical pollution (in fact, it respects transport barriers, i.e., flux surfaces, exactly by construction), is inherently positivity-preserving, and is algorithmically scalable (i.e., work per degree of freedom is grid-independent). In this poster, we discuss the extension of the Lagrangian approach to include perpendicular transport and sources. We present an asymptotic-preserving numerical formulation that ensures a consistent temporal and spatial discretization for arbitrary χ∥/χ⊥ ratios. This is important because parallel and perpendicular transport terms in the transport equation may become comparable in some regions of the plasma (e.g., at incipient islands), while remaining disparate elsewhere. We will demonstrate the potential of the approach with various challenging configurations, including the case of transport across a magnetic island in cylindrical geometry. D. del-Castillo-Negrete, L. Chacón, PRL 106, 195004 (2011); DPP11 invited talk by del-Castillo-Negrete.
Wesson, R.L.
1988-01-01
Preliminary measurements of the stress orientation at a depth of 2 km are interpreted to indicate that the regional orientation of maximum compression is normal to the fault, and are taken as evidence for a very weak fault. The orientation expected from plate tectonic arguments is about 66° NE of the strike of the fault. Geodetic data indicate that the orientation of the maximum compressive strain rate is about 43° NE of the strike of the fault, and show nearly pure right-lateral shear acting parallel to the fault. These apparent conflicts in the inferred orientation of the axis of maximum compression may be explained in part by a model in which the fault zone is locked over a depth interval in the range of 2-5 to 15 km but is very weak above and below that interval. This solution requires, however, a few mm/yr of creep at the surface on the San Andreas or nearby sub-parallel faults (such as the San Jacinto), which has not yet been observed, or a shallow zone of distributed deformation near the faults.
BULGELESS GIANT GALAXIES CHALLENGE OUR PICTURE OF GALAXY FORMATION BY HIERARCHICAL CLUSTERING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kormendy, John; Cornell, Mark E.; Drory, Niv
2010-11-01
To better understand the prevalence of bulgeless galaxies in the nearby field, we dissect giant Sc-Scd galaxies with Hubble Space Telescope (HST) photometry and Hobby-Eberly Telescope (HET) spectroscopy. We use the HET High Resolution Spectrograph (resolution R ≡ λ/FWHM ≈ 15,000) to measure stellar velocity dispersions in the nuclear star clusters and (pseudo)bulges of the pure-disk galaxies M 33, M 101, NGC 3338, NGC 3810, NGC 6503, and NGC 6946. The dispersions range from 20 ± 1 km s⁻¹ in the nucleus of M 33 to 78 ± 2 km s⁻¹ in the pseudobulge of NGC 3338. We use HST archive images to measure the brightness profiles of the nuclei and (pseudo)bulges in M 101, NGC 6503, and NGC 6946 and hence to estimate their masses. The results imply small mass-to-light ratios consistent with young stellar populations. These observations lead to two conclusions. (1) Upper limits on the masses of any supermassive black holes are M• ≲ (2.6 ± 0.5) × 10⁶ M⊙ in M 101 and M• ≲ (2.0 ± 0.6) × 10⁶ M⊙ in NGC 6503. (2) We show that the above galaxies contain only tiny pseudobulges that make up ≲3% of the stellar mass. This provides the strongest constraints to date on the lack of classical bulges in the biggest pure-disk galaxies. We inventory the galaxies in a sphere of radius 8 Mpc centered on our Galaxy to see whether giant, pure-disk galaxies are common or rare. We find that at least 11 of 19 galaxies with V_circ > 150 km s⁻¹, including M 101, NGC 6946, IC 342, and our Galaxy, show no evidence for a classical bulge. Four may contain small classical bulges that contribute 5%-12% of the light of the galaxy. Only four of the 19 giant galaxies are ellipticals or have classical bulges that contribute ~1/3 of the galaxy light. We conclude that pure-disk galaxies are far from rare.
It is hard to understand how bulgeless galaxies could form as the quiescent tail of a distribution of merger histories. Recognition of pseudobulges makes the biggest problem with cold dark matter galaxy formation more acute: How can hierarchical clustering make so many giant, pure-disk galaxies with no evidence for merger-built bulges? Finally, we emphasize that this problem is a strong function of environment: the Virgo cluster is not a puzzle, because more than 2/3 of its stellar mass is in merger remnants.
Earth observation archive activities at DRA Farnborough
NASA Technical Reports Server (NTRS)
Palmer, M. D.; Williams, J. M.
1993-01-01
Space Sector, Defence Research Agency (DRA), Farnborough have been actively involved in the acquisition and processing of Earth Observation data for over 15 years. During that time an archive of over 20,000 items has been built up. This paper describes the major archive activities, including: operation and maintenance of the main DRA Archive, the development of a prototype Optical Disc Archive System (ODAS), the catalog systems in use at DRA, the UK Processing and Archive Facility for ERS-1 data, and future plans for archiving activities.
Marginal instability threshold condition of the aperiodic ordinary mode in equal-mass plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vafin, S.; Schlickeiser, R.; Yoon, P. H.
The purely growing ordinary (O) mode instability for counter-streaming bi-Maxwellian plasma particle distribution functions has recently received renewed attention due to its importance for the solar wind plasma. Here, the analytical marginal instability condition is derived for magnetized plasmas consisting of equal-mass charged particles, distributed in counter-streams with equal temperatures. The equal-mass composition assumption enormously facilitates the theoretical analysis due to the equality of the values of the electron and positron (positive and negative ion) plasma and gyrofrequencies. The existence of a new instability domain of the O-mode at small plasma beta values is confirmed, when the parallel counter-stream free energy exceeds the perpendicular bi-Maxwellian free energy.
Chameleon's behavior of modulable nonlinear electrical transmission line
NASA Astrophysics Data System (ADS)
Togueu Motcheyo, A. B.; Tchinang Tchameu, J. D.; Fewo, S. I.; Tchawoua, C.; Kofane, T. C.
2017-12-01
We show that a modulable discrete nonlinear transmission line can adopt a chameleon's behavior in the sense that, without changing its apparent structure, it can become alternately a purely right-handed or left-handed line, which differs from a composite line. Using a quasi-discrete approximation, we derive a nonlinear Schrödinger equation that accurately predicts the carrier frequency threshold from the linear analysis. It appears that increasing the linear capacitance in parallel in the series branch increases the selectivity of the filter in the right-handed region, while it widens the pass band in the left-handed region. Numerical simulations of the nonlinear model confirm the forward wave in the right-handed line and the backward wave in the left-handed one.
Goulding, F S; Stone, Y
1970-10-16
The past decade has seen the rapid development and exploitation of one of the most significant tools of nuclear physics, the semiconductor radiation detector. Applications of the device to the analysis of materials promise to be one of the major contributions of nuclear research to technology, and may even assist in some aspects of our environmental problems. In parallel with the development of these applications, further developments in detectors for nuclear research are taking place: the use of very thin detectors for heavy-ion identification, position-sensitive detectors for nuclear-reaction studies, and very pure germanium for making more satisfactory detectors for many applications suggest major future contributions to physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkham, R.; Siddons, D.; Dunn, P.A.
2010-06-23
The Maia detector system is engineered for energy-dispersive x-ray fluorescence spectroscopy and elemental imaging at photon rates exceeding 10⁷/s, integrated scanning of samples with pixel transit times as small as 50 μs, high-definition images of 10⁸ pixels, and real-time processing of detected events for spectral deconvolution and online display of pure elemental images. The system, developed by CSIRO and BNL, combines a planar silicon 384-element detector array, application-specific integrated circuits for pulse shaping, peak detection, and sampling, and optical data transmission to an FPGA-based pipelined, parallel processor. This paper describes the system and the underpinning engineering solutions.
Preferred crystallographic orientation in the ice I → II transformation and the flow of ice II
Bennett, K.; Wenk, H.-R.; Durham, W.B.; Stern, L.A.; Kirby, S.H.
1997-01-01
The preferred crystallographic orientation developed during the ice I → II transformation and during the plastic flow of ice II was measured in polycrystalline deuterium oxide (D2O) specimens using low-temperature neutron diffraction. Samples partially transformed from ice I to ice II under a non-hydrostatic stress developed a preferred crystallographic orientation in the ice II. Samples of pure ice II, transformed from ice I under a hydrostatic stress and then compressed axially, developed a strong preferred orientation of compression axes parallel to (1010). A match to the observed preferred orientation using the viscoplastic self-consistent theory was obtained only when (1010)[0001] was taken as the predominant slip system in ice II.
3D Global Fluid Simulations of Turbulence in LAPD
NASA Astrophysics Data System (ADS)
Rogers, Barrett; Ricci, Paolo; Li, Bo
2009-05-01
We present 3D global fluid simulations of the UCLA upgraded Large Plasma Device (LAPD). This device confines an 18-m-long, cylindrically symmetric plasma with a uniform magnetic field. The plasma in the simulations is generated by density and temperature sources inside the computational domain, and sheath boundary conditions are applied at the ends of the plasma column. In 3D simulations of the entire plasma, we observe strong, rotating intermittent density and temperature fluctuations driven by resistive drift-wave turbulence with finite parallel wavenumbers. Analogous simulations carried out in the 2D limit (that is, assuming that the motions are purely interchange-like) display much weaker mode activity driven by a Kelvin-Helmholtz instability. The properties and scaling of the turbulence and transport will be discussed.
Novel surface diffusion characteristics for a robust pentacene derivative on Au(1 1 1) surfaces
NASA Astrophysics Data System (ADS)
Miller, Ryan A.; Larson, Amanda; Pohl, Karsten
2017-06-01
Molecular dynamics simulations have been performed in both the ab initio and classical mechanics frameworks of 5,6,7-trithiapentacene-13-one (TTPO) molecules on flat Au(1 1 1) surfaces. Results show new surface diffusion characteristics including a strong preference for the molecule to align its long axis parallel to the sixfold Au(1 1 1) symmetry directions and subsequently diffuse along these close-packed directions, and a calculated activation energy for diffusion of 0.142 eV, about four times larger than that for pure pentacene on Au. The temperature-dependent diffusion coefficients were calculated to help quantify the molecular mobility during the experimentally observed process of forming self-assembled monolayers on gold electrodes.
Design and implementation of scalable tape archiver
NASA Technical Reports Server (NTRS)
Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio
1996-01-01
In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high performance RISC microprocessor-based workstations, which are further being replaced by the commodity microprocessor used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with the failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market, however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. 
Background migration transfers cassettes between element archivers to redistribute frequently accessed cassettes, thus balancing the load of each archiver. Background migration occurs while the robotics are idle. Both migration algorithms are based on the access frequency and space utilization of each element archiver. Because these parameters are normalized according to the number of drives in each element archiver, it is possible to maintain high performance even if some tape drives fail. We found that foreground migration is efficient at reducing access response time. In addition, background migration makes it possible to track transitions in spatial access locality quickly.
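The load-balancing idea described above can be sketched in a few lines: pick the element archiver whose access frequency per live drive is highest and move one hot cassette toward the least loaded one. This is a minimal illustrative sketch, not the authors' implementation; the data layout and the single-cassette-per-idle-cycle policy are assumptions.

```python
# Illustrative sketch of frequency-based cassette migration across element
# archivers, with load normalized by the number of live tape drives.

def normalized_load(archiver):
    """Access frequency per surviving drive; infinite if all drives failed."""
    if archiver["live_drives"] == 0:
        return float("inf")
    return archiver["access_freq"] / archiver["live_drives"]

def background_migration(archivers):
    """While the robotics are idle, move one hot cassette from the most
    loaded element archiver to the least loaded one."""
    src = max(archivers, key=normalized_load)
    dst = min(archivers, key=normalized_load)
    if src is dst or not src["hot_cassettes"]:
        return None
    cassette = src["hot_cassettes"].pop()
    dst["hot_cassettes"].append(cassette)
    src["access_freq"] -= 1   # crude accounting: demand follows the cassette
    dst["access_freq"] += 1
    return cassette

archivers = [
    {"id": 0, "access_freq": 9, "live_drives": 1, "hot_cassettes": ["t1", "t2"]},
    {"id": 1, "access_freq": 2, "live_drives": 2, "hot_cassettes": []},
]
moved = background_migration(archivers)  # moves a hot tape to archiver 1
```

Repeating this step whenever the robotics are idle drives the per-drive loads toward each other, which is the stated goal of the background algorithm.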
Lessons learned from planetary science archiving
NASA Astrophysics Data System (ADS)
Zender, J.; Grayzeck, E.
2006-01-01
The need for scientific archiving of past, current, and future planetary scientific missions, laboratory data, and modeling efforts is indisputable. To quote the words of G. Santayana carved over the entrance of the US Archive in Washington DC: “Those who cannot remember the past are doomed to repeat it.” The design, implementation, maintenance, and validation of planetary science archives are, however, disputed by the involved parties, and the inclusion of the archives into the scientific heritage is problematic. For example, there is an imbalance between space agency requirements and institutional and national interests. The disparity between long-term archive requirements and immediate data analysis requests is significant. The discrepancy between a space mission's archive budget and the effort required to design and build the data archive is large. An imbalance also exists between new instrument development and existing, well-proven archive standards. The authors present their view on the problems and risk areas in the archiving concepts based on their experience acquired within NASA's Planetary Data System (PDS) and ESA's Planetary Science Archive (PSA). Individual risks and potential problem areas are discussed based on a model derived from a system analysis done upfront. The major risk for a planetary mission science archive is seen in the combination of minimal involvement by Mission Scientists and inadequate funding. The authors outline how the risks can be reduced. The paper ends with the authors' view on future planetary archive implementations, including the archive interoperability aspect.
Distributed data mining on grids: services, tools, and applications.
Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo
2004-12-01
Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.
A video event trigger for high frame rate, high resolution video technology
NASA Astrophysics Data System (ADS)
Williams, Glenn L.
1991-12-01
When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
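The pretrigger/post-trigger idea can be illustrated with a minimal sketch: a short ring buffer holds recent frames of the static scene, and when a frame-to-frame change exceeds a threshold, the buffered history plus a fixed number of subsequent frames are archived. This is an assumption-laden toy (scalar "frames", a simple difference test), not the paper's fuzzy-logic state machine.

```python
# Toy sketch of pretrigger/post-trigger event capture: keep only a short
# history of the static scene; on a large change, archive that history,
# the event frame, and a fixed number of post-trigger frames.
from collections import deque

def run_trigger(frames, threshold=10, pre=2, post=2):
    """Return the frames archived around the first 'event' (large change)."""
    ring = deque(maxlen=pre)          # pretrigger storage
    archived, prev, remaining = [], None, None
    for f in frames:
        if remaining is not None:     # post-trigger capture in progress
            archived.append(f)
            remaining -= 1
            if remaining == 0:
                break
        elif prev is not None and abs(f - prev) > threshold:
            archived = list(ring) + [f]   # flush pretrigger buffer + event frame
            remaining = post
        else:
            ring.append(f)            # static scene: keep only a short history
        prev = f
    return archived

event = run_trigger([0, 1, 2, 50, 51, 52, 53])
```

The same structure applies whatever the change detector is; only the `abs(f - prev) > threshold` test would be replaced by the hardware's fuzzy comparison of image features.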
Historical aspects of human presence in Space
NASA Astrophysics Data System (ADS)
Harsch, V.
2007-02-01
Purpose: This paper presents the development of human presence in Space from its beginnings. Study hypotheses were based on historical findings on scientific, medical, cultural, and political aspects of manned Space flight, reflecting the different attitudes of Space-minded nations and organizations. Impacts of aerospace medicine on the advances of biomedical sciences are touched upon, the historical development of aviation and Space medical achievements is described briefly, and visions for future developments are given. Methods: An overview was gained by literature study, archival research, and oral history taking. Results: Aviation Medicine evolved in parallel with Man's ability to fly. War-triggered advancements in aviation brought mankind to the edge of space-equivalent conditions within a few decades of the first motor-flight, which took place in the USA in 1903 [V. Harsch, Aerospace medicine in Germany: from the very beginnings, Aviation and Space Environment Medicine 71 (2000) 447-450 [1
BOREAS RSS-20 POLDER Helicopter-Mounted Measurements of Surface BRDF
NASA Technical Reports Server (NTRS)
Leroy, Marc; Hall, Forrest G. (Editor); Nickerson, Jaime (Editor); Smith, David E. (Technical Monitor)
2000-01-01
The BOREAS HYD-9 team collected data on precipitation and streamflow over portions of the NSA and SSA. This data set contains Cartesian maps of rain accumulation for 1-hour and daily periods during the summer of 1994 over the SSA only (not the full view of the radar). A parallel set of 1-hour maps for the whole radar view has been prepared and is available upon request from the HYD-09 personnel. An incidental benefit of the areal selection was the elimination of some of the less accurate data, because for various reasons the radar rain estimates degrade considerably outside a range of about 100 km. The data are available in tabular ASCII files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
Handling of huge multispectral image data volumes from a spectral hole burning device (SHBD)
NASA Astrophysics Data System (ADS)
Graff, Werner; Rosselet, Armel C.; Wild, Urs P.; Gschwind, Rudolf; Keller, Christoph U.
1995-06-01
We use chlorin-doped polymer films at low temperatures as the primary imaging detector. Based on the principles of persistent spectral hole burning, this system is capable of storing spatial and spectral information simultaneously in one exposure with extremely high resolution. The sun as an extended light source has been imaged onto the film. The information recorded amounts to tens of GBytes. This data volume is read out by scanning the frequency of a tunable dye laser and reading the images with a digital CCD camera. For acquisition, archival, processing, and visualization, we use MUSIC (MUlti processor System with Intelligent Communication), a single instruction multiple data parallel processor system equipped with the necessary I/O facilities. The huge amount of data requires the development of sophisticated algorithms to efficiently calibrate the data and to extract useful and new information for solar physics.
A video event trigger for high frame rate, high resolution video technology
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1991-01-01
When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term or short term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pretrigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahler, S. W., E-mail: stephen.kahler@kirtland.af.mil
Prompt onsets and short rise times to peak intensities Ip have been noted in a few solar energetic (E > 10 MeV) particle (SEP) events from far behind (≥25°) the west limb. We discuss 15 archival and recent examples of these prompt events, giving their source longitudes, onset and rise times, and associated coronal mass ejection (CME) speeds. Their timescales and CME speeds are not exceptional in comparison with a larger set of SEP events from behind the west limb. A further statistical comparison of observed timescales of SEP events from behind the west limb with events similarly poorly magnetically connected to the eastern hemisphere (EH) shows the longer timescales of the latter group. We interpret this result in terms of a difference between SEP production at parallel shocks on the eastern flanks of western backside events and at perpendicular shocks on the western flanks of EH events.
Origin of Lamellar Magnetism (Invited)
NASA Astrophysics Data System (ADS)
McEnroe, S. A.; Robinson, P.; Fabian, K.; Harrison, R. J.
2010-12-01
The theory of lamellar magnetism arose through the search for the origin of the strong and extremely stable remanent magnetization (MDF > 100 mT) recorded in igneous and metamorphic rocks containing ilmenite with exsolution lamellae of hematite, or hematite with exsolution lamellae of ilmenite. Properties of rocks producing major remanent magnetic anomalies could not be explained by PM ilmenite or CAF hematite alone. Monte Carlo modeling of chemical and magnetic interactions in such intergrowths at high temperature indicated the presence of "contact layers" one cation layer thick at (001) interfaces of the two phases. Contact layers, with chemical composition different from layers in the adjacent phases, provide partial relief of ionic charge imbalance at interfaces, and can be common, not only in magnetic minerals. In rhombohedral Fe-Ti oxides, the magnetic moments of 2 Fe2+Fe3+ contact layers (2 x 4.5µB) on both sides of a lamella are balanced by the unbalanced magnetic moment of 1 Fe3+ hematite layer (1 x 5µB), to produce a net uncompensated ferrimagnetic "lamellar moment" of 4µB. Bulk lamellar moment is not proportional to the amount of magnetic oxide, but to the quantity of magnetically "in-phase" lamellar interfaces, which increases with greater abundance and smaller thickness of lamellae, extending down to 1-2 nm. The proportion of "magnetically in-phase" lamellae relates to the orientation of (001) interfaces with respect to the magnetizing field during exsolution, and is hence highest in samples with a strong lattice-preferred orientation of (001) parallel to the field during exsolution. The nature of contact layers, ~0.23 nm thick, with the Fe2+Fe3+ charge ordering postulated by the Monte Carlo models, was confirmed by bond-valence and DFT calculations, and their presence was confirmed by Mössbauer measurements.
Hysteresis experiments on hematite with nanoscale ilmenite at temperatures below 57 K, where ilmenite becomes AF, demonstrate magnetic exchange bias produced by strong coupling across phase interfaces. Interface coupling, with nominal magnetic moments perpendicular and parallel to (001), is facilitated by magnetic moments in hematite near interfaces that are a few degrees out of the (001) plane, proved by neutron diffraction experiments. When a ~b.y.-old sample, with a highly stable NRM, is ZF cooled below 57 K, it shows bimodal exchange bias, indicating the presence of two lamellar populations that are magnetically "out-of-phase", and incidentally proving the existence of lamellar magnetism. Lamellar magnetism may enhance the strength and stability of remanence in samples with magnetite or maghemite lamellae in pure hematite, or magnetite lamellae in ilmenite, where coarse magnetite or maghemite alone would be multi-domain. Here the "contact layers" should be a complex hybrid of 2/3-filled rhombohedral layers parallel to (001) and 3/4-filled cubic octahedral layers parallel to (111), with a common octahedral orientation confirmed by TEM observations. Here, because of different layer populations, the calculated lamellar moment may be higher than in the purely rhombohedral example.
(Per)Forming Archival Research Methodologies
ERIC Educational Resources Information Center
Gaillet, Lynee Lewis
2012-01-01
This article raises multiple issues associated with archival research methodologies and methods. Based on a survey of recent scholarship and interviews with experienced archival researchers, this overview of the current status of archival research both complicates traditional conceptions of archival investigation and encourages scholars to adopt…
Bova, G Steven; Eltoum, Isam A; Kiernan, John A; Siegal, Gene P; Frost, Andra R; Best, Carolyn J M; Gillespie, John W; Emmert-Buck, Michael R
2005-01-01
Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of pancreatic malignancy and other biological phenomena. This chapter reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification. High-quality tissue microdissection does not necessarily mean high-quality samples to analyze. The quality of biomaterials obtained for analysis is highly dependent on steps upstream and downstream from tissue microdissection. We provide protocols for each of these steps, and encourage you to improve upon these. It is worth the effort of every laboratory to optimize and document its technique at each stage of the process, and we provide a starting point for those willing to spend the time to optimize. In our view, poor documentation of tissue and cell type of origin and the use of nonoptimized protocols are a source of inefficiency in current life science research. Even incremental improvement in this area will increase productivity significantly.
Hot Molecular Gas in the Circumnuclear Disk
NASA Astrophysics Data System (ADS)
Mills, Elisabeth A. C.; Togi, Aditya; Kaufman, Michael
2017-12-01
We present an analysis of archival Infrared Space Observatory observations of H2 for three 14″ × 20″ pointings in the central 3 pc of the Galaxy: toward the southwest region and northeast region of the Galactic center circumnuclear disk (CND), and toward the supermassive black hole Sgr A*. We detect pure rotational lines from 0-0 S(0) to S(13), as well as a number of rovibrationally excited transitions. Using the pure rotational lines, we perform both fits to a discrete temperature distribution (measuring up to three temperature components with T = 500-600 K, T = 1250-1350 K, and T > 2600 K) and fits to a continuous temperature distribution, assuming a power-law distribution of temperatures. We measure power-law indices of n = 3.22 for the northeast region and n = 2.83 for the southwest region. These indices are lower than those measured for other galaxies or other Galactic center clouds, indicating a larger fraction of gas at high temperatures. We also test whether extrapolating this temperature distribution can yield a reasonable estimate of the total molecular mass, as has been recently done for H2 observations in other galaxies. Extrapolating to a cutoff temperature of 50 K in the southwest (northeast) region, we would measure 32% (140%) of the total molecular gas mass inferred from the dust emission, and 26% (125%) of the total molecular gas mass inferred from the CO emission. Ultimately, the inconsistency of the masses inferred in this way suggests that a simple application of this method cannot yield a reliable estimate of the mass of the CND.
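The extrapolation step described above can be made concrete: if the mass per unit temperature follows a power law, dM/dT ∝ T^(-n), then mass fitted over a warm temperature range can be analytically extended down to a cold cutoff. The sketch below is illustrative only; the fitted mass, the upper bound, and the 100 K lower fit bound are placeholder assumptions, with only the index n = 2.83 and the 50 K cutoff taken from the abstract.

```python
# Sketch of extrapolating a power-law temperature distribution
# dM/dT = A * T**(-n) from a fitted warm range down to a cold cutoff.

def extrapolated_mass(m_fit, n, t_lo_fit, t_hi, t_cut):
    """Total mass between t_cut and t_hi, given mass m_fit measured
    between t_lo_fit and t_hi under dM/dT = A * T**(-n), n != 1."""
    integral = lambda a, b: (b**(1 - n) - a**(1 - n)) / (1 - n)
    A = m_fit / integral(t_lo_fit, t_hi)   # normalize to the fitted mass
    return A * integral(t_cut, t_hi)

# Placeholder example: unit mass fitted above 100 K, index n = 2.83
# (southwest region), extrapolated down to the 50 K cutoff.
m_total = extrapolated_mass(m_fit=1.0, n=2.83, t_lo_fit=100.0, t_hi=3000.0, t_cut=50.0)
```

Because n > 1, the integral is dominated by the cold end, so the inferred total mass is very sensitive to the choice of cutoff temperature, which is one reason the extrapolated masses disagree with the dust- and CO-based estimates.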
The UCL NASA 3D-RPIF Imaging Centre - a status report.
NASA Astrophysics Data System (ADS)
Muller, J.-P.; Grindrod, P.
2013-09-01
The NASA RPIF (Regional Planetary Imaging Facility) network of 9 US and 8 international centres was originally set up in 1977 to "maintain photographic and digital data as well as mission documentation and cartographic data. Each facility's general holding contains images and maps of planets and their satellites taken by solar system exploration spacecraft. These planetary image facilities are open to the public. The facilities are primarily reference centers for browsing, studying, and selecting lunar and planetary photographic and cartographic materials. Experienced staff can assist scientists, educators, students, media, and the public in ordering materials for their own use." In parallel, the NASA Planetary Data System (PDS) and ESA Planetary Science Archive (PSA) were set up to distribute digital data, initially on media such as CD-ROM and DVD but now entirely online. The UK NASA RPIF was the first RPIF to be established outside of the US, in 1980. In [1], the 3D-RPIF is described, and some example products derived using this equipment are illustrated here. In parallel, at MSSL a large linux cluster and associated RAID-based system has been created to act as a mirror PDS Imaging node, so that huge volumes of rover imagery (from MER & MSL to begin with) and very high resolution (large-size) data are available to users of the RPIF and a variety of EU-FP7 projects based at UCL.
Simunek, M; Hossfeld, U; Wissemann, V
2011-11-01
The 'rediscovery' of Mendel's laws in 1900 is seen as a turning point in modern research on heredity and genetics. In the first half of the 20th century it was generally held that the 'rediscovery' was made several times, independently, and in a parallel fashion by three European botanists (Carl Correns, Hugo de Vries and Erich von Tschermak-Seysenegg). Since the 1950s, however, serious questions have arisen concerning both the chronology and the specific conceptual contribution of the scientists involved. Not only the independence but also the parallelism was analysed in the context of the individual research programmes of all three of these scholars. The youngest of them, the Austrian botanist Erich von Tschermak-Seysenegg, was excluded from the rank of 'rediscoverers'. It is the aim of this paper to use new archival evidence and add important facts to both the chronology and the conceptual framework of Erich von Tschermak-Seysenegg's work. An entirely new aspect is added by identifying his older brother, the physiologist Armin von Tschermak-Seysenegg (1870-1952), as a significant spiritus movens of the events of 1900 and 1901. A selected part of their correspondence, covering the period from 13 March 1898 until 19 November 1901, is made available in transcriptions. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
76 FR 15349 - Advisory Committee on the Electronic Records Archives (ACERA); Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-21
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA); Meeting AGENCY: National Archives and Records Administration. ACTION: Notice of Meeting. SUMMARY... Archives and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic...
Web Archiving for the Rest of Us: How to Collect and Manage Websites Using Free and Easy Software
ERIC Educational Resources Information Center
Dunn, Katharine; Szydlowski, Nick
2009-01-01
Large-scale projects such as the Internet Archive (www.archive.org) send out crawlers to gather snapshots of much of the web. This massive collection of archived websites may include content of interest to one's patrons. But if librarians want to control exactly when and what is archived, relying on someone else to do the archiving is not ideal.…
Interaction-component analysis of the hydration and urea effects on cytochrome c
NASA Astrophysics Data System (ADS)
Yamamori, Yu; Ishizuka, Ryosuke; Karino, Yasuhito; Sakuraba, Shun; Matubayasi, Nobuyuki
2016-02-01
Energetics was analyzed for cytochrome c in pure-water solvent and in a urea-water mixed solvent to elucidate the solvation effect in the structural variation of the protein. The solvation free energy was computed through all-atom molecular dynamics simulation combined with the solution theory in the energy representation, and its correlations were examined over sets of protein structures against the electrostatic and van der Waals components in the average interaction energy of the protein with the solvent and the excluded-volume component in the solvation free energy. It was observed in pure-water solvent that the solvation free energy varies in parallel to the electrostatic component with minor roles played by the van der Waals and excluded-volume components. The effect of urea on protein structure was then investigated in terms of the free-energy change upon transfer of the protein solute from pure-water solvent to the urea-water mixed solvent. The decomposition of the transfer free energy into the contributions from urea and water showed that the urea contribution is partially canceled by the water contribution and governs the total free energy of transfer. When correlated against the change in the solute-solvent interaction energy upon transfer and the corresponding changes in the electrostatic, van der Waals, and excluded-volume components, the transfer free energy exhibited strong correlations with the total change in the solute-solvent energy and its van der Waals component. The solute-solvent energy was decomposed into the contributions from the protein backbone and side chain, furthermore, and neither of the contributions was seen to be decisive in the correlation to the transfer free energy.
NASA Astrophysics Data System (ADS)
Kato, Y.; Takenaka, T.; Yano, K.; Kiriyama, R.; Kurisu, Y.; Nozaki, D.; Muramatsu, M.; Kitagawa, A.; Uchida, T.; Yoshida, Y.; Sato, F.; Iida, T.
2012-11-01
Multiply charged ions of prospective interest are produced from solid pure material in an electron cyclotron resonance ion source (ECRIS). Recently, a pure iron source has also been required for the production of caged iron ions in fullerenes, used to control cells in vivo in bio-nano science and technology. We directly heat an iron rod by induction heating (IH), because this avoids contact with insulating materials, which are sources of impurity gas. We chose molybdenum wire for the IH coils because it does not need water cooling. To improve power efficiency and temperature control, we propose a new circuit that omits the serial and parallel dummy coils (SPD) previously used for matching and safety. The circuit consists of inductively coupled coils, of thin-flat and helix shape, which insulate the IH power source from the evaporator. This coupled-coil circuit, i.e. an insulated induction heating coil transformer (IHCT), can be moved mechanically, and the secondary current can be adjusted precisely and continuously. The heating efficiency of the IHCT is much higher than in previous experiments using the SPD, because leakage flux is decreased and matching is improved simultaneously. We are able to adjust the temperature of the vapor source around its melting point, and the vapor pressure can therefore be controlled precisely with the IHCT. We can control the temperature to within ±10 K around 1500 °C by this method, and have also demonstrated experimental control of the iron vapor flux at extremely low pressures. We are now entering the next stage: developing an IHCT-based induction heating vapor source for materials with melting points above 2000 K, for application in our ECRIS.
76 FR 19147 - Advisory Committee on the Electronic Records Archives (ACERA)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives... Electronic Records Archives (ACERA). The meeting has been consolidated into one day. This meeting will be... number of individuals planning to attend must be submitted to the Electronic Records Archives Program at...
Optical Studies of Pure Fluids about Their Critical Points
NASA Astrophysics Data System (ADS)
Pang, Kian Tiong
Three optical experiments were performed on pure fluids near their critical points. In the first two setups, CH3F and H2C:CF2 were each tested in a temperature-controlled, prism-shaped cell and a thin parallel-windows cell. In the prism cell, a laser beam was additionally deflected by the fluid present. From the deflection data, the refractive index was related to the density to find the Lorentz-Lorenz function. Critical temperature (Tc), density, refractive index, and electronic polarizability were found. In the second experiment, a critically filled, thin parallel-windows cell was placed in one arm of a Mach-Zehnder interferometer. Fluid density was monitored by changes in the fringe pattern with changing cell temperature. The aim was to improve on the precision of Tc: Tc(CH3F) = (44.9087 ± 0.0002) °C; Tc(H2C:CF2) = (29.7419 ± 0.0001) °C; and to study the coexistence curve and diameter as close to Tc as possible. The critical behaviour was compared to theoretical renormalization group calculations. The derived coefficients were tested against a proposed three-body interaction to explain the field-mixing term in the diameter near the critical point. It was found that H2C:CF2 behaved as predicted by such an interaction; CH3F (and CHF3) did not. The third experiment was a feasibility study to find out if (critical) isotherms could be measured optically in a setup which combined the prism and parallel-windows cells. The aim was to map isotherms in as wide a range of pressure and density as possible and to probe the critical region directly. Pressure was monitored by a precise digital pressure gauge. CH3F and CHF3 were tested in this system. It was found that at low densities, the calculated second and third virial coefficients agreed with reference values. However, the data around the critical point were not accurate enough to be used to calculate the critical exponent, δ.
The calculated value was consistently smaller than the expected value. It was believed that the present setup had thermal isolation problems. Suggestions were made as to the improvements of this isotherm cell setup. Lastly, a joint project with the Department of Ophthalmology, UBC to assemble a vitreous fluorophotometer is discussed in Appendix F. The upgrading of the instrument took up the initial two years of this PhD programme.
Impedance spectroscopy of reduced monoclinic zirconia.
Eder, Dominik; Kramer, Reinhard
2006-10-14
Zirconia doped with low-valent cations (e.g. Y3+ or Ca2+) exhibits an exceptionally high ionic conductivity, making it an ideal candidate for various electrochemical applications including solid oxide fuel cells (SOFC) and oxygen sensors. It is nevertheless important to study undoped, monoclinic ZrO2 as a model system to construct a comprehensive picture of the electrical behaviour. In pure zirconia a residual number of anion vacancies remains because of contaminants in the material as well as the thermodynamic disorder equilibrium, but electronic conduction may also contribute to the observed conductivity. Reduction of zirconia in hydrogen leads to the adsorption of hydrogen and to the formation of oxygen vacancies, with their concentration affected by various parameters (e.g. reduction temperature and time, surface area, and water vapour pressure). However, little is still known about the reactivities of defect species and their effect on ionic and electronic conduction. Thus, we applied electrochemical impedance spectroscopy to investigate the electric performance of pure monoclinic zirconia with different surface areas in both oxidizing and reducing atmospheres. A novel equivalent circuit model including parallel ionic and electronic conduction, previously developed for titania, is used herein to decouple the conduction processes. The concentration of defects and their formation energies were measured using volumetric oxygen titration and temperature-programmed oxidation/desorption.
NASA Astrophysics Data System (ADS)
LeBoeuf, J. L.; Brodusch, N.; Gauvin, R.; Quitoriano, N. J.
2014-12-01
A novel method has been optimized so that adhesion layers are no longer needed to reliably deposit patterned gold structures on amorphous substrates. This technique allows for the fabrication of amorphous oxide templates known as micro-crucibles, which confine a vapor-liquid-solid (VLS) catalyst of nominally pure gold to a specific geometry. Within these confined templates of amorphous materials, faceted silicon crystals have been grown laterally. The novel deposition technique, which enables the nominally pure gold catalyst, involves the undercutting of an initial chromium adhesion layer. Using electron backscatter diffraction, it was found that the silicon crystals nucleated in these micro-crucibles were 30% single crystals, 45% potentially twinned crystals and 25% polycrystals for the experimental conditions used. Single, potentially twinned, and polycrystals all had an aversion to growth with the {1 0 0} surface parallel to the amorphous substrate. Closer analysis of the grain boundaries of potentially twinned and polycrystalline samples revealed that the overwhelming majority were of the 60° Σ3 coherent twin boundary type. The large number of coherent twin boundaries present in the grown, two-dimensional silicon crystals suggests that lateral VLS growth occurs very close to thermodynamic equilibrium. It is suggested that free-energy fluctuations during growth or cooling, and impurities, were the causes of this twinning.
Large-scale anisotropy in stably stratified rotating flows
Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...
2014-08-28
We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.
Kinetics of carbonate mineral dissolution in CO2-acidified brines at storage reservoir conditions.
Peng, Cheng; Anabaraonye, Benaiah U; Crawshaw, John P; Maitland, Geoffrey C; Trusler, J P Martin
2016-10-20
We report experimental measurements of the dissolution rate of several carbonate minerals in CO2-saturated water or brine at temperatures between 323 K and 373 K and at pressures up to 15 MPa. The dissolution kinetics of pure calcite were studied in CO2-saturated NaCl brines with molalities of up to 5 mol kg⁻¹. The results of these experiments were found to depend only weakly on the brine molality and to conform reasonably well with a kinetic model involving two parallel first-order reactions: one involving reaction with protons and the other involving reaction with carbonic acid. The dissolution rates of dolomite and magnesite were studied in both aqueous HCl solution and in CO2-saturated water. For these minerals, the dissolution rates could be explained by a simpler kinetic model involving only direct reaction between protons and the mineral surface. Finally, the rates of dissolution of two carbonate-reservoir analogue minerals (Ketton limestone and North Sea chalk) in CO2-saturated water were found to follow the same kinetics as found for pure calcite. Vertical scanning interferometry was used to study the surface morphology of unreacted and reacted samples. The results of the present study may find application in reactive-flow simulations of CO2 injection into carbonate-mineral saline aquifers.
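The two-pathway kinetic model described for calcite can be sketched as a simple rate law. The rate constants and activities below are placeholders for illustration, not the fitted values from the study.

```python
def calcite_dissolution_rate(a_H, a_H2CO3, k_H=0.5, k_carb=1e-4):
    """Surface dissolution rate from two parallel first-order pathways:
    attack by protons (a_H) and attack by carbonic acid (a_H2CO3).
    Rate constants here are illustrative placeholders."""
    return k_H * a_H + k_carb * a_H2CO3

# For dolomite and magnesite the abstract reports that the proton term
# alone suffices, i.e. k_carb is effectively zero for those minerals.
r = calcite_dissolution_rate(a_H=1e-4, a_H2CO3=1e-2)
```

The weak dependence on brine molality enters only through the activity coefficients feeding `a_H` and `a_H2CO3`, which is why a two-term first-order law fits across the molality range.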
The Dynamic Nature of the Ligustilide Complex
Schinkovitz, Andreas; Pro, Samuel M.; Main, Matthew; Chen, Shao-Nong; Jaki, Birgit U.; Lankin, David C.; Pauli, Guido F.
2008-01-01
Monomeric phthalides like Z-ligustilide (1) and Z-butylidenephthalide (2) are major constituents of medicinal plants of the Apiaceae family. While 1 has been associated with a variety of observed biological effects, it is also known for its instability and rapid chemical degradation. For the purpose of isolating pure 1 and 2, a gentle and rapid 2-step countercurrent isolation procedure was developed. From a supercritical CO2 fluid extract of Angelica sinensis roots, the phthalides were isolated with high GC-MS purities of 99.4% for 1 and 98.9% for 2, and consistently lower qHNMR purities of 98.1% and 96.4%, respectively. Taking advantage of molarity-based qHNMR methodology, a time-resolved study of the dynamic changes and residual complexity of pure 1 was conducted. GC-MS and (qH)NMR analysis of artificially degraded 1 provided evidence for the phthalide degradation pathways and optimized storage conditions. Parallel qHNMR analysis led to the recognition of variations in time- and process-dependent sample purity, and has an impact on the overall assessment of time-dependent changes in complex natural product systems. The study underscores the importance of independent quantitative monitoring as a prerequisite for the biological evaluation of labile natural products such as monomeric phthalides. PMID:18781813
NASA Astrophysics Data System (ADS)
Stepanova, Larisa; Bronnikov, Sergej
2018-03-01
The crack growth directional angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (maximum tangential stress and minimum strain energy density) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is an Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings. The simulation cell contains 400000 atoms. The crack propagation direction angles under different values of the mixity parameter, spanning from pure tensile loading to pure shear loading, are obtained and analyzed over a wide range of temperatures (from 0.1 K to 800 K). It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with the crack propagation direction angles given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
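The classical maximum tangential stress criterion referenced above has a closed-form deflection angle, which can serve as a sketch of what the continuum criteria predict. This is the standard single-parameter LEFM result, not the multi-parameter extension the paper develops.

```python
import math

def mts_crack_angle(K1, K2):
    """Crack deflection angle (radians) from the maximum tangential
    stress criterion, i.e. the root of K1*sin(t) + K2*(3*cos(t) - 1) = 0.
    Positive mode-II loading deflects the crack to negative angles."""
    if K2 == 0.0:
        return 0.0  # pure mode I: crack grows straight ahead
    r = K1 / K2
    # Substituting t = 2*atan(u) reduces the condition to the quadratic
    # 2*u**2 - r*u - 1 = 0; the physical root is:
    return 2.0 * math.atan((r - math.sqrt(r * r + 8.0)) / 4.0)
```

For pure mode II (K1 = 0) this gives the well-known deflection of about 70.5 degrees, and the angle tends to zero as the loading approaches pure mode I.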
NASA Astrophysics Data System (ADS)
Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.
2012-04-01
The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. 
The scientific workflow enabled on these systems, and developed as part of the Ultra-High Resolution Climate Modeling Project, allows users of OLCF resources to efficiently share simulated data, often multi-terabyte in volume, as well as the results from the modeling experiments and various synthesized products derived from these simulations. The final objective in the exercise is to ensure that the simulation results and the enhanced understanding will serve the needs of a diverse group of stakeholders across the world, including our research partners in U.S. Department of Energy laboratories & universities, domain scientists, students (K-12 as well as higher education), resource managers, decision makers, and the general public.
Suomi Npp and Jpss Pre-Launch Test Data Collection and Archive
NASA Astrophysics Data System (ADS)
Denning, M.; Ullman, R.; Guenther, B.; Kilcoyne, H.; Chandler, C.; Adameck, J.
2012-12-01
During the development of each Suomi National Polar-orbiting Partnership (Suomi NPP) instrument, significant testing was performed, both in ambient and simulated orbital (thermal-vacuum) conditions, at the instrument factory, and again after integration with the spacecraft. The NPOESS Integrated Program Office (IPO), and later the NASA Joint Polar Satellite System (JPSS) Program Office, defined two primary objectives with respect to capturing instrument and spacecraft test data during these test events. The first objective was to disseminate test data and auxiliary documentation to an often distributed network of scientists to permit timely production of independent assessments of instrument performance, calibration, data quality, and test progress. The second goal was to preserve the data and documentation in a catalogued government archive for the life of the mission, to aid in the resolution of anomalies and to facilitate the comparison of on-orbit instrument operating characteristics to those observed prior to launch. In order to meet these objectives, Suomi NPP pre-launch test data collection, distribution, processing, and archive methods included adaptable support infrastructures to quickly and completely transfer test data and documentation from the instrument and spacecraft factories to sensor scientist teams on-site at the factory and around the country. These methods were unique, effective, and low in cost. These efforts supporting pre-launch instrument calibration permitted timely data quality assessments and technical feedback from contributing organizations within the government, academia, and industry, and were critical in supporting timely sensor development. 
Second, in parallel to data distribution to the sensor science teams, pre-launch test data were transferred and ingested into the central Suomi NPP calibration and validation (cal/val) system, known as the Government Resource for Algorithm Verification, Independent Testing, and Evaluation (GRAVITE), where they will reside for the life of the mission. As a result, data and documentation are available for query, analysis, and download by the cal/val community via the command-line GRAVITE Transfer Protocol (GTP) tool or via the NOAA-collaborative website "CasaNOSA". Instrument and spacecraft test data, telemetry, and ground support equipment information were collected and organized with detailed test procedures, logs, analyses, characterizations, and reports. This 45 Terabyte archive facilitates the comparison of on-orbit Suomi NPP operating characteristics with those observed prior to launch, and will serve as a resource to aid in the assessment of pre-launch JPSS-1 sensor performance. In summary, this paper will present the innovative pre-launch test data campaign infrastructures employed for Suomi NPP and planned for JPSS-1.
Ethics and Truth in Archival Research
ERIC Educational Resources Information Center
Tesar, Marek
2015-01-01
The complexities of the ethics and truth in archival research are often unrecognised or invisible in educational research. This paper complicates the process of collecting data in the archives, as it problematises notions of ethics and truth in the archives. The archival research took place in the former Czechoslovakia and its turbulent political…
A Vision of Archival Education at the Millennium.
ERIC Educational Resources Information Center
Tibbo, Helen R.
1997-01-01
Issues critical to the development of an archival education degree program are discussed including number of credit hours and courses. Archival educators continue to revise the Society of American Archivists (SAA) Master's of Archival Studies (M.A.S.) guidelines as higher education and the world changes. Archival educators must cooperate with…
Examining Activism in Practice: A Qualitative Study of Archival Activism
ERIC Educational Resources Information Center
Novak, Joy Rainbow
2013-01-01
While archival literature has increasingly discussed activism in the context of archives, there has been little examination of the extent to which archivists in the field have accepted or incorporated archival activism into practice. Scholarship that has explored the practical application of archival activism has predominately focused on case…
Enhancement of real-time EPICS IOC PV management for the data archiving system
NASA Astrophysics Data System (ADS)
Kim, Jae-Ha
2015-10-01
For the operation of a 100-MeV linear proton accelerator, the major driving values and experimental data need to be archived. Different data are required depending on the experimental conditions, so functions that can add new data and delete data in real time need to be implemented. In an Experimental Physics and Industrial Control System (EPICS) input output controller (IOC), the values of process variables (PVs) are matched with the driving values and data. The PV values are archived in text file format by using the channel archiver. There is no need to create a database (DB) server; a large hard disk suffices. Through the web, the archived data can be loaded, and new PV values can be archived without stopping the archive engine. The details of the implementation of a data archiving system with the channel archiver are presented, and some preliminary results are reported.
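The append-only text-file scheme described can be sketched as follows. The line layout, PV names, and helper functions are assumptions for illustration only; the actual EPICS Channel Archiver uses its own binary storage format and tooling.

```python
import time

def append_sample(path, pv_name, value, ts=None):
    """Append one PV sample as a line '<unix_ts> <pv> <value>'.
    Appending keeps the archive engine running while new PVs are added."""
    ts = time.time() if ts is None else ts
    with open(path, "a") as f:
        f.write(f"{ts:.3f} {pv_name} {value}\n")

def read_pv(path, pv_name):
    """Return the (timestamp, value) history of one PV from the text log."""
    out = []
    with open(path) as f:
        for line in f:
            ts, name, val = line.split()
            if name == pv_name:
                out.append((float(ts), float(val)))
    return out
```

Because samples are plain appended lines, no DB server is involved: adding a new PV is just writing lines with a new name, and deletion is filtering the file.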
Proba-V Mission Exploitation Platform
NASA Astrophysics Data System (ADS)
Goor, Erwin; Dries, Jeroen
2017-04-01
VITO and partners developed the Proba-V Mission Exploitation Platform (MEP) as an end-to-end solution to drastically improve the exploitation of the Proba-V (a Copernicus contributing mission) EO-data archive (http://proba-v.vgt.vito.be/), the past mission SPOT-VEGETATION and derived vegetation parameters by researchers, service providers and end-users. The analysis of time series of data (+1PB) is addressed, as well as the large scale on-demand processing of near real-time data on a powerful and scalable processing environment. Furthermore, data from the Copernicus Global Land Service is in scope of the platform. From November 2015 an operational Proba-V MEP environment, as an ESA operation service, has been gradually deployed at the VITO data center with direct access to the complete data archive. The platform has been operational since autumn 2016, and several applications have already been released to users, e.g. - A time series viewer, showing the evolution of Proba-V bands and derived vegetation parameters from the Copernicus Global Land Service for any area of interest. - Full-resolution viewing services for the complete data archive. - On-demand processing chains on a powerful Hadoop/Spark backend, e.g. for the calculation of N-daily composites. - Virtual Machines can be provided with access to the data archive and tools to work with this data, e.g. various toolboxes (GDAL, QGIS, GrassGIS, SNAP toolbox, …) and support for R and Python. This allows users to immediately work with the data without having to install tools or download data, as well as to design, debug and test applications on the platform. - A prototype of Jupyter Notebooks is available with some examples worked out to show the potential of the data. Today the platform is used by several third-party projects to perform R&D activities on the data, and to develop/host data analysis toolboxes. In parallel the platform is further improved and extended. 
From the Proba-V MEP, access to Sentinel-2 and Landsat data will soon be available as well. Users can make use of powerful web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO with access to the complete data archive. To realise this, private cloud technology (OpenStack) is used and a distributed processing environment is built based on Hadoop. The Hadoop ecosystem offers a lot of technologies (Spark, Yarn, Accumulo, etc.) which we integrate with several open-source components (e.g. Geotrellis). The impact of this MEP on the user community will be high: it will completely change the way of working with the data and hence open the large time series to a wider community of users. The presentation will address these benefits for the users and discuss the technical challenges in implementing this MEP. Demonstrations will also be given. Platform URL: https://proba-v-mep.esa.int/
Chemically Dissected Rotation Curves of the Galactic Bulge from Main-sequence Proper Motions
NASA Astrophysics Data System (ADS)
Clarkson, William I.; Calamida, Annalisa; Sahu, Kailash C.; Brown, Thomas M.; Gennaro, Mario; Avila, Roberto J.; Valenti, Jeff; Debattista, Victor P.; Rich, R. Michael; Minniti, Dante; Zoccali, Manuela; Aufdemberge, Emily R.
2018-05-01
We report results from an exploratory study implementing a new probe of Galactic evolution using archival Hubble Space Telescope imaging observations. Precise proper motions are combined with photometric relative metallicity and temperature indices, to produce the proper-motion rotation curves of the Galactic bulge separately for metal-poor and metal-rich main-sequence samples. This provides a “pencil-beam” complement to large-scale wide-field surveys, which to date have focused on the more traditional bright giant branch tracers. We find strong evidence that the Galactic bulge rotation curves drawn from “metal-rich” and “metal-poor” samples are indeed discrepant. The “metal-rich” sample shows greater rotation amplitude and a steeper gradient against line-of-sight distance, as well as possibly a stronger central concentration along the line of sight. This may represent a new detection of differing orbital anisotropy between metal-rich and metal-poor bulge objects. We also investigate selection effects that would be implied for the longitudinal proper-motion cut often used to isolate a “pure-bulge” sample. Extensive investigation of synthetic stellar populations suggests that instrumental and observational artifacts are unlikely to account for the observed rotation curve differences. Thus, proper-motion-based rotation curves can be used to probe chemodynamical correlations for main-sequence tracer stars, which are orders of magnitude more numerous in the Galactic bulge than the bright giant branch tracers. We discuss briefly the prospect of using this new tool to constrain detailed models of Galactic formation and evolution. Based on observations made with the NASA/ESA Hubble Space Telescope and obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
UNFOLD-SENSE: a parallel MRI method with self-calibration and artifact suppression.
Madore, Bruno
2004-08-01
This work aims at improving the performance of parallel imaging by using it with our "unaliasing by Fourier-encoding the overlaps in the temporal dimension" (UNFOLD) temporal strategy. A self-calibration method called "self, hybrid referencing with UNFOLD and GRAPPA" (SHRUG) is presented. SHRUG combines the UNFOLD-based sensitivity mapping strategy introduced in the TSENSE method by Kellman et al. (5), with the strategy introduced in the GRAPPA method by Griswold et al. (10). SHRUG merges the two approaches to alleviate their respective limitations, and provides fast self-calibration at any given acceleration factor. UNFOLD-SENSE further includes an UNFOLD artifact suppression scheme to significantly suppress artifacts and amplified noise produced by parallel imaging. This suppression scheme, which was published previously (4), is related to another method that was presented independently as part of TSENSE. While the two are equivalent at accelerations ≤ 2.0, the present approach is shown here to be significantly superior at accelerations > 2.0, with up to double the artifact suppression at high accelerations. Furthermore, a slight modification of Cartesian SENSE is introduced, which allows departures from purely Cartesian sampling grids. This technique, termed variable-density SENSE (vdSENSE), allows the variable-density data required by SHRUG to be reconstructed with the simplicity and fast processing of Cartesian SENSE. UNFOLD-SENSE is given by the combination of SHRUG for sensitivity mapping, vdSENSE for reconstruction, and UNFOLD for artifact/amplified noise suppression. The method was implemented, with online reconstruction, on both an SSFP and a myocardium-perfusion sequence. The results from six patients scanned with UNFOLD-SENSE are presented.
Liu-Zeng, J.; Zhang, Z.; Wen, L.; Tapponnier, P.; Sun, Jielun; Xing, X.; Hu, G.; Xu, Q.; Zeng, L.; Ding, L.; Ji, C.; Hudnut, K.W.; van der Woerd, J.
2009-01-01
The Ms 8.0 Wenchuan earthquake, which devastated the mountainous western rim of the Sichuan basin in central China, produced a surface rupture over 200 km long with oblique thrust/dextral slip and maximum scarp heights of ~10 m. It thus ranks as one of the world's largest continental mega-thrust events in the last 150 yrs. Field investigation shows clear surface breaks along two of the main branches of the NE-trending Longmen Shan thrust fault system. The principal rupture, on the NW-dipping Beichuan fault, displays nearly equal amounts of thrust and right-lateral slip. Basin-ward of this rupture, another continuous surface break is observed for over 70 km on the parallel, more shallowly NW-dipping Pengguan fault. Slip on this latter fault was pure thrusting, with a maximum scarp height of ~3.5 m. This is one of the very few reported instances of crustal-scale co-seismic slip partitioning on parallel thrusts. This out-of-sequence event, with distributed surface breaks on crustal mega-thrusts, highlights regional, ~EW-directed, present-day crustal shortening oblique to the Longmen Shan margin of Tibet. The long rupture and large offsets with strong horizontal shortening that characterize the Wenchuan earthquake herald a re-evaluation of tectonic models anticipating little or no active shortening of the upper crust along this edge of the plateau, and require a re-assessment of seismic hazard along potentially under-rated active faults across the densely populated western Sichuan basin and mountains. © 2009 Elsevier B.V.
NASA Technical Reports Server (NTRS)
Ma, Q.; Boulet, C.; Tipping, R. H.
2017-01-01
Line shape parameters including the half-widths and the off-diagonal elements of the relaxation matrix have been calculated for self-broadened NH3 lines in the perpendicular v4 band. As in the pure rotational and the parallel v1 bands, the small inversion splitting in this band causes a complete failure of the isolated line approximation. As a result, one has to use formalisms not relying on this approximation. However, due to differences between parallel and perpendicular bands of NH3, the applicability of the formalism used in our previous studies of the v1 band and other parallel bands must be carefully verified. We have found that, as long as potential models only contain components with K1 = K2 = 0, whose matrix elements require the selection rule Δk = 0, the formalism is applicable to the v4 band with some minor adjustments. Based on both theoretical considerations and results from numerical calculations, the non-diagonality of the relaxation matrices in all the PP, RP, PQ, RQ, PR, and RR branches is discussed. Theoretically calculated self-broadened half-widths are compared with measurements and the values listed in HITRAN 2012. With respect to line coupling effects, we have compared our calculated intra-doublet off-diagonal elements of the relaxation matrix with reliable measurements carried out in the PP branch, where the spectral environment is favorable. The agreement is rather good, since our results reproduce well the observed k and j dependences of these elements, thus validating our formalism.
VLBA Archive &Distribution Architecture
NASA Astrophysics Data System (ADS)
Wells, D. C.
1994-01-01
Signals from the 10 antennas of NRAO's VLBA [Very Long Baseline Array] are processed by a Correlator. The complex fringe visibilities produced by the Correlator are archived on magnetic cartridges using a low-cost architecture which is capable of scaling and evolving. Archive files are copied to magnetic media to be distributed to users in FITS format, using the BINTABLE extension. Archive files are labelled using SQL INSERT statements, in order to bind the DBMS-based archive catalog to the archive media.
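Binding a DBMS-based catalog to archive media via SQL INSERT statements might look like the following sketch. The table layout, column names, and values are hypothetical illustrations, not the actual VLBA catalog schema.

```python
import sqlite3

# In-memory catalog standing in for the archive DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE archive_files (
    file_id    TEXT PRIMARY KEY,   -- FITS/BINTABLE archive file label
    cartridge  TEXT,               -- magnetic cartridge holding the file
    start_time TEXT,
    nbytes     INTEGER)""")

# One INSERT per archive file binds the catalog entry to its medium.
conn.execute("INSERT INTO archive_files VALUES (?, ?, ?, ?)",
             ("BW001.A", "CART0042", "1994-01-15T03:20:00", 734003200))

# Locating the medium for a given file is then a catalog lookup.
row = conn.execute(
    "SELECT cartridge FROM archive_files WHERE file_id = 'BW001.A'"
).fetchone()
```

The point of labelling with INSERT statements is that the media can be re-catalogued by simply replaying the statements into a fresh DBMS.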
Zenker, Sven
2010-08-01
Combining mechanistic mathematical models of physiology with quantitative observations using probabilistic inference may offer advantages over established approaches to computerized decision support in acute care medicine. Particle filters (PF) can perform such inference successively as data becomes available. The potential of PF for real-time state estimation (SE) for a model of cardiovascular physiology is explored using parallel computers, and the ability to achieve joint state and parameter estimation (JSPE) given minimal prior knowledge is tested. A parallelized sequential importance sampling/resampling algorithm was implemented and its scalability for the pure SE problem for a non-linear five-dimensional ODE model of the cardiovascular system evaluated on a Cray XT3 using up to 1,024 cores. JSPE was implemented using a state augmentation approach with artificial stochastic evolution of the parameters. Its performance when simultaneously estimating the 5 states and 18 unknown parameters when given observations only of arterial pressure, central venous pressure, heart rate, and, optionally, cardiac output, was evaluated in a simulated bleeding/resuscitation scenario. SE was successful and scaled up to 1,024 cores with appropriate algorithm parametrization, with real-time-equivalent performance for up to 10 million particles. JSPE in the described underdetermined scenario achieved excellent reproduction of observables and qualitative tracking of end-diastolic ventricular volumes and sympathetic nervous activity. However, only a subset of the posterior distributions of parameters concentrated around the true values for parts of the estimated trajectories. The performance of parallelized PFs makes their application to complex mathematical models of physiology for the purpose of clinical data interpretation, prediction, and therapy optimization appear promising. 
JSPE in the described extremely underdetermined scenario nevertheless extracted information of potential clinical relevance from the data in this simulation setting. However, fully satisfactory resolution of this problem when minimal prior knowledge about parameter values is available will require further methodological improvements, which are discussed.
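The state-augmentation approach to JSPE can be sketched on a toy one-dimensional model. This is a generic sequential importance sampling/resampling step, not the authors' cardiovascular model; the toy dynamics, noise levels, and jitter below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(states, params, obs, step_fn, obs_fn, obs_sigma, jitter):
    """One sequential importance sampling/resampling step.  Parameters are
    augmented into the particle state and given artificial random-walk
    evolution so they are estimated jointly with the states."""
    params = params + rng.normal(0.0, jitter, params.shape)
    states = step_fn(states, params)                 # propagate the model
    w = np.exp(-0.5 * ((obs_fn(states) - obs) / obs_sigma) ** 2)
    w /= w.sum()                                     # importance weights
    idx = rng.choice(len(w), size=len(w), p=w)       # resample
    return states[idx], params[idx]

# Toy model: x_{t+1} = a * x_t with unknown decay a (true value 0.9).
n = 2000
states = rng.normal(1.0, 0.2, (n, 1))
params = rng.normal(0.5, 0.3, (n, 1))                # vague prior on a
x_true = 1.0
for _ in range(20):
    x_true *= 0.9
    obs = x_true + rng.normal(0.0, 0.01)             # noisy observation
    states, params = sir_step(states, params, obs,
                              step_fn=lambda s, p: s * p,
                              obs_fn=lambda s: s[:, 0],
                              obs_sigma=0.05, jitter=0.01)
```

After a few assimilation cycles the particle cloud over the augmented parameter concentrates near the true decay rate, which is the mechanism the paper scales to 18 unknown parameters on parallel hardware.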
García-Grajales, Julián A.; Rucabado, Gabriel; García-Dopico, Antonio; Peña, José-María; Jérusalem, Antoine
2015-01-01
With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally only characterized by purely mechanistic criteria, functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells, Neurite, has only very recently been proposed. In this paper, we present the implementation details of this model: a finite difference parallel program for simulating electrical signal propagation along neurites under mechanical loading. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The explicit and implicit solvers implemented in Neurite were therefore parallelized using graphics processing units in order to reduce the simulation costs of large-scale scenarios. Cable Theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite's mechanical behavior within its surrounding medium was adopted as a link between electrophysiology and mechanics. 
This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted. PMID:25680098
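The finite-difference approach to signal propagation can be illustrated on the passive (Cable Theory) part alone. The explicit scheme below uses normalized units with placeholder constants; the real Neurite couples this to Hodgkin-Huxley kinetics and to the mechanical damage model.

```python
import numpy as np

def passive_cable_step(V, dt, dx, lam2=1.0, tau=1.0, V_rest=0.0):
    """One explicit finite-difference step of the passive cable equation
        tau * dV/dt = lam2 * d2V/dx2 - (V - V_rest)
    with sealed (zero-flux) ends.  lam2 and tau are placeholder membrane
    constants in normalized units, not fitted neurite values."""
    d2V = np.empty_like(V)
    d2V[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    d2V[0] = 2 * (V[1] - V[0]) / dx**2       # zero-flux boundary
    d2V[-1] = 2 * (V[-2] - V[-1]) / dx**2
    return V + (dt / tau) * (lam2 * d2V - (V - V_rest))

# A localized depolarization spreads and leaks away along the neurite.
V = np.zeros(101)
V[50] = 100.0
for _ in range(200):
    V = passive_cable_step(V, dt=0.0005, dx=0.1)
```

The explicit step is stable here because dt*lam2/dx**2 = 0.05 is well below the 0.5 limit; an implicit solver, as Neurite also provides, removes that restriction at the cost of a linear solve per step.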
Beating the tyranny of scale with a private cloud configured for Big Data
NASA Astrophysics Data System (ADS)
Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag
2015-04-01
The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks - and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment - ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively - even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. 
The JASMIN environment has some limitations: the high performance disk storage is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet available. There are also load balancing and performance issues that need to be understood. We will conclude with projections for future usage and our plans to meet those requirements.
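As a rough illustration of the checksumming workload described above, a file tree can be fingerprinted in parallel across a process pool. This is a minimal sketch, not JASMIN's actual tooling: the MD5 choice, the pool size, and the `checksum_tree` helper are all illustrative assumptions.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def checksum(path, chunk_size=1 << 20):
    """Stream a file through MD5 in 1 MiB chunks so memory stays flat for large files."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return str(path), h.hexdigest()

def checksum_tree(root, workers=8):
    """Fan file-level checksums out across a process pool; returns {path: digest}."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(checksum, files))
```

Since each file is an independent unit of work, this scales out naturally; at the hundred-million-file scale the file listing itself would also need to be partitioned.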
EarthServer: a Summary of Achievements in Technology, Services, and Standards
NASA Astrophysics Data System (ADS)
Baumann, Peter
2015-04-01
Big Data in the Earth sciences, the Tera- to Exabyte archives, are mostly made up of coverage data, defined by ISO and OGC as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as computing the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The transatlantic EarthServer initiative, running from 2011 through 2014, united 11 partners to establish Big Earth Data Analytics. A key ingredient has been flexibility for users to ask whatever they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level, standards-based query languages which unify data and metadata search in a simple, yet powerful way. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform comprises rasdaman, the pioneering and leading Array DBMS built for any-size multi-dimensional raster data, extended with support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established.
Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS), which defines a high-level coverage query language. Reviewers of EarthServer have attested that "With no doubt the project has been shaping the Big Earth Data landscape through the standardization activities within OGC, ISO and beyond". We present the project approach, its outcomes and impact on standardization and Big Data technology, and vistas for the future.
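To give a flavor of the high-level, server-side query languages mentioned above, the sketch below builds a WCPS-style request URL in Python. The coverage name `AvgLandTemp`, the endpoint URL, and the exact parameter encoding are illustrative assumptions, not taken from a particular EarthServer deployment.

```python
from urllib.parse import urlencode

def wcps_request_url(endpoint, query):
    """Encode a WCPS query as a key-value-pair WCS ProcessCoverages request URL."""
    params = {
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": query,
    }
    return endpoint + "?" + urlencode(params)

# A WCPS-flavored query: ask the server to average one month of a coverage
# and return CSV, so only the result (not the raw array) crosses the network.
query = (
    'for $c in (AvgLandTemp) '
    'return encode(avg($c[ansi("2014-07")]), "text/csv")'
)
url = wcps_request_url("https://example.org/rasdaman/ows", query)
```

The point of the pattern is that the aggregation runs where the data lives; the client receives a scalar rather than a multi-terabyte slice.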
Obstacles to the Access, Use and Transfer of Information from Archives: A RAMP Study.
ERIC Educational Resources Information Center
Duchein, Michel
This publication reviews means of access to information contained in the public archives (current administrative documents and archival records) and private archives (manuscripts of personal or family origin) of many countries and makes recommendations for improving access to archival information. Sections describe: (1) the origin and development…
NASA Astrophysics Data System (ADS)
Heather, David; Besse, Sebastien; Vallat, Claire; Barbarisi, Isa; Arviset, Christophe; De Marchi, Guido; Barthelemy, Maud; Coia, Daniela; Costa, Marc; Docasal, Ruben; Fraga, Diego; Grotheer, Emmanuel; Lim, Tanya; MacFarlane, Alan; Martinez, Santa; Rios, Carlos; Vallejo, Fran; Saiz, Jaime
2017-04-01
The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific datasets through various interfaces at http://psa.esa.int. All datasets are scientifically peer-reviewed by independent scientists and are compliant with the Planetary Data System (PDS) standards. The PSA is currently implementing a number of significant improvements, mostly driven by the evolution of the PDS standard and the growing need for better interfaces and advanced applications to support science exploitation. As of the end of 2016, the PSA hosts data from all of ESA's planetary missions. This includes ESA's first planetary mission, Giotto, which encountered comet 1P/Halley in 1986 with a flyby at 800 km. Science data from Venus Express, Mars Express, Huygens and the SMART-1 mission are also all available at the PSA. The PSA also contains all science data from Rosetta, which explored comet 67P/Churyumov-Gerasimenko and asteroids Steins and Lutetia. The year 2016 saw the arrival of the ExoMars 2016 data in the archive. In the upcoming years, at least three new projects are foreseen to be fully archived at the PSA: the BepiColombo mission, scheduled for launch in 2018; the ExoMars Rover and Surface Platform (RSP) in 2020; and the JUpiter ICy moons Explorer (JUICE). All of these will archive their data in the PSA. In addition, a few ground-based support programmes are also available, especially for the Venus Express and Rosetta missions. The newly designed PSA will enhance the user experience and significantly reduce the complexity for users to find their data, promoting one-click access to the scientific datasets with more customized views when needed. This includes better integration with planetary GIS analysis tools and planetary interoperability services (to search and retrieve data, supporting e.g. PDAP and EPN-TAP).
It will also be up-to-date with versions 3 and 4 of the PDS standards, as PDS4 will be used for ESA's ExoMars and upcoming BepiColombo missions. Users will have direct access to documentation, information and tools that are relevant to the scientific use of the datasets, including ancillary datasets, Software Interface Specification (SIS) documents, and any tools/help that the PSA team can provide. The new PSA interface was released in January 2017. The home page provides direct and simple access to the scientific data, aiming to help scientists discover and explore its content. The archive can be explored through a set of parameters that allow the selection of products in space and time. Quick views provide the information needed for the selection of appropriate scientific products. During 2017, the PSA team will focus their efforts on developing a map search interface using GIS technologies to display ESA planetary datasets, an image gallery providing navigation through images to explore the datasets, and interoperability with international partners. This will be done in parallel with making additional metadata (e.g., geometry) searchable through the interface, and with dedicated effort to improve the content covering 20 years of space exploration.
Current status of the international Halley Watch infrared net archive
NASA Technical Reports Server (NTRS)
Mcguinness, Brian B.
1988-01-01
The primary purposes of the Halley Watch have been to promote Halley observations, coordinate and standardize the observing where useful, and to archive the results in a database readily accessible to cometary scientists. The intention of IHW is to store the observations themselves, along with any information necessary to allow users to understand and use the data, but to exclude interpretations of these data. Each of the archives produced by the IHW will appear in two versions: a printed archive and a digital archive on CD-ROMs. The archive is expected to have a very long lifetime. The IHW has already produced an archive for P/Crommelin. This consists of one printed volume and two 1600 bpi tapes. The Halley archive will contain at least twenty gigabytes of information.
ERIC Educational Resources Information Center
Choudhury, G. Sayeed; DiLauro, Tim; Droettboom, Michael; Fujinaga, Ichiro; MacMillan, Karl; Nelson, Michael L.; Maly, Kurt; Thibodeau, Kenneth; Thaller, Manfred
2001-01-01
These articles describe the experiences of the Johns Hopkins University library in digitizing their collection of sheet music; motivation for buckets, Smart Object, Dumb Archive (SODA) and the Open Archives Initiative (OAI), and initial experiences using them in digital library (DL) testbeds; requirements for archival institutions, the National…
Internet FAQ Archives - Online Education - faqs.org
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher Slominski
2009-10-01
Archiving a large fraction of the EPICS signals within the Jefferson Lab (JLAB) Accelerator control system is vital for postmortem and real-time analysis of the accelerator performance. This analysis is performed on a daily basis by scientists, operators, engineers, technicians, and software developers. Archiving poses unique challenges due to the magnitude of the control system. A MySQL Archiving system (Mya) was developed to scale to the needs of the control system; it currently archives 58,000 EPICS variables, updating at a rate of 11,000 events per second. In addition to the large collection rate, retrieval of the archived data must also be fast and robust. Archived data retrieval clients obtain data at a rate of over 100,000 data points per second. Managing the data in a relational database provides a number of benefits. This paper describes an archiving solution that uses an open source database and standard off-the-shelf hardware to meet high performance archiving needs. Mya has been in production at Jefferson Lab since February of 2007.
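A common pattern behind sustaining event rates like those quoted above is buffering events and inserting them in batches rather than committing per event. The sketch below is a generic illustration of that pattern, not Mya's actual design: sqlite3 stands in for MySQL, and the `EventArchiver` class, schema, and batch size are all hypothetical.

```python
import sqlite3
import time

class EventArchiver:
    """Buffer (channel, time, value) events and flush them in batches.

    Batched executemany() plus a single commit per batch amortizes
    per-statement and per-transaction overhead, the key to high insert rates.
    """

    def __init__(self, db_path=":memory:", batch_size=1000):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS events (channel TEXT, t REAL, value REAL)"
        )
        self.batch_size = batch_size
        self.buffer = []

    def record(self, channel, value, t=None):
        self.buffer.append((channel, t if t is not None else time.time(), value))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.conn.executemany("INSERT INTO events VALUES (?, ?, ?)", self.buffer)
            self.conn.commit()
            self.buffer.clear()
```

A real archiver would add durability guarantees for the in-memory buffer and per-channel partitioning, but the batching idea is the same.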
Utilizing data grid architecture for the backup and recovery of clinical image data.
Liu, Brent J; Zhou, M Z; Documet, J
2005-01-01
Grid computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models. However, there has been limited investigation into the impact of this emerging technology on medical imaging and informatics. In particular, PACS technology, an established clinical image repository system, while having matured significantly during the past ten years, still remains weak in the area of clinical image data backup. Current solutions are expensive or time-consuming, and the technology is far from foolproof. Many large-scale PACS archive systems still encounter downtime of hours or days, which has the critical effect of crippling daily clinical operations. In this paper, a review of current backup solutions is presented along with a brief introduction to grid technology. Finally, research and development utilizing the grid architecture for the recovery of clinical image data, in particular PACS image data, is presented. The focus of this paper is on applying a grid computing architecture to a DICOM environment, since DICOM has become the standard for clinical image data and PACS utilizes this standard. A federation of PACS can be created, allowing a failed PACS archive to recover its image data from others in the federation in a seamless fashion. The design reflects the five-layer architecture of grid computing: Fabric, Resource, Connectivity, Collective, and Application layers. The testbed Data Grid is composed of one research laboratory and two clinical sites. The Globus 3.0 Toolkit (co-developed by Argonne National Laboratory and the Information Sciences Institute, USC) is utilized to develop the core and user-level middleware and achieve grid connectivity.
The successful implementation and evaluation of utilizing data grid architecture for clinical PACS data backup and recovery will provide an understanding of the methodology for using Data Grid in clinical image data backup for PACS, as well as establishment of benchmarks for performance from future grid technology improvements. In addition, the testbed can serve as a road map for expanded research into large enterprise and federation level data grids to guarantee CA (Continuous Availability, 99.999% up time) in a variety of medical data archiving, retrieval, and distribution scenarios.
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Archiving and Quality Control
NASA Astrophysics Data System (ADS)
He, B.; Cui, C.; Fan, D.; Li, C.; Xiao, J.; Yu, C.; Wang, C.; Cao, Z.; Chen, J.; Yi, W.; Li, S.; Mi, L.; Yang, S.
2015-09-01
AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences) (Cui et al. 2014). To archive the astronomical data in China, we present the implementation of the astronomical data archiving system (ADAS). Data archiving and quality control are the infrastructure for AstroCloud. Throughout the entire data life cycle, the data archiving system standardizes data, transfers data, logs observational data, archives ambient data, and stores these data and metadata in a database. Quality control covers the whole process and all aspects of data archiving.
Building a COTS archive for satellite data
NASA Technical Reports Server (NTRS)
Singer, Ken; Terril, Dave; Kelly, Jack; Nichols, Cathy
1994-01-01
The goal of the NOAA/NESDIS Active Archive was to provide a method of access to an online archive of satellite data. The archive had to manage and store the data, let users interrogate the archive, and allow users to retrieve data from the archive. Practical issues of the system design, such as implementation time, cost and operational support, were examined in addition to the technical issues. There was a fixed window of opportunity to create an operational system, along with budget and staffing constraints. Therefore, the technical solution had to be designed and implemented subject to the constraints imposed by the practical issues. The NOAA/NESDIS Active Archive came online in July of 1994, meeting all of its original objectives.
ERIC Educational Resources Information Center
McCarthy, Gavan; Evans, Joanne
2007-01-01
This article examines the evolution of a national register of the archives of science and technology in Australia and the related development of an archival informatics focused initially on people and their relationships to archival materials. The register was created in 1985 as an in-house tool for the Australian Science Archives Project of the…
ERIC Educational Resources Information Center
Brodhead, Michael J.; Zink, Steven D.
1993-01-01
Discusses the National Archives and Records Administration (NARA) through an interview with the Archivist of the United States, Don Wilson. Topics addressed include archival independence and congressional relations; national information policy; expansion plans; machine-readable archival records; preservation activities; and relations with other…
Adaptability in the Development of Data Archiving Services at Johns Hopkins University
NASA Astrophysics Data System (ADS)
Petters, J.; DiLauro, T.; Fearon, D.; Pralle, B.
2015-12-01
Johns Hopkins University (JHU) Data Management Services provides archiving services for institutional researchers through the JHU Data Archive, thereby increasing the access to and use of their research data. From its inception our unit's archiving service has evolved considerably. While some of these changes have been internally driven so that our unit can archive quality data collections more efficiently, we have also developed archiving policies and procedures on the fly in response to researcher needs. Providing our archiving services to JHU research groups from a variety of research disciplines has surfaced different sets of expectations and needs. We have used each interaction to help us refine our services and quickly satisfy the researchers we serve (following the first agile principle). Here we discuss the development of our newest archiving service model, its implementation over the past several months, and the processes by which we have continued to refine and improve our archiving services since its implementation. Through this discussion we will illustrate the benefits of planning, structure and flexibility in the development of archiving services that maximize the potential value of research data. We will describe interactions with research groups, including those from environmental engineering and international health, and how we were able to rapidly modify and develop our archiving services to meet their needs (i.e., in an 'agile' way). For example, our interactions with both of these research groups led first to discussion in regular standing meetings and eventually to the development of new archiving policies and procedures. These policies and procedures centered on limiting access to archived research data while associated manuscripts progress through peer review and publication.
NASA Astrophysics Data System (ADS)
Bonano, Manuela; Buonanno, Sabatino; Ojha, Chandrakanta; Berardino, Paolo; Lanari, Riccardo; Zeni, Giovanni; Manunta, Michele
2017-04-01
The advanced DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm has already largely demonstrated its effectiveness in carrying out multi-scale and multi-platform surface deformation analyses relevant to both natural and man-made hazards. Thanks to its capability to generate displacement maps and long-term deformation time series at both regional (low resolution analysis) and local (full resolution analysis) spatial scales, it provides insight into the spatial and temporal patterns of localized displacements relevant to single buildings and infrastructures over extended urban areas, playing a key role in supporting risk mitigation and preservation activities. The extensive application of the multi-scale SBAS-DInSAR approach in many scientific contexts has gone hand in hand with the development of new SAR satellite missions, characterized by different frequency bands, spatial resolutions, revisit times and ground coverage. This has led to the generation of huge DInSAR data stacks to be efficiently handled, processed and archived, with a strong impact on both the data storage and the computational requirements needed for generating the full resolution SBAS-DInSAR results. Accordingly, innovative and effective solutions for the automatic processing of massive SAR data archives and for the operational management of the derived SBAS-DInSAR products need to be designed and implemented, exploiting the high efficiency (in terms of portability, scalability and computing performance) of new ICT methodologies. In this work, we present a novel parallel implementation of the full resolution SBAS-DInSAR processing chain, aimed at investigating localized displacements affecting single buildings and infrastructures over very large urban areas, relying on parallelization strategies with different granularity levels.
The image granularity level is applied in most steps of the SBAS-DInSAR processing chain and exploits multiprocessor systems with distributed memory. Moreover, in some computationally heavy processing steps, Graphics Processing Units (GPUs) are exploited for the processing of blocks working on a pixel-by-pixel basis, requiring strong modifications of some key parts of the sequential full resolution SBAS-DInSAR processing chain. GPU processing is implemented by efficiently exploiting parallel processing architectures (such as CUDA) to increase computing performance, in terms of optimization of the available GPU memory as well as reduction of the Input/Output operations on the GPU and of the whole processing time for specific blocks with respect to the corresponding sequential implementation, which is particularly critical in the presence of huge DInSAR datasets. Moreover, to efficiently handle the massive amount of DInSAR measurements provided by the new generation of SAR constellations (CSK and Sentinel-1), we carry out a proper re-design strategy aimed at the robust assimilation of the full resolution SBAS-DInSAR results into the web-based GeoNode platform of the Spatial Data Infrastructure, thus allowing the efficient management, analysis and integration of the interferometric results with different data sources.
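Block-level parallelization of the kind described above can be sketched generically: split an image into contiguous row blocks and process them across a pool of workers. This is an illustrative stand-in, not the SBAS-DInSAR code; the `process_block` kernel here simply squares samples where the real chain would perform per-pixel time series inversions.

```python
from concurrent.futures import ProcessPoolExecutor

def process_block(rows):
    """Stand-in per-pixel kernel: squares each sample. In the real chain each
    pixel would carry a full interferometric time series to invert."""
    return [[v * v for v in row] for row in rows]

def process_image(image, n_blocks=4, workers=4):
    """Split an image (list of rows) into contiguous row blocks and process
    them in parallel; pool.map preserves block order, so output rows line up."""
    step = max(1, len(image) // n_blocks)
    blocks = [image[i:i + step] for i in range(0, len(image), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        out = []
        for block in pool.map(process_block, blocks):
            out.extend(block)
        return out
```

Because pixels are processed independently, the same decomposition maps onto GPU threads: each block becomes a kernel launch over its pixels.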
An electrostatic deceleration lens for highly charged ions.
Rajput, J; Roy, A; Kanjilal, D; Ahuja, R; Safvan, C P
2010-04-01
The design and implementation of a purely electrostatic deceleration lens used to obtain beams of highly charged ions at very low energies is presented. The design of the lens is such that it can be used with parallel as well as diverging incoming beams and delivers a well focused low energy beam at the target. In addition, tuning of the final energy of the beam over a wide range (1 eV/q to several hundred eV/q, where q is the beam charge state) is possible without any change in hardware configuration. The deceleration lens was tested with Ar(8+) extracted from an electron cyclotron resonance ion source with an initial energy of 30 keV/q, and final energies as low as 70 eV/q have been achieved.
Formation of silicon nanodots via ion beam sputtering of ultrathin gold thin film coatings on Si
2011-01-01
Ion beam sputtering of ultrathin film Au coatings used as a physical catalyst for self-organization of Si nanostructures has been achieved by tuning the incident particle energy. This approach holds promise as a scalable nanomanufacturing parallel-processing alternative to candidate nanolithography techniques. Structures of 11- to 14-nm Si nanodots are formed with normal-incidence low-energy Ar ions of 200 eV and fluences above 2 × 10^17 cm^-2. In situ surface characterization during ion irradiation elucidates an early-stage ion-mixing migration mechanism for nanodot self-organization. In particular, the evolution from gold film islands to ion-induced metastable gold silicide, followed by pure Si nanodots formed without any need for impurity seeding, is observed. PMID:21711934
State-independent uncertainty relations and entanglement detection
NASA Astrophysics Data System (ADS)
Qian, Chen; Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
The uncertainty relation is one of the key ingredients of quantum theory. Despite the great efforts devoted to this subject, most of the variance-based uncertainty relations are state-dependent and suffer from the triviality problem of zero lower bounds. Here we develop a method to obtain uncertainty relations with state-independent lower bounds. The method works by exploring the eigenvalues of a Hermitian matrix composed of the Bloch vectors of incompatible observables and is applicable to both pure and mixed states and to an arbitrary number of N-dimensional observables. The uncertainty relation for the incompatible observables can be explained by geometric relations related to the parallel postulate and the inequalities in Horn's conjecture on Hermitian matrix sums. Practical entanglement criteria are also presented based on the derived uncertainty relations.
Self-assembly of skyrmion-dressed chiral nematic colloids with tangential anchoring.
Pandey, M B; Porenta, T; Brewer, J; Burkart, A; Copar, S; Zumer, S; Smalyukh, Ivan I
2014-06-01
We describe dipolar nematic colloids comprising mutually bound solid microspheres, three-dimensional skyrmions, and point defects in a molecular alignment field of chiral nematic liquid crystals. Nonlinear optical imaging and numerical modeling based on minimization of Landau-de Gennes free energy reveal that the particle-induced skyrmions resemble torons and hopfions, while matching surface boundary conditions at the interfaces of liquid crystal and colloidal spheres. Laser tweezers and videomicroscopy reveal that the skyrmion-colloidal hybrids exhibit purely repulsive elastic pair interactions in the case of parallel dipoles and an unexpected reversal of interaction forces from repulsive to attractive as the center-to-center distance decreases for antiparallel dipoles. The ensuing elastic self-assembly gives rise to colloidal chains of antiparallel dipoles with particles entangled by skyrmions.
VizieR Online Data Catalog: Milky Way L/T/M-dwarfs identified in BoRG survey (Holwerda+, 2014)
NASA Astrophysics Data System (ADS)
Holwerda, B. W.; Trenti, M.; Clarkson, W.; Sahu, K.; Bradley, L.; Stiavelli, M.; Pirzkal, N.; de Marchi, G.; Andersen, M.; Bouwens, R.; Ryan, R.
2017-07-01
Our principal data set is the WFC3 data from the BoRG (HST GO/PAR-11700; Trenti et al. 2011ApJ...727L..39T; Bradley et al. 2012ApJ...760..108B) survey, used to identify Milky Way dwarf stars from their morphology and color. The BoRG observations are undithered HST/WFC3 images conducted in pure-parallel mode, with the telescope pointing to a primary spectroscopic target of the Cosmic Origins Spectrograph (COS; typically a high-z QSO at high Galactic latitude). The limitations of such observations are primarily that no dithering strategy can be used (final images are at the WFC3 native pixel scale) and that total exposure times are dictated by the primary program. (5 data files).
Brownian Motion with Active Fluctuations
NASA Astrophysics Data System (ADS)
Romanczuk, Pawel; Schimansky-Geier, Lutz
2011-06-01
We study the effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such a behavior may also occur in non-Gaussian active fluctuations and discuss briefly correlations of the fluctuating stochastic forces.
Li, J. C.; Diamond, P. H.
2017-03-23
Here, negative compressibility ITG turbulence in a linear plasma device (CSDX) can induce a negative viscosity increment. However, even with this negative increment, we show that the total axial viscosity remains positive definite, i.e. no intrinsic axial flow can be generated by pure ITG turbulence in a straight magnetic field. This differs from the case of electron drift wave (EDW) turbulence, where the total viscosity can turn negative, at least transiently. When the flow gradient is steepened by any drive mechanism, so that the parallel shear flow instability (PSFI) exceeds the ITG drive, the flow profile saturates at a level close to the value above which PSFI becomes dominant. This saturated flow gradient exceeds the PSFI linear threshold, and grows with
Flow of Dense Granular Suspensions on an Inclined Plane
NASA Astrophysics Data System (ADS)
Bonnoit, C.; Lanuza, J.; Lindner, A.; Clément, E.
2008-07-01
We investigate the flow behavior of dense granular suspensions by the use of an inclined plane. The suspensions are prepared at high packing fractions and consist of spherical non-Brownian particles density-matched with the suspending fluid. On the inclined plane, we perform a systematic study of the surface velocity as a function of the layer thickness for various flow rates and tilt angles. We also perform measurements on a classical (parallel-plate) rheometer, which are shown to be in good agreement with existing models up to a volume fraction of 50%. Comparing these results, we show that the flow on an inclined plane can, up to a volume fraction of 50%, indeed be described by a purely viscous model in agreement with the results from classical rheometry.
NASA Astrophysics Data System (ADS)
Khodas, M.; Levchenko, A.; Catelani, G.
2012-06-01
We study the transport in ultrathin disordered film near the quantum critical point induced by the Zeeman field. We calculate corrections to the normal state conductivity due to quantum pairing fluctuations. The fluctuation-induced transport is mediated by virtual rather than real quasiparticle excitations. We find that at zero temperature, where the corrections come from purely quantum fluctuations, the Aslamazov-Larkin paraconductivity term, the Maki-Thompson interference contribution, and the density of states effects are all of the same order. The total correction leads to the negative magnetoresistance. This result is in qualitative agreement with the recent transport observations in the parallel magnetic field of the homogeneously disordered amorphous films and superconducting two-dimensional electron gas realized at the oxide interfaces.
NASA Astrophysics Data System (ADS)
Hypki, Arkadiusz; Brown, Anthony
2016-06-01
The Gaia archive is being designed and implemented by the DPAC Consortium. The purpose of the archive is to maximize the scientific exploitation of the Gaia data by the astronomical community. Thus, it is crucial to gather and discuss the features of the Gaia archive with the community as much as possible. It is especially important from the point of view of the GENIUS project to gather feedback and potential use cases for the archive. This paper very briefly presents the general ideas behind the Gaia archive and describes which tools are already provided to the community.
Tabrizi, Sepehr N; Law, Irwin; Buadromo, Eka; Stevens, Matthew P; Fong, James; Samuela, Josaia; Patel, Mahomed; Mulholland, E Kim; Russell, Fiona M; Garland, Suzanne M
2011-09-01
There is currently limited information about human papillomavirus (HPV) genotype distribution among women in the South Pacific region. This study's objective was to determine the HPV genotypes present in cervical cancer (CC) and precancers (cervical intraepithelial neoplasia (CIN) 3) in Fiji. A cross-sectional analysis evaluated archival CC and CIN3 biopsy samples from 296 women of Melanesian Fijian ethnicity (n=182, 61.5%) and Indo-Fijian ethnicity (n=114, 38.5%). HPV genotypes were evaluated using the INNO-LiPA assay in archival samples from CC (n=174) and CIN3 (n=122) among women in Fiji over a 5-year period from 2003 to 2007. Overall, 99% of the specimens tested were HPV DNA-positive for high-risk genotypes, with detection rates of 100%, 97.4% and 100% in CIN3, squamous cell carcinoma (SCC) and adenosquamous carcinoma biopsies, respectively. Genotypes 16 and 18 were the most common (77%), followed by HPV 31 (4.3%). Genotype HPV 16 was the most commonly identified (59%) in CIN3 specimens, followed by HPV 31 (9%) and HPV 52 (6.6%). Multiple genotypes were detected in 12.5-33.3% of specimens, depending on the pathology. These results indicate that the two most prevalent CC-associated HPV genotypes in Fiji parallel those described in other regions worldwide, with genotype variations thereafter. These data suggest that the currently available bivalent and quadrivalent HPV vaccines could potentially reduce cervical cancers in Fiji by over 80% and reduce precancers by at least 60%.
Masic, Izet; Milinovic, Katarina
2012-01-01
Most medical journals now have an electronic version available over public networks. Although printed and electronic versions often exist in parallel, the two forms need not be published simultaneously. The electronic version of a journal can be published a few weeks before the printed form and need not have identical content. The electronic form of a journal may have extensions that the printed form cannot contain, such as animation, 3D displays, etc., and may offer the full text, mostly in PDF or XML format, or just the table of contents or a summary. Access to the full text is usually not free and can be obtained only if the institution (library or host) enters into an access agreement. Many medical journals, however, provide free access to some articles, or to the complete content after a certain time (after 6 months or a year). Such journals can be found through network archives such as HighWire Press and Free Medical Journals.com. Particular mention must be made of PubMed and PubMed Central, the first public digital archives offering unrestricted collections of available medical literature, which operate within the system of the National Library of Medicine in Bethesda (USA). There are also so-called online medical journals published only in electronic form, which can be searched through online databases. In this paper the authors briefly describe about 30 databases and give short instructions on how to access and search for published papers in indexed medical journals. PMID:23322957
75 FR 12573 - Advisory Committee on the Electronic Records Archives (ACERA)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-16
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA) AGENCY: National Archives and Records Administration. ACTION: Notice of meeting. SUMMARY: In... and Records Administration (NARA) announces a...
Mega-precovery and data mining of near-Earth asteroids and other Solar System objects
NASA Astrophysics Data System (ADS)
Popescu, M.; Vaduvescu, O.; Char, F.; Curelaru, L.; Euronear Team
2014-07-01
The vast collection of CCD images and photographic plate archives available from observatories world-wide is still insufficiently exploited. Within the EURONEAR project we designed two data-mining tools to search very large collections of archives for images that serendipitously include known asteroids or comets in their fields, with the main aims of extending observed arcs and improving orbits. In this sense, ''Precovery'' (published in 2008, aiming to search all known NEAs in a few archives via IMCCE's SkyBoT server) and ''Mega-Precovery'' (published in 2010, querying the IMCCE's Miriade server) were made available to the community via the EURONEAR website (euronear.imcce.fr). Briefly, Mega-Precovery searches for one or a few known asteroids or comets in a mega-collection of millions of images from some of the largest observatory archives: ESO (15 instruments served by the ESO Archive, including VLT), NVO (8 instruments served by the U.S. NVO Archive), CADC (11 instruments, including HST and Gemini), plus other important instrument archives: SDSS, CFHTLS, INT-WFC, Subaru-SuprimeCam and AAT-WFI, adding up to 39 instruments and 4.3 million images (Mar 2014), and our Mega-Archive is growing. Here we present some of the most important results obtained with our data-mining software and some newly planned search options for Mega-Precovery. In particular, the following capabilities will be added soon: the ING archive (all imaging cameras) will be included, and new search options will be made available (such as query by orbital elements and by observations) to target new Solar System objects such as Virtual Impactors, bolides, planetary satellites, and TNOs (besides the comets added recently). To better characterize the archives, we introduce the ''AOmegaA'' factor (archival etendue), proportional to the AOmega (etendue) and the number of images in an archive.
With the aim to enlarge the Mega-Archive database, we invite the observatories (particularly those storing their images online and also those that own plate archives which could be scanned on request) to contact us in order to add their instrument archives (consisting of an ASCII file with telescope pointings in a simple format) to our Mega-Precovery open project. We intend for the future to synchronise our service with the Virtual Observatory.
A generic archive protocol and an implementation
NASA Technical Reports Server (NTRS)
Jordan, J. M.; Jennings, D. G.; Mcglynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.
1992-01-01
Archiving vast amounts of data has become a major part of every scientific space mission today. The Generic Archive/Retrieval Services Protocol (GRASP) addresses the question of how to archive the data collected in an environment where the underlying hardware archives may be rapidly changing. GRASP is a device-independent specification defining a set of functions for storing and retrieving data from an archive, as well as other support functions. GRASP is divided into two levels: the Transfer Interface and the Action Interface. The Transfer Interface is computer/archive-independent code, while the Action Interface contains code dedicated to each archive/computer addressed. Implementations of the GRASP specification are currently available for DECstations running Ultrix, Sparcstations running SunOS, and microVAX/VAXstation 3100s. The underlying archive is assumed to function as a standard Unix or VMS file system. The code, written in C, is a single suite of files. Preprocessing commands define the machine-unique code sections in the device interface. The implementation was written, to the greatest extent possible, using only ANSI standard C functions.
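The two-level split the abstract describes can be illustrated with a short sketch. This is not the actual GRASP C API; the class and method names below are hypothetical, chosen only to show how a device-independent transfer layer can sit atop a swappable, device-specific action layer:

```python
from abc import ABC, abstractmethod
import os


class ActionInterface(ABC):
    """Device-specific layer: one implementation per archive/computer pairing."""

    @abstractmethod
    def write(self, name: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, name: str) -> bytes: ...


class FileSystemActions(ActionInterface):
    """Backs the archive with an ordinary file system, as GRASP assumes."""

    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def write(self, name: str, data: bytes) -> None:
        with open(os.path.join(self.root, name), "wb") as f:
            f.write(data)

    def read(self, name: str) -> bytes:
        with open(os.path.join(self.root, name), "rb") as f:
            return f.read()


class TransferInterface:
    """Device-independent layer: callers store and retrieve data
    without knowing which archive hardware sits underneath."""

    def __init__(self, actions: ActionInterface):
        self.actions = actions

    def archive(self, name: str, data: bytes) -> None:
        self.actions.write(name, data)

    def retrieve(self, name: str) -> bytes:
        return self.actions.read(name)
```

Swapping the archive hardware then means supplying a different `ActionInterface` subclass; code written against `TransferInterface` is untouched, which is the portability property the protocol is after.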
High-Performance Design Patterns for Modern Fortran
Haveraaen, Magne; Morris, Karla; Rouson, Damian; ...
2015-01-01
This paper presents ideas for using coordinate-free numerics in modern Fortran to achieve code flexibility in the partial differential equation (PDE) domain. We also show how Fortran, over the last few decades, has changed to become a language well-suited for state-of-the-art software development. Fortran's new coarray distributed data structure, the language's class mechanism, and its side-effect-free, pure procedure capability provide the scaffolding on which we implement HPC software. These features empower compilers to organize parallel computations with efficient communication. We present some programming patterns that support asynchronous evaluation of expressions comprised of parallel operations on distributed data. We implemented these patterns using coarrays and the message passing interface (MPI), and compared the codes' complexity and performance. The MPI code is much more complex and depends on external libraries. The MPI code on Cray hardware using the Cray compiler is 1.5-2 times faster than the coarray code on the same hardware. The Intel compiler implements coarrays atop Intel's MPI library, with the result apparently being 2-2.5 times slower than manually coded MPI despite exhibiting nearly linear scaling efficiency. As compilers mature and further improvements to coarrays come in Fortran 2015, we expect this performance gap to narrow.
Extensional tectonics and collapse structures in the Suez Rift (Egypt)
NASA Technical Reports Server (NTRS)
Chenet, P. Y.; Colletta, B.; Desforges, G.; Ousset, E.; Zaghloul, E. A.
1985-01-01
The Suez Rift is a 300 km long and 50 to 80 km wide basin which cuts a granitic and metamorphic shield of Precambrian age, covered by sediments of Paleozoic to Paleogene age. The rift structure is dominated by tilted blocks bounded by NW-SE normal faults. The reconstruction of the paleostresses indicates a N 050 extension during the whole stage of rifting. Rifting began 24 My ago with dike intrusions; main faulting and subsidence occurred during the Early Miocene, producing an 80 km wide basin (Clysmic Gulf). During Pliocene and Quaternary times, faulting remained active but subsidence was restricted to a narrower area (present Gulf). On the eastern margin of the gulf, two sets of fault trends are predominant: (1) N 140 to 150 E faults parallel to the gulf trend with pure dip-slip displacement; and (2) cross faults, oriented N 00 to N 30 E, that have a strike-slip component consistent with the N 050 E distensive stress regime. The mean dip of the cross faults is steeper (70 to 80 deg) than the dip of the faults parallel to the Gulf (30 to 70 deg). These two sets of faults define diamond-shaped tilted blocks. The difference in mechanical behavior between the basement rocks and the overlying sedimentary cover caused structural disharmony and distinct fault geometries.
Scaling of Electron Heating During Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Ohia, O.; Le, A.; Daughton, W. S.; Egedal, J.
2016-12-01
While magnetic reconnection plays a major role in accelerating and heating magnetospheric plasma, it remains poorly understood how the level of particle energization depends on the plasma conditions. Meanwhile, a recent survey of THEMIS magnetopause reconnection observations [Phan et al. GRL 2013] and a numerical study [Shay et al. PoP 2014] found empirically that the electron heating scales with the square of the upstream Alfven speed. Equivalently for weak guide fields, the fractional electron temperature increase is inversely proportional to the upstream electron beta (ratio of electron to magnetic pressure). We present models for symmetric reconnection with moderate [Ohia et al., GRL 2015] or zero guide field that predict the electron bulk heating. In the models, adiabatically trapped electrons gain energy from parallel electric fields in the inflowing region. For purely anti-parallel reconnection, meandering electrons receive additional energy from the reconnection electric field. The predicted scalings are in quantitative agreement with fluid and kinetic simulations, as well as spacecraft observations. Using kinetic simulations, we extend this work to explore how the layer dynamics and electron bulk heating vary as functions of the magnetic shear and plasma and magnetic pressure asymmetry across the reconnection layer. These results are pertinent to recent Magnetospheric Multiscale (MMS) Mission measurements of electron dynamics during dayside magnetopause reconnection.
NASA Astrophysics Data System (ADS)
Yager-Elorriaga, D. A.; Lau, Y. Y.; Zhang, P.; Campbell, P. C.; Steiner, A. M.; Jordan, N. M.; McBride, R. D.; Gilgenbach, R. M.
2018-05-01
In this paper, we present experimental results on axially magnetized (Bz = 0.5 - 2.0 T), thin-foil (400 nm-thick) cylindrical liner-plasmas driven with ˜600 kA by the Michigan Accelerator for Inductive Z-Pinch Experiments, which is a linear transformer driver at the University of Michigan. We show that: (1) the applied axial magnetic field, irrespective of its direction (e.g., parallel or anti-parallel to the flow of current), reduces the instability amplitude for pure magnetohydrodynamic (MHD) modes [defined as modes devoid of the acceleration-driven magneto-Rayleigh-Taylor (MRT) instability]; (2) axially magnetized, imploding liners (where MHD modes couple to MRT) generate m = 1 or m = 2 helical modes that persist from the implosion to the subsequent explosion stage; (3) the merging of instability structures is a mechanism that enables the appearance of an exponential instability growth rate for a longer than expected time-period; and (4) an inverse cascade in both the axial and azimuthal wavenumbers, k and m, may be responsible for the final m = 2 helical structure observed in our experiments. These experiments are particularly relevant to the magnetized liner inertial fusion program pursued at Sandia National Laboratories, where helical instabilities have been observed.
FX-87 performance measurements: data-flow implementation. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammel, R.T.; Gifford, D.K.
1988-11-01
This report documents a series of experiments performed to explore the thesis that the FX-87 effect system permits a compiler to schedule imperative programs (i.e., programs that may contain side-effects) for execution on a parallel computer. The authors analyze how much the FX-87 static effect system can improve the execution times of five benchmark programs on a parallel graph interpreter. Three of their benchmark programs do not use side-effects (factorial, fibonacci, and polynomial division) and thus did not have any effect-induced constraints. Their FX-87 performance was comparable to their performance in a purely functional language. Two of the benchmark programs use side effects (DNA sequence matching and Scheme interpretation) and the compiler was able to use effect information to reduce their execution times by factors of 1.7 to 5.4 when compared with sequential execution times. These results support the thesis that a static effect system is a powerful tool for compilation to multiprocessor computers. However, the graph interpreter we used was based on unrealistic assumptions, and thus our results may not accurately reflect the performance of a practical FX-87 implementation. The results also suggest that conventional loop analysis would complement the FX-87 effect system.
NASA Astrophysics Data System (ADS)
Khan, Muhammad Riaz; Zaib, Sumera; Rauf, Muhammad Khawar; Ebihara, Masahiro; Badshah, Amin; Zahid, Muhammad; Nadeem, Muhammad Arif; Iqbal, Jamshed
2018-07-01
An efficient and facile microwave-assisted solution-phase parallel synthesis of a 38-member library of N-aroyl-N‧-aryl thioureas was accomplished successfully. These analogues (1-38) were synthesized under an identical set of conditions. The reaction time was drastically reduced from the 8-12 h of conventional methods to only 10-15 min. The products obtained were more than 98% pure, as characterized by elemental analysis along with FT-IR and 1H, 13C NMR. Solid-phase structural analysis was accomplished by single-crystal XRD. The urease inhibitory potential of the synthetic compounds was tested, and the compounds were found to inhibit urease to a moderate to significant degree. Compound 17 was the most potent inhibitor of urease, with an IC50 value of 0.17 ± 0.1 μM. To check the cytotoxic profile of the derivatives, lung cancer cell lines were used. Cytotoxicity analysis revealed remarkable toxicity of the compounds against the tested lung carcinoma cells, with variation in inhibition activity attributable to the attached substituents. Molecular docking studies were carried out to identify the possible binding modes of the potent inhibitors in the active site of the enzyme. The results suggest that the compounds can be further investigated for use against different cancers.
NASA Astrophysics Data System (ADS)
Sawada, Ikuo
2012-10-01
We measured the radial distribution of electron density in a 200 mm parallel plate CCP and compared it with results from numerical simulations. The experiments were conducted with pure Ar gas with pressures ranging from 15 to 100 mTorr and 60 MHz applied at the top electrode with powers from 500 to 2000 W. The measured electron profile is peaked in the center, and the relative non-uniformity is higher at 100 mTorr than at 15 mTorr. We compare the experimental results with simulations with both HPEM and Monte-Carlo/PIC codes. In HPEM simulations, we used either fluid or electron Monte-Carlo module, and the Poisson or the Electromagnetic solver. None of the models were able to duplicate the experimental results quantitatively. However, HPEM with the electron Monte-Carlo module and PIC qualitatively matched the experimental results. We will discuss the results from these models and how they illuminate the mechanism of the enhanced electron central peak. [1] T. Oshita, M. Matsukuma, S.Y. Kang, I. Sawada: The effect of non-uniform RF voltage in a CCP discharge, The 57th JSAP Spring Meeting, 2010. [2] I. Sawada, K. Matsuzaki, S.Y. Kang, T. Ohshita, M. Kawakami, S. Segawa: 1st IC-PLANTS, 2008.
NASA Astrophysics Data System (ADS)
Layes, Vincent; Monje, Sascha; Corbella, Carles; Schulz-von der Gathen, Volker; von Keudell, Achim; de los Arcos, Teresa
2017-05-01
In-vacuum characterization of magnetron targets after High Power Impulse Magnetron Sputtering (HiPIMS) has been performed by X-ray photoelectron spectroscopy (XPS). Al-Cr composite targets (circular, 50 mm diameter) mounted in two different geometries were investigated: an Al target with a small Cr disk embedded at the racetrack position and a Cr target with a small Al disk embedded at the racetrack position. The HiPIMS discharge and the target surface composition were characterized in parallel for low, intermediate, and high power conditions, thus covering both the Ar-dominated and the metal-dominated HiPIMS regimes. The HiPIMS plasma was investigated using optical emission spectroscopy and fast imaging using a CCD camera; the spatially resolved XPS surface characterization was performed after in-vacuum transfer of the magnetron target to the XPS chamber. This parallel evaluation showed that (i) target redeposition of sputtered species was markedly more effective for Cr atoms than for Al atoms; (ii) oxidation at the target racetrack was observed even though the discharge ran in pure Ar gas without O2 admixture, the oxidation depended on the discharge power and target composition; and (iii) a bright emission spot fixed on top of the inserted Cr disk appeared for high power conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Elman, Howard; Shuttleworth, Robert R.
2007-04-01
In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two and three-dimensional steady state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source-code is available. The set of source-code interfaces supported by is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dettmer, Simon L.; Keyser, Ulrich F.; Pagliara, Stefano
In this article we present methods for measuring hindered Brownian motion in the confinement of complex 3D geometries using digital video microscopy. Here we discuss essential features of automated 3D particle tracking as well as diffusion data analysis. By introducing local mean squared displacement-vs-time curves, we are able to simultaneously measure the spatial dependence of diffusion coefficients, tracking accuracies and drift velocities. Such local measurements allow a more detailed and appropriate description of strongly heterogeneous systems as opposed to global measurements. Finite size effects of the tracking region on measuring mean squared displacements are also discussed. The use of these methods was crucial for the measurement of the diffusive behavior of spherical polystyrene particles (505 nm diameter) in a microfluidic chip. The particles explored an array of parallel channels with different cross sections as well as the bulk reservoirs. For this experiment we present the measurement of local tracking accuracies in all three axial directions as well as the diffusivity parallel to the channel axis while we observed no significant flow but purely Brownian motion. Finally, the presented algorithm is suitable also for tracking of fluorescently labeled particles and particles driven by an external force, e.g., electrokinetic or dielectrophoretic forces.
Sánchez-De la Torre, Fernando; De la Rosa, Javier Rivera; Kharisov, Boris I; Lucio-Ortiz, Carlos J
2013-09-30
Ni- and Cu/alumina powders were prepared and characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM), and N₂ physisorption isotherms were also determined. The Ni/Al₂O₃ sample revealed agglomerates (1 μm) of Ni nanoparticles (30-80 nm); however, NiO particles were also identified, probably because of the low temperature of the H₂ reduction treatment (350 °C). The Cu/Al₂O₃ sample presented agglomerates (1-1.5 μm) of nanoparticles (70-150 nm), but only of pure copper. The two surface morphologies differed, but both resulted in mesoporous materials, with higher specificity for the Ni sample. The surfaces were used in a new approach for producing copper and nickel phthalocyanines using a parallel-plate reactor. Phthalonitrile was used, and the metallic particles deposited on alumina were placed in an ethanol solution with CH₃ONa at low temperatures (≤60 °C). Mass transfer was evaluated in reaction testing with a recent three-resistance model, and the kinetics were studied with a Langmuir-Hinshelwood model. The activation energy and Thiele modulus revealed a slow surface reaction. The nickel sample was the most active, influenced by the NiO morphology and phthalonitrile adsorption.
76 FR 52991 - Renewal of Advisory Committee on Electronic Records Archives
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-24
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Renewal of Advisory Committee on Electronic Records... Records Administration's (NARA) Advisory Committee on Electronic Records Archives. In accordance with... Committee on Electronic Records Archives in NARA's ceiling of discretionary advisory committees. FOR FURTHER...
76 FR 46855 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-03
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Agency Information Collection Activities: Proposed Collection; Comment Request AGENCY: National Archives and Records Administration (NARA). ACTION: Notice... original archival records in a National Archives and Records Administration facility. The public is invited...
ERIC Educational Resources Information Center
Clement, Tanya; Hagenmaier, Wendy; Knies, Jennie Levine
2013-01-01
With this piece, we seek to interrogate the sites at which library, archival, and scholarly work occurs in order to consider the changing nature of the future of the archive. First, we consider the work of the archive from the perspective of the long-standing tradition of scholarly publication and scholarly editing in archives and libraries.…
NASA Technical Reports Server (NTRS)
Singley, P. T.; Bell, J. D.; Daugherty, P. F.; Hubbs, C. A.; Tuggle, J. G.
1993-01-01
This paper will discuss user interface development and the structure and use of metadata for the Atmospheric Radiation Measurement (ARM) Archive. The ARM Archive, located at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, is the data repository for the U.S. Department of Energy's (DOE's) ARM Project. After a short description of the ARM Project and the ARM Archive's role, we will consider the philosophy and goals, constraints, and prototype implementation of the user interface for the archive. We will also describe the metadata that are stored at the archive and support the user interface.
The Self-Organized Archive: SPASE, PDS and Archive Cooperatives
NASA Astrophysics Data System (ADS)
King, T. A.; Hughes, J. S.; Roberts, D. A.; Walker, R. J.; Joy, S. P.
2005-05-01
Information systems with high quality metadata enable uses and services which often go beyond the original purpose. There are two types of metadata: annotations, which are items that comment on or describe the content of a resource, and identification attributes, which describe the external properties of the resource itself. For example, annotations may indicate which columns are present in a table of data, whereas an identification attribute would indicate the source of the table, such as the observatory, instrument, organization, and data type. When the identification attributes are collected and used as the basis of a search engine, a user can constrain on an attribute, and the archive can then self-organize around the constraint, presenting the user with a particular view of the archive. In an archive cooperative where each participating data system or archive may have its own metadata standards, providing a multi-system search engine requires that individual archive metadata be mapped to a broad-based standard. To explore how cooperative archives can form a larger self-organized archive we will show how the Space Physics Archive Search and Extract (SPASE) data model will allow different systems to create a cooperative, and will use the Planetary Data System (PDS) plus existing space physics activities as a demonstration.
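The self-organization described here, collecting identification attributes and letting the archive regroup around a user's constraint, amounts to a faceted index. A minimal sketch (the attribute names and records below are invented for illustration and are not the actual SPASE data model):

```python
# Each resource carries identification attributes (external properties),
# as opposed to annotations describing its content.
resources = [
    {"id": "obs-1", "observatory": "Cluster", "data_type": "magnetic_field"},
    {"id": "obs-2", "observatory": "Ulysses", "data_type": "plasma"},
    {"id": "obs-3", "observatory": "Cluster", "data_type": "plasma"},
]


def facet_view(resources, **constraints):
    """Return the archive 'view' satisfying every attribute constraint."""
    return [r for r in resources
            if all(r.get(k) == v for k, v in constraints.items())]


def facet_counts(resources, attribute):
    """Self-organize: group the archive by one identification attribute."""
    counts = {}
    for r in resources:
        key = r.get(attribute)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Constraining on `observatory="Cluster"` yields one view of the archive; asking for `facet_counts` over `data_type` yields another. In a cooperative, each member archive's native metadata would first be mapped onto the shared attribute vocabulary so that one such index can span all of them.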
Interoperability at ESA Heliophysics Science Archives: IVOA, HAPI and other implementations
NASA Astrophysics Data System (ADS)
Martinez-Garcia, B.; Cook, J. P.; Perez, H.; Fernandez, M.; De Teodoro, P.; Osuna, P.; Arnaud, M.; Arviset, C.
2017-12-01
The data of ESA heliophysics science missions are preserved at the ESAC Science Data Centre (ESDC). The ESDC aims for the long-term preservation of those data, which include missions such as Ulysses, Soho, Proba-2, Cluster, Double Star, and in the future, Solar Orbiter. Scientists have access to these data through web services, command line and graphical user interfaces for each of the corresponding science mission archives. The International Virtual Observatory Alliance (IVOA) provides technical standards that allow interoperability among the different systems that implement them. By adopting some IVOA standards, the ESA heliophysics archives are able to share their data with those tools and services that are VO-compatible. Implementations of those standards can be found in the existing archives: the Ulysses Final Archive (UFA) and the Soho Science Archive (SSA) already make use of the VOTable format definition and the Simple Application Messaging Protocol (SAMP). For re-engineered or new archives, the implementation of services through the Table Access Protocol (TAP) or the Universal Worker Service (UWS) will leverage this interoperability. This will be the case for the Proba-2 Science Archive (P2SA) and the Solar Orbiter Archive (SOAR). We present here the IVOA standards already used by the ESA heliophysics archives and the work that is ongoing.
ROSETTA: How to archive more than 10 years of mission
NASA Astrophysics Data System (ADS)
Barthelemy, Maud; Heather, D.; Grotheer, E.; Besse, S.; Andres, R.; Vallejo, F.; Barnes, T.; Kolokolova, L.; O'Rourke, L.; Fraga, D.; A'Hearn, M. F.; Martin, P.; Taylor, M. G. G. T.
2018-01-01
The Rosetta spacecraft was launched in 2004 and, after several planetary and two asteroid fly-bys, arrived at comet 67P/Churyumov-Gerasimenko in August 2014. After escorting the comet for two years and executing its scientific observations, the mission ended on 30 September 2016 through a touch down on the comet surface. This paper describes how the Planetary Science Archive (PSA) and the Planetary Data System - Small Bodies Node (PDS-SBN) worked with the Rosetta instrument teams to prepare the science data collected over the course of the Rosetta mission for inclusion in the science archive. As Rosetta is an international mission in collaboration between ESA and NASA, all science data from the mission are fully archived within both the PSA and the PDS. The Rosetta archiving process, supporting tools, archiving systems, and their evolution throughout the mission are described, along with a discussion of a number of the challenges faced during the Rosetta implementation. The paper then presents the current status of the archive for each of the science instruments, before looking to the improvements planned both for the archive itself and for the Rosetta data content. The lessons learned from the first 13 years of archiving on Rosetta are finally discussed with an aim to help future missions plan and implement their science archives.
Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Sahraei, S.
2016-12-01
Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving an MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with a pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
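The two archiving strategies being compared can be sketched concretely. The following is an illustrative minimization example, not the authors' implementation; function names and the tie-breaking choices (keep the incumbent unless the newcomer dominates it) are assumptions:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def epsilon_box(obj, eps):
    """Grid-cell index of a solution under epsilon dominance."""
    return tuple(int(o // e) for o, e in zip(obj, eps))


def epsilon_archive(solutions, eps):
    """Epsilon archiving: keep at most one solution per epsilon grid cell,
    then drop cell representatives dominated by another representative."""
    boxes = {}
    for s in solutions:
        b = epsilon_box(s, eps)
        if b not in boxes or dominates(s, boxes[b]):
            boxes[b] = s
    reps = list(boxes.values())
    return [s for s in reps if not any(dominates(t, s) for t in reps if t is not s)]


def rounded_archive(solutions, eps):
    """Alternative approach: round every objective to the desired precision
    first, then archive the non-dominated set of rounded vectors."""
    rounded = {tuple(round(o / e) * e for o, e in zip(s, eps)) for s in solutions}
    return [s for s in rounded if not any(dominates(t, s) for t in rounded if t != s)]
```

Both routines collapse solutions whose objectives differ by less than the desired precision; the rounding variant avoids maintaining the grid-cell bookkeeping, which is the efficiency argument the abstract makes.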
75 FR 63208 - Advisory Committee on the Electronic Records Archives (ACERA)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-14
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA) AGENCY: National Archives and Records Administration. ACTION: Notice of meeting. SUMMARY: In... and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic...
76 FR 65218 - Advisory Committee on the Electronic Records Archives (ACERA)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-20
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives (ACERA) AGENCY: National Archives and Records Administration. ACTION: Notice of meeting. SUMMARY: In... and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic...
Bank, Steffen; Nexø, Bjørn Andersen; Andersen, Vibeke; Vogel, Ulla; Andersen, Paal Skytt
2013-06-01
The recovery of biological samples for genetic epidemiological studies can be cumbersome. Blood clots are routinely collected for serological examinations. However, the extraction of DNA from blood clots can be difficult and often results in low yields. The aim was to compare the efficiency of commercial purification kits for extracting DNA from long-term frozen clotted blood. Serum tubes with clotted blood were stored at -20°C for 1 to 2.5 years before DNA extraction. DNA was extracted from 10 blood clot samples using PureGene (Qiagen) with and without glycogen, the QIAamp DNA Micro kit (Qiagen), and the Nucleospin 96 Blood kit (Macherey-Nagel). Furthermore, blood clots from 1055 inflammatory bowel disease patients were purified using the Maxwell 16 Blood purification kit (Promega). The DNA was extracted according to the manufacturers' instructions, and real-time PCR and the A(260)/A(280) ratio were used to evaluate the quality of the extracted DNA. The highest DNA yield was obtained by the Maxwell 16 Blood purification kit (Promega), with a median of 4.90 μg (range 0.8-25 μg) per 300 μL total blood. PureGene with glycogen (Qiagen) had the second highest yield, with a median of 0.65 μg (range 0.5-2.6 μg) per 300 μL total blood. The yield obtained by the different commercial kits varied considerably. Our work demonstrates that high-quality and high-quantity DNA can be extracted with the Maxwell 16 Blood purification kit (Promega) from cryopreserved blood clots, even after prolonged storage. The recovered DNA served as a reliable PCR template for single-nucleotide polymorphism assays.
David, Semjén; András, Farkas; Endre, Kalman; Balint, Kaszas; Árpad, Kovács; Csaba, Pusztai; Karoly, Szuhai; Tamás, Tornóczky
2017-07-01
Benign testicular teratomas have long been thought of as pediatric neoplasms, and previously all teratoid tumors in the adult testis were regarded as malignant. Recently, three publications reported benign testicular teratomas in adulthood, and the latest WHO classification refers to them as "prepubertal-type teratomas" which rarely appear in adulthood. These neoplasms behave benignly and seemingly analogously, regardless of whether they appear in pre- or postpubertal patients. The aim of our study was to investigate the frequency of benign testicular teratomas both in children and adults. 593 cases of testicular neoplasms were found over a period of 17 years, from 1998 to 2014, in the archive of our department (Department of Pathology, Medical Center, Pécs University). 543 cases were diagnosed as germ cell tumors, all of which were further evaluated in conjunction with the available clinical data. Of all germ cell tumor cases, 14 (2.5%) were pure teratomas. Ten of the 14 were the WHO-defined "conventional" teratoma; 4 of the 14 were the benign, so-called prepubertal type, of which three occurred in adult patients. Only one of the 14 occurred in childhood, indicating that benign prepubertal-type teratomas, generally regarded as childhood tumors, are more frequently detected in adults than in children. Benign adult testicular teratomas comprised 21% of all pure teratoma cases in our series. Practitioners in the field have to be aware of their existence in adulthood as well, to avoid overtreatment and not expose patients to unnecessary chemotherapy, retroperitoneal lymphadenectomy (RLA), and the potential complications of these interventions.
Royzman, Edward; Atanasov, Pavel; Landy, Justin F; Parks, Amanda; Gepty, Andrew
2014-10-01
The CAD triad hypothesis (Rozin, Lowery, Imada, & Haidt, 1999) stipulates that, cross-culturally, people feel anger for violations of autonomy, contempt for violations of community, and disgust for violations of divinity. Although the disgust-divinity link has received some measure of empirical support, the results have been difficult to interpret in light of several conceptual and design flaws. Taking a revised methodological approach, including use of newly validated (Study 1), pathogen-free violations of the divinity code, we found (Study 2) little evidence of disgust-related phenomenology (nausea, gagging, loss of appetite) or action tendency (desire to move away), but much evidence of anger-linked desire to retaliate, as a major component of individuals' projected response to "pure" (pathogen-free) violations of the divinity code. Study 3 replicated these results using faces in lieu of words as a dependent measure. Concordant findings emerged from an archival study (Study 4) examining the aftermath of a real-life sacred violation-the burning of Korans by U.S. military personnel. Study 5 further corroborated these results using continuous measures based on everyday emotion terms and new variants of the divinity-pure scenarios featuring sacrilegious acts committed by a theologically irreverent member of one's own group rather than an ideologically opposed member of another group. Finally, a supplemental study found the anger-dominant attribution pattern to remain intact when the impious act being judged was the judge's own. Based on these and related results, we posit anger to be the principal emotional response to moral transgressions irrespective of the normative content involved. PsycINFO Database Record (c) 2014 APA, all rights reserved.
ERIC Educational Resources Information Center
Devarrewaere, Anthony; Roelly, Aude
2005-01-01
The Archives Departementales de la Cote-d'Or chose as a priority for its automation plan the acquisition of a search engine, to publish online archival descriptions and the library catalogue. The Archives deliberately opted for a practical approach, using for the encoding of the finding aids an automatic data export from an archival management…
NASA Astrophysics Data System (ADS)
Calcara, Massimo; Borgia, Andrea
2013-04-01
Current global warming theories have produced some benefits: among them, detailed studies of CO2 and its properties, possible applications, and perspectives, ranging from its use as a "green solvent" (for instance in the decaffeination process), to enhanced oil recovery, to the capture and storage of large amounts of CO2 in geological horizons. A great debate is therefore centred on this molecule. Another useful line of research on natural horizons is the theorised use of CO2 as the sole working fluid in Enhanced Geothermal Systems. In any case, the characteristics of CO2 should be deeply understood before injecting a molecule prone to changing its aggregation state easily at relatively shallow depth. CO2-rock interaction therefore becomes a focal point in research sectors linked in some manner to the natural or induced presence of carbon dioxide in geological horizons. Possible chemical interactions between fluids and solids have always been a central topic in defining the evolution of the system as a whole, in terms of dissolution, reactions, secondary mineral formation and, in the case of any plant, scaling. Questions arise about the presence of CO2 with host rocks, where chemical and molecular properties are strategic. CO2-rock interactions depend on the solubility capability of pure liquid and supercritical CO2, seeking and eventually quantifying its polar and/or ionic solvent capabilities. The single molecule at STP conditions is linear, with a central carbon atom and oxygen atoms on opposite sides along a straight line, in a planar arrangement. It has a quadrupolar moment due to the electronegativity difference between carbon and oxygen. As soon as CO2 forms a bond with water, it deforms even at atmospheric pressure, assuming an induced dipole moment of around 0.02 Debye. Hydrated CO2 forms a hydrophilic bond; it deforms to an angle of 178 degrees. Pure CO2 forms self-aggregates.
In the simplest case this is a dimer, with two molecules of CO2 exerting mutual attraction and forming, on first contact, a structure described as parallel or slipped-parallel, or a more stable T-shaped one. As pressure is applied, the density changes and a stable (induced) dipole moment of 0.22 Debye appears; the phase changes and the CO2 dipole moment reaches 0.85 Debye. Pure CO2, here the only liquid phase, assumes Lewis acid/base properties. Its polar solvent properties seem to be real, and some experiments have observed these characteristics. This stated, the present work presents computer-aided simulations of the chemical and physical evolution of a portion of rock with liquid and supercritical CO2, with and without water, in granite and oceanic basalt formations.
About Fermilab - History and Archives Project
78 FR 22345 - Advisory Committee on the Electronic Records Archives (ACERA)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-15
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives... and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic... United States, on technical, mission, and service issues related to the Electronic Records Archives (ERA...
77 FR 21812 - Advisory Committee on the Electronic Records Archives (ACERA).
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Advisory Committee on the Electronic Records Archives... and Records Administration (NARA) announces a meeting of the Advisory Committee on the Electronic... United States, on technical, mission, and service issues related to the Electronic Records Archives (ERA...
Stewardship of very large digital data archives
NASA Technical Reports Server (NTRS)
Savage, Patric
1991-01-01
An archive is a permanent store. There are relatively few very large digital data archives in existence. Most business records expire within five or ten years. Many kinds of business records that do have long lives are embedded in databases that are continually updated and re-issued cyclically. Also, a great many permanent business records are actually archived as microfilm, fiche, or optical disk images, their digital version being an operational convenience rather than an archive. The problems foreseen in stewarding the very large digital data archives that will accumulate during the mission of the Earth Observing System (EOS) are addressed. The paper focuses on the function of shepherding archived digital data into an endless future. Stewardship entails storing and protecting the archive and providing meaningful service to the community of users. The steward will (1) provide against loss due to physical phenomena; (2) assure that data is not lost due to storage technology obsolescence; and (3) maintain data in a current formatting methodology.
A new archival infrastructure for highly-structured astronomical data
NASA Astrophysics Data System (ADS)
Dovgan, Erik; Knapic, Cristina; Sponza, Massimo; Smareglia, Riccardo
2018-03-01
With the advent of the 2020 era of radio astronomy telescopes, the amount and format of radioastronomical data are becoming a massive, performance-critical challenge. This evolution of data models and data formats requires new data archiving techniques that allow massive, fast storage of data that can at the same time be efficiently processed. Useful expertise in efficient archiving has been obtained through data archiving for the Medicina and Noto Radio Telescopes. The presented archival infrastructure, named the Radio Archive, stores and handles various formats, such as FITS, MBFITS, and VLBI's XML, including description and ancillary files. The modeling and architecture of the archive fulfill the requirements of both data persistence and easy data discovery and exploitation. The archive already complies with the Virtual Observatory directives, so future service implementations will also be VO compliant. This article presents the Radio Archive services and tools, from data acquisition to end-user data utilization.
Gypsum ground: a new occurrence of gypsum sediment in playas of central Australia
NASA Astrophysics Data System (ADS)
Xiang Yang Chen; Bowler, James M.; Magee, John W.
1991-06-01
There are many playas (dry salt lakes) in arid central Australia (regional rainfall about 250 mm/y and pan evaporation around 3000 mm/y). Highly soluble salts, such as halite, only appear as a thin (several centimetres thick), white, ephemeral efflorescent crust on the dry surface. Gypsum is the major evaporite precipitating both at present and preserved in sediment sequences. One type of gypsum deposit forms a distinctive surface feature, which is here termed "gypsum ground". It consists of a thick (up to 80 cm) gypsum zone which rises from the surrounding smooth white playa surface and is overlain by a heaved brown crust. The gypsum zone, with an average gypsum content above 60%, consists of pure gypsum sublayers and interlayered clastic bands of sandy clay. The gypsum crystals are highly corroded, especially in the direction parallel to the c-axis and on the upper sides where illuviated clay has accumulated in corrosion hollows. Overgrowth parallel to the a- and b-axes is very common, forming highly discoidal habits. These secondary changes (corrosion and overgrowth) are well-developed in the vadose zone and absent from crystals below the long-term watertable (depth around 40 cm). These crystal characteristics indicate a rainwater leaching process. At Lake Amadeus, one of the largest playas (800 km²) of central Australia, such gypsum ground occupies 16% of the total area. The gypsum ground is interpreted as an alteration of a pre-existing gypsum deposit which probably extended across the whole playa before breaking down, leaving a playa marginal terrace and several terrace islands within the gypsum ground. This pre-existing gypsum deposit, preserved in the residual islands, consists of pure, pale, sand-sized lenticular crystals. It is believed to have been deposited during an episode of high regional watertable, causing active groundwater seepage and more frequent surface brine in the playa. 
A later fall in watertable, probably resulting from climatic change, caused the degradation of the gypsum deposit by dissolution and leaching processes. The common distribution of the gypsum ground and marginal terraces in the playas of central Australia demonstrates the extent of this hydrologic and climatic history.
Garty, Guy; Chen, Youhua; Turner, Helen C; Zhang, Jian; Lyulko, Oleksandra V; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Lawrence Yao, Y; Brenner, David J
2011-08-01
Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. The RABiT analyses fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cut-off dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day.
Control of Wind Tunnel Operations Using Neural Net Interpretation of Flow Visualization Records
NASA Technical Reports Server (NTRS)
Buggele, Alvin E.; Decker, Arthur J.
1994-01-01
Neural net control of operations in a small subsonic/transonic/supersonic wind tunnel at Lewis Research Center is discussed. The tunnel and the layout for neural net control or control by other parallel processing techniques are described. The tunnel is an affordable, multiuser platform for testing instrumentation and components, as well as parallel processing and control strategies. Neural nets have already been tested on archival schlieren and holographic visualizations from this tunnel as well as recent supersonic and transonic shadowgraphs. This paper discusses the performance of neural nets for interpreting shadowgraph images in connection with a recent exercise for tuning the tunnel in a subsonic/transonic cascade mode of operation. That mode was operated for performing wake surveys in connection with NASA's Advanced Subsonic Technology (AST) noise reduction program. The shadowgraph was presented to the neural nets as 60 by 60 pixel arrays. The outputs were tunnel parameters such as valve settings or tunnel state identifiers for selected tunnel operating points, conditions, or states. The neural nets were very sensitive, perhaps too sensitive, to shadowgraph pattern detail. However, the nets exhibited good immunity to variations in brightness, to noise, and to changes in contrast. The nets are fast enough so that ten or more can be combined per control operation to interpret flow visualization data, point sensor data, and model calculations. The pattern sensitivity of the nets will be utilized and tested to control wind tunnel operations at Mach 2.0 based on shock wave patterns.
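The mapping the record describes, from 60-by-60 shadowgraph pixel arrays to tunnel parameters such as valve settings, can be sketched as a small feed-forward net. The layer sizes, weights, and activation below are placeholders for illustration, not the trained nets from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder dimensions: a 60x60 shadowgraph in, a handful of tunnel
# parameters (e.g. valve settings or state identifiers) out.
n_in, n_hidden, n_out = 60 * 60, 32, 4

# Untrained random weights; the paper's nets were trained on archival
# schlieren/shadowgraph visualizations, which we do not have here.
W1 = rng.standard_normal((n_in, n_hidden)) * 0.01
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_out)) * 0.01
b2 = np.zeros(n_out)

def forward(image):
    """One forward pass: flatten the image, sigmoid hidden layer, linear out."""
    x = image.reshape(-1)
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))   # sigmoid hidden activations
    return h @ W2 + b2                          # estimated tunnel parameters

shadowgraph = rng.standard_normal((60, 60))     # stand-in for a real frame
params = forward(shadowgraph)
```

A forward pass this small is cheap, which is consistent with the record's point that ten or more nets can be combined per control operation.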
Garty, Guy; Chen, Youhua; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Brenner, David J.
2011-01-01
Purpose Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. Materials and methods The RABiT analyzes fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cutoff dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. Results We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Conclusions Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day. PMID:21557703
Metadata and Buckets in the Smart Object, Dumb Archive (SODA) Model
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Maly, Kurt; Croom, Delwin R., Jr.; Robbins, Steven W.
2004-01-01
We present the Smart Object, Dumb Archive (SODA) model for digital libraries (DLs), and discuss the role of metadata in SODA. The premise of the SODA model is to "push down" many of the functionalities generally associated with archives into the data objects themselves. Thus the data objects become "smarter", and the archives "dumber". In the SODA model, archives become primarily set managers, and the objects themselves negotiate and handle presentation, enforce terms and conditions, and perform data content management. Buckets are our implementation of smart objects, and da is our reference implementation for dumb archives. We also present our approach to metadata translation for buckets.
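The smart-object/dumb-archive split described above can be sketched in a few lines; the class and method names here are illustrative, not the paper's actual bucket API:

```python
# Minimal sketch of the SODA division of labor (illustrative names):
# the archive is only a set manager, while each "smart" object carries
# its own metadata, presentation logic, and terms-and-conditions checks.

class Bucket:
    """A smart object: holds its data, metadata, and access policy."""

    def __init__(self, object_id, data, metadata, public=True):
        self.object_id = object_id
        self.data = data
        self.metadata = metadata
        self.public = public

    def present(self, requester=None):
        # The object, not the archive, enforces terms and conditions
        # and decides how to render itself.
        if not self.public and requester != "owner":
            raise PermissionError("terms and conditions not met")
        return {"id": self.object_id, "metadata": self.metadata, "data": self.data}


class DumbArchive:
    """A dumb archive: essentially a set manager over buckets."""

    def __init__(self):
        self._buckets = {}

    def add(self, bucket):
        self._buckets[bucket.object_id] = bucket

    def remove(self, object_id):
        self._buckets.pop(object_id, None)

    def get(self, object_id):
        # The archive only hands back the object; all real work
        # (presentation, policy, content management) happens in it.
        return self._buckets[object_id]


archive = DumbArchive()
archive.add(Bucket("rpt-001", b"...", {"title": "Tech report"}))
record = archive.get("rpt-001").present()
```

The design choice is visible in where the `present` logic lives: moving functionality into `Bucket` is exactly the "push down" the SODA model advocates.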
Finding "Science" in the Archives of the Spanish Monarchy.
Portuondo, Maria M
2016-03-01
This essay explores the history of several archives that house the early modern records of Spanish imperial science. The modern "archival turn" urges us to think critically about archives and to recognize in the history of these collections an embedded, often implicit, history that--unless properly recognized, acknowledged, and understood--can distort the histories we are trying to tell. This essay uses a curious episode in the history of science to illustrate how Spanish archives relate to each other and shape the collections they house. During the late eighteenth century a young navy officer, Martín Fernández de Navarrete, was dispatched to all the principal archives of the Spanish monarchy with a peculiar mission: he was to search for evidence that the Spanish in fact had a scientific tradition. This essay uses his mission to explain how the original purpose of an archive--the archive's telos--may persist as a strong and potentially deterministic force in the work of historians of science. In the case of the archives discussed, this telos was shaped by issues as wide ranging as defending a nation's reputation against claims of colonial neglect and as idiosyncratic as an archivist's selection criteria.
Yang, Joshua S.; McDaniel, Patricia A.; Malone, Ruth E.
2012-01-01
Background The global community is beginning to address non-communicable diseases, but how to increase the accountability of multinational enterprises (MNEs) for the health impacts of their products and practices remains unclear. We examine the Organization for Economic Cooperation and Development’s (OECD) efforts to do so through voluntary MNE guidelines. Methods We developed a historical case study of how the OECD Guidelines for Multinational Enterprises were developed and revised from 1973–2000 through an analysis of publicly available archived OECD and tobacco industry documents. Results The first edition of the Guidelines was a purely economic instrument. Outside pressures and a desire to ward off more stringent regulatory efforts resulted in the addition over time of guidelines related to the environment, consumer interests, sustainable development, and human rights. Conclusion Despite their voluntary nature, the Guidelines can play a role in efforts to help balance the interests of MNEs and public health by providing a starting point for efforts to create binding provisions addressing MNEs’ contributions to disease burden or disease reduction. PMID:23046298
High performance Python for direct numerical simulations of turbulent flows
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Langtangen, Hans Petter
2016-06-01
Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an invaluable research tool in fluid dynamics. Still, there are few publicly available research codes and, due to the heavy number crunching implied, available codes are usually written in low-level languages such as C/C++ or Fortran. In this paper we describe a pure scientific-Python pseudo-spectral DNS code that nearly matches the performance of C++ for thousands of processors and billions of unknowns. We also describe a version optimized through Cython that is found to match the speed of C++. The solvers are written from scratch in Python, including the mesh, the MPI domain decomposition, and the temporal integrators. The solvers have been verified and benchmarked on the Shaheen supercomputer at the KAUST supercomputing laboratory, and we are able to show very good scaling up to several thousand cores. A very important part of the implementation is the mesh decomposition (we implement both slab and pencil decompositions) and the 3D parallel Fast Fourier Transform (FFT). The mesh decomposition and FFT routines have been implemented in Python using serial FFT routines (NumPy, pyFFTW, or any other serial FFT module), NumPy array manipulations, and MPI communications handled by MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT in Python for a slab mesh decomposition using 4 lines of compact Python code, for which the parallel performance on Shaheen is found to be slightly better than similar routines provided through the FFTW library. For a pencil mesh decomposition, 7 lines of code are required to execute a transform.
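The slab-decomposed parallel FFT described above follows a standard pattern: transform the two axes that are complete on every rank, perform a global transpose (an MPI Alltoall via mpi4py in the real code), then transform the remaining axis. The sketch below simulates the ranks with array slices instead of MPI processes, so the data flow is visible without an MPI launcher; the mesh size and rank count are made up for the demo:

```python
import numpy as np

# Simulated-slab demo of a 3D parallel FFT (no MPI; ranks are slices).
N = 8          # global mesh size (made up for the demo)
nprocs = 4     # simulated MPI ranks; N must be divisible by nprocs
Np = N // nprocs

rng = np.random.default_rng(0)
u = rng.standard_normal((N, N, N))

# Each "rank" owns a slab of Np planes along axis 0.
slabs = [u[r * Np:(r + 1) * Np] for r in range(nprocs)]

# Step 1: serial FFT over the two axes that are complete on every rank.
slabs_hat = [np.fft.fftn(s, axes=(1, 2)) for s in slabs]

# Step 2: global transpose -- an MPI Alltoall in the real code -- so each
# rank ends up owning complete lines along axis 0 (a slab of axis 1).
full = np.concatenate(slabs_hat, axis=0)   # stands in for the communication
tslabs = [full[:, r * Np:(r + 1) * Np, :] for r in range(nprocs)]

# Step 3: serial FFT along the now-complete axis 0.
result_slabs = [np.fft.fft(t, axis=0) for t in tslabs]

# Reassemble; recon agrees with a direct (unpartitioned) 3D FFT of u,
# since the multi-dimensional FFT is separable by axis.
recon = np.concatenate(result_slabs, axis=1)
direct = np.fft.fftn(u)
```

In the authors' actual solver the transpose is a single collective communication and the serial transforms can come from NumPy or pyFFTW, which is why the whole slab transform fits in a handful of lines.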
Stability investigations of airfoil flow by global analysis
NASA Technical Reports Server (NTRS)
Morzynski, Marek; Thiele, Frank
1992-01-01
As the result of a global, non-parallel flow stability analysis, a single value of the disturbance growth rate and the respective frequency is obtained. This complex value characterizes the stability of the whole flow configuration and is not referred to any particular flow pattern. The global analysis assures that all the flow elements (wake, boundary and shear layer) are taken into account. The physical phenomena connected with the wake instability are properly reproduced by the global analysis. This enhances the investigation of the instability of any 2-D flow, including ones in which the boundary layer instability effects are known to be of dominating importance. Assuming a fully 2-D disturbance form, the global linear stability problem is formulated. The system of partial differential equations is solved for the eigenvalues and eigenvectors. The equations, written in the pure stream function formulation, are discretized via FDM using a curvilinear coordinate system. The complex eigenvalues and corresponding eigenvectors are evaluated by an iterative method. The investigations performed for various Reynolds numbers emphasize that the wake instability develops into the Karman vortex street. This phenomenon is shown to be connected with the first mode obtained from the non-parallel flow stability analysis. The higher modes reflect different physical phenomena, for example Tollmien-Schlichting waves, originating in the boundary layer and tending to emerge as instabilities with growing Reynolds number. The investigations are carried out for a circular cylinder, an elongated ellipse, and an airfoil. It is shown that the onset of the wake instability, the waves in the boundary layer, and the shear layer instability are different solutions of the same eigenvalue problem, formulated using the non-parallel theory. The analysis offers large potential as a generalization of the methods used until now for stability analysis.
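The formulation sketched in the record leads to a generalized eigenvalue problem. The symbols below are chosen here for illustration (they are not the paper's notation): disturbances of exponential-in-time form are superposed on a steady base flow, and FDM discretization of the linearized stream-function equations gives a matrix eigenproblem.

```latex
% Disturbance ansatz: the growth rate \sigma and frequency \omega
% together form the single complex value the global analysis returns.
\psi'(x, y, t) = \hat{\psi}(x, y)\, e^{\lambda t},
\qquad \lambda = \sigma + i\,\omega .

% Discretizing the linearized equations (FDM, curvilinear grid)
% yields a generalized eigenproblem for the pairs (\lambda, \hat{\psi}):
A\,\hat{\psi} = \lambda\, B\,\hat{\psi} .
```

Any eigenvalue with $\sigma > 0$ signals global instability of the whole configuration; for the cylinder and airfoil wakes, the first such mode corresponds to the onset of the Karman vortex street at frequency $\omega$.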
A Background to Motion Picture Archives.
ERIC Educational Resources Information Center
Fletcher, James E.; Bolen, Donald L., Jr.
The emphasis of archives is on the maintenance and preservation of materials for scholarly research and professional reference. Archives may be established as separate entities or as part of a library or museum. Film archives may include camera originals (positive and negative), sound recordings, outtakes, scripts, contracts, advertising…
Digitized Archival Primary Sources in STEM: A Selected Webliography
ERIC Educational Resources Information Center
Jankowski, Amy
2017-01-01
Accessibility and findability of digitized archival resources can be a challenge, particularly for students or researchers not familiar with archival formats and digital interfaces, which adhere to different descriptive standards than more widely familiar library resources. Numerous aggregate archival collection databases exist, which provide a…
NASA Astrophysics Data System (ADS)
Verma, R. V.
2018-04-01
The Archive Inventory Management System (AIMS) is a software package for understanding the distribution, characteristics, integrity, and nuances of files and directories in large file-based data archives on a continuous basis.
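The kind of continuous inventory AIMS performs can be approximated with a short walk over an archive tree. The fields gathered below (relative path, size, modification date, MD5 checksum) are a guess at what such a system tracks, not AIMS's actual schema:

```python
import hashlib
import os
import tempfile
import time

def inventory(root):
    """Walk an archive tree and record basic integrity metadata per file.

    The chosen fields are illustrative; a production inventory system
    would track whatever its archive's policy requires.
    """
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            with open(path, "rb") as fh:
                digest = hashlib.md5(fh.read()).hexdigest()
            records.append({
                "path": os.path.relpath(path, root),
                "bytes": stat.st_size,
                "mtime": time.strftime("%Y-%m-%d", time.gmtime(stat.st_mtime)),
                "md5": digest,
            })
    return records

# Tiny demonstration on a throwaway directory (contents are made up).
demo_root = tempfile.mkdtemp()
with open(os.path.join(demo_root, "report.dat"), "wb") as fh:
    fh.write(b"archive me")
recs = inventory(demo_root)
```

Diffing two successive inventories by path and checksum is one simple way to surface the added, removed, or silently changed files that a continuous integrity check is after.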
Status of worldwide Landsat archive
Warriner, Howard W.
1987-01-01
In cooperation with the international Landsat community, and through the Landsat Technical Working Group (LTWG), NOAA is assembling information about the status of the worldwide Landsat archive. During LTWG 9, member nations agreed to participate in a survey of international Landsat data holdings and of their archive experiences with Landsat data. The goal of the effort was two-fold: one, to document the Landsat archive to date, and, two, to ensure that specific nations' experience with long-term Landsat archival problems was available to others. The survey requested details such as the amount of data held; the format of the archive holdings by spacecraft/sensor and acquisition years; the estimated costs to accumulate, process, and replace the data (if necessary); the storage space required; and any member nation's plans that would ensure continuing quality. As a group, the LTWG nations are concerned about the characteristics and reliability of long-term magnetic media storage. Each nation's experience with older data retrieval is solicited in the survey. This information will allow nations to anticipate and plan for required changes to their archival holdings. Also solicited were reports of any upgrades to a nation's archival system that are currently planned, and all results of attempts to reduce archive holdings, including methodology, current status, and the planned access rates and product support anticipated for responding to future archival usage.
You Can See Film through Digital: A Report from Where the Archiving of Motion Picture Film Stands
NASA Astrophysics Data System (ADS)
Tochigi, Akira
In recent years, digital technology has brought drastic change to the archiving of motion picture film. By collecting digital media as well as film, many conventional film archives have transformed themselves into moving image archives or audiovisual archives. As well, digital technology has expanded the possibilities for the restoration of motion picture film in comparison with conventional photochemical (analog) restoration. This paper first redefines some fundamental terms regarding the archiving of motion picture film and discusses the conditions which need consideration for film archiving in Japan. With a few examples of recent restoration projects conducted by the National Film Center of the National Museum of Modern Art, Tokyo, this paper then clarifies new challenges inherent in digital restoration and stresses the importance of a better appreciation of motion picture film.
Not the time or the place: the missing spatio-temporal link in publicly available genetic data.
Pope, Lisa C; Liggins, Libby; Keyse, Jude; Carvalho, Silvia B; Riginos, Cynthia
2015-08-01
Genetic data are being generated at unprecedented rates. Policies of many journals, institutions and funding bodies aim to ensure that these data are publicly archived so that published results are reproducible. Additionally, publicly archived data can be 'repurposed' to address new questions in the future. In 2011, along with other leading journals in ecology and evolution, Molecular Ecology implemented mandatory public data archiving (the Joint Data Archiving Policy). To evaluate the effect of this policy, we assessed the genetic, spatial and temporal data archived for 419 data sets from 289 articles in Molecular Ecology from 2009 to 2013. We then determined whether archived data could be used to reproduce analyses as presented in the manuscript. We found that the journal's mandatory archiving policy has had a substantial positive impact, increasing genetic data archiving from 49% (pre-2011) to 98% (2011-present). However, 31% of publicly archived genetic data sets could not be recreated based on information supplied in either the manuscript or public archives, with incomplete data or inconsistent codes linking genetic data and metadata as the primary reasons. While the majority of articles did provide some geographic information, 40% did not provide this information as geographic coordinates. Furthermore, a large proportion of articles did not contain any information regarding date of sampling (40%). Although the inclusion of spatio-temporal data does require an increase in effort, we argue that the enduring value of publicly accessible genetic data to the molecular ecology field is greatly compromised when such metadata are not archived alongside genetic data. © 2015 John Wiley & Sons Ltd.
Social Science Data Archives and Libraries: A View to the Future.
ERIC Educational Resources Information Center
Clark, Barton M.
1982-01-01
Discusses factors militating against integration of social science data archives and libraries in the near future, noting usage of materials, access, requisite skills of librarians, economic stability of archives, and existing structures which manage social science data archives. Role of librarians, data access tools, and cataloging of machine-readable…
Technologically Enhanced Archival Collections: Using the Buddy System
ERIC Educational Resources Information Center
Holz, Dayna
2006-01-01
Based in the context of challenges faced by archives when managing digital projects, this article explores options of looking outside the existing expertise of archives staff to find collaborative partners. In teaming up with other departments and organizations, the potential scope of traditional archival digitization projects is expanded beyond…
The Ethics of Archival Research
ERIC Educational Resources Information Center
McKee, Heidi A.; Porter, James E.
2012-01-01
What are the key ethical issues involved in conducting archival research? Based on examination of cases and interviews with leading archival researchers in composition, this article discusses several ethical questions and offers a heuristic to guide ethical decision making. Key to this process is recognizing the person-ness of archival materials.…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-10
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Public Availability of the National Archives and... Administration. ACTION: Notice of public availability of FY 2011 Service Contract Inventory. SUMMARY: In...), the National Archives and Records Administration (NARA) is publishing this notice to advise the public...
36 CFR 1253.1 - National Archives Building.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false National Archives Building... PUBLIC AVAILABILITY AND USE LOCATION OF RECORDS AND HOURS OF USE § 1253.1 National Archives Building. (a) The National Archives Building is located at 700 Pennsylvania Avenue, NW., Washington, DC 20408...
Meeting Students Where They Are: Advancing a Theory and Practice of Archives in the Classroom
ERIC Educational Resources Information Center
Saidy, Christina; Hannah, Mark; Sura, Tom
2011-01-01
This article uses theories of technical communication and archives to advance a pedagogy that includes archival production in the technical communication classroom. By developing and maintaining local classroom archives, students directly engage in valuable processes of appraisal, selection, collaboration, and retention. The anticipated outcomes…
Formalin fixation and paraffin embedding (FFPE) is a cross-industry gold standard for preparing nonclinical and clinical samples for histopathological assessment which preserves tissue architecture and enables storage of tissue in archival banks. These archival banks are an untap...
A Generic Archive Protocol and an Implementation
NASA Astrophysics Data System (ADS)
Jordan, J. M.; Jennings, D. G.; McGlynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.
1993-01-01
Archiving vast amounts of data has become a major part of every scientific space mission today. GRASP, the Generic Retrieval/Archive Services Protocol, addresses the question of how to archive the data collected in an environment where the underlying hardware archives and computer hosts may be rapidly changing.
The Preservation of Paper Collections in Archives.
ERIC Educational Resources Information Center
Adams, Cynthia Ann
The preservation methods used for paper collections in archives were studied through a survey of archives in the metropolitan Atlanta (Georgia) area. The preservation policy or program was studied, and the implications for conservators and preservation officers were noted. Twelve of 15 archives responded (response rate of 80 percent). Basic…
Resources for Archives: Developing Collections, Constituents, Colleagues, and Capital
ERIC Educational Resources Information Center
Primer, Ben
2009-01-01
The essential element for archival success is to be found in the quality of management decisions made and public services provided. Archivists can develop first-class archives operations through understanding the organizational context; planning; hiring, retaining, and developing staff; meeting archival standards for storage and access; and…
Cassini/Huygens Program Archive Plan for Science Data
NASA Technical Reports Server (NTRS)
Conners, D.
2000-01-01
The purpose of this document is to describe the Cassini/Huygens science data archive system which includes policy, roles and responsibilities, description of science and supplementary data products or data sets, metadata, documentation, software, and archive schedule and methods for archive transfer to the NASA Planetary Data System (PDS).
Records & Information Management Services | Alaska State Archives
An Introduction to Archival Automation: A RAMP Study with Guidelines.
ERIC Educational Resources Information Center
Cook, Michael
Developed under a contract with the International Council on Archives, these guidelines are designed to emphasize the role of automation techniques in archives and records services, provide an indication of existing computer systems used in different archives services and of specific computer applications at various stages of archives…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faubel, M.; Weiner, E.R.
Rotational level populations of N₂ were measured downstream from the skimmer in beams of pure N₂ and in mixtures of N₂ with He, Ne, and Ar expanded from room temperature nozzles. The range of p₀D was from 5 to 50 Torr cm. The formation of dimers and higher condensates of beam species was monitored during the runs. The effect of condensation energy release on rotational populations and parallel temperatures was readily observed. Two different methods for evaluating the rotational population distributions were compared. One method is based on a dipole-excitation model and the other on an excitation matrix obtained empirically. Neither method proved clearly superior. Both methods indicated nonequilibrium rotational populations for all of our room temperature nozzle expansion conditions. Much of the nonequilibrium character appears to be due to the behavior of the K = 2 and K = 4 levels, which may be accounted for in terms of the rotational energy level spacing. In particular, the overpopulation of the K = 4 level is explained by a near-resonant transfer of rotational energy between molecules in the K = 6 and K = 0 states, to give two molecules in the K = 4 state. Rotational and vibrational temperatures were determined for pure N₂ beams from nozzles heated up to 1700 K. The heated nozzle experiments indicated a 40% increase in the rotational collision number between 300 and 1700 K.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeBoeuf, J. L., E-mail: jerome.leboeuf@mail.mcgill.ca; Brodusch, N.; Gauvin, R.
2014-12-28
A novel method has been optimized so that adhesion layers are no longer needed to reliably deposit patterned gold structures on amorphous substrates. Using this technique allows for the fabrication of amorphous oxide templates known as micro-crucibles, which confine a vapor–liquid–solid (VLS) catalyst of nominally pure gold to a specific geometry. Within these confined templates of amorphous materials, faceted silicon crystals have been grown laterally. The novel deposition technique, which enables the nominally pure gold catalyst, involves the undercutting of an initial chromium adhesion layer. Using electron backscatter diffraction, it was found that the silicon crystals nucleated in these micro-crucibles were 30% single crystals, 45% potentially twinned crystals and 25% polycrystals for the experimental conditions used. Single, potentially twinned, and polycrystals all had an aversion to growth with the (1 0 0) surface parallel to the amorphous substrate. Closer analysis of grain boundaries of potentially twinned and polycrystalline samples revealed that the overwhelming majority of them were of the 60° Σ3 coherent twin boundary type. The large number of coherent twin boundaries present in the grown, two-dimensional silicon crystals suggests that lateral VLS growth occurs very close to thermodynamic equilibrium. It is suggested that free energy fluctuations during growth or cooling, and impurities, were the causes of this twinning.
Solar wind interaction with dusty plasmas produces instabilities and solitary structures
NASA Astrophysics Data System (ADS)
Saleem, H.; Ali, S.
2017-12-01
It is pointed out that the solar wind interaction with dusty magnetospheres of the planets can give rise to purely growing instabilities as well as nonlinear electric field structures. The linear dispersion relation of the low-frequency electrostatic ion-acoustic wave (IAW) is modified in the presence of stationary dust, and its frequency becomes larger than in a usual electron-ion plasma even if the ion temperature is equal to the electron temperature. This dust-ion-acoustic wave (DIAW) either becomes a purely growing electrostatic instability or turns out to be the modified dust-ion-acoustic wave (mDIAW), depending upon the magnitude of the shear flow scale length and its direction. The growth rate of the shear-flow-driven electrostatic instability in a plasma having negatively charged stationary dust is larger than that of the usual D'Angelo instability of electron-ion plasma. It is shown that the shear-modified dust-ion-acoustic wave (mDIAW) produces electrostatic solitons in the nonlinear regime. The fluid theory predicts the existence of electrostatic solitons in dusty plasmas in those regions where the inhomogeneous solar wind flow is parallel to the planetary or cometary magnetic field lines. The amplitude and width of the solitary structure depend upon the dust density and the magnitude of shear in the flow. This general theoretical model is applied to the dusty plasma of Saturn's F-ring for illustration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qing, E-mail: yangqing@cqu.edu.cn; Yu, Fei; Sima, Wenxia
Transformer oil-based nanofluids (NFs) with 0.03 g/L Fe₃O₄ nanoparticle content exhibit 11.2% higher positive impulse breakdown voltage levels than pure transformer oils. To study the effects of the Fe₃O₄ nanoparticles on the space charge in transformer oil and to explain why the nano-modified transformer oil exhibits improved impulse breakdown voltage characteristics, the traditional Kerr electro-optic field mapping technique is improved by increasing the length of the parallel-plate electrodes and by using a photodetector array as a high light sensitivity device. The space charge distributions of pure transformer oil and of NFs containing Fe₃O₄ nanoparticles can be measured using the improved Kerr electro-optic field mapping technique. Test results indicate a significant reduction in space charge density in the transformer oil-based NFs with the Fe₃O₄ nanoparticles. The fast electrons are captured by the nanoparticles and are converted into slow-charged particles in the NFs, which then reduce the space charge density and result in a more uniform electric field distribution. Streamer propagation in the NFs is also obstructed, and the breakdown strengths of the NFs under impulse voltage conditions are also improved.
Atomic Oxygen Lamp Cleaning Facility Fabricated and Tested
NASA Technical Reports Server (NTRS)
Sechkar, Edward A.; Stueber, Thomas J.
1999-01-01
NASA Lewis Research Center's Atomic Oxygen Lamp Cleaning Facility was designed to produce an atomic oxygen plasma within a metal halide lamp to remove carbon-based contamination. It is believed that these contaminants contribute to the high failure rate realized during the production of these lamps. The facility is designed to evacuate a metal halide lamp and produce a radio-frequency-generated atomic oxygen plasma within it. Oxygen gas, with a purity of 99.99 percent and in the pressure range of 150 to 250 mtorr, is used in the lamp for plasma generation while the lamp is being cleaned. After cleaning is complete, the lamp can be backfilled with 99.99-percent pure nitrogen and torch sealed. The facility comprises various vacuum components connected to a radiation-shielded box that encloses the bulb during operation. Radio-frequency power is applied to the two parallel plates of a capacitor, which are on either side of the lamp. The vacuum pump used, a Leybold Trivac Type D4B, has a pumping speed of 4 m3/hr, has an ultimate pressure of <8x10-4, and is specially adapted for pure oxygen service. The electronic power supply, matching network, and controller (500 W, 13.56 MHz) used to supply the radio-frequency power were purchased from RF Power Products Inc. Initial test results revealed that this facility could remove the carbon-based contamination from within bulbs.
Shen, Zhanhang; Mulholland, Kelly A; Zheng, Yujun; Wu, Chun
2017-09-01
DNA G-quadruplex structures are emerging cancer-specific targets for chemotherapeutics. Ligands that bind to and stabilize DNA G-quadruplexes have the potential to be anti-cancer drugs. Lack of binding selectivity to DNA G-quadruplex over DNA duplex remains a major challenge when attempting to develop G-quadruplex ligands into successful anti-cancer drugs. Thorough understanding of the binding nature of existing non-selective ligands that bind to both DNA quadruplex and DNA duplex will help to address this challenge. Daunomycin and doxorubicin, two commonly used anticancer drugs, are examples of non-selective DNA ligands. In this study, we extended our early all-atom binding simulation studies between doxorubicin and a DNA duplex (d(CGATCG) 2 ) to probe the binding between daunomycin and a parallel DNA quadruplex (d(TGGGGT) 4 ) and DNA duplex. In addition to the end stacking mode, which mimics the mode in the crystal structure, a pure groove binding mode was observed in our free binding simulations. The dynamic and energetic properties of these two binding modes are thoroughly examined, and a detailed comparison is made between DNA quadruplex binding modes and DNA duplex binding modes. Implications on the design of more selective DNA quadruplex ligands are also discussed. Graphical abstract Top stacking and groov binding modes from the MD simulations.
NASA Astrophysics Data System (ADS)
Esmaeili, A.; Almasi Kashi, M.; Ramazani, A.; Montazer, A. H.
2016-01-01
In this study, we report the role of the Cu additive in arrays of pulse-electrodeposited Co nanowires (NWs) with diameters from 30 to 75 nm, embedded in porous aluminum oxide templates. This highlights the role of the Cu additive in the composition and crystalline characteristics as well as in the magnetic properties of the Co NWs. Increasing the duration of the off-time between pulses during the electrodeposition of the Co NWs made it possible to increase the Cu content, so that Co-rich CoCu NWs were obtained. The parallel coercivity and squareness values increased up to 1500 Oe and 0.8 for 30 nm diameter Co94Cu6 NWs, starting from 500 Oe and 0.3 for pure Co NWs. On the other hand, although there was a substantial difference between the crystalline characteristics of 75 nm diameter pure Co and CoCu NWs, no considerable change in their magnetic properties was observed using hysteresis loop measurements. In this respect, first-order reversal curve (FORC) analysis revealed strong inter-wire magnetostatic interactions for the CoCu NWs. Moreover, we studied the effect of thermal annealing, which resulted in an increase in the coercivity of CoCu NWs with different diameters of up to 15%. As a result, the addition of a small amount of Cu provides an alternative approach to tailoring the magnetic properties of Co NWs.
Rouvre, Ingrid; Gauquelin, Charles; Meynial-Salles, Isabelle; Basseguy, Régine
2016-06-01
The influence of additional chemical molecules, necessary for the purification process of [Fe-Fe]-hydrogenase from Clostridium acetobutylicum, was studied on the anaerobic corrosion of mild steel. At the end of the purification process, the pure [Fe-Fe]-hydrogenase was recovered in a Tris-HCl medium containing three other chemicals at low concentration: DTT, dithionite and desthiobiotin. Firstly, mild steel coupons were exposed in parallel to a 0.1 M pH 7 Tris-HCl medium with or without pure hydrogenase. The results showed that hydrogenase and the additional molecules were in competition, and the electrochemical response could not be attributed solely to hydrogenase. Then, solutions of additional chemicals with different compositions were studied electrochemically. DTT polluted the electrochemical signal, increasing the Eoc by 35 mV 24 h after the injection of 300 μL of control solutions with DTT, whereas it drastically decreased the corrosion rate by increasing the charge transfer resistance (Rct reached 10 times its initial value). Thus, DTT was shown to have a strong antagonistic effect on corrosion and was removed from the purification process. An optimal composition of the medium was selected (0.5 mM dithionite, 7.5 mM desthiobiotin) that simultaneously allowed a high activity of hydrogenase and a lower impact on the electrochemical response in corrosion tests. Copyright © 2016 Elsevier B.V. All rights reserved.
Over-current carrying characteristics of rectangular-shaped YBCO thin films prepared by MOD method
NASA Astrophysics Data System (ADS)
Hotta, N.; Yokomizu, Y.; Iioka, D.; Matsumura, T.; Kumagai, T.; Yamasaki, H.; Shibuya, M.; Nitta, T.
2008-02-01
A fault current limiter (FCL) may be manufactured at competitive quality and price by using rectangular-shaped YBCO films prepared by the metal-organic deposition (MOD) method, because the MOD method can produce large-size elements with a low-cost, non-vacuum technique. Prior to constructing a superconducting FCL (SFCL), AC over-current carrying experiments were conducted on 120 mm long elements in which a YBCO thin film about 200 nm in thickness was coated on a sapphire substrate with a cerium oxide (CeO2) interlayer. In the experiments, only a single cycle of the 50 Hz AC damping current was applied to the pure YBCO element without protective metal coating or parallel resistor, and the magnitude of the current was increased step by step until breakdown phenomena occurred in the element. In each experiment, the current waveform flowing through the YBCO element and the voltage waveform across the element were measured to obtain the voltage-current characteristics. The allowable over-current and generated voltage were successfully estimated for the pure YBCO films. It can be pointed out that a lower n-value tends to bring about a higher allowable over-current and a higher withstand voltage of more than tens of volts. A YBCO film having a higher n-value is sensitive to over-current. Thus, some protective method, such as a metal coating, should be employed when applying the film to a fault current limiter.
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), a multi-scale chemical transport model used for air quality forecasting and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 v4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to a hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512-bit-wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve performance and parallel scalability. These optimisations greatly improved GNAQPMS performance. The same optimisations also work well on the Intel Xeon Broadwell processor, specifically the E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51× faster on KNL and 2.77× faster on the CPU. Moreover, the optimised version ran at 26% lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5% more efficient in power consumption than the CPU platform. The optimisations also enabled much further parallel scalability on both the CPU cluster and the KNL cluster, scaled to 40 CPU nodes and 30 KNL nodes with parallel efficiencies of 70.4% and 42.2%, respectively.
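The parallel-efficiency figures quoted in the scaling runs above follow the standard definition E = T₁ / (N · T_N), i.e. speedup divided by node count. A minimal sketch of that arithmetic; the single-node timing below is hypothetical and chosen only to illustrate the formula, while 70.4% is the CPU-cluster efficiency reported in the abstract:

```python
# Parallel efficiency: E = S / N, where the speedup S = T_1 / T_N
# compares the single-node run time T_1 with the N-node run time T_N.

def speedup(t_one_node: float, t_n_nodes: float) -> float:
    """Speedup of an N-node run relative to a single-node run."""
    return t_one_node / t_n_nodes

def parallel_efficiency(t_one_node: float, t_n_nodes: float, n_nodes: int) -> float:
    """Fraction of ideal linear scaling achieved on n_nodes."""
    return speedup(t_one_node, t_n_nodes) / n_nodes

# Hypothetical example: a 100-hour single-node job finishing in 3.55 hours
# on 40 nodes corresponds to roughly the 70.4% efficiency reported for the
# GNAQPMS CPU cluster.
eff = parallel_efficiency(100.0, 3.55, 40)
```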
A comprehensive cost model for NASA data archiving
NASA Technical Reports Server (NTRS)
Green, J. L.; Klenk, K. F.; Treinish, L. A.
1990-01-01
A simple archive cost model has been developed to help predict NASA's archiving costs. The model covers data management activities from the beginning of the mission through launch, acquisition, and support of retrospective users by the long-term archive; it is capable of determining the life cycle costs for archived data depending on how the data need to be managed to meet user requirements. The model, which currently contains 48 equations with a menu-driven user interface, is available for use on an IBM PC or AT.
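The life-cycle framing described above, with costs accruing from ingest through long-term support of retrospective users, can be sketched as a toy model. The cost categories and rates below are hypothetical placeholders for illustration, not the actual 48 equations of the NASA model:

```python
# Toy life-cycle archive cost model: yearly ingest, cumulative storage,
# and flat retrospective-user support. All rates are hypothetical.

def archive_life_cycle_cost(
    ingest_tb_per_yr: float,              # data volume ingested each year, TB
    years: int,                           # archive lifetime in years
    ingest_cost_per_tb: float = 50.0,     # hypothetical one-time ingest cost
    storage_cost_per_tb_yr: float = 10.0, # hypothetical yearly storage cost
    user_support_per_yr: float = 2000.0,  # hypothetical user-support cost
) -> float:
    """Total cost over the archive lifetime under the toy assumptions."""
    total = 0.0
    held_tb = 0.0
    for _ in range(years):
        held_tb += ingest_tb_per_yr
        total += ingest_tb_per_yr * ingest_cost_per_tb  # acquisition
        total += held_tb * storage_cost_per_tb_yr       # cumulative storage
        total += user_support_per_yr                    # retrospective users
    return total
```

The storage term grows with the cumulative holdings, which is why life-cycle costs depend strongly on how long data must be kept and how they are managed to meet user requirements.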
The Emirates Space Data Center, a PDS4-Compliant Data Archive
NASA Astrophysics Data System (ADS)
DeWolfe, A. W.; Al Hammadi, O.; Amiri, S.
2017-12-01
As part of the UAE's Emirates Mars Mission (EMM), we are constructing a data archive to preserve and distribute science data from this and future missions. The archive will be publicly accessible and will provide access to Level 2 and 3 science data products from EMM, as well as ancillary data such as SPICE kernels and mission event timelines. As a member of the International Planetary Data Alliance (IPDA), the UAE has committed to making its archive PDS4-compatible, and maintaining the archive beyond the end of the mission. EMM is scheduled to begin collecting science data in spring 2021, and the archive is expected to begin releasing data in September 2021.
The Gran Telescopio Canarias and Calar Alto Virtual Observatory Compliant Archives
NASA Astrophysics Data System (ADS)
Alacid, J. M.; Solano, E.; Jiménez-Esteban, F. M.; Velasco, A.
2014-05-01
The Gran Telescopio Canarias and Calar Alto archives are the result of the collaboration agreements between the Centro de Astrobiología and two entities: GRANTECAN S.A. and the Centro Astronómico Hispano Alemán (CAHA). The archives have been developed in the framework of the Spanish Virtual Observatory and are maintained by the Data Archive Unit at Centro de Astrobiología. The archives contain both raw and science ready data and have been designed in compliance with the standards defined by the International Virtual Observatory Alliance, which guarantees a high level of data accessibility and handling. In this paper we describe the main characteristics and functionalities of both archives.
The Gran Telescopio Canarias and Calar Alto Virtual Observatory compliant archives
NASA Astrophysics Data System (ADS)
Solano, Enrique; Gutiérrez, Raúl; Alacid, José Manuel; Jiménez-Esteban, Francisco; Velasco Trasmonte, Almudena
2012-09-01
The Gran Telescopio Canarias (GTC) and Calar Alto archives are the result of the collaboration agreements between the Centro de Astrobiología (CAB, INTA-CSIC)) and two entities: GRANTECAN S.A. and the Centro Astronómico Hispano Alemán (CAHA). The archives have been developed in the framework of the Spanish Virtual Observatory and are maintained by the Data Archive Unit at CAB. The archives contain both raw and science ready data and have been designed in compliance with the standards defined by the International Virtual Observatory Alliance (IVOA) which guarantees a high level of data accessibility and handling. In this paper we describe the main characteristics and functionalities of both archives.
STScI Archive Manual, Version 7.0
NASA Astrophysics Data System (ADS)
Padovani, Paolo
1999-06-01
The STScI Archive Manual provides the information a user needs to know to access the HST archive via its two user interfaces: StarView and a World Wide Web (WWW) interface. It provides descriptions of the StarView screens used to access information in the database and the format of that information, and introduces the user to the WWW interface. Using the two interfaces, users can search for observations, preview public data, and retrieve data from the archive. Using StarView one can also find calibration reference files and perform detailed association searches. With the WWW interface archive users can access, and obtain information on, all Multimission Archive at Space Telescope (MAST) data, a collection of mainly optical and ultraviolet datasets which include, amongst others, the International Ultraviolet Explorer (IUE) Final Archive. Both interfaces feature a name resolver which simplifies searches based on target name.
ERIC Educational Resources Information Center
Cachola, Ellen-Rae Cabebe
2014-01-01
This dissertation describes the International Women's Network Against Militarism's (IWNAM) political epistemology of security from an archival perspective, and how they create community archives to evidence this epistemology. This research examines records created by Women for Genuine Security (WGS) and Women's Voices Women Speak (WVWS), U.S. and…
ERIC Educational Resources Information Center
Phelps, Christopher
2007-01-01
In this article, the author shares his experience as he traveled from island to island with a single objective--to reach the archives. He found out that not all archives are the same. In recent months, his daydreaming in various facilities has yielded a recurrent question on what would constitute the Ideal Archive. What follows, in no particular…
The Role of Archives and Records Management in National Information Systems: A RAMP Study.
ERIC Educational Resources Information Center
Rhoads, James B.
Produced as part of the United Nations Educational, Scientific, and Cultural Organization (UNESCO) Records and Archives Management Programme (RAMP), this publication provides information about the essential character and value of archives and about the procedures and programs that should govern the management of both archives and current records,…
Fermilab History and Archives Project | Announcement of Renaming NAL
NAL TO BECOME ENRICO FERMI LABORATORY IN 1972: Dr. Glenn T. Seaborg, Chairman of the Atomic Energy Commission, announced that the National Accelerator Laboratory would be renamed in honor of Enrico Fermi, Nobel Laureate physicist.
36 CFR 1253.2 - National Archives at College Park.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false National Archives at College Park. 1253.2 Section 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... College Park. (a) The National Archives at College Park is located at 8601 Adelphi Road, College Park, MD...
Dusting the Archives of Childhood: Child Welfare Records as Historical Sources
ERIC Educational Resources Information Center
Vehkalahti, Kaisa
2016-01-01
Using administrative sources in the history of education and childhood involves a range of methodological and ethical considerations. This article discusses these problems, as well as the role of archives and archival policies in preserving history and shaping our understanding of past childhoods. Using Finnish child welfare archives from the…
36 CFR 1253.2 - National Archives at College Park.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false National Archives at College Park. 1253.2 Section 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... Park, MD 20740-6001. Hours for the Research Center are posted at http://www.archives.gov. The phone...
36 CFR 1253.2 - National Archives at College Park.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false National Archives at College Park. 1253.2 Section 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... Park, MD 20740-6001. Hours for the Research Center are posted at http://www.archives.gov. The phone...
36 CFR 1253.2 - National Archives at College Park.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false National Archives at College Park. 1253.2 Section 1253.2 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... Park, MD 20740-6001. Hours for the Research Center are posted at http://www.archives.gov. The phone...
A Structure Standard for Archival Context: EAC-CPF Is Here
ERIC Educational Resources Information Center
Dryden, Jean
2010-01-01
The archival community's new descriptive standard, "Encoded Archival Context" for Corporate Bodies, Persons, and Families (EAC-CPF), supports the sharing of descriptions of records creators and is a significant addition to the suite of standards for archival description. EAC-CPF is a data structure standard similar to its older sibling EAD…
Code of Federal Regulations, 2012 CFR
2012-07-01
... regional archives and Presidential libraries different from those in the Washington, DC, area? 1254.14... procedures in regional archives and Presidential libraries different from those in the Washington, DC, area... regional director of archival operations or Presidential library director indicates, you must follow the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... regional archives and Presidential libraries different from those in the Washington, DC, area? 1254.14... procedures in regional archives and Presidential libraries different from those in the Washington, DC, area... regional director of archival operations or Presidential library director indicates, you must follow the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... regional archives and Presidential libraries different from those in the Washington, DC, area? 1254.14... procedures in regional archives and Presidential libraries different from those in the Washington, DC, area... regional director of archival operations or Presidential library director indicates, you must follow the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... regional archives and Presidential libraries different from those in the Washington, DC, area? 1254.14... procedures in regional archives and Presidential libraries different from those in the Washington, DC, area... regional director of archival operations or Presidential library director indicates, you must follow the...
Checklist of Standards Applicable to the Preservation of Archives and Manuscripts.
ERIC Educational Resources Information Center
Walch, Victoria Irons, Comp.
1990-01-01
Presents a checklist of more than 150 standards that have been identified by the Society of American Archivists (SAA) Task Force on Archival Standards as applicable to the preservation of archives and manuscripts. The organizations that developed the standards are described, and increased archival participation in the standards development process…
Increasing Access to Archival Records in Library Online Public Access Catalogs.
ERIC Educational Resources Information Center
Gilmore, Matthew B.
1988-01-01
Looks at the use of online public access catalogs, the utility of subject and call-number searching, and possible archival applications. The Wallace Archives at the Claremont Colleges is used as an example of the availability of bibliographic descriptions of multiformat archival materials through the library catalog. Sample records and searches…
Clay Tablets to Micro Chips: The Evolution of Archival Practice into the Twenty-First Century.
ERIC Educational Resources Information Center
Hannestad, Stephen E.
1991-01-01
Describes archival concepts and theories and their evolution in recent times. Basic archival functions--appraisal, arrangement, description, reference, preservation, and publication--are introduced. Early applications of automation to archives (including SPINDEX, NARS-5, NARS-A-1, MARC AMC, presNET, CTRACK, PHOTO, and DIARY) and automation trends…
Digital Archival Image Collections: Who Are the Users?
ERIC Educational Resources Information Center
Herold, Irene M. H.
2010-01-01
Archival digital image collections are a relatively new phenomenon in college library archives. Digitizing archival image collections may make them accessible to users worldwide. There has been no study to explore whether collections on the Internet lead to users who are beyond the institution or a comparison of users to a national or…
A biological survey on the Ottoman Archive papers and determination of the D10 value
NASA Astrophysics Data System (ADS)
Kantoğlu, Ömer; Ergun, Ece; Ozmen, Dilan; Halkman, Hilal B. D.
2018-03-01
The Ottoman Archives hold one of the richest archive collections in the world. However, not all the archived documents are well preserved, and some undergo biodeterioration. A rapid and reliable treatment method is therefore needed to preserve the collection as heritage for future generations, and irradiation is one alternative for treating archival materials. In this study, we conducted a survey to determine the contaminating species and the D10 values of samples obtained from the shelves of the Ottoman Archives. The samples also included several insect pests collected using a pheromone trap placed in the archive storage room. With the exception of a few localized problems, no active pest presence was observed. The D10 values of the mold contamination and a reference mold (A. niger) were found to be 1.0 and 0.68 kGy, respectively. Based on these results, it can be concluded that an absorbed dose of 6 kGy is required to remove the contamination from the materials stored in the Ottoman Archives.
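The D10 value above is the dose giving one tenfold (one-log) reduction in viable organisms, so the recommended 6 kGy corresponds to a 6-log kill at the reported D10 of 1.0 kGy. A minimal sketch of that arithmetic (function names are illustrative, not from the paper):

```python
def log_reduction(dose_kgy: float, d10_kgy: float) -> float:
    """Number of decimal (tenfold) reductions achieved by a given dose."""
    return dose_kgy / d10_kgy

def required_dose(d10_kgy: float, reductions: float) -> float:
    """Dose needed for the requested number of tenfold reductions."""
    return d10_kgy * reductions

def surviving_fraction(dose_kgy: float, d10_kgy: float) -> float:
    """Surviving fraction N/N0 = 10**(-dose/D10)."""
    return 10.0 ** (-dose_kgy / d10_kgy)

# Reported D10 of 1.0 kGy for the mold contamination: a 6 kGy dose
# gives a 6-log reduction, i.e. a surviving fraction of about 1e-6.
print(log_reduction(6.0, 1.0))  # → 6.0
```

For the more radiation-resistant reference mold (D10 = 0.68 kGy would require only about 4.1 kGy for the same 6-log target), the sizing dose is driven by the hardest-to-kill contaminant.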
Medical image digital archive: a comparison of storage technologies
NASA Astrophysics Data System (ADS)
Chunn, Timothy; Hutchings, Matt
1998-07-01
A cost-effective, high-capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever-increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. New technologies will be discussed, such as DVD and high-performance tape. Price and performance comparisons will be made at different archive capacities, and the effect of file size on random and pre-fetch retrieval time will be analyzed. The concept of automated migration of images from high-performance RAID disk storage devices to high-capacity Nearline® storage devices will be introduced as a viable way to minimize overall storage costs for an archive.
Nimbus Satellite Data Rescue Project for Sea Ice Extent: Data Processing
NASA Astrophysics Data System (ADS)
Campbell, G. G.; Sandler, M.; Moses, J. F.; Gallaher, D. W.
2011-12-01
Early Nimbus satellites collected both visible and infrared observations of the Earth at high resolution. Nimbus I operated in September 1964, Nimbus II from April to November 1966, and Nimbus III from May 1969 to November 1969. We will discuss our procedures for recovering these data into a modern digital archive useful for scientific analysis. The Advanced Vidicon Camera System (AVCS) data was transmitted as an analog signal proportional to the brightness detected by a video camera and was archived on black-and-white film. At NSIDC we are scanning and digitizing the film images using equipment derived from the motion picture industry. The High Resolution Infrared Radiometer (HRIR) data was originally recorded in 36-bit words on 7-track digital tapes. The HRIR data were recently recovered from the tapes, and TAP (a tape file format from 1966) files were placed in EOSDIS archives for online access. The most interesting part of the recovery project was the additional processing required to rectify and navigate the raw digital files. One of the artifacts we needed to identify and remove was the set of fiducial marks representing latitude and longitude lines added to the film for users in the 1960s. The IR data recording inserted an artificial random jitter in the alignment of individual scan lines. We will describe our procedures to navigate, remap, detect noise, and remove artifacts in the data. Beyond cleaning up the HRIR swath data and the AVCS picture data, we are remapping the data into standard grids for comparisons in time. A first run of all the Nimbus II HRIR data into EASE grids in NetCDF format has been completed; this turned up interesting problems with overlaps and missing data. Some of these processes require extensive computer resources, and we have established methods for using Amazon's Elastic Compute Cloud facility to run the many processes in parallel.
In addition, we have set up procedures at the University of Colorado to monitor the ongoing scanning and perform simple quality control of more than 200,000 pictures. Preliminary results from the September 1964, 1966, and 1969 data analyses will be discussed in this presentation. Our scientific use of the data will focus on recovering the sea ice extent around the poles. We especially welcome new users interested in the meteorology from 50N to 50S in the 1960s. Lessons and examples from the scanning and quality control procedures will be highlighted, with illustrations including mapped and reformatted data. When the project is finished, a public archive covering September 1964, April to November 1966, and May to December 1969 will be available for general use.
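The swath-to-grid remapping step described above can be sketched as simple binning of swath samples into cells of a regular latitude/longitude grid (a toy stand-in for the project's actual EASE-grid remapping; all names here are illustrative):

```python
from bisect import bisect_right

def bin_swath_to_grid(samples, lat_edges, lon_edges):
    """Average swath samples (lat, lon, value) into regular grid cells.

    Returns a nested list [n_lat][n_lon]; cells with no samples hold None,
    which is where the "missing data" problems mentioned above show up.
    """
    n_lat, n_lon = len(lat_edges) - 1, len(lon_edges) - 1
    total = [[0.0] * n_lon for _ in range(n_lat)]
    count = [[0] * n_lon for _ in range(n_lat)]
    for lat, lon, val in samples:
        iy = bisect_right(lat_edges, lat) - 1  # cell row containing lat
        ix = bisect_right(lon_edges, lon) - 1  # cell column containing lon
        if 0 <= iy < n_lat and 0 <= ix < n_lon:
            total[iy][ix] += val
            count[iy][ix] += 1
    return [[total[i][j] / count[i][j] if count[i][j] else None
             for j in range(n_lon)] for i in range(n_lat)]
```

Real EASE grids use an equal-area projection rather than raw lat/lon edges, but the bin-and-average structure (and the handling of overlapping swaths by averaging) is the same.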
High-performance mass storage system for workstations
NASA Technical Reports Server (NTRS)
Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.
1993-01-01
Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. The high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The system can ingest high-speed real-time data; perform signal or image processing; and stage, archive, and distribute the data. It uses a hierarchical storage structure, reducing the total data storage cost while maintaining high I/O performance. The system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network whose topology is easily reconfigurable to maximize system throughput for various applications. The design takes advantage of a 'busless' architecture for maximum expandability. The storage hierarchy consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape drive. Commonly used files are kept on magnetic disk for fast retrieval.
The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the UniTree software package, which is based on the IEEE mass storage reference model. UniTree keeps track of all files in the system, automatically migrates lesser-used files to archive media, and stages files back when they are needed, so users can access files without knowing their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost system I/O performance and reduce overall data storage cost. It provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with signal and image processing requirements, long-term data archiving and distribution, and image analysis and enhancement).
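The automatic-migration behavior described above, moving the least-recently-used files down the hierarchy until disk usage falls to a target level, can be sketched as a toy policy (this is not UniTree's actual algorithm; the names and threshold are illustrative):

```python
def pick_migration_candidates(files, disk_capacity, target_fill=0.8):
    """Choose least-recently-accessed files to move to archive media
    until disk usage drops to the target fill level.

    `files` is a list of dicts with 'name', 'size', and 'last_access'
    (any monotonically increasing timestamp). Returns the names of the
    files selected for migration, oldest access first.
    """
    used = sum(f["size"] for f in files)
    goal = target_fill * disk_capacity
    candidates = []
    for f in sorted(files, key=lambda f: f["last_access"]):
        if used <= goal:
            break  # disk is already at or below the target fill level
        candidates.append(f["name"])
        used -= f["size"]
    return candidates
```

A real HSM would also weigh file size, pin files in active use, and stage files back on demand; the sketch only shows the core "migrate the coldest files first" loop.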
NASA Astrophysics Data System (ADS)
Martinez, E.; Glassy, J. M.; Fowler, D. K.; Khayat, M.; Olding, S. W.
2014-12-01
The NASA Earth Science Data Systems Working Groups (ESDSWG) focus on improving technologies and processes related to science discovery and preservation. One particular group, Data Preservation Practices, is defining a set of guidelines to aid data providers in planning both what to submit for archiving and when to submit artifacts, so that the archival process can begin early in the project's life cycle. This has the benefit of leveraging knowledge within the project before staff roll off to other work. In this poster we describe various project archival use cases and identify possible archival life cycles that map closely to the pace and flow of work. To understand "archival life cycles", i.e., distinct project phases that produce archival artifacts such as instrument capabilities, calibration reports, and science data products, the working group initially mapped the archival requirements defined in the Preservation Content Specification to the typical NASA project life cycle. As described in the poster, this work resulted in a well-defined archival life cycle, but only for some types of projects; it did not fit the condensed project life cycles experienced in airborne and balloon campaigns. To understand the archival process for projects with compressed cycles, the working group gathered use cases from various communities. This poster describes selected use cases that provided insight into the unique flow of these projects, and proposes archival life cycles that map artifacts to projects with compressed timelines. Finally, the poster concludes with some early recommendations for data providers, which will be captured in a formal Guidelines document to be published in 2015.
Experimental demonstration of Martian soil simulant removal from a surface using a pulsed plasma jet
NASA Astrophysics Data System (ADS)
Ticoş, C. M.; Scurtu, A.; Toader, D.; Banu, N.
2015-03-01
A plasma jet produced in a small coaxial plasma gun operated at voltages up to 2 kV and working in pure carbon dioxide (CO2) at a few Torr is used to remove Martian soil simulant from a surface. A capacitor with 0.5 mF is charged up from a high voltage source and supplies the power to the coaxial electrodes. The muzzle of the coaxial plasma gun is placed at a few millimeters near the dusty surface and the jet is fired parallel with the surface. Removal of dust is imaged in real time with a high speed camera. Mars regolith simulant JSC-Mars-1A with particle sizes up to 5 mm is used on different types of surfaces made of aluminium, cotton fabric, polyethylene, cardboard, and phenolic.
Quantum lattice model solver HΦ
NASA Astrophysics Data System (ADS)
Kawamura, Mitsuaki; Yoshimi, Kazuyoshi; Misawa, Takahiro; Yamaji, Youhei; Todo, Synge; Kawashima, Naoki
2017-08-01
HΦ [aitch-phi] is a program package based on Lanczos-type eigenvalue solution methods applicable to a broad range of quantum lattice models, i.e., arbitrary quantum lattice models with two-body interactions, including the Heisenberg model, the Kitaev model, the Hubbard model, and the Kondo-lattice model. While it works well on PCs and PC clusters, HΦ also runs efficiently on massively parallel computers, which considerably extends the tractable range of system sizes. In addition, unlike most existing packages, HΦ supports finite-temperature calculations through the method of thermal pure quantum (TPQ) states. In this paper, we explain the theoretical background and user interface of HΦ. We also show benchmark results for HΦ on supercomputers such as the K computer at RIKEN Advanced Institute for Computational Science (AICS) and SGI ICE XA (Sekirei) at the Institute for Solid State Physics (ISSP).
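Lanczos-type solvers like the one HΦ is built on find extreme eigenvalues of a large sparse Hamiltonian iteratively, using only matrix-vector products. A heavily simplified sketch of the same idea, using shifted power iteration (a cruder Krylov-style cousin of Lanczos) to recover the singlet ground-state energy E0 = −3/4 (in units of J) of a two-site spin-1/2 Heisenberg chain; everything here is illustrative and unrelated to HΦ's actual implementation:

```python
import math

# Two-site spin-1/2 Heisenberg Hamiltonian H = S1 . S2 (J = 1, hbar = 1)
# in the basis {|uu>, |ud>, |du>, |dd>}.
H = [
    [0.25,  0.0,   0.0,  0.0],
    [0.0,  -0.25,  0.5,  0.0],
    [0.0,   0.5,  -0.25, 0.0],
    [0.0,   0.0,   0.0,  0.25],
]

def matvec(m, v):
    """Dense matrix-vector product (a real solver only needs this hook)."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def ground_state_energy(h, shift=1.0, iters=500):
    """Power iteration on A = shift*I - H: for a shift above the top of
    H's spectrum, the dominant eigenvector of A is H's ground state."""
    n = len(h)
    v = [(-1.0) ** i * (i + 1) for i in range(n)]  # generic start vector
    for _ in range(iters):
        hv = matvec(h, v)
        w = [shift * v[i] - hv[i] for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    hv = matvec(h, v)
    return sum(v[i] * hv[i] for i in range(n))  # Rayleigh quotient <v|H|v>

print(round(ground_state_energy(H), 6))  # → -0.75
```

True Lanczos builds an orthogonal Krylov basis and diagonalizes a small tridiagonal matrix, converging far faster; only the reliance on repeated matvec calls carries over to the sketch.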
Parametric resonance in quantum electrodynamics vacuum birefringence
NASA Astrophysics Data System (ADS)
Arza, Ariel; Elias, Ricardo Gabriel
2018-05-01
Vacuum magnetic birefringence is one of the most interesting nonlinear phenomena in quantum electrodynamics, because it is a pure photon-photon effect of the theory and it directly signals the violation of the classical superposition principle of electromagnetic fields in the full quantum theory. We perform analytical and numerical calculations for an electromagnetic wave interacting with an oscillating external magnetic field. We find that in an ideal cavity, when the external field frequency is close to the electromagnetic wave frequency, the normal and parallel components of the wave undergo parametric resonance at different rates, producing a vacuum birefringence effect that grows in time. We also study the case where there is no cavity and the oscillating magnetic field is spatially localized in a region of length L. In both cases we also find a rotation of the elliptical axis.
Study on the impedance of aligned carbon microcoils embedded in silicone rubber matrix
NASA Astrophysics Data System (ADS)
Zhu, Ya-Bo; Zhang, Lin; Guo, Li-Tong; Xiang, Dong-Hu
2010-12-01
This paper reports that carbon microcoils are grown through a chemical vapour deposition process, embedded in silicone rubber, and aligned parallel to one another along their axes in the resulting composite. The impedance |Z| and phase angle θ of both the original carbon microcoil sheets and the aligned carbon microcoil/silicone rubber composites are measured. The results show that carbon microcoils in different forms have different alternating-current electrical properties. The aligned carbon microcoils in the composites show stable parameters for f < 10^4 Hz but a sharp decrease in both |Z| and θ for frequencies above 10^4 Hz, and these parameters also change as the carbon microcoils are extended. The original sheets, by contrast, behave as a pure resistance, with parameters stable throughout the entire alternating-current frequency range investigated.
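The quantities |Z| and θ reported above come straight from the complex impedance. A minimal sketch for a series R-L element, the simplest model in which a coil-like sample departs from pure-resistance behavior at high frequency (component values are illustrative, not the paper's measured data):

```python
import cmath
import math

def impedance_rl_series(r_ohm: float, l_henry: float, freq_hz: float) -> complex:
    """Complex impedance of a series R-L element: Z = R + j*2*pi*f*L."""
    return complex(r_ohm, 2.0 * math.pi * freq_hz * l_henry)

def magnitude_and_phase(z: complex):
    """Return (|Z|, theta in degrees), the two quantities plotted in
    impedance spectra."""
    return abs(z), math.degrees(cmath.phase(z))

# A pure resistance keeps |Z| and theta flat with frequency, while an
# inductive (coil-like) element shows frequency-dependent |Z| and theta:
for f in (1e2, 1e4, 1e6):
    mag, theta = magnitude_and_phase(impedance_rl_series(100.0, 1e-4, f))
```

The measured composites show a *decrease* of |Z| above 10^4 Hz, so a realistic fit would need capacitive coupling between coils as well; the sketch only illustrates how |Z| and θ are derived from a complex Z.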
Chiral magnetic effect in the presence of electroweak interactions as a quasiclassical phenomenon
NASA Astrophysics Data System (ADS)
Dvornikov, Maxim; Semikoz, Victor B.
2018-03-01
We elaborate a quasiclassical approach to obtain the modified chiral magnetic effect (CME) in the case when massless charged fermions interact with electromagnetic fields and with background matter through the electroweak forces. The derivation of the anomalous current along the external magnetic field involves the study of the energy density evolution of chiral particles in parallel electric and magnetic fields. We consider both the particle acceleration by the external electric field and the contribution of the Adler anomaly. The conditions for the validity of this method for deriving the CME are formulated. We obtain an expression for the electric current along the external magnetic field, which coincides with our previous results based on a purely quantum approach. Our results are compared with the findings of other authors.
Machinability of some dentin simulating materials.
Möllersten, L
1985-01-01
Machinability in low-speed drilling was investigated for pure aluminium, Frasaco teeth, ivory, plexiglass, and human dentin. The investigation was performed in order to find a suitable test material for drilling experiments using paralleling instruments: a material simulating human dentin in terms of cuttability at low drilling speeds was sought. Tests were performed using a specially designed apparatus. Holes to a depth of 2 mm were drilled with a twist drill using a constant feeding force, and the time required was recorded. The machinability of the materials tested was determined by direct comparison of the drilling times. In terms of cuttability, aluminium, followed by ivory, was found to resemble human dentin most closely. The homogeneity of the materials tested was estimated by comparing drilling-time variances; aluminium, Frasaco teeth, and plexiglass demonstrated better homogeneity than ivory and human dentin.
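The comparison scheme described above, ranking materials by mean drilling time (cuttability) and by the spread of repeated drillings (homogeneity), can be sketched as follows; the data values are invented for illustration and are not the study's measurements:

```python
from statistics import mean, variance

def compare_machinability(times_by_material):
    """Rank materials by mean drilling time (a cuttability proxy) and
    report the sample variance of repeated drillings (homogeneity).

    Returns (materials sorted fastest-first, {material: (mean, variance)}).
    """
    stats = {m: (mean(t), variance(t)) for m, t in times_by_material.items()}
    ranked = sorted(stats, key=lambda m: stats[m][0])
    return ranked, stats

# Illustrative (invented) drilling times in seconds:
times = {
    "aluminium": [1.0, 1.1, 0.9],
    "ivory": [2.0, 1.5, 2.5],
}
ranked, stats = compare_machinability(times)
```

A lower variance indicates a more homogeneous material, which is what made aluminium, Frasaco teeth, and plexiglass preferable as reproducible test materials.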
Measuring the orbital angular momentum spectrum of an electron beam
Grillo, Vincenzo; Tavabi, Amir H.; Venturi, Federico; Larocque, Hugo; Balboni, Roberto; Gazzadi, Gian Carlo; Frabboni, Stefano; Lu, Peng-Han; Mafakheri, Erfan; Bouchard, Frédéric; Dunin-Borkowski, Rafal E.; Boyd, Robert W.; Lavery, Martin P. J.; Padgett, Miles J.; Karimi, Ebrahim
2017-01-01
Electron waves that carry orbital angular momentum (OAM) are characterized by a quantized and unbounded magnetic dipole moment parallel to their propagation direction. When such electrons interact with magnetic materials, their wavefunctions are inherently modified. These variations motivate the need to analyse electron wavefunctions, especially their wavefronts, to obtain information regarding the material's structure. Here, we propose, design, and demonstrate the performance of a device based on nanoscale holograms for measuring an electron's OAM components by spatially separating them. We sort pure and superposed OAM states of electrons with OAM values between −10 and 10. We employ the device to analyse the OAM spectrum of electrons that have been affected by a micron-scale magnetic dipole, thus establishing that our sorter can be an instrument for nanoscale magnetic spectroscopy. PMID:28537248