Su, Xiaoquan; Wang, Xuetao; Jing, Gongchao; Ning, Kang
2014-04-01
The number of microbial community samples is increasing at an exponential rate. Data mining across microbial community samples could facilitate the discovery of valuable biological information that is still hidden in these massive data. However, current methods for comparing microbial communities are limited in their ability to process large numbers of samples, each with a complex community structure. We have developed an optimized GPU-based software package, GPU-Meta-Storms, to efficiently measure the quantitative phylogenetic similarity among massive numbers of microbial community samples. Our results show that GPU-Meta-Storms can compute the pair-wise similarity scores for 10,240 samples within 20 min, a speed-up of >17,000 times compared with a single-core CPU and >2600 times compared with a 16-core CPU. Therefore, the high performance of GPU-Meta-Storms could facilitate in-depth data mining among massive microbial community samples, and make real-time analysis and monitoring of temporal or conditional changes in microbial communities possible. GPU-Meta-Storms is implemented in CUDA (Compute Unified Device Architecture) and C++. Source code is available at http://www.computationalbioenergy.org/meta-storms.html.
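The kernel of such a tool is an all-against-all comparison of samples. Below is a minimal, hypothetical sketch of that pattern in Python/NumPy; it uses a simple abundance-overlap score as a stand-in, not the phylogeny-weighted Meta-Storms scoring function or its CUDA implementation.

```python
# Hedged sketch: all-against-all similarity over community samples.
# The overlap score below is a placeholder, not the Meta-Storms metric.
import numpy as np

def pairwise_similarity(samples: np.ndarray) -> np.ndarray:
    """samples: (n_samples, n_taxa) relative abundances, each row summing to 1."""
    n = samples.shape[0]
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            s = np.minimum(samples[i], samples[j]).sum()  # shared abundance
            sim[i, j] = sim[j, i] = s
    return sim

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((8, 100))
    x /= x.sum(axis=1, keepdims=True)
    print(pairwise_similarity(x).shape)  # (8, 8)
```

Because every (i, j) pair is independent, this loop maps naturally onto one GPU thread per pair, which is the kind of parallelism that makes the reported speed-ups plausible.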
Intelligent transportation systems data compression using wavelet decomposition technique.
DOT National Transportation Integrated Search
2009-12-01
Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission, and retrieval. Data compression and reconstruction techniques therefore play an important role in ITS data processing....
Administrative Uses of Microcomputers.
ERIC Educational Resources Information Center
Crawford, Chase
1987-01-01
This paper examines the administrative uses of the microcomputer, stating that high performance educational managers are likely to have microcomputers in their organizations. Four situations that would justify the use of a computer are: (1) when massive amounts of data are processed through well-defined operations; (2) when data processing is…
The Cycle of Dust in the Milky Way: Clues from the High-Redshift and the Local Universe
NASA Technical Reports Server (NTRS)
Dwek, Eli
2008-01-01
Massive amounts of dust have been observed at high redshifts, when the universe was a mere 900 Myr old. There, the formation and evolution of dust are dominated by massive stars and interstellar processes. In contrast, in the local universe lower-mass stars, predominantly 2-5 Msun AGB stars, play the dominant role in the production of interstellar dust. These two extreme environments offer fascinating clues about the evolution of dust in the Milky Way galaxy
Uncertainties in s-process nucleosynthesis in massive stars determined by Monte Carlo variations
NASA Astrophysics Data System (ADS)
Nishimura, N.; Hirschi, R.; Rauscher, T.; St. J. Murphy, A.; Cescutti, G.
2017-08-01
The s-process in massive stars produces the weak component of the s-process (nuclei up to A ˜ 90), in amounts that match solar abundances. For heavier isotopes, such as barium, production through neutron capture is significantly enhanced in very metal-poor stars with fast rotation. However, detailed theoretical predictions for the resulting final s-process abundances have important uncertainties caused both by the underlying uncertainties in the nuclear physics (principally neutron-capture reaction and β-decay rates) as well as by the stellar evolution modelling. In this work, we investigated the impact of nuclear-physics uncertainties relevant to the s-process in massive stars. Using a Monte Carlo based approach, we performed extensive nuclear reaction network calculations that include newly evaluated upper and lower limits for the individual temperature-dependent reaction rates. We found that most of the uncertainty in the final abundances is caused by uncertainties in the neutron-capture rates, while β-decay rate uncertainties affect only a few nuclei near s-process branchings. The s-process in rotating metal-poor stars shows quantitatively different uncertainties and key reactions, although the qualitative characteristics are similar. We confirmed that our results do not significantly change at different metallicities for fast rotating massive stars in the very low metallicity regime. We highlight which of the identified key reactions are realistic candidates for improved measurement by future experiments.
SCOOP: A Measurement and Database of Student Online Search Behavior and Performance
ERIC Educational Resources Information Center
Zhou, Mingming
2015-01-01
The ability to access and process massive amounts of online information is required in many learning situations. In order to develop a better understanding of students' online search processes, especially in academic contexts, an online tool (SCOOP) was developed to track mouse behavior on the web and build a more extensive account of student web…
Exploiting NASA's Cumulus Earth Science Cloud Archive with Services and Computation
NASA Astrophysics Data System (ADS)
Pilone, D.; Quinn, P.; Jazayeri, A.; Schuler, I.; Plofchan, P.; Baynes, K.; Ramachandran, R.
2017-12-01
NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth Science data and, with upcoming missions, is expected to balloon to between 200 PB and 300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access - enabling complex visualizations, long time-series analysis, and cross-dataset research without needing to copy and manage massive amounts of data locally. NASA has started prototyping with commercial cloud providers to make this data available in elastic cloud compute environments, allowing application developers direct access to the massive EOSDIS holdings. In this talk we'll explain the principles behind the archive architecture and share our experience of dealing with large amounts of data with serverless architectures including AWS Lambda, the Elastic Container Service (ECS) for long-running jobs, and why we dropped thousands of lines of code for AWS Step Functions. We'll discuss best practices and patterns for accessing and using data available in a shared object store (S3) and leveraging events and message passing for sophisticated and highly scalable processing and analysis workflows. Finally we'll share capabilities NASA and cloud services are making available on the archives to enable massively scalable analysis and computation in a variety of formats and tools.
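A minimal sketch of the in-place access pattern described above: a serverless function reads a granule directly from the shared object store and returns only the small derived result. The bucket and key names are hypothetical, not EOSDIS resources.

```python
# Hedged sketch of a serverless handler that analyzes data in place in S3
# rather than copying it out. Bucket and key names are hypothetical.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    bucket = event.get("bucket", "example-archive-bucket")     # hypothetical bucket
    key = event.get("key", "granules/example-granule.json")    # hypothetical key
    obj = s3.get_object(Bucket=bucket, Key=key)
    records = json.loads(obj["Body"].read())
    # ...subset or analyze in place, return only the small answer...
    return {"record_count": len(records)}
```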
Tourmaline in Appalachian - Caledonian massive sulphide deposits and its exploration significance.
Slack, J.F.
1982-01-01
Tourmaline is a common gangue mineral in several types of stratabound mineral deposits, including some massive base-metal sulphide ores of the Appalachian - Caledonian orogen. It is most abundant (sometimes forming massive foliated tourmalinite) in sediment-hosted deposits, such as those at the Elizabeth Cu mine and the Ore Knob Cu mine (North Carolina, USA). Trace amounts of tourmaline occur associated with volcanic-hosted deposits in the Piedmont and New England and also in the Trondheim district. Tourmalines associated with the massive sulphide deposits are Mg-rich dravites with major- and trace-element compositions significantly different from schorl. It is suggested that the necessary B was produced by submarine exhalative processes as a part of the same hydrothermal system that deposited the ores. An abundance of dravite in non-evaporitic terrains is believed to indicate proximity to former subaqueous fumarolic centres.-R.A.H.
Virtual Bioinformatics Distance Learning Suite
ERIC Educational Resources Information Center
Tolvanen, Martti; Vihinen, Mauno
2004-01-01
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Web Browser
NASA Astrophysics Data System (ADS)
Christen, M.
2016-06-01
Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive tasks for processing data. Furthermore, rendering complex 3D-Geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and also for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D-Geodata on nearly every device.
V for Voice: Strategies for Bolstering Communication Skills in Statistics
ERIC Educational Resources Information Center
Khachatryan, Davit; Karst, Nathaniel
2017-01-01
With the ease and automation of data collection and plummeting storage costs, organizations are faced with massive amounts of data that present two pressing challenges: technical analysis of the data themselves and communication of the analytics process and its products. Although a plethora of academic and practitioner literature have focused on…
NASA Astrophysics Data System (ADS)
Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan
2016-11-01
The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 deg^2 every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ˜175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
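The source-association step being optimized is essentially a positional range join: every newly extracted source is matched against nearby catalog entries to extend its light curve. A minimal in-memory sketch of that idea in Python (assuming planar pixel coordinates, and not the MonetDB RANGE-JOIN implementation) might look like:

```python
# Hedged sketch of positional source association: match each new detection to the
# nearest catalog source within a search radius. Not the MonetDB implementation.
import numpy as np
from scipy.spatial import cKDTree

def associate(catalog_xy, new_xy, radius):
    """Return, per new source, the index of the catalog match within `radius`, or -1."""
    tree = cKDTree(catalog_xy)
    dist, idx = tree.query(new_xy, distance_upper_bound=radius)
    return np.where(np.isfinite(dist), idx, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    catalog = rng.random((1000, 2))
    detections = catalog[:5] + rng.normal(0, 1e-4, (5, 2))
    print(associate(catalog, detections, radius=1e-3))  # indices of matched catalog rows
```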
NASA Astrophysics Data System (ADS)
Benini, Luca
2017-06-01
The "internet of everything" envisions trillions of connected objects loaded with high-bandwidth sensors requiring massive amounts of local signal processing, fusion, pattern extraction and classification. From the computational viewpoint, the challenge is formidable and can be addressed only by pushing computing fabrics toward massive parallelism and brain-like energy efficiency levels. CMOS technology can still take us a long way toward this goal, but technology scaling is losing steam. Energy efficiency improvement will increasingly hinge on architecture, circuits, design techniques such as heterogeneous 3D integration, mixed-signal preprocessing, event-based approximate computing and non-Von-Neumann architectures for scalable acceleration.
Massively parallel processor computer
NASA Technical Reports Server (NTRS)
Fung, L. W. (Inventor)
1983-01-01
An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
Astrophysics and Big Data: Challenges, Methods, and Tools
NASA Astrophysics Data System (ADS)
Garofalo, Mauro; Botta, Alessio; Ventre, Giorgio
2017-06-01
Nowadays there is no field of research that is not flooded with data. Among the sciences, astrophysics has always been driven by the analysis of massive amounts of data. The development of new and more sophisticated observation facilities, both ground-based and spaceborne, has made data more and more complex (Variety) and has led to an exponential growth in data Volume (i.e., of the order of petabytes) and in Velocity, in terms of production and transmission. Therefore, new and advanced processing solutions will be needed to process this huge amount of data. We investigate some of these solutions, based on machine learning models as well as tools and architectures for Big Data analysis, that can be exploited in the astrophysical context.
Mask data processing in the era of multibeam writers
NASA Astrophysics Data System (ADS)
Abboud, Frank E.; Asturias, Michael; Chandramouli, Maesh; Tezuka, Yoshihiro
2014-10-01
Mask writers' architectures have evolved through the years in response to ever tightening requirements for better resolution, tighter feature placement, improved CD control, and tolerable write time. The unprecedented extension of optical lithography and the myriad of Resolution Enhancement Techniques have tasked current mask writers with ever increasing shot count and higher dose, and therefore, increasing write time. Once again, we see the need for a transition to a new type of mask writer based on massively parallel architecture. These platforms offer a step function improvement in both dose and the ability to process massive amounts of data. The higher dose and almost unlimited appetite for edge corrections open new windows of opportunity to further push the envelope. These architectures are also naturally capable of producing curvilinear shapes, making the need to approximate a curve with multiple Manhattan shapes unnecessary.
2000-06-01
As the number of sensors, platforms, exploitation sites, and command and control nodes continues to grow in response to Joint Vision 2010 information dominance requirements, Commanders and analysts will have an ever increasing need to collect and process vast amounts of data over wide areas using a large number of disparate sensors and information gathering sources.
Adjusting process count on demand for petascale global optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sosonkina, Masha; Watson, Layne T.; Radcliffe, Nicholas R.
2012-11-23
There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
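A minimal single-machine sketch of the monitor-and-spawn control flow described above; in the actual code the new processes are created through MPI dynamic process management across nodes (which is what actually grows the aggregate memory), whereas the threshold, queue, and work() below are purely illustrative.

```python
# Hedged sketch: watch available memory and spawn an extra worker when it runs low.
# In pVTdirect this is done across nodes via MPI; here it is a local illustration only.
import multiprocessing as mp
import psutil

MIN_FREE_BYTES = 2 * 1024**3  # illustrative 2 GB threshold

def work(task_queue):
    for task in iter(task_queue.get, None):
        pass  # evaluate the objective function over the boxes assigned to this task

def maybe_spawn(workers, task_queue):
    """Spawn one more worker process when free memory drops below the threshold."""
    if psutil.virtual_memory().available < MIN_FREE_BYTES:
        p = mp.Process(target=work, args=(task_queue,))
        p.start()
        workers.append(p)
```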
GHOSTS: The Stellar Populations in the Outskirts of Massive Disk Galaxies
NASA Astrophysics Data System (ADS)
De Jong, Roelof; Radburn-Smith, D. J.; Seth, A. C.; GHOSTS Team
2007-12-01
In recent years we have started to appreciate that the outskirts of galaxies contain valuable information about the formation process of galaxies. In hierarchical galaxy formation the stellar halos and thick disks of galaxies are thought to be the result of accretion of minor satellites, predominantly in the earlier assembly phases. The size, metallicity, and amount of substructure in current-day halos are therefore directly related to issues like the small-scale properties of the primordial power spectrum of density fluctuations and the suppression of star formation in small dark matter halos. I will show highlights from our ongoing HST/ACS/WFPC2 GHOSTS survey of the resolved stellar populations of 14 nearby, massive disk galaxies. I will show that the smaller galaxies (Vrot ~ 100 km/s) have very small halos, but that most massive disk galaxies (Vrot ~ 200 km/s) have very extended stellar envelopes. The luminosity of these envelopes seems to correlate with Hubble type and bulge-to-disk ratio, calling into question whether these are very extended bulge populations or inner halo populations. The amount of substructure varies strongly between galaxies. Finally, I will present the stellar populations of a very low surface brightness stream around M83, showing that it is old and fairly metal rich.
Bıçakçı, Zafer; Olcay, Lale
2014-06-01
Metabolic alkalosis, which is a non-massive blood transfusion complication, is not reported in the literature, although metabolic alkalosis dependent on citrate metabolism is reported to be a massive blood transfusion complication. The aim of this study was to investigate the effect of elevated carbon dioxide production due to citrate metabolism and serum electrolyte imbalance in patients who received frequent non-massive blood transfusions. Fifteen inpatients who were diagnosed with different conditions and who received frequent blood transfusions (10-30 ml/kg/day) were prospectively evaluated. Patients who had initial metabolic alkalosis (bicarbonate > 26 mmol/l), who needed at least one intensive blood transfusion in one to three days for a period of at least 15 days, and whose total transfusion amount did not fit the massive blood transfusion definition (<80 ml/kg) were included in the study. The estimated mean total citrate administered via blood and blood products was calculated as 43.2 ± 34.19 mg/kg/day (a total of 647.70 mg/kg in 15 days). Decompensated metabolic alkalosis + respiratory acidosis developed as a result of citrate metabolism. There was a positive correlation between the cumulative amount of citrate and the use of fresh frozen plasma, venous blood pH, ionized calcium, serum-blood gas sodium and mortality, whereas there was a negative correlation between the cumulative amount of citrate and serum calcium levels, serum phosphorus levels and the amount of urine chloride. In non-massive, but frequent blood transfusions, elevated carbon dioxide production due to citrate metabolism causes intracellular acidosis. As a result of intracellular acidosis compensation, decompensated metabolic alkalosis + respiratory acidosis and electrolyte imbalance may develop. This situation may contribute to the increase in mortality. In conclusion, it should be noted that non-massive, but frequent blood transfusions may result in certain complications.
A Massively Parallel Computational Method of Reading Index Files for SOAPsnv.
Zhu, Xiaoqian; Peng, Shaoliang; Liu, Shaojie; Cui, Yingbo; Gu, Xiang; Gao, Ming; Fang, Lin; Fang, Xiaodong
2015-12-01
SOAPsnv is the software used for identifying single nucleotide variation in cancer genes. However, its performance does not yet match the massive amount of data to be processed. Experiments reveal that the main performance bottleneck of the SOAPsnv software is the pileup algorithm. The original pileup algorithm's I/O process is time-consuming and inefficient at reading input files, and its scalability is also poor. Therefore, we designed a new algorithm, named BamPileup, to improve sequential read performance; the new pileup algorithm implements an index-based parallel read mode. Using this method, each thread can directly read the data starting from a specific position. The results of experiments on the Tianhe-2 supercomputer show that, when reading data in a multi-threaded parallel I/O way, the processing time of the algorithm is reduced to 3.9 s and the application can achieve a speed-up of up to 100×. Moreover, the scalability of the new algorithm is also satisfactory.
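The core idea, index-driven parallel reads in which each worker seeks straight to its own byte offset, can be sketched as below; the file path and (offset, length) index entries are hypothetical, and this is not the BamPileup code itself.

```python
# Hedged sketch of index-based parallel reads: every worker seeks to an offset
# taken from an index and reads only its own chunk. Paths/offsets are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def read_chunk(path, offset, length):
    with open(path, "rb") as f:
        f.seek(offset)          # jump straight to this worker's region
        return f.read(length)

def parallel_read(path, index, threads=8):
    """index: list of (offset, length) pairs, e.g. taken from a BAM-style index."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(lambda entry: read_chunk(path, *entry), index))
```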
Flood inundation extent mapping based on block compressed tracing
NASA Astrophysics Data System (ADS)
Shen, Dingtao; Rui, Yikang; Wang, Jiechen; Zhang, Yu; Cheng, Liang
2015-07-01
Flood inundation extent, depth, and duration are important factors affecting flood hazard evaluation. At present, flood inundation analysis is based mainly on a seeded region-growing algorithm, which is an inefficient process because it requires excessive recursive computation and is incapable of processing massive datasets. To address this problem, we propose a block compressed tracing algorithm for mapping the flood inundation extent, which reads the DEM data in blocks before transferring them to raster compression storage. This allows a smaller computer memory to process a larger amount of data, which solves the main problem of the regular seeded region-growing algorithm. In addition, the use of a raster boundary tracing technique allows the algorithm to avoid the time-consuming computations required by seeded region growing. Finally, we conduct a comparative evaluation in the Chin-sha River basin; the results show that the proposed method solves the problem of flood inundation extent mapping based on massive DEM datasets with higher computational efficiency than the original method, which makes it suitable for practical applications.
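For reference, the baseline that the block compressed tracing algorithm improves on can be sketched as a breadth-first seeded region growing over a DEM, marking every cell connected to the seed that lies below a given water level; the DEM array, seed, and water level here are illustrative assumptions.

```python
# Hedged sketch of the baseline seeded region-growing flood extent: flood every
# DEM cell reachable from the seed whose elevation is below the water level.
from collections import deque
import numpy as np

def inundation_extent(dem, seed, water_level):
    flooded = np.zeros(dem.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if flooded[r, c] or dem[r, c] > water_level:
            continue
        flooded[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1] and not flooded[rr, cc]:
                queue.append((rr, cc))
    return flooded

if __name__ == "__main__":
    dem = np.array([[1.0, 2.0, 5.0],
                    [1.5, 1.2, 5.0],
                    [5.0, 1.1, 1.0]])
    print(inundation_extent(dem, (0, 0), water_level=2.0).astype(int))
```

The queue-based traversal keeps the whole raster in memory, which is exactly the limitation the block-by-block compressed approach is designed to remove.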
Query-Structure Based Web Page Indexing
2012-11-01
…the massive amount of data present on the web. In our third participation in the web track at TREC 2012, we explore the idea of building an… the ad-hoc and diversity task. 1 INTRODUCTION. The rapid growth and massive quantities of data on the Internet have increased the importance and… complexity of information retrieval systems. The amount and the diversity of the web data introduce shortcomings in the way search engines rank their…
Processing Solutions for Big Data in Astronomy
NASA Astrophysics Data System (ADS)
Fillatre, L.; Lepiller, D.
2016-09-01
This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
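As a concrete illustration of the MapReduce schema mentioned above, the classic word-count example can be written in a few lines of plain Python (the map phase emits key-value pairs, a shuffle groups them by key, and the reduce phase aggregates each group); this is a toy stand-in, not Hadoop or Spark code.

```python
# Hedged sketch of the MapReduce schema in plain Python: map emits (key, value)
# pairs, an in-memory "shuffle" groups them by key, and reduce aggregates groups.
from collections import defaultdict

def map_phase(records):
    for record in records:
        for word in record.split():
            yield word, 1

def reduce_phase(pairs):
    groups = defaultdict(int)      # shuffle/group by key
    for key, value in pairs:
        groups[key] += value       # reduce: sum the counts per key
    return dict(groups)

if __name__ == "__main__":
    data = ["big data big compute", "big storage"]
    print(reduce_phase(map_phase(data)))  # {'big': 3, 'data': 1, 'compute': 1, 'storage': 1}
```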
Mass loss and stellar superwinds
NASA Astrophysics Data System (ADS)
Vink, Jorick S.
2017-09-01
Mass loss bridges the gap between massive stars and supernovae (SNe) in two major ways: (i) theoretically, it is the amount of mass lost that determines the mass of the star prior to explosion and (ii) observations of the circumstellar material around SNe may teach us the type of progenitor that made the SN. Here, I present the latest models and observations of mass loss from massive stars, both for canonical massive O stars, as well as very massive stars that show Wolf-Rayet type features. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'.
Observations of the Large Magellanic Cloud with Fermi
Abdo, A. A.; Ackermann, M.; Ajello, M.; ...
2010-03-18
Context. The Large Magellanic Cloud (LMC) is to date the only normal external galaxy that has been detected in high-energy gamma rays. High-energy gamma rays trace particle acceleration processes and gamma-ray observations allow the nature and sites of acceleration to be studied. Aims. We characterise the distribution and sources of cosmic rays in the LMC from analysis of gamma-ray observations. Methods. We analyse 11 months of continuous sky-survey observations obtained with the Large Area Telescope aboard the Fermi Gamma-Ray Space Telescope and compare it to tracers of the interstellar medium and models of the gamma-ray sources in the LMC. Results. The LMC is detected at 33σ significance. The integrated >100 MeV photon flux of the LMC amounts to (2.6 ± 0.2) × 10^-7 ph cm^-2 s^-1, which corresponds to an energy flux of (1.6 ± 0.1) × 10^-10 erg cm^-2 s^-1, with additional systematic uncertainties of 16%. The analysis reveals the massive star forming region 30 Doradus as a bright source of gamma-ray emission in the LMC, in addition to fainter emission regions found in the northern part of the galaxy. The gamma-ray emission from the LMC shows very little correlation with gas density and is rather correlated to tracers of massive star forming regions. The close confinement of gamma-ray emission to star forming regions suggests a relatively short GeV cosmic-ray proton diffusion length. In conclusion, the close correlation between cosmic-ray density and massive star tracers supports the idea that cosmic rays are accelerated in massive star forming regions as a result of the large amounts of kinetic energy that are input by the stellar winds and supernova explosions of massive stars into the interstellar medium.
Baylor, Peter A; Sobenes, Juan R; Vallyathan, Val
2013-05-01
We present a case of interstitial pulmonary fibrosis accompanied by radiographic evidence of progressive massive fibrosis in a patient who had a 15-20 year history of almost daily recreational inhalation of methamphetamine. Mineralogical analysis confirmed the presence of talc on biopsy of the area of progressive massive fibrosis. The coexistence of interstitial pulmonary fibrosis and progressive massive fibrosis suggests that prolonged recreational inhalation of methamphetamine that has been "cut" with talc can result in a sufficient amount of talc being inhaled to cause interstitial pulmonary fibrosis and progressive massive fibrosis in the absence of other causes.
Mass loss and stellar superwinds.
Vink, Jorick S
2017-10-28
Mass loss bridges the gap between massive stars and supernovae (SNe) in two major ways: (i) theoretically, it is the amount of mass lost that determines the mass of the star prior to explosion and (ii) observations of the circumstellar material around SNe may teach us the type of progenitor that made the SN. Here, I present the latest models and observations of mass loss from massive stars, both for canonical massive O stars, as well as very massive stars that show Wolf-Rayet type features. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'.
Massive thoracoabdominal aortic thrombosis in a patient with iatrogenic Cushing syndrome.
Kim, Dong Hun; Choi, Dong-Hyun; Lee, Young-Min; Kang, Joon Tae; Chae, Seung Seok; Kim, Bo-Bae; Ki, Young-Jae; Kim, Jin Hwa; Chung, Joong-Wha; Koh, Young-Youp
2014-01-01
Massive thoracoabdominal aortic thrombosis is a rare finding in patients with iatrogenic Cushing syndrome in the absence of any coagulation abnormality. It frequently represents an urgent surgical situation. We report the case of an 82-year-old woman with massive aortic thrombosis secondary to iatrogenic Cushing syndrome. A follow-up computed tomography scan showed a decreased amount of thrombus in the aorta after anticoagulation therapy alone.
Ongoing Massive Star Formation in NGC 604
NASA Astrophysics Data System (ADS)
Martínez-Galarza, J. R.; Hunter, D.; Groves, B.; Brandl, B.
2012-12-01
NGC 604 is the second most massive H II region in the Local Group, thus an important laboratory for massive star formation. Using a combination of observational and analytical tools that include Spitzer spectroscopy, Herschel photometry, Chandra imaging, and Bayesian spectral energy distribution fitting, we investigate the physical conditions in NGC 604 and quantify the amount of massive star formation currently taking place. We derive an average age of 4 ± 1 Myr and a total stellar mass of 1.6^{+1.6}_{-1.0} × 10^5 M⊙ for the entire region, in agreement with previous optical studies. Across the region, we find an effect of the X-ray field on both the abundance of aromatic molecules and the [Si II] emission. Within NGC 604, we identify several individual bright infrared sources with diameters of about 15 pc and luminosity-weighted masses between 10^3 M⊙ and 10^4 M⊙. Their spectral properties indicate that some of these sources are embedded clusters in process of formation, which together account for ~8% of the total stellar mass in the NGC 604 system. The variations of the radiation field strength across NGC 604 are consistent with a sequential star formation scenario, with at least two bursts in the last few million years. Our results indicate that massive star formation in NGC 604 is still ongoing, likely triggered by the earlier bursts.
On the spatial distributions of dense cores in Orion B
NASA Astrophysics Data System (ADS)
Parker, Richard J.
2018-05-01
We quantify the spatial distributions of dense cores in three spatially distinct areas of the Orion B star-forming region. For L1622, NGC 2068/NGC 2071, and NGC 2023/NGC 2024, we measure the amount of spatial substructure using the Q-parameter and find all three regions to be spatially substructured (Q < 0.8). We quantify the amount of mass segregation using Λ_MSR and find that the most massive cores are mildly mass segregated in NGC 2068/NGC 2071 (Λ_MSR ~ 2), and very mass segregated in NGC 2023/NGC 2024 (Λ_MSR = 28^{+13}_{-10} for the four most massive cores). Whereas the most massive cores in L1622 are not in areas of relatively high surface density, or deeper gravitational potentials, the massive cores in NGC 2068/NGC 2071 and NGC 2023/NGC 2024 are significantly so. Given the low density (10 cores pc^-2) and spatial substructure of cores in Orion B, the mass segregation cannot be dynamical. Our results are also inconsistent with simulations in which the most massive stars form via competitive accretion, and instead hint that magnetic fields may be important in influencing the primordial spatial distributions of gas and stars in star-forming regions.
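For context, the mass segregation ratio quoted here is assumed to be the standard minimum-spanning-tree (MST) definition of Allison et al. (2009), in which the MST length of the N most massive objects is compared with that of randomly drawn sets of N objects; the notation below reflects that assumption.

```latex
% Assumed (Allison et al. 2009) MST-based definition of the mass segregation ratio:
% l_subset  = MST length of the N most massive cores
% <l_norm>  = mean MST length of many random sets of N cores (sigma_norm its dispersion)
\Lambda_{\mathrm{MSR}} \;=\; \frac{\langle l_{\mathrm{norm}} \rangle}{l_{\mathrm{subset}}}
\;\pm\; \frac{\sigma_{\mathrm{norm}}}{l_{\mathrm{subset}}},
\qquad \Lambda_{\mathrm{MSR}} > 1 \ \text{indicates mass segregation.}
```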
Improving Performance and Predictability of Storage Arrays
ERIC Educational Resources Information Center
Altiparmak, Nihat
2013-01-01
Massive amounts of data are generated every day through sensors, Internet transactions, social networks, video, and all other digital sources available. Many organizations store this data to enable breakthrough discoveries and innovation in science, engineering, medicine, and commerce. Such a massive scale of data poses new research problems called big…
Distributed Fast Self-Organized Maps for Massive Spectrophotometric Data Analysis †.
Dafonte, Carlos; Garabato, Daniel; Álvarez, Marco A; Manteiga, Minia
2018-05-03
Analyzing huge amounts of data becomes essential in the era of Big Data, where databases are populated with hundreds of Gigabytes that must be processed to extract knowledge. Hence, classical algorithms must be adapted towards distributed computing methodologies that leverage the underlying computational power of these platforms. Here, a parallel, scalable, and optimized design for self-organized maps (SOM) is proposed in order to analyze massive data gathered by the spectrophotometric sensor of the European Space Agency (ESA) Gaia spacecraft, although it could be extrapolated to other domains. The performance comparison between the sequential implementation and the distributed ones based on Apache Hadoop and Apache Spark is an important part of the work, as well as the detailed analysis of the proposed optimizations. Finally, a domain-specific visualization tool to explore astronomical SOMs is presented.
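The computation the distributed implementations parallelize is the classic SOM training step; a minimal sequential sketch of that step is given below, with illustrative learning-rate and neighbourhood parameters (this is not the authors' Hadoop/Spark code).

```python
# Hedged sketch of one sequential SOM update: find the best-matching unit (BMU),
# then pull the weights of nearby map units toward the input sample.
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """weights: (n_units, n_features); grid: (n_units, 2) map coordinates; x: (n_features,)."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)           # squared map-space distance
    h = np.exp(-d2 / (2.0 * sigma ** 2))                   # Gaussian neighbourhood
    weights += lr * h[:, None] * (x - weights)             # move units toward the sample

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
    weights = rng.random((100, 3))
    som_step(weights, grid, rng.random(3))
```

Because each sample's BMU search is independent, batches of samples can be distributed across workers and their weight updates combined, which is the usual route to Hadoop- or Spark-style parallelism.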
Image processing and products for the Magellan mission to Venus
NASA Technical Reports Server (NTRS)
Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche
1992-01-01
The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.
ERIC Educational Resources Information Center
Corbeil, Maria Elena; Corbeil, Joseph Rene; Khan, Badrul H.
2017-01-01
Due to rapid advancements in our ability to collect, process, and analyze massive amounts of data, it is now possible for educational institutions to gain new insights into how people learn (Kumar, 2013). E-learning has become an important part of education, and this form of learning is especially suited to the use of big data and data analysis,…
Experience in highly parallel processing using DAP
NASA Technical Reports Server (NTRS)
Parkinson, D.
1987-01-01
Distributed Array Processors (DAP) have been in day to day use for ten years and a large amount of user experience has been gained. The profile of user applications is similar to that of the Massively Parallel Processor (MPP) working group. Experience has shown that contrary to expectations, highly parallel systems provide excellent performance on so-called dirty problems such as the physics part of meteorological codes. The reasons for this observation are discussed. The arguments against replacing bit processors with floating point processors are also discussed.
Condensate of massive graviton and dark matter
NASA Astrophysics Data System (ADS)
Aoki, Katsuki; Maeda, Kei-ichi
2018-02-01
We study coherently oscillating massive gravitons in the ghost-free bigravity theory. This coherent field can be interpreted as a condensate of massive gravitons. We first define the effective energy-momentum tensor of the coherent massive gravitons in a curved spacetime. We then study the background dynamics of the Universe and the cosmic structure formation including the effects of the coherent massive gravitons. We find that the condensate of the massive graviton behaves as a dark matter component of the Universe. From the geometrical point of view the condensate is regarded as a spacetime anisotropy. Hence, in our scenario, dark matter originates from a tiny deformation of the spacetime. We also discuss the production of the spacetime anisotropy and find that an extragalactic magnetic field of primordial origin can yield a sufficient amount for dark matter.
Peer Assessment for Massive Open Online Courses (MOOCs)
ERIC Educational Resources Information Center
Suen, Hoi K.
2014-01-01
The teach-learn-assess cycle in education is broken in a typical massive open online course (MOOC). Without formative assessment and feedback, MOOCs amount to information dump or broadcasting shows, not educational experiences. A number of remedies have been attempted to bring formative assessment back into MOOCs, each with its own limits and…
Approaches to advance scientific understanding of macrosystems ecology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, Ofir; Ball, Becky; Bond-Lamberty, Benjamin
Macrosystem ecological studies inherently investigate processes that interact across multiple spatial and temporal scales, requiring intensive sampling and massive amounts of data from diverse sources to incorporate complex cross-scale and hierarchical interactions. Inherent challenges associated with these characteristics include high computational demands, data standardization and assimilation, identification of important processes and scales without prior knowledge, and the need for large, cross-disciplinary research teams that conduct long-term studies. Therefore, macrosystem ecology studies must utilize a unique set of approaches that are capable of encompassing these methodological characteristics and associated challenges. Several case studies demonstrate innovative methods used in current macrosystem ecology studies.
Roadmap of optical communications
NASA Astrophysics Data System (ADS)
Agrell, Erik; Karlsson, Magnus; Chraplyvy, A. R.; Richardson, David J.; Krummrich, Peter M.; Winzer, Peter; Roberts, Kim; Fischer, Johannes Karl; Savory, Seb J.; Eggleton, Benjamin J.; Secondini, Marco; Kschischang, Frank R.; Lord, Andrew; Prat, Josep; Tomkos, Ioannis; Bowers, John E.; Srinivasan, Sudha; Brandt-Pearce, Maïté; Gisin, Nicolas
2016-06-01
Lightwave communications is a necessity for the information age. Optical links provide enormous bandwidth, and the optical fiber is the only medium that can meet the modern society's needs for transporting massive amounts of data over long distances. Applications range from global high-capacity networks, which constitute the backbone of the internet, to the massively parallel interconnects that provide data connectivity inside datacenters and supercomputers. Optical communications is a diverse and rapidly changing field, where experts in photonics, communications, electronics, and signal processing work side by side to meet the ever-increasing demands for higher capacity, lower cost, and lower energy consumption, while adapting the system design to novel services and technologies. Due to the interdisciplinary nature of this rich research field, Journal of Optics has invited 16 researchers, each a world-leading expert in their respective subfields, to contribute a section to this invited review article, summarizing their views on state-of-the-art and future developments in optical communications.
Chung, Jongsuk; Son, Dae-Soon; Jeon, Hyo-Jeong; Kim, Kyoung-Mee; Park, Gahee; Ryu, Gyu Ha; Park, Woong-Yang; Park, Donghyun
2016-01-01
Targeted capture massively parallel sequencing is increasingly being used in clinical settings, and as costs continue to decline, use of this technology may become routine in health care. However, a limited amount of tissue has often been a challenge in meeting quality requirements. To offer a practical guideline for the minimum amount of input DNA for targeted sequencing, we optimized and evaluated the performance of targeted sequencing depending on the input DNA amount. First, using various amounts of input DNA, we compared commercially available library construction kits and selected Agilent’s SureSelect-XT and KAPA Biosystems’ Hyper Prep kits as the kits most compatible with targeted deep sequencing using Agilent’s SureSelect custom capture. Then, we optimized the adapter ligation conditions of the Hyper Prep kit to improve library construction efficiency and adapted multiplexed hybrid selection to reduce the cost of sequencing. In this study, we systematically evaluated the performance of the optimized protocol depending on the amount of input DNA, ranging from 6.25 to 200 ng, suggesting the minimal input DNA amounts based on coverage depths required for specific applications. PMID:27220682
The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey
NASA Astrophysics Data System (ADS)
Figura, Charles C.; Urquhart, J. S.
2013-01-01
Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSO) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and Two Micron All Sky Survey. We present a study of over 900 MYSO and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in context with the observed structure of the galaxy. The distribution of MYSO and UCHII regions is found to be spatially correlated with the spiral arms and galactic bar. We examine the radial distribution of MYSOs and UCHII regions and find variations in the star formation rate between the inner and outer Galaxy and discuss the implications for star formation throughout the galactic disc.
High Resolution Studies of Mass Loss from Massive Binary Stars
NASA Astrophysics Data System (ADS)
Corcoran, Michael F.; Gull, Theodore R.; Hamaguchi, Kenji; Richardson, Noel; Madura, Thomas; Post Russell, Christopher Michael; Teodoro, Mairan; Nichols, Joy S.; Moffat, Anthony F. J.; Shenar, Tomer; Pablo, Herbert
2017-01-01
Mass loss from hot luminous single and binary stars has a significant, perhaps decisive, effect on their evolution. The combination of X-ray observations of hot shocked gas embedded in the stellar winds and high-resolution optical/UV spectra of the cooler mass in the outflow provides unique ways to study the unstable process by which massive stars lose mass both through continuous stellar winds and rare, impulsive, large-scale mass ejections. The ability to obtain coordinated observations with the Hubble Space Telescope Imaging Spectrograph (HST/STIS) and the Chandra High-Energy Transmission Grating Spectrometer (HETGS) and other X-ray observatories has allowed, for the first time, studies of resolved line emisssion over the temperature range of 104- 108K, and has provided observations to confront numerical dynamical models in three dimensions. Such observations advance our knowledge of mass-loss asymmetries, spatial and temporal variabilities, and the fundamental underlying physics of the hot shocked outflow, providing more realistic constraints on the amount of mass lost by different luminous stars in a variety of evolutionary stages. We discuss the impact that these joint observational studies have had on our understanding of dynamical mass outflows from massive stars, with particular emphasis on two important massive binaries, Delta Ori Aa, a linchpin of the mass luminosity relation for upper HRD main sequence stars, and the supermassive colliding wind binary Eta Carinae.
NASA Astrophysics Data System (ADS)
Christensen, C.; Summa, B.; Scorzelli, G.; Lee, J. W.; Venkat, A.; Bremer, P. T.; Pascucci, V.
2017-12-01
Massive datasets are becoming more common due to increasingly detailed simulations and higher resolution acquisition devices. Yet accessing and processing these huge data collections for scientific analysis is still a significant challenge. Solutions that rely on extensive data transfers are increasingly untenable and often impossible due to lack of sufficient storage at the client side as well as insufficient bandwidth to conduct such large transfers, that in some cases could entail petabytes of data. Large-scale remote computing resources can be useful, but utilizing such systems typically entails some form of offline batch processing with long delays, data replications, and substantial cost for any mistakes. Both types of workflows can severely limit the flexible exploration and rapid evaluation of new hypotheses that are crucial to the scientific process and thereby impede scientific discovery. In order to facilitate interactivity in both analysis and visualization of these massive data ensembles, we introduce a dynamic runtime system suitable for progressive computation and interactive visualization of arbitrarily large, disparately located spatiotemporal datasets. Our system includes an embedded domain-specific language (EDSL) that allows users to express a wide range of data analysis operations in a simple and abstract manner. The underlying runtime system transparently resolves issues such as remote data access and resampling while at the same time maintaining interactivity through progressive and interruptible processing. Computations involving large amounts of data can be performed remotely in an incremental fashion that dramatically reduces data movement, while the client receives updates progressively thereby remaining robust to fluctuating network latency or limited bandwidth. This system facilitates interactive, incremental analysis and visualization of massive remote datasets up to petabytes in size. Our system is now available for general use in the community through both docker and anaconda.
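The progressive, interruptible processing model described above can be illustrated with a toy sketch: the computation streams coarse-to-fine passes over the data and yields an updated partial answer after each pass, so a client can stop (or refine further) at any point. The statistic and refinement schedule here are illustrative assumptions, not the system's EDSL.

```python
# Hedged sketch of progressive, interruptible computation: refine an estimate over
# successively denser subsamples and yield a usable partial result at every level.
import numpy as np

def progressive_mean(data, levels=4):
    """Yield (level, estimate) pairs from coarse subsamples down to the full array."""
    for level in range(levels, -1, -1):
        step = 2 ** level                         # coarser levels touch far fewer samples
        yield level, float(data[::step].mean())   # the caller may stop after any yield

if __name__ == "__main__":
    x = np.random.default_rng(0).random(1_000_000)
    for level, estimate in progressive_mean(x):
        print(level, estimate)
```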
Characterisation and Processing of Some Iron Ores of India
NASA Astrophysics Data System (ADS)
Krishna, S. J. G.; Patil, M. R.; Rudrappa, C.; Kumar, S. P.; Ravi, B. P.
2013-10-01
Lack of process characterization data for ores, based on granulometry, texture, mineralogy, and physical and chemical properties, together with the merits and limitations of the process and the market and local conditions, may mislead the mineral processing entrepreneur. The proper implementation of process characterization and geotechnical map data will result in optimized, sustainable utilization of the resource by processing. A few case studies of process characterization of some Indian iron ores are dealt with. The tentative ascending order of process refractoriness of iron ores is massive hematite/magnetite < marine black iron oxide sands < laminated soft friable siliceous ore fines < massive banded magnetite quartzite < laminated soft friable clayey aluminous ore fines < massive banded hematite quartzite/jasper < massive clayey hydrated iron oxide ore < manganese-bearing iron ores < massive Ti-V bearing magnetite magmatic ore < ferruginous cherty quartzite. Based on diagnostic process characterization, the ores have been classified and generic processes have been adopted for some Indian iron ores.
Energy-efficient STDP-based learning circuits with memristor synapses
NASA Astrophysics Data System (ADS)
Wu, Xinyu; Saxena, Vishal; Campbell, Kristy A.
2014-05-01
It is now accepted that the traditional von Neumann architecture, with processor and memory separation, is ill suited to processing the parallel data streams which a mammalian brain can efficiently handle. Moreover, researchers now envision computing architectures which enable cognitive processing of massive amounts of data by identifying spatio-temporal relationships in real-time and solving complex pattern recognition problems. Memristor cross-point arrays, integrated with standard CMOS technology, are expected to result in massively parallel and low-power neuromorphic computing architectures. Recently, significant progress has been made in spiking neural networks (SNNs), which emulate data processing in the cortical brain. These architectures comprise a dense network of neurons and the synapses formed between the axons and dendrites. Further, unsupervised or supervised competitive learning schemes are being investigated for global training of the network. In contrast to a software implementation, hardware realization of these networks requires massive circuit overhead for addressing and individually updating network weights. Instead, we employ bio-inspired learning rules such as spike-timing-dependent plasticity (STDP) to efficiently update the network weights locally. To realize SNNs on a chip, we propose to densely integrate mixed-signal integrate-and-fire neurons (IFNs) and cross-point arrays of memristors in the back-end-of-line (BEOL) of CMOS chips. Novel IFN circuits have been designed to drive memristive synapses in parallel while maintaining overall power efficiency (<1 pJ/spike/synapse), even at spike rates greater than 10 MHz. We present circuit design details and simulation results of the IFN with memristor synapses, its response to incoming spike trains, and STDP learning characterization.
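As a reference point for the learning rule mentioned, a pair-based STDP weight update is sketched below; the amplitudes and time constants are illustrative assumptions, not values drawn from the memristor devices described.

```python
# Hedged sketch of pair-based STDP: potentiate when the presynaptic spike precedes
# the postsynaptic spike, depress otherwise. Constants are illustrative only.
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20e-3, tau_minus=20e-3):
    """dt = t_post - t_pre in seconds; returns the synaptic weight change."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)    # causal pairing -> potentiation
    return -a_minus * math.exp(dt / tau_minus)      # anti-causal pairing -> depression

if __name__ == "__main__":
    print(stdp_dw(5e-3), stdp_dw(-5e-3))  # small potentiation, small depression
```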
Incident analysis of Bucheon LPG filling station pool fire and BLEVE.
Park, Kyoshik; Mannan, M Sam; Jo, Young-Do; Kim, Ji-Yoon; Keren, Nir; Wang, Yanjun
2006-09-01
An LPG filling station incident in Korea has been studied. The direct cause of the incident was concluded to be faulty joining of the couplings of the hoses during the butane unloading process from a tank lorry into an underground storage tank. The faulty connection of a hose to the tank lorry resulted in a massive leak of gas followed by catastrophic explosions. The leaking source was verified by calculating the amount of released LPG and by analyzing captured photos recorded by the television news service. Two BLEVEs were also studied.
NASA Astrophysics Data System (ADS)
Lara-López, M. A.; Hopkins, A. M.; López-Sánchez, A. R.; Brough, S.; Colless, M.; Bland-Hawthorn, J.; Driver, S.; Foster, C.; Liske, J.; Loveday, J.; Robotham, A. S. G.; Sharp, R. G.; Steele, O.; Taylor, E. N.
2013-06-01
We study the interplay between gas phase metallicity (Z), specific star formation rate (SSFR) and neutral hydrogen gas (H I) for galaxies of different stellar masses. Our study uses spectroscopic data from Galaxy and Mass Assembly and Sloan Digital Sky Survey (SDSS) star-forming galaxies, as well as H I detection from the Arecibo Legacy Fast Arecibo L-band Feed Array (ALFALFA) and Galex Arecibo SDSS Survey (GASS) public catalogues. We present a model based on the Z-SSFR relation that shows that at a given stellar mass, depending on the amount of gas, galaxies will follow opposite behaviours. Low-mass galaxies with a large amount of gas will show high SSFR and low metallicities, while low-mass galaxies with small amounts of gas will show lower SSFR and high metallicities. In contrast, massive galaxies with a large amount of gas will show moderate SSFR and high metallicities, while massive galaxies with small amounts of gas will show low SSFR and low metallicities. Using ALFALFA and GASS counterparts, we find that the amount of gas is related to those drastic differences in Z and SSFR for galaxies of a similar stellar mass.
Non-standard s-process in low metallicity massive rotating stars
NASA Astrophysics Data System (ADS)
Frischknecht, U.; Hirschi, R.; Thielemann, F.-K.
2012-02-01
Context. Rotation is known to have a strong impact on the nucleosynthesis of light elements in massive stars, mainly by inducing mixing in radiative zones. In particular, rotation boosts the primary nitrogen production, and models of rotating stars are able to reproduce the nitrogen observed in low-metallicity halo stars. Aims: Here we present the first grid of stellar models for rotating massive stars at low metallicity, where a full s-process network is used to study the impact of rotation-induced mixing on the neutron capture nucleosynthesis of heavy elements. Methods: We used the Geneva stellar evolution code that includes an enlarged reaction network with nuclear species up to bismuth to calculate 25 M⊙ models at three different metallicities (Z = 10^-3, 10^-5, and 10^-7) and with different initial rotation rates. Results: First, we confirm that rotation-induced mixing (shear) between the convective H-shell and He-core leads to a large production of primary 22Ne (0.1 to 1% in mass fraction), which is the main neutron source for the s-process in massive stars. Therefore rotation boosts the s-process in massive stars at all metallicities. Second, the neutron-to-seed ratio increases with decreasing Z in models including rotation, which leads to the complete consumption of all iron seeds at metallicities below Z = 10^-3 by the end of core He-burning. Thus at low Z, the iron seeds are the main limitation for this boosted s-process. Third, as the metallicity decreases, the production of elements up to the Ba peak increases at the expense of the elements of the Sr peak. We studied the impact of the initial rotation rate and of the highly uncertain 17O(α,γ) rate (which strongly affects the strength of 16O as a neutron poison) on our results. This study shows that rotating models can produce significant amounts of elements up to Ba over a wide range of Z, which has important consequences for our understanding of the formation of these elements in low-metallicity environments like the halo of our galaxy and globular clusters. Fourth, compared to the He-core, the primary 22Ne production induced by rotation in the He-shell is even higher (greater than 1% in mass fraction at all metallicities), which could open the door for an explosive neutron capture nucleosynthesis in the He-shell, with a primary neutron source.
Mineral deposit densities for estimating mineral resources
Singer, Donald A.
2008-01-01
Estimates of numbers of mineral deposits are fundamental to assessing undiscovered mineral resources. Just as frequencies of grades and tonnages of well-explored deposits can be used to represent the grades and tonnages of undiscovered deposits, the density of deposits (deposits/area) in well-explored control areas can serve to represent the number of deposits. Empirical evidence presented here indicates that the processes affecting the number and quantity of resources in geological settings are very general across many types of mineral deposits. For podiform chromite, porphyry copper, and volcanogenic massive sulfide deposit types, the size of tract that geologically could contain the deposits is an excellent predictor of the total number of deposits. The number of mineral deposits is also proportional to the type's size. The total amount of mineralized rock is also proportional to the size of the permissive area and the deposit type's median size. Regressions using these variables provide a means to estimate the density of deposits and the total amount of mineralization. These powerful estimators are based on analysis of ten different types of mineral deposits (Climax Mo, Cuban Mn, Cyprus massive sulfide, Franciscan Mn, kuroko massive sulfide, low-sulfide quartz-Au vein, placer Au, podiform Cr, porphyry Cu, and W vein) from 108 permissive control tracts around the world, therefore generalizing across deposit types. Despite the diverse and complex geological settings of the deposit types studied here, the relationships observed indicate universal controls on the accumulation and preservation of mineral resources that operate across all scales. The strength of the relationships (R^2 = 0.91 for density and 0.95 for mineralized rock) argues for their broad use. Deposit densities can now be used to provide a guideline for expert judgment or used directly for estimating the number of most kinds of mineral deposits.
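The kind of regression described, a log-log fit predicting deposit counts from permissive-tract area and deposit size, can be sketched as follows; the synthetic data and coefficients are illustrative only and do not reproduce the published relationships.

```python
# Hedged sketch of a log-log regression of deposit counts on tract area and median
# deposit size. The data below are synthetic; the published fitted relations differ.
import numpy as np

def fit_loglog(area_km2, median_tonnage, n_deposits):
    X = np.column_stack([np.ones_like(area_km2),
                         np.log10(area_km2),
                         np.log10(median_tonnage)])
    y = np.log10(n_deposits)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # intercept and slopes in log10 space

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    area = rng.uniform(1e2, 1e5, 50)          # permissive tract areas (km^2)
    med = rng.uniform(1e5, 1e8, 50)           # median deposit tonnages
    n = 10 ** (0.5 + 0.6 * np.log10(area) - 0.2 * np.log10(med)
               + rng.normal(0, 0.1, 50))      # synthetic deposit counts
    print(fit_loglog(area, med, n))           # recovers roughly [0.5, 0.6, -0.2]
```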
Massively parallel information processing systems for space applications
NASA Technical Reports Server (NTRS)
Schaefer, D. H.
1979-01-01
NASA is developing massively parallel systems for ultra-high-speed processing of digital image data collected by satellite-borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog-to-digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.
ERIC Educational Resources Information Center
Comer, Denise; Baker, Ryan; Wang, Yuan
2015-01-01
There are many positive aspects of teaching and learning in Massive Online Open Courses (MOOCs), for both instructors and students. However, there is also a considerable amount of negativity in MOOCs, emerging from learners on discussion forums and through peer assessment, from disciplinary colleagues and from public discourse around MOOCs.…
JELLYFISH: EVIDENCE OF EXTREME RAM-PRESSURE STRIPPING IN MASSIVE GALAXY CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebeling, H.; Stephenson, L. N.; Edge, A. C.
Ram-pressure stripping by the gaseous intracluster medium has been proposed as the dominant physical mechanism driving the rapid evolution of galaxies in dense environments. Detailed studies of this process have, however, largely been limited to relatively modest examples affecting only the outermost gas layers of galaxies in nearby and/or low-mass galaxy clusters. We here present results from our search for extreme cases of gas-galaxy interactions in much more massive, X-ray selected clusters at z > 0.3. Using Hubble Space Telescope snapshots in the F606W and F814W passbands, we have discovered dramatic evidence of ram-pressure stripping in which copious amounts of gas are first shock compressed and then removed from galaxies falling into the cluster. Vigorous starbursts triggered by this process across the galaxy-gas interface and in the debris trail cause these galaxies to temporarily become some of the brightest cluster members in the F606W passband, capable of outshining even the Brightest Cluster Galaxy. Based on the spatial distribution and orientation of systems viewed nearly edge-on in our survey, we speculate that infall at large impact parameter gives rise to particularly long-lasting stripping events. Our sample of six spectacular examples identified in clusters from the Massive Cluster Survey, all featuring M_F606W < -21 mag, doubles the number of such systems presently known at z > 0.2 and facilitates detailed quantitative studies of the most violent galaxy evolution in clusters.
Computational overlay metrology with adaptive data analytics
NASA Astrophysics Data System (ADS)
Schmitt-Weaver, Emil; Subramony, Venky; Ullah, Zakir; Matsunobu, Masazumi; Somasundaram, Ravin; Thomas, Joel; Zhang, Linmiao; Thul, Klaus; Bhattacharyya, Kaustuve; Goossens, Ronald; Lambregts, Cees; Tel, Wim; de Ruiter, Chris
2017-03-01
With photolithography as the fundamental patterning step in the modern nanofabrication process, every wafer within a semiconductor fab will pass through a lithographic apparatus multiple times. With more than 20,000 sensors producing more than 700 GB of data per day across multiple subsystems, the combination of a light source and lithographic apparatus provides a massive amount of information for data analytics. This paper outlines how adaptive analytics, data analysis tools and techniques that extend insight into data traditionally considered unmanageably large, can be used to detect small process-dependent wafer-to-wafer changes in overlay from data collected before the wafer is exposed.
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive, in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework, techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
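As a rough illustration of the MapReduce-style processing the framework builds on, the sketch below aggregates a gridded geoscience variable by spatial key using plain Python map and reduce steps. The record layout, key choice and function names are assumptions for illustration; the actual framework runs on HBase and a Hadoop-style runtime rather than in-process Python.

```python
from collections import defaultdict

# Toy records: (cell_id, day, value) triples standing in for gridded
# geoscience observations; real data would be read from HBase/HDFS.
records = [("cell-7", "2014-01-01", 2.4),
           ("cell-7", "2014-01-02", 3.1),
           ("cell-9", "2014-01-01", -0.5),
           ("cell-9", "2014-01-02", 0.2)]

def map_phase(record):
    """Emit a (key, value) pair; here the key is the spatial cell."""
    cell_id, _day, value = record
    return (cell_id, value)

def reduce_phase(key, values):
    """Combine all values for one key into an aggregate (mean here)."""
    return key, sum(values) / len(values)

# Shuffle: group mapped pairs by key, as a MapReduce runtime would.
groups = defaultdict(list)
for key, value in map(map_phase, records):
    groups[key].append(value)

results = [reduce_phase(key, values) for key, values in groups.items()]
print(results)   # -> [('cell-7', 2.75), ('cell-9', -0.15)] up to float rounding
```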
A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth; Geveci, Berk
2014-11-01
The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends suggest that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
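The contrast drawn above between pipeline filters and worklets can be sketched as follows: a filter owns state and runs when the pipeline requests it, while a worklet is a stateless per-element operation that a scheduler can map over very many lightweight threads. The sketch below mimics that idea on a CPU thread pool; the names and structure are illustrative, not the project's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def magnitude_worklet(vec):
    """A 'worklet': a stateless operation applied independently to one
    element of the dataset (here, one 3-component vector)."""
    x, y, z = vec
    return (x * x + y * y + z * z) ** 0.5

# Toy field data; a real framework would hold millions of elements and
# schedule the worklet across massive numbers of lightweight threads.
vectors = [(1.0, 2.0, 2.0), (3.0, 4.0, 0.0), (0.0, 0.0, 5.0)]

with ThreadPoolExecutor(max_workers=4) as pool:
    magnitudes = list(pool.map(magnitude_worklet, vectors))

print(magnitudes)  # [3.0, 5.0, 5.0]
```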
NASA Astrophysics Data System (ADS)
Wang, Hao; Li, Xiaohu; Chu, Fengyou; Li, Zhenggang; Wang, Jianqiang; Yu, Xing; Bi, Dongwei
2018-04-01
The 15.2°S hydrothermal field is located at 15.2°S, 13.4°W within the Mid-Atlantic Ridge (MAR) and was initially discovered during Cruise DY125-22 of the Chinese expedition aboard R/V Dayangyihao in 2011. Here, we provide detailed mineralogical, bulk geochemical, and Sr-Pb isotopic data for massive sulfides and basalts from the 15.2°S hydrothermal field to improve our understanding of the mineral compositions, geochemical characteristics, type of hydrothermal field, and the source of metals present at this vent site. The samples include 14 massive sulfides and a single basalt. The massive sulfides are dominated by pyrite with minor amounts of sphalerite and chalcopyrite, although a few samples also contain minor amounts of gordaite, a sulfate mineral. The sulfides have bulk compositions that contain low concentrations of Cu + Zn (mean 7.84 wt%), Co (mean 183 ppm), Ni (mean 3 ppm), and Ba (mean 16 ppm), similar to the Normal Mid-Ocean Ridge Basalt (N-MORB) type deposits along the MAR but different from the compositions of the Enriched-MORB (E-MORB) and ultramafic type deposits along this spreading ridge. Sulfides from the study area have Pb isotopic compositions (206Pb/204Pb = 18.4502-18.4538, 207Pb/204Pb = 15.4903-15.4936, 208Pb/204Pb = 37.8936-37.9176) that are similar to those of the basalt sample (206Pb/204Pb = 18.3381, 207Pb/204Pb = 15.5041, 208Pb/204Pb = 37.9411), indicating that the metals within the sulfides were derived from leaching of the surrounding basaltic rocks. The sulfides also have 87Sr/86Sr ratios (0.708200-0.709049) that are much higher than typical MAR hydrothermal fluids (0.7028-0.7046), suggesting that the hydrothermal fluids mixed with a significant amount of seawater during massive sulfide precipitation.
ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.
Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping
2018-04-27
A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
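The paper's load-balancing strategy is not detailed in the abstract; as a stand-in, the sketch below shows one simple and common static strategy, longest-processing-time-first assignment of documents to workers by estimated cost (here, text length). This only illustrates the kind of balancing involved and is not paraBTM's actual algorithm.

```python
import heapq

def balance_by_length(documents, n_workers):
    """Assign documents to workers so that the total text length per worker
    is roughly even (longest-processing-time-first heuristic)."""
    # Min-heap of (current_load, worker_id); the lightest-loaded worker
    # receives the next-largest document.
    heap = [(0, w) for w in range(n_workers)]
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for doc in sorted(documents, key=len, reverse=True):
        load, worker = heapq.heappop(heap)
        assignment[worker].append(doc)
        heapq.heappush(heap, (load + len(doc), worker))
    return assignment

docs = ["short abstract", "a much longer full-text article " * 20,
        "medium sized review paper " * 5, "tiny note"]
for worker, assigned in balance_by_length(docs, 2).items():
    print("worker", worker, sum(len(d) for d in assigned), "characters")
```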
Problematic usage among highly-engaged players of massively multiplayer online role playing games.
Peters, Christopher S; Malesky, L Alvin
2008-08-01
One popular facet of Internet gaming is the massively multiplayer online role playing game (MMORPG). Some individuals spend so much time playing these games that it creates problems in their lives. This study focused on players of World of Warcraft. Factor analysis revealed one factor related to problematic usage, which was correlated with the amount of time played and with the personality characteristics of agreeableness, conscientiousness, neuroticism, and extraversion.
NASA Astrophysics Data System (ADS)
Hage, Sophie; Cartigny, Matthieu; Clare, Michael; Sumner, Esther; Talling, Peter; Vendettuoli, Daniela; Hughes Clarke, John; Hubbard, Stephen
2017-04-01
Massive sandstones have been studied in many outcrops worldwide as they form a building block of good subsurface petroleum reservoirs. Massive sands are often associated with turbidite sequences in ancient sedimentary successions. Turbidites are widely known to result from the deceleration of turbidity currents, underwater flows driven by the excess density of the sediments they carry in suspension. The depositional processes associated with the formation of massive sands are still under debate in the literature, and many theoretical mechanisms have been suggested based on outcrop interpretations, lab experiments and numerical models. Here we present the first field observations that show how massive sands are generated from flow instabilities associated with supercritical flow processes occurring in turbidity currents. We combine turbidity current measurements with seafloor topography observations on the active Squamish Delta, British Columbia (Canada). We show that supercritical flow processes shape crescent-shaped bedforms on the seafloor, and how these crescent-shaped bedforms are built by massive sands. This modern process-product link is then used to interpret massive sandstone successions found in ancient outcrops. We demonstrate that supercritical-flow processes can be recognised in outcrops and that these processes produce highly diachronous stratigraphic surfaces in the rock record. This has profound implications for how to interpret ancient geological successions and the Earth history they archive.
Expeditious reconciliation for practical quantum key distribution
NASA Astrophysics Data System (ADS)
Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.
2004-08-01
The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
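As background for the Cascade-like reconciliation described above, the sketch below shows the core primitive such protocols repeat at several block sizes: compare the parity of a block over the public channel and, when it differs, binary-search within the block to locate and flip one erroneous bit. This is a textbook illustration of the idea, not the authors' optimized, symmetric, FEC-assisted implementation.

```python
def parity(bits, lo, hi):
    """Parity of bits[lo:hi]; in a real protocol this value is exchanged
    over the authenticated classical channel."""
    return sum(bits[lo:hi]) % 2

def binary_correct(alice, bob, lo, hi):
    """Locate and flip one error in bob[lo:hi], given that the block
    parities of Alice and Bob differ (Cascade's BINARY step)."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid
        else:
            lo = mid
    bob[lo] ^= 1           # flip the located erroneous bit
    return lo

alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob   = [1, 0, 1, 0, 0, 0, 1, 0]   # one error at index 3
if parity(alice, 0, len(alice)) != parity(bob, 0, len(bob)):
    fixed = binary_correct(alice, bob, 0, len(bob))
    print("corrected bit", fixed, bob == alice)   # corrected bit 3 True
```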
A review of bioinformatic methods for forensic DNA analyses.
Liu, Yao-Yuan; Harbison, SallyAnn
2018-03-01
Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used. Copyright © 2017 Elsevier B.V. All rights reserved.
A Split-Path Schema-Based RFID Data Storage Model in Supply Chain Management
Fan, Hua; Wu, Quanyuan; Lin, Yisong; Zhang, Jianfeng
2013-01-01
In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance. PMID:23645112
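The abstract mentions a relational schema for path and time information plus query templates; the sketch below illustrates what such a schema and one path-oriented query could look like using SQLite. The table and column names are assumptions made for illustration and are not taken from the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Illustrative split-path style schema: path segments stored once,
# tag observations referencing a segment plus arrival/departure times.
cur.executescript("""
CREATE TABLE path_segment (seg_id INTEGER PRIMARY KEY, location TEXT);
CREATE TABLE tag_event (
    tag_id TEXT,
    seg_id INTEGER REFERENCES path_segment(seg_id),
    t_in   TEXT,
    t_out  TEXT
);
""")
cur.executemany("INSERT INTO path_segment VALUES (?, ?)",
                [(1, "factory"), (2, "warehouse"), (3, "retail")])
cur.executemany("INSERT INTO tag_event VALUES (?, ?, ?, ?)",
                [("EPC-001", 1, "2013-01-02", "2013-01-03"),
                 ("EPC-001", 2, "2013-01-04", "2013-01-06"),
                 ("EPC-001", 3, "2013-01-07", "2013-01-07")])

# Path-oriented query template: reconstruct the movement path of one tag.
cur.execute("""
SELECT s.location, e.t_in, e.t_out
FROM tag_event e JOIN path_segment s ON e.seg_id = s.seg_id
WHERE e.tag_id = ? ORDER BY e.t_in
""", ("EPC-001",))
print(cur.fetchall())
conn.close()
```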
NASA Astrophysics Data System (ADS)
Yavorovich, L. V.; Bespal`ko, A. A.; Fedotov, P. I.
2018-01-01
Parameters of electromagnetic responses (EMRe) generated during uniaxial compression of rock samples under excitation by deterministic acoustic pulses are presented and discussed. Such physical modeling in the laboratory makes it possible to reveal the main regularities of electromagnetic signal (EMS) generation in a rock massif. The influence of the samples' mechanical properties on the parameters of the EMRe excited by an acoustic signal in the process of uniaxial compression is considered. It has been established that sulfides and quartz in the rocks of the Tashtagol iron ore deposit (Western Siberia, Russia) contribute to the conversion of mechanical energy into the energy of the electromagnetic field, which is expressed in an increase in the EMS amplitude. A decrease in the EMS amplitude when the stress-strain state of the sample changes during uniaxial compression is observed when the amount of conductive magnetite contained in the rock increases. The obtained results are important for the physical substantiation of methods for testing and monitoring changes in the stress-strain state of a rock massif from the parameters of electromagnetic signals and the characteristics of electromagnetic emission.
NASA Astrophysics Data System (ADS)
Kaipov, I. V.
2017-03-01
Anthropogenic and natural factors have increased the power of wildfires in massive Siberian woodlands. As a consequence, the expansion of burned areas and the increase in the duration of the forest fire season have led to the release of significant amounts of gases and aerosols. Therefore, it is important to understand the impact of wildland fires on air quality, atmospheric composition and climate, and to accurately describe the distribution of combustion products in time and space. The most effective research tool is a regional hydrodynamic model of the atmosphere coupled with a model of pollutant transport and chemical interaction. The combined use of remote sensing techniques for monitoring massive forest fires and mathematical modeling of long-range transport of pollutants in the atmosphere, taking into account meteorological parameters and the chemical interaction of impurities, makes it possible to evaluate the spatial and temporal scale of the phenomenon and to calculate the quantitative characteristics of pollutants depending on the height and distance of migration.
A new archival infrastructure for highly-structured astronomical data
NASA Astrophysics Data System (ADS)
Dovgan, Erik; Knapic, Cristina; Sponza, Massimo; Smareglia, Riccardo
2018-03-01
With the advent of the 2020 Radio Astronomy Telescopes era, the amount and format of radio-astronomical data are becoming a massive and performance-critical challenge. Such an evolution of data models and data formats requires new data archiving techniques that allow massive and fast storage of data that can at the same time be processed efficiently. Useful expertise in efficient archiving has been obtained through the data archiving of the Medicina and Noto Radio Telescopes. The presented archival infrastructure, named the Radio Archive, stores and handles various formats, such as FITS, MBFITS, and VLBI's XML, including description and ancillary files. The modeling and architecture of the archive fulfill all the requirements of both data persistence and easy data discovery and exploitation. The presented archive already complies with the Virtual Observatory directives, therefore future service implementations will also be VO compliant. This article presents the Radio Archive services and tools, from data acquisition to end-user data utilization.
Structure and function of isozymes: Evolutionary aspects and role of oxygen in eucaryotic organisms
NASA Technical Reports Server (NTRS)
Satyanarayana, T.
1985-01-01
Oxygen is not only one of the most abundant elements on the Earth, but it is also one of the most important elements for life. In terms of composition, the feature of the atmosphere that most distinguishes Earth from other planets is the presence of abundant amounts of oxygen. The first forms of life may have been similar to present day anaerobic bacteria such as Clostridium. The relationship between prokaryotes and eukaryotes, if any, has been a topic of much speculation. With only a few exceptions, eukaryotes are oxygen-utilizing organisms. This research suggests that eukaryotes, or eukaryotic biochemical processes requiring oxygen, could have arisen quite early in evolution and utilized the small quantities of photocatalytically produced oxygen which are thought to have been present on the Earth prior to the evolution of massive amounts of photosynthetically-produced oxygen.
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. To find an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
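For readers unfamiliar with the Cassandra data model the study evaluates, a minimal write/read round trip with the DataStax Python driver is sketched below. It assumes a Cassandra node listening on localhost, and the keyspace, table and column names are illustrative choices rather than anything taken from the paper.

```python
from cassandra.cluster import Cluster   # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])        # assumes a local Cassandra node
session = cluster.connect()

session.execute("""
CREATE KEYSPACE IF NOT EXISTS genomics
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("genomics")
session.execute("""
CREATE TABLE IF NOT EXISTS reads (
    read_id text PRIMARY KEY,
    sample  text,
    seq     text
)
""")

# Write-heavy workloads are where wide-column stores are expected to shine.
session.execute("INSERT INTO reads (read_id, sample, seq) VALUES (%s, %s, %s)",
                ("r0001", "NA12878", "ACGTACGTTTGACC"))

for row in session.execute("SELECT read_id, sample, seq FROM reads LIMIT 5"):
    print(row.read_id, row.sample, row.seq)

cluster.shutdown()
```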
Too Much Information--Too Much Apprehension
ERIC Educational Resources Information Center
Hijazi, Sam
2004-01-01
The information age, along with the exponential increase in information technology, has brought an unexpected amount of information. The endeavor to sort and extract meaning from the massive amount of data has become a challenging task for many educators and managers. This research is an attempt to collect the most common suggestions to reduce the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akkus, Harun, E-mail: physicisthakkus@gmail.com
2013-12-15
We introduce a method for calculating the deflection angle of light passing close to a massive object. It is based on Fermat's principle. The varying refractive index of the medium around the massive object is obtained from the Buckingham pi-theorem. Highlights: • A different and simpler method for the calculation of the deflection angle of light. • Not a curved space, only 2-D Euclidean space. • Getting a varying refractive index from the Buckingham pi-theorem. • Obtaining some results of general relativity from Fermat's principle.
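For reference, the standard weak-field result that such an optical-medium treatment is meant to reproduce can be written as follows; the effective refractive index shown is the commonly used isotropic weak-field approximation, not necessarily the exact expression derived in the paper.

```latex
% Effective refractive index around a mass M (weak field), and the resulting
% deflection angle for a light ray with impact parameter b:
n(r) \simeq 1 + \frac{2GM}{r c^{2}}, \qquad
\alpha \simeq \frac{4GM}{c^{2} b}
```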
The role of black holes in galaxy formation and evolution.
Cattaneo, A; Faber, S M; Binney, J; Dekel, A; Kormendy, J; Mushotzky, R; Babul, A; Best, P N; Brüggen, M; Fabian, A C; Frenk, C S; Khalatyan, A; Netzer, H; Mahdavi, A; Silk, J; Steinmetz, M; Wisotzki, L
2009-07-09
Virtually all massive galaxies, including our own, host central black holes ranging in mass from millions to billions of solar masses. The growth of these black holes releases vast amounts of energy that powers quasars and other weaker active galactic nuclei. A tiny fraction of this energy, if absorbed by the host galaxy, could halt star formation by heating and ejecting ambient gas. A central question in galaxy evolution is the degree to which this process has caused the decline of star formation in large elliptical galaxies, which typically have little cold gas and few young stars, unlike spiral galaxies.
Langmuir wave phase-mixing in warm electron-positron-dusty plasmas
NASA Astrophysics Data System (ADS)
Pramanik, Sourav; Maity, Chandan
2018-04-01
An analytical study of the nonlinear evolution of Langmuir waves in warm electron-positron-dusty plasmas is presented. The massive dust grains, either positively or negatively charged, are assumed to form a fixed charge-neutralizing background. A perturbative analysis of the fluid-Maxwell equations confirms that the excited Langmuir waves phase-mix and eventually break, even at arbitrarily low amplitudes. It is shown that the nature of the dust charge as well as the amount of dust grains can significantly influence the Langmuir wave phase-mixing process. The phase-mixing time is also found to increase with the temperature.
Stellar Wind Retention and Expulsion in Massive Star Clusters
NASA Astrophysics Data System (ADS)
Naiman, J. P.; Ramirez-Ruiz, E.; Lin, D. N. C.
2018-05-01
Mass and energy injection throughout the lifetime of a star cluster contributes to the gas reservoir available for subsequent episodes of star formation and the feedback energy budget responsible for ejecting material from the cluster. In addition, mass processed in stellar interiors and ejected as winds has the potential to augment the abundance ratios of currently forming stars, or stars which form at a later time from a retained gas reservoir. Here we present hydrodynamical simulations that explore a wide range of cluster masses, compactnesses, metallicities and stellar population age combinations in order to determine the range of parameter space conducive to stellar wind retention or wind powered gas expulsion in star clusters. We discuss the effects of the stellar wind prescription on retention and expulsion effectiveness, using MESA stellar evolutionary models as a test bed for exploring how the amounts of wind retention/expulsion depend upon the amount of mixing between the winds from stars of different masses and ages. We conclude by summarizing some implications for gas retention and expulsion in a variety of compact (σv ≳ 20 km s^-1) star clusters including young massive star clusters (10^5 ≲ M/M⊙ ≲ 10^7, age ≲ 500 Myrs), intermediate age clusters (10^5 ≲ M/M⊙ ≲ 10^7, age ≈ 1 - 4 Gyrs), and globular clusters (10^5 ≲ M/M⊙ ≲ 10^7, age ≳ 10 Gyrs).
Ontology-Based Information Extraction for Business Intelligence
NASA Astrophysics Data System (ADS)
Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina
Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.
NASA Astrophysics Data System (ADS)
Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.
2015-07-01
Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amount of data and the single-threaded processing method, the query speed cannot meet application requirements. On the other hand, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we proposed a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implemented it on the server. Concretely, we divided the spatiotemporal domain into several spatiotemporal cubes, computed the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrated the query results. Testing and analysis on real datasets show that this method improves the query speed significantly.
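A minimal sketch of the regional-division idea, aggregating points per spatiotemporal cube in parallel threads and then merging the partial results, is shown below. The cube size, record layout and use of a thread pool are illustrative assumptions, not the authors' server implementation.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Toy moving-object records: (x, y, t) positions.
points = [(1.2, 3.4, 10), (1.9, 3.1, 11), (5.5, 0.4, 10),
          (5.1, 0.9, 12), (1.4, 3.8, 12)]

CELL = 2.0      # spatial cell edge length
T_BIN = 2       # temporal bin width

def cube_of(point):
    """Map a point to the spatiotemporal cube that contains it."""
    x, y, t = point
    return (int(x // CELL), int(y // CELL), int(t // T_BIN))

# Regional division: bucket points by cube before parallel aggregation.
cubes = defaultdict(list)
for p in points:
    cubes[cube_of(p)].append(p)

def aggregate(item):
    """Per-cube aggregation; here simply a count of observations."""
    cube, pts = item
    return cube, len(pts)

with ThreadPoolExecutor(max_workers=4) as pool:
    partial = dict(pool.map(aggregate, cubes.items()))

print(partial)   # {cube: count, ...} merged from the parallel workers
```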
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting in advance the distribution underlying such data is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as a network of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
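KDESOINN itself is not reproduced here, but the kernel density estimation half of it can be illustrated in a few lines: given prototype points (in KDESOINN, the learned SOINN network nodes), the density is a sum of kernels centred on them. The fixed Gaussian bandwidth below is an assumption for illustration; KDESOINN instead adapts its kernels using the learned network structure.

```python
import numpy as np

def gaussian_kde(prototypes, query, bandwidth=0.3):
    """Estimate the density at `query` points from 1-D `prototypes` using a
    fixed-bandwidth Gaussian kernel (plain KDE, not the adaptive variant)."""
    prototypes = np.asarray(prototypes)[None, :]        # shape (1, n)
    query = np.asarray(query)[:, None]                  # shape (m, 1)
    z = (query - prototypes) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth             # shape (m,)

protos = [0.0, 0.1, 0.2, 1.0, 1.1]        # e.g. nodes learned by an SOINN
grid = np.linspace(-1, 2, 7)
print(np.round(gaussian_kde(protos, grid), 3))
```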
The singular behavior of one-loop massive QCD amplitudes with one external soft gluon
NASA Astrophysics Data System (ADS)
Bierenbaum, Isabella; Czakon, Michał; Mitov, Alexander
2012-03-01
We calculate the one-loop correction to the soft-gluon current with massive fermions. This current is process independent and controls the singular behavior of one-loop massive QCD amplitudes in the limit when one external gluon becomes soft. The result derived in this work is the last missing process-independent ingredient needed for numerical evaluation of observables with massive fermions at hadron colliders at the next-to-next-to-leading order.
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy the requirements of low- and mid-level high-speed image processing. The RISC core controls the operation of the whole system and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor are also developed.
Star formation induced by cloud-cloud collisions and galactic giant molecular cloud evolution
NASA Astrophysics Data System (ADS)
Kobayashi, Masato I. N.; Kobayashi, Hiroshi; Inutsuka, Shu-ichiro; Fukui, Yasuo
2018-05-01
Recent millimeter/submillimeter observations towards nearby galaxies have started to map whole disks and to identify giant molecular clouds (GMCs) even in the regions between galactic spiral structures. Observed variations of GMC mass functions in different galactic environments indicate that massive GMCs preferentially reside along galactic spiral structures whereas inter-arm regions have many small GMCs. Based on the phase transition dynamics from magnetized warm neutral medium to molecular clouds, Kobayashi et al. (2017, ApJ, 836, 175) propose a semi-analytical evolutionary description for GMC mass functions including a cloud-cloud collision (CCC) process. Their results show that CCC is less dominant in shaping the mass function of GMCs than the accretion of dense H I gas driven by the propagation of supersonic shock waves. However, their formulation does not take into account the possible enhancement of star formation by CCC. Millimeter/submillimeter observations within the Milky Way indicate the importance of CCC in the formation of star clusters and massive stars. In this article, we reformulate the time-evolution equation, largely modified from Kobayashi et al. (2017, ApJ, 836, 175), so that we additionally compute the star formation subsequently taking place in CCC clouds. Our results suggest that, although CCC events between smaller clouds are more frequent than those between massive GMCs, CCC-driven star formation is mostly driven by massive GMCs ≳ 10^5.5 M⊙ (where M⊙ is the solar mass). The resultant cumulative CCC-driven star formation may amount to a few tens of percent of the total star formation in the Milky Way and nearby galaxies.
The Livermore Brain: Massive Deep Learning Networks Enabled by High Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Barry Y.
The proliferation of inexpensive sensor technologies like the ubiquitous digital image sensors has resulted in the collection and sharing of vast amounts of unsorted and unexploited raw data. Companies and governments who are able to collect and make sense of large datasets to help them make better decisions more rapidly will have a competitive advantage in the information era. Machine Learning technologies play a critical role for automating the data understanding process; however, to be maximally effective, useful intermediate representations of the data are required. These representations or "features" are transformations of the raw data into a form where patterns are more easily recognized. Recent breakthroughs in Deep Learning have made it possible to learn these features from large amounts of labeled data. The focus of this project is to develop and extend Deep Learning algorithms for learning features from vast amounts of unlabeled data and to develop the HPC neural network training platform to support the training of massive network models. This LDRD project succeeded in developing new unsupervised feature learning algorithms for images and video and created a scalable neural network training toolkit for HPC. Additionally, this LDRD helped create the world's largest freely-available image and video dataset supporting open multimedia research and used this dataset for training our deep neural networks. This research helped LLNL capture several work-for-others (WFO) projects, attract new talent, and establish collaborations with leading academic and commercial partners. Finally, this project demonstrated the successful training of the largest unsupervised image neural network using HPC resources and helped establish LLNL leadership at the intersection of Machine Learning and HPC research.
Enabling Graph Appliance for Genome Assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Rina; Graves, Jeffrey A; Lee, Sangkeun
2015-01-01
In recent years, there has been a huge growth in the amount of genomic data available as reads generated from various genome sequencers. The number of reads generated can be huge, ranging from hundreds to billions of nucleotides, each varying in size. Assembling such large amounts of data is one of the challenging computational problems for both biomedical and data scientists. Most of the genome assemblers developed have used de Bruijn graph techniques. A de Bruijn graph represents a collection of read sequences by billions of vertices and edges, which require large amounts of memory and computational power to store and process. This is the major drawback to de Bruijn graph assembly. Massively parallel, multi-threaded, shared memory systems can be leveraged to overcome some of these issues. The objective of our research is to investigate the feasibility and scalability issues of de Bruijn graph assembly on Cray's Urika-GD system; Urika-GD is a high performance graph appliance with a large shared memory and massively multithreaded custom processor designed for executing SPARQL queries over large-scale RDF data sets. However, to the best of our knowledge, there is no research on representing a de Bruijn graph as an RDF graph or finding Eulerian paths in RDF graphs using SPARQL for potential genome discovery. In this paper, we address the issues involved in representing de Bruijn graphs as RDF graphs and propose an iterative querying approach for finding Eulerian paths in large RDF graphs. We evaluate the performance of our implementation on real-world Ebola genome datasets and illustrate how genome assembly can be accomplished with Urika-GD using iterative SPARQL queries.
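To make the graph construction the project targets concrete, the sketch below builds a small de Bruijn graph from reads and recovers a sequence by finding an Eulerian path with Hierholzer's algorithm in plain Python. The RDF/SPARQL encoding and the Urika-GD execution are not shown; this only illustrates the underlying graph idea on a toy input.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Edges connect the (k-1)-mer prefix to the (k-1)-mer suffix of each k-mer."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def eulerian_path(graph, start):
    """Hierholzer's algorithm; assumes an Eulerian path from `start` exists."""
    graph = {node: list(nbrs) for node, nbrs in graph.items()}
    stack, path = [start], []
    while stack:
        node = stack[-1]
        if graph.get(node):
            stack.append(graph[node].pop())
        else:
            path.append(stack.pop())
    return path[::-1]

reads = ["ACGTC", "GTCGA", "CGATT"]
g = de_bruijn(reads, k=4)
nodes = eulerian_path(g, start="ACG")
assembled = nodes[0] + "".join(n[-1] for n in nodes[1:])
print(assembled)    # reconstructs ACGTCGATT on this toy input
```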
NASA Technical Reports Server (NTRS)
Hicks, Rebecca
2010-01-01
A fiber Bragg grating is a portion of the core of a fiber optic strand that has been treated to affect the way light travels through the strand. Light within a certain narrow range of wavelengths will be reflected along the fiber by the grating, while light outside that range will pass through the grating mostly undisturbed. Since the range of wavelengths that can penetrate the grating depends on the grating itself as well as temperature and mechanical strain, fiber Bragg gratings can be used as temperature and strain sensors. This capability, along with the lightweight nature of the fiber optic strands in which the gratings reside, makes fiber optic sensors an ideal candidate for flight testing and monitoring in which temperature and wing strain are factors. A team of NASA Dryden engineers has been working to advance fiber optic sensor technology since the mid-1990s. The team has been able to improve the dependability and sample rate of fiber optic sensor systems, making them more suitable for real-time wing shape and strain monitoring and capable of rivaling traditional strain gauge sensors in accuracy. The sensor system was recently tested on the Ikhana unmanned aircraft and will be used on the Global Observer unmanned aircraft. Since a fiber Bragg grating sensor can be placed every half-inch on each optic fiber, and since fibers of approximately 40 feet in length each are to be used on the Global Observer, each of these fibers will have approximately 1,000 sensors. A total of 32 fibers are to be placed on the Global Observer aircraft, to be sampled at a rate of about 50 Hz, meaning about 1.6 million data points will be taken every second. The fiber optic sensor system is capable of producing massive amounts of potentially useful data; however, methods to capture, record, and analyze all of this data in a way that makes the information useful to flight test engineers are currently limited. The purpose of this project is to research the availability of software capable of processing massive amounts of data in both real-time and post-flight settings, and to produce software segments that can be integrated to assist in the task as well. The selected software must be able to: (1) process massive amounts of data (up to 4 GB) at a speed useful in real-time settings (small fractions of a second); (2) process data in post-flight settings to allow test reproduction or further data analysis; (3) produce, or make it easier to produce, three-dimensional plots/graphs to make the data accessible to flight test engineers; and (4) be customized to allow users to use their own processing formulas or functions and display the data in formats they prefer. Several software programs were evaluated to determine their utility in completing the research objectives. These programs include: OriginLab, Graphis, 3D Grapher, Visualization Sciences Group (VSG) Avizo Wind, Interactive Analysis and Display System (IADS), SigmaPlot, and MATLAB.
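The data-rate figures quoted above follow from simple arithmetic, reproduced below as a sanity check; the bytes-per-sample value is an assumption used only to indicate the order of magnitude of the raw stream.

```python
# Sensor count and sample-rate arithmetic from the description above.
fiber_length_ft = 40
sensors_per_inch = 2                 # one fiber Bragg grating every half-inch
sensors_per_fiber = fiber_length_ft * 12 * sensors_per_inch
n_fibers = 32
sample_rate_hz = 50

samples_per_second = sensors_per_fiber * n_fibers * sample_rate_hz
print(sensors_per_fiber, "sensors per fiber")       # 960, i.e. roughly 1,000
print(samples_per_second, "samples per second")     # 1,536,000, i.e. ~1.6 million

# Assumed 4 bytes per sample, just to gauge the raw data volume per hour.
bytes_per_sample = 4
print(samples_per_second * bytes_per_sample * 3600 / 1e9, "GB per hour (approx)")
```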
Subbarao, G V; Sahrawat, K L; Nakahara, K; Rao, I M; Ishitani, M; Hash, C T; Kishii, M; Bonnett, D G; Berry, W L; Lata, J C
2013-07-01
Agriculture is the single largest geo-engineering initiative that humans have initiated on planet Earth, largely through the introduction of unprecedented amounts of reactive nitrogen (N) into ecosystems. A major portion of this reactive N applied as fertilizer leaks into the environment in massive amounts, with cascading negative effects on ecosystem health and function. Natural ecosystems utilize many of the multiple pathways in the N cycle to regulate N flow. In contrast, the massive amounts of N currently applied to agricultural systems cycle primarily through the nitrification pathway, a single inefficient route that channels much of this reactive N into the environment. This is largely due to the rapid nitrifying soil environment of present-day agricultural systems. In this Viewpoint paper, the importance of regulating nitrification as a strategy to minimize N leakage and to improve N-use efficiency (NUE) in agricultural systems is highlighted. The ability to suppress soil nitrification by the release of nitrification inhibitors from plant roots is termed 'biological nitrification inhibition' (BNI), an active plant-mediated natural function that can limit the amount of N cycling via the nitrification pathway. The development of a bioassay using luminescent Nitrosomonas to quantify nitrification inhibitory activity from roots has facilitated the characterization of BNI function. Release of BNIs from roots is a tightly regulated physiological process, with extensive genetic variability found in selected crops and pasture grasses. Here, the current status of understanding of the BNI function is reviewed using Brachiaria forage grasses, wheat and sorghum to illustrate how BNI function can be utilized for achieving low-nitrifying agricultural systems. A fundamental shift towards ammonium (NH4(+))-dominated agricultural systems could be achieved by using crops and pastures with high BNI capacities. When viewed from an agricultural and environmental perspective, the BNI function in plants could potentially have a large influence on biogeochemical cycling and closure of the N loop in crop-livestock systems.
NASA Astrophysics Data System (ADS)
Stampoulidis, L.; Kehayas, E.; Karppinen, M.; Tanskanen, A.; Heikkinen, V.; Westbergh, P.; Gustavsson, J.; Larsson, A.; Grüner-Nielsen, L.; Sotom, M.; Venet, N.; Ko, M.; Micusik, D.; Kissinger, D.; Ulusoy, A. C.; King, R.; Safaisini, R.
2017-11-01
Modern broadband communication networks rely on satellites to complement the terrestrial telecommunication infrastructure. Satellites accommodate global reach and enable world-wide direct broadcasting by facilitating wide access to the backbone network from remote sites or areas where the installation of ground segment infrastructure is not economically viable. At the same time, the new broadband applications increase the bandwidth demands in every part of the network, and satellites are no exception. Modern telecom satellites incorporate On-Board Processors (OBPs) with analogue-to-digital (ADC) and digital-to-analogue (DAC) converters at their inputs/outputs, making use of digital processing to handle hundreds of signals; as the amount of information exchanged increases, so do the physical size, mass and power consumption of the interconnects required to transfer massive amounts of data through bulk electric wires.
Bridging the gap: from massive stars to supernovae.
Maund, Justyn R; Crowther, Paul A; Janka, Hans-Thomas; Langer, Norbert
2017-10-28
Almost since the beginning, massive stars and their resultant supernovae have played a crucial role in the Universe. These objects produce tremendous amounts of energy and new, heavy elements that enrich galaxies, encourage new stars to form and sculpt the shapes of galaxies that we see today. The end of millions of years of massive star evolution and the beginning of hundreds or thousands of years of supernova evolution are separated by a matter of a few seconds, in which some of the most extreme physics found in the Universe causes the explosive and terminal disruption of the star. Key questions remain unanswered in both the studies of how massive stars evolve and the behaviour of supernovae, and it appears the solutions may not lie on just one side of the explosion or the other or in just the domain of the stellar evolution or the supernova astrophysics communities. The need to view massive star evolution and supernovae as continuous phases in a single narrative motivated the Theo Murphy international scientific meeting 'Bridging the gap: from massive stars to supernovae' at Chicheley Hall, UK, in June 2016, with the specific purpose of simultaneously addressing the scientific connections between theoretical and observational studies of massive stars and their supernovae, through engaging astronomers from both communities. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'. © 2017 The Author(s).
1985-03-01
... the workload on the user and help organize the massive amounts of data involved, job aids have been developed for CFEA users. A trial CFEA was conducted ... coupled with the complex and massive nature of a CFEA of a battalion-sized organization could make the prospect of conducting a CFEA seem a bit over ...
An Approach for Removing Redundant Data from RFID Data Streams
Mahdin, Hairulnizam; Abawajy, Jemal
2011-01-01
Radio frequency identification (RFID) systems are emerging as the primary object identification mechanism, especially in supply chain management. However, RFID naturally generates a large amount of duplicate readings. Removing these duplicates from the RFID data stream is paramount, as they do not contribute new information to the system and waste system resources. Existing approaches to this problem cannot fulfill the real-time demands of processing the massive RFID data stream. We propose a data filtering approach that efficiently detects and removes duplicate readings from RFID data streams. Experimental results show that the proposed approach offers a significant improvement as compared to the existing approaches. PMID:22163730
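A minimal version of the kind of duplicate filtering discussed above can be written as a per-tag time-window check: a reading is kept only if the same tag has not been retained within the window. The window length and record format are illustrative assumptions; the paper's approach is a more efficient approximate filter designed for high-rate streams.

```python
def dedup_stream(readings, window=5.0):
    """Drop readings that repeat a tag within `window` seconds of its last
    retained reading.  `readings` is an iterable of (timestamp, tag_id)
    pairs in time order."""
    last_seen = {}
    for ts, tag in readings:
        if tag in last_seen and ts - last_seen[tag] < window:
            continue                 # duplicate within the window: discard
        last_seen[tag] = ts
        yield ts, tag

stream = [(0.0, "TAG-A"), (0.8, "TAG-A"), (1.0, "TAG-B"),
          (6.2, "TAG-A"), (6.5, "TAG-B")]
print(list(dedup_stream(stream)))
# [(0.0, 'TAG-A'), (1.0, 'TAG-B'), (6.2, 'TAG-A'), (6.5, 'TAG-B')]
```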
Ho, K M; Leonard, A D
2011-01-01
Mortality of patients with critical bleeding requiring massive transfusion is high. Although hypothermia, acidosis and coagulopathy have been well described as important determinants of mortality in patients with critical bleeding requiring massive transfusion, the risk factors and outcomes associated with hypocalcaemia in these patients remain uncertain. This cohort study assessed the relationship between the lowest ionised calcium concentration during the 24-hour period of critical bleeding and the hospital mortality of 352 consecutive patients, while adjusting for diagnosis, acidosis, coagulation results, transfusion requirements and use of recombinant factor VIIa. Hypocalcaemia was common (mean concentration 0.77 mmol/l, SD 0.19) and had a linear, concentration-dependent relationship with mortality (odds ratio [OR] 1.25 per 0.1 mmol/l decrement, 95% confidence interval [CI]: 1.04 to 1.52; P = 0.02). Hypocalcaemia accounted for 12.5% of the variability and was more important than the lowest fibrinogen concentrations (10.8%), acidosis (7.9%) and lowest platelet counts (7.7%) in predicting hospital mortality. The amount of fresh frozen plasma transfused (OR 1.09 per unit, 95% CI: 1.02 to 1.17; P = 0.02) and acidosis (OR 1.45 per 0.1 decrement, 95% CI: 1.19 to 1.72; P = 0.01) were associated with the occurrence of severe hypocalcaemia (< 0.8 mmol/l). In conclusion, ionised calcium concentrations had an inverse concentration-dependent relationship with the mortality of patients with critical bleeding requiring massive transfusion. Both acidosis and the amount of fresh frozen plasma transfused were the main risk factors for severe hypocalcaemia. Further research is needed to determine whether preventing ionised hypocalcaemia can reduce mortality of patients with critical bleeding requiring massive transfusion.
[Tumor Data Interacted System Design Based on Grid Platform].
Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke
2016-06-01
In order to satisfy the demands of processing massive and heterogeneous clinical tumor data and of multi-center collaborative diagnosis and treatment of tumor diseases, a Tumor Data Interacted System (TDIS) was established based on a grid platform, realizing a virtualized platform for tumor diagnosis services that shares tumor information in real time and manages it in a standardized way. The system adopts Globus Toolkit 4.0 tools to build an open grid service framework and encapsulates data resources based on the Web Services Resource Framework (WSRF). The system uses middleware technology to provide a unified access interface for heterogeneous data interaction, which can optimize the interactive process with virtualized services to query and call tumor information resources flexibly. For massive amounts of heterogeneous tumor data, a federated storage and multiple-authorization mode is selected as the security service mechanism, with real-time monitoring and load balancing. The system can cooperatively manage multi-center heterogeneous tumor data to support querying, sharing and analysis of tumor patient data, and can compare and match resources in typical clinical databases or clinical information databases at other service nodes, thus assisting doctors in consulting similar cases and composing multidisciplinary treatment plans for tumors. Consequently, the system can improve the efficiency of tumor diagnosis and treatment, and promote the development of collaborative tumor diagnosis models.
EvoGraph: On-The-Fly Efficient Mining of Evolving Graphs on GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen
With the prevalence of the World Wide Web and social networks, there has been a growing interest in high performance analytics for constantly-evolving dynamic graphs. Modern GPUs provide a massive amount of parallelism for efficient graph processing, but challenges remain due to their lack of support for the near real-time streaming nature of dynamic graphs. Specifically, due to the current high volume and velocity of graph data combined with the complexity of user queries, traditional processing methods that first store the updates and then repeatedly run static graph analytics on a sequence of versions or snapshots are deemed undesirable and computationally infeasible on GPU. We present EvoGraph, a highly efficient and scalable GPU-based dynamic graph analytics framework.
Selective sequential precipitation of dissolved metals in mine drainage from coal mine
NASA Astrophysics Data System (ADS)
Yim, Giljae; Bok, Songmin; Ji, Sangwoo; Oh, Chamteut; Cheong, Youngwook; Han, Youngsoo; Ahn, Joosung
2017-04-01
In abandoned mines in Korea, a large amount of mine drainage continues to flow out and spread pollution. In the purification of this mine drainage a massive amount of sludge is generated as waste. Since this metal sludge contains high levels of Fe, Al and Mn oxides, developing a treatment method that recovers homogeneous individual metals with high purity may be beneficial for recycling waste metals as useful resources and for reducing the amount of sludge produced. In this regard, we established a selective precipitation process for dissolved metals to treat Waryong Industry's mine drainage. The process that selectively precipitates metals dissolved in mine drainage is a continuous Fe-buffer-Al process, and each stage consists of a neutralization tank, a coagulation tank, and a settling tank. Based on this process, this study verified the operational applicability of selective Fe and Al precipitation. Our previous study revealed that high-purity Fe and Al precipitates could be recovered at a flow rate of 1.5 ton/day, while a lower purity was achieved when the rate was increased to about 3 ton/day due to the difficulty in reagent dosage control. The current study was conducted to increase the capacity of the system so as to recover Fe and Al as high-purity precipitates at a flow rate of 10 ton/day, with continuous operation ensured by introducing an automatic reagent injection system. The previous study had difficulty in controlling the pH and operating the system continuously because of the manually controlled reagent injection; to overcome this and ensure the optimal pH in a stable way, a continuous reagent injection system was installed. Operation of the 10 ton/day system confirmed that the scaled-up process could maintain stable recovery rates and purities of precipitates on site.
A New Way to Manage TCGA Data - TCGA
Rachel Karchin, of Johns Hopkins University's Department of Biomedical Engineering, is developing a tool that will help researchers sort through the massive amounts of genomic data gathered from TCGA's ovarian cancer tumor samples.
Holographic thermalization and generalized Vaidya-AdS solutions in massive gravity
NASA Astrophysics Data System (ADS)
Hu, Ya-Peng; Zeng, Xiao-Xiong; Zhang, Hai-Qing
2017-02-01
We investigate the effect of the massive graviton on the holographic thermalization process. Before doing this, we first derive the generalized Vaidya-AdS solutions in the de Rham-Gabadadze-Tolley (dRGT) massive gravity by directly solving the gravitational equations. Then, we study the thermodynamics of these Vaidya-AdS solutions by using the Misner-Sharp energy and the unified first law, which also shows that the massive gravity is in a thermodynamic equilibrium state. Moreover, we adopt the equal-time two-point correlation function to explore the thermalization process in the dual field theory, and to see how the graviton mass parameter affects this process from the viewpoint of the AdS/CFT correspondence. Our results show that the graviton mass parameter will increase the holographic thermalization process.
Zhu, Yuanjia; Kolawole, Tiwalola; Jimenez, Xavier F
2016-09-01
Bupropion is an atypical antidepressant that is structurally similar to amphetamines. Its primary toxic effects include seizure, sinus tachycardia, hypertension, and agitation; however, at higher amounts of ingestion, paradoxical cardiac effects are seen. We report the case of a 21-year-old woman who ingested 13.5 g of bupropion, a dose higher than any other previously reported. The patient presented with seizure, sinus tachycardia with prolonged QTc and QRS intervals, dilated pupils, and agitation. Four days after overdose, the patient's sinus tachycardia and prolonged QTc and QRS intervals resolved with symptomatic management, but she soon developed sinus bradycardia, hypotension, and mild transaminitis. With continued conservative management and close monitoring, her sinus bradycardia resolved 8 days after the overdose. The transaminitis resolved 12 days after the overdose. Our findings are consistent with previously reported toxic effects associated with common overdose amounts of bupropion. In addition, we have observed transient cardiotoxicity manifesting as sinus bradycardia associated with massive bupropion overdose. These findings are less frequently reported and must be considered when managing patients with massive bupropion overdose. We review the psychopharmacologic implications of this and comment on previous literature.
The Evolution and Stability of Massive Stars
NASA Astrophysics Data System (ADS)
Shiode, Joshua Hajime
Massive stars are the ultimate source for nearly all the elements necessary for life. The first stars forge these elements from the sparse set of ingredients supplied by the Big Bang, and distribute enriched ashes throughout their galactic homes via their winds and explosive deaths. Subsequent generations follow suit, assembling from the enriched ashes of their predecessors. Over the last several decades, the astrophysics community has developed a sophisticated theoretical picture of the evolution of these stars, but it remains an incomplete accounting of the rich set of observations. Using state of the art models of massive stars, I have investigated the internal processes taking place throughout the life-cycles of stars spanning those from the first generation ("Population III") to the present-day ("Population I"). I will argue that early-generation stars were not highly unstable to perturbations, contrary to a host of past investigations, if a correct accounting is made for the viscous effect of convection. For later generations, those with near solar metallicity, I find that this very same convection may excite gravity-mode oscillations that produce observable brightness variations at the stellar surface when the stars are near the main sequence. If confirmed with modern high-precision monitoring experiments, like Kepler and CoRoT, the properties of observed gravity modes in massive stars could provide a direct probe of the poorly constrained physics of gravity mode excitation by convection. Finally, jumping forward in stellar evolutionary time, I propose and explore an entirely new mechanism to explain the giant eruptions observed and inferred to occur during the final phases of massive stellar evolution. This mechanism taps into the vast nuclear fusion luminosity, and accompanying convective luminosity, in the stellar core to excite waves capable of carrying a super-Eddington luminosity out to the stellar envelope. This energy transfer from the core to the envelope has the potential to unbind a significant amount of mass in close proximity to a star's eventual explosion as a core collapse supernova.
NASA Astrophysics Data System (ADS)
Kalenchuk, K. S.; Hutchinson, D.; Diederichs, M. S.
2013-12-01
Downie Slide, one of the world's largest landslides, is a massive, active, composite, extremely slow rockslide located on the west bank of the Revelstoke Reservoir in British Columbia. It is a 1.5 billion m3 rockslide measuring 2400 m along the river valley, 3300 m from toe to headscarp and up to 245 m thick. Significant contributions to the field of landslide geomechanics have been made by analyses of spatially and temporally discriminated slope deformations, and of how these are controlled by complex geological and geotechnical factors. Downie Slide research demonstrates the importance of delineating massive landslides into morphological regions in order to characterize global slope behaviour and identify localized events, which may or may not influence the overall slope deformation patterns. Massive slope instabilities do not behave as monolithic masses; rather, different landslide zones can display specific landslide processes occurring at variable rates of deformation. The global deformation of Downie Slide is extremely slow moving; however, localized regions of the slope incur moderate to high rates of movement. The complex deformation processes and composite failure mechanism result from topography, non-uniform shear surfaces, and heterogeneous rockmass and shear zone strength and stiffness characteristics. Further, from the analysis of temporal changes in landslide behaviour it has been clearly recognized that different regions of the slope respond differently to changing hydrogeological boundary conditions. State-of-the-art methodologies have been developed for numerical simulation of large landslides; these provide important tools for investigating dynamic landslide systems which account for complex three-dimensional geometries, heterogeneous shear zone strength parameters, internal shear zones, the interaction of discrete landslide zones and piezometric fluctuations. Numerical models of Downie Slide have been calibrated to reproduce observed slope behaviour, and the calibration process has provided important insight into key factors controlling massive slope mechanics. Through numerical studies it has been shown that the three-dimensional interpretation of basal slip surface geometry and spatial heterogeneity in shear zone stiffness are important factors controlling large-scale slope deformation processes. The role of secondary internal shears and the interaction between landslide morphological zones have also been assessed. Further, numerical simulation of changing groundwater conditions has produced reasonable correlation with field observations. Calibrated models are valuable tools for the forward prediction of landslide dynamics. Calibrated Downie Slide models have been used to investigate how trigger scenarios may accelerate deformations at Downie Slide. The ability to reproduce observed behaviour and forward test hypothesized changes to boundary conditions has valuable application in hazard management of massive landslides. The capacity of decision makers to interpret large amounts of data, respond to rapid changes in a system and understand complex slope dynamics has been enhanced.
Jellyfish: Evidence of Extreme Ram-pressure Stripping in Massive Galaxy Clusters
NASA Astrophysics Data System (ADS)
Ebeling, H.; Stephenson, L. N.; Edge, A. C.
2014-02-01
Ram-pressure stripping by the gaseous intracluster medium has been proposed as the dominant physical mechanism driving the rapid evolution of galaxies in dense environments. Detailed studies of this process have, however, largely been limited to relatively modest examples affecting only the outermost gas layers of galaxies in nearby and/or low-mass galaxy clusters. We here present results from our search for extreme cases of gas-galaxy interactions in much more massive, X-ray selected clusters at z > 0.3. Using Hubble Space Telescope snapshots in the F606W and F814W passbands, we have discovered dramatic evidence of ram-pressure stripping in which copious amounts of gas are first shock compressed and then removed from galaxies falling into the cluster. Vigorous starbursts triggered by this process across the galaxy-gas interface and in the debris trail cause these galaxies to temporarily become some of the brightest cluster members in the F606W passband, capable of outshining even the Brightest Cluster Galaxy. Based on the spatial distribution and orientation of systems viewed nearly edge-on in our survey, we speculate that infall at large impact parameter gives rise to particularly long-lasting stripping events. Our sample of six spectacular examples identified in clusters from the Massive Cluster Survey, all featuring M F606W < -21 mag, doubles the number of such systems presently known at z > 0.2 and facilitates detailed quantitative studies of the most violent galaxy evolution in clusters. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-10491, -10875, -12166, and -12884.
Knowledge Discovery and Data Mining in Iran's Climatic Researches
NASA Astrophysics Data System (ADS)
Karimi, Mostafa
2013-04-01
Advances in measurement technology and data collection mean that databases keep getting larger, and large databases require powerful tools for data analysis. The iterative process of acquiring knowledge from information obtained by data processing is carried out, in various forms, in all scientific fields. However, when data volumes become large, traditional methods cannot cope with many of the resulting problems. In recent years the use of databases has expanded in many scientific fields, and atmospheric databases in particular have grown in climatology; in addition, the increasing amount of data generated by climate models poses a challenge for analyses aimed at extracting hidden patterns and knowledge. The approach to this problem developed in recent years uses the process of knowledge discovery in databases (KDD) and data mining techniques, drawing on concepts from machine learning, artificial intelligence and expert (professional) systems. Data mining is an analytical process for mining massive volumes of data; its ultimate goal is to obtain information and, ultimately, knowledge. Climatology is a science that uses varied and massive volumes of data, and the goal of climate data mining is to obtain information from varied and massive atmospheric and non-atmospheric data. In effect, knowledge discovery performs these activities in a logical, predetermined and almost automatic process. The goal of this research is to study the use of knowledge discovery and data mining techniques in Iranian climate research. To achieve this goal, a content (descriptive) analysis was carried out and the studies were classified by method and by issue. The results show that in Iranian climatic research clustering techniques, mostly k-means and Ward's method, have been applied most often, and that precipitation and atmospheric circulation patterns are the issues most frequently addressed. Although several studies of geographic and climatic issues have used statistical techniques such as clustering and pattern extraction, given the difference in nature between statistics and data mining it cannot yet be said that domestic climate studies employ data mining and knowledge discovery techniques. It is nevertheless necessary to adopt the KDD approach and data mining techniques in climatic studies, particularly for the interpretation of climate modelling results.
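As a minimal illustration of the clustering techniques the survey finds to be most widely used (k-means and Ward's method), the sketch below clusters synthetic monthly precipitation profiles with scikit-learn; the data, the number of clusters and the feature layout are assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic example: monthly precipitation profiles (12 values) for 100 stations.
rng = np.random.default_rng(0)
stations = rng.gamma(shape=2.0, scale=30.0, size=(100, 12))

# Group stations into candidate climatic regions by profile similarity.
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(stations)
print(model.labels_[:10])   # cluster id assigned to the first 10 stations
```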
NASA Astrophysics Data System (ADS)
Sabater, David; Arriarán, Sofía; Romero, María Del Mar; Agnelli, Silvia; Remesar, Xavier; Fernández-López, José Antonio; Alemany, Marià
2014-01-01
White adipose tissue (WAT) produces lactate in significant amounts from circulating glucose, especially in obesity. Under normoxia, 3T3L1 cells secrete large quantities of lactate to the medium, again at the expense of glucose and proportionally to its levels. Most of the glucose was converted to lactate, with only part of it being used to synthesize fat. Cultured adipocytes were largely anaerobic, but this was not a Warburg-like process. It is speculated that the massive production of lactate is a defense process of the adipocyte, used to dispose of excess glucose. This way, the adipocyte exports glucose carbon (and reduces the problem of excess substrate availability) to the liver, but the process may also be a mechanism of short-term control of hyperglycemia. The in vivo data obtained from adipose tissue of male rats agree with this interpretation.
BioMAJ: a flexible framework for databanks synchronization and processing.
Filangi, Olivier; Beausse, Yoann; Assi, Anthony; Legrand, Ludovic; Larré, Jean-Marc; Martin, Véronique; Collin, Olivier; Caron, Christophe; Leroy, Hugues; Allouche, David
2008-08-15
Large- and medium-scale computational molecular biology projects require accurate bioinformatics software and numerous heterogeneous biological databanks, which are distributed around the world. BioMAJ provides a flexible, robust, fully automated environment for managing such massive amounts of data. The Java application enables automation of the data update cycle and supervision of the locally mirrored data repository. We have developed workflows that handle some of the most commonly used bioinformatics databases. A set of scripts is also available for post-synchronization data treatment consisting of indexing or format conversion (for NCBI BLAST, SRS, EMBOSS, GCG, etc.). BioMAJ can easily be extended by personal home-made processing scripts. Source history can be kept via HTML reports containing statements of locally managed databanks. http://biomaj.genouest.org. BioMAJ is free, open-source software. It is freely available under the CECILL version 2 license.
A MapReduce approach to diminish imbalance parameters for big deoxyribonucleic acid dataset.
Kamal, Sarwar; Ripon, Shamim Hasnat; Dey, Nilanjan; Ashour, Amira S; Santhi, V
2016-07-01
In the age of the information superhighway, big data play a significant role in information processing, extraction, retrieval and management. In computational biology, the continuous challenge is to manage the biological data. Data mining techniques are sometimes inadequate for new space and time requirements. Thus, it is critical to process massive amounts of data to retrieve knowledge. The existing software and automated tools to handle big data sets are not sufficient. As a result, an expandable mining technique that harnesses the large storage and processing capability of distributed or parallel processing platforms is essential. In this analysis, a contemporary distributed clustering methodology for imbalanced data reduction using a k-nearest neighbor (K-NN) classification approach is introduced. The pivotal objective of this work is to represent real training data sets with a reduced number of elements or instances. These reduced data sets ensure faster data classification and standard storage management with less sensitivity. However, general data reduction methods cannot manage very big data sets. To minimize these difficulties, a MapReduce-oriented framework is designed using various clusters of automated contents, comprising multiple algorithmic approaches. To test the proposed approach, a real DNA (deoxyribonucleic acid) dataset consisting of 90 million pairs has been used. The proposed model reduces the imbalanced data sets derived from large-scale data sets without loss of accuracy. The obtained results show that the MapReduce-based K-NN classifier provided accurate results for big DNA data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
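The sketch below is a toy, single-machine imitation of a MapReduce-style K-NN reduction step, not the authors' actual pipeline: each "mapper" labels its data split with k-NN against a small prototype set and keeps only the records it flags as informative (here, the misclassified ones), and a "reducer" merges the condensed splits. The prototype set and the retention rule are illustrative assumptions.

```python
from collections import Counter
from functools import reduce
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-NN majority vote for one query point."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return Counter(nearest).most_common(1)[0][0]

def map_phase(chunk, proto_X, proto_y):
    """Mapper: keep only the records of one split that the prototype
    classifier gets wrong, as candidates for the reduced training set."""
    return [(x, y) for x, y in chunk if knn_predict(proto_X, proto_y, x) != y]

def reduce_phase(a, b):
    """Reducer: concatenate condensed subsets coming from all mappers."""
    return a + b

# Toy usage: two splits of a labelled data set processed independently, then merged.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)
proto_X, proto_y = X[:20], y[:20]
splits = [list(zip(X[20:110], y[20:110])), list(zip(X[110:], y[110:]))]
condensed = reduce(reduce_phase, [map_phase(s, proto_X, proto_y) for s in splits])
print(len(condensed), "records retained out of", len(X) - 20)
```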
NASA Astrophysics Data System (ADS)
di, L.; Deng, M.
2010-12-01
Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses still remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent development in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promises the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are accessible transparently and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed through any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University. The course uses the textbook "Introductory Digital Image Processing, A Remote Sensing Perspective" authored by John Jensen. The textbook is widely adopted in geography departments around the world for training students in digital processing of remote sensing images. In the traditional teaching setting for the course, the instructor prepares a set of sample remote sensing images to be used for the course. Commercial desktop remote sensing software, such as ERDAS, is used by students to do the lab exercises. The students have to do the exercises in the lab and can use only the sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises. With GeoBrain, students can now explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises. They can explore the data and do the exercises at any time and any place they want, as long as they can access the Internet through a Web browser. The feedback from students has been very positive about the learning experience in digital image processing with the help of GeoBrain Web processing services. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.
The Synthesis of 44Ti and 56Ni in Massive Stars
NASA Astrophysics Data System (ADS)
Chieffi, Alessandro; Limongi, Marco
2017-02-01
We discuss the influence of rotation on the combined synthesis of 44Ti and 56Ni in massive stars. While 56Ni is significantly produced by both complete and incomplete explosive Si burning, 44Ti is mainly produced by complete explosive Si burning, with a minor contribution (in standard non-rotating models) from incomplete explosive Si burning and O burning (both explosive and hydrostatic). We find that, in most cases, the thickness of the region exposed to incomplete explosive Si burning increases in rotating models (initial velocity v_ini = 300 km s-1) and, since 56Ni is significantly produced in this zone, the fraction of mass coming from the complete explosive Si burning zone necessary to get the required amount of 56Ni is reduced. Therefore the amount of 44Ti ejected for a given fixed amount of 56Ni decreases in rotating models. However, some rotating models at [Fe/H] = -1 develop a very extended O convective shell in which a considerable amount of 44Ti is formed, preserved, and ejected into the interstellar medium. Hence a better modeling of the thermal instabilities (convection) in the advanced burning phases, together with a critical analysis of the cross sections of the nuclear reactions operating in O burning, is relevant for the understanding of the synthesis of 44Ti.
Martínez Moreno, José Manuel; Reyes-Ortiz, Alexander; Lage Sánchez, José María; Sánchez-Gallegos, Pilar; Garcia-Caballero, Manuel
2017-12-01
The aim of this study was to investigate the process of intestinal adaptation in the three limbs of the small intestine after malabsorptive bariatric surgery: the biliopancreatic limb, the alimentary limb, and the common channel. These limbs are exposed to different stimuli, namely, gastrointestinal transit and nutrients in the alimentary limb, biliopancreatic secretions in the biliopancreatic limb, and a mix of both in the common channel. We also wished to investigate the effect of glutamine supplementation on the adaptation process. Three types of surgery were performed using a porcine model: biliopancreatic bypass (BPBP), massive (75%) short bowel resection as the positive control, and a sham operation (transection) as the negative control. We measured the height and width of the intestinal villi, histidine decarboxylase (HDC) activity, and the amount of HDC messenger RNA (mRNA) in animals fed either a standard diet or a diet supplemented with glutamine. An increase in HDC activity and mRNA expression was observed in the BPBP group. This increase coincided with an increase in the height and width of the intestinal villi. The increase in villus height was observed immediately after surgery and peaked at 2 weeks. Levels remained higher than those observed in sham-operated pigs for a further 4 weeks. The intestinal adaptation process in animals that underwent BPBP was less intense than in those that underwent massive short bowel resection and more intense than in those that underwent transection only. Supplementation with glutamine did not improve any of the parameters studied, although it did appear to accelerate the adaptive process.
The 2nd Symposium on the Frontiers of Massively Parallel Computations
NASA Technical Reports Server (NTRS)
Mills, Ronnie (Editor)
1988-01-01
Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.
Thought Leaders during Crises in Massive Social Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corley, Courtney D.; Farber, Robert M.; Reynolds, William
The vast amount of social media data that can be gathered from the internet, coupled with workflows that utilize both commodity systems and massively parallel supercomputers, such as the Cray XMT, opens new vistas for research to support health, defense, and national security. Computer technology now enables the analysis of graph structures containing more than 4 billion vertices joined by 34 billion edges, along with metrics and massively parallel algorithms that exhibit near-linear scalability according to the number of processors. The challenge lies in making this massive data and analysis comprehensible to analysts and end-users who require actionable knowledge to carry out their duties. Simply stated, we have developed language- and content-agnostic techniques to reduce large graphs built from vast media corpora into forms people can understand. Specifically, our tools and metrics act as a survey tool to identify 'thought leaders' -- those members that lead or reflect the thoughts and opinions of an online community, independent of the source language.
Silicon Era of Carbon-Based Life: Application of Genomics and Bioinformatics in Crop Stress Research
Li, Man-Wah; Qi, Xinpeng; Ni, Meng; Lam, Hon-Ming
2013-01-01
Abiotic and biotic stresses lead to massive reprogramming of different life processes and are the major limiting factors hampering crop productivity. Omics-based research platforms allow for a holistic and comprehensive survey on crop stress responses and hence may bring forth better crop improvement strategies. Since high-throughput approaches generate considerable amounts of data, bioinformatics tools will play an essential role in storing, retrieving, sharing, processing, and analyzing them. Genomic and functional genomic studies in crops still lag far behind similar studies in humans and other animals. In this review, we summarize some useful genomics and bioinformatics resources available to crop scientists. In addition, we also discuss the major challenges and advancements in the “-omics” studies, with an emphasis on their possible impacts on crop stress research and crop improvement. PMID:23759993
MTL distributed magnet measurement system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nogiec, J.M.; Craker, P.A.; Garbarini, J.P.
1993-04-01
The Magnet Test Laboratory (MTL) at the Superconducting Super Collider Laboratory will be required to precisely and reliably measure properties of magnets in a production environment. The extensive testing of the superconducting magnets comprises several types of measurements whose main purpose is to evaluate basic parameters characterizing the magnetic, mechanical and cryogenic properties of the magnets. The measurement process will produce a significant amount of data which will be subjected to complex analysis. Such massive measurements require a careful design of both the hardware and the software of the computer systems, with a reliable, maximally automated system in mind. In order to fulfill this requirement a dedicated Distributed Magnet Measurement System (DMMS) is being developed.
Constructivist developmental theory is needed in developmental neuroscience
NASA Astrophysics Data System (ADS)
Arsalidou, Marie; Pascual-Leone, Juan
2016-12-01
Neuroscience techniques provide a window, previously unavailable, onto the origin of thoughts and actions in children. Developmental cognitive neuroscience is booming, and knowledge from human brain mapping is finding its way into education and pediatric practice. Promises of application in developmental cognitive neuroscience rest, however, on better theory-guided data interpretation. Massive amounts of neuroimaging data from children are being processed, yet published studies often do not frame their work within developmental models, to the detriment, we believe, of progress in this field. Here we describe some core challenges in interpreting the data from developmental cognitive neuroscience, and advocate the use of constructivist developmental theories of human cognition with a neuroscience interpretation.
GraphReduce: Processing Large-Scale Graphs on Accelerator-Based Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Dipanjan; Song, Shuaiwen; Agarwal, Kapil
2015-11-15
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device.
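A minimal, CPU-only illustration of the Gather-Apply-Scatter abstraction mentioned above, applied to PageRank, is sketched below; GraphReduce's GPU streams, graph partitioning and host-device data movement are not modeled here, and the example graph is made up.

```python
import numpy as np

def pagerank_gas(edges, n, iters=20, d=0.85):
    """Single-machine sketch of Gather-Apply-Scatter: scatter rank shares
    along out-edges, gather them per destination vertex, apply the update."""
    src = np.array([e[0] for e in edges])
    dst = np.array([e[1] for e in edges])
    out_deg = np.bincount(src, minlength=n).astype(float)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        contrib = rank[src] / np.maximum(out_deg[src], 1.0)        # scatter
        gathered = np.bincount(dst, weights=contrib, minlength=n)  # gather
        rank = (1.0 - d) / n + d * gathered                        # apply
    return rank

edges = [(0, 1), (1, 2), (2, 0), (2, 1)]
print(pagerank_gas(edges, n=3).round(3))
```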
Purely Dry Mergers do not Explain the Observed Evolution of Massive Early-type Galaxies since z ~ 1
NASA Astrophysics Data System (ADS)
Sonnenfeld, Alessandro; Nipoti, Carlo; Treu, Tommaso
2014-05-01
Several studies have suggested that the observed size evolution of massive early-type galaxies (ETGs) can be explained as a combination of dry mergers and progenitor bias, at least since z ~ 1. In this paper we carry out a new test of the dry-merger scenario based on recent lensing measurements of the evolution of the mass density profile of ETGs. We construct a theoretical model for the joint evolution of the size and mass density profile slope γ' driven by dry mergers occurring at rates given by cosmological simulations. Such a dry-merger model predicts a strong decrease of γ' with cosmic time, inconsistent with the almost constant γ' inferred from observations in the redshift range 0 < z < 1. We then show with a simple toy model that a modest amount of cold gas in the mergers—consistent with the upper limits on recent star formation in ETGs—is sufficient to reconcile the model with measurements of γ'. By fitting for the amount of gas accreted during mergers, we find that models with dissipation are consistent with observations of the evolution in both size and density slope, if ~4% of the total final stellar mass arises from the gas accreted since z ~ 1. Purely dry merger models are ruled out at >99% CL. We thus suggest a scenario where the outer regions of massive ETGs grow by accretion of stars and dark matter, while small amounts of dissipation and nuclear star formation conspire to keep the mass density profile constant and approximately isothermal.
Connections in wood and foliage
Kevin T. Smith
2009-01-01
Trees are networked systems that capture energy, move massive amounts of water and material, and provide the setting for human society and for the lives of many associated organisms. Tree survival depends on making and breaking the right connections within these networks.
Chromoplast biogenesis and carotenoid accumulation
USDA-ARS?s Scientific Manuscript database
Chromoplasts are special organelles that possess superior ability to synthesize and store massive amounts of carotenoids. They are responsible for the distinctive colors found in fruits, flowers, and roots. Chromoplasts exhibit various morphologies and are derived from either pre-existing chloroplas...
NASA Technical Reports Server (NTRS)
Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.
2012-01-01
This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefit from this approach through reduced development and design cycle time include: creation of analysis models for the aerodynamic discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.
GPUmotif: An Ultra-Fast and Energy-Efficient Motif Analysis Program Using Graphics Processing Units
Zandevakili, Pooya; Hu, Ming; Qin, Zhaohui
2012-01-01
Computational detection of TF binding patterns has become an indispensable tool in functional genomics research. With the rapid advance of new sequencing technologies, large amounts of protein-DNA interaction data have been produced. Analyzing this data can provide substantial insight into the mechanisms of transcriptional regulation. However, the massive amount of sequence data presents daunting challenges. In our previous work, we have developed a novel algorithm called Hybrid Motif Sampler (HMS) that enables more scalable and accurate motif analysis. Despite much improvement, HMS is still time-consuming due to the requirement to calculate matching probabilities position-by-position. Using the NVIDIA CUDA toolkit, we developed a graphics processing unit (GPU)-accelerated motif analysis program named GPUmotif. We proposed a “fragmentation" technique to hide data transfer time between memories. Performance comparison studies showed that commonly-used model-based motif scan and de novo motif finding procedures such as HMS can be dramatically accelerated when running GPUmotif on NVIDIA graphics cards. As a result, energy consumption can also be greatly reduced when running motif analysis using GPUmotif. The GPUmotif program is freely available at http://sourceforge.net/projects/gpumotif/ PMID:22662128
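The position-by-position matching probability calculation that makes HMS slow is essentially a position weight matrix (PWM) scan over every sequence window; GPU tools parallelize this loop across windows and sequences. The sketch below is a plain CPU version of such a scan with a made-up 3-bp motif, not GPUmotif's implementation.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def scan_sequence(seq, pwm, background=0.25):
    """Log-odds score of every window of seq against a position weight
    matrix of shape (motif_length, 4) holding per-position base probabilities."""
    w = pwm.shape[0]
    log_odds = np.log(pwm / background)
    scores = np.empty(len(seq) - w + 1)
    for i in range(len(scores)):
        idx = [BASES[b] for b in seq[i:i + w]]
        scores[i] = log_odds[np.arange(w), idx].sum()
    return scores

pwm = np.array([[0.7, 0.1, 0.1, 0.1],
                [0.1, 0.1, 0.7, 0.1],
                [0.1, 0.1, 0.1, 0.7]])    # toy motif "AGT"
print(scan_sequence("TTAGTAC", pwm).round(2))
```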
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David
2006-05-01
The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. Several applications exist, however, where having the desired information calculated quickly enough for practical use is highly desirable. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques. The techniques cover four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in the experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.
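As a small illustration of the kind of pixel-wise kernel that such parallel techniques distribute across cluster nodes (each node handling a spatial tile), the sketch below computes a Spectral Angle Mapper map against a target signature with NumPy; the cube size, the random data and the single-machine vectorized form are assumptions for demonstration only.

```python
import numpy as np

def spectral_angles(cube, target):
    """Angle between every pixel spectrum of a (rows, cols, bands) cube
    and a target signature; smaller angles indicate better matches."""
    pix = cube.reshape(-1, cube.shape[-1])
    cos = pix @ target / (np.linalg.norm(pix, axis=1) * np.linalg.norm(target))
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

cube = np.random.rand(64, 64, 100)   # toy 100-band image tile
target = np.random.rand(100)         # laboratory signature of a material
angles = spectral_angles(cube, target)
print(angles.shape, round(float(angles.min()), 3))
```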
Content Platforms Meet Data Storage, Retrieval Needs
NASA Technical Reports Server (NTRS)
2012-01-01
Earth is under a constant barrage of information from space. Whether from satellites orbiting our planet, spacecraft circling Mars, or probes streaking toward the far reaches of the Solar System, NASA collects massive amounts of data from its spacefaring missions each day. NASA's Earth Observing System (EOS) satellites, for example, provide daily imagery and measurements of Earth's atmosphere, oceans, vegetation, and more. The Earth Observing System Data and Information System (EOSDIS) collects all of that science data and processes, archives, and distributes it to researchers around the globe; EOSDIS recently reached a total archive volume of 4.5 petabytes. Try to store that amount of information in your standard, four-drawer file cabinet, and you would need 90 million to get the job done. To manage the flood of information, NASA has explored technologies to efficiently collect, archive, and provide access to EOS data for scientists today and for years to come. One such technology is now providing similar capabilities to businesses and organizations worldwide.
Study of the radiated energy loss during massive gas injection mitigated disruptions on EAST
NASA Astrophysics Data System (ADS)
Duan, Y. M.; Hao, Z. K.; Hu, L. Q.; Wang, L.; Xu, P.; Xu, L. Q.; Zhuang, H. D.; EAST Team
2015-08-01
The MGI-mitigated disruption experiments were carried out on EAST with a new fast gas-controlling valve in 2012. Different amounts of the noble gas He, or of a gas mixture of 99% He + 1% Ar, were injected into the plasma in the current flat-top phase and the current ramp-down phase separately. The initial results of the MGI experiments are described. The MGI system and the radiation measurement system are briefly introduced. The characteristics of the radiation distribution and the radiated energy loss are analyzed. About 50% of the stored thermal energy Wdia is dissipated by radiation during the entire disruption process, and the C and Li impurities from the PFC play important roles in the radiative energy loss. The amount of injected gas can affect the pre-TQ phase. Strong poloidal asymmetry of the radiation begins to appear in the CQ phase, which is possibly caused by changes in the plasma configuration as a result of the VDE. No toroidal radiation asymmetry is observed at present.
Vetter, Jeffrey S.
2005-02-01
The method and system described herein present a technique for performance analysis that helps users understand the communication behavior of their message passing applications. The method and system described herein may automatically classify individual communication operations and reveal the cause of communication inefficiencies in the application. This classification allows the developer to quickly focus on the culprits of truly inefficient behavior, rather than manually foraging through massive amounts of performance data. Specifically, the method and system described herein trace the message operations of Message Passing Interface (MPI) applications and then classify each individual communication event using a supervised learning technique: decision tree classification. The decision tree may be trained using microbenchmarks that demonstrate both efficient and inefficient communication. Since the method and system described herein adapt to the target system's configuration through these microbenchmarks, they simultaneously automate the performance analysis process and improve classification accuracy. The method and system described herein may improve the accuracy of performance analysis and dramatically reduce the amount of data that users must encounter.
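A hedged sketch of the core idea, training a decision tree on labeled microbenchmark events and then classifying traced events from an application, is shown below using scikit-learn; the per-event features (message size, wait time, late-receive flag) and the labels are hypothetical and are not taken from the patent text.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-event features: message size (bytes), wait time (us),
# late-posted receive (0/1).  Labels: 0 = efficient, 1 = inefficient.
train_X = np.array([[1024,     5, 0],
                    [1024,   900, 1],
                    [65536,   20, 0],
                    [65536, 2500, 1]], dtype=float)
train_y = np.array([0, 1, 0, 1])

clf = DecisionTreeClassifier(max_depth=3).fit(train_X, train_y)

# Classify communication events traced from the target application.
events = np.array([[4096, 12, 0], [4096, 1800, 1]], dtype=float)
print(clf.predict(events))   # e.g. the second event is flagged as inefficient
```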
Big Data Analytics for Modelling and Forecasting of Geomagnetic Field Indices
NASA Astrophysics Data System (ADS)
Wei, H. L.
2016-12-01
A massive amount of data is produced and stored in the research areas of space weather and space climate. However, the value of the vast majority of the data acquired every day may not be effectively or efficiently exploited in daily practice when we try to forecast solar wind parameters and geomagnetic field indices from these recorded measurements or digital signals, largely because of the challenges of dealing with big data, which are characterized by the 4V features: volume (a massively large amount of data), variety (a great number of different types of data), velocity (a requirement for quick processing of the data), and veracity (the trustworthiness and usability of the data). In order to obtain more reliable and accurate predictive models for geomagnetic field indices, models should be developed from a big data analytics perspective (or at least benefit from such a perspective). This study proposes several data-based modelling frameworks which aim to produce more efficient predictive models for space weather parameter forecasting by means of system identification and big data analytics. More specifically, it aims to build more reliable mathematical models that characterise the relationship between solar wind parameters and geomagnetic field indices, for example the dependence of the Dst and Kp indices on a few solar wind parameters and magnetic field indices, namely, solar wind velocity (V), southward interplanetary magnetic field (Bs), solar wind rectified electric field (VBs), and dynamic flow pressure (P). Examples are provided to illustrate how the proposed modelling approaches are applied to Dst and Kp index prediction.
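One minimal example of the system-identification flavour of such models is an ARX (autoregressive with exogenous input) fit of Dst on its own past values and past VBs, estimated by ordinary least squares. The sketch below uses synthetic series and a deliberately simple structure; the models discussed in the study are more elaborate, and a real application would use measured solar-wind data and the observed Dst index.

```python
import numpy as np

def fit_arx(dst, vbs, p=2, q=2):
    """Least-squares ARX model: Dst(t) regressed on its p past values,
    q past values of the solar-wind electric field VBs, and a constant."""
    start = max(p, q)
    rows, targets = [], []
    for t in range(start, len(dst)):
        rows.append(np.concatenate([dst[t - p:t], vbs[t - q:t], [1.0]]))
        targets.append(dst[t])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coef

# Toy hourly series standing in for the real measurements.
rng = np.random.default_rng(2)
vbs = rng.normal(2.0, 1.0, 500)
dst = np.zeros(500)
for t in range(1, 500):
    dst[t] = 0.9 * dst[t - 1] - 3.0 * vbs[t - 1] + rng.normal(0.0, 1.0)
print(fit_arx(dst, vbs).round(2))
```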
Wildfire Health and Economic Impacts Case Study
Since 2008 eastern North Carolina has experienced 6 major wildfires, far exceeding the historic 50-year expected rate of return. Initiated by lightning strikes, these fires spread across dry, extremely vulnerable peat bogs several feet deep. The fires produced massive amounts...
[Massive trichuriasis in an adult diagnosed by colonoscopy].
Sapunar, J; Gil, L C; Gil, J G
1999-01-01
A case of massive trichuriasis in a 37-year-old female from a rural locality of the Metropolitan Region of Chile, with antecedents of alcoholism, chronic hepatic damage and portal cavernomatosis, is presented. She had practiced geophagia for 12 years. In the last six months she had frequently presented liquid diarrhea, colicky abdominal pain, tenesmus and a sensation of abdominal distention. Clinical and laboratory tests confirmed her hepatic condition, associated with celiac disease with anemia and hypereosinophilia. Within a week the diarrhea became worse and dysentery appeared. A colonoscopy revealed an impressive, massive trichuriasis. The patient was successfully treated with two courses of mebendazole, 200 mg tablets twice daily for three days, separated by a one-week interval. After the first course she evacuated a large number of Trichuris trichiura, fecal evacuations became normal, the geophagia disappeared and she recovered 4 kg of body weight.
A massively parallel computational approach to coupled thermoelastic/porous gas flow problems
NASA Technical Reports Server (NTRS)
Shia, David; Mcmanus, Hugh L.
1995-01-01
A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
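The fully explicit formulation reduces each governing equation to stencil updates in which every new value depends only on the previous time step, which is what makes the scheme straightforward to distribute across many processors. The sketch below shows a 1-D explicit heat-conduction update as a stand-in for the coupled thermoelastic/porous-flow equations of the paper; the grid, coefficients and boundary condition are illustrative assumptions.

```python
import numpy as np

def explicit_heat_step(T, alpha, dx, dt):
    """One forward-Euler update of dT/dt = alpha * d2T/dx2; each interior
    point uses only its neighbours from the previous step."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

T = np.zeros(101)
T[0] = 1000.0                      # hot boundary, e.g. a heated surface
dx, dt, alpha = 0.01, 1e-3, 1e-5   # dt satisfies the stability limit dx**2 / (2*alpha)
for _ in range(1000):
    T = explicit_heat_step(T, alpha, dx, dt)
print(round(float(T[5]), 2))
```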
Thermal stress control using waste steel fibers in massive concretes
NASA Astrophysics Data System (ADS)
Sarabi, Sahar; Bakhshi, Hossein; Sarkardeh, Hamed; Nikoo, Hamed Safaye
2017-11-01
One of the important subjects in massive concrete structures is the control of the heat of hydration generated and, consequently, the potential for cracking due to thermal stress. In the present study, by using waste turnery steel fibers in massive concrete, the amount of cement used was reduced without changing the compressive strength. By substituting part of the cement with waste steel fibers, the costs and the generated hydration heat were reduced and the tensile strength was increased. The results showed that by using 0.5% waste turnery steel fibers, and consequently reducing the cement content by 32%, the hydration heat was reduced by 23.4% without changing the compressive strength. Moreover, the maximum heat gradient was reduced from 18.5% in the plain concrete sample to 12% in the fiber-reinforced concrete sample.
Closha: bioinformatics workflow system for the analysis of massive sequencing data.
Ko, GunHwan; Kim, Pan-Gyu; Yoon, Jongcheol; Han, Gukhee; Park, Seong-Jin; Song, Wangho; Lee, Byungwook
2018-02-19
While next-generation sequencing (NGS) costs have fallen in recent years, the cost and complexity of computation remain substantial obstacles to the use of NGS in bio-medical care and genomic research. The rapidly increasing amounts of data available from the new high-throughput methods have made data processing infeasible without automated pipelines. The integration of data and analytic resources into workflow systems provides a solution to the problem by simplifying the task of data analysis. To address this challenge, we developed a cloud-based workflow management system, Closha, to provide fast and cost-effective analysis of massive genomic data. We implemented complex workflows making optimal use of high-performance computing clusters. Closha allows users to create multi-step analyses using drag and drop functionality and to modify the parameters of pipeline tools. Users can also import the Galaxy pipelines into Closha. Closha is a hybrid system that enables users to use both analysis programs providing traditional tools and MapReduce-based big data analysis programs simultaneously in a single pipeline. Thus, the execution of analytics algorithms can be parallelized, speeding up the whole process. We also developed a high-speed data transmission solution, KoDS, to transmit a large amount of data at a fast rate. KoDS has a file transfer speed of up to 10 times that of normal FTP and HTTP. The computer hardware for Closha is 660 CPU cores and 800 TB of disk storage, enabling 500 jobs to run at the same time. Closha is a scalable, cost-effective, and publicly available web service for large-scale genomic data analysis. Closha supports the reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Closha provides a user-friendly interface to all genomic scientists to try to derive accurate results from NGS platform data. The Closha cloud server is freely available for use from http://closha.kobic.re.kr/ .
Formation of the giant planets
NASA Technical Reports Server (NTRS)
Lissauer, Jack J.
2006-01-01
The observed properties of giant planets, models of their evolution and observations of protoplanetary disks provide constraints on the formation of gas giant planets. The four largest planets in our Solar System contain considerable quantities of hydrogen and helium, which could not have condensed into solid planetesimals within the protoplanetary disk. All three (transiting) extrasolar giant planets with well determined masses and radii also must contain substantial amounts of these light gases. Jupiter and Saturn are mostly hydrogen and helium, but have larger abundances of heavier elements than does the Sun. Neptune and Uranus are primarily composed of heavier elements. HD 149026 b, which is slightly more massive than is Saturn, appears to have comparable quantities of light gases and heavy elements. HD 209458 b and TrES-1 are primarily hydrogen and helium, but may contain supersolar abundances of heavy elements. Spacecraft flybys and observations of satellite orbits provide estimates of the gravitational moments of the giant planets in our Solar System, which in turn provide information on the internal distribution of matter within Jupiter, Saturn, Uranus and Neptune. Atmospheric thermal structure and heat flow measurements constrain the interior temperatures of planets. Internal processes may cause giant planets to become more compositionally differentiated or alternatively more homogeneous; high-pressure laboratory experiments provide data useful for modeling these processes. The preponderance of evidence supports the core nucleated gas accretion model. According to this model, giant planets begin their growth by the accumulation of small solid bodies, as do terrestrial planets. However, unlike terrestrial planets, the growing giant planet cores become massive enough that they are able to accumulate substantial amounts of gas before the protoplanetary disk dissipates. The primary question regarding the core nucleated growth model is under what conditions planets with small cores/total heavy element abundances can accrete gaseous envelopes within the lifetimes of gaseous protoplanetary disks.
NASA Astrophysics Data System (ADS)
Brunini, Adrián; López, María Cristina
2018-06-01
We present a semi-analytic model to evaluate the delivery of water to the habitable zone around a solar-type star by icy planetesimals born beyond the snow line. The model includes sublimation of ice, gas drag and scattering by an outer giant planet located near the snow line. The sublimation model is general and could be applicable to planetary synthesis models or N-body simulations of the formation of planetary systems. We perform a short series of simulations to assess the potential relevance of sublimation of volatiles in the process of delivery of water to the inner regions of a planetary system during the early stages of its formation. We can anticipate that erosion by sublimation would prevent the arrival of much water to the habitable zone of protoplanetary disks in the form of icy planetesimals. Close encounters with a massive planet orbiting near the outer edge of the snow line could make it possible for planetesimals to reach the habitable zone somewhat less eroded. However, only large planetesimals could provide appreciable amounts of water. Massive disks and sharp gas surface density profiles favor icy planetesimals reaching the inner regions of a protoplanetary disk.
Serpentinitic waste materials: possible reuses and critical issues
NASA Astrophysics Data System (ADS)
Cavallo, Alessandro
2017-04-01
The extraction and processing of marbles, rocks and granites produces a significant amount of waste materials, in the form of shapeless blocks, scraps, gravel and sludge. Current regulations and a greater concern for the environment promote the reuse of these wastes: quartz-feldspathic materials are successfully used for ceramics, crushed porphyry as track ballast, and carbonate wastes for lime, cement and fillers. However, there are currently no reuses for serpentinitic materials: a striking example is represented by the Valmalenco area (central Alps, northern Italy), a relatively small productive district. In this area 22 different enterprises operate in the quarrying and/or processing of serpentinites with various textures, schistose to massive, and color shades; the commercial products are used all over the world and are known under many commercial names. The total volume extracted in the quarries is estimated at around 68000 m3/yr, and the resulting commercial blocks and products amount to about 40 - 50 % of the extracted material. The processing wastes can vary significantly according to the finished product: 35 % of waste can be estimated in the case of slab production, whereas 50 % can be estimated in the case of gang-saw cutting of massive serpentinite blocks. The total estimate of the processing rock waste in the Valmalenco area is about 12700 m3/yr; together with the quarry waste, the total amount of waste produced in the area is more than 43000 m3/yr. The sludge (approximately 12000 m3/yr, of which more than 95 % has grain size < 50 micron) mainly derives from the cutting (by diamond disk and gang-saw) and polishing of massive serpentinites; it is filter-pressed before disposal (water content ranging from 11.5 to 19.4 wt. %). All the different waste materials (85 samples) were characterized by quantitative XRPD (FULLPAT software), whole-rock geochemistry (ICP-AES, ICP-MS and Leco®) and SEM-EDS. The mineralogical composition is quite variable from quarry to quarry, with abundant antigorite (up to 90 wt. %) and olivine (up to 38 wt. %), and variable contents of diopside, chlorite, magnetite, chromite and brucite. The chemical composition reflects the protolith: MgO 35.1 - 42.7 wt. %, SiO2 38.8 - 42.3 wt. %, Fe2O3 7.1 - 8.8 wt. %, Al2O3 0.9 - 2.8 wt. %, CaO 0.2 - 3.1 wt. %, Cr2O3 0.26 - 0.35 wt. %, Ni 1800 - 2100 ppm; only small differences can be observed in trace elements. SEM-EDS investigations revealed small amounts of chrysotile asbestos fibers (generally < 1000 ppm, mean values 200 - 400 ppm), deriving from cracks, fissures and veins of the waste blocks. Very few published studies on the reuse of serpentinitic wastes can be found. Finely ground antigorite-rich materials could be used as a filler for plastics (instead of talc), whereas olivine-rich wastes could serve as a reactant for fixing, as carbonates, the carbon dioxide released during the use of fossil fuels. In the ceramic industry, the most promising target is represented by forsterite and/or high-MgO ceramics and forsterite refractories (with periclase addition), but also by cordierite ceramics (adding kaolin) and high-hardness vitroceramics. Assessing the real possibility of industrial use of serpentinitic materials will require much more experimental work, because no relevant previous studies are available. Special care must be taken to avoid chrysotile asbestos contamination.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
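To make the influence-curve idea concrete: the empirical AUC is a two-sample U-statistic whose first-order contributions are per-observation "placement values" (for a positive case, the fraction of negatives it out-scores; for a negative case, the fraction of positives that out-score it), and the sample variance of the centered, class-weighted contributions divided by n gives a variance estimate with no resampling. The numpy sketch below illustrates this for a single AUC estimate; the cross-validated estimator in the paper averages fold-specific influence curves, so this is an illustration of the idea, not the authors' exact code.

import numpy as np

def auc_ic_variance(scores, labels):
    # Empirical AUC plus an influence-curve-based variance estimate.
    # Placement values: for each positive, the fraction of negatives it
    # out-scores; for each negative, the fraction of positives above it.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    n, p1 = len(scores), labels.mean()

    phi_pos = np.array([(s > neg).mean() + 0.5 * (s == neg).mean() for s in pos])
    phi_neg = np.array([(pos > s).mean() + 0.5 * (pos == s).mean() for s in neg])
    auc = phi_pos.mean()

    ic = np.zeros(n)                         # estimated influence curve
    ic[labels == 1] = (phi_pos - auc) / p1
    ic[labels == 0] = (phi_neg - auc) / (1.0 - p1)
    return auc, ic.var() / n                 # AUC and its variance estimate

# usage: auc, var = auc_ic_variance(model_scores, y); se = var ** 0.5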
Chromatographic separation of radioactive noble gases from xenon
NASA Astrophysics Data System (ADS)
Akerib, D. S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Bramante, R.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Chiller, A. A.; Chiller, C.; Coffey, T.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Dobson, J. E. Y.; Druszkiewicz, E.; Edwards, B. N.; Faham, C. H.; Fiorucci, S.; Gaitskell, R. J.; Gehman, V. M.; Ghag, C.; Gibson, K. R.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Ihm, M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lee, C.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Pech, K.; Phelps, P.; Reichhart, L.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solovov, V. N.; Sorensen, P.; Stephenson, S.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Yazdani, K.; Young, S. K.; Zhang, C.
2018-01-01
The Large Underground Xenon (LUX) experiment operates at the Sanford Underground Research Facility to detect nuclear recoils from the hypothetical Weakly Interacting Massive Particles (WIMPs) on a liquid xenon target. Liquid xenon typically contains trace amounts of the noble radioactive isotopes 85Kr and 39Ar that are not removed by the in situ gas purification system. The decays of these isotopes at concentrations typical of research-grade xenon would be a dominant background for a WIMP search experiment. To remove these impurities from the liquid xenon, a chromatographic separation system based on adsorption on activated charcoal was built. 400 kg of xenon was processed, reducing the average concentration of krypton from 130 ppb to 3.5 ppt as measured by a cold-trap assisted mass spectroscopy system. A 50 kg batch spiked to 0.001 g/g of krypton was processed twice and reduced to an upper limit of 0.2 ppt.
System and Method for Monitoring Distributed Asset Data
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2015-01-01
A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
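A toy sketch of the architecture described (per-asset regression on operating conditions, with distributed preprocessing into compressed records): each asset node reduces its raw history to the sufficient statistics of a linear performance model, a central node pools them, and per-asset residuals against the pooled model serve as drift/anomaly scores. The function names and the linear model are hypothetical illustrations, not the patented method.

import numpy as np

def preprocess_asset(X, y):
    # Run on each asset: compress raw records (X = operating conditions,
    # y = performance) into fixed-size sufficient statistics.
    X1 = np.column_stack([np.ones(len(X)), X])           # add intercept
    return X1.T @ X1, X1.T @ y, len(y)

def fit_pooled(summaries):
    # Run centrally: combine per-asset summaries into one regression model.
    XtX = sum(s[0] for s in summaries)
    Xty = sum(s[1] for s in summaries)
    return np.linalg.solve(XtX, Xty)                      # pooled coefficients

def anomaly_score(X, y, beta):
    # Per-asset drift/anomaly score: mean residual against the pooled model.
    X1 = np.column_stack([np.ones(len(X)), X])
    return float(np.mean(y - X1 @ beta))

# usage (hypothetical data): summaries = [preprocess_asset(Xa, ya) for Xa, ya in assets]
# beta = fit_pooled(summaries); scores = [anomaly_score(Xa, ya, beta) for Xa, ya in assets]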
Petersen, Maya; van der Laan, Mark
2015-01-01
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737
Chromatographic separation of radioactive noble gases from xenon
Akerib, DS; Araújo, HM; Bai, X; ...
2017-10-31
The Large Underground Xenon (LUX) experiment operates at the Sanford Underground Research Facility to detect nuclear recoils from the hypothetical Weakly Interacting Massive Particles (WIMPs) on a liquid xenon target. Liquid xenon typically contains trace amounts of the noble radioactive isotopes 85Kr and 39Ar that are not removed by the in situ gas purification system. The decays of these isotopes at concentrations typical of research-grade xenon would be a dominant background for a WIMP search experiment. To remove these impurities from the liquid xenon, a chromatographic separation system based on adsorption on activated charcoal was built. 400 kg of xenon was processed, reducing the average concentration of krypton from 130 ppb to 3.5 ppt as measured by a cold-trap assisted mass spectroscopy system. A 50 kg batch spiked to 0.001 g/g of krypton was processed twice and reduced to an upper limit of 0.2 ppt.
On the statistical properties of viral misinformation in online social media
NASA Astrophysics Data System (ADS)
Bessi, Alessandro
2017-03-01
The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.
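The two statistical ingredients of the paper, exponential interarrival times for a homogeneous Poisson process and a Bayesian posterior for its rate, are easy to sketch. Assuming a conjugate Gamma(a0, b0) prior (an illustrative choice; the paper's actual prior is not given here), the posterior for the rate after observing n events over a window of length T is Gamma(a0 + n, b0 + T), and predictive counts over a future window follow by integrating the Poisson likelihood over this posterior.

import numpy as np
from scipy import stats

def rate_posterior(event_times, T, a0=1.0, b0=1.0):
    # Posterior of a homogeneous Poisson rate given event times on [0, T].
    # The conjugate Gamma(a0, b0) prior is an illustrative assumption.
    n = len(event_times)
    post = stats.gamma(a=a0 + n, scale=1.0 / (b0 + T))

    # quick check of the Poisson assumption: interarrival times should look
    # exponential with mean roughly T / n
    gaps = np.diff(np.sort(event_times))
    return post, gaps.mean()

# usage: post, mean_gap = rate_posterior(times_of_viral_posts, T=365.0)
# post.mean() and post.interval(0.95) give the rate estimate and credible interval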
Dynamic modeling of Tampa Bay urban development using parallel computing
Xian, G.; Crane, M.; Steinwand, D.
2005-01-01
Urban land use and land cover has changed significantly in the environs of Tampa Bay, Florida, over the past 50 years. Extensive urbanization has created substantial change to the region's landscape and ecosystems. This paper uses a dynamic urban-growth model, SLEUTH, which applies six geospatial data themes (slope, land use, exclusion, urban extent, transportation, hillshade), to study the process of urbanization and associated land use and land cover change in the Tampa Bay area. To reduce processing time and complete the modeling process within an acceptable period, the model is recoded and ported to a Beowulf cluster. The parallel-processing computer system accomplishes the massive amount of computation the modeling simulation requires. The SLEUTH calibration process for the Tampa Bay urban-growth simulation takes only 10 h of CPU time. The model predicts future land use/cover change trends for Tampa Bay from 1992 to 2025. Urban extent is predicted to double in the Tampa Bay watershed between 1992 and 2025. Results show an upward trend of urbanization at the expense of a decline of 58% and 80% in agriculture and forested lands, respectively.
Helium-Shell Nucleosynthesis and Extinct Radioactivities
NASA Technical Reports Server (NTRS)
Meyer, B. S.; The, L.-S.; Clayton, D. D.; ElEid, M. F.
2004-01-01
Although the exact site for the origin of the r-process isotopes remains mysterious, most thinking has centered on matter ejected from the cores of massive stars in core-collapse supernovae [1-3]. In the 1970's and 1980's, however, difficulties in understanding the yields from such models led workers to consider the possibility of r-process nucleosynthesis farther out in the exploding star, in particular, in the helium burning shell [4,5]. The essential idea was that shock passage through this shell would heat and compress this material to the point that the reactions 13C(α,n)16O and, especially, 22Ne(α,n)25Mg would generate enough neutrons to capture on preexisting seed nuclei and drive an "n process" [6], which could reproduce the r-process abundances. Subsequent work showed that the required 13C and 22Ne abundances were too large compared to the amounts available in realistic models [7], and recent thinking has returned to supernova core material or matter ejected from neutron star-neutron star collisions as the more likely r-process sites.
Buysse, Karen; Beulen, Lean; Gomes, Ingrid; Gilissen, Christian; Keesmaat, Chantal; Janssen, Irene M; Derks-Willemen, Judith J H T; de Ligt, Joep; Feenstra, Ilse; Bekker, Mireille N; van Vugt, John M G; Geurts van Kessel, Ad; Vissers, Lisenka E L M; Faas, Brigitte H W
2013-12-01
Circulating cell-free fetal DNA (ccffDNA) in maternal plasma is an attractive source for noninvasive prenatal testing (NIPT). The amount of total cell-free DNA significantly increases 24 h after venipuncture, leading to a relative decrease of the ccffDNA fraction in the blood sample. In this study, we evaluated the downstream effects of extended processing times on the reliability of aneuploidy detection by massively parallel sequencing (MPS). Whole blood from pregnant women carrying normal and trisomy 21 (T21) fetuses was collected in regular EDTA anti-coagulated tubes and processed within 6, 24 and 48 h after venipuncture. Samples of all three different time points were further analyzed by MPS using Z-score calculation and the percentage of ccffDNA based on X-chromosome reads. Both T21 samples were correctly identified as such at all time points. However, after 48 h, a higher deviation in Z-scores was noticed. Even though the percentage of ccffDNA in a plasma sample has been shown previously to significantly decrease 24 h after venipuncture, the percentages based on MPS results did not show a significant decrease after 6, 24 or 48 h. The quality and quantity of ccffDNA extracted from plasma samples processed up to 24 h after venipuncture are sufficiently high for reliable downstream NIPT analysis by MPS. Furthermore, we show that it is important to determine the percentage of ccffDNA in the fraction of the sample that is actually used for NIPT, as downstream procedures might influence the fetal or maternal fraction. © 2013.
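The Z-score approach referred to above is conceptually simple: the fraction of reads mapping to chromosome 21 in the test sample is standardized against the same fraction in a set of euploid reference samples, and values beyond roughly +3 are flagged as consistent with T21. A minimal sketch follows; the reference set, threshold and count structure are illustrative, not the study's actual pipeline.

import numpy as np

def t21_z_score(sample_counts, reference_counts, chrom="chr21", z_cutoff=3.0):
    # Z-score test for trisomy 21 from per-chromosome mapped read counts.
    # sample_counts: dict chrom -> read count for the test sample
    # reference_counts: list of such dicts for euploid reference pregnancies
    frac = lambda c: c[chrom] / sum(c.values())           # chr21 read fraction
    ref = np.array([frac(c) for c in reference_counts])
    z = (frac(sample_counts) - ref.mean()) / ref.std(ddof=1)
    return z, z > z_cutoff

# usage: z, is_t21 = t21_z_score(test_sample, euploid_references)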
Chemistry and Environments of Dolomitization —A Reappraisal
NASA Astrophysics Data System (ADS)
Machel, Hans-G.; Mountjoy, Eric W.
1986-05-01
Dolomitization of calcium carbonate can best be expressed by mass transfer reactions that allow for volume gain, preservation, or loss during the replacement process. Experimental data, as well as textures and porosities of natural dolomites, indicate that these reactions must include CO₃²⁻ and/or HCO₃⁻ supplied by the solution to the reaction site. Since dolomite formation is thermodynamically favoured in solutions of (a) low Ca²⁺/Mg²⁺ ratios, (b) low Ca²⁺/CO₃²⁻ (or Ca²⁺/HCO₃⁻) ratios, and (c) high temperatures, the thermodynamic stability for the system calcite-dolomite-water is best represented in a diagram with these three parameters as axes. Kinetic considerations favour dolomitization under the same conditions, and additionally at low as well as at high salinities. If thermodynamic and kinetic considerations are combined, the following conditions and environments are considered chemically conducive to dolomitization: (1) environments of any salinity above thermodynamic and kinetic saturation with respect to dolomite (i.e. freshwater/seawater mixing zones, normal saline to hypersaline subtidal environments, hypersaline supratidal environments, schizohaline environments); (2) alkaline environments (i.e. those under the influence of bacterial reduction and/or fermentation processes, or with high input of alkaline continental groundwaters); and (3) many environments with temperatures greater than about 50°C (subsurface and hydrothermal environments). Whether or not massive, replacive dolostones are formed in these environments depends on a sufficient supply of magnesium, and thus on hydrologic parameters. Most massive dolostones, particularly those consisting of shallowing-upward cycles and capped by regional unconformities, have been interpreted to be formed according to either the freshwater/seawater mixing model or the sabkha with reflux model. However, close examination of natural mixing zones and exposed evaporitic environments reveals that the amounts of dolomite formed are small and texturally different from the massive, replacive dolostones commonly inferred to have been formed in these environments. Many shallowing-upward sequences are devoid of dolomite. It is therefore suggested that massive, replacive dolomitization during exposure is rare, if not impossible. Rather, only small quantities of dolomite (cement or replacement) are formed which may act as nuclei for later subsurface dolomitization. Alternatively, large-scale dolomitization may take place in shallow subtidal environments of moderate to strong hypersalinity. The integration of stratigraphic, petrographic, geochemical, and hydrological parameters suggests that the only environments capable of forming massive, replacive dolostones on a large scale are shallow, hypersaline subtidal environments and certain subsurface environments.
Biomechanical effect of latissimus dorsi tendon transfer for irreparable massive cuff tear.
Oh, Joo Han; Tilan, Justin; Chen, Yu-Jen; Chung, Kyung Chil; McGarry, Michelle H; Lee, Thay Q
2013-02-01
The purpose of this study was to determine the biomechanical effects of latissimus dorsi transfer in a cadaveric model of massive posterosuperior rotator cuff tear. Eight cadaveric shoulders were tested at 0°, 30°, and 60° of abduction in the scapular plane with anatomically based muscle loading. Humeral rotational range of motion and the amount of humeral rotation due to muscle loading were measured. Glenohumeral kinematics and contact characteristics were measured throughout the range of motion. After testing in the intact condition, the supraspinatus and infraspinatus were resected. The cuff tear was then repaired by latissimus dorsi transfer. Two muscle loading conditions were applied after latissimus transfer to simulate increased tension that may occur due to limited muscle excursion. A repeated-measures analysis of variance was used for statistical analysis. The amount of internal rotation due to muscle loading and maximum internal rotation increased with massive cuff tear and was restored with latissimus transfer (P < .05). At maximum internal rotation, the humeral head apex shifted anteriorly, superiorly, and laterally at 0° of abduction after massive cuff tear (P < .05); this abnormal shift was corrected with latissimus transfer (P < .05). However, at 30° and 60° of abduction, latissimus transfer significantly altered kinematics (P < .05) and latissimus transfer with increased muscle loading increased contact pressure, especially at 60° of abduction. Latissimus dorsi transfer is beneficial in restoring humeral internal/external rotational range of motion, the internal/external rotational balance of the humerus, and glenohumeral kinematics at 0° of abduction. However, latissimus dorsi transfer with simulated limited excursion may lead to an overcompensation that can further deteriorate normal biomechanics, especially at higher abduction angles. Published by Mosby, Inc.
The kinetics of aerosol particle formation and removal in NPP severe accidents
NASA Astrophysics Data System (ADS)
Zatevakhin, Mikhail A.; Arefiev, Valentin K.; Semashko, Sergey E.; Dolganov, Rostislav A.
2016-06-01
Severe Nuclear Power Plant (NPP) accidents are accompanied by release of a massive amount of energy, radioactive products and hydrogen into the atmosphere of the NPP containment. A valid estimation of consequences of such accidents can only be carried out through the use of the integrated codes comprising a description of the basic processes which determine the consequences. A brief description of a coupled aerosol and thermal-hydraulic code to be used for the calculation of the aerosol kinetics within the NPP containment in case of a severe accident is given. The code comprises a KIN aerosol unit integrated into the KUPOL-M thermal-hydraulic code. Some features of aerosol behavior in severe NPP accidents are briefly described.
The kinetics of aerosol particle formation and removal in NPP severe accidents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zatevakhin, Mikhail A.; Arefiev, Valentin K.; Semashko, Sergey E.
2016-06-08
Severe Nuclear Power Plant (NPP) accidents are accompanied by release of a massive amount of energy, radioactive products and hydrogen into the atmosphere of the NPP containment. A valid estimation of consequences of such accidents can only be carried out through the use of the integrated codes comprising a description of the basic processes which determine the consequences. A brief description of a coupled aerosol and thermal-hydraulic code to be used for the calculation of the aerosol kinetics within the NPP containment in case of a severe accident is given. The code comprises a KIN aerosol unit integrated into the KUPOL-M thermal-hydraulic code. Some features of aerosol behavior in severe NPP accidents are briefly described.
Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key
NASA Astrophysics Data System (ADS)
Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.
2017-01-01
Similar to other common asymmetric encryption schemes, RSA can be cracked by using a series of mathematical calculations. The private key used to decrypt the message can be computed using the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method to perform distributed computing to calculate RSA's private key. The proposed method uses multiple volunteered mobile devices to contribute during the calculation process. Our objective is to demonstrate how the use of volunteer computing on mobile devices may be a feasible option to reduce the time required to break a weak RSA encryption and observe the behavior and running time of the application on mobile devices.
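The distribution idea can be sketched with naive trial division: the search interval for a factor of the modulus N is split into disjoint ranges, each handed to a volunteer device, and any device that finds a factor recovers p and q and hence the private key. Real attacks on anything beyond toy keys use sieving algorithms; the worker/coordinator structure below is a hypothetical illustration, not the paper's implementation.

import math
from concurrent.futures import ProcessPoolExecutor

def search_range(args):
    # Worker task: look for an odd factor of n in [lo, hi).
    n, lo, hi = args
    d = lo + (lo % 2 == 0)                  # keep candidates odd
    while d < hi:
        if n % d == 0:
            return d
        d += 2
    return None

def distributed_factor(n, workers=4):
    # Coordinator: split the odd candidates up to sqrt(n) across workers.
    if n % 2 == 0:
        return 2
    limit = math.isqrt(n) + 1
    step = max((limit - 3) // workers, 1)
    tasks = [(n, lo, min(lo + step, limit)) for lo in range(3, limit, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for factor in pool.map(search_range, tasks):
            if factor:
                return factor
    return None

# usage: p = distributed_factor(3233); q = 3233 // p   # toy RSA modulus 61 * 53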
PREPping Students for Authentic Science
ERIC Educational Resources Information Center
Dolan, Erin L.; Lally, David J.; Brooks, Eric; Tax, Frans E.
2008-01-01
In this article, the authors describe a large-scale research collaboration, the Partnership for Research and Education in Plants (PREP), which has capitalized on publicly available databases that contain massive amounts of biological information; stock centers that house and distribute inexpensive organisms with different genotypes; and the…
Efficient Access to Massive Amounts of Tape-Resident Data
NASA Astrophysics Data System (ADS)
Yu, David; Lauret, Jérôme
2017-10-01
Randomly restoring files from tapes degrades the read performance primarily due to frequent tape mounts. The high latency and time cost of tape mounts and dismounts are a major issue when accessing massive amounts of data from tape storage. BNL's mass storage system currently holds more than 80 PB of data on tapes, managed by HPSS. To restore files from HPSS, we make use of scheduler software called ERADAT. This scheduler system was originally based on code from Oak Ridge National Lab, developed in the early 2000s. After some major modifications and enhancements, ERADAT now provides advanced HPSS resource management, priority queuing, resource sharing, web-browser visibility of real-time staging activities and advanced real-time statistics and graphs. ERADAT is also integrated with ACSLS and HPSS for near real-time mount statistics and resource control in HPSS. ERADAT is also the interface between HPSS and other applications such as the locally developed Data Carousel, providing fair resource-sharing policies and related capabilities. ERADAT has demonstrated great performance at BNL.
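The core scheduling problem described here (many file requests, few drives, expensive mounts) can be illustrated with a toy batcher that groups pending requests by tape and orders each tape's files by position, so a tape is mounted once per batch rather than once per file. This is only a sketch of the general idea, not ERADAT's actual algorithm, and the request fields are hypothetical.

from collections import defaultdict

def batch_by_tape(requests):
    # Group staging requests so each tape is mounted once per batch.
    # requests: iterable of dicts like
    #   {"path": "/hpss/file1", "tape": "T001", "offset": 12345, "priority": 1}
    # Returns (tape, ordered file paths) batches, highest-priority tapes first.
    by_tape = defaultdict(list)
    for r in requests:
        by_tape[r["tape"]].append(r)

    batches = []
    for tape, reqs in by_tape.items():
        reqs.sort(key=lambda r: r["offset"])           # read in on-tape order
        best = min(r["priority"] for r in reqs)        # 1 = most urgent
        batches.append((best, tape, [r["path"] for r in reqs]))

    batches.sort()                                     # urgent tapes mount first
    return [(tape, paths) for _, tape, paths in batches]

# usage: for tape, paths in batch_by_tape(pending): mount(tape); stage(paths)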
Constraining the physics of carbon crystallization through pulsations of a massive DAV BPM37093
NASA Astrophysics Data System (ADS)
Nitta, Atsuko; Kepler, S. O.; Chené, André-Nicolas; Koester, D.; Provencal, J. L.; Kleinmani, S. J.; Sullivan, D. J.; Chote, Paul; Sefako, Ramotholo; Kanaan, Antonio; Romero, Alejandra; Corti, Mariela; Kilic, Mukremin; Montgomery, M. H.; Winget, D. E.
We are trying to reduce the largest uncertainties in using white dwarf stars as Galactic chronometers by understanding the details of carbon crystallization that currently result in a 1-2 Gyr uncertainty in the ages of the oldest white dwarf stars. We expect the coolest white dwarf stars to have crystallized interiors, but theory also predicts that hotter white dwarf stars, if they are massive enough, will also have some core crystallization. BPM 37093 is the first discovered of only a handful of known massive white dwarf stars that are also pulsating DAV, or ZZ Ceti, variables. Our approach is to use the pulsations to constrain the core composition and amount of crystallization. Here we report our analysis of 4 hours of continuous time-series spectroscopy of BPM 37093 with Gemini South combined with simultaneous time-series photometry from Mt. John (New Zealand), SAAO, PROMPT, and Complejo Astronomico El Leoncito (CASLEO, Argentina).
2015-12-01
pesticide application over farm fields to produce a better crop.2 On 3 August 1921 in a joint effort between the U.S. Army Signal Corps in Dayton, Ohio... pesticide dissemination because of the relatively small amount of product needed to spray for nuisance insects over a vast area. The ULV system is... pesticide per minute. Applications that require massive amounts of liquid herbicide to neutralize cheatgrass and other fire-prone, invasive vegetation on
Place in Perspective: Extracting Online Information about Points of Interest
NASA Astrophysics Data System (ADS)
Alves, Ana O.; Pereira, Francisco C.; Rodrigues, Filipe; Oliveirinha, João
During the last few years, the amount of online descriptive information about places has reached reasonable dimensions for many cities in the world. Since such information is mostly natural-language text, Information Extraction techniques are needed to obtain the meaning of places that underlies these massive amounts of commonsense and user-generated sources. In this article, we show how we automatically label places using Information Extraction techniques applied to online resources such as Wikipedia, Yellow Pages and Yahoo!.
A Mechanical Model of Brownian Motion for One Massive Particle Including Slow Light Particles
NASA Astrophysics Data System (ADS)
Liang, Song
2018-01-01
We provide a connection between Brownian motion and a classical mechanical system. Precisely, we consider a system of one massive particle interacting with an ideal gas, evolved according to non-random mechanical principles, via interaction potentials, without any assumption that the initial velocities of the environmental particles be "fast enough". We prove the convergence of the (position, velocity)-process of the massive particle under a certain scaling limit, such that the mass of the environmental particles converges to 0 while their density and velocities go to infinity, and give the precise expression of the limiting process, a diffusion process.
USDA-ARS?s Scientific Manuscript database
Chromoplasts are unique plastids that accumulate massive amounts of carotenoids. To gain a general and comparative characterization of chromoplast proteins, we performed proteomic analysis of chromoplasts from six carotenoid-rich crops: watermelon, tomato, carrot, orange cauliflower, red papaya, and...
Multi-source and ontology-based retrieval engine for maize mutant phenotypes
USDA-ARS?s Scientific Manuscript database
In the midst of this genomics era, major plant genome databases are collecting massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While basic browsing and sear...
Infants Hierarchically Organize Memory Representations
ERIC Educational Resources Information Center
Rosenberg, Rebecca D.; Feigenson, Lisa
2013-01-01
Throughout development, working memory is subject to capacity limits that severely constrain short-term storage. However, adults can massively expand the total amount of remembered information by grouping items into "chunks". Although infants also have been shown to chunk objects in memory, little is known regarding the limits of this…
Health burden from peat wildfire in North Carolina
In June 2008, a wildfire smoldering through rich peat deposits in the Pocosin Lakes National Wildlife Refuge produced massive amounts of smoke and exposed a largely rural North Carolina area to air pollution in excess of the National Ambient Air Quality Standards. In this talk, w...
Coma, Hyperthermia and Bleeding Associated with Massive LSD Overdose
Klock, John C.; Boerner, Udo; Becker, Charles E.
1974-01-01
Eight patients were seen within 15 minutes of intranasal self-administration of large amounts of pure D-lysergic acid diethylamide (LSD) tartrate powder. Emesis and collapse occurred along with signs of sympathetic overactivity, hyperthermia, coma and respiratory arrest. Mild generalized bleeding occurred in several patients and evidence of platelet dysfunction was present in all. Serum and gastric concentrations of LSD tartrate ranged from 2.1 to 26 nanograms per ml and 1,000 to 7,000 μg per 100 ml, respectively. With supportive care, all patients recovered. Massive LSD overdose in man is life-threatening and produces striking and distinctive manifestations. PMID:4816396
Coma, hyperthermia and bleeding associated with massive LSD overdose. A report of eight cases.
Klock, J C; Boerner, U; Becker, C E
1974-03-01
Eight patients were seen within 15 minutes of intranasal self-administration of large amounts of pure D-lysergic acid diethylamide (LSD) tartrate powder. Emesis and collapse occurred along with signs of sympathetic overactivity, hyperthermia, coma and respiratory arrest. Mild generalized bleeding occurred in several patients and evidence of platelet dysfunction was present in all. Serum and gastric concentrations of LSD tartrate ranged from 2.1 to 26 nanograms per ml and 1,000 to 7,000 μg per 100 ml, respectively. With supportive care, all patients recovered. Massive LSD overdose in man is life-threatening and produces striking and distinctive manifestations.
QCD corrections to massive color-octet vector boson pair production
NASA Astrophysics Data System (ADS)
Freitas, Ayres; Wiegand, Daniel
2017-09-01
This paper describes the calculation of the next-to-leading order (NLO) QCD corrections to massive color-octet vector boson pair production at hadron colliders. As a concrete framework, a two-site coloron model with an internal parity is chosen, which can be regarded as an effective low-energy approximation of Kaluza-Klein gluon physics in universal extra dimensions. The renormalization procedure involves several subtleties, which are discussed in detail. The impact of the NLO corrections is relatively modest, amounting to a reduction of 11-14% in the total cross-section, but they significantly reduce the scale dependence of the LO result.
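For quick reference, the size of the quoted correction can be expressed as a K-factor; this is just arithmetic on the numbers in the abstract, not a value taken from the paper's tables:

K \equiv \sigma_{\mathrm{NLO}}/\sigma_{\mathrm{LO}} \simeq 1 - (0.11\text{--}0.14) \approx 0.86\text{--}0.89,

i.e. the NLO cross-section is roughly 86-89% of the LO one, with the residual scale dependence of the prediction substantially reduced.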
An automated workflow for parallel processing of large multiview SPIM recordings
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-01-01
Summary: Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on a HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
An automated workflow for parallel processing of large multiview SPIM recordings.
Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel
2016-04-01
Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on a HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
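The structure being exploited (consecutive, dependent steps within a time point, but full independence between time points) can be sketched in a few lines of Python. The step functions below are placeholders, not the actual Fiji/BigDataViewer invocations of the published snakemake pipeline; on an HPC cluster the same independence lets a workflow engine submit each time point as a separate job.

from concurrent.futures import ProcessPoolExecutor

# Placeholder per-time-point steps; the real pipeline drives Fiji plugins
# for registration, fusion and deconvolution.
def register(tp):
    return f"registered/tp{tp:03d}.xml"

def fuse(tp, registration):
    return f"fused/tp{tp:03d}.tif"

def process_timepoint(tp):
    # Consecutive, dependent steps for one time point. Different time points
    # are independent, so they can run on separate cores or cluster jobs.
    registration = register(tp)
    return fuse(tp, registration)

if __name__ == "__main__":
    timepoints = range(100)
    with ProcessPoolExecutor(max_workers=16) as pool:
        outputs = list(pool.map(process_timepoint, timepoints))
    print(f"processed {len(outputs)} time points")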
Fast, Massively Parallel Data Processors
NASA Technical Reports Server (NTRS)
Heaton, Robert A.; Blevins, Donald W.; Davis, ED
1994-01-01
Proposed fast, massively parallel data processor contains 8x16 array of processing elements with efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on "X" interconnection grid with external memory via high-capacity input/output bus. This approach to conditional operation nearly doubles speed of various arithmetic operations.
THE PREVALENCE AND IMPACT OF WOLF–RAYET STARS IN EMERGING MASSIVE STAR CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokal, Kimberly R.; Johnson, Kelsey E.; Indebetouw, Rémy
We investigate Wolf–Rayet (WR) stars as a source of feedback contributing to the removal of natal material in the early evolution of massive star clusters. Despite previous work suggesting that massive star clusters clear out their natal material before the massive stars evolve into the WR phase, WR stars have been detected in several emerging massive star clusters. These detections suggest that the timescale for clusters to emerge can be at least as long as the time required to produce WR stars (a few million years), and could also indicate that WR stars may be providing the tipping point in the combined feedback processes that drive a massive star cluster to emerge. We explore the potential overlap between the emerging phase and the WR phase with an observational survey to search for WR stars in emerging massive star clusters. We select candidate emerging massive star clusters from known radio continuum sources with thermal emission and obtain optical spectra with the 4 m Mayall Telescope at Kitt Peak National Observatory and the 6.5 m MMT. We identify 21 sources with significantly detected WR signatures, which we term “emerging WR clusters.” WR features are detected in ∼50% of the radio-selected sample, and thus we find that WR stars are commonly present in currently emerging massive star clusters. The observed extinctions and ages suggest that clusters without WR detections remain embedded for longer periods of time, and may indicate that WR stars can aid, and therefore accelerate, the emergence process.
Massive Volcanic SO2 Oxidation and Sulphate Aerosol Deposition in Cenozoic North America
Volcanic eruptions release a large amount of sulphur dioxide (SO2) into the atmosphere. SO2 is oxidized to sulphate and can subsequently form sulphate aerosol, which can affect the Earth's radiation balance, biologic productivity and high-altitude ozone co...
Generation and Limiters of Rogue Waves
2014-06-01
...wave heights do not grow unlimited. With massive amount of global wave observations available nowadays, wave heights much in excess of 30m have never
Fermilab | Tevatron | Experiments
electrons, muons and charged hadrons followed curved paths through them. The slower or less massive the particles, the greater was the magnet's effect on them, and the more they curved. Scientists therefore used the amount by which a particle's track curved to determine its momentum. This information helped them
Recent Developments in Young-Earth Creationist Geology
ERIC Educational Resources Information Center
Heaton, Timothy H.
2009-01-01
Young-earth creationism has undergone a shift in emphasis toward building of historical models that incorporate Biblical and scientific evidence and the acceptance of scientific conclusions that were formerly rejected. The RATE Group admitted that massive amounts of radioactive decay occurred during earth history but proposed a period of…
Frameworks Coordinate Scientific Data Management
NASA Technical Reports Server (NTRS)
2012-01-01
Jet Propulsion Laboratory computer scientists developed a unique software framework to help NASA manage its massive amounts of science data. Through a partnership with the Apache Software Foundation of Forest Hill, Maryland, the technology is now available as an open-source solution and is in use by cancer researchers and pediatric hospitals.
Communities of Practice and Professional Development
ERIC Educational Resources Information Center
Chalmers, Lex; Keown, Paul
2006-01-01
The Internet has had a transformative effect on many aspects of contemporary living. While there may be a tendency to overstate the impacts of this technology, workplaces and work practices in many societies have been greatly affected by almost instant access to massive amounts of information, delivered through broadening bandwidth. This paper…
A HIERARCHICAL MODELING FRAMEWORK FOR GEOLOGICAL STORAGE OF CARBON DIOXIDE
Carbon Capture and Storage, or CCS, is likely to be an important technology in a carbonconstrained world. CCS will involve subsurface injection of massive amounts of captured CO2, on a scale that has not previously been approached. The unprecedented scale of t...
Social Data Analytics Using Tensors and Sparse Techniques
ERIC Educational Resources Information Center
Zhang, Miao
2014-01-01
The development of internet and mobile technologies is driving an earthshaking social media revolution. They bring the internet world a huge amount of social media content, such as images, videos, comments, etc. Those massive media content and complicate social structures require the analytic expertise to transform those flood of information into…
Social Studies Special Issue: Civic Literacy in a Digital Age
ERIC Educational Resources Information Center
VanFossen, Phillip J.; Berson, Michael J.
2008-01-01
Young people today consume large amounts of information through various media outlets and simultaneously create and distribute their own messages via information and communication technologies and massively multiplayer online gaming. In doing so, these "digital natives" are often exposed to violent, racist, or other deleterious messages.…
Lee, Byung Moo
2017-12-29
Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles for the realization of the massive MIMO system is the overhead of reference signals (RS), because the number of RS is proportional to the number of TX antennas and/or related user equipments (UEs). It has been already reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified determination scheme of the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework to configure wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) by using zero-forcing (ZF) and matched filtering (MF) precoding for the RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of the channel estimation. Second, based on the closed-form approximation, we present an efficient algorithm determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices.
2017-01-01
Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles for the realization of the massive MIMO system is the overhead of reference signals (RS), because the number of RS is proportional to the number of TX antennas and/or related user equipments (UEs). It has been already reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified determination scheme of the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework to configure wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) by using zero-forcing (ZF) and matched filtering (MF) precoding for the RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of the channel estimation. Second, based on the closed-form approximation, we present an efficient algorithm determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices. PMID:29286339
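To make the "inverse function" idea concrete, consider a stylized ZF spectral-efficiency approximation of the textbook form SE ≈ K·log2(1 + τ·(M−K)·ρ/K), where M is the number of TX antennas in a group, K the number of served IoT devices, ρ the per-user SNR and τ ∈ (0,1] a channel-estimation-quality factor. This is an illustrative stand-in, not the paper's derived closed form, but it inverts exactly for M in the same spirit as the algorithm described above.

import math

def antennas_needed(se_target, K, rho, tau=0.9):
    # Smallest antenna count M per group meeting a target sum SE (bit/s/Hz),
    # using the stylized ZF approximation SE = K * log2(1 + tau*(M-K)*rho/K).
    # tau models channel-estimation quality (1.0 = perfect CSI); all values
    # here are illustrative assumptions.
    sinr_needed = 2.0 ** (se_target / K) - 1.0
    M = K + K * sinr_needed / (tau * rho)
    return math.ceil(M)

# usage: antennas per group giving 10 IoT devices 40 bit/s/Hz in total at 10 dB SNR
# antennas_needed(40.0, K=10, rho=10 ** (10 / 10), tau=0.9)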
NASA Technical Reports Server (NTRS)
Berard, Peter R.
1993-01-01
Researchers in the Molecular Sciences Research Center (MSRC) of Pacific Northwest Laboratory (PNL) currently generate massive amounts of scientific data. The amount of data that will need to be managed by the turn of the century is expected to increase significantly. Automated tools that support the management, maintenance, and sharing of this data are minimal. Researchers typically manage their own data by physically moving datasets to and from long term storage devices and recording a dataset's historical information in a laboratory notebook. Even though it is not the most efficient use of resources, researchers have tolerated the process. The solution to this problem will evolve over the next three years in three phases. PNL plans to add sophistication to existing multilevel file system (MLFS) software by integrating it with an object database management system (ODBMS). The first phase in the evolution is currently underway. A prototype system of limited scale is being used to gather information that will feed into the next two phases. This paper describes the prototype system, identifies the successes and problems/complications experienced to date, and outlines PNL's long term goals and objectives in providing a permanent solution.
Neural Parallel Engine: A toolbox for massively parallel neural signal processing.
Tam, Wing-Kin; Yang, Zhi
2018-05-01
Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
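As a point of reference for what such routines do, a minimal CPU/numpy analogue of threshold-based spike detection with a neighbour-comparison peak test (the kind of per-sample, data-parallel operation a GPU toolbox accelerates) is sketched below. This is not NPE's implementation; the robust noise estimate and refractory handling are common conventions, used here as assumptions.

import numpy as np

def detect_spikes(x, fs, thresh_sd=4.0, refractory_ms=1.0):
    # Threshold-crossing spike detection on one channel.
    # Threshold is a multiple of the robust noise estimate (median/0.6745);
    # a sample is a spike peak if it exceeds threshold and is a local maximum.
    x = np.asarray(x, dtype=float)
    noise = np.median(np.abs(x)) / 0.6745          # robust noise SD estimate
    thr = thresh_sd * noise

    above = np.abs(x) > thr
    peak = np.zeros_like(above)                    # local-extremum test, all samples at once
    peak[1:-1] = above[1:-1] & (np.abs(x[1:-1]) >= np.abs(x[:-2])) \
                             & (np.abs(x[1:-1]) >= np.abs(x[2:]))
    idx = np.flatnonzero(peak)

    # enforce a refractory period so one spike is not counted twice
    keep, last = [], -np.inf
    gap = int(refractory_ms * 1e-3 * fs)
    for i in idx:
        if i - last >= gap:
            keep.append(i)
            last = i
    return np.array(keep)

# usage: spike_indices = detect_spikes(channel_trace, fs=30000)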
Petermann Glacier, North Greenland: massive calving in 2010 and the past half century
NASA Astrophysics Data System (ADS)
Johannessen, O. M.; Babiker, M.; Miles, M. W.
2011-01-01
Greenland's marine-terminating glaciers drain large amounts of solid ice through calving of icebergs, as well as melting of floating glacial ice. Petermann Glacier, North Greenland, has the Northern Hemisphere's longest floating ice shelf. A massive (~270 km2) calving event was observed from satellite sensors in August 2010. In order to put this event in perspective, we perform a comprehensive retrospective data analysis of Petermann Glacier calving-front variability spanning half a century. We establish that there have been at least four massive (100+ km2) calving events over the past 50 years: (1) 1959-1961 (~153 km2), (2) 1991 (~168 km2), (3) 2001 (~71 km2) and (4) 2010 (~270 km2), as well as ~31 km2 calved in 2008. The terminus position in 2010 has retreated ~15 km beyond the envelope of previous observations. Whether the massive calving in 2010 represents natural episodic variability or a response to global and/or ocean warming in the fjord remains speculative, although this event supports the contention that the ice shelf recently has become vulnerable due to extensive fracturing and channelized basal melting.
A Study of Cloud Radiative Forcing and Feedback
NASA Technical Reports Server (NTRS)
Ramanathan, Veerabhadran
2000-01-01
The main objective of the grant proposal was to participate in the CERES (Cloud and Earth's Radiant Energy System) Satellite experiment and perform interdisciplinary investigation of NASA's Earth Observing System (EOS). During the grant period, massive amounts of scientific data from diverse platforms have been accessed, processed and archived for continuing use; several software packages have been developed for integration of different data streams for performing scientific evaluation; extensive validation studies planned have been completed culminating in the development of important algorithms that are being used presently in the operational production of data from the CERES. Contributions to the inter-disciplinary science investigations have been significantly more than originally envisioned. The results of these studies have appeared in several refereed journals and conference proceedings. They are listed at the end of this report.
Building micro-soccer-balls with evaporating colloidal fakir drops
NASA Astrophysics Data System (ADS)
Gelderblom, Hanneke; Marín, Álvaro G.; Susarrey-Arce, Arturo; van Housselt, Arie; Lefferts, Leon; Gardeniers, Han; Lohse, Detlef; Snoeijer, Jacco H.
2013-11-01
Drop evaporation can be used to self-assemble particles into three-dimensional microstructures on a scale where direct manipulation is impossible. We present a unique method to create highly ordered colloidal microstructures in which we can control the amount of particles and their packing fraction. To this end, we evaporate colloidal dispersion drops from a special type of superhydrophobic microstructured surface, on which the drop remains in the Cassie-Baxter state during the entire evaporative process. The remainder of the drop consists of a massive spherical cluster of the microspheres, with diameters ranging from a few tens up to several hundreds of microns. We present scaling arguments to show how the final particle packing fraction of these balls depends on the drop evaporation dynamics, particle size, and number of particles in the system.
I/O efficient algorithms and applications in geographic information systems
NASA Astrophysics Data System (ADS)
Danner, Andrew
Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.
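The flavour of the I/O-efficient approach for the gridding application can be sketched in two passes: stream the point cloud once and distribute points into on-disk tiles small enough to grid in memory, then process tiles independently. This is a simplified illustration of the external-memory strategy, not the thesis's actual algorithms, and the gridding step is reduced to a per-cell mean.

import itertools
import os
from collections import defaultdict
import numpy as np

def tile_points(point_file, tile_dir, tile_size=1000.0, chunk_lines=1_000_000):
    # Pass 1: stream the (x y z) point file in large chunks and append each
    # point to the on-disk bucket of its tile; memory stays bounded and writes
    # happen in big sequential batches rather than one seek per point.
    os.makedirs(tile_dir, exist_ok=True)
    with open(point_file) as f:
        while True:
            lines = list(itertools.islice(f, chunk_lines))
            if not lines:
                break
            buckets = defaultdict(list)
            for line in lines:
                x, y, _ = line.split()
                key = (int(float(x) // tile_size), int(float(y) // tile_size))
                buckets[key].append(line)
            for (tx, ty), rows in buckets.items():
                with open(os.path.join(tile_dir, f"{tx}_{ty}.xyz"), "a") as out:
                    out.writelines(rows)

def grid_tile(tile_path, cell=1.0):
    # Pass 2: each tile now fits in memory; mean elevation per grid cell is
    # used here as a stand-in for a real interpolation method.
    pts = np.atleast_2d(np.loadtxt(tile_path))
    ix = ((pts[:, 0] - pts[:, 0].min()) // cell).astype(int)
    iy = ((pts[:, 1] - pts[:, 1].min()) // cell).astype(int)
    total = np.zeros((ix.max() + 1, iy.max() + 1))
    count = np.zeros_like(total)
    np.add.at(total, (ix, iy), pts[:, 2])
    np.add.at(count, (ix, iy), 1)
    with np.errstate(invalid="ignore"):
        return total / count        # NaN where a cell received no points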
NASA Astrophysics Data System (ADS)
Jauzac, Mathilde; Harvey, David; Massey, Richard
2018-04-01
We assess how much unused strong lensing information is available in the deep Hubble Space Telescope imaging and VLT/MUSE spectroscopy of the Frontier Field clusters. As a pilot study, we analyse galaxy cluster MACS J0416.1-2403 (z=0.397, M(R < 200 kpc)=1.6×1014M⊙), which has 141 multiple images with spectroscopic redshifts. We find that many additional parameters in a cluster mass model can be constrained, and that adding even small amounts of extra freedom to a model can dramatically improve its figures of merit. We use this information to constrain the distribution of dark matter around cluster member galaxies, simultaneously with the cluster's large-scale mass distribution. We find tentative evidence that some galaxies' dark matter has surprisingly similar ellipticity to their stars (unlike in the field, where it is more spherical), but that its orientation is often misaligned. When non-coincident dark matter and stellar halos are allowed, the model improves by 35%. This technique may provide a new way to investigate the processes and timescales on which dark matter is stripped from galaxies as they fall into a massive cluster. Our preliminary conclusions will be made more robust by analysing the remaining five Frontier Field clusters.
NASA Astrophysics Data System (ADS)
Jauzac, Mathilde; Harvey, David; Massey, Richard
2018-07-01
We assess how much unused strong lensing information is available in the deep Hubble Space Telescope imaging and Very Large Telescope/Multi Unit Spectroscopic Explorer spectroscopy of the Frontier Field clusters. As a pilot study, we analyse galaxy cluster MACS J0416.1-2403 (z = 0.397, M(R < 200 kpc) = 1.6 × 1014 M⊙), which has 141 multiple images with spectroscopic redshifts. We find that many additional parameters in a cluster mass model can be constrained, and that adding even small amounts of extra freedom to a model can dramatically improve its figures of merit. We use this information to constrain the distribution of dark matter around cluster member galaxies, simultaneously with the cluster's large-scale mass distribution. We find tentative evidence that some galaxies' dark matter has surprisingly similar ellipticity to their stars (unlike in the field, where it is more spherical), but that its orientation is often misaligned. When non-coincident dark matter and stellar haloes are allowed, the model improves by 35 per cent. This technique may provide a new way to investigate the processes and time-scales on which dark matter is stripped from galaxies as they fall into a massive cluster. Our preliminary conclusions will be made more robust by analysing the remaining five Frontier Field clusters.
Argo_CUDA: Exhaustive GPU based approach for motif discovery in large DNA datasets.
Vishnevsky, Oleg V; Bocharnikov, Andrey V; Kolchanov, Nikolay A
2018-02-01
The development of chromatin immunoprecipitation sequencing (ChIP-seq) technology has revolutionized the genetic analysis of the basic mechanisms underlying transcription regulation and led to accumulation of information about a huge amount of DNA sequences. There are many web services currently available for de novo motif discovery in datasets containing information about DNA/protein binding. An enormous motif diversity makes their finding challenging. In order to avoid the difficulties, researchers use different stochastic approaches. Unfortunately, the efficiency of the motif discovery programs dramatically declines with the query set size increase. This leads to the fact that only a fraction of top "peak" ChIP-Seq segments can be analyzed or the area of analysis should be narrowed. Thus, the motif discovery in massive datasets remains a challenging issue. Argo_CUDA (Compute Unified Device Architecture) is a web service designed to process massive DNA data. It is a program for the detection of degenerate oligonucleotide motifs of fixed length written in the 15-letter IUPAC code. Argo_CUDA is a fully exhaustive approach based on high-performance GPU technologies. Compared with the existing motif discovery web services, Argo_CUDA shows good prediction quality on simulated sets. The analysis of ChIP-Seq sequences revealed the motifs which correspond to known transcription factor binding sites.
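For orientation, matching a degenerate 15-letter IUPAC motif against sequences is itself straightforward; what the GPU approach parallelizes is doing this exhaustively for a huge space of candidate motifs over massive ChIP-Seq sets and ranking them against a background. A small pure-Python sketch of the matching/counting step (not the web service's code; the example motif is hypothetical):

# IUPAC degenerate nucleotide codes
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def count_motif(motif, sequences):
    # Count sequences containing at least one occurrence of a fixed-length
    # IUPAC motif (e.g. "TGASTCANNNNNNNN", 15 letters).
    allowed = [set(IUPAC[c]) for c in motif]
    k = len(motif)
    hits = 0
    for seq in sequences:
        seq = seq.upper()
        if any(all(seq[i + j] in allowed[j] for j in range(k))
               for i in range(len(seq) - k + 1)):
            hits += 1
    return hits

# usage: count_motif("TGASTCANNNNNNNN", chipseq_segments)
# an exhaustive search enumerates many such motifs and ranks them by
# over-representation against a background set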
Eta Carinae in the Context of the Most Massive Stars
NASA Technical Reports Server (NTRS)
Gull, Theodore R.; Damineli, Augusto
2009-01-01
Eta Car, with its historical outbursts, visible ejecta and massive, variable winds, continues to challenge both observers and modelers. In just the past five years over 100 papers have been published on this fascinating object. We now know it to be a massive binary system with a 5.54-year period. In January 2009, Eta Car underwent one of its periodic low-states, associated with periastron passage of the two massive stars. This event was monitored by an intensive multi-wavelength campaign ranging from γ-rays to radio. A large amount of data was collected to test a number of evolving models including 3-D models of the massive interacting winds. August 2009 was an excellent time for observers and theorists to come together and review the accumulated studies, as have occurred in four meetings since 1998 devoted to Eta Car. Indeed, Eta Car behaved both predictably and unpredictably during this most recent periastron, spurring timely discussions. Coincidently, WR140 also passed through periastron in early 2009. It, too, is an intensively studied massive interacting binary. Comparison of its properties, as well as the properties of other massive stars, with those of Eta Car is very instructive. These well-known examples of evolved massive binary systems provide many clues as to the fate of the most massive stars. What are the effects of the interacting winds, of individual stellar rotation, and of the circumstellar material on what we see as hypernovae/supernovae? We hope to learn. Topics discussed in this 1.5 day Joint Discussion were: Eta Car: the 2009.0 event: monitoring campaigns in X-rays, optical, radio, interferometry; WR140 and HD5980: similarities and differences to Eta Car; LBVs and Eta Carinae: What is the relationship?; massive binary systems, wind interactions and 3-D modeling; shapes of the Homunculus & Little Homunculus: what do we learn about mass ejection?; massive stars: the connection to supernovae, hypernovae and gamma ray bursters; and where do we go from here? (future directions). The Science Organizing Committee: Co-chairs: Augusto Damineli (Brazil) & Theodore R. Gull (USA). Members: D. John Hillier (USA), Gloria Koenigsberger (Mexico), Georges Meynet (Switzerland), Nidia Morrell (Chile), Atsuo T. Okazaki (Japan), Stanley P. Owocki (USA), Andy M.T. Pollock (Spain), Nathan Smith (USA), Christiaan L. Sterken (Belgium), Nicole St Louis (Canada), Karel A. van der Hucht (Netherlands), Roberto Viotti (Italy) and Gerd Weigelt (Germany)
A 15.65-solar-mass black hole in an eclipsing binary in the nearby spiral galaxy M 33.
Orosz, Jerome A; McClintock, Jeffrey E; Narayan, Ramesh; Bailyn, Charles D; Hartman, Joel D; Macri, Lucas; Liu, Jiefeng; Pietsch, Wolfgang; Remillard, Ronald A; Shporer, Avi; Mazeh, Tsevi
2007-10-18
Stellar-mass black holes are found in X-ray-emitting binary systems, where their mass can be determined from the dynamics of their companion stars. Models of stellar evolution have difficulty producing black holes in close binaries with masses more than ten times that of the Sun (>10 solar masses; ref. 4), which is consistent with the fact that the most massive stellar black holes known so far all have masses within one standard deviation of 10 solar masses. Here we report a mass of (15.65 +/- 1.45) solar masses for the black hole in the recently discovered system M 33 X-7, which is located in the nearby galaxy Messier 33 (M 33) and is the only known black hole that is in an eclipsing binary. To produce such a massive black hole, the progenitor star must have retained much of its outer envelope until after helium fusion in the core was completed. On the other hand, in order for the black hole to be in its present 3.45-day orbit about its (70.0 +/- 6.9) solar-mass companion, there must have been a 'common envelope' phase of evolution in which a significant amount of mass was lost from the system. We find that the common envelope phase could not have occurred in M 33 X-7 unless the amount of mass lost from the progenitor during its evolution was an order of magnitude less than what is usually assumed in evolutionary models of massive stars.
On the Formation of Massive Stars
NASA Technical Reports Server (NTRS)
Yorke, Harold W.; Sonnhalter, Cordula
2002-01-01
We calculate numerically the collapse of slowly rotating, nonmagnetic, massive molecular clumps of masses 30, 60, and 120 solar masses, which conceivably could lead to the formation of massive stars. Because radiative acceleration on dust grains plays a critical role in the clump's dynamical evolution, we have improved the module for continuum radiation transfer in an existing two-dimensional (axial symmetry assumed) radiation hydrodynamic code. In particular, rather than using "gray" dust opacities and "gray" radiation transfer, we calculate the dust's wavelength-dependent absorption and emission simultaneously with the radiation density at each wavelength and the equilibrium temperatures of three grain components: amorphous carbon particles, silicates, and "dirty ice"-coated silicates. Because our simulations cannot spatially resolve the innermost regions of the molecular clump, however, we cannot distinguish between the formation of a dense central cluster or a single massive object. Furthermore, we cannot exclude significant mass loss from the central object(s) that may interact with the inflow into the central grid cell. Thus, with our basic assumption that all material in the innermost grid cell accretes onto a single object, we are able to provide only an upper limit to the mass of stars that could possibly be formed. We introduce a semianalytical scheme for augmenting existing evolutionary tracks of pre-main-sequence protostars by including the effects of accretion. By considering an open outermost boundary, an arbitrary amount of material could, in principle, be accreted onto this central star. However, for the three cases considered (30, 60, and 120 solar masses originally within the computational grid), radiation acceleration limited the final masses to 31.6, 33.6, and 42.9 solar masses, respectively, for wavelength-dependent radiation transfer and to 19.1, 20.1, and 22.9 solar masses for the corresponding simulations with gray radiation transfer. Our calculations demonstrate that massive stars can in principle be formed via accretion through a disk. The accretion rate onto the central source increases rapidly after one initial free-fall time and decreases monotonically afterward. By enhancing the nonisotropic character of the radiation field, the accretion disk reduces the effects of radiative acceleration in the radial direction - a process we call the "flashlight effect." The flashlight effect is further amplified in our case by including the effects of frequency-dependent radiation transfer. We conclude with the warning that a careful treatment of radiation transfer is a mandatory requirement for realistic simulations of the formation of massive stars.
World-wide amateur observations
NASA Astrophysics Data System (ADS)
Eversberg, T.; Aldoretta, E. J.; Knapen, J. H.; Moffat, A. F. J.; Morel, T.; Ramiaramanantsoa, T.; Rauw, G.; Richardson, N. D.; St-Louis, N.; Teodoro, M.
For some years now, spectroscopic measurements of massive stars in the amateur domain have been fulfilling professional requirements. Various groups in the northern and southern hemispheres have been established, running successful professional-amateur (ProAm) collaborative campaigns, e.g., on WR, O and B type stars. Today high quality data (echelle and long-slit) are regularly delivered and corresponding results published. Night-to-night long-term observations over months to years open a new opportunity for massive-star research. We introduce recent and ongoing sample campaigns (e.g. ɛ Aur, WR 134, ζ Pup), show respective results and highlight the vast amount of data collected in various databases. Ultimately it is in the time-dependent domain where amateurs can shine most.
Before Industrialization: A Rural Social System Base Study.
ERIC Educational Resources Information Center
Summers, Gene F.; And Others
A recent trend in American economic life has been the location of industrial complexes in traditionally rural areas. When this occurs, there are often accompanying rapid and sometimes traumatic changes in the rural community. These changes, in part, result from investment of new and massive amounts of capital, new employment opportunities,…
Overcoming Challenges of the Technological Age by Teaching Information Literacy Skills
ERIC Educational Resources Information Center
Burke, Melynda
2010-01-01
The technological age has forever altered every aspect of life and work. Technology has changed how people locate and view information. However, the transition from print to electronic formats has created numerous challenges for individuals to overcome. These challenges include coping with the massive amounts of information bombarding people and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mercier, C.W.
The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.
ERIC Educational Resources Information Center
Lu, Hsin-Min
2010-01-01
Deep penetration of personal computers, data communication networks, and the Internet has created a massive platform for data collection, dissemination, storage, and retrieval. Large amounts of textual data are now available at a very low cost. Valuable information, such as consumer preferences, new product developments, trends, and opportunities,…
Alginate-based polysaccharide beads for cationic contaminant sorption from water
Mei Li; Thomas Elder; Gisela Buschle-Diller
2016-01-01
Massive amounts of agricultural and industrial water worldwide are polluted by different types of contaminants that harm the environment and impact human health. Removing the contaminants from effluents by adsorbent materials made from abundant, inexpensive polysaccharides is a feasible approach to deal with this problem. In this research, alginate beads combined with...
The Northern outskirts of Pavlodar were contaminated with mercury as a result of activity at the former PO "Khimprom" chemical plant. The plant produced chlorine and alkali from the 1970s into the 1990s using the electrolytic amalgam method entailing the use of massive amounts o...
Preliminary Validation of Composite Material Constitutive Characterization
John G. Michopoulos; Athanasios Iliopoulos; John C. Hermanson; Adrian C. Orifici; Rodney S. Thomson
2012-01-01
This paper describes the preliminary results of an effort to validate a methodology developed for composite material constitutive characterization. This methodology involves using massive amounts of data produced from multiaxially tested coupons via a 6-DoF robotic system called NRL66.3, developed at the Naval Research Laboratory. The testing is followed by...
ERIC Educational Resources Information Center
Palliser, Janna
2010-01-01
Bottled water is ubiquitous, taken for granted, and seemingly benign. Americans are consuming bottled water in massive amounts and spending a lot of money: In 2007, Americans spent $11.7 billion on 8.8 billion gallons of bottled water (Gashler 2008). That same year, two million plastic water bottles were used in the United States every five…
ERIC Educational Resources Information Center
Shaughnessy, Michael F.
While many students have found SQ3R (Survey, Question, Read, Recite, Review) and PQ4R (Preview, Question, Read, Reflect, Recite, Review) systems to be helpful, developmental/remedial students may need more assistance than the average freshman. Students who need more help to deal with the massive amounts of reading that need to be done in…
Listen, Listen, Listen and Listen: Building a Comprehension Corpus and Making It Comprehensible
ERIC Educational Resources Information Center
Mordaunt, Owen G.; Olson, Daniel W.
2010-01-01
Listening comprehension input is necessary for language learning and acculturation. One approach to developing listening comprehension skills is through exposure to massive amounts of naturally occurring spoken language input. But exposure to this input is not enough; learners also need to make the comprehension corpus meaningful to their learning…
Exploring the Integration of Data Mining and Data Visualization
ERIC Educational Resources Information Center
Zhang, Yi
2011-01-01
Due to the rapid advances in computing and sensing technologies, enormous amounts of data are being generated every day in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets to discover hidden patterns. For both data mining and visualization to be…
Especial Skills: Their Emergence with Massive Amounts of Practice
ERIC Educational Resources Information Center
Keetch, Katherine M.; Schmidt, Richard A.; Lee, Timothy D.; Young, Douglas E.
2005-01-01
Differing viewpoints concerning the specificity and generality of motor skill representations in memory were compared by contrasting versions of a skill having either extensive or minimal specific practice. In Experiments 1 and 2, skilled basketball players more accurately performed set shots at the foul line than would be predicted on the basis…
2011-07-01
given to evidence-based medicine in the 20th century has not only allowed improved dissemination of information to civilian providers but has also...limiting the amount of crystalloid used to resuscitate patients by 61%. This is further confirmation that evidence-based medicine changes in practice are at
Specificity vs. Generalizability: Emergence of Especial Skills in Classical Archery
Czyż, Stanisław H.; Moss, Sarah J.
2016-01-01
There is evidence that the recall schema becomes more refined after constant practice. It is also believed that massive amounts of constant practice eventually lead to the emergence of especial skills, i.e., skills that have an advantage in performance over other actions from within the same class of actions. This advantage in performance was noticed when one-criterion practice, e.g., basketball free throws, was compared to non-practiced variations of the skill. However, there is no evidence whether multi-criterion massive amounts of practice would give an advantage to the trained variations of the skill over non-trained, i.e., whether such practice would eventually lead to the development of (multi)-especial skills. The purpose of this study was to determine whether a massive amount of practice involving four criterion variations of the skill would give an advantage in performance to the criterion variations over the class of actions. In two experiments, we analyzed data from female (n = 8) and male classical archers (n = 10), who were required to shoot 30 shots from four accustomed distances, i.e., males at 30, 50, 70, and 90 m and females at 30, 50, 60, and 70 m. The shooting accuracy for the untrained distances (16 distances in men and 14 in women) was used to compile a regression line for distance over shooting accuracy. Regression-determined (expected) values were then compared to the shooting accuracy of the trained distances. Data revealed no significant differences between real and expected results at trained distances, except for the 70 m shooting distance in men. The F-test for lack of fit showed that the regression computed for trained and non-trained shooting distances was linear. It can be concluded that especial skills emerge only after very specific practice, i.e., constant practice limited to only one variation of the skill. PMID:27547196
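As a rough illustration of the analysis described above, the following sketch fits a regression of accuracy on distance using only the untrained distances and then compares observed accuracy at the trained distances with the regression prediction. All numbers are invented for illustration and do not come from the study.

```python
# Sketch of the especial-skills analysis: fit accuracy ~ distance on the
# untrained distances, then compare predicted vs. observed accuracy at the
# trained (criterion) distances. All values are invented for illustration.
import numpy as np

# Untrained distances (m) and mean shooting accuracy (arbitrary score).
untrained_d = np.array([35, 40, 45, 55, 60, 65, 75, 80, 85])
untrained_acc = np.array([9.0, 8.7, 8.4, 7.8, 7.5, 7.2, 6.5, 6.2, 5.9])

# Trained (criterion) distances and observed accuracy.
trained_d = np.array([30, 50, 70, 90])
trained_acc = np.array([9.4, 8.1, 7.1, 5.3])

# Least-squares line fitted to the untrained distances only.
slope, intercept = np.polyfit(untrained_d, untrained_acc, 1)
expected = slope * trained_d + intercept

# An especial skill would show observed accuracy clearly above the
# regression prediction at a trained distance.
for d, obs, exp in zip(trained_d, trained_acc, expected):
    print(f"{int(d):2d} m: observed {obs:.2f}, expected {exp:.2f}, diff {obs - exp:+.2f}")
```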
The Rb problem in massive AGB stars.
NASA Astrophysics Data System (ADS)
Pérez-Mesa, V.; García-Hernández, D. A.; Zamora, O.; Plez, B.; Manchado, A.; Karakas, A. I.; Lugaro, M.
2017-03-01
The asymptotic giant branch (AGB) is formed by low- and intermediate-mass stars (0.8 M_{⊙} < M < 8 M_{⊙}) in their last nuclear-burning phase, when they develop thermal pulses (TP) and suffer extreme mass loss. AGB stars are the main contributor to the enrichment of the interstellar medium (ISM) and thus to the chemical evolution of galaxies. In particular, the more massive AGB stars (M > 4 M_{⊙}) are expected to produce light (e.g., Li, N) and heavy neutron-rich s-process elements (such as Rb, Zr, Ba, Y, etc.), which are not formed in lower mass AGB stars and Supernova explosions. Classical chemical analyses using hydrostatic atmospheres revealed strong Rb overabundances and high [Rb/Zr] ratios in massive AGB stars of our Galaxy and the Magellanic Clouds (MC), confirming for the first time that the ^{22}Ne neutron source dominates the production of s-process elements in these stars. The extremely high Rb abundances and [Rb/Zr] ratios observed in the most massive stars (especially in the low-metallicity MC stars) uncovered a Rb problem; such extreme Rb and [Rb/Zr] values are not predicted by the s-process AGB models, suggesting fundamental problems in our present understanding of their atmospheres. We present more realistic dynamical model atmospheres that consider a gaseous circumstellar envelope with a radial wind and we re-derive the Rb (and Zr) abundances in massive Galactic AGB stars. The new Rb abundances and [Rb/Zr] ratios derived with these dynamical models significantly resolve the problem of the mismatch between the observations and the theoretical predictions of the more massive AGB stars.
VisualUrText: A Text Analytics Tool for Unstructured Textual Data
NASA Astrophysics Data System (ADS)
Zainol, Zuraini; Jaymes, Mohd T. H.; Nohuddin, Puteri N. E.
2018-05-01
The growing amount of unstructured text over the Internet is tremendous. Text repositories come from Web 2.0, business intelligence and social networking applications. It is also believed that 80-90% of future growth data is available in the form of unstructured text databases that may potentially contain interesting patterns and trends. Text Mining is a well known technique for discovering interesting patterns and trends, which are non-trivial knowledge, from massive unstructured text data. Text Mining covers multidisciplinary fields involving information retrieval (IR), text analysis, natural language processing (NLP), data mining, machine learning, statistics and computational linguistics. This paper discusses the development of a text analytics tool that is proficient in extracting, processing, analyzing the unstructured text data and visualizing cleaned text data into multiple forms such as Document Term Matrix (DTM), Frequency Graph, Network Analysis Graph, Word Cloud and Dendrogram. This tool, VisualUrText, is developed to assist students and researchers in extracting interesting patterns and trends in document analyses.
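As a minimal illustration of two of the artifacts mentioned (the Document Term Matrix and a term-frequency ranking), the sketch below uses scikit-learn on toy documents; it is an assumed stand-in, not VisualUrText's actual pipeline.

```python
# Minimal sketch of a Document-Term Matrix and a term-frequency ranking,
# two of the outputs a text-analytics tool like VisualUrText produces.
# The documents are toy examples, not the tool's actual data.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "text mining finds patterns in unstructured text",
    "data mining and machine learning discover trends",
    "unstructured text data keeps growing on the web",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)          # documents x terms sparse matrix
terms = vectorizer.get_feature_names_out()

# Overall term frequencies across the corpus (the basis for a frequency
# graph or word cloud).
freqs = dtm.sum(axis=0).A1
for term, count in sorted(zip(terms, freqs), key=lambda t: -t[1])[:5]:
    print(term, count)
```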
NASA Technical Reports Server (NTRS)
Musielak, Zdzislaw E.
1987-01-01
The radiative damping of acoustic and MHD waves that propagate through white dwarf photospheric layers is studied, and other damping processes that may be important for the propagation of the MHD waves are calculated. The amount of energy remaining after the damping processes have occurred in different types of waves is estimated. The results show that lower acoustic fluxes should be expected in layered DA and homogeneous DB white dwarfs than had previously been estimated. Acoustic emission manifests itself in an enhancement of the quadrupole term, but this term may become comparable to or even lower than the dipole term for cool white dwarfs. Energy carried by the acoustic waves is significantly dissipated in deep photospheric layers, mainly because of radiative damping. An acoustically heated corona cannot exist around DA and DB white dwarfs in the range T(eff) = 10,000-30,000 K and for log g = 7 and 8. However, relatively hot and massive white dwarfs could be exceptions.
Recycling of Ammonia Wastewater During Vanadium Extraction from Shale
NASA Astrophysics Data System (ADS)
Shi, Qihua; Zhang, Yimin; Liu, Tao; Huang, Jing
2018-03-01
In the vanadium metallurgical industry, massive amounts of ammonium hydroxide or ammonium salts are added during the precipitation process to obtain V2O5; therefore, wastewater containing a high level of NH4+ is generated, which poses a serious threat to environmental and hydrologic safety. In this article, a novel process was developed to recycle ammonia wastewater based on a combination of ammonia wastewater leaching and crystallization during vanadium extraction from shale. The effects of the NH4+ concentration, temperature, time and liquid-to-solid ratio on the leaching efficiencies of vanadium, aluminum and potassium were investigated, and the results showed that 93.2% of vanadium, 86.3% of aluminum and 96.8% of potassium can be leached from sulfation-roasted shale. Subsequently, 80.6% of NH4+ was separated from the leaching solution via cooling crystallization. Vanadium was recovered via a combined method of solvent extraction, precipitation and calcination. Therefore, ammonia wastewater was successfully recycled during vanadium extraction from shale.
Soleilhac, Emmanuelle; Nadon, Robert; Lafanechere, Laurence
2010-02-01
Screening compounds with cell-based assays and microscopy image-based analysis is an approach currently favored for drug discovery. Because of its high information yield, the strategy is called high-content screening (HCS). This review covers the application of HCS in drug discovery and also in basic research of potential new pathways that can be targeted for treatment of pathophysiological diseases. HCS faces several challenges, however, including the extraction of pertinent information from the massive amount of data generated from images. Several proposed approaches to HCS data acquisition and analysis are reviewed. Different solutions from the fields of mathematics, bioinformatics and biotechnology are presented. Potential applications and limits of these recent technical developments are also discussed. HCS is a multidisciplinary and multistep approach for understanding the effects of compounds on biological processes at the cellular level. Reliable results depend on the quality of the overall process and require strong interdisciplinary collaborations.
The s-process in massive stars: the Shell C-burning contribution
NASA Astrophysics Data System (ADS)
Pignatari, Marco; Gallino, R.; Baldovin, C.; Wiescher, M.; Herwig, F.; Heger, A.; Heil, M.; Käppeler, F.
In massive stars the s-process (slow neutron capture process) is activated at different temperatures, during He-burning and during convective shell C-burning. At solar metallicity, the neutron capture process in the convective C-shell adds a substantial contribution to the s-process yields made by the previous core He-burning, and the final results carry the signature of both processes. With decreasing metallicity, the contribution of the C-burning shell to the weak s-process rapidly decreases, because of the effect of the primary neutron poisons. On the other hand, the s-process efficiency in the He core also decreases with metallicity.
Imprints of fast-rotating massive stars in the Galactic Bulge.
Chiappini, Cristina; Frischknecht, Urs; Meynet, Georges; Hirschi, Raphael; Barbuy, Beatriz; Pignatari, Marco; Decressin, Thibaut; Maeder, André
2011-04-28
The first stars that formed after the Big Bang were probably massive, and they provided the Universe with the first elements heavier than helium ('metals'), which were incorporated into low-mass stars that have survived to the present. Eight stars in the oldest globular cluster in the Galaxy, NGC 6522, were found to have surface abundances consistent with the gas from which they formed being enriched by massive stars (that is, with higher α-element/Fe and Eu/Fe ratios than those of the Sun). However, the same stars have anomalously high abundances of Ba and La with respect to Fe, which usually arises through nucleosynthesis in low-mass stars (via the slow-neutron-capture process, or s-process). Recent theory suggests that metal-poor fast-rotating massive stars are able to boost the s-process yields by up to four orders of magnitude, which might provide a solution to this contradiction. Here we report a reanalysis of the earlier spectra, which reveals that Y and Sr are also overabundant with respect to Fe, showing a large scatter similar to that observed in extremely metal-poor stars, whereas C abundances are not enhanced. This pattern is best explained as originating in metal-poor fast-rotating massive stars, which might point to a common property of the first stellar generations and even of the 'first stars'.
ERIC Educational Resources Information Center
Granena, Gisela
2013-01-01
Language aptitude has been hypothesized as a factor that can compensate for postcritical period effects in language learning capacity. However, previous research has primarily focused on instructed contexts and rarely on acquisition-rich learning environments where there is a potential for massive amounts of input. In addition, the studies…
ERIC Educational Resources Information Center
Tractenberg, Rochelle E.
2017-01-01
Statistical literacy is essential to an informed citizenry; and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards "big" data: while automated analyses can exploit massive amounts of data, the interpretation--and possibly more importantly, the replication--of results are…
Wildfire events produce massive amounts of smoke and thus play an important role in local and regional air quality as well as public health. It is not well understood however if the impacts of wildfire smoke are influenced by fuel types or combustion conditions. Here we develop...
1988-2000 Long-Range Plan for Technology of the Texas State Board of Education.
ERIC Educational Resources Information Center
Texas State Board of Education, Austin.
This plan plots the course for meeting educational needs in Texas through such technologies as computer-based systems, devices for storage and retrieval of massive amounts of information, telecommunications for audio, video, and information sharing, and other electronic media devised by the year 2000 that can help meet the instructional and…
An extension of the plant ontology project supporting wood anatomy and development research
Federic Lens; Laurel Cooper; Maria Alejandra Gandolfo; Andrew Groover; Pankaj Jaiswal; Barbara Lachenbruch; Rachel Spicer; Margaret E. Staton; Dennis W. Stevenson; Ramona L. Walls; Jill Wegrzyn
2012-01-01
A wealth of information on plant anatomy and morphology is available in the current and historical literature, and molecular biologists are producing massive amounts of transcriptome and genome data that can be used to gain better insights into the development, evolution, ecology, and physiological function of plant anatomical attributes. Integrating anatomical and...
Watson for Genomics: Moving Personalized Medicine Forward.
Rhrissorrakrai, Kahn; Koyama, Takahiko; Parida, Laxmi
2016-08-01
The confluence of genomic technologies and cognitive computing has brought us to the doorstep of widespread usage of personalized medicine. Cognitive systems, such as Watson for Genomics (WG), integrate massive amounts of new omic data with the current body of knowledge to assist physicians in analyzing and acting on patients' genomic profiles. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Wulf, Kathleen M.; And Others
1980-01-01
An analysis of the massive amount of literature pertaining to the improvement of professional instruction in dental education resulted in the formation of a comprehensive model of 10 categories, including Delphi technique; systems approach; agencies; workshops; multi-media, self-instruction; evaluation paradigms, measurement, courses, and…
The Effectiveness of "Knowledge Management System" in Research Mentoring Using Knowledge Engineering
ERIC Educational Resources Information Center
Sriwichai, Puangpet; Meksamoot, Komsak; Chakpitak, Nopasit; Dahal, Keshav; Jengjalean, Anchalee
2014-01-01
Currently, many older universities in Thailand are facing massive lecturer retirement. This leads to the recruitment of large numbers of new Ph.D. graduates who must take immediate responsibility for teaching and research without mentoring by senior staff, as is also the case in new universities. Therefore, this paper aims to propose…
Modeling MOOC Student Behavior with Two-Layer Hidden Markov Models
ERIC Educational Resources Information Center
Geigle, Chase; Zhai, ChengXiang
2017-01-01
Massive open online courses (MOOCs) provide educators with an abundance of data describing how students interact with the platform, but this data is highly underutilized today. This is in part due to the lack of sophisticated tools to provide interpretable and actionable summaries of huge amounts of MOOC activity present in log data. To address…
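To make the modeling idea concrete, the sketch below implements the standard HMM forward algorithm over a toy sequence of student actions; the two-layer structure of the paper is not reproduced, and all states, actions and probabilities are invented for illustration.

```python
# Minimal sketch of the HMM machinery behind modeling MOOC clickstreams:
# the forward algorithm computes the likelihood of an observed sequence of
# student actions. The two-layer model in the paper is more elaborate;
# states, actions and probabilities here are invented for illustration.
import numpy as np

states = ["engaged", "disengaged"]
actions = ["watch_video", "attempt_quiz", "idle"]

pi = np.array([0.7, 0.3])                   # initial state distribution
A = np.array([[0.8, 0.2],                   # state transition matrix
              [0.3, 0.7]])
B = np.array([[0.5, 0.4, 0.1],              # emission probabilities
              [0.1, 0.1, 0.8]])             # rows: states, cols: actions

def sequence_likelihood(obs: list[int]) -> float:
    """Forward algorithm: P(observation sequence | model)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

# Action sequence: watch_video, attempt_quiz, idle, idle
print(sequence_likelihood([0, 1, 2, 2]))
```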
Dual Audio Television; an Experiment in Saturday Morning Broadcast and a Summary Report.
ERIC Educational Resources Information Center
Borton, Terry; And Others
The Philadelphia City Schools engaged in a four-year program to develop and test dual audio television, a way to help children learn more from the massive amounts of time they spend watching commercial television. The format consisted of an instructional radio broadcast which accompanied popular television shows and attempted to clarify and…
NSDL K-12 Science Literacy Maps: A Visual Tool for Learning
ERIC Educational Resources Information Center
Payo, Robert
2008-01-01
Given the massive amount of science and mathematics content available online, libraries working with science teachers can become lost when attempting to select material that is both compelling for the learner and effective in addressing learning goals. Tools that help educators identify the most appropriate resources can be a great time saver.…
Kayacan, Mehmet C; Baykal, Yakup B; Karaaslan, Tamer; Özsoy, Koray; Alaca, İlker; Duman, Burhan; Delikanlı, Yunus E
2018-04-01
This study investigated the design and osseointegration process of transitive porous implants that can be used in humans and in all animals with trabecular and compact bone structure. The aim was to find a way of forming a strong and durable tissue bond at the bone-implant interface. Massive and transitive porous implants were produced on a direct metal laser sintering machine, surgically implanted into the skulls of sheep and kept in place for 12 weeks. At the end of the 12-week period, the massive and porous implants removed from the sheep were investigated by scanning electron microscopy (SEM) to monitor the osseointegration process. In the literature, each study has selected standard sizes for pore diameter in the structures used. However, none of these involved transitional porous structures. In this study, as opposed to standard pores, there were spherical or elliptical pores at the micro level, development channels and an inner region. Bone cells developed in the inner region. Transitive pores grown gradually in accordance with the natural structure of the bone were modeled in the inner region for cells to develop. Due to this structure, a strong and durable tissue bond could be formed at the bone-implant interface. Osseointegration processes of massive vs. porous implants were compared. It was observed that cells were concentrated on the surface of massive implants; therefore, osseointegration between implant and bone was less than that of porous implants. In transitive porous implants, as opposed to massive implants, an outer region was formed at the bone-implant interface that allowed tissue development.
Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S
2012-02-23
We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.
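Flat field correction is a good example of the pixel-independent operations described above. The sketch below shows the standard correction formula with NumPy on synthetic frames; a GPU array library such as CuPy could be substituted array-for-array, but this is an assumed illustration rather than the CAPIDS code.

```python
# Sketch of a pixel-independent correction like the CAPIDS flat-field step:
# every output pixel depends only on the same pixel in the input frames,
# which is why the operation parallelizes so well on a GPU. NumPy is used
# here on synthetic frames; a GPU array library (e.g. CuPy) could stand in.
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard flat-field correction: (raw - dark) / (flat - dark), rescaled."""
    gain = flat - dark
    scale = gain.mean()                      # overall intensity scale of the flat
    gain = np.where(gain > 0, gain, 1.0)     # guard against dead pixels
    return (raw - dark) / gain * scale

rng = np.random.default_rng(0)
raw = rng.uniform(100, 200, size=(1024, 1024))   # stand-in detector frame
dark = np.full((1024, 1024), 10.0)               # stand-in dark frame
flat = rng.uniform(140, 160, size=(1024, 1024))  # stand-in flat frame
print(flat_field_correct(raw, dark, flat).shape)
```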
NASA Astrophysics Data System (ADS)
Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin
2017-08-01
As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of these massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Based on the key-value database, the ND technique makes complete utilization of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive records that were absent, its overall performance, including querying and deriving the data of the ND, is comparable with that of a relational database management system (RDBMS). The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required in next-generation telescopes.
The Dramatic Size and Kinematic Evolution of Massive Early-type Galaxies
NASA Astrophysics Data System (ADS)
Lapi, A.; Pantoni, L.; Zanisi, L.; Shi, J.; Mancuso, C.; Massardi, M.; Shankar, F.; Bressan, A.; Danese, L.
2018-04-01
We aim to provide a holistic view on the typical size and kinematic evolution of massive early-type galaxies (ETGs) that encompasses their high-z star-forming progenitors, their high-z quiescent counterparts, and their configurations in the local Universe. Our investigation covers the main processes playing a relevant role in the cosmic evolution of ETGs. Specifically, their early fast evolution comprises biased collapse of the low angular momentum gaseous baryons located in the inner regions of the host dark matter halo; cooling, fragmentation, and infall of the gas down to the radius set by the centrifugal barrier; further rapid compaction via clump/gas migration toward the galaxy center, where strong heavily dust-enshrouded star formation takes place and most of the stellar mass is accumulated; and ejection of substantial gas amount from the inner regions by feedback processes, which causes a dramatic puffing-up of the stellar component. In the late slow evolution, passive aging of stellar populations and mass additions by dry merger events occur. We describe these processes relying on prescriptions inspired by basic physical arguments and by numerical simulations to derive new analytical estimates of the relevant sizes, timescales, and kinematic properties for individual galaxies along their evolution. Then we obtain quantitative results as a function of galaxy mass and redshift, and compare them to recent observational constraints on half-light size R e , on the ratio v/σ between rotation velocity and velocity dispersion (for gas and stars) and on the specific angular momentum j ⋆ of the stellar component; we find good consistency with the available multiband data in average values and dispersion, both for local ETGs and for their z ∼ 1–2 star-forming and quiescent progenitors. The outcomes of our analysis can provide hints to gauge sub-grid recipes implemented in simulations, to tune numerical experiments focused on specific processes, and to plan future multiband, high-resolution observations on high-redshift star-forming and quiescent galaxies with next-generation facilities.
Andriole, Katherine P; Morin, Richard L; Arenson, Ronald L; Carrino, John A; Erickson, Bradley J; Horii, Steven C; Piraino, David W; Reiner, Bruce I; Seibert, J Anthony; Siegel, Eliot
2004-12-01
The Society for Computer Applications in Radiology (SCAR) Transforming the Radiological Interpretation Process (TRIP) Initiative aims to spearhead research, education, and discovery of innovative solutions to address the problem of information and image data overload. The initiative will foster interdisciplinary research on technological, environmental and human factors to better manage and exploit the massive amounts of data. TRIP will focus on the following basic objectives: improving the efficiency of interpretation of large data sets, improving the timeliness and effectiveness of communication, and decreasing medical errors. The ultimate goal of the initiative is to improve the quality and safety of patient care. Interdisciplinary research into several broad areas will be necessary to make progress in managing the ever-increasing volume of data. The six concepts involved are human perception, image processing and computer-aided detection (CAD), visualization, navigation and usability, databases and integration, and evaluation and validation of methods and performance. The result of this transformation will affect several key processes in radiology, including image interpretation; communication of imaging results; workflow and efficiency within the health care enterprise; diagnostic accuracy and a reduction in medical errors; and, ultimately, the overall quality of care.
BEANS - a software package for distributed Big Data analysis
NASA Astrophysics Data System (ADS)
Hypki, Arkadiusz
2018-07-01
BEANS software is a web-based, easy to install and maintain, new tool to store and analyse in a distributed way a massive amount of data. It provides a clear interface for querying, filtering, aggregating, and plotting data from an arbitrary number of data sets. Its main purpose is to simplify the process of storing, examining, and finding new relations in huge data sets. The software is an answer to a growing need of the astronomical community to have a versatile tool to store, analyse, and compare the complex astrophysical numerical simulations with observations (e.g. simulations of the Galaxy or star clusters with the Gaia archive). However, this software was built in a general form and it is ready to use in any other research field. It can be used as a building block for other open-source software too.
Goddard Conference on Mass Storage Systems and Technologies, Volume 1
NASA Technical Reports Server (NTRS)
Kobler, Ben (Editor); Hariharan, P. C. (Editor)
1993-01-01
Copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in Sep. 1992 are included. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems (data ingestion rates now approach the order of terabytes per day). Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low end helical scan tape drives. Additional topics addressed the evolution of the identifiable unit for processing purposes as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.
Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas
2014-01-01
Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806
Glacier-derived permafrost ground ice, Bylot Island, Nunavut
NASA Astrophysics Data System (ADS)
Coulombe, S.; Fortier, D.; Lacelle, D.; Godin, E.; Veillette, A.
2014-12-01
Massive icy bodies are important components of permafrost geosystems. In situ freezing of water in the ground by ice-segregation processes forms most of these icy bodies. Other hypotheses for the origin of massive ice include the burial of ice (e.g. glacier, snow, lake, river, sea). The analysis of ground-ice characteristics can give numerous clues about the geomorphologic processes and the thermal conditions at the time when permafrost developed. Massive underground ice therefore shows great potential as a natural archive of the earth's past climate. Identifying the origin of massive ice is a challenge for permafrost science since the different types of massive ice remain difficult to distinguish on the sole basis of field observations. There is currently no clear method to accurately assess the origin of massive ice, and identification criteria need to be defined. The present study uses physico-chemical techniques to characterize buried glacier ice observed on Bylot Island, Nunavut. In addition to the analysis of cryostratigraphy, the crystallography of massive-ice cores and high-resolution imagery of their internal structure were obtained using micro-computed tomography techniques. These techniques are well suited for detailed descriptions (shape, size, orientation) of crystals, gas inclusions and sediment inclusions. Oxygen and hydrogen isotope ratios of massive-ice cores were also obtained using the common equilibrium technique. Preliminary results suggest the occurrence of two types of buried massive ice of glacial origin similar to those found on contemporary glaciers: 1) Englacial ice: clear to whitish ice, with large crystals (cm) and abundant gas bubbles at crystal intersections; 2) Basal glacier ice: ice-rich, banded, micro-suspended to suspended cryostructures and ice-rich lenticular to layered cryostructures, with small ice crystals (mm) and a few disseminated gas bubbles. Glacier-derived permafrost contains antegenetic ice, which is ice that predates the aggradation of the permafrost. Remnants of glacier ice represent unique environmental archives and offer the possibility to reconstruct climate anterior to the formation of permafrost.
Surface Operations Systems Improve Airport Efficiency
NASA Technical Reports Server (NTRS)
2009-01-01
With Small Business Innovation Research (SBIR) contracts from Ames Research Center, Mosaic ATM of Leesburg, Virginia created software to analyze surface operations at airports. Surface surveillance systems, which report locations every second for thousands of air and ground vehicles, generate massive amounts of data, making gathering and analyzing this information difficult. Mosaic's Surface Operations Data Analysis and Adaptation (SODAA) tool is an off-line support tool that can analyze how well the airport surface operation is working and can help redesign procedures to improve operations. SODAA helps researchers pinpoint trends and correlations in vast amounts of recorded airport operations data.
Massive stars in the Sagittarius Dwarf Irregular Galaxy
NASA Astrophysics Data System (ADS)
Garcia, Miriam
2018-02-01
Low metallicity massive stars hold the key to interpret numerous processes in the past Universe including re-ionization, starburst galaxies, high-redshift supernovae, and γ-ray bursts. The Sagittarius Dwarf Irregular Galaxy [SagDIG, 12+log(O/H) = 7.37] represents an important landmark in the quest for analogues accessible with 10-m class telescopes. This Letter presents low-resolution spectroscopy executed with the Gran Telescopio Canarias that confirms that SagDIG hosts massive stars. The observations unveiled three OBA-type stars and one red supergiant candidate. Pending confirmation from high-resolution follow-up studies, these could be the most metal-poor massive stars of the Local Group.
Characterizing the Disk of a Recent Massive Collisional Event
NASA Astrophysics Data System (ADS)
Song, Inseok
2015-10-01
Debris disks play a key role in the formation and evolution of planetary systems. On rare occasions, circumstellar material appears as strictly warm infrared excess in regions of expected terrestrial planet formation and so presents an interesting opportunity for the study of terrestrial planetary regions. There are only a few known cases of extreme, warm, dusty disks which lack any colder outer component, including BD+20 307, HD 172555, EF Cha, and HD 23514. We have recently found a new system, TYC 8830-410-1, belonging to this rare group. Warm dust grains are extremely short-lived, and the extraordinary amount of warm dust near these stars can only be plausibly explained by a recent (or ongoing) massive transient event such as the Late Heavy Bombardment (LHB) or planetary collisions. LHB-like events are generally seen in systems with a dominant cold disk; however, warm-dust-only systems show no hint of a massive cold disk. Planetary collisions leave telltale signs of strange mid-IR spectral features such as silica, and we want to fully characterize the spectral shape of the newly found system with SOFIA/FORCAST. With SOFIA/FORCAST, we propose to obtain two narrow band photometric measurements between 6 and 9 microns. These FORCAST photometric measurements will constrain the amount and temperature of the warm disk in the system. There are fewer than a handful of systems with a strong hint of recent planetary collisions. With the firmly constrained warm disk around TYC 8830-410-1, we will publish the discovery in a leading astronomical journal accompanied by a potential press release through SOFIA.
Unsupervised classification of variable stars
NASA Astrophysics Data System (ADS)
Valenzuela, Lucas; Pichara, Karim
2018-03-01
During the past 10 years, a considerable amount of effort has been made to develop algorithms for automatic classification of variable stars. That has been primarily achieved by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a lot of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating insufficient training sets compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an untraditional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific for light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.
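The following toy sketch conveys the query-by-similarity idea: light curves are interpolated onto a common grid and ranked by a simple correlation distance. The paper's actual similarity function and indexing data structure are more elaborate; everything here, including the synthetic light curves, is an assumption for illustration.

```python
# Toy sketch of query-based retrieval of similar light curves: interpolate
# each curve onto a common grid and rank by correlation distance. The paper's
# similarity function and indexing structure are more sophisticated; this
# only illustrates the query-by-similarity idea on synthetic curves.
import numpy as np

def to_common_grid(time, mag, grid):
    """Interpolate a (time, magnitude) light curve onto a fixed grid."""
    t = (time - time.min()) / (time.max() - time.min())   # normalize to [0, 1]
    return np.interp(grid, t, mag)

def correlation_distance(a, b):
    return 1.0 - np.corrcoef(a, b)[0, 1]

def rank_by_similarity(query, catalog, grid_points=64):
    grid = np.linspace(0, 1, grid_points)
    q = to_common_grid(*query, grid)
    dists = [(name, correlation_distance(q, to_common_grid(t, m, grid)))
             for name, (t, m) in catalog.items()]
    return sorted(dists, key=lambda x: x[1])

# Invented light curves: a sinusoid-like variable and a nearly constant star.
t = np.sort(np.random.default_rng(1).uniform(0, 10, 200))
catalog = {
    "var_like": (t, 15 + 0.5 * np.sin(2 * np.pi * t / 3.0)),
    "constant": (t, 15 + 0.01 * np.random.default_rng(2).normal(size=t.size)),
}
query = (t, 15 + 0.5 * np.sin(2 * np.pi * t / 3.0 + 0.1))
print(rank_by_similarity(query, catalog))   # the variable star ranks first
```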
NASA Astrophysics Data System (ADS)
MacDonald, J. H., Jr.; Milliken, S. H.; Zalud, K. M.
2017-12-01
The Jurassic Ingalls ophiolite complex is located in the central Cascades, Washington State. This ophiolite predominantly consists of three variably serpentinized mantle units. Serpentinite occurs as massive replacing peridotite, or as highly sheared fault zones cutting other rocks. Mylonitic serpentinite forms a large-scale mélange in the middle of the ophiolite, and is interpreted as a fracture zone. Whole-rock and mineral geochemistry of the massive serpentinite was done to understand the metasomatic process and identify the possible protoliths of these rocks. Whole-rock major and trace elements of the massive serpentinite are similar to modern peridotites. The majority of samples analyzed are strongly serpentinized, while a few were moderately to weakly altered. Ca, Mg, and Al suggest these rocks formed from serpentinized harzburgite and dunite with minor lherzolite. All samples have positive Eu/Eu*. Serpentinites plot in fields defined by modern abyssal and forearc peridotites. Trace elements suggests the protoliths underwent variable amounts of mantel depletion (5-20%). Serpentine and relic igneous minerals were analyzed by EPMA at the Florida Center for Analytical Electron Microscopy. The serpentine dose not chemically display brucite mixing, has minor substitution of Fe, Ni, and Cr for Mg, and minor Al substitution for Si. Bastites have higher Ni than replaced olivine. Mineral chemistry, high LOI, and X-ray diffraction suggest lizardite is the primary serpentine polymorph, with minor chrysotile also occurring. Relic Al-chromite and Cr-spinel commonly have Cr-magnetite rims. These relic cores have little SiO2 and Fe3+, suggesting the spinels are well preserved. Most spinels plot in overlap fields defined by abyssal and arc peridotite, while two samples plot entirely in arc fields. Relic olivine have Fo90 to Fo92 and plot along the mantle array. Relic pyroxene are primarily enstatite, with lesser high-Ca varieties. Relic minerals plot near fields defined by harzburgite, dunite, and lherzolite from unaltered Ingalls peridotite. The massive serpentinite likely formed by low T (< 300°C), and possibly low pressure, hydrothermal alteration of harzburgite, dunite and lherzolite. The protoliths were variably depleted mantle residues from a possible supra-subduction zone setting.
Supervised Detection of Anomalous Light Curves in Massive Astronomical Catalogs
NASA Astrophysics Data System (ADS)
Nun, Isadora; Pichara, Karim; Protopapas, Pavlos; Kim, Dae-Won
2014-09-01
The development of synoptic sky surveys has led to a massive amount of data for which resources needed for analysis are beyond human capabilities. In order to process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new methodology to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered as an outlier insofar as it has a low joint probability. By leaving out one of the classes on the training set, we perform a validity test and show that when the random forest classifier attempts to classify unknown light curves (the class left out), it votes with an unusual distribution among the classes. This rare voting is detected by the Bayesian network and expressed as a low joint probability. Our method is suitable for exploring massive data sets given that the training process is performed offline. We tested our algorithm on 20 million light curves from the MACHO catalog and generated a list of anomalous candidates. After analysis, we divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration, or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables, and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up in order to perform a deeper analysis.
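A simplified sketch of the voting-based detection idea follows: a random forest is trained on known classes, and objects over which the trees spread their votes thinly are flagged. The max-vote score used here is only a stand-in for the paper's Bayesian-network model of the joint vote distribution, and the features and labels are synthetic.

```python
# Simplified sketch of the voting-based anomaly idea: train a random forest
# on known variability classes, then flag objects for which the forest's
# votes are spread thinly (no class receives a confident share). The paper
# models the full joint vote distribution with a Bayesian network; a simple
# max-vote score is used here instead. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Three synthetic "variability classes" in a 5-dimensional feature space.
X_train = np.vstack([rng.normal(c, 1.0, (200, 5)) for c in (0.0, 4.0, 8.0)])
y_train = np.repeat([0, 1, 2], 200)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

def anomaly_score(x):
    """1 - max vote share: high when the trees cannot agree on any class."""
    votes = forest.predict_proba(x.reshape(1, -1))[0]
    return 1.0 - votes.max()

print(anomaly_score(np.full(5, 0.0)))   # looks like class 0 -> low score
print(anomaly_score(np.full(5, 2.0)))   # between classes -> higher score
```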
Impact of the uncertainty in α-captures on ^{22}Ne on the weak s-process in massive stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, N.; Hirschi, R.; Pignatari, M.
2014-05-02
Massive stars at solar metallicity contribute to the production of heavy elements with atomic masses between A = 60 and A = 90 via the so-called weak s-process (which takes place during core He and shell C burning phases). Furthermore, recent studies have shown that rotation boosts the s-process production in massive stars at low metallicities, with a production that may reach the barium neutron-magic peak. These results are very sensitive to neutron source and neutron poison reaction rates. For the weak s-process, the main neutron source is the reaction ^{22}Ne(α,n)^{25}Mg, which is in competition with ^{22}Ne(α,γ)^{26}Mg. The uncertainty of both rates strongly affects the nucleosynthesis predictions from stellar model calculations. In this study, we investigate the impact of the uncertainty in α-captures on ^{22}Ne on the s-process nucleosynthesis in massive stars both at solar and at very low metallicity. For this purpose, we post-process, with the NuGrid mppnp code, non-rotating and rotating evolutionary models of 25 M_⊙ stars at two different metallicities: Z = Z_⊙ and Z = 10^{-5} Z_⊙, respectively. Our results show that the uncertainties of the ^{22}Ne(α,n)^{25}Mg and ^{22}Ne(α,γ)^{26}Mg rates have a significant impact on the final elemental production, especially for metal-poor rotating models. Besides uncertainties in the neutron source reactions, for fast-rotating massive stars at low metallicity we revisit the impact of the neutron poisoning effect by the reaction chain ^{16}O(n,γ)^{17}O(α,γ)^{21}Ne, in competition with ^{17}O(α,n)^{20}Ne, which recycles the neutrons captured by ^{16}O.
Massive ovarian edema, due to adjacent appendicitis.
Callen, Andrew L; Illangasekare, Tushani; Poder, Liina
2017-04-01
Massive ovarian edema is a benign clinical entity whose imaging findings can mimic an adnexal mass or ovarian torsion. In the setting of acute abdominal pain, identifying massive ovarian edema is key to avoiding potentially fertility-threatening surgery in young women. In addition, it is important to consider other contributing pathology when ovarian edema is secondary to another process. We present the case of a young woman presenting with subacute abdominal pain whose initial workup revealed a markedly enlarged right ovary. Further imaging, diagnostic tests, and eventually diagnostic laparoscopy revealed that the ovarian enlargement was secondary to subacute appendicitis rather than a primary adnexal process. We review the classic ultrasound and MRI findings and pitfalls related to this diagnosis.
Ghaibeh, A Ammar; Kasem, Asem; Ng, Xun Jin; Nair, Hema Latha Krishna; Hirose, Jun; Thiruchelvam, Vinesh
2018-01-01
The analysis of Electronic Health Records (EHRs) is attracting a lot of research attention in the medical informatics domain. Hospitals and medical institutes have started to use data mining techniques to gain new insights from the massive amounts of data that can be made available through EHRs. Researchers in the medical field have often used descriptive statistics and classical statistical methods to prove assumed medical hypotheses. However, discovering new insights from large amounts of data solely based on experts' observations is difficult. Using data mining techniques and visualizations, practitioners can find hidden knowledge, identify interesting patterns, or formulate new hypotheses to be further investigated. This paper describes work in progress on using data mining methods to analyze clinical data of Nasopharyngeal Carcinoma (NPC) cancer patients. NPC is the fifth most common cancer among Malaysians, and the data analyzed in this study were collected from three states in Malaysia (Kuala Lumpur, Sabah and Sarawak) and constitute the largest dataset of its kind to date. This research addresses the issue of cancer recurrence after the completion of radiotherapy and chemotherapy treatment. We describe the procedure, problems, and insights gained during the process.
Direct trust-based security scheme for RREQ flooding attack in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Kumar, Sunil; Dutta, Kamlesh
2017-06-01
The routing algorithms in MANETs exhibit distributed and cooperative behaviour, which makes them an easy target for denial of service (DoS) attacks. The RREQ flooding attack is a flooding-type DoS attack against the Ad hoc On-Demand Distance Vector (AODV) routing protocol, in which the attacker broadcasts massive amounts of bogus Route Request (RREQ) packets to set up routes to non-existent or existent destinations in the network. This paper presents a direct trust-based security scheme to detect and mitigate the impact of the RREQ flooding attack on the network, in which every node evaluates the trust degree of its neighbours by analysing the frequency of RREQ packets they originate over a short period of time. Taking the node's trust degree as input, the proposed scheme is smoothly extended to suppress surplus and bogus RREQ flooding packets at one-hop neighbours during the route discovery process. This scheme distinguishes itself from existing techniques by not directly blocking the service of a normal node whose RREQ rate increases under some unusual conditions. The results obtained from the simulation experiments clearly show the feasibility and effectiveness of the proposed defensive scheme.
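A minimal sketch of the kind of frequency-based trust scoring the abstract describes is given below: each node timestamps the RREQs originated by a neighbour, derives a trust degree from the rate observed in a short window, and drops packets from neighbours whose trust falls below a threshold. The window length, rate limit, and linear trust function are illustrative assumptions, not the parameters of the proposed scheme.

```python
# Minimal sketch of frequency-based trust scoring for RREQ flooding mitigation.
# Parameter values (window length, rate limit, threshold) are illustrative
# assumptions, not the thresholds used in the paper.
import time
from collections import defaultdict, deque

WINDOW_S = 1.0          # observation window (assumed)
RATE_LIMIT = 10         # RREQs per window considered "normal" (assumed)

class TrustTable:
    def __init__(self):
        self.rreq_times = defaultdict(deque)   # neighbour_id -> recent RREQ timestamps

    def record_rreq(self, neighbour_id, now=None):
        now = time.time() if now is None else now
        q = self.rreq_times[neighbour_id]
        q.append(now)
        while q and now - q[0] > WINDOW_S:      # keep only the recent window
            q.popleft()

    def trust_degree(self, neighbour_id):
        """1.0 for a quiet neighbour, decreasing towards 0 as its RREQ rate
        exceeds the limit; the linear form is an assumption."""
        rate = len(self.rreq_times[neighbour_id]) / WINDOW_S
        return max(0.0, min(1.0, RATE_LIMIT / rate)) if rate > 0 else 1.0

    def should_forward(self, neighbour_id, threshold=0.5):
        return self.trust_degree(neighbour_id) >= threshold
```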
[Massive cardiac lipomatosis, an autopsy finding in a patient with sudden death].
Zamarrón-de Lucas, Ester; García-Fernández, Eugenia; Carpio, Carlos; Alcolea, Sergio; Martínez-Abad, Yolanda; Álvarez-Sala, Rodolfo
2016-06-17
Fatty replacement of myocardial cells is a degenerative process that usually affects the right ventricle and is found in 50% of the elderly. The problem arises when this degeneration occurs to a massive degree, making a differential diagnosis with other pathologies necessary. We present the case of a patient who died suddenly and in whom massive cardiac lipomatosis was found at autopsy as the only explanation for the outcome. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
Detection of ^11B/^10B: Part II
NASA Astrophysics Data System (ADS)
Duncan, Douglas
1999-07-01
HST observations {e.g. Duncan et al. 1992; 1997} have led to new theories of how cosmic rays {CRs} rich in CNO near massive stars form the light elements Li, Be, and B {e.g. Ramaty et al. 1996, 1998}. The neutrino process in SN, which has never been experimentally verified, should also produce boron, but only ^11B, yielding a very different isotopic ratio than CR spallation. The boron isotope ratio, ^11B/^10B, can provide a definitive test of both these theories, but its galactic evolution is completely unknown. Our previous GHRS echelle observation of the moderately metal-poor {Fe/H=-1.0} star HD76932 placed a limit on its B isotope ratio, but not a definite value, because possible blending from an unknown spectrum line could not be ruled out {Rebull et al. 1998}. The discovery of a halo star greatly depleted in B {Primas et al. 1998b} provides a wonderful opportunity to make the result definite. By comparing two similar {Fe/H -1.6} stars, which have very different amounts of B, we can rule out or measure any blends. This should give a definite result for ^11B/^10B at metallicity Fe/H -1.6, an epoch when massive star SN should have dominated galactic nucleosynthesis. Furthermore, we can then use our blending knowledge to reanalyze HD76932, getting a definite result for its ^11B/^10B ratio as well.
Cosmological evolution of the nitrogen abundance
NASA Astrophysics Data System (ADS)
Vangioni, Elisabeth; Dvorkin, Irina; Olive, Keith A.; Dubois, Yohan; Molaro, Paolo; Petitjean, Patrick; Silk, Joe; Kimm, Taysun
2018-06-01
The abundance of nitrogen in the interstellar medium is a powerful probe of star formation processes over cosmological time-scales. Since nitrogen can be produced both in massive and intermediate-mass stars with metallicity-dependent yields, its evolution is challenging to model, as evidenced by the differences between theoretical predictions and observations. In this work, we attempt to identify the sources of these discrepancies using a cosmic evolution model. To further complicate matters, there is considerable dispersion in the abundances from observations of damped Lyα absorbers (DLAs) at z ˜ 2-3. We study the evolution of nitrogen with a detailed cosmic chemical evolution model and find good agreement with these observations, including the relative abundances of (N/O) and (N/Si). We find that the principal contribution of nitrogen comes from intermediate-mass stars, with the exception of systems with the lowest N/H, where nitrogen production might be dominated by massive stars. This last result could be strengthened if stellar rotation, which is important at low metallicity, can produce significant amounts of nitrogen. Moreover, these systems likely reside in host galaxies with stellar masses below 10^8.5 M⊙. We also study the origin of the observed dispersion in nitrogen abundances using the cosmological hydrodynamical simulation Horizon-AGN. We conclude that this dispersion can originate from two effects: differences in the masses of the DLA host galaxies, and differences in their position inside the galaxy.
A Cyber-ITS framework for massive traffic data analysis using cyber infrastructure.
Xia, Yingjie; Hu, Jia; Fontaine, Michael D
2013-01-01
Traffic data is commonly collected from widely deployed sensors in urban areas. This brings up a new research topic, data-driven intelligent transportation systems (ITSs), which means integrating heterogeneous traffic data from different kinds of sensors and applying it to ITS applications. This research, taking into consideration the significant increase in the amount of traffic data and the complexity of data analysis, focuses mainly on the challenge of solving data-intensive and computation-intensive problems. As a solution to these problems, this paper proposes a Cyber-ITS framework to perform data analysis on Cyber Infrastructure (CI), which by nature comprises parallel-computing hardware and software systems, in the context of ITS. The techniques of the framework include data representation, domain decomposition, resource allocation, and parallel processing. All these techniques are based on data-driven and application-oriented models and are organized as a component-and-workflow-based model in order to achieve technical interoperability and data reusability. A case study of the Cyber-ITS framework is then presented, based on a traffic state estimation application that uses the fusion of massive Sydney Coordinated Adaptive Traffic System (SCATS) data and GPS data. The results prove that the Cyber-ITS-based implementation can achieve a high accuracy rate of traffic state estimation and provide a significant computational speedup for the data fusion by parallel computing.
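The following sketch illustrates the domain-decomposition idea in the abstract above: the road network is partitioned into segments, each segment's fixed-sensor and GPS observations are fused independently in a worker pool, and the results are merged. The fusion rule (a fixed-weight average) is a placeholder assumption, not the Cyber-ITS estimation algorithm, and all names are invented.

```python
# Sketch of domain decomposition + parallel fusion for traffic state estimation.
# The fusion rule (simple weighted average of sensor and GPS speeds) is a
# placeholder assumption, not the Cyber-ITS algorithm.
from multiprocessing import Pool

def fuse_segment(args):
    segment_id, scats_speed, gps_speed = args
    w = 0.6                                # assumed weight for fixed sensors
    fused = w * scats_speed + (1 - w) * gps_speed
    return segment_id, fused

def estimate_traffic_state(segments, n_workers=8):
    """segments: iterable of (segment_id, scats_speed_kmh, gps_speed_kmh)."""
    with Pool(n_workers) as pool:
        return dict(pool.map(fuse_segment, segments))

# Usage:
# state = estimate_traffic_state([("seg-001", 42.0, 38.5), ("seg-002", 55.0, 57.1)])
```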
Working in a Text Mine; Is Access about to Go down?
ERIC Educational Resources Information Center
Emery, Jill
2008-01-01
The age of networked research and networked data analysis is upon us. "Wired Magazine" proclaims on the cover of their July 2008 issue: "The End of Science. The quest for knowledge used to begin with grand theories. Now it begins with massive amounts of data. Welcome to the Petabyte Age." Computing technology is sufficiently complex at this point…
Stop Programming Robots: How to Prepare Every Student for Success in Any Career
ERIC Educational Resources Information Center
Chester, Eric
2012-01-01
While technology has made communicating "easy," it has done so at the cost of communication that is "meaningful." And for information to be internalized to the point where it is remembered, used and valued, it must be meaningful. In other words, technology makes it easy to disseminate massive amounts of information to the masses, but teaching a…
Teaching Case: Introduction to NoSQL in a Traditional Database Course
ERIC Educational Resources Information Center
Fowler, Brad; Godin, Joy; Geddy, Margaret
2016-01-01
Many organizations are dealing with the increasing demands of big data, so they are turning to NoSQL databases as their preferred system for handling the unique problems of capturing and storing massive amounts of data. Therefore, it is likely that employees in all sizes of organizations will encounter NoSQL databases. Thus, to be more job-ready,…
Critical Pedagogy and the Decolonial Option: Challenges to the Inevitability of Capitalism
ERIC Educational Resources Information Center
Monzó, Lilia D.; McLaren, Peter
2014-01-01
The demise of capitalism was theoretically prophesied by Marx who posited that the world would come to such a state of destruction and human suffering that no amount of coercion or concessions would suffice to stop the massive uprisings that would lead us into a new socialist alternative. Although the downfall of world capitalism may seem…
We're all in this together: decisionmaking to address climate change in a complex world
Jonathan Thompson; Ralph Alig
2009-01-01
Forests significantly influence the global carbon budget: they store massive amounts of carbon in their wood and soil, they sequester atmospheric carbon as they grow, and they emit carbon as a greenhouse gas when harvested or converted to another use. These factors make forest conservation and management important components of most strategies for adapting to and...
2006-06-16
7 Corsair. As the war in Southeast Asia expanded, the massive amounts of ordnance being dropped on Laos, Cambodia, South Vietnam, and North Vietnam...over North Vietnam. Though gravely wounded, one of Foster's main concerns while he lay in the Oriskany's sick bay was the impact on Tom Spitzer
Should You Trust Your Money to a Robot?
Dhar, Vasant
2015-06-01
Financial markets emanate massive amounts of data from which machines can, in principle, learn to invest with minimal initial guidance from humans. I contrast human and machine strengths and weaknesses in making investment decisions. The analysis reveals areas in the investment landscape where machines are already very active and those where machines are likely to make significant inroads in the next few years.
ERIC Educational Resources Information Center
Molnar, Alex; Boninger, Faith
2015-01-01
Computer technology has made it possible to aggregate, collate, analyze, and store massive amounts of information about students. School districts and private companies that sell their services to the education market now regularly collect such information, raising significant issues about the privacy rights of students. Most school districts lack…
Star Formation in the Eagle Nebula
NASA Astrophysics Data System (ADS)
Oliveira, J. M.
2008-12-01
M16 (the Eagle Nebula) is a striking star-forming region, with a complex morphology of gas and dust sculpted by the massive stars in NGC 6611. Detailed studies of the famous "elephant trunks" dramatically increased our understanding of the massive star feedback into the parent molecular cloud. A rich young stellar population (2-3 Myr) has been identified, from massive O-stars down to substellar masses. Deep in the remnant molecular material, embedded protostars, Herbig-Haro objects and maser sources bear evidence of ongoing star formation in the nebula, possibly triggered by the massive cluster members. M16 is an excellent template for the study of star formation under the hostile environment created by massive O-stars. This review aims at providing an observational overview not only of the young stellar population but also of the gas remnant of the star formation process.
Game-powered machine learning.
Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert
2012-04-24
Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the "wisdom of the crowds." Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., "funky jazz with saxophone," "spooky electronica," etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making massive amounts of multimedia data searchable.
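A compact sketch of the annotate-train-query loop described above: game-collected tags train a supervised tag model, the model selects the songs it is least certain about for the next round of the game, and the trained model annotates the remaining corpus. The choice of model (one-vs-rest logistic regression on audio features) and the uncertainty-sampling criterion are stand-in assumptions rather than the Herd It system's actual design.

```python
# Sketch of the annotate -> train -> actively query loop described above.
# The model (one-vs-rest logistic regression on audio features) and the
# uncertainty-sampling criterion are stand-in assumptions, not Herd It's design.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def train_tagger(X_labeled, Y_labeled):
    """X: audio feature vectors; Y: binary tag matrix from game annotations."""
    model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    model.fit(X_labeled, Y_labeled)
    return model

def pick_songs_for_next_game(model, X_unlabeled, n_songs=50):
    """Choose songs whose tag predictions are least certain."""
    proba = model.predict_proba(X_unlabeled)
    uncertainty = np.mean(np.abs(proba - 0.5), axis=1)   # small mean = uncertain
    return np.argsort(uncertainty)[:n_songs]

def annotate_corpus(model, X_corpus, tag_names, threshold=0.5):
    """Tag the remaining corpus with the trained model."""
    proba = model.predict_proba(X_corpus)
    return [[t for t, p in zip(tag_names, row) if p >= threshold] for row in proba]
```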
Experience of e-learning implementation through massive open online courses
NASA Astrophysics Data System (ADS)
Ivleva, N. V.; Fibikh, E. V.
2016-04-01
E-learning is considered to be one of the most promising directions in education development worldwide. To gain a competitive advantage over other institutions offering a wide variety of educational services, it is important to introduce information and communication technologies into the educational process and to develop e-learning as a whole. The aim of the research is to reveal problems which prevent the full implementation of e-learning at the Reshetnev Siberian State Aerospace University (SibSAU) and to suggest ways of solving those problems by optimizing the introduction of e-learning at the university, motivating students and teaching staff to participate in massive open online courses, and forming tailored platforms with a view to arranging similar courses on the premises of the university. The paper considers the introduction and development level of e-learning in Russia and at SibSAU in particular. It substantiates the necessity of accelerating the introduction of e-learning at an aerospace university as a basis for training highly qualified specialists in the areas of aviation, machine building, physics, and info-communication technologies, as well as in the other scientific areas in which university training is carried out. The paper covers SibSAU's experience in implementing e-learning in the educational process through student and teaching staff participation in massive open online courses and the mastering of other up-to-date educational platforms and their usage in the educational process. Keywords: e-learning, distance learning, online learning, massive open online course.
The Feasibility of Linear Motors and High-Energy Thrusters for Massive Aerospace Vehicles
NASA Astrophysics Data System (ADS)
Stull, M. A.
A combination of two propulsion technologies, superconducting linear motors using ambient magnetic fields and high-energy particle beam thrusters, may make it possible to develop massive aerospace vehicles the size of aircraft carriers. If certain critical thresholds can be attained, linear motors can enable massive vehicles to fly within the atmosphere and can propel them to orbit. Thrusters can do neither, because power requirements are prohibitive. However, unless superconductors having extremely high critical current densities can be developed, the interplanetary magnetic field is too weak for linear motors to provide sufficient acceleration to reach even nearby planets. On the other hand, high-energy thrusters can provide adequate acceleration using a minimal amount of reaction mass, at achievable levels of power generation. If the requirements for linear motor propulsion can be met, combining the two modes of propulsion could enable huge nuclear powered spacecraft to reach at least the inner planets of the solar system, the asteroid belt, and possibly Jupiter, in reasonably short times under continuous acceleration, opening them to exploration, resource development and colonization.
Massive naproxen overdose with serial serum levels.
Al-Abri, Suad A; Anderson, Ilene B; Pedram, Fatehi; Colby, Jennifer M; Olson, Kent R
2015-03-01
Massive naproxen overdose is not commonly reported. Severe metabolic acidosis and seizure have been described, but the use of renal replacement therapy has not been studied in the context of overdose. A 28-year-old man ingested 70 g of naproxen along with an unknown amount of alcohol in a suicide attempt. On examination in the emergency department 90 min later, he was drowsy but had normal vital signs apart from sinus tachycardia. The serum naproxen level 90 min after ingestion was 1,580 mg/L (therapeutic range 25-75 mg/L). He developed metabolic acidosis requiring renal replacement therapy using sustained low-efficiency dialysis (SLED) and continuous venovenous hemofiltration (CVVH) and had recurrent seizure activity requiring intubation within 4 h of ingestion. He recovered after 48 h. Massive naproxen overdose can present with serious toxicity including seizures, altered mental status, and metabolic acidosis. Hemodialysis and renal replacement therapy may correct the acid-base disturbance and provide support in cases of renal impairment in the context of naproxen overdose, but further studies are needed to determine the extraction of naproxen.
Howe, S.S.
1985-01-01
The Devonian massive sulfide orebodies of the West Shasta district in N California are composed primarily of pyrite, with lesser amounts of other sulfide and gangue minerals. Examination of polished thin sections of more than 100 samples from the Mammoth, Shasta King, Early Bird, Balaklala, Keystone, and Iron Mountain mines suggests that mineralization may be divided into 6 paragenetic stages, the last 5 each separated by an episode of deformation: 1) precipitation of fine-grained, locally colloform and framboidal pyrite and sphalerite; 2) deposition of fine-grained arsenopyrite and coarse-grained pyrite; 3) penetration and local replacement of sulfide minerals of stages 1 and 2 along growth zones and fractures by chalcopyrite, sphalerite, galena, tennantite, pyrrhotite, bornite, and idaite; 4) recrystallization and remobilization of existing minerals; 5) deposition of quartz, white mica, chlorite, and calcite; and 6) formation of bornite, digenite, chalcocite, and covellite during supergene enrichment of several orebodies at the Iron Mountain mine. Mineralogic and textural evidence does not support a second major episode of massive sulfide mineralization during the Permian. -from Author
Contribution of Massive Stars to the Production of Neutron Capture Elements
NASA Astrophysics Data System (ADS)
Federman, Steven
2010-09-01
Elements beyond the Fe-peak must be synthesized through neutron-capture processes. With the aim of understanding the contribution of massive stars to the synthesis of neutron-capture elements during the current epoch, we propose an archival survey of interstellar arsenic, cadmium, tin, and lead. Nucleosynthesis via the weak slow process and the rapid process are the routes involving massive stars, while the main slow process arises from the evolution of low-mass stars. Ultraviolet lines for the dominant ions for each element will be used to extract interstellar abundances. The survey involves about forty sight lines, many of which are associated with regions of massive star formation shaped by core-collapse supernovae {SNe II}. The sample will increase the number of published determinations by factors of 2 to 5. HST spectra are the only means for determining the elemental abundances for this set of species in diffuse interstellar clouds. The survey contains directions that are both molecule poor and molecule rich, thereby enabling us to examine the overall level of depletion onto grains as a function of gas density. Complementary laboratory determinations of oscillator strengths will place the interstellar measurements on an absolute scale. The results from the proposed study will be combined with published interstellar abundances for other neutron capture elements and the suite of measurements will be compared to results from stars throughout the history of the Galaxy.
Shifting of the resonance location for planets embedded in circumstellar disks
NASA Astrophysics Data System (ADS)
Marzari, F.
2018-03-01
Context. In the early evolution of a planetary system, a pair of planets may be captured in a mean motion resonance while still embedded in their nesting circumstellar disk. Aims: The goal is to estimate the direction and amount of shift in the semimajor axis of the resonance location due to the disk gravity as a function of the gas density and mass of the planets. The stability of the resonance lock when the disk dissipates is also tested. Methods: The orbital evolution of a large number of systems is numerically integrated within a three-body problem in which the disk potential is computed as a series expansion. This is a good approximation, at least over a limited amount of time. Results: Two different resonances are studied: the 2:1 and the 3:2. In both cases the shift is inwards, even if by a different amount, when the planets are massive and carve a gap in the disk. For super-Earths, the shift is instead outwards. Different disk densities, Σ, are considered and the resonance shift depends almost linearly on Σ. The gas dissipation leads to destabilization of a significant number of resonant systems, in particular if it is fast. Conclusions: The presence of a massive circumstellar disk may significantly affect the resonant behavior of a pair of planets by shifting the resonant location and by decreasing the size of the stability region. The disk dissipation may explain some systems found close to a resonance but not locked in it.
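For orientation, the unperturbed (star-only, no disk) location of a p:q mean-motion resonance follows directly from Kepler's third law; the paper measures shifts of the libration centre relative to this nominal value. This is standard celestial mechanics, not a result of the paper:

\[
\frac{P_{\rm out}}{P_{\rm in}} = \frac{p}{q}
\quad\Longrightarrow\quad
\frac{a_{\rm out}}{a_{\rm in}} = \left(\frac{p}{q}\right)^{2/3},
\]

so the nominal 2:1 and 3:2 commensurabilities lie at a_out/a_in of about 1.59 and 1.31, respectively, and the disk potential displaces the actual resonance location from these values.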
NASA Technical Reports Server (NTRS)
Strugalski, Z.
1985-01-01
Experimental study of the space-time development of the particle production process in hadronic collisions at its initial stage was performed. Massive target nuclei have been used as fine detectors of the properties of the particle production process development within time intervals smaller than 10^-22 s and spatial distances smaller than 10^-12 cm. In hadron-nucleon collisions, in particular in nucleon-nucleon collisions, the particle production process goes through intermediate objects in 2 → 2 type endoergic reactions. The objects decay into the commonly observed resonances and particles.
Learning from Massive Distributed Data Sets (Invited)
NASA Astrophysics Data System (ADS)
Kang, E. L.; Braverman, A. J.
2013-12-01
Technologies for remote sensing and ever-expanding computer experiments in climate science are generating massive data sets. Meanwhile, it has become common in all areas of large-scale science for these 'big data' to be distributed over multiple physical locations, and moving large amounts of data can be impractical. In this talk, we will discuss efficient ways to summarize and learn from distributed data. We formulate a graphical model to mimic the main characteristics of a distributed-data network, including the size of the data sets and the speed of moving data. With this nominal model, we investigate the trade-off between prediction accuracy and the cost of data movement, theoretically and through simulation experiments. We will also discuss new implementations of spatial and spatio-temporal statistical methods optimized for distributed data.
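The sketch below illustrates the general "summarize locally, combine centrally" strategy that the abstract motivates: each site transmits only small sufficient statistics, from which a global mean and variance are reconstructed exactly without moving the raw data. The choice of statistics and all function names are assumptions for illustration, not the graphical-model formulation of the talk.

```python
# Sketch of learning from distributed data by moving summaries, not raw data.
# Each site returns (n, sum, sum of squares); the center combines them exactly
# for the global mean/variance. Names and the choice of statistics are assumptions.
import numpy as np

def local_summary(x):
    """Computed at each remote site; x is that site's local data array."""
    x = np.asarray(x, dtype=float)
    return x.size, x.sum(), np.square(x).sum()

def combine(summaries):
    """Computed at the center from the small summaries only."""
    n = sum(s[0] for s in summaries)
    total = sum(s[1] for s in summaries)
    sq = sum(s[2] for s in summaries)
    mean = total / n
    var = sq / n - mean ** 2
    return mean, var

# Usage:
# summaries = [local_summary(site_data) for site_data in all_sites]  # tiny messages
# global_mean, global_var = combine(summaries)
```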
Applications of massively parallel computers in telemetry processing
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon
1994-01-01
Telemetry processing refers to the reconstruction of full-resolution raw instrumentation data, with the artifacts of space and ground recording and transmission removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology in providing level-zero processing for spaceflights that adhere to the recommendations of the Consultative Committee for Space Data Systems (CCSDS). The workload characteristics of level-zero processing are used to identify processing requirements for high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operations System (EDOS).
Principal thorium resources in the United States
Staatz, Mortimer Hay; Armbrustmacher, T.J.; Olson, J.C.; Brownfield, I.K.; Brock, M.R.; Lemons, J.F.; Coppa, L.V.; Clingan, B.V.
1979-01-01
Resources were assessed for thorium in the higher grade and better known deposits in the United States in: (1) veins, (2) massive carbonatites, (3) stream placers of North and South Carolina, and (4) disseminated deposits. Thorium resources for the first three categories were divided into reserves and probable potential resources. Each of these then were separated into the following cost categories: (1) the amount of ThO2 producible at less than $15 per pound, (2) the amount producible at between $15 and $30 per pound, and (3) the amount producible at more than $50 per pound. The type of mining and milling needed at each deposit determines the capital, operating, and fixed costs of both mining and milling. Costs start with the clearing of land and are carried through to the final product, which for all deposits is ThO2. Capital costs of mining are affected most by the type of mining and the size of the mine. Those of milling are affected most by the kind of mill, its size, and whether or not extra circuits are needed for the separation of rare earths or some other byproduct. Veins, massive carbonatites, and stream placers of North and South Carolina have reserves of 188,000 short tons of ThO2 and probable potential resources of 505,000 tons of ThO2. Approximately half of the reserves and probable potential resources can be produced at less than $30 per pound of ThO2. Veins are the highest grade source in the United States and have total reserves of 142,000 tons of ThO2 and probable potential resources of 343,000 tons. About 90 percent of the reserves and 91 percent of the probable potential resources can be produced at less than $15 per pound of ThO2. Seven vein districts were evaluated: (1) Lemhi Pass, Mont.-Idaho, (2) Wet Mountains, Colo., (3) Powderhorn, Colo., (4) Hall Mountain, Idaho, (5) Diamond Creek, Idaho, (6) Bear Lodge Mountains, Wyo. and (7) Mountain Pass, Calif. Eighty-seven percent of the total reserves and probable potential resources are in the Lemhi Pass and Wet Mountains Districts. The first district has reserves of 68,000 tons of ThO2 and probable potential resources of 124,000 tons that can be produced at less than $15 per pound; the second district has 54,000 tons of reserves and 141,000 tons of probable potential resources producible at less than $15 per pound. Rare earths are a common byproduct, and in many veins they are from one-half to several times as abundant as thorium. Massive carbonatite bodies are large-tonnage low-grade deposits. Thorium in these deposits would be a byproduct either of rare earth or of niobium mining. The Iron Hill carbonatite body in the Powderhorn district, Colorado, and the Sulfide Queen carbonatite body in the Mountain Pass district, California, were evaluated. These two deposits contain 40,800 tons of ThO2 in reserves and 125,000 tons of ThO2 in probable potential resources. More than 80 percent of this total is in the Iron Hill carbonatite. This thorium is entirely a byproduct and is producible at less than $15 per pound of ThO2. The Sulphide Queen massive carbonatite deposit was being mined in 1977 for rare earths, and thorium could be recovered by adding an extra circuit to the existing mill. Stream placers in North and South Carolina occur both in the Piedmont and just east of the Fall Line. The reserves of these deposits total 5,270 tons of ThO2, and the probable potential resources are 36,800 tons of ThO2. The Piedmont placers are all too small to produce ThO2 at a cost of less than $50 per pound. 
One placer on Hollow Creek, S.C., just east of the Fall Line had reserves of 2,040 tons of ThO2 that is producible at between $15 and $30 per pound. Thorium occurs in monazite in these placers. Other heavy minerals that would be recovered with the monazite include rutile, zircon, and ilmenite. In addition to thorium, monazite contains large amounts of rare earths and small amounts of uranium; both can be recovered during the process that separates thorium fr
Hunting for Shooting Stars in 30 Doradus
NASA Astrophysics Data System (ADS)
de Mink, Selma E.; Lennon, D. J.; Sabbi, E.; Anderson, J.; Bedin, L. R.; Sohn, S.; van der Marel, R. P.; Walborn, N. R.; Bastian, N.; Bressert, E.; Crowther, P. A.; Evans, C. J.; Herrero, A.; Langer, N.; Sana, H.
2012-01-01
We are undertaking an ambitious proper motion survey of massive stars in the 30 Doradus region of the Large Magellanic Cloud using the unique capabilities of HST. We aim to derive the directions of motion of massive runaway stars, searching in particular for stars which have been ejected from the dense star cluster R136. These stars probe the dynamical processes in the core of the cluster. The core has been suggested as a formation site for very massive stars exceeding the canonical upper limit of the IMF. These are possible progenitors of intermediate-mass black holes. Furthermore, they provide insight into the origin of massive field stars, addressing open questions related to the poorly understood process of massive star formation. Some may originate from disrupted binary systems and bear the imprints of interaction with the original companion. They will end their lives far away from their birth locations as core-collapse supernovae or possibly even long gamma-ray bursts. Here we discuss the first epoch of observations, presenting a 16'x13' mosaic of the data, and initial results based on comparisons with archival data. SdM acknowledges the NASA Hubble Fellowship grant HST-HF-51270.01-A awarded by STScI, operated by AURA for NASA, contract NAS 5-26555.
NASA Astrophysics Data System (ADS)
Prantzos, N.; Abia, C.; Limongi, M.; Chieffi, A.; Cristallo, S.
2018-05-01
We present a comprehensive study of the abundance evolution of the elements from H to U in the Milky Way halo and local disc. We use a consistent chemical evolution model, metallicity-dependent isotopic yields from low and intermediate mass stars and yields from massive stars which include, for the first time, the combined effect of metallicity, mass loss, and rotation for a large grid of stellar masses and for all stages of stellar evolution. The yields of massive stars are weighted by a metallicity-dependent function of the rotational velocities, constrained by observations so as to obtain a primary-like 14N behaviour at low metallicity and to avoid overproduction of s-elements at intermediate metallicities. We show that the Solar system isotopic composition can be reproduced to better than a factor of 2 for isotopes up to the Fe-peak, and at the 10 per cent level for most pure s-isotopes, both light ones (resulting from the weak s-process in rotating massive stars) and heavy ones (resulting from the main s-process in low and intermediate mass stars). We conclude that the light element primary process (LEPP), invoked to explain the apparent abundance deficiency of the s-elements with A < 100, is not necessary. We also reproduce the evolution of the heavy to light s-elements abundance ratio ([hs/ls]) - recently observed in unevolved thin disc stars - as a result of the contribution of rotating massive stars at sub-solar metallicities. We find that those stars produce primary F and dominate its solar abundance, and we confirm their role in the observed primary behaviour of N. In contrast, we show that their action is insufficient to explain the small observed values of ^{12}C/^{13}C in halo red giants, which is rather due to internal processes in those stars.
NASA Astrophysics Data System (ADS)
Haritashya, U. K.; Strattman, K.; Kargel, J. S.
2017-12-01
A high-altitude glacierized region in the central Himalaya hosts thousands of glaciers and gives rise to major rivers such as the Ganges and Yamuna. This region has seen significant changes in the last few decades due to climate system coupling involving the westerlies and the monsoon, high seismic activity, complex topography, extensive glacier debris cover, and widespread mass movement. Consequently, we analyzed regional variability in the surface processes of hundreds of glaciers and in downstream river basins of varying geomorphology using a variety of satellite imagery from the early 1990s to 2017. Our results indicate a massive increase in supraglacial ponds on south-facing glaciers. Several of these ponds are either seasonal, forming at exactly the same location every year, or form at the beginning of the melt season and drain out as the season progresses from April to July/August. We also observed an evolution in the size of these ponds over the last two decades, to the point where some of them now seem to be stationary and might grow and develop into large lakes in the future. To understand our results and the melting pattern in the region, we also analyzed ice velocity and surface temperature, both of which reveal a temporal shift in pattern. Glacier surface temperatures in particular show a warming pattern in recent years and a strong correlation with debris cover. Additionally, we observed changes in the downstream region, both around the river bed and on steep slopes, where massive erosion of Himalayan glaciers is depositing and transporting excessive amounts of sediment. Overall, our results are discussed in the context of better landscape evolution modeling from the top of the glacier to several kilometers downstream of the glacier terminus.
NASA Astrophysics Data System (ADS)
Plümper, Oliver; Beinlich, Andreas; Bach, Wolfgang; Janots, Emilie; Austrheim, Håkon
2014-09-01
Geochemical micro-environments within serpentinizing systems can abiotically synthesize hydrocarbons and provide the ingredients required to support life. Observations of organic matter in microgeode-like hydrogarnets found in Mid-Atlantic Ridge serpentinites suggest these garnets possibly represent unique nests for the colonization of microbial ecosystems within the oceanic lithosphere. However, little is known about the mineralogical and geochemical processes that allow such unique environments to form. Here we present work on outcrop-scale vein networks from an ultramafic massif in Norway that contain massive amounts of spherulitic garnets (andradite), which help to constrain such processes. Vein andradite spherulites are associated with polyhedral serpentine, brucite, Ni-Fe alloy (awaruite), and magnetite, indicative of low-temperature (<200 °C) alteration under low fO2 and low aSiO2,aq geochemical conditions. Together with the outcrop- and micro-scale analyses, geochemical reaction path modeling shows that there was limited mass transport and fluid flow over a large scale. Once opened, the veins remained isolated (closed system), forming non-equilibrium microenvironments that allowed, upon reaching a threshold supersaturation, the rapid crystallization (seconds to weeks) of spherulitic andradite. The presence of polyhedral serpentine spheres indicates that the veins were initially filled with a gel-like protoserpentine phase. In addition, massive Fe oxidation associated with andradite formation could have generated as much as 600 mmol H2,aq per 100 cm3 of vein. Although no carbonaceous matter was detected, the vein networks fulfill the reported geochemical criteria required to generate abiogenic hydrocarbons and support microbial communities. Thus, systems similar to those investigated here are of prime interest when searching for life-supporting environments within the deep subsurface.
Metal enrichment of the intracluster medium: SN-driven galactic winds
NASA Astrophysics Data System (ADS)
Baumgartner, V.; Breitschwerdt, D.
2009-12-01
We investigate the role of supernova (SN)-driven galactic winds in the chemical enrichment of the intracluster medium (ICM). Such outflows on galactic scales originate in huge star-forming regions and expel metal-enriched material out of the galaxies into their surroundings, as observed, for example, in the nearby starburst galaxy NGC 253. As massive stars in OB associations explode sequentially, shock waves are driven into the interstellar medium (ISM) of a galaxy and merge, forming a superbubble (SB). These SBs expand in a direction perpendicular to the disk plane, following the density gradient of the ISM. We use the 2D analytical approximation by Kompaneets (1960) to model the expansion of SBs in an exponentially stratified ISM. This is modified in order to describe the sequence of SN explosions as a time-dependent process, taking into account the main-sequence lifetime of the SN progenitors and using an initial mass function to get the number of massive stars per mass interval. The evolution of the bubble in space and time is calculated analytically, from which the onset of Rayleigh-Taylor instabilities in the shell can be determined. In its further evolution, the shell will break up and high-metallicity gas will be ejected into the halo of the galaxy and even into the ICM. We derive the number of stars needed for blow-out depending on the scale height and density of the ambient medium, as well as the fraction of alpha and iron-peak elements contained in the hot gas. Finally, the amount of metals injected into the ICM by Milky Way-type galaxies is calculated, confirming the importance of this enrichment process.
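For reference, in the Kompaneets (1960) approximation the ambient medium is exponentially stratified and the shock surface has a closed-form shape, commonly written as

\[
\rho(z) = \rho_0\, e^{-z/H}, \qquad
r(z, y) = 2H \arccos\!\left[\tfrac{1}{2}\, e^{z/2H}\left(1 - \frac{y^2}{4H^2} + e^{-z/H}\right)\right],
\]

where r is the cylindrical radius of the bubble at height z above the explosion and y is the transformed time variable (with the dimension of length; its exact normalization depends on convention), with blow-out occurring as y approaches 2H. This is quoted here only as background to the modified, time-dependent Kompaneets treatment described in the abstract, not as a result of the paper.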
A Lightweight I/O Scheme to Facilitate Spatial and Temporal Queries of Scientific Data Analytics
NASA Technical Reports Server (NTRS)
Tian, Yuan; Liu, Zhuo; Klasky, Scott; Wang, Bin; Abbasi, Hasan; Zhou, Shujia; Podhorszki, Norbert; Clune, Tom; Logan, Jeremy; Yu, Weikuan
2013-01-01
In the era of petascale computing, more scientific applications are being deployed on leadership-scale computing platforms to enhance scientific productivity. Many I/O techniques have been designed to address the growing I/O bottleneck on large-scale systems by handling massive scientific data in a holistic manner. While such techniques have been leveraged in a wide range of applications, they have not proven adequate for many mission-critical applications, particularly in the data post-processing stage. One example is that some scientific applications generate datasets composed of a vast number of small data elements that are organized along many spatial and temporal dimensions but require sophisticated data analytics on one or more dimensions. Including such dimensional knowledge in the data organization can be beneficial to the efficiency of data post-processing, but it is often missing from existing I/O techniques. In this study, we propose a novel I/O scheme named STAR (Spatial and Temporal AggRegation) to enable high-performance data queries for scientific analytics. STAR is able to dive into the massive data, identify the spatial and temporal relationships among data variables, and accordingly organize them into an optimized multi-dimensional data structure before storing them to storage. This technique not only facilitates the common access patterns of data analytics, but also further reduces the application turnaround time. In particular, STAR is able to enable efficient data queries along the time dimension, a practice common in scientific analytics but not yet supported by existing I/O techniques. In our case study with a critical climate modeling application, GEOS-5, the experimental results on the Jaguar supercomputer demonstrate an improvement of up to 73 times in read performance compared to the original I/O method.
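As a rough illustration of the aggregation idea (and explicitly not the STAR implementation), the sketch below packs many small (time, y, x) elements into one chunked multidimensional array whose chunking favors reads along the time dimension. HDF5 via h5py is used as a stand-in backend, and the dataset layout and chunk shape are assumptions.

```python
# Sketch of spatial/temporal aggregation before writing: many small (t, y, x)
# elements are packed into one chunked array so later queries along the time
# axis read contiguous chunks. h5py is used as a stand-in backend; the layout
# and chunk shape are assumptions, not the STAR implementation.
import numpy as np
import h5py

def write_aggregated(path, records, nt, ny, nx):
    """records: iterable of (t_index, y_index, x_index, value)."""
    cube = np.full((nt, ny, nx), np.nan, dtype=np.float32)
    for t, y, x, v in records:
        cube[t, y, x] = v
    with h5py.File(path, "w") as f:
        # chunking along the full time extent favors time-series queries
        f.create_dataset("var", data=cube, chunks=(nt, 1, 1))

def read_time_series(path, y, x):
    """One chunked read returns the full time series at a spatial point."""
    with h5py.File(path, "r") as f:
        return f["var"][:, y, x]
```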
NASA Astrophysics Data System (ADS)
Guo, B.; Su, J.; Li, Z. H.; Wang, Y. B.; Yan, S. Q.; Li, Y. J.; Shu, N. C.; Han, Y. L.; Bai, X. X.; Chen, Y. S.; Liu, W. P.; Yamaguchi, H.; Binh, D. N.; Hashimoto, T.; Hayakawa, S.; Kahl, D.; Kubono, S.; He, J. J.; Hu, J.; Xu, S. W.; Iwasa, N.; Kume, N.; Li, Z. H.
2013-01-01
The evolution of massive stars with very low metallicities depends critically on the amount of CNO nuclides which they produce. The 12N(p,γ)13O reaction is an important branching point in the rap processes, which are believed to be alternative paths to the slow 3α process for producing CNO seed nuclei and thus could change the fate of massive stars. In the present work, the angular distribution of the 2H(12N, 13O)n proton transfer reaction at E_c.m. = 8.4 MeV has been measured for the first time. Based on the Johnson-Soper approach, the square of the asymptotic normalization coefficient (ANC) for the virtual decay of 13O(g.s.) → 12N + p was extracted to be 3.92±1.47 fm^-1 from the measured angular distribution and utilized to compute the direct component of the 12N(p,γ)13O reaction. The direct astrophysical S factor at zero energy was then found to be 0.39±0.15 keV b. By considering the direct capture into the ground state of 13O, the resonant capture via the first excited state of 13O and their interference, we determined the total astrophysical S factors and rates of the 12N(p,γ)13O reaction. The new rate is two orders of magnitude slower than that from the REACLIB compilation. Our reaction network calculations with the present rate imply that 12N(p,γ)13O will only compete successfully with the β+ decay of 12N at higher (˜2 orders of magnitude) densities than initially predicted.
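For context, the astrophysical S factor quoted above is the standard factorization of a charged-particle cross section with the Coulomb penetrability removed; this definition is textbook nuclear astrophysics rather than something specific to this measurement:

\[
S(E) = \sigma(E)\, E\, \exp(2\pi\eta), \qquad \eta = \frac{Z_1 Z_2 e^2}{\hbar v},
\]

where η is the Sommerfeld parameter and v the relative velocity of the colliding nuclei; S(0) is the value extrapolated to zero energy, as reported for 12N(p,γ)13O above.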
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in Visual Analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
On the origin of high-velocity runaway stars
NASA Astrophysics Data System (ADS)
Gvaramadze, Vasilii V.; Gualandris, Alessia; Portegies Zwart, Simon
2009-06-01
We explore the hypothesis that some high-velocity runaway stars attain their peculiar velocities in the course of exchange encounters between hard massive binaries and a very massive star (either an ordinary 50-100 Msolar star or a more massive one, formed through runaway mergers of ordinary stars in the core of a young massive star cluster). In this process, one of the binary components becomes gravitationally bound to the very massive star, while the second one is ejected, sometimes with a high speed. We performed three-body scattering experiments and found that early B-type stars (the progenitors of the majority of neutron stars) can be ejected with velocities of >~200-400 km s^-1 (typical of pulsars), while 3-4 Msolar stars can attain velocities of >~300-400 km s^-1 (typical of the bound population of halo late B-type stars). We also found that the ejected stars can occasionally attain velocities exceeding the Milky Way's escape velocity.
Assawincharoenkij, Thitiphan; Hauzenberger, Christoph; Ettinger, Karl; Sutthirat, Chakkaphan
2018-02-01
Waste rocks from gold mining in northeastern Thailand are classified as sandstone, siltstone, gossan, skarn, skarn-sulfide, massive sulfide, diorite, and limestone/marble. Among these rocks, skarn-sulfide and massive sulfide rocks have the potential to generate acid mine drainage (AMD) because they contain significant amounts of sulfide minerals, i.e., pyrrhotite, pyrite, arsenopyrite, and chalcopyrite. Moreover, both sulfide rocks present high contents of As and Cu, which are caused by the occurrence of arsenopyrite and chalcopyrite, respectively. Another main concern is the gossan, which is composed of goethite, hydrous ferric oxide (HFO), quartz, gypsum, and oxidized pyroxene. X-ray maps obtained by electron probe micro-analysis (EPMA) indicate the distribution of some toxic elements in Fe-oxyhydroxide minerals in the gossan waste rock. Arsenic (up to 1.37 wt.%) and copper (up to 0.60 wt.%) are found in goethite, HFO, and along the oxidized rim of pyroxene. Therefore, the gossan rock appears to be a source of As, Cu, and Mn. As a result, massive sulfide, skarn-sulfide, and gossan have the potential to cause environmental impacts, particularly AMD and toxic element contamination. Consequently, the massive sulfide and skarn-sulfide waste rocks should be protected from oxygen and water to avoid an oxidizing environment, whereas the gossan waste rocks should be protected from the formation of AMD to prevent heavy metal contamination.
Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel
2014-08-19
The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered with by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects.
Massive Fabrication of Polymer Microdiscs by Phase Separation and Freestanding Process.
Zhang, Hong; Fujii, Mao; Okamura, Yosuke; Zhang, Li; Takeoka, Shinji
2016-06-29
We present a facile method to fabricate polymer thin films with thicknesses of tens of nanometers and sizes of several micrometers (also called "microdiscs" herein) by applying phase separation of a polymer blend. A water-soluble supporting layer is employed to obtain a freestanding microdisc suspension. Owing to their miniaturized size, the microdiscs can be injected through a syringe needle. Herein, poly(d,l-lactic acid) microdiscs were fabricated with various thicknesses and sizes, in the range from ca. 10 to 60 nm and from ca. 1.0 to 10.0 μm, respectively. Magnetic nanoparticles were deposited on the polymer microdiscs with a surface coating method. The magnetic manipulation of microdiscs in a liquid environment under an external magnetic field was achieved with controllable velocity by adjusting the microdisc dimensions and the loading amount of magnetic components. Such biocompatible polymer microdiscs are expected to serve as injectable vehicles for targeted drug delivery.
Broadband observations of the naked-eye gamma-ray burst GRB 080319B.
Racusin, J L; Karpov, S V; Sokolowski, M; Granot, J; Wu, X F; Pal'shin, V; Covino, S; van der Horst, A J; Oates, S R; Schady, P; Smith, R J; Cummings, J; Starling, R L C; Piotrowski, L W; Zhang, B; Evans, P A; Holland, S T; Malek, K; Page, M T; Vetere, L; Margutti, R; Guidorzi, C; Kamble, A P; Curran, P A; Beardmore, A; Kouveliotou, C; Mankiewicz, L; Melandri, A; O'Brien, P T; Page, K L; Piran, T; Tanvir, N R; Wrochna, G; Aptekar, R L; Barthelmy, S; Bartolini, C; Beskin, G M; Bondar, S; Bremer, M; Campana, S; Castro-Tirado, A; Cucchiara, A; Cwiok, M; D'Avanzo, P; D'Elia, V; Valle, M Della; de Ugarte Postigo, A; Dominik, W; Falcone, A; Fiore, F; Fox, D B; Frederiks, D D; Fruchter, A S; Fugazza, D; Garrett, M A; Gehrels, N; Golenetskii, S; Gomboc, A; Gorosabel, J; Greco, G; Guarnieri, A; Immler, S; Jelinek, M; Kasprowicz, G; La Parola, V; Levan, A J; Mangano, V; Mazets, E P; Molinari, E; Moretti, A; Nawrocki, K; Oleynik, P P; Osborne, J P; Pagani, C; Pandey, S B; Paragi, Z; Perri, M; Piccioni, A; Ramirez-Ruiz, E; Roming, P W A; Steele, I A; Strom, R G; Testa, V; Tosti, G; Ulanov, M V; Wiersema, K; Wijers, R A M J; Winters, J M; Zarnecki, A F; Zerbi, F; Mészáros, P; Chincarini, G; Burrows, D N
2008-09-11
Long-duration gamma-ray bursts (GRBs) release copious amounts of energy across the entire electromagnetic spectrum, and so provide a window into the process of black hole formation from the collapse of massive stars. Previous early optical observations of even the most exceptional GRBs (990123 and 030329) lacked both the temporal resolution to probe the optical flash in detail and the accuracy needed to trace the transition from the prompt emission within the outflow to external shocks caused by interaction with the progenitor environment. Here we report observations of the extraordinarily bright prompt optical and gamma-ray emission of GRB 080319B that provide diagnostics within seconds of its formation, followed by broadband observations of the afterglow decay that continued for weeks. We show that the prompt emission stems from a single physical region, implying an extremely relativistic outflow that propagates within the narrow inner core of a two-component jet.
Automatic Detection of Seizures with Applications
NASA Technical Reports Server (NTRS)
Olsen, Dale E.; Harris, John C.; Cutchis, Protagoras N.; Cristion, John A.; Lesser, Ronald P.; Webber, W. Robert S.
1993-01-01
There are an estimated two million people with epilepsy in the United States. Many of these people do not respond to anti-epileptic drug therapy. Two devices can be developed to assist in the treatment of epilepsy. The first is a microcomputer-based system designed to process massive amounts of electroencephalogram (EEG) data collected during long-term monitoring of patients for the purpose of diagnosing seizures, assessing the effectiveness of medical therapy, or selecting patients for epilepsy surgery. Such a device would select and display important EEG events. Currently many such events are missed. A second device could be implanted and would detect seizures and initiate therapy. Both of these devices require a reliable seizure detection algorithm. A new algorithm is described. It is believed to represent an improvement over existing seizure detection algorithms because better signal features were selected and better standardization methods were used.
A case study for cloud based high throughput analysis of NGS data using the globus genomics system
Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; ...
2015-01-01
Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.
Standardized data collection to build prediction models in oncology: a prototype for rectal cancer.
Meldolesi, Elisa; van Soest, Johan; Damiani, Andrea; Dekker, Andre; Alitto, Anna Rita; Campitelli, Maura; Dinapoli, Nicola; Gatta, Roberto; Gambacorta, Maria Antonietta; Lanzotti, Vito; Lambin, Philippe; Valentini, Vincenzo
2016-01-01
The advances in diagnostic and treatment technology are responsible for a remarkable transformation in the internal medicine concept with the establishment of a new idea of personalized medicine. Inter- and intra-patient tumor heterogeneity and the complexity of clinical outcomes and/or treatment toxicity justify the effort to develop predictive models for decision support systems. However, the number of evaluated variables, coming from multiple disciplines (oncology, computer science, bioinformatics, statistics, genomics, and imaging, among others), can be very large, making traditional statistical analysis difficult to exploit. Automated data-mining processes and machine learning approaches can be a solution to organize the massive amount of data and to unravel important interactions. The purpose of this paper is to describe the strategy to collect and analyze data properly for decision support and introduce the concept of an 'umbrella protocol' within the framework of 'rapid learning healthcare'.
A high-redshift IRAS galaxy with huge luminosity - Hidden quasar or protogalaxy?
NASA Technical Reports Server (NTRS)
Rowan-Robinson, M.; Broadhurst, T.; Oliver, S. J.; Taylor, A. N.; Lawrence, A.; Mcmahon, R. G.; Lonsdale, C. J.; Hacking, P. B.; Conrow, T.
1991-01-01
An emission line galaxy with the enormous far-IR luminosity of 3 x 10^14 solar luminosities has been found at z = 2.286. The spectrum is very unusual, showing lines of high excitation but with very weak Lyman-alpha emission. A self-absorbed synchrotron model for the IR energy distribution cannot be ruled out, but a thermal origin seems more plausible. A radio-quiet quasar embedded in a very dusty galaxy could account for the IR emission, as might a starburst embedded in 1-10 billion solar masses of dust. The latter case demands so much dust that the object would probably be a massive galaxy in the process of formation. The presence of a large amount of dust in an object of such high redshift implies the generation of heavy elements at an early cosmological epoch.
Laboratory and software applications for clinical trials: the global laboratory environment.
Briscoe, Chad
2011-11-01
The Applied Pharmaceutical Software Meeting is held annually. It is sponsored by The Boston Society, a not-for-profit organization that coordinates a series of meetings within the global pharmaceutical industry. The meeting generally focuses on laboratory applications, but in recent years has expanded to include some software applications for clinical trials. The 2011 meeting emphasized the global laboratory environment. Global clinical trials generate massive amounts of data in many locations that must be centralized and processed for efficient analysis. Thus, the meeting had a strong focus on establishing networks and systems for dealing with the computer infrastructure to support such environments. In addition to the globally installed laboratory information management system, electronic laboratory notebook and other traditional laboratory applications, cloud computing is quickly becoming the answer to provide efficient, inexpensive options for managing the large volumes of data and computing power, and thus it served as a central theme for the meeting.
Google matrix analysis of directed networks
NASA Astrophysics Data System (ADS)
Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.
2015-10-01
In the past decade modern societies have developed enormous communication and social networks. Their classification and information retrieval processing has become a formidable task for society. Because of the rapid growth of the World Wide Web, and social and communication networks, new mathematical methods have been invented to characterize the properties of these networks in a more detailed and precise way. Various search engines extensively use such methods. It is highly important to develop new tools to classify and rank a massive amount of network information in a way that is adapted to internal network structures and characteristics. This review describes the Google matrix analysis of directed complex networks demonstrating its efficiency using various examples including the World Wide Web, Wikipedia, software architectures, world trade, social and citation networks, brain neural networks, DNA sequences, and Ulam networks. The analytical and numerical matrix methods used in this analysis originate from the fields of Markov chains, quantum chaos, and random matrix theory.
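The central object of the review is the Google matrix G = αS + (1 − α)E/N built from a directed network's adjacency structure; its leading eigenvector is the PageRank. The following is a minimal sketch, in Python/NumPy, of that construction and of power iteration on a toy dense network; the networks treated in the review require sparse, large-scale implementations, and the example network below is invented for illustration.
# --- illustrative sketch (not from the cited work) ---
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Google matrix from a directed adjacency matrix with the convention
    adj[i, j] = 1 if there is a link j -> i (column-stochastic form).
    Dangling nodes (columns with no out-links) become uniform columns."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    S = np.where(col_sums > 0, adj / np.maximum(col_sums, 1), 1.0 / n)
    return alpha * S + (1.0 - alpha) / n

def pagerank(G, tol=1e-10, max_iter=1000):
    """Power iteration for the leading eigenvector of G (the PageRank vector)."""
    n = G.shape[0]
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Toy 4-node directed network: 0->1, 0->2, 1->2, 2->0, 3->2.
adj = np.zeros((4, 4))
for src, dst in [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]:
    adj[dst, src] = 1.0
print(pagerank(google_matrix(adj)))
# --- end of sketch ---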
Overballe-Petersen, Søren; Willerslev, Eske
2014-01-01
Horizontal gene transfer in the form of long DNA fragments has changed our view of bacterial evolution. Recently, we discovered that such processes may also occur with the massive amounts of short and damaged DNA in the environment, and even with truly ancient DNA. Although it presently remains unclear how often it takes place in nature, horizontal gene transfer of short and damaged DNA opens up the possibility for genetic exchange across distinct species in both time and space. In this essay, we speculate on the potential evolutionary consequences of this phenomenon. We argue that it may challenge basic assumptions in evolutionary theory; that it may have distant origins in life's history; and that horizontal gene transfer should be viewed as an evolutionary strategy not only preceding but causally underpinning the evolution of sexual reproduction. PMID:25143190
Rice husk-originating silicon-graphite composites for advanced lithium ion battery anodes.
Kim, Hye Jin; Choi, Jin Hyeok; Choi, Jang Wook
2017-01-01
Rice husk is produced in massive amounts worldwide as a byproduct of rice cultivation. Rice husk contains approximately 20 wt% mesoporous SiO2. We produce mesoporous silicon (Si) by reducing the rice husk-originating SiO2 using a magnesio-milling process. Taking advantage of the mesoporosity and the large available quantity, we apply rice husk-originating Si to lithium ion battery anodes in a composite form with commercial graphite. By varying the mass ratio between these two components, a trade-off between specific capacity and cycle life was observed. A controllable pre-lithiation scheme was adopted to increase the initial Coulombic efficiency and energy density. The series of electrochemical results suggests that rice husk-originating Si-graphite composites are promising candidates for high capacity lithium ion battery anodes, with prominent advantages in battery performance and scalability.
Overballe-Petersen, Søren; Willerslev, Eske
2014-10-01
Horizontal gene transfer in the form of long DNA fragments has changed our view of bacterial evolution. Recently, we discovered that such processes may also occur with the massive amounts of short and damaged DNA in the environment, and even with truly ancient DNA. Although it presently remains unclear how often it takes place in nature, horizontal gene transfer of short and damaged DNA opens up the possibility for genetic exchange across distinct species in both time and space. In this essay, we speculate on the potential evolutionary consequences of this phenomenon. We argue that it may challenge basic assumptions in evolutionary theory; that it may have distant origins in life's history; and that horizontal gene transfer should be viewed as an evolutionary strategy not only preceding but causally underpinning the evolution of sexual reproduction. © 2014 The Authors. BioEssays Published by WILEY Periodicals, Inc.
Marincioni, Fausto
2007-12-01
A comparative survey of a diverse sample of 96 US and Italian emergency management agencies shows that the diffusion of new information technologies (IT) has transformed disaster communications. Although these technologies permit access to and the dissemination of massive amounts of disaster information with unprecedented speed and efficiency, barriers rooted in the various professional cultures still hinder the sharing of disaster knowledge. To be effective, the available IT must be attuned to the unique settings and professional cultures of the local emergency management communities. Findings show that available technology, context, professional culture and interaction are key factors that affect the knowledge transfer process. Cultural filters appear to influence emergency managers' perceptions of their own professional roles, their vision of the applicability of technology to social issues, and their perspective on the transferability of disaster knowledge. Four cultural approaches to the application of IT to disaster communications are defined: technocentric; geographic; anthropocentric; and ecocentric.
A case study for cloud based high throughput analysis of NGS data using the globus genomics system
Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha
2014-01-01
Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research. PMID:26925205
de Lange, Natascha; Schol, Pim; Lancé, Marcus; Woiski, Mallory; Langenveld, Josje; Rijnders, Robbert; Smits, Luc; Wassen, Martine; Henskens, Yvonne; Scheepers, Hubertina
2018-03-06
Postpartum hemorrhage (PPH) is associated with maternal morbidity and mortality and has an increasing incidence in high-resource countries, despite dissemination of guidelines, introduction of skills training, and correction for risk factors. Current guidelines advise the administration, as fluid resuscitation, of almost twice the amount of blood lost. This advice is not evidence-based and could potentially harm patients. All women attending the outpatient clinic who are eligible will be informed of the study; oral and written informed consent will be obtained. Where there is more than 500 ml blood loss and ongoing bleeding, patients will be randomized to care as usual, fluid resuscitation with 1.5-2 times the amount of blood loss, or fluid resuscitation with 0.75-1.0 times the blood loss. Blood loss will be assessed by weighing all draping. A blood sample, for determining hemoglobin concentration, hematocrit, thrombocyte concentration, and conventional coagulation parameters, will be taken at the start of the study, after 60 min, and 12-18 h after delivery. In a subgroup of women, additional thromboelastometric parameters will be obtained. Our hypothesis is that massive fluid administration might lead to a progression of bleeding due to secondary coagulation disorders. In non-pregnant individuals with massive blood loss, restrictive fluid management has been shown to prevent a progression to dilution coagulopathy. These data, however, cannot be extrapolated to women in labor. Our objective is to compare both resuscitation protocols in women with early, mild PPH (blood loss 500-750 ml) and ongoing bleeding, taking as primary outcome measure the progression to severe PPH (blood loss > 1000 ml). Netherlands Trial Register, NTR 3789. Registered on 11 January 2013.
NASA Astrophysics Data System (ADS)
Gee, L.; Reed, B.; Mayer, L.
2002-12-01
Recent years have seen remarkable advances in sonar technology, positioning capabilities, and computer processing power that have revolutionized the way we image the seafloor. The US Naval Oceanographic Office (NAVOCEANO) has updated its survey vessels and launches to the latest generation of technology and now possesses a tremendous ocean observing and mapping capability. However, the systems produce massive amounts of data that must be validated prior to inclusion in various bathymetry, hydrography, and imagery products. The key to meeting the challenge of the massive data volumes was to change the approach that required every data point be viewed. This was achieved with the replacement of the traditional line-by-line editing approach with an automated cleaning module, and an area-based editor. The approach includes a unique data structure that enables the direct access to the full resolution data from the area based view, including a direct interface to target files and imagery snippets from mosaic and full resolution imagery. The increased data volumes to be processed also offered tremendous opportunities in terms of visualization and analysis, and interactive 3D presentation of the complex multi-attribute data provided a natural complement to the area based processing. If properly geo-referenced and treated, the complex data sets can be presented in a natural and intuitive manner that allows the integration of multiple components each at their inherent level of resolution and without compromising the quantitative nature of the data. Artificial sun-illumination, shading, and 3-D rendering are used with digital bathymetric data to form natural looking and easily interpretable, yet quantitative, landscapes that allow the user to rapidly identify the data requiring further processing or analysis. Color can be used to represent depth or other parameters (like backscatter, quality factors or sediment properties), which can be draped over the DTM, or high resolution imagery can be texture mapped on bathymetric data. The presentation will demonstrate the new approach of the integrated area based processing and 3D visualization with a number of data sets from recent surveys.
The design and implementation of multi-source application middleware based on service bus
NASA Astrophysics Data System (ADS)
Li, Yichun; Jiang, Ningkang
2017-06-01
With the rapid development of the Internet of Things (IoT), real-time monitoring data are increasing in both variety and volume. To take full advantage of these data, we designed and implemented an application middleware that not only supports the three-layer architecture of IoT information systems but also enables flexible configuration of multiple resource-access and other accessory modules. The middleware platform is characterized by lightness, security, AoP (aspect-oriented programming), distribution and real-time operation, which lets application developers construct information processing systems for related areas in a short period. Its functions include, but are not limited to, pre-processing of data formats, definition of data entities, invocation and handling of distributed services, and massive data processing. Experimental results show that the middleware outperforms some message-queue-based constructions to a certain degree, and its throughput scales well as the number of distributed nodes increases, while the code remains simple. Currently, the middleware is applied to the system of the Shanghai Pudong environmental protection agency and has achieved great success.
He, Tingting; Aiken, Steve; Bance, Manohar; Yin, Shankai; Wang, Jian
2012-01-01
Noise-exposure at levels low enough to avoid a permanent threshold shift has been found to cause a massive, delayed degeneration of spiral ganglion neurons (SGNs) in mouse cochleae. Damage to the afferent innervation was initiated by a loss of synaptic ribbons, which is largely irreversible in mice. A similar delayed loss of SGNs has been found in guinea pig cochleae, but at a reduced level, suggesting a cross-species difference in SGN sensitivity to noise. Ribbon synapse damage occurs “silently” in that it does not affect hearing thresholds as conventionally measured, and the functional consequence of this damage is not clear. In the present study, we further explored the effect of noise on cochlear afferent innervation in guinea pigs by focusing on the dynamic changes in ribbon counts over time, and resultant changes in temporal processing. It was found that (1) contrary to reports in mice, the initial loss of ribbons largely recovered within a month after the noise exposure, although a significant amount of residual damage existed; and (2) while the response threshold fully recovered in a month, temporal processing continued to deteriorate during this period. PMID:23185359
Athanasios lliopoulos; John G. Michopoulos; John G. C. Hermanson
2012-01-01
This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in reference is a custom-made 6-DoF system called NRL66.3 and developed at the Naval...
Calcium Oxalate Accumulation in Malpighian Tubules of Silkworm (Bombyx mori)
NASA Astrophysics Data System (ADS)
Wyman, Aaron J.; Webb, Mary Alice
2007-04-01
Silkworm provides an ideal model system for study of calcium oxalate crystallization in kidney-like organs, called Malpighian tubules. During their growth and development, silkworm larvae accumulate massive amounts of calcium oxalate crystals in their Malpighian tubules with no apparent harm to the organism. This manuscript reports studies of crystal structure in the tubules along with analyses identifying molecular constituents of tubule exudate.
Capturing Neutrinos from a Star's Final Hours
NASA Astrophysics Data System (ADS)
Hensley, Kerry
2018-04-01
What happens on the last day of a massive star's life? In the hours before the star collapses and explodes as a supernova, the rapid evolution of material in its core creates swarms of neutrinos. Observing these neutrinos may help us understand the final stages of a massive star's life, but they've never been detected.
[Figure: A view of some of the 1,520 phototubes within the MiniBooNE neutrino detector. Observations from this and other detectors are helping to illuminate the nature of the mysterious neutrino. Credit: Fred Ullrich/FNAL]
Silent Signposts of Stellar Evolution
The nuclear fusion that powers stars generates tremendous amounts of energy. Much of this energy is emitted as photons, but a curious and elusive particle, the neutrino, carries away most of the energy in the late stages of stellar evolution. Stellar neutrinos can be created through two processes: thermal processes and beta processes. Thermal processes (e.g., pair production, in which a particle/antiparticle pair is created) depend on the temperature and pressure of the stellar core. Beta processes (i.e., when a proton converts to a neutron, or vice versa) are instead linked to the isotopic makeup of the star's core. This means that, if we can observe them, beta-process neutrinos may be able to tell us about the last steps of stellar nucleosynthesis in a dying star. But observing these neutrinos is not so easily done. Neutrinos are nearly massless, neutral particles that interact only feebly with matter; out of the whopping 10^60 neutrinos released in a supernova explosion, even the most sensitive detectors record the passage of just a few. Do we have a chance of detecting the beta-process neutrinos that are released in the final few hours of a star's life, before the collapse?
[Figure: Neutrino luminosities leading up to core collapse. Shortly before collapse, the luminosity of beta-process neutrinos outshines that of any other neutrino flavor or origin. Adapted from Patton et al. 2017]
Modeling Stellar Cores
To answer this question, Kelly Patton (University of Washington) and collaborators first used a stellar evolution model to explore neutrino production in massive stars. They modeled the evolution of two massive stars, 15 and 30 times the mass of our Sun, from the onset of nuclear fusion to the moment of collapse. The authors found that in the last few hours before collapse, during which the material in the stars' cores is rapidly upcycled into heavier elements, the flux from beta-process neutrinos rivals that of thermal neutrinos and even exceeds it at high energies. So now we know there are many beta-process neutrinos, but can we spot them?
[Figure: Neutrino and antineutrino fluxes at Earth from the last 2 hours of a 30-solar-mass star's life compared to the flux from background sources. The rows represent calculations using two different neutrino mass hierarchies. Adapted from Patton et al. 2017]
Observing Elusive Neutrinos
For an imminent supernova at a distance of 1 kiloparsec, the authors find that the presupernova electron neutrino flux rises above the background noise from the Sun, nuclear reactors, and radioactive decay within the Earth in the final two hours before collapse. Based on these calculations, current and future neutrino observatories should be able to detect tens of neutrinos from a supernova within 1 kiloparsec, about 30% of which would be beta-process neutrinos. As the distance to the star increases, the time and energy window within which neutrinos can be observed gradually narrows, until it closes for stars at a distance of about 30 kiloparsecs. Are there any nearby supergiants soon to go supernova so these predictions can be tested? At a distance of only 650 light-years, the red supergiant star Betelgeuse should produce detectable neutrinos when it explodes, an exciting opportunity for astronomers in the far future!
Citation: Kelly M. Patton et al 2017 ApJ 851 6. doi:10.3847/1538-4357/aa95c4
The Destructive Birth of Massive Stars and Massive Star Clusters
NASA Astrophysics Data System (ADS)
Rosen, Anna; Krumholz, Mark; McKee, Christopher F.; Klein, Richard I.; Ramirez-Ruiz, Enrico
2017-01-01
Massive stars play an essential role in the Universe. They are rare, yet the energy and momentum they inject into the interstellar medium with their intense radiation fields dwarf the contribution by their vastly more numerous low-mass cousins. Previous theoretical and observational studies have concluded that the feedback associated with massive stars' radiation fields is the dominant mechanism regulating massive star and massive star cluster (MSC) formation. Therefore, detailed simulation of the formation of massive stars and MSCs, which host hundreds to thousands of massive stars, requires an accurate treatment of radiation. For this purpose, we have developed a new, highly accurate hybrid radiation algorithm that properly treats the absorption of the direct radiation field from stars and the re-emission and processing by interstellar dust. We use our new tool to perform a suite of three-dimensional radiation-hydrodynamic simulations of the formation of massive stars and MSCs. For individual massive stellar systems, we simulate the collapse of massive pre-stellar cores with laminar and turbulent initial conditions and properly resolve regions where we expect instabilities to grow. We find that mass is channeled to the massive stellar system via gravitational and Rayleigh-Taylor (RT) instabilities. For laminar initial conditions, proper treatment of the direct radiation field produces later onset of RT instability, but does not suppress it entirely provided the edges of the radiation-dominated bubbles are adequately resolved. RT instabilities arise immediately for turbulent pre-stellar cores because the initial turbulence seeds the instabilities. To model MSC formation, we simulate the collapse of a dense, turbulent, magnetized Mcl = 10^6 M⊙ molecular cloud. We find that the influence of the magnetic pressure and radiative feedback slows down star formation. Furthermore, we find that star formation is suppressed along dense filaments where the magnetic field is amplified. Our results suggest that the combined effect of turbulence, magnetic pressure, and radiative feedback from massive stars is responsible for the low star formation efficiencies observed in molecular clouds.
A Taxonomy of Vocabulary Learning Strategies Used in Massively Multiplayer Online Role-Playing Games
ERIC Educational Resources Information Center
Bytheway, Julie
2015-01-01
Initiated in response to informal reports of vocabulary gains from gamers at universities in New Zealand and the Netherlands, this qualitative study explored how English language learners autonomously learn vocabulary while playing massively multiplayer online role-playing games (MMORPGs). Using research processes inherent in Grounded Theory, data…
Design of a massively parallel computer using bit serial processing elements
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Khouri, Kamal S.; Piatt, Jason E.; Zheng, Jianqing
1995-01-01
A 1-bit serial processor designed for a parallel computer architecture is described. This processor is used to develop a massively parallel computational engine, with a single instruction-multiple data (SIMD) architecture. The computer is simulated and tested to verify its operation and to measure its performance for further development.
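To make the notion of a 1-bit serial processing element concrete, the toy function below adds two operands presented one bit per clock, least-significant bit first; in a SIMD machine, many such elements would execute the same operation in lockstep on different data. This is a generic illustration in Python, with the function name and bit-stream convention chosen by us, not taken from the report.
# --- illustrative sketch (not from the cited work) ---
def bit_serial_add(a_bits, b_bits):
    """Add two numbers supplied least-significant bit first, one bit per
    'clock', the way a 1-bit serial processing element would."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    out.append(carry)
    return out

# 6 + 3 with LSB-first bit streams of equal length: expect 9 = [1, 0, 0, 1, 0].
print(bit_serial_add([0, 1, 1, 0], [1, 1, 0, 0]))
# --- end of sketch ---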
Examples from the Greenland-Project - Gentle Remediation Optiones (GROs) on Pb/zn Contaminated Sites
NASA Astrophysics Data System (ADS)
Friesl-Hanl, Wolfgang; Kidd, Petra; Siebielec, Grzegorz
2017-04-01
The GREENLAND-project brought together "best practice" examples of several field-applied gentle remediation techniques (EU FP7 project "Gentle remediation of trace element-contaminated land - GREENLAND"; www.greenland-project.eu) with 17 partners from 11 countries. Gentle remediation options (GRO) comprise environmentally friendly technologies that have little or no negative impact on the soil. The main technologies are phytoextraction, in situ immobilization, and assisted phytostabilization. Mining and processing activities negatively affect many sites worldwide. The huge amounts of moved and treated materials have led to considerable flows of wastes and emissions. Alongside the many advantages of processed ores to our society, adverse effects in nature and risks for the environment and human health are observed. Three stages of impact of Pb/Zn-ore treatment on the environment are discussed here: (1) On sites where the ores are mined, impacts result from crushing, grinding and concentrating activities, from parts of the installations that remain after the mine is abandoned, and from the massive amounts of remaining deposits or wastes (mine tailings). (2) On sites where smelting and processing take place, different waste materials are deposited depending on the process (Welz, Doerschel). The Welz process waste generally contains less Cd and Pb than the Doerschel process waste, which additionally shows higher water-extractable metals. (3) On sites close to the emitting source, metal contamination can be found in areas used for housing, gardening, and agriculture. Emissions consist mainly of oxides and sulfides (Zn, Cd), sulfates (Zn, Pb, and Cd), chlorides (Pb) and carbonates (Cd). All these wastes and emissions pose potential risks of dispersion of pollutants into the food chain due to erosion (wind, water), leaching and transfer into feed and food crops. In-situ treatments have the potential to improve the situation on site and will be shown by means of field experiments in Spain, Poland and Austria. Keywords: Mining and smelting, in-situ remediation, phytomanagement, gentle remediation options
Thermophilic versus Mesophilic Anaerobic Digestion of Sewage Sludge: A Comparative Review
Gebreeyessus, Getachew D.; Jenicek, Pavel
2016-01-01
During advanced biological wastewater treatment, a huge amount of sludge is produced as a by-product of the treatment process. Hence, reuse and recovery of resources and energy from the sludge is a big technological challenge. The processing of sludge produced by Wastewater Treatment Plants (WWTPs) is massive, and it takes up a big part of the overall operational costs. In this regard, anaerobic digestion (AD) of sewage sludge continues to be an attractive option to produce biogas that could contribute to reducing wastewater management costs and foster the sustainability of those WWTPs. At the same time, AD reduces sludge amounts, which again contributes to the reduction of sludge disposal costs. However, sludge volume minimization remains a challenge; thus, improvement of dewatering efficiency is an inevitable part of WWTP operation. As a result, AD parameters can have a significant impact on sludge properties. One of the most important operational parameters influencing the AD process is temperature. Consequently, the thermophilic and mesophilic modes of sludge AD have been compared for their pros and cons by many researchers. However, most comparisons focus on biogas yield, process speed and stability. Regarding the biogas yield, thermophilic sludge AD is preferred over the mesophilic one because of its faster biochemical reaction rate. Equally important, but not studied sufficiently until now, is the influence of temperature on digestate quality, which is expressed mainly by the sludge dewaterability and the reject water quality (chemical oxygen demand, ammonia nitrogen, and pH). In the comparison of thermophilic and mesophilic digestion processes, unfortunately, only limited and often inconclusive research has been published so far. Hence, recommendations for optimized technologies have not yet been made. The review presented provides a comparison of existing sludge AD technologies and the gaps that need to be filled so as to optimize the connection between the two systems. In addition, many other relevant AD process parameters that need to be addressed, including sludge rheology, are also reviewed and presented. PMID:28952577
NASA Astrophysics Data System (ADS)
De Becker, Michaël; Blomme, Ronny; Micela, Giusi; Pittard, Julian M.; Rauw, Gregor; Romero, Gustavo E.; Sana, Hugues; Stevens, Ian R.
2009-05-01
Several colliding-wind massive binaries are known to be non-thermal emitters in the radio domain. This constitutes strong evidence for the fact that an efficient particle acceleration process is at work in these objects. The acceleration mechanism is most probably the Diffusive Shock Acceleration (DSA) process in the presence of strong hydrodynamic shocks due to the colliding-winds. In order to investigate the physics of this particle acceleration, we initiated a multiwavelength campaign covering a large part of the electromagnetic spectrum. In this context, the detailed study of the hard X-ray emission from these sources in the SIMBOL-X bandpass constitutes a crucial element in order to probe this still poorly known topic of astrophysics. It should be noted that colliding-wind massive binaries should be considered as very valuable targets for the investigation of particle acceleration in a similar way as supernova remnants, but in a different region of the parameter space.
GBOOST: a GPU-based tool for detecting gene-gene interactions in genome-wide case control studies.
Yung, Ling Sing; Yang, Can; Wan, Xiang; Yu, Weichuan
2011-05-01
Collecting millions of genetic variations is feasible with the advanced genotyping technology. With a huge amount of genetic variations data in hand, developing efficient algorithms to carry out the gene-gene interaction analysis in a timely manner has become one of the key problems in genome-wide association studies (GWAS). Boolean operation-based screening and testing (BOOST), a recent work in GWAS, completes gene-gene interaction analysis in 2.5 days on a desktop computer. Compared with central processing units (CPUs), graphic processing units (GPUs) are highly parallel hardware and provide massive computing resources. We are, therefore, motivated to use GPUs to further speed up the analysis of gene-gene interactions. We implement the BOOST method based on a GPU framework and name it GBOOST. GBOOST achieves a 40-fold speedup compared with BOOST. It completes the analysis of Wellcome Trust Case Control Consortium Type 2 Diabetes (WTCCC T2D) genome data within 1.34 h on a desktop computer equipped with Nvidia GeForce GTX 285 display card. GBOOST code is available at http://bioinformatics.ust.hk/BOOST.html#GBOOST.
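The Boolean trick behind BOOST and GBOOST is to encode each SNP's genotypes as indicator bit vectors so that the 3x3 contingency table of a SNP pair reduces to bitwise AND plus population counts, an operation that maps naturally onto GPU hardware. The following is a minimal Python/NumPy sketch of that encoding idea only; it is not the GBOOST implementation, which packs bits into machine words and evaluates the interaction statistics on the GPU.
# --- illustrative sketch (not from the cited work) ---
import numpy as np

def encode_genotypes(genos):
    """Encode one SNP's genotypes (0, 1 or 2 per sample) as three boolean
    indicator vectors, one per genotype class."""
    genos = np.asarray(genos)
    return [genos == g for g in range(3)]

def pairwise_table(snp_a, snp_b):
    """3x3 genotype contingency table for two SNPs using only AND and counts,
    the Boolean operation that BOOST-style screening relies on."""
    a_bits = encode_genotypes(snp_a)
    b_bits = encode_genotypes(snp_b)
    return np.array([[np.count_nonzero(a & b) for b in b_bits] for a in a_bits])

# Toy data: 10 samples, genotypes coded 0/1/2.
snp1 = [0, 1, 2, 1, 0, 2, 2, 1, 0, 1]
snp2 = [1, 1, 0, 2, 0, 2, 1, 1, 0, 2]
print(pairwise_table(snp1, snp2))
# --- end of sketch ---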
NASA Technical Reports Server (NTRS)
Hicks, Rebecca
2009-01-01
A fiber Bragg grating is a portion of a core of a fiber optic strand that has been treated to affect the way light travels through the strand. Light within a certain narrow range of wavelengths will be reflected along the fiber by the grating, while light outside that range will pass through the grating mostly undisturbed. Since the range of wavelengths that can penetrate the grating depends on the grating itself as well as temperature and mechanical strain, fiber Bragg gratings can be used as temperature and strain sensors. This capability, along with the light-weight nature of the fiber optic strands in which the gratings reside, makes fiber optic sensors an ideal candidate for flight testing and monitoring in which temperature and wing strain are factors. The purpose of this project is to research the availability of software capable of processing massive amounts of data in both real-time and post-flight settings, and to produce software segments that can be integrated to assist in the task as well.
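The sensing principle rests on the Bragg condition, λ_B = 2 n_eff Λ, with the reflected wavelength shifting approximately linearly with strain and temperature. The short Python sketch below encodes that first-order relation; the coefficient values are typical silica-fiber numbers assumed for illustration and are not taken from this report.
# --- illustrative sketch (not from the cited work) ---
def bragg_wavelength(n_eff, period_nm):
    """Bragg condition: reflected wavelength = 2 * effective index * grating period."""
    return 2.0 * n_eff * period_nm

def wavelength_shift(lambda_b_nm, strain, delta_t_c,
                     photoelastic=0.22, thermo_optic_per_c=6.7e-6,
                     expansion_per_c=0.55e-6):
    """First-order Bragg wavelength shift with strain and temperature change.
    Coefficients are typical silica-fiber values, used here only for illustration."""
    return lambda_b_nm * ((1.0 - photoelastic) * strain
                          + (thermo_optic_per_c + expansion_per_c) * delta_t_c)

lam = bragg_wavelength(1.447, 535.0)   # a grating reflecting near 1548 nm
print(lam, wavelength_shift(lam, strain=1e-4, delta_t_c=10.0))
# --- end of sketch ---
With these representative coefficients the sensitivity works out to roughly 1.2 pm per microstrain and 10-12 pm per degree Celsius near 1550 nm, which is why separate gratings or a second measurement are needed to disentangle strain from temperature.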
Studies on the hot corrosion of a nickel-base superalloy, Udimet 700
NASA Technical Reports Server (NTRS)
Misra, A. K.
1984-01-01
The hot corrosion of a nickel-base superalloy, Udimet 700, was studied in the temperature range of 884 to 965 °C and with different amounts of Na2SO4. Two different modes of degradation were identified: (1) formation of a Na2MoO4 - MoO3 melt and fluxing by this melt, and (2) formation of large interconnected sulfides. The dissolution of Cr2O3 and TiO2 in the Na2SO4 melt does not play a significant role in the overall corrosion process. The conditions for the formation of massive interconnected sulfides were identified and a mechanism of degradation due to sulfide formation is described. The formation of the Na2MoO4 - MoO3 melt requires an induction period, and various physicochemical processes during the induction period were identified. The factors affecting the length of the induction period were also examined. Melt penetration through the oxide appears to be the prime mode of degradation, whether the degradation is due to the formation of sulfides or the formation of the Na2MoO4 - MoO3 melt.
Trends in Nanomaterial-Based Non-Invasive Diabetes Sensing Technologies
Makaram, Prashanth; Owens, Dawn; Aceros, Juan
2014-01-01
Blood glucose monitoring is considered the gold standard for diabetes diagnostics and self-monitoring. However, the underlying process is invasive and highly uncomfortable for patients. Furthermore, the process must be completed several times a day to successfully manage the disease, which greatly contributes to the massive need for non-invasive monitoring options. Human serums, such as saliva, sweat, breath, urine and tears, contain traces of glucose and are easily accessible. Therefore, they allow minimal to non-invasive glucose monitoring, making them attractive alternatives to blood measurements. Numerous developments regarding noninvasive glucose detection techniques have taken place over the years, but recently, they have gained recognition as viable alternatives, due to the advent of nanotechnology-based sensors. Such sensors are optimal for testing the amount of glucose in serums other than blood thanks to their enhanced sensitivity and selectivity ranges, in addition to their size and compatibility with electronic circuitry. These nanotechnology approaches are rapidly evolving, and new techniques are constantly emerging. Hence, this manuscript aims to review current and future nanomaterial-based technologies utilizing saliva, sweat, breath and tears as a diagnostic medium for diabetes monitoring. PMID:26852676
NASA Astrophysics Data System (ADS)
Fu, Yao; Zhang, Xian-Cheng; Sui, Jian-Feng; Tu, Shan-Tung; Xuan, Fu-Zhen; Wang, Zheng-Dong
2015-04-01
The aim of this paper was to develop a one-step in situ method to synthesize TiN-reinforced Al metallic matrix composite coatings on Ti6Al4V alloy. In this method, Al powder and nitrogen gas were simultaneously fed into the feeding nozzle during a laser nitriding process. The microstructure, microhardness and sliding wear resistance of TiN/Al coatings synthesized at different laser powers during laser nitriding were investigated. Results showed that crack- and pore-free coatings can be made with the proposed method. However, the morphologies and distribution of the TiN dendrites and the wear resistance of the coatings were strongly dependent on the laser power used in nitriding. With increasing laser power, the amount and density of massive TiN dendritic structures in the coating decreased and the elongated and narrow dendrites increased, leading to an increase in the wear resistance of the coating. When the laser power is high, the convectional flow pattern of the melt pool can be seen near the bottom of the pool.
Comparison of Copper Scavenging Capacity between Two Different Red Mud Types
Ma, Yingqun; Si, Chunhua; Lin, Chuxia
2012-01-01
A batch experiment was conducted to compare the Cu scavenging capacity between two different red mud types: the first one was a highly basic red mud derived from a combined sintering and Bayer process, and the second one was a seawater-neutralized red mud derived from the Bayer process. The first red mud contained substantial amounts of CaCO3, which, in combination with the high OH− activity, favored the immobilization of water-borne Cu through massive formation of atacamite. In comparison, the seawater-neutralized red mud had a lower pH and was dominated by boehmite, which was likely to play a significant role in Cu adsorption. Overall, it appears that Cu was more tightly retained by the CaCO3-dominated red mud than the boehmite-dominated red mud. It is concluded that the heterogeneity of red mud has marked influences on its capacity to immobilize water-borne Cu and maintain the long-term stability of the immobilized Cu species. The research findings obtained from this study have implications for the development of Cu immobilization technology by using appropriate waste materials generated from the aluminium industry.
Trends in Nanomaterial-Based Non-Invasive Diabetes Sensing Technologies.
Makaram, Prashanth; Owens, Dawn; Aceros, Juan
2014-04-21
Blood glucose monitoring is considered the gold standard for diabetes diagnostics and self-monitoring. However, the underlying process is invasive and highly uncomfortable for patients. Furthermore, the process must be completed several times a day to successfully manage the disease, which greatly contributes to the massive need for non-invasive monitoring options. Human serums, such as saliva, sweat, breath, urine and tears, contain traces of glucose and are easily accessible. Therefore, they allow minimal to non-invasive glucose monitoring, making them attractive alternatives to blood measurements. Numerous developments regarding noninvasive glucose detection techniques have taken place over the years, but recently, they have gained recognition as viable alternatives, due to the advent of nanotechnology-based sensors. Such sensors are optimal for testing the amount of glucose in serums other than blood thanks to their enhanced sensitivity and selectivity ranges, in addition to their size and compatibility with electronic circuitry. These nanotechnology approaches are rapidly evolving, and new techniques are constantly emerging. Hence, this manuscript aims to review current and future nanomaterial-based technologies utilizing saliva, sweat, breath and tears as a diagnostic medium for diabetes monitoring.
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles grid partitioning with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partitioning scheme, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate for handling large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
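The serial kernel that both parallel algorithms target is the accumulation of drainage area down the receiver (stream-net) graph. As a point of reference, a minimal serial version of that accumulation is sketched below in Python; the receiver-array representation and names are our own assumptions, and the paper's contribution is the parallel partitioning and global reduction, not this serial step.
# --- illustrative sketch (not from the cited work) ---
import numpy as np

def drainage_area(receiver, cell_area=1.0):
    """Accumulate drainage area given, for each node, the index of the node it
    drains to (its 'receiver'); outlets point to themselves. Donor-free nodes
    are processed first, giving a topological order over the stream net."""
    n = len(receiver)
    area = np.full(n, cell_area, dtype=float)
    indegree = np.zeros(n, dtype=int)
    for i, r in enumerate(receiver):
        if r != i:
            indegree[r] += 1
    stack = [i for i in range(n) if indegree[i] == 0]
    while stack:
        i = stack.pop()
        r = receiver[i]
        if r != i:
            area[r] += area[i]
            indegree[r] -= 1
            if indegree[r] == 0:
                stack.append(r)
    return area

# Tiny example: nodes 0 and 1 drain into 2; 2 and 3 drain into the outlet 4.
print(drainage_area([2, 2, 4, 4, 4]))   # -> [1. 1. 3. 1. 5.]
# --- end of sketch ---
The communication cost of this accumulation in a distributed setting comes from catchments that span several subdomains, which is exactly what the catchment-aware partitioning described above tries to avoid.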
NASA Astrophysics Data System (ADS)
Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo
2017-08-01
We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements.
Catalogue identifier: AFBT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPLv3
No. of lines in distributed program, including test data, etc.: 913552
No. of bytes in distributed program, including test data, etc.: 270876249
Distribution format: tar.gz
Programming language: CUDA/C, MATLAB.
Computer: Intel x64 CPU, GPU supporting CUDA technology.
Operating system: 64-bit Windows 7 Professional.
Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB; CUDA code has been parallelized.
RAM: Dependent on user's parameters, typically between several gigabytes and several tens of gigabytes
Classification: 6.5, 18.
Nature of problem: Speed-up of data processing in optical coherence microscopy
Solution method: Utilization of GPU for massively parallel data processing
Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data)
Running time: 1.8 s for one B-scan (150x faster in comparison to the CPU data processing time)
Joseph, Aneeta Mary; Snellings, Ruben; Van den Heede, Philip; Matthys, Stijn
2018-01-01
Huge amounts of waste are being generated, and even though the incineration process reduces the mass and volume of waste to a large extent, massive amounts of residues still remain. On average, out of 1.3 billion tons of municipal solid wastes generated per year, around 130 and 2.1 million tons are incinerated in the world and in Belgium, respectively. Around 400 kT of bottom ash residues are generated in Flanders, out of which only 102 kT are utilized here, and the rest is exported or landfilled due to non-conformity to environmental regulations. Landfilling makes the valuable resources in the residues unavailable and results in more primary raw materials being used, increasing mining and related hazards. Identifying and employing the right pre-treatment technique for the highest value application is the key to attaining a circular economy. We reviewed the present pre-treatment and utilization scenarios in Belgium, and the advancements in research around the world for realization of maximum utilization are reported in this paper. Uses of the material in the cement industry as a binder and cement raw meal replacement are identified as possible effective utilization options for large quantities of bottom ash. Pre-treatment techniques that could facilitate this use are also discussed. With all the research evidence available, there is now a need for combined efforts from incineration and the cement industry for technical and economic optimization of the process flow. PMID:29337887
A SOAP Web Service for accessing MODIS land product subsets
DOE Office of Scientific and Technical Information (OSTI.GOV)
SanthanaVannan, Suresh K; Cook, Robert B; Pan, Jerry Yun
2011-01-01
Remote sensing data from satellites have provided valuable information on the state of the earth for several decades. Since March 2000, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on board NASA's Terra and Aqua satellites has been providing estimates of several land parameters useful in understanding earth system processes at global, continental, and regional scales. However, the HDF-EOS file format, specialized software needed to process the HDF-EOS files, data volume, and the high spatial and temporal resolution of MODIS data make it difficult for users wanting to extract small but valuable amounts of information from the MODIS record. To overcome this usability issue, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) developed a Web service that provides subsets of MODIS land products using Simple Object Access Protocol (SOAP). The ORNL DAAC MODIS subsetting Web service is a unique way of serving satellite data that exploits a fairly established and popular Internet protocol to allow users access to massive amounts of remote sensing data. The Web service provides MODIS land product subsets up to 201 x 201 km in a non-proprietary comma delimited text file format. Users can programmatically query the Web service to extract MODIS land parameters for real time data integration into models, decision support tools or connect to workflow software. Information regarding the MODIS SOAP subsetting Web service is available on the World Wide Web (WWW) at http://daac.ornl.gov/modiswebservice.
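For orientation, a subset request from a script might look like the sketch below. The WSDL location, operation name and argument names here are assumptions made purely for illustration; the actual interface should be taken from the service documentation at the URL given in the abstract.
# --- illustrative sketch; WSDL URL and operation are assumed, not verified ---
from zeep import Client

WSDL = "https://daac.ornl.gov/MODIS_soapservice.wsdl"   # hypothetical location

client = Client(WSDL)
subset = client.service.getsubset(        # hypothetical operation name/arguments
    latitude=40.0, longitude=-105.0,
    product="MOD13Q1", band="250m_16_days_NDVI",
    startdate="A2010001", enddate="A2010032",
    kmAboveBelow=1, kmLeftRight=1,
)
print(subset)
# --- end of sketch ---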
NASA Astrophysics Data System (ADS)
Inoue, Tsuyoshi; Hennebelle, Patrick; Fukui, Yasuo; Matsumoto, Tomoaki; Iwasaki, Kazunari; Inutsuka, Shu-ichiro
2018-05-01
Recent observations suggest that an intensive molecular cloud collision can trigger massive star/cluster formation. The most important physical process caused by the collision is a shock compression. In this paper, the influence of a shock wave on the evolution of a molecular cloud is studied numerically by using isothermal magnetohydrodynamics simulations with the effect of self-gravity. Adaptive mesh refinement and sink particle techniques are used to follow the long-time evolution of the shocked cloud. We find that the shock compression of a turbulent inhomogeneous molecular cloud creates massive filaments, which lie perpendicular to the background magnetic field, as we have pointed out in a previous paper. The massive filament shows global collapse along the filament, which feeds a sink particle located at the collapse center. We observe an accretion rate of Ṁ_acc > 10^-4 M⊙ yr^-1, high enough to allow the formation of even O-type stars. The most massive sink particle achieves M > 50 M⊙ in a few times 10^5 yr after the onset of the filament collapse.
Highly accurate quantitative spectroscopy of massive stars in the Galaxy
NASA Astrophysics Data System (ADS)
Nieva, María-Fernanda; Przybilla, Norbert
2017-11-01
Achieving high accuracy and precision in stellar parameter and chemical composition determinations is challenging in massive star spectroscopy. On one hand, the target selection for an unbiased sample build-up is complicated by several types of peculiarities that can occur in individual objects. On the other hand, composite spectra are often not recognized as such even at medium-high spectral resolution and typical signal-to-noise ratios, even though multiplicity among massive stars is widespread. In particular, surveys that produce large amounts of automatically reduced data are prone to overlook details that turn hazardous for the analysis with techniques that have been developed for a set of standard assumptions applicable to the spectrum of a single star. Much larger systematic errors than anticipated may therefore result because of the unrecognized true nature of the investigated objects, or much smaller sample sizes of objects for the analysis than initially planned, if recognized. More factors to be taken care of are the multiple steps from the choice of instrument, through the details of the data reduction chain, to the choice of modelling code, input data, analysis technique and the selection of the spectral lines to be analyzed. Only when all the possible pitfalls are avoided can a precise and accurate characterization of the stars in terms of fundamental parameters and chemical fingerprints be achieved, forming the basis for further investigations regarding e.g. stellar structure and evolution or the chemical evolution of the Galaxy. The scope of the present work is to provide the massive star and also other astrophysical communities with criteria to evaluate the quality of spectroscopic investigations of massive stars before interpreting them in a broader context. The discussion is guided by our experiences made in the course of over a decade of studies of massive star spectroscopy, ranging from the simplest single objects to multiple systems.
Towards Direct Manipulation and Remixing of Massive Data: The EarthServer Approach
NASA Astrophysics Data System (ADS)
Baumann, P.
2012-04-01
Complex analytics on "big data" is one of the core challenges of current Earth science, generating strong requirements for on-demand processing and filtering of massive data sets. Issues under discussion include flexibility, performance, scalability, and the heterogeneity of the information types involved. In other domains, high-level query languages (such as those offered by database systems) have proven successful in the quest for flexible, scalable data access interfaces to massive amounts of data. However, due to the lack of support for many of the Earth science data structures, database systems are only used for registries and catalogs, but not for the bulk of spatio-temporal data. One core information category in this field is given by coverage data. ISO 19123 defines coverages, simplifying, as a representation of a "space-time varying phenomenon". This model can express a large class of Earth science data structures, including rectified and non-rectified rasters, curvilinear grids, point clouds, TINs, general meshes, trajectories, surfaces, and solids. This abstract definition, which is too high-level to establish interoperability, is concretized by the OGC GML 3.2.1 Application Schema for Coverages Standard into an interoperable representation. The OGC Web Coverage Processing Service (WCPS) Standard defines a declarative query language on multi-dimensional raster-type coverages, such as 1D in-situ sensor timeseries, 2D EO imagery, 3D x/y/t image time series and x/y/z geophysical data, 4D x/y/z/t climate and ocean data. Hence, important ingredients for versatile coverage retrieval are given - however, this potential has not been fully unleashed by service architectures up to now. The EU FP7-INFRA project EarthServer, launched in September 2011, aims at enabling standards-based on-demand analytics over the Web for Earth science data based on an integration of W3C XQuery for alphanumeric data and OGC-WCPS for raster data. Ultimately, EarthServer will support all OGC coverage types. The platform used by EarthServer is the rasdaman raster database system. To exploit heterogeneous multi-parallel platforms, automatic request distribution and orchestration is being established. Client toolkits are under development which will allow users to quickly compose bespoke interactive clients, ranging from mobile devices over Web clients to high-end immersive virtual reality. The EarthServer platform has been deployed in six large-scale data centres with the aim of setting up Lighthouse Applications addressing all Earth Sciences, including satellite and airborne earth observation as well as use cases from atmosphere, ocean, snow, and ice monitoring, and geology on Earth and Mars. These services, each of which will ultimately host at least 100 TB, will form a peer cloud with distributed query processing for arbitrarily mixing database and in-situ access. With its ability to directly manipulate, analyze and remix massive data, the goal of EarthServer is to lift the data providers' semantic level from data stewardship to service stewardship.
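To make the role of WCPS concrete, the sketch below submits a declarative coverage query over HTTP from Python. The endpoint URL, coverage name, parameter names and the exact query wording are illustrative assumptions and are not taken from the EarthServer deployments described here.
# --- illustrative sketch; endpoint, coverage and query details are assumed ---
import requests

ENDPOINT = "https://example.org/rasdaman/ows"        # hypothetical WCPS endpoint
WCPS_QUERY = """
for c in ( MeanSurfaceTemperature )
return avg( c[ ansi("2010-01-01":"2010-12-31") ] )
"""

resp = requests.get(ENDPOINT, params={
    "service": "WCS", "version": "2.0.1",
    "request": "ProcessCoverages", "query": WCPS_QUERY,
})
print(resp.status_code, resp.text[:200])
# --- end of sketch ---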
Use the High-Resolution Numerical Model to Simulate Typhoon Morakot 2009
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Shi, Jainn J.; Lin, Pay-Liam
2010-01-01
Typhoon Morakot struck Taiwan on the night of Friday, August 7th, 2009 as a category 2 storm with sustained winds of 85 knots (92 mph). Although the center made landfall in Hualien county along the central east coast of Taiwan and passed over the central northern part of the island, it was southern Taiwan that received the worst effects of the storm, where locally as much as 2200 mm (2.2 m) of rain were reported, resulting in the worst flooding there in 50 years. The result of the enormous amount of rain has been massive flooding and devastating mudslides. More than 600 people are confirmed dead. In this paper, we will present the results from high-resolution (2-km) WRF simulations for this typhoon case. The results showed that the model captured the observed rainfall well in terms of both maximum rainfall area and intensity. The model results also showed that the heavy amounts of rain over the southern portion of the island are due to persistent southwesterly flow associated with Morakot; its circulation was able to draw up copious amounts of moisture from the South China Sea into southern Taiwan, where it was able to interact with the steep topography. In the paper, we will also present results from sensitivity tests of terrain heights and SST on the precipitation processes (rainfall) associated with Typhoon Morakot (2009). In addition, we will present high-resolution visualization (36-second and 2-km) to show the evolution of Typhoon Morakot.
Cloud-based NEXRAD Data Processing and Analysis for Hydrologic Applications
NASA Astrophysics Data System (ADS)
Seo, B. C.; Demir, I.; Keem, M.; Goska, R.; Weber, J.; Krajewski, W. F.
2016-12-01
The real-time and full historical archive of NEXRAD Level II data, covering the entire United States from 1991 to present, recently became available on Amazon cloud S3. This provides a new opportunity to rebuild the Hydro-NEXRAD software system that enabled users to access vast amounts of NEXRAD radar data in support of a wide range of research. The system processes basic radar data (Level II) and delivers radar-rainfall products based on the user's custom selection of features such as space and time domain, river basin, rainfall product space and time resolution, and rainfall estimation algorithms. The cloud-based new system can eliminate prior challenges faced by Hydro-NEXRAD data acquisition and processing: (1) temporal and spatial limitation arising from the limited data storage; (2) archive (past) data ingestion and format conversion; and (3) separate data processing flow for the past and real-time Level II data. To enhance massive data processing and computational efficiency, the new system is implemented and tested for the Iowa domain. This pilot study begins by ingesting rainfall metadata and implementing Hydro-NEXRAD capabilities on the cloud using the new polarimetric features, as well as the existing algorithm modules and scripts. The authors address the reliability and feasibility of cloud computation and processing, followed by an assessment of response times from an interactive web-based system.
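As a hedged illustration of the data-access step described above, the following Python snippet lists NEXRAD Level II volume files for one radar site and one day from the public Amazon S3 archive; the bucket name and key layout follow the commonly documented open-data convention and should be verified against the current archive before use.

```python
# Sketch of listing NEXRAD Level II volume scans from the public S3 archive.
# Bucket name and key layout are assumptions based on the usual open-data convention.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))  # anonymous access

bucket = "noaa-nexrad-level2"      # public NEXRAD Level II bucket (assumption)
prefix = "2016/06/01/KDVN/"        # year/month/day/radar-site (Davenport, IA)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# A single volume scan could then be downloaded for rainfall processing, e.g.:
# s3.download_file(bucket, some_key, "local_volume_file")
```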
NASA Astrophysics Data System (ADS)
Choi, Junil; Love, David J.; Bidigare, Patrick
2014-10-01
The concept of deploying a large number of antennas at the base station, often called massive multiple-input multiple-output (MIMO), has drawn considerable interest because of its potential ability to revolutionize current wireless communication systems. Most literature on massive MIMO systems assumes time division duplexing (TDD), although frequency division duplexing (FDD) dominates current cellular systems. Due to the large number of transmit antennas at the base station, currently standardized approaches would require a large percentage of the precious downlink and uplink resources in FDD massive MIMO be used for training signal transmissions and channel state information (CSI) feedback. To reduce the overhead of the downlink training phase, we propose practical open-loop and closed-loop training frameworks in this paper. We assume the base station and the user share a common set of training signals in advance. In open-loop training, the base station transmits training signals in a round-robin manner, and the user successively estimates the current channel using long-term channel statistics such as temporal and spatial correlations and previous channel estimates. In closed-loop training, the user feeds back the best training signal to be sent in the future based on channel prediction and the previously received training signals. With a small amount of feedback from the user to the base station, closed-loop training offers better performance in the data communication phase, especially when the signal-to-noise ratio is low, the number of transmit antennas is large, or prior channel estimates are not accurate at the beginning of the communication setup, all of which would be mostly beneficial for massive MIMO systems.
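The open-loop idea can be sketched with a toy simulation: the base station cycles through a shared training codebook in round-robin order, while the user tracks the channel with a Kalman filter that exploits temporal (Gauss-Markov) and spatial correlation. All modelling choices below (DFT codebook, exponential correlation, noise level) are illustrative assumptions, not the exact scheme of the paper.

```python
# Toy open-loop training sketch: round-robin training vectors + Kalman tracking.
import numpy as np

rng = np.random.default_rng(0)
M, T = 8, 64                  # transmit antennas, training slots
eta, sigma2 = 0.98, 0.1       # temporal correlation, noise variance

# Spatial correlation (exponential model) and shared DFT training codebook
R = np.array([[0.7 ** abs(i - j) for j in range(M)] for i in range(M)], dtype=complex)
codebook = np.fft.fft(np.eye(M)) / np.sqrt(M)      # columns are training vectors

L = np.linalg.cholesky(R)
def cn(n):  # standard circularly-symmetric complex Gaussian vector
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

h = L @ cn(M)                                      # true channel ~ CN(0, R)
h_est, P = np.zeros(M, dtype=complex), R.copy()    # Kalman state and covariance

for t in range(T):
    # channel evolves; BS sends the next codebook vector in round-robin order
    h = eta * h + np.sqrt(1 - eta ** 2) * (L @ cn(M))
    s = codebook[:, t % M]
    y = np.vdot(s, h) + np.sqrt(sigma2) * cn(1)[0]  # scalar observation y = s^H h + n

    # Kalman predict/update at the user side
    h_pred = eta * h_est
    P_pred = (eta ** 2) * P + (1 - eta ** 2) * R
    S = np.vdot(s, P_pred @ s).real + sigma2
    K = (P_pred @ s) / S
    h_est = h_pred + K * (y - np.vdot(s, h_pred))
    P = P_pred - np.outer(K, s.conj() @ P_pred)

nmse = np.linalg.norm(h - h_est) ** 2 / np.linalg.norm(h) ** 2
print(f"NMSE after {T} training slots: {nmse:.3f}")
```

In a closed-loop variant, the user would additionally feed back which codebook vector it wants next (based on its prediction), rather than waiting for the round-robin schedule.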
The nature of ultra-massive lens galaxies
NASA Astrophysics Data System (ADS)
Canameras, Raoul
2017-08-01
During the past decade, strong gravitational lensing analyses have contributed tremendously to the characterization of the inner properties of massive early-type galaxies beyond the local Universe. Here we intend to extend studies of this kind to the most massive lens galaxies known to date, well outside the mass limits investigated by previous lensing surveys. This will allow us to probe the physics of the likely descendants of the most violent episodes of star formation and of the compact massive galaxies at high redshift. We propose WFC3 imaging (F438W and F160W) of four extremely massive early-type lens galaxies at z ~ 0.5, in order to put them into context with the evolutionary trends of ellipticals as a function of mass and redshift. These systems were discovered in the SDSS and show one single main lens galaxy with a stellar mass above 1.5x10^12 Msun and large Einstein radii. Our high-resolution spectroscopic follow-up with VLT/X-shooter provides secure lens and source redshifts, between 0.3 and 0.7 and between 1.5 and 2.5, respectively, and confirms extreme stellar velocity dispersions > 400 km/s for the lenses. The excellent angular resolution of the proposed WFC3 imaging - not achievable from the ground - is the remaining indispensable piece of information to: (1) resolve the lens structural parameters and obtain robust measurements of their stellar mass distributions, (2) model the amount and distribution of the lens total masses and measure their M/L ratios and stellar IMF with joint strong lensing and stellar dynamics analyses, and (3) enhance our on-going lens models through the most accurate positions and morphologies of the blue multiply-imaged sources.
Recycling rice husks for high-capacity lithium battery anodes
Jung, Dae Soo; Ryou, Myung-Hyun; Sung, Yong Joo; Park, Seung Bin; Choi, Jang Wook
2013-01-01
The rice husk is the outer covering of a rice kernel and protects the inner ingredients from external attack by insects and bacteria. To perform this function while ventilating air and moisture, rice plants have developed unique nanoporous silica layers in their husks through years of natural evolution. Despite the massive amount of annual production near 10^8 tons worldwide, so far rice husks have been recycled only for low-value agricultural items. In an effort to recycle rice husks for high-value applications, we convert the silica to silicon and use it for high-capacity lithium battery anodes. Taking advantage of the interconnected nanoporous structure naturally existing in rice husks, the converted silicon exhibits excellent electrochemical performance as a lithium battery anode, suggesting that rice husks can be a massive resource for use in high-capacity lithium battery negative electrodes. PMID:23836636
Recycling rice husks for high-capacity lithium battery anodes.
Jung, Dae Soo; Ryou, Myung-Hyun; Sung, Yong Joo; Park, Seung Bin; Choi, Jang Wook
2013-07-23
The rice husk is the outer covering of a rice kernel and protects the inner ingredients from external attack by insects and bacteria. To perform this function while ventilating air and moisture, rice plants have developed unique nanoporous silica layers in their husks through years of natural evolution. Despite the massive amount of annual production near 10^8 tons worldwide, so far rice husks have been recycled only for low-value agricultural items. In an effort to recycle rice husks for high-value applications, we convert the silica to silicon and use it for high-capacity lithium battery anodes. Taking advantage of the interconnected nanoporous structure naturally existing in rice husks, the converted silicon exhibits excellent electrochemical performance as a lithium battery anode, suggesting that rice husks can be a massive resource for use in high-capacity lithium battery negative electrodes.
NASA Astrophysics Data System (ADS)
Hempelmann, Nils; Ehbrecht, Carsten; Alvarez-Castro, Carmen; Brockmann, Patrick; Falk, Wolfgang; Hoffmann, Jörg; Kindermann, Stephan; Koziol, Ben; Nangini, Cathy; Radanovics, Sabine; Vautard, Robert; Yiou, Pascal
2018-01-01
Analyses of extreme weather events and their impacts often require big data processing of ensembles of climate model simulations. Researchers generally proceed by downloading the data from the providers and processing the data files "at home" with their own analysis processes. However, the growing amount of available climate model and observation data makes this procedure quite awkward. In addition, data processing knowledge is kept local, instead of being consolidated into a common resource of reusable code. These drawbacks can be mitigated by using a web processing service (WPS). A WPS hosts services such as data analysis processes that are accessible over the web, and can be installed close to the data archives. We developed a WPS named 'flyingpigeon' that communicates over an HTTP network protocol based on standards defined by the Open Geospatial Consortium (OGC), to be used by climatologists and impact modelers as a tool for analyzing large datasets remotely. Here, we present the current processes we developed in flyingpigeon relating to commonly-used processes (preprocessing steps, spatial subsets at continent, country or region level, and climate indices) as well as methods for specific climate data analysis (weather regimes, analogues of circulation, segetal flora distribution, and species distribution models). We also developed a novel, browser-based interactive data visualization for circulation analogues, illustrating the flexibility of WPS in designing custom outputs. Bringing the software to the data instead of transferring the data to the code is becoming increasingly necessary, especially with the upcoming massive climate datasets.
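A minimal sketch of how such a WPS might be driven from a client using the OWSLib library is shown below; the service URL, process identifier and input names are hypothetical placeholders for a flyingpigeon-style deployment, not its documented interface.

```python
# Sketch of calling a remote WPS process with OWSLib; URL, process name, and
# input keys are hypothetical placeholders.
from owslib.wps import WebProcessingService, monitorExecution

wps = WebProcessingService("https://example.org/wps")   # hypothetical endpoint

# Inspect what the service offers
for proc in wps.processes:
    print(proc.identifier, "-", proc.title)

# Launch a (hypothetical) country-level subsetting process on a remote NetCDF file
inputs = [
    ("resource", "https://example.org/data/tasmax_EUR-11_day.nc"),
    ("region", "DEU"),
]
execution = wps.execute("subset_countries", inputs)
monitorExecution(execution)            # poll until the asynchronous job finishes

for output in execution.processOutputs:
    print(output.identifier, output.reference)   # URL of the result, served by the WPS
```

The computation runs next to the archive; only the (much smaller) subset result is ever transferred to the client.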
High pressure hydriding of sponge-Zr in steam-hydrogen mixtures
NASA Astrophysics Data System (ADS)
Soo Kim, Yeon; Wang, Wei-E.; Olander, D. R.; Yagnik, S. K.
1997-07-01
Hydriding kinetics of thin sponge-Zr layers metallurgically bonded to a Zircaloy disk has been studied by thermogravimetry in the temperature range 350-400°C in 7 MPa hydrogen-steam mixtures. Some specimens were prefilmed with a thin oxide layer prior to exposure to the reactant gas; all were coated with a thin layer of gold to avoid premature reaction at edges. Two types of hydriding were observed in prefilmed specimens, viz., a slow hydrogen absorption process that precedes an accelerated (massive) hydriding. At 7 MPa total pressure, the critical ratio of H2/H2O above which massive hydriding occurs at 400°C is ~200. The critical H2/H2O ratio is shifted to ~2.5 × 10^3 at 350°C. The slow hydriding process occurs only when conditions for hydriding and oxidation are approximately equally favorable. Based on maximum weight gain, the specimen is completely converted to δ-ZrH2 by massive hydriding in ~5 h at a hydriding rate of ~10^-6 mol H/cm^2 s. Incubation times of 10-20 h prior to the onset of massive hydriding increase with prefilm oxide thickness in the range of 0-10 μm. By changing to a steam-enriched gas, massive hydriding that initially started in a steam-starved condition was arrested by re-formation of a protective oxide scale.
Massive Open Online Courses (MOOC) for Teaching Portuguese for Foreigners: A Case Study
ERIC Educational Resources Information Center
Zancanaro, Airton; Domingues, Maria Jose Carvalho de Souza
2018-01-01
Education is experiencing a period of change and the traditional models of education adopted by universities need to go through innovative processes to democratize knowledge, attract new learners and optimize resources. The use of Open Educational Resources (OERs) and Massive Open Online Courses (MOOC) can contribute to such changes. However,…
ERIC Educational Resources Information Center
Chung, Liang-Yi
2015-01-01
Massive Open Online Courses (MOOCs) are expanding the scope of online distance learning in the creation of a cross-country global learning environment. For learners worldwide, MOOCs offer a wealth of online learning resources. However, such a diversified environment makes the learning process complicated and challenging. To achieve their…
ERIC Educational Resources Information Center
Voulgari, Iro; Komis, Vassilis; Sampson, Demetrios G.
2014-01-01
Over the past decade research has recognised the learning potential of massively multiplayer online games (MMOGs). MMOGs can be used by the technology-enhanced learning research community to study and identify good educational practices that may inspire engaging, creative and motivating approaches for education and learning. To this end, in this…
Massive star winds interacting with magnetic fields on various scales
NASA Astrophysics Data System (ADS)
David-Uraz, A.; Petit, V.; Erba, C.; Fullerton, A.; Walborn, N.; MacInnis, R.
2018-01-01
One of the defining processes which govern massive star evolution is their continuous mass loss via dense, supersonic line-driven winds. In the case of those OB stars which also host a surface magnetic field, the interaction between that field and the ionized outflow leads to complex circumstellar structures known as magnetospheres. In this contribution, we review recent developments in the field of massive star magnetospheres, including current efforts to characterize the largest magnetosphere surrounding an O star: that of NGC 1624-2. We also discuss the potential of the "analytic dynamical magnetosphere" (ADM) model to interpret multi-wavelength observations. Finally, we examine the possible effects of — heretofore undetected — small-scale magnetic fields on massive star winds and compare their hypothetical consequences to existing, unexplained observations.
Stanescu, Ana; Caragea, Doina
2015-01-01
Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework.
2015-01-01
Background Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Results Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. Conclusions In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework. PMID:26356316
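A hedged toy version of the ensemble-of-self-training idea is sketched below, using scikit-learn on synthetic imbalanced data rather than the splice-site datasets of the paper; the class-balanced bootstrap used here is a simplification of the dynamic balancing described above.

```python
# Illustrative ensemble of self-training classifiers on synthetic imbalanced data.
# Unlabeled examples are marked with the label -1, as scikit-learn expects.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=8000, n_features=20, weights=[0.99, 0.01],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y,
                                          random_state=0)

# Keep only a handful of labels (stratified so both classes appear);
# everything else is marked unlabeled with -1.
pos, neg = np.where(y_tr == 1)[0], np.where(y_tr == 0)[0]
labeled = np.concatenate([rng.choice(pos, 10, replace=False),
                          rng.choice(neg, 200, replace=False)])
y_semi = np.full_like(y_tr, -1)
y_semi[labeled] = y_tr[labeled]

votes = []
for seed in range(10):
    # Each member trains on a class-balanced bootstrap of the labeled examples
    # plus all unlabeled points (a simplification of dynamic balancing).
    boot = np.concatenate([
        rng.choice(pos[np.isin(pos, labeled)], 10, replace=True),
        rng.choice(neg[np.isin(neg, labeled)], 10, replace=True),
        np.where(y_semi == -1)[0],
    ])
    clf = SelfTrainingClassifier(DecisionTreeClassifier(max_depth=5, random_state=seed))
    clf.fit(X_tr[boot], y_semi[boot])
    votes.append(clf.predict(X_te))

y_pred = (np.mean(votes, axis=0) >= 0.5).astype(int)   # majority vote
print("minority-class F1 of the ensemble:", round(f1_score(y_te, y_pred), 3))
```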
The presence of insect at composting
NASA Astrophysics Data System (ADS)
Mudruňka, J.; Lyčková, B.; Kučerová, R.; Glogarová, V.; Závada, J.; Gibesová, B.; Takač, D.
2017-10-01
During the composting of biodegradable waste, microbial organisms reproduce massively; many of them are serious biopathogens able to penetrate various environmental layers. Their vector species include dipterous insects (Diptera), which reach considerable numbers in composting plant premises as well as home composting units, mainly during the summer months. Therefore, measures must be taken to eliminate or reduce this unwanted phenomenon (sanitisation, disinfection). Relative abundance calculations were chosen for evaluating the obtained results.
Implications of an Independent Kosovo for Russia’s Near Abroad
2007-10-01
as Chair of the Standing Committee on International Law and Ethics of the World Association for Disaster and Emergency Medicine. He is admitted to...of Kosovar Albanians converted to Islam, while Serbs remained Serbian Orthodox. More than 500 years later, Serbia and Montenegro regained the...massive amounts of funding would be needed to build new plants and/or overhaul existing structures. Without final status resolution, investors are
Anatomy of an online misinformation network.
Shao, Chengcheng; Hui, Pik-Mai; Wang, Lei; Jiang, Xinwen; Flammini, Alessandro; Menczer, Filippo; Ciampaglia, Giovanni Luca
2018-01-01
Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How to reduce the overall amount of misinformation? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network.
Anatomy of an online misinformation network
Wang, Lei; Jiang, Xinwen; Flammini, Alessandro; Ciampaglia, Giovanni Luca
2018-01-01
Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How to reduce the overall amount of misinformation? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network. PMID:29702657
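The k-core decomposition step can be illustrated on a toy graph with NetworkX; the nodes and edges below are invented for illustration and are not Hoaxy data.

```python
# Minimal sketch of k-core peeling on a toy retweet-style network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"),   # a densely connected cluster...
    ("a", "d"), ("b", "d"),
    ("d", "e"), ("e", "f"),                           # ...plus a sparse periphery
])

# Core number of each node: the largest k such that the node survives in the k-core
print(nx.core_number(G))

# Extract the innermost core; in the paper this is where social bots concentrate
# while fact-checking content is largely absent.
k_max = max(nx.core_number(G).values())
inner = nx.k_core(G, k=k_max)
print("innermost core:", sorted(inner.nodes()))
```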
Radwan, Alzahraa; Kleinwächter, Maik; Selmar, Dirk
2017-09-01
In previous experiments, we demonstrated that the amount of monoterpenes in sage is increased massively by drought stress. Our current study aims to elucidate whether this increase is due, at least in part, to elevated activity of the monoterpene synthases responsible for the biosynthesis of essential oils in sage. Accordingly, the transcription rates of the monoterpene synthases were analyzed. Salvia officinalis plants were cultivated under moderate drought stress. The concentrations of monoterpenes as well as the expression of the monoterpene synthases were analyzed. The amount of monoterpenes increased massively in response to drought stress; it doubled after just two days of drought stress. The observed changes in monoterpene content mostly match the patterns of monoterpene synthase expression. The expression of bornyl diphosphate synthase was strongly up-regulated; its maximum level was reached after two days. Sabinene synthase increased gradually and reached a maximum after two weeks. In contrast, the transcript level of cineole synthase continuously declined. This study revealed that the stress-related increase in biosynthesis is not only due to a "passive" shift caused by the stress-related over-reduced status, but also, at least in part, to an "active" up-regulation of the enzymes involved. Copyright © 2017 Elsevier Ltd. All rights reserved.
CD-ROM And Knowledge Integration
NASA Astrophysics Data System (ADS)
Rann, Leonard S.
1988-06-01
As the title of this paper suggests, it is about CD-ROM technology and the structuring of massive databases. Even more, it is about the impact CD-ROM has had on the publication of massive amounts of information, and the unique qualities of the medium that allow for the most sophisticated computer retrieval techniques that have ever been used. I am not drawing on experience as a pedant in the educational field, but rather as a software and database designer who has worked with CD-ROM since its inception. I will be giving examples from my company's current applications, as well as discussing some of the challenges that face information publishers in the future. In particular, I have a belief about what the most valuable kind of application created using CD-ROM will be: the CD-ROM is particularly suited for the mass delivery of information systems and databases that either require or utilize a large amount of computational preprocessing to allow a real-time or interactive response to be achieved. Until the advent of CD-ROM technology this level of sophistication in publication was virtually impossible. I will explain this further later in this paper. First, I will discuss the salient features of CD-ROM that make it unique in the world of data storage for electronic publishing.
PeakVizor: Visual Analytics of Peaks in Video Clickstreams from Massive Open Online Courses.
Chen, Qing; Chen, Yuanzhe; Liu, Dongyu; Shi, Conglei; Wu, Yingcai; Qu, Huamin
2016-10-01
Massive open online courses (MOOCs) aim to facilitate open-access and massive-participation education. These courses have attracted millions of learners recently. At present, most MOOC platforms record the web log data of learner interactions with course videos. Such large amounts of multivariate data pose a new challenge in terms of analyzing online learning behaviors. Previous studies have mainly focused on the aggregate behaviors of learners from a summative view; however, few attempts have been made to conduct a detailed analysis of such behaviors. To determine complex learning patterns in MOOC video interactions, this paper introduces a comprehensive visualization system called PeakVizor. This system enables course instructors and education experts to analyze the "peaks" or the video segments that generate numerous clickstreams. The system features three views at different levels: the overview with glyphs to display valuable statistics regarding the peaks detected; the flow view to present spatio-temporal information regarding the peaks; and the correlation view to show the correlation between different learner groups and the peaks. Case studies and interviews conducted with domain experts have demonstrated the usefulness and effectiveness of PeakVizor, and new findings about learning behaviors in MOOC platforms have been reported.
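A minimal sketch of the peak-detection step is shown below on a synthetic per-second click-count series for one video; the abstract does not describe PeakVizor's actual detection pipeline, so the smoothing and threshold choices here are assumptions.

```python
# Toy peak detection on a synthetic clickstream time series for one course video.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
clicks = rng.poisson(2.0, size=600).astype(float)   # baseline clicks per second
clicks[120:135] += 40                               # a heavily re-watched segment
clicks[400:410] += 25                               # another interaction burst

smoothed = np.convolve(clicks, np.ones(10) / 10, mode="same")
peaks, props = find_peaks(smoothed,
                          height=smoothed.mean() + 2 * smoothed.std(),
                          distance=30)

for t, h in zip(peaks, props["peak_heights"]):
    print(f"peak near t = {t} s (smoothed height {h:.1f} clicks/s)")
```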
Large Scale Document Inversion using a Multi-threaded Computing System
Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won
2018-01-01
Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD), document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU, to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations. PMID:29861701
Large Scale Document Inversion using a Multi-threaded Computing System.
Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won
2017-06-01
Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by a multi-thread or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD), document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU, to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.
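To make the underlying data structure concrete, here is a compact, hash-based inverted index in plain Python; the papers' contribution is distributing this work across GPU threads with CUDA, which this small sequential sketch does not reproduce.

```python
# A plain-Python inverted index: term -> sorted postings of (doc_id, term_frequency).
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of (doc_id, term_frequency) postings."""
    index = defaultdict(lambda: defaultdict(int))
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term][doc_id] += 1
    return {term: sorted(postings.items()) for term, postings in index.items()}

docs = [
    "GPU computing accelerates document inversion",
    "inverted index structures support full text search",
    "document retrieval benefits from GPU acceleration",
]
index = build_inverted_index(docs)
print(index["document"])   # -> [(0, 1), (2, 1)]
print(index["gpu"])        # -> [(0, 1), (2, 1)]
```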
Transfer of interferon alfa into human breast milk.
Kumar, A R; Hale, T W; Mock, R E
2000-08-01
Interferons were originally assumed to be antiviral substances, but their efficacy in a number of pathologies, including malignancies, multiple sclerosis, and other immune syndromes, is increasingly recognized. This study provides data on the transfer of interferon alfa (2B) into the human milk of a patient receiving massive intravenous doses for the treatment of malignant melanoma. Following an intravenous dose of 30 million IU, the amount of interferon transferred into human milk was only slightly elevated (1551 IU/mL) when compared to control milk (1249 IU/mL). These data suggest that even following enormous doses, interferon is probably too large in molecular weight to transfer into human milk in clinically relevant amounts.
Ecoinformatics: supporting ecology as a data-intensive science.
Michener, William K; Jones, Matthew B
2012-02-01
Ecology is evolving rapidly and increasingly changing into a more open, accountable, interdisciplinary, collaborative and data-intensive science. Discovering, integrating and analyzing massive amounts of heterogeneous data are central to ecology as researchers address complex questions at scales from the gene to the biosphere. Ecoinformatics offers tools and approaches for managing ecological data and transforming the data into information and knowledge. Here, we review the state-of-the-art and recent advances in ecoinformatics that can benefit ecologists and environmental scientists as they tackle increasingly challenging questions that require voluminous amounts of data across disciplines and scales of space and time. We also highlight the challenges and opportunities that remain. Copyright © 2011 Elsevier Ltd. All rights reserved.
A novel multi-band SAR data technique for fully automatic oil spill detection in the ocean
NASA Astrophysics Data System (ADS)
Del Frate, Fabio; Latini, Daniele; Taravat, Alireza; Jones, Cathleen E.
2013-10-01
With the launch of the Italian constellation of small satellites for Mediterranean basin observation, COSMO-SkyMed, and the German TerraSAR-X mission, the delivery of very high-resolution SAR data to observe the Earth day or night has increased remarkably. Taking into account other ongoing missions such as Radarsat, as well as those no longer operating such as ALOS PALSAR, ERS-SAR and ENVISAT, the amount of information available at different bands for users interested in oil spill analysis has become massive. Moreover, future SAR missions such as Sentinel-1 are scheduled for launch in the coming years, while additional support can be provided by Uninhabited Aerial Vehicle (UAV) SAR systems. Considering the opportunity represented by all these missions, the challenge is to find suitable and adequate multi-band image processing procedures able to fully exploit the huge amount of data available. In this paper we present a new fast, robust and effective automated approach for oil-spill monitoring starting from data collected at different bands, polarizations and spatial resolutions. A combination of Weibull Multiplicative Model (WMM), Pulse Coupled Neural Network (PCNN) and Multi-Layer Perceptron (MLP) techniques is proposed for achieving the aforementioned goals. One of the most innovative ideas is to separate the dark spot detection process into two main steps, WMM enhancement and PCNN segmentation. The complete processing chain has been applied to a data set containing C-band (ERS-SAR, ENVISAT ASAR), X-band (COSMO-SkyMed and TerraSAR-X) and L-band (UAVSAR) images, for an overall number of more than 200 images considered.
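As a hedged illustration of the final classification stage only, the sketch below trains an MLP to separate oil spills from look-alike dark spots using a few simple geometric/backscatter features; the features and their synthetic distributions are assumptions for demonstration, not values derived from the SAR data or from the WMM/PCNN stages of the paper.

```python
# Toy MLP classification of dark spots (oil spill vs. look-alike) on synthetic features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def synth(n, spill):
    # columns: area (km^2), perimeter-to-area ratio, mean backscatter contrast (dB)
    if spill:
        return np.column_stack([rng.lognormal(1.0, 0.5, n),
                                rng.normal(4.0, 1.0, n),
                                rng.normal(-8.0, 1.5, n)])
    return np.column_stack([rng.lognormal(1.5, 0.8, n),
                            rng.normal(2.0, 1.0, n),
                            rng.normal(-4.0, 1.5, n)])

X = np.vstack([synth(300, True), synth(300, False)])
y = np.array([1] * 300 + [0] * 300)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```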
Genetic risk prediction using a spatial autoregressive model with adaptive lasso.
Wen, Yalu; Shen, Xiaoxi; Lu, Qing
2018-05-31
With rapidly evolving high-throughput technologies, studies are being initiated to accelerate the process toward precision medicine. The collection of the vast amounts of sequencing data provides us with great opportunities to systematically study the role of a deep catalog of sequencing variants in risk prediction. Nevertheless, the massive amount of noise signals and low frequencies of rare variants in sequencing data pose great analytical challenges on risk prediction modeling. Motivated by the development in spatial statistics, we propose a spatial autoregressive model with adaptive lasso (SARAL) for risk prediction modeling using high-dimensional sequencing data. The SARAL is a set-based approach, and thus, it reduces the data dimension and accumulates genetic effects within a single-nucleotide variant (SNV) set. Moreover, it allows different SNV sets having various magnitudes and directions of effect sizes, which reflects the nature of complex diseases. With the adaptive lasso implemented, SARAL can shrink the effects of noise SNV sets to be zero and, thus, further improve prediction accuracy. Through simulation studies, we demonstrate that, overall, SARAL is comparable to, if not better than, the genomic best linear unbiased prediction method. The method is further illustrated by an application to the sequencing data from the Alzheimer's Disease Neuroimaging Initiative. Copyright © 2018 John Wiley & Sons, Ltd.
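The adaptive-lasso step can be sketched in isolation as a two-stage fit, where an initial ridge estimate supplies the penalty weights; the code below omits the spatial autoregressive structure and the SNV-set aggregation that define the full SARAL model, so it is only a simplified illustration.

```python
# Simplified adaptive lasso on synthetic data (not the full SARAL model).
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.5]          # only the first 5 predictors matter
y = X @ beta + rng.standard_normal(n)

# Step 1: initial ridge estimate gives adaptive weights w_j = 1 / |beta_j|
init = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(init) + 1e-6)

# Step 2: solve a weighted lasso by rescaling columns, then map coefficients back;
# noise predictors get large weights and are shrunk to exactly zero.
X_scaled = X / w
fit = Lasso(alpha=0.05).fit(X_scaled, y)
beta_hat = fit.coef_ / w

print("selected predictors:", np.flatnonzero(np.abs(beta_hat) > 1e-8))
```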
Rapid Freeform Sheet Metal Forming: Technology Development and System Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiridena, Vijitha; Verma, Ravi; Gutowski, Timothy
The objective of this project is to develop a transformational RApid Freeform sheet metal Forming Technology (RAFFT) in an industrial environment, which has the potential to increase manufacturing energy efficiency up to ten times, at a fraction of the cost of conventional technologies. The RAFFT technology is a flexible and energy-efficient process that eliminates the need for having geometry-specific forming dies. The innovation lies in the idea of using the energy resource at the local deformation area which provides greater formability, process control, and process flexibility relative to traditional methods. Double-Sided Incremental Forming (DSIF), the core technology in RAFFT, is a new concept for sheet metal forming. A blank sheet is clamped around its periphery and gradually deformed into a complex 3D freeform part by two strategically aligned stylus-type tools that follow a pre-described toolpath. The two tools, one on each side of the blank, can form a part with sharp features for both concave and convex shapes. Since deformation happens locally, the forming force at any instant is significantly decreased when compared to traditional methods. The key advantages of DSIF are its high process flexibility, high energy-efficiency, low capital investment, and the elimination of the need for massive amounts of die casting and machining. Additionally, the enhanced formability and process flexibility of DSIF can open up design spaces and result in greater weight savings.
Forming spectroscopic massive protobinaries by disc fragmentation
NASA Astrophysics Data System (ADS)
Meyer, D. M.-A.; Kuiper, R.; Kley, W.; Johnston, K. G.; Vorobyov, E.
2018-01-01
The surroundings of massive protostars constitute an accretion disc which has numerically been shown to be subject to fragmentation and responsible for luminous accretion-driven outbursts. Moreover, it is suspected to produce close binary companions which will later strongly influence the star's future evolution in the Hertzsprung-Russell diagram. We present three-dimensional gravitation-radiation-hydrodynamic numerical simulations of 100 M⊙ pre-stellar cores. We find that accretion discs of young massive stars violently fragment without preventing the (highly variable) accretion of gaseous clumps on to the protostars. While acquiring the characteristics of a nascent low-mass companion, some disc fragments migrate on to the central massive protostar with dynamical properties showing that their final Keplerian orbit is close enough to constitute a close massive protobinary system, with a young high-mass and a low-mass component. We conclude that the disc fragmentation channel is viable for the formation of such short-period binaries, and that both processes - close massive binary formation and accretion bursts - may happen at the same time. FU-Orionis-type bursts, such as observed in the young high-mass star S255IR-NIRS3, may not only indicate ongoing disc fragmentation, but may also be considered as a tracer for the formation of close massive binaries - progenitors of the subsequent massive spectroscopic binaries - once the high-mass component of the system enters the main-sequence phase of its evolution. Finally, we investigate the Atacama Large (sub-)Millimeter Array observability of the disc fragments.
Pseudoangiomatous stromal hyperplasia causing massive breast enlargement
Bourke, Anita Geraldine; Tiang, Stephen; Harvey, Nathan; McClure, Robert
2015-01-01
Pseudoangiomatous stromal hyperplasia (PASH) of the breast is a benign mesenchymal proliferative process, initially described by Vuitch et al. We report an unusual case of a 46-year-old woman who presented with a 6-week history of bilateral massive, asymmetrical, painful enlargement of her breasts, without a history of trauma. On clinical examination, both breasts were markedly enlarged and oedematous, but there were no discrete palpable masses. Preoperative image-guided core biopsies and surgery showed PASH. PASH is increasingly recognised as an incidental finding on image-guided core biopsy performed for screen detected lesions. There are a few reported cases of PASH presenting as rapid breast enlargement. In our case, the patient presented with painful, asymmetrical, massive breast enlargement. Awareness needs to be raised of this entity as a differential diagnosis in massive, painful breast enlargement. PMID:26475873
The singular behavior of massive QCD amplitudes
NASA Astrophysics Data System (ADS)
Mitov, Alexander; Moch, Sven-Olaf
2007-05-01
We discuss the structure of infrared singularities in on-shell QCD amplitudes with massive partons and present a general factorization formula in the limit of small parton masses. The factorization formula gives rise to an all-order exponentiation of both the soft poles in dimensional regularization and the large collinear logarithms of the parton masses. Moreover, it provides a universal relation between any on-shell amplitude with massive external partons and its corresponding massless amplitude. For the form factor of a heavy quark we present explicit results including the fixed-order expansion up to three loops in the small-mass limit. For general scattering processes we show how our constructive method applies to the computation of all singularities as well as the constant (mass-independent) terms of a generic massive n-parton QCD amplitude up to the next-to-next-to-leading order corrections.
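For orientation, the massive-to-massless relation referred to here is usually written schematically as a product of universal, process-independent factors, one per massive external leg; the exact definition of the Z factors and the treatment of the massless legs are given in the original work, so the expression below should be read as a sketch of the structure rather than a complete statement.

```latex
\mathcal{M}^{(m)}\bigl(\{p_i\},m,\epsilon\bigr)
  \;=\;
  \prod_{i\,\in\,\text{massive legs}}
  \Bigl(Z^{(m|0)}_{[i]}\bigl(m^{2}/\mu^{2},\alpha_s,\epsilon\bigr)\Bigr)^{1/2}\,
  \mathcal{M}^{(m=0)}\bigl(\{p_i\},\epsilon\bigr)
  \;+\;\mathcal{O}(m)
```

Here the massless amplitude carries all the process dependence, while the small-mass collinear logarithms are absorbed into the universal Z factors.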
Ground ice formed after underground thermo-erosion of the permafrost in Alaska
NASA Astrophysics Data System (ADS)
Fortier, D.; Kanevskiy, M.; Yuri, S.
2007-12-01
Cryostratigraphic studies realized in the CRREL permafrost tunnel (≈64°57' N, 147°37' W) located near Fairbanks, Alaska revealed the presence of multi-directional reticulate ice veins and massive ice bodies in the permafrost. We propose that this reticulate-chaotic cryostructure and the massive ice bodies were formed by inward closed-system freezing of pools of water and saturated sediments trapped in underground tunnels cut in the permafrost by thermo-erosion. The massive ice and the multi-directional reticulate ice veins were likely formed after the cessation of the underground flow, either by tunnel blockage or collapse, or cessation of runoff infiltration in the permafrost. The observed tunnels were slightly inclined and could often be traced for several meters. The properties of the sediments filling these tunnels differed from the enclosing original syngenetic Pleistocene permafrost. The latter was made of ice-rich loess with abundant rootlets and was characterized by a well-developed micro-lenticular cryostructure, whereas the tunnels were filled with massive ice and/or organic-poor, stratified silt, sand and gravel sediments. The water content of the original syngenetic loess was about twice the water content of the sediments in the underground tunnels. The contact between the original syngenetic loess and the sediments in the tunnels was manifestly discordant and outlined by an erosion lag. Release of latent heat from the pools of water and from the water of the saturated sediments created thaw unconformities at the tunnel boundary. Similar types of massive ice and reticulate-chaotic cryostructures were observed in Holocene to Pleistocene permafrost exposures along the Beaufort Sea Coast, on the Seward Peninsula, on the North Slope and in the Alaskan interior. The massive ice bodies and reticulate-chaotic cryostructures were always associated with, or incorporated within, ice wedges that showed signs of thermo-erosion. This indicates that the process of underground thermo-erosion has occurred widely in Alaska. On Bylot Island in the Canadian Arctic archipelago, Fortier et al. (2007) observed that extensive gullying of the permafrost resulted from the process of underground thermo-erosion. More studies are needed to determine the role of this process in the evolution of ice-wedge polygon landscapes in Alaska. Fortier, D., Allard, M., Shur, Y. 2007. Observation of rapid drainage system development by thermal erosion of ice wedges on Bylot Island, Canadian Arctic Archipelago. Permafrost and Periglacial Processes 18 (3): 229-243.
Robben, Antonius C G M
2014-01-01
This article uses the dual process model (DPM) in an analysis of the national mourning of the tens of thousands of disappeared in Chile and Argentina by adapting the model from the individual to the collective level, where society as a whole is bereaved. Perpetrators are also involved in the national mourning process as members of a bereaved society. This article aims to (a) demonstrate the DPM's significance for the analysis of national mourning in post-conflict societies and (b) explain oscillations between loss orientation and restoration orientation in coping with massive losses that seem contradictory from a grief work perspective.
[The importance of genealogy applied to genetic research in Costa Rica].
Meléndez Obando, Mauricio O
2004-09-01
The extensive development of genealogical studies based on archival documents has provided powerful support for genetic research in Costa Rica over the past quarter century. As a result, several questions of population history have been answered, such as those involving hereditary illnesses, suggesting additional avenues and questions as well. Similarly, the preservation of massive amounts of historical documentation highlights the major advantages that the Costa Rican population offers to genetic research.
Journal of Special Operations Medicine. Volume 7, Edition 4, Fall 2007
2007-01-01
which demonstrated massive amounts of pericardial fat, but no blood (a false positive FAST). Exploratory laparotomy revealed a catastrophic supra...ARTERIAL GAS EMBOLISM An additional concern in the unconscious diver is barotrauma and arterial gas embolism (AGE). Boyle's law states that as pressure...ment in 32 cases of air embolism (abs). Proceedings: Joint Meeting on Diving and Hyperbaric Medicine, 11-18 August. Amsterdam, The Netherlands, pg. 90. 13
Litterfall Production Prior to and during Hurricanes Irma and Maria in Four Puerto Rican Forests
Xianbin Liu; Xiucheng Zeng; Xiaoming Zou; Grizelle González; Chao Wang; Si Yang
2018-01-01
Hurricanes Irma and Maria struck Puerto Rico on the 6th and 20th of September 2017, respectively. These two powerful Cat 5 hurricanes severely defoliated forest canopy and deposited massive amounts of litterfall in the forests across the island. We established a 1-ha research plot in each of four forests (Guánica State Forest, Río Abajo State Forest, Guayama Research...
14. Photographic copy of photograph, dated 21 July 1971 (original ...
14. Photographic copy of photograph, dated 21 July 1971 (original print in possession of U.S. Space & Strategic Defense Command Historic Office CSSD-HO, Huntsville, AL). Photographer unknown. View of missile site control building turret wall during early construction, illustrating the massive amount of rebar utilized in the project. - Stanley R. Mickelsen Safeguard Complex, Missile Site Control Building, Northeast of Tactical Road; southeast of Tactical Road South, Nekoma, Cavalier County, ND
The shadow world of superstring theories
NASA Technical Reports Server (NTRS)
Kolb, E. W.; Turner, M. S.; Seckel, D.
1985-01-01
Some possible astrophysical and cosmological implications of 'shadow matter', a form of matter which only interacts gravitationally with ordinary matter and which may or may not be identical in its properties to ordinary matter, are considered. The possible existence, amount, and location of shadow matter in the solar system are discussed, and the significance of shadow matter for primordial nucleosynthesis, macroscopic asymmetry, baryogenesis, double-bubble inflation, and asymmetric microphysics is addressed. Massive shadow states are discussed.
Ultra-fast outflows (aka UFOs) in AGNs and their relevance for feedback
NASA Astrophysics Data System (ADS)
Cappi, Massimo; Tombesi, F.; Giustini, M.; Dadina, M.; Braito, V.; Kaastra, J.; Reeves, J.; Chartas, G.; Gaspari, M.; Vignali, C.; Gofford, J.; Lanzuisi, G.
2012-09-01
During the last decade, considerable observational evidence has accumulated for the existence of massive, high-velocity winds/outflows (aka UFOs) in nearby AGNs and, possibly, distant quasars. I will review here this evidence, present some of the latest results in this field, and discuss the relevance of UFOs both for understanding the physics of accretion/ejection flows on to supermassive black holes and for quantifying the amount of AGN feedback.
America’s Achilles Heel: Defense Against High-altitude Electromagnetic Pulse-policy vs. Practice
2014-12-12
Directives SCADA Supervisory Control and Data Acquisition Systems SHIELD Act Secure High-voltage Infrastructure for Electricity from Lethal Damage Act...take place, it is important to understand the effects of the components of EMP from a high-altitude nuclear detonation. The requirements for shielding ...Mass Ejection (CME). A massive, bubble-shaped burst of plasma expanding outward from the Sun’s corona, in which large amounts of superheated
The radiation asymmetry in MGI rapid shutdown on J-TEXT tokamak
NASA Astrophysics Data System (ADS)
Tong, Ruihai; Chen, Zhongyong; Huang, Duwei; Cheng, Zhifeng; Zhang, Xiaolong; Zhuang, Ge; J-TEXT Team
2017-10-01
Disruptions, the sudden termination of tokamak fusion plasmas by instabilities, have the potential to cause severe material wall damage to large tokamaks like ITER. The mitigation of disruption damage is an essential part of any fusion reactor system. Massive gas injection (MGI) rapid shutdown is a technique in which large amounts of noble gas are injected into the plasma in order to safely radiate the plasma energy evenly over the entire plasma-facing first wall. However, the radiated energy during the thermal quench (TQ) in MGI-induced disruptions is found to be toroidally asymmetric, and the degree of asymmetry correlates with the gas penetration and MGI-induced magnetohydrodynamic (MHD) activities. A toroidal and poloidal array of ultraviolet photodiodes (AXUV) has been developed to investigate the radiation asymmetry on the J-TEXT tokamak. Together with the upgraded Mirnov probe arrays, the relation between MGI-triggered MHD activities and the radiation asymmetry is studied.
Minimization of Roll Firings for Optimal Propellant Maneuvers
NASA Astrophysics Data System (ADS)
Leach, Parker C.
Attitude control of the International Space Station (ISS) is critical for operations, impacting power, communications, and thermal systems. The station uses gyroscopes and thrusters for attitude control, and reorientations are normally assisted by thrusters on docked vehicles. When the docked vehicles are unavailable, the reduction in control authority in the roll axis results in frequent jet firings and massive fuel consumption. To improve this situation, new guidance and control schemes are desired that provide control with fewer roll firings. Optimal control software was utilized to solve for potential candidates that satisfied the desired conditions with the goal of minimizing total propellant. An ISS simulation tool was then used to test these solutions for feasibility. After several problem reformulations, multiple candidate solutions minimizing or completely eliminating roll firings were found. Flight implementation would not only save massive amounts of fuel and thus money, but also reduce ISS wear and tear, thereby extending its lifetime.
Template based parallel checkpointing in a massively parallel computer system
Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN
2009-01-13
A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
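A hedged, single-node sketch of the template idea follows: split the checkpoint into fixed-size blocks, compare per-block checksums against a previously produced template, and keep (compressed) only the blocks that changed. The patent describes a parallel, rsync-like protocol with network broadcast across all nodes, which this simplified Python sketch does not reproduce.

```python
# Single-node sketch of template-based delta checkpointing with block checksums.
import hashlib
import zlib

BLOCK = 64 * 1024  # 64 KiB blocks (arbitrary choice for the sketch)

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def checksums(data):
    return [hashlib.sha1(b).hexdigest() for b in blocks(data)]

def delta_checkpoint(current, template_sums):
    """Return {block_index: compressed_block} for blocks that differ from the template."""
    delta = {}
    for i, (blk, s) in enumerate(zip(blocks(current), checksums(current))):
        if i >= len(template_sums) or s != template_sums[i]:
            delta[i] = zlib.compress(blk)
    return delta

template = bytes(1_000_000)                 # previously produced template checkpoint
node_state = bytearray(template)
node_state[200_000:200_016] = b"x" * 16     # this node changed only a small region

delta = delta_checkpoint(bytes(node_state), checksums(template))
print(f"{len(delta)} of {len(blocks(bytes(node_state)))} blocks need to be stored")
```

Because only the differing blocks are stored (or sent), the amount of checkpoint data scales with how much each node's state diverges from the broadcast template rather than with the total memory footprint.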
Deuchi, K; Kanauchi, O; Shizukuishi, M; Kobayashi, E
1995-07-01
We investigated the effects of continuous and massive intake of chitosan with sodium ascorbate (AsN) on the mineral and the fat-soluble vitamin status in male Sprague-Dawley rats fed on a high-fat diet. The apparent fat digestibility in the chitosan-receiving group was significantly lower than that in the cellulose- or glucosamine-receiving group. Chitosan feeding for 2 weeks caused a decrease in mineral absorption and bone mineral content, and it was necessary to administer twice the amount of Ca in the AIN-76 formula, which was supplemented with AsN, to prevent such a decrease in the bone mineral content. Moreover, the ingestion of chitosan along with AsN led to a marked and rapid decrease in the serum vitamin E level, while such a loss in vitamin E was not observed for rats given glucosamine monomer instead of chitosan.
Data, Meet Compute: NASA's Cumulus Ingest Architecture
NASA Technical Reports Server (NTRS)
Quinn, Patrick
2018-01-01
NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth Science data and, with upcoming missions, is expected to balloon to between 200 PB and 300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access - enabling complex visualizations, long time-series analysis, and cross-dataset research without needing to copy and manage massive amounts of data locally. NASA has looked to the cloud to address these needs, building its Cumulus system to manage the ingest of diverse data in a wide variety of formats into the cloud. In this talk, we look at what Cumulus is from a high level and then take a deep dive into how it manages the complexity and versioning associated with multiple AWS Lambda and ECS microservices communicating through AWS Step Functions across several disparate installations.
Grid Computing for Earth Science
NASA Astrophysics Data System (ADS)
Renard, Philippe; Badoux, Vincent; Petitdidier, Monique; Cossu, Roberto
2009-04-01
The fundamental challenges facing humankind at the beginning of the 21st century require an effective response to the massive changes that are putting increasing pressure on the environment and society. The worldwide Earth science community, with its mosaic of disciplines and players (academia, industry, national surveys, international organizations, and so forth), provides a scientific basis for addressing issues such as the development of new energy resources; a secure water supply; safe storage of nuclear waste; the analysis, modeling, and mitigation of climate changes; and the assessment of natural and industrial risks. In addition, the Earth science community provides short- and medium-term prediction of weather and natural hazards in real time, and model simulations of a host of phenomena relating to the Earth and its space environment. These capabilities require that the Earth science community utilize, both in real and remote time, massive amounts of data, which are usually distributed among many different organizations and data centers.
Three waves for quantum gravity
NASA Astrophysics Data System (ADS)
Calmet, Xavier; Latosh, Boris
2018-03-01
Using effective field theoretical methods, we show that besides the already observed gravitational waves, quantum gravity predicts two further massive classical fields leading to two new massive waves. We set a limit on the masses of these new modes using data from the Eöt-Wash experiment. We point out that the existence of these new states is a model independent prediction of quantum gravity. We then explain how these new classical fields could impact astrophysical processes and in particular the binary inspirals of neutron stars or black holes. We calculate the emission rate of these new states in binary inspirals astrophysical processes.
Long-Lived Inverse Chirp Signals from Core-Collapse in Massive Scalar-Tensor Gravity
NASA Astrophysics Data System (ADS)
Sperhake, Ulrich; Moore, Christopher J.; Rosca, Roxana; Agathos, Michalis; Gerosa, Davide; Ott, Christian D.
2017-11-01
This Letter considers stellar core collapse in massive scalar-tensor theories of gravity. The presence of a mass term for the scalar field allows for dramatic increases in the radiated gravitational wave signal. There are several potential smoking gun signatures of a departure from general relativity associated with this process. These signatures could show up within existing LIGO-Virgo searches.
Physical properties of Southern infrared dark clouds
NASA Astrophysics Data System (ADS)
Vasyunina, T.; Linz, H.; Henning, Th.; Stecklum, B.; Klose, S.; Nyman, L.-Å.
2009-05-01
Context: What are the mechanisms by which massive stars form? What are the initial conditions for these processes? It is commonly assumed that cold and dense Infrared Dark Clouds (IRDCs) represent the birth-sites of massive stars. Therefore, these clouds have been receiving an increasing amount of attention, and their analysis offers the opportunity to tackle the aforementioned questions. Aims: To enlarge the sample of well-characterised IRDCs in the southern hemisphere, where ALMA will play a major role in the near future, we have developed a program to study the gas and dust of southern infrared dark clouds. The present paper attempts to characterize the continuum properties of this sample of IRDCs. Methods: We cross-correlated 1.2 mm continuum data from the SIMBA bolometer array mounted on the SEST telescope with Spitzer/GLIMPSE images to establish the connection between emission sources at millimeter wavelengths and the IRDCs that we observe at 8 μm in absorption against the bright PAH background. Analysing the dust emission and extinction enables us to determine the masses and column densities, which are important quantities in characterizing the initial conditions of massive star formation. We also evaluated the limitations of the emission and extinction methods. Results: The morphology of the 1.2 mm continuum emission is in all cases in close agreement with the mid-infrared extinction. The total masses of the IRDCs were found to range from 150 to 1150 M_⊙ (emission data) and from 300 to 1750 M_⊙ (extinction data). We derived peak column densities between 0.9 and 4.6 × 10^22 cm^-2 (emission data) and 2.1 and 5.4 × 10^22 cm^-2 (extinction data). We demonstrate that the extinction method is unreliable at very high extinction values (and column densities) beyond A_V values of roughly 75 mag according to the Weingartner & Draine (2001) extinction relation R_V = 5.5 model B (around 200 mag when following the common Mathis (1990, ApJ, 548, 296) extinction calibration). By taking the spatial resolution effects into account and restoring the column densities derived from the dust emission to a linear resolution of 0.01 pc, peak column densities of 3-19 × 10^23 cm^-2 are obtained, which are much higher than typical values for low-mass cores. Conclusions: Taking into account the spatial resolution effects, the derived column densities are beyond the column density threshold of 3.0 × 10^23 cm^-2 required by theoretical considerations for massive star formation. We conclude that the values of column densities derived for the selected IRDC sample imply that these objects are excellent candidates for objects in the earliest stages of massive star formation.
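For context, dust-emission mass and column-density estimates of this kind are typically obtained from relations of the following form; the abstract does not spell out the exact opacity, temperature, or gas-to-dust assumptions used by the authors, so this is only the generic structure of such estimates.

```latex
M \;=\; \frac{S_\nu \, d^{2}}{\kappa_\nu \, B_\nu(T_{\rm dust})},
\qquad
N_{\mathrm{H}_2} \;=\; \frac{F_\nu^{\rm beam}}{\Omega_{\rm beam}\,\mu\, m_{\mathrm H}\,\kappa_\nu\, B_\nu(T_{\rm dust})}
```

Here S_ν is the integrated 1.2 mm flux density, d the distance, κ_ν the dust opacity (folding in an assumed gas-to-dust ratio), B_ν the Planck function at the dust temperature, F_ν^beam the peak flux per beam, Ω_beam the beam solid angle, and μ m_H the mean gas mass per hydrogen molecule.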
Evans, James P; Wilhelmsen, Kirk C; Berg, Jonathan; Schmitt, Charles P; Krishnamurthy, Ashok; Fecho, Karamarie; Ahalt, Stanley C
2016-01-01
In genomics and other fields, it is now possible to capture and store large amounts of data in electronic medical records (EMRs). However, it is not clear if the routine accumulation of massive amounts of (largely uninterpretable) data will yield any health benefits to patients. Nevertheless, the use of large-scale medical data is likely to grow. To meet emerging challenges and facilitate optimal use of genomic data, our institution initiated a comprehensive planning process that addresses the needs of all stakeholders (e.g., patients, families, healthcare providers, researchers, technical staff, administrators). Our experience with this process and a key genomics research project contributed to the proposed framework. We propose a two-pronged Genomic Clinical Decision Support System (CDSS) that encompasses the concept of the "Clinical Mendeliome" as a patient-centric list of genomic variants that are clinically actionable and introduces the concept of the "Archival Value Criterion" as a decision-making formalism that approximates the cost-effectiveness of capturing, storing, and curating genome-scale sequencing data. We describe a prototype Genomic CDSS that we developed as a first step toward implementation of the framework. The proposed framework and prototype solution are designed to address the perspectives of stakeholders, stimulate effective clinical use of genomic data, drive genomic research, and meet current and future needs. The framework also can be broadly applied to additional fields, including other '-omics' fields. We advocate for the creation of a Task Force on the Clinical Mendeliome, charged with defining Clinical Mendeliomes and drafting clinical guidelines for their use.
Seghatchian, Jerard; Samama, Meyer Michel
2012-10-01
Massive transfusion (MT) is an empiric mode of treatment advocated for uncontrolled bleeding and massive haemorrhage, aiming at optimal resuscitation and aggressive correction of coagulopathy. Conventional guidelines recommend early administration of crystalloids and colloids in conjunction with red cells, where the red cell also plays a critical haemostatic function. Plasma and platelets are only used in patients with microvascular bleeding with PT/APTT values >1.5 times the normal values and if PLT counts are below 50×10^9/L. Massive transfusion carries a significant mortality rate (40%), which increases with the number of volume expanders and blood components transfused. Controversies still exist over the optimal ratio of blood components with respect to overall clinical outcomes and collateral damage. Inadequate transfusion is believed to be associated with poor outcomes, but empirical over-transfusion results in unnecessary donor exposure, with an increased rate of sepsis, transfusion overload and the infusion of variable amounts of some biological response modifiers (BRMs), which have the potential to cause additional harm. Alternative strategies, such as the early use of tranexamic acid, are helpful. However, in trauma settings, the use of warm fresh whole blood (WFWB) instead of reconstituted components with a different ratio of stored components might be the most cost-effective and safest option to improve the patient's survival rate and minimise collateral damage. This manuscript, after a brief summary of standard medical intervention in massive transfusion, focuses on the main characteristics of various substances currently available to overcome massive transfusion coagulopathy. The relative levels of some BRMs in fresh and aged blood components of the same origin are highlighted, and some myths and unresolved issues related to massive transfusion practice are discussed. In brief, the coagulopathy in MT is a complex phenomenon, often complicated by chronic activation of coagulation, platelets, complement and vascular endothelial cells, where haemolysis, microvesiculation, exposure of phosphatidylserine-positive cells, altered red cells with reduced adhesive proteins and the presence of some BRMs could play a pivotal role in the coagulopathy and untoward effects. The challenges of improving the safety of massive transfusion remain as numerous and as varied as ever. The answer may reside in appropriate studies on designer whole blood, combined with new innovative tools to diagnose a coagulopathy and an evidence-based mode of therapy to establish the optimal survival benefit of patients, always taking into account the concept of harm reduction and reduction of collateral damage. Copyright © 2012 Elsevier Ltd. All rights reserved.
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin
2013-01-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
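A minimal sketch of the block-volume idea described above, with invented function names and block sizes (this is not the platform's actual API): the 3D volume is partitioned into blocks, the blocks are processed in parallel worker processes, and the results are reassembled.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_block(args):
    """Stand-in for a per-block 3D image-processing step (here: thresholding)."""
    origin, block = args
    return origin, (block > block.mean()).astype(np.uint8)

def process_volume(volume, block=64, workers=4):
    """Split a 3D array into blocks, process them in parallel, reassemble."""
    tasks = []
    nz, ny, nx = volume.shape
    for z in range(0, nz, block):
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                tasks.append(((z, y, x), volume[z:z+block, y:y+block, x:x+block]))
    out = np.empty(volume.shape, dtype=np.uint8)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # simple static scheduling; the real platform balances load dynamically
        for (z, y, x), result in pool.map(process_block, tasks):
            dz, dy, dx = result.shape
            out[z:z+dz, y:y+dy, x:x+dx] = result
    return out

if __name__ == "__main__":
    vol = np.random.rand(128, 128, 128).astype(np.float32)
    seg = process_volume(vol)
    print(seg.shape, seg.dtype)
```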
Streamlining environmental product declarations: a stage model
NASA Astrophysics Data System (ADS)
Lefebvre, Elisabeth; Lefebvre, Louis A.; Talbot, Stephane; Le Hen, Gael
2001-02-01
General public environmental awareness and education are increasing, thereby stimulating the demand for reliable, objective and comparable information about products' environmental performance. The recently published standard series ISO 14040 and ISO 14025 are normalizing the preparation of Environmental Product Declarations (EPDs) containing comprehensive information relevant to a product's environmental impact during its life cycle. So far, only a few environmentally leading manufacturing organizations (mostly from Europe) have experimented with the preparation of EPDs, demonstrating its great potential as a marketing weapon. However, the preparation of EPDs is a complex process, requiring the collection and analysis of massive amounts of information coming from disparate sources (suppliers, sub-contractors, etc.). In the foreseeable future, the streamlining of the EPD preparation process will require product manufacturers to adapt their information systems (ERP, MES, SCADA) in order to make them capable of gathering and transmitting the appropriate environmental information. It also requires strong functional integration all along the product supply chain in order to ensure that all the information is made available in a standardized and timely manner. The goal of the present paper is twofold: first, to propose a transitional model towards green supply chain management and EPD preparation; second, to identify key technologies and methodologies that allow the EPD process, and subsequently the transition toward sustainable product development, to be streamlined.
Fortes, Ana M; Santos, Filipa; Pais, Maria S
2010-01-01
The use of Humulus lupulus for brewing has increased the demand for high-quality plant material. At the same time, hop has been used in traditional medicine and has recently been recognized for its anticancer and anti-infective properties. Tissue culture techniques have been reported for a wide range of species, and they open the prospect of propagating massive amounts of disease-free, genetically uniform plants in vitro. Moreover, the development of large-scale culture methods using bioreactors enables the industrial production of secondary metabolites. A reliable and efficient tissue culture protocol for shoot regeneration through organogenic nodule formation was established for hop. The present review describes the histological and biochemical changes occurring during this morphogenic process, together with an analysis of transcriptional and metabolic profiles. We also discuss the existence of common molecular factors among three different morphogenic processes: organogenic nodules and somatic embryogenesis, which, strictly speaking, depend exclusively on intrinsic developmental reprogramming, and legume nitrogen-fixing root nodules, which arise in response to symbiosis. Reviewing the key factors that participate in hop nodule organogenesis and comparing them with other morphogenic processes presents recent advances in the complex molecular networks occurring during morphogenesis; together, these provide a rich framework for biotechnology applications.
TransAtlasDB: an integrated database connecting expression data, metadata and variants
Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J
2018-01-01
Abstract High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting the results can be a major impediment to postulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard-disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, along with methods of interacting with the database, either through command-line data-management workflows, written in Perl, whose functionalities simplify the storage and manipulation of the massive amounts of data generated by RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
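A minimal sketch of the hybrid relational/document idea, using assumed table names and toy records (this is not TransAtlasDB's actual schema or its Perl workflows): structured expression values live in relational tables, while flexible metadata and variant annotations are stored as JSON documents and queried together.

```python
import json
import sqlite3

# Hedged illustration of a hybrid store: relational tables for expression
# values, JSON documents for metadata/variant annotations. Names and records
# are invented for the example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample     (sample_id TEXT PRIMARY KEY, metadata_json TEXT);
CREATE TABLE expression (sample_id TEXT, gene TEXT, fpkm REAL);
CREATE TABLE variant    (sample_id TEXT, gene TEXT, annotation_json TEXT);
""")

conn.execute("INSERT INTO sample VALUES (?, ?)",
             ("S1", json.dumps({"tissue": "liver", "line": "broiler"})))
conn.executemany("INSERT INTO expression VALUES (?, ?, ?)",
                 [("S1", "HSP90AA1", 120.5), ("S1", "ACTB", 842.0)])
conn.execute("INSERT INTO variant VALUES (?, ?, ?)",
             ("S1", "HSP90AA1", json.dumps({"snp": "rs123", "effect": "missense"})))

# Query expression values together with the document-style variant annotation.
rows = conn.execute("""
SELECT e.gene, e.fpkm, v.annotation_json
FROM expression e LEFT JOIN variant v
  ON e.sample_id = v.sample_id AND e.gene = v.gene
WHERE e.sample_id = 'S1'
""").fetchall()

for gene, fpkm, ann in rows:
    print(gene, fpkm, json.loads(ann) if ann else None)
```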
Synthesis of Organic Matter of Prebiotic Chemistry at the Protoplanetary Disc
NASA Astrophysics Data System (ADS)
Snytnikov, Valeriy; Stoynovskaya, Olga; Rudina, Nina
We have carried out a scanning electron microscopic examination of the CM carbonaceous chondrite meteorites Migey, Murchison and Staroe Boriskino, aged more than 4.56 billion years (about 50 million years from the beginning of the formation of the Solar system). Our study confirmed the conclusion of Rozanov, Hoover and other researchers about the presence of microfossils of bacterial origin in the matrix of all these meteorites. Since the formation of the Solar system took 60 - 100 million years, the primary biocenosis emerged in the protoplanetary disc of the Solar system before the meteorites or simultaneously with them. This means that prebiological processes and the RNA world appeared even earlier in the circumsolar protoplanetary disc. Most likely, this emergence of prebiotic chemistry takes place nowadays in massive and medium-massive discs of the observed young stellar objects (YSOs) of class 0 and I. The transition from chemical to biological evolution took less than 50 million years for the Solar system. Further evolution of individual biocenoses in a protoplanetary disc was associated with varying physico-chemical conditions during the formation of the Solar system bodies. Biocenoses on these bodies could be destroyed or could develop under the influence of many cosmic factors and, in the case of the Earth, geological processes. Completing the formation of the primary biosphere in a short evolutionary time - millions of years - requires highly efficient chemical syntheses. In industrial chemistry, high-pressure catalytic reactors are used for the efficient synthesis of ammonia, hydrogen cyanide, methanol and other species that are the precursors of prebiotic compounds. Thus, (1) a sufficient amount of the proper catalyst in (2) high-pressure areas of the disc can trigger these intense syntheses. The disc contains solids ranging in size from nanoparticles to pebbles. Iron and magnesium are catalytically active ingredients of such solids. The puzzle is how to provide hydrogen pressures inside the disc from tens to hundreds of atmospheres. We simulated unsteady processes in massive circumstellar discs around YSOs of class 0 and I. In the computational experiments, we have shown that, at a certain stage of their evolution, circumstellar discs of gas and solids produce local areas of high pressure. According to classical heterogeneous catalysis, a wide range of organic and prebiotic compounds could have been synthesized in these areas. Can we capture these areas of high-pressure synthesis in observations of circumstellar discs? Due to their small sizes, such areas can hardly ever be resolved even with modern telescopes such as ALMA. However, we can try to detect their signatures in the disc, since the gas of the disc retains the set of organic synthesis products. The idea is to define the signature of the process using laboratory experiments. By varying the gas temperature and pressure in a laboratory setup, we can carry out the catalytic high-pressure syntheses and specify the set of gaseous products. These sets of organic compounds observed in the discs may serve as indicators of the emergence of high-pressure areas of prebiotic chemistry. Thus, there is special interest in the study of class 0 and I YSOs by means of observational astronomy. For these objects, the first data on the presence of individual organic compounds in the massive hydrogen-helium component of the discs are appearing.
The organic compounds that originate in chemical reactions in the discs should be distinguished from the set of organic compounds inherited from the initial molecular cloud.
Supersonic gas streams enhance the formation of massive black holes in the early universe.
Hirano, Shingo; Hosokawa, Takashi; Yoshida, Naoki; Kuiper, Rolf
2017-09-29
The origin of super-massive black holes in the early universe remains poorly understood. Gravitational collapse of a massive primordial gas cloud is a promising initial process, but theoretical studies have difficulty growing the black hole fast enough. We report numerical simulations of early black hole formation starting from realistic cosmological conditions. Supersonic gas motions left over from the Big Bang prevent early gas cloud formation until rapid gas condensation is triggered in a protogalactic halo. A protostar is formed in the dense, turbulent gas cloud, and it grows by sporadic mass accretion until it acquires 34,000 solar masses. The massive star ends its life with a catastrophic collapse to leave a black hole, a promising seed for the formation of a monstrous black hole. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Ghosts, strong coupling, and accidental symmetries in massive gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deffayet, C.; GReCO/IAP, 98 bis boulevard Arago, 75014 Paris; Rombouts, J.-W.
2005-08-15
We show that the strong self-interaction of the scalar polarization of a massive graviton can be understood in terms of the propagation of an extra ghostlike degree of freedom, thus relating strong coupling to the sixth degree of freedom discussed by Boulware and Deser in their Hamiltonian analysis of massive gravity. This enables one to understand the Vainshtein recovery of solutions of massless gravity as being due to the effect of the exchange of this ghost, which gets frozen at distances larger than the Vainshtein radius. Inside this region, we can trust the two-field Lagrangian perturbatively, while at larger distances one can use the higher derivative formulation. We also compare massive gravity with other models, namely, deconstructed theories of gravity, as well as the Dvali-Gabadadze-Porrati model. In the latter case, we argue that the Vainshtein recovery process is of a different nature, not involving a ghost degree of freedom.
Nucleosynthesis in the first massive stars
NASA Astrophysics Data System (ADS)
Choplin, Arthur; Meynet, Georges; Maeder, André; Hirschi, Raphael; Chiappini, Cristina
2018-01-01
The nucleosynthesis in the first massive stars may be constrained by observing the surface composition of long-lived, very iron-poor stars born around 10 billion years ago from material enriched by their ejecta. Many interesting clues about physical processes that occurred in the first stars can be obtained from nuclear aspects. First, in these first massive stars, mixing must have occurred between the H-burning and the He-burning zones during their nuclear lifetimes; second, only the outer layers of these massive stars have enriched the material from which the very iron-poor stars, observed today in the halo of the Milky Way, have formed. These two basic requirements can be obtained with rotating stellar models at very low metallicity. In the present paper, we discuss the arguments supporting this view and illustrate the sensitivity of the results for the [Mg/Al] ratio to the rate of the reaction 23Na(p,γ)24Mg.
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image-processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
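For readers unfamiliar with the scalability argument, the sketch below evaluates the Amdahl's-law speedup S(n) = 1 / ((1 - p) + p/n) used in such analyses; the parallel fraction p is an assumed illustrative value, not the fraction measured for the HPC 3D-MIP platform.

```python
# Amdahl's-law speedup for a program with parallelizable fraction p on n cores.
# p = 0.95 below is an assumption for illustration only.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.95
    for cores in (1, 2, 4, 8, 12, 24, 48):
        print(f"{cores:3d} cores -> speedup {amdahl_speedup(p, cores):5.2f}x")
```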
Plasma transfusion for patients with severe hemorrhage: what is the evidence?
Callum, Jeannie L; Rizoli, Sandro
2012-05-01
The following review will detail the current knowledge in massive hemorrhage with regard to the pathophysiology of the coagulation disturbance, the role of plasma, the role of alternatives to plasma, and the clinical value of having a massive transfusion protocol. The coagulation disturbance in trauma patients is more than just the result of consumption of clotting factors at sites of injury and dilution from the infusion of intravenous fluids and red blood cells (RBCs). Even before substantial amounts of fluid resuscitation and RBC transfusion, one-quarter of trauma patients already have abnormal coagulation variables. There is an apparent role for the activation of protein C, hypofibrinogenemia, and fibrin(gen)olysis in the coagulation disturbance after trauma and massive hemorrhage. None of these three disturbances would be completely mitigated by the use of plasma alone, suggesting that there may be an opportunity to improve care of these patients with alternative strategies, such as fibrinogen concentrates and antifibrinolytics. Despite numerous retrospective cohort studies evaluating 1:1 plasma to RBC formula-driven resuscitation, the overall clinical value of this approach is unclear. Studies have even raised concerns regarding a potential increase in morbidity associated with this approach, particularly for patients overtriaged to 1:1 for whom a massive transfusion is unlikely. We also do not have sufficient evidence to recommend either goal-directed therapy with thromboelastography or early use of fibrinogen replacement, with either cryoprecipitate or fibrinogen concentrates. We have high-quality data that argue against a role for recombinant Factor VIIa, which should prompt removal of this strategy from existing protocols. In contrast, we have high-level evidence that all bleeding trauma patients should receive tranexamic acid as soon as possible after injury. This therapy must be included in hemorrhage protocols. If we are to improve the care of massively bleeding patients on firm scientific ground, we will need large-scale randomized trials to delineate the role of coagulation replacement and the utility of laboratory monitoring. But until these trials are completed, it is clear that a massive transfusion protocol is needed in all hospitals that manage bleeding patients, to ensure a prompt and coordinated response to hemorrhage. © 2012 American Association of Blood Banks.
A method of fast mosaic for massive UAV images
NASA Astrophysics Data System (ADS)
Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong
2014-11-01
With the development of UAV technology, UAVs are widely used in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and the surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use. Users can therefore obtain massive image data with UAVs, but processing these data requires a lot of time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields, however, require a quick response, which is hard to achieve with massive image data. To address the disadvantages of high time consumption and manual interaction, this article presents a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original UAV images; flight belts and the relations between belts and images are recognized automatically by the program, and useless images are discarded at the same time. This speeds up the search for matching points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to shorten the time of global optimization notably. Besides the traditional mosaic result, the method can also generate a superoverlay result for Google Earth, which provides a fast and easy way to display the result data. In order to verify the feasibility of this method, a fast mosaic system for massive UAV images was developed, which is fully automated and requires no manual interaction once the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% compared with traditional methods and greatly increases the response speed of UAV image processing.
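As a hedged sketch of the GPS-based pre-processing idea (the distance threshold and the equirectangular approximation are assumptions, not the paper's exact procedure), only images whose camera positions lie within a given ground distance are kept as candidate pairs for feature matching, which sharply reduces the matching workload.

```python
import math
from itertools import combinations

def ground_distance_m(p, q):
    """Approximate distance between two (lat, lon) points in metres."""
    lat1, lon1 = map(math.radians, p)
    lat2, lon2 = map(math.radians, q)
    x = (lon2 - lon1) * math.cos(0.5 * (lat1 + lat2))
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y)

def candidate_pairs(images, max_dist_m=150.0):
    """images: dict name -> (lat, lon). Keep only pairs close enough to overlap."""
    return [(a, b) for a, b in combinations(images, 2)
            if ground_distance_m(images[a], images[b]) <= max_dist_m]

if __name__ == "__main__":
    # invented example coordinates
    imgs = {"IMG_001": (47.1000, 87.6000),
            "IMG_002": (47.1005, 87.6002),
            "IMG_003": (47.2000, 87.7000)}
    print(candidate_pairs(imgs))   # only the two nearby images are paired
```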
NASA Astrophysics Data System (ADS)
Christensen, C.; Liu, S.; Scorzelli, G.; Lee, J. W.; Bremer, P. T.; Summa, B.; Pascucci, V.
2017-12-01
The creation, distribution, analysis, and visualization of large spatiotemporal datasets are a growing challenge for the study of climate and weather phenomena, in which increasingly massive domains are used to resolve finer features, resulting in datasets that are simply too large to be effectively shared. Existing workflows typically consist of pipelines of independent processes that preclude many possible optimizations. As data sizes increase, these pipelines are difficult or impossible to execute interactively and instead simply run as large offline batch processes. Rather than limiting our conceptualization of such systems to pipelines (or dataflows), we propose a new model for interactive data analysis and visualization systems in which we comprehensively consider the processes involved, from data inception through analysis and visualization, in order to describe systems composed of these processes in a manner that facilitates interactive implementations of the entire system rather than of only a particular component. We demonstrate the application of this new model with the implementation of an interactive system that supports progressive execution of arbitrary user scripts for the analysis and visualization of massive, disparately located climate data ensembles. It is currently in operation as part of the Earth System Grid Federation server running at Lawrence Livermore National Lab, and is accessible through both web-based and desktop clients. Our system facilitates interactive analysis and visualization of massive remote datasets up to petabytes in size, such as the 3.5 PB, 7 km NASA GEOS-5 Nature Run simulation, previously only possible offline or at reduced resolution. To support the community, we have enabled general distribution of our application using public frameworks including Docker and Anaconda.
Programmable DNA-Mediated Multitasking Processor.
Shu, Jian-Jun; Wang, Qi-Wen; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin
2015-04-30
Because of the appealing features of DNA as a material, including its minuscule size, defined structural repeat and rigidity, programmable DNA-mediated processing is a promising computing paradigm that employs DNA as an information-storing and information-processing substrate to tackle computational problems. The massive parallelism of DNA hybridization has great potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with their stepwise signal cascades. As an example of multitasking capability, we present an in vitro programmable DNA-mediated optimal route planning processor as a functional unit embedded in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over existing silicon-mediated methods, such as massive data storage and simultaneous processing using far less material than conventional silicon devices.
Health plans keeping drug cost increases in check with programs that promote generics.
2002-07-01
To counter the massive amount of drug company detailing and marketing that is partly responsible for driving up pharmaceutical costs, health plans and some independent practice associations are promoting the use of generics to physicians in their networks. While most physicians in capitated contracts don't directly benefit from the movement to encourage generics unless they have pharmacy risk, some health plans are paying physicians financial incentives to increase generic prescribing.
Delivery of Fuel and Construction Materials to South Pole Station
1993-07-01
AD-A270 431. Delivery of Fuel and Construction Materials to South Pole Station. Stephen L. DenHartog and George L. Blaisdell, July 1993. ...South Pole Station, ideally with minimal impact on the current science and operational program. The new station will require the delivery of massive...amounts of construction materials to this remote site. The existing means of delivering material and fuel to the South Pole include the use of specialized
2017-12-08
When two black holes collide, they release massive amounts of energy in the form of gravitational waves that last a fraction of a second and can be "heard" throughout the universe - if you have the right instruments. Today we learned that the #LIGO project heard the telltale chirp of black holes colliding, confirming a prediction of Einstein's General Theory of Relativity. NASA's LISA mission will look for direct evidence of gravitational waves. go.nasa.gov/23ZbqoE This video illustrates what that collision might look like.
Probiotics as control agents in aquaculture
NASA Astrophysics Data System (ADS)
Geovanny D, Gómez R.; Balcázar, José Luis; Ma, Shen
2007-01-01
Infectious diseases constitute a limiting factor in the development of aquaculture production, and their control has concentrated solely on the use of antibiotics. However, the massive use of antibiotics for the control of diseases has been called into question by the acquisition of antibiotic resistance, and the need for alternatives is of prime importance. Probiotics, live microorganisms administered in adequate amounts that confer a health benefit on the host, are emerging as significant microbial food supplements in the field of prophylaxis.
Ultra-fast outflows (aka UFOs) from AGNs and QSOs
NASA Astrophysics Data System (ADS)
Cappi, M.; Tombesi, F.; Giustini, M.
During the last decade, strong observational evidence has accumulated for the existence of massive, high-velocity winds/outflows (aka Ultra Fast Outflows, UFOs) in nearby AGNs and in more distant quasars. Here we briefly review some of the most recent developments in this field and discuss the relevance of UFOs both for understanding the physics of accretion disk winds in AGNs and for quantifying the global amount of AGN feedback on the surrounding medium.
A physical model of mass ejection in failed supernovae
NASA Astrophysics Data System (ADS)
Coughlin, Eric R.; Quataert, Eliot; Fernández, Rodrigo; Kasen, Daniel
2018-06-01
During the core collapse of massive stars, the formation of the proto-neutron star is accompanied by the emission of a significant amount of mass-energy (˜0.3 M⊙) in the form of neutrinos. This mass-energy loss generates an outward-propagating pressure wave that steepens into a shock near the stellar surface, potentially powering a weak transient associated with an otherwise-failed supernova. We analytically investigate this mass-loss-induced wave generation and propagation. Heuristic arguments provide an accurate estimate of the amount of energy contained in the outgoing sound pulse. We then develop a general formalism for analysing the response of the star to centrally concentrated mass loss in linear perturbation theory. To build intuition, we apply this formalism to polytropic stellar models, finding qualitative and quantitative agreement with simulations and heuristic arguments. We also apply our results to realistic pre-collapse massive star progenitors (both giants and compact stars). Our analytic results for the sound pulse energy, excitation radius, and steepening in the stellar envelope are in good agreement with full time-dependent hydrodynamic simulations. We show that prior to the sound pulse's arrival at the stellar photosphere, the photosphere has already reached velocities ˜20-100 per cent of the local sound speed, thus likely modestly decreasing the stellar effective temperature before the star disappears. Our results provide important constraints on the physical properties and observational appearance of failed supernovae.
Zhou, Xiaolu
2015-01-01
The growing number of bike sharing systems (BSS) in many cities largely facilitates biking for transportation and recreation. Most recent bike sharing systems produce time- and location-specific data, which enables the study of the travel behavior and mobility of each individual. However, despite a rapid growth of interest, studies on massive bike sharing data and the underlying travel patterns are still limited. Few studies have explored and visualized spatiotemporal patterns of bike sharing behavior using flow clustering, or examined station functional profiles based on over-demand patterns. This study investigated the spatiotemporal biking pattern in Chicago by analyzing massive BSS data from July to December in 2013 and 2014. The BSS in Chicago gained more popularity, with about 15.9% more people subscribing to the service. Specifically, we constructed a bike flow similarity graph and used the fastgreedy algorithm to detect spatial communities of biking flows. Using the proposed methods, we discovered unique travel patterns on weekdays and weekends, as well as different travel trends for customers and subscribers, from the noisy, massive amount of data. In addition, we also examined the temporal demands for bikes and docks using a hierarchical clustering method. Results demonstrated the modeled over-demand patterns in Chicago. This study offers better knowledge of biking flow patterns, which is difficult to obtain using traditional methods. Given the increasing popularity of BSS and data openness in different cities, the methods used in this study can be extended to examine biking patterns and BSS functionality in different cities. PMID:26445357
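A small hedged sketch of the flow-clustering step, with invented station names and trip counts: build a weighted station-to-station graph from trip counts, then detect communities with a greedy modularity algorithm (analogous to the fastgreedy algorithm used in the study).

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Edge weights are trip counts between stations; values are invented.
trips = [("Loop_A", "Loop_B", 120), ("Loop_B", "Loop_C", 95),
         ("Loop_A", "Loop_C", 80),  ("Lake_X", "Lake_Y", 60),
         ("Lake_Y", "Lake_Z", 70),  ("Loop_C", "Lake_X", 5)]

G = nx.Graph()
for origin, dest, count in trips:
    G.add_edge(origin, dest, weight=count)

# Greedy modularity maximization groups densely connected stations together.
communities = greedy_modularity_communities(G, weight="weight")
for i, group in enumerate(communities):
    print(f"community {i}: {sorted(group)}")
```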
Summers, Thomas; Johnson, Viviana V; Stephan, John P; Johnson, Gloria J; Leonard, George
2009-08-01
Massive transfusion of D- trauma patients in the combat setting involves the use of D+ red blood cells (RBCs) or whole blood along with suboptimal pretransfusion test result documentation. This presents challenges to the transfusion services of tertiary care military hospitals who ultimately receive these casualties, because initial D typing results may only reflect the transfused RBCs. After patients are stabilized, mixed-field reactions on D typing indicate the patient's true inherited D phenotype. This case series illustrates the utility of automated gel column agglutination in detecting mixed-field reactions in these patients. The transfusion service test results, including the automated gel column agglutination D typing results, of four massively transfused D- patients transfused with D+ RBCs are presented. To test the sensitivity of the automated gel column agglutination method in detecting mixed-field agglutination reactions, a comparative analysis of three automated technologies using predetermined mixtures of D+ and D- RBCs is also presented. The automated gel column agglutination method detected mixed-field agglutination in D typing in all four patients and in the three prepared control specimens. The automated microwell tube method identified one of the three prepared control specimens as indeterminate, which was subsequently manually confirmed as a mixed-field reaction. The automated solid-phase method was unable to detect any mixed fields. The automated gel column agglutination method provides a sensitive means of detecting mixed-field agglutination reactions in the determination of the true inherited D phenotype of combat casualties transfused with massive amounts of D+ RBCs.
Walsh, Mike J; Tharratt, Steven R; Offerman, Steven R
2010-06-01
Liquid nitrogen (LN) ingestion is unusual, but may be encountered by poison centers, emergency physicians, and general surgeons. Unique properties of LN produce a characteristic pattern of injury. A 19-year-old male college student presented to the Emergency Department complaining of abdominal pain and "bloating" after drinking LN. His presentation vital signs were remarkable only for mild tachypnea and tachycardia. On physical examination, he had mild respiratory difficulty due to abdominal distention. His abdomen was tense and distended. Abdominal X-ray studies revealed a massive pneumoperitoneum. At laparotomy, he was found to have a large amount of peritoneal gas. No perforation was identified. After surgery, the patient made an uneventful recovery and was discharged 5 days later. At 2-week clinic follow-up, he was doing well without complications. Nitrogen is a colorless, odorless gas at room temperature. Due to its low boiling point (-195 degrees C), LN rapidly evaporates when in contact with body surface temperatures. Therefore, ingested LN causes damage by two mechanisms: rapid freezing injury upon mucosal contact and rapid volume expansion as nitrogen gas is formed. Patients who ingest LN may develop gastrointestinal perforation and massive pneumoperitoneum. Because rapid gas formation may allow large volumes to escape from tiny perforations, the exact site of perforation may never be identified. In cases of LN ingestion, mucosal injury and rapid gas formation can cause massive pneumoperitoneum. Although laparotomy is recommended for all patients with signs of perforation, the site of injury may never be identified. Copyright 2010 Elsevier Inc. All rights reserved.
No obvious sympathetic excitation after massive levothyroxine overdose: A case report.
Xue, Jianxin; Zhang, Lei; Qin, Zhiqiang; Li, Ran; Wang, Yi; Zhu, Kai; Li, Xiao; Gao, Xian; Zhang, Jianzhong
2018-06-01
Thyrotoxicosis from an overdose of medicinal thyroid hormone is a condition that may be associated with a significant delay in the onset of toxicity. However, limited literature is available regarding thyrotoxicosis attributed to excessive ingestion of exogenous thyroid hormone, and most of the cases described were pediatric. Herein, we present the course of a patient who ingested a massive amount of levothyroxine but exhibited no obvious symptoms of sympathetic excitation, and we review feasible treatment options for such overdoses. A 41-year-old woman with a ureteral calculus ingested a massive amount of levothyroxine (120 tablets, equal to 6 mg in total) during her hospitalization. Her vital signs after ingestion were unremarkable except for a markedly elevated respiratory rate of 45 breaths per minute. Initial laboratory findings revealed markedly elevated serum levels of thyroxine (T4) >320 nmol/L, free triiodothyronine (fT3) 10.44 pmol/L, and free thyroxine (fT4) >100 pmol/L. The patient had a history of hypothyroidism, which was managed with thyroid hormone replacement (levothyroxine 100 μg per day). She also suffered from systemic lupus erythematosus and chronic pancreatitis. This is a case of excessive ingestion of exogenous thyroid hormone in an adult. The interventions included using propranolol to prevent heart failure, using hemodialysis to remove excess thyroid hormone from the blood, and closely monitoring the vital signs, basal metabolic rate, blood biochemical indicators, and serum levels of thyroid hormone. The woman had no obvious symptoms of thyrotoxicosis. After 4 weeks, thyroid function tests indicated that serum thyroid hormone levels had completely returned to pre-ingestion levels. Accordingly, levothyroxine was resumed as before. Adults often exhibit more severe symptoms than children after a levothyroxine overdose because of their more complex medical histories and comorbidities. For them, hemodialysis should be considered as soon as possible. In addition, diverse treatments tailored to the specific symptoms and continuous monitoring are indispensable.
2012-10-01
using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial...parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we... Acronyms: IBI, iterative Boltzmann inversion; LAMMPS, Large-scale Atomic/Molecular Massively Parallel Simulator; MAPS, Materials Processes and Simulations; MS
Debris Discs: Modeling/theory review
NASA Astrophysics Data System (ADS)
Thébault, P.
2012-03-01
An impressive amount of photometric, spectroscopic and imaging observations of circumstellar debris discs has been accumulated over the past 3 decades, revealing that they come in all shapes and flavours, from young post-planet-formation systems like Beta-Pic to much older ones like Vega. What we see in these systems are small grains, which are probably only the tip of the iceberg of a vast population of larger (undetectable) collisionally-eroding bodies, left over from the planet-formation process. Understanding the spatial structure, physical properties, origin and evolution of this dust is of crucial importance, as it is our only window into what is going on in these systems. Dust can be used as a tracer of the distribution of their collisional progenitors and of possible hidden massive perturbers, but it also allows valuable information to be derived about the disc's total mass, size distribution or chemical composition. I will review the state of the art in numerical models of debris discs, and present some important issues that are explored by current modelling efforts: planet-disc interactions, the link between cold (i.e. Herschel-observed) and hot discs, the effect of binarity, transient versus continuous processes, etc. I will finally present some possible perspectives for the development of future models.
NASA Technical Reports Server (NTRS)
Moore, Jeffrey
2012-01-01
Titan may have acquired its massive atmosphere relatively recently in solar system history. The warming sun may have been key to generating Titan's atmosphere over time, starting from a thin atmosphere with condensed surface volatiles, as on Triton, with increasing luminosity releasing methane, and then large amounts of nitrogen (perhaps suddenly), into the atmosphere. This thick atmosphere, initially with much more methane than at present, resulted in global fluvial erosion that has over time retreated towards the poles with the removal of methane from the atmosphere. Basement rock, as manifested by bright, rough ridges, scarps, crenulated blocks, or aligned massifs, mostly appears within 30 degrees of the equator. This landscape was intensely eroded by fluvial processes, as evidenced by numerous valley systems, fan-like depositional features and regularly spaced ridges (crenulated terrain). Much of this bedrock landscape, however, is mantled by dunes, suggesting that fluvial erosion no longer dominates in equatorial regions. High midlatitude regions on Titan exhibit dissected sedimentary plains at a number of localities, suggesting deposition (perhaps of sediment eroded from equatorial regions) followed by erosion. The polar regions are mainly dominated by deposits of fluvial and lacustrine sediment. Fluvial processes are active in polar areas, as evidenced by alkane lakes and occasional cloud cover.
Kinematic modelling of disc galaxies using graphics processing units
NASA Astrophysics Data System (ADS)
Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.
2016-01-01
With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure by up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual-CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
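As a minimal CPU-side sketch of the model-fitting step (this is not the GBKFIT code), one can fit an arctan rotation-curve model to mock velocities with a Levenberg-Marquardt-type least-squares solver; on the GPU it is the expensive per-spaxel model evaluation inside such a fit that gets parallelized. The model form, noise level and starting guesses below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation_curve(r, v_max, r_t):
    """Arctan rotation curve: v(r) = (2/pi) * v_max * arctan(r / r_t)."""
    return (2.0 / np.pi) * v_max * np.arctan(r / r_t)

def residuals(params, r, v_obs):
    v_max, r_t = params
    return rotation_curve(r, v_max, r_t) - v_obs

rng = np.random.default_rng(42)
r = np.linspace(0.1, 15.0, 60)                 # radii in kpc (mock data)
v_true = rotation_curve(r, 220.0, 2.5)         # "true" curve
v_obs = v_true + rng.normal(0.0, 8.0, r.size)  # add measurement noise

# Levenberg-Marquardt fit of (v_max, r_t) to the mock velocities.
fit = least_squares(residuals, x0=[150.0, 1.0], args=(r, v_obs), method="lm")
print("best-fit v_max, r_t:", fit.x)
```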
Lagkouvardos, Ilias; Joseph, Divya; Kapfhammer, Martin; Giritli, Sabahattin; Horn, Matthias; Haller, Dirk; Clavel, Thomas
2016-09-23
The SRA (Sequence Read Archive) serves as the primary depository for massive amounts of Next Generation Sequencing data, and currently hosts over 100,000 16S rRNA gene amplicon-based microbial profiles from various host habitats and environments. This number is increasing rapidly, and there is a dire need for approaches to utilize this pool of knowledge. Here we created IMNGS (Integrated Microbial Next Generation Sequencing), an innovative platform that uniformly and systematically screens for and processes all prokaryotic 16S rRNA gene amplicon datasets available in SRA and uses them to build sample-specific sequence databases and OTU-based profiles. Via a web interface, this integrative sequence resource can easily be queried by users. We show examples of how the approach allows testing of the ecological importance of specific microorganisms in different hosts or ecosystems, and performing targeted diversity studies for selected taxonomic groups. The platform also offers a complete workflow for de novo analysis of users' own raw 16S rRNA gene amplicon datasets for the sake of comparison with existing data. IMNGS can be accessed at www.imngs.org.
Sebaa, Abderrazak; Chikh, Fatima; Nouicer, Amina; Tari, AbdelKamel
2018-02-19
The huge increase in medical devices and clinical applications that generate enormous amounts of data has raised major issues in managing, processing, and mining this massive amount of data. Indeed, traditional data warehousing frameworks cannot be effective when managing the volume, variety, and velocity of current medical applications. As a result, many data warehouses face issues with medical data, and many challenges need to be addressed. New solutions have emerged, and Hadoop is one of the best examples; it can be used to process these streams of medical data. However, without an efficient system design and architecture, this performance will not be significant or valuable for medical managers. In this paper, we provide a short review of the literature on the research issues of traditional data warehouses and present some important Hadoop-based data warehouses. In addition, a Hadoop-based architecture and a conceptual data model for designing a medical Big Data warehouse are given. In our case study, we provide implementation details of a big data warehouse based on the proposed architecture and data model on the Apache Hadoop platform, to ensure an optimal allocation of health resources.
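As a hedged, much-simplified illustration of the fact/dimension style of conceptual model discussed above (table and column names are invented, and the paper's Hadoop-based model is far richer), a central fact table of encounters can be joined to small dimension tables and then aggregated:

```python
import pandas as pd

# Toy star-schema: a fact table of encounters plus two dimension tables.
patients = pd.DataFrame({"patient_id": [1, 2], "age_group": ["40-60", "20-40"]})
units    = pd.DataFrame({"unit_id": [10, 11], "unit": ["cardiology", "oncology"]})
facts    = pd.DataFrame({"patient_id": [1, 1, 2], "unit_id": [10, 10, 11],
                         "length_of_stay": [3, 5, 2]})

# Join the fact table to its dimensions and aggregate a simple measure.
report = (facts.merge(patients, on="patient_id")
               .merge(units, on="unit_id")
               .groupby(["unit", "age_group"])["length_of_stay"]
               .mean()
               .reset_index())
print(report)
```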
Slack, J.F.; Coad, P.R.
1989-01-01
The tourmalines and chlorites record a series of multiple hydrothermal and metamorphic events. Paragenetic studies suggest that tourmaline was deposited during several discrete stages of mineralization, as evidenced by brecciation and cross-cutting relationships. Most of the tourmalines have two concentric growth zones defined by different colours (green, brown, blue, yellow). Some tourmalines also display pale discordant rims that cross-cut and embay the inner growth zones, and polycrystalline, multiple-extinction domains. Late sulphide veinlets (chalcopyrite, pyrrhotite) transect the inner growth zones and pale discordant rims of many crystals. The concentric growth zones are interpreted as primary features developed by the main ore-forming hydrothermal system, whereas the discordant rims, polycrystalline domains, and cross-cutting sulphide veinlets reflect post-ore metamorphic processes. Variations in mineral proportions and mineral chemistry within the deposit mainly depend on fluctuations in temperature, pH, water/rock ratios, and amounts of entrained seawater. -from Authors
Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L
2008-01-15
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
Challenge Paper: Validation of Forensic Techniques for Criminal Prosecution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erbacher, Robert F.; Endicott-Popovsky, Barbara E.; Frincke, Deborah A.
2007-04-10
Abstract: As in many domains, there is increasing agreement in the user and research community that digital forensics analysts would benefit from the extension, development and application of advanced techniques in performing large scale and heterogeneous data analysis. Modern digital forensics analysis of cyber-crimes and cyber-enabled crimes often requires scrutiny of massive amounts of data. For example, a case involving network compromise across multiple enterprises might require forensic analysis of numerous sets of network logs and computer hard drives, potentially involving hundreds of gigabytes of heterogeneous data, or even terabytes or petabytes of data. Also, the goal for forensic analysis is to not only determine whether the illicit activity being considered is taking place, but also to identify the source of the activity and the full extent of the compromise or impact on the local network. Even after this analysis, there remains the challenge of using the results in subsequent criminal and civil processes.
Chemoprevention of cancer: an ongoing saga.
Kellen, J A
1999-01-01
The trivial adage "an ounce of prevention..." is certainly appropriate in oncology; cancer has had, and continues to have, an enormous impact on morbidity, suffering, socioeconomics and mortality. Curative therapy is elusive--cancer remains a mainly lethal disease, which makes the objective of prevention even more important and attractive. Sober estimates put the potentially avoidable or preventable cancers in the Western World at 80% (1): the effects of smoking and alcohol, being overweight, diet, promiscuity and other lifestyle choices are well known, yet at the individual level, corrective measures are disappointingly ignored. Lately, this issue is being further weakened by our acceptance of inherited susceptibility--why change our habits and indulgences if we cannot escape our genetic destiny? However, there is a massive and growing amount of information on chemoprevention which needs to be carefully evaluated, in the hope that someday we will be able to avoid, or at least delay, cancer by the use of natural or synthetic compounds that intervene in the early precancerous process.
Metal release from stainless steel particles in vitro-influence of particle size.
Midander, K; Pan, J; Wallinder, I Odnevall; Leygraf, C
2007-01-01
Human inhalation of airborne metallic particles is important for health risk assessment. To study interactions between metallic particles and the human body, metal release measurements of stainless steel powder particles were performed in two synthetic biological media simulating lung-like environments. Particle size and media strongly influence the metal release process. The release rate of Fe is enhanced compared with Cr and Ni. In artificial lysosomal fluid (ALF, pH 4.5), the accumulated amounts of released metal per particle loading increase drastically with decreasing particle size. The release rate of Fe per unit surface area increases with decreasing particle size. Compared with massive sheet metal, fine powder particles (<4 microm) show similar release rates of Cr and Ni, but a higher release rate of Fe. Release rates in Gamble's solution (pH 7.4), for all powders investigated, are significantly lower compared to ALF. No clear trend is seen related to particle size in Gamble's solution.
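A back-of-envelope sketch of why finer powders release more metal per unit mass, assuming idealized spherical particles and an approximate steel density (the values are illustrative, not the paper's measurements): the specific surface area of a sphere of diameter d and density ρ scales as 6/(ρd), so halving the diameter doubles the exposed area per gram.

```python
# Specific surface area of ideal spherical particles, A = 6 / (rho * d).
# The density is an assumed approximate value for stainless steel.
RHO_STEEL = 7.9e6  # g per m^3

def specific_surface_area(diameter_um):
    """Surface area per gram (m^2/g) for ideal spheres of the given diameter."""
    d = diameter_um * 1e-6          # micrometres -> metres
    return 6.0 / (RHO_STEEL * d)

if __name__ == "__main__":
    for d_um in (45.0, 10.0, 4.0, 1.0):
        print(f"d = {d_um:5.1f} um -> {specific_surface_area(d_um):.3f} m^2/g")
```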
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zammarchi, E.; Donati, M.A.; Morrone, A.
Few patients with the early-infantile form of galactosialidosis have been described to date. Presented here is the first Italian case. Fetal hydrops was detected by ultrasound at week 24 of gestation. At birth, the infant presented with hypotonia, massive edema, a flattened coarse facies, telangiectasias, and hepatosplenomegaly, but no dysostosis multiplex. The patient died 72 days postpartum. Excessive sialyloligosaccharides in urine, as well as vacuolation of lymphocytes and eosinophilic granulocytes in peripheral blood, were indicative of a lysosomal storage disease. In the patient's fibroblasts, both α-neuraminidase and β-galactosidase activities were severely reduced, and cathepsin A activity was <1% of control levels, confirming the biochemical diagnosis of galactosialidosis. However, in contrast to previously reported early-infantile cases, a normal amount of protective protein/cathepsin A mRNA was detected on Northern blots. This mutant transcript was translated into a precursor protein that was not processed into the mature enzyme and lacked both protective and catalytic activities. 28 refs., 4 figs., 1 tab.
The mystery of a supposed massive star exploding in a brightest cluster galaxy
NASA Astrophysics Data System (ADS)
Hosseinzadeh, Griffin
2017-08-01
Most of the diversity of core-collapse supernovae results from late-stage mass loss by their progenitor stars. Supernovae that interact with circumstellar material (CSM) are a particularly good probe of these last stages of stellar evolution. Type Ibn supernovae are a rare and poorly understood class of hydrogen-poor explosions that show signs of interaction with helium-rich CSM. The leading hypothesis is that they are explosions of very massive Wolf-Rayet stars in which the supernova ejecta excites material previously lost by stellar winds. These massive stars have very short lifetimes, and therefore should only be found in actively star-forming galaxies. However, PS1-12sk is a Type Ibn supernova found on the outskirts of a giant elliptical galaxy. As this is extraordinarily unlikely, we propose to obtain deep UV images of the host environment of PS1-12sk in order to map nearby star formation and/or find a potential unseen star-forming host. If star formation is detected, its amount and location will provide deep insights into the progenitor picture for the poorly understood Type Ibn class. If star formation is still not detected, these observations would challenge the well-accepted hypothesis that these are core-collapse supernovae at all.
Endoscopic management of massive mercury ingestion
Zag, Levente; Berkes, Gábor; Takács, Irma F; Szepes, Attila; Szabó, István
2017-01-01
Abstract Rationale: Ingestion of a massive amount of metallic mercury was thought to be harmless until the last century. After that, in a number of cases, mercury ingestion has been associated with appendicitis, impaired liver function, memory deficits, aspiration leading to pneumonitis and acute renal failure. Treatment includes gastric lavage, giving laxatives and chelating agents, but rapid removal of metallic mercury with gastroscopy has not been used. Patient concerns: An 18-year-old man was admitted to our emergency department after drinking 1000 g of metallic mercury as a suicide attempt. Diagnosis: Except from mild umbilical tenderness, he had no other symptoms. Radiography showed a metallic density in the area of the stomach. Intervention: Gastroscopy was performed to remove the mercury. One large pool and several small droplets of mercury were removed from the stomach. Outcomes: Blood and urine mercury levels of the patient remained low during hospitalization. No symptoms of mercury intoxication developed during the follow-up period. Lessons: Massive mercury ingestion may cause several symptoms, which can be prevented with prompt treatment. We used endoscopy to remove the mercury, which shortened the exposure time and minimized the risk of aspiration. This is the first case where endoscopy was used for the management of mercury ingestion. PMID:28562544
[NEII] Line Velocity Structure of Ultracompact HII Regions
NASA Astrophysics Data System (ADS)
Okamoto, Yoshiko K.; Kataza, Hirokazu; Yamashita, Takuya; Miyata, Takashi; Sako, Shigeyuki; Honda, Mitsuhiko; Onaka, Takashi; Fujiyoshi, Takuya
Newly formed massive stars are embedded in their natal molecular clouds and are observed as ultracompact HII regions. They emit strong ionic lines such as [NeII] 12.8 micron. Since Ne is ionized by UV photons of E > 21.6 eV, which is higher than the ionization energy of hydrogen atoms, the line probes the ionized gas near the ionizing stars. This enables us to probe gas motion in the vicinity of recently formed massive stars. High angular and spectral resolution observations of the [NeII] line will thus provide significant information on structures (e.g. disks and outflows) generated through massive star formation. We made [NeII] spectroscopy of ultracompact HII regions using the Cooled Mid-Infrared Camera and Spectrometer (COMICS) on the 8.2 m Subaru Telescope in July 2002. The spatial and spectral resolutions were 0.5" and 10,000, respectively. Among the targets, G45.12+0.13 shows the largest spatial variation in velocity. The brightest area of G45.12+0.13 has the largest line width in the object. The total velocity deviation amounts to 50 km/s (peak-to-peak value) in the observed area. We report the velocity structure of [NeII] emission of G45.12+0.13 and discuss the gas motion near the ionizing star.
Resource Planning for Massive Number of Process Instances
NASA Astrophysics Data System (ADS)
Xu, Jiajie; Liu, Chengfei; Zhao, Xiaohui
Resource allocation has been recognised as an important topic for business process execution. In this paper, we focus on planning resources for a massive number of process instances to meet the process requirements and cater for rational utilisation of resources before execution. After a motivating example, we present a model for planning resources for process instances. Then we design a set of heuristic rules that take both optimised planning at build time and instance dependencies at run time into account. Based on these rules we propose two strategies for resource planning, one called holistic and the other called batched. Both strategies target a lower cost; however, the holistic strategy can achieve an earlier deadline while the batched strategy aims at rational use of resources. We discuss in the paper how to find a balance between them, with a comprehensive experimental study on these two approaches.
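To make the trade-off between the two strategies concrete, here is a toy sketch (not the paper's heuristic rules) in which process instances are assigned either to the cheapest resource first or to the resource that finishes earliest. The resource names, costs, and one-unit task durations are invented for illustration.

```python
# Illustrative sketch only: the paper's actual heuristic rules are not
# reproduced here. This toy model assigns process instances (each needing one
# time unit of work) to resources with different per-unit costs, comparing a
# cost-first choice with a finish-time-first choice.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    cost_per_unit: float
    busy_until: float = 0.0
    assigned: list = field(default_factory=list)

def plan(instances, resources, prefer_deadline):
    total_cost = 0.0
    for inst in instances:
        if prefer_deadline:   # finish-time first, cost as tie-break
            r = min(resources, key=lambda r: (r.busy_until, r.cost_per_unit))
        else:                 # cheapest resource first, load as tie-break
            r = min(resources, key=lambda r: (r.cost_per_unit, r.busy_until))
        r.busy_until += 1.0
        r.assigned.append(inst)
        total_cost += r.cost_per_unit
    makespan = max(r.busy_until for r in resources)
    return total_cost, makespan

instances = [f"inst-{i}" for i in range(8)]
for deadline_first in (True, False):
    res = [Resource("cheap", 1.0), Resource("fast", 3.0)]
    cost, makespan = plan(instances, res, deadline_first)
    print("deadline-first" if deadline_first else "cost-first", cost, makespan)
```

Running it shows the expected tension: the cost-first assignment is cheaper but finishes later, while the finish-time-first assignment meets an earlier deadline at a higher cost.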
A novel explosive process is required for the gamma-ray burst GRB 060614.
Gal-Yam, A; Fox, D B; Price, P A; Ofek, E O; Davis, M R; Leonard, D C; Soderberg, A M; Schmidt, B P; Lewis, K M; Peterson, B A; Kulkarni, S R; Berger, E; Cenko, S B; Sari, R; Sharon, K; Frail, D; Moon, D-S; Brown, P J; Cucchiara, A; Harrison, F; Piran, T; Persson, S E; McCarthy, P J; Penprase, B E; Chevalier, R A; MacFadyen, A I
2006-12-21
Over the past decade, our physical understanding of gamma-ray bursts (GRBs) has progressed rapidly, thanks to the discovery and observation of their long-lived afterglow emission. Long-duration (>2 s) GRBs are associated with the explosive deaths of massive stars ('collapsars', ref. 1), which produce accompanying supernovae; the short-duration (≤2 s) GRBs have a different origin, which has been argued to be the merger of two compact objects. Here we report optical observations of GRB 060614 (duration approximately 100 s, ref. 10) that rule out the presence of an associated supernova. This would seem to require a new explosive process: either a massive collapsar that powers a GRB without any associated supernova, or a new type of 'engine', as long-lived as the collapsar but without a massive star. We also show that the properties of the host galaxy (redshift z = 0.125) distinguish it from other long-duration GRB hosts and suggest that an entirely new type of GRB progenitor may be required.
Massive Stars and Star Clusters in the Era of JWST
NASA Astrophysics Data System (ADS)
Klein, Richard
Massive stars lie at the center of the web of physical processes that has shaped the universe as we know it, governing the evolution of the interstellar medium of galaxies, producing a majority of the heavy elements, and thereby determining the evolution of galaxies. Massive stars are also important as signposts, since they produce most of the light and almost all the ionizing radiation in regions of active star formation. A significant fraction of all stars form in massive clusters, which will be observable throughout the visible universe with JWST. Their luminosities are so high that the pressure of their light on interstellar dust grains is likely the dominant feedback mechanism regulating their formation. While this process has been studied in the local Universe, much less attention has been focused on how it behaves at high redshift, where the dust abundance is much lower due to the overall lower abundance of heavy elements. The high redshift Universe also differs from the nearby one in that observations imply that high redshift star formation occurs at significantly higher densities than are typically found locally. We propose to simulate the formation of individual massive stars from the high redshift universe to the present day universe, spanning metallicities from 0.001 to 1.0 and column densities from 0.1 to 30.0 g/cm2, focusing on how the process depends on both the dust abundance and the density of the star-forming gas. These simulations will be among the first to treat the formation of Population II stars, which form in regions of low metallicity. Based on these results, we shall then simulate the formation of clusters of stars across cosmic time, both of moderate mass, such as the Orion Nebula Cluster, and of high mass, such as the super star clusters seen in starburst galaxies. These state-of-the-art simulations will be carried out using our newly developed techniques in our radiation-magnetohydrodynamic AMR code ORION for radiative transfer with both ionizing and non-ionizing radiation, which accurately handle both the direct radiation from stars and the diffuse infrared radiation field that builds up when direct radiation is reprocessed by dust grains. Our simulations include all of the relevant feedback effects, such as radiative heating, radiation pressure, photodissociation and photoionization, protostellar outflows, and stellar winds. The challenge in simulating the formation of massive stars and massive clusters is to include all these feedback effects self-consistently as they occur collectively. We are in an excellent position to do so. The results of these simulations will be directly relevant to the interpretation of observations with JWST, which will probe cluster formation in both the nearby and distant universe, and with SOFIA, which can observe high-mass star formation in the Galaxy. We shall make direct comparison with observations of massive protostars in the Galactic disk. We shall also compare with observations of star clusters that form in dense environments, such as the Galactic Center and in merging galaxies (e.g., the Antennae), and in low metallicity environments, such as the dwarf starburst galaxy I Zw 18. Once our simulations have been benchmarked with observations of massive protostars in the Galaxy and massive protoclusters in the local universe, they will provide the theoretical basis for interpreting observations of the formation of massive star clusters at high redshift with JWST. What determines the maximum mass of a star?
How does stellar feedback affect the formation of individual stars and the formation of massive star clusters, and how do the answers to these questions evolve with cosmic time? The proposed research will provide high-resolution input to the study of stellar feedback on galaxy formation with a significantly more accurate treatment of the physics, particularly the radiative transfer that is so important for feedback.
Heterogeneous computing architecture for fast detection of SNP-SNP interactions.
Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros
2014-06-25
The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
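The kernel being accelerated is, at its core, an exhaustive scan over all SNP pairs, each scored against the phenotype. The sketch below is a plain CPU/NumPy reference of that scan using an information-gain score on random stand-in data; the exact scoring function used by SNPsyn and the GPU/MIC kernels is not reproduced here, and all sizes are illustrative.

```python
# CPU reference sketch of the exhaustive pairwise scan that GPU/MIC kernels
# parallelize. The score below is a simple information gain of the phenotype
# given the joint genotype of a SNP pair; it stands in for, but does not
# reproduce, the scoring used by SNPsyn. Data are random placeholders.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_snps, n_samples = 20, 200
genotypes = rng.integers(0, 3, size=(n_snps, n_samples))   # 0/1/2 minor-allele counts
phenotype = rng.integers(0, 2, size=n_samples)              # case/control labels

def entropy(labels):
    counts = np.bincount(labels)
    p = counts[counts > 0] / labels.size
    return -np.sum(p * np.log2(p))

h_pheno = entropy(phenotype)

def interaction_score(i, j):
    # Joint genotype of the pair encoded as a single value in 0..8.
    joint = genotypes[i] * 3 + genotypes[j]
    cond = 0.0
    for g in np.unique(joint):
        mask = joint == g
        cond += mask.mean() * entropy(phenotype[mask])
    return h_pheno - cond        # information gain of the pair

scores = {(i, j): interaction_score(i, j) for i, j in combinations(range(n_snps), 2)}
best = max(scores, key=scores.get)
print("top pair:", best, "score:", round(scores[best], 4))
```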
Heterogeneous computing architecture for fast detection of SNP-SNP interactions
2014-01-01
Background The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. Results We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. Conclusions General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems. PMID:24964802
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
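The per-photon independence that makes this workload suit GPUs and OpenCL devices can be seen in a minimal, single-threaded random-walk sketch like the one below. It models isotropic scattering and weight-based absorption in a homogeneous slab with arbitrary illustrative optical properties; it is not the authors' OpenCL implementation, merely the serial loop that each GPU thread would execute for its own photon.

```python
# Minimal, single-threaded sketch of the photon random walk that GPU/OpenCL
# kernels execute per thread. Homogeneous medium, isotropic scattering,
# absorption handled by weight attenuation; optical properties are arbitrary
# illustrative values, not those of any published benchmark.
import math
import random

MU_A, MU_S = 0.1, 10.0            # absorption / scattering coefficients (1/mm)
MU_T = MU_A + MU_S
SLAB_THICKNESS = 10.0             # mm

def simulate_photon(rng):
    """Return the photon weight absorbed inside the slab (launched at z=0, +z)."""
    x = y = z = 0.0
    ux, uy, uz = 0.0, 0.0, 1.0
    weight, absorbed = 1.0, 0.0
    while weight > 1e-4:
        step = -math.log(rng.random()) / MU_T        # free path length
        x, y, z = x + ux * step, y + uy * step, z + uz * step
        if z < 0.0 or z > SLAB_THICKNESS:            # photon escaped the slab
            break
        absorbed += weight * (MU_A / MU_T)           # deposit the absorbed fraction
        weight *= MU_S / MU_T
        # Isotropic scattering: draw a new direction uniformly on the sphere.
        uz = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        s = math.sqrt(1.0 - uz * uz)
        ux, uy = s * math.cos(phi), s * math.sin(phi)
    return absorbed

rng = random.Random(42)
n_photons = 10_000
total_absorbed = sum(simulate_photon(rng) for _ in range(n_photons))
print("mean absorbed weight per photon:", total_absorbed / n_photons)
```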
Systems and methods for rapid processing and storage of data
Stalzer, Mark A.
2017-01-24
Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.
Massive Black Hole Binary Mergers and their Gravitational Waves
NASA Astrophysics Data System (ADS)
Kelley, Luke Zoltan; Blecha, Laura; Hernquist, Lars; Sesana, Alberto
2017-01-01
Gravitational Waves (GW) from stellar-mass BH binaries have recently been observed by LIGO, but GW from their supermassive counterparts have remained elusive. Recent upper limits from Pulsar Timing Arrays (PTA) have excluded significant portions of the predicted parameter space. Most previous studies, however, have assumed that most or all Massive Black Hole (MBH) Binaries merge effectively and quickly. I will present results derived—for the first time—from cosmological, hydrodynamic simulations with self-consistently coevolved populations of MBH particles. We perform post-processing simulations of the MBH merger process, using realistic galactic environments, including models of dynamical friction, stellar scattering, gas drag from a circumbinary disk, and GW emission—with no assumptions of merger fractions or timescales. We find that despite only the most massive systems merging effectively (and still on gigayear timescales), the GW Background is only just below current detection limits with PTA. Our models suggest that PTA should make detections within the next decade, and will provide information about MBH binary populations, environments, and even eccentricities. I’ll also briefly discuss prospects for observations of dual-AGN, and the possible importance of MBH triples in the merger process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elmegreen, Bruce G., E-mail: bge@us.ibm.com
The self-enrichment of massive star clusters by p-processed elements is shown to increase significantly with increasing gas density as a result of enhanced star formation rates and stellar scatterings compared to the lifetime of a massive star. Considering the type of cloud core where a globular cluster (GC) might have formed, we follow the evolution and enrichment of the gas and the time dependence of stellar mass. A key assumption is that interactions between massive stars are important at high density, including interactions between massive stars and massive-star binaries that can shred stellar envelopes. Massive-star interactions should also scatter low-mass stars out of the cluster. Reasonable agreement with the observations is obtained for a cloud-core mass of ∼4 × 10^6 M⊙ and a density of ∼2 × 10^6 cm^-3. The results depend primarily on a few dimensionless parameters, including, most importantly, the ratio of the gas consumption time to the lifetime of a massive star, which has to be low, ∼10%, and the efficiency of scattering low-mass stars per unit dynamical time, which has to be relatively large, such as a few percent. Also for these conditions, the velocity dispersions of embedded GCs should be comparable to the high gas dispersions of galaxies at that time, so that stellar ejection by multistar interactions could cause low-mass stars to leave a dwarf galaxy host altogether. This could solve the problem of missing first-generation stars in the halos of Fornax and WLM.
Constraining the Final Fates of Massive Stars by Oxygen and Iron Enrichment History in the Galaxy
NASA Astrophysics Data System (ADS)
Suzuki, Akihiro; Maeda, Keiichi
2018-01-01
Recent observational studies of core-collapse supernovae suggest that only stars with zero-age main-sequence masses smaller than 16–18 M⊙ explode when they are red supergiants, producing Type IIP supernovae. This may imply that more massive stars produce other types of supernovae or they simply collapse to black holes without giving rise to bright supernovae. This failed supernova hypothesis can lead to significantly inefficient oxygen production because oxygen abundantly produced in inner layers of massive stars with zero-age main-sequence masses around 20–30 M⊙ might not be ejected into the surrounding interstellar space. We first assume an unspecified population of oxygen injection events related to massive stars and obtain a model-independent constraint on how much oxygen should be released in a single event and how frequently such events should happen. We further carry out one-box galactic chemical enrichment calculations with different mass ranges of massive stars exploding as core-collapse supernovae. Our results suggest that the model assuming that all massive stars with 9–100 M⊙ explode as core-collapse supernovae is still most appropriate in explaining the solar abundances of oxygen and iron and their enrichment history in the Galaxy. The oxygen mass in the Galaxy is not explained when assuming that only massive stars with zero-age main-sequence masses in the range of 9–17 M⊙ contribute to the galactic oxygen enrichment. This finding implies that a good fraction of stars more massive than 17 M⊙ should eject their oxygen layers in either supernova explosions or some other mass-loss processes.
Vesta and Ceres as Seen by Dawn
NASA Astrophysics Data System (ADS)
Russell, C. T.; Nathues, A.; De Sanctis, M. C.; Prettyman, T. H.; Konopliv, A. S.; Park, R. S.; Jaumann, R.; McSween, H. Y., Jr.; Raymond, C. A.; Pieters, C. M.; McCord, T. B.; Marchi, S.; Schenk, P.; Buczkowski, D.
2015-12-01
Ceres and Vesta are the most massive bodies in the main asteroid belt. They have witnessed 4.6 Ga of solar system history. Dawn's objective is to interview these two witnesses. These bodies are relatively simple protoplanets, with a modest amount of thermal evolution and geochemical alteration. They are our best archetypes of the early building blocks of the terrestrial planets. In particular siderophile elements in the Earth's core were probably first segregated in Vesta-like bodies, and its water was likely first condensed in Ceres-like bodies. Vesta has provided copious meteorites for geochemical analysis. This knowledge was used to infer the constitution of the parent body. Dawn verified that Vesta was consistent with being that body, confirming the geochemical inferences from these samples on the formation and evolution of the solar system. Ceres has not revealed itself with a meteoritic record nor an asteroid family. While the surface is scarred with craters, it is probable that the ejecta from the crater-forming events created little competent material from the icy crust and any such ejected material that reached Earth might have disintegrated upon entry into the Earth's atmosphere. Ceres' surface differs greatly from Vesta's. Plastic or fluidized mass wasting is apparent as are many irregularly shaped craters, including many polygonal crater forms. There are many central-pit craters possibly caused by volatilization of the crust in the center of the impact. There are many central-peak craters but are these due to rebound or pingo-like formation processes? Bright spots, possibly salt deposits, dot the landscape, evidence of fluvial processes beneath the crust. Observations of the largest region of bright spots may suggest sublimation from the surface of the bright area, consistent with Herschel water vapor observations. Ceres is not only the most massive body in the asteroid belt but also possibly the most active occupant of the main belt.
Yet Another Model for the Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Leonard, P. J. T.
2000-05-01
We consider whether a gamma-ray burst can result from a merger between a neutron star and a massive main-sequence star in a binary system following a supernova explosion. The scenario for how this can happen is outlined in Leonard, Hills & Dewey 1994, ApJ, 423, L19-L22. The initially more massive star in a massive binary system evolves and undergoes core collapse to produce a neutron star and supernova. Since the outer layers of the originally more massive star have been transferred to the other star, then the supernova may be hydrogen deficient. The newly-formed neutron star receives a random kick during the explosion. In a small fraction of the cases, the kick has the appropriate direction and amplitude to remove most of the orbital angular momentum of the post-supernova binary system. The result is an orbit with a pericenter smaller than the radius of the non-exploding star. The neutron star rather quickly becomes embedded in the other star, and sinks to its center, giving the envelope of the merged object a lot of rotational angular momentum in the process. Leonard, Hills & Dewey estimate the rate of this process in the Galaxy to be 0.06 per square kpc per Myr for secondaries more massive than 15 solar masses. The fate of the merged object has been the source of much speculation, and we shall assume that a collapsar-like scenario results. That is, the neutron star experiences runaway accretion, collapses into a black hole, which continues to accrete, and produces a pair of jets that bore their way out of the merged object. Observers who lie in the direction of either jet will see a gamma-ray burst. Roughly 1% of supernovae in massive binary systems result in neutron stars quickly becoming embedded in the secondaries, and of those which produce black holes, only 1% would be observable as gamma-ray bursts, if the jets are beamed into 1% of the sky.
Performance of the Heavy Flavor Tracker (HFT) detector in the STAR experiment at RHIC
NASA Astrophysics Data System (ADS)
Alruwaili, Manal
With the growth of technology, the number of processors is becoming massive. Current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massive parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massive parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massive parallel computing such as Chapel, X10 and UPC++ exploit distributed computing, data parallel computing and thread-parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) extensions for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; or 3) the programming paradigms that result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object cloning, and object migration; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked and work concurrently on different elements of distributed data using remote method invocations. I present the new constructs, their grammar, and their behavior, and explain them using simple programs that utilize these constructs.
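As a rough, language-shifted illustration of the MIDD idea (multiple copies of the same class invoked concurrently on different partitions of distributed data), the Python sketch below uses a process pool as a stand-in for PGAS places and remote invocation. The class, partitioning, and reduction are all hypothetical; the proposed constructs themselves are C++ extensions, not Python.

```python
# Illustration only, in Python rather than the proposed C++ extensions:
# the MIDD (Multiple Invocation Distributed Data) idea of invoking copies of
# the same class concurrently on different partitions of a distributed array.
# multiprocessing stands in for PGAS "places"; all names are hypothetical.
from multiprocessing import Pool

class PartitionWorker:
    """One copy of this class is (conceptually) placed with each partition."""
    def __init__(self, partition):
        self.partition = partition

    def local_sum(self):
        return sum(self.partition)

def invoke(partition):
    # Each invocation constructs a worker on its own data partition.
    return PartitionWorker(partition).local_sum()

if __name__ == "__main__":
    data = list(range(1_000))
    n_places = 4
    chunk = len(data) // n_places
    partitions = [data[i * chunk:(i + 1) * chunk] for i in range(n_places)]

    with Pool(processes=n_places) as pool:
        partial_sums = pool.map(invoke, partitions)   # concurrent invocations

    print("partial sums:", partial_sums, "total:", sum(partial_sums))
```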
NASA Technical Reports Server (NTRS)
Meixner, Margaret; Panuzzo, P.; Roman-Duval, J.; Engelbracht, C.; Babler, B.; Seale, J.; Hony, S.; Montiel, E.; Sauvage, M.; Gordon, K.;
2013-01-01
We present an overview of the HERschel Inventory of The Agents of Galaxy Evolution (HERITAGE) in the Magellanic Clouds project, which is a Herschel Space Observatory open time key program. We mapped the Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC) at 100, 160, 250, 350, and 500 micron with the Spectral and Photometric Imaging Receiver (SPIRE) and Photodetector Array Camera and Spectrometer (PACS) instruments on board Herschel using the SPIRE/PACS parallel mode. The overriding science goal of HERITAGE is to study the life cycle of matter as traced by dust in the LMC and SMC. The far-infrared and submillimeter emission is an effective tracer of the interstellar medium (ISM) dust, the most deeply embedded young stellar objects (YSOs), and the dust ejected by the most massive stars. We describe in detail the data processing, particularly for the PACS data, which required some custom steps because of the large angular extent of a single observational unit and the overall large amount of data to be processed as an ensemble. We report total global fluxes for the LMC and SMC and demonstrate their agreement with measurements by prior missions. The HERITAGE maps of the LMC and SMC are dominated by the ISM dust emission and bear most resemblance to the tracers of ISM gas rather than the stellar content of the galaxies. We describe the point source extraction processing and the criteria used to establish a catalog for each waveband for the HERITAGE program. The 250 micron band is the most sensitive, and the source catalogs for this band have approx. 25,000 objects for the LMC and approx. 5500 objects for the SMC. These data enable studies of ISM dust properties, submillimeter excess dust emission, dust-to-gas ratio, Class 0 YSO candidates, dusty massive evolved stars, supernova remnants (including SN1987A), H II regions, and dust evolution in the LMC and SMC. All images and catalogs are delivered to the Herschel Science Center as part of the community support aspects of the project. These HERITAGE images and catalogs provide an excellent basis for future research and follow up with other facilities.
Food supplies of stream-dwelling salmonids
Wipfli, Mark S.
2009-01-01
Much is known about the importance of the physical characteristics of salmonid habitat in Alaska and the Pacific Northwest, with far less known about the food sources and trophic processes within these habitats, and the role they play in regulating salmonid productivity. Freshwater food webs supporting salmonids in Alaska rely heavily on nutrient, detritus and prey subsidies from both marine and terrestrial ecosystems. Adult salmon provide a massive input of marine biomass to riverine ecosystems each year when they spawn, die, and decompose, and are a critical food source for young salmon in late summer and fall; riparian forests provide terrestrial invertebrates to streams, which at times comprise over half of the food ingested by stream-resident salmonids; and up-slope, fishless headwater streams are a year-round source of invertebrates and detritus for fish downstream. The quantity of these food resources varies widely depending on source, season, and spatial position within a watershed. Terrestrial invertebrate inputs from riparian habitats are generally the most abundant food source in summer. Juvenile salmonids in streams consume roughly equal amounts of freshwater and terrestrially derived invertebrates during most of the growing season, but ingest substantial amounts of marine resources (salmon eggs and decomposing salmon tissue) when these food items are present. Quantity, quality, and timing of food resources all appear to be important driving forces in aquatic food web dynamics, community nutrition, and salmonid growth and survival in riverine ecosystems.
NASA Astrophysics Data System (ADS)
Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer
2018-01-01
The volume of data being collected, analyzed, and stored has exploded in recent years, particularly in relation to activity on the cloud. While large-scale data processing, analysis, and storage platforms such as cloud computing were already in use, their adoption continues to grow. Today, the major challenge is how to monitor and control these massive amounts of data and perform analysis in real time at scale. Traditional methods and model systems are unable to cope with these quantities of data in real time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms with Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during fine-grained monitoring and generates the most effective and cost-saving fault repair through three control steps: (I) data collection; (II) analysis engine; and (III) decision engine. We found that running this novel methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.
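A skeleton of the three-stage pipeline (data collection, analysis engine, decision engine) could look like the following PySpark DStream sketch. The metric format, the threshold-based stand-in for the trained classifier, and the repair action are all invented placeholders; this is not the paper's implementation.

```python
# Skeleton sketch only (not the paper's code): the three stages named in the
# abstract -- data collection, analysis engine, decision engine -- expressed
# as a PySpark DStream pipeline. Metric format, thresholds and the repair
# action are hypothetical placeholders for the learned classifier.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def parse_metric(line):
    # (I) data collection: each line is "vm_id,cpu_wait,disk_latency_ms"
    vm_id, cpu_wait, disk_latency = line.split(",")
    return vm_id, float(cpu_wait), float(disk_latency)

def diagnose(record):
    # (II) analysis engine: simple threshold stand-in for the trained classifier.
    vm_id, cpu_wait, disk_latency = record
    faulty = cpu_wait > 0.8 or disk_latency > 500.0
    return vm_id, faulty

def act_on_batch(time, rdd):
    # (III) decision engine: trigger a (hypothetical) repair for flagged VMs.
    for vm_id, faulty in rdd.collect():
        if faulty:
            print(f"{time}: restarting VM {vm_id}")

if __name__ == "__main__":
    sc = SparkContext(appName="vm-startup-monitor")
    ssc = StreamingContext(sc, batchDuration=5)          # 5-second micro-batches
    metrics = ssc.socketTextStream("localhost", 9999)    # metric stream source
    metrics.map(parse_metric).map(diagnose).foreachRDD(act_on_batch)
    ssc.start()
    ssc.awaitTermination()
```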
Black hole growth in the early Universe is self-regulated and largely hidden from view.
Treister, Ezequiel; Schawinski, Kevin; Volonteri, Marta; Natarajan, Priyamvada; Gawiser, Eric
2011-06-15
The formation of the first massive objects in the infant Universe remains impossible to observe directly and yet it sets the stage for the subsequent evolution of galaxies. Although some black holes with masses more than 10^9 times that of the Sun have been detected in luminous quasars less than one billion years after the Big Bang, these individual extreme objects have limited utility in constraining the channels of formation of the earliest black holes; this is because the initial conditions of black hole seed properties are quickly erased during the growth process. Here we report a measurement of the amount of black hole growth in galaxies at redshift z = 6-8 (0.95-0.7 billion years after the Big Bang), based on optimally stacked, archival X-ray observations. Our results imply that black holes grow in tandem with their host galaxies throughout cosmic history, starting from the earliest times. We find that the most copiously accreting black holes at these epochs are buried in significant amounts of gas and dust that absorb most radiation except for the highest-energy X-rays. This suggests that black holes grew significantly more during these early bursts than was previously thought, but because of the obscuration of their ultraviolet emission they did not contribute to the re-ionization of the Universe.
"Big data" and the electronic health record.
Ross, M K; Wei, W; Ohno-Machado, L
2014-08-15
Implementation of Electronic Health Record (EHR) systems continues to expand. The massive number of patient encounters results in high amounts of stored data. Transforming clinical data into knowledge to improve patient care has been the goal of biomedical informatics professionals for many decades, and this work is now increasingly recognized outside our field. In reviewing the literature for the past three years, we focus on "big data" in the context of EHR systems and we report on some examples of how secondary use of data has been put into practice. We searched the PubMed database for articles from January 1, 2011 to November 1, 2013. We initiated the search with keywords related to "big data" and EHR. We identified relevant articles, and additional keywords from the retrieved articles were added. Based on the new keywords, more articles were retrieved, and we manually narrowed down the set using predefined inclusion and exclusion criteria. Our final review includes articles categorized into the themes of data mining (pharmacovigilance, phenotyping, natural language processing), data application and integration (clinical decision support, personal monitoring, social media), and privacy and security. The increasing adoption of EHR systems worldwide makes it possible to capture large amounts of clinical data. There is an increasing number of articles addressing the theme of "big data", and the concepts associated with these articles vary. The next step is to transform healthcare big data into actionable knowledge.
Pregalactic black holes - A new constraint
NASA Technical Reports Server (NTRS)
Barrow, J. D.; Silk, J.
1979-01-01
Pregalactic black holes accrete matter in the early universe and produce copious amounts of X radiation. By using observations of the background radiation in the X and gamma wavebands, a strong constraint is imposed upon their possible abundance. If pregalactic black holes are actually present, several outstanding problems of cosmogony can be resolved with typical pregalactic black hole masses of 100 solar masses. Significantly more massive holes cannot constitute an appreciable mass fraction of the universe and are limited by a specific mass-density bound.
Pericardial Effusion with Cardiac Tamponade as a form of presentation of Primary Hypothyroidism.
Agarwal, Arun; Chowdhury, Nikhil; Mathur, Ankit; Sharma, Samiksha; Agarwal, Aakanksha
2016-12-01
Hypothyroidism is a rare cause of pericardial effusion (PE). Pericardial effusion secondary to hypothyroidism remains a diagnostic challenge for clinicians because of the inconsistency between symptoms and the amount of pericardial effusion. We report an atypical case that presented with ascites and was diagnosed with cardiac tamponade secondary to primary hypothyroidism. Besides repeated pericardiocentesis, she eventually required surgical management and optimization of medical therapy to manage the massive pericardial effusion. © Journal of the Association of Physicians of India 2011.
Radioactivities and gamma-rays from supernovae
NASA Technical Reports Server (NTRS)
Woosley, S. E.
1991-01-01
An account is given of the implications of several calculations relevant to the estimation of gamma-ray signals from various explosive astronomical phenomena. After discussing efforts to constrain the amounts of Ni-57 and Ti-44 produced in SN 1987A, attention is given to the production of Al-27 in massive stars and SNs. A 'delayed detonation' model of type Ia SNs is proposed, and the gamma-ray signal which may be expected when a bare white dwarf collapses directly into a neutron star is discussed.
BREAD LOAF ROADLESS AREA, VERMONT.
Slack, John F.; Bitar, Richard F.
1984-01-01
On the basis of a mineral-resource survey, the Bread Loaf Roadless Area, Vermont, is considered to have probable resource potential for the occurrence of volcanogenic massive sulfide deposits of copper, zinc, and lead, particularly in the north and northeastern section of the roadless area. Nonmetallic commodities include minor deposits of sand and gravel, and abundant rock suitable for crushing. However, large amounts of these materials are available in more accessible locations outside the roadless area. A possibility exists that oil or natural gas resources may be present at great depth.
2011-09-15
Networks (VPNs), TLS protects massive amounts of private information, and protecting this data from Man-in-the-Middle (MitM) attacks is imperative to ... keeping the information secure. This thesis illustrates how an attacker can successfully perform a MitM attack against a TLS connection without alerting ... mechanism a user has against a MitM. The goal for this research is to determine if a time threshold exists that can indicate the presence of a MitM in this
Privacy Challenges of Genomic Big Data.
Shen, Hong; Ma, Jian
2017-01-01
With the rapid advancement of high-throughput DNA sequencing technologies, genomics has become a big data discipline where large-scale genetic information of human individuals can be obtained efficiently at low cost. However, such a massive amount of personal genomic data creates tremendous challenges for privacy, especially given the emergence of the direct-to-consumer (DTC) industry that provides genetic testing services. Here we review the recent development in genomic big data and its implications on privacy. We also discuss the current dilemmas and future challenges of genomic privacy.
B vitamins in the nervous system.
Bender, D A
1984-01-01
The coenzyme functions of the B vitamins in intermediary metabolism are well established; nevertheless, for none of them is it possible to determine precisely the connection between the biochemical lesions associated with deficiency and the neurological consequences. Although there is convincing evidence of a neurospecific role for thiamin and other B vitamins, in no case has this role been adequately described. Similarly, the neurochemical sequelae of intoxication by massive amounts of vitamins (so-called mega-vitamin therapy or orthomolecular medicine) remain largely unexplained.
1982-01-01
... massive propaganda war, based on lies. Patriotic Lebanese attack Israeli forces. Israelis increase repression and terror against Lebanese. An analysis of the amount of space devoted, by topic, to articles about Israel and Lebanon includes: Israeli repressions/terror, 21% (4); United States aid/interactions, 4% (of a 100% total; percentages represent the share of space in Red Star devoted to Israel).
Ogura, Takayuki; Nakamura, Yoshihiko; Nakano, Minoru; Izawa, Yoshimitsu; Nakamura, Mitsunobu; Fujizuka, Kenji; Suzukawa, Masayuki; Lefor, Alan T
2014-05-01
The ability to easily predict the need for massive transfusion may improve the process of care, allowing early mobilization of resources. There are currently no clear criteria to activate massive transfusion in severely injured trauma patients. The aims of this study were to create a scoring system to predict the need for massive transfusion and then to validate this scoring system. We reviewed the records of 119 severely injured trauma patients and identified massive transfusion predictors using statistical methods. Each predictor was converted into a simple score based on the odds ratio in a multivariate logistic regression analysis. The Traumatic Bleeding Severity Score (TBSS) was defined as the sum of the component scores. The predictive value of the TBSS for massive transfusion was then validated, using data from 113 severely injured trauma patients. Receiver operating characteristic curve analysis was performed to compare the results of TBSS with the Trauma-Associated Severe Hemorrhage score and the Assessment of Blood Consumption score. In the development phase, five predictors of massive transfusion were identified, including age, systolic blood pressure, the Focused Assessment with Sonography for Trauma scan, severity of pelvic fracture, and lactate level. The maximum TBSS is 57 points. In the validation study, the average TBSS in patients who received massive transfusion was significantly greater (24.2 [6.7]) than the score of patients who did not (6.2 [4.7]) (p < 0.01). The area under the receiver operating characteristic curve, sensitivity, and specificity for a TBSS greater than 15 points was 0.985 (significantly higher than the other scoring systems evaluated at 0.892 and 0.813, respectively), 97.4%, and 96.2%, respectively. The TBSS is simple to calculate using an available iOS application and is accurate in predicting the need for massive transfusion. Additional multicenter studies are needed to further validate this scoring system and further assess its utility. Prognostic study, level III.
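Because the abstract reports the 57-point maximum and the >15-point activation threshold but not the individual point values assigned to the five predictors, the sketch below uses invented placeholder weights purely to show how an additive TBSS-style score could be computed and compared against the activation threshold.

```python
# Sketch of how a TBSS-style additive score could be computed and compared to
# the reported activation threshold (>15 of a 57-point maximum). The abstract
# names the five predictors but not their individual point values, so the
# component weights below are invented placeholders, not the published scores.
def traumatic_bleeding_severity_score(age_years, systolic_bp_mmHg,
                                      fast_positive_regions,
                                      unstable_pelvic_fracture,
                                      lactate_mmol_l):
    score = 0
    # Hypothetical point assignments for illustration only.
    if age_years >= 60:
        score += 5
    if systolic_bp_mmHg < 90:
        score += 10
    score += 3 * fast_positive_regions          # positive FAST regions (0-6)
    if unstable_pelvic_fracture:
        score += 10
    if lactate_mmol_l >= 4.0:
        score += 8
    return score

patient_score = traumatic_bleeding_severity_score(
    age_years=45, systolic_bp_mmHg=82, fast_positive_regions=2,
    unstable_pelvic_fracture=True, lactate_mmol_l=5.1)
print("TBSS:", patient_score, "-> activate massive transfusion:", patient_score > 15)
```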
NASA Astrophysics Data System (ADS)
Yano, Taihei; JASMINE-WG
2018-04-01
Small-JASMINE (hereafter SJ), an infrared astrometric satellite, will measure the positions and proper motions of stars located around the Galactic center by operating at near-infrared wavelengths. SJ will clarify the formation process of the supermassive black hole (hereafter SMBH) at the Galactic center. In particular, SJ will determine whether the SMBH was formed by a sequential merging of multiple black holes. The clarification of this formation process of the SMBH will contribute to a better understanding of the merging process of satellite galaxies into the Galaxy, which is suggested by the standard galaxy formation scenario. A numerical simulation (Tanikawa and Umemura, 2014) suggests that if the SMBH was formed by the merging process, then the dynamical friction caused by the black holes has influenced the phase space distribution of stars. The phase space distribution measured by SJ will make it possible to determine the occurrence of the merging process.
NASA Astrophysics Data System (ADS)
Csengeri, T.; Leurini, S.; Wyrowski, F.; Urquhart, J. S.; Menten, K. M.; Walmsley, M.; Bontemps, S.; Wienen, M.; Beuther, H.; Motte, F.; Nguyen-Luong, Q.; Schilke, P.; Schuller, F.; Zavagno, A.; Sanna, C.
2016-02-01
Context. The processes leading to the birth of high-mass stars are poorly understood. The key first step to reveal their formation processes is characterising the clumps and cores from which they form. Aims: We define a representative sample of massive clumps in different evolutionary stages selected from the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL), from which we aim to establish a census of molecular tracers of their evolution. As a first step, we study the shock tracer, SiO, mainly associated with shocks from jets probing accretion processes. In low-mass young stellar objects (YSOs), outflow and jet activity decreases with time during the star formation processes. Recently, a similar scenario was suggested for massive clumps based on SiO observations. Here we analyse observations of the SiO (2-1) and (5-4) lines in a statistically significant sample to constrain the change of SiO abundance and the excitation conditions as a function of evolutionary stage of massive star-forming clumps. Methods: We performed an unbiased spectral line survey covering the 3-mm atmospheric window between 84-117 GHz with the IRAM 30 m telescope of a sample of 430 sources of the ATLASGAL survey, covering various evolutionary stages of massive clumps. A smaller sample of 128 clumps has been observed in the SiO (5-4) transition with the APEX telescope to complement the (2-1) line and probe the excitation conditions of the emitting gas. We derived detection rates to assess the star formation activity of the sample, and we estimated the column density and abundance using both an LTE approximation and non-LTE calculations for a smaller subsample, where both transitions have been observed. Results: We characterise the physical properties of the selected sources, which greatly supersedes the largest samples studied so far, and show that they are representative of different evolutionary stages. We report a high detection rate of >75% of the SiO (2-1) line and a >90% detection rate from the dedicated follow-ups in the (5-4) transition. Up to 25% of the infrared-quiet clumps exhibit high-velocity line wings, suggesting that molecular tracers are more efficient tools to determine the level of star formation activity than infrared colour criteria. We also find infrared-quiet clumps that exhibit only a low-velocity component (FWHM ~ 5-6 km s-1) SiO emission in the (2-1) line. In the current picture, where this is attributed to low-velocity shocks from cloud-cloud collisions, this can be used to pinpoint the youngest, thus, likely prestellar massive structures. Using the optically thin isotopologue (29SiO), we estimate that the (2-1) line is optically thin towards most of the sample. Furthermore, based on the line ratio of the (5-4) to the (2-1) line, our study reveals a trend of changing excitation conditions that lead to brighter emission in the (5-4) line towards more evolved sources. Our models show that a proper treatment of non-LTE effects and beam dilution is necessary to constrain trends in the SiO column density and abundance. Conclusions: We conclude that the SiO (2-1) line with broad line profiles and high detection rates is a powerful probe of star formation activity in the deeply embedded phase of the evolution of massive clumps. The ubiquitous detection of SiO in all evolutionary stages suggests a continuous star formation process in massive clumps. 
Our analysis delivers a more robust estimate of SiO column density and abundance than previous studies and questions the decrease of jet activity in massive clumps as a function of age. The observed increase of excitation conditions towards the more evolved clumps suggests a higher pressure in the shocked gas towards more evolved or more massive clumps in our sample. Full Tables 4, 6, 7 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/586/A149
Wide-Field Infrared Survey Explorer Observations of the Evolution of Massive Star-Forming Regions
NASA Technical Reports Server (NTRS)
Koenig, X. P.; Leisawitz, D. T.; Benford, D. J.; Rebull, L. M.; Padgett, D. L.; Assef, R. J.
2011-01-01
We present the results of a mid-infrared survey of 11 outer Galaxy massive star-forming regions and 3 open clusters with data from the Wide-field Infrared Survey Explorer (WISE). Using a newly developed photometric scheme to identify young stellar objects and exclude extragalactic contamination, we have studied the distribution of young stars within each region. These data tend to support the hypothesis that later generations may be triggered by the interaction of winds and radiation from the first burst of massive star formation with the molecular cloud material left over from that earlier generation of stars. We dub this process the "fireworks hypothesis" since star formation by this mechanism would proceed rapidly and resemble a burst of fireworks. We have also analyzed small cutout WISE images of the structures around the edges of these massive star-forming regions. We observe large (1-3 pc size) pillar and trunk-like structures of diffuse emission nebulosity tracing excited polycyclic aromatic hydrocarbon molecules and small dust grains at the perimeter of the massive star-forming regions. These structures contain small clusters of emerging Class I and Class II sources, but some are forming only a single to a few new stars.
Wide-Field Infrared Survey Explorer Observations of the Evolution of Massive Star-Forming Regions
NASA Technical Reports Server (NTRS)
Koenig, X. P.; Leisawitz, D. T.; Benford, D. J.; Rebull, L. M.; Padgett, D. L.; Assef, R. J.
2012-01-01
We present the results of a mid-infrared survey of 11 outer Galaxy massive star-forming regions and 3 open clusters with data from the Wide-field Infrared Survey Explorer (WISE). Using a newly developed photometric scheme to identify young stellar objects and exclude extragalactic contamination, we have studied the distribution of young stars within each region. These data tend to support the hypothesis that later generations may be triggered by the interaction of winds and radiation from the first burst of massive star formation with the molecular cloud material left over from that earlier generation of stars. We dub this process the "fireworks hypothesis" since star formation by this mechanism would proceed rapidly and resemble a burst of fireworks. We have also analyzed small cutout WISE images of the structures around the edges of these massive star-forming regions. We observe large (1-3 pc size) pillar and trunk-like structures of diffuse emission nebulosity tracing excited polycyclic aromatic hydrocarbon molecules and small dust grains at the perimeter of the massive star-forming regions. These structures contain small clusters of emerging Class I and Class II sources, but some are forming only a single to a few new stars.
High-velocity runaway stars from three-body encounters
NASA Astrophysics Data System (ADS)
Gvaramadze, V. V.; Gualandris, A.; Portegies Zwart, S.
2010-01-01
We performed numerical simulations of dynamical encounters between hard, massive binaries and a very massive star (VMS; formed through runaway mergers of ordinary stars in the dense core of a young massive star cluster) to explore the hypothesis that this dynamical process could be responsible for the origin of high-velocity (≥ 200 - 400 km s-1) early or late B-type stars. We estimated the typical velocities produced in encounters between very tight massive binaries and VMSs (of mass ≥ 200 M⊙) and found that about 3 - 4% of all encounters produce velocities ≥ 400 km s-1, while in about 2% of encounters the escapers attain velocities exceeding the Milky Way's escape velocity. We therefore argue that the origin of high-velocity (≥ 200 - 400 km s-1) runaway stars and at least some so-called hypervelocity stars could be associated with dynamical encounters between the tightest massive binaries and VMSs formed in the cores of star clusters. We also simulated dynamical encounters between tight massive binaries and single ordinary 50 - 100 M⊙ stars. We found that from 1 to ≃ 4% of these encounters can produce runaway stars with velocities of ≥ 300 - 400 km s-1 (typical of the bound population of high-velocity halo B-type stars) and occasionally (in less than 1% of encounters) produce hypervelocity (≥ 700 km s-1) late B-type escapers.
NASA Astrophysics Data System (ADS)
Matsumoto, R.; Snyder, G. T.; Hiruta, A.; Kakizaki, Y.; Huang, C. Y.; Shen, C. C.
2017-12-01
The geological and geophysical exploration of gas hydrate in the Sea of Japan has revealed that hydrates occur as thick massive deposits within gas chimneys which often give rise to pingo-like hydrate mounds on the seafloor. We examine one case in which LWD has demonstrated anomalous profiles, including both very low natural gamma ray (<10 API) and high acoustic velocities (2.5 to 3.5 km/s), extending down to 120 mbsf, the base of gas hydrate stability (BGHS) [1]. Both conventional and pressure coring have confirmed thick, massive deposits of pure-gas hydrates. Hydrates in the shallow subsurface (<20 mbsf) are characterized by high H2S concentrations corresponding to AOM-induced production of HS-. The deeper hydrates generally have negligible amounts of H2S, with occasional exceptions in which H2S is moderately high. These observations lead us to conclude that both the re-equilibration and growth of hydrates in high-CH4 and low to zero H2S conditions have continued during burial, and that this ongoing growth is an essential process involved in the development of massive hydrates in the Sea of Japan. Regardless of depth, the Japan Sea gas hydrates are closely associated with 13C-depleted, methane-derived authigenic carbonates (MDACs). These MDACs are considered to have been formed at near-SMT depths as a response to increased alkalinity caused by AOM and, as such, MDACs are assumed to represent the approximate paleo-seafloor at times of enhanced methane flux and intensive accumulation of gas hydrate in the shallow subsurface. U-Th ages of MDACs collected from various depths in a mound-chimney system in central Joetsu Spur have revealed that the paleo-seafloor of 300 ka is presently situated at 30 to 55 mbsf within the gas chimney, in contrast to off-mound sites where it is situated at 100 mbsf. This suggests that at 300 ka the mound stood as a "hydrate pingo" 70 m high relative to the surrounding sea floor. At this time, the BGHS shoaled upwards by 10 m due to eustatic sea level fall, resulting in the dissociation of gas hydrates right above the BGHS and an enhancement of methane flux through the gas chimney. This study was conducted under commission from AIST as a part of the methane hydrate research project funded by METI. Reference [1] Matsumoto et al. (2017), Fire in the Ice, 17, 1-6.
Proxy-equation paradigm: A strategy for massively parallel asynchronous computations
NASA Astrophysics Data System (ADS)
Mittal, Ankita; Girimaji, Sharath
2017-09-01
Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order, and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
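A minimal serial sketch of the proof-of-concept problem is given below: a 1-D advection-diffusion equation advanced with an explicit scheme on a domain split into two notional PEs whose halo values are refreshed only every other step, mimicking relaxed synchronization. The a priori proxy-equation modification applied at PE boundaries in the paper is not reproduced; the sketch only illustrates where asynchrony enters the update, and all parameter values are arbitrary.

```python
# Serial sketch of the 1-D advection-diffusion test problem, with the domain
# split into two notional "PEs" whose halo values may be one step stale.
# This illustrates where asynchrony errors arise; it does not implement the
# paper's proxy-equation correction.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
c, nu = 1.0, 0.005                           # advection speed, diffusivity (illustrative)
dt = 0.2 * min(dx / c, dx * dx / (2.0 * nu)) # conservative explicit time step
x = np.linspace(0.0, L, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial Gaussian pulse, periodic domain

half = nx // 2                               # split point between the two "PEs"
left_halo = u[half - 1]                      # PE-2's copy of its left neighbour
right_halo = u[half]                         # PE-1's copy of its right neighbour

def step(u, left_halo, right_halo):
    """One explicit FTCS update; the interior split uses (possibly stale) halos."""
    up = np.roll(u, -1)                      # u[i+1] with periodic wrap
    um = np.roll(u, 1)                       # u[i-1] with periodic wrap
    um[half] = left_halo                     # stale value of u[half-1]
    up[half - 1] = right_halo                # stale value of u[half]
    return (u - c * dt * (up - um) / (2.0 * dx)
              + nu * dt * (up - 2.0 * u + um) / dx ** 2)

for n in range(500):
    u_new = step(u, left_halo, right_halo)
    if n % 2 == 0:                           # halos exchanged only every other step
        left_halo, right_halo = u_new[half - 1], u_new[half]
    u = u_new

print("total mass (approximately conserved):", u.sum() * dx)
```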
Very Massive Stars in the local Universe
NASA Astrophysics Data System (ADS)
Vink, Jorick S.; Heger, Alexander; Krumholz, Mark R.; Puls, Joachim; Banerjee, S.; Castro, N.; Chen, K.-J.; Chenè, A.-N.; Crowther, P. A.; Daminelli, A.; Gräfener, G.; Groh, J. H.; Hamann, W.-R.; Heap, S.; Herrero, A.; Kaper, L.; Najarro, F.; Oskinova, L. M.; Roman-Lopes, A.; Rosen, A.; Sander, A.; Shirazi, M.; Sugawara, Y.; Tramper, F.; Vanbeveren, D.; Voss, R.; Wofford, A.; Zhang, Y.
2015-03-01
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
Massive binary stars as a probe of massive star formation
NASA Astrophysics Data System (ADS)
Kiminki, Daniel C.
2010-10-01
Massive stars are among the largest and most influential objects we know of on a sub-galactic scale. Binary systems, composed of at least one of these stars, may be responsible for several types of phenomena, including type Ib/c supernovae, short and long gamma ray bursts, high-velocity runaway O and B-type stars, and the density of the parent star clusters. Efforts to understand these stars have met with limited success, especially in the area of their formation. Current formation theories rely on the accumulated statistics of massive binary systems that are limited because of their sample size or the inhomogeneous environments from which the statistics are collected. The purpose of this work is to provide a higher-level analysis of close massive binary characteristics using the radial velocity information of 113 massive stars (B3 and earlier) and binary orbital properties for the 19 known close massive binaries in the Cygnus OB2 Association. This work provides an analysis using the largest amount of massive star and binary information ever compiled for an O-star rich cluster like Cygnus OB2, and complements other O-star binary studies such as NGC 6231, NGC 2244, and NGC 6611. I first report the discovery of 73 new O or B-type stars and 13 new massive binaries by this survey. This work involved the use of 75 successful nights of spectroscopic observation at the Wyoming Infrared Observatory in addition to observations obtained using the Hydra multi-object spectrograph at WIYN, the HIRES echelle spectrograph at Keck, and the Hamilton spectrograph at Lick. I use these data to estimate the spectrophotometric distance to the cluster and to measure the mean systemic velocity and the one-sided velocity dispersion of the cluster. Finally, I compare these data to a series of Monte Carlo models, the results of which indicate that the binary fraction of the cluster is 57 +/- 5% and that the indices for the power law distributions, describing the log of the periods, mass-ratios, and eccentricities, are -0.2 +/- 0.3, 0.3 +/- 0.3, and -0.8 +/- 0.3 respectively (or not consistent with a simple power law distribution). The observed distributions indicate a preference for short period systems with nearly circular orbits and companions that are not likely drawn from a standard initial mass function, as would be expected from random pairing. An interesting and unexpected result is that the period distribution is inconsistent with a standard power-law slope, stemming mainly from an excess of periods between 3 and 5 days and an absence of periods between 7 and 14 days. One possible explanation of this phenomenon is that the binary systems with periods of 7-14 days are migrating to periods of 3-5 days. In addition, the binary distribution here is not consistent with previous suggestions in the literature that 45% of OB binaries are members of twin systems (mass ratio near 1).
Heaviest Stellar Black Hole Discovered in Nearby Galaxy
NASA Astrophysics Data System (ADS)
2007-10-01
Astronomers have located an exceptionally massive black hole in orbit around a huge companion star. This result has intriguing implications for the evolution and ultimate fate of massive stars. The black hole is part of a binary system in M33, a nearby galaxy about 3 million light years from Earth. By combining data from NASA's Chandra X-ray Observatory and the Gemini telescope on Mauna Kea, Hawaii, the mass of the black hole, known as M33 X-7, was determined to be 15.7 times that of the Sun. This makes M33 X-7 the most massive stellar black hole known. A stellar black hole is formed from the collapse of the core of a massive star at the end of its life. [Chandra X-ray image of M33 X-7] "This discovery raises all sorts of questions about how such a big black hole could have been formed," said Jerome Orosz of San Diego State University, lead author of the paper appearing in the October 18th issue of the journal Nature. M33 X-7 orbits a companion star that eclipses the black hole every three and a half days. The companion star also has an unusually large mass, 70 times that of the Sun. This makes it the most massive companion star in a binary system containing a black hole. [Hubble optical image of M33 X-7] "This is a huge star that is partnered with a huge black hole," said coauthor Jeffrey McClintock of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. "Eventually, the companion will also go supernova and then we'll have a pair of black holes." The properties of the M33 X-7 binary system - a massive black hole in a close orbit around a massive companion star - are difficult to explain using conventional models for the evolution of massive stars. The parent star for the black hole must have had a mass greater than the existing companion in order to have formed a black hole before the companion star. [Gemini optical image of M33 X-7] Such a massive star would have had a radius larger than the present separation between the stars, so the stars must have been brought closer while sharing a common outer atmosphere. This process typically results in a large amount of mass being lost from the system, so much that the parent star should not have been able to form a 15.7 solar-mass black hole. The black hole's progenitor must have shed gas at a rate about 10 times less than predicted by models before it exploded. If even more massive stars also lose very little material, it could explain the incredibly luminous supernova seen recently as SN 2006gy. The progenitor for SN 2006gy is thought to have been about 150 times the mass of the Sun when it exploded. [Artist's illustration of M33 X-7] "Massive stars can be much less extravagant than people think by hanging onto a lot more of their mass toward the end of their lives," said Orosz. "This can have a big effect on the black holes that these stellar time-bombs make." Coauthor Wolfgang Pietsch was also the lead author of an article in the Astrophysical Journal that used Chandra observations to report that M33 X-7 is the first black hole in a binary system observed to undergo eclipses. The eclipsing nature enables unusually accurate estimates for the mass of the black hole and its companion. "Because it's eclipsing and because it has such extreme properties, this black hole is an incredible test-bed for studying astrophysics," said Pietsch.
The length of the eclipse seen by Chandra gives information about the size of the companion. The scale of the companion's motion, as inferred from the Gemini observations, gives information about the mass of the black hole and its companion. Other observed properties of the binary were used to constrain the mass estimates. NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for the agency's Science Mission Directorate. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass. Gemini is an international partnership managed by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation.
Probing Massive Black Hole Populations and Their Environments with LISA
NASA Astrophysics Data System (ADS)
Katz, Michael; Larson, Shane
2018-01-01
With the adoption of the LISA Mission Proposal by the European Space Agency in response to its call for L3 mission concepts, gravitational wave measurements from space are on the horizon. With data from the Illustris large-scale cosmological simulation, we provide analysis of LISA detection rates accompanied by characterization of the merging Massive Black Holes (MBH) and their host galaxies. MBHs of total mass $\sim 10^6-10^9\,M_\odot$ are the main focus of this study. Using a precise treatment of the dynamical friction evolutionary process prior to gravitational wave emission, we evolve MBH simulation particle mergers from $\sim$kpc scales until coalescence to achieve a merger distribution. Using the statistical basis of the Illustris output, we Monte Carlo synthesize many realizations of the merging massive black hole population across space and time. We use those realizations to build mock LISA detection catalogs to understand the impact of LISA mission configurations on our ability to probe massive black hole merger populations and their environments throughout the visible Universe.
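A toy illustration of the kind of Monte Carlo realization described above (Poisson-sampling a number of merger events and drawing masses and redshifts from assumed distributions) is sketched below. The event rate, mass-function slope, and redshift distribution are placeholder assumptions, not values derived from Illustris or used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Monte Carlo realization of an observed MBH-merger catalog. The expected
# number of events per year and the mass/redshift distributions are
# placeholders, not values from Illustris or the paper.
def realize_catalog(rate_per_yr=5.0, t_obs_yr=4.0,
                    m_lo=1e6, m_hi=1e9, alpha=-1.5, z_max=10.0):
    n_events = rng.poisson(rate_per_yr * t_obs_yr)
    # power-law total masses p(M) ~ M^alpha via inverse-CDF sampling
    u = rng.random(n_events)
    a1 = alpha + 1.0
    masses = (m_lo**a1 + u * (m_hi**a1 - m_lo**a1)) ** (1.0 / a1)
    redshifts = rng.uniform(0.0, z_max, n_events)     # placeholder z distribution
    return masses, redshifts

for i in range(3):                                     # three independent realizations
    m, z = realize_catalog()
    print(f"realization {i}: {len(m)} mergers, median mass {np.median(m):.2e} Msun")
```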
Gravitational Wave Signals from the First Massive Black Hole Seeds
NASA Astrophysics Data System (ADS)
Hartwig, Tilman; Agarwal, Bhaskar; Regan, John A.
2018-05-01
Recent numerical simulations reveal that the isothermal collapse of pristine gas in atomic cooling haloes may result in stellar binaries of supermassive stars with M* ≳ 10^4 M⊙. For the first time, we compute the in-situ merger rate for such massive black hole remnants by combining their abundance and multiplicity estimates. For black holes with initial masses in the range 10^4-10^6 M⊙ merging at redshifts z ≳ 15, our optimistic model predicts that LISA should be able to detect 0.6 mergers per year. This rate of detection can be attributed, without confusion, to the in-situ mergers of seeds from the collapse of very massive stars. Equally, in the case where LISA observes no mergers from heavy seeds at z ≳ 15 we can constrain the combined number density, multiplicity, and coalescence times of these high-redshift systems. This letter proposes gravitational wave signatures as a means to constrain theoretical models and processes that govern the abundance of massive black hole seeds in the early Universe.
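Taking the quoted optimistic rate of 0.6 detectable in-situ mergers per year at face value, a simple Poisson estimate gives the chance of catching at least one such event; the nominal mission duration assumed below is illustrative.

```python
import math

rate = 0.6          # predicted detections per year (from the abstract)
t_mission = 4.0     # assumed nominal LISA mission duration in years
expected = rate * t_mission
p_at_least_one = 1.0 - math.exp(-expected)   # Poisson probability of >= 1 event
print(f"expected events: {expected:.1f}, P(>=1 detection) = {p_at_least_one:.2f}")
```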
Massive Star Goes Out With a Whimper Instead of a Bang (Artist's Concept)
2017-05-25
Every second a star somewhere out in the universe explodes as a supernova. But some extremely massive stars go out with a whimper instead of a bang. When they do, they can collapse under the crushing tug of gravity and vanish out of sight, only to leave behind a black hole. The doomed star N6946-BH1 was 25 times as massive as our sun. It began to brighten weakly in 2009. But, by 2015, it appeared to have winked out of existence. By a careful process of elimination, based on observations by the Large Binocular Telescope and NASA's Hubble and Spitzer space telescopes, researchers eventually concluded that the star must have become a black hole. This may be the fate for extremely massive stars in the universe. This illustration shows the final stages in the life of a supermassive star that fails to explode as a supernova, but instead implodes to form a black hole. https://photojournal.jpl.nasa.gov/catalog/PIA21466
NASA Astrophysics Data System (ADS)
Zhao, Feng; Frietman, Edward E. E.; Han, Zhong; Chen, Ray T.
1999-04-01
A characteristic feature of a conventional von Neumann computer is that computing power is delivered by a single processing unit. Although increasing the clock frequency improves the performance of the computer, the switching speed of the semiconductor devices and the finite speed at which electrical signals propagate along the bus set the boundaries. Architectures containing large numbers of nodes can solve this performance dilemma, although the main obstacles in designing such systems are caused by the difficulty of finding solutions that guarantee efficient communications among the nodes. Exchanging data becomes a real bottleneck should all nodes be connected by a shared resource. Only optics, due to its inherent parallelism, could solve that bottleneck. Here, we explore a multi-faceted free space image distributor to be used in optical interconnects in massively parallel processing. In this paper, physical and optical models of the image distributor are developed, from the diffraction theory of light waves to optical simulations. The general features and the performance of the image distributor are also described, and the new structure of the image distributor and the simulations for it are discussed. From the digital simulation and experiment, it is found that the multi-faceted free space image distributing technique is quite suitable for free space optical interconnection in massively parallel processing and that the new structure of the multi-faceted free space image distributor would perform better.
Progress on Magnetism in Massive Stars (MiMeS)
NASA Astrophysics Data System (ADS)
Neiner, C.; Alecian, E.; Mathis, S.
2011-12-01
We present the MiMeS project, which aims at studying all aspects of magnetism in massive stars to understand their characteristics, origin, incidence, evolution, and impact on other physical processes. We show examples of recent observational results obtained within this project on pulsating B stars (β Cephei and SPB stars) as well as Herbig Ae/Be stars. Recent theoretical progress obtained within MiMeS on the configuration and stability of magnetic fields is also summarized.
A survey of extended H2 emission from massive YSOs
NASA Astrophysics Data System (ADS)
Navarete, F.; Damineli, A.; Barbosa, C. L.; Blum, R. D.
2015-07-01
We present the results from a survey designed to investigate the accretion process of massive young stellar objects (MYSOs) through near-infrared narrow-band imaging using the H2 ν=1-0 S(1) transition filter. A sample of 353 MYSO candidates was selected from the Red MSX Source survey using photometric criteria at longer wavelengths (infrared and submillimetre) and chosen with positions throughout the Galactic plane. Our survey was carried out at the Southern Astrophysical Research Telescope in Chile and the Canada-France-Hawaii Telescope in Hawaii, covering both hemispheres. The data reveal that extended H2 emission is a good tracer of outflow activity, which is a signpost of the accretion process in young massive stars. Almost half of the sample exhibits extended H2 emission, and 74 sources (21 per cent) have polar morphology, suggesting collimated outflows. The polar-like structures are more likely to appear in radio-quiet sources, indicating these structures occur during the pre-UCH II phase. We also found an important fraction of sources associated with fluorescent H2 diffuse emission that could be due to a more evolved phase. The images also indicate only ˜23 per cent (80) of the sample is associated with extant (young) stellar clusters. These results support the scenario in which massive stars are formed via accretion discs, since the merging of low-mass stars would not produce outflow structures.
Late Wenlock (middle Silurian) bio-events: Caused by volatile boloid impact/s
NASA Technical Reports Server (NTRS)
Berry, W. B. N.; Wilde, P.
1988-01-01
Late Wenlockian (late mid-Silurian) life is characterized by three significant changes or bioevents: sudden development of massive carbonate reefs after a long interval of limited reef growth; sudden mass mortality among colonial zooplankton, graptolites; and origination of land plants with vascular tissue (Cooksonia). Both marine bioevents are short in duration and occur essentially simultaneously at the end of the Wenlock without any recorded major climatic change from the general global warm climate. These three disparate biologic events may be linked to sudden environmental change that could have resulted from sudden infusion of a massive amount of ammonia into the tropical ocean. Impact of a boloid or swarm of extraterrestrial bodies containing substantial quantities of a volatile (ammonia) component could provide such an infusion. Major carbonate precipitation (formation), as seen in the reefs as well as, to a more limited extent, in certain brachiopods, would be favored by increased pH resulting from addition of a massive quantity of ammonia into the upper ocean. Because of the buffer capacity of the ocean and dilution effects, the pH would have returned soon to equilibrium. Major proliferation of massive reefs ceased at the same time. Addition of ammonia as fertilizer to terrestrial environments in the tropics would have created optimum environmental conditions for development of land plants with vascular, nutrient-conductive tissue. Fertilization of terrestrial environments thus seemingly preceded development of vascular tissue by a short time interval. Although no direct evidence of impact of a volatile boloid may be found, the bioevent evidence is suggestive that such an impact in the oceans could have taken place. Indeed, in the case of an ammonia boloid, evidence, such as that of the Late Wenlockian bioevents may be the only available data for impact of such a boloid.
NASA Astrophysics Data System (ADS)
Zhang, Tianxi
2014-01-01
Slightly modifying the standard big bang theory, the author has recently developed a new cosmological model called black hole universe, which is consistent with Mach’s principle, governed by Einstein’s general theory of relativity, and able to explain all observations of the universe. Previous studies accounted for the origin, structure, evolution, expansion, cosmic microwave background radiation, and acceleration of the black hole universe, which grew from a star-like black hole with several solar masses through a supermassive black hole with billions of solar masses to the present state with hundred billion-trillions of solar masses by accreting ambient matter and merging with other black holes. This study investigates the emissions of dynamic black holes according to the black hole universe model and provides a self-consistent explanation for the observations of gamma ray bursts (GRBs), X-ray flares, and quasars as emissions of dynamic star-like, massive, and supermassive black holes. It is shown that a black hole, when it accretes its ambient matter or merges with other black holes, becomes dynamic. Since the event horizon of a dynamic black hole is broken, the inside hot (or high-frequency) blackbody radiation leaks out. The leakage of the inside hot blackbody radiation leads to a GRB if it is a star-like black hole, an X-ray flare if it is a massive black hole like the one at the center of the Milky Way, or a quasar if it is a supermassive black hole like an active galactic nucleus (AGN). The energy spectra and amount of emissions produced by the dynamic star-like, massive, and supermassive black holes can be consistent with the measurements of GRBs, X-ray flares, and quasars.
Hubble Witnesses Massive Comet-Like Object Pollute Atmosphere of a White Dwarf
2017-12-08
For the first time, scientists using NASA’s Hubble Space Telescope have witnessed a massive object with the makeup of a comet being ripped apart and scattered in the atmosphere of a white dwarf, the burned-out remains of a compact star. The object has a chemical composition similar to Halley’s Comet, but it is 100,000 times more massive and has a much higher amount of water. It is also rich in the elements essential for life, including nitrogen, carbon, oxygen, and sulfur. These findings are evidence for a belt of comet-like bodies orbiting the white dwarf, similar to our solar system’s Kuiper Belt. These icy bodies apparently survived the star’s evolution as it became a bloated red giant and then collapsed to a small, dense white dwarf. Caption: This artist's concept shows a massive, comet-like object falling toward a white dwarf. New Hubble Space Telescope findings are evidence for a belt of comet-like bodies orbiting the white dwarf, similar to our solar system's Kuiper Belt. The findings also suggest the presence of one or more unseen surviving planets around the white dwarf, which may have perturbed the belt to hurl icy objects into the burned-out star. Credits: NASA, ESA, and Z. Levay (STScI)
Simulation of radioactive tracer transport using IsoRSM and uncertainty analysis
NASA Astrophysics Data System (ADS)
SAYA, A.; Chang, E.; Yoshimura, K.; Oki, T.
2013-12-01
Due to the massive earthquake and tsunami of March 11, 2011 in Eastern Japan, the Fukushima Daiichi nuclear power plant was severely damaged and some reactors exploded. To know how the radioactive materials were spread and how much was deposited onto the land, it is important to enhance the accuracy of radioactive transport simulation models. However, there are uncertainties in the models, including the dry and wet deposition processes, the meteorological fields, and the release amounts of radioactive materials. In this study we analyzed these uncertainties, aiming for higher accuracy in the simulation. We modified the stable isotope mode of the Regional Spectral Model (IsoRSM, Yoshimura et al., 2009) to enable simulation of the transport of the radioactive tracers, namely iodine-131 and cesium-137, by including the dry and wet deposition processes. With this model, we conducted a set of sensitivity experiments using different parameters in the deposition processes, different diffusivity in the advection processes, and different domain sizes. The control experiment, with 10 km resolution covering most of Japan and the surrounding oceans (132.7°E-151.5°E and 28.3°N-46.7°N) and the emission estimated by Chino et al. (2011), showed reasonable temporal results for the Toukatsu area (the eastern part of the Tokyo metropolis and the western part of Chiba prefecture, where low-level contamination occurred); i.e., on 22 March, the tracers from Fukushima reached the area and were precipitated in significant amounts as wet deposition. We then conducted four experimental simulations to analyze the simulation uncertainty due to 1) different meteorological patterns, different parameters for 2) wet and 3) dry deposition, and 4) diffusion. Though the temporal patterns of deposition of radioactive particles were somewhat similar to each other in all experiments, we found that the impacts on the area-mean deposition were large. Results of the simulations with different diffusivity and different domain sizes showed that the patterns of precipitation amount and distribution, and the deposition amount, were affected. The new semi-Lagrangian transport scheme showed some improvement in the simulated meteorological field. Furthermore, we have begun the inversion estimation combining IsoRSM and the monitoring data from the Nuclear Regulation Agency. Preliminary results from consecutive two-week simulations starting every day with a daily unit release will be shown at the conference. References 1. Yoshimura, K., Kanamitsu, M. and Dettinger, M.: Regional downscaling for stable water isotopes: A case study of an atmospheric river event, Journal of Geophysical Research, Vol. 115, D18114, doi:10.1029/2010JD014032, 2010. 2. Chino, M., Nakayama, H., Nagai, H., Terada, H., Katata, G. and Yamazawa, H.: Preliminary estimation of release amounts of 131I and 137Cs accidentally discharged from the Fukushima Daiichi Nuclear Power Plant into the atmosphere, Journal of Nuclear Science and Technology, Vol. 48, No. 7, p. 1129-1134, 2011.
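For context on the wet-deposition uncertainty explored in these sensitivity experiments, the sketch below applies a standard first-order wet-scavenging formulation, dC/dt = -Λ(P)·C, to a column tracer burden. The power-law form of the scavenging coefficient and its coefficients are generic assumptions for illustration, not the parameter values used in the modified IsoRSM.

```python
import numpy as np

# First-order wet scavenging of a column tracer burden C (e.g., 137Cs):
#   dC/dt = -lambda(P) * C, with lambda parameterized from the rain rate P.
# The a*P**b form and its coefficients are illustrative assumptions only.
def scavenge(burden, rain_mm_per_h, dt_s, a=8.4e-5, b=0.79):
    lam = a * rain_mm_per_h ** b                    # scavenging coefficient [1/s]
    removed = burden * (1.0 - np.exp(-lam * dt_s))
    return burden - removed, removed                # remaining burden, wet deposition

burden = 1.0e12                                     # arbitrary tracer units in the column
for hour, rain in enumerate([0.5, 2.0, 5.0, 1.0]):
    burden, dep = scavenge(burden, rain, 3600.0)
    print(f"hour {hour}: rain {rain} mm/h, deposited {dep:.3e}, remaining {burden:.3e}")
```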
Hot Gas and AGN Feedback in Galaxies and Nearby Groups
NASA Astrophysics Data System (ADS)
Jones, Christine; Forman, William; Bogdan, Akos; Randall, Scott; Kraft, Ralph; Churazov, Eugene
2013-07-01
Massive galaxies harbor a supermassive black hole at their centers. At high redshifts, these galaxies experienced a very active quasar phase, when, as their black holes grew by accretion, they produced enormous amounts of energy. At the present epoch, these black holes still undergo occasional outbursts, although the mode of their energy release is primarily mechanical rather than radiative. The energy from these outbursts can reheat the cooling gas in the galaxy cores and maintain the red and dead nature of the early-type galaxies. These outbursts also can have dramatic effects on the galaxy-scale hot coronae found in the more massive galaxies. We describe research in three areas related to the hot gas around galaxies and their supermassive black holes. First we present examples of galaxies with AGN outbursts that have been studied in detail. Second, we show that X-ray emitting low-luminosity AGN are present in 80% of the galaxies studied. Third, we discuss the first examples of extensive hot gas and dark matter halos in optically faint galaxies.
NASA Astrophysics Data System (ADS)
Yang, S. C.; Ho, C. E.; Chang, C. W.; Kao, C. R.
2007-04-01
Massive spalling of intermetallic compounds has been reported in the literature for several solder/substrate systems, including SnAgCu soldered on Ni substrate, SnZn on Cu, high-Pb PbSn on Cu, and high-Pb PbSn on Ni. In this work, a unified thermodynamic argument is proposed to explain this rather unusual phenomenon. According to this argument, two necessary conditions must be met. The first condition is that at least one of the reactive constituents of the solder must be present in a limited amount, and the second condition is that the soldering reaction has to be very sensitive to its concentration. With the growth of the intermetallic, more and more atoms of this constituent are extracted out of the solder and incorporated into the intermetallic. As the concentration of this constituent decreases, the original intermetallic at the interface becomes a nonequilibrium phase, and spalling of the original intermetallic occurs.
3-D readout-electronics packaging for high-bandwidth massively paralleled imager
Kwiatkowski, Kris; Lyke, James
2007-12-18
Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45° angle from light-reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of said stacked CMOS chip layers and a distribution grid, the connections distributing power and signals to components associated with each stacked CMOS chip layer.
Mammalian evolution may not be strictly bifurcating.
Hallström, Björn M; Janke, Axel
2010-12-01
The massive amount of genomic sequence data that is now available for analyzing evolutionary relationships among 31 placental mammals reduces the stochastic error in phylogenetic analyses to virtually zero. One would expect that this would make it possible to finally resolve controversial branches in the placental mammalian tree. We analyzed a 2,863,797 nucleotide-long alignment (3,364 genes) from 31 placental mammals for reconstructing their evolution. Most placental mammalian relationships were resolved, and a consensus of their evolution is emerging. However, certain branches remain difficult or virtually impossible to resolve. These branches are characterized by short divergence times in the order of 1-4 million years. Computer simulations based on parameters from the real data show that as little as about 12,500 amino acid sites could be sufficient to confidently resolve short branches as old as about 90 million years ago (Ma). Thus, the amount of sequence data should no longer be a limiting factor in resolving the relationships among placental mammals. The timing of the early radiation of placental mammals coincides with a period of climate warming some 100-80 Ma and with continental fragmentation. These global processes may have triggered the rapid diversification of placental mammals. However, the rapid radiations of certain mammalian groups complicate phylogenetic analyses, possibly due to incomplete lineage sorting and introgression. These speciation-related processes led to a mosaic genome and conflicting phylogenetic signals. Split network methods are ideal for visualizing these problematic branches and can therefore depict data conflict and possibly the true evolutionary history better than strictly bifurcating trees. Given the timing of tectonics, of placental mammalian divergences, and the fossil record, a Laurasian rather than Gondwanan origin of placental mammals seems the most parsimonious explanation.
Evans, James P.; Wilhelmsen, Kirk C.; Berg, Jonathan; Schmitt, Charles P.; Krishnamurthy, Ashok; Fecho, Karamarie; Ahalt, Stanley C.
2016-01-01
Introduction: In genomics and other fields, it is now possible to capture and store large amounts of data in electronic medical records (EMRs). However, it is not clear if the routine accumulation of massive amounts of (largely uninterpretable) data will yield any health benefits to patients. Nevertheless, the use of large-scale medical data is likely to grow. To meet emerging challenges and facilitate optimal use of genomic data, our institution initiated a comprehensive planning process that addresses the needs of all stakeholders (e.g., patients, families, healthcare providers, researchers, technical staff, administrators). Our experience with this process and a key genomics research project contributed to the proposed framework. Framework: We propose a two-pronged Genomic Clinical Decision Support System (CDSS) that encompasses the concept of the “Clinical Mendeliome” as a patient-centric list of genomic variants that are clinically actionable and introduces the concept of the “Archival Value Criterion” as a decision-making formalism that approximates the cost-effectiveness of capturing, storing, and curating genome-scale sequencing data. We describe a prototype Genomic CDSS that we developed as a first step toward implementation of the framework. Conclusion: The proposed framework and prototype solution are designed to address the perspectives of stakeholders, stimulate effective clinical use of genomic data, drive genomic research, and meet current and future needs. The framework also can be broadly applied to additional fields, including other ‘-omics’ fields. We advocate for the creation of a Task Force on the Clinical Mendeliome, charged with defining Clinical Mendeliomes and drafting clinical guidelines for their use. PMID:27195307
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stökl, Alexander; Dorfi, Ernst A.; Johnstone, Colin P.
2016-07-10
In the early, disk-embedded phase of evolution of terrestrial planets, a protoplanetary core can accumulate gas from the circumstellar disk into a planetary envelope. In order to relate the accumulation and structure of this primordial atmosphere to the thermal evolution of the planetary core, we calculated atmosphere models characterized by the surface temperature of the core. We considered cores with masses between 0.1 and 5 M⊕ situated in the habitable zone around a solar-like star. The time-dependent simulations in 1D spherical symmetry include the hydrodynamics equations, gray radiative transport, and convective energy transport. Using an implicit time integration scheme, we can use large time steps and thus efficiently cover evolutionary timescales. Our results show that planetary atmospheres, when considered with reference to a fixed core temperature, are not necessarily stable, and multiple solutions may exist for one core temperature. As the structure and properties of nebula-embedded planetary atmospheres are an inherently time-dependent problem, we calculated estimates for the amount of primordial atmosphere by simulating the accretion process of disk gas onto planetary cores and the subsequent evolution of the embedded atmospheres. The temperature of the planetary core is thereby determined from the computation of the internal energy budget of the core. For cores more massive than about one Earth mass, we obtain that a comparatively short duration of the disk-embedded phase (∼10^5 years) is sufficient for the accumulation of significant amounts of hydrogen atmosphere that are unlikely to be removed by later atmospheric escape processes.
Torres, Ednildo Andrade; Cerqueira, Gilberto S; Tiago, M Ferrer; Quintella, Cristina M; Raboni, Massimo; Torretta, Vincenzo; Urbini, Giordano
2013-12-01
In Brazil, and mainly in the State of Bahia, crude vegetable oils are widely used in the preparation of food. Street stalls, restaurants and canteens make great use of palm oil and soybean oil. There is also some use of castor oil, which is widely cultivated in the Sertão Region (within the State of Bahia) and widely applied in industry. This massive use in food preparation leads to a huge amount of waste oil of different types, which needs either to be properly disposed of or recovered. At the Laboratorio Energia e Gas-LEN (Energy & Gas lab.) of the Universidade Federal da Bahia, a cycle of experiments was carried out to evaluate the recovery of waste oils for biodiesel production. The experiments were carried out on a laboratory scale and in a semi-industrial pilot plant using waste oils of different qualities. In the transesterification process, the waste vegetable oils were reacted with methanol with the support of a basic catalyst, such as NaOH or KOH. The conversion rate settled at between 81% and 85% (by weight). The most suitable molar ratio of waste oil to alcohol was 1:6, and the amount of catalyst required was 0.5% (of the weight of the incoming oil) in the case of NaOH, and 1% in the case of KOH. The quality of the biodiesel produced was tested to determine the final product quality. The parameters analyzed were the acid value, kinematic viscosity, monoglycerides, diglycerides, triglycerides, free glycerine, total glycerine, and clearness; the conversion yield of the process was also evaluated.
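A quick reagent calculation implied by the reported operating conditions (1:6 oil-to-methanol molar ratio, 0.5 wt% NaOH or 1 wt% KOH, 81-85% conversion) is sketched below; the assumed average molar mass of the waste triglyceride is a typical literature value, not a figure from this study.

```python
# Reagent amounts for base-catalysed transesterification at the reported
# conditions: oil:methanol molar ratio of 1:6 and catalyst at 0.5 wt% (NaOH)
# or 1 wt% (KOH) of the incoming oil. The mean molar mass of the waste
# triglyceride (~870 g/mol) is an assumed typical value, not from the paper.
M_OIL = 870.0       # g/mol, assumed average triglyceride molar mass
M_MEOH = 32.04      # g/mol, methanol

def reagents(oil_mass_g, catalyst="NaOH"):
    mol_oil = oil_mass_g / M_OIL
    methanol_g = 6.0 * mol_oil * M_MEOH                  # 1:6 molar ratio
    cat_frac = 0.005 if catalyst == "NaOH" else 0.01     # 0.5 % or 1 % of oil mass
    catalyst_g = cat_frac * oil_mass_g
    yield_range_g = (0.81 * oil_mass_g, 0.85 * oil_mass_g)  # reported 81-85 % conversion
    return methanol_g, catalyst_g, yield_range_g

meoh, cat, (y_lo, y_hi) = reagents(1000.0, "KOH")
print(f"per kg oil: {meoh:.0f} g methanol, {cat:.0f} g KOH, "
      f"{y_lo:.0f}-{y_hi:.0f} g biodiesel expected")
```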
Functional brain segmentation using inter-subject correlation in fMRI.
Kauppi, Jukka-Pekka; Pajula, Juha; Niemi, Jari; Hari, Riitta; Tohka, Jussi
2017-05-01
The human brain continuously processes massive amounts of rich sensory information. To better understand such highly complex brain processes, modern neuroimaging studies are increasingly utilizing experimental setups that better mimic daily-life situations. A new exploratory data-analysis approach, functional segmentation inter-subject correlation analysis (FuSeISC), was proposed to facilitate the analysis of functional magnetic resonance imaging (fMRI) data sets collected in these experiments. The method provides a new type of functional segmentation of brain areas, characterizing not only areas that display similar processing across subjects but also areas in which processing across subjects is highly variable. FuSeISC was tested using fMRI data sets collected during traditional block-design stimuli (37 subjects) as well as naturalistic auditory narratives (19 subjects). The method identified spatially local and/or bilaterally symmetric clusters in several cortical areas, many of which are known to process the types of stimuli used in the experiments. The method is not only useful for spatial exploration of large fMRI data sets obtained using naturalistic stimuli, but also has other potential applications, such as the generation of functional brain atlases including both lower- and higher-order processing areas. Finally, as a part of FuSeISC, a criterion-based sparsification of the shared nearest-neighbor graph was proposed for detecting clusters in noisy data. In tests with synthetic data, this technique was superior to well-known clustering methods such as Ward's method, affinity propagation, and K-means++. Hum Brain Mapp 38:2643-2665, 2017. © 2017 Wiley Periodicals, Inc.
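The core inter-subject correlation (ISC) computation underlying this kind of analysis can be sketched as a leave-one-out voxelwise correlation, as below. This is only the generic ISC building block, not the full FuSeISC pipeline, which additionally derives features from the ISC statistics and clusters voxels using a sparsified shared nearest-neighbor graph.

```python
import numpy as np

# Leave-one-out inter-subject correlation (ISC): for each voxel, correlate each
# subject's time course with the mean time course of the remaining subjects.
def isc(data):
    """data: array of shape (n_subjects, n_timepoints, n_voxels)."""
    n_subj = data.shape[0]
    corrs = np.empty((n_subj, data.shape[2]))
    for s in range(n_subj):
        others = np.delete(data, s, axis=0).mean(axis=0)          # (T, V)
        a = data[s] - data[s].mean(axis=0)                        # demean subject s
        b = others - others.mean(axis=0)                          # demean group mean
        corrs[s] = (a * b).sum(axis=0) / (
            np.sqrt((a**2).sum(axis=0)) * np.sqrt((b**2).sum(axis=0)) + 1e-12)
    return corrs.mean(axis=0)                                     # mean ISC per voxel

demo = np.random.default_rng(0).standard_normal((19, 240, 1000))  # 19 subjects, toy data
print(isc(demo).shape)                                            # (1000,) voxelwise ISC
```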
Pasquali, R; Casimirri, F; Melchionda, N
1987-12-01
To assess the long-term nitrogen-sparing capacity of very-low-calorie mixed diets, we administered two isoenergetic (2092 kJ) liquid formula regimens of different composition for 8 weeks to two matched groups of massively obese patients (group 1: protein 60 g, carbohydrate 54 g; group 2: protein 41 g, carbohydrate 81 g). Weight loss was similar in both groups. The daily nitrogen balance (g) during the second month was more negative in group 2 than in group 1. However, within the groups individual nitrogen-sparing capacity varied markedly; only a few patients in group 1 and one in group 2 were able to attain nitrogen equilibrium throughout the study. Daily urine excretion of 3-methylhistidine fell significantly in group 1 but did not change in group 2. Unlike total proteins, albumin, and transferrin, serum levels of retinol-binding protein, thyroxin-binding globulin, and complement-C3 fell significantly in both groups, but the per cent variations of complement-C3 were more pronounced in the first group. Prealbumin levels fell persistently in group 1 and transiently in group 2. The results indicate that even with this type of diet an adequate amount of dietary protein represents the most important factor in minimizing whole-body protein catabolism during long-term semistarvation in massively obese patients. Moreover, they confirm the possible role of dietary carbohydrates in the regulation of some visceral protein metabolism.
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth, on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
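The communication pattern described above (embarrassingly parallel local work plus a single global summation) can be illustrated with a small emulation in which each "processor" computes a partial gradient from its shard of examples and the partials are reduced by one global sum. This mimics the communication structure only; it is not the actual neuron/weight mapping used by the Los Alamos compiler on the CM-2.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each emulated processor computes a partial weight-gradient from its shard of
# training examples; the only "communication" is a single global summation of
# those partials, mirroring the pattern described in the abstract.
def local_gradient(w, x_shard, y_shard):
    # single linear neuron with squared loss: partial grad = X^T (Xw - y)
    err = x_shard @ w - y_shard
    return x_shard.T @ err

n_pe, n_per_pe, n_in = 8, 64, 16                  # 8 emulated processors
w = rng.standard_normal(n_in)
shards = [(rng.standard_normal((n_per_pe, n_in)),
           rng.standard_normal(n_per_pe)) for _ in range(n_pe)]

partials = [local_gradient(w, x, y) for x, y in shards]    # embarrassingly parallel
grad = np.sum(partials, axis=0) / (n_pe * n_per_pe)        # the global summation step
w -= 0.1 * grad                                            # one gradient-descent update
print(np.linalg.norm(grad))
```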
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serviss, C.R.; Grout, C.M.; Hagni, R.D.
1985-01-01
Ore microscopic examination of uncommon silver-rich ores from the Edwards mine has detected three silver minerals, native silver, freibergite, and argentite, that were previously unreported in the literature from the Balmat-Edwards district. The zinc-lead ore deposits of the Balmat-Edwards District in northern New York are composed of very coarse-grained massive sulfides, principally sphalerite, galena, and pyrite. The typical ores contain small amounts of silver in solid solution in galena. Galena concentrates produced from those ores have contained an average of 15 ounces of silver per ton of 60% lead concentrates. In contrast to the typical ore, a silver-rich pocket, which measured three feet by three feet on the vertical mine face and was the subject of this study, contained nearly 1% silver in a zinc ore. Ore microscopic study shows that this ore is especially characterized by abundant, relatively fine-grained chalcopyrite with anhedral pyrite inclusions. Fine-grained sphalerite, native silver, argentite, freibergite and arsenopyrite occur in association with the chalcopyrite and as fracture fillings in gangue minerals. Geochemically anomalous amounts of tin, barium, chromium, and nickel also are present in the silver-rich pocket. The silver-rich pocket may mark the locus of an early feeder vent, or alternatively it may record a hydrothermal event that was superimposed upon the event responsible for the metamorphic ore textures.
Data List - Specifying and Acquiring Earth Science Data Measurements All at Once
NASA Astrophysics Data System (ADS)
Shie, C. L.; Teng, W. L.; Liu, Z.; Hearty, T. J., III; Shen, S.; Li, A.; Hegde, M.; Bryant, K.; Seiler, E.; Kempler, S. J.
2016-12-01
Natural phenomena, such as tropical storms (e.g., hurricanes/typhoons), winter storms (e.g., blizzards), volcanic eruptions, floods, and drought, have the potential to cause immense property damage, great socioeconomic impact, and tragic losses of human life. In order to investigate and assess these natural hazards in a timely manner, there needs to be efficient searching and accessing of massive amounts of heterogeneous scientific data, particularly from satellite and model products. This is a daunting task for most application users, decision makers, and science researchers. The NASA Goddard Earth Sciences Data and Information Service Center (GES DISC) has, for many years, archived and served massive amounts of Earth science data, along with value-added information and services. In order to help GES DISC users acquire their data of interest "all at once," with minimum effort, the GES DISC has started developing a value-added and knowledge-based data service framework. This framework allows the preparation and presentation to users of collections of data and their related resources for natural disaster events or other scientific themes. These collections of data, initially termed "Data Bundles" and then "Virtual Collections" and finally "Data Lists," contain suites of annotated Web addresses (URLs) that point to their respective data and resource addresses, "all at once" and "virtually." Because these collections of data are virtual, there is no need to duplicate the data. Currently available "Data Lists" for several natural disaster phenomena and the architecture of the data service framework will be presented.
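A hypothetical sketch of what one machine-readable "Data List" entry could look like is given below: a named, virtual collection of annotated URLs with no duplication of the underlying data. All field names and the example URL are invented for illustration and do not reflect the actual GES DISC schema.

```python
import json

# Hypothetical shape of a "Data List": a named, virtual collection of annotated
# URLs pointing at existing archive holdings, with no data duplication. Field
# names and the example URL are illustrative only.
data_list = {
    "name": "Hurricane case study (example)",
    "theme": "tropical storm",
    "entries": [
        {
            "description": "Half-hourly precipitation over the storm track",
            "variable": "precipitation rate",
            "url": "https://example.org/archive/precip/granule_20160101.nc4",
            "notes": "placeholder URL; a real Data List points at archived granules",
        },
    ],
}
print(json.dumps(data_list, indent=2))
```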
Physical Conditions of Eta Car Complex Environment Revealed From Photoionization Modeling
NASA Technical Reports Server (NTRS)
Verner, E. M.; Bruhweiler, F.; Nielsen, K. E.; Gull, T.; Kober, G. Vieira; Corcoran, M.
2006-01-01
The very massive star Eta Carinae is enshrouded in an unusually complex environment of nebulosities and structures. The circumstellar gas gives rise to distinct absorption and emission components at different velocities and distances from the central source(s). Through photoionization modeling, we find that the radiation field from the more massive B-star companion supports the low ionization structure throughout the 5.54 year period. The radiation field of an evolved O-star is required to produce the higher ionization emission seen across the broad maximum. Our studies utilize the HST/STIS data and model calculations of various regimes, from doubly ionized species (T = 10,000 K) to the low-temperature (T = 760 K) conditions conducive to molecule formation (CH and OH). Overall, the analysis suggests high depletion of C and O and enrichment in He and N. The sharp molecular and ionic absorptions in this extensively CNO-processed material offer a unique environment for studying the chemistry, dust formation processes, and nucleosynthesis in the ejected layers of a highly evolved massive star.
Masukume, Gwinyai; Sengurayi, Elton; Moyo, Phinot; Feliu, Julio; Gandanhamo, Danboy; Ndebele, Wedu; Ngwenya, Solwayo; Gwini, Rudo
2013-08-22
We report an extremely rare case of massive hemoptysis and complete left-sided lung collapse in pregnancy due to pulmonary tuberculosis in a health care worker, with good maternal and fetal outcome. A 33-year-old human immunodeficiency virus-seronegative African health care worker in her fourth pregnancy, with two previous second-trimester miscarriages and an apparently healthy daughter from her third pregnancy, presented coughing up copious amounts of blood at 18 weeks and two days of gestation. She had a cervical suture in situ for presumed cervical weakness. Computed tomography of her chest showed complete collapse of the left lung; subsequent bronchoscopy was apparently normal. Her serum β-human chorionic gonadotropin, tests for autoimmune disease, and echocardiography were all normal. Her lung re-inflated spontaneously. Sputum for acid-alcohol-fast bacilli was positive; our patient was commenced on anti-tuberculosis medication and pyridoxine. At 41 weeks and three days of pregnancy our patient went into spontaneous labor and delivered a live-born female baby weighing 2.6 kg with APGAR scores of nine and 10 at one and five minutes respectively. She and her baby were apparently doing well about 10 months after delivery. It is possible to have massive hemoptysis and complete unilateral lung collapse with spontaneous resolution in pregnancy due to pulmonary tuberculosis, with good maternal and fetal outcome.
Stellar haloes in massive early-type galaxies
NASA Astrophysics Data System (ADS)
Buitrago, F.
2017-03-01
The Hubble Ultra Deep Field (HUDF) opens up a unique window to witness galaxy assembly at all cosmic distances. Thanks to its extraordinary depth, it is a privileged tool to beat the cosmological dimming that affects all extragalactic observations and depends very strongly on redshift, as (1+z)^4. In particular, massive (M_{stellar} > 5 × 10^{10} M_⊙) Early Type Galaxies (ETGs) are the most interesting candidates for these studies, as they must grow in an inside-out fashion, developing an extended stellar envelope/halo that accounts for their remarkable size evolution (˜5 times larger in the nearby Universe than at z = 2-3). To this end we have analysed the 6 most massive ETGs at z < 1 in the HUDF12. Because of the careful data reduction and the exhaustive treatment of the Point Spread Function (PSF), we are able to trace the galaxy surface brightness profiles up to the same levels as in the local Universe but this time at
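As a worked example of the (1+z)^4 surface-brightness dimming mentioned above, the factor can be converted to magnitudes per square arcsecond, 2.5·log10((1+z)^4) = 10·log10(1+z):

```python
import math

# Tolman surface-brightness dimming: a factor (1+z)^4, i.e. 10*log10(1+z)
# magnitudes per square arcsecond of extra dimming at redshift z.
for z in (0.5, 1.0, 2.5):
    factor = (1.0 + z) ** 4
    delta_mag = 2.5 * math.log10(factor)     # = 10*log10(1+z)
    print(f"z = {z}: dimming x{factor:.1f} ({delta_mag:.2f} mag arcsec^-2)")
```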
a Snapshot Survey of X-Ray Selected Central Cluster Galaxies
NASA Astrophysics Data System (ADS)
Edge, Alastair
1999-07-01
Central cluster galaxies are the most massive stellar systems known and have been used as standard candles for many decades. Only recently have central cluster galaxies been recognised to exhibit a wide variety of small-scale (<100 pc) features that can only be reliably detected with HST resolution. The most intriguing of these are dust lanes, which have been detected in many central cluster galaxies. Dust is not expected to survive long in the hostile cluster environment unless shielded by the ISM of a disk galaxy or very dense clouds of cold gas. WFPC2 snapshot images of a representative subset of the central cluster galaxies from an X-ray selected cluster sample would provide important constraints on the formation and evolution of dust in cluster cores that cannot be obtained from ground-based observations. In addition, these images will allow the AGN component, the frequency of multiple nuclei, and the amount of massive-star formation in central cluster galaxies to be assessed. The proposed HST observations would also provide high-resolution images of previously unresolved gravitational arcs in the most massive clusters in our sample, resulting in constraints on the shape of the gravitational potential of these systems. This project will complement our extensive multi-frequency work on this sample, which includes optical spectroscopy and photometry, VLA and X-ray images for the majority of the 210 targets.
Is there vacuum when there is mass? Vacuum and non-vacuum solutions for massive gravity
NASA Astrophysics Data System (ADS)
Martín-Moruno, Prado; Visser, Matt
2013-08-01
Massive gravity is a theory which has a tremendous amount of freedom to describe different cosmologies, but at the same time, the various solutions one encounters must fulfil some rather nontrivial constraints. Most of the freedom comes not from the Lagrangian, which contains only a small number of free parameters (typically three, depending on counting conventions), but from the fact that one is in principle free to choose the reference metric almost arbitrarily, which effectively introduces a non-denumerable infinity of free parameters. In the current paper, we stress that although changing the reference metric would lead to a different cosmological model, this does not mean that the dynamics of the universe can be entirely divorced from its matter content. That is, while the choice of reference metric certainly influences the evolution of the physically observable foreground metric, the effect of matter cannot be neglected. Indeed, the interplay between matter and geometry can be significantly changed in some specific models; effectively, the graviton would be able to curve spacetime by itself, without the need for matter. Thus, even the set of vacuum solutions for massive gravity can have significant structure. In some cases, the effect of the reference metric could be so strong that no conceivable material content would be able to drastically affect the cosmological evolution. Dedicated to the memory of Professor Pedro F González-Díaz
NASA Astrophysics Data System (ADS)
2010-02-01
ESO is releasing a magnificent VLT image of the giant stellar nursery surrounding NGC 3603, in which stars are continuously being born. Embedded in this scenic nebula is one of the most luminous and most compact clusters of young, massive stars in our Milky Way, which therefore serves as an excellent "local" analogue of very active star-forming regions in other galaxies. The cluster also hosts the most massive star to be "weighed" so far. NGC 3603 is a starburst region: a cosmic factory where stars form frantically from the nebula's extended clouds of gas and dust. Located 22 000 light-years away from the Sun, it is the closest region of this kind known in our galaxy, providing astronomers with a local test bed for studying intense star formation processes, very common in other galaxies, but hard to observe in detail because of their great distance from us. The nebula owes its shape to the intense light and winds coming from the young, massive stars which lift the curtains of gas and clouds revealing a multitude of glowing suns. The central cluster of stars inside NGC 3603 harbours thousands of stars of all sorts (eso9946): the majority have masses similar to or less than that of our Sun, but most spectacular are several of the very massive stars that are close to the end of their lives. Several blue supergiant stars crowd into a volume of less than a cubic light-year, along with three so-called Wolf-Rayet stars - extremely bright and massive stars that are ejecting vast amounts of material before finishing off in glorious explosions known as supernovae. Using another recent set of observations performed with the SINFONI instrument on ESO's Very Large Telescope (VLT), astronomers have confirmed that one of these stars is about 120 times more massive than our Sun, standing out as the most massive star known so far in the Milky Way [1]. The clouds of NGC 3603 provide us with a family picture of stars in different stages of their life, with gaseous structures that are still growing into stars, newborn stars, adult stars and stars nearing the end of their life. All these stars have roughly the same age, a million years, a blink of an eye compared to our five billion year-old Sun and Solar System. The fact that some of the stars have just started their lives while others are already dying is due to their extraordinary range of masses: high-mass stars, being very bright and hot, burn through their existence much faster than their less massive, fainter and cooler counterparts. The newly released image, obtained with the FORS instrument attached to the VLT at Cerro Paranal, Chile, portrays a wide field around the stellar cluster and reveals the rich texture of the surrounding clouds of gas and dust. Notes [1] The star, NGC 3603-A1, is an eclipsing system of two stars orbiting around each other in 3.77 days. The most massive star has an estimated mass of 116 solar masses, while its companion has a mass of 89 solar masses.
ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory and VISTA, the largest survey telescope. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".
Detection of Sagittarius A* at 330 MHz With the Very Large Array
2004-01-20
our Galaxy's central massive black hole, at 330 MHz with the Very Large Array. Implications for the spectrum and emission processes of Sgr A* are... A East, the Sgr A West H II region, and Sgr A*, recently established as our Galaxy's central massive black hole (e.g., Ghez et al. 2000; Eckart et al...toward Sgr A*. This could be explained by a localized clearing of the ambient gas accomplished either through the direct influence of the black hole
Bio-inspired grasp control in a robotic hand with massive sensorial input.
Ascari, Luca; Bertocchi, Ulisse; Corradi, Paolo; Laschi, Cecilia; Dario, Paolo
2009-02-01
The capability of grasping and lifting an object in a suitable, stable and controlled way is an outstanding feature for a robot and, thus far, one of the major unsolved problems in robotics. No robotic tool able to control a grasp in the advanced way that, for instance, the human hand does has been demonstrated to date. Owing to its central importance in science and in many applications, from biomedicine to manufacturing, the issue has been the subject of deep scientific investigation in both neurophysiology and robotics. While the former contributes a profound understanding of the dynamics of real-time slippage and grasp-force control in the human hand, the latter increasingly tries to reproduce, or take inspiration from, nature's approach by means of hardware and software technology. In this regard, one of the major constraints robotics has to overcome is the real-time processing of the large amount of data generated by tactile sensors during grasping, which poses serious problems for the available computational power. In this paper a bio-inspired approach to tactile data processing is followed in order to design and test a hardware-software robotic architecture that relies on the parallel processing of a large number of tactile sensing signals. The working principle of the architecture is based on the cellular nonlinear/neural network (CNN) paradigm, using both hand shape and spatio-temporal features obtained from an array of microfabricated force sensors to control the sensory-motor coordination of the robotic system. Prototypical grasping tasks were selected to measure the performance of the system applied to a computer-interfaced robotic hand. Successful grasps of several objects completely unknown to the robot, e.g. soft and deformable objects such as plastic bottles, soft balls, and Japanese tofu, have been demonstrated.
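To make the CNN idea concrete, the sketch below applies a discrete-time cellular nonlinear/neural network update to a toy tactile array; the grid size, templates, bias and iteration count are illustrative assumptions and not the parameters of the architecture described here.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, bias, dt=0.1):
    """One Euler step of a discrete-time cellular nonlinear/neural network.

    x    : cell state grid (same shape as the tactile array)
    u    : input grid (raw tactile force readings)
    A, B : 3x3 feedback and control templates (illustrative values)
    """
    y = np.clip(x, -1.0, 1.0)                       # standard piecewise-linear CNN output
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + bias
    return x + dt * dx

# Hypothetical 16x16 tactile array with a localized contact patch
u = np.zeros((16, 16))
u[6:10, 6:10] = 1.0

# Simple edge-enhancing templates (illustrative, not taken from the paper)
A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])
B = np.array([[-1, -1, -1], [-1, 8.0, -1], [-1, -1, -1]])

x = np.zeros_like(u)
for _ in range(50):                                  # iterate until the state settles
    x = cnn_step(x, u, A, B, bias=-0.5)

contact_map = np.clip(x, -1, 1) > 0                  # binary map of detected contact edges
print(contact_map.astype(int))
```

In a real grasp controller, maps like this one would feed the sensory-motor loop, e.g. to detect incipient slippage from spatio-temporal changes of the contact pattern.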
USArray Imaging of North American Continental Crust
NASA Astrophysics Data System (ADS)
Ma, Xiaofei
The layered structure and bulk composition of continental crust contains important clues about its history of mountain-building, about its magmatic evolution, and about dynamical processes that continue to happen now. Geophysical and geological features such as gravity anomalies, surface topography, lithospheric strength and the deformation that drives the earthquake cycle are all directly related to deep crustal chemistry and the movement of materials through the crust that alter that chemistry. The North American continental crust records billions of years of history of tectonic and dynamical changes. The western U.S. is currently experiencing a diverse array of dynamical processes including modification by the Yellowstone hotspot, shortening and extension related to Pacific coast subduction and transform boundary shear, and plate interior seismicity driven by flow of the lower crust and upper mantle. The midcontinent and eastern U.S. is mostly stable but records a history of ancient continental collision and rifting. EarthScope's USArray seismic deployment has collected massive amounts of data across the entire United States that illuminates the deep continental crust, lithosphere and deeper mantle. This study uses EarthScope data to investigate the thickness and composition of the continental crust, including properties of its upper and lower layers. One-layer and two-layer models of crustal properties exhibit interesting relationships to the history of North American continental formation and recent tectonic activities that promise to significantly improve our understanding of the deep processes that shape the Earth's surface. Model results show that seismic velocity ratios are unusually low in the lower crust under the western U.S. Cordillera. Further modeling of how chemistry affects the seismic velocity ratio at temperatures and pressures found in the lower crust suggests that low seismic velocity ratios occur when water is mixed into the mineral matrix, and the combination of high temperature and water may point to small amounts of melt in the lower crust of Cordillera.
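For readers unfamiliar with the quantity, the "seismic velocity ratio" discussed here is the compressional-to-shear ratio Vp/Vs; the standard elastic relation below (textbook background, not a result of this study) connects it to Poisson's ratio.

```latex
% Standard relation between the P-to-S velocity ratio and Poisson's ratio
% (general elasticity; not a formula quoted from this study).
\[
  \sigma \;=\; \frac{(V_p/V_s)^2 - 2}{2\left[(V_p/V_s)^2 - 1\right]},
  \qquad \text{e.g. } V_p/V_s = \sqrt{3} \approx 1.73 \;\Rightarrow\; \sigma = 0.25 .
\]
```

Since the right-hand side increases monotonically with Vp/Vs, the anomalously low ratios reported for the Cordilleran lower crust translate directly into low Poisson's ratios.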
An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing
NASA Astrophysics Data System (ADS)
Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.
2015-07-01
Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing cloud users with an effective way to access and analyse these massive spatiotemporal data from web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed on top of an open-source distributed file system: massive remote sensing data are stored as public data, while intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source, lightweight containerization technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users write scripts in the IPython Notebook web page through the web browser to process data, and the scripts are submitted to the IPython kernel for execution. By comparing the performance of remote sensing analysis tasks executed in Docker containers, in KVM virtual machines and on physical machines, we conclude that the cloud computing environment built with Docker makes the best use of the host system resources and can handle more concurrent spatio-temporal computing tasks. Docker also provides resource isolation for I/O, CPU, and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook. Users can write complex data processing code directly on the web and thereby design their own data processing algorithms.
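As an illustration of the kind of script a user of such a platform might execute in the IPython Notebook (a minimal sketch; the file path and band assignments are assumptions, not part of the paper), GDAL and NumPy can be combined to compute a vegetation index from a raster stored in the cloud file system.

```python
import numpy as np
from osgeo import gdal

# Hypothetical path inside the platform's distributed file system
dataset = gdal.Open("/public/landsat/scene_20150701.tif")

# Assumed band layout: band 4 = red, band 5 = near-infrared (Landsat 8 convention)
red = dataset.GetRasterBand(4).ReadAsArray().astype(np.float32)
nir = dataset.GetRasterBand(5).ReadAsArray().astype(np.float32)

# NDVI = (NIR - RED) / (NIR + RED); mask pixels where the denominator is zero
denom = nir + red
ndvi = np.where(denom != 0, (nir - red) / denom, np.nan)

print("mean NDVI:", np.nanmean(ndvi))
```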
Monitoring landscape level processes using remote sensing of large plots
Raymond L. Czaplewski
1991-01-01
Global and regional assessments require timely information on landscape-level status (e.g., areal extent of different ecosystems) and processes (e.g., changes in land use and land cover). To measure and understand these processes at the regional level, and model their impacts, remote sensing is often necessary. However, processing massive volumes of remotely sensed...
Hyperfast pulsars as the remnants of massive stars ejected from young star clusters
NASA Astrophysics Data System (ADS)
Gvaramadze, Vasilii V.; Gualandris, Alessia; Portegies Zwart, Simon
2008-04-01
Recent proper motion and parallax measurements for the pulsar PSR B1508+55 indicate a transverse velocity of ~1100 km s^-1, which exceeds earlier measurements for any neutron star. The spin-down characteristics of PSR B1508+55 are typical for a non-recycled pulsar, which implies that the velocity of the pulsar cannot have originated from the second supernova disruption of a massive binary system. The high velocity of PSR B1508+55 can be accounted for by assuming that it received a kick at birth or that the neutron star was accelerated after its formation in the supernova explosion. We propose an explanation for the origin of hyperfast neutron stars based on the hypothesis that they could be the remnants of a symmetric supernova explosion of a high-velocity massive star which attained its peculiar velocity (similar to that of the pulsar) in the course of a strong dynamical three- or four-body encounter in the core of a dense young star cluster. To check this hypothesis, we investigated three dynamical processes involving close encounters between: (i) two hard massive binaries, (ii) a hard binary and an intermediate-mass black hole (IMBH) and (iii) a single star and a hard binary IMBH. We find that main-sequence O-type stars cannot be ejected from young massive star clusters with peculiar velocities high enough to explain the origin of hyperfast neutron stars, but lower mass main-sequence stars or the stripped helium cores of massive stars could be accelerated to hypervelocities. Our explanation for the origin of hyperfast pulsars requires a very dense stellar environment, of the order of 10^6-10^7 stars pc^-3. Although such high densities may exist during the core collapse of young massive star clusters, we caution that they have never been observed.
Near-Infrared Mass Loss Diagnostics for Massive Stars
NASA Technical Reports Server (NTRS)
Sonneborn, George; Bouret, J. C.
2010-01-01
Stellar wind mass loss is a key process which modifies surface abundances, luminosities, and other physical properties of hot, massive stars. Furthermore, mass loss has to be understood quantitatively in order to accurately describe and predict massive star evolution. Two urgent problems have been identified that challenge our understanding of line-driven winds: the so-called weak-wind problem and wind clumping. In both cases, mass-loss rates are drastically lower than theoretically expected (by up to a factor of 100). Here we study how the expected spectroscopic capabilities of the James Webb Space Telescope (JWST), especially NIRSpec, could be used to significantly improve constraints on wind density structures (clumps) and deep-seated phenomena in stellar winds of massive stars, including OB, Wolf-Rayet and LBV stars. Since the IR continuum of objects with strong winds is formed in the wind, IR lines may sample different depths inside the wind than UV-optical lines and provide new information about the shape of the velocity field and clumping properties. One of the most important applications of IR line diagnostics will be the measurement of mass-loss rates in massive stars with very weak winds by means of the H I Brackett alpha line, which has been identified as one of the most promising diagnostics for this problem.
Saito, Kazuyuki; Takada, Aya; Kuroda, Naohito; Hara, Masaaki; Arai, Masaaki; Ro, Ayako
2009-04-01
We present an extremely rare autopsy case with traumatic dissection of the extracranial vertebral artery due to blunt injury caused by a traffic accident. The patient complained of nausea and numbness of the hands at the scene of the accident. His consciousness deteriorated and he fell into a coma within 12 h, then died 4 days after the collision. Brain CT/MRI disclosed massive infratentorial cerebral infarction, while MRA imaged neither of the vertebral arteries. Autopsy revealed a seatbelt mark on the right side of the lower neck, with fracture of the right transverse process of the sixth cervical vertebra. The right extracranial vertebral artery (V2) showed massive medial dissection from the portion adjacent to the fracture through to the upper end of the extracranial part of the artery and was occluded by a thrombus. An intimal tear was confirmed near the starting point of the dissection. The brain disclosed massive infarction of the posterior circulation territories with changes of the so-called respirator brain. The victim's left vertebral artery was considerably hypoplastic. We concluded that the massive infratentorial infarction was caused by dissection of the right extracranial vertebral artery and consecutive thrombus formation brought about by impact with the seatbelt at the time of the collision.
Black Hole Disk Accretion in Supernovae
NASA Astrophysics Data System (ADS)
Mineshige, Shin; Nomura, Hideko; Hirose, Masahito; Nomoto, Ken'ichi; Suzuki, Tomoharu
1997-11-01
Massive stars in a certain mass range may form low-mass black holes after supernova explosions. In such massive stars, fallback of ~0.1 M⊙ of material onto the black hole is expected because of the deep gravitational potential or a reverse shock propagating back from the outer composition interface. We study hydrodynamical disk accretion onto a newborn low-mass black hole in a supernova using the smoothed particle hydrodynamics method. If the progenitor was rotating before the explosion, the fallback material should have a certain amount of angular momentum with respect to the black hole, thus forming an accretion disk. The disk material will eventually accrete toward the central object because of viscosity, at a supercritical accretion rate, Ṁ/Ṁ_crit > 10^6, for the first several tens of days. (Here, Ṁ_crit is the Eddington luminosity divided by c^2.) We then expect that such an accretion disk is optically thick and advection dominated; that is, the disk is so hot that the produced energy and photons are advected inward rather than being radiated away. Thus, the disk luminosity is much less than the Eddington luminosity. The disk becomes hot and dense; for Ṁ/Ṁ_crit ~ 10^6, for example, T ~ 10^9 (α_vis/0.01)^(-1/4) K and ρ ~ 10^3 (α_vis/0.01)^(-1) g cm^(-3) (with α_vis being the viscosity parameter) in the vicinity of the black hole. Depending on the material mixing, some interesting nucleosynthesis processes via rapid proton and alpha-particle captures are expected even for reasonable viscosity magnitudes (α_vis ~ 0.01), and some of the products could be ejected in a disk wind or a jet without being swallowed by the black hole.
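Written out, the critical accretion rate defined in the abstract and the quoted disk scalings read as follows (the numerical Eddington values are standard order-of-magnitude estimates, not taken from this paper):

```latex
\[
  \dot{M}_{\mathrm{crit}} \equiv \frac{L_{\mathrm{Edd}}}{c^{2}}, \qquad
  L_{\mathrm{Edd}} \simeq 1.3\times10^{38}\Big(\tfrac{M}{M_{\odot}}\Big)\ \mathrm{erg\,s^{-1}}
  \;\Rightarrow\;
  \dot{M}_{\mathrm{crit}} \simeq 2\times10^{-9}\Big(\tfrac{M}{M_{\odot}}\Big)\ M_{\odot}\,\mathrm{yr^{-1}},
\]
\[
  T \sim 10^{9}\Big(\tfrac{\alpha_{\mathrm{vis}}}{0.01}\Big)^{-1/4}\ \mathrm{K}, \qquad
  \rho \sim 10^{3}\Big(\tfrac{\alpha_{\mathrm{vis}}}{0.01}\Big)^{-1}\ \mathrm{g\,cm^{-3}}
  \quad\text{for } \dot{M}/\dot{M}_{\mathrm{crit}} \sim 10^{6}.
\]
```

So Ṁ/Ṁ_crit > 10^6 corresponds to accretion rates above roughly 10^-3 M⊙ yr^-1 for a stellar-mass black hole, consistent with ~0.1 M⊙ of fallback accreted over tens of days.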
Diagenetic Crystal Growth in the Murray Formation, Gale Crater, Mars
NASA Technical Reports Server (NTRS)
Kah, L. C.; Kronyak, R. E.; Ming, D. W.; Grotzinger, J. P.; Schieber, J.; Sumner, D. Y.; Edgett, K. S.
2015-01-01
The Pahrump region (Gale Crater, Mars) marks a critical transition between sedimentary environments dominated by alluvial-to-fluvial materials associated with the Gale crater rim, and depositional environments fundamentally linked to the crater's central mound, Mount Sharp. At Pahrump, the Murray formation consists of an approximately 14-meter thick succession dominated by massive to finely laminated mudstone with occasional interbeds of cross-bedded sandstone, and is best interpreted as a dominantly lacustrine environment containing tongues of prograding fluvial material. Murray formation mudstones contain abundant evidence for early diagenetic mineral precipitation and its subsequent removal by later diagenetic processes. Lenticular mineral growth is particularly common within lacustrine mudstone deposits at the Pahrump locality. High-resolution MAHLI images taken by the Curiosity rover permit detailed morphological and spatial analysis of these features. Millimeter-scale lenticular features occur in massive to well-laminated mudstone lithologies and are interpreted as pseudomorphs after calcium sulfate. The distribution and orientation of lenticular features suggests deposition at or near the sediment-water (or sediment-air) interface. Retention of chemical signals similar to host rock suggests that original precipitation was likely poikilotopic, incorporating substantial amounts of the primary matrix. Although poikilotopic crystal growth is common in burial environments, it also occurs during early diagenetic crystal growth within unlithified sediment where high rates of crystal growth are common. Loss of original calcium sulfate mineralogy suggests dissolution by mildly acidic, later-diagenetic fluids. As with lenticular voids observed at Meridiani by the Opportunity rover, these features indicate that calcium sulfate deposition may have been widespread on early Mars; dissolution of depositional and early diagenetic minerals is a likely source for both calcium and sulfate ion-enrichment in burial fluids that precipitated in ubiquitous late-stage hydrofracture veins.
Bioactive Natural Products Prioritization Using Massive Multi-informational Molecular Networks.
Olivon, Florent; Allard, Pierre-Marie; Koval, Alexey; Righi, Davide; Genta-Jouve, Gregory; Neyts, Johan; Apel, Cécile; Pannecouque, Christophe; Nothias, Louis-Félix; Cachet, Xavier; Marcourt, Laurence; Roussi, Fanny; Katanaev, Vladimir L; Touboul, David; Wolfender, Jean-Luc; Litaudon, Marc
2017-10-20
Natural products represent an inexhaustible source of novel therapeutic agents. Their complex and constrained three-dimensional structures endow these molecules with exceptional biological properties, thereby giving them a major role in drug discovery programs. However, the search for new bioactive metabolites is hampered by the chemical complexity of the biological matrices in which they are found. The purification of single constituents from such matrices requires so much work that it should ideally be performed only on molecules of high potential value (i.e., chemical novelty and biological activity). Recent bioinformatics approaches based on mass spectrometry metabolite profiling methods are beginning to address the complex task of compound identification within complex mixtures. In parallel to these developments, however, methods providing information on the bioactivity potential of natural products prior to their isolation are still lacking and are of key interest for targeting the isolation of valuable natural products only. In the present investigation, we propose an integrated analysis strategy for bioactive natural products prioritization. Our approach uses massive molecular networks embedding various informational layers (bioactivity and taxonomical data) to highlight potentially bioactive scaffolds within the chemical diversity of collections of crude extracts. We exemplify this workflow by targeting the isolation of predicted active and nonactive metabolites from two botanical sources (Bocquillonia nervosa and Neoguillauminia cleopatra) against two biological targets (Wnt signaling pathway and chikungunya virus replication). Eventually, the detection and isolation of a daphnane diterpene orthoester and four 12-deoxyphorbols inhibiting the Wnt signaling pathway and exhibiting potent antiviral activities against CHIKV are detailed. Combined with efficient metabolite annotation tools, this bioactive natural products prioritization pipeline proves to be efficient. Implementation of this approach in drug discovery programs based on natural extract screening should speed up and rationalize the isolation of bioactive natural products.
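As a rough illustration of the molecular-networking idea underlying this strategy (a minimal sketch with toy spectra; the binning width, similarity threshold, and bioactivity scores are invented for the example and are not the authors' parameters), MS/MS spectra can be vectorized, compared by cosine similarity, and linked into a graph whose nodes then carry additional informational layers such as bioactivity or taxonomy.

```python
import numpy as np
import networkx as nx

def binned_vector(peaks, bin_width=0.5, max_mz=500.0):
    """Convert an MS/MS peak list [(m/z, intensity), ...] into a normalized fixed-length vector."""
    vec = np.zeros(int(max_mz / bin_width))
    for mz, intensity in peaks:
        if mz < max_mz:
            vec[int(mz / bin_width)] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# Toy spectra standing in for features detected in two crude extracts
spectra = {
    "feature_1": [(105.0, 40.0), (133.1, 100.0), (161.1, 20.0)],
    "feature_2": [(105.0, 35.0), (133.1, 90.0), (175.1, 15.0)],
    "feature_3": [(212.2, 100.0), (240.2, 60.0)],
}
vectors = {name: binned_vector(p) for name, p in spectra.items()}

# Build the molecular network: an edge wherever spectral cosine similarity exceeds a threshold
G = nx.Graph()
G.add_nodes_from(spectra)
names = list(spectra)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        cos = float(np.dot(vectors[a], vectors[b]))
        if cos > 0.7:
            G.add_edge(a, b, cosine=round(cos, 2))

# Informational layers (e.g., bioactivity score of the parent extract) become node attributes
nx.set_node_attributes(G, {"feature_1": 0.9, "feature_2": 0.8, "feature_3": 0.1}, "bioactivity")
print(G.edges(data=True))
```

Clusters of structurally related features whose parent extracts score high in the bioassay are then natural candidates for targeted isolation.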
Relativistic diffusion processes and random walk models
NASA Astrophysics Data System (ADS)
Dunkel, Jörn; Talkner, Peter; Hänggi, Peter
2007-02-01
The nonrelativistic standard model for a continuous, one-parameter diffusion process in position space is the Wiener process. As is well known, the Gaussian transition probability density function (PDF) of this process is in conflict with special relativity, as it permits particles to propagate faster than the speed of light. A frequently considered alternative is provided by the telegraph equation, whose solutions avoid superluminal propagation speeds but suffer from singular (noncontinuous) diffusion fronts on the light cone, which are unlikely to exist for massive particles. It is therefore advisable to explore other alternatives as well. In this paper, a generalized Wiener process is proposed that is continuous, avoids superluminal propagation, and reduces to the standard Wiener process in the nonrelativistic limit. The corresponding relativistic diffusion propagator is obtained directly from the nonrelativistic Wiener propagator, by rewriting the latter in terms of an integral over actions. The resulting relativistic process is non-Markovian, in accordance with the known fact that nontrivial continuous, relativistic Markov processes in position space cannot exist. Hence, the proposed process defines a consistent relativistic diffusion model for massive particles and provides a viable alternative to the solutions of the telegraph equation.
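The superluminal leakage mentioned above is easy to quantify for the standard (nonrelativistic) Wiener process: a fraction of the Gaussian transition PDF always lies outside the light cone |x| ≤ ct. The sketch below computes this fraction in one dimension; the diffusion constant and times are arbitrary illustrative choices (units with c = 1).

```python
import math

def gaussian_mass_outside_light_cone(D, t, c=1.0):
    """Probability that a 1D Wiener process started at x = 0 is found at |x| > c*t
    after time t, using the Gaussian transition PDF with variance 2*D*t."""
    sigma = math.sqrt(2.0 * D * t)
    # P(|X| > c*t) for X ~ N(0, sigma^2), via the complementary error function
    return math.erfc(c * t / (sigma * math.sqrt(2.0)))

# The superluminal fraction is always nonzero, which is the conflict with relativity
for t in (0.01, 0.1, 1.0, 10.0):
    print(f"t = {t:5.2f}  P(|x| > ct) = {gaussian_mass_outside_light_cone(D=1.0, t=t):.3e}")
```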
NASA Astrophysics Data System (ADS)
Martínez-Núñez, Silvia; Kretschmar, Peter; Bozzo, Enrico; Oskinova, Lidia M.; Puls, Joachim; Sidoli, Lara; Sundqvist, Jon Olof; Blay, Pere; Falanga, Maurizio; Fürst, Felix; Gímenez-García, Angel; Kreykenbohm, Ingo; Kühnel, Matthias; Sander, Andreas; Torrejón, José Miguel; Wilms, Jörn
2017-10-01
Massive stars, at least ~10 times more massive than the Sun, have two key properties that make them the main drivers of the evolution of star clusters, galaxies, and the Universe as a whole. On the one hand, the outer layers of massive stars are so hot that they produce most of the ionizing ultraviolet radiation of galaxies; in fact, the first massive stars helped to re-ionize the Universe after its Dark Ages. On the other hand, massive stars produce strong stellar winds and outflows. This mass loss, and finally the explosion of a massive star as a supernova or a gamma-ray burst, provide a significant input of mechanical and radiative energy into interstellar space. These two properties together make massive stars one of the most important cosmic engines: they trigger star formation and enrich the interstellar medium with heavy elements, which ultimately leads to the formation of Earth-like rocky planets and the development of complex life. The study of massive star winds is thus a truly multidisciplinary field and has a wide impact on different areas of astronomy. In recent years observational and theoretical evidence has been growing that these winds are not smooth and homogeneous as previously assumed, but rather populated by dense "clumps". The presence of these structures dramatically affects the mass loss rates derived from the study of stellar winds. Clump properties in isolated stars are nowadays inferred mostly through indirect methods (i.e., spectroscopic observations of line profiles in various wavelength regimes, and their analysis based on tailored, inhomogeneous wind models). The limited characterization of the clump physical properties (mass, size) obtained so far has led to large uncertainties in the mass loss rates from massive stars. Such uncertainties limit our understanding of the role of massive star winds in galactic and cosmic evolution. Supergiant high mass X-ray binaries (SgXBs) are among the brightest X-ray sources in the sky. A large number of them consist of a neutron star accreting from the wind of a massive companion and producing a powerful X-ray source. The characteristics of the stellar wind, together with the complex interactions between the compact object and the donor star, determine the observed X-ray output from all these systems. Consequently, the use of SgXBs for studies of massive stars is only possible when the physics of the stellar winds, the compact objects, and the accretion mechanisms are combined and confronted with observations. This detailed review summarises the current knowledge on the theory and observations of winds from massive stars, as well as on observations and accretion processes in wind-fed high mass X-ray binaries. The aim is to combine in the near future all available theoretical diagnostics and observational measurements to achieve a unified picture of massive star winds in isolated objects and in binary systems.
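A standard way to state the effect of clumping on derived mass-loss rates (a general result, not a formula quoted from this review) is that diagnostics scaling with the square of the density, such as Hα or radio free-free emission, overestimate the true rate of a clumped wind by the square root of the clumping factor:

```latex
\[
  f_{\mathrm{cl}} \;\equiv\; \frac{\langle \rho^{2} \rangle}{\langle \rho \rangle^{2}} \;\ge\; 1,
  \qquad
  \dot{M}_{\mathrm{smooth}} \;\simeq\; \sqrt{f_{\mathrm{cl}}}\;\dot{M}_{\mathrm{true}} .
\]
```

For example, a clumping factor of f_cl ≈ 10 would mean that rates inferred under a smooth-wind assumption are too high by a factor of roughly 3.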
Climbing The Knowledge Mountain - The New Solids Processing Design And Management Manual
The USEPA, Water Environment Federation (WEF) and Water Environment Research Foundation (WERF), under a Cooperative Research and Development Agreement (CRADA), are undertaking a massive effort to produce a Solids Processing Design and Management Manual (Manual). The Manual, repr...
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter
2015-12-01
AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
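As a generic illustration of the Shared Event Queue strategy mentioned above (a sketch, not ATLAS/AthenaMP code; the event payloads are placeholders), a parent process can fork worker processes that pull event identifiers from a common queue, so that faster workers naturally process more events; on Linux the fork start method also gives the copy-on-write page sharing that AthenaMP exploits.

```python
import multiprocessing as mp

def worker(event_queue, results):
    """Pull event IDs from the shared queue until a sentinel is received."""
    while True:
        event_id = event_queue.get()
        if event_id is None:          # sentinel: no more events
            break
        results.put((event_id, f"processed by pid {mp.current_process().pid}"))

if __name__ == "__main__":
    n_workers = 4
    event_queue, results = mp.Queue(), mp.Queue()

    # Fork the workers; on Linux, read-only pages of the parent (code, configuration,
    # detector geometry in the real use case) are shared via copy-on-write.
    procs = [mp.Process(target=worker, args=(event_queue, results)) for _ in range(n_workers)]
    for p in procs:
        p.start()

    for event_id in range(20):        # enqueue 20 dummy events
        event_queue.put(event_id)
    for _ in procs:                   # one sentinel per worker
        event_queue.put(None)

    for p in procs:
        p.join()
    while not results.empty():
        print(results.get())
```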
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Brian; Brightwell, Ronald B.; Grant, Ryan
This report presents a specification for the Portals 4 network programming interface. Portals 4 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4 is well suited to massively parallel processing and embedded systems. Portals 4 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.
NASA Technical Reports Server (NTRS)
Wahlgren, Glenn M.; Carpenter, Kenneth G.; Norris, Ryan P.
2008-01-01
We report on progress in the analysis of high-resolution near-IR spectra of alpha Orionis (M2 Iab) and other cool, luminous stars. Using synthetic spectrum techniques, we search for atomic absorption lines in the stellar spectra and evaluate the available line parameter data for use in our abundance analyses. Our study concentrates on the post iron-group elements copper through zirconium as a means of investigating the slow neutron-capture process of nucleosynthesis in massive stars and the mechanisms that transport recently processed material up into the photospheric region. We discuss problems with the atomic data and model atmospheres that need to be addressed before theoretically derived elemental abundances from pre-supernova nucleosynthesis calculations can be tested by comparison with abundances determined from observations of cool, massive stars.
The Portals 4.0 network programming interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin
2012-11-01
This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.
Massive stars: privileged sources of cosmic-rays for interstellar astrochemistry
NASA Astrophysics Data System (ADS)
De Becker, M.
2015-01-01
Massive stars can be considered crucial engines for interstellar physics. They are indeed the main providers of the UV radiation field and constitute a substantial source of chemical enrichment. On their evolutionary time-scale (at most about 10 Myr), they typically stay close to their formation site, i.e. close to molecular clouds that are very rich in interstellar molecules. These stellar objects also have the property of being involved in particle acceleration processes leading to the production of high-energy charged particles (cosmic rays). After ejection into the interstellar medium, these particles play a substantial role in processes such as those simulated in various facilities dedicated to experimental astrochemistry. This short contribution intends to put these particles, crucial for astrochemistry, into their adequate astrophysical context.
Nagasaki, Hideki; Mochizuki, Takako; Kodama, Yuichi; Saruhashi, Satoshi; Morizaki, Shota; Sugawara, Hideaki; Ohyanagi, Hajime; Kurata, Nori; Okubo, Kousaku; Takagi, Toshihisa; Kaminuma, Eli; Nakamura, Yasukazu
2013-08-01
High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data requires computational skills and suitable hardware resources that are a challenge for molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets using decentralized processing on NIG supercomputers, currently free of charge. The proposed pipeline consists of two analysis components: basic analysis for reference genome mapping and de novo assembly, and subsequent high-level analysis of structural and functional annotations. Users may smoothly switch between the two components of the pipeline, facilitating web-based operations on a supercomputer for high-throughput data analysis. Moreover, public NGS reads from the DDBJ Sequence Read Archive located on the same supercomputer can be imported into the pipeline by entering only an accession number. This proposed pipeline will facilitate research by providing unified analytical workflows applied to NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.
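For orientation, the "basic analysis" (reference genome mapping) step corresponds to the kind of workflow sketched below with standard open-source tools; the tool choice (bwa, samtools) and the file names are illustrative assumptions and not necessarily what the DDBJ Pipeline runs internally.

```python
import subprocess

# Illustrative reference-mapping workflow (tools and file names are assumptions,
# not taken from the DDBJ Pipeline documentation).
reference = "reference_genome.fa"
reads_1, reads_2 = "run_1.fastq", "run_2.fastq"   # hypothetical paired-end read files

subprocess.run(["bwa", "index", reference], check=True)
with open("mapped.sam", "w") as sam:
    subprocess.run(["bwa", "mem", reference, reads_1, reads_2], stdout=sam, check=True)
subprocess.run(["samtools", "sort", "-o", "mapped.sorted.bam", "mapped.sam"], check=True)
subprocess.run(["samtools", "index", "mapped.sorted.bam"], check=True)
```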